compose-1.8.0/.dockerignore:

*.egg-info
.coverage
.git
.tox
build
coverage-html
docs/_site
venv
.tox

compose-1.8.0/.gitignore:

*.egg-info
*.pyc
.coverage*
/.tox
/build
/coverage-html
/dist
/docs/_site
/venv
README.rst
compose/GITSHA

compose-1.8.0/.pre-commit-config.yaml:

- repo: git://github.com/pre-commit/pre-commit-hooks
  sha: 'v0.4.2'
  hooks:
    - id: check-added-large-files
    - id: check-docstring-first
    - id: check-merge-conflict
    - id: check-yaml
    - id: check-json
    - id: debug-statements
    - id: end-of-file-fixer
    - id: flake8
    - id: name-tests-test
      exclude: 'tests/(integration/testcases\.py|helpers\.py)'
    - id: requirements-txt-fixer
    - id: trailing-whitespace
- repo: git://github.com/asottile/reorder_python_imports
  sha: v0.1.0
  hooks:
    - id: reorder-python-imports
      language_version: 'python2.7'
      args:
        - --add-import
        - from __future__ import absolute_import
        - --add-import
        - from __future__ import unicode_literals

compose-1.8.0/.travis.yml:

sudo: required

language: python

matrix:
  include:
    - os: linux
      services:
        - docker
    - os: osx
      language: generic

install: ./script/travis/install

script:
  - ./script/travis/ci
  - ./script/travis/build-binary

before_deploy:
  - "./script/travis/render-bintray-config.py < ./script/travis/bintray.json.tmpl > ./bintray.json"

deploy:
  provider: bintray
  user: docker-compose-roleuser
  key: '$BINTRAY_API_KEY'
  file: ./bintray.json
  skip_cleanup: true
  on:
    all_branches: true

compose-1.8.0/CHANGELOG.md:

Change log
==========

1.8.0 (2016-06-14)
-----------------

**Breaking Changes**

- As announced in 1.7.0, `docker-compose rm` now removes containers created by `docker-compose run` by default.
- Setting `entrypoint` on a service now empties out any default command that was set on the image (i.e. any `CMD` instruction in the Dockerfile used to build it). This makes it consistent with the `--entrypoint` flag to `docker run`.

New Features

- Added `docker-compose bundle`, a command that builds a bundle file to be consumed by the new *Docker Stack* commands in Docker 1.12.
- Added `docker-compose push`, a command that pushes service images to a registry.
- Compose now supports specifying a custom TLS version for interaction with the Docker Engine using the `COMPOSE_TLS_VERSION` environment variable.

Bug Fixes

- Fixed a bug where Compose would erroneously try to read `.env` at the project's root when it is a directory.
- `docker-compose run -e VAR` now passes `VAR` through from the shell to the container, as with `docker run -e VAR`.
- Improved config merging when multiple compose files are involved for several service sub-keys.
- Fixed a bug where volume mappings containing Windows drives would sometimes be parsed incorrectly.
- Fixed a bug in Windows environment where volume mappings of the host's root directory would be parsed incorrectly.
- Fixed a bug where `docker-compose config` would output an invalid Compose file if external networks were specified (see the example below).
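  To make the external-networks fix above concrete, here is a minimal sketch of the kind of file that previously produced invalid `docker-compose config` output (the service and network name `legacy-net` are illustrative):

      version: '2'
      services:
        web:
          image: busybox
          networks:
            - legacy
      networks:
        legacy:
          external:
            name: legacy-net

  Running `docker-compose config` against a file like this now prints a valid configuration that preserves the `external` declaration.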
- Fixed an issue where unset buildargs would be assigned a string containing `'None'` instead of the expected empty value.
- Fixed a bug where yes/no prompts on Windows would not show before receiving input.
- Fixed a bug where trying to `docker-compose exec` on Windows without the `-d` option would exit with a stacktrace. This will still fail for the time being, but should do so gracefully.
- Fixed a bug where errors during `docker-compose up` would show an unrelated stacktrace at the end of the process.
- `docker-compose create` and `docker-compose start` show more descriptive error messages when something goes wrong.

1.7.1 (2016-05-04)
-----------------

Bug Fixes

- Fixed a bug where the output of `docker-compose config` for v1 files would be an invalid configuration file.
- Fixed a bug where `docker-compose config` would not check the validity of links.
- Fixed an issue where `docker-compose help` would not output a list of available commands and generic options as expected.
- Fixed an issue where filtering by service when using `docker-compose logs` would not apply for newly created services.
- Fixed a bug where unchanged services would sometimes be recreated in the up phase when using Compose with Python 3.
- Fixed an issue where API errors encountered during the up phase would not be recognized as a failure state by Compose.
- Fixed a bug where Compose would raise a NameError because of an undefined exception name on non-Windows platforms.
- Fixed a bug where the wrong version of `docker-py` would sometimes be installed alongside Compose.
- Fixed a bug where the host value output by `docker-machine config default` would not be recognized as a valid option by the `docker-compose` command line.
- Fixed an issue where Compose would sometimes exit unexpectedly while reading events broadcasted by a Swarm cluster.
- Corrected a statement in the docs about the location of the `.env` file, which is indeed read from the current directory, instead of in the same location as the Compose file.

1.7.0 (2016-04-13)
------------------

**Breaking Changes**

- `docker-compose logs` no longer follows log output by default. It now matches the behaviour of `docker logs` and exits after the current logs are printed. Use `-f` to get the old default behaviour.
- Booleans are no longer allowed as values for mappings in the Compose file (for keys `environment`, `labels` and `extra_hosts`). Previously this was a warning. Boolean values should be quoted so they become string values.

New Features

- Compose now looks for a `.env` file in the directory where it's run and reads any environment variables defined inside, if they're not already set in the shell environment. This lets you easily set defaults for variables used in the Compose file, or for any of the `COMPOSE_*` or `DOCKER_*` variables.
- Added a `--remove-orphans` flag to both `docker-compose up` and `docker-compose down` to remove containers for services that were removed from the Compose file.
- Added a `--all` flag to `docker-compose rm` to include containers created by `docker-compose run`. This will become the default behavior in the next version of Compose.
- Added support for all the same TLS configuration flags used by the `docker` client: `--tls`, `--tlscert`, `--tlskey`, etc.
- Compose files now support the `tmpfs` and `shm_size` options (see the example below).
- Added the `--workdir` flag to `docker-compose run`.
- `docker-compose logs` now shows logs for new containers that are created after it starts.
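  To illustrate the new `tmpfs` and `shm_size` options mentioned above, a minimal sketch of a version 2 service (the image and values are illustrative):

      version: '2'
      services:
        app:
          image: busybox
          shm_size: 64M
          tmpfs:
            - /run
            - /tmp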
- The `COMPOSE_FILE` environment variable can now contain multiple files, separated by the host system's standard path separator (`:` on Mac/Linux, `;` on Windows).
- You can now specify a static IP address when connecting a service to a network with the `ipv4_address` and `ipv6_address` options.
- Added `--follow`, `--timestamp`, and `--tail` flags to the `docker-compose logs` command.
- `docker-compose up` and `docker-compose start` will now start containers in parallel where possible.
- `docker-compose stop` now stops containers in reverse dependency order instead of all at once.
- Added the `--build` flag to `docker-compose up` to force it to build a new image. It now shows a warning if an image is automatically built when the flag is not used.
- Added the `docker-compose exec` command for executing a process in a running container.

Bug Fixes

- `docker-compose down` now removes containers created by `docker-compose run`.
- A more appropriate error is shown when a timeout is hit during `up` when using a tty.
- Fixed a bug in `docker-compose down` where it would abort if some resources had already been removed.
- Fixed a bug where changes to network aliases would not trigger a service to be recreated.
- Fixed a bug where a log message was printed about creating a new volume when it already existed.
- Fixed a bug where interrupting `up` would not always shut down containers.
- Fixed a bug where `log_opt` and `log_driver` were not properly carried over when extending services in the v1 Compose file format.
- Fixed a bug where empty values for build args would cause file validation to fail.

1.6.2 (2016-02-23)
------------------

- Fixed a bug where connecting to a TLS-enabled Docker Engine would fail with a certificate verification error.

1.6.1 (2016-02-23)
------------------

Bug Fixes

- Fixed a bug where recreating a container multiple times would cause the new container to be started without the previous volumes.
- Fixed a bug where Compose would set the value of unset environment variables to an empty string, instead of a key without a value.
- Provide a better error message when Compose requires a more recent version of the Docker API.
- Added a missing config field `network.aliases`, which allows setting a network-scoped alias for a service.
- Fixed a bug where `run` would not start services listed in `depends_on`.
- Fixed a bug where `networks` and `network_mode` were not merged when using extends or multiple Compose files.
- Fixed a bug with service aliases where the short container id alias only contained 10 characters, instead of the 12 characters used in previous versions.
- Added a missing log message when creating a new named volume.
- Fixed a bug where `build.args` was not merged when using `extends` or multiple Compose files.
- Fixed some bugs with config validation when null values or incorrect types were used instead of a mapping.
- Fixed a bug where a `build` section without a `context` would show a stack trace instead of a helpful validation message.
- Improved compatibility with swarm by only setting a container affinity to the previous instance of a service's container when the service uses an anonymous container volume. Previously the affinity was always set on all containers.
- Fixed a bug where validation of some `driver_opts` would cause an error if a number was used instead of a string (see the example below).
- Some improvements to the `run.sh` script used by the Compose container install option.
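  As a sketch of the `driver_opts` fix mentioned above, numeric-looking option values are best written as quoted strings; the volume name, driver, and options below are illustrative:

      volumes:
        mydata:
          driver: local
          driver_opts:
            type: tmpfs
            device: tmpfs
            o: "size=100m"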
- Fixed a bug with `up --abort-on-container-exit` where Compose would exit, but would not stop other containers.
- Corrected the warning message that is printed when a boolean value is used as a value in a mapping.

1.6.0 (2016-01-15)
------------------

Major Features:

- Compose 1.6 introduces a new format for `docker-compose.yml` which lets you define networks and volumes in the Compose file as well as services. It also makes a few changes to the structure of some configuration options.

  You don't have to use it - your existing Compose files will run on Compose 1.6 exactly as they do today.

  Check the upgrade guide for full details: https://docs.docker.com/compose/compose-file#upgrading

- Support for networking has exited experimental status and is the recommended way to enable communication between containers.

  If you use the new file format, your app will use networking. If you aren't ready yet, just leave your Compose file as it is and it'll continue to work just the same.

  By default, you don't have to configure any networks. In fact, using networking with Compose involves even less configuration than using links. Consult the networking guide for how to use it: https://docs.docker.com/compose/networking

  The experimental flags `--x-networking` and `--x-network-driver`, introduced in Compose 1.5, have been removed.

- You can now pass arguments to a build if you're using the new file format:

        build:
          context: .
          args:
            buildno: 1

- You can now specify both a `build` and an `image` key if you're using the new file format. `docker-compose build` will build the image and tag it with the name you've specified, while `docker-compose pull` will attempt to pull it.

- There's a new `events` command for monitoring container events from the application, much like `docker events`. This is a good primitive for building tools on top of Compose for performing actions when particular things happen, such as containers starting and stopping.

- There's a new `depends_on` option for specifying dependencies between services. This enforces the order of startup, and ensures that when you run `docker-compose up SERVICE` on a service with dependencies, those are started as well.

New Features:

- Added a new command `config` which validates and prints the Compose configuration after interpolating variables, resolving relative paths, and merging multiple files and `extends`.
- Added a new command `create` for creating containers without starting them.
- Added a new command `down` to stop and remove all the resources created by `up` in a single command.
- Added support for the `cpu_quota` configuration option.
- Added support for the `stop_signal` configuration option.
- Commands `start`, `restart`, `pause`, and `unpause` now exit with an error status code if no containers were modified.
- Added a new `--abort-on-container-exit` flag to `up` which causes `up` to stop all containers and exit once the first container exits.
- Removed support for `FIG_FILE`, `FIG_PROJECT_NAME`, and no longer reads `fig.yml` as a default Compose file location.
- Removed the `migrate-to-labels` command.
- Removed the `--allow-insecure-ssl` flag.

Bug Fixes:

- Fixed a validation bug that prevented the use of a range of ports in the `expose` field.
- Fixed a validation bug that prevented the use of arrays in the `entrypoint` field if they contained duplicate entries.
- Fixed a bug that caused `ulimits` to be ignored when used with `extends`.
- Fixed a bug that prevented ipv6 addresses in `extra_hosts` (see the example below).
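  To illustrate the `extra_hosts` fix above, both IPv4 and IPv6 entries are now accepted (the hostnames and addresses are illustrative):

      extra_hosts:
        - "somehost:162.242.195.82"
        - "otherhost:2001:db8::10"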
- Fixed a bug that caused `extends` to be ignored when included from multiple Compose files.
- Fixed an incorrect warning when a container volume was defined in the Compose file.
- Fixed a bug that prevented the force shutdown behaviour of `up` and `logs`.
- Fixed a bug that caused `None` to be printed as the network driver name when the default network driver was used.
- Fixed a bug where using the string form of `dns` or `dns_search` would cause an error.
- Fixed a bug where a container would be reported as "Up" when it was in the restarting state.
- Fixed a confusing error message when DOCKER_CERT_PATH was not set properly.
- Fixed a bug where attaching to a container would fail if it was using a non-standard logging driver (or none at all).

1.5.2 (2015-12-03)
------------------

- Fixed a bug which broke the use of `environment` and `env_file` with `extends`, and caused environment keys without values to have a `None` value, instead of a value from the host environment.
- Fixed a regression in 1.5.1 that caused a warning about volumes to be raised incorrectly when containers were recreated.
- Fixed a bug which prevented building a `Dockerfile` that used `ADD `
- Fixed a bug with `docker-compose restart` which prevented it from starting stopped containers.
- Fixed handling of SIGTERM and SIGINT to properly stop containers.
- Added support for using a URL as the value of `build`.
- Improved the validation of the `expose` option.

1.5.1 (2015-11-12)
------------------

- Added the `--force-rm` option to `build`.
- Added the `ulimit` option for services in the Compose file.
- Fixed a bug where `up` would error with "service needs to be built" if a service changed from using `image` to using `build`.
- Fixed a bug that would cause incorrect output of parallel operations on some terminals.
- Fixed a bug that prevented a container from being recreated when the mode of a `volumes_from` was changed.
- Fixed a regression in 1.5.0 where non-utf-8 unicode characters would cause `up` or `logs` to crash.
- Fixed a regression in 1.5.0 where Compose would use a success exit status code when a command fails due to an HTTP timeout communicating with the docker daemon.
- Fixed a regression in 1.5.0 where `name` was being accepted as a valid service option which would override the actual name of the service.
- When using `--x-networking` Compose no longer sets the hostname to the container name.
- When using `--x-networking` Compose will only create the default network if at least one container is using the network.
- When printing logs during `up` or `logs`, flush the output buffer after each line to prevent buffering issues from hiding logs.
- Recreate a container if one of its dependencies is being created. Previously a container was only recreated if its dependencies already existed, but were being recreated as well.
- Added a warning when a `volume` in the Compose file is being ignored and masked by a container volume from a previous container.
- Improved the output of `pull` when run without a tty.
- When using multiple Compose files, validate each before attempting to merge them together. Previously invalid files would result in unhelpful errors (see the example below).
- Allow dashes in keys in the `environment` service option.
- Improved validation error messages by including the filename as part of the error message.
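As a sketch of the multi-file handling referenced in the validation fix above (and introduced as a feature in 1.5.0 below), each file is now validated before the two are merged; the file names are illustrative:

    $ docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d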
1.5.0 (2015-11-03)
------------------

**Breaking changes:**

With the introduction of variable substitution support in the Compose file, any Compose file that uses an environment variable (`$VAR` or `${VAR}`) in the `command:` or `entrypoint:` field will break.

Previously these values were interpolated inside the container, with a value from the container environment. In Compose 1.5.0, the values will be interpolated on the host, with a value from the host environment.

To migrate a Compose file to 1.5.0, escape the variables with an extra `$` (ex: `$$VAR` or `$${VAR}`). See https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution

Major features:

- Compose is now available for Windows.
- Environment variables can be used in the Compose file. See https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution
- Multiple compose files can be specified, allowing you to override settings in the default Compose file. See https://github.com/docker/compose/blob/8cc8e61/docs/reference/docker-compose.md for more details.
- Compose now produces better error messages when a file contains invalid configuration.
- `up` now waits for all services to exit before shutting down, rather than shutting down as soon as one container exits.
- Experimental support for the new docker networking system can be enabled with the `--x-networking` flag. Read more here: https://github.com/docker/docker/blob/8fee1c20/docs/userguide/dockernetworks.md

New features:

- You can now optionally pass a mode to `volumes_from`, e.g. `volumes_from: ["servicename:ro"]`.
- Since Docker now lets you create volumes with names, you can refer to those volumes by name in `docker-compose.yml`. For example, `volumes: ["mydatavolume:/data"]` will mount the volume named `mydatavolume` at the path `/data` inside the container.

  If the first component of an entry in `volumes` starts with a `.`, `/` or `~`, it is treated as a path and expansion of relative paths is performed as necessary. Otherwise, it is treated as a volume name and passed straight through to Docker.

  Read more on named volumes and volume drivers here: https://github.com/docker/docker/blob/244d9c33/docs/userguide/dockervolumes.md

- `docker-compose build --pull` instructs Compose to pull the base image for each Dockerfile before building.
- `docker-compose pull --ignore-pull-failures` instructs Compose to continue if it fails to pull a single service's image, rather than aborting.
- You can now specify an IPC namespace in `docker-compose.yml` with the `ipc` option.
- Containers created by `docker-compose run` can now be named with the `--name` flag.
- If you install Compose with pip or use it as a library, it now works with Python 3.
- `image` now supports image digests (in addition to ids and tags), e.g. `image: "busybox@sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d"`
- `ports` now supports ranges of ports, e.g.

        ports:
          - "3000-3005"
          - "9000-9001:8000-8001"

- `docker-compose run` now supports a `-p|--publish` parameter, much like `docker run -p`, for publishing specific ports to the host.
- `docker-compose pause` and `docker-compose unpause` have been implemented, analogous to `docker pause` and `docker unpause`.
- When using `extends` to copy configuration from another service in the same Compose file, you can omit the `file` option.
- Compose can be installed and run as a Docker image. This is an experimental feature.
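A brief sketch of the variable substitution feature described above (the image name and variable are illustrative):

    web:
      image: "myrepo/web:${TAG}"

With `TAG=1.5` set in the host environment, Compose resolves the image to `myrepo/web:1.5`. To pass a literal `$` through to the container instead, escape it as `$$`.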
Bug fixes:

- All values for the `log_driver` option which are supported by the Docker daemon are now supported by Compose.
- `docker-compose build` can now be run successfully against a Swarm cluster.

1.4.2 (2015-09-22)
------------------

- Fixed a regression in the 1.4.1 release that would cause `docker-compose up` without the `-d` option to exit immediately.

1.4.1 (2015-09-10)
------------------

The following bugs have been fixed:

- Some configuration changes (notably changes to `links`, `volumes_from`, and `net`) were not properly triggering a container recreate as part of `docker-compose up`.
- `docker-compose up ` was showing logs for all services instead of just the specified services.
- Containers with custom container names were showing up in logs as `service_number` instead of their custom container name.
- When scaling a service, sometimes containers would be recreated even when the configuration had not changed.

1.4.0 (2015-08-04)
------------------

- By default, `docker-compose up` now only recreates containers for services whose configuration has changed since they were created. This should result in a dramatic speed-up for many applications.

  The experimental `--x-smart-recreate` flag which introduced this feature in Compose 1.3.0 has been removed, and a `--force-recreate` flag has been added for when you want to recreate everything.

- Several of Compose's commands - `scale`, `stop`, `kill` and `rm` - now perform actions on multiple containers in parallel, rather than in sequence, which will run much faster on larger applications.
- You can now specify a custom name for a service's container with `container_name`. Because Docker container names must be unique, this means you can't scale the service beyond one container.
- You no longer have to specify a `file` option when using `extends` - it will default to the current file.
- Service names can now contain dots, dashes and underscores.
- Compose can now read YAML configuration from standard input, rather than from a file, by specifying `-` as the filename. This makes it easier to generate configuration dynamically:

        $ echo 'redis: {"image": "redis"}' | docker-compose --file - up

- There's a new `docker-compose version` command which prints extended information about Compose's bundled dependencies.
- `docker-compose.yml` now supports `log_opt` as well as `log_driver`, allowing you to pass extra configuration to a service's logging driver.
- `docker-compose.yml` now supports `memswap_limit`, similar to `docker run --memory-swap`.
- When mounting volumes with the `volumes` option, you can now pass in any mode supported by the daemon, not just `:ro` or `:rw`. For example, SELinux users can pass `:z` or `:Z` (see the example below).
- You can now specify a custom volume driver with the `volume_driver` option in `docker-compose.yml`, much like `docker run --volume-driver`.
- A bug has been fixed where Compose would fail to pull images from private registries serving plain (unsecured) HTTP. The `--allow-insecure-ssl` flag, which was previously used to work around this issue, has been deprecated and now has no effect.
- A bug has been fixed where `docker-compose build` would fail if the build depended on a private Hub image or an image from a private registry.
- A bug has been fixed where Compose would crash if there were containers which the Docker daemon had not finished removing.
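  A sketch of the extended volume modes mentioned above (the paths are illustrative; `:z` relabels the content for SELinux):

      web:
        volumes:
          - .:/code:z
          - ./static:/usr/share/www:ro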
- Two bugs have been fixed where Compose would sometimes fail with a "Duplicate bind mount" error, or fail to attach volumes to a container, if there was a volume path specified in `docker-compose.yml` with a trailing slash.

Thanks @mnowster, @dnephin, @ekristen, @funkyfuture, @jeffk and @lukemarsden!

1.3.3 (2015-07-15)
------------------

Two regressions have been fixed:

- When stopping containers gracefully, Compose was setting the timeout to 0, effectively forcing a SIGKILL every time.
- Compose would sometimes crash depending on the formatting of container data returned from the Docker API.

1.3.2 (2015-07-14)
------------------

The following bugs have been fixed:

- When there were one-off containers created by running `docker-compose run` on an older version of Compose, `docker-compose run` would fail with a name collision. Compose now shows an error if you have leftover containers of this type lying around, and tells you how to remove them.
- Compose was not reading Docker authentication config files created in the new location, `~/.docker/config.json`, and authentication against private registries would therefore fail.
- When a container had a pseudo-TTY attached, its output in `docker-compose up` would be truncated.
- `docker-compose up --x-smart-recreate` would sometimes fail when an image tag was updated.
- `docker-compose up` would sometimes create two containers with the same numeric suffix.
- `docker-compose rm` and `docker-compose ps` would sometimes list services that aren't part of the current project (though no containers were erroneously removed).
- Some `docker-compose` commands would not show an error if invalid service names were passed in.

Thanks @dano, @josephpage, @kevinsimper, @lieryan, @phemmer, @soulrebel and @sschepens!

1.3.1 (2015-06-21)
------------------

The following bugs have been fixed:

- `docker-compose build` would always attempt to pull the base image before building.
- `docker-compose help migrate-to-labels` failed with an error.
- If no network mode was specified, Compose would set it to "bridge", rather than allowing the Docker daemon to use its configured default network mode.

1.3.0 (2015-06-18)
------------------

Firstly, two important notes:

- **This release contains breaking changes, and you will need to either remove or migrate your existing containers before running your app** - see the [upgrading section of the install docs](https://github.com/docker/compose/blob/1.3.0rc1/docs/install.md#upgrading) for details.
- Compose now requires Docker 1.6.0 or later.

We've done a lot of work in this release to remove hacks and make Compose more stable:

- Compose now uses container labels, rather than names, to keep track of containers. This makes Compose both faster and easier to integrate with your own tools.
- Compose no longer uses "intermediate containers" when recreating containers for a service. This makes `docker-compose up` less complex and more resilient to failure.

There are some new features:

- `docker-compose up` has an **experimental** new behaviour: it will only recreate containers for services whose configuration has changed in `docker-compose.yml`. This will eventually become the default, but for now you can take it for a spin:

        $ docker-compose up --x-smart-recreate

- When invoked in a subdirectory of a project, `docker-compose` will now climb up through parent directories until it finds a `docker-compose.yml` (see the example below).
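  For example (the directory layout is illustrative):

      $ cd myproject/web/static
      $ docker-compose ps    # finds and uses myproject/docker-compose.yml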
Several new configuration keys have been added to `docker-compose.yml`: - `dockerfile`, like `docker build --file`, lets you specify an alternate Dockerfile to use with `build`. - `labels`, like `docker run --labels`, lets you add custom metadata to containers. - `extra_hosts`, like `docker run --add-host`, lets you add entries to a container's `/etc/hosts` file. - `pid: host`, like `docker run --pid=host`, lets you reuse the same PID namespace as the host machine. - `cpuset`, like `docker run --cpuset-cpus`, lets you specify which CPUs to allow execution in. - `read_only`, like `docker run --read-only`, lets you mount a container's filesystem as read-only. - `security_opt`, like `docker run --security-opt`, lets you specify [security options](https://docs.docker.com/engine/reference/run/#security-configuration). - `log_driver`, like `docker run --log-driver`, lets you specify a [log driver](https://docs.docker.com/engine/reference/run/#logging-drivers-log-driver). Many bugs have been fixed, including the following: - The output of `docker-compose run` was sometimes truncated, especially when running under Jenkins. - A service's volumes would sometimes not update after volume configuration was changed in `docker-compose.yml`. - Authenticating against third-party registries would sometimes fail. - `docker-compose run --rm` would fail to remove the container if the service had a `restart` policy in place. - `docker-compose scale` would refuse to scale a service beyond 1 container if it exposed a specific port number on the host. - Compose would refuse to create multiple volume entries with the same host path. Thanks @ahromis, @albers, @aleksandr-vin, @antoineco, @ccverak, @chernjie, @dnephin, @edmorley, @fordhurley, @josephpage, @KyleJamesWalker, @lsowen, @mchasal, @noironetworks, @sdake, @sdurrheimer, @sherter, @stephenlawrence, @thaJeztah, @thieman, @turtlemonvh, @twhiteman, @vdemeester, @xuxinkun and @zwily! 1.2.0 (2015-04-16) ------------------ - `docker-compose.yml` now supports an `extends` option, which enables a service to inherit configuration from another service in another configuration file. This is really good for sharing common configuration between apps, or for configuring the same app for different environments. Here's the [documentation](https://github.com/docker/compose/blob/master/docs/yml.md#extends). - When using Compose with a Swarm cluster, containers that depend on one another will be co-scheduled on the same node. This means that most Compose apps will now work out of the box, as long as they don't use `build`. - Repeated invocations of `docker-compose up` when using Compose with a Swarm cluster now work reliably. - Directories passed to `build`, filenames passed to `env_file` and volume host paths passed to `volumes` are now treated as relative to the *directory of the configuration file*, not the directory that `docker-compose` is being run in. In the majority of cases, those are the same, but if you use the `-f|--file` argument to specify a configuration file in another directory, **this is a breaking change**. - A service can now share another service's network namespace with `net: container:`. - `volumes_from` and `net: container:` entries are taken into account when resolving dependencies, so `docker-compose up ` will correctly start all dependencies of ``. - `docker-compose run` now accepts a `--user` argument to specify a user to run the command as, just like `docker run`. 
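  For example, to run a one-off command as a specific user (the service name and user are illustrative):

      $ docker-compose run --user=nobody web whoami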
- The `up`, `stop` and `restart` commands now accept a `--timeout` (or `-t`) argument to specify how long to wait when attempting to gracefully stop containers, just like `docker stop`. - `docker-compose rm` now accepts `-f` as a shorthand for `--force`, just like `docker rm`. Thanks, @abesto, @albers, @alunduil, @dnephin, @funkyfuture, @gilclark, @IanVS, @KingsleyKelly, @knutwalker, @thaJeztah and @vmalloc! 1.1.0 (2015-02-25) ------------------ Fig has been renamed to Docker Compose, or just Compose for short. This has several implications for you: - The command you type is now `docker-compose`, not `fig`. - You should rename your fig.yml to docker-compose.yml. - If you’re installing via PyPi, the package is now `docker-compose`, so install it with `pip install docker-compose`. Besides that, there’s a lot of new stuff in this release: - We’ve made a few small changes to ensure that Compose will work with Swarm, Docker’s new clustering tool (https://github.com/docker/swarm). Eventually you'll be able to point Compose at a Swarm cluster instead of a standalone Docker host and it’ll run your containers on the cluster with no extra work from you. As Swarm is still developing, integration is rough and lots of Compose features don't work yet. - `docker-compose run` now has a `--service-ports` flag for exposing ports on the given service. This is useful for e.g. running your webapp with an interactive debugger. - You can now link to containers outside your app with the `external_links` option in docker-compose.yml. - You can now prevent `docker-compose up` from automatically building images with the `--no-build` option. This will make fewer API calls and run faster. - If you don’t specify a tag when using the `image` key, Compose will default to the `latest` tag, rather than pulling all tags. - `docker-compose kill` now supports the `-s` flag, allowing you to specify the exact signal you want to send to a service’s containers. - docker-compose.yml now has an `env_file` key, analogous to `docker run --env-file`, letting you specify multiple environment variables in a separate file. This is great if you have a lot of them, or if you want to keep sensitive information out of version control. - docker-compose.yml now supports the `dns_search`, `cap_add`, `cap_drop`, `cpu_shares` and `restart` options, analogous to `docker run`’s `--dns-search`, `--cap-add`, `--cap-drop`, `--cpu-shares` and `--restart` options. - Compose now ships with Bash tab completion - see the installation and usage docs at https://github.com/docker/compose/blob/1.1.0/docs/completion.md - A number of bugs have been fixed - see the milestone for details: https://github.com/docker/compose/issues?q=milestone%3A1.1.0+ Thanks @dnephin, @squebe, @jbalonso, @raulcd, @benlangfield, @albers, @ggtools, @bersace, @dtenenba, @petercv, @drewkett, @TFenby, @paulRbr, @Aigeruth and @salehe! 1.0.1 (2014-11-04) ------------------ - Added an `--allow-insecure-ssl` option to allow `fig up`, `fig run` and `fig pull` to pull from insecure registries. - Fixed `fig run` not showing output in Jenkins. - Fixed a bug where Fig couldn't build Dockerfiles with ADD statements pointing at URLs. 1.0.0 (2014-10-16) ------------------ The highlights: - [Fig has joined Docker.](https://www.orchardup.com/blog/orchard-is-joining-docker) Fig will continue to be maintained, but we'll also be incorporating the best bits of Fig into Docker itself. 
This means the GitHub repository has moved to [https://github.com/docker/fig](https://github.com/docker/fig) and our IRC channel is now #docker-fig on Freenode. - Fig can be used with the [official Docker OS X installer](https://docs.docker.com/installation/mac/). Boot2Docker will mount the home directory from your host machine so volumes work as expected. - Fig supports Docker 1.3. - It is now possible to connect to the Docker daemon using TLS by using the `DOCKER_CERT_PATH` and `DOCKER_TLS_VERIFY` environment variables. - There is a new `fig port` command which outputs the host port binding of a service, in a similar way to `docker port`. - There is a new `fig pull` command which pulls the latest images for a service. - There is a new `fig restart` command which restarts a service's containers. - Fig creates multiple containers in service by appending a number to the service name (e.g. `db_1`, `db_2`, etc). As a convenience, Fig will now give the first container an alias of the service name (e.g. `db`). This link alias is also a valid hostname and added to `/etc/hosts` so you can connect to linked services using their hostname. For example, instead of resolving the environment variables `DB_PORT_5432_TCP_ADDR` and `DB_PORT_5432_TCP_PORT`, you could just use the hostname `db` and port `5432` directly. - Volume definitions now support `ro` mode, expanding `~` and expanding environment variables. - `.dockerignore` is supported when building. - The project name can be set with the `FIG_PROJECT_NAME` environment variable. - The `--env` and `--entrypoint` options have been added to `fig run`. - The Fig binary for Linux is now linked against an older version of glibc so it works on CentOS 6 and Debian Wheezy. Other things: - `fig ps` now works on Jenkins and makes fewer API calls to the Docker daemon. - `--verbose` displays more useful debugging output. - When starting a service where `volumes_from` points to a service without any containers running, that service will now be started. - Lots of docs improvements. Notably, environment variables are documented and official repositories are used throughout. Thanks @dnephin, @d11wtq, @marksteve, @rubbish, @jbalonso, @timfreund, @alunduil, @mieciu, @shuron, @moss, @suzaku and @chmouel! Whew. 0.5.2 (2014-07-28) ------------------ - Added a `--no-cache` option to `fig build`, which bypasses the cache just like `docker build --no-cache`. - Fixed the `dns:` fig.yml option, which was causing fig to error out. - Fixed a bug where fig couldn't start under Python 2.6. - Fixed a log-streaming bug that occasionally caused fig to exit. Thanks @dnephin and @marksteve! 0.5.1 (2014-07-11) ------------------ - If a service has a command defined, `fig run [service]` with no further arguments will run it. - The project name now defaults to the directory containing fig.yml, not the current working directory (if they're different) - `volumes_from` now works properly with containers as well as services - Fixed a race condition when recreating containers in `fig up` Thanks @ryanbrainard and @d11wtq! 0.5.0 (2014-07-11) ------------------ - Fig now starts links when you run `fig run` or `fig up`. For example, if you have a `web` service which depends on a `db` service, `fig run web ...` will start the `db` service. - Environment variables can now be resolved from the environment that Fig is running in. 
Just specify it as a blank variable in your `fig.yml` and, if set, it'll be resolved:

```
environment:
  RACK_ENV: development
  SESSION_SECRET:
```

- `volumes_from` is now supported in `fig.yml`. All of the volumes from the specified services and containers will be mounted:

```
volumes_from:
  - service_name
  - container_name
```

- A host address can now be specified in `ports`:

```
ports:
  - "0.0.0.0:8000:8000"
  - "127.0.0.1:8001:8001"
```

- The `net` and `workdir` options are now supported in `fig.yml`.
- The `hostname` option now works in the same way as the Docker CLI, splitting out into a `domainname` option.
- TTY behaviour is far more robust, and resizes are supported correctly.
- Load YAML files safely.

Thanks to @d11wtq, @ryanbrainard, @rail44, @j0hnsmith, @binarin, @Elemecca, @mozz100 and @marksteve for their help with this release!

0.4.2 (2014-06-18)
------------------

- Fix various encoding errors when using `fig run`, `fig up` and `fig build`.

0.4.1 (2014-05-08)
------------------

- Add support for Docker 0.11.0. (Thanks @marksteve!)
- Make project name configurable. (Thanks @jefmathiot!)
- Return correct exit code from `fig run`.

0.4.0 (2014-04-29)
------------------

- Support Docker 0.9 and 0.10
- Display progress bars correctly when pulling images (no more ski slopes)
- `fig up` now stops all services when any container exits
- Added support for the `privileged` config option in fig.yml (thanks @kvz!)
- Shortened and aligned log prefixes in `fig up` output
- Only containers started with `fig run` link back to their own service
- Handle UTF-8 correctly when streaming `fig build/run/up` output (thanks @mauvm and @shanejonas!)
- Error message improvements

0.3.2 (2014-03-05)
------------------

- Added an `--rm` option to `fig run`. (Thanks @marksteve!)
- Added an `expose` option to `fig.yml`.

0.3.1 (2014-03-04)
------------------

- Added contribution instructions. (Thanks @kvz!)
- Fixed `fig rm` throwing an error.
- Fixed a bug in `fig ps` on Docker 0.8.1 when there is a container with no command.

0.3.0 (2014-03-03)
------------------

- We now ship binaries for OS X and Linux. No more having to install with Pip!
- Add `-f` flag to specify alternate `fig.yml` files
- Add support for custom link names
- Fix a bug where recreating would sometimes hang
- Update docker-py to support Docker 0.8.0.
- Various documentation improvements
- Various error message improvements

Thanks @marksteve, @Gazler and @teozkr!

0.2.2 (2014-02-17)
------------------

- Resolve dependencies using Cormen/Tarjan topological sort
- Fix `fig up` not printing log output
- Stop containers in reverse order to starting
- Fix scale command not binding ports

Thanks to @barnybug and @dustinlacewell for their work on this release.

0.2.1 (2014-02-04)
------------------

- General improvements to error reporting (#77, #79)

0.2.0 (2014-01-31)
------------------

- Link services to themselves so run commands can access the running service. (#67)
- Much better documentation.
- Make service dependency resolution more reliable. (#48)
- Load Fig configurations with a `.yaml` extension. (#58)

Big thanks to @cameronmaske, @mrchrisadams and @damianmoore for their help with this release.

0.1.4 (2014-01-27)
------------------

- Add a link alias without the project name. This makes the environment variables a little shorter: `REDIS_1_PORT_6379_TCP_ADDR`. (#54)

0.1.3 (2014-01-23)
------------------

- Fix ports sometimes being configured incorrectly. (#46)
- Fix log output sometimes not displaying. (#47)
0.1.2 (2014-01-22)
------------------

- Add `-T` option to `fig run` to disable pseudo-TTY. (#34)
- Fix `fig up` requiring the ubuntu image to be pulled to recreate containers. (#33) Thanks @cameronmaske!
- Improve reliability, fix arrow keys and fix a race condition in `fig run`. (#34, #39, #40)

0.1.1 (2014-01-17)
------------------

- Fix bug where ports were not exposed correctly (#29). Thanks @dustinlacewell!

0.1.0 (2014-01-16)
------------------

- Containers are recreated on each `fig up`, ensuring config is up-to-date with `fig.yml` (#2)
- Add `fig scale` command (#9)
- Use `DOCKER_HOST` environment variable to find Docker daemon, for consistency with the official Docker client (was previously `DOCKER_URL`) (#19)
- Truncate long commands in `fig ps` (#18)
- Fill out CLI help banners for commands (#15, #16)
- Show a friendlier error when `fig.yml` is missing (#4)
- Fix bug with `fig build` logging (#3)
- Fix bug where builds would time out if a step took a long time without generating output (#6)
- Fix bug where streaming container output over the Unix socket raised an error (#7)

Big thanks to @tomstuart, @EnTeQuAk, @schickling, @aronasorman and @GeoffreyPlitt.

0.0.2 (2014-01-02)
------------------

- Improve documentation
- Try to connect to Docker on `tcp://localdocker:4243` and a UNIX socket in addition to `localhost`.
- Improve `fig up` behaviour
- Add confirmation prompt to `fig rm`
- Add `fig build` command

0.0.1 (2013-12-20)
------------------

Initial release.

compose-1.8.0/CHANGES.md: symlink to CHANGELOG.md

compose-1.8.0/CONTRIBUTING.md:

# Contributing to Compose

Compose is a part of the Docker project, and follows the same rules and principles. Take a read of [Docker's contributing guidelines](https://github.com/docker/docker/blob/master/CONTRIBUTING.md) to get an overview.

## TL;DR

Pull requests will need:

- Tests
- Documentation
- [To be signed off](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work)
- A logical series of [well written commits](https://github.com/alphagov/styleguides/blob/master/git.md)

## Development environment

If you're looking to contribute to Compose but you're new to the project or maybe even to Python, here are the steps that should get you started.

1. Fork [https://github.com/docker/compose](https://github.com/docker/compose) to your username.
2. Clone your forked repository locally `git clone git@github.com:yourusername/compose.git`.
3. You must [configure a remote](https://help.github.com/articles/configuring-a-remote-for-a-fork/) for your fork so that you can [sync changes you make](https://help.github.com/articles/syncing-a-fork/) with the original repository.
4. Enter the local directory `cd compose`.
5. Set up a development environment by running `python setup.py develop`. This will install the dependencies and set up a symlink from your `docker-compose` executable to the checkout of the repository. When you now run `docker-compose` from anywhere on your machine, it will run your development version of Compose.

## Install pre-commit hooks

This step is optional, but recommended. Pre-commit hooks will run style checks and in some cases fix style issues for you, when you commit code.

Install the git pre-commit hooks using [tox](https://tox.readthedocs.org) by running `tox -e pre-commit` or by following the [pre-commit install guide](http://pre-commit.com/#install).
To run the style checks at any time run `tox -e pre-commit`. ## Submitting a pull request See Docker's [basic contribution workflow](https://docs.docker.com/opensource/workflow/make-a-contribution/#the-basic-contribution-workflow) for a guide on how to submit a pull request for code or documentation. ## Running the test suite Use the test script to run linting checks and then the full test suite against different Python interpreters: $ script/test/default Tests are run against a Docker daemon inside a container, so that we can test against multiple Docker versions. By default they'll run against only the latest Docker version - set the `DOCKER_VERSIONS` environment variable to "all" to run against all supported versions: $ DOCKER_VERSIONS=all script/test/default Arguments to `script/test/default` are passed through to the `tox` executable, so you can specify a test directory, file, module, class or method: $ script/test/default tests/unit $ script/test/default tests/unit/cli_test.py $ script/test/default tests/unit/config_test.py::ConfigTest $ script/test/default tests/unit/config_test.py::ConfigTest::test_load ## Finding things to work on We use a [ZenHub board](https://www.zenhub.io/) to keep track of specific things we are working on and planning to work on. If you're looking for things to work on, stuff in the backlog is a great place to start. For more information about our project planning, take a look at our [GitHub wiki](https://github.com/docker/compose/wiki). compose-1.8.0/Dockerfile000066400000000000000000000034171274620702700151730ustar00rootroot00000000000000FROM debian:wheezy RUN set -ex; \ apt-get update -qq; \ apt-get install -y \ locales \ gcc \ make \ zlib1g \ zlib1g-dev \ libssl-dev \ git \ ca-certificates \ curl \ libsqlite3-dev \ ; \ rm -rf /var/lib/apt/lists/* RUN curl https://get.docker.com/builds/Linux/x86_64/docker-1.8.3 \ -o /usr/local/bin/docker && \ chmod +x /usr/local/bin/docker # Build Python 2.7.9 from source RUN set -ex; \ curl -L https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz | tar -xz; \ cd Python-2.7.9; \ ./configure --enable-shared; \ make; \ make install; \ cd ..; \ rm -rf /Python-2.7.9 # Build python 3.4 from source RUN set -ex; \ curl -L https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tgz | tar -xz; \ cd Python-3.4.3; \ ./configure --enable-shared; \ make; \ make install; \ cd ..; \ rm -rf /Python-3.4.3 # Make libpython findable ENV LD_LIBRARY_PATH /usr/local/lib # Install setuptools RUN set -ex; \ curl -L https://bootstrap.pypa.io/ez_setup.py | python # Install pip RUN set -ex; \ curl -L https://pypi.python.org/packages/source/p/pip/pip-8.1.1.tar.gz | tar -xz; \ cd pip-8.1.1; \ python setup.py install; \ cd ..; \ rm -rf pip-8.1.1 # Python3 requires a valid locale RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen ENV LANG en_US.UTF-8 RUN useradd -d /home/user -m -s /bin/bash user WORKDIR /code/ RUN pip install tox==2.1.1 ADD requirements.txt /code/ ADD requirements-dev.txt /code/ ADD .pre-commit-config.yaml /code/ ADD setup.py /code/ ADD tox.ini /code/ ADD compose /code/compose/ RUN tox --notest ADD . 
/code/ RUN chown -R user /code/ ENTRYPOINT ["/code/.tox/py27/bin/docker-compose"] compose-1.8.0/Dockerfile.run000066400000000000000000000005351274620702700157740ustar00rootroot00000000000000 FROM alpine:3.4 RUN apk -U add \ python \ py-pip COPY requirements.txt /code/requirements.txt RUN pip install -r /code/requirements.txt ADD dist/docker-compose-release.tar.gz /code/docker-compose RUN pip install --no-deps /code/docker-compose/docker-compose-* ENTRYPOINT ["/usr/bin/docker-compose"] compose-1.8.0/LICENSE000066400000000000000000000250061274620702700142040ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." 
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. 
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS Copyright 2014 Docker, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. compose-1.8.0/MAINTAINERS000066400000000000000000000021011274620702700146630ustar00rootroot00000000000000# Compose maintainers file # # This file describes who runs the docker/compose project and how. # This is a living document - if you see something out of date or missing, speak up! # # It is structured to be consumable by both humans and programs. 
# To extract its contents programmatically, use any TOML-compliant parser. # # This file is compiled into the MAINTAINERS file in docker/opensource. # [Org] [Org."Core maintainers"] people = [ "aanand", "bfirsh", "dnephin", "mnowster", ] [people] # A reference list of all people associated with the project. # All other sections should refer to people by their canonical key # in the people section. # ADD YOURSELF HERE IN ALPHABETICAL ORDER [people.aanand] Name = "Aanand Prasad" Email = "aanand.prasad@gmail.com" GitHub = "aanand" [people.bfirsh] Name = "Ben Firshman" Email = "ben@firshman.co.uk" GitHub = "bfirsh" [people.dnephin] Name = "Daniel Nephin" Email = "dnephin@gmail.com" GitHub = "dnephin" [people.mnowster] Name = "Mazz Mosley" Email = "mazz@houseofmnowster.com" GitHub = "mnowster" compose-1.8.0/MANIFEST.in000066400000000000000000000005201274620702700147270ustar00rootroot00000000000000include Dockerfile include LICENSE include requirements.txt include requirements-dev.txt include tox.ini include *.md exclude README.md include README.rst include compose/config/*.json include compose/GITSHA recursive-include contrib/completion * recursive-include tests * global-exclude *.pyc global-exclude *.pyo global-exclude *.un~ compose-1.8.0/README.md000066400000000000000000000052601274620702700144560ustar00rootroot00000000000000Docker Compose ============== ![Docker Compose](logo.png?raw=true "Docker Compose Logo") Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration. To learn more about all the features of Compose see [the list of features](https://github.com/docker/compose/blob/release/docs/overview.md#features). Compose is great for development, testing, and staging environments, as well as CI workflows. You can learn more about each case in [Common Use Cases](https://github.com/docker/compose/blob/release/docs/overview.md#common-use-cases). Using Compose is basically a three-step process. 1. Define your app's environment with a `Dockerfile` so it can be reproduced anywhere. 2. Define the services that make up your app in `docker-compose.yml` so they can be run together in an isolated environment: 3. Lastly, run `docker-compose up` and Compose will start and run your entire app. A `docker-compose.yml` looks like this: version: '2' services: web: build: . ports: - "5000:5000" volumes: - .:/code redis: image: redis For more information about the Compose file, see the [Compose file reference](https://github.com/docker/compose/blob/release/docs/compose-file.md) Compose has commands for managing the whole lifecycle of your application: * Start, stop and rebuild services * View the status of running services * Stream the log output of running services * Run a one-off command on a service Installation and documentation ------------------------------ - Full documentation is available on [Docker's website](https://docs.docker.com/compose/). - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. 
[Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose) - Code repository for Compose is on [Github](https://github.com/docker/compose) - If you find any problems please fill out an [issue](https://github.com/docker/compose/issues/new) Contributing ------------ [![Build Status](http://jenkins.dockerproject.org/buildStatus/icon?job=Compose%20Master)](http://jenkins.dockerproject.org/job/Compose%20Master/) Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md). Releasing --------- Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md). compose-1.8.0/ROADMAP.md000066400000000000000000000042721274620702700146060ustar00rootroot00000000000000# Roadmap ## An even better tool for development environments Compose is a great tool for development environments, but it could be even better. For example: - It should be possible to define hostnames for containers which work from the host machine, e.g. “mywebcontainer.local”. This is needed by apps comprising multiple web services which generate links to one another (e.g. a frontend website and a separate admin webapp) ## More than just development environments Compose currently works really well in development, but we want to make the Compose file format better for test, staging, and production environments. To support these use cases, there will need to be improvements to the file format, improvements to the command-line tool, integrations with other tools, and perhaps new tools altogether. Some specific things we are considering: - Compose currently will attempt to get your application into the correct state when running `up`, but it has a number of shortcomings: - It should roll back to a known good state if it fails. - It should allow a user to check the actions it is about to perform before running them. - It should be possible to partially modify the config file for different environments (dev/test/staging/prod), passing in e.g. custom ports, volume mount paths, or volume drivers. ([#1377](https://github.com/docker/compose/issues/1377)) - Compose should recommend a technique for zero-downtime deploys. - It should be possible to continuously attempt to keep an application in the correct state, instead of just performing `up` a single time. ## Integration with Swarm Compose should integrate really well with Swarm so you can take an application you've developed on your laptop and run it on a Swarm cluster. The current state of integration is documented in [SWARM.md](SWARM.md). ## Applications spanning multiple teams Compose works well for applications that are in a single repository and depend on services that are hosted on Docker Hub. If your application depends on another application within your organisation, Compose doesn't work as well. There are several ideas about how this could work, such as [including external files](https://github.com/docker/fig/issues/318). 
compose-1.8.0/SWARM.md000066400000000000000000000000771274620702700144130ustar00rootroot00000000000000This file has moved to: https://docs.docker.com/compose/swarm/ compose-1.8.0/appveyor.yml000066400000000000000000000007611274620702700155700ustar00rootroot00000000000000 version: '{branch}-{build}' install: - "SET PATH=C:\\Python27-x64;C:\\Python27-x64\\Scripts;%PATH%" - "python --version" - "pip install tox==2.1.1 virtualenv==13.1.2" # Build the binary after tests build: false test_script: - "tox -e py27,py34 -- tests/unit" - ps: ".\\script\\build\\windows.ps1" artifacts: - path: .\dist\docker-compose-Windows-x86_64.exe name: "Compose Windows binary" deploy: - provider: Environment name: master-builds on: branch: master compose-1.8.0/bin/000077500000000000000000000000001274620702700137445ustar00rootroot00000000000000compose-1.8.0/bin/docker-compose000077500000000000000000000000771274620702700166100ustar00rootroot00000000000000#!/usr/bin/env python from compose.cli.main import main main() compose-1.8.0/compose/000077500000000000000000000000001274620702700146415ustar00rootroot00000000000000compose-1.8.0/compose/__init__.py000066400000000000000000000001461274620702700167530ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals __version__ = '1.8.0' compose-1.8.0/compose/__main__.py000066400000000000000000000001721274620702700167330ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from compose.cli.main import main main() compose-1.8.0/compose/bundle.py000066400000000000000000000155741274620702700165000ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import json import logging import six from docker.utils import split_command from docker.utils.ports import split_port from .cli.errors import UserError from .config.serialize import denormalize_config from .network import get_network_defs_for_service from .service import format_environment from .service import NoSuchImageError from .service import parse_repository_tag log = logging.getLogger(__name__) SERVICE_KEYS = { 'working_dir': 'WorkingDir', 'user': 'User', 'labels': 'Labels', } IGNORED_KEYS = {'build'} SUPPORTED_KEYS = { 'image', 'ports', 'expose', 'networks', 'command', 'environment', 'entrypoint', } | set(SERVICE_KEYS) VERSION = '0.1' class NeedsPush(Exception): def __init__(self, image_name): self.image_name = image_name class NeedsPull(Exception): def __init__(self, image_name): self.image_name = image_name class MissingDigests(Exception): def __init__(self, needs_push, needs_pull): self.needs_push = needs_push self.needs_pull = needs_pull def serialize_bundle(config, image_digests): return json.dumps(to_bundle(config, image_digests), indent=2, sort_keys=True) def get_image_digests(project, allow_push=False): digests = {} needs_push = set() needs_pull = set() for service in project.services: try: digests[service.name] = get_image_digest( service, allow_push=allow_push, ) except NeedsPush as e: needs_push.add(e.image_name) except NeedsPull as e: needs_pull.add(e.image_name) if needs_push or needs_pull: raise MissingDigests(needs_push, needs_pull) return digests def get_image_digest(service, allow_push=False): if 'image' not in service.options: raise UserError( "Service '{s.name}' doesn't define an image tag. An image name is " "required to generate a proper image digest for the bundle. 
Specify " "an image repo and tag with the 'image' option.".format(s=service)) _, _, separator = parse_repository_tag(service.options['image']) # Compose file already uses a digest, no lookup required if separator == '@': return service.options['image'] try: image = service.image() except NoSuchImageError: action = 'build' if 'build' in service.options else 'pull' raise UserError( "Image not found for service '{service}'. " "You might need to run `docker-compose {action} {service}`." .format(service=service.name, action=action)) if image['RepoDigests']: # TODO: pick a digest based on the image tag if there are multiple # digests return image['RepoDigests'][0] if 'build' not in service.options: raise NeedsPull(service.image_name) if not allow_push: raise NeedsPush(service.image_name) return push_image(service) def push_image(service): try: digest = service.push() except: log.error( "Failed to push image for service '{s.name}'. Please use an " "image tag that can be pushed to a Docker " "registry.".format(s=service)) raise if not digest: raise ValueError("Failed to get digest for %s" % service.name) repo, _, _ = parse_repository_tag(service.options['image']) identifier = '{repo}@{digest}'.format(repo=repo, digest=digest) # only do this if RepoDigests isn't already populated image = service.image() if not image['RepoDigests']: # Pull by digest so that image['RepoDigests'] is populated for next time # and we don't have to pull/push again service.client.pull(identifier) log.info("Stored digest for {}".format(service.image_name)) return identifier def to_bundle(config, image_digests): if config.networks: log.warn("Unsupported top level key 'networks' - ignoring") if config.volumes: log.warn("Unsupported top level key 'volumes' - ignoring") config = denormalize_config(config) return { 'Version': VERSION, 'Services': { name: convert_service_to_bundle( name, service_dict, image_digests[name], ) for name, service_dict in config['services'].items() }, } def convert_service_to_bundle(name, service_dict, image_digest): container_config = {'Image': image_digest} for key, value in service_dict.items(): if key in IGNORED_KEYS: continue if key not in SUPPORTED_KEYS: log.warn("Unsupported key '{}' in services.{} - ignoring".format(key, name)) continue if key == 'environment': container_config['Env'] = format_environment({ envkey: envvalue for envkey, envvalue in value.items() if envvalue }) continue if key in SERVICE_KEYS: container_config[SERVICE_KEYS[key]] = value continue set_command_and_args( container_config, service_dict.get('entrypoint', []), service_dict.get('command', [])) container_config['Networks'] = make_service_networks(name, service_dict) ports = make_port_specs(service_dict) if ports: container_config['Ports'] = ports return container_config # See https://github.com/docker/swarmkit/blob//agent/exec/container/container.go#L95 def set_command_and_args(config, entrypoint, command): if isinstance(entrypoint, six.string_types): entrypoint = split_command(entrypoint) if isinstance(command, six.string_types): command = split_command(command) if entrypoint: config['Command'] = entrypoint + command return if command: config['Args'] = command def make_service_networks(name, service_dict): networks = [] for network_name, network_def in get_network_defs_for_service(service_dict).items(): for key in network_def.keys(): log.warn( "Unsupported key '{}' in services.{}.networks.{} - ignoring" .format(key, name, network_name)) networks.append(network_name) return networks def make_port_specs(service_dict): 
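    # Gather every internal port the service exposes -- the container side of
    # each 'ports' mapping plus any 'expose' entries -- and convert each one
    # to a bundle port spec, de-duplicating along the way.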
ports = [] internal_ports = [ internal_port for port_def in service_dict.get('ports', []) for internal_port in split_port(port_def)[0] ] internal_ports += service_dict.get('expose', []) for internal_port in internal_ports: spec = make_port_spec(internal_port) if spec not in ports: ports.append(spec) return ports def make_port_spec(value): components = six.text_type(value).partition('/') return { 'Protocol': components[2] or 'tcp', 'Port': int(components[0]), } compose-1.8.0/compose/cli/000077500000000000000000000000001274620702700154105ustar00rootroot00000000000000compose-1.8.0/compose/cli/__init__.py000066400000000000000000000000001274620702700175070ustar00rootroot00000000000000compose-1.8.0/compose/cli/colors.py000066400000000000000000000015341274620702700172660ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals NAMES = [ 'grey', 'red', 'green', 'yellow', 'blue', 'magenta', 'cyan', 'white' ] def get_pairs(): for i, name in enumerate(NAMES): yield(name, str(30 + i)) yield('intense_' + name, str(30 + i) + ';1') def ansi(code): return '\033[{0}m'.format(code) def ansi_color(code, s): return '{0}{1}{2}'.format(ansi(code), s, ansi(0)) def make_color_fn(code): return lambda s: ansi_color(code, s) for (name, code) in get_pairs(): globals()[name] = make_color_fn(code) def rainbow(): cs = ['cyan', 'yellow', 'green', 'magenta', 'red', 'blue', 'intense_cyan', 'intense_yellow', 'intense_green', 'intense_magenta', 'intense_red', 'intense_blue'] for c in cs: yield globals()[c] compose-1.8.0/compose/cli/command.py000066400000000000000000000077111274620702700174060ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging import os import re import ssl import six from . import verbose_proxy from .. import config from ..config.environment import Environment from ..const import API_VERSIONS from ..project import Project from .docker_client import docker_client from .docker_client import tls_config_from_options from .utils import get_version_info log = logging.getLogger(__name__) def project_from_options(project_dir, options): environment = Environment.from_env_file(project_dir) host = options.get('--host') if host is not None: host = host.lstrip('=') return get_project( project_dir, get_config_path_from_options(project_dir, options, environment), project_name=options.get('--project-name'), verbose=options.get('--verbose'), host=host, tls_config=tls_config_from_options(options), environment=environment ) def get_config_from_options(base_dir, options): environment = Environment.from_env_file(base_dir) config_path = get_config_path_from_options( base_dir, options, environment ) return config.load( config.find(base_dir, config_path, environment) ) def get_config_path_from_options(base_dir, options, environment): file_option = options.get('--file') if file_option: return file_option config_files = environment.get('COMPOSE_FILE') if config_files: return config_files.split(os.pathsep) return None def get_tls_version(environment): compose_tls_version = environment.get('COMPOSE_TLS_VERSION', None) if not compose_tls_version: return None tls_attr_name = "PROTOCOL_{}".format(compose_tls_version) if not hasattr(ssl, tls_attr_name): log.warn( 'The "{}" protocol is unavailable. You may need to update your ' 'version of Python or OpenSSL. Falling back to TLSv1 (default).' 
.format(compose_tls_version) ) return None return getattr(ssl, tls_attr_name) def get_client(environment, verbose=False, version=None, tls_config=None, host=None, tls_version=None): client = docker_client( version=version, tls_config=tls_config, host=host, environment=environment, tls_version=get_tls_version(environment) ) if verbose: version_info = six.iteritems(client.version()) log.info(get_version_info('full')) log.info("Docker base_url: %s", client.base_url) log.info("Docker version: %s", ", ".join("%s=%s" % item for item in version_info)) return verbose_proxy.VerboseProxy('docker', client) return client def get_project(project_dir, config_path=None, project_name=None, verbose=False, host=None, tls_config=None, environment=None): if not environment: environment = Environment.from_env_file(project_dir) config_details = config.find(project_dir, config_path, environment) project_name = get_project_name( config_details.working_dir, project_name, environment ) config_data = config.load(config_details) api_version = environment.get( 'COMPOSE_API_VERSION', API_VERSIONS[config_data.version]) client = get_client( verbose=verbose, version=api_version, tls_config=tls_config, host=host, environment=environment ) return Project.from_config(project_name, config_data, client) def get_project_name(working_dir, project_name=None, environment=None): def normalize_name(name): return re.sub(r'[^a-z0-9]', '', name.lower()) if not environment: environment = Environment.from_env_file(working_dir) project_name = project_name or environment.get('COMPOSE_PROJECT_NAME') if project_name: return normalize_name(project_name) project = os.path.basename(os.path.abspath(working_dir)) if project: return normalize_name(project) return 'default' compose-1.8.0/compose/cli/docker_client.py000066400000000000000000000041271274620702700205730ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging from docker import Client from docker.errors import TLSParameterError from docker.tls import TLSConfig from docker.utils import kwargs_from_env from ..const import HTTP_TIMEOUT from .errors import UserError from .utils import generate_user_agent log = logging.getLogger(__name__) def tls_config_from_options(options): tls = options.get('--tls', False) ca_cert = options.get('--tlscacert') cert = options.get('--tlscert') key = options.get('--tlskey') verify = options.get('--tlsverify') skip_hostname_check = options.get('--skip-hostname-check', False) advanced_opts = any([ca_cert, cert, key, verify]) if tls is True and not advanced_opts: return True elif advanced_opts: # --tls is a noop client_cert = None if cert or key: client_cert = (cert, key) return TLSConfig( client_cert=client_cert, verify=verify, ca_cert=ca_cert, assert_hostname=False if skip_hostname_check else None ) return None def docker_client(environment, version=None, tls_config=None, host=None, tls_version=None): """ Returns a docker-py client configured using environment variables according to the same logic as the official Docker client. 
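    Example (an illustrative sketch only -- assumes a reachable Docker
    daemon, and that the version string matches what your Engine supports):

        from compose.config.environment import Environment

        env = Environment.from_env_file('.')
        client = docker_client(env, version='1.23')
        client.ping()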
""" try: kwargs = kwargs_from_env(environment=environment, ssl_version=tls_version) except TLSParameterError: raise UserError( "TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY " "and DOCKER_CERT_PATH are set correctly.\n" "You might need to run `eval \"$(docker-machine env default)\"`") if host: kwargs['base_url'] = host if tls_config: kwargs['tls'] = tls_config if version: kwargs['version'] = version timeout = environment.get('COMPOSE_HTTP_TIMEOUT') if timeout: kwargs['timeout'] = int(timeout) else: kwargs['timeout'] = HTTP_TIMEOUT kwargs['user_agent'] = generate_user_agent() return Client(**kwargs) compose-1.8.0/compose/cli/docopt_command.py000066400000000000000000000032441274620702700207530ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from inspect import getdoc from docopt import docopt from docopt import DocoptExit def docopt_full_help(docstring, *args, **kwargs): try: return docopt(docstring, *args, **kwargs) except DocoptExit: raise SystemExit(docstring) class DocoptDispatcher(object): def __init__(self, command_class, options): self.command_class = command_class self.options = options def parse(self, argv): command_help = getdoc(self.command_class) options = docopt_full_help(command_help, argv, **self.options) command = options['COMMAND'] if command is None: raise SystemExit(command_help) handler = get_handler(self.command_class, command) docstring = getdoc(handler) if docstring is None: raise NoSuchCommand(command, self) command_options = docopt_full_help(docstring, options['ARGS'], options_first=True) return options, handler, command_options def get_handler(command_class, command): command = command.replace('-', '_') # we certainly want to have "exec" command, since that's what docker client has # but in python exec is a keyword if command == "exec": command = "exec_command" if not hasattr(command_class, command): raise NoSuchCommand(command, command_class) return getattr(command_class, command) class NoSuchCommand(Exception): def __init__(self, command, supercommand): super(NoSuchCommand, self).__init__("No such command: %s" % command) self.command = command self.supercommand = supercommand compose-1.8.0/compose/cli/errors.py000066400000000000000000000075131274620702700173040ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import contextlib import logging import socket from textwrap import dedent from docker.errors import APIError from requests.exceptions import ConnectionError as RequestsConnectionError from requests.exceptions import ReadTimeout from requests.exceptions import SSLError from requests.packages.urllib3.exceptions import ReadTimeoutError from ..const import API_VERSION_TO_ENGINE_VERSION from .utils import call_silently from .utils import is_docker_for_mac_installed from .utils import is_mac from .utils import is_ubuntu log = logging.getLogger(__name__) class UserError(Exception): def __init__(self, msg): self.msg = dedent(msg).strip() def __unicode__(self): return self.msg __str__ = __unicode__ class ConnectionError(Exception): pass @contextlib.contextmanager def handle_connection_errors(client): try: yield except SSLError as e: log.error('SSL error: %s' % e) raise ConnectionError() except RequestsConnectionError as e: if e.args and isinstance(e.args[0], ReadTimeoutError): log_timeout_error(client.timeout) raise ConnectionError() exit_with_error(get_conn_error_message(client.base_url)) except APIError as e: log_api_error(e, 
client.api_version) raise ConnectionError() except (ReadTimeout, socket.timeout) as e: log_timeout_error(client.timeout) raise ConnectionError() def log_timeout_error(timeout): log.error( "An HTTP request took too long to complete. Retry with --verbose to " "obtain debug information.\n" "If you encounter this issue regularly because of slow network " "conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher " "value (current value: %s)." % timeout) def log_api_error(e, client_version): if b'client is newer than server' not in e.explanation: log.error(e.explanation) return version = API_VERSION_TO_ENGINE_VERSION.get(client_version) if not version: # They've set a custom API version log.error(e.explanation) return log.error( "The Docker Engine version is less than the minimum required by " "Compose. Your current project requires a Docker Engine of " "version {version} or greater.".format(version=version)) def exit_with_error(msg): log.error(dedent(msg).strip()) raise ConnectionError() def get_conn_error_message(url): if call_silently(['which', 'docker']) != 0: if is_mac(): return docker_not_found_mac if is_ubuntu(): return docker_not_found_ubuntu return docker_not_found_generic if is_docker_for_mac_installed(): return conn_error_docker_for_mac if call_silently(['which', 'docker-machine']) == 0: return conn_error_docker_machine return conn_error_generic.format(url=url) docker_not_found_mac = """ Couldn't connect to Docker daemon. You might need to install Docker: https://docs.docker.com/engine/installation/mac/ """ docker_not_found_ubuntu = """ Couldn't connect to Docker daemon. You might need to install Docker: https://docs.docker.com/engine/installation/ubuntulinux/ """ docker_not_found_generic = """ Couldn't connect to Docker daemon. You might need to install Docker: https://docs.docker.com/engine/installation/ """ conn_error_docker_machine = """ Couldn't connect to Docker daemon - you might need to run `docker-machine start default`. """ conn_error_docker_for_mac = """ Couldn't connect to Docker daemon. You might need to start Docker for Mac. """ conn_error_generic = """ Couldn't connect to Docker daemon at {url} - is it running? If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable. """ compose-1.8.0/compose/cli/formatter.py000066400000000000000000000025161274620702700177710ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging import os import texttable from compose.cli import colors def get_tty_width(): tty_size = os.popen('stty size', 'r').read().split() if len(tty_size) != 2: return 0 _, width = tty_size return int(width) class Formatter(object): """Format tabular data for printing.""" def table(self, headers, rows): table = texttable.Texttable(max_width=get_tty_width()) table.set_cols_dtype(['t' for h in headers]) table.add_rows([headers] + rows) table.set_deco(table.HEADER) table.set_chars(['-', '|', '+', '-']) return table.draw() class ConsoleWarningFormatter(logging.Formatter): """A logging.Formatter which prints WARNING and ERROR messages with a prefix of the log level colored appropriately for the log level.
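    Example (illustrative -- attaches the formatter to a stderr handler):

        handler = logging.StreamHandler()
        handler.setFormatter(ConsoleWarningFormatter('%(message)s'))
        logging.getLogger('compose').addHandler(handler)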
""" def get_level_message(self, record): separator = ': ' if record.levelno == logging.WARNING: return colors.yellow(record.levelname) + separator if record.levelno == logging.ERROR: return colors.red(record.levelname) + separator return '' def format(self, record): message = super(ConsoleWarningFormatter, self).format(record) return self.get_level_message(record) + message compose-1.8.0/compose/cli/log_printer.py000066400000000000000000000147151274620702700203160ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import sys from collections import namedtuple from itertools import cycle from threading import Thread from six.moves import _thread as thread from six.moves.queue import Empty from six.moves.queue import Queue from . import colors from compose import utils from compose.cli.signals import ShutdownException from compose.utils import split_buffer class LogPresenter(object): def __init__(self, prefix_width, color_func): self.prefix_width = prefix_width self.color_func = color_func def present(self, container, line): prefix = container.name_without_project.ljust(self.prefix_width) return '{prefix} {line}'.format( prefix=self.color_func(prefix + ' |'), line=line) def build_log_presenters(service_names, monochrome): """Return an iterable of functions. Each function can be used to format the logs output of a container. """ prefix_width = max_name_width(service_names) def no_color(text): return text for color_func in cycle([no_color] if monochrome else colors.rainbow()): yield LogPresenter(prefix_width, color_func) def max_name_width(service_names, max_index_width=3): """Calculate the maximum width of container names so we can make the log prefixes line up like so: db_1 | Listening web_1 | Listening """ return max(len(name) for name in service_names) + max_index_width class LogPrinter(object): """Print logs from many containers to a single output stream.""" def __init__(self, containers, presenters, event_stream, output=sys.stdout, cascade_stop=False, log_args=None): self.containers = containers self.presenters = presenters self.event_stream = event_stream self.output = utils.get_output_stream(output) self.cascade_stop = cascade_stop self.log_args = log_args or {} def run(self): if not self.containers: return queue = Queue() thread_args = queue, self.log_args thread_map = build_thread_map(self.containers, self.presenters, thread_args) start_producer_thread(( thread_map, self.event_stream, self.presenters, thread_args)) for line in consume_queue(queue, self.cascade_stop): remove_stopped_threads(thread_map) if not line: if not thread_map: # There are no running containers left to tail, so exit return # We got an empty line because of a timeout, but there are still # active containers to tail, so continue continue self.output.write(line) self.output.flush() def remove_stopped_threads(thread_map): for container_id, tailer_thread in list(thread_map.items()): if not tailer_thread.is_alive(): thread_map.pop(container_id, None) def build_thread(container, presenter, queue, log_args): tailer = Thread( target=tail_container_logs, args=(container, presenter, queue, log_args)) tailer.daemon = True tailer.start() return tailer def build_thread_map(initial_containers, presenters, thread_args): return { container.id: build_thread(container, next(presenters), *thread_args) for container in initial_containers } class QueueItem(namedtuple('_QueueItem', 'item is_stop exc')): @classmethod def new(cls, item): return cls(item, None, None) @classmethod 
def exception(cls, exc): return cls(None, None, exc) @classmethod def stop(cls): return cls(None, True, None) def tail_container_logs(container, presenter, queue, log_args): generator = get_log_generator(container) try: for item in generator(container, log_args): queue.put(QueueItem.new(presenter.present(container, item))) except Exception as e: queue.put(QueueItem.exception(e)) return if log_args.get('follow'): queue.put(QueueItem.new(presenter.color_func(wait_on_exit(container)))) queue.put(QueueItem.stop()) def get_log_generator(container): if container.has_api_logs: return build_log_generator return build_no_log_generator def build_no_log_generator(container, log_args): """Return a generator that prints a warning about logs and waits for container to exit. """ yield "WARNING: no logs are available with the '{}' log driver\n".format( container.log_driver) def build_log_generator(container, log_args): # if the container doesn't have a log_stream we need to attach to container # before log printer starts running if container.log_stream is None: stream = container.logs(stdout=True, stderr=True, stream=True, **log_args) else: stream = container.log_stream return split_buffer(stream) def wait_on_exit(container): exit_code = container.wait() return "%s exited with code %s\n" % (container.name, exit_code) def start_producer_thread(thread_args): producer = Thread(target=watch_events, args=thread_args) producer.daemon = True producer.start() def watch_events(thread_map, event_stream, presenters, thread_args): for event in event_stream: if event['action'] == 'stop': thread_map.pop(event['id'], None) if event['action'] != 'start': continue if event['id'] in thread_map: if thread_map[event['id']].is_alive(): continue # Container was stopped and started, we need a new thread thread_map.pop(event['id'], None) thread_map[event['id']] = build_thread( event['container'], next(presenters), *thread_args) def consume_queue(queue, cascade_stop): """Consume the queue by reading lines off of it and yielding them.""" while True: try: item = queue.get(timeout=0.1) except Empty: yield None continue # See https://github.com/docker/compose/issues/189 except thread.error: raise ShutdownException() if item.exc: raise item.exc if item.is_stop: if cascade_stop: raise StopIteration else: continue yield item.item compose-1.8.0/compose/cli/main.py000066400000000000000000001104311274620702700167060ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import print_function from __future__ import unicode_literals import contextlib import functools import json import logging import re import sys from inspect import getdoc from operator import attrgetter from . import errors from . import signals from .. 
import __version__ from ..bundle import get_image_digests from ..bundle import MissingDigests from ..bundle import serialize_bundle from ..config import ConfigurationError from ..config import parse_environment from ..config.environment import Environment from ..config.serialize import serialize_config from ..const import DEFAULT_TIMEOUT from ..const import IS_WINDOWS_PLATFORM from ..progress_stream import StreamOutputError from ..project import NoSuchService from ..project import OneOffFilter from ..project import ProjectError from ..service import BuildAction from ..service import BuildError from ..service import ConvergenceStrategy from ..service import ImageType from ..service import NeedsBuildError from ..service import OperationFailedError from .command import get_config_from_options from .command import project_from_options from .docopt_command import DocoptDispatcher from .docopt_command import get_handler from .docopt_command import NoSuchCommand from .errors import UserError from .formatter import ConsoleWarningFormatter from .formatter import Formatter from .log_printer import build_log_presenters from .log_printer import LogPrinter from .utils import get_version_info from .utils import yesno if not IS_WINDOWS_PLATFORM: from dockerpty.pty import PseudoTerminal, RunOperation, ExecOperation log = logging.getLogger(__name__) console_handler = logging.StreamHandler(sys.stderr) def main(): command = dispatch() try: command() except (KeyboardInterrupt, signals.ShutdownException): log.error("Aborting.") sys.exit(1) except (UserError, NoSuchService, ConfigurationError, ProjectError, OperationFailedError) as e: log.error(e.msg) sys.exit(1) except BuildError as e: log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason)) sys.exit(1) except StreamOutputError as e: log.error(e) sys.exit(1) except NeedsBuildError as e: log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name) sys.exit(1) except errors.ConnectionError: sys.exit(1) def dispatch(): setup_logging() dispatcher = DocoptDispatcher( TopLevelCommand, {'options_first': True, 'version': get_version_info('compose')}) try: options, handler, command_options = dispatcher.parse(sys.argv[1:]) except NoSuchCommand as e: commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand))) log.error("No such command: %s\n\n%s", e.command, commands) sys.exit(1) setup_console_handler(console_handler, options.get('--verbose')) return functools.partial(perform_command, options, handler, command_options) def perform_command(options, handler, command_options): if options['COMMAND'] in ('help', 'version'): # Skip looking up the compose file. 
handler(command_options) return if options['COMMAND'] in ('config', 'bundle'): command = TopLevelCommand(None) handler(command, options, command_options) return project = project_from_options('.', options) command = TopLevelCommand(project) with errors.handle_connection_errors(project.client): handler(command, command_options) def setup_logging(): root_logger = logging.getLogger() root_logger.addHandler(console_handler) root_logger.setLevel(logging.DEBUG) # Disable requests logging logging.getLogger("requests").propagate = False def setup_console_handler(handler, verbose): if handler.stream.isatty(): format_class = ConsoleWarningFormatter else: format_class = logging.Formatter if verbose: handler.setFormatter(format_class('%(name)s.%(funcName)s: %(message)s')) handler.setLevel(logging.DEBUG) else: handler.setFormatter(format_class()) handler.setLevel(logging.INFO) # stolen from docopt master def parse_doc_section(name, source): pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)', re.IGNORECASE | re.MULTILINE) return [s.strip() for s in pattern.findall(source)] class TopLevelCommand(object): """Define and run multi-container applications with Docker. Usage: docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...] docker-compose -h|--help Options: -f, --file FILE Specify an alternate compose file (default: docker-compose.yml) -p, --project-name NAME Specify an alternate project name (default: directory name) --verbose Show more output -v, --version Print version and exit -H, --host HOST Daemon socket to connect to --tls Use TLS; implied by --tlsverify --tlscacert CA_PATH Trust certs signed only by this CA --tlscert CLIENT_CERT_PATH Path to TLS certificate file --tlskey TLS_KEY_PATH Path to TLS key file --tlsverify Use TLS and verify the remote --skip-hostname-check Don't check the daemon's hostname against the name specified in the client certificate (for example if your docker host is an IP address) Commands: build Build or rebuild services bundle Generate a Docker bundle from the Compose file config Validate and view the compose file create Create services down Stop and remove containers, networks, images, and volumes events Receive real time events from containers exec Execute a command in a running container help Get help on a command kill Kill containers logs View output from containers pause Pause services port Print the public port for a port binding ps List containers pull Pulls service images push Push service images restart Restart services rm Remove stopped containers run Run a one-off command scale Set number of containers for a service start Start services stop Stop services unpause Unpause services up Create and start containers version Show the Docker-Compose version information """ def __init__(self, project, project_dir='.'): self.project = project self.project_dir = project_dir def build(self, options): """ Build or rebuild services. Services are built once and then tagged as `project_service`, e.g. `composetest_db`. If you change a service's `Dockerfile` or the contents of its build directory, you can run `docker-compose build` to rebuild it. Usage: build [options] [SERVICE...] Options: --force-rm Always remove intermediate containers. --no-cache Do not use cache when building the image. --pull Always attempt to pull a newer version of the image.
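        For example, to force a clean rebuild of a single service (the
        service name is illustrative):

            $ docker-compose build --no-cache --pull web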
""" self.project.build( service_names=options['SERVICE'], no_cache=bool(options.get('--no-cache', False)), pull=bool(options.get('--pull', False)), force_rm=bool(options.get('--force-rm', False))) def bundle(self, config_options, options): """ Generate a Distributed Application Bundle (DAB) from the Compose file. Images must have digests stored, which requires interaction with a Docker registry. If digests aren't stored for all images, you can fetch them with `docker-compose pull` or `docker-compose push`. To push images automatically when bundling, pass `--push-images`. Only services with a `build` option specified will have their images pushed. Usage: bundle [options] Options: --push-images Automatically push images for any services which have a `build` option specified. -o, --output PATH Path to write the bundle file to. Defaults to ".dab". """ self.project = project_from_options('.', config_options) compose_config = get_config_from_options(self.project_dir, config_options) output = options["--output"] if not output: output = "{}.dab".format(self.project.name) with errors.handle_connection_errors(self.project.client): try: image_digests = get_image_digests( self.project, allow_push=options['--push-images'], ) except MissingDigests as e: def list_images(images): return "\n".join(" {}".format(name) for name in sorted(images)) paras = ["Some images are missing digests."] if e.needs_push: command_hint = ( "Use `docker-compose push {}` to push them. " "You can do this automatically with `docker-compose bundle --push-images`." .format(" ".join(sorted(e.needs_push))) ) paras += [ "The following images can be pushed:", list_images(e.needs_push), command_hint, ] if e.needs_pull: command_hint = ( "Use `docker-compose pull {}` to pull them. " .format(" ".join(sorted(e.needs_pull))) ) paras += [ "The following images need to be pulled:", list_images(e.needs_pull), command_hint, ] raise UserError("\n\n".join(paras)) with open(output, 'w') as f: f.write(serialize_bundle(compose_config, image_digests)) log.info("Wrote bundle to {}".format(output)) def config(self, config_options, options): """ Validate and view the compose file. Usage: config [options] Options: -q, --quiet Only validate the configuration, don't print anything. --services Print the service names, one per line. """ compose_config = get_config_from_options(self.project_dir, config_options) if options['--quiet']: return if options['--services']: print('\n'.join(service['name'] for service in compose_config.services)) return print(serialize_config(compose_config)) def create(self, options): """ Creates containers for a service. Usage: create [options] [SERVICE...] Options: --force-recreate Recreate containers even if their configuration and image haven't changed. Incompatible with --no-recreate. --no-recreate If containers already exist, don't recreate them. Incompatible with --force-recreate. --no-build Don't build an image, even if it's missing. --build Build images before creating containers. """ service_names = options['SERVICE'] self.project.create( service_names=service_names, strategy=convergence_strategy_from_opts(options), do_build=build_action_from_opts(options), ) def down(self, options): """ Stops containers and removes containers, networks, volumes, and images created by `up`. 
By default, the only things removed are: - Containers for services defined in the Compose file - Networks defined in the `networks` section of the Compose file - The default network, if one is used Networks and volumes defined as `external` are never removed. Usage: down [options] Options: --rmi type Remove images. Type must be one of: 'all': Remove all images used by any service. 'local': Remove only images that don't have a custom tag set by the `image` field. -v, --volumes Remove named volumes declared in the `volumes` section of the Compose file and anonymous volumes attached to containers. --remove-orphans Remove containers for services not defined in the Compose file """ image_type = image_type_from_opt('--rmi', options['--rmi']) self.project.down(image_type, options['--volumes'], options['--remove-orphans']) def events(self, options): """ Receive real time events from containers. Usage: events [options] [SERVICE...] Options: --json Output events as a stream of json objects """ def format_event(event): attributes = ["%s=%s" % item for item in event['attributes'].items()] return ("{time} {type} {action} {id} ({attrs})").format( attrs=", ".join(sorted(attributes)), **event) def json_format_event(event): event['time'] = event['time'].isoformat() event.pop('container') return json.dumps(event) for event in self.project.events(): formatter = json_format_event if options['--json'] else format_event print(formatter(event)) sys.stdout.flush() def exec_command(self, options): """ Execute a command in a running container Usage: exec [options] SERVICE COMMAND [ARGS...] Options: -d Detached mode: Run command in the background. --privileged Give extended privileges to the process. --user USER Run the command as this user. -T Disable pseudo-tty allocation. By default `docker-compose exec` allocates a TTY. --index=index index of the container if there are multiple instances of a service [default: 1] """ index = int(options.get('--index')) service = self.project.get_service(options['SERVICE']) detach = options['-d'] if IS_WINDOWS_PLATFORM and not detach: raise UserError( "Interactive mode is not yet supported on Windows.\n" "Please pass the -d flag when using `docker-compose exec`." ) try: container = service.get_container(number=index) except ValueError as e: raise UserError(str(e)) command = [options['COMMAND']] + options['ARGS'] tty = not options["-T"] create_exec_options = { "privileged": options["--privileged"], "user": options["--user"], "tty": tty, "stdin": tty, } exec_id = container.create_exec(command, **create_exec_options) if detach: container.start_exec(exec_id, tty=tty) return signals.set_signal_handler_to_shutdown() try: operation = ExecOperation( self.project.client, exec_id, interactive=tty, ) pty = PseudoTerminal(self.project.client, operation) pty.start() except signals.ShutdownException: log.info("received shutdown exception: closing") exit_code = self.project.client.exec_inspect(exec_id).get("ExitCode") sys.exit(exit_code) @classmethod def help(cls, options): """ Get help on a command. Usage: help [COMMAND] """ if options['COMMAND']: subject = get_handler(cls, options['COMMAND']) else: subject = cls print(getdoc(subject)) def kill(self, options): """ Force stop service containers. Usage: kill [options] [SERVICE...] Options: -s SIGNAL SIGNAL to send to the container. Default signal is SIGKILL. """ signal = options.get('-s', 'SIGKILL') self.project.kill(service_names=options['SERVICE'], signal=signal) def logs(self, options): """ View output from containers. 
Usage: logs [options] [SERVICE...] Options: --no-color Produce monochrome output. -f, --follow Follow log output. -t, --timestamps Show timestamps. --tail="all" Number of lines to show from the end of the logs for each container. """ containers = self.project.containers(service_names=options['SERVICE'], stopped=True) tail = options['--tail'] if tail is not None: if tail.isdigit(): tail = int(tail) elif tail != 'all': raise UserError("tail flag must be all or a number") log_args = { 'follow': options['--follow'], 'tail': tail, 'timestamps': options['--timestamps'] } print("Attaching to", list_containers(containers)) log_printer_from_project( self.project, containers, options['--no-color'], log_args, event_stream=self.project.events(service_names=options['SERVICE'])).run() def pause(self, options): """ Pause services. Usage: pause [SERVICE...] """ containers = self.project.pause(service_names=options['SERVICE']) exit_if(not containers, 'No containers to pause', 1) def port(self, options): """ Print the public port for a port binding. Usage: port [options] SERVICE PRIVATE_PORT Options: --protocol=proto tcp or udp [default: tcp] --index=index index of the container if there are multiple instances of a service [default: 1] """ index = int(options.get('--index')) service = self.project.get_service(options['SERVICE']) try: container = service.get_container(number=index) except ValueError as e: raise UserError(str(e)) print(container.get_local_port( options['PRIVATE_PORT'], protocol=options.get('--protocol') or 'tcp') or '') def ps(self, options): """ List containers. Usage: ps [options] [SERVICE...] Options: -q Only display IDs """ containers = sorted( self.project.containers(service_names=options['SERVICE'], stopped=True) + self.project.containers(service_names=options['SERVICE'], one_off=OneOffFilter.only), key=attrgetter('name')) if options['-q']: for container in containers: print(container.id) else: headers = [ 'Name', 'Command', 'State', 'Ports', ] rows = [] for container in containers: command = container.human_readable_command if len(command) > 30: command = '%s ...' % command[:26] rows.append([ container.name, command, container.human_readable_state, container.human_readable_ports, ]) print(Formatter().table(headers, rows)) def pull(self, options): """ Pulls images for services. Usage: pull [options] [SERVICE...] Options: --ignore-pull-failures Pull what it can and ignores images with pull failures. """ self.project.pull( service_names=options['SERVICE'], ignore_pull_failures=options.get('--ignore-pull-failures') ) def push(self, options): """ Pushes images for services. Usage: push [options] [SERVICE...] Options: --ignore-push-failures Push what it can and ignores images with push failures. """ self.project.push( service_names=options['SERVICE'], ignore_push_failures=options.get('--ignore-push-failures') ) def rm(self, options): """ Removes stopped service containers. By default, anonymous volumes attached to containers will not be removed. You can override this with `-v`. To list all volumes, use `docker volume ls`. Any data which is not in a volume will be lost. Usage: rm [options] [SERVICE...] Options: -f, --force Don't ask to confirm removal -v Remove any anonymous volumes attached to containers -a, --all Obsolete. Also remove one-off containers created by docker-compose run """ if options.get('--all'): log.warn( '--all flag is obsolete. 
This is now the default behavior ' 'of `docker-compose rm`' ) one_off = OneOffFilter.include all_containers = self.project.containers( service_names=options['SERVICE'], stopped=True, one_off=one_off ) stopped_containers = [c for c in all_containers if not c.is_running] if len(stopped_containers) > 0: print("Going to remove", list_containers(stopped_containers)) if options.get('--force') \ or yesno("Are you sure? [yN] ", default=False): self.project.remove_stopped( service_names=options['SERVICE'], v=options.get('-v', False), one_off=one_off ) else: print("No stopped containers") def run(self, options): """ Run a one-off command on a service. For example: $ docker-compose run web python manage.py shell By default, linked services will be started, unless they are already running. If you do not want to start linked services, use `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`. Usage: run [options] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...] Options: -d Detached mode: Run container in the background, print new container name. --name NAME Assign a name to the container --entrypoint CMD Override the entrypoint of the image. -e KEY=VAL Set an environment variable (can be used multiple times) -u, --user="" Run as specified username or uid --no-deps Don't start linked services. --rm Remove container after run. Ignored in detached mode. -p, --publish=[] Publish a container's port(s) to the host --service-ports Run command with the service's ports enabled and mapped to the host. -T Disable pseudo-tty allocation. By default `docker-compose run` allocates a TTY. -w, --workdir="" Working directory inside the container """ service = self.project.get_service(options['SERVICE']) detach = options['-d'] if IS_WINDOWS_PLATFORM and not detach: raise UserError( "Interactive mode is not yet supported on Windows.\n" "Please pass the -d flag when using `docker-compose run`." ) if options['--publish'] and options['--service-ports']: raise UserError( 'Service port mapping and manual port mapping ' 'cannot be used together' ) if options['COMMAND'] is not None: command = [options['COMMAND']] + options['ARGS'] elif options['--entrypoint'] is not None: command = [] else: command = service.options.get('command') container_options = build_container_options(options, detach, command) run_one_off_container(container_options, self.project, service, options) def scale(self, options): """ Set number of containers to run for a service. Numbers are specified in the form `service=num` as arguments. For example: $ docker-compose scale web=2 worker=3 Usage: scale [options] [SERVICE=NUM...] Options: -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. (default: 10) """ timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) for s in options['SERVICE=NUM']: if '=' not in s: raise UserError('Arguments to scale should be in the form service=num') service_name, num = s.split('=', 1) try: num = int(num) except ValueError: raise UserError('Number of containers for service "%s" is not a ' 'number' % service_name) self.project.get_service(service_name).scale(num, timeout=timeout) def start(self, options): """ Start existing containers. Usage: start [SERVICE...] """ containers = self.project.start(service_names=options['SERVICE']) exit_if(not containers, 'No containers to start', 1) def stop(self, options): """ Stop running containers without removing them. They can be started again with `docker-compose start`. Usage: stop [options] [SERVICE...]
Options: -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. (default: 10) """ timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) self.project.stop(service_names=options['SERVICE'], timeout=timeout) def restart(self, options): """ Restart running containers. Usage: restart [options] [SERVICE...] Options: -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. (default: 10) """ timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) containers = self.project.restart(service_names=options['SERVICE'], timeout=timeout) exit_if(not containers, 'No containers to restart', 1) def unpause(self, options): """ Unpause services. Usage: unpause [SERVICE...] """ containers = self.project.unpause(service_names=options['SERVICE']) exit_if(not containers, 'No containers to unpause', 1) def up(self, options): """ Builds, (re)creates, starts, and attaches to containers for a service. Unless they are already running, this command also starts any linked services. The `docker-compose up` command aggregates the output of each container. When the command exits, all containers are stopped. Running `docker-compose up -d` starts the containers in the background and leaves them running. If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, `docker-compose up` picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the `--no-recreate` flag. If you want to force Compose to stop and recreate all containers, use the `--force-recreate` flag. Usage: up [options] [SERVICE...] Options: -d Detached mode: Run containers in the background, print new container names. Incompatible with --abort-on-container-exit. --no-color Produce monochrome output. --no-deps Don't start linked services. --force-recreate Recreate containers even if their configuration and image haven't changed. Incompatible with --no-recreate. --no-recreate If containers already exist, don't recreate them. Incompatible with --force-recreate. --no-build Don't build an image, even if it's missing. --build Build images before starting containers. --abort-on-container-exit Stops all containers if any container was stopped. Incompatible with -d. -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown when attached or when containers are already running. 
(default: 10) --remove-orphans Remove containers for services not defined in the Compose file """ start_deps = not options['--no-deps'] cascade_stop = options['--abort-on-container-exit'] service_names = options['SERVICE'] timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) remove_orphans = options['--remove-orphans'] detached = options.get('-d') if detached and cascade_stop: raise UserError("--abort-on-container-exit and -d cannot be combined.") with up_shutdown_context(self.project, service_names, timeout, detached): to_attach = self.project.up( service_names=service_names, start_deps=start_deps, strategy=convergence_strategy_from_opts(options), do_build=build_action_from_opts(options), timeout=timeout, detached=detached, remove_orphans=remove_orphans) if detached: return log_printer = log_printer_from_project( self.project, filter_containers_to_service_names(to_attach, service_names), options['--no-color'], {'follow': True}, cascade_stop, event_stream=self.project.events(service_names=service_names)) print("Attaching to", list_containers(log_printer.containers)) log_printer.run() if cascade_stop: print("Aborting on container exit...") self.project.stop(service_names=service_names, timeout=timeout) @classmethod def version(cls, options): """ Show version information Usage: version [--short] Options: --short Shows only Compose's version number. """ if options['--short']: print(__version__) else: print(get_version_info('full')) def convergence_strategy_from_opts(options): no_recreate = options['--no-recreate'] force_recreate = options['--force-recreate'] if force_recreate and no_recreate: raise UserError("--force-recreate and --no-recreate cannot be combined.") if force_recreate: return ConvergenceStrategy.always if no_recreate: return ConvergenceStrategy.never return ConvergenceStrategy.changed def image_type_from_opt(flag, value): if not value: return ImageType.none try: return ImageType[value] except KeyError: raise UserError("%s flag must be one of: all, local" % flag) def build_action_from_opts(options): if options['--build'] and options['--no-build']: raise UserError("--build and --no-build cannot be combined.") if options['--build']: return BuildAction.force if options['--no-build']: return BuildAction.skip return BuildAction.none def build_container_options(options, detach, command): container_options = { 'command': command, 'tty': not (detach or options['-T'] or not sys.stdin.isatty()), 'stdin_open': not detach, 'detach': detach, } if options['-e']: container_options['environment'] = Environment.from_command_line( parse_environment(options['-e']) ) if options['--entrypoint']: container_options['entrypoint'] = options.get('--entrypoint') if options['--rm']: container_options['restart'] = None if options['--user']: container_options['user'] = options.get('--user') if not options['--service-ports']: container_options['ports'] = [] if options['--publish']: container_options['ports'] = options.get('--publish') if options['--name']: container_options['name'] = options['--name'] if options['--workdir']: container_options['working_dir'] = options['--workdir'] return container_options def run_one_off_container(container_options, project, service, options): if not options['--no-deps']: deps = service.get_dependency_names() if deps: project.up( service_names=deps, start_deps=True, strategy=ConvergenceStrategy.never) project.initialize() container = service.create_container( quiet=True, one_off=True, **container_options) if options['-d']: service.start_container(container)
print(container.name) return def remove_container(force=False): if options['--rm']: project.client.remove_container(container.id, force=True) signals.set_signal_handler_to_shutdown() try: try: operation = RunOperation( project.client, container.id, interactive=not options['-T'], logs=False, ) pty = PseudoTerminal(project.client, operation) sockets = pty.sockets() service.start_container(container) pty.start(sockets) exit_code = container.wait() except signals.ShutdownException: project.client.stop(container.id) exit_code = 1 except signals.ShutdownException: project.client.kill(container.id) remove_container(force=True) sys.exit(2) remove_container() sys.exit(exit_code) def log_printer_from_project( project, containers, monochrome, log_args, cascade_stop=False, event_stream=None, ): return LogPrinter( containers, build_log_presenters(project.service_names, monochrome), event_stream or project.events(), cascade_stop=cascade_stop, log_args=log_args) def filter_containers_to_service_names(containers, service_names): if not service_names: return containers return [ container for container in containers if container.service in service_names ] @contextlib.contextmanager def up_shutdown_context(project, service_names, timeout, detached): if detached: yield return signals.set_signal_handler_to_shutdown() try: try: yield except signals.ShutdownException: print("Gracefully stopping... (press Ctrl+C again to force)") project.stop(service_names=service_names, timeout=timeout) except signals.ShutdownException: project.kill(service_names=service_names) sys.exit(2) def list_containers(containers): return ", ".join(c.name for c in containers) def exit_if(condition, message, exit_code): if condition: log.error(message) raise SystemExit(exit_code) compose-1.8.0/compose/cli/signals.py000066400000000000000000000006131274620702700174220ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import signal class ShutdownException(Exception): pass def shutdown(signal, frame): raise ShutdownException() def set_signal_handler(handler): signal.signal(signal.SIGINT, handler) signal.signal(signal.SIGTERM, handler) def set_signal_handler_to_shutdown(): set_signal_handler(shutdown) compose-1.8.0/compose/cli/utils.py000066400000000000000000000061701274620702700171260ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import division from __future__ import unicode_literals import os import platform import ssl import subprocess import sys import docker import compose # WindowsError is not defined on non-win32 platforms. Avoid runtime errors by # defining it as OSError (its parent class) if missing. try: WindowsError except NameError: WindowsError = OSError def yesno(prompt, default=None): """ Prompt the user for a yes or no. Can optionally specify a default value, which will only be used if they enter a blank line. Unrecognised input (anything other than "y", "n", "yes", "no" or "") will return None. 
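    Illustrative example (assumes an interactive terminal):

        yesno("Continue? [yN] ", default=False)
        # "y"/"yes" -> True; "n"/"no" -> False; blank line -> False (the
        # default); anything else -> None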
""" answer = input(prompt).strip().lower() if answer == "y" or answer == "yes": return True elif answer == "n" or answer == "no": return False elif answer == "": return default else: return None def input(prompt): """ Version of input (raw_input in Python 2) which forces a flush of sys.stdout to avoid problems where the prompt fails to appear due to line buffering """ sys.stdout.write(prompt) sys.stdout.flush() return sys.stdin.readline().rstrip('\n') def call_silently(*args, **kwargs): """ Like subprocess.call(), but redirects stdout and stderr to /dev/null. """ with open(os.devnull, 'w') as shutup: try: return subprocess.call(*args, stdout=shutup, stderr=shutup, **kwargs) except WindowsError: # On Windows, subprocess.call() can still raise exceptions. Normalize # to POSIXy behaviour by returning a nonzero exit code. return 1 def is_mac(): return platform.system() == 'Darwin' def is_ubuntu(): return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu' def get_version_info(scope): versioninfo = 'docker-compose version {}, build {}'.format( compose.__version__, get_build_version()) if scope == 'compose': return versioninfo if scope == 'full': return ( "{}\n" "docker-py version: {}\n" "{} version: {}\n" "OpenSSL version: {}" ).format( versioninfo, docker.version, platform.python_implementation(), platform.python_version(), ssl.OPENSSL_VERSION) raise ValueError("{} is not a valid version scope".format(scope)) def get_build_version(): filename = os.path.join(os.path.dirname(compose.__file__), 'GITSHA') if not os.path.exists(filename): return 'unknown' with open(filename) as fh: return fh.read().strip() def is_docker_for_mac_installed(): return is_mac() and os.path.isdir('/Applications/Docker.app') def generate_user_agent(): parts = [ "docker-compose/{}".format(compose.__version__), "docker-py/{}".format(docker.__version__), ] try: p_system = platform.system() p_release = platform.release() except IOError: pass else: parts.append("{}/{}".format(p_system, p_release)) return " ".join(parts) compose-1.8.0/compose/cli/verbose_proxy.py000066400000000000000000000033521274620702700206730ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import functools import logging import pprint from itertools import chain import six def format_call(args, kwargs): args = (repr(a) for a in args) kwargs = ("{0!s}={1!r}".format(*item) for item in six.iteritems(kwargs)) return "({0})".format(", ".join(chain(args, kwargs))) def format_return(result, max_lines): if isinstance(result, (list, tuple, set)): return "({0} with {1} items)".format(type(result).__name__, len(result)) if result: lines = pprint.pformat(result).split('\n') extra = '\n...' if len(lines) > max_lines else '' return '\n'.join(lines[:max_lines]) + extra return result class VerboseProxy(object): """Proxy all function calls to another class and log method name, arguments and return values for each call. 
""" def __init__(self, obj_name, obj, log_name=None, max_lines=10): self.obj_name = obj_name self.obj = obj self.max_lines = max_lines self.log = logging.getLogger(log_name or __name__) def __getattr__(self, name): attr = getattr(self.obj, name) if not six.callable(attr): return attr return functools.partial(self.proxy_callable, name) def proxy_callable(self, call_name, *args, **kwargs): self.log.info("%s %s <- %s", self.obj_name, call_name, format_call(args, kwargs)) result = getattr(self.obj, call_name)(*args, **kwargs) self.log.info("%s %s -> %s", self.obj_name, call_name, format_return(result, self.max_lines)) return result compose-1.8.0/compose/config/000077500000000000000000000000001274620702700161065ustar00rootroot00000000000000compose-1.8.0/compose/config/__init__.py000066400000000000000000000005051274620702700202170ustar00rootroot00000000000000# flake8: noqa from __future__ import absolute_import from __future__ import unicode_literals from . import environment from .config import ConfigurationError from .config import DOCKER_CONFIG_KEYS from .config import find from .config import load from .config import merge_environment from .config import parse_environment compose-1.8.0/compose/config/config.py000066400000000000000000000771071274620702700177410ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import functools import logging import ntpath import os import string import sys from collections import namedtuple import six import yaml from cached_property import cached_property from ..const import COMPOSEFILE_V1 as V1 from ..const import COMPOSEFILE_V2_0 as V2_0 from ..utils import build_string_dict from .environment import env_vars_from_file from .environment import Environment from .environment import split_env from .errors import CircularReference from .errors import ComposeFileNotFound from .errors import ConfigurationError from .errors import VERSION_EXPLANATION from .interpolation import interpolate_environment_variables from .sort_services import get_container_name_from_network_mode from .sort_services import get_service_name_from_network_mode from .sort_services import sort_service_dicts from .types import parse_extra_hosts from .types import parse_restart_spec from .types import ServiceLink from .types import VolumeFromSpec from .types import VolumeSpec from .validation import match_named_volumes from .validation import validate_against_config_schema from .validation import validate_config_section from .validation import validate_depends_on from .validation import validate_extends_file_path from .validation import validate_links from .validation import validate_network_mode from .validation import validate_service_constraints from .validation import validate_top_level_object from .validation import validate_ulimits DOCKER_CONFIG_KEYS = [ 'cap_add', 'cap_drop', 'cgroup_parent', 'command', 'cpu_quota', 'cpu_shares', 'cpuset', 'detach', 'devices', 'dns', 'dns_search', 'domainname', 'entrypoint', 'env_file', 'environment', 'extra_hosts', 'hostname', 'image', 'ipc', 'labels', 'links', 'mac_address', 'mem_limit', 'memswap_limit', 'net', 'pid', 'ports', 'privileged', 'read_only', 'restart', 'security_opt', 'shm_size', 'stdin_open', 'stop_signal', 'tty', 'user', 'volume_driver', 'volumes', 'volumes_from', 'working_dir', ] ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [ 'build', 'container_name', 'dockerfile', 'log_driver', 'log_opt', 'logging', 'network_mode', ] DOCKER_VALID_URL_PREFIXES = ( 'http://', 'https://', 'git://', 
'github.com/', 'git@', ) SUPPORTED_FILENAMES = [ 'docker-compose.yml', 'docker-compose.yaml', ] DEFAULT_OVERRIDE_FILENAME = 'docker-compose.override.yml' log = logging.getLogger(__name__) class ConfigDetails(namedtuple('_ConfigDetails', 'working_dir config_files environment')): """ :param working_dir: the directory to use for relative paths in the config :type working_dir: string :param config_files: list of configuration files to load :type config_files: list of :class:`ConfigFile` :param environment: computed environment values for this project :type environment: :class:`environment.Environment` """ def __new__(cls, working_dir, config_files, environment=None): if environment is None: environment = Environment.from_env_file(working_dir) return super(ConfigDetails, cls).__new__( cls, working_dir, config_files, environment ) class ConfigFile(namedtuple('_ConfigFile', 'filename config')): """ :param filename: filename of the config file :type filename: string :param config: contents of the config file :type config: :class:`dict` """ @classmethod def from_filename(cls, filename): return cls(filename, load_yaml(filename)) @cached_property def version(self): if 'version' not in self.config: return V1 version = self.config['version'] if isinstance(version, dict): log.warn('Unexpected type for "version" key in "{}". Assuming ' '"version" is the name of a service, and defaulting to ' 'Compose file version 1.'.format(self.filename)) return V1 if not isinstance(version, six.string_types): raise ConfigurationError( 'Version in "{}" is invalid - it should be a string.' .format(self.filename)) if version == '1': raise ConfigurationError( 'Version in "{}" is invalid. {}' .format(self.filename, VERSION_EXPLANATION)) if version == '2': version = V2_0 if version != V2_0: raise ConfigurationError( 'Version in "{}" is unsupported. 
{}' .format(self.filename, VERSION_EXPLANATION)) return version def get_service(self, name): return self.get_service_dicts()[name] def get_service_dicts(self): return self.config if self.version == V1 else self.config.get('services', {}) def get_volumes(self): return {} if self.version == V1 else self.config.get('volumes', {}) def get_networks(self): return {} if self.version == V1 else self.config.get('networks', {}) class Config(namedtuple('_Config', 'version services volumes networks')): """ :param version: configuration version :type version: int :param services: List of service description dictionaries :type services: :class:`list` :param volumes: Dictionary mapping volume names to description dictionaries :type volumes: :class:`dict` :param networks: Dictionary mapping network names to description dictionaries :type networks: :class:`dict` """ class ServiceConfig(namedtuple('_ServiceConfig', 'working_dir filename name config')): @classmethod def with_abs_paths(cls, working_dir, filename, name, config): if not working_dir: raise ValueError("No working_dir for ServiceConfig.") return cls( os.path.abspath(working_dir), os.path.abspath(filename) if filename else filename, name, config) def find(base_dir, filenames, environment): if filenames == ['-']: return ConfigDetails( os.getcwd(), [ConfigFile(None, yaml.safe_load(sys.stdin))], environment ) if filenames: filenames = [os.path.join(base_dir, f) for f in filenames] else: filenames = get_default_config_files(base_dir) log.debug("Using configuration files: {}".format(",".join(filenames))) return ConfigDetails( os.path.dirname(filenames[0]), [ConfigFile.from_filename(f) for f in filenames], environment ) def validate_config_version(config_files): main_file = config_files[0] validate_top_level_object(main_file) for next_file in config_files[1:]: validate_top_level_object(next_file) if main_file.version != next_file.version: raise ConfigurationError( "Version mismatch: file {0} specifies version {1} but " "extension file {2} uses version {3}".format( main_file.filename, main_file.version, next_file.filename, next_file.version)) def get_default_config_files(base_dir): (candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir) if not candidates: raise ComposeFileNotFound(SUPPORTED_FILENAMES) winner = candidates[0] if len(candidates) > 1: log.warn("Found multiple config files with supported names: %s", ", ".join(candidates)) log.warn("Using %s\n", winner) return [os.path.join(path, winner)] + get_default_override_file(path) def get_default_override_file(path): override_filename = os.path.join(path, DEFAULT_OVERRIDE_FILENAME) return [override_filename] if os.path.exists(override_filename) else [] def find_candidates_in_parent_dirs(filenames, path): """ Given a directory path to start, looks for filenames in the directory, and then each parent directory successively, until found. Returns tuple (candidates, path). """ candidates = [filename for filename in filenames if os.path.exists(os.path.join(path, filename))] if not candidates: parent_dir = os.path.join(path, '..') if os.path.abspath(parent_dir) != os.path.abspath(path): return find_candidates_in_parent_dirs(filenames, parent_dir) return (candidates, path) def load(config_details): """Load the configuration from a working directory and a list of configuration files. Files are loaded in order, and merged on top of each other to create the final configuration. Return a fully interpolated, extended and validated configuration. 
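    Illustrative example (paths and filenames are hypothetical):

        details = find('.', ['docker-compose.yml'], Environment.from_env_file('.'))
        config = load(details)  # -> Config(version, services, volumes, networks)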
""" validate_config_version(config_details.config_files) processed_files = [ process_config_file(config_file, config_details.environment) for config_file in config_details.config_files ] config_details = config_details._replace(config_files=processed_files) main_file = config_details.config_files[0] volumes = load_mapping( config_details.config_files, 'get_volumes', 'Volume' ) networks = load_mapping( config_details.config_files, 'get_networks', 'Network' ) service_dicts = load_services(config_details, main_file) if main_file.version != V1: for service_dict in service_dicts: match_named_volumes(service_dict, volumes) return Config(main_file.version, service_dicts, volumes, networks) def load_mapping(config_files, get_func, entity_type): mapping = {} for config_file in config_files: for name, config in getattr(config_file, get_func)().items(): mapping[name] = config or {} if not config: continue external = config.get('external') if external: if len(config.keys()) > 1: raise ConfigurationError( '{} {} declared as external but specifies' ' additional attributes ({}). '.format( entity_type, name, ', '.join([k for k in config.keys() if k != 'external']) ) ) if isinstance(external, dict): config['external_name'] = external.get('name') else: config['external_name'] = name mapping[name] = config if 'driver_opts' in config: config['driver_opts'] = build_string_dict( config['driver_opts'] ) return mapping def load_services(config_details, config_file): def build_service(service_name, service_dict, service_names): service_config = ServiceConfig.with_abs_paths( config_details.working_dir, config_file.filename, service_name, service_dict) resolver = ServiceExtendsResolver( service_config, config_file, environment=config_details.environment ) service_dict = process_service(resolver.run()) service_config = service_config._replace(config=service_dict) validate_service(service_config, service_names, config_file.version) service_dict = finalize_service( service_config, service_names, config_file.version, config_details.environment) return service_dict def build_services(service_config): service_names = service_config.keys() return sort_service_dicts([ build_service(name, service_dict, service_names) for name, service_dict in service_config.items() ]) def merge_services(base, override): all_service_names = set(base) | set(override) return { name: merge_service_dicts_from_files( base.get(name, {}), override.get(name, {}), config_file.version) for name in all_service_names } service_configs = [ file.get_service_dicts() for file in config_details.config_files ] service_config = service_configs[0] for next_config in service_configs[1:]: service_config = merge_services(service_config, next_config) return build_services(service_config) def interpolate_config_section(filename, config, section, environment): validate_config_section(filename, config, section) return interpolate_environment_variables(config, section, environment) def process_config_file(config_file, environment, service_name=None): services = interpolate_config_section( config_file.filename, config_file.get_service_dicts(), 'service', environment,) if config_file.version == V2_0: processed_config = dict(config_file.config) processed_config['services'] = services processed_config['volumes'] = interpolate_config_section( config_file.filename, config_file.get_volumes(), 'volume', environment,) processed_config['networks'] = interpolate_config_section( config_file.filename, config_file.get_networks(), 'network', environment,) if config_file.version == V1: 
processed_config = services config_file = config_file._replace(config=processed_config) validate_against_config_schema(config_file) if service_name and service_name not in services: raise ConfigurationError( "Cannot extend service '{}' in {}: Service not found".format( service_name, config_file.filename)) return config_file class ServiceExtendsResolver(object): def __init__(self, service_config, config_file, environment, already_seen=None): self.service_config = service_config self.working_dir = service_config.working_dir self.already_seen = already_seen or [] self.config_file = config_file self.environment = environment @property def signature(self): return self.service_config.filename, self.service_config.name def detect_cycle(self): if self.signature in self.already_seen: raise CircularReference(self.already_seen + [self.signature]) def run(self): self.detect_cycle() if 'extends' in self.service_config.config: service_dict = self.resolve_extends(*self.validate_and_construct_extends()) return self.service_config._replace(config=service_dict) return self.service_config def validate_and_construct_extends(self): extends = self.service_config.config['extends'] if not isinstance(extends, dict): extends = {'service': extends} config_path = self.get_extended_config_path(extends) service_name = extends['service'] extends_file = ConfigFile.from_filename(config_path) validate_config_version([self.config_file, extends_file]) extended_file = process_config_file( extends_file, self.environment, service_name=service_name ) service_config = extended_file.get_service(service_name) return config_path, service_config, service_name def resolve_extends(self, extended_config_path, service_dict, service_name): resolver = ServiceExtendsResolver( ServiceConfig.with_abs_paths( os.path.dirname(extended_config_path), extended_config_path, service_name, service_dict), self.config_file, already_seen=self.already_seen + [self.signature], environment=self.environment ) service_config = resolver.run() other_service_dict = process_service(service_config) validate_extended_service_dict( other_service_dict, extended_config_path, service_name) return merge_service_dicts( other_service_dict, self.service_config.config, self.config_file.version) def get_extended_config_path(self, extends_options): """Service we are extending either has a value for 'file' set, which we need to obtain a full path to, or we are extending from a service defined in our own file. """ filename = self.service_config.filename validate_extends_file_path( self.service_config.name, extends_options, filename) if 'file' in extends_options: return expand_path(self.working_dir, extends_options['file']) return filename def resolve_environment(service_dict, environment=None): """Unpack any environment variables from an env_file, if set. Interpolate environment values if set.
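    Illustrative example: given an env_file containing `FOO=1`, a service
    `environment` of `['FOO=2', 'BAR']`, and `BAR=3` set in the shell,
    the result is `{'FOO': '2', 'BAR': '3'}`.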
""" env = {} for env_file in service_dict.get('env_file', []): env.update(env_vars_from_file(env_file)) env.update(parse_environment(service_dict.get('environment'))) return dict(resolve_env_var(k, v, environment) for k, v in six.iteritems(env)) def resolve_build_args(build, environment): args = parse_build_arguments(build.get('args')) return dict(resolve_env_var(k, v, environment) for k, v in six.iteritems(args)) def validate_extended_service_dict(service_dict, filename, service): error_prefix = "Cannot extend service '%s' in %s:" % (service, filename) if 'links' in service_dict: raise ConfigurationError( "%s services with 'links' cannot be extended" % error_prefix) if 'volumes_from' in service_dict: raise ConfigurationError( "%s services with 'volumes_from' cannot be extended" % error_prefix) if 'net' in service_dict: if get_container_name_from_network_mode(service_dict['net']): raise ConfigurationError( "%s services with 'net: container' cannot be extended" % error_prefix) if 'network_mode' in service_dict: if get_service_name_from_network_mode(service_dict['network_mode']): raise ConfigurationError( "%s services with 'network_mode: service' cannot be extended" % error_prefix) if 'depends_on' in service_dict: raise ConfigurationError( "%s services with 'depends_on' cannot be extended" % error_prefix) def validate_service(service_config, service_names, version): service_dict, service_name = service_config.config, service_config.name validate_service_constraints(service_dict, service_name, version) validate_paths(service_dict) validate_ulimits(service_config) validate_network_mode(service_config, service_names) validate_depends_on(service_config, service_names) validate_links(service_config, service_names) if not service_dict.get('image') and has_uppercase(service_name): raise ConfigurationError( "Service '{name}' contains uppercase characters which are not valid " "as part of an image name. Either use a lowercase service name or " "use the `image` field to set a custom name for the service image." 
.format(name=service_name)) def process_service(service_config): working_dir = service_config.working_dir service_dict = dict(service_config.config) if 'env_file' in service_dict: service_dict['env_file'] = [ expand_path(working_dir, path) for path in to_list(service_dict['env_file']) ] if 'build' in service_dict: if isinstance(service_dict['build'], six.string_types): service_dict['build'] = resolve_build_path(working_dir, service_dict['build']) elif isinstance(service_dict['build'], dict) and 'context' in service_dict['build']: path = service_dict['build']['context'] service_dict['build']['context'] = resolve_build_path(working_dir, path) if 'volumes' in service_dict and service_dict.get('volume_driver') is None: service_dict['volumes'] = resolve_volume_paths(working_dir, service_dict) if 'labels' in service_dict: service_dict['labels'] = parse_labels(service_dict['labels']) if 'extra_hosts' in service_dict: service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts']) for field in ['dns', 'dns_search', 'tmpfs']: if field in service_dict: service_dict[field] = to_list(service_dict[field]) return service_dict def finalize_service(service_config, service_names, version, environment): service_dict = dict(service_config.config) if 'environment' in service_dict or 'env_file' in service_dict: service_dict['environment'] = resolve_environment(service_dict, environment) service_dict.pop('env_file', None) if 'volumes_from' in service_dict: service_dict['volumes_from'] = [ VolumeFromSpec.parse(vf, service_names, version) for vf in service_dict['volumes_from'] ] if 'volumes' in service_dict: service_dict['volumes'] = [ VolumeSpec.parse(v) for v in service_dict['volumes']] if 'net' in service_dict: network_mode = service_dict.pop('net') container_name = get_container_name_from_network_mode(network_mode) if container_name and container_name in service_names: service_dict['network_mode'] = 'service:{}'.format(container_name) else: service_dict['network_mode'] = network_mode if 'networks' in service_dict: service_dict['networks'] = parse_networks(service_dict['networks']) if 'restart' in service_dict: service_dict['restart'] = parse_restart_spec(service_dict['restart']) normalize_build(service_dict, service_config.working_dir, environment) service_dict['name'] = service_config.name return normalize_v1_service_format(service_dict) def normalize_v1_service_format(service_dict): if 'log_driver' in service_dict or 'log_opt' in service_dict: if 'logging' not in service_dict: service_dict['logging'] = {} if 'log_driver' in service_dict: service_dict['logging']['driver'] = service_dict['log_driver'] del service_dict['log_driver'] if 'log_opt' in service_dict: service_dict['logging']['options'] = service_dict['log_opt'] del service_dict['log_opt'] if 'dockerfile' in service_dict: service_dict['build'] = service_dict.get('build', {}) service_dict['build'].update({ 'dockerfile': service_dict.pop('dockerfile') }) return service_dict def merge_service_dicts_from_files(base, override, version): """When merging services from multiple files we need to merge the `extends` field. This is not handled by `merge_service_dicts()` which is used to perform the `extends`. 
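    Illustrative example: if both the base file and the override file set
    `extends` on a service, the override file's value wins; if only the base
    file sets it, the base value is carried through unchanged.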
""" new_service = merge_service_dicts(base, override, version) if 'extends' in override: new_service['extends'] = override['extends'] elif 'extends' in base: new_service['extends'] = base['extends'] return new_service class MergeDict(dict): """A dict-like object responsible for merging two dicts into one.""" def __init__(self, base, override): self.base = base self.override = override def needs_merge(self, field): return field in self.base or field in self.override def merge_field(self, field, merge_func, default=None): if not self.needs_merge(field): return self[field] = merge_func( self.base.get(field, default), self.override.get(field, default)) def merge_mapping(self, field, parse_func): if not self.needs_merge(field): return self[field] = parse_func(self.base.get(field)) self[field].update(parse_func(self.override.get(field))) def merge_sequence(self, field, parse_func): def parse_sequence_func(seq): return to_mapping((parse_func(item) for item in seq), 'merge_field') if not self.needs_merge(field): return merged = parse_sequence_func(self.base.get(field, [])) merged.update(parse_sequence_func(self.override.get(field, []))) self[field] = [item.repr() for item in sorted(merged.values())] def merge_scalar(self, field): if self.needs_merge(field): self[field] = self.override.get(field, self.base.get(field)) def merge_service_dicts(base, override, version): md = MergeDict(base, override) md.merge_mapping('environment', parse_environment) md.merge_mapping('labels', parse_labels) md.merge_mapping('ulimits', parse_ulimits) md.merge_mapping('networks', parse_networks) md.merge_sequence('links', ServiceLink.parse) for field in ['volumes', 'devices']: md.merge_field(field, merge_path_mappings) for field in [ 'ports', 'cap_add', 'cap_drop', 'expose', 'external_links', 'security_opt', 'volumes_from', 'depends_on', ]: md.merge_field(field, merge_unique_items_lists, default=[]) for field in ['dns', 'dns_search', 'env_file', 'tmpfs']: md.merge_field(field, merge_list_or_string) for field in set(ALLOWED_KEYS) - set(md): md.merge_scalar(field) if version == V1: legacy_v1_merge_image_or_build(md, base, override) elif md.needs_merge('build'): md['build'] = merge_build(md, base, override) return dict(md) def merge_unique_items_lists(base, override): return sorted(set().union(base, override)) def merge_build(output, base, override): def to_dict(service): build_config = service.get('build', {}) if isinstance(build_config, six.string_types): return {'context': build_config} return build_config md = MergeDict(to_dict(base), to_dict(override)) md.merge_scalar('context') md.merge_scalar('dockerfile') md.merge_mapping('args', parse_build_arguments) return dict(md) def legacy_v1_merge_image_or_build(output, base, override): output.pop('image', None) output.pop('build', None) if 'image' in override: output['image'] = override['image'] elif 'build' in override: output['build'] = override['build'] elif 'image' in base: output['image'] = base['image'] elif 'build' in base: output['build'] = base['build'] def merge_environment(base, override): env = parse_environment(base) env.update(parse_environment(override)) return env def split_label(label): if '=' in label: return label.split('=', 1) else: return label, '' def parse_dict_or_list(split_func, type_name, arguments): if not arguments: return {} if isinstance(arguments, list): return dict(split_func(e) for e in arguments) if isinstance(arguments, dict): return dict(arguments) raise ConfigurationError( "%s \"%s\" must be a list or mapping," % (type_name, arguments) ) 
parse_build_arguments = functools.partial(parse_dict_or_list, split_env, 'build arguments') parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment') parse_labels = functools.partial(parse_dict_or_list, split_label, 'labels') parse_networks = functools.partial(parse_dict_or_list, lambda k: (k, None), 'networks') def parse_ulimits(ulimits): if not ulimits: return {} if isinstance(ulimits, dict): return dict(ulimits) def resolve_env_var(key, val, environment): if val is not None: return key, val elif environment and key in environment: return key, environment[key] else: return key, None def resolve_volume_paths(working_dir, service_dict): return [ resolve_volume_path(working_dir, volume) for volume in service_dict['volumes'] ] def resolve_volume_path(working_dir, volume): container_path, host_path = split_path_mapping(volume) if host_path is not None: if host_path.startswith('.'): host_path = expand_path(working_dir, host_path) host_path = os.path.expanduser(host_path) return u"{}:{}".format(host_path, container_path) else: return container_path def normalize_build(service_dict, working_dir, environment): if 'build' in service_dict: build = {} # Shortcut where specifying a string is treated as the build context if isinstance(service_dict['build'], six.string_types): build['context'] = service_dict.pop('build') else: build.update(service_dict['build']) if 'args' in build: build['args'] = build_string_dict( resolve_build_args(build, environment) ) service_dict['build'] = build def resolve_build_path(working_dir, build_path): if is_url(build_path): return build_path return expand_path(working_dir, build_path) def is_url(build_path): return build_path.startswith(DOCKER_VALID_URL_PREFIXES) def validate_paths(service_dict): if 'build' in service_dict: build = service_dict.get('build', {}) if isinstance(build, six.string_types): build_path = build elif isinstance(build, dict) and 'context' in build: build_path = build['context'] else: # We have a build section but no context, so nothing to validate return if ( not is_url(build_path) and (not os.path.exists(build_path) or not os.access(build_path, os.R_OK)) ): raise ConfigurationError( "build path %s either does not exist, is not accessible, " "or is not a valid URL." % build_path) def merge_path_mappings(base, override): d = dict_from_path_mappings(base) d.update(dict_from_path_mappings(override)) return path_mappings_from_dict(d) def dict_from_path_mappings(path_mappings): if path_mappings: return dict(split_path_mapping(v) for v in path_mappings) else: return {} def path_mappings_from_dict(d): return [join_path_mapping(v) for v in sorted(d.items())] def split_path_mapping(volume_path): """ Ascertain if the volume_path contains a host path as well as a container path. Using splitdrive so windows absolute paths won't cause issues with splitting on ':'. """ # splitdrive is very naive, so handle special cases where we can be sure # the first character is not a drive. 
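    # Illustrative examples:
    #   split_path_mapping('/host:/app')       -> ('/app', '/host')
    #   split_path_mapping('named_vol:/app')   -> ('/app', 'named_vol')
    #   split_path_mapping('C:\\data:/app')    -> ('/app', 'C:\\data')  (Windows)
    #   split_path_mapping('/app')             -> ('/app', None)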
if (volume_path.startswith('.') or volume_path.startswith('~') or volume_path.startswith('/')): drive, volume_config = '', volume_path else: drive, volume_config = ntpath.splitdrive(volume_path) if ':' in volume_config: (host, container) = volume_config.split(':', 1) return (container, drive + host) else: return (volume_path, None) def join_path_mapping(pair): (container, host) = pair if host is None: return container else: return ":".join((host, container)) def expand_path(working_dir, path): return os.path.abspath(os.path.join(working_dir, os.path.expanduser(path))) def merge_list_or_string(base, override): return to_list(base) + to_list(override) def to_list(value): if value is None: return [] elif isinstance(value, six.string_types): return [value] else: return value def to_mapping(sequence, key_field): return {getattr(item, key_field): item for item in sequence} def has_uppercase(name): return any(char in string.ascii_uppercase for char in name) def load_yaml(filename): try: with open(filename, 'r') as fh: return yaml.safe_load(fh) except (IOError, yaml.YAMLError) as e: error_name = getattr(e, '__module__', '') + '.' + e.__class__.__name__ raise ConfigurationError(u"{}: {}".format(error_name, e)) compose-1.8.0/compose/config/config_schema_v1.json000066400000000000000000000125431274620702700222010ustar00rootroot00000000000000{ "$schema": "http://json-schema.org/draft-04/schema#", "id": "config_schema_v1.json", "type": "object", "patternProperties": { "^[a-zA-Z0-9._-]+$": { "$ref": "#/definitions/service" } }, "additionalProperties": false, "definitions": { "service": { "id": "#/definitions/service", "type": "object", "properties": { "build": {"type": "string"}, "cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "cgroup_parent": {"type": "string"}, "command": { "oneOf": [ {"type": "string"}, {"type": "array", "items": {"type": "string"}} ] }, "container_name": {"type": "string"}, "cpu_shares": {"type": ["number", "string"]}, "cpu_quota": {"type": ["number", "string"]}, "cpuset": {"type": "string"}, "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "dns": {"$ref": "#/definitions/string_or_list"}, "dns_search": {"$ref": "#/definitions/string_or_list"}, "dockerfile": {"type": "string"}, "domainname": {"type": "string"}, "entrypoint": { "oneOf": [ {"type": "string"}, {"type": "array", "items": {"type": "string"}} ] }, "env_file": {"$ref": "#/definitions/string_or_list"}, "environment": {"$ref": "#/definitions/list_or_dict"}, "expose": { "type": "array", "items": { "type": ["string", "number"], "format": "expose" }, "uniqueItems": true }, "extends": { "oneOf": [ { "type": "string" }, { "type": "object", "properties": { "service": {"type": "string"}, "file": {"type": "string"} }, "required": ["service"], "additionalProperties": false } ] }, "extra_hosts": {"$ref": "#/definitions/list_or_dict"}, "external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "hostname": {"type": "string"}, "image": {"type": "string"}, "ipc": {"type": "string"}, "labels": {"$ref": "#/definitions/list_or_dict"}, "links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "log_driver": {"type": "string"}, "log_opt": {"type": "object"}, "mac_address": {"type": "string"}, "mem_limit": {"type": ["number", "string"]}, "memswap_limit": {"type": ["number", "string"]}, "net": {"type": "string"}, "pid": {"type": ["string", "null"]}, "ports": { 
"type": "array", "items": { "type": ["string", "number"], "format": "ports" }, "uniqueItems": true }, "privileged": {"type": "boolean"}, "read_only": {"type": "boolean"}, "restart": {"type": "string"}, "security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "shm_size": {"type": ["number", "string"]}, "stdin_open": {"type": "boolean"}, "stop_signal": {"type": "string"}, "tty": {"type": "boolean"}, "ulimits": { "type": "object", "patternProperties": { "^[a-z]+$": { "oneOf": [ {"type": "integer"}, { "type":"object", "properties": { "hard": {"type": "integer"}, "soft": {"type": "integer"} }, "required": ["soft", "hard"], "additionalProperties": false } ] } } }, "user": {"type": "string"}, "volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "volume_driver": {"type": "string"}, "volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "working_dir": {"type": "string"} }, "dependencies": { "memswap_limit": ["mem_limit"] }, "additionalProperties": false }, "string_or_list": { "oneOf": [ {"type": "string"}, {"$ref": "#/definitions/list_of_strings"} ] }, "list_of_strings": { "type": "array", "items": {"type": "string"}, "uniqueItems": true }, "list_or_dict": { "oneOf": [ { "type": "object", "patternProperties": { ".+": { "type": ["string", "number", "null"] } }, "additionalProperties": false }, {"type": "array", "items": {"type": "string"}, "uniqueItems": true} ] }, "constraints": { "service": { "id": "#/definitions/constraints/service", "anyOf": [ { "required": ["build"], "not": {"required": ["image"]} }, { "required": ["image"], "not": {"anyOf": [ {"required": ["build"]}, {"required": ["dockerfile"]} ]} } ] } } } } compose-1.8.0/compose/config/config_schema_v2.0.json000066400000000000000000000211671274620702700223420ustar00rootroot00000000000000{ "$schema": "http://json-schema.org/draft-04/schema#", "id": "config_schema_v2.0.json", "type": "object", "properties": { "version": { "type": "string" }, "services": { "id": "#/properties/services", "type": "object", "patternProperties": { "^[a-zA-Z0-9._-]+$": { "$ref": "#/definitions/service" } }, "additionalProperties": false }, "networks": { "id": "#/properties/networks", "type": "object", "patternProperties": { "^[a-zA-Z0-9._-]+$": { "$ref": "#/definitions/network" } } }, "volumes": { "id": "#/properties/volumes", "type": "object", "patternProperties": { "^[a-zA-Z0-9._-]+$": { "$ref": "#/definitions/volume" } }, "additionalProperties": false } }, "additionalProperties": false, "definitions": { "service": { "id": "#/definitions/service", "type": "object", "properties": { "build": { "oneOf": [ {"type": "string"}, { "type": "object", "properties": { "context": {"type": "string"}, "dockerfile": {"type": "string"}, "args": {"$ref": "#/definitions/list_or_dict"} }, "additionalProperties": false } ] }, "cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "cgroup_parent": {"type": "string"}, "command": { "oneOf": [ {"type": "string"}, {"type": "array", "items": {"type": "string"}} ] }, "container_name": {"type": "string"}, "cpu_shares": {"type": ["number", "string"]}, "cpu_quota": {"type": ["number", "string"]}, "cpuset": {"type": "string"}, "depends_on": {"$ref": "#/definitions/list_of_strings"}, "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "dns": {"$ref": "#/definitions/string_or_list"}, "dns_search": {"$ref": 
"#/definitions/string_or_list"}, "domainname": {"type": "string"}, "entrypoint": { "oneOf": [ {"type": "string"}, {"type": "array", "items": {"type": "string"}} ] }, "env_file": {"$ref": "#/definitions/string_or_list"}, "environment": {"$ref": "#/definitions/list_or_dict"}, "expose": { "type": "array", "items": { "type": ["string", "number"], "format": "expose" }, "uniqueItems": true }, "extends": { "oneOf": [ { "type": "string" }, { "type": "object", "properties": { "service": {"type": "string"}, "file": {"type": "string"} }, "required": ["service"], "additionalProperties": false } ] }, "external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "extra_hosts": {"$ref": "#/definitions/list_or_dict"}, "hostname": {"type": "string"}, "image": {"type": "string"}, "ipc": {"type": "string"}, "labels": {"$ref": "#/definitions/list_or_dict"}, "links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "logging": { "type": "object", "properties": { "driver": {"type": "string"}, "options": {"type": "object"} }, "additionalProperties": false }, "mac_address": {"type": "string"}, "mem_limit": {"type": ["number", "string"]}, "memswap_limit": {"type": ["number", "string"]}, "network_mode": {"type": "string"}, "networks": { "oneOf": [ {"$ref": "#/definitions/list_of_strings"}, { "type": "object", "patternProperties": { "^[a-zA-Z0-9._-]+$": { "oneOf": [ { "type": "object", "properties": { "aliases": {"$ref": "#/definitions/list_of_strings"}, "ipv4_address": {"type": "string"}, "ipv6_address": {"type": "string"} }, "additionalProperties": false }, {"type": "null"} ] } }, "additionalProperties": false } ] }, "pid": {"type": ["string", "null"]}, "ports": { "type": "array", "items": { "type": ["string", "number"], "format": "ports" }, "uniqueItems": true }, "privileged": {"type": "boolean"}, "read_only": {"type": "boolean"}, "restart": {"type": "string"}, "security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "shm_size": {"type": ["number", "string"]}, "stdin_open": {"type": "boolean"}, "stop_signal": {"type": "string"}, "tmpfs": {"$ref": "#/definitions/string_or_list"}, "tty": {"type": "boolean"}, "ulimits": { "type": "object", "patternProperties": { "^[a-z]+$": { "oneOf": [ {"type": "integer"}, { "type":"object", "properties": { "hard": {"type": "integer"}, "soft": {"type": "integer"} }, "required": ["soft", "hard"], "additionalProperties": false } ] } } }, "user": {"type": "string"}, "volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "volume_driver": {"type": "string"}, "volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true}, "working_dir": {"type": "string"} }, "dependencies": { "memswap_limit": ["mem_limit"] }, "additionalProperties": false }, "network": { "id": "#/definitions/network", "type": "object", "properties": { "driver": {"type": "string"}, "driver_opts": { "type": "object", "patternProperties": { "^.+$": {"type": ["string", "number"]} } }, "ipam": { "type": "object", "properties": { "driver": {"type": "string"}, "config": { "type": "array" } }, "additionalProperties": false }, "external": { "type": ["boolean", "object"], "properties": { "name": {"type": "string"} }, "additionalProperties": false } }, "additionalProperties": false }, "volume": { "id": "#/definitions/volume", "type": ["object", "null"], "properties": { "driver": {"type": "string"}, "driver_opts": { "type": "object", "patternProperties": { "^.+$": {"type": ["string", "number"]} } }, "external": { 
"type": ["boolean", "object"], "properties": { "name": {"type": "string"} } }, "additionalProperties": false }, "additionalProperties": false }, "string_or_list": { "oneOf": [ {"type": "string"}, {"$ref": "#/definitions/list_of_strings"} ] }, "list_of_strings": { "type": "array", "items": {"type": "string"}, "uniqueItems": true }, "list_or_dict": { "oneOf": [ { "type": "object", "patternProperties": { ".+": { "type": ["string", "number", "null"] } }, "additionalProperties": false }, {"type": "array", "items": {"type": "string"}, "uniqueItems": true} ] }, "constraints": { "service": { "id": "#/definitions/constraints/service", "anyOf": [ {"required": ["build"]}, {"required": ["image"]} ], "properties": { "build": { "required": ["context"] } } } } } } compose-1.8.0/compose/config/environment.py000066400000000000000000000062431274620702700210310ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import codecs import logging import os import six from ..const import IS_WINDOWS_PLATFORM from .errors import ConfigurationError log = logging.getLogger(__name__) def split_env(env): if isinstance(env, six.binary_type): env = env.decode('utf-8', 'replace') if '=' in env: return env.split('=', 1) else: return env, None def env_vars_from_file(filename): """ Read in a line delimited file of environment variables. """ if not os.path.exists(filename): raise ConfigurationError("Couldn't find env file: %s" % filename) elif not os.path.isfile(filename): raise ConfigurationError("%s is not a file." % (filename)) env = {} for line in codecs.open(filename, 'r', 'utf-8'): line = line.strip() if line and not line.startswith('#'): k, v = split_env(line) env[k] = v return env class Environment(dict): def __init__(self, *args, **kwargs): super(Environment, self).__init__(*args, **kwargs) self.missing_keys = [] @classmethod def from_env_file(cls, base_dir): def _initialize(): result = cls() if base_dir is None: return result env_file_path = os.path.join(base_dir, '.env') try: return cls(env_vars_from_file(env_file_path)) except ConfigurationError: pass return result instance = _initialize() instance.update(os.environ) return instance @classmethod def from_command_line(cls, parsed_env_opts): result = cls() for k, v in parsed_env_opts.items(): # Values from the command line take priority, unless they're unset # in which case they take the value from the system's environment if v is None and k in os.environ: result[k] = os.environ[k] else: result[k] = v return result def __getitem__(self, key): try: return super(Environment, self).__getitem__(key) except KeyError: if IS_WINDOWS_PLATFORM: try: return super(Environment, self).__getitem__(key.upper()) except KeyError: pass if key not in self.missing_keys: log.warn( "The {} variable is not set. Defaulting to a blank string." 
.format(key) ) self.missing_keys.append(key) return "" def __contains__(self, key): result = super(Environment, self).__contains__(key) if IS_WINDOWS_PLATFORM: return ( result or super(Environment, self).__contains__(key.upper()) ) return result def get(self, key, *args, **kwargs): if IS_WINDOWS_PLATFORM: return super(Environment, self).get( key, super(Environment, self).get(key.upper(), *args, **kwargs) ) return super(Environment, self).get(key, *args, **kwargs) compose-1.8.0/compose/config/errors.py000066400000000000000000000026301274620702700177750ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals VERSION_EXPLANATION = ( 'You might be seeing this error because you\'re using the wrong Compose ' 'file version. Either specify a version of "2" (or "2.0") and place your ' 'service definitions under the `services` key, or omit the `version` key ' 'and place your service definitions at the root of the file to use ' 'version 1.\nFor more on the Compose file format versions, see ' 'https://docs.docker.com/compose/compose-file/') class ConfigurationError(Exception): def __init__(self, msg): self.msg = msg def __str__(self): return self.msg class DependencyError(ConfigurationError): pass class CircularReference(ConfigurationError): def __init__(self, trail): self.trail = trail @property def msg(self): lines = [ "{} in {}".format(service_name, filename) for (filename, service_name) in self.trail ] return "Circular reference:\n {}".format("\n extends ".join(lines)) class ComposeFileNotFound(ConfigurationError): def __init__(self, supported_filenames): super(ComposeFileNotFound, self).__init__(""" Can't find a suitable configuration file in this directory or any parent. Are you in the right directory? Supported filenames: %s """ % ", ".join(supported_filenames)) compose-1.8.0/compose/config/interpolation.py000066400000000000000000000033271274620702700213540ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging from string import Template import six from .errors import ConfigurationError log = logging.getLogger(__name__) def interpolate_environment_variables(config, section, environment): def process_item(name, config_dict): return dict( (key, interpolate_value(name, key, val, section, environment)) for key, val in (config_dict or {}).items() ) return dict( (name, process_item(name, config_dict or {})) for name, config_dict in config.items() ) def interpolate_value(name, config_key, value, section, mapping): try: return recursive_interpolate(value, mapping) except InvalidInterpolation as e: raise ConfigurationError( 'Invalid interpolation format for "{config_key}" option ' 'in {section} "{name}": "{string}"'.format( config_key=config_key, name=name, section=section, string=e.string)) def recursive_interpolate(obj, mapping): if isinstance(obj, six.string_types): return interpolate(obj, mapping) elif isinstance(obj, dict): return dict( (key, recursive_interpolate(val, mapping)) for (key, val) in obj.items() ) elif isinstance(obj, list): return [recursive_interpolate(val, mapping) for val in obj] else: return obj def interpolate(string, mapping): try: return Template(string).substitute(mapping) except ValueError: raise InvalidInterpolation(string) class InvalidInterpolation(Exception): def __init__(self, string): self.string = string compose-1.8.0/compose/config/serialize.py000066400000000000000000000031561274620702700204540ustar00rootroot00000000000000from __future__ import 
absolute_import from __future__ import unicode_literals import six import yaml from compose.config import types from compose.config.config import V1 from compose.config.config import V2_0 def serialize_config_type(dumper, data): representer = dumper.represent_str if six.PY3 else dumper.represent_unicode return representer(data.repr()) yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type) yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type) def denormalize_config(config): denormalized_services = [ denormalize_service_dict(service_dict, config.version) for service_dict in config.services ] services = { service_dict.pop('name'): service_dict for service_dict in denormalized_services } networks = config.networks.copy() for net_name, net_conf in networks.items(): if 'external_name' in net_conf: del net_conf['external_name'] return { 'version': V2_0, 'services': services, 'networks': networks, 'volumes': config.volumes, } def serialize_config(config): return yaml.safe_dump( denormalize_config(config), default_flow_style=False, indent=2, width=80) def denormalize_service_dict(service_dict, version): service_dict = service_dict.copy() if 'restart' in service_dict: service_dict['restart'] = types.serialize_restart_spec(service_dict['restart']) if version == V1 and 'network_mode' not in service_dict: service_dict['network_mode'] = 'bridge' return service_dict compose-1.8.0/compose/config/sort_services.py000066400000000000000000000045501274620702700213560ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from compose.config.errors import DependencyError def get_service_name_from_network_mode(network_mode): return get_source_name_from_network_mode(network_mode, 'service') def get_container_name_from_network_mode(network_mode): return get_source_name_from_network_mode(network_mode, 'container') def get_source_name_from_network_mode(network_mode, source_type): if not network_mode: return if not network_mode.startswith(source_type+':'): return _, net_name = network_mode.split(':', 1) return net_name def get_service_names(links): return [link.split(':')[0] for link in links] def get_service_names_from_volumes_from(volumes_from): return [volume_from.source for volume_from in volumes_from] def get_service_dependents(service_dict, services): name = service_dict['name'] return [ service for service in services if (name in get_service_names(service.get('links', [])) or name in get_service_names_from_volumes_from(service.get('volumes_from', [])) or name == get_service_name_from_network_mode(service.get('network_mode')) or name in service.get('depends_on', [])) ] def sort_service_dicts(services): # Topological sort (Cormen/Tarjan algorithm). 
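    # Depth-first with a temporary mark to detect cycles: each service is
    # prepended to the result only after every service that depends on it has
    # been visited, so dependencies always sort ahead of their dependents.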
unmarked = services[:] temporary_marked = set() sorted_services = [] def visit(n): if n['name'] in temporary_marked: if n['name'] in get_service_names(n.get('links', [])): raise DependencyError('A service cannot link to itself: %s' % n['name']) if n['name'] in n.get('volumes_from', []): raise DependencyError('A service cannot mount itself as volume: %s' % n['name']) if n['name'] in n.get('depends_on', []): raise DependencyError('A service cannot depend on itself: %s' % n['name']) raise DependencyError('Circular dependency between %s' % ' and '.join(temporary_marked)) if n in unmarked: temporary_marked.add(n['name']) for m in get_service_dependents(n, services): visit(m) temporary_marked.remove(n['name']) unmarked.remove(n) sorted_services.insert(0, n) while unmarked: visit(unmarked[-1]) return sorted_services compose-1.8.0/compose/config/types.py000066400000000000000000000141071274620702700176270ustar00rootroot00000000000000""" Types for objects parsed from the configuration. """ from __future__ import absolute_import from __future__ import unicode_literals import os from collections import namedtuple import six from compose.config.config import V1 from compose.config.errors import ConfigurationError from compose.const import IS_WINDOWS_PLATFORM class VolumeFromSpec(namedtuple('_VolumeFromSpec', 'source mode type')): # TODO: drop service_names arg when v1 is removed @classmethod def parse(cls, volume_from_config, service_names, version): func = cls.parse_v1 if version == V1 else cls.parse_v2 return func(service_names, volume_from_config) @classmethod def parse_v1(cls, service_names, volume_from_config): parts = volume_from_config.split(':') if len(parts) > 2: raise ConfigurationError( "volume_from {} has incorrect format, should be " "service[:mode]".format(volume_from_config)) if len(parts) == 1: source = parts[0] mode = 'rw' else: source, mode = parts type = 'service' if source in service_names else 'container' return cls(source, mode, type) @classmethod def parse_v2(cls, service_names, volume_from_config): parts = volume_from_config.split(':') if len(parts) > 3: raise ConfigurationError( "volume_from {} has incorrect format, should be one of " "'<service name>[:<mode>]' or " "'container:<container name>[:<mode>]'".format(volume_from_config)) if len(parts) == 1: source = parts[0] return cls(source, 'rw', 'service') if len(parts) == 2: if parts[0] == 'container': type, source = parts return cls(source, 'rw', type) source, mode = parts return cls(source, mode, 'service') if len(parts) == 3: type, source, mode = parts if type not in ('service', 'container'): raise ConfigurationError( "Unknown volumes_from type '{}' in '{}'".format( type, volume_from_config)) return cls(source, mode, type) def repr(self): return '{v.type}:{v.source}:{v.mode}'.format(v=self) def parse_restart_spec(restart_config): if not restart_config: return None parts = restart_config.split(':') if len(parts) > 2: raise ConfigurationError( "Restart %s has incorrect format, should be " "mode[:max_retry]" % restart_config) if len(parts) == 2: name, max_retry_count = parts else: name, = parts max_retry_count = 0 return {'Name': name, 'MaximumRetryCount': int(max_retry_count)} def serialize_restart_spec(restart_spec): parts = [restart_spec['Name']] if restart_spec['MaximumRetryCount']: parts.append(six.text_type(restart_spec['MaximumRetryCount'])) return ':'.join(parts) def parse_extra_hosts(extra_hosts_config): if not extra_hosts_config: return {} if isinstance(extra_hosts_config, dict): return dict(extra_hosts_config) if isinstance(extra_hosts_config, list):
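        # Illustrative example:
        #   ['somehost:162.242.195.82', 'otherhost:50.31.209.229']
        #     -> {'somehost': '162.242.195.82', 'otherhost': '50.31.209.229'}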
extra_hosts_dict = {} for extra_hosts_line in extra_hosts_config: # TODO: validate string contains ':' ? host, ip = extra_hosts_line.split(':', 1) extra_hosts_dict[host.strip()] = ip.strip() return extra_hosts_dict def normalize_paths_for_engine(external_path, internal_path): """Windows paths, c:\my\path\shiny, need to be changed to be compatible with the Engine. Volume paths are expected to be linux style /c/my/path/shiny/ """ if not IS_WINDOWS_PLATFORM: return external_path, internal_path if external_path: drive, tail = os.path.splitdrive(external_path) if drive: external_path = '/' + drive.lower().rstrip(':') + tail external_path = external_path.replace('\\', '/') return external_path, internal_path.replace('\\', '/') class VolumeSpec(namedtuple('_VolumeSpec', 'external internal mode')): @classmethod def parse(cls, volume_config): """Parse a volume_config path and split it into external:internal[:mode] parts to be returned as a valid VolumeSpec. """ if IS_WINDOWS_PLATFORM: # relative paths in windows expand to include the drive, eg C:\ # so we join the first 2 parts back together to count as one drive, tail = os.path.splitdrive(volume_config) parts = tail.split(":") if drive: parts[0] = drive + parts[0] else: parts = volume_config.split(':') if len(parts) > 3: raise ConfigurationError( "Volume %s has incorrect format, should be " "external:internal[:mode]" % volume_config) if len(parts) == 1: external, internal = normalize_paths_for_engine( None, os.path.normpath(parts[0])) else: external, internal = normalize_paths_for_engine( os.path.normpath(parts[0]), os.path.normpath(parts[1])) mode = 'rw' if len(parts) == 3: mode = parts[2] return cls(external, internal, mode) def repr(self): external = self.external + ':' if self.external else '' return '{ext}{v.internal}:{v.mode}'.format(ext=external, v=self) @property def is_named_volume(self): return self.external and not self.external.startswith(('.', '/', '~')) class ServiceLink(namedtuple('_ServiceLink', 'target alias')): @classmethod def parse(cls, link_spec): target, _, alias = link_spec.partition(':') if not alias: alias = target return cls(target, alias) def repr(self): if self.target == self.alias: return self.target return '{s.target}:{s.alias}'.format(s=self) @property def merge_field(self): return self.alias compose-1.8.0/compose/config/validation.py000066400000000000000000000342121274620702700206140ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import json import logging import os import re import sys import six from docker.utils.ports import split_port from jsonschema import Draft4Validator from jsonschema import FormatChecker from jsonschema import RefResolver from jsonschema import ValidationError from ..const import COMPOSEFILE_V1 as V1 from .errors import ConfigurationError from .errors import VERSION_EXPLANATION from .sort_services import get_service_name_from_network_mode log = logging.getLogger(__name__) DOCKER_CONFIG_HINTS = { 'cpu_share': 'cpu_shares', 'add_host': 'extra_hosts', 'hosts': 'extra_hosts', 'extra_host': 'extra_hosts', 'device': 'devices', 'link': 'links', 'memory_swap': 'memswap_limit', 'port': 'ports', 'privilege': 'privileged', 'priviliged': 'privileged', 'privilige': 'privileged', 'volume': 'volumes', 'workdir': 'working_dir', } VALID_NAME_CHARS = '[a-zA-Z0-9\._\-]' VALID_EXPOSE_FORMAT = r'^\d+(\-\d+)?(\/[a-zA-Z]+)?$' @FormatChecker.cls_checks(format="ports", raises=ValidationError) def format_ports(instance): try: split_port(instance) except 
ValueError as e: raise ValidationError(six.text_type(e)) return True @FormatChecker.cls_checks(format="expose", raises=ValidationError) def format_expose(instance): if isinstance(instance, six.string_types): if not re.match(VALID_EXPOSE_FORMAT, instance): raise ValidationError( "should be of the format 'PORT[/PROTOCOL]'") return True def match_named_volumes(service_dict, project_volumes): service_volumes = service_dict.get('volumes', []) for volume_spec in service_volumes: if volume_spec.is_named_volume and volume_spec.external not in project_volumes: raise ConfigurationError( 'Named volume "{0}" is used in service "{1}" but no' ' declaration was found in the volumes section.'.format( volume_spec.repr(), service_dict.get('name') ) ) def python_type_to_yaml_type(type_): type_name = type(type_).__name__ return { 'dict': 'mapping', 'list': 'array', 'int': 'number', 'float': 'number', 'bool': 'boolean', 'unicode': 'string', 'str': 'string', 'bytes': 'string', }.get(type_name, type_name) def validate_config_section(filename, config, section): """Validate the structure of a configuration section. This must be done before interpolation so it's separate from schema validation. """ if not isinstance(config, dict): raise ConfigurationError( "In file '{filename}', {section} must be a mapping, not " "{type}.".format( filename=filename, section=section, type=anglicize_json_type(python_type_to_yaml_type(config)))) for key, value in config.items(): if not isinstance(key, six.string_types): raise ConfigurationError( "In file '{filename}', the {section} name {name} must be a " "quoted string, i.e. '{name}'.".format( filename=filename, section=section, name=key)) if not isinstance(value, (dict, type(None))): raise ConfigurationError( "In file '{filename}', {section} '{name}' must be a mapping not " "{type}.".format( filename=filename, section=section, name=key, type=anglicize_json_type(python_type_to_yaml_type(value)))) def validate_top_level_object(config_file): if not isinstance(config_file.config, dict): raise ConfigurationError( "Top level object in '{}' needs to be an object not '{}'.".format( config_file.filename, type(config_file.config))) def validate_ulimits(service_config): ulimit_config = service_config.config.get('ulimits', {}) for limit_name, soft_hard_values in six.iteritems(ulimit_config): if isinstance(soft_hard_values, dict): if not soft_hard_values['soft'] <= soft_hard_values['hard']: raise ConfigurationError( "Service '{s.name}' has invalid ulimit '{ulimit}'. " "'soft' value can not be greater than 'hard' value ".format( s=service_config, ulimit=ulimit_config)) def validate_extends_file_path(service_name, extends_options, filename): """ The service to be extended must either be defined in the config key 'file', or within 'filename'. """ error_prefix = "Invalid 'extends' configuration for %s:" % service_name if 'file' not in extends_options and filename is None: raise ConfigurationError( "%s you need to specify a 'file', e.g. 
'file: something.yml'" % error_prefix ) def validate_network_mode(service_config, service_names): network_mode = service_config.config.get('network_mode') if not network_mode: return if 'networks' in service_config.config: raise ConfigurationError("'network_mode' and 'networks' cannot be combined") dependency = get_service_name_from_network_mode(network_mode) if not dependency: return if dependency not in service_names: raise ConfigurationError( "Service '{s.name}' uses the network stack of service '{dep}' which " "is undefined.".format(s=service_config, dep=dependency)) def validate_links(service_config, service_names): for link in service_config.config.get('links', []): if link.split(':')[0] not in service_names: raise ConfigurationError( "Service '{s.name}' has a link to service '{link}' which is " "undefined.".format(s=service_config, link=link)) def validate_depends_on(service_config, service_names): for dependency in service_config.config.get('depends_on', []): if dependency not in service_names: raise ConfigurationError( "Service '{s.name}' depends on service '{dep}' which is " "undefined.".format(s=service_config, dep=dependency)) def get_unsupported_config_msg(path, error_key): msg = "Unsupported config option for {}: '{}'".format(path_string(path), error_key) if error_key in DOCKER_CONFIG_HINTS: msg += " (did you mean '{}'?)".format(DOCKER_CONFIG_HINTS[error_key]) return msg def anglicize_json_type(json_type): if json_type.startswith(('a', 'e', 'i', 'o', 'u')): return 'an ' + json_type return 'a ' + json_type def is_service_dict_schema(schema_id): return schema_id in ('config_schema_v1.json', '#/properties/services') def handle_error_for_schema_with_id(error, path): schema_id = error.schema['id'] if is_service_dict_schema(schema_id) and error.validator == 'additionalProperties': return "Invalid service name '{}' - only {} characters are allowed".format( # The service_name is the key to the json object list(error.instance)[0], VALID_NAME_CHARS) if error.validator == 'additionalProperties': if schema_id == '#/definitions/service': invalid_config_key = parse_key_from_error_msg(error) return get_unsupported_config_msg(path, invalid_config_key) if not error.path: return '{}\n\n{}'.format(error.message, VERSION_EXPLANATION) def handle_generic_error(error, path): msg_format = None error_msg = error.message if error.validator == 'oneOf': msg_format = "{path} {msg}" config_key, error_msg = _parse_oneof_validator(error) if config_key: path.append(config_key) elif error.validator == 'type': msg_format = "{path} contains an invalid type, it should be {msg}" error_msg = _parse_valid_types_from_validator(error.validator_value) elif error.validator == 'required': error_msg = ", ".join(error.validator_value) msg_format = "{path} is invalid, {msg} is required." 
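    # e.g. a mapping missing a mandatory key is reported roughly as
    # "<path> is invalid, <key> is required." (illustrative; the exact path
    # and keys depend on which schema constraint failed)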
elif error.validator == 'dependencies': config_key = list(error.validator_value.keys())[0] required_keys = ",".join(error.validator_value[config_key]) msg_format = "{path} is invalid: {msg}" path.append(config_key) error_msg = "when defining '{}' you must set '{}' as well".format( config_key, required_keys) elif error.cause: error_msg = six.text_type(error.cause) msg_format = "{path} is invalid: {msg}" elif error.path: msg_format = "{path} value {msg}" if msg_format: return msg_format.format(path=path_string(path), msg=error_msg) return error.message def parse_key_from_error_msg(error): return error.message.split("'")[1] def path_string(path): return ".".join(c for c in path if isinstance(c, six.string_types)) def _parse_valid_types_from_validator(validator): """A validator value can be either an array of valid types or a string of a valid type. Parse the valid types and prefix with the correct article. """ if not isinstance(validator, list): return anglicize_json_type(validator) if len(validator) == 1: return anglicize_json_type(validator[0]) return "{}, or {}".format( ", ".join([anglicize_json_type(validator[0])] + validator[1:-1]), anglicize_json_type(validator[-1])) def _parse_oneof_validator(error): """oneOf has multiple schemas, so we need to reason about which schema, sub schema or constraint the validation is failing on. Inspecting the context value of a ValidationError gives us information about which sub schema failed and which kind of error it is. """ types = [] for context in error.context: if context.validator == 'oneOf': _, error_msg = _parse_oneof_validator(context) return path_string(context.path), error_msg if context.validator == 'required': return (None, context.message) if context.validator == 'additionalProperties': invalid_config_key = parse_key_from_error_msg(context) return (None, "contains unsupported option: '{}'".format(invalid_config_key)) if context.path: return ( path_string(context.path), "contains {}, which is an invalid type, it should be {}".format( json.dumps(context.instance), _parse_valid_types_from_validator(context.validator_value)), ) if context.validator == 'uniqueItems': return ( None, "contains non unique items, please remove duplicates from {}".format( context.instance), ) if context.validator == 'type': types.append(context.validator_value) valid_types = _parse_valid_types_from_validator(types) return (None, "contains an invalid type, it should be {}".format(valid_types)) def process_service_constraint_errors(error, service_name, version): if version == V1: if 'image' in error.instance and 'build' in error.instance: return ( "Service {} has both an image and build path specified. " "A service can either be built to image or use an existing " "image, not both.".format(service_name)) if 'image' in error.instance and 'dockerfile' in error.instance: return ( "Service {} has both an image and alternate Dockerfile. " "A service can either be built to image or use an existing " "image, not both.".format(service_name)) if 'image' not in error.instance and 'build' not in error.instance: return ( "Service {} has neither an image nor a build context specified. 
" "At least one must be provided.".format(service_name)) def process_config_schema_errors(error): path = list(error.path) if 'id' in error.schema: error_msg = handle_error_for_schema_with_id(error, path) if error_msg: return error_msg return handle_generic_error(error, path) def validate_against_config_schema(config_file): schema = load_jsonschema(config_file.version) format_checker = FormatChecker(["ports", "expose"]) validator = Draft4Validator( schema, resolver=RefResolver(get_resolver_path(), schema), format_checker=format_checker) handle_errors( validator.iter_errors(config_file.config), process_config_schema_errors, config_file.filename) def validate_service_constraints(config, service_name, version): def handler(errors): return process_service_constraint_errors(errors, service_name, version) schema = load_jsonschema(version) validator = Draft4Validator(schema['definitions']['constraints']['service']) handle_errors(validator.iter_errors(config), handler, None) def get_schema_path(): return os.path.dirname(os.path.abspath(__file__)) def load_jsonschema(version): filename = os.path.join( get_schema_path(), "config_schema_v{0}.json".format(version)) with open(filename, "r") as fh: return json.load(fh) def get_resolver_path(): schema_path = get_schema_path() if sys.platform == "win32": scheme = "///" # TODO: why is this necessary? schema_path = schema_path.replace('\\', '/') else: scheme = "//" return "file:{}{}/".format(scheme, schema_path) def handle_errors(errors, format_error_func, filename): """jsonschema returns an error tree full of information to explain what has gone wrong. Process each error and pull out relevant information and re-write helpful error messages that are relevant. """ errors = list(sorted(errors, key=str)) if not errors: return error_msg = '\n'.join(format_error_func(error) for error in errors) raise ConfigurationError( "The Compose file{file_msg} is invalid because:\n{error_msg}".format( file_msg=" '{}'".format(filename) if filename else "", error_msg=error_msg)) compose-1.8.0/compose/const.py000066400000000000000000000014301274620702700163370ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import sys DEFAULT_TIMEOUT = 10 HTTP_TIMEOUT = 60 IMAGE_EVENTS = ['delete', 'import', 'pull', 'push', 'tag', 'untag'] IS_WINDOWS_PLATFORM = (sys.platform == "win32") LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number' LABEL_ONE_OFF = 'com.docker.compose.oneoff' LABEL_PROJECT = 'com.docker.compose.project' LABEL_SERVICE = 'com.docker.compose.service' LABEL_VERSION = 'com.docker.compose.version' LABEL_CONFIG_HASH = 'com.docker.compose.config-hash' COMPOSEFILE_V1 = '1' COMPOSEFILE_V2_0 = '2.0' API_VERSIONS = { COMPOSEFILE_V1: '1.21', COMPOSEFILE_V2_0: '1.22', } API_VERSION_TO_ENGINE_VERSION = { API_VERSIONS[COMPOSEFILE_V1]: '1.9.0', API_VERSIONS[COMPOSEFILE_V2_0]: '1.10.0' } compose-1.8.0/compose/container.py000066400000000000000000000171051274620702700172010ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from functools import reduce import six from .const import LABEL_CONTAINER_NUMBER from .const import LABEL_PROJECT from .const import LABEL_SERVICE class Container(object): """ Represents a Docker container, constructed from the output of GET /containers/:id:/json. 
""" def __init__(self, client, dictionary, has_been_inspected=False): self.client = client self.dictionary = dictionary self.has_been_inspected = has_been_inspected self.log_stream = None @classmethod def from_ps(cls, client, dictionary, **kwargs): """ Construct a container object from the output of GET /containers/json. """ name = get_container_name(dictionary) if name is None: return None new_dictionary = { 'Id': dictionary['Id'], 'Image': dictionary['Image'], 'Name': '/' + name, } return cls(client, new_dictionary, **kwargs) @classmethod def from_id(cls, client, id): return cls(client, client.inspect_container(id), has_been_inspected=True) @classmethod def create(cls, client, **options): response = client.create_container(**options) return cls.from_id(client, response['Id']) @property def id(self): return self.dictionary['Id'] @property def image(self): return self.dictionary['Image'] @property def image_config(self): return self.client.inspect_image(self.image) @property def short_id(self): return self.id[:12] @property def name(self): return self.dictionary['Name'][1:] @property def service(self): return self.labels.get(LABEL_SERVICE) @property def name_without_project(self): project = self.labels.get(LABEL_PROJECT) if self.name.startswith('{0}_{1}'.format(project, self.service)): return '{0}_{1}'.format(self.service, self.number) else: return self.name @property def number(self): number = self.labels.get(LABEL_CONTAINER_NUMBER) if not number: raise ValueError("Container {0} does not have a {1} label".format( self.short_id, LABEL_CONTAINER_NUMBER)) return int(number) @property def ports(self): self.inspect_if_not_inspected() return self.get('NetworkSettings.Ports') or {} @property def human_readable_ports(self): def format_port(private, public): if not public: return private return '{HostIp}:{HostPort}->{private}'.format( private=private, **public[0]) return ', '.join(format_port(*item) for item in sorted(six.iteritems(self.ports))) @property def labels(self): return self.get('Config.Labels') or {} @property def stop_signal(self): return self.get('Config.StopSignal') @property def log_config(self): return self.get('HostConfig.LogConfig') or None @property def human_readable_state(self): if self.is_paused: return 'Paused' if self.is_restarting: return 'Restarting' if self.is_running: return 'Ghost' if self.get('State.Ghost') else 'Up' else: return 'Exit %s' % self.get('State.ExitCode') @property def human_readable_command(self): entrypoint = self.get('Config.Entrypoint') or [] cmd = self.get('Config.Cmd') or [] return ' '.join(entrypoint + cmd) @property def environment(self): def parse_env(var): if '=' in var: return var.split("=", 1) return var, None return dict(parse_env(var) for var in self.get('Config.Env') or []) @property def exit_code(self): return self.get('State.ExitCode') @property def is_running(self): return self.get('State.Running') @property def is_restarting(self): return self.get('State.Restarting') @property def is_paused(self): return self.get('State.Paused') @property def log_driver(self): return self.get('HostConfig.LogConfig.Type') @property def has_api_logs(self): log_type = self.log_driver return not log_type or log_type != 'none' def attach_log_stream(self): """A log stream can only be attached if the container uses a json-file log driver. """ if self.has_api_logs: self.log_stream = self.attach(stdout=True, stderr=True, stream=True) def get(self, key): """Return a value from the container or None if the value is not set. 
        :param key: a string using dotted notation for nested dictionary lookups
        """
        self.inspect_if_not_inspected()

        def get_value(dictionary, key):
            return (dictionary or {}).get(key)

        return reduce(get_value, key.split('.'), self.dictionary)

    def get_local_port(self, port, protocol='tcp'):
        port = self.ports.get("%s/%s" % (port, protocol))
        return "{HostIp}:{HostPort}".format(**port[0]) if port else None

    def get_mount(self, mount_dest):
        for mount in self.get('Mounts'):
            if mount['Destination'] == mount_dest:
                return mount
        return None

    def start(self, **options):
        return self.client.start(self.id, **options)

    def stop(self, **options):
        return self.client.stop(self.id, **options)

    def pause(self, **options):
        return self.client.pause(self.id, **options)

    def unpause(self, **options):
        return self.client.unpause(self.id, **options)

    def kill(self, **options):
        return self.client.kill(self.id, **options)

    def restart(self, **options):
        return self.client.restart(self.id, **options)

    def remove(self, **options):
        return self.client.remove_container(self.id, **options)

    def create_exec(self, command, **options):
        return self.client.exec_create(self.id, command, **options)

    def start_exec(self, exec_id, **options):
        return self.client.exec_start(exec_id, **options)

    def rename_to_tmp_name(self):
        """Rename the container to a hopefully unique temporary container name
        by prepending the short id.
        """
        self.client.rename(
            self.id,
            '%s_%s' % (self.short_id, self.name)
        )

    def inspect_if_not_inspected(self):
        if not self.has_been_inspected:
            self.inspect()

    def wait(self):
        return self.client.wait(self.id)

    def logs(self, *args, **kwargs):
        return self.client.logs(self.id, *args, **kwargs)

    def inspect(self):
        self.dictionary = self.client.inspect_container(self.id)
        self.has_been_inspected = True
        return self.dictionary

    def attach(self, *args, **kwargs):
        return self.client.attach(self.id, *args, **kwargs)

    def __repr__(self):
        return '<Container: %s (%s)>' % (self.name, self.id[:6])

    def __eq__(self, other):
        if type(self) != type(other):
            return False
        return self.id == other.id

    def __hash__(self):
        return self.id.__hash__()


def get_container_name(container):
    if not container.get('Name') and not container.get('Names'):
        return None
    # inspect
    if 'Name' in container:
        return container['Name']
    # ps
    shortest_name = min(container['Names'], key=lambda n: len(n.split('/')))
    return shortest_name.split('/')[-1]
compose-1.8.0/compose/errors.py000066400000000000000000000002621274620702700165270ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals


class OperationFailedError(Exception):
    def __init__(self, reason):
        self.msg = reason
compose-1.8.0/compose/network.py000066400000000000000000000135521274620702700167120ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import logging

from docker.errors import NotFound
from docker.utils import create_ipam_config
from docker.utils import create_ipam_pool

from .config import ConfigurationError


log = logging.getLogger(__name__)


class Network(object):
    def __init__(self, client, project, name, driver=None, driver_opts=None,
                 ipam=None, external_name=None):
        self.client = client
        self.project = project
        self.name = name
        self.driver = driver
        self.driver_opts = driver_opts
        self.ipam = create_ipam_config_from_dict(ipam)
        self.external_name = external_name

    def ensure(self):
        if self.external_name:
            try:
                self.inspect()
                log.debug(
                    'Network {0} declared as external. 
No new ' 'network will be created.'.format(self.name) ) except NotFound: raise ConfigurationError( 'Network {name} declared as external, but could' ' not be found. Please create the network manually' ' using `{command} {name}` and try again.'.format( name=self.external_name, command='docker network create' ) ) return try: data = self.inspect() if self.driver and data['Driver'] != self.driver: raise ConfigurationError( 'Network "{}" needs to be recreated - driver has changed' .format(self.full_name)) if data['Options'] != (self.driver_opts or {}): raise ConfigurationError( 'Network "{}" needs to be recreated - options have changed' .format(self.full_name)) except NotFound: driver_name = 'the default driver' if self.driver: driver_name = 'driver "{}"'.format(self.driver) log.info( 'Creating network "{}" with {}' .format(self.full_name, driver_name) ) self.client.create_network( name=self.full_name, driver=self.driver, options=self.driver_opts, ipam=self.ipam, ) def remove(self): if self.external_name: log.info("Network %s is external, skipping", self.full_name) return log.info("Removing network {}".format(self.full_name)) self.client.remove_network(self.full_name) def inspect(self): return self.client.inspect_network(self.full_name) @property def full_name(self): if self.external_name: return self.external_name return '{0}_{1}'.format(self.project, self.name) def create_ipam_config_from_dict(ipam_dict): if not ipam_dict: return None return create_ipam_config( driver=ipam_dict.get('driver'), pool_configs=[ create_ipam_pool( subnet=config.get('subnet'), iprange=config.get('ip_range'), gateway=config.get('gateway'), aux_addresses=config.get('aux_addresses'), ) for config in ipam_dict.get('config', []) ], ) def build_networks(name, config_data, client): network_config = config_data.networks or {} networks = { network_name: Network( client=client, project=name, name=network_name, driver=data.get('driver'), driver_opts=data.get('driver_opts'), ipam=data.get('ipam'), external_name=data.get('external_name'), ) for network_name, data in network_config.items() } if 'default' not in networks: networks['default'] = Network(client, name, 'default') return networks class ProjectNetworks(object): def __init__(self, networks, use_networking): self.networks = networks or {} self.use_networking = use_networking @classmethod def from_services(cls, services, networks, use_networking): service_networks = { network: networks.get(network) for service in services for network in get_network_names_for_service(service) } unused = set(networks) - set(service_networks) - {'default'} if unused: log.warn( "Some networks were defined but are not used by any service: " "{}".format(", ".join(unused))) return cls(service_networks, use_networking) def remove(self): if not self.use_networking: return for network in self.networks.values(): try: network.remove() except NotFound: log.warn("Network %s not found.", network.full_name) def initialize(self): if not self.use_networking: return for network in self.networks.values(): network.ensure() def get_network_defs_for_service(service_dict): if 'network_mode' in service_dict: return {} networks = service_dict.get('networks', {'default': None}) return dict( (net, (config or {})) for net, config in networks.items() ) def get_network_names_for_service(service_dict): return get_network_defs_for_service(service_dict).keys() def get_networks(service_dict, network_definitions): networks = {} for name, netdef in get_network_defs_for_service(service_dict).items(): network = 
network_definitions.get(name)
        if network:
            networks[network.full_name] = netdef
        else:
            raise ConfigurationError(
                'Service "{}" uses an undefined network "{}"'
                .format(service_dict['name'], name))

    return networks
compose-1.8.0/compose/parallel.py000066400000000000000000000164141274620702700170150ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import logging
import operator
import sys
from threading import Thread

from docker.errors import APIError
from six.moves import _thread as thread
from six.moves.queue import Empty
from six.moves.queue import Queue

from compose.cli.signals import ShutdownException
from compose.errors import OperationFailedError
from compose.utils import get_output_stream


log = logging.getLogger(__name__)

STOP = object()


def parallel_execute(objects, func, get_name, msg, get_deps=None):
    """Runs func on objects in parallel while ensuring that func is run on
    an object only after it has been run on all its dependencies.

    get_deps called on object must return a collection with its dependencies.
    get_name called on object must return its name.
    """
    objects = list(objects)
    stream = get_output_stream(sys.stderr)

    writer = ParallelStreamWriter(stream, msg)
    for obj in objects:
        writer.initialize(get_name(obj))

    events = parallel_execute_iter(objects, func, get_deps)

    errors = {}
    results = []
    error_to_reraise = None

    for obj, result, exception in events:
        if exception is None:
            writer.write(get_name(obj), 'done')
            results.append(result)
        elif isinstance(exception, APIError):
            errors[get_name(obj)] = exception.explanation
            writer.write(get_name(obj), 'error')
        elif isinstance(exception, OperationFailedError):
            errors[get_name(obj)] = exception.msg
            writer.write(get_name(obj), 'error')
        elif isinstance(exception, UpstreamError):
            writer.write(get_name(obj), 'error')
        else:
            errors[get_name(obj)] = exception
            error_to_reraise = exception

    for obj_name, error in errors.items():
        stream.write("\nERROR: for {} {}\n".format(obj_name, error))

    if error_to_reraise:
        raise error_to_reraise

    return results, errors


def _no_deps(x):
    return []


class State(object):
    """
    Holds the state of a partially-complete parallel operation.

    state.started:  objects being processed
    state.finished: objects which have been processed
    state.failed:   objects which either failed or whose dependencies failed
    """
    def __init__(self, objects):
        self.objects = objects

        self.started = set()
        self.finished = set()
        self.failed = set()

    def is_done(self):
        return len(self.finished) + len(self.failed) >= len(self.objects)

    def pending(self):
        return set(self.objects) - self.started - self.finished - self.failed


def parallel_execute_iter(objects, func, get_deps):
    """
    Runs func on objects in parallel while ensuring that func is run on
    an object only after it has been run on all its dependencies.

    Returns an iterator of tuples which look like:

    # if func returned normally when run on object
    (object, result, None)

    # if func raised an exception when run on object
    (object, None, exception)

    # if func raised an exception when run on one of object's dependencies
    (object, None, UpstreamError())
    """
    if get_deps is None:
        get_deps = _no_deps

    results = Queue()
    state = State(objects)

    while True:
        feed_queue(objects, func, get_deps, results, state)

        try:
            event = results.get(timeout=0.1)
        except Empty:
            continue
        # See https://github.com/docker/compose/issues/189
        except thread.error:
            raise ShutdownException()

        if event is STOP:
            break

        obj, _, exception = event
        if exception is None:
            log.debug('Finished processing: {}'.format(obj))
            state.finished.add(obj)
        else:
            log.debug('Failed: {}'.format(obj))
            state.failed.add(obj)

        yield event


def producer(obj, func, results):
    """
    The entry point for a producer thread which runs func on a single object.
    Places a tuple on the results queue once func has either returned or raised.
    """
    try:
        result = func(obj)
        results.put((obj, result, None))
    except Exception as e:
        results.put((obj, None, e))


def feed_queue(objects, func, get_deps, results, state):
    """
    Starts producer threads for any objects which are ready to be processed
    (i.e. they have no dependencies which haven't been successfully processed).

    Shortcuts any objects whose dependencies have failed and places an
    (object, None, UpstreamError()) tuple on the results queue.
    """
    pending = state.pending()
    log.debug('Pending: {}'.format(pending))

    for obj in pending:
        deps = get_deps(obj)

        if any(dep in state.failed for dep in deps):
            log.debug('{} has upstream errors - not processing'.format(obj))
            results.put((obj, None, UpstreamError()))
            state.failed.add(obj)
        elif all(
            dep not in objects or dep in state.finished
            for dep in deps
        ):
            log.debug('Starting producer thread for {}'.format(obj))
            t = Thread(target=producer, args=(obj, func, results))
            t.daemon = True
            t.start()
            state.started.add(obj)

    if state.is_done():
        results.put(STOP)


class UpstreamError(Exception):
    pass


class ParallelStreamWriter(object):
    """Write out messages for operations happening in parallel.

    Each operation has its own line, and ANSI code characters are used
    to jump to the correct line, and write over the line.
    """

    def __init__(self, stream, msg):
        self.stream = stream
        self.msg = msg
        self.lines = []

    def initialize(self, obj_index):
        if self.msg is None:
            return
        self.lines.append(obj_index)
        self.stream.write("{} {} ... \r\n".format(self.msg, obj_index))
        self.stream.flush()

    def write(self, obj_index, status):
        if self.msg is None:
            return
        position = self.lines.index(obj_index)
        diff = len(self.lines) - position
        # move up
        self.stream.write("%c[%dA" % (27, diff))
        # erase
        self.stream.write("%c[2K\r" % 27)
        self.stream.write("{} {} ... 
{}\r".format(self.msg, obj_index, status)) # move back down self.stream.write("%c[%dB" % (27, diff)) self.stream.flush() def parallel_operation(containers, operation, options, message): parallel_execute( containers, operator.methodcaller(operation, **options), operator.attrgetter('name'), message) def parallel_remove(containers, options): stopped_containers = [c for c in containers if not c.is_running] parallel_operation(stopped_containers, 'remove', options, 'Removing') def parallel_start(containers, options): parallel_operation(containers, 'start', options, 'Starting') def parallel_pause(containers, options): parallel_operation(containers, 'pause', options, 'Pausing') def parallel_unpause(containers, options): parallel_operation(containers, 'unpause', options, 'Unpausing') def parallel_kill(containers, options): parallel_operation(containers, 'kill', options, 'Killing') def parallel_restart(containers, options): parallel_operation(containers, 'restart', options, 'Restarting') compose-1.8.0/compose/progress_stream.py000066400000000000000000000057131274620702700204400ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from compose import utils class StreamOutputError(Exception): pass def stream_output(output, stream): is_terminal = hasattr(stream, 'isatty') and stream.isatty() stream = utils.get_output_stream(stream) all_events = [] lines = {} diff = 0 for event in utils.json_stream(output): all_events.append(event) is_progress_event = 'progress' in event or 'progressDetail' in event if not is_progress_event: print_output_event(event, stream, is_terminal) stream.flush() continue if not is_terminal: continue # if it's a progress event and we have a terminal, then display the progress bars image_id = event.get('id') if not image_id: continue if image_id in lines: diff = len(lines) - lines[image_id] else: lines[image_id] = len(lines) stream.write("\n") diff = 0 # move cursor up `diff` rows stream.write("%c[%dA" % (27, diff)) print_output_event(event, stream, is_terminal) if 'id' in event: # move cursor back down stream.write("%c[%dB" % (27, diff)) stream.flush() return all_events def print_output_event(event, stream, is_terminal): if 'errorDetail' in event: raise StreamOutputError(event['errorDetail']['message']) terminator = '' if is_terminal and 'stream' not in event: # erase current line stream.write("%c[2K\r" % 27) terminator = "\r" elif 'progressDetail' in event: return if 'time' in event: stream.write("[%s] " % event['time']) if 'id' in event: stream.write("%s: " % event['id']) if 'from' in event: stream.write("(from %s) " % event['from']) status = event.get('status', '') if 'progress' in event: stream.write("%s %s%s" % (status, event['progress'], terminator)) elif 'progressDetail' in event: detail = event['progressDetail'] total = detail.get('total') if 'current' in detail and total: percentage = float(detail['current']) / float(total) * 100 stream.write('%s (%.1f%%)%s' % (status, percentage, terminator)) else: stream.write('%s%s' % (status, terminator)) elif 'stream' in event: stream.write("%s%s" % (event['stream'], terminator)) else: stream.write("%s%s\n" % (status, terminator)) def get_digest_from_pull(events): for event in events: status = event.get('status') if not status or 'Digest' not in status: continue _, digest = status.split(':', 1) return digest.strip() return None def get_digest_from_push(events): for event in events: digest = event.get('aux', {}).get('Digest') if digest: return digest return None 
compose-1.8.0/compose/project.py000066400000000000000000000464061274620702700166730ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import datetime import logging import operator from functools import reduce import enum from docker.errors import APIError from . import parallel from .config import ConfigurationError from .config.config import V1 from .config.sort_services import get_container_name_from_network_mode from .config.sort_services import get_service_name_from_network_mode from .const import DEFAULT_TIMEOUT from .const import IMAGE_EVENTS from .const import LABEL_ONE_OFF from .const import LABEL_PROJECT from .const import LABEL_SERVICE from .container import Container from .network import build_networks from .network import get_networks from .network import ProjectNetworks from .service import BuildAction from .service import ContainerNetworkMode from .service import ConvergenceStrategy from .service import NetworkMode from .service import Service from .service import ServiceNetworkMode from .utils import microseconds_from_time_nano from .volume import ProjectVolumes log = logging.getLogger(__name__) @enum.unique class OneOffFilter(enum.Enum): include = 0 exclude = 1 only = 2 @classmethod def update_labels(cls, value, labels): if value == cls.only: labels.append('{0}={1}'.format(LABEL_ONE_OFF, "True")) elif value == cls.exclude: labels.append('{0}={1}'.format(LABEL_ONE_OFF, "False")) elif value == cls.include: pass else: raise ValueError("Invalid value for one_off: {}".format(repr(value))) class Project(object): """ A collection of services. """ def __init__(self, name, services, client, networks=None, volumes=None): self.name = name self.services = services self.client = client self.volumes = volumes or ProjectVolumes({}) self.networks = networks or ProjectNetworks({}, False) def labels(self, one_off=OneOffFilter.exclude): labels = ['{0}={1}'.format(LABEL_PROJECT, self.name)] OneOffFilter.update_labels(one_off, labels) return labels @classmethod def from_config(cls, name, config_data, client): """ Construct a Project from a config.Config object. """ use_networking = (config_data.version and config_data.version != V1) networks = build_networks(name, config_data, client) project_networks = ProjectNetworks.from_services( config_data.services, networks, use_networking) volumes = ProjectVolumes.from_config(name, config_data, client) project = cls(name, [], client, project_networks, volumes) for service_dict in config_data.services: service_dict = dict(service_dict) if use_networking: service_networks = get_networks(service_dict, networks) else: service_networks = {} service_dict.pop('networks', None) links = project.get_links(service_dict) network_mode = project.get_network_mode( service_dict, list(service_networks.keys()) ) volumes_from = get_volumes_from(project, service_dict) if config_data.version != V1: service_dict['volumes'] = [ volumes.namespace_spec(volume_spec) for volume_spec in service_dict.get('volumes', []) ] project.services.append( Service( service_dict.pop('name'), client=client, project=name, use_networking=use_networking, networks=service_networks, links=links, network_mode=network_mode, volumes_from=volumes_from, **service_dict) ) return project @property def service_names(self): return [service.name for service in self.services] def get_service(self, name): """ Retrieve a service by name. Raises NoSuchService if the named service does not exist. 
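        For example (service name illustrative):

            web = project.get_service('web')
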
""" for service in self.services: if service.name == name: return service raise NoSuchService(name) def validate_service_names(self, service_names): """ Validate that the given list of service names only contains valid services. Raises NoSuchService if one of the names is invalid. """ valid_names = self.service_names for name in service_names: if name not in valid_names: raise NoSuchService(name) def get_services(self, service_names=None, include_deps=False): """ Returns a list of this project's services filtered by the provided list of names, or all services if service_names is None or []. If include_deps is specified, returns a list including the dependencies for service_names, in order of dependency. Preserves the original order of self.services where possible, reordering as needed to resolve dependencies. Raises NoSuchService if any of the named services do not exist. """ if service_names is None or len(service_names) == 0: service_names = self.service_names unsorted = [self.get_service(name) for name in service_names] services = [s for s in self.services if s in unsorted] if include_deps: services = reduce(self._inject_deps, services, []) uniques = [] [uniques.append(s) for s in services if s not in uniques] return uniques def get_services_without_duplicate(self, service_names=None, include_deps=False): services = self.get_services(service_names, include_deps) for service in services: service.remove_duplicate_containers() return services def get_links(self, service_dict): links = [] if 'links' in service_dict: for link in service_dict.get('links', []): if ':' in link: service_name, link_name = link.split(':', 1) else: service_name, link_name = link, None try: links.append((self.get_service(service_name), link_name)) except NoSuchService: raise ConfigurationError( 'Service "%s" has a link to service "%s" which does not ' 'exist.' 
% (service_dict['name'], service_name)) del service_dict['links'] return links def get_network_mode(self, service_dict, networks): network_mode = service_dict.pop('network_mode', None) if not network_mode: if self.networks.use_networking: return NetworkMode(networks[0]) if networks else NetworkMode('none') return NetworkMode(None) service_name = get_service_name_from_network_mode(network_mode) if service_name: return ServiceNetworkMode(self.get_service(service_name)) container_name = get_container_name_from_network_mode(network_mode) if container_name: try: return ContainerNetworkMode(Container.from_id(self.client, container_name)) except APIError: raise ConfigurationError( "Service '{name}' uses the network stack of container '{dep}' which " "does not exist.".format(name=service_dict['name'], dep=container_name)) return NetworkMode(network_mode) def start(self, service_names=None, **options): containers = [] def start_service(service): service_containers = service.start(quiet=True, **options) containers.extend(service_containers) services = self.get_services(service_names) def get_deps(service): return {self.get_service(dep) for dep in service.get_dependency_names()} parallel.parallel_execute( services, start_service, operator.attrgetter('name'), 'Starting', get_deps) return containers def stop(self, service_names=None, one_off=OneOffFilter.exclude, **options): containers = self.containers(service_names, one_off=one_off) def get_deps(container): # actually returning inversed dependencies return {other for other in containers if container.service in self.get_service(other.service).get_dependency_names()} parallel.parallel_execute( containers, operator.methodcaller('stop', **options), operator.attrgetter('name'), 'Stopping', get_deps) def pause(self, service_names=None, **options): containers = self.containers(service_names) parallel.parallel_pause(reversed(containers), options) return containers def unpause(self, service_names=None, **options): containers = self.containers(service_names) parallel.parallel_unpause(containers, options) return containers def kill(self, service_names=None, **options): parallel.parallel_kill(self.containers(service_names), options) def remove_stopped(self, service_names=None, one_off=OneOffFilter.exclude, **options): parallel.parallel_remove(self.containers( service_names, stopped=True, one_off=one_off ), options) def down(self, remove_image_type, include_volumes, remove_orphans=False): self.stop(one_off=OneOffFilter.include) self.find_orphan_containers(remove_orphans) self.remove_stopped(v=include_volumes, one_off=OneOffFilter.include) self.networks.remove() if include_volumes: self.volumes.remove() self.remove_images(remove_image_type) def remove_images(self, remove_image_type): for service in self.get_services(): service.remove_image(remove_image_type) def restart(self, service_names=None, **options): containers = self.containers(service_names, stopped=True) parallel.parallel_restart(containers, options) return containers def build(self, service_names=None, no_cache=False, pull=False, force_rm=False): for service in self.get_services(service_names): if service.can_be_built(): service.build(no_cache, pull, force_rm) else: log.info('%s uses an image, skipping' % service.name) def create( self, service_names=None, strategy=ConvergenceStrategy.changed, do_build=BuildAction.none, ): services = self.get_services_without_duplicate(service_names, include_deps=True) for svc in services: svc.ensure_image_exists(do_build=do_build) plans = 
self._get_convergence_plans(services, strategy)

        for service in services:
            service.execute_convergence_plan(
                plans[service.name],
                detached=True,
                start=False)

    def events(self, service_names=None):
        def build_container_event(event, container):
            time = datetime.datetime.fromtimestamp(event['time'])
            time = time.replace(
                microsecond=microseconds_from_time_nano(event['timeNano']))
            return {
                'time': time,
                'type': 'container',
                'action': event['status'],
                'id': container.id,
                'service': container.service,
                'attributes': {
                    'name': container.name,
                    'image': event['from'],
                },
                'container': container,
            }

        service_names = set(service_names or self.service_names)
        for event in self.client.events(
            filters={'label': self.labels()},
            decode=True
        ):
            # The first part of this condition is a guard against some events
            # broadcasted by swarm that don't have a status field.
            # See https://github.com/docker/compose/issues/3316
            if 'status' not in event or event['status'] in IMAGE_EVENTS:
                # We don't receive any image events because labels aren't applied
                # to images
                continue

            # TODO: get labels from the API v1.22, see github issue 2618
            try:
                # this can fail if the container has been removed
                container = Container.from_id(self.client, event['id'])
            except APIError:
                continue
            if container.service not in service_names:
                continue
            yield build_container_event(event, container)

    def up(self,
           service_names=None,
           start_deps=True,
           strategy=ConvergenceStrategy.changed,
           do_build=BuildAction.none,
           timeout=DEFAULT_TIMEOUT,
           detached=False,
           remove_orphans=False):

        warn_for_swarm_mode(self.client)

        self.initialize()
        self.find_orphan_containers(remove_orphans)

        services = self.get_services_without_duplicate(
            service_names,
            include_deps=start_deps)

        for svc in services:
            svc.ensure_image_exists(do_build=do_build)

        plans = self._get_convergence_plans(services, strategy)

        def do(service):
            return service.execute_convergence_plan(
                plans[service.name],
                timeout=timeout,
                detached=detached
            )

        def get_deps(service):
            return {self.get_service(dep) for dep in service.get_dependency_names()}

        results, errors = parallel.parallel_execute(
            services,
            do,
            operator.attrgetter('name'),
            None,
            get_deps
        )
        if errors:
            raise ProjectError(
                'Encountered errors while bringing up the project.'
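                # The per-service failure details were already printed by
                # parallel.parallel_execute() above; this just signals
                # overall failure to the caller.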
) return [ container for svc_containers in results if svc_containers is not None for container in svc_containers ] def initialize(self): self.networks.initialize() self.volumes.initialize() def _get_convergence_plans(self, services, strategy): plans = {} for service in services: updated_dependencies = [ name for name in service.get_dependency_names() if name in plans and plans[name].action in ('recreate', 'create') ] if updated_dependencies and strategy.allows_recreate: log.debug('%s has upstream changes (%s)', service.name, ", ".join(updated_dependencies)) plan = service.convergence_plan(ConvergenceStrategy.always) else: plan = service.convergence_plan(strategy) plans[service.name] = plan return plans def pull(self, service_names=None, ignore_pull_failures=False): for service in self.get_services(service_names, include_deps=False): service.pull(ignore_pull_failures) def push(self, service_names=None, ignore_push_failures=False): for service in self.get_services(service_names, include_deps=False): service.push(ignore_push_failures) def _labeled_containers(self, stopped=False, one_off=OneOffFilter.exclude): return list(filter(None, [ Container.from_ps(self.client, container) for container in self.client.containers( all=stopped, filters={'label': self.labels(one_off=one_off)})]) ) def containers(self, service_names=None, stopped=False, one_off=OneOffFilter.exclude): if service_names: self.validate_service_names(service_names) else: service_names = self.service_names containers = self._labeled_containers(stopped, one_off) def matches_service_names(container): return container.labels.get(LABEL_SERVICE) in service_names return [c for c in containers if matches_service_names(c)] def find_orphan_containers(self, remove_orphans): def _find(): containers = self._labeled_containers() for ctnr in containers: service_name = ctnr.labels.get(LABEL_SERVICE) if service_name not in self.service_names: yield ctnr orphans = list(_find()) if not orphans: return if remove_orphans: for ctnr in orphans: log.info('Removing orphan container "{0}"'.format(ctnr.name)) ctnr.kill() ctnr.remove(force=True) else: log.warning( 'Found orphan containers ({0}) for this project. 
If ' 'you removed or renamed this service in your compose ' 'file, you can run this command with the ' '--remove-orphans flag to clean it up.'.format( ', '.join(["{}".format(ctnr.name) for ctnr in orphans]) ) ) def _inject_deps(self, acc, service): dep_names = service.get_dependency_names() if len(dep_names) > 0: dep_services = self.get_services( service_names=list(set(dep_names)), include_deps=True ) else: dep_services = [] dep_services.append(service) return acc + dep_services def get_volumes_from(project, service_dict): volumes_from = service_dict.pop('volumes_from', None) if not volumes_from: return [] def build_volume_from(spec): if spec.type == 'service': try: return spec._replace(source=project.get_service(spec.source)) except NoSuchService: pass if spec.type == 'container': try: container = Container.from_id(project.client, spec.source) return spec._replace(source=container) except APIError: pass raise ConfigurationError( "Service \"{}\" mounts volumes from \"{}\", which is not the name " "of a service or container.".format( service_dict['name'], spec.source)) return [build_volume_from(vf) for vf in volumes_from] def warn_for_swarm_mode(client): info = client.info() if info.get('Swarm', {}).get('LocalNodeState') == 'active': log.warn( "The Docker Engine you're using is running in swarm mode.\n\n" "Compose does not use swarm mode to deploy services to multiple nodes in a swarm. " "All containers will be scheduled on the current node.\n\n" "To deploy your application across the swarm, " "use the bundle feature of the Docker experimental build.\n\n" "More info:\n" "https://docs.docker.com/compose/bundles\n" ) class NoSuchService(Exception): def __init__(self, name): self.name = name self.msg = "No such service: %s" % self.name def __str__(self): return self.msg class ProjectError(Exception): def __init__(self, msg): self.msg = msg compose-1.8.0/compose/service.py000066400000000000000000001071131274620702700166560ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging import re import sys from collections import namedtuple from operator import attrgetter import enum import six from docker.errors import APIError from docker.utils import LogConfig from docker.utils.ports import build_port_bindings from docker.utils.ports import split_port from . import __version__ from . 
import progress_stream
from .config import DOCKER_CONFIG_KEYS
from .config import merge_environment
from .config.types import VolumeSpec
from .const import DEFAULT_TIMEOUT
from .const import LABEL_CONFIG_HASH
from .const import LABEL_CONTAINER_NUMBER
from .const import LABEL_ONE_OFF
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE
from .const import LABEL_VERSION
from .container import Container
from .errors import OperationFailedError
from .parallel import parallel_execute
from .parallel import parallel_start
from .progress_stream import stream_output
from .progress_stream import StreamOutputError
from .utils import json_hash


log = logging.getLogger(__name__)


DOCKER_START_KEYS = [
    'cap_add',
    'cap_drop',
    'cgroup_parent',
    'cpu_quota',
    'devices',
    'dns',
    'dns_search',
    'env_file',
    'extra_hosts',
    'ipc',
    'read_only',
    'log_driver',
    'log_opt',
    'mem_limit',
    'memswap_limit',
    'pid',
    'privileged',
    'restart',
    'security_opt',
    'shm_size',
    'volumes_from',
]


class BuildError(Exception):
    def __init__(self, service, reason):
        self.service = service
        self.reason = reason


class NeedsBuildError(Exception):
    def __init__(self, service):
        self.service = service


class NoSuchImageError(Exception):
    pass


ServiceName = namedtuple('ServiceName', 'project service number')


ConvergencePlan = namedtuple('ConvergencePlan', 'action containers')


@enum.unique
class ConvergenceStrategy(enum.Enum):
    """Enumeration for all possible convergence strategies. Values refer to
    when containers should be recreated.
    """
    changed = 1
    always = 2
    never = 3

    @property
    def allows_recreate(self):
        return self is not type(self).never


@enum.unique
class ImageType(enum.Enum):
    """Enumeration for the types of images known to compose."""
    none = 0
    local = 1
    all = 2


@enum.unique
class BuildAction(enum.Enum):
    """Enumeration for the possible build actions."""
    none = 0
    force = 1
    skip = 2


class Service(object):
    def __init__(
        self,
        name,
        client=None,
        project='default',
        use_networking=False,
        links=None,
        volumes_from=None,
        network_mode=None,
        networks=None,
        **options
    ):
        self.name = name
        self.client = client
        self.project = project
        self.use_networking = use_networking
        self.links = links or []
        self.volumes_from = volumes_from or []
        self.network_mode = network_mode or NetworkMode(None)
        self.networks = networks or {}
        self.options = options

    def __repr__(self):
        return '<Service: {}>'.format(self.name)

    def containers(self, stopped=False, one_off=False, filters={}):
        filters.update({'label': self.labels(one_off=one_off)})

        return list(filter(None, [
            Container.from_ps(self.client, container)
            for container in self.client.containers(
                all=stopped,
                filters=filters)]))

    def get_container(self, number=1):
        """Return a :class:`compose.container.Container` for this service. The
        container must be active, and match `number`.
        """
        labels = self.labels() + ['{0}={1}'.format(LABEL_CONTAINER_NUMBER, number)]
        for container in self.client.containers(filters={'label': labels}):
            return Container.from_ps(self.client, container)

        raise ValueError("No container found for %s_%s" % (self.name, number))

    def start(self, **options):
        containers = self.containers(stopped=True)
        for c in containers:
            self.start_container_if_stopped(c, **options)
        return containers

    def scale(self, desired_num, timeout=DEFAULT_TIMEOUT):
        """
        Adjusts the number of containers to the specified number and ensures
        they are running.
- creates containers until there are at least `desired_num` - stops containers until there are at most `desired_num` running - starts containers until there are at least `desired_num` running - removes all stopped containers """ if self.custom_container_name and desired_num > 1: log.warn('The "%s" service is using the custom container name "%s". ' 'Docker requires each container to have a unique name. ' 'Remove the custom name to scale the service.' % (self.name, self.custom_container_name)) if self.specifies_host_port() and desired_num > 1: log.warn('The "%s" service specifies a port on the host. If multiple containers ' 'for this service are created on a single host, the port will clash.' % self.name) def create_and_start(service, number): container = service.create_container(number=number, quiet=True) service.start_container(container) return container def stop_and_remove(container): container.stop(timeout=timeout) container.remove() running_containers = self.containers(stopped=False) num_running = len(running_containers) if desired_num == num_running: # do nothing as we already have the desired number log.info('Desired container number already achieved') return if desired_num > num_running: # we need to start/create until we have desired_num all_containers = self.containers(stopped=True) if num_running != len(all_containers): # we have some stopped containers, let's start them up again stopped_containers = sorted( (c for c in all_containers if not c.is_running), key=attrgetter('number')) num_stopped = len(stopped_containers) if num_stopped + num_running > desired_num: num_to_start = desired_num - num_running containers_to_start = stopped_containers[:num_to_start] else: containers_to_start = stopped_containers parallel_start(containers_to_start, {}) num_running += len(containers_to_start) num_to_create = desired_num - num_running next_number = self._next_container_number() container_numbers = [ number for number in range( next_number, next_number + num_to_create ) ] parallel_execute( container_numbers, lambda n: create_and_start(service=self, number=n), lambda n: self.get_container_name(n), "Creating and starting" ) if desired_num < num_running: num_to_stop = num_running - desired_num sorted_running_containers = sorted( running_containers, key=attrgetter('number')) parallel_execute( sorted_running_containers[-num_to_stop:], stop_and_remove, lambda c: c.name, "Stopping and removing", ) def create_container(self, one_off=False, previous_container=None, number=None, quiet=False, **override_options): """ Create a container for this service. If the image doesn't exist, attempt to pull it. """ # This is only necessary for `scale` and `volumes_from` # auto-creating containers to satisfy the dependency. 
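        # ensure_image_exists() (defined below) pulls or builds the image if
        # it is not present locally; Project.up() normally does this for
        # every service before any containers are created.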
self.ensure_image_exists() container_options = self._get_container_create_options( override_options, number or self._next_container_number(one_off=one_off), one_off=one_off, previous_container=previous_container, ) if 'name' in container_options and not quiet: log.info("Creating %s" % container_options['name']) try: return Container.create(self.client, **container_options) except APIError as ex: raise OperationFailedError("Cannot create container for service %s: %s" % (self.name, ex.explanation)) def ensure_image_exists(self, do_build=BuildAction.none): if self.can_be_built() and do_build == BuildAction.force: self.build() return try: self.image() return except NoSuchImageError: pass if not self.can_be_built(): self.pull() return if do_build == BuildAction.skip: raise NeedsBuildError(self) self.build() log.warn( "Image for service {} was built because it did not already exist. To " "rebuild this image you must use `docker-compose build` or " "`docker-compose up --build`.".format(self.name)) def image(self): try: return self.client.inspect_image(self.image_name) except APIError as e: if e.response.status_code == 404 and e.explanation and 'No such image' in str(e.explanation): raise NoSuchImageError("Image '{}' not found".format(self.image_name)) else: raise @property def image_name(self): return self.options.get('image', '{s.project}_{s.name}'.format(s=self)) def convergence_plan(self, strategy=ConvergenceStrategy.changed): containers = self.containers(stopped=True) if not containers: return ConvergencePlan('create', []) if strategy is ConvergenceStrategy.never: return ConvergencePlan('start', containers) if ( strategy is ConvergenceStrategy.always or self._containers_have_diverged(containers) ): return ConvergencePlan('recreate', containers) stopped = [c for c in containers if not c.is_running] if stopped: return ConvergencePlan('start', stopped) return ConvergencePlan('noop', containers) def _containers_have_diverged(self, containers): config_hash = None try: config_hash = self.config_hash except NoSuchImageError as e: log.debug( 'Service %s has diverged: %s', self.name, six.text_type(e), ) return True has_diverged = False for c in containers: container_config_hash = c.labels.get(LABEL_CONFIG_HASH, None) if container_config_hash != config_hash: log.debug( '%s has diverged: %s != %s', c.name, container_config_hash, config_hash, ) has_diverged = True return has_diverged def execute_convergence_plan(self, plan, timeout=DEFAULT_TIMEOUT, detached=False, start=True): (action, containers) = plan should_attach_logs = not detached if action == 'create': container = self.create_container() if should_attach_logs: container.attach_log_stream() if start: self.start_container(container) return [container] elif action == 'recreate': return [ self.recreate_container( container, timeout=timeout, attach_logs=should_attach_logs, start_new_container=start ) for container in containers ] elif action == 'start': if start: for container in containers: self.start_container_if_stopped(container, attach_logs=should_attach_logs) return containers elif action == 'noop': for c in containers: log.info("%s is up-to-date" % c.name) return containers else: raise Exception("Invalid action: {}".format(action)) def recreate_container( self, container, timeout=DEFAULT_TIMEOUT, attach_logs=False, start_new_container=True): """Recreate a container. The original container is renamed to a temporary name so that data volumes can be copied to the new container, before the original container is removed. 
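
        The sequence below is: stop -> rename to a temporary name (prefixed
        with the short id) -> create the replacement -> optionally start it
        -> remove the original container.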
""" log.info("Recreating %s" % container.name) container.stop(timeout=timeout) container.rename_to_tmp_name() new_container = self.create_container( previous_container=container, number=container.labels.get(LABEL_CONTAINER_NUMBER), quiet=True, ) if attach_logs: new_container.attach_log_stream() if start_new_container: self.start_container(new_container) container.remove() return new_container def start_container_if_stopped(self, container, attach_logs=False, quiet=False): if not container.is_running: if not quiet: log.info("Starting %s" % container.name) if attach_logs: container.attach_log_stream() return self.start_container(container) def start_container(self, container): self.connect_container_to_networks(container) try: container.start() except APIError as ex: raise OperationFailedError("Cannot start service %s: %s" % (self.name, ex.explanation)) return container def connect_container_to_networks(self, container): connected_networks = container.get('NetworkSettings.Networks') for network, netdefs in self.networks.items(): if network in connected_networks: if short_id_alias_exists(container, network): continue self.client.disconnect_container_from_network( container.id, network) self.client.connect_container_to_network( container.id, network, aliases=self._get_aliases(netdefs, container), ipv4_address=netdefs.get('ipv4_address', None), ipv6_address=netdefs.get('ipv6_address', None), links=self._get_links(False)) def remove_duplicate_containers(self, timeout=DEFAULT_TIMEOUT): for c in self.duplicate_containers(): log.info('Removing %s' % c.name) c.stop(timeout=timeout) c.remove() def duplicate_containers(self): containers = sorted( self.containers(stopped=True), key=lambda c: c.get('Created'), ) numbers = set() for c in containers: if c.number in numbers: yield c else: numbers.add(c.number) @property def config_hash(self): return json_hash(self.config_dict()) def config_dict(self): return { 'options': self.options, 'image_id': self.image()['Id'], 'links': self.get_link_names(), 'net': self.network_mode.id, 'networks': self.networks, 'volumes_from': [ (v.source.name, v.mode) for v in self.volumes_from if isinstance(v.source, Service) ], } def get_dependency_names(self): net_name = self.network_mode.service_name return (self.get_linked_service_names() + self.get_volumes_from_names() + ([net_name] if net_name else []) + self.options.get('depends_on', [])) def get_linked_service_names(self): return [service.name for (service, _) in self.links] def get_link_names(self): return [(service.name, alias) for service, alias in self.links] def get_volumes_from_names(self): return [s.source.name for s in self.volumes_from if isinstance(s.source, Service)] # TODO: this would benefit from github.com/docker/docker/pull/14699 # to remove the need to inspect every container def _next_container_number(self, one_off=False): containers = filter(None, [ Container.from_ps(self.client, container) for container in self.client.containers( all=True, filters={'label': self.labels(one_off=one_off)}) ]) numbers = [c.number for c in containers] return 1 if not numbers else max(numbers) + 1 def _get_aliases(self, network, container=None): if container and container.labels.get(LABEL_ONE_OFF) == "True": return [] return list( {self.name} | ({container.short_id} if container else set()) | set(network.get('aliases', ())) ) def build_default_networking_config(self): if not self.networks: return {} network = self.networks[self.network_mode.id] endpoint = { 'Aliases': self._get_aliases(network), 'IPAMConfig': {}, } if 
network.get('ipv4_address'): endpoint['IPAMConfig']['IPv4Address'] = network.get('ipv4_address') if network.get('ipv6_address'): endpoint['IPAMConfig']['IPv6Address'] = network.get('ipv6_address') return {"EndpointsConfig": {self.network_mode.id: endpoint}} def _get_links(self, link_to_self): links = {} for service, link_name in self.links: for container in service.containers(): links[link_name or service.name] = container.name links[container.name] = container.name links[container.name_without_project] = container.name if link_to_self: for container in self.containers(): links[self.name] = container.name links[container.name] = container.name links[container.name_without_project] = container.name for external_link in self.options.get('external_links') or []: if ':' not in external_link: link_name = external_link else: external_link, link_name = external_link.split(':') links[link_name] = external_link return [ (alias, container_name) for (container_name, alias) in links.items() ] def _get_volumes_from(self): return [build_volume_from(spec) for spec in self.volumes_from] def _get_container_create_options( self, override_options, number, one_off=False, previous_container=None): add_config_hash = (not one_off and not override_options) container_options = dict( (k, self.options[k]) for k in DOCKER_CONFIG_KEYS if k in self.options) container_options.update(override_options) if not container_options.get('name'): container_options['name'] = self.get_container_name(number, one_off) container_options.setdefault('detach', True) # If a qualified hostname was given, split it into an # unqualified hostname and a domainname unless domainname # was also given explicitly. This matches the behavior of # the official Docker CLI in that scenario. if ('hostname' in container_options and 'domainname' not in container_options and '.' 
in container_options['hostname']): parts = container_options['hostname'].partition('.') container_options['hostname'] = parts[0] container_options['domainname'] = parts[2] if 'ports' in container_options or 'expose' in self.options: container_options['ports'] = build_container_ports( container_options, self.options) container_options['environment'] = merge_environment( self.options.get('environment'), override_options.get('environment')) binds, affinity = merge_volume_bindings( container_options.get('volumes') or [], previous_container) override_options['binds'] = binds container_options['environment'].update(affinity) if 'volumes' in container_options: container_options['volumes'] = dict( (v.internal, {}) for v in container_options['volumes']) container_options['image'] = self.image_name container_options['labels'] = build_container_labels( container_options.get('labels', {}), self.labels(one_off=one_off), number, self.config_hash if add_config_hash else None) # Delete options which are only used when starting for key in DOCKER_START_KEYS: container_options.pop(key, None) container_options['host_config'] = self._get_container_host_config( override_options, one_off=one_off) networking_config = self.build_default_networking_config() if networking_config: container_options['networking_config'] = networking_config container_options['environment'] = format_environment( container_options['environment']) return container_options def _get_container_host_config(self, override_options, one_off=False): options = dict(self.options, **override_options) logging_dict = options.get('logging', None) log_config = get_log_config(logging_dict) return self.client.create_host_config( links=self._get_links(link_to_self=one_off), port_bindings=build_port_bindings(options.get('ports') or []), binds=options.get('binds'), volumes_from=self._get_volumes_from(), privileged=options.get('privileged', False), network_mode=self.network_mode.mode, devices=options.get('devices'), dns=options.get('dns'), dns_search=options.get('dns_search'), restart_policy=options.get('restart'), cap_add=options.get('cap_add'), cap_drop=options.get('cap_drop'), mem_limit=options.get('mem_limit'), memswap_limit=options.get('memswap_limit'), ulimits=build_ulimits(options.get('ulimits')), log_config=log_config, extra_hosts=options.get('extra_hosts'), read_only=options.get('read_only'), pid_mode=options.get('pid'), security_opt=options.get('security_opt'), ipc_mode=options.get('ipc'), cgroup_parent=options.get('cgroup_parent'), cpu_quota=options.get('cpu_quota'), shm_size=options.get('shm_size'), tmpfs=options.get('tmpfs'), ) def build(self, no_cache=False, pull=False, force_rm=False): log.info('Building %s' % self.name) build_opts = self.options.get('build', {}) path = build_opts.get('context') # python2 os.path() doesn't support unicode, so we need to encode it to # a byte string if not six.PY3: path = path.encode('utf8') build_output = self.client.build( path=path, tag=self.image_name, stream=True, rm=True, forcerm=force_rm, pull=pull, nocache=no_cache, dockerfile=build_opts.get('dockerfile', None), buildargs=build_opts.get('args', None), ) try: all_events = stream_output(build_output, sys.stdout) except StreamOutputError as e: raise BuildError(self, six.text_type(e)) # Ensure the HTTP connection is not reused for another # streaming command, as the Docker daemon can sometimes # complain about it self.client.close() image_id = None for event in all_events: if 'stream' in event: match = re.search(r'Successfully built ([0-9a-f]+)', 
event.get('stream', '')) if match: image_id = match.group(1) if image_id is None: raise BuildError(self, event if all_events else 'Unknown') return image_id def can_be_built(self): return 'build' in self.options def labels(self, one_off=False): return [ '{0}={1}'.format(LABEL_PROJECT, self.project), '{0}={1}'.format(LABEL_SERVICE, self.name), '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False") ] @property def custom_container_name(self): return self.options.get('container_name') def get_container_name(self, number, one_off=False): if self.custom_container_name and not one_off: return self.custom_container_name return build_container_name(self.project, self.name, number, one_off) def remove_image(self, image_type): if not image_type or image_type == ImageType.none: return False if image_type == ImageType.local and self.options.get('image'): return False log.info("Removing image %s", self.image_name) try: self.client.remove_image(self.image_name) return True except APIError as e: log.error("Failed to remove image for service %s: %s", self.name, e) return False def specifies_host_port(self): def has_host_port(binding): _, external_bindings = split_port(binding) # there are no external bindings if external_bindings is None: return False # we only need to check the first binding from the range external_binding = external_bindings[0] # non-tuple binding means there is a host port specified if not isinstance(external_binding, tuple): return True # extract actual host port from tuple of (host_ip, host_port) _, host_port = external_binding if host_port is not None: return True return False return any(has_host_port(binding) for binding in self.options.get('ports', [])) def pull(self, ignore_pull_failures=False): if 'image' not in self.options: return repo, tag, separator = parse_repository_tag(self.options['image']) tag = tag or 'latest' log.info('Pulling %s (%s%s%s)...' % (self.name, repo, separator, tag)) output = self.client.pull(repo, tag=tag, stream=True) try: return progress_stream.get_digest_from_pull( stream_output(output, sys.stdout)) except StreamOutputError as e: if not ignore_pull_failures: raise else: log.error(six.text_type(e)) def push(self, ignore_push_failures=False): if 'image' not in self.options or 'build' not in self.options: return repo, tag, separator = parse_repository_tag(self.options['image']) tag = tag or 'latest' log.info('Pushing %s (%s%s%s)...' 
% (self.name, repo, separator, tag)) output = self.client.push(repo, tag=tag, stream=True) try: return progress_stream.get_digest_from_push( stream_output(output, sys.stdout)) except StreamOutputError as e: if not ignore_push_failures: raise else: log.error(six.text_type(e)) def short_id_alias_exists(container, network): aliases = container.get( 'NetworkSettings.Networks.{net}.Aliases'.format(net=network)) or () return container.short_id in aliases class NetworkMode(object): """A `standard` network mode (ex: host, bridge)""" service_name = None def __init__(self, network_mode): self.network_mode = network_mode @property def id(self): return self.network_mode mode = id class ContainerNetworkMode(object): """A network mode that uses a container's network stack.""" service_name = None def __init__(self, container): self.container = container @property def id(self): return self.container.id @property def mode(self): return 'container:' + self.container.id class ServiceNetworkMode(object): """A network mode that uses a service's network stack.""" def __init__(self, service): self.service = service @property def id(self): return self.service.name service_name = id @property def mode(self): containers = self.service.containers() if containers: return 'container:' + containers[0].id log.warn("Service %s is trying to reuse the network stack " "of another service that is not running." % (self.id)) return None # Names def build_container_name(project, service, number, one_off=False): bits = [project, service] if one_off: bits.append('run') return '_'.join(bits + [str(number)]) # Images def parse_repository_tag(repo_path): """Splits image identification into base image path, tag/digest and its separator. Example: >>> parse_repository_tag('user/repo@sha256:digest') ('user/repo', 'sha256:digest', '@') >>> parse_repository_tag('user/repo:v1') ('user/repo', 'v1', ':') """ tag_separator = ":" digest_separator = "@" if digest_separator in repo_path: repo, tag = repo_path.rsplit(digest_separator, 1) return repo, tag, digest_separator repo, tag = repo_path, "" if tag_separator in repo_path: repo, tag = repo_path.rsplit(tag_separator, 1) if "/" in tag: repo, tag = repo_path, "" return repo, tag, tag_separator # Volumes def merge_volume_bindings(volumes, previous_container): """Return a list of volume bindings for a container. Container data volumes are replaced by those from the previous container. """ affinity = {} volume_bindings = dict( build_volume_binding(volume) for volume in volumes if volume.external) if previous_container: old_volumes = get_container_data_volumes(previous_container, volumes) warn_on_masked_volume(volumes, old_volumes, previous_container.service) volume_bindings.update( build_volume_binding(volume) for volume in old_volumes) if old_volumes: affinity = {'affinity:container': '=' + previous_container.id} return list(volume_bindings.values()), affinity def get_container_data_volumes(container, volumes_option): """Find the container data volumes that are in `volumes_option`, and return a mapping of volume bindings for those volumes.
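Only mounts that are backed by a Docker-managed volume in the old container (i.e. mounts that carry a `Name`) are carried over; host bind mounts, and volumes that did not exist on the previous container, are skipped.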
""" volumes = [] volumes_option = volumes_option or [] container_mounts = dict( (mount['Destination'], mount) for mount in container.get('Mounts') or {} ) image_volumes = [ VolumeSpec.parse(volume) for volume in container.image_config['ContainerConfig'].get('Volumes') or {} ] for volume in set(volumes_option + image_volumes): # No need to preserve host volumes if volume.external: continue mount = container_mounts.get(volume.internal) # New volume, doesn't exist in the old container if not mount: continue # Volume was previously a host volume, now it's a container volume if not mount.get('Name'): continue # Copy existing volume from old container volume = volume._replace(external=mount['Name']) volumes.append(volume) return volumes def warn_on_masked_volume(volumes_option, container_volumes, service): container_volumes = dict( (volume.internal, volume.external) for volume in container_volumes) for volume in volumes_option: if ( volume.external and volume.internal in container_volumes and container_volumes.get(volume.internal) != volume.external ): log.warn(( "Service \"{service}\" is using volume \"{volume}\" from the " "previous container. Host mapping \"{host_path}\" has no effect. " "Remove the existing containers (with `docker-compose rm {service}`) " "to use the host volume mapping." ).format( service=service, volume=volume.internal, host_path=volume.external)) def build_volume_binding(volume_spec): return volume_spec.internal, volume_spec.repr() def build_volume_from(volume_from_spec): """ volume_from can be either a service or a container. We want to return the container.id and format it into a string complete with the mode. """ if isinstance(volume_from_spec.source, Service): containers = volume_from_spec.source.containers(stopped=True) if not containers: return "{}:{}".format( volume_from_spec.source.create_container().id, volume_from_spec.mode) container = containers[0] return "{}:{}".format(container.id, volume_from_spec.mode) elif isinstance(volume_from_spec.source, Container): return "{}:{}".format(volume_from_spec.source.id, volume_from_spec.mode) # Labels def build_container_labels(label_options, service_labels, number, config_hash): labels = dict(label_options or {}) labels.update(label.split('=', 1) for label in service_labels) labels[LABEL_CONTAINER_NUMBER] = str(number) labels[LABEL_VERSION] = __version__ if config_hash: log.debug("Added config hash: %s" % config_hash) labels[LABEL_CONFIG_HASH] = config_hash return labels # Ulimits def build_ulimits(ulimit_config): if not ulimit_config: return None ulimits = [] for limit_name, soft_hard_values in six.iteritems(ulimit_config): if isinstance(soft_hard_values, six.integer_types): ulimits.append({'name': limit_name, 'soft': soft_hard_values, 'hard': soft_hard_values}) elif isinstance(soft_hard_values, dict): ulimit_dict = {'name': limit_name} ulimit_dict.update(soft_hard_values) ulimits.append(ulimit_dict) return ulimits def get_log_config(logging_dict): log_driver = logging_dict.get('driver', "") if logging_dict else "" log_options = logging_dict.get('options', None) if logging_dict else None return LogConfig( type=log_driver, config=log_options ) # TODO: remove once fix is available in docker-py def format_environment(environment): def format_env(key, value): if value is None: return key return '{key}={value}'.format(key=key, value=value) return [format_env(*item) for item in environment.items()] # Ports def build_container_ports(container_options, options): ports = [] all_ports = container_options.get('ports', []) + 
options.get('expose', []) for port_range in all_ports: internal_range, _ = split_port(port_range) for port in internal_range: port = str(port) if '/' in port: port = tuple(port.split('/')) ports.append(port) return ports compose-1.8.0/compose/state.py000066400000000000000000000000001274620702700163210ustar00rootroot00000000000000compose-1.8.0/compose/utils.py000066400000000000000000000052031274620702700163530ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import codecs import hashlib import json import json.decoder import six json_decoder = json.JSONDecoder() def get_output_stream(stream): if six.PY3: return stream return codecs.getwriter('utf-8')(stream) def stream_as_text(stream): """Given a stream of bytes or text, if any of the items in the stream are bytes convert them to text. This function can be removed once docker-py returns text streams instead of byte streams. """ for data in stream: if not isinstance(data, six.text_type): data = data.decode('utf-8', 'replace') yield data def line_splitter(buffer, separator=u'\n'): index = buffer.find(six.text_type(separator)) if index == -1: return None return buffer[:index + 1], buffer[index + 1:] def split_buffer(stream, splitter=None, decoder=lambda a: a): """Given a generator which yields strings and a splitter function, joins all input, splits on the separator and yields each chunk. Unlike string.split(), each chunk includes the trailing separator, except for the last one if none was found on the end of the input. """ splitter = splitter or line_splitter buffered = six.text_type('') for data in stream_as_text(stream): buffered += data while True: buffer_split = splitter(buffered) if buffer_split is None: break item, buffered = buffer_split yield item if buffered: yield decoder(buffered) def json_splitter(buffer): """Attempt to parse a json object from a buffer. If there is at least one object, return it and the rest of the buffer, otherwise return None. """ try: obj, index = json_decoder.raw_decode(buffer) rest = buffer[json.decoder.WHITESPACE.match(buffer, index).end():] return obj, rest except ValueError: return None def json_stream(stream): """Given a stream of text, return a stream of json objects. This handles streams which are inconsistently buffered (some entries may be newline delimited, and others are not). 
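For example, two JSON documents that arrive concatenated in a single chunk are still yielded as separate objects: list(json_stream(iter(['{"a": 1}{"b": 2}']))) == [{'a': 1}, {'b': 2}]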
""" return split_buffer(stream, json_splitter, json_decoder.decode) def json_hash(obj): dump = json.dumps(obj, sort_keys=True, separators=(',', ':')) h = hashlib.sha256() h.update(dump.encode('utf8')) return h.hexdigest() def microseconds_from_time_nano(time_nano): return int(time_nano % 1000000000 / 1000) def build_string_dict(source_dict): return dict((k, str(v if v is not None else '')) for k, v in source_dict.items()) compose-1.8.0/compose/volume.py000066400000000000000000000110401274620702700165160ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging from docker.errors import NotFound from .config import ConfigurationError log = logging.getLogger(__name__) class Volume(object): def __init__(self, client, project, name, driver=None, driver_opts=None, external_name=None): self.client = client self.project = project self.name = name self.driver = driver self.driver_opts = driver_opts self.external_name = external_name def create(self): return self.client.create_volume( self.full_name, self.driver, self.driver_opts ) def remove(self): if self.external: log.info("Volume %s is external, skipping", self.full_name) return log.info("Removing volume %s", self.full_name) return self.client.remove_volume(self.full_name) def inspect(self): return self.client.inspect_volume(self.full_name) def exists(self): try: self.inspect() except NotFound: return False return True @property def external(self): return bool(self.external_name) @property def full_name(self): if self.external_name: return self.external_name return '{0}_{1}'.format(self.project, self.name) class ProjectVolumes(object): def __init__(self, volumes): self.volumes = volumes @classmethod def from_config(cls, name, config_data, client): config_volumes = config_data.volumes or {} volumes = { vol_name: Volume( client=client, project=name, name=vol_name, driver=data.get('driver'), driver_opts=data.get('driver_opts'), external_name=data.get('external_name') ) for vol_name, data in config_volumes.items() } return cls(volumes) def remove(self): for volume in self.volumes.values(): try: volume.remove() except NotFound: log.warn("Volume %s not found.", volume.full_name) def initialize(self): try: for volume in self.volumes.values(): volume_exists = volume.exists() if volume.external: log.debug( 'Volume {0} declared as external. No new ' 'volume will be created.'.format(volume.name) ) if not volume_exists: raise ConfigurationError( 'Volume {name} declared as external, but could' ' not be found. Please create the volume manually' ' using `{command}{name}` and try again.'.format( name=volume.full_name, command='docker volume create --name=' ) ) continue if not volume_exists: log.info( 'Creating volume "{0}" with {1} driver'.format( volume.full_name, volume.driver or 'default' ) ) volume.create() else: driver = volume.inspect()['Driver'] if volume.driver is not None and driver != volume.driver: raise ConfigurationError( 'Configuration for volume {0} specifies driver ' '{1}, but a volume with the same name uses a ' 'different driver ({3}). 
If you wish to use the ' 'new configuration, please remove the existing ' 'volume "{2}" first:\n' '$ docker volume rm {2}'.format( volume.name, volume.driver, volume.full_name, volume.inspect()['Driver'] ) ) except NotFound: raise ConfigurationError( 'Volume %s specifies nonexistent driver %s' % (volume.name, volume.driver) ) def namespace_spec(self, volume_spec): if not volume_spec.is_named_volume: return volume_spec volume = self.volumes[volume_spec.external] return volume_spec._replace(external=volume.full_name) compose-1.8.0/contrib/000077500000000000000000000000001274620702700146345ustar00rootroot00000000000000compose-1.8.0/contrib/completion/000077500000000000000000000000001274620702700170055ustar00rootroot00000000000000compose-1.8.0/contrib/completion/bash/000077500000000000000000000000001274620702700177225ustar00rootroot00000000000000compose-1.8.0/contrib/completion/bash/docker-compose000066400000000000000000000244721274620702700225700ustar00rootroot00000000000000#!bash # # bash completion for docker-compose # # This work is based on the completion for the docker command. # # This script provides completion of: # - commands and their options # - service names # - filepaths # # To enable the completions either: # - place this file in /etc/bash_completion.d # or # - copy this file to e.g. ~/.docker-compose-completion.sh and add the line # below to your .bashrc after bash completion features are loaded # . ~/.docker-compose-completion.sh __docker_compose_q() { docker-compose 2>/dev/null $daemon_options "$@" } # Transforms a multiline list of strings into a single line string # with the words separated by "|". __docker_compose_to_alternatives() { local parts=( $1 ) local IFS='|' echo "${parts[*]}" } # Transforms a multiline list of options into an extglob pattern # suitable for use in case statements. __docker_compose_to_extglob() { local extglob=$( __docker_compose_to_alternatives "$1" ) echo "@($extglob)" } # suppress trailing whitespace __docker_compose_nospace() { # compopt is not available in ancient bash versions type compopt &>/dev/null && compopt -o nospace } # Extracts all service names from the compose file. ___docker_compose_all_services_in_compose_file() { __docker_compose_q config --services } # All services, even those without an existing container __docker_compose_services_all() { COMPREPLY=( $(compgen -W "$(___docker_compose_all_services_in_compose_file)" -- "$cur") ) } # All services that have an entry with the given key in their compose_file section ___docker_compose_services_with_key() { # flatten sections under "services" to one line, then filter lines containing the key and return section name __docker_compose_q config \ | sed -n -e '/^services:/,/^[^ ]/p' \ | sed -n 's/^ //p' \ | awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' \ | awk -F: -v key=": +$1:" '$0 ~ key {print $1}' } # All services that are defined by a Dockerfile reference __docker_compose_services_from_build() { COMPREPLY=( $(compgen -W "$(___docker_compose_services_with_key build)" -- "$cur") ) } # All services that are defined by an image __docker_compose_services_from_image() { COMPREPLY=( $(compgen -W "$(___docker_compose_services_with_key image)" -- "$cur") ) } # The services for which containers have been created, optionally filtered # by a boolean expression passed in as argument. 
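# For example, `__docker_compose_services_with '.State.Running'` restricts
# completions to services that currently have at least one running container.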
__docker_compose_services_with() { local containers names containers="$(__docker_compose_q ps -q)" names=$(docker 2>/dev/null inspect -f "{{if ${1:-true}}}{{range \$k, \$v := .Config.Labels}}{{if eq \$k \"com.docker.compose.service\"}}{{\$v}}{{end}}{{end}}{{end}}" $containers) COMPREPLY=( $(compgen -W "$names" -- "$cur") ) } # The services for which at least one paused container exists __docker_compose_services_paused() { __docker_compose_services_with '.State.Paused' } # The services for which at least one running container exists __docker_compose_services_running() { __docker_compose_services_with '.State.Running' } # The services for which at least one stopped container exists __docker_compose_services_stopped() { __docker_compose_services_with 'not .State.Running' } _docker_compose_build() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--force-rm --help --no-cache --pull" -- "$cur" ) ) ;; *) __docker_compose_services_from_build ;; esac } _docker_compose_bundle() { case "$prev" in --output|-o) _filedir return ;; esac COMPREPLY=( $( compgen -W "--fetch-digests --help --output -o" -- "$cur" ) ) } _docker_compose_config() { COMPREPLY=( $( compgen -W "--help --quiet -q --services" -- "$cur" ) ) } _docker_compose_create() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--force-recreate --help --no-build --no-recreate" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_docker_compose() { case "$prev" in --tlscacert|--tlscert|--tlskey) _filedir return ;; --file|-f) _filedir "y?(a)ml" return ;; $(__docker_compose_to_extglob "$daemon_options_with_args") ) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "$daemon_boolean_options $daemon_options_with_args --help -h --verbose --version -v" -- "$cur" ) ) ;; *) COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) ) ;; esac } _docker_compose_down() { case "$prev" in --rmi) COMPREPLY=( $( compgen -W "all local" -- "$cur" ) ) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --rmi --volumes -v --remove-orphans" -- "$cur" ) ) ;; esac } _docker_compose_events() { case "$prev" in --json) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --json" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_exec() { case "$prev" in --index|--user) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "-d --help --index --privileged -T --user" -- "$cur" ) ) ;; *) __docker_compose_services_running ;; esac } _docker_compose_help() { COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) ) } _docker_compose_kill() { case "$prev" in -s) COMPREPLY=( $( compgen -W "SIGHUP SIGINT SIGKILL SIGUSR1 SIGUSR2" -- "$(echo $cur | tr '[:lower:]' '[:upper:]')" ) ) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help -s" -- "$cur" ) ) ;; *) __docker_compose_services_running ;; esac } _docker_compose_logs() { case "$prev" in --tail) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--follow -f --help --no-color --tail --timestamps -t" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_pause() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) ;; *) __docker_compose_services_running ;; esac } _docker_compose_port() { case "$prev" in --protocol) COMPREPLY=( $( compgen -W "tcp udp" -- "$cur" ) ) return; ;; --index) return; ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --index --protocol" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_ps() { case "$cur" in -*) COMPREPLY=( $( 
compgen -W "--help -q" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_pull() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --ignore-pull-failures" -- "$cur" ) ) ;; *) __docker_compose_services_from_image ;; esac } _docker_compose_push() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --ignore-push-failures" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_restart() { case "$prev" in --timeout|-t) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --timeout -t" -- "$cur" ) ) ;; *) __docker_compose_services_running ;; esac } _docker_compose_rm() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--force -f --help -v" -- "$cur" ) ) ;; *) __docker_compose_services_stopped ;; esac } _docker_compose_run() { case "$prev" in -e) COMPREPLY=( $( compgen -e -- "$cur" ) ) __docker_compose_nospace return ;; --entrypoint|--name|--user|-u|--workdir|-w) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "-d --entrypoint -e --help --name --no-deps --publish -p --rm --service-ports -T --user -u --workdir -w" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_scale() { case "$prev" in =) COMPREPLY=("$cur") return ;; --timeout|-t) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --timeout -t" -- "$cur" ) ) ;; *) COMPREPLY=( $(compgen -S "=" -W "$(___docker_compose_all_services_in_compose_file)" -- "$cur") ) __docker_compose_nospace ;; esac } _docker_compose_start() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) ;; *) __docker_compose_services_stopped ;; esac } _docker_compose_stop() { case "$prev" in --timeout|-t) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --timeout -t" -- "$cur" ) ) ;; *) __docker_compose_services_running ;; esac } _docker_compose_unpause() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) ;; *) __docker_compose_services_paused ;; esac } _docker_compose_up() { case "$prev" in --timeout|-t) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--abort-on-container-exit --build -d --force-recreate --help --no-build --no-color --no-deps --no-recreate --timeout -t --remove-orphans" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_version() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--short" -- "$cur" ) ) ;; esac } _docker_compose() { local previous_extglob_setting=$(shopt -p extglob) shopt -s extglob local commands=( build bundle config create down events exec help kill logs pause port ps pull push restart rm run scale start stop unpause up version ) # options for the docker daemon that have to be passed to secondary calls to # docker-compose executed by this script local daemon_boolean_options=" --skip-hostname-check --tls --tlsverify " local daemon_options_with_args=" --file -f --host -H --project-name -p --tlscacert --tlscert --tlskey " COMPREPLY=() local cur prev words cword _get_comp_words_by_ref -n : cur prev words cword # search subcommand and invoke its handler. 
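# The loop below walks the command line words, collecting global daemon
# options (so they can be replayed on the nested docker-compose calls made
# by the helpers above) until it reaches the first non-option word, which
# is taken as the subcommand.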
# special treatment of some top-level options local command='docker_compose' local daemon_options=() local counter=1 while [ $counter -lt $cword ]; do case "${words[$counter]}" in $(__docker_compose_to_extglob "$daemon_boolean_options") ) local opt=${words[counter]} daemon_options+=($opt) ;; $(__docker_compose_to_extglob "$daemon_options_with_args") ) local opt=${words[counter]} local arg=${words[++counter]} daemon_options+=($opt $arg) ;; -*) ;; *) command="${words[$counter]}" break ;; esac (( counter++ )) done local completions_func=_docker_compose_${command//-/_} declare -F $completions_func >/dev/null && $completions_func eval "$previous_extglob_setting" return 0 } complete -F _docker_compose docker-compose compose-1.8.0/contrib/completion/zsh/000077500000000000000000000000001274620702700176115ustar00rootroot00000000000000compose-1.8.0/contrib/completion/zsh/_docker-compose000066400000000000000000000433251274620702700226140ustar00rootroot00000000000000#compdef docker-compose # Description # ----------- # zsh completion for docker-compose # https://github.com/sdurrheimer/docker-compose-zsh-completion # ------------------------------------------------------------------------- # Version # ------- # 1.5.0 # ------------------------------------------------------------------------- # Authors # ------- # * Steve Durrheimer # ------------------------------------------------------------------------- # Inspiration # ----------- # * @albers docker-compose bash completion script # * @felixr docker zsh completion script : https://github.com/felixr/docker-zsh-completion # ------------------------------------------------------------------------- __docker-compose_q() { docker-compose 2>/dev/null $compose_options "$@" } # All services defined in docker-compose.yml __docker-compose_all_services_in_compose_file() { local already_selected local -a services already_selected=$(echo $words | tr " " "|") __docker-compose_q config --services \ | grep -Ev "^(${already_selected})$" } # All services, even those without an existing container __docker-compose_services_all() { [[ $PREFIX = -* ]] && return 1 integer ret=1 services=$(__docker-compose_all_services_in_compose_file) _alternative "args:services:($services)" && ret=0 return ret } # All services that have an entry with the given key in their docker-compose.yml section __docker-compose_services_with_key() { local already_selected local -a buildable already_selected=$(echo $words | tr " " "|") # flatten sections to one line, then filter lines containing the key and return section name. __docker-compose_q config \ | sed -n -e '/^services:/,/^[^ ]/p' \ | sed -n 's/^ //p' \ | awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' \ | grep " \+$1:" \ | cut -d: -f1 \ | grep -Ev "^(${already_selected})$" } # All services that are defined by a Dockerfile reference __docker-compose_services_from_build() { [[ $PREFIX = -* ]] && return 1 integer ret=1 buildable=$(__docker-compose_services_with_key build) _alternative "args:buildable services:($buildable)" && ret=0 return ret } # All services that are defined by an image __docker-compose_services_from_image() { [[ $PREFIX = -* ]] && return 1 integer ret=1 pullable=$(__docker-compose_services_with_key image) _alternative "args:pullable services:($pullable)" && ret=0 return ret } __docker-compose_get_services() { [[ $PREFIX = -* ]] && return 1 integer ret=1 local kind declare -a running paused stopped lines args services docker_status=$(docker ps > /dev/null 2>&1) if [ $? -ne 0 ]; then _message "Error! Docker is not running." 
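# Service-name completion needs a reachable daemon; bail out early without one.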
return 1 fi kind=$1 shift [[ $kind =~ (stopped|all) ]] && args=($args -a) lines=(${(f)"$(_call_program commands docker $docker_options ps $args)"}) services=(${(f)"$(_call_program commands docker-compose 2>/dev/null $compose_options ps -q)"}) # Parse header line to find columns local i=1 j=1 k header=${lines[1]} declare -A begin end while (( j < ${#header} - 1 )); do i=$(( j + ${${header[$j,-1]}[(i)[^ ]]} - 1 )) j=$(( i + ${${header[$i,-1]}[(i) ]} - 1 )) k=$(( j + ${${header[$j,-1]}[(i)[^ ]]} - 2 )) begin[${header[$i,$((j-1))]}]=$i end[${header[$i,$((j-1))]}]=$k done lines=(${lines[2,-1]}) # Container ID local line s name local -a names for line in $lines; do if [[ ${services[@]} == *"${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"* ]]; then names=(${(ps:,:)${${line[${begin[NAMES]},-1]}%% *}}) for name in $names; do s="${${name%_*}#*_}:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}" s="$s, ${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}" s="$s, ${${${line[${begin[IMAGE]},${end[IMAGE]}]}/:/\\:}%% ##}" if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then stopped=($stopped $s) else if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = *\(Paused\)* ]]; then paused=($paused $s) fi running=($running $s) fi done fi done [[ $kind =~ (running|all) ]] && _describe -t services-running "running services" running "$@" && ret=0 [[ $kind =~ (paused|all) ]] && _describe -t services-paused "paused services" paused "$@" && ret=0 [[ $kind =~ (stopped|all) ]] && _describe -t services-stopped "stopped services" stopped "$@" && ret=0 return ret } __docker-compose_pausedservices() { [[ $PREFIX = -* ]] && return 1 __docker-compose_get_services paused "$@" } __docker-compose_stoppedservices() { [[ $PREFIX = -* ]] && return 1 __docker-compose_get_services stopped "$@" } __docker-compose_runningservices() { [[ $PREFIX = -* ]] && return 1 __docker-compose_get_services running "$@" } __docker-compose_services() { [[ $PREFIX = -* ]] && return 1 __docker-compose_get_services all "$@" } __docker-compose_caching_policy() { oldp=( "$1"(Nmh+1) ) # 1 hour (( $#oldp )) } __docker-compose_commands() { local cache_policy zstyle -s ":completion:${curcontext}:" cache-policy cache_policy if [[ -z "$cache_policy" ]]; then zstyle ":completion:${curcontext}:" cache-policy __docker-compose_caching_policy fi if ( [[ ${+_docker_compose_subcommands} -eq 0 ]] || _cache_invalid docker_compose_subcommands) \ && ! _retrieve_cache docker_compose_subcommands; then local -a lines lines=(${(f)"$(_call_program commands docker-compose 2>&1)"}) _docker_compose_subcommands=(${${${lines[$((${lines[(i)Commands:]} + 1)),${lines[(I) *]}]}## #}/ ##/:}) (( $#_docker_compose_subcommands > 0 )) && _store_cache docker_compose_subcommands _docker_compose_subcommands fi _describe -t docker-compose-commands "docker-compose command" _docker_compose_subcommands } __docker-compose_subcommand() { local opts_help opts_force_recreate opts_no_recreate opts_no_build opts_remove_orphans opts_timeout opts_no_color opts_no_deps opts_help='(: -)--help[Print usage]' opts_force_recreate="(--no-recreate)--force-recreate[Recreate containers even if their configuration and image haven't changed. Incompatible with --no-recreate.]" opts_no_recreate="(--force-recreate)--no-recreate[If containers already exist, don't recreate them. 
Incompatible with --force-recreate.]" opts_no_build="(--build)--no-build[Don't build an image, even if it's missing.]" opts_remove_orphans="--remove-orphans[Remove containers for services not defined in the Compose file]" opts_timeout=('(-t --timeout)'{-t,--timeout}"[Specify a shutdown timeout in seconds. (default: 10)]:seconds: ") opts_no_color='--no-color[Produce monochrome output.]' opts_no_deps="--no-deps[Don't start linked services.]" integer ret=1 case "$words[1]" in (build) _arguments \ $opts_help \ '--force-rm[Always remove intermediate containers.]' \ '--no-cache[Do not use cache when building the image.]' \ '--pull[Always attempt to pull a newer version of the image.]' \ '*:services:__docker-compose_services_from_build' && ret=0 ;; (bundle) _arguments \ $opts_help \ '(--output -o)'{--output,-o}'[Path to write the bundle file to. Defaults to ".dab".]:file:_files' && ret=0 ;; (config) _arguments \ $opts_help \ '(--quiet -q)'{--quiet,-q}"[Only validate the configuration, don't print anything.]" \ '--services[Print the service names, one per line.]' && ret=0 ;; (create) _arguments \ $opts_help \ $opts_force_recreate \ $opts_no_recreate \ $opts_no_build \ "(--no-build)--build[Build images before creating containers.]" \ '*:services:__docker-compose_services_all' && ret=0 ;; (down) _arguments \ $opts_help \ "--rmi[Remove images. Type must be one of: 'all': Remove all images used by any service. 'local': Remove only images that don't have a custom tag set by the \`image\` field.]:type:(all local)" \ '(-v --volumes)'{-v,--volumes}"[Remove named volumes declared in the \`volumes\` section of the Compose file and anonymous volumes attached to containers.]" \ $opts_remove_orphans && ret=0 ;; (events) _arguments \ $opts_help \ '--json[Output events as a stream of json objects]' \ '*:services:__docker-compose_services_all' && ret=0 ;; (exec) _arguments \ $opts_help \ '-d[Detached mode: Run command in the background.]' \ '--privileged[Give extended privileges to the process.]' \ '--user=[Run the command as this user.]:username:_users' \ '-T[Disable pseudo-tty allocation. By default `docker-compose exec` allocates a TTY.]' \ '--index=[Index of the container if there are multiple instances of a service \[default: 1\]]:index: ' \ '(-):running services:__docker-compose_runningservices' \ '(-):command: _command_names -e' \ '*::arguments: _normal' && ret=0 ;; (help) _arguments ':subcommand:__docker-compose_commands' && ret=0 ;; (kill) _arguments \ $opts_help \ '-s[SIGNAL to send to the container. 
Default signal is SIGKILL.]:signal:_signals' \ '*:running services:__docker-compose_runningservices' && ret=0 ;; (logs) _arguments \ $opts_help \ '(-f --follow)'{-f,--follow}'[Follow log output]' \ $opts_no_color \ '--tail=[Number of lines to show from the end of the logs for each container.]:number of lines: ' \ '(-t --timestamps)'{-t,--timestamps}'[Show timestamps]' \ '*:services:__docker-compose_services_all' && ret=0 ;; (pause) _arguments \ $opts_help \ '*:running services:__docker-compose_runningservices' && ret=0 ;; (port) _arguments \ $opts_help \ '--protocol=[tcp or udp \[default: tcp\]]:protocol:(tcp udp)' \ '--index=[index of the container if there are multiple instances of a service \[default: 1\]]:index: ' \ '1:running services:__docker-compose_runningservices' \ '2:port:_ports' && ret=0 ;; (ps) _arguments \ $opts_help \ '-q[Only display IDs]' \ '*:services:__docker-compose_services_all' && ret=0 ;; (pull) _arguments \ $opts_help \ '--ignore-pull-failures[Pull what it can and ignores images with pull failures.]' \ '*:services:__docker-compose_services_from_image' && ret=0 ;; (push) _arguments \ $opts_help \ '--ignore-push-failures[Push what it can and ignores images with push failures.]' \ '*:services:__docker-compose_services' && ret=0 ;; (rm) _arguments \ $opts_help \ '(-f --force)'{-f,--force}"[Don't ask to confirm removal]" \ '-v[Remove any anonymous volumes attached to containers]' \ '*:stopped services:__docker-compose_stoppedservices' && ret=0 ;; (run) _arguments \ $opts_help \ '-d[Detached mode: Run container in the background, print new container name.]' \ '*-e[KEY=VAL Set an environment variable (can be used multiple times)]:environment variable KEY=VAL: ' \ '--entrypoint[Overwrite the entrypoint of the image.]:entry point: ' \ '--name=[Assign a name to the container]:name: ' \ $opts_no_deps \ '(-p --publish)'{-p,--publish=}"[Publish a container's port(s) to the host]" \ '--rm[Remove container after run. Ignored in detached mode.]' \ "--service-ports[Run command with the service's ports enabled and mapped to the host.]" \ '-T[Disable pseudo-tty allocation. By default `docker-compose run` allocates a TTY.]' \ '(-u --user)'{-u,--user=}'[Run as specified username or uid]:username or uid:_users' \ '(-w --workdir)'{-w,--workdir=}'[Working directory inside the container]:workdir: ' \ '(-):services:__docker-compose_services' \ '(-):command: _command_names -e' \ '*::arguments: _normal' && ret=0 ;; (scale) _arguments \ $opts_help \ $opts_timeout \ '*:running services:__docker-compose_runningservices' && ret=0 ;; (start) _arguments \ $opts_help \ '*:stopped services:__docker-compose_stoppedservices' && ret=0 ;; (stop|restart) _arguments \ $opts_help \ $opts_timeout \ '*:running services:__docker-compose_runningservices' && ret=0 ;; (unpause) _arguments \ $opts_help \ '*:paused services:__docker-compose_pausedservices' && ret=0 ;; (up) _arguments \ $opts_help \ '(--abort-on-container-exit)-d[Detached mode: Run containers in the background, print new container names. Incompatible with --abort-on-container-exit.]' \ $opts_no_color \ $opts_no_deps \ $opts_force_recreate \ $opts_no_recreate \ $opts_no_build \ "(--no-build)--build[Build images before starting containers.]" \ "(-d)--abort-on-container-exit[Stops all containers if any container was stopped. Incompatible with -d.]" \ '(-t --timeout)'{-t,--timeout}"[Use this timeout in seconds for container shutdown when attached or when containers are already running. 
(default: 10)]:seconds: " \ $opts_remove_orphans \ '*:services:__docker-compose_services_all' && ret=0 ;; (version) _arguments \ $opts_help \ "--short[Shows only Compose's version number.]" && ret=0 ;; (*) _message 'Unknown sub command' && ret=1 ;; esac return ret } _docker-compose() { # Support for subservices, which allows for `compdef _docker docker-shell=_docker_containers`. # Based on /usr/share/zsh/functions/Completion/Unix/_git without support for `ret`. if [[ $service != docker-compose ]]; then _call_function - _$service return fi local curcontext="$curcontext" state line integer ret=1 typeset -A opt_args _arguments -C \ '(- :)'{-h,--help}'[Get help]' \ '(-f --file)'{-f,--file}'[Specify an alternate docker-compose file (default: docker-compose.yml)]:file:_files -g "*.yml"' \ '(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \ '--verbose[Show more output]' \ '(- :)'{-v,--version}'[Print version and exit]' \ '(-H --host)'{-H,--host}'[Daemon socket to connect to]:host:' \ '--tls[Use TLS; implied by --tlsverify]' \ '--tlscacert=[Trust certs signed only by this CA]:ca path:' \ '--tlscert=[Path to TLS certificate file]:client cert path:' \ '--tlskey=[Path to TLS key file]:tls key path:' \ '--tlsverify[Use TLS and verify the remote]' \ "--skip-hostname-check[Don't check the daemon's hostname against the name specified in the client certificate (for example if your docker host is an IP address)]" \ '(-): :->command' \ '(-)*:: :->option-or-argument' && ret=0 local -a relevant_compose_flags relevant_docker_flags compose_options docker_options relevant_compose_flags=( "--file" "-f" "--host" "-H" "--project-name" "-p" "--tls" "--tlscacert" "--tlscert" "--tlskey" "--tlsverify" "--skip-hostname-check" ) relevant_docker_flags=( "--host" "-H" "--tls" "--tlscacert" "--tlscert" "--tlskey" "--tlsverify" ) for k in "${(@k)opt_args}"; do if [[ -n "${relevant_docker_flags[(r)$k]}" ]]; then docker_options+=$k if [[ -n "$opt_args[$k]" ]]; then docker_options+=$opt_args[$k] fi fi if [[ -n "${relevant_compose_flags[(r)$k]}" ]]; then compose_options+=$k if [[ -n "$opt_args[$k]" ]]; then compose_options+=$opt_args[$k] fi fi done case $state in (command) __docker-compose_commands && ret=0 ;; (option-or-argument) curcontext=${curcontext%:*:*}:docker-compose-$words[1]: __docker-compose_subcommand && ret=0 ;; esac return ret } _docker-compose "$@" compose-1.8.0/contrib/migration/000077500000000000000000000000001274620702700166255ustar00rootroot00000000000000compose-1.8.0/contrib/migration/migrate-compose-file-v1-to-v2.py000077500000000000000000000125461274620702700245130ustar00rootroot00000000000000#!/usr/bin/env python """ Migrate a Compose file from the V1 format in Compose 1.5 to the V2 format supported by Compose 1.6+ """ from __future__ import absolute_import from __future__ import unicode_literals import argparse import logging import sys import ruamel.yaml from compose.config.types import VolumeSpec log = logging.getLogger('migrate') def migrate(content): data = ruamel.yaml.load(content, ruamel.yaml.RoundTripLoader) service_names = data.keys() for name, service in data.items(): warn_for_links(name, service) warn_for_external_links(name, service) rewrite_net(service, service_names) rewrite_build(service) rewrite_logging(service) rewrite_volumes_from(service, service_names) services = {name: data.pop(name) for name in data.keys()} data['version'] = "2" data['services'] = services create_volumes_section(data) return data def 
warn_for_links(name, service): links = service.get('links') if links: example_service = links[0].partition(':')[0] log.warn( "Service {name} has links, which no longer create environment " "variables such as {example_service_upper}_PORT. " "If you are using those in your application code, you should " "instead connect directly to the hostname, e.g. " "'{example_service}'." .format(name=name, example_service=example_service, example_service_upper=example_service.upper())) def warn_for_external_links(name, service): external_links = service.get('external_links') if external_links: log.warn( "Service {name} has external_links: {ext}, which now work " "slightly differently. In particular, two containers must be " "connected to at least one network in common in order to " "communicate, even if explicitly linked together.\n\n" "Either connect the external container to your app's default " "network, or connect both the external container and your " "service's containers to a pre-existing network. See " "https://docs.docker.com/compose/networking/ " "for more on how to do this." .format(name=name, ext=external_links)) def rewrite_net(service, service_names): if 'net' in service: network_mode = service.pop('net') # "container:" is now "service:" if network_mode.startswith('container:'): name = network_mode.partition(':')[2] if name in service_names: network_mode = 'service:{}'.format(name) service['network_mode'] = network_mode def rewrite_build(service): if 'dockerfile' in service: service['build'] = { 'context': service.pop('build'), 'dockerfile': service.pop('dockerfile'), } def rewrite_logging(service): if 'log_driver' in service: service['logging'] = {'driver': service.pop('log_driver')} if 'log_opt' in service: service['logging']['options'] = service.pop('log_opt') def rewrite_volumes_from(service, service_names): for idx, volume_from in enumerate(service.get('volumes_from', [])): if volume_from.split(':', 1)[0] not in service_names: service['volumes_from'][idx] = 'container:%s' % volume_from def create_volumes_section(data): named_volumes = get_named_volumes(data['services']) if named_volumes: log.warn( "Named volumes ({names}) must be explicitly declared. Creating a " "'volumes' section with declarations.\n\n" "For backwards-compatibility, they've been declared as external. " "If you don't mind the volume names being prefixed with the " "project name, you can remove the 'external' option from each one." 
.format(names=', '.join(list(named_volumes)))) data['volumes'] = named_volumes def get_named_volumes(services): volume_specs = [ VolumeSpec.parse(volume) for service in services.values() for volume in service.get('volumes', []) ] names = { spec.external for spec in volume_specs if spec.is_named_volume } return {name: {'external': True} for name in names} def write(stream, new_format, indent, width): ruamel.yaml.dump( new_format, stream, Dumper=ruamel.yaml.RoundTripDumper, indent=indent, width=width) def parse_opts(args): parser = argparse.ArgumentParser() parser.add_argument("filename", help="Compose file filename.") parser.add_argument("-i", "--in-place", action='store_true') parser.add_argument( "--indent", type=int, default=2, help="Number of spaces used to indent the output yaml.") parser.add_argument( "--width", type=int, default=80, help="Number of spaces used as the output width.") return parser.parse_args() def main(args): logging.basicConfig(format='\033[33m%(levelname)s:\033[37m %(message)s\033[0m\n') opts = parse_opts(args) with open(opts.filename, 'r') as fh: new_format = migrate(fh.read()) if opts.in_place: output = open(opts.filename, 'w') else: output = sys.stdout write(output, new_format, opts.indent, opts.width) if __name__ == "__main__": main(sys.argv) compose-1.8.0/docker-compose.spec000066400000000000000000000017171274620702700167700ustar00rootroot00000000000000# -*- mode: python -*- block_cipher = None a = Analysis(['bin/docker-compose'], pathex=['.'], hiddenimports=[], hookspath=None, runtime_hooks=None, cipher=block_cipher) pyz = PYZ(a.pure, cipher=block_cipher) exe = EXE(pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [ ( 'compose/config/config_schema_v1.json', 'compose/config/config_schema_v1.json', 'DATA' ), ( 'compose/config/config_schema_v2.0.json', 'compose/config/config_schema_v2.0.json', 'DATA' ), ( 'compose/GITSHA', 'compose/GITSHA', 'DATA' ) ], name='docker-compose', debug=False, strip=None, upx=True, console=True) compose-1.8.0/docs/000077500000000000000000000000001274620702700141245ustar00rootroot00000000000000compose-1.8.0/docs/Dockerfile000066400000000000000000000003031274620702700161120ustar00rootroot00000000000000FROM docs/base:oss MAINTAINER Docker Docs ENV PROJECT=compose # To get the git info for this repo COPY . /src RUN rm -rf /docs/content/$PROJECT/ COPY . 
/docs/content/$PROJECT/ compose-1.8.0/docs/Makefile000066400000000000000000000030601274620702700155630ustar00rootroot00000000000000.PHONY: all default docs docs-build docs-shell shell test # to allow `make DOCSDIR=1 docs-shell` (to create a bind mount in docs) DOCS_MOUNT := $(if $(DOCSDIR),-v $(CURDIR):/docs/content/compose) # to allow `make DOCSPORT=9000 docs` DOCSPORT := 8000 # Get the IP ADDRESS DOCKER_IP=$(shell python -c "import urlparse ; print urlparse.urlparse('$(DOCKER_HOST)').hostname or ''") HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER_IP)") HUGO_BIND_IP=0.0.0.0 GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null) GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g") DOCKER_DOCS_IMAGE := docker-docs$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN)) DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE # for some docs workarounds (see below in "docs-build" target) GITCOMMIT := $(shell git rev-parse --short HEAD 2>/dev/null) default: docs docs: docs-build $(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP) --watch docs-draft: docs-build $(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --buildDrafts="true" --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP) docs-shell: docs-build $(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash test: docs-build $(DOCKER_RUN_DOCS) "$(DOCKER_DOCS_IMAGE)" docs-build: docker build -t "$(DOCKER_DOCS_IMAGE)" . compose-1.8.0/docs/README.md000066400000000000000000000075461274620702700154170ustar00rootroot00000000000000 # Contributing to the Docker Compose documentation The documentation in this directory is part of the [https://docs.docker.com](https://docs.docker.com) website. Docker uses [the Hugo static generator](http://gohugo.io/overview/introduction/) to convert project Markdown files to a static HTML site. You don't need to be a Hugo expert to contribute to the compose documentation. If you are familiar with Markdown, you can modify the content in the `docs` files. If you want to add a new file or change the location of the document in the menu, you do need to know a little more. ## Documentation contributing workflow 1. Edit a Markdown file in the tree. 2. Save your changes. 3. Make sure you are in the `docs` subdirectory. 4. Build the documentation. $ make docs ---> ffcf3f6c4e97 Removing intermediate container a676414185e8 Successfully built ffcf3f6c4e97 docker run --rm -it -e AWS_S3_BUCKET -e NOCACHE -p 8000:8000 -e DOCKERHOST "docs-base:test-tooling" hugo server --port=8000 --baseUrl=192.168.59.103 --bind=0.0.0.0 ERROR: 2015/06/13 MenuEntry's .Url is deprecated and will be removed in Hugo 0.15. Use .URL instead. 0 of 4 drafts rendered 0 future content 12 pages created 0 paginator pages created 0 tags created 0 categories created in 55 ms Serving pages from /docs/public Web Server is available at http://0.0.0.0:8000/ Press Ctrl+C to stop 5. Open the available server in your browser. The documentation server has the complete menu but only the Docker Compose documentation resolves. You can't access the other project docs from this localized build. ## Tips on Hugo metadata and menu positioning The top of each Docker Compose documentation file contains TOML metadata. 
The metadata is commented out to prevent it from appearing in GitHub. The metadata alone has this structure: +++ title = "Extending services in Compose" description = "How to use Docker Compose's extends keyword to share configuration between files and projects" keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"] [menu.main] parent="workw_compose" weight=2 +++ The `[menu.main]` section refers to navigation defined [in the main Docker menu](https://github.com/docker/docs-base/blob/hugo/config.toml). This metadata says *add a menu item called* Extending services in Compose *to the menu with the* `smn_workdw_compose` *identifier*. If you locate the menu in the configuration, you'll find *Create multi-container applications* is the menu title. You can move an article in the tree by specifying a new parent. You can shift the location of the item by changing its weight. Higher numbers are heavier and shift the item to the bottom of menu. Low or no numbers shift it up. ## Other key documentation repositories The `docker/docs-base` repository contains [the Hugo theme and menu configuration](https://github.com/docker/docs-base). If you open the `Dockerfile` you'll see the `make docs` relies on this as a base image for building the Compose documentation. The `docker/docs.docker.com` repository contains [build system for building the Docker documentation site](https://github.com/docker/docs.docker.com). Fork this repository to build the entire documentation site. compose-1.8.0/docs/bundles.md000066400000000000000000000150321274620702700161030ustar00rootroot00000000000000 # Docker Stacks and Distributed Application Bundles (experimental) > **Note**: This is a copy of the [Docker Stacks and Distributed Application > Bundles](https://github.com/docker/docker/blob/v1.12.0-rc4/experimental/docker-stacks-and-bundles.md) > document in the [docker/docker repo](https://github.com/docker/docker). ## Overview Docker Stacks and Distributed Application Bundles are experimental features introduced in Docker 1.12 and Docker Compose 1.8, alongside the concept of swarm mode, and Nodes and Services in the Engine API. A Dockerfile can be built into an image, and containers can be created from that image. Similarly, a docker-compose.yml can be built into a **distributed application bundle**, and **stacks** can be created from that bundle. In that sense, the bundle is a multi-services distributable image format. As of Docker 1.12 and Compose 1.8, the features are experimental. Neither Docker Engine nor the Docker Registry support distribution of bundles. ## Producing a bundle The easiest way to produce a bundle is to generate it using `docker-compose` from an existing `docker-compose.yml`. Of course, that's just *one* possible way to proceed, in the same way that `docker build` isn't the only way to produce a Docker image. From `docker-compose`: ```bash $ docker-compose bundle WARNING: Unsupported key 'network_mode' in services.nsqd - ignoring WARNING: Unsupported key 'links' in services.nsqd - ignoring WARNING: Unsupported key 'volumes' in services.nsqd - ignoring [...] 
Wrote bundle to vossibility-stack.dab ``` ## Creating a stack from a bundle A stack is created using the `docker deploy` command: ```bash # docker deploy --help Usage: docker deploy [OPTIONS] STACK Create and update a stack Options: --file string Path to a Distributed Application Bundle file (Default: STACK.dab) --help Print usage --with-registry-auth Send registry authentication details to Swarm agents ``` Let's deploy the stack created before: ```bash # docker deploy vossibility-stack Loading bundle from vossibility-stack.dab Creating service vossibility-stack_elasticsearch Creating service vossibility-stack_kibana Creating service vossibility-stack_logstash Creating service vossibility-stack_lookupd Creating service vossibility-stack_nsqd Creating service vossibility-stack_vossibility-collector ``` We can verify that services were correctly created: ```bash # docker service ls ID NAME REPLICAS IMAGE COMMAND 29bv0vnlm903 vossibility-stack_lookupd 1 nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662 /nsqlookupd 4awt47624qwh vossibility-stack_nsqd 1 nsqio/nsq@sha256:eeba05599f31eba418e96e71e0984c3dc96963ceb66924dd37a47bf7ce18a662 /nsqd --data-path=/data --lookupd-tcp-address=lookupd:4160 4tjx9biia6fs vossibility-stack_elasticsearch 1 elasticsearch@sha256:12ac7c6af55d001f71800b83ba91a04f716e58d82e748fa6e5a7359eed2301aa 7563uuzr9eys vossibility-stack_kibana 1 kibana@sha256:6995a2d25709a62694a937b8a529ff36da92ebee74bafd7bf00e6caf6db2eb03 9gc5m4met4he vossibility-stack_logstash 1 logstash@sha256:2dc8bddd1bb4a5a34e8ebaf73749f6413c101b2edef6617f2f7713926d2141fe logstash -f /etc/logstash/conf.d/logstash.conf axqh55ipl40h vossibility-stack_vossibility-collector 1 icecrime/vossibility-collector@sha256:f03f2977203ba6253988c18d04061c5ec7aab46bca9dfd89a9a1fa4500989fba --config /config/config.toml --debug ``` ## Managing stacks Stacks are managed using the `docker stack` command: ```bash # docker stack --help Usage: docker stack COMMAND Manage Docker stacks Options: --help Print usage Commands: config Print the stack configuration deploy Create and update a stack rm Remove the stack services List the services in the stack tasks List the tasks in the stack Run 'docker stack COMMAND --help' for more information on a command. ``` ## Bundle file format Distributed application bundles are described in a JSON format. When bundles are persisted as files, the file extension is `.dab`. A bundle has two top-level fields: `version` and `services`. The version used by Docker 1.12 tools is `0.1`. `services` in the bundle are the services that comprise the app. They correspond to the new `Service` object introduced in the 1.12 Docker Engine API. A service has the following fields:
| Field | Type | Description |
|-------|------|-------------|
| `Image` (required) | `string` | The image that the service will run. Docker images should be referenced with a full content hash to fully specify the deployment artifact for the service. Example: `postgres@sha256:e0a230a9f5b4e1b8b03bb3e8cf7322b0e42b7838c5c87f4545edb48f5eb8f077` |
| `Command` | `[]string` | Command to run in service containers. |
| `Args` | `[]string` | Arguments passed to the service containers. |
| `Env` | `[]string` | Environment variables. |
| `Labels` | `map[string]string` | Labels used for setting metadata on services. |
| `Ports` | `[]Port` | Service ports (composed of `Port` (int) and `Protocol` (string)). A service description can only specify the container port to be exposed. These ports can be mapped on runtime hosts at the operator's discretion. |
| `WorkingDir` | `string` | Working directory inside the service containers. |
| `User` | `string` | Username or UID (format: `<name\|uid>[:<group\|gid>]`). |
| `Networks` | `[]string` | Networks that the service containers should be connected to. An entity deploying a bundle should create networks as needed. |
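Putting the fields together, a minimal bundle might look like the following sketch. The service name `db` is a hypothetical placeholder, and the image reference reuses the example digest from the table above; a file generated by `docker-compose bundle` is the authoritative reference for the exact output.

```json
{
  "Version": "0.1",
  "Services": {
    "db": {
      "Image": "postgres@sha256:e0a230a9f5b4e1b8b03bb3e8cf7322b0e42b7838c5c87f4545edb48f5eb8f077",
      "Networks": ["default"],
      "Ports": [
        {
          "Protocol": "tcp",
          "Port": 5432
        }
      ]
    }
  }
}
```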
> **Note:** Some configuration options are not yet supported in the DAB format, > including volume mounts. compose-1.8.0/docs/completion.md000066400000000000000000000046261274620702700166270ustar00rootroot00000000000000 # Command-line Completion Compose comes with [command completion](http://en.wikipedia.org/wiki/Command-line_completion) for the bash and zsh shell. ## Installing Command Completion ### Bash Make sure bash completion is installed. If you use a current Linux in a non-minimal installation, bash completion should be available. On a Mac, install with `brew install bash-completion` Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), using e.g. curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose Completion will be available upon next login. ### Zsh Place the completion script in your `/path/to/zsh/completion`, using e.g. `~/.zsh/completion/` mkdir -p ~/.zsh/completion curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose Include the directory in your `$fpath`, e.g. by adding in `~/.zshrc` fpath=(~/.zsh/completion $fpath) Make sure `compinit` is loaded or do it by adding in `~/.zshrc` autoload -Uz compinit && compinit -i Then reload your shell exec $SHELL -l ## Available completions Depending on what you typed on the command line so far, it will complete - available docker-compose commands - options that are available for a particular command - service names that make sense in a given context (e.g. services with running or stopped instances or services based on images vs. services based on Dockerfiles). For `docker-compose scale`, completed service names will automatically have "=" appended. - arguments for selected options, e.g. `docker-compose kill -s` will complete some signals like SIGHUP and SIGUSR1. Enjoy working with Compose faster and with less typos! ## Compose documentation - [User guide](index.md) - [Installing Compose](install.md) - [Get started with Django](django.md) - [Get started with Rails](rails.md) - [Get started with WordPress](wordpress.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.8.0/docs/compose-file.md000066400000000000000000001030641274620702700170340ustar00rootroot00000000000000 # Compose file reference The Compose file is a [YAML](http://yaml.org/) file defining [services](#service-configuration-reference), [networks](#network-configuration-reference) and [volumes](#volume-configuration-reference). The default path for a Compose file is `./docker-compose.yml`. A service definition contains configuration which will be applied to each container started for that service, much like passing command-line parameters to `docker run`. Likewise, network and volume definitions are analogous to `docker network create` and `docker volume create`. As with `docker run`, options specified in the Dockerfile (e.g., `CMD`, `EXPOSE`, `VOLUME`, `ENV`) are respected by default - you don't need to specify them again in `docker-compose.yml`. You can use environment variables in configuration values with a Bash-like `${VARIABLE}` syntax - see [variable substitution](#variable-substitution) for full details. 
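Before drilling into individual options, here is a minimal sketch of how the three top-level sections of a [version 2](#version-2) file fit together; the service, network, and volume names are placeholders, and a fuller example appears in the [Versioning](#versioning) section:

    version: '2'

    services:
      web:
        build: .
        ports:
          - "5000:5000"
        networks:
          - front-tier
        volumes:
          - app-data:/var/lib/app

    networks:
      front-tier:
        driver: bridge

    volumes:
      app-data:
        driver: local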
## Service configuration reference > **Note:** There are two versions of the Compose file format – version 1 (the > legacy format, which does not support volumes or networks) and version 2 (the > most up-to-date). For more information, see the [Versioning](#versioning) > section. This section contains a list of all configuration options supported by a service definition. ### build Configuration options that are applied at build time. `build` can be specified either as a string containing a path to the build context, or an object with the path specified under [context](#context) and optionally [dockerfile](#dockerfile) and [args](#args). build: ./dir build: context: ./dir dockerfile: Dockerfile-alternate args: buildno: 1 If you specify `image` as well as `build`, then Compose names the built image with the `webapp` and optional `tag` specified in `image`: build: ./dir image: webapp:tag This will result in an image named `webapp` and tagged `tag`, built from `./dir`. > **Note**: In the [version 1 file format](#version-1), `build` is different in > two ways: > > - Only the string form (`build: .`) is allowed - not the object form. > - Using `build` together with `image` is not allowed. Attempting to do so > results in an error. #### context > [Version 2 file format](#version-2) only. In version 1, just use > [build](#build). Either a path to a directory containing a Dockerfile, or a url to a git repository. When the value supplied is a relative path, it is interpreted as relative to the location of the Compose file. This directory is also the build context that is sent to the Docker daemon. Compose will build and tag it with a generated name, and use that image thereafter. build: context: ./dir #### dockerfile Alternate Dockerfile. Compose will use an alternate file to build with. A build path must also be specified. build: context: . dockerfile: Dockerfile-alternate > **Note**: In the [version 1 file format](#version-1), `dockerfile` is > different in two ways: * It appears alongside `build`, not as a sub-option: build: . dockerfile: Dockerfile-alternate * Using `dockerfile` together with `image` is not allowed. Attempting to do so results in an error. #### args > [Version 2 file format](#version-2) only. Add build arguments, which are environment variables accessible only during the build process. First, specify the arguments in your Dockerfile: ARG buildno ARG password RUN echo "Build number: $buildno" RUN script-requiring-password.sh "$password" Then specify the arguments under the `build` key. You can pass either a mapping or a list: build: context: . args: buildno: 1 password: secret build: context: . args: - buildno=1 - password=secret You can omit the value when specifying a build argument, in which case its value at build time is the value in the environment where Compose is running. args: - buildno - password > **Note**: YAML boolean values (`true`, `false`, `yes`, `no`, `on`, `off`) must > be enclosed in quotes, so that the parser interprets them as strings. ### cap_add, cap_drop Add or drop container capabilities. See `man 7 capabilities` for a full list. cap_add: - ALL cap_drop: - NET_ADMIN - SYS_ADMIN ### command Override the default command. command: bundle exec thin -p 3000 The command can also be a list, in a manner similar to [dockerfile](https://docs.docker.com/engine/reference/builder/#cmd): command: [bundle, exec, thin, -p, 3000] ### cgroup_parent Specify an optional parent cgroup for the container. 
cgroup_parent: m-executor-abcd ### container_name Specify a custom container name, rather than a generated default name. container_name: my-web-container Because Docker container names must be unique, you cannot scale a service beyond 1 container if you have specified a custom name. Attempting to do so results in an error. ### devices List of device mappings. Uses the same format as the `--device` docker client create option. devices: - "/dev/ttyUSB0:/dev/ttyUSB0" ### depends_on Express dependency between services, which has two effects: - `docker-compose up` will start services in dependency order. In the following example, `db` and `redis` will be started before `web`. - `docker-compose up SERVICE` will automatically include `SERVICE`'s dependencies. In the following example, `docker-compose up web` will also create and start `db` and `redis`. Simple example: version: '2' services: web: build: . depends_on: - db - redis redis: image: redis db: image: postgres > **Note:** `depends_on` will not wait for `db` and `redis` to be "ready" before > starting `web` - only until they have been started. If you need to wait > for a service to be ready, see [Controlling startup order](startup-order.md) > for more on this problem and strategies for solving it. ### dns Custom DNS servers. Can be a single value or a list. dns: 8.8.8.8 dns: - 8.8.8.8 - 9.9.9.9 ### dns_search Custom DNS search domains. Can be a single value or a list. dns_search: example.com dns_search: - dc1.example.com - dc2.example.com ### tmpfs Mount a temporary file system inside the container. Can be a single value or a list. tmpfs: /run tmpfs: - /run - /tmp ### entrypoint Override the default entrypoint. entrypoint: /code/entrypoint.sh The entrypoint can also be a list, in a manner similar to [dockerfile](https://docs.docker.com/engine/reference/builder/#entrypoint): entrypoint: - php - -d - zend_extension=/usr/local/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so - -d - memory_limit=-1 - vendor/bin/phpunit ### env_file Add environment variables from a file. Can be a single value or a list. If you have specified a Compose file with `docker-compose -f FILE`, paths in `env_file` are relative to the directory that file is in. Environment variables specified in `environment` override these values. env_file: .env env_file: - ./common.env - ./apps/web.env - /opt/secrets.env Compose expects each line in an env file to be in `VAR=VAL` format. Lines beginning with `#` (i.e. comments) are ignored, as are blank lines. # Set Rails/Rack environment RACK_ENV=development > **Note:** If your service specifies a [build](#build) option, variables > defined in environment files will _not_ be automatically visible during the > build. Use the [args](#args) sub-option of `build` to define build-time > environment variables. ### environment Add environment variables. You can use either an array or a dictionary. Any boolean values; true, false, yes no, need to be enclosed in quotes to ensure they are not converted to True or False by the YML parser. Environment variables with only a key are resolved to their values on the machine Compose is running on, which can be helpful for secret or host-specific values. environment: RACK_ENV: development SHOW: 'true' SESSION_SECRET: environment: - RACK_ENV=development - SHOW=true - SESSION_SECRET > **Note:** If your service specifies a [build](#build) option, variables > defined in `environment` will _not_ be automatically visible during the > build. 
Use the [args](#args) sub-option of `build` to define build-time > environment variables. ### expose Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified. expose: - "3000" - "8000" ### extends Extend another service, in the current file or another, optionally overriding configuration. You can use `extends` on any service together with other configuration keys. The `extends` value must be a dictionary defined with a required `service` and an optional `file` key. extends: file: common.yml service: webapp The `service` the name of the service being extended, for example `web` or `database`. The `file` is the location of a Compose configuration file defining that service. If you omit the `file` Compose looks for the service configuration in the current file. The `file` value can be an absolute or relative path. If you specify a relative path, Compose treats it as relative to the location of the current file. You can extend a service that itself extends another. You can extend indefinitely. Compose does not support circular references and `docker-compose` returns an error if it encounters one. For more on `extends`, see the [the extends documentation](extends.md#extending-services). ### external_links Link to containers started outside this `docker-compose.yml` or even outside of Compose, especially for containers that provide shared or common services. `external_links` follow semantics similar to `links` when specifying both the container name and the link alias (`CONTAINER:ALIAS`). external_links: - redis_1 - project_db_1:mysql - project_db_1:postgresql > **Note:** If you're using the [version 2 file format](#version-2), the > externally-created containers must be connected to at least one of the same > networks as the service which is linking to them. ### extra_hosts Add hostname mappings. Use the same values as the docker client `--add-host` parameter. extra_hosts: - "somehost:162.242.195.82" - "otherhost:50.31.209.229" An entry with the ip address and hostname will be created in `/etc/hosts` inside containers for this service, e.g: 162.242.195.82 somehost 50.31.209.229 otherhost ### image Specify the image to start the container from. Can either be a repository/tag or a partial image ID. image: redis image: ubuntu:14.04 image: tutum/influxdb image: example-registry.com:4000/postgresql image: a4bc65fd If the image does not exist, Compose attempts to pull it, unless you have also specified [build](#build), in which case it builds it using the specified options and tags it with the specified tag. > **Note**: In the [version 1 file format](#version-1), using `build` together > with `image` is not allowed. Attempting to do so results in an error. ### labels Add metadata to containers using [Docker labels](https://docs.docker.com/engine/userguide/labels-custom-metadata/). You can use either an array or a dictionary. It's recommended that you use reverse-DNS notation to prevent your labels from conflicting with those used by other software. labels: com.example.description: "Accounting webapp" com.example.department: "Finance" com.example.label-with-empty-value: "" labels: - "com.example.description=Accounting webapp" - "com.example.department=Finance" - "com.example.label-with-empty-value" ### links Link to containers in another service. Either specify both the service name and a link alias (`SERVICE:ALIAS`), or just the service name. 
web: links: - db - db:database - redis Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified. Links also express dependency between services in the same way as [depends_on](#depends-on), so they determine the order of service startup. > **Note:** If you define both links and [networks](#networks), services with > links between them must share at least one network in common in order to > communicate. ### logging > [Version 2 file format](#version-2) only. In version 1, use > [log_driver](#log_driver) and [log_opt](#log_opt). Logging configuration for the service. logging: driver: syslog options: syslog-address: "tcp://192.168.0.42:123" The `driver` name specifies a logging driver for the service's containers, as with the ``--log-driver`` option for docker run ([documented here](https://docs.docker.com/engine/reference/logging/overview/)). The default value is json-file. driver: "json-file" driver: "syslog" driver: "none" > **Note:** Only the `json-file` driver makes the logs available directly from > `docker-compose up` and `docker-compose logs`. Using any other driver will not > print any logs. Specify logging options for the logging driver with the ``options`` key, as with the ``--log-opt`` option for `docker run`. Logging options are key-value pairs. An example of `syslog` options: driver: "syslog" options: syslog-address: "tcp://192.168.0.42:123" ### log_driver > [Version 1 file format](#version-1) only. In version 2, use > [logging](#logging). Specify a log driver. The default is `json-file`. log_driver: syslog ### log_opt > [Version 1 file format](#version-1) only. In version 2, use > [logging](#logging). Specify logging options as key-value pairs. An example of `syslog` options: log_opt: syslog-address: "tcp://192.168.0.42:123" ### net > [Version 1 file format](#version-1) only. In version 2, use > [network_mode](#network_mode). Network mode. Use the same values as the docker client `--net` parameter. The `container:...` form can take a service name instead of a container name or id. net: "bridge" net: "host" net: "none" net: "container:[service name or container name/id]" ### network_mode > [Version 2 file format](#version-2) only. In version 1, use [net](#net). Network mode. Use the same values as the docker client `--net` parameter, plus the special form `service:[service name]`. network_mode: "bridge" network_mode: "host" network_mode: "none" network_mode: "service:[service name]" network_mode: "container:[container name/id]" ### networks > [Version 2 file format](#version-2) only. In version 1, use [net](#net). Networks to join, referencing entries under the [top-level `networks` key](#network-configuration-reference). services: some-service: networks: - some-network - other-network #### aliases Aliases (alternative hostnames) for this service on the network. Other containers on the same network can use either the service name or this alias to connect to one of the service's containers. Since `aliases` is network-scoped, the same service can have different aliases on different networks. > **Note**: A network-wide alias can be shared by multiple containers, and even by multiple services. If it is, then exactly which container the name will resolve to is not guaranteed. The general format is shown here. 
services: some-service: networks: some-network: aliases: - alias1 - alias3 other-network: aliases: - alias2 In the example below, three services are provided (`web`, `worker`, and `db`), along with two networks (`new` and `legacy`). The `db` service is reachable at the hostname `db` or `database` on the `new` network, and at `db` or `mysql` on the `legacy` network. version: '2' services: web: build: ./web networks: - new worker: build: ./worker networks: - legacy db: image: mysql networks: new: aliases: - database legacy: aliases: - mysql networks: new: legacy: #### ipv4_address, ipv6_address Specify a static IP address for containers for this service when joining the network. The corresponding network configuration in the [top-level networks section](#network-configuration-reference) must have an `ipam` block with subnet and gateway configurations covering each static address. If IPv6 addressing is desired, the `com.docker.network.enable_ipv6` driver option must be set to `true`. An example: version: '2' services: app: image: busybox command: ifconfig networks: app_net: ipv4_address: 172.16.238.10 ipv6_address: 2001:3984:3989::10 networks: app_net: driver: bridge driver_opts: com.docker.network.enable_ipv6: "true" ipam: driver: default config: - subnet: 172.16.238.0/24 gateway: 172.16.238.1 - subnet: 2001:3984:3989::/64 gateway: 2001:3984:3989::1 ### pid pid: "host" Sets the PID mode to the host PID mode. This turns on sharing between container and the host operating system the PID address space. Containers launched with this flag will be able to access and manipulate other containers in the bare-metal machine's namespace and vise-versa. ### ports Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container port (a random host port will be chosen). > **Note:** When mapping ports in the `HOST:CONTAINER` format, you may experience > erroneous results when using a container port lower than 60, because YAML will > parse numbers in the format `xx:yy` as sexagesimal (base 60). For this reason, > we recommend always explicitly specifying your port mappings as strings. ports: - "3000" - "3000-3005" - "8000:8000" - "9090-9091:8080-8081" - "49100:22" - "127.0.0.1:8001:8001" - "127.0.0.1:5000-5010:5000-5010" ### security_opt Override the default labeling scheme for each container. security_opt: - label:user:USER - label:role:ROLE ### stop_signal Sets an alternative signal to stop the container. By default `stop` uses SIGTERM. Setting an alternative signal using `stop_signal` will cause `stop` to send that signal instead. stop_signal: SIGUSR1 ### ulimits Override the default ulimits for a container. You can either specify a single limit as an integer or soft/hard limits as a mapping. ulimits: nproc: 65535 nofile: soft: 20000 hard: 40000 ### volumes, volume\_driver Mount paths or named volumes, optionally specifying a path on the host machine (`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`). For [version 2 files](#version-2), named volumes need to be specified with the [top-level `volumes` key](#volume-configuration-reference). When using [version 1](#version-1), the Docker Engine will create the named volume automatically if it doesn't exist. You can mount a relative path on the host, which will expand relative to the directory of the Compose configuration file being used. Relative paths should always begin with `.` or `..`. 
    volumes:
      # Just specify a path and let the Engine create a volume
      - /var/lib/mysql

      # Specify an absolute path mapping
      - /opt/data:/var/lib/mysql

      # Path on the host, relative to the Compose file
      - ./cache:/tmp/cache

      # User-relative path
      - ~/configs:/etc/configs/:ro

      # Named volume
      - datavolume:/var/lib/mysql

If you do not use a host path, you may specify a `volume_driver`.

    volume_driver: mydriver

Note that for [version 2 files](#version-2), this driver will not apply to named volumes (you should use the `driver` option when [declaring the volume](#volume-configuration-reference) instead). For [version 1](#version-1), both named volumes and container volumes will use the specified driver.

> Note: No path expansion will be done if you have also specified a
> `volume_driver`.

See [Docker Volumes](https://docs.docker.com/engine/userguide/dockervolumes/) and [Volume Plugins](https://docs.docker.com/engine/extend/plugins_volume/) for more information.

### volumes_from

Mount all of the volumes from another service or container, optionally specifying read-only access (`ro`) or read-write access (`rw`). If no access level is specified, then read-write will be used.

    volumes_from:
      - service_name
      - service_name:ro
      - container:container_name
      - container:container_name:rw

> **Note:** The `container:...` formats are only supported in the
> [version 2 file format](#version-2). In [version 1](#version-1), you can use
> container names without marking them as such:
>
>     - service_name
>     - service_name:ro
>     - container_name
>     - container_name:rw

### cpu\_shares, cpu\_quota, cpuset, domainname, hostname, ipc, mac\_address, mem\_limit, memswap\_limit, privileged, read\_only, restart, shm\_size, stdin\_open, tty, user, working\_dir

Each of these is a single value, analogous to its [docker run](https://docs.docker.com/engine/reference/run/) counterpart.

    cpu_shares: 73
    cpu_quota: 50000
    cpuset: 0,1

    user: postgresql
    working_dir: /code

    domainname: foo.com
    hostname: foo
    ipc: host
    mac_address: 02:42:ac:11:65:43

    mem_limit: 1000000000
    memswap_limit: 2000000000
    privileged: true

    restart: always

    read_only: true
    shm_size: 64M
    stdin_open: true
    tty: true

## Volume configuration reference

While it is possible to declare volumes on the fly as part of the service declaration, this section allows you to create named volumes that can be reused across multiple services (without relying on `volumes_from`), and are easily retrieved and inspected using the docker command line or API. See the [docker volume](https://docs.docker.com/engine/reference/commandline/volume_create/) subcommand documentation for more information.

### driver

Specify which volume driver should be used for this volume. Defaults to `local`. The Docker Engine will return an error if the driver is not available.

    driver: foobar

### driver_opts

Specify a list of options as key-value pairs to pass to the driver for this volume. Those options are driver-dependent - consult the driver's documentation for more information. Optional.

    driver_opts:
      foo: "bar"
      baz: 1

### external

If set to `true`, specifies that this volume has been created outside of Compose. `docker-compose up` will not attempt to create it, and will raise an error if it doesn't exist.

`external` cannot be used in conjunction with other volume configuration keys (`driver`, `driver_opts`).

In the example below, instead of attempting to create a volume called `[projectname]_data`, Compose will look for an existing volume simply called `data` and mount it into the `db` service's containers.
    version: '2'

    services:
      db:
        image: postgres
        volumes:
          - data:/var/lib/postgres/data

    volumes:
      data:
        external: true

You can also specify the name of the volume separately from the name used to refer to it within the Compose file:

    volumes:
      data:
        external:
          name: actual-name-of-volume

## Network configuration reference

The top-level `networks` key lets you specify networks to be created. For a full explanation of Compose's use of Docker networking features, see the [Networking guide](networking.md).

### driver

Specify which driver should be used for this network.

The default driver depends on how the Docker Engine you're using is configured, but in most instances it will be `bridge` on a single host and `overlay` on a Swarm.

The Docker Engine will return an error if the driver is not available.

    driver: overlay

### driver_opts

Specify a list of options as key-value pairs to pass to the driver for this network. Those options are driver-dependent - consult the driver's documentation for more information. Optional.

    driver_opts:
      foo: "bar"
      baz: 1

### ipam

Specify custom IPAM config. This is an object with several properties, each of which is optional:

- `driver`: Custom IPAM driver, instead of the default.
- `config`: A list with zero or more config blocks, each containing any of the following keys:
    - `subnet`: Subnet in CIDR format that represents a network segment
    - `ip_range`: Range of IPs from which to allocate container IPs
    - `gateway`: IPv4 or IPv6 gateway for the master subnet
    - `aux_addresses`: Auxiliary IPv4 or IPv6 addresses used by Network driver, as a mapping from hostname to IP

A full example:

    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
          ip_range: 172.28.5.0/24
          gateway: 172.28.5.254
          aux_addresses:
            host1: 172.28.1.5
            host2: 172.28.1.6
            host3: 172.28.1.7

### external

If set to `true`, specifies that this network has been created outside of Compose. `docker-compose up` will not attempt to create it, and will raise an error if it doesn't exist.

`external` cannot be used in conjunction with other network configuration keys (`driver`, `driver_opts`, `ipam`).

In the example below, `proxy` is the gateway to the outside world. Instead of attempting to create a network called `[projectname]_outside`, Compose will look for an existing network simply called `outside` and connect the `proxy` service's containers to it.

    version: '2'

    services:
      proxy:
        build: ./proxy
        networks:
          - outside
          - default
      app:
        build: ./app
        networks:
          - default

    networks:
      outside:
        external: true

You can also specify the name of the network separately from the name used to refer to it within the Compose file:

    networks:
      outside:
        external:
          name: actual-name-of-network

## Versioning

There are two versions of the Compose file format:

- Version 1, the legacy format. This is specified by omitting a `version` key at the root of the YAML.
- Version 2, the recommended format. This is specified with a `version: '2'` entry at the root of the YAML.

To move your project from version 1 to 2, see the [Upgrading](#upgrading) section.

> **Note:** If you're using
> [multiple Compose files](extends.md#different-environments) or
> [extending services](extends.md#extending-services), each file must be of the
> same version - you cannot mix version 1 and 2 in a single project.

Several things differ depending on which version you use:

- The structure and permitted configuration keys
- The minimum Docker Engine version you must be running
- Compose's behaviour with regard to networking

These differences are explained below.
### Version 1 Compose files that do not declare a version are considered "version 1". In those files, all the [services](#service-configuration-reference) are declared at the root of the document. Version 1 is supported by **Compose up to 1.6.x**. It will be deprecated in a future Compose release. Version 1 files cannot declare named [volumes](#volume-configuration-reference), [networks](networking.md) or [build arguments](#args). Example: web: build: . ports: - "5000:5000" volumes: - .:/code links: - redis redis: image: redis ### Version 2 Compose files using the version 2 syntax must indicate the version number at the root of the document. All [services](#service-configuration-reference) must be declared under the `services` key. Version 2 files are supported by **Compose 1.6.0+** and require a Docker Engine of version **1.10.0+**. Named [volumes](#volume-configuration-reference) can be declared under the `volumes` key, and [networks](#network-configuration-reference) can be declared under the `networks` key. Simple example: version: '2' services: web: build: . ports: - "5000:5000" volumes: - .:/code redis: image: redis A more extended example, defining volumes and networks: version: '2' services: web: build: . ports: - "5000:5000" volumes: - .:/code networks: - front-tier - back-tier redis: image: redis volumes: - redis-data:/var/lib/redis networks: - back-tier volumes: redis-data: driver: local networks: front-tier: driver: bridge back-tier: driver: bridge ### Upgrading In the majority of cases, moving from version 1 to 2 is a very simple process: 1. Indent the whole file by one level and put a `services:` key at the top. 2. Add a `version: '2'` line at the top of the file. It's more complicated if you're using particular configuration features: - `dockerfile`: This now lives under the `build` key: build: context: . dockerfile: Dockerfile-alternate - `log_driver`, `log_opt`: These now live under the `logging` key: logging: driver: syslog options: syslog-address: "tcp://192.168.0.42:123" - `links` with environment variables: As documented in the [environment variables reference](link-env-deprecated.md), environment variables created by links have been deprecated for some time. In the new Docker network system, they have been removed. You should either connect directly to the appropriate hostname or set the relevant environment variable yourself, using the link hostname: web: links: - db environment: - DB_PORT=tcp://db:5432 - `external_links`: Compose uses Docker networks when running version 2 projects, so links behave slightly differently. In particular, two containers must be connected to at least one network in common in order to communicate, even if explicitly linked together. Either connect the external container to your app's [default network](networking.md), or connect both the external container and your service's containers to an [external network](networking.md#using-a-pre-existing-network). - `net`: This is now replaced by [network_mode](#network_mode): net: host -> network_mode: host net: bridge -> network_mode: bridge net: none -> network_mode: none If you're using `net: "container:[service name]"`, you must now use `network_mode: "service:[service name]"` instead. net: "container:web" -> network_mode: "service:web" If you're using `net: "container:[container name/id]"`, the value does not need to change. 
net: "container:cont-name" -> network_mode: "container:cont-name" net: "container:abc12345" -> network_mode: "container:abc12345" - `volumes` with named volumes: these must now be explicitly declared in a top-level `volumes` section of your Compose file. If a service mounts a named volume called `data`, you must declare a `data` volume in your top-level `volumes` section. The whole file might look like this: version: '2' services: db: image: postgres volumes: - data:/var/lib/postgresql/data volumes: data: {} By default, Compose creates a volume whose name is prefixed with your project name. If you want it to just be called `data`, declare it as external: volumes: data: external: true ## Variable substitution Your configuration options can contain environment variables. Compose uses the variable values from the shell environment in which `docker-compose` is run. For example, suppose the shell contains `EXTERNAL_PORT=8000` and you supply this configuration: web: build: . ports: - "${EXTERNAL_PORT}:5000" When you run `docker-compose up` with this configuration, Compose looks for the `EXTERNAL_PORT` environment variable in the shell and substitutes its value in. In this example, Compose resolves the port mapping to `"8000:5000"` before creating the `web` container. If an environment variable is not set, Compose substitutes with an empty string. In the example above, if `EXTERNAL_PORT` is not set, the value for the port mapping is `:5000` (which is of course an invalid port mapping, and will result in an error when attempting to create the container). Both `$VARIABLE` and `${VARIABLE}` syntax are supported. Extended shell-style features, such as `${VARIABLE-default}` and `${VARIABLE/foo/bar}`, are not supported. You can use a `$$` (double-dollar sign) when your configuration needs a literal dollar sign. This also prevents Compose from interpolating a value, so a `$$` allows you to refer to environment variables that you don't want processed by Compose. web: build: . command: "$$VAR_NOT_INTERPOLATED_BY_COMPOSE" If you forget and use a single dollar sign (`$`), Compose interprets the value as an environment variable and will warn you: The VAR_NOT_INTERPOLATED_BY_COMPOSE is not set. Substituting an empty string. ## Compose documentation - [User guide](index.md) - [Installing Compose](install.md) - [Get started with Django](django.md) - [Get started with Rails](rails.md) - [Get started with WordPress](wordpress.md) - [Command line reference](./reference/index.md) compose-1.8.0/docs/django.md000066400000000000000000000157261274620702700157230ustar00rootroot00000000000000 # Quickstart: Docker Compose and Django This quick-start guide demonstrates how to use Docker Compose to set up and run a simple Django/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md). ### Define the project components For this project, you need to create a Dockerfile, a Python dependencies file, and a `docker-compose.yml` file. 1. Create an empty project directory. You can name the directory something easy for you to remember. This directory is the context for your application image. The directory should only contain resources to build that image. 2. Create a new file called `Dockerfile` in your project directory. The Dockerfile defines an application's image content via one or more build commands that configure that image. Once built, you can run the image in a container. 
For more information on `Dockerfiles`, see the [Docker user guide](/engine/tutorials/dockerimages.md#building-an-image-from-a-dockerfile) and the [Dockerfile reference](/engine/reference/builder.md).

3. Add the following content to the `Dockerfile`.

       FROM python:2.7
       ENV PYTHONUNBUFFERED 1
       RUN mkdir /code
       WORKDIR /code
       ADD requirements.txt /code/
       RUN pip install -r requirements.txt
       ADD . /code/

   This `Dockerfile` starts with a Python 2.7 base image. The base image is modified by adding a new `code` directory. The base image is further modified by installing the Python requirements defined in the `requirements.txt` file.

4. Save and close the `Dockerfile`.

5. Create a `requirements.txt` in your project directory.

   This file is used by the `RUN pip install -r requirements.txt` command in your `Dockerfile`.

6. Add the required software in the file.

       Django
       psycopg2

7. Save and close the `requirements.txt` file.

8. Create a file called `docker-compose.yml` in your project directory.

   The `docker-compose.yml` file describes the services that make your app. In this example those services are a web server and a database. The compose file also describes which Docker images these services use, how they link together, and any volumes they might need mounted inside the containers. Finally, the `docker-compose.yml` file describes which ports these services expose. See the [`docker-compose.yml` reference](compose-file.md) for more information on how this file works.

9. Add the following configuration to the file.

       version: '2'
       services:
         db:
           image: postgres
         web:
           build: .
           command: python manage.py runserver 0.0.0.0:8000
           volumes:
             - .:/code
           ports:
             - "8000:8000"
           depends_on:
             - db

   This file defines two services: the `db` service and the `web` service.

10. Save and close the `docker-compose.yml` file.

### Create a Django project

In this step, you create a Django starter project by building the image from the build context defined in the previous procedure.

1. Change to the root of your project directory.

2. Create the Django project using the `docker-compose` command.

       $ docker-compose run web django-admin.py startproject composeexample .

   This instructs Compose to run `django-admin.py startproject composeexample` in a container, using the `web` service's image and configuration. Because the `web` image doesn't exist yet, Compose builds it from the current directory, as specified by the `build: .` line in `docker-compose.yml`.

   Once the `web` service image is built, Compose runs it and executes the `django-admin.py startproject` command in the container. This command instructs Django to create a set of files and directories representing a Django project.

3. After the `docker-compose` command completes, list the contents of your project.

       $ ls -l
       drwxr-xr-x 2 root root   composeexample
       -rw-rw-r-- 1 user user   docker-compose.yml
       -rw-rw-r-- 1 user user   Dockerfile
       -rwxr-xr-x 1 root root   manage.py
       -rw-rw-r-- 1 user user   requirements.txt

   If you are running Docker on Linux, the files `django-admin` created are owned by root. This happens because the container runs as the root user. Change the ownership of the new files.

       sudo chown -R $USER:$USER .

   If you are running Docker on Mac or Windows, you should already have ownership of all files, including those generated by `django-admin`. List the files to verify this.
$ ls -l total 32 -rw-r--r-- 1 user staff 145 Feb 13 23:00 Dockerfile drwxr-xr-x 6 user staff 204 Feb 13 23:07 composeexample -rw-r--r-- 1 user staff 159 Feb 13 23:02 docker-compose.yml -rwxr-xr-x 1 user staff 257 Feb 13 23:07 manage.py -rw-r--r-- 1 user staff 16 Feb 13 23:01 requirements.txt ### Connect the database In this section, you set up the database connection for Django. 1. In your project directory, edit the `composeexample/settings.py` file. 2. Replace the `DATABASES = ...` with the following: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'postgres', 'USER': 'postgres', 'HOST': 'db', 'PORT': 5432, } } These settings are determined by the [postgres](https://hub.docker.com/_/postgres/) Docker image specified in `docker-compose.yml`. 3. Save and close the file. 4. Run the `docker-compose up` command. $ docker-compose up Starting composepractice_db_1... Starting composepractice_web_1... Attaching to composepractice_db_1, composepractice_web_1 ... db_1 | PostgreSQL init process complete; ready for start up. ... db_1 | LOG: database system is ready to accept connections db_1 | LOG: autovacuum launcher started .. web_1 | Django version 1.8.4, using settings 'composeexample.settings' web_1 | Starting development server at http://0.0.0.0:8000/ web_1 | Quit the server with CONTROL-C. At this point, your Django app should be running at port `8000` on your Docker host. If you are using a Docker Machine VM, you can use the `docker-machine ip MACHINE_NAME` to get the IP address. ![Django example](images/django-it-worked.png) ## More Compose documentation - [User guide](index.md) - [Installing Compose](install.md) - [Getting Started](gettingstarted.md) - [Get started with Rails](rails.md) - [Get started with WordPress](wordpress.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.8.0/docs/env-file.md000066400000000000000000000024631274620702700161600ustar00rootroot00000000000000 # Environment file Compose supports declaring default environment variables in an environment file named `.env` placed in the folder `docker-compose` command is executed from *(current working directory)*. Compose expects each line in an env file to be in `VAR=VAL` format. Lines beginning with `#` (i.e. comments) are ignored, as are blank lines. > Note: Values present in the environment at runtime will always override > those defined inside the `.env` file. Similarly, values passed via > command-line arguments take precedence as well. Those environment variables will be used for [variable substitution](compose-file.md#variable-substitution) in your Compose file, but can also be used to define the following [CLI variables](reference/envvars.md): - `COMPOSE_API_VERSION` - `COMPOSE_FILE` - `COMPOSE_HTTP_TIMEOUT` - `COMPOSE_PROJECT_NAME` - `DOCKER_CERT_PATH` - `DOCKER_HOST` - `DOCKER_TLS_VERIFY` ## More Compose documentation - [User guide](index.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.8.0/docs/environment-variables.md000066400000000000000000000077071274620702700207730ustar00rootroot00000000000000 # Environment variables in Compose There are multiple parts of Compose that deal with environment variables in one sense or another. This page should help you find the information you need. 
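As a quick orientation, the sketch below combines several of the mechanisms described in the sections that follow. The service name `web` and the file name `web-variables.env` are hypothetical placeholders; note that values set under `environment` take precedence over those loaded via `env_file`, as described in the Compose file reference.

    web:
      image: "webapp:${TAG}"    # substituted from the shell environment (or .env)
      env_file:
        - web-variables.env     # variables read from an external file
      environment:
        - DEBUG=1               # set directly; overrides any DEBUG from env_file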
## Substituting environment variables in Compose files It's possible to use environment variables in your shell to populate values inside a Compose file: web: image: "webapp:${TAG}" For more information, see the [Variable substitution](compose-file.md#variable-substitution) section in the Compose file reference. ## Setting environment variables in containers You can set environment variables in a service's containers with the ['environment' key](compose-file.md#environment), just like with `docker run -e VARIABLE=VALUE ...`: web: environment: - DEBUG=1 ## Passing environment variables through to containers You can pass environment variables from your shell straight through to a service's containers with the ['environment' key](compose-file.md#environment) by not giving them a value, just like with `docker run -e VARIABLE ...`: web: environment: - DEBUG The value of the `DEBUG` variable in the container will be taken from the value for the same variable in the shell in which Compose is run. ## The “env_file” configuration option You can pass multiple environment variables from an external file through to a service's containers with the ['env_file' option](compose-file.md#env-file), just like with `docker run --env-file=FILE ...`: web: env_file: - web-variables.env ## Setting environment variables with 'docker-compose run' Just like with `docker run -e`, you can set environment variables on a one-off container with `docker-compose run -e`: $ docker-compose run -e DEBUG=1 web python console.py You can also pass a variable through from the shell by not giving it a value: $ docker-compose run -e DEBUG web python console.py The value of the `DEBUG` variable in the container will be taken from the value for the same variable in the shell in which Compose is run. ## The “.env” file You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an [environment file](env-file.md) named `.env`: $ cat .env TAG=v1.5 $ cat docker-compose.yml version: '2.0' services: web: image: "webapp:${TAG}" When you run `docker-compose up`, the `web` service defined above uses the image `webapp:v1.5`. You can verify this with the [config command](reference/config.md), which prints your resolved application config to the terminal: $ docker-compose config version: '2.0' services: web: image: 'webapp:v1.5' Values in the shell take precedence over those specified in the `.env` file. If you set `TAG` to a different value in your shell, the substitution in `image` uses that instead: $ export TAG=v2.0 $ docker-compose config version: '2.0' services: web: image: 'webapp:v2.0' ## Configuring Compose using environment variables Several environment variables are available for you to configure the Docker Compose command-line behaviour. They begin with `COMPOSE_` or `DOCKER_`, and are documented in [CLI Environment Variables](reference/envvars.md). ## Environment variables created by links When using the ['links' option](compose-file.md#links) in a [v1 Compose file](compose-file.md#version-1), environment variables will be created for each link. They are documented in the [Link environment variables reference](link-env-deprecated.md). Please note, however, that these variables are deprecated - you should just use the link alias as a hostname instead. compose-1.8.0/docs/extends.md000066400000000000000000000232251274620702700161240ustar00rootroot00000000000000 # Extending services and Compose files Compose supports two methods of sharing common configuration: 1. 
Extending an entire Compose file by [using multiple Compose files](#multiple-compose-files) 2. Extending individual services with [the `extends` field](#extending-services) ## Multiple Compose files Using multiple Compose files enables you to customize a Compose application for different environments or different workflows. ### Understanding multiple Compose files By default, Compose reads two files, a `docker-compose.yml` and an optional `docker-compose.override.yml` file. By convention, the `docker-compose.yml` contains your base configuration. The override file, as its name implies, can contain configuration overrides for existing services or entirely new services. If a service is defined in both files Compose merges the configurations using the rules described in [Adding and overriding configuration](#adding-and-overriding-configuration). To use multiple override files, or an override file with a different name, you can use the `-f` option to specify the list of files. Compose merges files in the order they're specified on the command line. See the [`docker-compose` command reference](./reference/overview.md) for more information about using `-f`. When you use multiple configuration files, you must make sure all paths in the files are relative to the base Compose file (the first Compose file specified with `-f`). This is required because override files need not be valid Compose files. Override files can contain small fragments of configuration. Tracking which fragment of a service is relative to which path is difficult and confusing, so to keep paths easier to understand, all paths must be defined relative to the base file. ### Example use case In this section are two common use cases for multiple compose files: changing a Compose app for different environments, and running administrative tasks against a Compose app. #### Different environments A common use case for multiple files is changing a development Compose app for a production-like environment (which may be production, staging or CI). To support these differences, you can split your Compose configuration into a few different files: Start with a base file that defines the canonical configuration for the services. **docker-compose.yml** web: image: example/my_web_app:latest links: - db - cache db: image: postgres:latest cache: image: redis:latest In this example the development configuration exposes some ports to the host, mounts our code as a volume, and builds the web image. **docker-compose.override.yml** web: build: . volumes: - '.:/code' ports: - 8883:80 environment: DEBUG: 'true' db: command: '-d' ports: - 5432:5432 cache: ports: - 6379:6379 When you run `docker-compose up` it reads the overrides automatically. Now, it would be nice to use this Compose app in a production environment. So, create another override file (which might be stored in a different git repo or managed by a different team). **docker-compose.prod.yml** web: ports: - 80:80 environment: PRODUCTION: 'true' cache: environment: TTL: '500' To deploy with this production Compose file you can run docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d This deploys all three services using the configuration in `docker-compose.yml` and `docker-compose.prod.yml` (but not the dev configuration in `docker-compose.override.yml`). See [production](production.md) for more information about Compose in production. #### Administrative tasks Another common use case is running adhoc or administrative tasks against one or more services in a Compose app. 
This example demonstrates running a database backup. Start with a **docker-compose.yml**. web: image: example/my_web_app:latest links: - db db: image: postgres:latest In a **docker-compose.admin.yml** add a new service to run the database export or backup. dbadmin: build: database_admin/ links: - db To start a normal environment run `docker-compose up -d`. To run a database backup, include the `docker-compose.admin.yml` as well. docker-compose -f docker-compose.yml -f docker-compose.admin.yml \ run dbadmin db-backup ## Extending services Docker Compose's `extends` keyword enables sharing of common configurations among different files, or even different projects entirely. Extending services is useful if you have several services that reuse a common set of configuration options. Using `extends` you can define a common set of service options in one place and refer to it from anywhere. > **Note:** `links`, `volumes_from`, and `depends_on` are never shared between > services using >`extends`. These exceptions exist to avoid > implicit dependencies—you always define `links` and `volumes_from` > locally. This ensures dependencies between services are clearly visible when > reading the current file. Defining these locally also ensures changes to the > referenced file don't result in breakage. ### Understand the extends configuration When defining any service in `docker-compose.yml`, you can declare that you are extending another service like this: web: extends: file: common-services.yml service: webapp This instructs Compose to re-use the configuration for the `webapp` service defined in the `common-services.yml` file. Suppose that `common-services.yml` looks like this: webapp: build: . ports: - "8000:8000" volumes: - "/data" In this case, you'll get exactly the same result as if you wrote `docker-compose.yml` with the same `build`, `ports` and `volumes` configuration values defined directly under `web`. You can go further and define (or re-define) configuration locally in `docker-compose.yml`: web: extends: file: common-services.yml service: webapp environment: - DEBUG=1 cpu_shares: 5 important_web: extends: web cpu_shares: 10 You can also write other services and link your `web` service to them: web: extends: file: common-services.yml service: webapp environment: - DEBUG=1 cpu_shares: 5 links: - db db: image: postgres ### Example use case Extending an individual service is useful when you have multiple services that have a common configuration. The example below is a Compose app with two services: a web application and a queue worker. Both services use the same codebase and share many configuration options. In a **common.yml** we define the common configuration: app: build: . environment: CONFIG_FILE_PATH: /code/config API_KEY: xxxyyy cpu_shares: 5 In a **docker-compose.yml** we define the concrete services which use the common configuration: webapp: extends: file: common.yml service: app command: /code/run_web_app ports: - 8080:8080 links: - queue - db queue_worker: extends: file: common.yml service: app command: /code/run_worker links: - queue ## Adding and overriding configuration Compose copies configurations from the original service over to the local one. If a configuration option is defined in both the original service the local service, the local value *replaces* or *extends* the original value. For single-value options like `image`, `command` or `mem_limit`, the new value replaces the old value. 
## Adding and overriding configuration

Compose copies configurations from the original service over to the local one.
If a configuration option is defined in both the original service and the local
service, the local value *replaces* or *extends* the original value.

For single-value options like `image`, `command` or `mem_limit`, the new value
replaces the old value.

    # original service
    command: python app.py

    # local service
    command: python otherapp.py

    # result
    command: python otherapp.py

> **Note:** In the case of `build` and `image`, when using
> [version 1 of the Compose file format](compose-file.md#version-1), using one
> option in the local service causes Compose to discard the other option if it
> was defined in the original service.
>
> For example, if the original service defines `image: webapp` and the
> local service defines `build: .` then the resulting service will have
> `build: .` and no `image` option.
>
> This is because `build` and `image` cannot be used together in a version 1
> file.

For the **multi-value options** `ports`, `expose`, `external_links`, `dns`,
`dns_search`, and `tmpfs`, Compose concatenates both sets of values:

    # original service
    expose:
      - "3000"

    # local service
    expose:
      - "4000"
      - "5000"

    # result
    expose:
      - "3000"
      - "4000"
      - "5000"

In the case of `environment`, `labels`, `volumes` and `devices`, Compose
"merges" entries together with locally-defined values taking precedence:

    # original service
    environment:
      - FOO=original
      - BAR=original

    # local service
    environment:
      - BAR=local
      - BAZ=local

    # result
    environment:
      - FOO=original
      - BAR=local
      - BAZ=local

## Compose documentation

- [User guide](index.md)
- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

compose-1.8.0/docs/faq.md

# Frequently asked questions

If you don’t see your question here, feel free to drop by `#docker-compose` on
freenode IRC and ask the community.

## Can I control service startup order?

Yes - see [Controlling startup order](startup-order.md).

## Why do my services take 10 seconds to recreate or stop?

Compose stop attempts to stop a container by sending a `SIGTERM`. It then waits
for a [default timeout of 10 seconds](./reference/stop.md). After the timeout,
a `SIGKILL` is sent to the container to forcefully kill it. If you are waiting
for this timeout, it means that your containers aren't shutting down when they
receive the `SIGTERM` signal.

There has already been a lot written about this problem of
[processes handling signals](https://medium.com/@gchudnov/trapping-signals-in-docker-containers-7a57fdda7d86)
in containers.

To fix this problem, try the following:

* Make sure you're using the JSON form of `CMD` and `ENTRYPOINT` in your
  Dockerfile. For example use `["program", "arg1", "arg2"]` not
  `"program arg1 arg2"`. Using the string form causes Docker to run your
  process using `bash`, which doesn't handle signals properly. Compose always
  uses the JSON form, so don't worry if you override the command or entrypoint
  in your Compose file.

* If you are able, modify the application that you're running to add an
  explicit signal handler for `SIGTERM`.

* Set the `stop_signal` to a signal which the application knows how to handle:

      web:
        build: .
        stop_signal: SIGINT

* If you can't modify the application, wrap the application in a lightweight
  init system (like [s6](http://skarnet.org/software/s6/)) or a signal proxy
  (like [dumb-init](https://github.com/Yelp/dumb-init) or
  [tini](https://github.com/krallin/tini)). Either of these wrappers takes
  care of handling `SIGTERM` properly.

## How do I run multiple copies of a Compose file on the same host?

Compose uses the project name to create unique identifiers for all of a
project's containers and other resources. To run multiple copies of a project,
set a custom project name using the
[`-p` command line option](./reference/overview.md) or the
[`COMPOSE_PROJECT_NAME` environment variable](./reference/envvars.md#compose-project-name).
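For example, to bring up a second copy of an app alongside the first, you could
choose a different project name (the name `myapp2` below is just illustrative):

    docker-compose -p myapp2 up -d

Setting the environment variable instead works the same way:

    COMPOSE_PROJECT_NAME=myapp2 docker-compose up -d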
## What's the difference between `up`, `run`, and `start`?

Typically, you want `docker-compose up`. Use `up` to start or restart all the
services defined in a `docker-compose.yml`. In the default "attached" mode,
you'll see all the logs from all the containers. In "detached" mode (`-d`),
Compose exits after starting the containers, but the containers continue to run
in the background.

The `docker-compose run` command is for running "one-off" or "ad hoc" tasks. It
requires the service name you want to run and only starts containers for
services that the running service depends on. Use `run` to run tests or perform
an administrative task such as removing or adding data to a data volume
container. The `run` command acts like `docker run -ti` in that it opens an
interactive terminal to the container and returns an exit status matching the
exit status of the process in the container.

The `docker-compose start` command is useful only to restart containers that
were previously created, but were stopped. It never creates new containers.

## Can I use JSON instead of YAML for my Compose file?

Yes. [YAML is a superset of JSON](http://stackoverflow.com/a/1729545/444646),
so any JSON file should be valid YAML. To use a JSON file with Compose,
specify the filename to use, for example:

```bash
docker-compose -f docker-compose.json up
```

## Should I include my code with `COPY`/`ADD` or a volume?

You can add your code to the image using the `COPY` or `ADD` directive in a
`Dockerfile`. This is useful if you need to relocate your code along with the
Docker image, for example when you're sending code to another environment
(production, CI, etc.).

You should use a `volume` if you want to make changes to your code and see them
reflected immediately, for example when you're developing code and your server
supports hot code reloading or live-reload.

There may be cases where you'll want to use both. You can have the image
include the code using a `COPY`, and use a `volume` in your Compose file to
include the code from the host during development. The volume overrides the
directory contents of the image.

## Where can I find example compose files?

There are
[many examples of Compose files on github](https://github.com/search?q=in%3Apath+docker-compose.yml+extension%3Ayml&type=Code).

## Compose documentation

- [Installing Compose](install.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

compose-1.8.0/docs/gettingstarted.md

# Getting Started

On this page you build a simple Python web application running on Docker
Compose. The application uses the Flask framework and increments a value in
Redis. While the sample uses Python, the concepts demonstrated here should be
understandable even if you're not familiar with it.

## Prerequisites

Make sure you have already
[installed both Docker Engine and Docker Compose](install.md). You don't need
to install Python; it is provided by a Docker image.
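If you'd like to confirm both tools are installed before you start, each one
reports its version (the exact output depends on your installation):

    $ docker --version
    $ docker-compose --version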
## Step 1: Setup

1. Create a directory for the project:

        $ mkdir composetest
        $ cd composetest

2. With your favorite text editor create a file called `app.py` in your
   project directory.

        from flask import Flask
        from redis import Redis

        app = Flask(__name__)
        redis = Redis(host='redis', port=6379)

        @app.route('/')
        def hello():
            redis.incr('hits')
            return 'Hello World! I have been seen %s times.' % redis.get('hits')

        if __name__ == "__main__":
            app.run(host="0.0.0.0", debug=True)

3. Create another file called `requirements.txt` in your project directory and
   add the following:

        flask
        redis

   These define the application's dependencies.

## Step 2: Create a Docker image

In this step, you build a new Docker image. The image contains all the
dependencies the Python application requires, including Python itself.

1. In your project directory create a file named `Dockerfile` and add the
   following:

        FROM python:2.7
        ADD . /code
        WORKDIR /code
        RUN pip install -r requirements.txt
        CMD python app.py

   This tells Docker to:

   * Build an image starting with the Python 2.7 image.
   * Add the current directory `.` into the path `/code` in the image.
   * Set the working directory to `/code`.
   * Install the Python dependencies.
   * Set the default command for the container to `python app.py`.

   For more information on how to write Dockerfiles, see the
   [Docker user guide](/engine/tutorials/dockerimages.md#building-an-image-from-a-dockerfile)
   and the [Dockerfile reference](/engine/reference/builder.md).

2. Build the image.

        $ docker build -t web .

   This command builds an image named `web` from the contents of the current
   directory. The command automatically locates the `Dockerfile`, `app.py`,
   and `requirements.txt` files.

## Step 3: Define services

Define a set of services using `docker-compose.yml`:

1. Create a file called `docker-compose.yml` in your project directory and add
   the following:

        version: '2'
        services:
          web:
            build: .
            ports:
              - "5000:5000"
            volumes:
              - .:/code
            depends_on:
              - redis
          redis:
            image: redis

   This Compose file defines two services, `web` and `redis`. The web service:

   * Builds from the `Dockerfile` in the current directory.
   * Forwards the exposed port 5000 on the container to port 5000 on the host
     machine.
   * Mounts the project directory on the host to `/code` inside the container,
     allowing you to modify the code without having to rebuild the image.
   * Links the web service to the Redis service.

   The `redis` service uses the latest public
   [Redis](https://registry.hub.docker.com/_/redis/) image pulled from the
   Docker Hub registry.

## Step 4: Build and run your app with Compose

1. From your project directory, start up your application.

        $ docker-compose up
        Pulling image redis...
        Building web...
        Starting composetest_redis_1...
        Starting composetest_web_1...
        redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3
        web_1  |  * Running on http://0.0.0.0:5000/
        web_1  |  * Restarting with stat

   Compose pulls a Redis image, builds an image for your code, and starts the
   services you defined.

2. Enter `http://0.0.0.0:5000/` in a browser to see the application running.

   If you're using Docker on Linux natively, then the web app should now be
   listening on port 5000 on your Docker daemon host. If `http://0.0.0.0:5000`
   doesn't resolve, you can also try `http://localhost:5000`.

   If you're using Docker Machine on a Mac, use `docker-machine ip MACHINE_VM`
   to get the IP address of your Docker host. Then, open
   `http://MACHINE_VM_IP:5000` in a browser.

   You should see a message in your browser saying:

   `Hello World! I have been seen 1 times.`

3. Refresh the page. The number should increment.
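As an aside, a browser isn't required for this check. Assuming `curl` is
installed on your host, each request made from the command line increments the
counter in the same way (the count shown here is illustrative):

    $ curl http://0.0.0.0:5000
    Hello World! I have been seen 2 times.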
## Step 5: Experiment with some other commands

If you want to run your services in the background, you can pass the `-d` flag
(for "detached" mode) to `docker-compose up` and use `docker-compose ps` to
see what is currently running:

    $ docker-compose up -d
    Starting composetest_redis_1...
    Starting composetest_web_1...
    $ docker-compose ps
            Name                   Command                State        Ports
    -----------------------------------------------------------------------------
    composetest_redis_1   /usr/local/bin/run           Up
    composetest_web_1     /bin/sh -c python app.py     Up      5000->5000/tcp

The `docker-compose run` command allows you to run one-off commands for your
services. For example, to see what environment variables are available to the
`web` service:

    $ docker-compose run web env

See `docker-compose --help` to see other available commands. You can also
install [command completion](completion.md) for the bash and zsh shell, which
will also show you available commands.

If you started Compose with `docker-compose up -d`, you'll probably want to
stop your services once you've finished with them:

    $ docker-compose stop
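`docker-compose stop` leaves the containers in place so they can be started
again quickly. If you also want to remove them, Compose provides
`docker-compose down`, which stops and removes the containers created by `up`:

    $ docker-compose down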
At this point, you have seen the basics of how Compose works.

## Where to go next

- Next, try the quick start guide for [Django](django.md),
  [Rails](rails.md), or [WordPress](wordpress.md).
- [Explore the full list of Compose commands](./reference/index.md)
- [Compose configuration file reference](compose-file.md)

compose-1.8.0/docs/images/django-it-worked.png (binary PNG image data omitted)
compose-1.8.0/docs/images/rails-welcome.png (binary PNG image data omitted)
compose-1.8.0/docs/images/wordpress-files.png (binary PNG image data omitted)
Rp\}+H\R:3+Bz_hsJOC^>48c\)Broq{j _A+Bq!vf_(Ky=i5 7迋罡".'w)h^IK1A;B m_zIPPNLPƷHjfff~q)h?ԗ~-f_jIP4RAD(u_h\R:UloNDGG 7o/zƟϣ㿏c|{|?ffff?Ax'?LGzgY?]__c :qx.W2sSо4߉о$ד44'<@=Q6rڊc?5/ƼO\p^m2|=R%P/(о,({ r(QcF_:ڔ:!p]{>?CM'}7e9=/1\r(n[/^B^A+B"t5 {C #fVNUq.\0T)RCC|0'>p333k/ADK_~R'[tމn@2| CIihZC;DKT~LE+F$ BtP{ ٵV@H1~Qx*Yx wU|f9z܆:@>2| n!zײ| Z'`*88UW"_=i&A)eSOgo%333F'n :3 3mo/sX_NAO!ih_?V>NEWe"1q`)@ R)X5333b 3ЌNto.g~⸜wzzt{J(D٤{ZA8NIQqp٭D1^M1Tᰊ3we)hG4D٤Z3 FdͮH<(eSli)!ćlfffv+UFf' :qV'3Ϻt mJSFLR6RipRjfffvkBf[Չ'aYv_N?]= ]/@4S\( RBiv`VЬS:qW'.[3W0ePZt=Q@ZQZaRJ٭ )v 8+:WeGyA K +f´4@ ^kffff+FfΎOv +-QZAtiU333[)vQ; >']рҠÔ*P;TSz홙ԭCupt}.|vCƲ|j:Biv`[9)S:WS]|>BS&FO:L;N;R;Vnbδ>Aqbt 5J4uYeo5:q!mv#jfffvk8۪3܅|ҡ/H;J8P5333cu[ulѹ Oz.@,7PמQ+usupnF;JqJh5333]T&6{}ynffffv_.STIENDB`compose-1.8.0/docs/images/wordpress-lang.png000066400000000000000000000727051274620702700210610ustar00rootroot00000000000000PNG  IHDRquIDATxJ033Æa\/!RJ*vpYzγm-QJ)zk1~l>\RJ)j`ׂOJ)L-Azգ݅u ^ZRJ)j>B'cG;:yzzzU[ݕkjSJ)2D6ȇ`>yhGXa˗Qo)RM<@p3 vuzӧOơ)R1ClT { :gOx{Qy7Hf7eOycdg֦RJ%޸=q@g#zxxcwܘ5c `=w ٞٔRJ)61Q)MdA1gOOߌ׆7nG?RIhl`P7֮U[ݻٔRJ)O[Ψt`Γ[hM4ކxNc Ǖ'aƣrL܀֦m$y@8-[mkw<[k<}d{4RJpM/nxb4@iֆx<›GSw7l 5퇶Ⱦ~lkO<[ڳlJ)'ncz:0tAMnd;]͍-Jwy°43S%O)t"-\dg?й ֖w=u' q o5<yXD̓9'b= qT? c1fV\"c#K8JtV\N['*~:Iʷ'*o gakNڒvpRAJ<5kXo@\ߌ ~=1#o$r-]sYrqu2"qNauX%ޞq$ / OcstVn*|+CXz/~\| ׋cc1F:dt̝tM.qDӔ+i@\ؓ FB#rn^{{sU'ߓw&ޗ#w\ .fXZp#ߌc}q\>)| c1gǸ%# \pS:r:DpS~$+q{ /BVy:'Ozͫ +de^l7`Qc}k{o۸~11Ƙt["y*Mwt2]8 4܆uq_g66./ oUs*N݄!t`Nlވg$|?ϱ~k0cpHOHzI9,ܥ9Ni|Hʉ6^.\aQ*p5jw˯l̑s;Iqo1Ƙ)p .ĉGJYO5kb6om=}+t ǂLc1SqSx o/UUy;};S4c1DNHDZCa*p xwv{s,4M3cpą81ݸw o4إQuZ[2_rG3|p2)m1p .ĉGJP\l[5k9=m<7SjrH)m1p .p\t k2[ _u[V91> ]szۂ-NsDwJf1tEq$LgW0:U5zU>(U},kٰ~0KtMiӌ1ܗN<#qe-fåF $^`Ewݬy}Qrn6c}8OHӝkqiT^%C^v2MeÕ*dD3[CW*kzxV˭TzR زE,ic K>lu*q*nǽڭxXNٻ>` tfIX?Niӌ1ܗ|T p*a*kk)lpQM2?y=x >Miӌ1ܗ|5]dSdm/x+p<5'܍`.x>Oiӌ1܇ӅsƋ 5F'pטJ9`^~Ҧc1O8Qd8Ln8Gw \=Ww$W<<7Nͱ76cbL9w|]һ֌ #ig3k^{*sS}8p_82ւWǵ^va+kFňpąxڍmAb:qo'໻]/n?C;}8]~o9.q$\p(.]ܣb<~D^"p.ɹ/o}806s \A~֕xO^>Cy/|w EFD(pm;՟=?Z2F윓su.hXg۴A[Iہ")P"F!~/UMD8E?#f|I^;f,tx~݇ GEnE|rM5D:i>j\D,D_Ql=^CC#+}j<." !ޯ Yr;" f>Էjr\DE"$K!Ĩ "5T瘈(Be8RdYO)5kM`Qrq(oƎ;8תE y ճ7!kJc"e8MyT[>H=$g(ph"z̛-K7'xXw8&Vy_B7e+ZTbRR,1^8M̤5^RVۂw6 ۮ1EVGT1i̷Ʃ18{ѯEC.½0JEV yo^H?k݆:J>1E1q2B+զ(A1^G҇|/9/^[*%=},=*BmCWbp|9Fܱ|1`"e.'KmŸ~ 4vn*\Ysd\ސU3q?5eZDkBb/=~?-p\. \Q#FN%SI;ϢzDġ8juR}8KQ \sb$6)F'-E+pU>bVh⌶b\C=?]D+p<;gGbV ;,EQ \{'T3޼Hep,٢mgCj<Зz[$öQ \;?gݭhG,]z p<" \dovhNd?`8˱~!t|$" \/mw'ז6hܛ!(t9ΈѫvxP" \dm (BO\J6Q \e^btz](&k ./ QTbX5BmЊϡ؎6 ȭB@TKowJ|H[T{p (p.rjC`Ug?>VwK>ݍC AD+p$N;"ykM;ї$9(p.x:Ր[d`X[D޵;9=Q \$Dt9Yȷmb9wrݝ/y}>" \2jFg h;ھv[9\Y%!,بdɭU^?>_-s;GD+pfBUv?>1x2Pͷov(p.lYXyU9gkJ\@5Q \֊UxmbHmwB{tkX}Wv;q[l2mA0EVI=-*6ywZ=>}-N|^{}u:E ްEdן~E;90EbβBut?>ɻֱ<5>Q \$F}[6ιĚ:> ܃6##מpj?>??<ӽNs.%b7*ӆ{۝z{Z\s &`6vH!q?8>?.3E/夶bj;LVkIB9i [Dl_9ZKj;/1C sZɛkU" \@l!hxVqwVBʛkuҚAG%^πC}P|_B(o.rcr1>%(#I}'W.rٳcByy-7wQ#v& /ƻg5ƻ軈(p3.u5^?nx]G5Ow-F_~Znz@9qw=)xРO-ߋt"oFc(pAy=8[i}xQy@R^"" \DUqrK99wn[Dc LnJ ?x76fC[q/XoxsqNGĘY*ߟ.a> F)C4.p8\ pm8\ p@.p@ p9 qWG.?;3~PNPBLzv/G\(-lc§m(|TeEp|kp#xLݛ_,;dǑcE @I kDvhCZİDy 6ұ[_u;S u tpZAi;7@S|)-}S6xJGϧ$WЦ/&P_wV F̦]cįNѸ;(CK׼p\4RRVy4~{o5S6t4.mخߓ ,G]m"WsXV} AQy I?#Wj(G!@ ; "8/T D\p(0i_HT-᳏9o pibenoRNf]$4/.c\#h">f5/:\{wo=ֈgl u\|iiq.* :)S#+\G+7Ī/{p-0R={: * 1Ɨ1zr1Й=T{N:k]TXYmlQH@G̗:~H&l I|<,TT.KNt*.-b6oG)jum/]ޠv^hL6ƉR?X`ີhoFVMM7B0 ,K2㲱NT.F.S4buن".t~?e6|GCw؈(*U"n]ȗOp7nr.tJbMg8oh.^|a.-C&.H$\jQ̫uρ0Wx}5p\"[zmҽ `/k)[\TyȭT.NYnջZ"`i6ץn]â_[*໪UɶR%lg|/ev|1"n>;=31{:V553ލMT./3`. 
8iqTgj6z^%guy~T/ ͮL,ӡ0]­Kƫ$KsWOpQ{!8NSdc`ec'LFg5=_A`踴pw+b,(ƂeY0fro)p 7ԗ.@~[,CN#h@ioǓ띯e jڛ5f<^⚁nqf @99p}h X6t[ƫp>~D[\u.sԍ{NSPRU~?Ĭ0kd; K9 6-fvpdL/,ѯ&釕JoN'_QP"| yK^q1a.]2);k{κB(p u7a3VW86@LxuBeLND}5 0KeX|9h٤\[K]K-*x',i&zfC: v;gxN2~]B8|KvG2sՍBߥDSI}| \G_/|]c᝸҈yU1j~B8W \?!scAlw>fɴۿW,wJ\FE"lo9~)+> DFχ]OyTL8:p`:kM-î|:uޞw,LZ^a7``LU7E!M8> L6750>4 \)}CM6l v,t7 rs&g!8NmG$ a:6BD^DBno-"/Xui%67&ޖ%oj&5BNSs\AE7"3N˶ ]\wJ_#Bl 豱,<BDS*n >{G3q") "PP(KW\g(p \ J4'aLv)N|MOWew7;(@;iu'=uy),bnp-OGPYY8NS- i6~NYE EGݫ|OE)p 7l/ϊepyX1Xc $Ԇ|8ѿaWJ |P8n"/]PY 3"O:^|-c4doF+;Z4^`VnBO \ɤ+;foGGu68Τ4'T:!1܌%~GZ! CMXAeloLD$ vo<:~2&'"\ׇ1@,!K$Nk P3[م; 녚ϝ\m ԴeB'dP8N&/}l}Lˏ~r}a_pFE"lo9~Ui:[gu!)pL0Yab ~O!翍kwBNSxK2R H(=BȽ)p <67 7+qf5+뎰]!P8pƊz(X  |NBNS8!P8NB(p )pBNS8!P8NB(p )pBNSxTZMU ~1$;8?0u^Y V.S>P8NHb"^&!GAҭiw .#r OK8!8NS>ʒ'u_/*p7')p <'2=,--h*=+QY'M5X] '`g58݄7-6T0a{}}GQSPl J J~ٹ7%.9Ҫ&`s V>'' wp8=1,縿vu Jl - 7 H4؍5Nx>ƞjW &^k聙Dѯ}x|+MP8NGPSG"w_VwUr-qwS^t3pS$TI \'G8x}EeBr_i` 3ޫVi.q݌\E$,q&-9vMeu(c2P/oTJSl"Ƕik2)[eQ˄g+-Ƣ"S'):|8:3f(px|F')p \'yͫ:{Zw*{;asl{j8,R ymÑ?#,n.iS1܂%vݨq1GS~_EzS?#9tr:OSP8N8? ztvg۾S%r1l˼&J+o\-İ_?o1yw(h'G^oޭ͝V[8MCNSN2rEޒ06yb,(&Q'.$8Xq]-$ߗ=7!8NSmNdVq,LG/ѽg>},,)z&"_k/8J$luBDS9 u*[TW< czX[9#aJ{ntG9ICU5hT= )PO9a d#u 4 dKӱĐޟƎ9ʽ78H9e|h9a]8!8NS&HD[Cx9# H!4gLzl\$>aO4s ! 4JY-x?*6@ޤx (ZOszEPS0YqvR>0%NZ]D,r%X]wde oL*E{.[Wp 'fs[zݹSk /n{BNSP2Zw${+UvO6v5J]/3s|4 "ј֊}V}zu]_nr:g ;aSa A,,B({(p B(p  !)p BS8N!8NSBS8>3!8NS; 7#;!8NS~?agtBNSr!)&;0)-d+};FJy+T᭑sҢOWx6'j_܂iK D{g%o9ukvqm6EU{#W10u yM[)o7BS8>,_}ϛ/g/qd+B[ ᾅM-PEפ*! "nً5 5;?]OMxpFqH6D]ׄP{)p 4T1N$^&"62ϩ;+q <'-Sy:"QļGÂC|PHQ 6Hoaנ&Y42B(p 5+DI8]* ˹MpC*,YO!*JVc?`?t`ELhc2^hAW*>`!KRyf1P8Il'; WMZ{f.p7`]~ '.GkD6ZHq8cr{ U8!8>7Il+rM3b +U0Sk7#ð1h[qd+y'] q2`%2[{lKvcMB= Kߛ')9O8| \?7ge ɼ2niYZ_nܞWe0o)# wu(pB(p E~>zI5(1qK[vzEp-aڅK,1B5 s? vx'՞P7-s5|GQyB;FZS :ݹ|s7g  v7@3S޲½>90pi*C[Tݙh'.}*XεՂ?"dٚåJ>f!)p3l&%nܓ̒8cYB(p *ptG)p 4^!)p |a%NNS8!Ps%Q8!P8NBNS8!)p B(p  !)p Ӡ)p }͗?J/' ic-CM׽[1M AGu+.e-3z~žaBS80C?0F+[$DƊEq]bzb$DCU-E E(BS)p D𽩭a&¶n2>Gk'5ēI"UV[{U38!)p < hmAOZDP_Pe\./( cP7b +:CB(p cgl?e}lE,oLՏ¦o`_`W%LN)p =1S ܠ_< $WwT )pJU:Lnz[>{Rc]lf[ 9nba4{>\ۃp۝Kح(0F]B(p 쒃S<{[ BS8yU[wŵi u )`. 
'\_\3}iA}JDϫa=&P8NB% ' )pB)p 'P8NB@$ ' )pB)p 'P8NBNS8!P8N BNSx/J/y$;~:!)9)*Y~Q*=+"^.}^EN`d ܏G5d !8NY 'I|8%K !8N?ߕ՝<ƢY9^m#X(՞Wk|,D[ipƄQY&UdJQBMX#^C)1˪o(hVU+!)9-/10dY=kb>NuyV]u82u^TWa )p AxO5R2u[p "pc q5biL-pY9w{SeM ݤç|`QP8#2K'ee{\ÇGkpilYGkg!M|PݧA[2F~ >;%eAGu!r')KyU<+Ϗ0X=R G?m\6*F NubMBiTr Nw$-p ܞĦ3w6.Ut|PķNN?VK7{\ڕoqRфvs t)K{N1@u CqspLkHPBNSw d9sY{;!=ޟކH]ÌB!)p \'1V*Ţ xw朐f`,KR$)pB(p -E:+C7`[ăm[&>Ƴ=Ϻ'#pN@NSxr1ZK#:L-pwr7K>"AJ[NqU[qGNBNS8!P8NB(p  !)p B(p  !NSq& cçP_;ME3AS8=Sij+wvߛz?ica1j!?#8NS{p 4!hK4O%,P8Nk$q%HW8;FZ; ;O?c,~።p8b"67 6{+#"W96@7>+OדulBS\A' zqM;?ŋ4#pnKN'P őGY\uluZ=0tg^-*/!ة3 k8Qc,E]YGZ>r6ʚ/$.ddL>(aBϝDSJ"مnEpz,۫DiKw"PiOOtULl '&+Ge=ΠS0 P8>Rιn;Z\AHC$erBUuS U[ql7l;"4nw[t&,yDse㴤E?q+pN#sBϱDSIm;3N>c3EϚ>( B>NBNS8!P8NB(p  !)p B(p  !NS3˭!8NS)W!F.BS%)JU[sBNS (q%?d4%`&BDS@&,,{SZ 20rŵ`0=+s kPH> Zrlі{Lޓ)\2s?eRw;y"+|mOh;?KBSS8}à#_[/bJyKFM>[q !8N1kJ7{ܻĹ>~xmWɵhnYT"ɳxV^hƒDNf3ƢQDb0Y AR).n=N܇]F)eҝWvf[h.}B]CؙFK:ۉDD ;JȻDcཾR  |% ϽLҋޭ8};:%KTz{ eyمE񒃰4/VyL~ ͙D׬߹5N>WNg vt4Rk2L[1]TTe՗l%m<&璳5o%uƀ[V/rf.s_ !Csc,k`&:c?Zs\&g0l[el |$ &GQ_{݈Ơ γĢB(p cגRo9@4evBNSHx߅[{3\GaBDSBS8N!8NS )pB~)3鷥Ur 7 6Џ !)p \³Nϩe{B(p 7e1 ړ 8!)KƚE}MxxnRn=m2_EF.Ww|I[z}:_GXV}Cw:!)p \Cj } xqpD>8Lzt{4w:_jM;wau9~x u-)`0% )p{^ҥ~iEvO>&ct`%OlBq۝lnf@vs )X  j M77jp{޴0NFa3OBS8>Tѝm0ضE%&x^*c].c!ӄ)pNbobNgjq1TΙ9c]k4TYDuz7))~ XBNS\F}5=ZE[Ʒ%p'dVױvAGuB> F.jF.;e)߆CWp%MnU2M9glrjٽA.b3 3`_+k ]pB' T>{-t㹶Rk{ )`˲` 1T2BNS8!P8NBNS8!)p B(p  !)p 2s(p `3i]27)p n,h]Q꼲.̀ $?;F-X27yiV D it&| JNSst+v*}.c+Z\6]LHFuUZΗ-co&Zf xaš҅9"<қrOۈjw)( f2%l$F[]ls"\ £Ro{&@NSH854Ȉ- &EU{uOUlq &^k聑뿷-gXb]ov[ـoGes:^<+SlÑC[eOm.5~cQ@[W?(V~z}~umǰwΛgaDep"-Y%en:FϭLѴ[LJpP8> B{kM;?ŋxy% w8Q !*]&Yƕsmvz?V{tT=xݻ5G-F>GzMXx[x"voEQG㡾`^9~/DyWxkNwWP#)/p{U))ԉPSB{U;Fy[:1+%+Ƣ&-/RIfN=+bi*+jc ϘV\g$䪼J窾׼*S40zD%?_18v )!3&ރ Cgp:x&EynOO-f]*X s:Ż q3TztR#aiApIm(*m1]ľuM=Qז|B()p |n |\%9S&3evO[BSu4tzF N4驜{| xw+*1nXtugm6NAĚ='4ǠsIoƍ ӡW+;ѻoLuU]^;Mw ߙǐ{YP)aY5d9^p݂2\ÔO\Pw=K e~Nw,:,8=1':etoo_?<nBjEcL%N_ېSU`!':.eBSE1 19=8 Ա{5m˧+՚mVJt.D!ͳN+*WgY)߆#:L!o)ѻ[bRczznps \?f,]ՙ-ezZs!S2asM>OfBSJ[l5;ru,lATd2kv?:f֠X*@l[D9I֍[bmYl5]1L؅MŕkL^\@&)IeR|+U][3ӳϥK[?.V*I<$)p fb3j r!-sB0vk!;tx5{ꍻ!v߄)$p[/[h>!8NSsD~iNqtBS8> _-SiX NS8!P8NBNS8!)p 'P8΍\ς ۑ: )pn³NϹZ^6]LHv9uU*2BS8 `&]l6pG;#P8po;d-(zϫ'{S67r[[%N/y6kQ)v uGmIQUvb )p \bi ݠ[ش3 @*TVm"Q;xj r^ۊN D֪7܅5N_5 {w+j T_P8NCrjh14{ % ׀[ŝ̫݅: qIN%1-WhMyeD>6 !8NSC^gLtLߋ{ ݞyXWmĴwKXɧ? i‡4/ B(p ʝ?s/f&p!lˊDDN=-W/B(p ]] j/޾;N"(,r-dhǻ[Qq?48ߝDC8W)p :'y4 ҟr~xmOfO\P4ImH,HΗ\BS87rW36r׌k☜q~_EX=RDP_`+RiߞWvlJ%D!ͳN+V`R9~|f q)p J[nf[ڊp@,kcKi` z}b ĆuIdݸek.z@PfNb#P8`&8 r2w*d#)gY%P8NBNS8!)p 'P8NBNS8!P8>58!)p ɗ@ʺ#P8N7&/ƣe4|u-' vCx&^;W-/pK,X88{?Zǖ@sngӗ?>?`y{8zlO0gf{8m3Vv'=W{boyw?rqV=(ͷ x+N=+3/[_~ϰ 8^P}k?)6ܪ9 JF(x252ޗ֜] 8^Z^};3g5gVi9>ro2 }8%ip\p\p\pKpKpKpIpI8pI8.I8.IpIpI8pI8.I8.I8.gaqL{7Ҝ#,-ԷZw*ҬbOfGs4lp\Rߖ&lo!fOG ;pK8pK8pK7ppRB#-:iᙪ{%#-=酓'\oP8NSP"Ҭ.8h ^^c8)N-ܙr|#o)p 'J8mK_,>[\0`kk}kUr;FV4]g_8]oɱfzO?V[UmAd\ B(pF$-%.-,BbOĮF8]28I;InIF%gOOKNiVTɸuR}26Nj*9\NVOqح;=>5Ndv)p OhR5=} Pf7Fm[dvJ8D;" kݝ!ɲ<z %2o l}I]+9fVa?睛ps1&n/2NA*q  ! *'FLЅwL;jn^--p,;LDj0Vp ԟƂn''Aa{l *pWc{5|UYpr쨑c6*s7_P:|8NBZm{Q)6wLZ29v\$YemWM8clW~zʎ5!qljy[|UC>wÛt@;E}HMBni F(p '$wΘ |.3_+dg;\<*4Ynscv]N'1V,Nt"c5:+E:NS8!$[]n[z h\0[=⭤SmzR#pS\U^o >&UZ8nA-™G9TL-k?6DeM->+p '$"6m1j"Jv_U{նi(wOjΖӘ0w]ow 7YL(p B(pNS< )p  AS8!)p 'P8NSP  !8NS8!)p '/t?7m))p l3|,,pA ̻9`<';??-b.tНpfsι}20<yEf~pY̝S?yd]\{3ٻM(G5+)sNk 6v\O>)Yg%rx\cԡ0S9NpD6H,3cel)bi? 
XC0Ux|9\Jɾ``%PFJ);eh 82gNpne\(vqX`}Qt9ROR1xOFJ);e,Ͳ5\~8M$+^&8OWk]dscj\t旽6\sq/,pOS%}-.t8}31uk|.pC l587Q'z4S͜cpCLzŋ~;ohSU'3˦4lG|31 ܧk9v 7k,~{Wm'vupi^t:fc*ƍ3>oځv߲Չ^wypѵV5F9R7c̜}8F8RSqg;]w}׉.4:tF|31 ܇ڼ;{s-pu< kՍF_ 6|31 ܗ\N}Қ>w[8Fo:x{p:$7fc؋}=BTVzPઃf#]s |31 ܇5m^J=Iઃm3pXlR钸o1Ƙ9!y+uӑ*nzsտGxE᪅f4V{^x9[w&WZN-\ n䙪?˟_s~?}a1 ܁C%k9ߧ.NtA\p7ąÑ:w. Mk+[#:$qy&<up(ޏ\rEDuI}?H11IǤk.:n:Yq.imH87Թ:[jfS^;k*tëI!d ܗ^8[j=yNYE:OD\dsWa1˒upnRĝک6p^8k9nmzjf4ަ{'qsN|mA"ϧ]|9<>1'yʋ{ -a18cE]Ja 6z㼞Թ5\[ϻ*po^-M$8{:h'[iVK[@ *|zYC@2ϧ$w xJc)+I"YK8I8 W*nEOpk]޽yS^gRUS㎶ySCWc[T)\2FBV%ArAc/$hIZ~q$m\pWO5ƁP kN"yO•J_$qx/"'@ʜ4D΁.K13!AK5Αe*m܅ê{Q?j@CQ*yQm\fYez}; B|ָ E>Os&""jrC䊠v"=F9c/nGco:_ rϝ}σi.؅$["_c-|i]cܾF{a t 5muko}oܕ'""jsC6Ҿc=%11)#~^tK;莺`$[a`'Ǵ[uVxD|Joh}cAwv> yO{Qmqs䐰`oǸWpZ oU-%>%7F];<ѻ-BZPov[pgO0e.7{>SɐX l_B{(]9-i'Jw=AODDn7u;np{5x[_a {ޡ'""z7eP'UB_y vO;vBǻGAwԽ #""jq vv wu z=#ODDs2mu{:E &vU lu~=%{'""-56PsԹQw~iIENDB`compose-1.8.0/docs/images/wordpress-welcome.png000066400000000000000000001711571274620702700215740ustar00rootroot00000000000000PNG  IHDR@]+6IDATx$ٶǶm[Ѷu0Lgm~׶]GofeuQu'⋬ /u; ^`>1SS)9֭ۿ_`n;5TѬO-4;w,=d&^{ټy5TIj14P"TJ4HקK|rIxb}M\wBfttĚ`zh:=(e1j"A ?K|lsSN-7>"^sJ&=uv>wݞ)KR-Eu(!{ɔ P->-> O-;M4מn3L̚xYS 4޹65xK娖" e[ڻAY%C~LX|jiZzXvdמ-sL\sמ73@s= S#KQ-DYD &A@~Zz,<N8~@s2q0usݳ(Y,EZ$]~'5O&ɒc./^xrbf\>wݢI 5P뒠5APO^$>N&]qq]k`z5rU[,FYrHNP5B/A#zɏ>G;= Oot>扸vx~!L/]>W{#K2;CrGzMPGwzɏ.,0Kn;9 nt1nq8o3q0ujrU[,G9R͟eHN`!9AJ%Aѷ^vUSקI|Xx,;M<#ǻ"#ܽ0v=ڼwX,G" eݡ&j1zIGut7',>j"ꆸeٱF5Hq~[R߯]R]Rϓ '"$g;nP/ r0B= g} P7G];:ˏGdnZ䮏Ae=`9Zx,;xL+D[׬ǫE<^#ک\o{jr-G" j}Pqr,Brwr e rL#G:yOGr秖-r[iwN'Xfэuq\oF7"ӃnKmk,G22P!9\cqr,Au'Hb j}sAy+5箏,>,=*&ꆖ%[-x$qLq\w|D5jRo-5`C  V B.$AF?7i,v:JO<^1,:bDwC8>"q1TsaKnAQ l-SZK&X, r;4I\CtQ7P-@Ut8}LXcoY~|%8jq2Cwz$GqNnzy"xJĹ38>CQ zy!1ǗJ{5ӃjnޥW-js#KQNݡ2*!9Am5ArzΩpMk<)@mj)ݟ!E,B=R@ 1 w(1NM]j %`R\I_2`1YiR2_7PuMŲ6@PbgBk8 N"XR ܦS j-@@V#UK ^ xO-?ƣqs}G N-A^ߪsC%v-–{8  9n 6jz]`$x8x*xin"X~.< ǷK~ܡ8HqU9 I2 Pqmf,@FǗgDO=oHIpr Ga[ۊ\&@r%@%O6@<^ik_C2 [{ rÐr9D 9j E2Vjyc,<G-A~8@ Kx/6M(3"@ځ5Boǥq\.ZDgg~t%p 9\ÛA"sFś5 @t c]ro*@ yi =H!c16  { ,uDSC##sml&!Čs9ĕ8p8SVOavA{@\zON ݣ<<;i@`}Ds#po%$Я@`ۺU[uΡ|`By\`՟S]5|t{7~FX mPg?i Z+pVLS?>՟g=t:5$U 3R ] c BZMxղE@s|ۿ &!Ւ 3J48;kwFӑǁCGKZ 5t@~-?Zӎ{X+inVr 0nKQULVRX+zF(Lr NSʶݚ ;kgϨ> Սx}_jMk &+k 49\5Ϥ+PRTQ mV༡~thx]_mMkϽ΃]'u4 @wu鵦,MթzPݚroVGrv_yO]_4@oܚ6,_qTl]@4p= @ 5[mpihK{/@Akڰ n>z Pj'q &\7=h/@TGꌞnN @Ju$g\Wk\{vq0o6:f}YpM` .ip?hP[Ƽ ns5:( T{!pޚ`LUOkҪzO|7Еs9**\DrL*3eN7@ []q{iJj*& $n ߪb3Xk㖹Z4`v`BA]V*D[:Kht %x .[MUej#~>`ޚ6o<||}qVn/ `VA"ⲵem7)r}?e $Ճִ`@6;hVnx/M_ h*`\<5-kᢷ${dFivZ 3<[m 0P~U5mmlõRiUȩP0  tUcZ%UivZ4ޟ<6?(! G%! 
g$Yx=-x=$o[ՠ & >Ǡp?=dyj@@:/~yQg{$#tLbK&mj|{b{Z^{"gP=>@`Ry<l{޺M.S2ʖ뇄L@O @w  N )@;@t@7X@(^ǫ o $A~n8aQDIrĹ NJK'{l&U2fkKY+3B`Wk~f_K@)Djd]Ӽ'oޛ E \.Gri=@<O+=@Y $rÇ155 {SSS{A@@@@@ @ @ @ @ @ @ .$$$$$$@@@@@ckťt妸 @džŚ5kb̵oOs8wf.}h,Z=@ =3=kVwb}Շ#X}qo Z@RkBbc=R}9$ $7)nC2 |;Q@q;/}޿F=ygA|w_!BqlG#\T6{J+ Ŏ=qŨcdnw zgcwg7sǏצ[*5?hsQ},9V+_ᡏy-*  F8Jcb8]7 2_xݯM1YǿB 4q'Pݎ-?>?<,Q'bs,}Qxg_J%a{!z~>XDHMtE2s*>16߉x-=:sM$sfR)/ΆUSn=MLbp`u,-,7utc+?=1̟t䧶-1g(2{{:3-R@?D :nW) m9:+k\ښn?9’л6F*٧F< k3tPU+9!u3 o\ECdou?=צc)rA_Q:k ='q@ 1SҴz|[.X=ِtX`Sz[=i%d>]aj]m_y\37_~ϕ=@#M WCEC1TyukԢPIQ)F%*96t" yΥsϘ"l؍ ڂŖ':SI~F'bhs $AiȻnĥ=˳+e!tL(u BB1 FX/;J߫/i bN=m_g!"PvVbk)r];Jeε׳6nE ֊t"ϹtO8XSҪwt@9S>G#X_ȞP(d?ӻ>."9)7Q[Hb M 5G_bod8cCxox VH&N 펵,kDփ8TZܫ?;1_wڴ_v+vl?aH@zTwRF\JERr}>wZt_ԫQS=*Q#HHHHHu      }    aaa_HHHHu           P@@@@@]H-NQ\[30eOȉ@"9BD<3$<o!H*zR%jdwasRw^{cd;'C zȶmrȐ!CυLdt7wKPzR4m;%7)"_R"h:BGmnݕehO*2|IM(p]vRY>4e m5dN$y ckC$@pȉۤJ1}n EO6{*k/GK6 #={7\i0EV6;l{mio)|`8>Mq?Z?n߶w3Wݥs^&k/^ .VN+7(WȘ ~A6]F@ף?f_Ek,} Dƣ#yE`imR:I;4U{GyT {?1I )E鱣Zo09;ENA9mKo|pos}_PR>+@Nqnvϡ3(R!]MYO%lf~KUnĀͱi5~V~8{gAv LPZ7z$ ؃=~a>W1Jdw*[ pB $@&|njn ?蓍'91Zv `Y4ІL_kt[6֜Y4!՜ыɲ7h(5}NNɬ\} $73t{51O܆@Uԉ~Ʃua~.Mqevk,7% r=]S"ymJt+۱CtnDuc<_}8޿2=p{m4 hZ١/-]z꺦h >x%&NrS?/cMZ<]휩\jZN3=U[iw7tֵϥ1V{ h9A1jGzo) ݤLeh_?=}>sXE/L U'i0{r-H WǬ51SNL/(>OW4@q N[ڢ쫰cNQ_rLPgAhϖ~=h{ t2zO[i[Ar$l-n;La;RpxRx=NWz;(ݸ2>1WW,1ˣg[C<ߡ OȑUSp \ ]"hSԉbS2V4ׄJ]9q>kjY~eg;Ѿ1n sŸ6xsMx"%PB\ȡ#<&lMR$ wsh5D޻ Hx7Ne ". k >⿫Sҽ?q <gLc-otN܆*<nM"3]~>p0f:ӢPCaM~nfiH']nU=T 6P:r]\`VxNOtqz2lwQl% @wznfGO=uhPfRs,u48gw;\+^jnZ:iՑJIO7veT|&hdεlQFޣ'R$<آ쫥:@;E c_+bwj ZR_XѨg[.Aj4F]<ߛL Sd? S^xAk;;Px{8qq s;0qR]' 0fs: @ 01m:A] 4b^P߯p el!U'# ڌV]uq?-Lxy1zD3CNQ~Hfq`ֽ!8~`sS?>8-%fɑ.Ҁ]HNx+\=e՚P_/U%O r1@اTAW:*u Nfx>tOD%^\ At@VU/CE#u}@<x'q+0'X,lO$]VYk4}l(IRZ[`OҎg\gT;Dl偭ST;}r:7~@ Ău[.q0P7o$e엾2 G*GTx}Xuߩkj9u P;.xLIPg~LP++KhCY@ n#Vrrݭ(]8vמH c2wS z2Ɂ5EmtJu\mGL^AL#𢣙Jfԓ2&af}y'sdJ@exfг}.AߚyLڛz&4'N -PcmCgmmHUCAѶ)hpPkJ)xj7MU -e&o/ 58'ж..k0Dcrh%9Bm'8?zy⚌!׉:uXOLj]s5Tt"vs!-t~X! |M?_~j9׊ILҢx^koAigJrj2'. hM9H3;-KnCPTi`9JiTJ>X՗h CP|ۻ tLM lWX8HQ/eC7^u O vVzk썲O:BGf6Z]oʬʓP'j}YۈI;9=J>+" - <|Kp3I=KzaiKt~)ck+}+X':SΕU8_Ӟ,O+3 I*B2+u)2dYMRo߼6Hf{mĀ8KSiYú,\tA{qJyZcչߡ Jq^Tyqg{}ͦiwVJ'@7jWkYY )Il"`;!x*/OKx/cT"af0;T">JuO{3]4Z!k ǻbO{Ϣ{|HIvyL)꺮>pvy/ҳLQmQ pe06Wz^ e~7֖ibh3CYˢAّoD@[)ttHx{H}NN߯~LȘǥj|@Bֲ4fXl R4NBuSIGdI#Yd׷Qa2 I.ubq!c`5> s6x $(ɀ*ή&o?PBypT_r>1~2L,Ӆc0ZAae5xd[1\6vr? |p4S`+(m] tImHgDxXxק6J'4*je^4/91w_c ] DqvoxK.RWX%ҽum|l5* d?rYO rx7}3Ch=By/S(}iiw] ַY.nqa뚏4zNgmQ'P%٥JPh ~|NgSt?pD`b^noÌwܶ6El 'm;6ymJ:eWrpiiP *I+\%6\Ld}G?8e~ xTG,mΑ8r6*OIdd<_Y» AAoϟG֋lހ ɶםpKzRyݨ\S`uc6 7y[R`+[rv>IzB~#2`ΗŽ?MyT {9}F&2 ЋK8n H7(Wl˿:2/(zx Lܤ/7>647JRmLYES-M/roa`+@5Y7yqE7-ZKg_m:4U{'ޥI l$@&zqgb <0 96E 4:,Ix:Ń7*LYEP #hK{r/Irm;>; &_ k}о 畓o|>}La 9t>% O~Oh޳X֜]ewTU/hHyZulr%@(r w ]u9}ɡ`ZF ܏YWt&F>-~%^ܼΜ 6Nby~{Fl.;jPcq%Wi}Qh:Uh7ƣW<@6i)wEY35h:QO'#l%Q[K(ULn`#xqmOӢo \9 9MӣeoЀ")kr=rhmOa,Gه;# [>3D5|tO'3:q@Eu]S4vu*3|"FJkG4 DѶdY>h)%pT#= QCiiypG % ox1{3+c4:vE .)r2r eJ/O` Ř}&fJb6? A Wxﯧ{lhwSj?sov@L]tqP'aJcos#؞.o{_<Ž}S QG\\3n):X#Xf )tmmث#Ld ^V]'(`&_aXluZyBd̼dIgN\Ga8%i/Kc 0ҴPO[2El *wfSMZe.Px2Yf )Gb 73QD#sKNߣxȏ28H|BT]Etrɫl𪟍w\pvKRwKvȺcp.UuKrѓ71]!{3,u 3̸z?5#'{oi2I{ʽ6ɃݿJl6~q@GOCvL SdO&/At 8a}z(3yj^a"?+NJق cS_ty`mګ5+ Ť(ƿ- &Xr T@ ~U߽ O|sf-\x^Nm^ v@z?T)cV _ب:E/܇'UU@w@]`AVf;`-V?{Gcaz; 3|_CH;>P{Tt#@W. Px2NJ˹ J>(C 8 n"oʮ]r!w S\RmZWfq^ `#Xjt\^ޚy:m (ېumrqf5C=^羺ru?7t|W@̜UrTnԺ^:aFu"J |" GyU_d^}|8Ɠ.^#8Ⓥ, YP>}<.oP.cO!Cc{ |S=I07A@yAx".{_*\.g/ xR4J-T'clŽ35A4yuR+hw->.o]r0 I`6G ؃Ǵ 0h3asIJќI&<#Cnn&X4V^n|Ԁ,;Mµ4_t:u? 
,Ai;4ߊ̠&hK~ ZyapM3[zit8LT|l^ep `-uڋÔL&}kulې])J29WtNϯzWl# m^@n-$@" A ^-4p `k[ƶ=ec,9}O T1{3݃H 7)D=DMl|J NrRdpz, x&lGNTop4~]qWKf H[?OUٛ!an5ejr]!d LsdY8ثZ?ᣦ2X)Dp ez1qeoWk(qJNa &m1xz,k4 1#:DC+ piڡvnWU6aFlzMJ R߽oYlD2_SKD@yr»2l0Ćr|]}.+ 7 ҟYG/Mf!_6/o6Mz uIT}Zskeg9xyS+ b蓍;Iء!r?C # E\<#&@7WxnY/wF[3se'PʗߺK=3uޟxVg>oiUǻQ @>fzڮ_mGg-Kk# }G4v"O+be=8.y@63¸4z9 }|#"cO!: Veੀ1{317T \[Eqخ,Sܪ.ۀvDw9k^_ hi<n;4a[¼DﱟxS3=>'cF^>AnٲI։~NbrU2N ,N4 Inj$p6D6`16thq =C>D;ť%8:#rC|΍k ;=ݱ٘Q Rj&G7pUzDoC |h ~ %Q\V GOGCG~]p|]"YwΨׯZSàOOh`eb2>P?}OUm/@m}!0m;8׽OtɌ8p؈njeY%pD_[enyopt6h1ߗqDI U;n9})4 Jt[ NQ[I L>J ݳE>"3 L}uBHHB" <#cHoDL̐}N_nxBn2 xRkNIUdM_UGUx > S¿w~&2{8kl_nfrl!C_2_0$\]tvSg:wM\y_ؾױ'V%+bWlo0 DH7˂mNYr3F,ۢ껖mZ:BPݙOwK>r _QlV+Zx Y]wWtpk{4Ǻ^{tON=o|iV4= {{սީc^ܞ=[@@ D  "  "@@@o@@ D  "  "@@@@@ D  " @@Кw]EF5xTJ_~P̶*׏Sz-߹sˎCSBqނd*a5 h,@*VkkG(TPW4uš@u@@=Ҿhd+];oq tI7(,t{82˅/RVJ?Hlg"^CgPt)m]D/k)[,r8֗TF9=ty_s)J~3@WIj<}M׬ֆ: ͋aNhP ?饵o:{o»hoFb܃[}P*{j51<58HT`SÎK YҵgtkWwQ۹r4m6VԱNiUXbΪ깙̹PAr- >3o:^'d_۱gN+5kw{p#j#^۷u //W/XDlMf}] {~hJȈ{}J;rD3{dLXV2ѩGH$dH2:v\v.͵U:t*i16?36ls5G"T*XqCw#{u6SVH괲ٓU U'U?ӛDU/Vp+ny13 z3@ClCU)h_7zdw[J2=2Cl(9i(:v7:P.Zi6wQAaFh]\ ^pJ=ew,pgm6W4qܱ t~x`'p,> FvB;2''ŸVt"W]7 wt3zN{m湯wOd}Qsi6m&۶斯Xht>l/jn"s3{o@}5=cK4WFFmS@gP}1j5bј5Kt}ҮT bzRe"p@ +{xDMK^5Q}ݪ kC!,f>nW/iEm \+p6cVfӵ;W|@d̒uEl@bǿʎr"1YҾ 4g-{yj)Ho| ,;D@6%r/엳_gs!6Qg6O5|I\Z$uTgc;;3_i Yh$ND@v>M拕7|˼;@_Z`sV} 8ϦuR;[8]6ޓ7+ k~'LlQM.Lzނr٣٧%3U[_?~?(,(ZOysO*hSH_wyTWZ[Y0<&@@llJ% ʹ W;bq7_K@(^Wwv\ 2hK4Z߻޻m6Y.Ή1nsy?4Kڂ3~mثJ{~8M.lIug><f&xWJRRqsl:t߽J yoM t- Q^+U\ѾӎG:9;duv tVÍqRƵ̶ h' ~t2 (ɷTB°o8n矿BAkRWU(|_AVsU^ {*6lټg1ӳ{K6 `OW,u+H#T*cx4˙~uř\sNS%/.gg^9 f)O~7 )CL+_PzDOkig= j\S{&i e*sATXJqMKdʆp)75}zx64xg7h,QG{t. ~<сcI,~"JJw}=⧔;Y[ʸv ҉DLӽO(.yVwAO$xn+j; t}cLj_h@؉hJ6nPR&^R]ur'"x?w3}.~ݻw95 wI&/ 18_ђ]S?%}h.~L (; %ѧ RޅG˷qjM_Wo?%)-x4}OM?D \0&o?wA RIe3t'P:3BmV`p4[|xI: /A_CqulU8 #/ f(D8j0eF.F}S91jœ^ P3Rzx/8<;#;c 2rqueYɇX8ziG D MhTRW{"ey3c ,?q~qJb1@֎CvdR< 9r `DZ봼Pn-r\fά :֓|oCDKFjM}Ďp2?ЀF^U:>f/䎌rwU>{XQDt8\ʍoKfe'N?9n1dABܥQ/pYa XzSiЏ@-Xi|{Ar tn`vuz:j&.+jiNQ!BBGlaoP{'[" y4eRct5)O~J ړJVןu+̖d3M e7eʧMSQ䆴MD7X_~2SFwXZξGκcvۧ z.CH %Rb~c|j&AEЎc7qZf(`iDnȥ) >;wR CJa 5l"ܤ!0F. CλaPN+(g_xIG8:^fGA9ǠEmxD%鳺#M`NkćߔD$<t Cʋ"xA(A|@wKd߹h B8k_ڊ>WQ7>WEЯ)`)Sx9dGtUWx*u.gN@v@XprU|PeeR:ylfo8yۙZA-$졝g/RM\_ÿ-=녷uu}ݩcxۄyn0x)/oG|*Cq:b)# 0ߝS#l4ڙoSF~OlǮL*qмK"?vA[ˆP0q]ݔi}T+-9 xJ!_FOR (.`Y1NJ `#"ߚe c3h#L9O J#8'~(%ʐ8j7\E6u\VJ8/* .ul겡F̽J7!k! 70eslC\2bGs? S:4ɣB~ .+mϜ,ZA;7}3DtM[7'|AlK[P\WOr0b^#A} oޔ%kCmKԔ P^3]^BːLX֒Eg##yl=g)55ES'Nђ[.5ih@o>O#]٤d_k1̵ 3`!mv^K r]zQ̓|LA9EF+ v|-}L*$oxȼr_&zi>k"RxSX8:~ 9Ak-Z!}{_ ceJ;~vg(m)>c5T6XCXYo Q'eno(K4Ev}$*_Dŵ&ǜnϞq 2ct`9[(m/ܡ͹KW(9^rי[Q8 muv*DPtHqQk} &l=YcH0T(9^g@AW8(-%7ȳb8xr^fphL ƉB@h"3d^x'ʥ$&/Sļj5κf5 ȬS֠ԬȺs&d<)%p o2ta ⸼E5.@!2xFnr~Av*L9Ϫ(rțg=* z?|?%}IS>VEވ@(]hچ#Rl⢱y } gk\.Mm(.kM?mڪwxrCe5CҼ2^[@DPS9reg'sT3{MǙaW.I4Lurۻ6,}{:orp)ގ^ F8 C穯4NQxvn;p6cv68Ke -ϓor.L٩6ssȳ1g:?uXژ9Ȍ(-ר g,dRRN/|$nlp?m~4 .\g6;Mg5{#^y"q?Y@-tgn.J@+nF[wŕQjmAWۄ@2|O52v}1Wa^i2N)AEg W?uͥ]epe;=i1mP3|[NC|է}yt)"([3~߲ba `h(4ȐE!?Fδx~ S49L\i5V Y*s6h:` B_I/Q7Vtg2!Dz|o]*he[::x8:x7 >6tgC7|<- uxfPt{6~}Y;`n& .i9鄗$oס%>z;~8C ;m6ZZLsWg]B]dc^7RF%4[_raׯV󼭄_2o' |wT֍1pXLMoOѯ!-l\UB*{nTZu9쨣IZk)ׅ@>9ʟsMjt =m}λ;|}D\' Dc8wQ<عR\'9v"'ԿuG.ljF#ƚ.kdDkk~όkF @@ @ @ @ @ @ ?@@@@@@M @ @ @ @ @ =7y@V1[@  hjj*RJyR*>@7PJ)zH)-@ |[o ~UoD3KQ/tm(>hs,VGjhįwS{7zZ͑;# в[e)pL18jZDwNԆwN7+GVkj122Ov*f]++@_c{bVLwn:rz҃ >@m_j%o_+ sعsloL\}@)yO(^0)O| >+@wPZ(? 
@ /a/ƺ͛T:*.wo X.Sg,7 x[ߊ_Z'=8=*Uœ+Ŋl< @ P]E\ToK~0Ĭ^WnK]lLO>^ o wY~g'o?iuy@ @,43WrmSK).`@ @  @ H  $@@ @ @ $@F2w@@@@@ @ @ @ @ @ ?@@@@@`||@ hrr2ZVoS|l $n0H)E=cLJ@H`@)z/)@hшzc,/kz,y@]ZhVq_7 DrW[w@@8nzz:)>p]qq ]A-V&ahV35^e WF'Yp|׾gkoP#^c{ZV͹]fj?9[JW֥.՞ 48S7o}ʦJ%˷/Ԝ5h`2oV}J*]V5;fc$oשEW]УqYٰOW Dz/{qM.6#vRkYw۷rI+\!?|&m^5%||kyJ#Àܢ`c/cm\`>ͶCg~R6 çqήS| JSRI-䆯n]^7n:3 {=qNepfzшZ1q0QUanz@ 8{oͬ$Ӟ/NGíK[nk"Mi7S5MmR&2 $*J(cd]ܡui{I @B#PĈ  glOMX,jqs>\.Wx:1RYYԆ| ]ru/ty1Dg+6JlQ>x^^CEy @eJIշRx͜$gw˯E*Ue)<6еĢ͏ ^(cK@  $o3ES'!inakvÇ7"dIȖBQ?od{5- Nx dƾۭw=mM{A7T٭M93_7ԮwQwY3Oe-uK 1ʿhlaרJ>ӖL"7 * D%~e@.=T @ @@" @  i @@" @   D!>@.\@@ D @@ D }?vٽ@sy|z6OҍgocWwT);o f:8F}jWeJH*=),R޻ܨFB0h{|O`!e|͝~ؘNq;;)P>%)zJ$KY8ցnUnRsFrD_4E8Y 邢܅h探o7H6+SdE#yll1(" ';f$@=$6I/ےjZcK±ړYnzvDlע2orڰ3<{] ZHɸ۴V E{7m[9qV8Gm)+Cm'r{[NkS}mGJ԰MQ%@QQ/HDirŹBqONw_ջwl붺Z;(۵(h#ENc} ݄_7ΣoP:c+ҵR*xoٖpM镃483%oݮ 5y|?ͱN~ QWv]> 5Iљsi"e Ŗ'彿u!h;^9hny!*ODlMH&i QJY<;2Zu}$@˓˩wAΊef }MR!ږ\QA<]%޽2 \e{c ]2p[A[6h`{t|.C|l>j^Wdh+پ&TIú c!G4\L]lmmWmVƓt!Н|za2n˲|M~#-ִI|%;ie퍍.h3.ZBz((6E 4~1S]v6+qzt`KEI:Nߖ xF8B!+6zyP_kbco 3.Z0&\EJ w2cn-Sz2U$.-[4fJ>0C\.s$q^*'~|$됦K:ak\7GWBA{]Dmk>ZV:9ɇhc꒣3܁6 C[[mM#Y=V#CIGTnKx ̧X˹ZsG԰ļcRrJ\exb1Ôиx6LjSE PE%@[V}WS~o$S2Heu8-,TJ:Øge.v|8{E9#9 IFg7Ph3w~FegI ṽ.߈x-ey,萉 )yM)IW7qnֲ~*:+s]!/t<@s{L(J(O| QgA\pu#O3~¸MYx+MiXG@8|PixN,wA5|1fSG5c\x s=1/} ޸.+ys`$='_m*tawFAD;TtXE{>e$gëK7 r7 m`%}݅ @`$鹽.xe,'@k!A%2教XG.K:,¥W]_nvuv8>M;B&XW&V^Puƴ.ʶ[ϥU_bيm [caf<ٷlC T76r,Ҡ/groIE PE=r%buK2bٰNAw|{2T.V}BɁqҶPV e'f6(brlSTt{Np`e%JRkֵ^CWߌ@? H`ű䕹4qiF0'Ϥ1N ԰z >L,r[;,7ax9ޛ%pZwG\ImF$TiE?F뽄ge51uJR|,HHϭuFG{UxjKŢ;fhב\H/yR{i*+8Cpud v]g;VR(r }v؉9L hc3|3ՠE[C%[+aqa]6!y1b0۲mE 4w yGHs[G E7rY7p&Lb!jDDg(J"~e*CpICe\v׷k?q9@e1׆KA5%J%S2uAdk=Ŵ94c>L] 9}Ѕ^wbڑĢtM>N˞Q:8tdp_:4'Xc4{X%eh ṽ.)Q`m!]q :dR}AHg $Ϋ֥L1b܆C!P`hϏDHfW$Wavcx?i-gr;3NeS&ɐ>;rQ1۲x%Bv~J۲?m$o{sݸ^wd(yj{ ikLB%@Q%@EKϗ?j9Us<ϛ1.5$+[@LJܽ7&֦߼:s~X]vMyw@z-WogR~zf='imiӲQ>йZT/sMmrmdĚyͥn϶~̛:oe./@K3g?nmҬli({_޽q $miz/{j$=-v4,YrSl4A387=֥hm.@#vwRz}$3eҺk֦J2ʟG3kP9ƚOOĹϗ>ؐ+,\@HaiRx}sZUUdѽh|$C)+KYvZsѤN/Mm#wV2S#vvzl@cH KLϔڕlsJ|\Y>o,!,crE=eIp@\AȒx}%vd,扬yha.:XBfۍ4OOIOyADϲ3E ٝ:(XrhkWfU~RW^mQss{^F>^ {n{"@sZj?X-VkW9fN?KQ7Tur[MwL>ʜVDԪVzGy5%F}64 $*-̔eJu`qf]s1$f^~ܪlmr陻 T$$$$$$X@@@@@@$$$$$$'@@@@@@(#c1C̦MvZ}r:*@~i1Gw֡hdǎw_|aO9O竜7$?OUUiZƘ4UUCP__G CUUPTu9L9o$Pck_VoL3۶^89PQ-$ u{SGO? $"@RDAZ~[N}wo!g$q)Jb p_}mOF yN$s(-QLP [jyBuN;sR^37էNbB'  AnGuu5:?t&&&b7>4hT!D'-p HI ɮ]?jGh}DUU֮]-[ɓ'oɚ7#f:' ̊ "$ 0"<.GX^# rj$GΜhD葪nV}&][BtW!GwS, -}xJq110>_1w/wD~?rrrZZnHS۷o9 믣 ٞz UWtCh& N}(>۶?>o }[}7DE YTi5>ZvPtjtn,ym \ov$jcoW`.LA=ֆk,lQ3dD0,E)%ƅ+t*;űTiD& KI(-@cŊT.;[Pui{9pe8z/GUX@0/K#"@tgӦMp:<44¡oy?f ˗/)Z#ImZ`;Wo.xgvn.,t~:S97F&XT,SطvD)΋kjmؾtcFhY""@JcѦ}ɻzZVK6DS66nj>7TrsE󤣁zFRCm]Eg qfВ\_֫Wb^J#_no.`!JzXq(0/K"@$.Jp:::N A)**sssqĉ71h|?-}x9?zSi{DZm΃=RcoH,={\Aܲn!"@"@1?D0Z߯A_o g$+F=O3|2n(-(\#jHs8|1=edTGQko-pEmWYIXҮnON1ER}9$B3@ U[.&0@ks:BZPV ycgxc,^vn~+.. 
()AR}>|H"ܹs034Y'@4vxTWM[fI| Y 9H"@ '"@ddd`ǎ:u[VV@-8v fս9y᡿4z5 8ō 5tw>DEz7,U1xHHx[ "@"@ $"@F#F“&IUVHI(Q L\3RNLJx|^ Oǎ_73E+^K+O~;>}kl9i|N w>⎩Kb@5dT;٣Y, S\UܗEӺLO2vw7ewrl˝ɾhzGcN_Hbesi몉K?}YsUܙ"@5K7E]Wq s[_^_L-ߊ Sc=yvS[!0D}W]U) =/}PBH=w|/XRzc ZʇЇfC!˯5V͌{µ^?#CZ  {H|nڬ-cP:>vsbikZMlf/ ` r'ڌ@7]c@z/6}t~@i H]X^q8*gOƎ_ )Z9Iaxc7@3P'k{`C3Cj_;Ql|2==nilM R@)xZ*zK7f `|=GNT?Uݝ?7mETxǗ7?;LnCO.ВLZm@I Ȳo`_u{" wNu|{ $ {D TTz5.{O@Z$@Lo[k#@|Uulu6zH2@ L  3@P2HHHH@JJ @ @ @ ?_OQ\[v?!r^񗓣`D|^N"(<'FA&@"J(xt0 VT1I]̬nմcSե3ݽ{Wo7Yوh ?ĝ2ܼr0M*Qn#{Oc/m'f&@@d22`KH* x&Y+*K;ĕa3 `E#O`ljm0dELrI9:ah|lmB[0]lۂdq՜ R@@d v{9պO}fw-<o'L67I'NN?@}B[[XP!1BhCy6i}[憾kڅa4}VD 8xGϫih䩺N s@o$1yXmkŜuzR׶]m3ʠ]V0`tk ky7Ӱn٥vlāK ̝[#ru vbֽ5Ts?FNxGLhyml?mq9&Cs(m#sH-~9zI5~M*s|ʷhu*S'ڻ۵܃'%VÎRU.SjG{_Wsqq`-, r۰}~=ՍuJ)xs(`8,zĠlP+{pq>Ԏ[5iR_iCׅsj<=>=;qk>`n㥢:FO \~騫:;9,k(E-PH|9]|pʐ|l.&v^?9^lD?AC?n<y_0-r~OĐO[xڵ֙%ִJ\_9K((ȫ}OMu8B871*N}-<11tB~@UX|wiݗ:ft1[ĄDW]-Ό`%9'OF{@tbɡ1LL`eK.pC_Jt c^č45#GTL p?[':´?e grKfz?Ƿ2U}uD^e Pqsp6C|M`0M})#'Dj]\?i;2(EbKDx icNĂ(c"L#b]^SGriS mJ}&fa")e?WG} h;s s >$Fw24Nɹ$~(.lN}<%詫oK&Y؝}S;I|=Q:BaZ~8A?i,ue<@]Q1W6M9LZּwVS0&qY"ψܹF95 Ԣ?3Eq&W!̙^ br\lMy=4`<c5۹ctOme<ںPk?g Ы7v|B:LNg]Q"(b}KG,bS(ٗ'ҡ[i1%In[Һ,ﷅkOTkth9na"6z ٿ!:}R&qJD]hr*i"u120+yʿ~ iJG\o'yY܏VK}AζObS+J'>P4EDt76<2r 6KGk(f7jSSR+I] mP*CA>TnyPr6wjW~ж|PYTm,t# RBvC-[Qi~"TOΗv*fc]Dv6;e82_?t`8Oaw8 g@qm!|ؾq2Nɵ3ܒ:d:r^<07MFM/N9#p~tQ=D4]Bl]뙩E'^w~#)^M!tBd51 ?s/Lȕs;ԥ}q[ #1p' ޱj N-hKiobJ⼓N[ڃpx6|TFA=\B}_Nx0V!1\6Xz/<ɿ-Ԯ2ZYBDB(\"#1d.ёtݗYҲ2*J:TaB4_eh\mt0d۝TaiEWG%ibξ@>o0ɪE)6wMs_>:HzU]9B *O1qi,ҦV&>|%o;;va*\;uKrP>K|ZHw~(ڡVFV;;=#(e aSf):/%#u]+_?ϸ[tyURQtp>Ɩȷ0ns_?Uг;!@Z24aٮkNc S=?hbCO~xݲ{Fab]@lJ޺@@GNԑ^kM؈=}wfiiD4Uwq~ ,ƒXP;O~SW8j'kF:Hrꯢ5{+)R,P]k^WIDY݀B[Eqd-Ϲc[mf2?Jb8~E:fG|H=Hg #gJ$k (`~e)vX(VaRR~_~J˜*K}y.uω-BS޹s/i"8@2r 1' y )[t~K-]!?._?ZWz@((yy>wS*de"_ tJC򝮮hgbe_!:'dV3% j!YGFLW&N\w>ꮋv&[Cb~gt I9v9UloAMw ItHGK¢s㓉_)N=$܉Q-zNo]#BZ[i_md5FO1 8aB(hoٝOSQD)8|Y-`ҁ] $cmҴO[ˮR_!M7;+ͅd+DP|c%ggt xIsQlt6Zldi6<6P>YX٬lBl3|ֶO7cZ,#}״ږ\m{x?$Bƴ,Pv>~o o|n덈>u>@@ ک, XLBX&J׾M)./MXxin7ZwiN[qaY7غW#wz\c*o݆x%q*+ŇH",<غN3CB!Dkj-Ko ^Ak:zfgqB"B!B!BDD!B"B!PQB!B((!B!@@/B!B((!B!@@B! B!B!B!B!BD!B!B!BDD!B"B!PQB!B((!B!@@/B!B((!B!@@B! B!B!B!B!BDuyOqƍBK7d2nܸm&!PQd eYfܸqۦͲ,(!PIGXED!@@#:?h&JI'"[Jb&Ŧh4[sd5WhUظQXޏJBWn)s>9#Bsiؘ|*K~3OMso11\C>{R GdCm@@PBȯ n"$bCE9B&`is":M_z\.LLMabb* "ЂUBB+3hW"B~PQGĂRlѼ{u=q>6}tb'8qY̌"B ,n|v(DhNYFfyM_ed+V4]on B5CD$ {NnFZ\q/4}s݇[@do 81FI@ضubt$SrPBD!#@ ;_5v,3,NW;?2iq5MΏ:i:{>=PSG1t"FepƣQD_jA`!6uIl6!Xqc4*+CX!PÈ1#DpvBĺS\|M-)ۂu/zV9H]K^,:}mTڐ]Ů2eVU~|w}0v|Wlx߬29 ^u]\9#wV/iy&~Wai N he&6&6?=tPLRۄΖIp@8|!!:H:N;̩Ss#Xa vߥ#GpN'0.=*K=ҹp9Mj),YLHgyr;G8WR3^t E\L]$}3kwx}}=ޝvTg$Dkwǧbu1◁UF|ab0\_ò*Գ¥7!|-#pDD] d ,=ϓE0's=vrsI0@k|9rZ|,m}Yp<'z~ ﮉ훗1ǦEĤ 霘yss?h:c˦n"$>f(zW5X1]qI;הWf1;?rM@' WBA|4tk s:*b*גYf9FgՖ|*z mIg{ؽs7@eu3sevwlY$USCFtx"|Pจ, Gb3Ϫ" uoƊ2K:N4"H@DC}kw(|/|/*ZoN|n_tQ?3:+"otvv5~|OKƯcHDQG0+圀lSm=v{u&k/rMmgB+Ͽ4q\zyuUDTגFs)DW'0  `I~sҹ<1}N|g9:qPFvWG`fIIo"",곈骻[m|!bV 3 Eƒtsvj)k έ?׆YO٬F%ؘ t2eґe$׉'Xp ֯c$~@ڦe j9V!fI.s uN #\ӌ Bz @t# X&I  9d-idI}KÚpqT}^w ^%uFڄMI;iA0M8GϚu +&Yۯ4e,O^a*,YAsQZTf쿙KAUW,J۬#=;7vF߯s$R:"_N7kĪa[FLs,v9q; >e.dYy :,m޷|n@ng=\I鋔7)MH#-Mf @!@@hOa.}]"K&PU4 44A鬺:~u7&&[v1ҩzMLw-9bSNR8ثU3#I\+YB\&gG'.纒n~z unFGC 9w պB EWݲ|xkwLZbG/!v k{|uOn2"M{.t -N]b_{`²(+^|ű7P((5B((UaY֦[%#-– Mj9U)n__eU,lLs]&,kkuSYU6_sg P~ȂBB" 0WpHe^MVb (fFOB+ !$k|vO`W9`$s7@!@@B! B!B!B!@@B! B!B!B!B!BDD!BzB!BDD!B"B!PQB!B((!B!@@B!  !B!@@B! B!B!B!B!BDD!BzB!BDD!B"B!PQB!B((!B!@@B! !4aBЖ1Lbtt0L ?QYY}l$F?GV3_9TcKHb"_ :I%,/Af&:Lb?o\gc~bgcE5_.2۳e k/&>N,LITu  ? ,2^9ǽ8me;ھ0 Ӄ^O%#5(k%~Y=;?xidI gw Ϙ[5FXZC|peǏʼncdK F8(>/-!y;"xƣk0V`zHGg@{>9UG/#/`d]UMw͛x!R1cF= /'|_ZD^NdeiZ?uS#X&e@/=>$Z3DG>~Ff'+. 
aBɮs}S/ja$-8X8*_{hQiFz ROp(:ԅᅦڵܾ\=|qnj-zwě1ٗ;F-7UE*DZ"GkuWVv,k,VssF!槻5n%jVʞo]8 X9|4FD᫰;kZ1}jc6I\^q4놝r.6۵bUIVR4QN&q ~u2kh {quTV7QCi-ys.<+}ޏK)oYIujI\]E0zh:dfQEe7bB((q!H!:HlAtE3)?0}p~N+#?EJT_?B8ZZĎs?)\[ '?G4NemkVԁ*fҌwUJd_Ne OJďc[Ivh8-n;.(4v 1vl9iyS,QDc_DVIVĐ{ˆv&1RvGugFgĦ䞽ۑJzT*zE^ȏrO̪]}^sk,-3u =-OW仨isc3gxu{/Xہ:܉Ł7|7i\qsxn&ޑkq'[wʫ<}o Lvn/´i[6{yV\wDN3\x>w(Ɯ.J%֣w "p|V[Xm,ko;/cmkpfj!w"М].WzlY~,~DqLC?Yy6fmpz{,9fA#5*侽OB1&bSvן!kmZͥ1D9~'vӾoFcU(}Fo)u}~>0BD&6f܈o}.֓Mn0wK'CwꝾ36!ХO"(6<ڳWy_phHݪK8@>ڱ|V;?Tzsks u\Η){9VOL(z_T$M(%v loT$Ў<Vvt۱$UlCbzJb85WiǛi(bĐܬr`s/X!Wf;2bK_:_P:2jW:Ky5ݭpIJTOD?˖^BD&C_{ XBp|ۗТiv:HY}܃ 2P.LLMabbBVE(+E? ݀)}, +* ox rVPVP:X Շpnb녔Qv]A=e p`aڛjW# iW;>:oo]oϵ(eõeH۹IM+c}m Ɛ< &b# r\hm_u>$O+/57i_'mH|Gڮߕ-ςYDq_Km_iJgծhlIT|_T `qT~ƞN!Y=1p7B"Pyq Mkr ps xb-i]/oj_f+ fIك9Љ@{Ў4ޖҴr ׏wGɿ<@Yx@zLAĉdn&m|,ktbBYt}v47w8=3&T񅏊hj-4vO۸{W縱Kh._=:r\E4XW7v; }ø+vFV3eдZ x$֞rt[є@jdX)KHS?=DW1F1_<ꩫOān'O+_XX׌:ȍut4TuˏBDd, teYs8X[)P=@^㫰|l*Fߑ-ܫ5eW|+K&{Km|ο8kb_$XKKq2CFY#H<.{j9cur?=@<1"*'S\a%,L+ :j{m }`OI MMd9}%n+H²"5ӼKʹʏB{D]C+:uq}Öd~ttL*+eh K~e?wܵCr+HThjjh(h ASK[ZreTȋ@2T,Ϸ+qu^Dpu`>&c3Tl.00;:*2>jXojiei3}~\$HLN3CCmCOA(Ja"84s6"tF~3irY" Hrf-:m"aH&1/eQ aB9|_041 \q '}|TH;9w]g.C`;yUb0rڴ6uVRi|#unꢝ_5DϜ{0]ZOr?pu6]4h-,tV7͒wbOɗoU)DtG]U\Yj6oJڐ#I%)PXy܈~N؝6gO#wƟƧ3fazq[юl93k7E\j2od)_ӌ:_`;%yt`'+h_SncYɒu#-L Sq64xX>uNy:2F?kRKQ{,r̷HPA(*Tv?}tAr]EϢ%DZfV_).?;%Gؒ܋tQxOF]A].^9$ua71:kؤ+V݌[c*]}7Q 7i´1egpm)b]: ߹!~n7M厑cJ$.su)/}EY}x HP5?H-3R -%}CAÄ_B! _V쿒,s&Jؘ=nE7]Pr&{` =[R_֊vj}ԿlN!BPlM/#O(wYh.kbzwFڀ:T@׶p9\B! Y/ [T# v B8AWC! U8[A9_{֯G?!Jzp5i%׼)4B!BD$: 4sfiۮ3'{_>;wNk WtwasÒ*ڦ4`U/_֒%ZxN]bڡv48(j׿ËH$/]/ !PqipIG,^p Od<)Nh:UhzK>-ܗ}aAPUvmg '`E/LBy {HHEJЮ3_o*ܫ-nA+5[iGj_ukea ȥ{6`{1G$7T'|Luh W7㎑/bwu+"FVىξbA B{`<$XۄΖݚz6gS_>z[|]CQu<M]h };?"ϲF _;sѲ"]Gl ٧pIGLLIވķc gzK>Ƈ;t !PQ k-Up̀*I;$@2ٖKĝG~9#@ N: 72Þ/U&J߼Zw?%zn:'?XvXO*:ri ƿ/=#`oޱYyxBDT:2wp7TLЭt!HGw`b,Wf8t}sެ}'Bx,tb+(l[vI'!T0˛vjTP3*og$NLŰ;5`:#W*?Qv(emn B o3-X^۵`_Z 7N2jM=$U')id&vI_Fk;*6SRmt +&3 ;BvHh2ٰ#י`´+blj[ߩ<ϖܱi~u w}XU!1?HV|_weȋi?Y]"8$}-\<.woeإ?S "_ҐmQ%#NKzkOҒb,#Ňw,p6XGS  bKF#/G7EYt}'(Nl"fJ1k,䷆| N,8ˎI|21ݹ܈U2))M  OH˖Ϋ7k:_7}M1 BGM I}uG]4Aɖ|[1+qIMjة?'b_\H] .->%HZ>IbBjjqb^ğ!/*CŇw,o4O2:3@!@@^WL heq{a^gaewX<Y޹b*22#̇r뻍>[O|c]yM}9/B"V@<uBgIoEFoBG~9L;qtB((aGfGގn܎ɮĥ[?{wǕp?$$@@@@ŋڧhPٌz˗oaԩS"@ :z1@ wH$ t@@ d 2е ʚhdߛFd7ZY#L -HI=k>js*Q>=ḧZ)]-1q0Yct2=4?+^?޵Ljj5f^@-j1G?-;?؋[ܼ/ KXWcvo^S\ cYW VYlXܥ8őU(= NJ jAc^o筨V^3Wʲﱅsꋟ杼?v;:|r\M,X5 q cB#]7]blo3g'j%VFeb+jں}(+ gb2/3WQn L{b~a{_}bԬg?8|]<ʄPr[Kk̚Q[^gx隧vmջϕ_@H=v5 ~tP\yo EB,Q{F]s#ŝp.EcŰwoh|_ϔZ8Ɗai.~Wm;>|]3O3@#LS~4Bi4S61jcϏLZ' w+&@0)F(;ǍʷuV@@oHNQ(?=wvP834!vo))]S }ΠJ[,_kբ?ϒc.{ҲS(<+wܕozy  |{[=?g>;Z@TcGmC-J!$9iyZ׎IgJc6=*LV8ʏF;Fv!֎#Y6HӶՊ(kGmZwpPOx쇞*pZm_?TؚpS+\ZY~CtXT[z̻kj}kM5@+=f}]YҵTF#lVV qVV#Rde?[ P[R?1,4?:roitwv|gofs~A_u,ƖlKX / Z 4Gk97 $Bk}UKw+@|Uxhk6G߫,^@!2HV'!@@ d [@Z-_#Ϻ4^w׾=!@gΜI;(@ ꫯR z{/p+kM @ @ @ YFtgT/233333efGGTznHG={f<-GODzz|x^2c1Fw!K-~2T\]X;?Zg1c1 Ps \>ܣ9ήt߼\+>S1cQp\-V N({*@:'UT[{x~Zo c1c9pWirW^yW PqU z`nId1c48PspUì%@M P=>_ PqOp^8PS|\|c1́;4x-p \'M9"s=%@u=gy??Ylo?W 1c1 CsSc8L NHՇX״+gn7?X?Y׋d1c4[4x=p\'*8(@ͮ P}y|XqE}uR謹{?;hA1cQ>!)1p Ap7wǙ?"@ۧɇs"$8c1hL1r*"8 n* f \gVʬ6<P|.V\^\SwsPFa3KLqP1cQC EY"@Tov,oV?CZ\ɫOԏ!{)@ b1c@G`8.&QNYpETO %L , 1dv?{u}?_hc1c 59&pbp&g&8ʔ4<T/&LS'ge\!0{5{u=!Fc1c 5]s7\x6Z5 9S[9:?qc1sBMY4x)2X5.0Z&l&:ijSzc2Hۿkw~_.""""" 758859F2_?9, X_FM+r!7fveqñNZo? ,"""""{KjmjpjMv 2?X5WYpN8 S}@l:>epi&7/at(C<=-p \f7⇌~|M+ 1;{wqb+0|uEHDDDDD6'8.SYs,l&#mM 8 bSѸ((iM1!])}QH1p #? 2N;b$fZL⦾ Fb?3EEDDDDDp4Ǹ=p\%, Q1|*)""""""?E?ZP&8Kcze\]>ѹ/*""""";49ōS\_?33wH'9ݵxz;wO57usDDDDDdof﷚V,^o N}%2hq6[gPgh\c(p$MMLv`3a/r3u=č*ޮSW?뇑"> "9jVWC jC/N藾eUpq"0~"\ ZXmLt)/lpC" &7}j~7uyU]ۭ }:>"]7p1o7d~0.[)@0 P? 
oQs.иnJX3gᵃ5 |xXbkMlmmLC """""[ ~uw[j:=ORG|p\' r0%?ҷdO t4vK$({ؼKS]1x!,1rGn(& `n /ZV?K5~`ħ98Asq!{~g\6L @{F b^7_ tzZPG"D"``(X""""""]L$2i9wS⓮Os+pa~ҷadgg-%( Y "l/n 3!bsTQ.PGxPgЋ+ Cs1 "?G$@@R$MJmRN7("fz_]am+n"1bln""""""#:jwjoІ]t}pgϸmUgvƥpIP#ÉNAS"&v a"c""G$(-"""""z;I}١vPGz\˧ħDK`7Y k7 bSR$hԋm.692D-Q(I?DDDDD$rDtSCmK?ЋT'[&?Yvݟ.nĦ~I\A#^z)M_DDDDDSGtz١P҃O>7b7YY,@G"AD4%BY`Q"I"""""RGr":줮;=8@MO-2Hg ^ D ʘ)7%D}hQ_DDDDDd{;SCKN0%>sQ~h ZWX7%CR1DDDDDDd3@D^x'XW|֒ 3 U"DkQz)(r""""""c^r":3JJ|Vu}',SDFhQ/G$妗(;ҳD|zYLF !E(2(E(I!´?XrKOg\6'>+@"4K4Q(؄QrΔLu{l-͋*QF1i%<،3(;aJz%hMд}"6#ӢJxFY.>{#?"H"DSR"H3DDDDDDd,Ǻ~RYC|`lprhZFI;r3):,5Ï:2!Q :5(8sYKz` (FDDDDDDL]Ttb659FDDDDD߳&kZZDIENDB`compose-1.8.0/docs/index.md000066400000000000000000000017351274620702700155630ustar00rootroot00000000000000 # Docker Compose Compose is a tool for defining and running multi-container Docker applications. To learn more about Compose refer to the following documentation: - [Compose Overview](overview.md) - [Install Compose](install.md) - [Getting Started](gettingstarted.md) - [Get started with Django](django.md) - [Get started with Rails](rails.md) - [Get started with WordPress](wordpress.md) - [Frequently asked questions](faq.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) - [Environment file](env-file.md) To see a detailed list of changes for past and current releases of Docker Compose, please refer to the [CHANGELOG](https://github.com/docker/compose/blob/master/CHANGELOG.md). compose-1.8.0/docs/install.md000066400000000000000000000112601274620702700161140ustar00rootroot00000000000000 # Install Docker Compose You can run Compose on OS X, Windows and 64-bit Linux. To install it, you'll need to install Docker first. To install Compose, do the following: 1. Install Docker Engine: * Mac OS X installation * Windows installation * Ubuntu installation * other system installations 2. The Docker Toolbox installation includes both Engine and Compose, so Mac and Windows users are done installing. Others should continue to the next step. 3. Go to the Compose repository release page on GitHub. 4. Follow the instructions from the release page and run the `curl` command, which the release page specifies, in your terminal. > Note: If you get a "Permission denied" error, your `/usr/local/bin` directory probably isn't writable and you'll need to install Compose as the superuser. Run `sudo -i`, then the two commands below, then `exit`. The following is an example command illustrating the format: curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose If you have problems installing with `curl`, see [Alternative Install Options](#alternative-install-options). 5. Apply executable permissions to the binary: $ chmod +x /usr/local/bin/docker-compose 6. Optionally, install [command completion](completion.md) for the `bash` and `zsh` shell. 7. Test the installation. $ docker-compose --version docker-compose version: 1.8.0 ## Alternative install options ### Install using pip Compose can be installed from [pypi](https://pypi.python.org/pypi/docker-compose) using `pip`. If you install using `pip` it is highly recommended that you use a [virtualenv](https://virtualenv.pypa.io/en/latest/) because many operating systems have python system packages that conflict with docker-compose dependencies. See the [virtualenv tutorial](http://docs.python-guide.org/en/latest/dev/virtualenvs/) to get started. 
$ pip install docker-compose > **Note:** pip version 6.0 or greater is required ### Install as a container Compose can also be run inside a container, from a small bash script wrapper. To install compose as a container run: $ curl -L https://github.com/docker/compose/releases/download/1.8.0/run.sh > /usr/local/bin/docker-compose $ chmod +x /usr/local/bin/docker-compose ## Master builds If you're interested in trying out a pre-release build you can download a binary from https://dl.bintray.com/docker-compose/master/. Pre-release builds allow you to try out new features before they are released, but may be less stable. ## Upgrading If you're upgrading from Compose 1.2 or earlier, you'll need to remove or migrate your existing containers after upgrading Compose. This is because, as of version 1.3, Compose uses Docker labels to keep track of containers, and so they need to be recreated with labels added. If Compose detects containers that were created without labels, it will refuse to run so that you don't end up with two sets of them. If you want to keep using your existing containers (for example, because they have data volumes you want to preserve) you can use compose 1.5.x to migrate them with the following command: $ docker-compose migrate-to-labels Alternatively, if you're not worried about keeping them, you can remove them. Compose will just create new ones. $ docker rm -f -v myapp_web_1 myapp_db_1 ... ## Uninstallation To uninstall Docker Compose if you installed using `curl`: $ rm /usr/local/bin/docker-compose To uninstall Docker Compose if you installed using `pip`: $ pip uninstall docker-compose >**Note**: If you get a "Permission denied" error using either of the above >methods, you probably do not have the proper permissions to remove >`docker-compose`. To force the removal, prepend `sudo` to either of the above >commands and run again. ## Where to go next - [User guide](index.md) - [Getting Started](gettingstarted.md) - [Get started with Django](django.md) - [Get started with Rails](rails.md) - [Get started with WordPress](wordpress.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.8.0/docs/link-env-deprecated.md000066400000000000000000000036451274620702700202770ustar00rootroot00000000000000 # Link environment variables reference > **Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](compose-file.md#links) for details. > > Environment variables will only be populated if you're using the [legacy version 1 Compose file format](compose-file.md#versioning). Compose uses [Docker links](/engine/userguide/networking/default_network/dockerlinks.md) to expose services' containers to one another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container. To see what environment variables are available to a service, run `docker-compose run SERVICE env`. name\_PORT
Full URL, e.g. `DB_PORT=tcp://172.17.0.5:5432` name\_PORT\_num\_protocol
Full URL, e.g. `DB_PORT_5432_TCP=tcp://172.17.0.5:5432` name\_PORT\_num\_protocol\_ADDR
Container's IP address, e.g. `DB_PORT_5432_TCP_ADDR=172.17.0.5` name\_PORT\_num\_protocol\_PORT
Exposed port number, e.g. `DB_PORT_5432_TCP_PORT=5432` name\_PORT\_num\_protocol\_PROTO
Protocol (tcp or udp), e.g. `DB_PORT_5432_TCP_PROTO=tcp` name\_NAME
Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1` ## Related Information - [User guide](index.md) - [Installing Compose](install.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.8.0/docs/networking.md000066400000000000000000000143041274620702700166370ustar00rootroot00000000000000 # Networking in Compose > **Note:** This document only applies if you're using [version 2 of the Compose file format](compose-file.md#versioning). Networking features are not supported for version 1 (legacy) Compose files. By default Compose sets up a single [network](https://docs.docker.com/engine/reference/commandline/network_create/) for your app. Each container for a service joins the default network and is both *reachable* by other containers on that network, and *discoverable* by them at a hostname identical to the container name. > **Note:** Your app's network is given a name based on the "project name", > which is based on the name of the directory it lives in. You can override the > project name with either the [`--project-name` > flag](reference/overview.md) or the [`COMPOSE_PROJECT_NAME` environment > variable](reference/envvars.md#compose-project-name). For example, suppose your app is in a directory called `myapp`, and your `docker-compose.yml` looks like this: version: '2' services: web: build: . ports: - "8000:8000" db: image: postgres When you run `docker-compose up`, the following happens: 1. A network called `myapp_default` is created. 2. A container is created using `web`'s configuration. It joins the network `myapp_default` under the name `web`. 3. A container is created using `db`'s configuration. It joins the network `myapp_default` under the name `db`. Each container can now look up the hostname `web` or `db` and get back the appropriate container's IP address. For example, `web`'s application code could connect to the URL `postgres://db:5432` and start using the Postgres database. Because `web` explicitly maps a port, it's also accessible from the outside world via port 8000 on your Docker host's network interface. ## Updating containers If you make a configuration change to a service and run `docker-compose up` to update it, the old container will be removed and the new one will join the network under a different IP address but the same name. Running containers will be able to look up that name and connect to the new address, but the old address will stop working. If any containers have connections open to the old container, they will be closed. It is a container's responsibility to detect this condition, look up the name again and reconnect. ## Links Links allow you to define extra aliases by which a service is reachable from another service. They are not required to enable services to communicate - by default, any service can reach any other service at that service's name. In the following example, `db` is reachable from `web` at the hostnames `db` and `database`: version: '2' services: web: build: . links: - "db:database" db: image: postgres See the [links reference](compose-file.md#links) for more information. ## Multi-host networking When [deploying a Compose application to a Swarm cluster](swarm.md), you can make use of the built-in `overlay` driver to enable multi-host communication between containers with no changes to your Compose file or application code. 
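As a sketch of what that can look like when you do declare the driver
explicitly (the service and network names here are examples only, and this
assumes the Engine is already part of a Swarm cluster):

    version: '2'

    services:
      web:
        build: .
        networks:
          - front

    networks:
      front:
        # the built-in multi-host driver
        driver: overlay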
Consult the [Getting started with multi-host
networking](https://docs.docker.com/engine/userguide/networking/get-started-overlay/)
to see how to set up a Swarm cluster. The cluster will use the `overlay`
driver by default, but you can specify it explicitly if you prefer - see below
for how to do this.

## Specifying custom networks

Instead of just using the default app network, you can specify your own
networks with the top-level `networks` key. This lets you create more complex
topologies and specify [custom network
drivers](https://docs.docker.com/engine/extend/plugins_network/) and options.
You can also use it to connect services to externally-created networks which
aren't managed by Compose.

Each service can specify what networks to connect to with the *service-level*
`networks` key, which is a list of names referencing entries under the
*top-level* `networks` key.

Here's an example Compose file defining two custom networks. The `proxy`
service is isolated from the `db` service, because they do not share a network
in common - only `app` can talk to both.

    version: '2'

    services:
      proxy:
        build: ./proxy
        networks:
          - front
      app:
        build: ./app
        networks:
          - front
          - back
      db:
        image: postgres
        networks:
          - back

    networks:
      front:
        # Use a custom driver
        driver: custom-driver-1
      back:
        # Use a custom driver which takes special options
        driver: custom-driver-2
        driver_opts:
          foo: "1"
          bar: "2"

Networks can be configured with static IP addresses by setting the
[ipv4_address and/or ipv6_address](compose-file.md#ipv4-address-ipv6-address)
for each attached network.

For full details of the network configuration options available, see the
following references:

- [Top-level `networks` key](compose-file.md#network-configuration-reference)
- [Service-level `networks` key](compose-file.md#networks)

## Configuring the default network

Instead of (or as well as) specifying your own networks, you can also change
the settings of the app-wide default network by defining an entry under
`networks` named `default`:

    version: '2'

    services:
      web:
        build: .
        ports:
          - "8000:8000"
      db:
        image: postgres

    networks:
      default:
        # Use a custom driver
        driver: custom-driver-1

## Using a pre-existing network

If you want your containers to join a pre-existing network, use the
[`external` option](compose-file.md#network-configuration-reference):

    networks:
      default:
        external:
          name: my-pre-existing-network

Instead of attempting to create a network called `[projectname]_default`,
Compose will look for a network called `my-pre-existing-network` and connect
your app's containers to it.
compose-1.8.0/docs/overview.md000066400000000000000000000164161274620702700163220ustar00rootroot00000000000000
# Overview of Docker Compose

Compose is a tool for defining and running multi-container Docker
applications. With Compose, you use a Compose file to configure your
application's services. Then, using a single command, you create and start all
the services from your configuration. To learn more about all the features of
Compose see [the list of features](#features).

Compose is great for development, testing, and staging environments, as well
as CI workflows. You can learn more about each case in
[Common Use Cases](#common-use-cases).

Using Compose is basically a three-step process.

1. Define your app's environment with a `Dockerfile` so it can be reproduced
anywhere; a minimal sketch of such a file follows this list.
2. Define the services that make up your app in `docker-compose.yml` so they
can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire
app.
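For step 1, a minimal `Dockerfile` might look something like the sketch below.
This is only an illustration, not part of Compose itself; the base image,
paths and commands are assumptions for a small Python app:

    FROM python:2.7
    # copy the application code into the image
    ADD . /code
    WORKDIR /code
    # install the app's dependencies inside the image
    RUN pip install -r requirements.txt
    CMD python app.py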
A `docker-compose.yml` looks like this:

    version: '2'
    services:
      web:
        build: .
        ports:
        - "5000:5000"
        volumes:
        - .:/code
        - logvolume01:/var/log
        links:
        - redis
      redis:
        image: redis
    volumes:
      logvolume01: {}

For more information about the Compose file, see the
[Compose file reference](compose-file.md).

Compose has commands for managing the whole lifecycle of your application:

* Start, stop and rebuild services
* View the status of running services
* Stream the log output of running services
* Run a one-off command on a service

## Compose documentation

- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Frequently asked questions](faq.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

## Features

The features of Compose that make it effective are:

* [Multiple isolated environments on a single host](#multiple-isolated-environments-on-a-single-host)
* [Preserve volume data when containers are created](#preserve-volume-data-when-containers-are-created)
* [Only recreate containers that have changed](#only-recreate-containers-that-have-changed)
* [Variables and moving a composition between environments](#variables-and-moving-a-composition-between-environments)

### Multiple isolated environments on a single host

Compose uses a project name to isolate environments from each other. You can
make use of this project name in several different contexts:

* on a dev host, to create multiple copies of a single environment (e.g., you
want to run a stable copy for each feature branch of a project)
* on a CI server, to keep builds from interfering with each other, you can set
the project name to a unique build number
* on a shared host or dev host, to prevent different projects, which may use
the same service names, from interfering with each other

The default project name is the basename of the project directory. You can set
a custom project name by using the
[`-p` command line option](./reference/overview.md) or the
[`COMPOSE_PROJECT_NAME` environment variable](./reference/envvars.md#compose-project-name).

### Preserve volume data when containers are created

Compose preserves all volumes used by your services. When `docker-compose up`
runs, if it finds any containers from previous runs, it copies the volumes
from the old container to the new container. This process ensures that any
data you've created in volumes isn't lost.

### Only recreate containers that have changed

Compose caches the configuration used to create a container. When you restart
a service that has not changed, Compose re-uses the existing containers.
Re-using containers means that you can make changes to your environment very
quickly.

### Variables and moving a composition between environments

Compose supports variables in the Compose file. You can use these variables to
customize your composition for different environments, or different users. See
[Variable substitution](compose-file.md#variable-substitution) for more
details.

You can extend a Compose file using the `extends` field or by creating
multiple Compose files. See [extends](extends.md) for more details.

## Common Use Cases

Compose can be used in many different ways. Some common use cases are outlined
below.

### Development environments

When you're developing software, the ability to run an application in an
isolated environment and interact with it is crucial.
The Compose command line tool can be used to create the environment and interact with it. The [Compose file](compose-file.md) provides a way to document and configure all of the application's service dependencies (databases, queues, caches, web service APIs, etc). Using the Compose command line tool you can create and start one or more containers for each dependency with a single command (`docker-compose up`). Together, these features provide a convenient way for developers to get started on a project. Compose can reduce a multi-page "developer getting started guide" to a single machine readable Compose file and a few commands. ### Automated testing environments An important part of any Continuous Deployment or Continuous Integration process is the automated test suite. Automated end-to-end testing requires an environment in which to run tests. Compose provides a convenient way to create and destroy isolated testing environments for your test suite. By defining the full environment in a [Compose file](compose-file.md) you can create and destroy these environments in just a few commands: $ docker-compose up -d $ ./run_tests $ docker-compose down ### Single host deployments Compose has traditionally been focused on development and testing workflows, but with each release we're making progress on more production-oriented features. You can use Compose to deploy to a remote Docker Engine. The Docker Engine may be a single instance provisioned with [Docker Machine](/machine/overview.md) or an entire [Docker Swarm](/swarm/overview.md) cluster. For details on using production-oriented features, see [compose in production](production.md) in this documentation. ## Release Notes To see a detailed list of changes for past and current releases of Docker Compose, please refer to the [CHANGELOG](https://github.com/docker/compose/blob/master/CHANGELOG.md). ## Getting help Docker Compose is under active development. If you need help, would like to contribute, or simply want to talk about the project with like-minded individuals, we have a number of open channels for communication. * To report bugs or file feature requests: please use the [issue tracker on Github](https://github.com/docker/compose/issues). * To talk about the project with people in real time: please join the `#docker-compose` channel on freenode IRC. * To contribute code or documentation changes: please submit a [pull request on Github](https://github.com/docker/compose/pulls). For more information and resources, please visit the [Getting Help project page](https://docs.docker.com/opensource/get-help/). compose-1.8.0/docs/production.md000066400000000000000000000065341274620702700166440ustar00rootroot00000000000000 ## Using Compose in production When you define your app with Compose in development, you can use this definition to run your application in different environments such as CI, staging, and production. The easiest way to deploy an application is to run it on a single server, similar to how you would run your development environment. If you want to scale up your application, you can run Compose apps on a Swarm cluster. ### Modify your Compose file for production You'll almost certainly want to make changes to your app configuration that are more appropriate to a live environment. 
These changes may include: - Removing any volume bindings for application code, so that code stays inside the container and can't be changed from outside - Binding to different ports on the host - Setting environment variables differently (e.g., to decrease the verbosity of logging, or to enable email sending) - Specifying a restart policy (e.g., `restart: always`) to avoid downtime - Adding extra services (e.g., a log aggregator) For this reason, you'll probably want to define an additional Compose file, say `production.yml`, which specifies production-appropriate configuration. This configuration file only needs to include the changes you'd like to make from the original Compose file. The additional Compose file can be applied over the original `docker-compose.yml` to create a new configuration. Once you've got a second configuration file, tell Compose to use it with the `-f` option: $ docker-compose -f docker-compose.yml -f production.yml up -d See [Using multiple compose files](extends.md#different-environments) for a more complete example. ### Deploying changes When you make changes to your app code, you'll need to rebuild your image and recreate your app's containers. To redeploy a service called `web`, you would use: $ docker-compose build web $ docker-compose up --no-deps -d web This will first rebuild the image for `web` and then stop, destroy, and recreate *just* the `web` service. The `--no-deps` flag prevents Compose from also recreating any services which `web` depends on. ### Running Compose on a single server You can use Compose to deploy an app to a remote Docker host by setting the `DOCKER_HOST`, `DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` environment variables appropriately. For tasks like this, [Docker Machine](/machine/overview.md) makes managing local and remote Docker hosts very easy, and is recommended even if you're not deploying remotely. Once you've set up your environment variables, all the normal `docker-compose` commands will work with no further configuration. ### Running Compose on a Swarm cluster [Docker Swarm](/swarm/overview.md), a Docker-native clustering system, exposes the same API as a single Docker host, which means you can use Compose against a Swarm instance and run your apps across multiple hosts. Read more about the Compose/Swarm integration in the [integration guide](swarm.md). ## Compose documentation - [Installing Compose](install.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.8.0/docs/rails.md000066400000000000000000000146141274620702700155660ustar00rootroot00000000000000 ## Quickstart: Docker Compose and Rails This Quickstart guide will show you how to use Docker Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md). ### Define the project Start by setting up the three files you'll need to build the app. First, since your app is going to run inside a Docker container containing all of its dependencies, you'll need to define exactly what needs to be included in the container. This is done using a file called `Dockerfile`. To begin with, the Dockerfile consists of: FROM ruby:2.2.0 RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs RUN mkdir /myapp WORKDIR /myapp ADD Gemfile /myapp/Gemfile ADD Gemfile.lock /myapp/Gemfile.lock RUN bundle install ADD . /myapp That'll put your application code inside an image that will build a container with Ruby, Bundler and all your dependencies inside it. 
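If you want to sanity-check that the image builds on its own before involving
Compose, you can build it directly with Docker (the `myapp` tag here is just
an example name):

    $ docker build -t myapp .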
For more information on how to write Dockerfiles, see the [Docker user guide](/engine/tutorials/dockerimages.md#building-an-image-from-a-dockerfile) and the [Dockerfile reference](/engine/reference/builder.md).

Next, create a bootstrap `Gemfile` which just loads Rails. It'll be overwritten in a moment by `rails new`.

    source 'https://rubygems.org'
    gem 'rails', '4.2.0'

You'll need an empty `Gemfile.lock` in order to build our `Dockerfile`.

    $ touch Gemfile.lock

Finally, `docker-compose.yml` is where the magic happens. This file describes the services that comprise your app (a database and a web app), how to get each one's Docker image (the database just runs on a pre-made PostgreSQL image, and the web app is built from the current directory), and the configuration needed to link them together and expose the web app's port.

    version: '2'
    services:
      db:
        image: postgres
      web:
        build: .
        command: bundle exec rails s -p 3000 -b '0.0.0.0'
        volumes:
          - .:/myapp
        ports:
          - "3000:3000"
        depends_on:
          - db

### Build the project

With those three files in place, you can now generate the Rails skeleton app using `docker-compose run`:

    $ docker-compose run web rails new . --force --database=postgresql --skip-bundle

First, Compose will build the image for the `web` service using the `Dockerfile`. Then it'll run `rails new` inside a new container, using that image. Once it's done, you should have generated a fresh app:

    $ ls -l
    total 56
    -rw-r--r--   1 user  staff   215 Feb 13 23:33 Dockerfile
    -rw-r--r--   1 user  staff  1480 Feb 13 23:43 Gemfile
    -rw-r--r--   1 user  staff  2535 Feb 13 23:43 Gemfile.lock
    -rw-r--r--   1 root  root    478 Feb 13 23:43 README.rdoc
    -rw-r--r--   1 root  root    249 Feb 13 23:43 Rakefile
    drwxr-xr-x   8 root  root    272 Feb 13 23:43 app
    drwxr-xr-x   6 root  root    204 Feb 13 23:43 bin
    drwxr-xr-x  11 root  root    374 Feb 13 23:43 config
    -rw-r--r--   1 root  root    153 Feb 13 23:43 config.ru
    drwxr-xr-x   3 root  root    102 Feb 13 23:43 db
    -rw-r--r--   1 user  staff   161 Feb 13 23:35 docker-compose.yml
    drwxr-xr-x   4 root  root    136 Feb 13 23:43 lib
    drwxr-xr-x   3 root  root    102 Feb 13 23:43 log
    drwxr-xr-x   7 root  root    238 Feb 13 23:43 public
    drwxr-xr-x   9 root  root    306 Feb 13 23:43 test
    drwxr-xr-x   3 root  root    102 Feb 13 23:43 tmp
    drwxr-xr-x   3 root  root    102 Feb 13 23:43 vendor

If you are running Docker on Linux, the files `rails new` created are owned by root. This happens because the container runs as the root user. Change the ownership of the new files.

    sudo chown -R $USER:$USER .

If you are running Docker on Mac or Windows, you should already have ownership of all files, including those generated by `rails new`. List the files just to verify this.

Uncomment the line in your new `Gemfile` which loads `therubyracer`, so you've got a Javascript runtime:

    gem 'therubyracer', platforms: :ruby

Now that you've got a new `Gemfile`, you need to build the image again. (This, and changes to the Dockerfile itself, should be the only times you'll need to rebuild.)

    $ docker-compose build

### Connect the database

The app is now bootable, but you're not quite there yet. By default, Rails expects a database to be running on `localhost` - so you need to point it at the `db` container instead. You also need to change the database and username to align with the defaults set by the `postgres` image.
Replace the contents of `config/database.yml` with the following:

    development: &default
      adapter: postgresql
      encoding: unicode
      database: postgres
      pool: 5
      username: postgres
      password:
      host: db

    test:
      <<: *default
      database: myapp_test

You can now boot the app with:

    $ docker-compose up

If all's well, you should see some PostgreSQL output, and then—after a few seconds—the familiar refrain:

    myapp_web_1 | [2014-01-17 17:16:29] INFO  WEBrick 1.3.1
    myapp_web_1 | [2014-01-17 17:16:29] INFO  ruby 2.2.0 (2014-12-25) [x86_64-linux-gnu]
    myapp_web_1 | [2014-01-17 17:16:29] INFO  WEBrick::HTTPServer#start: pid=1 port=3000

Finally, you need to create the database. In another terminal, run:

    $ docker-compose run web rake db:create

That's it. Your app should now be running on port 3000 on your Docker daemon. If you're using [Docker Machine](/machine/overview.md), then `docker-machine ip MACHINE_VM` returns the Docker host IP address.

![Rails example](images/rails-welcome.png)

> **Note**: If you stop the example application and attempt to restart it, you might get the following error: `web_1 | A server is already running. Check /myapp/tmp/pids/server.pid.` One way to resolve this is to delete the file `tmp/pids/server.pid`, and then re-start the application with `docker-compose up`.

## More Compose documentation

- [User guide](index.md)
- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with WordPress](wordpress.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)
compose-1.8.0/docs/reference/000077500000000000000000000000001274620702700160625ustar00rootroot00000000000000compose-1.8.0/docs/reference/build.md000066400000000000000000000012311274620702700175000ustar00rootroot00000000000000

# build

```
Usage: build [options] [SERVICE...]

Options:
    --force-rm  Always remove intermediate containers.
    --no-cache  Do not use cache when building the image.
    --pull      Always attempt to pull a newer version of the image.
```

Services are built once and then tagged as `project_service`, e.g., `composetest_db`. If you change a service's Dockerfile or the contents of its build directory, run `docker-compose build` to rebuild it.
compose-1.8.0/docs/reference/bundle.md000066400000000000000000000020021274620702700176530ustar00rootroot00000000000000

# bundle

```
Usage: bundle [options]

Options:
    --push-images     Automatically push images for any services
                      which have a `build` option specified.

    -o, --output PATH Path to write the bundle file to.
                      Defaults to "<project name>.dab".
```

Generate a Distributed Application Bundle (DAB) from the Compose file.

Images must have digests stored, which requires interaction with a Docker registry. If digests aren't stored for all images, you can fetch them with `docker-compose pull` or `docker-compose push`. To push images automatically when bundling, pass `--push-images`. Only services with a `build` option specified will have their images pushed.
compose-1.8.0/docs/reference/config.md000066400000000000000000000007621274620702700176560ustar00rootroot00000000000000

# config

```
Usage: config [options]

Options:
    -q, --quiet     Only validate the configuration, don't print anything.
    --services      Print the service names, one per line.
```

Validate and view the compose file.
compose-1.8.0/docs/reference/create.md000066400000000000000000000014441274620702700176520ustar00rootroot00000000000000

# create

```
Creates containers for a service.

Usage: create [options] [SERVICE...]

Options:
    --force-recreate       Recreate containers even if their configuration and
                           image haven't changed. Incompatible with --no-recreate.
    --no-recreate          If containers already exist, don't recreate them.
                           Incompatible with --force-recreate.
    --no-build             Don't build an image, even if it's missing.
    --build                Build images before creating containers.
```
compose-1.8.0/docs/reference/down.md000066400000000000000000000022431274620702700173540ustar00rootroot00000000000000

# down

```
Usage: down [options]

Options:
    --rmi type          Remove images. Type must be one of:
                          'all': Remove all images used by any service.
                          'local': Remove only images that don't have a
                          custom tag set by the `image` field.
    -v, --volumes       Remove named volumes declared in the `volumes` section
                        of the Compose file and anonymous volumes attached
                        to containers.
    --remove-orphans    Remove containers for services not defined in the
                        Compose file
```

Stops containers and removes containers, networks, volumes, and images created by `up`.

By default, the only things removed are:

- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used

Networks and volumes defined as `external` are never removed.
compose-1.8.0/docs/reference/envvars.md000066400000000000000000000072721274620702700200740ustar00rootroot00000000000000

# CLI Environment Variables

Several environment variables are available for you to configure the Docker Compose command-line behaviour.

Variables starting with `DOCKER_` are the same as those used to configure the Docker command-line client. If you're using `docker-machine`, then the `eval "$(docker-machine env my-docker-vm)"` command should set them to their correct values. (In this example, `my-docker-vm` is the name of a machine you created.)

> Note: Some of these variables can also be provided using an
> [environment file](../env-file.md)

## COMPOSE\_PROJECT\_NAME

Sets the project name. This value is prepended along with the service name to the container name on start up. For example, if your project name is `myapp` and it includes two services `db` and `web`, then Compose starts containers named `myapp_db_1` and `myapp_web_1` respectively.

Setting this is optional. If you do not set this, the `COMPOSE_PROJECT_NAME` defaults to the `basename` of the project directory. See also the `-p` [command-line option](overview.md).

## COMPOSE\_FILE

Specify the path to a Compose file. If not provided, Compose looks for a file named `docker-compose.yml` in the current directory and then each parent directory in succession until a file by that name is found.

This variable supports multiple Compose files separated by a path separator (on Linux and OSX the path separator is `:`, on Windows it is `;`). For example: `COMPOSE_FILE=docker-compose.yml:docker-compose.prod.yml`

See also the `-f` [command-line option](overview.md).

## COMPOSE\_API\_VERSION

The Docker API only supports requests from clients which report a specific version. If you receive a `client and server don't have same version` error using `docker-compose`, you can work around this error by setting this environment variable. Set the version value to match the server version.

Setting this variable is intended as a workaround for situations where you need to run temporarily with a mismatch between the client and server version. For example, if you can upgrade the client but need to wait to upgrade the server.
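For example (the version number below is purely illustrative; use the API version your server actually reports, which `docker version` displays):

    $ COMPOSE_API_VERSION=1.23 docker-compose ps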
Running with this variable set and a known mismatch does prevent some Docker features from working properly. The exact features that fail would depend on the Docker client and server versions. For this reason, running with this variable set is only intended as a workaround and it is not officially supported.

If you run into problems running with this set, resolve the mismatch through upgrade and remove this setting to see if your problems resolve before notifying support.

## DOCKER\_HOST

Sets the URL of the `docker` daemon. As with the Docker client, defaults to `unix:///var/run/docker.sock`.

## DOCKER\_TLS\_VERIFY

When set to anything other than an empty string, enables TLS communication with the `docker` daemon.

## DOCKER\_CERT\_PATH

Configures the path to the `ca.pem`, `cert.pem`, and `key.pem` files used for TLS verification. Defaults to `~/.docker`.

## COMPOSE\_HTTP\_TIMEOUT

Configures the time (in seconds) a request to the Docker daemon is allowed to hang before Compose considers it failed. Defaults to 60 seconds.

## COMPOSE\_TLS\_VERSION

Configures which TLS version is used for TLS communication with the `docker` daemon. Defaults to `TLSv1`. Supported values are: `TLSv1`, `TLSv1_1`, `TLSv1_2`.

## Related Information

- [User guide](../index.md)
- [Installing Compose](../install.md)
- [Compose file reference](../compose-file.md)
- [Environment file](../env-file.md)
compose-1.8.0/docs/reference/events.md000066400000000000000000000012661274620702700177150ustar00rootroot00000000000000

# events

```
Usage: events [options] [SERVICE...]

Options:
    --json      Output events as a stream of json objects
```

Stream container events for every container in the project.

With the `--json` flag, a json object will be printed one per line with the format:

```
{
    "service": "web",
    "event": "create",
    "container": "213cf75fc39a",
    "image": "alpine:edge",
    "time": "2015-11-20T18:01:03.615550",
}
```
compose-1.8.0/docs/reference/exec.md000066400000000000000000000016411274620702700173320ustar00rootroot00000000000000

# exec

```
Usage: exec [options] SERVICE COMMAND [ARGS...]

Options:
    -d                Detached mode: Run command in the background.
    --privileged      Give extended privileges to the process.
    --user USER       Run the command as this user.
    -T                Disable pseudo-tty allocation. By default
                      `docker-compose exec` allocates a TTY.
    --index=index     index of the container if there are multiple
                      instances of a service [default: 1]
```

This is the equivalent of `docker exec`. With this subcommand you can run arbitrary commands in your services. Commands allocate a TTY by default, so you can use a command such as `docker-compose exec web sh` to get an interactive prompt.
compose-1.8.0/docs/reference/help.md000066400000000000000000000004671274620702700173430ustar00rootroot00000000000000

# help

```
Usage: help COMMAND
```

Displays help and usage instructions for a command.
compose-1.8.0/docs/reference/index.md000066400000000000000000000020101274620702700175060ustar00rootroot00000000000000

## Compose command-line reference

The following pages describe the usage information for the [docker-compose](overview.md) subcommands. You can also see this information by running `docker-compose [SUBCOMMAND] --help` from the command line.
* [docker-compose](overview.md) * [build](build.md) * [config](config.md) * [create](create.md) * [down](down.md) * [events](events.md) * [help](help.md) * [kill](kill.md) * [logs](logs.md) * [pause](pause.md) * [port](port.md) * [ps](ps.md) * [pull](pull.md) * [restart](restart.md) * [rm](rm.md) * [run](run.md) * [scale](scale.md) * [start](start.md) * [stop](stop.md) * [unpause](unpause.md) * [up](up.md) ## Where to go next * [CLI environment variables](envvars.md) * [docker-compose Command](overview.md) compose-1.8.0/docs/reference/kill.md000066400000000000000000000010401274620702700173320ustar00rootroot00000000000000 # kill ``` Usage: kill [options] [SERVICE...] Options: -s SIGNAL SIGNAL to send to the container. Default signal is SIGKILL. ``` Forces running containers to stop by sending a `SIGKILL` signal. Optionally the signal can be passed, for example: $ docker-compose kill -s SIGINT compose-1.8.0/docs/reference/logs.md000066400000000000000000000011031274620702700173430ustar00rootroot00000000000000 # logs ``` Usage: logs [options] [SERVICE...] Options: --no-color Produce monochrome output. -f, --follow Follow log output -t, --timestamps Show timestamps --tail Number of lines to show from the end of the logs for each container. ``` Displays log output from services. compose-1.8.0/docs/reference/overview.md000066400000000000000000000110561274620702700202550ustar00rootroot00000000000000 # Overview of docker-compose CLI This page provides the usage information for the `docker-compose` Command. You can also see this information by running `docker-compose --help` from the command line. ``` Define and run multi-container applications with Docker. Usage: docker-compose [-f=...] [options] [COMMAND] [ARGS...] docker-compose -h|--help Options: -f, --file FILE Specify an alternate compose file (default: docker-compose.yml) -p, --project-name NAME Specify an alternate project name (default: directory name) --verbose Show more output -v, --version Print version and exit -H, --host HOST Daemon socket to connect to --tls Use TLS; implied by --tlsverify --tlscacert CA_PATH Trust certs signed only by this CA --tlscert CLIENT_CERT_PATH Path to TLS certificate file --tlskey TLS_KEY_PATH Path to TLS key file --tlsverify Use TLS and verify the remote --skip-hostname-check Don't check the daemon's hostname against the name specified in the client certificate (for example if your docker host is an IP address) Commands: build Build or rebuild services config Validate and view the compose file create Create services down Stop and remove containers, networks, images, and volumes events Receive real time events from containers help Get help on a command kill Kill containers logs View output from containers pause Pause services port Print the public port for a port binding ps List containers pull Pulls service images restart Restart services rm Remove stopped containers run Run a one-off command scale Set number of containers for a service start Start services stop Stop services unpause Unpause services up Create and start containers version Show the Docker-Compose version information ``` The Docker Compose binary. You use this command to build and manage multiple services in Docker containers. Use the `-f` flag to specify the location of a Compose configuration file. You can supply multiple `-f` configuration files. When you supply multiple files, Compose combines them into a single configuration. Compose builds the configuration in the order you supply the files. 
Subsequent files override and add to their predecessors.

For example, consider this command line:

```
$ docker-compose -f docker-compose.yml -f docker-compose.admin.yml run backup_db
```

The `docker-compose.yml` file might specify a `webapp` service.

```
webapp:
  image: examples/web
  ports:
    - "8000:8000"
  volumes:
    - "/data"
```

If the `docker-compose.admin.yml` also specifies this same service, any matching fields will override the previous file. New values add to the `webapp` service configuration.

```
webapp:
  build: .
  environment:
    - DEBUG=1
```

Use a `-f` with `-` (dash) as the filename to read the configuration from stdin. When stdin is used all paths in the configuration are relative to the current working directory.

The `-f` flag is optional. If you don't provide this flag on the command line, Compose traverses the working directory and its parent directories looking for a `docker-compose.yml` and a `docker-compose.override.yml` file. You must supply at least the `docker-compose.yml` file. If both files are present on the same directory level, Compose combines the two files into a single configuration. The configuration in the `docker-compose.override.yml` file is applied over and in addition to the values in the `docker-compose.yml` file.

See also the `COMPOSE_FILE` [environment variable](envvars.md#compose-file).

Each configuration has a project name. If you supply a `-p` flag, you can specify a project name. If you don't specify the flag, Compose uses the current directory name. See also the `COMPOSE_PROJECT_NAME` [environment variable](envvars.md#compose-project-name).

## Where to go next

* [CLI environment variables](envvars.md)
compose-1.8.0/docs/reference/pause.md000066400000000000000000000006141274620702700175220ustar00rootroot00000000000000

# pause

```
Usage: pause [SERVICE...]
```

Pauses running containers of a service. They can be unpaused with `docker-compose unpause`.
compose-1.8.0/docs/reference/port.md000066400000000000000000000010271274620702700173700ustar00rootroot00000000000000

# port

```
Usage: port [options] SERVICE PRIVATE_PORT

Options:
    --protocol=proto  tcp or udp [default: tcp]
    --index=index     index of the container if there are multiple
                      instances of a service [default: 1]
```

Prints the public port for a port binding.
compose-1.8.0/docs/reference/ps.md000066400000000000000000000005101274620702700170220ustar00rootroot00000000000000

# ps

```
Usage: ps [options] [SERVICE...]

Options:
    -q    Only display IDs
```

Lists containers.
compose-1.8.0/docs/reference/pull.md000066400000000000000000000006231274620702700173610ustar00rootroot00000000000000

# pull

```
Usage: pull [options] [SERVICE...]

Options:
    --ignore-pull-failures  Pull what it can and ignores images with pull failures.
```

Pulls service images.
compose-1.8.0/docs/reference/push.md000066400000000000000000000006361274620702700173660ustar00rootroot00000000000000

# push

```
Usage: push [options] [SERVICE...]

Options:
    --ignore-push-failures  Push what it can and ignores images with push failures.
```

Pushes images for services.
compose-1.8.0/docs/reference/restart.md000066400000000000000000000006531274620702700200720ustar00rootroot00000000000000

# restart

```
Usage: restart [options] [SERVICE...]

Options:
  -t, --timeout TIMEOUT      Specify a shutdown timeout in seconds. (default: 10)
```

Restarts services.
compose-1.8.0/docs/reference/rm.md000066400000000000000000000013651274620702700170250ustar00rootroot00000000000000

# rm

```
Usage: rm [options] [SERVICE...]

Options:
    -f, --force   Don't ask to confirm removal
    -v            Remove any anonymous volumes attached to containers
    -a, --all     Also remove one-off containers created by
                  docker-compose run
```

Removes stopped service containers.

By default, anonymous volumes attached to containers will not be removed. You can override this with `-v`. To list all volumes, use `docker volume ls`.

Any data which is not in a volume will be lost.
compose-1.8.0/docs/reference/run.md000066400000000000000000000056621274620702700172150ustar00rootroot00000000000000

# run

```
Usage: run [options] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...]

Options:
    -d                    Detached mode: Run container in the background, print
                          new container name.
    --name NAME           Assign a name to the container
    --entrypoint CMD      Override the entrypoint of the image.
    -e KEY=VAL            Set an environment variable (can be used multiple times)
    -u, --user=""         Run as specified username or uid
    --no-deps             Don't start linked services.
    --rm                  Remove container after run. Ignored in detached mode.
    -p, --publish=[]      Publish a container's port(s) to the host
    --service-ports       Run command with the service's ports enabled and mapped
                          to the host.
    -T                    Disable pseudo-tty allocation. By default `docker-compose run`
                          allocates a TTY.
    -w, --workdir=""      Working directory inside the container
```

Runs a one-time command against a service. For example, the following command starts the `web` service and runs `bash` as its command.

    $ docker-compose run web bash

Commands you use with `run` start in new containers with the same configuration as defined by the service's configuration. This means the container has the same volumes and links as defined in the configuration file. There are two differences, though.

First, the command passed by `run` overrides the command defined in the service configuration. For example, if the `web` service configuration is started with `bash`, then `docker-compose run web python app.py` overrides it with `python app.py`.

The second difference is that the `docker-compose run` command does not create any of the ports specified in the service configuration. This prevents port collisions with already-open ports. If you *do want* the service's ports created and mapped to the host, specify the `--service-ports` flag:

    $ docker-compose run --service-ports web python manage.py shell

Alternatively, manual port mapping can be specified with the `--publish` or `-p` options, just as when using Docker's `run` command:

    $ docker-compose run --publish 8080:80 -p 2022:22 -p 127.0.0.1:2021:21 web python manage.py shell

If you start a service configured with links, the `run` command first checks to see if the linked service is running and starts the service if it is stopped. Once all the linked services are running, the `run` command executes the command you passed it. So, for example, you could run:

    $ docker-compose run db psql -h db -U docker

This would open up an interactive PostgreSQL shell for the linked `db` container.

If you do not want the `run` command to start linked containers, specify the `--no-deps` flag:

    $ docker-compose run --no-deps web python manage.py shell
compose-1.8.0/docs/reference/scale.md000066400000000000000000000007201274620702700174720ustar00rootroot00000000000000

# scale

```
Usage: scale [SERVICE=NUM...]
```

Sets the number of containers to run for a service.

Numbers are specified as arguments in the form `service=num`.
For example: $ docker-compose scale web=2 worker=3 compose-1.8.0/docs/reference/start.md000066400000000000000000000005341274620702700175430ustar00rootroot00000000000000 # start ``` Usage: start [SERVICE...] ``` Starts existing containers for a service. compose-1.8.0/docs/reference/stop.md000066400000000000000000000007761274620702700174030ustar00rootroot00000000000000 # stop ``` Usage: stop [options] [SERVICE...] Options: -t, --timeout TIMEOUT Specify a shutdown timeout in seconds (default: 10). ``` Stops running containers without removing them. They can be started again with `docker-compose start`. compose-1.8.0/docs/reference/unpause.md000066400000000000000000000005441274620702700200670ustar00rootroot00000000000000 # unpause ``` Usage: unpause [SERVICE...] ``` Unpauses paused containers of a service. compose-1.8.0/docs/reference/up.md000066400000000000000000000046031274620702700170330ustar00rootroot00000000000000 # up ``` Usage: up [options] [SERVICE...] Options: -d Detached mode: Run containers in the background, print new container names. Incompatible with --abort-on-container-exit. --no-color Produce monochrome output. --no-deps Don't start linked services. --force-recreate Recreate containers even if their configuration and image haven't changed. Incompatible with --no-recreate. --no-recreate If containers already exist, don't recreate them. Incompatible with --force-recreate. --no-build Don't build an image, even if it's missing. --build Build images before starting containers. --abort-on-container-exit Stops all containers if any container was stopped. Incompatible with -d. -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown when attached or when containers are already running. (default: 10) --remove-orphans Remove containers for services not defined in the Compose file ``` Builds, (re)creates, starts, and attaches to containers for a service. Unless they are already running, this command also starts any linked services. The `docker-compose up` command aggregates the output of each container. When the command exits, all containers are stopped. Running `docker-compose up -d` starts the containers in the background and leaves them running. If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, `docker-compose up` picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the `--no-recreate` flag. If you want to force Compose to stop and recreate all containers, use the `--force-recreate` flag. compose-1.8.0/docs/startup-order.md000066400000000000000000000060151274620702700172630ustar00rootroot00000000000000 # Controlling startup order in Compose You can control the order of service startup with the [depends_on](compose-file.md#depends-on) option. Compose always starts containers in dependency order, where dependencies are determined by `depends_on`, `links`, `volumes_from` and `network_mode: "service:..."`. However, Compose will not wait until a container is "ready" (whatever that means for your particular application) - only until it's running. There's a good reason for this. The problem of waiting for a database (for example) to be ready is really just a subset of a much larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures. 
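As a concrete sketch of the kind of connection-retry logic discussed next (an illustration only, not from the original guide; it assumes a Python application using the `psycopg2` driver, and the retry count and delay are arbitrary):

    import time

    import psycopg2


    def connect_with_retry(dsn, retries=10, delay=1.0):
        # Keep trying to connect; re-raise the error on the final attempt.
        for attempt in range(retries):
            try:
                return psycopg2.connect(dsn)
            except psycopg2.OperationalError:
                if attempt == retries - 1:
                    raise
                time.sleep(delay)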
To handle this, your application should attempt to re-establish a connection to the database after a failure. If the application retries the connection, it should eventually be able to connect to the database. The best solution is to perform this check in your application code, both at startup and whenever a connection is lost for any reason. However, if you don't need this level of resilience, you can work around the problem with a wrapper script: - Use a tool such as [wait-for-it](https://github.com/vishnubob/wait-for-it) or [dockerize](https://github.com/jwilder/dockerize). These are small wrapper scripts which you can include in your application's image and will poll a given host and port until it's accepting TCP connections. Supposing your application's image has a `CMD` set in its Dockerfile, you can wrap it by setting the entrypoint in `docker-compose.yml`: version: "2" services: web: build: . ports: - "80:8000" depends_on: - "db" entrypoint: ./wait-for-it.sh db:5432 db: image: postgres - Write your own wrapper script to perform a more application-specific health check. For example, you might want to wait until Postgres is definitely ready to accept commands: #!/bin/bash set -e host="$1" shift cmd="$@" until psql -h "$host" -U "postgres" -c '\l'; do >&2 echo "Postgres is unavailable - sleeping" sleep 1 done >&2 echo "Postgres is up - executing command" exec $cmd You can use this as a wrapper script as in the previous example, by setting `entrypoint: ./wait-for-postgres.sh db`. ## Compose documentation - [Installing Compose](install.md) - [Get started with Django](django.md) - [Get started with Rails](rails.md) - [Get started with WordPress](wordpress.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.8.0/docs/swarm.md000066400000000000000000000136371274620702700156110ustar00rootroot00000000000000 # Using Compose with Swarm Docker Compose and [Docker Swarm](/swarm/overview.md) aim to have full integration, meaning you can point a Compose app at a Swarm cluster and have it all just work as if you were using a single Docker host. The actual extent of integration depends on which version of the [Compose file format](compose-file.md#versioning) you are using: 1. If you're using version 1 along with `links`, your app will work, but Swarm will schedule all containers on one host, because links between containers do not work across hosts with the old networking system. 2. If you're using version 2, your app should work with no changes: - subject to the [limitations](#limitations) described below, - as long as the Swarm cluster is configured to use the [overlay driver](https://docs.docker.com/engine/userguide/networking/dockernetworks/#an-overlay-network), or a custom driver which supports multi-host networking. Read [Get started with multi-host networking](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) to see how to set up a Swarm cluster with [Docker Machine](/machine/overview.md) and the overlay driver. Once you've got it running, deploying your app to it should be as simple as: $ eval "$(docker-machine env --swarm )" $ docker-compose up ## Limitations ### Building images Swarm can build an image from a Dockerfile just like a single-host Docker instance can, but the resulting image will only live on a single node and won't be distributed to other nodes. If you want to use Compose to scale the service in question to multiple nodes, you'll have to build it yourself, push it to a registry (e.g. 
the Docker Hub) and reference it from `docker-compose.yml`: $ docker build -t myusername/web . $ docker push myusername/web $ cat docker-compose.yml web: image: myusername/web $ docker-compose up -d $ docker-compose scale web=3 ### Multiple dependencies If a service has multiple dependencies of the type which force co-scheduling (see [Automatic scheduling](#automatic-scheduling) below), it's possible that Swarm will schedule the dependencies on different nodes, making the dependent service impossible to schedule. For example, here `foo` needs to be co-scheduled with `bar` and `baz`: version: "2" services: foo: image: foo volumes_from: ["bar"] network_mode: "service:baz" bar: image: bar baz: image: baz The problem is that Swarm might first schedule `bar` and `baz` on different nodes (since they're not dependent on one another), making it impossible to pick an appropriate node for `foo`. To work around this, use [manual scheduling](#manual-scheduling) to ensure that all three services end up on the same node: version: "2" services: foo: image: foo volumes_from: ["bar"] network_mode: "service:baz" environment: - "constraint:node==node-1" bar: image: bar environment: - "constraint:node==node-1" baz: image: baz environment: - "constraint:node==node-1" ### Host ports and recreating containers If a service maps a port from the host, e.g. `80:8000`, then you may get an error like this when running `docker-compose up` on it after the first time: docker: Error response from daemon: unable to find a node that satisfies container==6ab2dfe36615ae786ef3fc35d641a260e3ea9663d6e69c5b70ce0ca6cb373c02. The usual cause of this error is that the container has a volume (defined either in its image or in the Compose file) without an explicit mapping, and so in order to preserve its data, Compose has directed Swarm to schedule the new container on the same node as the old container. This results in a port clash. There are two viable workarounds for this problem: - Specify a named volume, and use a volume driver which is capable of mounting the volume into the container regardless of what node it's scheduled on. Compose does not give Swarm any specific scheduling instructions if a service uses only named volumes. version: "2" services: web: build: . ports: - "80:8000" volumes: - web-logs:/var/log/web volumes: web-logs: driver: custom-volume-driver - Remove the old container before creating the new one. You will lose any data in the volume. $ docker-compose stop web $ docker-compose rm -f web $ docker-compose up web ## Scheduling containers ### Automatic scheduling Some configuration options will result in containers being automatically scheduled on the same Swarm node to ensure that they work correctly. These are: - `network_mode: "service:..."` and `network_mode: "container:..."` (and `net: "container:..."` in the version 1 file format). - `volumes_from` - `links` ### Manual scheduling Swarm offers a rich set of scheduling and affinity hints, enabling you to control where containers are located. They are specified via container environment variables, so you can use Compose's `environment` option to set them. 
    # Schedule containers on a specific node
    environment:
      - "constraint:node==node-1"

    # Schedule containers on a node that has the 'storage' label set to 'ssd'
    environment:
      - "constraint:storage==ssd"

    # Schedule containers where the 'redis' image is already pulled
    environment:
      - "affinity:image==redis"

For the full set of available filters and expressions, see the [Swarm documentation](/swarm/scheduler/filter.md).
compose-1.8.0/docs/wordpress.md000066400000000000000000000100461274620702700164770ustar00rootroot00000000000000

# Quickstart: Docker Compose and WordPress

You can use Docker Compose to easily run WordPress in an isolated environment built with Docker containers. This quick-start guide demonstrates how to use Compose to set up and run WordPress. Before starting, you'll need to have [Compose installed](install.md).

### Define the project

1. Create an empty project directory.

   You can name the directory something easy for you to remember. This directory is the context for your application image. The directory should only contain resources to build that image.

   This project directory will contain a `docker-compose.yml` file which will be complete in itself for a good starter WordPress project.

2. Change directories into your project directory. For example, if you named your directory `my_wordpress`:

       $ cd my_wordpress/

3. Create a `docker-compose.yml` file that will start your `WordPress` blog and a separate `MySQL` instance with a volume mount for data persistence:

       version: '2'
       services:
         db:
           image: mysql:5.7
           volumes:
             - "./.data/db:/var/lib/mysql"
           restart: always
           environment:
             MYSQL_ROOT_PASSWORD: wordpress
             MYSQL_DATABASE: wordpress
             MYSQL_USER: wordpress
             MYSQL_PASSWORD: wordpress

         wordpress:
           depends_on:
             - db
           image: wordpress:latest
           links:
             - db
           ports:
             - "8000:80"
           restart: always
           environment:
             WORDPRESS_DB_HOST: db:3306
             WORDPRESS_DB_PASSWORD: wordpress

   **NOTE**: The folder `./.data/db` will be automatically created in the project directory alongside the `docker-compose.yml`, and it will persist any updates made by WordPress to the database.

### Build the project

Now, run `docker-compose up -d` from your project directory.

This pulls the needed images, and starts the wordpress and database containers, as shown in the example below.

    $ docker-compose up -d
    Creating network "my_wordpress_default" with the default driver
    Pulling db (mysql:5.7)...
    5.7: Pulling from library/mysql
    efd26ecc9548: Pull complete
    a3ed95caeb02: Pull complete
    ...
    Digest: sha256:34a0aca88e85f2efa5edff1cea77cf5d3147ad93545dbec99cfe705b03c520de
    Status: Downloaded newer image for mysql:5.7
    Pulling wordpress (wordpress:latest)...
    latest: Pulling from library/wordpress
    efd26ecc9548: Already exists
    a3ed95caeb02: Pull complete
    589a9d9a7c64: Pull complete
    ...
    Digest: sha256:ed28506ae44d5def89075fd5c01456610cd6c64006addfe5210b8c675881aff6
    Status: Downloaded newer image for wordpress:latest
    Creating my_wordpress_db_1
    Creating my_wordpress_wordpress_1

### Bring up WordPress in a web browser

If you're using [Docker Machine](https://docs.docker.com/machine/), then `docker-machine ip MACHINE_VM` gives you the machine address and you can open `http://MACHINE_VM_IP:8000` in a browser.

At this point, WordPress should be running on port `8000` of your Docker Host, and you can complete the "famous five-minute installation" as a WordPress administrator.

**NOTE**: The WordPress site will not be immediately available on port `8000` because the containers are still being initialized and may take a couple of minutes before the first load.
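While the containers initialize, you can optionally follow their startup output with the `logs` command (covered in the command-line reference above); press `Ctrl-C` to stop following:

    $ docker-compose logs -f wordpress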
![Choose language for WordPress install](images/wordpress-lang.png) ![WordPress Welcome](images/wordpress-welcome.png) ## More Compose documentation - [User guide](index.md) - [Installing Compose](install.md) - [Getting Started](gettingstarted.md) - [Get started with Django](django.md) - [Get started with Rails](rails.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.8.0/experimental/000077500000000000000000000000001274620702700156715ustar00rootroot00000000000000compose-1.8.0/experimental/compose_swarm_networking.md000066400000000000000000000212511274620702700233410ustar00rootroot00000000000000# Experimental: Compose, Swarm and Multi-Host Networking The [experimental build of Docker](https://github.com/docker/docker/tree/master/experimental) has an entirely new networking system, which enables secure communication between containers on multiple hosts. In combination with Docker Swarm and Docker Compose, you can now run multi-container apps on multi-host clusters with the same tooling and configuration format you use to develop them locally. > Note: This functionality is in the experimental stage, and contains some hacks and workarounds which will be removed as it matures. ## Prerequisites Before you start, you’ll need to install the experimental build of Docker, and the latest versions of Machine and Compose. - To install the experimental Docker build on a Linux machine, follow the instructions [here](https://github.com/docker/docker/tree/master/experimental#install-docker-experimental). - To install the experimental Docker build on a Mac, run these commands: $ curl -L https://experimental.docker.com/builds/Darwin/x86_64/docker-latest > /usr/local/bin/docker $ chmod +x /usr/local/bin/docker - To install Machine, follow the instructions [here](https://docs.docker.com/machine/install-machine/). - To install Compose, follow the instructions [here](https://docs.docker.com/compose/install/). You’ll also need a [Docker Hub](https://hub.docker.com/account/signup/) account and a [Digital Ocean](https://www.digitalocean.com/) account. ## Set up a swarm with multi-host networking Set the `DIGITALOCEAN_ACCESS_TOKEN` environment variable to a valid Digital Ocean API token, which you can generate in the [API panel](https://cloud.digitalocean.com/settings/applications). DIGITALOCEAN_ACCESS_TOKEN=abc12345 Start a consul server: docker-machine create -d digitalocean --engine-install-url https://experimental.docker.com consul docker $(docker-machine config consul) run -d -p 8500:8500 -h consul progrium/consul -server -bootstrap (In a real world setting you’d set up a distributed consul, but that’s beyond the scope of this guide!) 
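If you want to check that the consul server is reachable before continuing (an optional step, not part of the original walkthrough; `/v1/status/leader` is Consul's standard status endpoint):

    $ curl http://$(docker-machine ip consul):8500/v1/status/leader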
Create a Swarm token: SWARM_TOKEN=$(docker run swarm create) Create a Swarm master: docker-machine create -d digitalocean --swarm --swarm-master --swarm-discovery=token://$SWARM_TOKEN --engine-install-url="https://experimental.docker.com" --digitalocean-image "ubuntu-14-10-x64" --engine-opt=default-network=overlay:multihost --engine-label=com.docker.network.driver.overlay.bind_interface=eth0 --engine-opt=kv-store=consul:$(docker-machine ip consul):8500 swarm-0 Create a Swarm node: docker-machine create -d digitalocean --swarm --swarm-discovery=token://$SWARM_TOKEN --engine-install-url="https://experimental.docker.com" --digitalocean-image "ubuntu-14-10-x64" --engine-opt=default-network=overlay:multihost --engine-label=com.docker.network.driver.overlay.bind_interface=eth0 --engine-opt=kv-store=consul:$(docker-machine ip consul):8500 --engine-label com.docker.network.driver.overlay.neighbor_ip=$(docker-machine ip swarm-0) swarm-1 You can create more Swarm nodes if you want - it’s best to give them sensible names (swarm-2, swarm-3, etc). Finally, point Docker at your swarm: eval "$(docker-machine env --swarm swarm-0)" ## Run containers and get them communicating Now that you’ve got a swarm up and running, you can create containers on it just like a single Docker instance: $ docker run busybox echo hello world hello world If you run `docker ps -a`, you can see what node that container was started on by looking at its name (here it’s swarm-3): $ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 41f59749737b busybox "echo hello world" 15 seconds ago Exited (0) 13 seconds ago swarm-3/trusting_leakey As you start more containers, they’ll be placed on different nodes across the cluster, thanks to Swarm’s default “spread” scheduling strategy. Every container started on this swarm will use the “overlay:multihost” network by default, meaning they can all intercommunicate. Each container gets an IP address on that network, and an `/etc/hosts` file which will be updated on-the-fly with every other container’s IP address and name. That means that if you have a running container named ‘foo’, other containers can access it at the hostname ‘foo’. Let’s verify that multi-host networking is functioning. Start a long-running container: $ docker run -d --name long-running busybox top If you start a new container and inspect its /etc/hosts file, you’ll see the long-running container in there: $ docker run busybox cat /etc/hosts ... 172.21.0.6 long-running Verify that connectivity works between containers: $ docker run busybox ping long-running PING long-running (172.21.0.6): 56 data bytes 64 bytes from 172.21.0.6: seq=0 ttl=64 time=7.975 ms 64 bytes from 172.21.0.6: seq=1 ttl=64 time=1.378 ms 64 bytes from 172.21.0.6: seq=2 ttl=64 time=1.348 ms ^C --- long-running ping statistics --- 3 packets transmitted, 3 packets received, 0% packet loss round-trip min/avg/max = 1.140/2.099/7.975 ms ## Run a Compose application Here’s an example of a simple Python + Redis app using multi-host networking on a swarm. Create a directory for the app: $ mkdir composetest $ cd composetest Inside this directory, create 2 files. First, create `app.py` - a simple web app that uses the Flask framework and increments a value in Redis: from flask import Flask from redis import Redis import os app = Flask(__name__) redis = Redis(host='composetest_redis_1', port=6379) @app.route('/') def hello(): redis.incr('hits') return 'Hello World! I have been seen %s times.' 
% redis.get('hits')

    if __name__ == "__main__":
        app.run(host="0.0.0.0", debug=True)

Note that we're connecting to a host called `composetest_redis_1` - this is the name of the Redis container that Compose will start.

Second, create a Dockerfile for the app container:

    FROM python:2.7
    RUN pip install flask redis
    ADD . /code
    WORKDIR /code
    CMD ["python", "app.py"]

Build the Docker image and push it to the Hub (you'll need a Hub account). Replace `<username>` with your Docker Hub username:

    $ docker build -t <username>/counter .
    $ docker push <username>/counter

Next, create a `docker-compose.yml`, which defines the configuration for the web and redis containers. Once again, replace `<username>` with your Hub username:

    web:
      image: <username>/counter
      ports:
        - "80:5000"
    redis:
      image: redis

Now start the app:

    $ docker-compose up -d
    Pulling web (username/counter:latest)...
    swarm-0: Pulling username/counter:latest... : downloaded
    swarm-2: Pulling username/counter:latest... : downloaded
    swarm-1: Pulling username/counter:latest... : downloaded
    swarm-3: Pulling username/counter:latest... : downloaded
    swarm-4: Pulling username/counter:latest... : downloaded
    Creating composetest_web_1...
    Pulling redis (redis:latest)...
    swarm-2: Pulling redis:latest... : downloaded
    swarm-1: Pulling redis:latest... : downloaded
    swarm-3: Pulling redis:latest... : downloaded
    swarm-4: Pulling redis:latest... : downloaded
    swarm-0: Pulling redis:latest... : downloaded
    Creating composetest_redis_1...

Swarm has created containers for both web and redis, and placed them on different nodes, which you can check with `docker ps`:

    $ docker ps
    CONTAINER ID    IMAGE               COMMAND                CREATED          STATUS          PORTS                    NAMES
    92faad2135c9    redis               "/entrypoint.sh redi   43 seconds ago   Up 42 seconds                            swarm-2/composetest_redis_1
    adb809e5cdac    username/counter    "/bin/sh -c 'python    55 seconds ago   Up 54 seconds   45.67.8.9:80->5000/tcp   swarm-1/composetest_web_1

You can also see that the web container has exposed port 80 on its swarm node. If you curl that IP, you'll get a response from the container:

    $ curl http://45.67.8.9
    Hello World! I have been seen 1 times.

If you hit it repeatedly, the counter will increment, demonstrating that the web and redis container are communicating:

    $ curl http://45.67.8.9
    Hello World! I have been seen 2 times.
    $ curl http://45.67.8.9
    Hello World! I have been seen 3 times.
    $ curl http://45.67.8.9
    Hello World! I have been seen 4 times.
compose-1.8.0/logo.png000066400000000000000000001143371274620702700146510ustar00rootroot00000000000000[binary PNG image data omitted]
Q9q*7|&BX i"btpb5 GgU40}YO31 0,+UmcDSfc -f b2[ԼBњ)pKd9u.[G[,zk| DMZ!Ϥk>(7n#I㟆/h^ٹq -Bq vj~{_#f-AxmV6Ʊ+L@(w>S/#X>4]m(IV!iɰetttB.X} ƇV%]M̞cdh:vn'TiuyZ"aQ)rze ?R5Xnjb X^>-7pjs=zHZODz.(zm ~Wg+0tԔix|{,Mة.c`c҉|xrK*O9=}X4JeH,.[L(A iS =1zi%/>C$'š7gZe [c )a#PZBXe>ZE_Kjci ü֫Nhjjz#eåщ|_;Ĝu޵0s>u<Ļ\>XV.aahVѰ{+[R} ] MBcúmCR[iXia_'N%npyQ"Bg}ݿ} |Nf\/V~6+ u"ښh=DfJӞ-t:@ E$E=2OEgdۃj"CJ`۶&HlڵӈF.ۗVe 4lk0o%ЕSEl{9F͜~vw܈4LT!Yydf/% 3!dH,%PX\ljjJ$iXVQ,jIJF:d7[b_x3Iga~z"ec%YC+ -tlC 0@Em>X׃yvө&' ѨW2O:a:WD!$PţCbJ`=d b;51PQ.(چhđJuGq,{D";)--2|giO +2}()GVXYr C;.3n8v\& ji  S#-?Cx9Ac@Fy]/>f{~xt:4A'&$Y&-ըHΈd;b0#%d\rzK|X'F${& ˭&HTF0׼{ǵkru/hm'=,.ٰDyqp$ "0Kk.hiD9yyrs֢P iK^lmJz^DWs[0Bu mhvQLz$G|yX@`݁1ǝio4A┉ذs_GQW ?{TĻ41H!DH8L/BCg*tYY !qK9n]taY2u_^_KDk %Xzon^zݷ4ANP5={pꩧjkk &Ý|ofiDN)$'?`PvTvOE ‘ l1DzZ){+3˭^r.o6i_DfKm_/ GuH$7j0 Éӿ~- 4~c{T1LeXƟ1w݆~Qq57(/~bܸq|wykmmm}>7e.:bQ%@H#Lp!jA};2Hk#g{׽Pj呋ˆ7n^~}ggk>eZ(//SظcHyuD zJ;mcc9 ˟s&U@ O<ƢWVM|wWnᳱ1i ro{w}w5񂥄<]5Rl\|i&O\|Emn% zP(t֚2u+jmKfXyeW?L'&4)BM| %]D?vY1%Iȴ9a1^kajS:)RDxwXK4AN\b1S IrGI!n'f ZW}Kub0XlrԽ$Hm޶B Yi rzW)!!=% KiL>S\ 'E @1kM8H >|$iZWAk L/| ޚ?%#r۶VtW ^paG?\s5_R:KRqS{o=kӪƍ:ujy`ХjiiŶP(4<MBb+m @U~~^uZvdggg̘{z>UMsְg C8ʓTӊRl[j}06 bY\ ɓ'ϮX(8 !r0M) RXE $ bv ;0 (R rD, 5aR^wSNN΋O>8X~~|?sϟB% g BbB{ߟUQ9{w ^w\vEBˮ]zd47] RX)*)%3+,/$'<^ `@GionG "O_,׺ʑ#;np?x9g,@kA 1:FjxW߼=7}'ͦ aYv{G25( |*Fby;"rrw[{zGu09~1Ӵ(-Bj|ٽlkEX&aƛ]eWrgV@ Ðض)q;i+?Iaضߔ` xF(m`&G3d)R9<1Ե&p؄q"縞g )1 (g pm5{n`ۖ,_X !$\=OYFHV( m~{Zmuտ?~<۷oO$#++VOp_Wg=BB28e q3Ϻ6# iI$#Q1.%n L)KVd ZA 4R"0+]ĪeKٱm*$ nο Vʆ8L^]9~B~ 6i Q1mTܹ֯sGlذbBddf1~,nvJU`#1 Z9#+VJfJ^\ PDג@iٷc3<,_p0! #/.}.GkĴ4 jkk7`{CלBzSL~[{Ҏ~48e|OHvnZ,i .(%Q= Z" !p5է\&%C{&͘eTI8`ӆ\i0bX"*233^^~jjjlO$>qLo\С*-[^ :϶5%\sgn]416A Bckrj/a 1 "ށ?1D1OA ) Aaa1sO; fAm!kٺa-;lbddT\a#* [w)}>M4zP[W˨1YJ 7s7}F?(p;fv[W0ƖNbJ7ָ$?Ê$)H!*͎e'88rgE}RSse- '1 6qSSSöM0qW/f4ANV1Vʆ]P>'nON^IM;n2 u Fbh!{T8?].Bd;ѝ_{Vf)>L5+loc)ц!Bsٺe 6m~Ȑ!YǮ;v rb(Gyy9-- m"o /Jq7t:V41l;(Oe ~.W7)D_LJX?!^3 #;EػFm={9s:^׹fOguf[SÆoi ralٴ!6i~Զ//τ`)H% "rƍ#"tڐhY:j:?ÃF=8)1fg`)a1tx%3>j.[Τi+B\.f͚â7R_W|kqee%---i8y,_#+loo]lH--1y(r#vb (1mg6_w!{ 1dd=HTkIo=C~ Gziʙ4ϼ3ȁ;lia?y*ha,}m:::,,*j?xhmkKDFjiii_ ufs;rD:eHZHdy]E(A(&f@h'JB+I $/+@#uU I*EBI`suyؼ"N.Y̌٧_@̶)+-˗yAA^n9o%Kfi 5Lο&θj" \BcZĐ, )V21,I,j2GcH&7Ӎ4Fh3LRHf R M? ZcIqf5P;W`ڬ9S0yCknniY& QFSYY/ 3.㟻[񖝤$- x NX dx,2q OM& _HDtrxl5f^Vhc疍:{6M(`uض:{6555 ry۷oe0d|0}7 qFh nK.RJpS/nɑ$Z>@r$$R#8ߢ #Lիߧ6py 0,e1c TUVӦMeuuu9p@ ;&MaC/J埼a&ѳ@&e r3}^Rȸ#%]ba " "QD9bĩoD;sBKL5ߥA :;;?ȑ&Pey76?rXo7*: $uH %&t DLe[ !K@I=cI\Ƣ=h06}64|x9C*Fck x7ho->bMKu8tP ;n]]yE|?DzZ:-=Dע*I8dhˀ}8QiNwʶ[9321dxX}:G?lٲ3i>|_ a3QZSp[E I<ͽ۲2/B*RcRdzi90}W f3l[@M =}C"-2;V3gҥK8böY6BO@pYΌ Bˁ8QԥHAc7QH<%ڊSg̹ tv{lۼ!%tbUdnmڸy!i r;:@CNA&]9b/ʺ90dgfxa#*7;q"=C1WK%r*:񼦾;n*@g O1 IDATRfzo7%9_7Rm@7,J 7?K'/V^Jˇ"MM!c1r$0M0~_<Ӛ( mAJ7q I(DK'K֪=G\׺} Lw!#Ϻ1܌&ry7QwCOr^x<=0NIn2H?Z>Hj{2wq>GH&5!ͣ)Fccc p eC$ӦM~=i$`\|eޯOT^| ~wߙ1  @œ4A=i!FL`C(a3j2yc&Sv5l6|+u N_3V Qw:^>j8vD*-aJRIQBaʥ0-"Jw⦣F)*QJu4Aa֭;:1ƻ(w1"0Nm$$9[`2u:V۴TEϾ #c-: P. =vu4Ǡ)=L h5NĶ߂ }F~"uxM-]-defa&0\c*dddvyuuPSScU,W_+BG#XPe3k(;\|xf- j(eBM5{^=h C;L3ӕ\)'hFU$2tGNj;S$EjϚ yvnª$I%hwUJT4խ  =n@{+htb K.AkMaIٓ&Y˷lS:_.`2sr"hu3/D޺Ic1SW jX70)@Klq#DcÄX,:m.w_˔k3DNZL1 iwO?׮B֡,!kR]7)q],")Lpk!V~IV>ׅ!^-{<;*Rb(;e0IbpU(e'mIOP B4paP_TT ǂv#r5}ևFkSR^v.ц)aA4G!.4#ToZʟ?7seaȄi $Bz ;/|t.!;;za0Q"htHmaJ;ՓDꢢcq&T&{"y#$}QH' )HSmm-xnrssRm/=ys'5A]/ ;0 @DAGpv%Uk^38/VW͆'n{\oB0'Re+ĿLQ!̎n"\u\,颵`< ; @rATCL{ #tOQ/D{ tRӔ7ǻRbt6Ss*u_}sDEƒS:c "`]V ضm Rh/YYغ=3]cfȑ#c'-A"px! 
,BkҊL& ᮸g _~wcTfաHV¸Ɣz3BcQgR?:قNfs$>ީSnq$Id(8Z&YeB]`XH|,xNOFvFak1Ms{%in V& `" ]`tAG@I#ر8qE]Us!QF(oݲy  رs{O`SPAڈ&syӂTZu*~nbHͺ҄˴)۴0twn%%.;ZRxv/q iD.rF;;mx8>&f|UQL.ģw`vx2~a*6Yx uc@` M>Xm0[''`}01xDo,|eͤ:Aw^[ P475 r< FwGc1B:72- qca׀G{ܺw!. {B VϘ}1] 5ٱ!S%?@rc,G-V2"%fe"ިq 5WdmhOv%$f,i4OjL>{ݖACC]kYL1W}VܾJ%KO;DOzYz Zxa=dX#G\pE/?8u [.O]4y 9)퀣hRbH\F%[6h= Ӕ&{vnFir~|Rdڵ 6toWg+)ZۄQg)/.A^&X.<^QB:oKtT);Xg{"wlo*,-9жN7mݸ5WG^ߟc͞]iuԫU˷:u,-1d/t %yMރj[eg3i lVidߞ1y _>UuY}CU{1 G g_|:ƪ>l3gñC=n,XT& y)-x[2 .d d^s=w@gKztF XIQ=^e~$wC0$H,)ZL*(1O@)-:R/ύvnCċ]vMّ ڜ^Bgs? _.c K_2҄pY`q DLҽ^8L b0+ O}:eee e e8uxWc&J+?o:oR=9w8I@"uo:D[)l[! OjcXn{ #'NC-!PZb 7Wxx"?%faw6b^E~Q+qͷ#9 Uie^|!{iLKyoܺ~2c'JoBk'qcr<>2Ąis8OsWS6j"jgܱ.\o|1s7mx.h-۷o?g{0 R]a=nK=$1RF` Lr])PH,hk?P{o\q&l%u؇D#:xOOt?\QQO|uuMH,aH.O0t bZu2M3ddYpyh٨[[=;u{6_ /bmhp+'s6q}~i{wy:)+`TeYJ;ctOvZ9p 1%$UGO/Ȉq;W/P9f2+UV84Z޾GHdwlذ13S rm_ r\.77bة3ⓟ $ z1=B;E[;Pvg,[-kjj~W\x *I2nŮM9spsSSS3fdn݌0PDa2y>^*@kV14ope !1 xO&'_N vw@?Sh0cwntHFs:`횵}O:5|/ϝ;Om𡪙;lOƗ"LStcN24zuiHJwoY{],XumϯlZggiS>=BFl[1etç(G6'^ 4B~̏|M7yw{wSw2|\^1dld#%5VC܏-g]p9#&R* 7ٷ}#iY_pē o߾}~SՁFVT,:tuo1͘҈とPF:S$h\KEE/U+ ̜{:he*>FHмk\4&ꡎחf6nڌeY\ |+Ęq-(GLhj_v/oţKA4ĸ$PTu$IF{ZNGNq4ػs ?7 +ds8KjʑhsIJ zq͔-ݱc緁zom۸ ٹ{ϲ!CtlN=2I,$7+z `ĉ |~㢅7_%|?7QZQ]L>YJe<ٵk;[l_5WOhر_M2o׹ϱfH waSwh?-t;JߜGN]~NJvoC(mRT6ι8>Ŏ8=[iL)XBē=2c|W u=[wٱiŶ?`Μ9|slێ>˟qhN,5mil\.;)Ebk1СC0 v@_O uPFX cFr1dOm(_2?{!P_{q06 ]M)(>+?un:N(*n],zxEmW]ui 7WtH,-nll3sŊlܸq/vuIggGb̶1 Dkݫ]8;vN)RJki? 1!Dd~ˤeog"c$ smo}QF9 ~>;wIz\ݻWmXb~;F2rL%|+x1d7$.x퇣j,{%vm^Kn^nrda}t PPP@ 8=P+G^VQX~=?#?]axԄyB@[[G`BΥx ,Ʋםdެ~xBpi~#DUSw먂J&Jjfy\KO]f7|3M%%y.EKXf: ~U> 'cyQo֯zinnoִb *GFDd|SJIŰrtչjvL>#F:>{2O,NIAUӿAyظavr{i+nܭ\~յIe}XR(׳~rj 9gfeI64 !{QPxmX]E] 6P.^ kGI $@*{q IDAT̜1M^S}2$33={yGZ)Rʠ^ <>zyW\}f{22ur򞜙 FŞവ!bg >s"!Z>$<*VCRۗ<,^J| q #njxC$1Dف(?|'@^H0-S4;S`PV*AS}E{gG{so^gA%b׹%(.ē?JqX{a57OJhbuSHؚҧp cz7I@~ܗ&gn<A!24WDDYm|uʔ)K c cnjƎ3oޝ5al|R_"\p3ﯪYrǜۯKCd\9(2p&"!#"$ up:Z<*̹J9qqZс={O>EcSv s@@€;l E|,zA(`7"O 8Url>Yg88Th^W>{5؛c{wsظ97Q8c敂 "X(hH\ʘ? H<^K};6P}} 0zH.v(Ã~Rɶ^G(cڍPk==Ha0byS1z$}(HO*‹xM5훞`@n΍i}Uơ0 @&P2A׉ʒ"-D\b k$U+r&d0!h Ʊ}4o 5}ljic4i26n\2Çc߾}0[¿~Wfg ˲ۅ7\WGӭ Mo2 y/6ɴ}!o}L{t+E'ЃDS8SBBAYas`T h8 5'jࢫGz0(Be2J=w_F퉊ĸŕ#GĞ={ԳKxb\Hjsjl3E ip$јG ͅE}-ݸ1οUL +Zc;ylNo|yݟփNM0gJ (@,*pVCHNMu;w8c3v4v؅P1.xÏJ!B_UEi{AS%DpAQ/> o 1Q1yLN2FTpW8MkQZQYǦfd/yˁS(A|ޓF"2M^]},Aq͜1euTUg8|6PnKo5_"E7H1k@fÅf Θր" Q,fTl:4kr1 1 ďٙw9g2^FPt9~N7܂^}E 45g BU‚f޽ʘ+ ]0Ba1q 3Y&"Ž`c7 Lpole׏W v'Ů jpޙ;jԸ5jr Ո 0_ .5҃q( ?ZJz! I?Z[[k~8gwxy.Gq@¬qޝK*PW8:>ƗʊӦNƧ'+):ƸLt 9ٯƅƬw~;kPEX=)(`梭Ƶ5L_M![! 
{Ac6AP{Cқg5(@m9|pMQ:ƙn*z Cr]xN0{NJEMԞƸƒV$(N\Qq`jb 7`s>ܥӿ̏ȸw R]0FB"\*৶?)D``2  = XBvmaZ;p@ЫW* bky^ S#£"'?B6gL1.X@ zn7(!UO=I+sL]H`RujTYWoZ8AB 5<%4TVU|5);5ǎt^|xNv9V=<Ba r*4j#c00)?]ppA|۝xm񋐹 ,} /+903SW}gcsFe,o]p9:ufn͢EG=:s:ZJQݭXvm73H.h 6|ۘ1px, 7@u7|r`a *z8:piSϝk[l蕔0dGp>¢=L8QpĤ4: qxp8%]pb#o8|2h& HNFJzjCo3=s%\yN^zTu?m.ơe׸ Ȝo45U¶+j)La$_s (!}*8(D@Tr~9woBLz3Mŷᅪ7'AP = zCgOHץb $=h0q( \p~~PPvQ㥗.<3Za"DCH@{{pTWD*2݅[߉\cfF7?D"]^!~tT|8 rS'@tv,B8ؼz2G D^5t  ^xIv<8"eu55EmX~!YcE ( ˎ :TxEVq$^pRmǤ'#D>ވŏ>^~Ʀ=H6ם) k _rx<.HV8#`7ݾ*XIT;åίNBN#8i`?0T;Z >wo}N\e TN&O` P0?֊爍1z#[[[=$QR': F@VvfRSS)S***+x/Zܷ2l$nw7xU]{93JtǙܽƊ ޭw_G ̻~Ermv3l11\8=e*v:pZ*.E`Z**t:ɟ@^h4h4r*1jfMNOo8[Z3mZ-Ϛ}9Ο:"k 0{ޅ$^G@C࿜tvPqެ;o3I-(jڲ[֮kˆuභo_T/<~nշzW)ʝ=uEΔgDQ Jv;򷍪*|Z#h# 90 2;6xd8x2~ GEtuNcm'DcG@Z=V\vfZ aڢ8@z'E7 @QU ΂ ugA:4U_jl BEQ4A/ Y̵`ؙ;=Ca[2ҶPBdN2jDh<ܵI Z(}a$?t #UГ{EDҨwS("( I>ۃ$g$=f_u-ֶ݉&9[[Z9f6X#c G3ɤgWV1kqW@RR\RQq|erRou[EE?ȑM\U#8Ǟy)iP@"{{0hpGQS "l]ǚkq]rwT\ AoF:P/ <^rO?Sg\o^.BD(72g!3cl]j$S=?Eȝy)(.T۰qsZsשw-d־PdR@0PY'CÜpGYcF]J Ig@ƠCQ} \ :S%WtkdacO/a`s1Z[]Ų6"FZZnAޝPS} FY)H)W>xɨi#ȓ7Ut({5B*HAwUpY@2pϣOc黟a%peڹ":"7Qk)֯]-z Fas0bxTY=uZ_Bm Jh rp-{rǶMmmMK!4hQv_N( "<"Ն?'nC߿d*x0'mPu'ŹS>֠|ypoG$b2? kx:EVm3QUEV$W& c+Dl\<(b*;BzQv DI$`I DĪvNVw^:[npUEݍPsg IDAT 6iUiE0o+#3ҝ۶LjEVBrM bŧ9&9eQύ71&*7µ"HR<2]w뉪ii)WV|5}4fisw~!|غ}Wo8nLkOV>:it hx6}V^&F-;c N̸r"#(j`ASyIGqhS+vk߾tպҕ_S'(H4Yt"*: f]t9vn݈͛Deca j88TOC t D@5 G#1d0D&i=lٸFbvq`4!P!  J0TU >|_5֝z!.?%"h9!B_DcؼǚC;':T3pN;(+/Ezj&EDࢋ.6=pucӟ>oʅd&E?rPI3g!_ ?0>er|Xogzq V/-|>Y-5VHZct7ǏҢU׈6lL6jm44@fxwN( .tC߾_{ybRߍacӼ=Q)"K)pסeeߟSUp;Z@Q h@xxATt, 4F37.eT$uzl68].#=juY/j4̎?T0Xb PAk+Idz=:"UBvP~]'kӲx\t!AbHLLrD2q V5b@tT샻:{˪CyNhS/DиbF-qfť{䉒GcccY}}޺auw~1p}orߤ7M3y>L+T+A5q"Gme ^h췘g@,XV{{9 K.p4Xrze+"K '']UAjF0(B$pB!TXN h=74!Lf3Q]]ȸޝ$ǐӆ#YQOPsp"3@XJ!mkGIp.>Gy%W8WA֡D̀ y174Y\p{sm6zl 0Y¡R5Ik dXҲ]7xǒ>Pu/&æM0q~6`")׀^J 1N /Q7'*cV>YpQT@n2[H"OSAq;p(6oLvA"TXg9z,!7y&ݿ]߇LC NIMI<]p plov1LHNpssP1f9Sp Q3(ןF`˖-<=kiޟk{{r(!Cb\F/]pTp`"@ᶵKkKA[K e01Y `Ǿ]Px`o!KN H.pPFz)L':T@8[k[ ŵcIZ B(F eDZwp'Jן">/F3 px2d"cHd cj{攕357ȑ8g -kQ͎y$[[[g>lp|G N?R΀ov/DS Koq(/? ΂CVsݸ xAop;lChmiAVf&& 4e`Dge(W|V/D.>z' QN`c 5J3. ꂜV)?zd}F)}44b=2$/D tFrl߸ΖE55'MDb*{*  D(EJ 7 3`AdУ C0xDɠт5=x4N|rdb+v9_7j[[>CGW_͕D*BOjaqa´Y<6Y>WȲ|mbdK[4x~!!^ņ~;ovL Dxu NM-DctOO>Ar|ru4hmm#""^LOO:eAoinBmMmħ͡>83RW27!5gg্t:dt`ͪ>;.W ĨQf (T$@!acU7߉a)]*S``3asWTVVwb||hXt q111wECmkjlwK %}Ï/Ɠ_GƠQ0m\(xgn0b~޽~94i6nX3kj=Z!p7bQDgjWa̞:8V+Vmq(?[GFO 8qF(**KUU';-LXE! W/pEEE SUPe2(:3 BA=\n-SV#I"/"8#ULƑmXǴ?-$^.༩ӱpĉ&f9U.NGQ {<ЅXQÑ1*STE-G p9XQڽ& {ȑ#dggnٸq/***/N񦆆v o߁0uEݍǡr I"h6w$$$b]Mk/ic;VX͇4BˋC Q spaam$`Cf k=Z+{B2f`嗟ak-U`0cgRp N:lFQ(-)AUe9ftrFPA˯ pi[@C:Qi'N*dD}ז - 0\I䔔EEE`11h L4 /펫m dц;60-At|oDEGWX &@ a@]mZ \@2`B덟[,t;v2,"q{@u"T $J藖]u`n7db%fUpifLI\j)g>x}xa57^y*w$gU2`G2dHyTUELl FjjjF@$ (Uv @:z(sۛ߃)nX,at󊊊[@1ŗ/` %FR[sYMJTU\RqDQ۠IC'0 95>[B:o1cl2e +lmmp0D1(2; jG#³A 2Ӫd"rCh8*+ 5SN]ۯqM֎</qur0T} tw:.s45P0.<, @mM (8(< 0j Q! 
҉(E{[+mق{0L0EINb߾999(**@~\Bk=W5^x-U( 0M'!ZZv[^̙3F566Nx<t9@za0FrjAXXXl^SaEGE+4xUx^߿rB'Q*W@ ex!R3xa8I2J)Z^L?`'EŋQSS7&55ukuuy;m`Ҍ+p7B%x )"CNrmߤ7yɲNG HDASz#& ~*n f,ϩ&C7N'>}+[  mW807nu٥s:.ʉ)p*˲;[nvۻw=z{<zEPQcbbwW_m.^( KƤ>Csc dEAA\=rkr`fqbW%$ĚíÅZxiBjjCϵao1y_0i4$M'dr]af%x<P(` ,*&H&kOA' נC%"2JsOhimH)BzÖƥeee5'N@]]zͳb ~ZNvVTuMF S1v8q xBEY1Aue)hka;G-MHHXSO==Z0U Z6;'IHhr`j RS!!&Kǒ9CZ Mᑊ?\xluO?~xTTT|QQvbh ~6h%+]YډZ>qV goW!w\JM"mDZu.r ɩ1f<.s#Z[[ӏ+ihjl444w8_P4lmZ Օe7x,!*D49dF[1h(ض,|$f}u۷cƌXjUe_Urذ'86ۉ{ "Q0[‘>GxX8$0ŋM?|FTIICDDtW9n \Ր!!VTp5S9j*n(ڷ%%\p\ߍعs*@́}кC7V\z5Й,PTK tzD⺛E† +OPu"ܣk5JK SSSQ^^_jEAI`tmO2 wtz#<.'^oوc0-{#"z/\C;6}|?ZZ%&&"Ku5C{3cƌY6m:| 5FiI D_} U!;q*$Iw:cйWᰡOr*QHW[Q_]-*¡px$C$Gm&)[O& fgW/aԀ;>g^ Ea>Ϻn1/\Cc:66_ F^*kkp,+K[[[NL%osdU9ӍW@sc5&s}%̑\{ (q|W^SSJK~-[&}6֎ z%EJIwzퟍ:Sm ոYHF+PZVacFRfs0z2)!.O/u,n7օK.K5wY:71c G ck/~:PA_u^ēkܴ%Ņ,)eZL`&DYQJ ć$E渄5e%Ցp\?;ΊӾwXnjjr475u3[nI;%g=wI'b(9﩯iC[lkkks45w9p?~Ł>h޲2vz`DptWH2Ŋ˯_^}nUUU FFF'Mu-I Z1nzdl fiT"]z"5F;J5jp)3 ,xi@$%%]_=7wP/ s[Zw Mnt,R[w-V5LF#SOxnpw$< r5εpI hEͩpmKl!>w%TPxU.'_&2z(h6\&$k3gkqEL3fa4[0lx"ڃ&B At:H][7"N"Rϱ/3ĨNG\P[C]B7 }B/,H Lڸ+w'|իߺ~]iݩS6A+b!NU>趦CH QA5\ a̦1V@Kmm\GL5\f9@7 ,|IP9 8^PU&MNo@|Z&iiTdefQ454<*]Tޝ+d'@L@A@vwD6wp0*:N$,I@ ${:Kw'NwWչ?; 茌~OUWթη9\ѨK7 MĤdZYYٽ Ƞ[|v/1 lm[F\Դj{(Ч^:Z:0zvEY#I`'梵C )EEEYţģN)cRs^;0B!:;5غy#;l JJ aUtZۭE1tCG(j DF3x0y/&aR7WoC (f,Y8޻n3W _tnᥘ;oxIچ-VTV~=8Uf[n'7\7?y Z6=/ɢҔnQT@jJkDɓyOP(?NJ)$IhDQ<(+iNJIr]DFW,w*QmmComX -[+*֝p7M *Jov>~EpFG$z$*$$ ðQi2}"BɼSZ'r䗖>㇧[Zy"~(Bx"""BjH6נ c.lOEeeq6HNI5/)?vgofdoDd݂<sB<Q1Xx^TW#s﷨*-FsS#,-;2x&IU@T!*:c1qUHNNru}UMc *Z&c,J>92p,5 ^YW0 E MK!pը./$8VH$}@0|߬e'|~n΃0YC"Gm(9eߴq\d\M~w8zk7U`ɢ4"44,o7ߴ w- 5GRGO+ypFz)2N޷@&~L5( mphmkqc0AՃWijT֋{WFοHL(܀MB\ =`6w2kV=8?3*-- F dbo/38NTWbώ-Xz6S7ub㦏Q&%IꑢT^TI'i?xOOH0a@7." gmݝig669 8~$kťQP[37TVjͬ$6gO')ǖeCN'Tj Gz=:gL>]ջOd#9w'2B}ZKkN)uZ՗7EИx0 _*< kVb]!#1|ѓH 1,@@nųصC7"$:~2r.ǎsYYw~j+[F!/@VnNNŸ˵-n.R {2[1."FS]%FBRR{3XG{GaHpЛ;%+~)6}۳@ԻDpH ()P(T: nmـ[֡!!! wao#u`/]jConYVXmX1f cǎ5izHD$γ`#X}$ŷ%\?t#Rw qq6x];D *0DԳ00DDI5/G.hP&A"}OO?^#333.c,uc~xT TQ2qoYdjJPb= u!$"zn6s#vc ?^\=ȕW^.N7fIh vR7CDd""qh~SaqqfϞ.Vd5\,=ԩܨHByۥQ+zS}܁^}p`$7V{Z)Ó'׌5 >C.}p 4n$Db`yZ81g#vPs4~n Uŧ;+흝ݥ*gHII DQDIyg+y$K& 8\S.1DͤV_A_<$]gJDmOl/Ð$LQSOMV>;re'AEہ׮@$,-(Ï`g IКͭϹܩ1)A͊ eX$x8y0N΃@ rBrTaM:]kbLL^sK7^}# \4+==999;:uEֈZӯ[pL[? 00pD7y`!K ?-fp6uc.݁ШX%@d!CIFGe =EU',QA7HJPP8[s#֪7CBB?vCzIj.}$DեP "yҞ\D@{=L`P0 w?-Q (VUAd^g"U(^r6i5V5 s?6n\Pln>71Fy9_4ي߇a]bW_[5N]N\R\rp?U%P(xtP(P(T0r}7=+^B (vw`X(8} %?evbZ͏9ݮ!xuԣGjl%"pI\p*$t^bR}$Ϝ)nU3(kU~K@~fd|̙YY'㊚keeEIQF p.G`30 ^N+NSyL6MKSUeEaٙ{phϷ(/>`j*MWOގ3žo V߮鞕M;(4-5ߤGx'oC &N0a.>BAhGDr  VΖqQBx`OC͆֡iia`;nCBbTۻɉm̭:c A{{;N8{ ~Kamv[$Q!AAOArr20(:AaP`

[... binary PNG image data omitted ...]
compose-1.8.0/project/000077500000000000000000000000001274620702700146425ustar00rootroot00000000000000compose-1.8.0/project/ISSUE-TRIAGE.md000066400000000000000000000017371274620702700170350ustar00rootroot00000000000000
Triaging of issues
------------------

The docker-compose issue triage process follows
https://github.com/docker/docker/blob/master/project/ISSUE-TRIAGE.md
with the following additions or exceptions.

### Classify the Issue

The following labels are provided in addition to the standard labels:

| Kind         | Description                                                        |
|--------------|--------------------------------------------------------------------|
| kind/cleanup | A refactor or improvement that is related to quality, not function |
| kind/parity  | A request for feature parity with the docker cli                   |

### Functional areas

Most issues should fit into one of the following functional areas:

| Area            |
|-----------------|
| area/build      |
| area/cli        |
| area/config     |
| area/logs       |
| area/networking |
| area/packaging  |
| area/run        |
| area/scale      |
| area/tests      |
| area/up         |
| area/volumes    |

compose-1.8.0/project/RELEASE-PROCESS.md000066400000000000000000000103761274620702700173670ustar00rootroot00000000000000
Building a Compose release
==========================

## Prerequisites

The release scripts require the following tools installed on the host:

* https://hub.github.com/
* https://stedolan.github.io/jq/
* http://pandoc.org/

## To get started with a new release

Create a branch, update the version, and add release notes by running
`make-branch`:

    ./script/release/make-branch $VERSION [$BASE_VERSION]

`$BASE_VERSION` will default to master. Use the last version tag for a bug fix
release.
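For example, a hypothetical 1.8.1 bug-fix release branched from the 1.8.0 tag
would be started with:

    ./script/release/make-branch 1.8.1 1.8.0

(The version numbers above are only an illustration, not a planned release.)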
As part of this script you'll be asked to:

1. Update the version in `docs/install.md` and `compose/__init__.py`.

   If the next release will be an RC, append `rcN`, e.g. `1.4.0rc1`.

2. Write release notes in `CHANGELOG.md`.

   Almost every feature enhancement should be mentioned, with the most
   visible/exciting ones first. Use descriptive sentences and give context
   where appropriate.

   Bug fixes are worth mentioning if it's likely that they've affected lots
   of people, or if they were regressions in the previous version.

   Improvements to the code are not worth mentioning.

## When a PR is merged into master that we want in the release

1. Check out the bump branch and run the cherry pick script

       git checkout bump-$VERSION
       ./script/release/cherry-pick-pr $PR_NUMBER

2. When you are done cherry-picking branches, move the bump version commit to HEAD

       ./script/release/rebase-bump-commit
       git push --force $USERNAME bump-$VERSION

## To release a version (whether RC or stable)

Check out the bump branch and run the `build-binaries` script

    git checkout bump-$VERSION
    ./script/release/build-binaries

When prompted, build the non-linux binaries and test them.

1. Download the osx binary from Bintray. Make sure that the latest build has
   finished, otherwise you'll be downloading an old binary.

   https://dl.bintray.com/docker-compose/$BRANCH_NAME/

2. Download the windows binary from AppVeyor

   https://ci.appveyor.com/project/docker/compose

3. Draft a release from the tag on GitHub (the script will open the window
   for you)

   In the "Tag version" dropdown, select the tag you just pushed.

4. Paste in installation instructions and release notes. Here's an example -
   change the Compose version and Docker version as appropriate:

       Firstly, note that Compose 1.5.0 requires Docker 1.8.0 or later.

       Secondly, if you're a Mac user, the **[Docker Toolbox](https://www.docker.com/toolbox)**
       will install Compose 1.5.0 for you, alongside the latest versions of
       the Docker Engine, Machine and Kitematic.

       Otherwise, you can use the usual commands to install/upgrade. Either
       download the binary:

           curl -L https://github.com/docker/compose/releases/download/1.5.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
           chmod +x /usr/local/bin/docker-compose

       Or install the PyPI package:

           pip install -U docker-compose==1.5.0

       Here's what's new:

       ...release notes go here...

5. Attach the binaries and `script/run/run.sh`

6. Add "Thanks" with a list of contributors. The contributor list can be
   generated by running `./script/release/contributors`.

7. If everything looks good, it's time to push the release.

       ./script/release/push-release

8. Publish the release on GitHub.

9. Check that all the binaries download (following the install instructions)
   and run.

10. Email maintainers@dockerproject.org and engineering@docker.com about the
    new release.

## If it’s a stable release (not an RC)

1. Merge the bump PR.

2. Make sure `origin/release` is updated locally:

       git fetch origin

3. Update the `docs` branch on the upstream repo:

       git push git@github.com:docker/compose.git origin/release:docs

4. Let the docs team know that it’s been updated so they can publish it.

5. Close the release’s milestone.

## If it’s a minor release (1.x.0), rather than a patch release (1.x.y)

1. Open a PR against `master` to:

   - update `CHANGELOG.md` to bring it in line with `release`
   - bump the version in `compose/__init__.py` to the *next* minor version
     number with `dev` appended. For example, if you just released `1.4.0`,
     update it to `1.5.0dev`.

2. Get the PR merged.

## Finally

1. Celebrate, however you’d like.

compose-1.8.0/requirements-build.txt000066400000000000000000000000231274620702700175500ustar00rootroot00000000000000
pyinstaller==3.1.1

compose-1.8.0/requirements-dev.txt000066400000000000000000000000741274620702700172350ustar00rootroot00000000000000
coverage==3.7.1
mock>=1.0.1
pytest==2.7.2
pytest-cov==2.1.0

compose-1.8.0/requirements.txt000066400000000000000000000005201274620702700164570ustar00rootroot00000000000000
PyYAML==3.11
backports.ssl-match-hostname==3.5.0.1; python_version < '3'
cached-property==1.2.0
docker-py==1.9.0
dockerpty==0.4.1
docopt==0.6.1
enum34==1.0.4; python_version < '3.4'
functools32==3.2.3.post2; python_version < '3.2'
ipaddress==1.0.16
jsonschema==2.5.1
requests==2.7.0
six==1.7.3
texttable==0.8.4
websocket-client==0.32.0

compose-1.8.0/script/000077500000000000000000000000001274620702700145005ustar00rootroot00000000000000compose-1.8.0/script/build/000077500000000000000000000000001274620702700155775ustar00rootroot00000000000000compose-1.8.0/script/build/image000077500000000000000000000005161274620702700166110ustar00rootroot00000000000000
#!/bin/bash
set -e

if [ -z "$1" ]; then
    >&2 echo "First argument must be image tag."
    exit 1
fi
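# Example usage (hypothetical tag; any tag accepted by 'docker build -t' works):
#
#   ./script/build/image 1.8.0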
TAG=$1
VERSION="$(python setup.py --version)"

./script/build/write-git-sha
python setup.py sdist
cp dist/docker-compose-$VERSION.tar.gz dist/docker-compose-release.tar.gz
docker build -t docker/compose:$TAG -f Dockerfile.run .

compose-1.8.0/script/build/linux000077500000000000000000000003621274620702700166650ustar00rootroot00000000000000
#!/bin/bash
set -ex

./script/clean

TAG="docker-compose"
docker build -t "$TAG" . | tail -n 200
docker run \
    --rm --entrypoint="script/build/linux-entrypoint" \
    -v $(pwd)/dist:/code/dist \
    -v $(pwd)/.git:/code/.git \
    "$TAG"

compose-1.8.0/script/build/linux-entrypoint000077500000000000000000000004771274620702700211050ustar00rootroot00000000000000
#!/bin/bash
set -ex

TARGET=dist/docker-compose-$(uname -s)-$(uname -m)
VENV=/code/.tox/py27

mkdir -p `pwd`/dist
chmod 777 `pwd`/dist

$VENV/bin/pip install -q -r requirements-build.txt
./script/build/write-git-sha
su -c "$VENV/bin/pyinstaller docker-compose.spec" user
mv dist/docker-compose $TARGET
$TARGET version

compose-1.8.0/script/build/osx000077500000000000000000000006121274620702700163350ustar00rootroot00000000000000
#!/bin/bash
set -ex

PATH="/usr/local/bin:$PATH"

rm -rf venv

virtualenv -p /usr/local/bin/python venv
venv/bin/pip install -r requirements.txt
venv/bin/pip install -r requirements-build.txt
venv/bin/pip install --no-deps .
./script/build/write-git-sha
venv/bin/pyinstaller docker-compose.spec
mv dist/docker-compose dist/docker-compose-Darwin-x86_64
dist/docker-compose-Darwin-x86_64 version

compose-1.8.0/script/build/windows.ps1000066400000000000000000000032531274620702700177210ustar00rootroot00000000000000
# Builds the Windows binary.
#
# From a fresh 64-bit Windows 10 install, prepare the system as follows:
#
# 1. Install Git:
#
#    http://git-scm.com/download/win
#
# 2. Install Python 2.7.10:
#
#    https://www.python.org/downloads/
#
# 3. Append ";C:\Python27;C:\Python27\Scripts" to the "Path" environment variable:
#
#    https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx?mfr=true
#
# 4. In Powershell, run the following commands:
#
#    $ pip install virtualenv
#    $ Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
#
# 5. Clone the repository:
#
#    $ git clone https://github.com/docker/compose.git
#    $ cd compose
#
# 6. Build the binary:
#
#    .\script\build\windows.ps1

$ErrorActionPreference = "Stop"

# Remove virtualenv
if (Test-Path venv) {
    Remove-Item -Recurse -Force .\venv
}

# Remove .pyc files
Get-ChildItem -Recurse -Include *.pyc | foreach ($_) { Remove-Item $_.FullName }

# Create virtualenv
virtualenv .\venv

# pip and pyinstaller generate lots of warnings, so we need to ignore them
$ErrorActionPreference = "Continue"

# Install dependencies
.\venv\Scripts\pip install pypiwin32==219
.\venv\Scripts\pip install -r requirements.txt
.\venv\Scripts\pip install --no-deps .
.\venv\Scripts\pip install --allow-external pyinstaller -r requirements-build.txt

git rev-parse --short HEAD | out-file -encoding ASCII compose\GITSHA

# Build binary
.\venv\Scripts\pyinstaller .\docker-compose.spec
$ErrorActionPreference = "Stop"
Move-Item -Force .\dist\docker-compose.exe .\dist\docker-compose-Windows-x86_64.exe
.\dist\docker-compose-Windows-x86_64.exe --version

compose-1.8.0/script/build/write-git-sha000077500000000000000000000003251274620702700202110ustar00rootroot00000000000000
#!/bin/bash
#
# Write the current commit sha to the file GITSHA. This file is included in
# packaging so that `docker-compose version` can include the git sha.
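#
# For example (hypothetical sha): after a build from commit abc1234, the file
# compose/GITSHA contains the single line "abc1234".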
#
set -e
git rev-parse --short HEAD > compose/GITSHA

compose-1.8.0/script/ci000077500000000000000000000002761274620702700150260ustar00rootroot00000000000000
#!/bin/bash
#
# Backwards compatibility for jenkins
#
# TODO: remove this script after all current PRs and jenkins are updated with
# the new script/test/ci change
set -e
exec script/test/ci

compose-1.8.0/script/clean000077500000000000000000000002561274620702700155130ustar00rootroot00000000000000
#!/bin/sh
set -e

find . -type f -name '*.pyc' -delete
find . -name .coverage.* -delete
find . -name __pycache__ -delete
rm -rf docs/_site build dist docker-compose.egg-info

compose-1.8.0/script/release/000077500000000000000000000000001274620702700161205ustar00rootroot00000000000000compose-1.8.0/script/release/build-binaries000077500000000000000000000016061274620702700207420ustar00rootroot00000000000000
#!/bin/bash
#
# Build the release binaries
#

. "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

function usage() {
    >&2 cat << EOM
Build binaries for the release.

This script requires that 'git config branch.${BRANCH}.release' is set to the
release version for the release branch.

EOM
    exit 1
}

BRANCH="$(git rev-parse --abbrev-ref HEAD)"
VERSION="$(git config "branch.${BRANCH}.release")" || usage

REPO=docker/compose

# Build the binaries
script/clean
script/build/linux

echo "Building the container distribution"
script/build/image $VERSION

echo "Create a github release"
# TODO: script more of this https://developer.github.com/v3/repos/releases/
browser https://github.com/$REPO/releases/new

echo "Don't forget to download the osx and windows binaries from appveyor/bintray\!"
echo "https://dl.bintray.com/docker-compose/$BRANCH/"
echo "https://ci.appveyor.com/project/docker/compose"
echo

compose-1.8.0/script/release/cherry-pick-pr000077500000000000000000000010321274620702700207010ustar00rootroot00000000000000
#!/bin/bash
#
# Cherry-pick a PR into the release branch
#

set -e
set -o pipefail

function usage() {
    >&2 cat << EOM
Cherry-pick commits from a github pull request.

Usage: $0 <pr number>
EOM
    exit 1
}

[ -n "$1" ] || usage

if [ -z "$(command -v hub 2> /dev/null)" ]; then
    >&2 echo "$0 requires https://hub.github.com/."
    >&2 echo "Please install it and make sure it is available on your \$PATH."
    exit 2
fi

REPO=docker/compose
GITHUB=https://github.com/$REPO/pull

PR=$1
url="$GITHUB/$PR"

hub am -3 $url

compose-1.8.0/script/release/contributors000077500000000000000000000010141274620702700205770ustar00rootroot00000000000000
#!/bin/bash
set -e

function usage() {
    >&2 cat << EOM
Print the list of github contributors for the release

Usage: $0 <previous release tag>
EOM
    exit 1
}

[[ -n "$1" ]] || usage

PREV_RELEASE=$1
VERSION=HEAD
URL="https://api.github.com/repos/docker/compose/compare"

contribs=$(curl -sf "$URL/$PREV_RELEASE...$VERSION" | \
    jq -r '.commits[].author.login' | \
    sort | \
    uniq -c | \
    sort -nr)

echo "Contributions by user: "
echo "$contribs"
echo
echo "$contribs" | awk '{print "@"$2","}' | xargs
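
# Example (hypothetical output) for './script/release/contributors 1.7.1':
#
#   Contributions by user:
#        12 someuser
#         3 otheruser
#
#   @someuser, @otheruser,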

compose-1.8.0/script/release/make-branch000077500000000000000000000040331274620702700202160ustar00rootroot00000000000000
#!/bin/bash
#
# Prepare a new release branch
#

. "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

function usage() {
    >&2 cat << EOM
Create a new release branch 'release-<version>'

Usage: $0 <version> [<base_version>]

Options:

    version        version string for the release (ex: 1.6.0)
    base_version   branch or tag to start from. Defaults to master. For
                   bug-fix releases use the previous stable release tag.

EOM
    exit 1
}

[ -n "$1" ] || usage
VERSION=$1
BRANCH=bump-$VERSION
REPO=docker/compose
GITHUB_REPO=git@github.com:$REPO

if [ -z "$2" ]; then
    BASE_VERSION="master"
else
    BASE_VERSION=$2
fi

DEFAULT_REMOTE=release
REMOTE="$(find_remote "$GITHUB_REPO")"
# If we don't have a docker remote add one
if [ -z "$REMOTE" ]; then
    echo "Creating $DEFAULT_REMOTE remote"
    git remote add ${DEFAULT_REMOTE} ${GITHUB_REPO}
fi

# handle the difference between a branch and a tag
if [ -z "$(git name-rev --tags $BASE_VERSION | grep tags)" ]; then
    BASE_VERSION=$REMOTE/$BASE_VERSION
fi

echo "Creating a release branch $VERSION from $BASE_VERSION"
read -n1 -r -p "Continue? (ctrl+c to cancel)"
git fetch $REMOTE -p
git checkout -b $BRANCH $BASE_VERSION

echo "Merging remote release branch into new release branch"
git merge --strategy=ours --no-edit $REMOTE/release

# Store the release version for this branch in git, so that other release
# scripts can use it
git config "branch.${BRANCH}.release" $VERSION

editor=${EDITOR:-vim}

echo "Update versions in docs/install.md, compose/__init__.py, script/run/run.sh"
$editor docs/install.md
$editor compose/__init__.py
$editor script/run/run.sh

echo "Write release notes in CHANGELOG.md"
browser "https://github.com/docker/compose/issues?q=milestone%3A$VERSION+is%3Aclosed"
$editor CHANGELOG.md

git diff
echo "Verify changes before commit. Exit the shell to commit changes"
$SHELL || true

git commit -a -m "Bump $VERSION" --signoff --no-verify

echo "Push branch to docker remote"
git push $REMOTE
browser https://github.com/$REPO/compare/docker:release...$BRANCH?expand=1

compose-1.8.0/script/release/push-release000077500000000000000000000041321274620702700204430ustar00rootroot00000000000000
#!/bin/bash
#
# Create the official release
#

. "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

function usage() {
    >&2 cat << EOM
Publish a release by building all artifacts and pushing them.

This script requires that 'git config branch.${BRANCH}.release' is set to the
release version for the release branch.

EOM
    exit 1
}

BRANCH="$(git rev-parse --abbrev-ref HEAD)"
VERSION="$(git config "branch.${BRANCH}.release")" || usage

if [ -z "$(command -v jq 2> /dev/null)" ]; then
    >&2 echo "$0 requires https://stedolan.github.io/jq/"
    >&2 echo "Please install it and make sure it is available on your \$PATH."
    exit 2
fi

if [ -z "$(command -v pandoc 2> /dev/null)" ]; then
    >&2 echo "$0 requires http://pandoc.org/"
    >&2 echo "Please install it and make sure it is available on your \$PATH."
    exit 2
fi

API=https://api.github.com/repos
REPO=docker/compose
GITHUB_REPO=git@github.com:$REPO

# Check the build status is green
sha=$(git rev-parse HEAD)
url=$API/$REPO/statuses/$sha
build_status=$(curl -s $url | jq -r '.[0].state')
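# Note: the statuses endpoint returns a JSON array of status objects, newest
# first, so '.[0].state' above reads the most recent state. A (hypothetical)
# truncated response looks like: [{"state": "success", ...}, ...]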
if [ -n "$SKIP_BUILD_CHECK" ]; then
    echo "Skipping build status check..."
elif [[ "$build_status" != "success" ]]; then
    >&2 echo "Build status is $build_status, but it should be success."
    exit -1
fi

echo "Tagging the release as $VERSION"
git tag $VERSION
git push $GITHUB_REPO $VERSION

echo "Uploading the docker image"
docker push docker/compose:$VERSION

echo "Uploading sdist to pypi"
pandoc -f markdown -t rst README.md -o README.rst
sed -i -e 's/logo.png?raw=true/https:\/\/github.com\/docker\/compose\/raw\/master\/logo.png?raw=true/' README.rst
./script/build/write-git-sha
python setup.py sdist
if [ "$(command -v twine 2> /dev/null)" ]; then
    twine upload ./dist/docker-compose-${VERSION/-/}.tar.gz
else
    python setup.py upload
fi

echo "Testing pip package"
virtualenv venv-test
source venv-test/bin/activate
pip install docker-compose==$VERSION
docker-compose version
deactivate
rm -rf venv-test

echo "Now publish the github release, and test the downloads."
echo "Email maintainers@dockerproject.org and engineering@docker.com about the new release."

compose-1.8.0/script/release/rebase-bump-commit000077500000000000000000000015661274620702700215440ustar00rootroot00000000000000
#!/bin/bash
#
# Move the "bump to <version>" commit to the HEAD of the branch
#

. "$(dirname "${BASH_SOURCE[0]}")/utils.sh"

function usage() {
    >&2 cat << EOM
Move the "bump to <version>" commit to the HEAD of the branch

This script requires that 'git config branch.${BRANCH}.release' is set to the
release version for the release branch.

EOM
    exit 1
}

BRANCH="$(git rev-parse --abbrev-ref HEAD)"
VERSION="$(git config "branch.${BRANCH}.release")" || usage

COMMIT_MSG="Bump $VERSION"
sha="$(git log --grep "$COMMIT_MSG\$" --format="%H")"
if [ -z "$sha" ]; then
    >&2 echo "No commit with message \"$COMMIT_MSG\""
    exit 2
fi
if [[ "$sha" == "$(git rev-parse HEAD)" ]]; then
    >&2 echo "Bump commit already at HEAD"
    exit 0
fi

commits=$(git log --format="%H" "$sha..HEAD" | wc -l | xargs echo)

git rebase --onto $sha~1 HEAD~$commits $BRANCH
git cherry-pick $sha

compose-1.8.0/script/release/utils.sh000066400000000000000000000006241274620702700176160ustar00rootroot00000000000000
#!/bin/bash
#
# Util functions for release scripts
#

set -e
set -o pipefail

function browser() {
    local url=$1
    xdg-open $url || open $url
}

function find_remote() {
    local url=$1
    for remote in $(git remote); do
        git config --get remote.${remote}.url | grep $url > /dev/null && echo -n $remote
    done
    # Always return true, extra remotes cause it to return false
    true
}
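
# Example (hypothetical remotes): if 'upstream' points at
# git@github.com:docker/compose, then 'find_remote git@github.com:docker/compose'
# prints "upstream", and the function always exits 0.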

compose-1.8.0/script/run/000077500000000000000000000000001274620702700153045ustar00rootroot00000000000000compose-1.8.0/script/run/run.ps1000066400000000000000000000015751274620702700165430ustar00rootroot00000000000000
# Run docker-compose in a container via boot2docker.
#
# The current directory will be mirrored as a volume and additional
# volumes (or any other options) can be mounted by using
# $Env:DOCKER_COMPOSE_OPTIONS.

if ($Env:DOCKER_COMPOSE_VERSION -eq $null -or $Env:DOCKER_COMPOSE_VERSION.Length -eq 0) {
    $Env:DOCKER_COMPOSE_VERSION = "latest"
}

if ($Env:DOCKER_COMPOSE_OPTIONS -eq $null) {
    $Env:DOCKER_COMPOSE_OPTIONS = ""
}

if (-not $Env:DOCKER_HOST) {
    docker-machine env --shell=powershell default | Invoke-Expression
    if (-not $?) { exit $LastExitCode }
}

$local="/$($PWD -replace '^(.):(.*)$', '"$1".ToLower()+"$2".Replace("\","/")' | Invoke-Expression)"
docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -v "${local}:$local" -w "$local" $Env:DOCKER_COMPOSE_OPTIONS "docker/compose:$Env:DOCKER_COMPOSE_VERSION" $args
exit $LastExitCode

compose-1.8.0/script/run/run.sh000077500000000000000000000025701274620702700164520ustar00rootroot00000000000000
#!/bin/bash
#
# Run docker-compose in a container
#
# This script will attempt to mirror the host paths by using volumes for the
# following paths:
#   * $(pwd)
#   * $(dirname $COMPOSE_FILE) if it's set
#   * $HOME if it's set
#
# You can add additional volumes (or any docker run options) using
# the $COMPOSE_OPTIONS environment variable.
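#
# For example (hypothetical values):
#
#   COMPOSE_OPTIONS="-e HTTP_PROXY=$HTTP_PROXY" ./script/run/run.sh up -d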
#

set -e

VERSION="1.8.0"
IMAGE="docker/compose:$VERSION"

# Setup options for connecting to docker host
if [ -z "$DOCKER_HOST" ]; then
    DOCKER_HOST="/var/run/docker.sock"
fi
if [ -S "$DOCKER_HOST" ]; then
    DOCKER_ADDR="-v $DOCKER_HOST:$DOCKER_HOST -e DOCKER_HOST"
else
    DOCKER_ADDR="-e DOCKER_HOST -e DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH"
fi

# Setup volume mounts for compose config and context
if [ "$(pwd)" != '/' ]; then
    VOLUMES="-v $(pwd):$(pwd)"
fi
if [ -n "$COMPOSE_FILE" ]; then
    compose_dir=$(dirname $COMPOSE_FILE)
fi
# TODO: also check --file argument
if [ -n "$compose_dir" ]; then
    VOLUMES="$VOLUMES -v $compose_dir:$compose_dir"
fi
if [ -n "$HOME" ]; then
    VOLUMES="$VOLUMES -v $HOME:$HOME -v $HOME:/root" # mount $HOME in /root to share docker.config
fi

# Only allocate tty if we detect one
if [ -t 1 ]; then
    DOCKER_RUN_OPTIONS="-t"
fi
if [ -t 0 ]; then
    DOCKER_RUN_OPTIONS="$DOCKER_RUN_OPTIONS -i"
fi

exec docker run --rm $DOCKER_RUN_OPTIONS $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w "$(pwd)" $IMAGE "$@"

compose-1.8.0/script/setup/000077500000000000000000000000001274620702700156405ustar00rootroot00000000000000compose-1.8.0/script/setup/osx000077500000000000000000000024031274620702700163760ustar00rootroot00000000000000
#!/bin/bash

set -ex

python_version() {
  python -V 2>&1
}

openssl_version() {
  python -c "import ssl; print ssl.OPENSSL_VERSION"
}

desired_python_version="2.7.9"
desired_python_brew_version="2.7.9"
python_formula="https://raw.githubusercontent.com/Homebrew/homebrew/1681e193e4d91c9620c4901efd4458d9b6fcda8e/Library/Formula/python.rb"

desired_openssl_version="1.0.2h"
desired_openssl_brew_version="1.0.2h"
openssl_formula="https://raw.githubusercontent.com/Homebrew/homebrew-core/30d3766453347f6e22b3ed6c74bb926d6def2eb5/Formula/openssl.rb"

PATH="/usr/local/bin:$PATH"

if !(which brew); then
  ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
fi

brew update > /dev/null

if !(python_version | grep "$desired_python_version"); then
  if brew list | grep python; then
    brew unlink python
  fi
  brew install "$python_formula"
  brew switch python "$desired_python_brew_version"
fi

if !(openssl_version | grep "$desired_openssl_version"); then
  if brew list | grep openssl; then
    brew unlink openssl
  fi
  brew install "$openssl_formula"
  brew switch openssl "$desired_openssl_brew_version"
fi

echo "*** Using $(python_version)"
echo "*** Using $(openssl_version)"

if !(which virtualenv); then
  pip install virtualenv
fi

compose-1.8.0/script/test/000077500000000000000000000000001274620702700154575ustar00rootroot00000000000000compose-1.8.0/script/test/all000077500000000000000000000026251274620702700161600ustar00rootroot00000000000000
#!/bin/bash
# This should be run inside a container built from the Dockerfile
# at the root of the repo - script/test will do it automatically.

set -e

>&2 echo "Running lint checks"
docker run --rm \
    --tty \
    ${GIT_VOLUME} \
    --entrypoint="tox" \
    "$TAG" -e pre-commit

get_versions="docker run --rm --entrypoint=/code/.tox/py27/bin/python $TAG /code/script/test/versions.py docker/docker"

if [ "$DOCKER_VERSIONS" == "" ]; then
    DOCKER_VERSIONS="$($get_versions default)"
elif [ "$DOCKER_VERSIONS" == "all" ]; then
    DOCKER_VERSIONS=$($get_versions -n 2 recent)
fi

BUILD_NUMBER=${BUILD_NUMBER-$USER}
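# DOCKER_VERSIONS now holds whitespace-separated engine versions (hypothetical
# example: "1.11.2 1.12.0"); each one gets its own docker-in-docker daemon below.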
for version in $DOCKER_VERSIONS; do
    >&2 echo "Running tests against Docker $version"

    daemon_container="compose-dind-$version-$BUILD_NUMBER"

    function on_exit() {
        if [[ "$?" != "0" ]]; then
            docker logs "$daemon_container" 2>&1 | tail -n 100
        fi
        docker rm -vf "$daemon_container"
    }
    trap "on_exit" EXIT

    repo="dockerswarm/dind"

    docker run \
        -d \
        --name "$daemon_container" \
        --privileged \
        --volume="/var/lib/docker" \
        "$repo:$version" \
        docker daemon -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \
        2>&1 | tail -n 10

    docker run \
        --rm \
        --tty \
        --link="$daemon_container:docker" \
        --env="DOCKER_HOST=tcp://docker:2375" \
        --env="DOCKER_VERSION=$version" \
        --entrypoint="tox" \
        "$TAG" \
        -e py27,py34 -- "$@"
done

compose-1.8.0/script/test/ci000077500000000000000000000012221274620702700157750ustar00rootroot00000000000000
#!/bin/bash
# This should be run inside a container built from the Dockerfile
# at the root of the repo:
#
#   $ TAG="docker-compose:$(git rev-parse --short HEAD)"
#   $ docker build -t "$TAG" .
#   $ docker run --rm \
#       --volume="/var/run/docker.sock:/var/run/docker.sock" \
#       --volume="$(pwd)/.git:/code/.git" \
#       -e "TAG=$TAG" \
#       --entrypoint="script/test/ci" "$TAG"

set -ex

docker version

export DOCKER_VERSIONS=all
STORAGE_DRIVER=${STORAGE_DRIVER:-overlay}
export DOCKER_DAEMON_ARGS="--storage-driver=$STORAGE_DRIVER"

GIT_VOLUME="--volumes-from=$(hostname)"
. script/test/all

>&2 echo "Building Linux binary"
. script/build/linux-entrypoint

compose-1.8.0/script/test/default000077500000000000000000000004441274620702700170330ustar00rootroot00000000000000
#!/bin/bash
# See CONTRIBUTING.md for usage.

set -ex

TAG="docker-compose:$(git rev-parse --short HEAD)"

rm -rf coverage-html
# Create the host directory so it's owned by $USER
mkdir -p coverage-html

docker build -t "$TAG" .

GIT_VOLUME="--volume=$(pwd)/.git:/code/.git"
. script/test/all

compose-1.8.0/script/test/versions.py000077500000000000000000000100001274620702700176730ustar00rootroot00000000000000
#!/usr/bin/env python
"""
Query the github API for the git tags of a project, and return a list of
version tags for recent releases, or the default release.

The default release is the most recent non-RC version.

Recent is a list of unique major.minor versions, where each is the most
recent version in the series.

For example, if the list of versions is:

    1.8.0-rc2
    1.8.0-rc1
    1.7.1
    1.7.0
    1.7.0-rc1
    1.6.2
    1.6.1

`default` would return `1.7.1` and
`recent -n 3` would return `1.8.0-rc2 1.7.1 1.6.2`
"""
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import argparse
import itertools
import operator
import sys
from collections import namedtuple

import requests


GITHUB_API = 'https://api.github.com/repos'


class Version(namedtuple('_Version', 'major minor patch rc')):

    @classmethod
    def parse(cls, version):
        version = version.lstrip('v')
        version, _, rc = version.partition('-')
        major, minor, patch = version.split('.', 3)
        return cls(int(major), int(minor), int(patch), rc)

    @property
    def major_minor(self):
        return self.major, self.minor

    @property
    def order(self):
        """Return a representation that allows this object to be sorted
        correctly with the default comparator.
        """
        # rc releases should appear before official releases
        rc = (0, self.rc) if self.rc else (1, )
        return (self.major, self.minor, self.patch) + rc

    def __str__(self):
        rc = '-{}'.format(self.rc) if self.rc else ''
        return '.'.join(map(str, self[:3])) + rc
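
# A quick illustration of the parsing and ordering rules above (hypothetical
# interpreter session, not part of the original module):
#
#   >>> str(Version.parse('v1.8.0-rc2'))
#   '1.8.0-rc2'
#   >>> Version.parse('1.8.0-rc2').order < Version.parse('1.8.0').order
#   True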

def group_versions(versions):
    """Group versions by `major.minor` releases.

    Example:

        >>> group_versions([
            Version(1, 0, 0),
            Version(2, 0, 0, 'rc1'),
            Version(2, 0, 0),
            Version(2, 1, 0),
        ])
        [
            [Version(1, 0, 0)],
            [Version(2, 0, 0), Version(2, 0, 0, 'rc1')],
            [Version(2, 1, 0)],
        ]
    """
    return list(
        list(releases)
        for _, releases
        in itertools.groupby(versions, operator.attrgetter('major_minor'))
    )


def get_latest_versions(versions, num=1):
    """Return a list of the most recent versions for each
    major.minor version group.
    """
    versions = group_versions(versions)
    return [versions[index][0] for index in range(num)]


def get_default(versions):
    """Return a :class:`Version` for the latest non-rc version."""
    for version in versions:
        if not version.rc:
            return version


def get_versions(tags):
    for tag in tags:
        try:
            yield Version.parse(tag['name'])
        except ValueError:
            print("Skipping invalid tag: {name}".format(**tag), file=sys.stderr)


def get_github_releases(project):
    """Query the Github API for a list of version tags and return them in
    sorted order.

    See https://developer.github.com/v3/repos/#list-tags
    """
    url = '{}/{}/tags'.format(GITHUB_API, project)
    response = requests.get(url)
    response.raise_for_status()
    versions = get_versions(response.json())
    return sorted(versions, reverse=True, key=operator.attrgetter('order'))


def parse_args(argv):
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument('project', help="Github project name (ex: docker/docker)")
    parser.add_argument('command', choices=['recent', 'default'])
    parser.add_argument('-n', '--num', type=int, default=2,
                        help="Number of versions to return from `recent`")
    return parser.parse_args(argv)


def main(argv=None):
    args = parse_args(argv)
    versions = get_github_releases(args.project)
    if args.command == 'recent':
        print(' '.join(map(str, get_latest_versions(versions, args.num))))
    elif args.command == 'default':
        print(get_default(versions))
    else:
        raise ValueError("Unknown command {}".format(args.command))


if __name__ == "__main__":
    main()

compose-1.8.0/script/travis/000077500000000000000000000000001274620702700160105ustar00rootroot00000000000000compose-1.8.0/script/travis/bintray.json.tmpl000066400000000000000000000015241274620702700213300ustar00rootroot00000000000000
{
    "package": {
        "name": "${TRAVIS_OS_NAME}",
        "repo": "${TRAVIS_BRANCH}",
        "subject": "docker-compose",
        "desc": "Automated build of master branch from travis ci.",
        "website_url": "https://github.com/docker/compose",
        "issue_tracker_url": "https://github.com/docker/compose/issues",
        "vcs_url": "https://github.com/docker/compose.git",
        "licenses": ["Apache-2.0"]
    },

    "version": {
        "name": "${TRAVIS_BRANCH}",
        "desc": "Automated build of the ${TRAVIS_BRANCH} branch.",
        "released": "${DATE}",
        "vcs_tag": "master"
    },

    "files": [
        {
            "includePattern": "dist/(.*)",
            "excludePattern": ".*\.tar.gz",
            "uploadPattern": "$1",
            "matrixParams": {
                "override": 1
            }
        }
    ],
    "publish": true
}

compose-1.8.0/script/travis/build-binary000077500000000000000000000004111274620702700203130ustar00rootroot00000000000000
#!/bin/bash

set -ex

if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
    script/build/linux
    # TODO: requires auth to push, so disable for now
    # script/build/image master
    # docker push docker/compose:master
else
    script/setup/osx
    script/build/osx
fi

compose-1.8.0/script/travis/ci000077500000000000000000000003051274620702700163270ustar00rootroot00000000000000
#!/bin/bash

set -e

if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
    tox -e py27,py34 -- tests/unit
else
    # TODO: we could also install py34 and test against it
    tox -e py27 -- tests/unit
fi

compose-1.8.0/script/travis/install000077500000000000000000000002601274620702700174020ustar00rootroot00000000000000
#!/bin/bash

set -ex

if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
    pip install tox==2.1.1
else
    sudo pip install --upgrade pip tox==2.1.1 virtualenv
    pip --version
fi

compose-1.8.0/script/travis/render-bintray-config.py000077500000000000000000000004531274620702700225570ustar00rootroot00000000000000
#!/usr/bin/env python
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import datetime
import os.path
import sys

os.environ['DATE'] = str(datetime.date.today())

for line in sys.stdin:
    print(os.path.expandvars(line), end='')
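
# Example (hypothetical environment): with TRAVIS_BRANCH=master set, running
#   ./script/travis/render-bintray-config.py < script/travis/bintray.json.tmpl
# writes the template to stdout with ${TRAVIS_BRANCH}, ${TRAVIS_OS_NAME} and
# the ${DATE} exported above expanded.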

compose-1.8.0/setup.py000066400000000000000000000041621274620702700147110ustar00rootroot00000000000000
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import unicode_literals

import codecs
import os
import re
import sys

from setuptools import find_packages
from setuptools import setup


def read(*parts):
    path = os.path.join(os.path.dirname(__file__), *parts)
    with codecs.open(path, encoding='utf-8') as fobj:
        return fobj.read()


def find_version(*file_paths):
    version_file = read(*file_paths)
    version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
                              version_file, re.M)
    if version_match:
        return version_match.group(1)
    raise RuntimeError("Unable to find version string.")
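
# find_version() above scans a file for a line such as (hypothetical):
#     __version__ = '1.8.0'
# and returns the quoted string, so the single source of truth for the
# package version is compose/__init__.py.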

install_requires = [
    'cached-property >= 1.2.0, < 2',
    'docopt >= 0.6.1, < 0.7',
    'PyYAML >= 3.10, < 4',
    'requests >= 2.6.1, < 2.8',
    'texttable >= 0.8.1, < 0.9',
    'websocket-client >= 0.32.0, < 1.0',
    'docker-py >= 1.9.0, < 2.0',
    'dockerpty >= 0.4.1, < 0.5',
    'six >= 1.3.0, < 2',
    'jsonschema >= 2.5.1, < 3',
]


tests_require = [
    'pytest',
]


if sys.version_info[:2] < (3, 4):
    tests_require.append('mock >= 1.0.1')
    install_requires.append('enum34 >= 1.0.4, < 2')


setup(
    name='docker-compose',
    version=find_version("compose", "__init__.py"),
    description='Multi-container orchestration for Docker',
    url='https://www.docker.com/',
    author='Docker, Inc.',
    license='Apache License 2.0',
    packages=find_packages(exclude=['tests.*', 'tests']),
    include_package_data=True,
    test_suite='nose.collector',
    install_requires=install_requires,
    tests_require=tests_require,
    entry_points="""
    [console_scripts]
    docker-compose=compose.cli.main:main
    """,
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'Environment :: Console',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: Apache Software License',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.4',
    ],
)

compose-1.8.0/tests/000077500000000000000000000000001274620702700143365ustar00rootroot00000000000000compose-1.8.0/tests/__init__.py000066400000000000000000000004261274620702700164510ustar00rootroot00000000000000
from __future__ import absolute_import
from __future__ import unicode_literals

import sys

if sys.version_info >= (2, 7):
    import unittest  # NOQA
else:
    import unittest2 as unittest  # NOQA

try:
    from unittest import mock
except ImportError:
    import mock  # NOQA

compose-1.8.0/tests/acceptance/000077500000000000000000000000001274620702700164245ustar00rootroot00000000000000compose-1.8.0/tests/acceptance/__init__.py000066400000000000000000000000001274620702700205230ustar00rootroot00000000000000compose-1.8.0/tests/acceptance/cli_test.py000066400000000000000000002005641274620702700206130ustar00rootroot00000000000000
from __future__ import absolute_import
from __future__ import unicode_literals

import datetime
import json
import os
import signal
import subprocess
import time
from collections import Counter
from collections import namedtuple
from operator import attrgetter

import py
import yaml
from docker import errors

from .. import mock
from compose.cli.command import get_project
from compose.container import Container
from compose.project import OneOffFilter
from tests.integration.testcases import DockerClientTestCase
from tests.integration.testcases import get_links
from tests.integration.testcases import pull_busybox
from tests.integration.testcases import v2_only


ProcessResult = namedtuple('ProcessResult', 'stdout stderr')


BUILD_CACHE_TEXT = 'Using cache'
BUILD_PULL_TEXT = 'Status: Image is up to date for busybox:latest'


def start_process(base_dir, options):
    proc = subprocess.Popen(
        ['docker-compose'] + options,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        cwd=base_dir)
    print("Running process: %s" % proc.pid)
    return proc


def wait_on_process(proc, returncode=0):
    stdout, stderr = proc.communicate()
    if proc.returncode != returncode:
        print("Stderr: {}".format(stderr))
        print("Stdout: {}".format(stdout))
        assert proc.returncode == returncode
    return ProcessResult(stdout.decode('utf-8'), stderr.decode('utf-8'))


def wait_on_condition(condition, delay=0.1, timeout=40):
    start_time = time.time()
    while not condition():
        if time.time() - start_time > timeout:
            raise AssertionError("Timeout: %s" % condition)
        time.sleep(delay)


def kill_service(service):
    for container in service.containers():
        container.kill()


class ContainerCountCondition(object):

    def __init__(self, project, expected):
        self.project = project
        self.expected = expected

    def __call__(self):
        return len(self.project.containers()) == self.expected

    def __str__(self):
        return "waiting for container count == %s" % self.expected


class ContainerStateCondition(object):

    def __init__(self, client, name, status):
        self.client = client
        self.name = name
        self.status = status

    def __call__(self):
        try:
            container = self.client.inspect_container(self.name)
            return container['State']['Status'] == self.status
        except errors.APIError:
            return False

    def __str__(self):
        return "waiting for container to be %s" % self.status
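
# The condition classes above are used with wait_on_condition(), e.g.
# (hypothetical): wait_on_condition(ContainerCountCondition(project, 2))
# blocks until the project has exactly two running containers or times out.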

class CLITestCase(DockerClientTestCase):

    def setUp(self):
        super(CLITestCase, self).setUp()
        self.base_dir = 'tests/fixtures/simple-composefile'

    def tearDown(self):
        if self.base_dir:
            self.project.kill()
            self.project.remove_stopped()

            for container in self.project.containers(stopped=True, one_off=OneOffFilter.only):
                container.remove(force=True)

            networks = self.client.networks()
            for n in networks:
                if n['Name'].startswith('{}_'.format(self.project.name)):
                    self.client.remove_network(n['Name'])

        super(CLITestCase, self).tearDown()

    @property
    def project(self):
        # Hack: allow project to be overridden
        if not hasattr(self, '_project'):
            self._project = get_project(self.base_dir)
        return self._project

    def dispatch(self, options, project_options=None, returncode=0):
        project_options = project_options or []
        proc = start_process(self.base_dir, project_options + options)
        return wait_on_process(proc, returncode=returncode)

    def execute(self, container, cmd):
        # Remove once Hijack and CloseNotifier sign a peace treaty
        self.client.close()
        exc = self.client.exec_create(container.id, cmd)
        self.client.exec_start(exc)
        return self.client.exec_inspect(exc)['ExitCode']

    def lookup(self, container, hostname):
        return self.execute(container, ["nslookup", hostname]) == 0

    def test_help(self):
        self.base_dir = 'tests/fixtures/no-composefile'
        result = self.dispatch(['help', 'up'], returncode=0)
        assert 'Usage: up [options] [SERVICE...]' in result.stdout
        # Prevent tearDown from trying to create a project
        self.base_dir = None

    def test_shorthand_host_opt(self):
        self.dispatch(
            ['-H={0}'.format(os.environ.get('DOCKER_HOST', 'unix://')),
             'up', '-d'],
            returncode=0
        )

    def test_config_list_services(self):
        self.base_dir = 'tests/fixtures/v2-full'
        result = self.dispatch(['config', '--services'])
        assert set(result.stdout.rstrip().split('\n')) == {'web', 'other'}

    def test_config_quiet_with_error(self):
        self.base_dir = None
        result = self.dispatch([
            '-f', 'tests/fixtures/invalid-composefile/invalid.yml',
            'config', '-q'
        ], returncode=1)
        assert "'notaservice' must be a mapping" in result.stderr

    def test_config_quiet(self):
        self.base_dir = 'tests/fixtures/v2-full'
        assert self.dispatch(['config', '-q']).stdout == ''

    def test_config_default(self):
        self.base_dir = 'tests/fixtures/v2-full'
        result = self.dispatch(['config'])
        # assert there are no python objects encoded in the output
        assert '!!' not in result.stdout

        output = yaml.load(result.stdout)
        expected = {
            'version': '2.0',
            'volumes': {'data': {'driver': 'local'}},
            'networks': {'front': {}},
            'services': {
                'web': {
                    'build': {
                        'context': os.path.abspath(self.base_dir),
                    },
                    'networks': {'front': None, 'default': None},
                    'volumes_from': ['service:other:rw'],
                },
                'other': {
                    'image': 'busybox:latest',
                    'command': 'top',
                    'volumes': ['/data:rw'],
                },
            },
        }
        assert output == expected

    def test_config_restart(self):
        self.base_dir = 'tests/fixtures/restart'
        result = self.dispatch(['config'])
        assert yaml.load(result.stdout) == {
            'version': '2.0',
            'services': {
                'never': {
                    'image': 'busybox',
                    'restart': 'no',
                },
                'always': {
                    'image': 'busybox',
                    'restart': 'always',
                },
                'on-failure': {
                    'image': 'busybox',
                    'restart': 'on-failure',
                },
                'on-failure-5': {
                    'image': 'busybox',
                    'restart': 'on-failure:5',
                },
            },
            'networks': {},
            'volumes': {},
        }

    def test_config_external_network(self):
        self.base_dir = 'tests/fixtures/networks'
        result = self.dispatch(['-f', 'external-networks.yml', 'config'])
        json_result = yaml.load(result.stdout)
        assert 'networks' in json_result
        assert json_result['networks'] == {
            'networks_foo': {
                'external': True  # {'name': 'networks_foo'}
            },
            'bar': {
                'external': {'name': 'networks_bar'}
            }
        }

    def test_config_v1(self):
        self.base_dir = 'tests/fixtures/v1-config'
        result = self.dispatch(['config'])
        assert yaml.load(result.stdout) == {
            'version': '2.0',
            'services': {
                'net': {
                    'image': 'busybox',
                    'network_mode': 'bridge',
                },
                'volume': {
                    'image': 'busybox',
                    'volumes': ['/data:rw'],
                    'network_mode': 'bridge',
                },
                'app': {
                    'image': 'busybox',
                    'volumes_from': ['service:volume:rw'],
                    'network_mode': 'service:net',
                },
            },
            'networks': {},
            'volumes': {},
        }

    def test_ps(self):
        self.project.get_service('simple').create_container()
        result = self.dispatch(['ps'])
        assert 'simplecomposefile_simple_1' in result.stdout

    def test_ps_default_composefile(self):
        self.base_dir = 'tests/fixtures/multiple-composefiles'
        self.dispatch(['up', '-d'])
        result = self.dispatch(['ps'])

        self.assertIn('multiplecomposefiles_simple_1', result.stdout)
        self.assertIn('multiplecomposefiles_another_1', result.stdout)
        self.assertNotIn('multiplecomposefiles_yetanother_1', result.stdout)

    def test_ps_alternate_composefile(self):
        config_path = os.path.abspath(
            'tests/fixtures/multiple-composefiles/compose2.yml')
        self._project = get_project(self.base_dir, [config_path])

        self.base_dir = 'tests/fixtures/multiple-composefiles'
        self.dispatch(['-f', 'compose2.yml', 'up', '-d'])
        result = self.dispatch(['-f', 'compose2.yml', 'ps'])

        self.assertNotIn('multiplecomposefiles_simple_1', result.stdout)
        self.assertNotIn('multiplecomposefiles_another_1', result.stdout)
        self.assertIn('multiplecomposefiles_yetanother_1', result.stdout)
    def test_pull(self):
        result = self.dispatch(['pull'])
        assert sorted(result.stderr.split('\n'))[1:] == [
            'Pulling another (busybox:latest)...',
            'Pulling simple (busybox:latest)...',
        ]

    def test_pull_with_digest(self):
        result = self.dispatch(['-f', 'digest.yml', 'pull'])

        assert 'Pulling simple (busybox:latest)...' in result.stderr
        assert ('Pulling digest (busybox@'
                'sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b520'
                '04ee8502d)...') in result.stderr

    def test_pull_with_ignore_pull_failures(self):
        result = self.dispatch([
            '-f', 'ignore-pull-failures.yml',
            'pull', '--ignore-pull-failures'])

        assert 'Pulling simple (busybox:latest)...' in result.stderr
        assert 'Pulling another (nonexisting-image:latest)...' in result.stderr
        assert 'Error: image library/nonexisting-image' in result.stderr
        assert 'not found' in result.stderr

    def test_build_plain(self):
        self.base_dir = 'tests/fixtures/simple-dockerfile'
        self.dispatch(['build', 'simple'])

        result = self.dispatch(['build', 'simple'])
        assert BUILD_CACHE_TEXT in result.stdout
        assert BUILD_PULL_TEXT not in result.stdout

    def test_build_no_cache(self):
        self.base_dir = 'tests/fixtures/simple-dockerfile'
        self.dispatch(['build', 'simple'])

        result = self.dispatch(['build', '--no-cache', 'simple'])
        assert BUILD_CACHE_TEXT not in result.stdout
        assert BUILD_PULL_TEXT not in result.stdout

    def test_build_pull(self):
        # Make sure we have the latest busybox already
        pull_busybox(self.client)
        self.base_dir = 'tests/fixtures/simple-dockerfile'
        self.dispatch(['build', 'simple'], None)

        result = self.dispatch(['build', '--pull', 'simple'])
        assert BUILD_CACHE_TEXT in result.stdout
        assert BUILD_PULL_TEXT in result.stdout

    def test_build_no_cache_pull(self):
        # Make sure we have the latest busybox already
        pull_busybox(self.client)
        self.base_dir = 'tests/fixtures/simple-dockerfile'
        self.dispatch(['build', 'simple'])

        result = self.dispatch(['build', '--no-cache', '--pull', 'simple'])
        assert BUILD_CACHE_TEXT not in result.stdout
        assert BUILD_PULL_TEXT in result.stdout

    def test_build_failed(self):
        self.base_dir = 'tests/fixtures/simple-failing-dockerfile'
        self.dispatch(['build', 'simple'], returncode=1)

        labels = ["com.docker.compose.test_failing_image=true"]
        containers = [
            Container.from_ps(self.project.client, c)
            for c in self.project.client.containers(
                all=True,
                filters={"label": labels})
        ]
        assert len(containers) == 1

    def test_build_failed_forcerm(self):
        self.base_dir = 'tests/fixtures/simple-failing-dockerfile'
        self.dispatch(['build', '--force-rm', 'simple'], returncode=1)

        labels = ["com.docker.compose.test_failing_image=true"]
        containers = [
            Container.from_ps(self.project.client, c)
            for c in self.project.client.containers(
                all=True,
                filters={"label": labels})
        ]
        assert not containers

    def test_bundle_with_digests(self):
        self.base_dir = 'tests/fixtures/bundle-with-digests/'
        tmpdir = py.test.ensuretemp('cli_test_bundle')
        self.addCleanup(tmpdir.remove)
        filename = str(tmpdir.join('example.dab'))

        self.dispatch(['bundle', '--output', filename])
        with open(filename, 'r') as fh:
            bundle = json.load(fh)

        assert bundle == {
            'Version': '0.1',
            'Services': {
                'web': {
                    'Image': ('dockercloud/hello-world@sha256:fe79a2cfbd17eefc3'
                              '44fb8419420808df95a1e22d93b7f621a7399fd1e9dca1d'),
                    'Networks': ['default'],
                },
                'redis': {
                    'Image': ('redis@sha256:a84cb8f53a70e19f61ff2e1d5e73fb7ae62d'
                              '374b2b7392de1e7d77be26ef8f7b'),
                    'Networks': ['default'],
                }
            },
        }

    def test_create(self):
        self.dispatch(['create'])
        service = self.project.get_service('simple')
        another = self.project.get_service('another')
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(another.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)
        self.assertEqual(len(another.containers(stopped=True)), 1)

    def test_create_with_force_recreate(self):
        self.dispatch(['create'], None)
        service = self.project.get_service('simple')
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)

        old_ids = [c.id for c in service.containers(stopped=True)]

        self.dispatch(['create', '--force-recreate'], None)
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)

        new_ids = [c.id for c in service.containers(stopped=True)]

        self.assertNotEqual(old_ids, new_ids)

    def test_create_with_no_recreate(self):
        self.dispatch(['create'], None)
        service = self.project.get_service('simple')
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)

        old_ids = [c.id for c in service.containers(stopped=True)]

        self.dispatch(['create', '--no-recreate'], None)
        self.assertEqual(len(service.containers()), 0)
        self.assertEqual(len(service.containers(stopped=True)), 1)

        new_ids = [c.id for c in service.containers(stopped=True)]

        self.assertEqual(old_ids, new_ids)

    def test_create_with_force_recreate_and_no_recreate(self):
        self.dispatch(
            ['create', '--force-recreate', '--no-recreate'],
            returncode=1)

    def test_down_invalid_rmi_flag(self):
        result = self.dispatch(['down', '--rmi', 'bogus'], returncode=1)
        assert '--rmi flag must be' in result.stderr

    @v2_only()
    def test_down(self):
        self.base_dir = 'tests/fixtures/v2-full'
        self.dispatch(['up', '-d'])
        wait_on_condition(ContainerCountCondition(self.project, 2))

        self.dispatch(['run', 'web', 'true'])
        self.dispatch(['run', '-d', 'web', 'tail', '-f', '/dev/null'])
        assert len(self.project.containers(one_off=OneOffFilter.only, stopped=True)) == 2

        result = self.dispatch(['down', '--rmi=local', '--volumes'])
        assert 'Stopping v2full_web_1' in result.stderr
        assert 'Stopping v2full_other_1' in result.stderr
        assert 'Stopping v2full_web_run_2' in result.stderr
        assert 'Removing v2full_web_1' in result.stderr
        assert 'Removing v2full_other_1' in result.stderr
        assert 'Removing v2full_web_run_1' in result.stderr
        assert 'Removing v2full_web_run_2' in result.stderr
        assert 'Removing volume v2full_data' in result.stderr
        assert 'Removing image v2full_web' in result.stderr
        assert 'Removing image busybox' not in result.stderr
        assert 'Removing network v2full_default' in result.stderr
        assert 'Removing network v2full_front' in result.stderr

    def test_up_detached(self):
        self.dispatch(['up', '-d'])
        service = self.project.get_service('simple')
        another = self.project.get_service('another')
        self.assertEqual(len(service.containers()), 1)
        self.assertEqual(len(another.containers()), 1)

        # Ensure containers don't have stdin and stdout connected in -d mode
        container, = service.containers()
        self.assertFalse(container.get('Config.AttachStderr'))
        self.assertFalse(container.get('Config.AttachStdout'))
        self.assertFalse(container.get('Config.AttachStdin'))

    def test_up_attached(self):
        self.base_dir = 'tests/fixtures/echo-services'
        result = self.dispatch(['up', '--no-color'])

        assert 'simple_1 | simple' in result.stdout
        assert 'another_1 | another' in result.stdout
        assert 'simple_1 exited with code 0' in result.stdout
        assert 'another_1 exited with code 0' in result.stdout

    @v2_only()
    def test_up(self):
        self.base_dir = 'tests/fixtures/v2-simple'
        self.dispatch(['up', '-d'], None)
        services = self.project.get_services()

        network_name = self.project.networks.networks['default'].full_name
        networks = self.client.networks(names=[network_name])
        self.assertEqual(len(networks), 1)
        self.assertEqual(networks[0]['Driver'], 'bridge')
        assert 'com.docker.network.bridge.enable_icc' not in networks[0]['Options']

        network = self.client.inspect_network(networks[0]['Id'])

        for service in services:
            containers = service.containers()
            self.assertEqual(len(containers), 1)

            container = containers[0]
            self.assertIn(container.id, network['Containers'])

            networks = container.get('NetworkSettings.Networks')
            self.assertEqual(list(networks), [network['Name']])

            self.assertEqual(
                sorted(networks[network['Name']]['Aliases']),
                sorted([service.name, container.short_id]))

        for service in services:
            assert self.lookup(container, service.name)

    @v2_only()
    def test_up_with_default_network_config(self):
        filename = 'default-network-config.yml'

        self.base_dir = 'tests/fixtures/networks'
        self._project = get_project(self.base_dir, [filename])

        self.dispatch(['-f', filename, 'up', '-d'], None)

        network_name = self.project.networks.networks['default'].full_name
        networks = self.client.networks(names=[network_name])

        assert networks[0]['Options']['com.docker.network.bridge.enable_icc'] == 'false'

    @v2_only()
    def test_up_with_network_aliases(self):
        filename = 'network-aliases.yml'
        self.base_dir = 'tests/fixtures/networks'
        self.dispatch(['-f', filename, 'up', '-d'], None)
        back_name = '{}_back'.format(self.project.name)
        front_name = '{}_front'.format(self.project.name)

        networks = [
            n for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]

        # Two networks were created: back and front
        assert sorted(n['Name'] for n in networks) == [back_name, front_name]
        web_container = self.project.get_service('web').containers()[0]

        back_aliases = web_container.get(
            'NetworkSettings.Networks.{}.Aliases'.format(back_name)
        )
        assert 'web' in back_aliases
        front_aliases = web_container.get(
            'NetworkSettings.Networks.{}.Aliases'.format(front_name)
        )
        assert 'web' in front_aliases
        assert 'forward_facing' in front_aliases
        assert 'ahead' in front_aliases

    @v2_only()
    def test_up_with_network_static_addresses(self):
        filename = 'network-static-addresses.yml'
        ipv4_address = '172.16.100.100'
        ipv6_address = 'fe80::1001:100'
        self.base_dir = 'tests/fixtures/networks'
        self.dispatch(['-f', filename, 'up', '-d'], None)
        static_net = '{}_static_test'.format(self.project.name)

        networks = [
            n for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]

        # One network was created: static_test
        assert sorted(n['Name'] for n in networks) == [static_net]
        web_container = self.project.get_service('web').containers()[0]

        ipam_config = web_container.get(
            'NetworkSettings.Networks.{}.IPAMConfig'.format(static_net)
        )
        assert ipv4_address in ipam_config.values()
        assert ipv6_address in ipam_config.values()

    @v2_only()
    def test_up_with_networks(self):
        self.base_dir = 'tests/fixtures/networks'
        self.dispatch(['up', '-d'], None)

        back_name = '{}_back'.format(self.project.name)
        front_name = '{}_front'.format(self.project.name)

        networks = [
            n for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]

        # Two networks were created: back and front
        assert sorted(n['Name'] for n in networks) == [back_name, front_name]

        back_network = [n for n in networks if n['Name'] == back_name][0]
        front_network = [n for n in networks if n['Name'] == front_name][0]
        web_container = self.project.get_service('web').containers()[0]
        app_container = self.project.get_service('app').containers()[0]
        db_container = self.project.get_service('db').containers()[0]

        for net_name in [front_name, back_name]:
            links = app_container.get('NetworkSettings.Networks.{}.Links'.format(net_name))
            assert '{}:database'.format(db_container.name) in links

        # db and app joined the back network
        assert sorted(back_network['Containers']) == sorted([db_container.id, app_container.id])

        # web and app joined the front network
        assert sorted(front_network['Containers']) == sorted([web_container.id, app_container.id])

        # web can see app but not db
        assert self.lookup(web_container, "app")
        assert not self.lookup(web_container, "db")

        # app can see db
        assert self.lookup(app_container, "db")

        # app has aliased db to "database"
        assert self.lookup(app_container, "database")

    @v2_only()
    def test_up_missing_network(self):
        self.base_dir = 'tests/fixtures/networks'

        result = self.dispatch(
            ['-f', 'missing-network.yml', 'up', '-d'],
            returncode=1)

        assert 'Service "web" uses an undefined network "foo"' in result.stderr

    @v2_only()
    def test_up_with_network_mode(self):
        c = self.client.create_container('busybox', 'top', name='composetest_network_mode_container')
        self.addCleanup(self.client.remove_container, c, force=True)
        self.client.start(c)
        container_mode_source = 'container:{}'.format(c['Id'])

        filename = 'network-mode.yml'

        self.base_dir = 'tests/fixtures/networks'
        self._project = get_project(self.base_dir, [filename])

        self.dispatch(['-f', filename, 'up', '-d'], None)

        networks = [
            n for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]
        assert not networks

        for name in ['bridge', 'host', 'none']:
            container = self.project.get_service(name).containers()[0]
            assert list(container.get('NetworkSettings.Networks')) == [name]
            assert container.get('HostConfig.NetworkMode') == name

        service_mode_source = 'container:{}'.format(
            self.project.get_service('bridge').containers()[0].id)
        service_mode_container = self.project.get_service('service').containers()[0]
        assert not service_mode_container.get('NetworkSettings.Networks')
        assert service_mode_container.get('HostConfig.NetworkMode') == service_mode_source

        container_mode_container = self.project.get_service('container').containers()[0]
        assert not container_mode_container.get('NetworkSettings.Networks')
        assert container_mode_container.get('HostConfig.NetworkMode') == container_mode_source

    @v2_only()
    def test_up_external_networks(self):
        filename = 'external-networks.yml'

        self.base_dir = 'tests/fixtures/networks'
        self._project = get_project(self.base_dir, [filename])

        result = self.dispatch(['-f', filename, 'up', '-d'], returncode=1)
        assert 'declared as external, but could not be found' in result.stderr

        networks = [
            n['Name'] for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]
        assert not networks

        network_names = ['{}_{}'.format(self.project.name, n) for n in ['foo', 'bar']]
        for name in network_names:
            self.client.create_network(name)

        self.dispatch(['-f', filename, 'up', '-d'])
        container = self.project.containers()[0]
        assert sorted(list(container.get('NetworkSettings.Networks'))) == sorted(network_names)

    @v2_only()
    def test_up_with_external_default_network(self):
        filename = 'external-default.yml'

        self.base_dir = 'tests/fixtures/networks'
        self._project = get_project(self.base_dir, [filename])

        result = self.dispatch(['-f', filename, 'up', '-d'], returncode=1)
        assert 'declared as external, but could not be found' in result.stderr
        networks = [
            n['Name'] for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]
        assert not networks

        network_name = 'composetest_external_network'
        self.client.create_network(network_name)

        self.dispatch(['-f', filename, 'up', '-d'])
        container = self.project.containers()[0]
        assert list(container.get('NetworkSettings.Networks')) == [network_name]

    @v2_only()
    def test_up_no_services(self):
        self.base_dir = 'tests/fixtures/no-services'
        self.dispatch(['up', '-d'], None)

        network_names = [
            n['Name'] for n in self.client.networks()
            if n['Name'].startswith('{}_'.format(self.project.name))
        ]
        assert network_names == []

    def test_up_with_links_v1(self):
        self.base_dir = 'tests/fixtures/links-composefile'
        self.dispatch(['up', '-d', 'web'], None)

        # No network was created
        network_name = self.project.networks.networks['default'].full_name
        networks = self.client.networks(names=[network_name])
        assert networks == []

        web = self.project.get_service('web')
        db = self.project.get_service('db')
        console = self.project.get_service('console')

        # console was not started
        self.assertEqual(len(web.containers()), 1)
        self.assertEqual(len(db.containers()), 1)
        self.assertEqual(len(console.containers()), 0)

        # web has links
        web_container = web.containers()[0]
        self.assertTrue(web_container.get('HostConfig.Links'))

    def test_up_with_net_is_invalid(self):
        self.base_dir = 'tests/fixtures/net-container'

        result = self.dispatch(
            ['-f', 'v2-invalid.yml', 'up', '-d'],
            returncode=1)

        assert "Unsupported config option for services.bar: 'net'" in result.stderr

    def test_up_with_net_v1(self):
        self.base_dir = 'tests/fixtures/net-container'
        self.dispatch(['up', '-d'], None)

        bar = self.project.get_service('bar')
        bar_container = bar.containers()[0]

        foo = self.project.get_service('foo')
        foo_container = foo.containers()[0]

        assert foo_container.get('HostConfig.NetworkMode') == \
            'container:{}'.format(bar_container.id)

    def test_up_with_no_deps(self):
        self.base_dir = 'tests/fixtures/links-composefile'
        self.dispatch(['up', '-d', '--no-deps', 'web'], None)
        web = self.project.get_service('web')
        db = self.project.get_service('db')
        console = self.project.get_service('console')
        self.assertEqual(len(web.containers()), 1)
        self.assertEqual(len(db.containers()), 0)
        self.assertEqual(len(console.containers()), 0)

    def test_up_with_force_recreate(self):
        self.dispatch(['up', '-d'], None)
        service = self.project.get_service('simple')
        self.assertEqual(len(service.containers()), 1)

        old_ids = [c.id for c in service.containers()]

        self.dispatch(['up', '-d', '--force-recreate'], None)
        self.assertEqual(len(service.containers()), 1)

        new_ids = [c.id for c in service.containers()]

        self.assertNotEqual(old_ids, new_ids)

    def test_up_with_no_recreate(self):
        self.dispatch(['up', '-d'], None)
        service = self.project.get_service('simple')
        self.assertEqual(len(service.containers()), 1)

        old_ids = [c.id for c in service.containers()]

        self.dispatch(['up', '-d', '--no-recreate'], None)
        self.assertEqual(len(service.containers()), 1)

        new_ids = [c.id for c in service.containers()]

        self.assertEqual(old_ids, new_ids)

    def test_up_with_force_recreate_and_no_recreate(self):
        self.dispatch(
            ['up', '-d', '--force-recreate', '--no-recreate'],
            returncode=1)

    def test_up_with_timeout(self):
        self.dispatch(['up', '-d', '-t', '1'])
        service = self.project.get_service('simple')
        another = self.project.get_service('another')
        self.assertEqual(len(service.containers()), 1)
        self.assertEqual(len(another.containers()), 1)

        # Ensure containers don't have stdin and stdout connected in -d mode
connected in -d mode config = service.containers()[0].inspect()['Config'] self.assertFalse(config['AttachStderr']) self.assertFalse(config['AttachStdout']) self.assertFalse(config['AttachStdin']) def test_up_handles_sigint(self): proc = start_process(self.base_dir, ['up', '-t', '2']) wait_on_condition(ContainerCountCondition(self.project, 2)) os.kill(proc.pid, signal.SIGINT) wait_on_condition(ContainerCountCondition(self.project, 0)) def test_up_handles_sigterm(self): proc = start_process(self.base_dir, ['up', '-t', '2']) wait_on_condition(ContainerCountCondition(self.project, 2)) os.kill(proc.pid, signal.SIGTERM) wait_on_condition(ContainerCountCondition(self.project, 0)) @v2_only() def test_up_handles_force_shutdown(self): self.base_dir = 'tests/fixtures/sleeps-composefile' proc = start_process(self.base_dir, ['up', '-t', '200']) wait_on_condition(ContainerCountCondition(self.project, 2)) os.kill(proc.pid, signal.SIGTERM) time.sleep(0.1) os.kill(proc.pid, signal.SIGTERM) wait_on_condition(ContainerCountCondition(self.project, 0)) def test_up_handles_abort_on_container_exit(self): start_process(self.base_dir, ['up', '--abort-on-container-exit']) wait_on_condition(ContainerCountCondition(self.project, 2)) self.project.stop(['simple']) wait_on_condition(ContainerCountCondition(self.project, 0)) def test_exec_without_tty(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['up', '-d', 'console']) self.assertEqual(len(self.project.containers()), 1) stdout, stderr = self.dispatch(['exec', '-T', 'console', 'ls', '-1d', '/']) self.assertEquals(stdout, "/\n") self.assertEquals(stderr, "") def test_exec_custom_user(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['up', '-d', 'console']) self.assertEqual(len(self.project.containers()), 1) stdout, stderr = self.dispatch(['exec', '-T', '--user=operator', 'console', 'whoami']) self.assertEquals(stdout, "operator\n") self.assertEquals(stderr, "") def test_run_service_without_links(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['run', 'console', '/bin/true']) self.assertEqual(len(self.project.containers()), 0) # Ensure stdin/out was open container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0] config = container.inspect()['Config'] self.assertTrue(config['AttachStderr']) self.assertTrue(config['AttachStdout']) self.assertTrue(config['AttachStdin']) def test_run_service_with_links(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['run', 'web', '/bin/true'], None) db = self.project.get_service('db') console = self.project.get_service('console') self.assertEqual(len(db.containers()), 1) self.assertEqual(len(console.containers()), 0) @v2_only() def test_run_service_with_dependencies(self): self.base_dir = 'tests/fixtures/v2-dependencies' self.dispatch(['run', 'web', '/bin/true'], None) db = self.project.get_service('db') console = self.project.get_service('console') self.assertEqual(len(db.containers()), 1) self.assertEqual(len(console.containers()), 0) def test_run_with_no_deps(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['run', '--no-deps', 'web', '/bin/true']) db = self.project.get_service('db') self.assertEqual(len(db.containers()), 0) def test_run_does_not_recreate_linked_containers(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['up', '-d', 'db']) db = self.project.get_service('db') self.assertEqual(len(db.containers()), 1) old_ids = [c.id for c in db.containers()] 
self.dispatch(['run', 'web', '/bin/true'], None) self.assertEqual(len(db.containers()), 1) new_ids = [c.id for c in db.containers()] self.assertEqual(old_ids, new_ids) def test_run_without_command(self): self.base_dir = 'tests/fixtures/commands-composefile' self.check_build('tests/fixtures/simple-dockerfile', tag='composetest_test') self.dispatch(['run', 'implicit']) service = self.project.get_service('implicit') containers = service.containers(stopped=True, one_off=OneOffFilter.only) self.assertEqual( [c.human_readable_command for c in containers], [u'/bin/sh -c echo "success"'], ) self.dispatch(['run', 'explicit']) service = self.project.get_service('explicit') containers = service.containers(stopped=True, one_off=OneOffFilter.only) self.assertEqual( [c.human_readable_command for c in containers], [u'/bin/true'], ) def test_run_service_with_dockerfile_entrypoint(self): self.base_dir = 'tests/fixtures/entrypoint-dockerfile' self.dispatch(['run', 'test']) container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0] assert container.get('Config.Entrypoint') == ['printf'] assert container.get('Config.Cmd') == ['default', 'args'] def test_run_service_with_dockerfile_entrypoint_overridden(self): self.base_dir = 'tests/fixtures/entrypoint-dockerfile' self.dispatch(['run', '--entrypoint', 'echo', 'test']) container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0] assert container.get('Config.Entrypoint') == ['echo'] assert not container.get('Config.Cmd') def test_run_service_with_dockerfile_entrypoint_and_command_overridden(self): self.base_dir = 'tests/fixtures/entrypoint-dockerfile' self.dispatch(['run', '--entrypoint', 'echo', 'test', 'foo']) container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0] assert container.get('Config.Entrypoint') == ['echo'] assert container.get('Config.Cmd') == ['foo'] def test_run_service_with_compose_file_entrypoint(self): self.base_dir = 'tests/fixtures/entrypoint-composefile' self.dispatch(['run', 'test']) container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0] assert container.get('Config.Entrypoint') == ['printf'] assert container.get('Config.Cmd') == ['default', 'args'] def test_run_service_with_compose_file_entrypoint_overridden(self): self.base_dir = 'tests/fixtures/entrypoint-composefile' self.dispatch(['run', '--entrypoint', 'echo', 'test']) container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0] assert container.get('Config.Entrypoint') == ['echo'] assert not container.get('Config.Cmd') def test_run_service_with_compose_file_entrypoint_and_command_overridden(self): self.base_dir = 'tests/fixtures/entrypoint-composefile' self.dispatch(['run', '--entrypoint', 'echo', 'test', 'foo']) container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0] assert container.get('Config.Entrypoint') == ['echo'] assert container.get('Config.Cmd') == ['foo'] def test_run_service_with_compose_file_entrypoint_and_empty_string_command(self): self.base_dir = 'tests/fixtures/entrypoint-composefile' self.dispatch(['run', '--entrypoint', 'echo', 'test', '']) container = self.project.containers(stopped=True, one_off=OneOffFilter.only)[0] assert container.get('Config.Entrypoint') == ['echo'] assert container.get('Config.Cmd') == [''] def test_run_service_with_user_overridden(self): self.base_dir = 'tests/fixtures/user-composefile' name = 'service' user = 'sshd' self.dispatch(['run', '--user={user}'.format(user=user), name], returncode=1) service = 
self.project.get_service(name) container = service.containers(stopped=True, one_off=OneOffFilter.only)[0] self.assertEqual(user, container.get('Config.User')) def test_run_service_with_user_overridden_short_form(self): self.base_dir = 'tests/fixtures/user-composefile' name = 'service' user = 'sshd' self.dispatch(['run', '-u', user, name], returncode=1) service = self.project.get_service(name) container = service.containers(stopped=True, one_off=OneOffFilter.only)[0] self.assertEqual(user, container.get('Config.User')) def test_run_service_with_environement_overridden(self): name = 'service' self.base_dir = 'tests/fixtures/environment-composefile' self.dispatch([ 'run', '-e', 'foo=notbar', '-e', 'allo=moto=bobo', '-e', 'alpha=beta', name, '/bin/true', ]) service = self.project.get_service(name) container = service.containers(stopped=True, one_off=OneOffFilter.only)[0] # env overriden self.assertEqual('notbar', container.environment['foo']) # keep environement from yaml self.assertEqual('world', container.environment['hello']) # added option from command line self.assertEqual('beta', container.environment['alpha']) # make sure a value with a = don't crash out self.assertEqual('moto=bobo', container.environment['allo']) def test_run_service_without_map_ports(self): # create one off container self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch(['run', '-d', 'simple']) container = self.project.get_service('simple').containers(one_off=OneOffFilter.only)[0] # get port information port_random = container.get_local_port(3000) port_assigned = container.get_local_port(3001) # close all one off containers we just created container.stop() # check the ports self.assertEqual(port_random, None) self.assertEqual(port_assigned, None) def test_run_service_with_map_ports(self): # create one off container self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch(['run', '-d', '--service-ports', 'simple']) container = self.project.get_service('simple').containers(one_off=OneOffFilter.only)[0] # get port information port_random = container.get_local_port(3000) port_assigned = container.get_local_port(3001) port_range = container.get_local_port(3002), container.get_local_port(3003) # close all one off containers we just created container.stop() # check the ports self.assertNotEqual(port_random, None) self.assertIn("0.0.0.0", port_random) self.assertEqual(port_assigned, "0.0.0.0:49152") self.assertEqual(port_range[0], "0.0.0.0:49153") self.assertEqual(port_range[1], "0.0.0.0:49154") def test_run_service_with_explicitly_maped_ports(self): # create one off container self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch(['run', '-d', '-p', '30000:3000', '--publish', '30001:3001', 'simple']) container = self.project.get_service('simple').containers(one_off=OneOffFilter.only)[0] # get port information port_short = container.get_local_port(3000) port_full = container.get_local_port(3001) # close all one off containers we just created container.stop() # check the ports self.assertEqual(port_short, "0.0.0.0:30000") self.assertEqual(port_full, "0.0.0.0:30001") def test_run_service_with_explicitly_maped_ip_ports(self): # create one off container self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch([ 'run', '-d', '-p', '127.0.0.1:30000:3000', '--publish', '127.0.0.1:30001:3001', 'simple' ]) container = self.project.get_service('simple').containers(one_off=OneOffFilter.only)[0] # get port information port_short = container.get_local_port(3000) port_full = 
container.get_local_port(3001) # close all one off containers we just created container.stop() # check the ports self.assertEqual(port_short, "127.0.0.1:30000") self.assertEqual(port_full, "127.0.0.1:30001") def test_run_with_expose_ports(self): # create one off container self.base_dir = 'tests/fixtures/expose-composefile' self.dispatch(['run', '-d', '--service-ports', 'simple']) container = self.project.get_service('simple').containers(one_off=OneOffFilter.only)[0] ports = container.ports self.assertEqual(len(ports), 9) # exposed ports are not mapped to host ports assert ports['3000/tcp'] is None assert ports['3001/tcp'] is None assert ports['3001/udp'] is None assert ports['3002/tcp'] is None assert ports['3003/tcp'] is None assert ports['3004/tcp'] is None assert ports['3005/tcp'] is None assert ports['3006/udp'] is None assert ports['3007/udp'] is None # close all one off containers we just created container.stop() def test_run_with_custom_name(self): self.base_dir = 'tests/fixtures/environment-composefile' name = 'the-container-name' self.dispatch(['run', '--name', name, 'service', '/bin/true']) service = self.project.get_service('service') container, = service.containers(stopped=True, one_off=OneOffFilter.only) self.assertEqual(container.name, name) def test_run_service_with_workdir_overridden(self): self.base_dir = 'tests/fixtures/run-workdir' name = 'service' workdir = '/var' self.dispatch(['run', '--workdir={workdir}'.format(workdir=workdir), name]) service = self.project.get_service(name) container = service.containers(stopped=True, one_off=True)[0] self.assertEqual(workdir, container.get('Config.WorkingDir')) def test_run_service_with_workdir_overridden_short_form(self): self.base_dir = 'tests/fixtures/run-workdir' name = 'service' workdir = '/var' self.dispatch(['run', '-w', workdir, name]) service = self.project.get_service(name) container = service.containers(stopped=True, one_off=True)[0] self.assertEqual(workdir, container.get('Config.WorkingDir')) @v2_only() def test_run_interactive_connects_to_network(self): self.base_dir = 'tests/fixtures/networks' self.dispatch(['up', '-d']) self.dispatch(['run', 'app', 'nslookup', 'app']) self.dispatch(['run', 'app', 'nslookup', 'db']) containers = self.project.get_service('app').containers( stopped=True, one_off=OneOffFilter.only) assert len(containers) == 2 for container in containers: networks = container.get('NetworkSettings.Networks') assert sorted(list(networks)) == [ '{}_{}'.format(self.project.name, name) for name in ['back', 'front'] ] for _, config in networks.items(): # TODO: once we drop support for API <1.24, this can be changed to: # assert config['Aliases'] == [container.short_id] aliases = set(config['Aliases'] or []) - set([container.short_id]) assert not aliases @v2_only() def test_run_detached_connects_to_network(self): self.base_dir = 'tests/fixtures/networks' self.dispatch(['up', '-d']) self.dispatch(['run', '-d', 'app', 'top']) container = self.project.get_service('app').containers(one_off=OneOffFilter.only)[0] networks = container.get('NetworkSettings.Networks') assert sorted(list(networks)) == [ '{}_{}'.format(self.project.name, name) for name in ['back', 'front'] ] for _, config in networks.items(): # TODO: once we drop support for API <1.24, this can be changed to: # assert config['Aliases'] == [container.short_id] aliases = set(config['Aliases'] or []) - set([container.short_id]) assert not aliases assert self.lookup(container, 'app') assert self.lookup(container, 'db') def test_run_handles_sigint(self): proc 
= start_process(self.base_dir, ['run', '-T', 'simple', 'top']) wait_on_condition(ContainerStateCondition( self.project.client, 'simplecomposefile_simple_run_1', 'running')) os.kill(proc.pid, signal.SIGINT) wait_on_condition(ContainerStateCondition( self.project.client, 'simplecomposefile_simple_run_1', 'exited')) def test_run_handles_sigterm(self): proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top']) wait_on_condition(ContainerStateCondition( self.project.client, 'simplecomposefile_simple_run_1', 'running')) os.kill(proc.pid, signal.SIGTERM) wait_on_condition(ContainerStateCondition( self.project.client, 'simplecomposefile_simple_run_1', 'exited')) @mock.patch.dict(os.environ) def test_run_env_values_from_system(self): os.environ['FOO'] = 'bar' os.environ['BAR'] = 'baz' self.dispatch(['run', '-e', 'FOO', 'simple', 'true'], None) container = self.project.containers(one_off=OneOffFilter.only, stopped=True)[0] environment = container.get('Config.Env') assert 'FOO=bar' in environment assert 'BAR=baz' not in environment def test_rm(self): service = self.project.get_service('simple') service.create_container() kill_service(service) self.assertEqual(len(service.containers(stopped=True)), 1) self.dispatch(['rm', '--force'], None) self.assertEqual(len(service.containers(stopped=True)), 0) service = self.project.get_service('simple') service.create_container() kill_service(service) self.assertEqual(len(service.containers(stopped=True)), 1) self.dispatch(['rm', '-f'], None) self.assertEqual(len(service.containers(stopped=True)), 0) def test_rm_all(self): service = self.project.get_service('simple') service.create_container(one_off=False) service.create_container(one_off=True) kill_service(service) self.assertEqual(len(service.containers(stopped=True)), 1) self.assertEqual(len(service.containers(stopped=True, one_off=OneOffFilter.only)), 1) self.dispatch(['rm', '-f'], None) self.assertEqual(len(service.containers(stopped=True)), 0) self.assertEqual(len(service.containers(stopped=True, one_off=OneOffFilter.only)), 0) service.create_container(one_off=False) service.create_container(one_off=True) kill_service(service) self.assertEqual(len(service.containers(stopped=True)), 1) self.assertEqual(len(service.containers(stopped=True, one_off=OneOffFilter.only)), 1) self.dispatch(['rm', '-f', '--all'], None) self.assertEqual(len(service.containers(stopped=True)), 0) self.assertEqual(len(service.containers(stopped=True, one_off=OneOffFilter.only)), 0) def test_stop(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) self.dispatch(['stop', '-t', '1'], None) self.assertEqual(len(service.containers(stopped=True)), 1) self.assertFalse(service.containers(stopped=True)[0].is_running) def test_stop_signal(self): self.base_dir = 'tests/fixtures/stop-signal-composefile' self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) self.dispatch(['stop', '-t', '1'], None) self.assertEqual(len(service.containers(stopped=True)), 1) self.assertFalse(service.containers(stopped=True)[0].is_running) self.assertEqual(service.containers(stopped=True)[0].exit_code, 0) def test_start_no_containers(self): result = self.dispatch(['start'], returncode=1) assert 'No containers to start' in result.stderr @v2_only() def test_up_logging(self): self.base_dir = 
'tests/fixtures/logging-composefile' self.dispatch(['up', '-d']) simple = self.project.get_service('simple').containers()[0] log_config = simple.get('HostConfig.LogConfig') self.assertTrue(log_config) self.assertEqual(log_config.get('Type'), 'none') another = self.project.get_service('another').containers()[0] log_config = another.get('HostConfig.LogConfig') self.assertTrue(log_config) self.assertEqual(log_config.get('Type'), 'json-file') self.assertEqual(log_config.get('Config')['max-size'], '10m') def test_up_logging_legacy(self): self.base_dir = 'tests/fixtures/logging-composefile-legacy' self.dispatch(['up', '-d']) simple = self.project.get_service('simple').containers()[0] log_config = simple.get('HostConfig.LogConfig') self.assertTrue(log_config) self.assertEqual(log_config.get('Type'), 'none') another = self.project.get_service('another').containers()[0] log_config = another.get('HostConfig.LogConfig') self.assertTrue(log_config) self.assertEqual(log_config.get('Type'), 'json-file') self.assertEqual(log_config.get('Config')['max-size'], '10m') def test_pause_unpause(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertFalse(service.containers()[0].is_paused) self.dispatch(['pause'], None) self.assertTrue(service.containers()[0].is_paused) self.dispatch(['unpause'], None) self.assertFalse(service.containers()[0].is_paused) def test_pause_no_containers(self): result = self.dispatch(['pause'], returncode=1) assert 'No containers to pause' in result.stderr def test_unpause_no_containers(self): result = self.dispatch(['unpause'], returncode=1) assert 'No containers to unpause' in result.stderr def test_logs_invalid_service_name(self): self.dispatch(['logs', 'madeupname'], returncode=1) def test_logs_follow(self): self.base_dir = 'tests/fixtures/echo-services' self.dispatch(['up', '-d']) result = self.dispatch(['logs', '-f']) assert result.stdout.count('\n') == 5 assert 'simple' in result.stdout assert 'another' in result.stdout assert 'exited with code 0' in result.stdout def test_logs_follow_logs_from_new_containers(self): self.base_dir = 'tests/fixtures/logs-composefile' self.dispatch(['up', '-d', 'simple']) proc = start_process(self.base_dir, ['logs', '-f']) self.dispatch(['up', '-d', 'another']) wait_on_condition(ContainerStateCondition( self.project.client, 'logscomposefile_another_1', 'exited')) self.dispatch(['kill', 'simple']) result = wait_on_process(proc) assert 'hello' in result.stdout assert 'test' in result.stdout assert 'logscomposefile_another_1 exited with code 0' in result.stdout assert 'logscomposefile_simple_1 exited with code 137' in result.stdout def test_logs_default(self): self.base_dir = 'tests/fixtures/logs-composefile' self.dispatch(['up', '-d']) result = self.dispatch(['logs']) assert 'hello' in result.stdout assert 'test' in result.stdout assert 'exited with' not in result.stdout def test_logs_on_stopped_containers_exits(self): self.base_dir = 'tests/fixtures/echo-services' self.dispatch(['up']) result = self.dispatch(['logs']) assert 'simple' in result.stdout assert 'another' in result.stdout assert 'exited with' not in result.stdout def test_logs_timestamps(self): self.base_dir = 'tests/fixtures/echo-services' self.dispatch(['up', '-d']) result = self.dispatch(['logs', '-f', '-t']) self.assertRegexpMatches(result.stdout, '(\d{4})-(\d{2})-(\d{2})T(\d{2})\:(\d{2})\:(\d{2})') def test_logs_tail(self): self.base_dir = 'tests/fixtures/logs-tail-composefile' self.dispatch(['up']) result = self.dispatch(['logs', '--tail', 
'2']) assert result.stdout.count('\n') == 3 def test_kill(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) self.dispatch(['kill'], None) self.assertEqual(len(service.containers(stopped=True)), 1) self.assertFalse(service.containers(stopped=True)[0].is_running) def test_kill_signal_sigstop(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) self.dispatch(['kill', '-s', 'SIGSTOP'], None) self.assertEqual(len(service.containers()), 1) # The container is still running. It has only been paused self.assertTrue(service.containers()[0].is_running) def test_kill_stopped_service(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.dispatch(['kill', '-s', 'SIGSTOP'], None) self.assertTrue(service.containers()[0].is_running) self.dispatch(['kill', '-s', 'SIGKILL'], None) self.assertEqual(len(service.containers(stopped=True)), 1) self.assertFalse(service.containers(stopped=True)[0].is_running) def test_restart(self): service = self.project.get_service('simple') container = service.create_container() service.start_container(container) started_at = container.dictionary['State']['StartedAt'] self.dispatch(['restart', '-t', '1'], None) container.inspect() self.assertNotEqual( container.dictionary['State']['FinishedAt'], '0001-01-01T00:00:00Z', ) self.assertNotEqual( container.dictionary['State']['StartedAt'], started_at, ) def test_restart_stopped_container(self): service = self.project.get_service('simple') container = service.create_container() container.start() container.kill() self.assertEqual(len(service.containers(stopped=True)), 1) self.dispatch(['restart', '-t', '1'], None) self.assertEqual(len(service.containers(stopped=False)), 1) def test_restart_no_containers(self): result = self.dispatch(['restart'], returncode=1) assert 'No containers to restart' in result.stderr def test_scale(self): project = self.project self.dispatch(['scale', 'simple=1']) self.assertEqual(len(project.get_service('simple').containers()), 1) self.dispatch(['scale', 'simple=3', 'another=2']) self.assertEqual(len(project.get_service('simple').containers()), 3) self.assertEqual(len(project.get_service('another').containers()), 2) self.dispatch(['scale', 'simple=1', 'another=1']) self.assertEqual(len(project.get_service('simple').containers()), 1) self.assertEqual(len(project.get_service('another').containers()), 1) self.dispatch(['scale', 'simple=1', 'another=1']) self.assertEqual(len(project.get_service('simple').containers()), 1) self.assertEqual(len(project.get_service('another').containers()), 1) self.dispatch(['scale', 'simple=0', 'another=0']) self.assertEqual(len(project.get_service('simple').containers()), 0) self.assertEqual(len(project.get_service('another').containers()), 0) def test_port(self): self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch(['up', '-d'], None) container = self.project.get_service('simple').get_container() def get_port(number): result = self.dispatch(['port', 'simple', str(number)]) return result.stdout.rstrip() self.assertEqual(get_port(3000), container.get_local_port(3000)) self.assertEqual(get_port(3001), "0.0.0.0:49152") self.assertEqual(get_port(3002), "0.0.0.0:49153") def test_port_with_scale(self): self.base_dir = 'tests/fixtures/ports-composefile-scale' 
self.dispatch(['scale', 'simple=2'], None) containers = sorted( self.project.containers(service_names=['simple']), key=attrgetter('name')) def get_port(number, index=None): if index is None: result = self.dispatch(['port', 'simple', str(number)]) else: result = self.dispatch(['port', '--index=' + str(index), 'simple', str(number)]) return result.stdout.rstrip() self.assertEqual(get_port(3000), containers[0].get_local_port(3000)) self.assertEqual(get_port(3000, index=1), containers[0].get_local_port(3000)) self.assertEqual(get_port(3000, index=2), containers[1].get_local_port(3000)) self.assertEqual(get_port(3002), "") def test_events_json(self): events_proc = start_process(self.base_dir, ['events', '--json']) self.dispatch(['up', '-d']) wait_on_condition(ContainerCountCondition(self.project, 2)) os.kill(events_proc.pid, signal.SIGINT) result = wait_on_process(events_proc, returncode=1) lines = [json.loads(line) for line in result.stdout.rstrip().split('\n')] assert Counter(e['action'] for e in lines) == {'create': 2, 'start': 2} def test_events_human_readable(self): def has_timestamp(string): str_iso_date, str_iso_time, container_info = string.split(' ', 2) try: return isinstance(datetime.datetime.strptime( '%s %s' % (str_iso_date, str_iso_time), '%Y-%m-%d %H:%M:%S.%f'), datetime.datetime) except ValueError: return False events_proc = start_process(self.base_dir, ['events']) self.dispatch(['up', '-d', 'simple']) wait_on_condition(ContainerCountCondition(self.project, 1)) os.kill(events_proc.pid, signal.SIGINT) result = wait_on_process(events_proc, returncode=1) lines = result.stdout.rstrip().split('\n') assert len(lines) == 2 container, = self.project.containers() expected_template = ( ' container {} {} (image=busybox:latest, ' 'name=simplecomposefile_simple_1)') assert expected_template.format('create', container.id) in lines[0] assert expected_template.format('start', container.id) in lines[1] assert has_timestamp(lines[0]) def test_env_file_relative_to_compose_file(self): config_path = os.path.abspath('tests/fixtures/env-file/docker-compose.yml') self.dispatch(['-f', config_path, 'up', '-d'], None) self._project = get_project(self.base_dir, [config_path]) containers = self.project.containers(stopped=True) self.assertEqual(len(containers), 1) self.assertIn("FOO=1", containers[0].get('Config.Env')) @mock.patch.dict(os.environ) def test_home_and_env_var_in_volume_path(self): os.environ['VOLUME_NAME'] = 'my-volume' os.environ['HOME'] = '/tmp/home-dir' self.base_dir = 'tests/fixtures/volume-path-interpolation' self.dispatch(['up', '-d'], None) container = self.project.containers(stopped=True)[0] actual_host_path = container.get_mount('/container-path')['Source'] components = actual_host_path.split('/') assert components[-2:] == ['home-dir', 'my-volume'] def test_up_with_default_override_file(self): self.base_dir = 'tests/fixtures/override-files' self.dispatch(['up', '-d'], None) containers = self.project.containers() self.assertEqual(len(containers), 2) web, db = containers self.assertEqual(web.human_readable_command, 'top') self.assertEqual(db.human_readable_command, 'top') def test_up_with_multiple_files(self): self.base_dir = 'tests/fixtures/override-files' config_paths = [ 'docker-compose.yml', 'docker-compose.override.yml', 'extra.yml', ] self._project = get_project(self.base_dir, config_paths) self.dispatch( [ '-f', config_paths[0], '-f', config_paths[1], '-f', config_paths[2], 'up', '-d', ], None) containers = self.project.containers() self.assertEqual(len(containers), 3) web, other, 
db = containers self.assertEqual(web.human_readable_command, 'top') self.assertTrue({'db', 'other'} <= set(get_links(web))) self.assertEqual(db.human_readable_command, 'top') self.assertEqual(other.human_readable_command, 'top') def test_up_with_extends(self): self.base_dir = 'tests/fixtures/extends' self.dispatch(['up', '-d'], None) self.assertEqual( set([s.name for s in self.project.services]), set(['mydb', 'myweb']), ) # Sort by name so we get [db, web] containers = sorted( self.project.containers(stopped=True), key=lambda c: c.name, ) self.assertEqual(len(containers), 2) web = containers[1] self.assertEqual( set(get_links(web)), set(['db', 'mydb_1', 'extends_mydb_1'])) expected_env = set([ "FOO=1", "BAR=2", "BAZ=2", ]) self.assertTrue(expected_env <= set(web.get('Config.Env'))) compose-1.8.0/tests/fixtures/000077500000000000000000000000001274620702700162075ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/UpperCaseDir/000077500000000000000000000000001274620702700205355ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/UpperCaseDir/docker-compose.yml000066400000000000000000000001371274620702700241730ustar00rootroot00000000000000simple: image: busybox:latest command: top another: image: busybox:latest command: top compose-1.8.0/tests/fixtures/build-ctx/000077500000000000000000000000001274620702700201025ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/build-ctx/Dockerfile000066400000000000000000000001201274620702700220650ustar00rootroot00000000000000FROM busybox:latest LABEL com.docker.compose.test_image=true CMD echo "success" compose-1.8.0/tests/fixtures/build-path/000077500000000000000000000000001274620702700202405ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/build-path/docker-compose.yml000066400000000000000000000000341274620702700236720ustar00rootroot00000000000000foo: build: ../build-ctx/ compose-1.8.0/tests/fixtures/bundle-with-digests/000077500000000000000000000000001274620702700220715ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/bundle-with-digests/docker-compose.yml000066400000000000000000000003601274620702700255250ustar00rootroot00000000000000 version: '2.0' services: web: image: dockercloud/hello-world@sha256:fe79a2cfbd17eefc344fb8419420808df95a1e22d93b7f621a7399fd1e9dca1d redis: image: redis@sha256:a84cb8f53a70e19f61ff2e1d5e73fb7ae62d374b2b7392de1e7d77be26ef8f7b compose-1.8.0/tests/fixtures/commands-composefile/000077500000000000000000000000001274620702700223135ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/commands-composefile/docker-compose.yml000066400000000000000000000001431274620702700257460ustar00rootroot00000000000000implicit: image: composetest_test explicit: image: composetest_test command: [ "/bin/true" ] compose-1.8.0/tests/fixtures/default-env-file/000077500000000000000000000000001274620702700213365ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/default-env-file/.env000066400000000000000000000000661274620702700221310ustar00rootroot00000000000000IMAGE=alpine:latest COMMAND=true PORT1=5643 PORT2=9999compose-1.8.0/tests/fixtures/default-env-file/docker-compose.yml000066400000000000000000000001361274620702700247730ustar00rootroot00000000000000web: image: ${IMAGE} command: ${COMMAND} ports: - $PORT1 - $PORT2 compose-1.8.0/tests/fixtures/dockerfile-with-volume/000077500000000000000000000000001274620702700225745ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/dockerfile-with-volume/Dockerfile000066400000000000000000000001221274620702700245610ustar00rootroot00000000000000FROM busybox:latest 
LABEL com.docker.compose.test_image=true VOLUME /data CMD top compose-1.8.0/tests/fixtures/echo-services/000077500000000000000000000000001274620702700207465ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/echo-services/docker-compose.yml000066400000000000000000000001601274620702700244000ustar00rootroot00000000000000simple: image: busybox:latest command: echo simple another: image: busybox:latest command: echo another compose-1.8.0/tests/fixtures/entrypoint-composefile/000077500000000000000000000000001274620702700227255ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/entrypoint-composefile/docker-compose.yml000066400000000000000000000001431274620702700263600ustar00rootroot00000000000000version: "2" services: test: image: busybox entrypoint: printf command: default args compose-1.8.0/tests/fixtures/entrypoint-dockerfile/000077500000000000000000000000001274620702700225275ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/entrypoint-dockerfile/Dockerfile000066400000000000000000000001531274620702700245200ustar00rootroot00000000000000FROM busybox:latest LABEL com.docker.compose.test_image=true ENTRYPOINT ["printf"] CMD ["default", "args"] compose-1.8.0/tests/fixtures/entrypoint-dockerfile/docker-compose.yml000066400000000000000000000000541274620702700261630ustar00rootroot00000000000000version: "2" services: test: build: . compose-1.8.0/tests/fixtures/env-file/000077500000000000000000000000001274620702700177145ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/env-file/docker-compose.yml000066400000000000000000000001021274620702700233420ustar00rootroot00000000000000web: image: busybox command: /bin/true env_file: ./test.env compose-1.8.0/tests/fixtures/env-file/test.env000066400000000000000000000000051274620702700214000ustar00rootroot00000000000000FOO=1compose-1.8.0/tests/fixtures/env/000077500000000000000000000000001274620702700167775ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/env/one.env000066400000000000000000000001731274620702700202730ustar00rootroot00000000000000# Keep the blank lines and comments in this file, please ONE=2 TWO=1 # (thanks) THREE=3 FOO=bar # FOO=somethingelse compose-1.8.0/tests/fixtures/env/resolve.env000066400000000000000000000000551274620702700211700ustar00rootroot00000000000000FILE_DEF=bär FILE_DEF_EMPTY= ENV_DEF NO_DEF compose-1.8.0/tests/fixtures/env/two.env000066400000000000000000000000201274620702700203120ustar00rootroot00000000000000FOO=baz DOO=dah compose-1.8.0/tests/fixtures/environment-composefile/000077500000000000000000000000001274620702700230565ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/environment-composefile/docker-compose.yml000066400000000000000000000001361274620702700265130ustar00rootroot00000000000000service: image: busybox:latest command: top environment: foo: bar hello: world compose-1.8.0/tests/fixtures/environment-interpolation/000077500000000000000000000000001274620702700234405ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/environment-interpolation/docker-compose.yml000066400000000000000000000004121274620702700270720ustar00rootroot00000000000000web: # unbracketed name image: $IMAGE # array element ports: - "${HOST_PORT}:8000" # dictionary item value labels: mylabel: "${LABEL_VALUE}" # unset value hostname: "host-${UNSET_VALUE}" # escaped interpolation command: "$${ESCAPED}" 
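The environment-interpolation fixture above exercises all of the substitution forms Compose supports: $VAR, ${VAR}, and the $$ escape. As a rough illustration of those rules, here is a minimal sketch using only Python's standard-library string.Template; it is not Compose's actual interpolation code (which lives in compose/config/interpolation.py and additionally warns on unset variables before substituting a blank), but it reproduces the behaviour the fixture relies on:

    # Simplified sketch of ${VAR} interpolation with $$ escaping, standard
    # library only. Real Compose differs in error handling and warnings.
    from string import Template


    def interpolate(value, environment):
        # Template understands $VAR, ${VAR} and the $$ escape.
        return Template(value).substitute(environment)


    env = {'IMAGE': 'busybox', 'HOST_PORT': '8000',
           'LABEL_VALUE': 'mylabel', 'UNSET_VALUE': ''}

    print(interpolate('$IMAGE', env))             # busybox
    print(interpolate('${HOST_PORT}:8000', env))  # 8000:8000
    print(interpolate('host-${UNSET_VALUE}', env))  # host-
    print(interpolate('$${ESCAPED}', {}))         # ${ESCAPED} -- $$ escapes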
compose-1.8.0/tests/fixtures/expose-composefile/docker-compose.yml:

simple:
  image: busybox:latest
  command: top
  expose:
    - '3000'
    - '3001/tcp'
    - '3001/udp'
    - '3002-3003'
    - '3004-3005/tcp'
    - '3006-3007/udp'

compose-1.8.0/tests/fixtures/extends/circle-1.yml:

foo:
  image: busybox
bar:
  image: busybox
web:
  extends:
    file: circle-2.yml
    service: other
baz:
  image: busybox
quux:
  image: busybox

compose-1.8.0/tests/fixtures/extends/circle-2.yml:

foo:
  image: busybox
bar:
  image: busybox
other:
  extends:
    file: circle-1.yml
    service: web
baz:
  image: busybox
quux:
  image: busybox

compose-1.8.0/tests/fixtures/extends/common-env-labels-ulimits.yml:

web:
  extends:
    file: common.yml
    service: web
  environment:
    - FOO=2
    - BAZ=3
  labels: ['label=one']
  ulimits:
    nproc: 65535
    memlock:
      soft: 1024
      hard: 2048

compose-1.8.0/tests/fixtures/extends/common.yml:

web:
  image: busybox
  command: /bin/true
  net: host
  environment:
    - FOO=1
    - BAR=1

compose-1.8.0/tests/fixtures/extends/docker-compose.yml:

myweb:
  extends:
    file: common.yml
    service: web
  command: top
  links:
    - "mydb:db"
  environment:
    # leave FOO alone
    # override BAR
    BAR: "2"
    # add BAZ
    BAZ: "2"
  net: bridge
mydb:
  image: busybox
  command: top

compose-1.8.0/tests/fixtures/extends/invalid-links.yml:

mydb:
  build: '.'
myweb:
  build: '.'
  extends:
    service: web
  command: top
web:
  build: '.'
  links:
    - "mydb:db"

compose-1.8.0/tests/fixtures/extends/invalid-net-v2.yml:

version: "2"
services:
  myweb:
    build: '.'
    extends:
      service: web
    command: top
  web:
    build: '.'
    network_mode: "service:net"
  net:
    build: '.'

compose-1.8.0/tests/fixtures/extends/invalid-net.yml:

myweb:
  build: '.'
  extends:
    service: web
  command: top
web:
  build: '.'
  net: "container:db"

compose-1.8.0/tests/fixtures/extends/invalid-volumes.yml:

myweb:
  build: '.'
  extends:
    service: web
  command: top
web:
  build: '.'
  volumes_from:
    - "db"

compose-1.8.0/tests/fixtures/extends/nested-intermediate.yml:

webintermediate:
  extends:
    file: common.yml
    service: web
  environment:
    - "FOO=2"

compose-1.8.0/tests/fixtures/extends/nested.yml:

myweb:
  extends:
    file: nested-intermediate.yml
    service: webintermediate
  environment:
    - "BAR=2"

compose-1.8.0/tests/fixtures/extends/no-file-specified.yml:

myweb:
  extends:
    service: web
  environment:
    - "BAR=1"
web:
  image: busybox
  environment:
    - "BAZ=3"

compose-1.8.0/tests/fixtures/extends/nonexistent-path-base.yml:

dnebase:
  build: nonexistent.path
  command: /bin/true
  environment:
    - FOO=1
    - BAR=1

compose-1.8.0/tests/fixtures/extends/nonexistent-path-child.yml:

dnechild:
  extends:
    file: nonexistent-path-base.yml
    service: dnebase
  image: busybox
  command: /bin/true
  environment:
    - BAR=2

compose-1.8.0/tests/fixtures/extends/nonexistent-service.yml:

web:
  image: busybox
  extends:
    service: foo

compose-1.8.0/tests/fixtures/extends/service-with-invalid-schema.yml:

myweb:
  extends:
    file: valid-composite-extends.yml
    service: web

compose-1.8.0/tests/fixtures/extends/service-with-valid-composite-extends.yml:

myweb:
  build: '.'
  extends:
    file: 'valid-composite-extends.yml'
    service: web

compose-1.8.0/tests/fixtures/extends/specify-file-as-self.yml:

myweb:
  extends:
    file: specify-file-as-self.yml
    service: web
  environment:
    - "BAR=1"
web:
  extends:
    file: specify-file-as-self.yml
    service: otherweb
  image: busybox
  environment:
    - "BAZ=3"
otherweb:
  image: busybox
  environment:
    - "YEP=1"

compose-1.8.0/tests/fixtures/extends/valid-common-config.yml:

myweb:
  build: '.'
  extends:
    file: valid-common.yml
    service: common-config
  command: top

compose-1.8.0/tests/fixtures/extends/valid-common.yml:

common-config:
  environment:
    - FOO=1

compose-1.8.0/tests/fixtures/extends/valid-composite-extends.yml:

web:
  command: top

compose-1.8.0/tests/fixtures/extends/valid-interpolation-2.yml:

web:
  build: '.'
  hostname: "host-${HOSTNAME_VALUE}"

compose-1.8.0/tests/fixtures/extends/valid-interpolation.yml:

myweb:
  extends:
    service: web
    file: valid-interpolation-2.yml
  command: top

compose-1.8.0/tests/fixtures/extends/verbose-and-shorthand.yml:

base:
  image: busybox
  environment:
    - "BAR=1"

verbose:
  extends:
    service: base
  environment:
    - "FOO=1"

shorthand:
  extends: base
  environment:
    - "FOO=2"

compose-1.8.0/tests/fixtures/invalid-composefile/invalid.yml:

notaservice: oops

web:
  image: 'alpine:edge'

compose-1.8.0/tests/fixtures/links-composefile/docker-compose.yml:

db:
  image: busybox:latest
  command: top
web:
  image: busybox:latest
  command: top
  links:
    - db:db
console:
  image: busybox:latest
  command: top

compose-1.8.0/tests/fixtures/logging-composefile-legacy/docker-compose.yml:

simple:
  image: busybox:latest
  command: top
  log_driver: "none"
another:
  image: busybox:latest
  command: top
  log_driver: "json-file"
  log_opt:
    max-size: "10m"

compose-1.8.0/tests/fixtures/logging-composefile/docker-compose.yml:

version: "2"
services:
  simple:
    image: busybox:latest
    command: top
    logging:
      driver: "none"
  another:
    image: busybox:latest
    command: top
    logging:
      driver: "json-file"
      options:
        max-size: "10m"

compose-1.8.0/tests/fixtures/logs-composefile/docker-compose.yml:

simple:
  image: busybox:latest
  command: sh -c "echo hello && tail -f /dev/null"
another:
  image: busybox:latest
  command: sh -c "echo test"

compose-1.8.0/tests/fixtures/logs-tail-composefile/docker-compose.yml:

simple:
  image: busybox:latest
  command: sh -c "echo a && echo b && echo c && echo d"

compose-1.8.0/tests/fixtures/longer-filename-composefile/docker-compose.yaml:

definedinyamlnotyml:
  image: busybox:latest
  command: top
compose-1.8.0/tests/fixtures/multiple-composefiles/compose2.yml:

yetanother:
  image: busybox:latest
  command: top

compose-1.8.0/tests/fixtures/multiple-composefiles/docker-compose.yml:

simple:
  image: busybox:latest
  command: top
another:
  image: busybox:latest
  command: top

compose-1.8.0/tests/fixtures/net-container/docker-compose.yml:

foo:
  image: busybox
  command: top
  net: "container:bar"
bar:
  image: busybox
  command: top

compose-1.8.0/tests/fixtures/net-container/v2-invalid.yml:

version: "2"
services:
  foo:
    image: busybox
    command: top
  bar:
    image: busybox
    command: top
    net: "container:foo"

compose-1.8.0/tests/fixtures/networks/bridge.yml:

version: "2"
services:
  web:
    image: busybox
    command: top
    networks:
      - bridge
      - default

compose-1.8.0/tests/fixtures/networks/default-network-config.yml:

version: "2"
services:
  simple:
    image: busybox:latest
    command: top
  another:
    image: busybox:latest
    command: top
networks:
  default:
    driver: bridge
    driver_opts:
      "com.docker.network.bridge.enable_icc": "false"

compose-1.8.0/tests/fixtures/networks/docker-compose.yml:

version: "2"

services:
  web:
    image: busybox
    command: top
    networks: ["front"]
  app:
    image: busybox
    command: top
    networks: ["front", "back"]
    links:
      - "db:database"
  db:
    image: busybox
    command: top
    networks: ["back"]

networks:
  front: {}
  back: {}

compose-1.8.0/tests/fixtures/networks/external-default.yml:

version: "2"
services:
  simple:
    image: busybox:latest
    command: top
  another:
    image: busybox:latest
    command: top
networks:
  default:
    external:
      name: composetest_external_network

compose-1.8.0/tests/fixtures/networks/external-networks.yml:

version: "2"

services:
  web:
    image: busybox
    command: top
    networks:
      - networks_foo
      - bar

networks:
  networks_foo:
    external: true
  bar:
    external:
      name: networks_bar

compose-1.8.0/tests/fixtures/networks/missing-network.yml:

version: "2"

services:
  web:
    image: busybox
    command: top
    networks: ["foo"]

networks:
  bar: {}

compose-1.8.0/tests/fixtures/networks/network-aliases.yml:

version: "2"

services:
  web:
    image: busybox
    command: top
    networks:
      front:
        aliases:
          - forward_facing
          - ahead
      back:

networks:
  front: {}
  back: {}

compose-1.8.0/tests/fixtures/networks/network-mode.yml:

version: "2"

services:
  bridge:
    image: busybox
    command: top
    network_mode: bridge

  service:
    image: busybox
    command: top
    network_mode: "service:bridge"

  container:
    image: busybox
    command: top
    network_mode: "container:composetest_network_mode_container"

  host:
    image: busybox
    command: top
    network_mode: host

  none:
    image: busybox
    command: top
    network_mode: none

compose-1.8.0/tests/fixtures/networks/network-static-addresses.yml:

version: "2"

services:
  web:
    image: busybox
    command: top
    networks:
      static_test:
        ipv4_address: 172.16.100.100
        ipv6_address: fe80::1001:100

networks:
  static_test:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "true"
    ipam:
      driver: default
      config:
        - subnet: 172.16.100.0/24
          gateway: 172.16.100.1
        - subnet: fe80::/64
          gateway: fe80::1001:1

compose-1.8.0/tests/fixtures/no-composefile/.gitignore:

(empty file)

compose-1.8.0/tests/fixtures/no-links-composefile/docker-compose.yml:

db:
  image: busybox:latest
  command: top
web:
  image: busybox:latest
  command: top
console:
  image: busybox:latest
  command: top

compose-1.8.0/tests/fixtures/no-services/docker-compose.yml:

version: "2"

networks:
  foo: {}
  bar: {}

compose-1.8.0/tests/fixtures/override-files/docker-compose.override.yml:

web:
  command: "top"

db:
  command: "top"

compose-1.8.0/tests/fixtures/override-files/docker-compose.yml:

web:
  image: busybox:latest
  command: "sleep 200"
  links:
    - db

db:
  image: busybox:latest
  command: "sleep 200"

compose-1.8.0/tests/fixtures/override-files/extra.yml:

web:
  links:
    - db
    - other

other:
  image: busybox:latest
  command: "top"

compose-1.8.0/tests/fixtures/ports-composefile-scale/docker-compose.yml:

simple:
  image: busybox:latest
  command: /bin/sleep 300
  ports:
    - '3000'

compose-1.8.0/tests/fixtures/ports-composefile/docker-compose.yml:

simple:
  image: busybox:latest
  command: top
  ports:
    - '3000'
    - '49152:3001'
    - '49153-49154:3002-3003'

compose-1.8.0/tests/fixtures/restart/docker-compose.yml:

version: "2"
services:
  never:
    image: busybox
    restart: "no"
  always:
    image: busybox
    restart: always
  on-failure:
    image: busybox
    restart: on-failure
  on-failure-5:
    image: busybox
    restart: "on-failure:5"

compose-1.8.0/tests/fixtures/run-workdir/docker-compose.yml:

service:
  image: busybox:latest
  working_dir: /etc
  command: /bin/true

compose-1.8.0/tests/fixtures/simple-composefile/digest.yml:

simple:
  image: busybox:latest
  command: top
digest:
  image: busybox@sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d
  command: top

compose-1.8.0/tests/fixtures/simple-composefile/docker-compose.yml:

simple:
  image: busybox:latest
  command: top
another:
  image: busybox:latest
  command: top

compose-1.8.0/tests/fixtures/simple-composefile/ignore-pull-failures.yml:

simple:
  image: busybox:latest
  command: top
another:
  image: nonexisting-image:latest
  command: top

compose-1.8.0/tests/fixtures/simple-dockerfile/Dockerfile:

FROM busybox:latest
LABEL com.docker.compose.test_image=true
CMD echo "success"

compose-1.8.0/tests/fixtures/simple-dockerfile/docker-compose.yml:

simple:
  build: .

compose-1.8.0/tests/fixtures/simple-failing-dockerfile/Dockerfile:

FROM busybox:latest
LABEL com.docker.compose.test_image=true
LABEL com.docker.compose.test_failing_image=true
# With the following label the container will be cleaned up automatically
# Must be kept in sync with LABEL_PROJECT from compose/const.py
LABEL com.docker.compose.project=composetest
RUN exit 1

compose-1.8.0/tests/fixtures/simple-failing-dockerfile/docker-compose.yml:

simple:
  build: .
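The failing Dockerfile above stamps its build containers with the com.docker.compose.project label so leftovers can be swept up by label, as the comment in the Dockerfile notes. A short sketch of that cleanup pattern follows, assuming a docker-py 1.x style Client (the test suite's own cleanup lives in its base test case, not in this snippet):

    # Sketch: remove every container labelled as belonging to the composetest
    # project. The label value must stay in sync with LABEL_PROJECT in
    # compose/const.py.
    from docker import Client

    client = Client(base_url='unix://var/run/docker.sock')
    label = 'com.docker.compose.project=composetest'

    for container in client.containers(all=True, filters={'label': label}):
        client.remove_container(container['Id'], force=True)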
compose-1.8.0/tests/fixtures/sleeps-composefile/000077500000000000000000000000001274620702700220055ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/sleeps-composefile/docker-compose.yml000066400000000000000000000002201274620702700254340ustar00rootroot00000000000000 version: "2" services: simple: image: busybox:latest command: sleep 200 another: image: busybox:latest command: sleep 200 compose-1.8.0/tests/fixtures/stop-signal-composefile/000077500000000000000000000000001274620702700227525ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/stop-signal-composefile/docker-compose.yml000066400000000000000000000002671274620702700264140ustar00rootroot00000000000000simple: image: busybox:latest command: - sh - '-c' - | trap 'exit 0' SIGINT trap 'exit 1' SIGTERM while true; do :; done stop_signal: SIGINT compose-1.8.0/tests/fixtures/tls/000077500000000000000000000000001274620702700170115ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/tls/ca.pem000066400000000000000000000000001274620702700200650ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/tls/cert.pem000066400000000000000000000000001274620702700204370ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/tls/key.key000066400000000000000000000000001274620702700203010ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/user-composefile/000077500000000000000000000000001274620702700214705ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/user-composefile/docker-compose.yml000066400000000000000000000001001274620702700251140ustar00rootroot00000000000000service: image: busybox:latest user: notauser command: id compose-1.8.0/tests/fixtures/v1-config/000077500000000000000000000000001274620702700200005ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/v1-config/docker-compose.yml000066400000000000000000000002161274620702700234340ustar00rootroot00000000000000net: image: busybox volume: image: busybox volumes: - /data app: image: busybox net: "container:net" volumes_from: ["volume"] compose-1.8.0/tests/fixtures/v2-dependencies/000077500000000000000000000000001274620702700211625ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/v2-dependencies/docker-compose.yml000066400000000000000000000003431274620702700246170ustar00rootroot00000000000000version: "2.0" services: db: image: busybox:latest command: top web: image: busybox:latest command: top depends_on: - db console: image: busybox:latest command: top compose-1.8.0/tests/fixtures/v2-full/000077500000000000000000000000001274620702700174765ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/v2-full/Dockerfile000066400000000000000000000000621274620702700214660ustar00rootroot00000000000000 FROM busybox:latest RUN echo something CMD top compose-1.8.0/tests/fixtures/v2-full/docker-compose.yml000066400000000000000000000004041274620702700231310ustar00rootroot00000000000000 version: "2" volumes: data: driver: local networks: front: {} services: web: build: . 
networks: - front - default volumes_from: - other other: image: busybox:latest command: top volumes: - /data compose-1.8.0/tests/fixtures/v2-simple/000077500000000000000000000000001274620702700200255ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/v2-simple/docker-compose.yml000066400000000000000000000002021274620702700234540ustar00rootroot00000000000000version: "2" services: simple: image: busybox:latest command: top another: image: busybox:latest command: top compose-1.8.0/tests/fixtures/v2-simple/links-invalid.yml000066400000000000000000000002351274620702700233140ustar00rootroot00000000000000version: "2" services: simple: image: busybox:latest command: top links: - another another: image: busybox:latest command: top compose-1.8.0/tests/fixtures/volume-path-interpolation/000077500000000000000000000000001274620702700233355ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/volume-path-interpolation/docker-compose.yml000066400000000000000000000001321274620702700267660ustar00rootroot00000000000000test: image: busybox command: top volumes: - "~/${VOLUME_NAME}:/container-path" compose-1.8.0/tests/fixtures/volume-path/000077500000000000000000000000001274620702700204505ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/volume-path/common/000077500000000000000000000000001274620702700217405ustar00rootroot00000000000000compose-1.8.0/tests/fixtures/volume-path/common/services.yml000066400000000000000000000001021274620702700242770ustar00rootroot00000000000000db: image: busybox volumes: - ./foo:/foo - ./bar:/bar compose-1.8.0/tests/fixtures/volume-path/docker-compose.yml000066400000000000000000000001311274620702700241000ustar00rootroot00000000000000db: extends: file: common/services.yml service: db volumes: - ./bar:/bar compose-1.8.0/tests/helpers.py000066400000000000000000000007571274620702700163630ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from compose.config.config import ConfigDetails from compose.config.config import ConfigFile from compose.config.config import load def build_config(contents, **kwargs): return load(build_config_details(contents, **kwargs)) def build_config_details(contents, working_dir='working_dir', filename='filename.yml'): return ConfigDetails( working_dir, [ConfigFile(filename, contents)], ) compose-1.8.0/tests/integration/000077500000000000000000000000001274620702700166615ustar00rootroot00000000000000compose-1.8.0/tests/integration/__init__.py000066400000000000000000000000001274620702700207600ustar00rootroot00000000000000compose-1.8.0/tests/integration/project_test.py000066400000000000000000001162101274620702700217410ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import random import py import pytest from docker.errors import NotFound from .. 
import mock from ..helpers import build_config from .testcases import DockerClientTestCase from compose.config import config from compose.config import ConfigurationError from compose.config.config import V2_0 from compose.config.types import VolumeFromSpec from compose.config.types import VolumeSpec from compose.const import LABEL_PROJECT from compose.const import LABEL_SERVICE from compose.container import Container from compose.project import Project from compose.project import ProjectError from compose.service import ConvergenceStrategy from tests.integration.testcases import v2_only class ProjectTest(DockerClientTestCase): def test_containers(self): web = self.create_service('web') db = self.create_service('db') project = Project('composetest', [web, db], self.client) project.up() containers = project.containers() self.assertEqual(len(containers), 2) def test_containers_with_service_names(self): web = self.create_service('web') db = self.create_service('db') project = Project('composetest', [web, db], self.client) project.up() containers = project.containers(['web']) self.assertEqual( [c.name for c in containers], ['composetest_web_1']) def test_containers_with_extra_service(self): web = self.create_service('web') web_1 = web.create_container() db = self.create_service('db') db_1 = db.create_container() self.create_service('extra').create_container() project = Project('composetest', [web, db], self.client) self.assertEqual( set(project.containers(stopped=True)), set([web_1, db_1]), ) def test_volumes_from_service(self): project = Project.from_config( name='composetest', config_data=build_config({ 'data': { 'image': 'busybox:latest', 'volumes': ['/var/data'], }, 'db': { 'image': 'busybox:latest', 'volumes_from': ['data'], }, }), client=self.client, ) db = project.get_service('db') data = project.get_service('data') self.assertEqual(db.volumes_from, [VolumeFromSpec(data, 'rw', 'service')]) def test_volumes_from_container(self): data_container = Container.create( self.client, image='busybox:latest', volumes=['/var/data'], name='composetest_data_container', labels={LABEL_PROJECT: 'composetest'}, ) project = Project.from_config( name='composetest', config_data=build_config({ 'db': { 'image': 'busybox:latest', 'volumes_from': ['composetest_data_container'], }, }), client=self.client, ) db = project.get_service('db') self.assertEqual(db._get_volumes_from(), [data_container.id + ':rw']) @v2_only() def test_network_mode_from_service(self): project = Project.from_config( name='composetest', client=self.client, config_data=build_config({ 'version': V2_0, 'services': { 'net': { 'image': 'busybox:latest', 'command': ["top"] }, 'web': { 'image': 'busybox:latest', 'network_mode': 'service:net', 'command': ["top"] }, }, }), ) project.up() web = project.get_service('web') net = project.get_service('net') self.assertEqual(web.network_mode.mode, 'container:' + net.containers()[0].id) @v2_only() def test_network_mode_from_container(self): def get_project(): return Project.from_config( name='composetest', config_data=build_config({ 'version': V2_0, 'services': { 'web': { 'image': 'busybox:latest', 'network_mode': 'container:composetest_net_container' }, }, }), client=self.client, ) with pytest.raises(ConfigurationError) as excinfo: get_project() assert "container 'composetest_net_container' which does not exist" in excinfo.exconly() net_container = Container.create( self.client, image='busybox:latest', name='composetest_net_container', command='top', labels={LABEL_PROJECT: 'composetest'}, ) 
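# Now that the named container exists (and carries the project label for automatic cleanup), loading the project succeeds and 'web' can join its network namespace.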
net_container.start() project = get_project() project.up() web = project.get_service('web') self.assertEqual(web.network_mode.mode, 'container:' + net_container.id) def test_net_from_service_v1(self): project = Project.from_config( name='composetest', config_data=build_config({ 'net': { 'image': 'busybox:latest', 'command': ["top"] }, 'web': { 'image': 'busybox:latest', 'net': 'container:net', 'command': ["top"] }, }), client=self.client, ) project.up() web = project.get_service('web') net = project.get_service('net') self.assertEqual(web.network_mode.mode, 'container:' + net.containers()[0].id) def test_net_from_container_v1(self): def get_project(): return Project.from_config( name='composetest', config_data=build_config({ 'web': { 'image': 'busybox:latest', 'net': 'container:composetest_net_container' }, }), client=self.client, ) with pytest.raises(ConfigurationError) as excinfo: get_project() assert "container 'composetest_net_container' which does not exist" in excinfo.exconly() net_container = Container.create( self.client, image='busybox:latest', name='composetest_net_container', command='top', labels={LABEL_PROJECT: 'composetest'}, ) net_container.start() project = get_project() project.up() web = project.get_service('web') self.assertEqual(web.network_mode.mode, 'container:' + net_container.id) def test_start_pause_unpause_stop_kill_remove(self): web = self.create_service('web') db = self.create_service('db') project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(web.containers()), 0) self.assertEqual(len(db.containers()), 0) web_container_1 = web.create_container() web_container_2 = web.create_container() db_container = db.create_container() project.start(service_names=['web']) self.assertEqual( set(c.name for c in project.containers()), set([web_container_1.name, web_container_2.name])) project.start() self.assertEqual( set(c.name for c in project.containers()), set([web_container_1.name, web_container_2.name, db_container.name])) project.pause(service_names=['web']) self.assertEqual( set([c.name for c in project.containers() if c.is_paused]), set([web_container_1.name, web_container_2.name])) project.pause() self.assertEqual( set([c.name for c in project.containers() if c.is_paused]), set([web_container_1.name, web_container_2.name, db_container.name])) project.unpause(service_names=['db']) self.assertEqual(len([c.name for c in project.containers() if c.is_paused]), 2) project.unpause() self.assertEqual(len([c.name for c in project.containers() if c.is_paused]), 0) project.stop(service_names=['web'], timeout=1) self.assertEqual(set(c.name for c in project.containers()), set([db_container.name])) project.kill(service_names=['db']) self.assertEqual(len(project.containers()), 0) self.assertEqual(len(project.containers(stopped=True)), 3) project.remove_stopped(service_names=['web']) self.assertEqual(len(project.containers(stopped=True)), 1) project.remove_stopped() self.assertEqual(len(project.containers(stopped=True)), 0) def test_create(self): web = self.create_service('web') db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) project = Project('composetest', [web, db], self.client) project.create(['db']) self.assertEqual(len(project.containers()), 0) self.assertEqual(len(project.containers(stopped=True)), 1) self.assertEqual(len(db.containers()), 0) self.assertEqual(len(db.containers(stopped=True)), 1) self.assertEqual(len(web.containers(stopped=True)), 0) def test_create_twice(self): web = self.create_service('web') db = 
self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) project = Project('composetest', [web, db], self.client) project.create(['db', 'web']) project.create(['db', 'web']) self.assertEqual(len(project.containers()), 0) self.assertEqual(len(project.containers(stopped=True)), 2) self.assertEqual(len(db.containers()), 0) self.assertEqual(len(db.containers(stopped=True)), 1) self.assertEqual(len(web.containers()), 0) self.assertEqual(len(web.containers(stopped=True)), 1) def test_create_with_links(self): db = self.create_service('db') web = self.create_service('web', links=[(db, 'db')]) project = Project('composetest', [db, web], self.client) project.create(['web']) self.assertEqual(len(project.containers()), 0) self.assertEqual(len(project.containers(stopped=True)), 2) self.assertEqual(len(db.containers()), 0) self.assertEqual(len(db.containers(stopped=True)), 1) self.assertEqual(len(web.containers()), 0) self.assertEqual(len(web.containers(stopped=True)), 1) def test_create_strategy_always(self): db = self.create_service('db') project = Project('composetest', [db], self.client) project.create(['db']) old_id = project.containers(stopped=True)[0].id project.create(['db'], strategy=ConvergenceStrategy.always) self.assertEqual(len(project.containers()), 0) self.assertEqual(len(project.containers(stopped=True)), 1) db_container = project.containers(stopped=True)[0] self.assertNotEqual(db_container.id, old_id) def test_create_strategy_never(self): db = self.create_service('db') project = Project('composetest', [db], self.client) project.create(['db']) old_id = project.containers(stopped=True)[0].id project.create(['db'], strategy=ConvergenceStrategy.never) self.assertEqual(len(project.containers()), 0) self.assertEqual(len(project.containers(stopped=True)), 1) db_container = project.containers(stopped=True)[0] self.assertEqual(db_container.id, old_id) def test_project_up(self): web = self.create_service('web') db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db']) self.assertEqual(len(project.containers()), 1) self.assertEqual(len(db.containers()), 1) self.assertEqual(len(web.containers()), 0) def test_project_up_starts_uncreated_services(self): db = self.create_service('db') web = self.create_service('web', links=[(db, 'db')]) project = Project('composetest', [db, web], self.client) project.up(['db']) self.assertEqual(len(project.containers()), 1) project.up() self.assertEqual(len(project.containers()), 2) self.assertEqual(len(db.containers()), 1) self.assertEqual(len(web.containers()), 1) def test_recreate_preserves_volumes(self): web = self.create_service('web') db = self.create_service('db', volumes=[VolumeSpec.parse('/etc')]) project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db']) self.assertEqual(len(project.containers()), 1) old_db_id = project.containers()[0].id db_volume_path = project.containers()[0].get('Volumes./etc') project.up(strategy=ConvergenceStrategy.always) self.assertEqual(len(project.containers()), 2) db_container = [c for c in project.containers() if 'db' in c.name][0] self.assertNotEqual(db_container.id, old_db_id) self.assertEqual(db_container.get('Volumes./etc'), db_volume_path) def test_project_up_with_no_recreate_running(self): web = self.create_service('web') db = self.create_service('db', 
volumes=[VolumeSpec.parse('/var/db')]) project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db']) self.assertEqual(len(project.containers()), 1) old_db_id = project.containers()[0].id container, = project.containers() db_volume_path = container.get_mount('/var/db')['Source'] project.up(strategy=ConvergenceStrategy.never) self.assertEqual(len(project.containers()), 2) db_container = [c for c in project.containers() if 'db' in c.name][0] self.assertEqual(db_container.id, old_db_id) self.assertEqual( db_container.get_mount('/var/db')['Source'], db_volume_path) def test_project_up_with_no_recreate_stopped(self): web = self.create_service('web') db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db']) project.kill() old_containers = project.containers(stopped=True) self.assertEqual(len(old_containers), 1) old_container, = old_containers old_db_id = old_container.id db_volume_path = old_container.get_mount('/var/db')['Source'] project.up(strategy=ConvergenceStrategy.never) new_containers = project.containers(stopped=True) self.assertEqual(len(new_containers), 2) self.assertEqual([c.is_running for c in new_containers], [True, True]) db_container = [c for c in new_containers if 'db' in c.name][0] self.assertEqual(db_container.id, old_db_id) self.assertEqual( db_container.get_mount('/var/db')['Source'], db_volume_path) def test_project_up_without_all_services(self): console = self.create_service('console') db = self.create_service('db') project = Project('composetest', [console, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up() self.assertEqual(len(project.containers()), 2) self.assertEqual(len(db.containers()), 1) self.assertEqual(len(console.containers()), 1) def test_project_up_starts_links(self): console = self.create_service('console') db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) web = self.create_service('web', links=[(db, 'db')]) project = Project('composetest', [web, db, console], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['web']) self.assertEqual(len(project.containers()), 2) self.assertEqual(len(web.containers()), 1) self.assertEqual(len(db.containers()), 1) self.assertEqual(len(console.containers()), 0) def test_project_up_starts_depends(self): project = Project.from_config( name='composetest', config_data=build_config({ 'console': { 'image': 'busybox:latest', 'command': ["top"], }, 'data': { 'image': 'busybox:latest', 'command': ["top"] }, 'db': { 'image': 'busybox:latest', 'command': ["top"], 'volumes_from': ['data'], }, 'web': { 'image': 'busybox:latest', 'command': ["top"], 'links': ['db'], }, }), client=self.client, ) project.start() self.assertEqual(len(project.containers()), 0) project.up(['web']) self.assertEqual(len(project.containers()), 3) self.assertEqual(len(project.get_service('web').containers()), 1) self.assertEqual(len(project.get_service('db').containers()), 1) self.assertEqual(len(project.get_service('data').containers()), 1) self.assertEqual(len(project.get_service('console').containers()), 0) def test_project_up_with_no_deps(self): project = Project.from_config( name='composetest', config_data=build_config({ 'console': { 'image': 'busybox:latest', 'command': ["top"], }, 'data': { 'image': 'busybox:latest', 'command': ["top"] }, 
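# 'db' takes volumes from 'data' and 'web' links to 'db'; with start_deps=False below, 'data' should be created but left stopped.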
'db': { 'image': 'busybox:latest', 'command': ["top"], 'volumes_from': ['data'], }, 'web': { 'image': 'busybox:latest', 'command': ["top"], 'links': ['db'], }, }), client=self.client, ) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db'], start_deps=False) self.assertEqual(len(project.containers(stopped=True)), 2) self.assertEqual(len(project.get_service('web').containers()), 0) self.assertEqual(len(project.get_service('db').containers()), 1) self.assertEqual(len(project.get_service('data').containers()), 0) self.assertEqual(len(project.get_service('data').containers(stopped=True)), 1) self.assertEqual(len(project.get_service('console').containers()), 0) def test_unscale_after_restart(self): web = self.create_service('web') project = Project('composetest', [web], self.client) project.start() service = project.get_service('web') service.scale(1) self.assertEqual(len(service.containers()), 1) service.scale(3) self.assertEqual(len(service.containers()), 3) project.up() service = project.get_service('web') self.assertEqual(len(service.containers()), 3) service.scale(1) self.assertEqual(len(service.containers()), 1) project.up() service = project.get_service('web') self.assertEqual(len(service.containers()), 1) # Does scale=0 make any sense here? After recreating, at least one container is running. service.scale(0) project.up() service = project.get_service('web') self.assertEqual(len(service.containers()), 1) @v2_only() def test_project_up_networks(self): config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top', 'networks': { 'foo': None, 'bar': None, 'baz': {'aliases': ['extra']}, }, }], volumes={}, networks={ 'foo': {'driver': 'bridge'}, 'bar': {'driver': None}, 'baz': {}, }, ) project = Project.from_config( client=self.client, name='composetest', config_data=config_data, ) project.up() containers = project.containers() assert len(containers) == 1 container, = containers for net_name in ['foo', 'bar', 'baz']: full_net_name = 'composetest_{}'.format(net_name) network_data = self.client.inspect_network(full_net_name) assert network_data['Name'] == full_net_name aliases_key = 'NetworkSettings.Networks.{net}.Aliases' assert 'web' in container.get(aliases_key.format(net='composetest_foo')) assert 'web' in container.get(aliases_key.format(net='composetest_baz')) assert 'extra' in container.get(aliases_key.format(net='composetest_baz')) foo_data = self.client.inspect_network('composetest_foo') assert foo_data['Driver'] == 'bridge' @v2_only() def test_up_with_ipam_config(self): config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'networks': {'front': None}, }], volumes={}, networks={ 'front': { 'driver': 'bridge', 'driver_opts': { "com.docker.network.bridge.enable_icc": "false", }, 'ipam': { 'driver': 'default', 'config': [{ "subnet": "172.28.0.0/16", "ip_range": "172.28.5.0/24", "gateway": "172.28.5.254", "aux_addresses": { "a": "172.28.1.5", "b": "172.28.1.6", "c": "172.28.1.7", }, }], }, }, }, ) project = Project.from_config( client=self.client, name='composetest', config_data=config_data, ) project.up() network = self.client.networks(names=['composetest_front'])[0] assert network['Options'] == { "com.docker.network.bridge.enable_icc": "false" } assert network['IPAM'] == { 'Driver': 'default', 'Options': None, 'Config': [{ 'Subnet': "172.28.0.0/16", 'IPRange': "172.28.5.0/24", 'Gateway': "172.28.5.254", 'AuxiliaryAddresses': { 'a': '172.28.1.5', 'b': '172.28.1.6', 'c':
'172.28.1.7', }, }], } @v2_only() def test_up_with_network_static_addresses(self): config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top', 'networks': { 'static_test': { 'ipv4_address': '172.16.100.100', 'ipv6_address': 'fe80::1001:102' } }, }], volumes={}, networks={ 'static_test': { 'driver': 'bridge', 'driver_opts': { "com.docker.network.enable_ipv6": "true", }, 'ipam': { 'driver': 'default', 'config': [ {"subnet": "172.16.100.0/24", "gateway": "172.16.100.1"}, {"subnet": "fe80::/64", "gateway": "fe80::1001:1"} ] } } } ) project = Project.from_config( client=self.client, name='composetest', config_data=config_data, ) project.up(detached=True) network = self.client.networks(names=['static_test'])[0] service_container = project.get_service('web').containers()[0] assert network['Options'] == { "com.docker.network.enable_ipv6": "true" } IPAMConfig = (service_container.inspect().get('NetworkSettings', {}). get('Networks', {}).get('composetest_static_test', {}). get('IPAMConfig', {})) assert IPAMConfig.get('IPv4Address') == '172.16.100.100' assert IPAMConfig.get('IPv6Address') == 'fe80::1001:102' @v2_only() def test_up_with_network_static_addresses_missing_subnet(self): config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'networks': { 'static_test': { 'ipv4_address': '172.16.100.100', 'ipv6_address': 'fe80::1001:101' } }, }], volumes={}, networks={ 'static_test': { 'driver': 'bridge', 'driver_opts': { "com.docker.network.enable_ipv6": "true", }, 'ipam': { 'driver': 'default', }, }, }, ) project = Project.from_config( client=self.client, name='composetest', config_data=config_data, ) with self.assertRaises(ProjectError): project.up() @v2_only() def test_project_up_volumes(self): vol_name = '{0:x}'.format(random.getrandbits(32)) full_vol_name = 'composetest_{0}'.format(vol_name) config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top' }], volumes={vol_name: {'driver': 'local'}}, networks={}, ) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) project.up() self.assertEqual(len(project.containers()), 1) volume_data = self.client.inspect_volume(full_vol_name) self.assertEqual(volume_data['Name'], full_vol_name) self.assertEqual(volume_data['Driver'], 'local') @v2_only() def test_project_up_logging_with_multiple_files(self): base_file = config.ConfigFile( 'base.yml', { 'version': V2_0, 'services': { 'simple': {'image': 'busybox:latest', 'command': 'top'}, 'another': { 'image': 'busybox:latest', 'command': 'top', 'logging': { 'driver': "json-file", 'options': { 'max-size': "10m" } } } } }) override_file = config.ConfigFile( 'override.yml', { 'version': V2_0, 'services': { 'another': { 'logging': { 'driver': "none" } } } }) details = config.ConfigDetails('.', [base_file, override_file]) tmpdir = py.test.ensuretemp('logging_test') self.addCleanup(tmpdir.remove) with tmpdir.as_cwd(): config_data = config.load(details) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) project.up() containers = project.containers() self.assertEqual(len(containers), 2) another = project.get_service('another').containers()[0] log_config = another.get('HostConfig.LogConfig') self.assertTrue(log_config) self.assertEqual(log_config.get('Type'), 'none') @v2_only() def test_project_up_port_mappings_with_multiple_files(self): base_file = config.ConfigFile( 'base.yml', { 
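# The base file and the override below declare the identical '1234:1234' mapping; the merged config should still bring up a single container.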
'version': V2_0, 'services': { 'simple': { 'image': 'busybox:latest', 'command': 'top', 'ports': ['1234:1234'] }, }, }) override_file = config.ConfigFile( 'override.yml', { 'version': V2_0, 'services': { 'simple': { 'ports': ['1234:1234'] } } }) details = config.ConfigDetails('.', [base_file, override_file]) config_data = config.load(details) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) project.up() containers = project.containers() self.assertEqual(len(containers), 1) @v2_only() def test_initialize_volumes(self): vol_name = '{0:x}'.format(random.getrandbits(32)) full_vol_name = 'composetest_{0}'.format(vol_name) config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top' }], volumes={vol_name: {}}, networks={}, ) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) project.volumes.initialize() volume_data = self.client.inspect_volume(full_vol_name) self.assertEqual(volume_data['Name'], full_vol_name) self.assertEqual(volume_data['Driver'], 'local') @v2_only() def test_project_up_implicit_volume_driver(self): vol_name = '{0:x}'.format(random.getrandbits(32)) full_vol_name = 'composetest_{0}'.format(vol_name) config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top' }], volumes={vol_name: {}}, networks={}, ) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) project.up() volume_data = self.client.inspect_volume(full_vol_name) self.assertEqual(volume_data['Name'], full_vol_name) self.assertEqual(volume_data['Driver'], 'local') @v2_only() def test_initialize_volumes_invalid_volume_driver(self): vol_name = '{0:x}'.format(random.getrandbits(32)) config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top' }], volumes={vol_name: {'driver': 'foobar'}}, networks={}, ) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) with self.assertRaises(config.ConfigurationError): project.volumes.initialize() @v2_only() def test_initialize_volumes_updated_driver(self): vol_name = '{0:x}'.format(random.getrandbits(32)) full_vol_name = 'composetest_{0}'.format(vol_name) config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top' }], volumes={vol_name: {'driver': 'local'}}, networks={}, ) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) project.volumes.initialize() volume_data = self.client.inspect_volume(full_vol_name) self.assertEqual(volume_data['Name'], full_vol_name) self.assertEqual(volume_data['Driver'], 'local') config_data = config_data._replace( volumes={vol_name: {'driver': 'smb'}} ) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) with self.assertRaises(config.ConfigurationError) as e: project.volumes.initialize() assert 'Configuration for volume {0} specifies driver smb'.format( vol_name ) in str(e.exception) @v2_only() def test_initialize_volumes_updated_blank_driver(self): vol_name = '{0:x}'.format(random.getrandbits(32)) full_vol_name = 'composetest_{0}'.format(vol_name) config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top' }], volumes={vol_name: {'driver': 'local'}}, networks={}, ) project = Project.from_config( name='composetest', 
config_data=config_data, client=self.client ) project.volumes.initialize() volume_data = self.client.inspect_volume(full_vol_name) self.assertEqual(volume_data['Name'], full_vol_name) self.assertEqual(volume_data['Driver'], 'local') config_data = config_data._replace( volumes={vol_name: {}} ) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) project.volumes.initialize() volume_data = self.client.inspect_volume(full_vol_name) self.assertEqual(volume_data['Name'], full_vol_name) self.assertEqual(volume_data['Driver'], 'local') @v2_only() def test_initialize_volumes_external_volumes(self): # Use composetest_ prefix so it gets garbage-collected in tearDown() vol_name = 'composetest_{0:x}'.format(random.getrandbits(32)) full_vol_name = 'composetest_{0}'.format(vol_name) self.client.create_volume(vol_name) config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top' }], volumes={ vol_name: {'external': True, 'external_name': vol_name} }, networks=None, ) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) project.volumes.initialize() with self.assertRaises(NotFound): self.client.inspect_volume(full_vol_name) @v2_only() def test_initialize_volumes_inexistent_external_volume(self): vol_name = '{0:x}'.format(random.getrandbits(32)) config_data = config.Config( version=V2_0, services=[{ 'name': 'web', 'image': 'busybox:latest', 'command': 'top' }], volumes={ vol_name: {'external': True, 'external_name': vol_name} }, networks=None, ) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) with self.assertRaises(config.ConfigurationError) as e: project.volumes.initialize() assert 'Volume {0} declared as external'.format( vol_name ) in str(e.exception) @v2_only() def test_project_up_named_volumes_in_binds(self): vol_name = '{0:x}'.format(random.getrandbits(32)) full_vol_name = 'composetest_{0}'.format(vol_name) base_file = config.ConfigFile( 'base.yml', { 'version': V2_0, 'services': { 'simple': { 'image': 'busybox:latest', 'command': 'top', 'volumes': ['{0}:/data'.format(vol_name)] }, }, 'volumes': { vol_name: {'driver': 'local'} } }) config_details = config.ConfigDetails('.', [base_file]) config_data = config.load(config_details) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) service = project.services[0] self.assertEqual(service.name, 'simple') volumes = service.options.get('volumes') self.assertEqual(len(volumes), 1) self.assertEqual(volumes[0].external, full_vol_name) project.up() engine_volumes = self.client.volumes()['Volumes'] container = service.get_container() assert [mount['Name'] for mount in container.get('Mounts')] == [full_vol_name] assert next((v for v in engine_volumes if v['Name'] == vol_name), None) is None def test_project_up_orphans(self): config_dict = { 'service1': { 'image': 'busybox:latest', 'command': 'top', } } config_data = build_config(config_dict) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) project.up() config_dict['service2'] = config_dict['service1'] del config_dict['service1'] config_data = build_config(config_dict) project = Project.from_config( name='composetest', config_data=config_data, client=self.client ) with mock.patch('compose.project.log') as mock_log: project.up() mock_log.warning.assert_called_once_with(mock.ANY) assert len([ ctnr for ctnr in project._labeled_containers() if 
ctnr.labels.get(LABEL_SERVICE) == 'service1' ]) == 1 project.up(remove_orphans=True) assert len([ ctnr for ctnr in project._labeled_containers() if ctnr.labels.get(LABEL_SERVICE) == 'service1' ]) == 0 compose-1.8.0/tests/integration/resilience_test.py000066400000000000000000000033471274620702700224230ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from .. import mock from .testcases import DockerClientTestCase from compose.config.types import VolumeSpec from compose.project import Project from compose.service import ConvergenceStrategy class ResilienceTest(DockerClientTestCase): def setUp(self): self.db = self.create_service( 'db', volumes=[VolumeSpec.parse('/var/db')], command='top') self.project = Project('composetest', [self.db], self.client) container = self.db.create_container() self.db.start_container(container) self.host_path = container.get_mount('/var/db')['Source'] def test_successful_recreate(self): self.project.up(strategy=ConvergenceStrategy.always) container = self.db.containers()[0] self.assertEqual(container.get_mount('/var/db')['Source'], self.host_path) def test_create_failure(self): with mock.patch('compose.service.Service.create_container', crash): with self.assertRaises(Crash): self.project.up(strategy=ConvergenceStrategy.always) self.project.up() container = self.db.containers()[0] self.assertEqual(container.get_mount('/var/db')['Source'], self.host_path) def test_start_failure(self): with mock.patch('compose.service.Service.start_container', crash): with self.assertRaises(Crash): self.project.up(strategy=ConvergenceStrategy.always) self.project.up() container = self.db.containers()[0] self.assertEqual(container.get_mount('/var/db')['Source'], self.host_path) class Crash(Exception): pass def crash(*args, **kwargs): raise Crash() compose-1.8.0/tests/integration/service_test.py000066400000000000000000001252501274620702700217370ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import os import shutil import tempfile from os import path import pytest from docker.errors import APIError from six import StringIO from six import text_type from .. 
import mock from .testcases import DockerClientTestCase from .testcases import get_links from .testcases import pull_busybox from compose import __version__ from compose.config.types import VolumeFromSpec from compose.config.types import VolumeSpec from compose.const import LABEL_CONFIG_HASH from compose.const import LABEL_CONTAINER_NUMBER from compose.const import LABEL_ONE_OFF from compose.const import LABEL_PROJECT from compose.const import LABEL_SERVICE from compose.const import LABEL_VERSION from compose.container import Container from compose.project import OneOffFilter from compose.service import ConvergencePlan from compose.service import ConvergenceStrategy from compose.service import NetworkMode from compose.service import Service from tests.integration.testcases import v2_only def create_and_start_container(service, **override_options): container = service.create_container(**override_options) return service.start_container(container) class ServiceTest(DockerClientTestCase): def test_containers(self): foo = self.create_service('foo') bar = self.create_service('bar') create_and_start_container(foo) self.assertEqual(len(foo.containers()), 1) self.assertEqual(foo.containers()[0].name, 'composetest_foo_1') self.assertEqual(len(bar.containers()), 0) create_and_start_container(bar) create_and_start_container(bar) self.assertEqual(len(foo.containers()), 1) self.assertEqual(len(bar.containers()), 2) names = [c.name for c in bar.containers()] self.assertIn('composetest_bar_1', names) self.assertIn('composetest_bar_2', names) def test_containers_one_off(self): db = self.create_service('db') container = db.create_container(one_off=True) self.assertEqual(db.containers(stopped=True), []) self.assertEqual(db.containers(one_off=OneOffFilter.only, stopped=True), [container]) def test_project_is_added_to_container_name(self): service = self.create_service('web') create_and_start_container(service) self.assertEqual(service.containers()[0].name, 'composetest_web_1') def test_create_container_with_one_off(self): db = self.create_service('db') container = db.create_container(one_off=True) self.assertEqual(container.name, 'composetest_db_run_1') def test_create_container_with_one_off_when_existing_container_is_running(self): db = self.create_service('db') db.start() container = db.create_container(one_off=True) self.assertEqual(container.name, 'composetest_db_run_1') def test_create_container_with_unspecified_volume(self): service = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) container = service.create_container() service.start_container(container) assert container.get_mount('/var/db') def test_create_container_with_volume_driver(self): service = self.create_service('db', volume_driver='foodriver') container = service.create_container() service.start_container(container) self.assertEqual('foodriver', container.get('HostConfig.VolumeDriver')) def test_create_container_with_cpu_shares(self): service = self.create_service('db', cpu_shares=73) container = service.create_container() service.start_container(container) self.assertEqual(container.get('HostConfig.CpuShares'), 73) def test_create_container_with_cpu_quota(self): service = self.create_service('db', cpu_quota=40000) container = service.create_container() container.start() self.assertEqual(container.get('HostConfig.CpuQuota'), 40000) def test_create_container_with_shm_size(self): self.require_api_version('1.22') service = self.create_service('db', shm_size=67108864) container = service.create_container() 
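# 67108864 bytes == 64 MiB; the value should appear unchanged in HostConfig.ShmSize below.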
service.start_container(container) self.assertEqual(container.get('HostConfig.ShmSize'), 67108864) def test_create_container_with_extra_hosts_list(self): extra_hosts = ['somehost:162.242.195.82', 'otherhost:50.31.209.229'] service = self.create_service('db', extra_hosts=extra_hosts) container = service.create_container() service.start_container(container) self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts)) def test_create_container_with_extra_hosts_dicts(self): extra_hosts = {'somehost': '162.242.195.82', 'otherhost': '50.31.209.229'} extra_hosts_list = ['somehost:162.242.195.82', 'otherhost:50.31.209.229'] service = self.create_service('db', extra_hosts=extra_hosts) container = service.create_container() service.start_container(container) self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts_list)) def test_create_container_with_cpu_set(self): service = self.create_service('db', cpuset='0') container = service.create_container() service.start_container(container) self.assertEqual(container.get('HostConfig.CpusetCpus'), '0') def test_create_container_with_read_only_root_fs(self): read_only = True service = self.create_service('db', read_only=read_only) container = service.create_container() service.start_container(container) assert container.get('HostConfig.ReadonlyRootfs') == read_only def test_create_container_with_security_opt(self): security_opt = ['label:disable'] service = self.create_service('db', security_opt=security_opt) container = service.create_container() service.start_container(container) self.assertEqual(set(container.get('HostConfig.SecurityOpt')), set(security_opt)) def test_create_container_with_mac_address(self): service = self.create_service('db', mac_address='02:42:ac:11:65:43') container = service.create_container() service.start_container(container) self.assertEqual(container.inspect()['Config']['MacAddress'], '02:42:ac:11:65:43') def test_create_container_with_specified_volume(self): host_path = '/tmp/host-path' container_path = '/container-path' service = self.create_service( 'db', volumes=[VolumeSpec(host_path, container_path, 'rw')]) container = service.create_container() service.start_container(container) assert container.get_mount(container_path) # Match the last component ("host-path"), because boot2docker symlinks /tmp actual_host_path = container.get_mount(container_path)['Source'] self.assertTrue(path.basename(actual_host_path) == path.basename(host_path), msg=("Last component differs: %s, %s" % (actual_host_path, host_path))) def test_recreate_preserves_volume_with_trailing_slash(self): """When the Compose file specifies a trailing slash in the container path, make sure we copy the volume over when recreating. """ service = self.create_service('data', volumes=[VolumeSpec.parse('/data/')]) old_container = create_and_start_container(service) volume_path = old_container.get_mount('/data')['Source'] new_container = service.recreate_container(old_container) self.assertEqual(new_container.get_mount('/data')['Source'], volume_path) def test_duplicate_volume_trailing_slash(self): """ When an image specifies a volume, and the Compose file specifies a host path but adds a trailing slash, make sure that we don't create duplicate binds. 
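The recreated container should keep a single volume entry for '/data' rather than gaining a duplicate for '/data/'.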
""" host_path = '/tmp/data' container_path = '/data' volumes = [VolumeSpec.parse('{}:{}/'.format(host_path, container_path))] tmp_container = self.client.create_container( 'busybox', 'true', volumes={container_path: {}}, labels={'com.docker.compose.test_image': 'true'}, ) image = self.client.commit(tmp_container)['Id'] service = self.create_service('db', image=image, volumes=volumes) old_container = create_and_start_container(service) self.assertEqual( old_container.get('Config.Volumes'), {container_path: {}}, ) service = self.create_service('db', image=image, volumes=volumes) new_container = service.recreate_container(old_container) self.assertEqual( new_container.get('Config.Volumes'), {container_path: {}}, ) self.assertEqual(service.containers(stopped=False), [new_container]) def test_create_container_with_volumes_from(self): volume_service = self.create_service('data') volume_container_1 = volume_service.create_container() volume_container_2 = Container.create( self.client, image='busybox:latest', command=["top"], labels={LABEL_PROJECT: 'composetest'}, ) host_service = self.create_service( 'host', volumes_from=[ VolumeFromSpec(volume_service, 'rw', 'service'), VolumeFromSpec(volume_container_2, 'rw', 'container') ] ) host_container = host_service.create_container() host_service.start_container(host_container) self.assertIn(volume_container_1.id + ':rw', host_container.get('HostConfig.VolumesFrom')) self.assertIn(volume_container_2.id + ':rw', host_container.get('HostConfig.VolumesFrom')) def test_execute_convergence_plan_recreate(self): service = self.create_service( 'db', environment={'FOO': '1'}, volumes=[VolumeSpec.parse('/etc')], entrypoint=['top'], command=['-d', '1'] ) old_container = service.create_container() self.assertEqual(old_container.get('Config.Entrypoint'), ['top']) self.assertEqual(old_container.get('Config.Cmd'), ['-d', '1']) self.assertIn('FOO=1', old_container.get('Config.Env')) self.assertEqual(old_container.name, 'composetest_db_1') service.start_container(old_container) old_container.inspect() # reload volume data volume_path = old_container.get_mount('/etc')['Source'] num_containers_before = len(self.client.containers(all=True)) service.options['environment']['FOO'] = '2' new_container, = service.execute_convergence_plan( ConvergencePlan('recreate', [old_container])) self.assertEqual(new_container.get('Config.Entrypoint'), ['top']) self.assertEqual(new_container.get('Config.Cmd'), ['-d', '1']) self.assertIn('FOO=2', new_container.get('Config.Env')) self.assertEqual(new_container.name, 'composetest_db_1') self.assertEqual(new_container.get_mount('/etc')['Source'], volume_path) self.assertIn( 'affinity:container==%s' % old_container.id, new_container.get('Config.Env')) self.assertEqual(len(self.client.containers(all=True)), num_containers_before) self.assertNotEqual(old_container.id, new_container.id) self.assertRaises(APIError, self.client.inspect_container, old_container.id) def test_execute_convergence_plan_recreate_twice(self): service = self.create_service( 'db', volumes=[VolumeSpec.parse('/etc')], entrypoint=['top'], command=['-d', '1']) orig_container = service.create_container() service.start_container(orig_container) orig_container.inspect() # reload volume data volume_path = orig_container.get_mount('/etc')['Source'] # Do this twice to reproduce the bug for _ in range(2): new_container, = service.execute_convergence_plan( ConvergencePlan('recreate', [orig_container])) assert new_container.get_mount('/etc')['Source'] == volume_path assert 
('affinity:container==%s' % orig_container.id in new_container.get('Config.Env')) orig_container = new_container def test_execute_convergence_plan_when_containers_are_stopped(self): service = self.create_service( 'db', environment={'FOO': '1'}, volumes=[VolumeSpec.parse('/var/db')], entrypoint=['top'], command=['-d', '1'] ) service.create_container() containers = service.containers(stopped=True) self.assertEqual(len(containers), 1) container, = containers self.assertFalse(container.is_running) service.execute_convergence_plan(ConvergencePlan('start', [container])) containers = service.containers() self.assertEqual(len(containers), 1) container.inspect() self.assertEqual(container, containers[0]) self.assertTrue(container.is_running) def test_execute_convergence_plan_with_image_declared_volume(self): service = Service( project='composetest', name='db', client=self.client, build={'context': 'tests/fixtures/dockerfile-with-volume'}, ) old_container = create_and_start_container(service) self.assertEqual( [mount['Destination'] for mount in old_container.get('Mounts')], ['/data'] ) volume_path = old_container.get_mount('/data')['Source'] new_container, = service.execute_convergence_plan( ConvergencePlan('recreate', [old_container])) self.assertEqual( [mount['Destination'] for mount in new_container.get('Mounts')], ['/data'] ) self.assertEqual(new_container.get_mount('/data')['Source'], volume_path) def test_execute_convergence_plan_when_image_volume_masks_config(self): service = self.create_service( 'db', build={'context': 'tests/fixtures/dockerfile-with-volume'}, ) old_container = create_and_start_container(service) self.assertEqual( [mount['Destination'] for mount in old_container.get('Mounts')], ['/data'] ) volume_path = old_container.get_mount('/data')['Source'] service.options['volumes'] = [VolumeSpec.parse('/tmp:/data')] with mock.patch('compose.service.log') as mock_log: new_container, = service.execute_convergence_plan( ConvergencePlan('recreate', [old_container])) mock_log.warn.assert_called_once_with(mock.ANY) _, args, kwargs = mock_log.warn.mock_calls[0] self.assertIn( "Service \"db\" is using volume \"/data\" from the previous container", args[0]) self.assertEqual( [mount['Destination'] for mount in new_container.get('Mounts')], ['/data'] ) self.assertEqual(new_container.get_mount('/data')['Source'], volume_path) def test_execute_convergence_plan_when_host_volume_is_removed(self): host_path = '/tmp/host-path' service = self.create_service( 'db', build={'context': 'tests/fixtures/dockerfile-with-volume'}, volumes=[VolumeSpec(host_path, '/data', 'rw')]) old_container = create_and_start_container(service) assert ( [mount['Destination'] for mount in old_container.get('Mounts')] == ['/data'] ) service.options['volumes'] = [] with mock.patch('compose.service.log', autospec=True) as mock_log: new_container, = service.execute_convergence_plan( ConvergencePlan('recreate', [old_container])) assert not mock_log.warn.called assert ( [mount['Destination'] for mount in new_container.get('Mounts')] == ['/data'] ) assert new_container.get_mount('/data')['Source'] != host_path def test_execute_convergence_plan_without_start(self): service = self.create_service( 'db', build={'context': 'tests/fixtures/dockerfile-with-volume'} ) containers = service.execute_convergence_plan(ConvergencePlan('create', []), start=False) self.assertEqual(len(service.containers()), 0) self.assertEqual(len(service.containers(stopped=True)), 1) containers = service.execute_convergence_plan( ConvergencePlan('recreate', 
containers), start=False) self.assertEqual(len(service.containers()), 0) self.assertEqual(len(service.containers(stopped=True)), 1) service.execute_convergence_plan(ConvergencePlan('start', containers), start=False) self.assertEqual(len(service.containers()), 0) self.assertEqual(len(service.containers(stopped=True)), 1) def test_start_container_passes_through_options(self): db = self.create_service('db') create_and_start_container(db, environment={'FOO': 'BAR'}) self.assertEqual(db.containers()[0].environment['FOO'], 'BAR') def test_start_container_inherits_options_from_constructor(self): db = self.create_service('db', environment={'FOO': 'BAR'}) create_and_start_container(db) self.assertEqual(db.containers()[0].environment['FOO'], 'BAR') def test_start_container_creates_links(self): db = self.create_service('db') web = self.create_service('web', links=[(db, None)]) create_and_start_container(db) create_and_start_container(db) create_and_start_container(web) self.assertEqual( set(get_links(web.containers()[0])), set([ 'composetest_db_1', 'db_1', 'composetest_db_2', 'db_2', 'db']) ) def test_start_container_creates_links_with_names(self): db = self.create_service('db') web = self.create_service('web', links=[(db, 'custom_link_name')]) create_and_start_container(db) create_and_start_container(db) create_and_start_container(web) self.assertEqual( set(get_links(web.containers()[0])), set([ 'composetest_db_1', 'db_1', 'composetest_db_2', 'db_2', 'custom_link_name']) ) def test_start_container_with_external_links(self): db = self.create_service('db') web = self.create_service('web', external_links=['composetest_db_1', 'composetest_db_2', 'composetest_db_3:db_3']) for _ in range(3): create_and_start_container(db) create_and_start_container(web) self.assertEqual( set(get_links(web.containers()[0])), set([ 'composetest_db_1', 'composetest_db_2', 'db_3']), ) def test_start_normal_container_does_not_create_links_to_its_own_service(self): db = self.create_service('db') create_and_start_container(db) create_and_start_container(db) c = create_and_start_container(db) self.assertEqual(set(get_links(c)), set([])) def test_start_one_off_container_creates_links_to_its_own_service(self): db = self.create_service('db') create_and_start_container(db) create_and_start_container(db) c = create_and_start_container(db, one_off=OneOffFilter.only) self.assertEqual( set(get_links(c)), set([ 'composetest_db_1', 'db_1', 'composetest_db_2', 'db_2', 'db']) ) def test_start_container_builds_images(self): service = Service( name='test', client=self.client, build={'context': 'tests/fixtures/simple-dockerfile'}, project='composetest', ) container = create_and_start_container(service) container.wait() self.assertIn(b'success', container.logs()) self.assertEqual(len(self.client.images(name='composetest_test')), 1) def test_start_container_uses_tagged_image_if_it_exists(self): self.check_build('tests/fixtures/simple-dockerfile', tag='composetest_test') service = Service( name='test', client=self.client, build={'context': 'this/does/not/exist/and/will/throw/error'}, project='composetest', ) container = create_and_start_container(service) container.wait() self.assertIn(b'success', container.logs()) def test_start_container_creates_ports(self): service = self.create_service('web', ports=[8000]) container = create_and_start_container(service).inspect() self.assertEqual(list(container['NetworkSettings']['Ports'].keys()), ['8000/tcp']) self.assertNotEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8000') def 
test_build(self): base_dir = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, base_dir) with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f: f.write("FROM busybox\n") self.create_service('web', build={'context': base_dir}).build() assert self.client.inspect_image('composetest_web') def test_build_non_ascii_filename(self): base_dir = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, base_dir) with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f: f.write("FROM busybox\n") with open(os.path.join(base_dir.encode('utf8'), b'foo\xE2bar'), 'w') as f: f.write("hello world\n") self.create_service('web', build={'context': text_type(base_dir)}).build() assert self.client.inspect_image('composetest_web') def test_build_with_image_name(self): base_dir = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, base_dir) with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f: f.write("FROM busybox\n") image_name = 'examples/composetest:latest' self.addCleanup(self.client.remove_image, image_name) self.create_service('web', build={'context': base_dir}, image=image_name).build() assert self.client.inspect_image(image_name) def test_build_with_git_url(self): build_url = "https://github.com/dnephin/docker-build-from-url.git" service = self.create_service('buildwithurl', build={'context': build_url}) self.addCleanup(self.client.remove_image, service.image_name) service.build() assert service.image() def test_build_with_build_args(self): base_dir = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, base_dir) with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f: f.write("FROM busybox\n") f.write("ARG build_version\n") service = self.create_service('buildwithargs', build={'context': text_type(base_dir), 'args': {"build_version": "1"}}) service.build() assert service.image() def test_start_container_stays_unpriviliged(self): service = self.create_service('web') container = create_and_start_container(service).inspect() self.assertEqual(container['HostConfig']['Privileged'], False) def test_start_container_becomes_priviliged(self): service = self.create_service('web', privileged=True) container = create_and_start_container(service).inspect() self.assertEqual(container['HostConfig']['Privileged'], True) def test_expose_does_not_publish_ports(self): service = self.create_service('web', expose=["8000"]) container = create_and_start_container(service).inspect() self.assertEqual(container['NetworkSettings']['Ports'], {'8000/tcp': None}) def test_start_container_creates_port_with_explicit_protocol(self): service = self.create_service('web', ports=['8000/udp']) container = create_and_start_container(service).inspect() self.assertEqual(list(container['NetworkSettings']['Ports'].keys()), ['8000/udp']) def test_start_container_creates_fixed_external_ports(self): service = self.create_service('web', ports=['8000:8000']) container = create_and_start_container(service).inspect() self.assertIn('8000/tcp', container['NetworkSettings']['Ports']) self.assertEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8000') def test_start_container_creates_fixed_external_ports_when_it_is_different_to_internal_port(self): service = self.create_service('web', ports=['8001:8000']) container = create_and_start_container(service).inspect() self.assertIn('8000/tcp', container['NetworkSettings']['Ports']) self.assertEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8001') def test_port_with_explicit_interface(self): service = self.create_service('web', ports=[ '127.0.0.1:8001:8000', 
'0.0.0.0:9001:9000/udp', ]) container = create_and_start_container(service).inspect() self.assertEqual(container['NetworkSettings']['Ports'], { '8000/tcp': [ { 'HostIp': '127.0.0.1', 'HostPort': '8001', }, ], '9000/udp': [ { 'HostIp': '0.0.0.0', 'HostPort': '9001', }, ], }) def test_create_with_image_id(self): # Get image id for the current busybox:latest pull_busybox(self.client) image_id = self.client.inspect_image('busybox:latest')['Id'][:12] service = self.create_service('foo', image=image_id) service.create_container() def test_scale(self): service = self.create_service('web') service.scale(1) self.assertEqual(len(service.containers()), 1) # Ensure containers don't have stdout or stdin connected container = service.containers()[0] config = container.inspect()['Config'] self.assertFalse(config['AttachStderr']) self.assertFalse(config['AttachStdout']) self.assertFalse(config['AttachStdin']) service.scale(3) self.assertEqual(len(service.containers()), 3) service.scale(1) self.assertEqual(len(service.containers()), 1) service.scale(0) self.assertEqual(len(service.containers()), 0) def test_scale_with_stopped_containers(self): """ Given there are some stopped containers and scale is called with a desired number that is the same as the number of stopped containers, test that those containers are restarted and not removed/recreated. """ service = self.create_service('web') next_number = service._next_container_number() valid_numbers = [next_number, next_number + 1] service.create_container(number=next_number) service.create_container(number=next_number + 1) with mock.patch('sys.stderr', new_callable=StringIO) as mock_stderr: service.scale(2) for container in service.containers(): self.assertTrue(container.is_running) self.assertTrue(container.number in valid_numbers) captured_output = mock_stderr.getvalue() self.assertNotIn('Creating', captured_output) self.assertIn('Starting', captured_output) def test_scale_with_stopped_containers_and_needing_creation(self): """ Given there are some stopped containers and scale is called with a desired number that is greater than the number of stopped containers, test that those containers are restarted and the required number are created. """ service = self.create_service('web') next_number = service._next_container_number() service.create_container(number=next_number, quiet=True) for container in service.containers(): self.assertFalse(container.is_running) with mock.patch('sys.stderr', new_callable=StringIO) as mock_stderr: service.scale(2) self.assertEqual(len(service.containers()), 2) for container in service.containers(): self.assertTrue(container.is_running) captured_output = mock_stderr.getvalue() self.assertIn('Creating', captured_output) self.assertIn('Starting', captured_output) def test_scale_with_api_error(self): """Test that when scaling, if the API returns an error, that error is handled and the remaining threads continue.
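The container that already exists is still brought up, and the failed creation is reported on stderr instead of aborting the whole operation.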
""" service = self.create_service('web') next_number = service._next_container_number() service.create_container(number=next_number, quiet=True) with mock.patch( 'compose.container.Container.create', side_effect=APIError( message="testing", response={}, explanation="Boom")): with mock.patch('sys.stderr', new_callable=StringIO) as mock_stderr: service.scale(3) self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) self.assertIn( "ERROR: for composetest_web_2 Cannot create container for service web: Boom", mock_stderr.getvalue() ) def test_scale_with_unexpected_exception(self): """Test that when scaling if the API returns an error, that is not of type APIError, that error is re-raised. """ service = self.create_service('web') next_number = service._next_container_number() service.create_container(number=next_number, quiet=True) with mock.patch( 'compose.container.Container.create', side_effect=ValueError("BOOM") ): with self.assertRaises(ValueError): service.scale(3) self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) @mock.patch('compose.service.log') def test_scale_with_desired_number_already_achieved(self, mock_log): """ Test that calling scale with a desired number that is equal to the number of containers already running results in no change. """ service = self.create_service('web') next_number = service._next_container_number() container = service.create_container(number=next_number, quiet=True) container.start() container.inspect() assert container.is_running assert len(service.containers()) == 1 service.scale(1) assert len(service.containers()) == 1 container.inspect() assert container.is_running captured_output = mock_log.info.call_args[0] assert 'Desired container number already achieved' in captured_output @mock.patch('compose.service.log') def test_scale_with_custom_container_name_outputs_warning(self, mock_log): """Test that calling scale on a service that has a custom container name results in warning output. 
""" service = self.create_service('app', container_name='custom-container') self.assertEqual(service.custom_container_name, 'custom-container') service.scale(3) captured_output = mock_log.warn.call_args[0][0] self.assertEqual(len(service.containers()), 1) self.assertIn( "Remove the custom name to scale the service.", captured_output ) def test_scale_sets_ports(self): service = self.create_service('web', ports=['8000']) service.scale(2) containers = service.containers() self.assertEqual(len(containers), 2) for container in containers: self.assertEqual( list(container.get('HostConfig.PortBindings')), ['8000/tcp']) def test_scale_with_immediate_exit(self): service = self.create_service('web', image='busybox', command='true') service.scale(2) assert len(service.containers(stopped=True)) == 2 def test_network_mode_none(self): service = self.create_service('web', network_mode=NetworkMode('none')) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.NetworkMode'), 'none') def test_network_mode_bridged(self): service = self.create_service('web', network_mode=NetworkMode('bridge')) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.NetworkMode'), 'bridge') def test_network_mode_host(self): service = self.create_service('web', network_mode=NetworkMode('host')) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.NetworkMode'), 'host') def test_pid_mode_none_defined(self): service = self.create_service('web', pid=None) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.PidMode'), '') def test_pid_mode_host(self): service = self.create_service('web', pid='host') container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.PidMode'), 'host') def test_dns_no_value(self): service = self.create_service('web') container = create_and_start_container(service) self.assertIsNone(container.get('HostConfig.Dns')) def test_dns_list(self): service = self.create_service('web', dns=['8.8.8.8', '9.9.9.9']) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.Dns'), ['8.8.8.8', '9.9.9.9']) def test_restart_always_value(self): service = self.create_service('web', restart={'Name': 'always'}) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.RestartPolicy.Name'), 'always') def test_restart_on_failure_value(self): service = self.create_service('web', restart={ 'Name': 'on-failure', 'MaximumRetryCount': 5 }) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.RestartPolicy.Name'), 'on-failure') self.assertEqual(container.get('HostConfig.RestartPolicy.MaximumRetryCount'), 5) def test_cap_add_list(self): service = self.create_service('web', cap_add=['SYS_ADMIN', 'NET_ADMIN']) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.CapAdd'), ['SYS_ADMIN', 'NET_ADMIN']) def test_cap_drop_list(self): service = self.create_service('web', cap_drop=['SYS_ADMIN', 'NET_ADMIN']) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.CapDrop'), ['SYS_ADMIN', 'NET_ADMIN']) def test_dns_search(self): service = self.create_service('web', dns_search=['dc1.example.com', 'dc2.example.com']) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.DnsSearch'), ['dc1.example.com', 'dc2.example.com']) @v2_only() def test_tmpfs(self): service 
    @v2_only()
    def test_tmpfs(self):
        service = self.create_service('web', tmpfs=['/run'])
        container = create_and_start_container(service)
        self.assertEqual(container.get('HostConfig.Tmpfs'), {'/run': ''})

    def test_working_dir_param(self):
        service = self.create_service('container', working_dir='/working/dir/sample')
        container = service.create_container()
        self.assertEqual(container.get('Config.WorkingDir'), '/working/dir/sample')

    def test_split_env(self):
        service = self.create_service(
            'web',
            environment=['NORMAL=F1', 'CONTAINS_EQUALS=F=2', 'TRAILING_EQUALS='])
        env = create_and_start_container(service).environment
        for k, v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.items():
            self.assertEqual(env[k], v)

    def test_env_from_file_combined_with_env(self):
        service = self.create_service(
            'web',
            environment=['ONE=1', 'TWO=2', 'THREE=3'],
            env_file=['tests/fixtures/env/one.env', 'tests/fixtures/env/two.env'])
        env = create_and_start_container(service).environment
        for k, v in {
            'ONE': '1',
            'TWO': '2',
            'THREE': '3',
            'FOO': 'baz',
            'DOO': 'dah'
        }.items():
            self.assertEqual(env[k], v)

    @mock.patch.dict(os.environ)
    def test_resolve_env(self):
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'
        service = self.create_service(
            'web',
            environment={
                'FILE_DEF': 'F1',
                'FILE_DEF_EMPTY': '',
                'ENV_DEF': None,
                'NO_DEF': None
            }
        )
        env = create_and_start_container(service).environment
        for k, v in {
            'FILE_DEF': 'F1',
            'FILE_DEF_EMPTY': '',
            'ENV_DEF': 'E3',
            'NO_DEF': None
        }.items():
            self.assertEqual(env[k], v)

    def test_with_high_enough_api_version_we_get_default_network_mode(self):
        # TODO: remove this test once minimum docker version is 1.8.x
        with mock.patch.object(self.client, '_version', '1.20'):
            service = self.create_service('web')
            service_config = service._get_container_host_config({})
            self.assertEquals(service_config['NetworkMode'], 'default')

    def test_labels(self):
        labels_dict = {
            'com.example.description': "Accounting webapp",
            'com.example.department': "Finance",
            'com.example.label-with-empty-value': "",
        }
        compose_labels = {
            LABEL_CONTAINER_NUMBER: '1',
            LABEL_ONE_OFF: 'False',
            LABEL_PROJECT: 'composetest',
            LABEL_SERVICE: 'web',
            LABEL_VERSION: __version__,
        }
        expected = dict(labels_dict, **compose_labels)

        service = self.create_service('web', labels=labels_dict)
        labels = create_and_start_container(service).labels.items()
        for pair in expected.items():
            self.assertIn(pair, labels)

    def test_empty_labels(self):
        labels_dict = {'foo': '', 'bar': ''}
        service = self.create_service('web', labels=labels_dict)
        labels = create_and_start_container(service).labels.items()
        for name in labels_dict:
            self.assertIn((name, ''), labels)

    def test_stop_signal(self):
        stop_signal = 'SIGINT'
        service = self.create_service('web', stop_signal=stop_signal)
        container = create_and_start_container(service)
        self.assertEqual(container.stop_signal, stop_signal)

    def test_custom_container_name(self):
        service = self.create_service('web', container_name='my-web-container')
        self.assertEqual(service.custom_container_name, 'my-web-container')

        container = create_and_start_container(service)
        self.assertEqual(container.name, 'my-web-container')

        one_off_container = service.create_container(one_off=True)
        self.assertNotEqual(one_off_container.name, 'my-web-container')
    @pytest.mark.skipif(True, reason="Broken on 1.11.0rc1")
    def test_log_driver_invalid(self):
        service = self.create_service('web', logging={'driver': 'xxx'})
        expected_error_msg = "logger: no log driver named 'xxx' is registered"
        with self.assertRaisesRegexp(APIError, expected_error_msg):
            create_and_start_container(service)

    def test_log_driver_empty_default_jsonfile(self):
        service = self.create_service('web')
        log_config = create_and_start_container(service).log_config
        self.assertEqual('json-file', log_config['Type'])
        self.assertFalse(log_config['Config'])

    def test_log_driver_none(self):
        service = self.create_service('web', logging={'driver': 'none'})
        log_config = create_and_start_container(service).log_config
        self.assertEqual('none', log_config['Type'])
        self.assertFalse(log_config['Config'])

    def test_devices(self):
        service = self.create_service('web', devices=["/dev/random:/dev/mapped-random"])
        device_config = create_and_start_container(service).get('HostConfig.Devices')
        device_dict = {
            'PathOnHost': '/dev/random',
            'CgroupPermissions': 'rwm',
            'PathInContainer': '/dev/mapped-random'
        }
        self.assertEqual(1, len(device_config))
        self.assertDictEqual(device_dict, device_config[0])

    def test_duplicate_containers(self):
        service = self.create_service('web')

        options = service._get_container_create_options({}, 1)
        original = Container.create(service.client, **options)

        self.assertEqual(set(service.containers(stopped=True)), set([original]))
        self.assertEqual(set(service.duplicate_containers()), set())

        options['name'] = 'temporary_container_name'
        duplicate = Container.create(service.client, **options)

        self.assertEqual(set(service.containers(stopped=True)), set([original, duplicate]))
        self.assertEqual(set(service.duplicate_containers()), set([duplicate]))


def converge(service, strategy=ConvergenceStrategy.changed):
    """Create a converge plan from a strategy and execute the plan."""
    plan = service.convergence_plan(strategy)
    return service.execute_convergence_plan(plan, timeout=1)


class ConfigHashTest(DockerClientTestCase):

    def test_no_config_hash_when_one_off(self):
        web = self.create_service('web')
        container = web.create_container(one_off=True)
        self.assertNotIn(LABEL_CONFIG_HASH, container.labels)

    def test_no_config_hash_when_overriding_options(self):
        web = self.create_service('web')
        container = web.create_container(environment={'FOO': '1'})
        self.assertNotIn(LABEL_CONFIG_HASH, container.labels)

    def test_config_hash_with_custom_labels(self):
        web = self.create_service('web', labels={'foo': '1'})
        container = converge(web)[0]
        self.assertIn(LABEL_CONFIG_HASH, container.labels)
        self.assertIn('foo', container.labels)

    def test_config_hash_sticks_around(self):
        web = self.create_service('web', command=["top"])
        container = converge(web)[0]
        self.assertIn(LABEL_CONFIG_HASH, container.labels)

        web = self.create_service('web', command=["top", "-d", "1"])
        container = converge(web)[0]
        self.assertIn(LABEL_CONFIG_HASH, container.labels)
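# The converge() helper above is the canonical two-step shape of the
# convergence API used throughout this suite: compute a plan from a strategy,
# then execute it. As the assertions in state_test.py below show, the plan
# compares equal to an (action, containers) pair, where the action is one of
# 'create', 'recreate', 'start' or 'noop':
#
#   plan = service.convergence_plan(ConvergenceStrategy.changed)
#   containers = service.execute_convergence_plan(plan, timeout=1)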
""" from __future__ import absolute_import from __future__ import unicode_literals import py from .testcases import DockerClientTestCase from .testcases import get_links from compose.config import config from compose.project import Project from compose.service import ConvergenceStrategy class ProjectTestCase(DockerClientTestCase): def run_up(self, cfg, **kwargs): kwargs.setdefault('timeout', 1) kwargs.setdefault('detached', True) project = self.make_project(cfg) project.up(**kwargs) return set(project.containers(stopped=True)) def make_project(self, cfg): details = config.ConfigDetails( 'working_dir', [config.ConfigFile(None, cfg)]) return Project.from_config( name='composetest', client=self.client, config_data=config.load(details)) class BasicProjectTest(ProjectTestCase): def setUp(self): super(BasicProjectTest, self).setUp() self.cfg = { 'db': {'image': 'busybox:latest', 'command': 'top'}, 'web': {'image': 'busybox:latest', 'command': 'top'}, } def test_no_change(self): old_containers = self.run_up(self.cfg) self.assertEqual(len(old_containers), 2) new_containers = self.run_up(self.cfg) self.assertEqual(len(new_containers), 2) self.assertEqual(old_containers, new_containers) def test_partial_change(self): old_containers = self.run_up(self.cfg) old_db = [c for c in old_containers if c.name_without_project == 'db_1'][0] old_web = [c for c in old_containers if c.name_without_project == 'web_1'][0] self.cfg['web']['command'] = '/bin/true' new_containers = self.run_up(self.cfg) self.assertEqual(len(new_containers), 2) preserved = list(old_containers & new_containers) self.assertEqual(preserved, [old_db]) removed = list(old_containers - new_containers) self.assertEqual(removed, [old_web]) created = list(new_containers - old_containers) self.assertEqual(len(created), 1) self.assertEqual(created[0].name_without_project, 'web_1') self.assertEqual(created[0].get('Config.Cmd'), ['/bin/true']) def test_all_change(self): old_containers = self.run_up(self.cfg) self.assertEqual(len(old_containers), 2) self.cfg['web']['command'] = '/bin/true' self.cfg['db']['command'] = '/bin/true' new_containers = self.run_up(self.cfg) self.assertEqual(len(new_containers), 2) unchanged = old_containers & new_containers self.assertEqual(len(unchanged), 0) new = new_containers - old_containers self.assertEqual(len(new), 2) class ProjectWithDependenciesTest(ProjectTestCase): def setUp(self): super(ProjectWithDependenciesTest, self).setUp() self.cfg = { 'db': { 'image': 'busybox:latest', 'command': 'tail -f /dev/null', }, 'web': { 'image': 'busybox:latest', 'command': 'tail -f /dev/null', 'links': ['db'], }, 'nginx': { 'image': 'busybox:latest', 'command': 'tail -f /dev/null', 'links': ['web'], }, } def test_up(self): containers = self.run_up(self.cfg) self.assertEqual( set(c.name_without_project for c in containers), set(['db_1', 'web_1', 'nginx_1']), ) def test_change_leaf(self): old_containers = self.run_up(self.cfg) self.cfg['nginx']['environment'] = {'NEW_VAR': '1'} new_containers = self.run_up(self.cfg) self.assertEqual( set(c.name_without_project for c in new_containers - old_containers), set(['nginx_1']), ) def test_change_middle(self): old_containers = self.run_up(self.cfg) self.cfg['web']['environment'] = {'NEW_VAR': '1'} new_containers = self.run_up(self.cfg) self.assertEqual( set(c.name_without_project for c in new_containers - old_containers), set(['web_1', 'nginx_1']), ) def test_change_root(self): old_containers = self.run_up(self.cfg) self.cfg['db']['environment'] = {'NEW_VAR': '1'} new_containers = 
    def test_change_root(self):
        old_containers = self.run_up(self.cfg)

        self.cfg['db']['environment'] = {'NEW_VAR': '1'}
        new_containers = self.run_up(self.cfg)

        self.assertEqual(
            set(c.name_without_project for c in new_containers - old_containers),
            set(['db_1', 'web_1', 'nginx_1']),
        )

    def test_change_root_no_recreate(self):
        old_containers = self.run_up(self.cfg)

        self.cfg['db']['environment'] = {'NEW_VAR': '1'}
        new_containers = self.run_up(
            self.cfg,
            strategy=ConvergenceStrategy.never)

        self.assertEqual(new_containers - old_containers, set())

    def test_service_removed_while_down(self):
        next_cfg = {
            'web': {
                'image': 'busybox:latest',
                'command': 'tail -f /dev/null',
            },
            'nginx': self.cfg['nginx'],
        }

        containers = self.run_up(self.cfg)
        self.assertEqual(len(containers), 3)

        project = self.make_project(self.cfg)
        project.stop(timeout=1)

        containers = self.run_up(next_cfg)
        self.assertEqual(len(containers), 2)

    def test_service_recreated_when_dependency_created(self):
        containers = self.run_up(self.cfg, service_names=['web'], start_deps=False)
        self.assertEqual(len(containers), 1)

        containers = self.run_up(self.cfg)
        self.assertEqual(len(containers), 3)

        web, = [c for c in containers if c.service == 'web']
        nginx, = [c for c in containers if c.service == 'nginx']

        self.assertEqual(set(get_links(web)), {'composetest_db_1', 'db', 'db_1'})
        self.assertEqual(set(get_links(nginx)), {'composetest_web_1', 'web', 'web_1'})


class ServiceStateTest(DockerClientTestCase):
    """Test cases for Service.convergence_plan."""

    def test_trigger_create(self):
        web = self.create_service('web')
        self.assertEqual(('create', []), web.convergence_plan())

    def test_trigger_noop(self):
        web = self.create_service('web')
        container = web.create_container()
        web.start()

        web = self.create_service('web')
        self.assertEqual(('noop', [container]), web.convergence_plan())

    def test_trigger_start(self):
        options = dict(command=["top"])

        web = self.create_service('web', **options)
        web.scale(2)

        containers = web.containers(stopped=True)
        containers[0].stop()
        containers[0].inspect()

        self.assertEqual([c.is_running for c in containers], [False, True])

        self.assertEqual(
            ('start', containers[0:1]),
            web.convergence_plan(),
        )

    def test_trigger_recreate_with_config_change(self):
        web = self.create_service('web', command=["top"])
        container = web.create_container()

        web = self.create_service('web', command=["top", "-d", "1"])
        self.assertEqual(('recreate', [container]), web.convergence_plan())

    def test_trigger_recreate_with_nonexistent_image_tag(self):
        web = self.create_service('web', image="busybox:latest")
        container = web.create_container()

        web = self.create_service('web', image="nonexistent-image")
        self.assertEqual(('recreate', [container]), web.convergence_plan())

    def test_trigger_recreate_with_image_change(self):
        repo = 'composetest_myimage'
        tag = 'latest'
        image = '{}:{}'.format(repo, tag)

        image_id = self.client.images(name='busybox')[0]['Id']
        self.client.tag(image_id, repository=repo, tag=tag)
        self.addCleanup(self.client.remove_image, image)

        web = self.create_service('web', image=image)
        container = web.create_container()

        # update the image
        c = self.client.create_container(image, ['touch', '/hello.txt'])
        self.client.commit(c, repository=repo, tag=tag)
        self.client.remove_container(c)

        web = self.create_service('web', image=image)
        self.assertEqual(('recreate', [container]), web.convergence_plan())
    def test_trigger_recreate_with_build(self):
        context = py.test.ensuretemp('test_trigger_recreate_with_build')
        self.addCleanup(context.remove)

        base_image = "FROM busybox\nLABEL com.docker.compose.test_image=true\n"
        dockerfile = context.join('Dockerfile')
        dockerfile.write(base_image)

        web = self.create_service('web', build={'context': str(context)})
        container = web.create_container()

        dockerfile.write(base_image + 'CMD echo hello world\n')
        web.build()

        web = self.create_service('web', build={'context': str(context)})
        self.assertEqual(('recreate', [container]), web.convergence_plan())

    def test_image_changed_to_build(self):
        context = py.test.ensuretemp('test_image_changed_to_build')
        self.addCleanup(context.remove)
        context.join('Dockerfile').write("""
FROM busybox
LABEL com.docker.compose.test_image=true
""")

        web = self.create_service('web', image='busybox')
        container = web.create_container()

        web = self.create_service('web', build={'context': str(context)})
        plan = web.convergence_plan()
        self.assertEqual(('recreate', [container]), plan)
        containers = web.execute_convergence_plan(plan)
        self.assertEqual(len(containers), 1)

compose-1.8.0/tests/integration/testcases.py

from __future__ import absolute_import
from __future__ import unicode_literals

import functools
import os

from docker.utils import version_lt
from pytest import skip

from .. import unittest
from compose.cli.docker_client import docker_client
from compose.config.config import resolve_environment
from compose.config.config import V1
from compose.config.config import V2_0
from compose.config.environment import Environment
from compose.const import API_VERSIONS
from compose.const import LABEL_PROJECT
from compose.progress_stream import stream_output
from compose.service import Service


def pull_busybox(client):
    client.pull('busybox:latest', stream=False)


def get_links(container):
    links = container.get('HostConfig.Links') or []

    def format_link(link):
        _, alias = link.split(':')
        return alias.split('/')[-1]

    return [format_link(link) for link in links]


def engine_version_too_low_for_v2():
    if 'DOCKER_VERSION' not in os.environ:
        return False
    version = os.environ['DOCKER_VERSION'].partition('-')[0]
    return version_lt(version, '1.10')


def v2_only():
    def decorator(f):
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            if engine_version_too_low_for_v2():
                skip("Engine version is too low")
                return
            return f(self, *args, **kwargs)
        return wrapper
    return decorator


class DockerClientTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        if engine_version_too_low_for_v2():
            version = API_VERSIONS[V1]
        else:
            version = API_VERSIONS[V2_0]
        cls.client = docker_client(Environment(), version)

    def tearDown(self):
        for c in self.client.containers(
                all=True,
                filters={'label': '%s=composetest' % LABEL_PROJECT}):
            self.client.remove_container(c['Id'], force=True)
        for i in self.client.images(
                filters={'label': 'com.docker.compose.test_image'}):
            self.client.remove_image(i)
        volumes = self.client.volumes().get('Volumes') or []
        for v in volumes:
            if 'composetest_' in v['Name']:
                self.client.remove_volume(v['Name'])
        networks = self.client.networks()
        for n in networks:
            if 'composetest_' in n['Name']:
                self.client.remove_network(n['Name'])

    def create_service(self, name, **kwargs):
        if 'image' not in kwargs and 'build' not in kwargs:
            kwargs['image'] = 'busybox:latest'

        if 'command' not in kwargs:
            kwargs['command'] = ["top"]

        kwargs['environment'] = resolve_environment(
            kwargs, Environment.from_env_file(None)
        )
        labels = dict(kwargs.setdefault('labels', {}))
        labels['com.docker.compose.test-name'] = self.id()

        return Service(name, client=self.client, project='composetest', **kwargs)
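    # create_service fills in sensible defaults (busybox:latest running `top`)
    # so individual tests only need to pass the options they assert on, e.g.:
    #
    #   service = self.create_service('web', ports=['8000'])
    #
    # Every service is created under the 'composetest' project, which is how
    # tearDown finds and removes leftover containers, images and volumes.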
    def check_build(self, *args, **kwargs):
        kwargs.setdefault('rm', True)
        build_output = self.client.build(*args, **kwargs)
        stream_output(build_output, open('/dev/null', 'w'))

    def require_api_version(self, minimum):
        api_version = self.client.version()['ApiVersion']
        if version_lt(api_version, minimum):
            skip("API version is too low ({} < {})".format(api_version, minimum))

compose-1.8.0/tests/integration/volume_test.py

from __future__ import absolute_import
from __future__ import unicode_literals

from docker.errors import DockerException

from .testcases import DockerClientTestCase
from compose.volume import Volume


class VolumeTest(DockerClientTestCase):
    def setUp(self):
        self.tmp_volumes = []

    def tearDown(self):
        for volume in self.tmp_volumes:
            try:
                self.client.remove_volume(volume.full_name)
            except DockerException:
                pass

    def create_volume(self, name, driver=None, opts=None, external=None):
        if external and isinstance(external, bool):
            external = name
        vol = Volume(
            self.client, 'composetest', name, driver=driver, driver_opts=opts,
            external_name=external
        )
        self.tmp_volumes.append(vol)
        return vol

    def test_create_volume(self):
        vol = self.create_volume('volume01')
        vol.create()
        info = self.client.inspect_volume(vol.full_name)
        assert info['Name'] == vol.full_name

    def test_recreate_existing_volume(self):
        vol = self.create_volume('volume01')

        vol.create()
        info = self.client.inspect_volume(vol.full_name)
        assert info['Name'] == vol.full_name

        vol.create()
        info = self.client.inspect_volume(vol.full_name)
        assert info['Name'] == vol.full_name

    def test_inspect_volume(self):
        vol = self.create_volume('volume01')
        vol.create()
        info = vol.inspect()
        assert info['Name'] == vol.full_name

    def test_remove_volume(self):
        vol = Volume(self.client, 'composetest', 'volume01')
        vol.create()
        vol.remove()
        volumes = self.client.volumes()['Volumes']
        assert len([v for v in volumes if v['Name'] == vol.full_name]) == 0

    def test_external_volume(self):
        vol = self.create_volume('composetest_volume_ext', external=True)
        assert vol.external is True
        assert vol.full_name == vol.name
        vol.create()
        info = vol.inspect()
        assert info['Name'] == vol.name

    def test_external_aliased_volume(self):
        alias_name = 'composetest_alias01'
        vol = self.create_volume('volume01', external=alias_name)
        assert vol.external is True
        assert vol.full_name == alias_name
        vol.create()
        info = vol.inspect()
        assert info['Name'] == alias_name

    def test_exists(self):
        vol = self.create_volume('volume01')
        assert vol.exists() is False
        vol.create()
        assert vol.exists() is True

    def test_exists_external(self):
        vol = self.create_volume('volume01', external=True)
        assert vol.exists() is False
        vol.create()
        assert vol.exists() is True

    def test_exists_external_aliased(self):
        vol = self.create_volume('volume01', external='composetest_alias01')
        assert vol.exists() is False
        vol.create()
        assert vol.exists() is True

compose-1.8.0/tests/unit/__init__.py

compose-1.8.0/tests/unit/bundle_test.py

from __future__ import absolute_import
from __future__ import unicode_literals

import docker
import mock
import pytest

from compose import bundle
from compose import service
from compose.cli.errors import UserError
from compose.config.config import Config


@pytest.fixture
def mock_service():
    return mock.create_autospec(
        service.Service,
        client=mock.create_autospec(docker.Client),
        options={})
def test_get_image_digest_exists(mock_service):
    mock_service.options['image'] = 'abcd'
    mock_service.image.return_value = {'RepoDigests': ['digest1']}
    digest = bundle.get_image_digest(mock_service)
    assert digest == 'digest1'


def test_get_image_digest_image_uses_digest(mock_service):
    mock_service.options['image'] = image_id = 'redis@sha256:digest'

    digest = bundle.get_image_digest(mock_service)
    assert digest == image_id
    assert not mock_service.image.called


def test_get_image_digest_no_image(mock_service):
    with pytest.raises(UserError) as exc:
        bundle.get_image_digest(service.Service(name='theservice'))

    assert "doesn't define an image tag" in exc.exconly()


def test_push_image_with_saved_digest(mock_service):
    mock_service.options['build'] = '.'
    mock_service.options['image'] = image_id = 'abcd'
    mock_service.push.return_value = expected = 'sha256:thedigest'
    mock_service.image.return_value = {'RepoDigests': ['digest1']}

    digest = bundle.push_image(mock_service)
    assert digest == image_id + '@' + expected

    mock_service.push.assert_called_once_with()
    assert not mock_service.client.push.called


def test_push_image(mock_service):
    mock_service.options['build'] = '.'
    mock_service.options['image'] = image_id = 'abcd'
    mock_service.push.return_value = expected = 'sha256:thedigest'
    mock_service.image.return_value = {'RepoDigests': []}

    digest = bundle.push_image(mock_service)
    assert digest == image_id + '@' + expected

    mock_service.push.assert_called_once_with()
    mock_service.client.pull.assert_called_once_with(digest)


def test_to_bundle():
    image_digests = {'a': 'aaaa', 'b': 'bbbb'}
    services = [
        {'name': 'a', 'build': '.'},
        {'name': 'b', 'build': './b'},
    ]
    config = Config(
        version=2,
        services=services,
        volumes={'special': {}},
        networks={'extra': {}})

    with mock.patch('compose.bundle.log.warn', autospec=True) as mock_log:
        output = bundle.to_bundle(config, image_digests)

    assert mock_log.mock_calls == [
        mock.call("Unsupported top level key 'networks' - ignoring"),
        mock.call("Unsupported top level key 'volumes' - ignoring"),
    ]

    assert output == {
        'Version': '0.1',
        'Services': {
            'a': {'Image': 'aaaa', 'Networks': ['default']},
            'b': {'Image': 'bbbb', 'Networks': ['default']},
        }
    }


def test_convert_service_to_bundle():
    name = 'theservice'
    image_digest = 'thedigest'
    service_dict = {
        'ports': ['80'],
        'expose': ['1234'],
        'networks': {'extra': {}},
        'command': 'foo',
        'entrypoint': 'entry',
        'environment': {'BAZ': 'ENV'},
        'build': '.',
        'working_dir': '/tmp',
        'user': 'root',
        'labels': {'FOO': 'LABEL'},
        'privileged': True,
    }

    with mock.patch('compose.bundle.log.warn', autospec=True) as mock_log:
        config = bundle.convert_service_to_bundle(name, service_dict, image_digest)

    mock_log.assert_called_once_with(
        "Unsupported key 'privileged' in services.theservice - ignoring")

    assert config == {
        'Image': image_digest,
        'Ports': [
            {'Protocol': 'tcp', 'Port': 80},
            {'Protocol': 'tcp', 'Port': 1234},
        ],
        'Networks': ['extra'],
        'Command': ['entry', 'foo'],
        'Env': ['BAZ=ENV'],
        'WorkingDir': '/tmp',
        'User': 'root',
        'Labels': {'FOO': 'LABEL'},
    }


def test_set_command_and_args_none():
    config = {}
    bundle.set_command_and_args(config, [], [])
    assert config == {}


def test_set_command_and_args_from_command():
    config = {}
    bundle.set_command_and_args(config, [], "echo ok")
    assert config == {'Args': ['echo', 'ok']}


def test_set_command_and_args_from_entrypoint():
    config = {}
    bundle.set_command_and_args(config, "echo entry", [])
    assert config == {'Command': ['echo', 'entry']}
"echo entry", ["extra", "arg"]) assert config == {'Command': ['echo', 'entry', "extra", "arg"]} def test_make_service_networks_default(): name = 'theservice' service_dict = {} with mock.patch('compose.bundle.log.warn', autospec=True) as mock_log: networks = bundle.make_service_networks(name, service_dict) assert not mock_log.called assert networks == ['default'] def test_make_service_networks(): name = 'theservice' service_dict = { 'networks': { 'foo': { 'aliases': ['one', 'two'], }, 'bar': {} }, } with mock.patch('compose.bundle.log.warn', autospec=True) as mock_log: networks = bundle.make_service_networks(name, service_dict) mock_log.assert_called_once_with( "Unsupported key 'aliases' in services.theservice.networks.foo - ignoring") assert sorted(networks) == sorted(service_dict['networks']) def test_make_port_specs(): service_dict = { 'expose': ['80', '500/udp'], 'ports': [ '400:80', '222', '127.0.0.1:8001:8001', '127.0.0.1:5000-5001:3000-3001'], } port_specs = bundle.make_port_specs(service_dict) assert port_specs == [ {'Protocol': 'tcp', 'Port': 80}, {'Protocol': 'tcp', 'Port': 222}, {'Protocol': 'tcp', 'Port': 8001}, {'Protocol': 'tcp', 'Port': 3000}, {'Protocol': 'tcp', 'Port': 3001}, {'Protocol': 'udp', 'Port': 500}, ] def test_make_port_spec_with_protocol(): port_spec = bundle.make_port_spec("5000/udp") assert port_spec == {'Protocol': 'udp', 'Port': 5000} def test_make_port_spec_default_protocol(): port_spec = bundle.make_port_spec("50000") assert port_spec == {'Protocol': 'tcp', 'Port': 50000} compose-1.8.0/tests/unit/cli/000077500000000000000000000000001274620702700160645ustar00rootroot00000000000000compose-1.8.0/tests/unit/cli/__init__.py000066400000000000000000000000001274620702700201630ustar00rootroot00000000000000compose-1.8.0/tests/unit/cli/command_test.py000066400000000000000000000050611274620702700211150ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import os import ssl import pytest from compose.cli.command import get_config_path_from_options from compose.cli.command import get_tls_version from compose.config.environment import Environment from compose.const import IS_WINDOWS_PLATFORM from tests import mock class TestGetConfigPathFromOptions(object): def test_path_from_options(self): paths = ['one.yml', 'two.yml'] opts = {'--file': paths} environment = Environment.from_env_file('.') assert get_config_path_from_options('.', opts, environment) == paths def test_single_path_from_env(self): with mock.patch.dict(os.environ): os.environ['COMPOSE_FILE'] = 'one.yml' environment = Environment.from_env_file('.') assert get_config_path_from_options('.', {}, environment) == ['one.yml'] @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix separator') def test_multiple_path_from_env(self): with mock.patch.dict(os.environ): os.environ['COMPOSE_FILE'] = 'one.yml:two.yml' environment = Environment.from_env_file('.') assert get_config_path_from_options( '.', {}, environment ) == ['one.yml', 'two.yml'] @pytest.mark.skipif(not IS_WINDOWS_PLATFORM, reason='windows separator') def test_multiple_path_from_env_windows(self): with mock.patch.dict(os.environ): os.environ['COMPOSE_FILE'] = 'one.yml;two.yml' environment = Environment.from_env_file('.') assert get_config_path_from_options( '.', {}, environment ) == ['one.yml', 'two.yml'] def test_no_path(self): environment = Environment.from_env_file('.') assert not get_config_path_from_options('.', {}, environment) class TestGetTlsVersion(object): def 
class TestGetTlsVersion(object):

    def test_get_tls_version_default(self):
        environment = {}
        assert get_tls_version(environment) is None

    @pytest.mark.skipif(not hasattr(ssl, 'PROTOCOL_TLSv1_2'), reason='TLS v1.2 unsupported')
    def test_get_tls_version_upgrade(self):
        environment = {'COMPOSE_TLS_VERSION': 'TLSv1_2'}
        assert get_tls_version(environment) == ssl.PROTOCOL_TLSv1_2

    def test_get_tls_version_unavailable(self):
        environment = {'COMPOSE_TLS_VERSION': 'TLSv5_5'}
        with mock.patch('compose.cli.command.log') as mock_log:
            tls_version = get_tls_version(environment)

        mock_log.warn.assert_called_once_with(mock.ANY)
        assert tls_version is None

compose-1.8.0/tests/unit/cli/docker_client_test.py

from __future__ import absolute_import
from __future__ import unicode_literals

import os
import platform

import docker
import pytest

import compose
from compose.cli import errors
from compose.cli.docker_client import docker_client
from compose.cli.docker_client import tls_config_from_options
from tests import mock
from tests import unittest


class DockerClientTestCase(unittest.TestCase):

    def test_docker_client_no_home(self):
        with mock.patch.dict(os.environ):
            del os.environ['HOME']
            docker_client(os.environ)

    @mock.patch.dict(os.environ)
    def test_docker_client_with_custom_timeout(self):
        os.environ['COMPOSE_HTTP_TIMEOUT'] = '123'
        client = docker_client(os.environ)
        assert client.timeout == 123

    @mock.patch.dict(os.environ)
    def test_custom_timeout_error(self):
        os.environ['COMPOSE_HTTP_TIMEOUT'] = '123'
        client = docker_client(os.environ)

        with mock.patch('compose.cli.errors.log') as fake_log:
            with pytest.raises(errors.ConnectionError):
                with errors.handle_connection_errors(client):
                    raise errors.RequestsConnectionError(
                        errors.ReadTimeoutError(None, None, None))

        assert fake_log.error.call_count == 1
        assert '123' in fake_log.error.call_args[0][0]

    def test_user_agent(self):
        client = docker_client(os.environ)
        expected = "docker-compose/{0} docker-py/{1} {2}/{3}".format(
            compose.__version__,
            docker.__version__,
            platform.system(),
            platform.release()
        )
        self.assertEqual(client.headers['User-Agent'], expected)


class TLSConfigTestCase(unittest.TestCase):
    ca_cert = 'tests/fixtures/tls/ca.pem'
    client_cert = 'tests/fixtures/tls/cert.pem'
    key = 'tests/fixtures/tls/key.key'

    def test_simple_tls(self):
        options = {'--tls': True}
        result = tls_config_from_options(options)
        assert result is True

    def test_tls_ca_cert(self):
        options = {
            '--tlscacert': self.ca_cert,
            '--tlsverify': True
        }
        result = tls_config_from_options(options)
        assert isinstance(result, docker.tls.TLSConfig)
        assert result.ca_cert == options['--tlscacert']
        assert result.verify is True

    def test_tls_ca_cert_explicit(self):
        options = {
            '--tlscacert': self.ca_cert,
            '--tls': True,
            '--tlsverify': True
        }
        result = tls_config_from_options(options)
        assert isinstance(result, docker.tls.TLSConfig)
        assert result.ca_cert == options['--tlscacert']
        assert result.verify is True

    def test_tls_client_cert(self):
        options = {
            '--tlscert': self.client_cert,
            '--tlskey': self.key
        }
        result = tls_config_from_options(options)
        assert isinstance(result, docker.tls.TLSConfig)
        assert result.cert == (options['--tlscert'], options['--tlskey'])

    def test_tls_client_cert_explicit(self):
        options = {
            '--tlscert': self.client_cert,
            '--tlskey': self.key,
            '--tls': True
        }
        result = tls_config_from_options(options)
        assert isinstance(result, docker.tls.TLSConfig)
        assert result.cert == (options['--tlscert'], options['--tlskey'])
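    # tls_config_from_options mirrors the docker CLI flags: --tlscacert maps
    # to the CA bundle and --tlsverify, while --tlscert/--tlskey become the
    # client certificate pair. A rough equivalent of what the assertions
    # above check, assuming docker-py's TLSConfig keyword names:
    #
    #   docker.tls.TLSConfig(
    #       ca_cert='tests/fixtures/tls/ca.pem',
    #       client_cert=('tests/fixtures/tls/cert.pem',
    #                    'tests/fixtures/tls/key.key'),
    #       verify=True)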
    def test_tls_client_and_ca(self):
        options = {
            '--tlscert': self.client_cert,
            '--tlskey': self.key,
            '--tlsverify': True,
            '--tlscacert': self.ca_cert
        }
        result = tls_config_from_options(options)
        assert isinstance(result, docker.tls.TLSConfig)
        assert result.cert == (options['--tlscert'], options['--tlskey'])
        assert result.ca_cert == options['--tlscacert']
        assert result.verify is True

    def test_tls_client_and_ca_explicit(self):
        options = {
            '--tlscert': self.client_cert,
            '--tlskey': self.key,
            '--tlsverify': True,
            '--tlscacert': self.ca_cert,
            '--tls': True
        }
        result = tls_config_from_options(options)
        assert isinstance(result, docker.tls.TLSConfig)
        assert result.cert == (options['--tlscert'], options['--tlskey'])
        assert result.ca_cert == options['--tlscacert']
        assert result.verify is True

    def test_tls_client_missing_key(self):
        options = {'--tlscert': self.client_cert}
        with pytest.raises(docker.errors.TLSParameterError):
            tls_config_from_options(options)

        options = {'--tlskey': self.key}
        with pytest.raises(docker.errors.TLSParameterError):
            tls_config_from_options(options)

    def test_assert_hostname_explicit_skip(self):
        options = {'--tlscacert': self.ca_cert, '--skip-hostname-check': True}
        result = tls_config_from_options(options)
        assert isinstance(result, docker.tls.TLSConfig)
        assert result.assert_hostname is False

compose-1.8.0/tests/unit/cli/errors_test.py

from __future__ import absolute_import
from __future__ import unicode_literals

import pytest
from docker.errors import APIError
from requests.exceptions import ConnectionError

from compose.cli import errors
from compose.cli.errors import handle_connection_errors
from tests import mock


@pytest.yield_fixture
def mock_logging():
    with mock.patch('compose.cli.errors.log', autospec=True) as mock_log:
        yield mock_log


def patch_call_silently(side_effect):
    return mock.patch(
        'compose.cli.errors.call_silently',
        autospec=True,
        side_effect=side_effect)


class TestHandleConnectionErrors(object):

    def test_generic_connection_error(self, mock_logging):
        with pytest.raises(errors.ConnectionError):
            with patch_call_silently([0, 1]):
                with handle_connection_errors(mock.Mock()):
                    raise ConnectionError()

        _, args, _ = mock_logging.error.mock_calls[0]
        assert "Couldn't connect to Docker daemon at" in args[0]

    def test_api_error_version_mismatch(self, mock_logging):
        with pytest.raises(errors.ConnectionError):
            with handle_connection_errors(mock.Mock(api_version='1.22')):
                raise APIError(None, None, b"client is newer than server")

        _, args, _ = mock_logging.error.mock_calls[0]
        assert "Docker Engine of version 1.10.0 or greater" in args[0]
    def test_api_error_version_other(self, mock_logging):
        msg = b"Something broke!"
        with pytest.raises(errors.ConnectionError):
            with handle_connection_errors(mock.Mock(api_version='1.22')):
                raise APIError(None, None, msg)

        mock_logging.error.assert_called_once_with(msg)

compose-1.8.0/tests/unit/cli/formatter_test.py

from __future__ import absolute_import
from __future__ import unicode_literals

import logging

from compose.cli import colors
from compose.cli.formatter import ConsoleWarningFormatter
from tests import unittest

MESSAGE = 'this is the message'


def makeLogRecord(level):
    return logging.LogRecord('name', level, 'pathname', 0, MESSAGE, (), None)


class ConsoleWarningFormatterTestCase(unittest.TestCase):

    def setUp(self):
        self.formatter = ConsoleWarningFormatter()

    def test_format_warn(self):
        output = self.formatter.format(makeLogRecord(logging.WARN))
        expected = colors.yellow('WARNING') + ': '
        assert output == expected + MESSAGE

    def test_format_error(self):
        output = self.formatter.format(makeLogRecord(logging.ERROR))
        expected = colors.red('ERROR') + ': '
        assert output == expected + MESSAGE

    def test_format_info(self):
        output = self.formatter.format(makeLogRecord(logging.INFO))
        assert output == MESSAGE

compose-1.8.0/tests/unit/cli/log_printer_test.py

from __future__ import absolute_import
from __future__ import unicode_literals

import itertools

import pytest
import six
from six.moves.queue import Queue

from compose.cli.log_printer import build_log_generator
from compose.cli.log_printer import build_log_presenters
from compose.cli.log_printer import build_no_log_generator
from compose.cli.log_printer import consume_queue
from compose.cli.log_printer import QueueItem
from compose.cli.log_printer import wait_on_exit
from compose.cli.log_printer import watch_events
from compose.container import Container
from tests import mock


@pytest.fixture
def output_stream():
    output = six.StringIO()
    output.flush = mock.Mock()
    return output


@pytest.fixture
def mock_container():
    return mock.Mock(spec=Container, name_without_project='web_1')


class TestLogPresenter(object):

    def test_monochrome(self, mock_container):
        presenters = build_log_presenters(['foo', 'bar'], True)
        presenter = next(presenters)
        actual = presenter.present(mock_container, "this line")
        assert actual == "web_1 | this line"

    def test_polychrome(self, mock_container):
        presenters = build_log_presenters(['foo', 'bar'], False)
        presenter = next(presenters)
        actual = presenter.present(mock_container, "this line")
        assert '\033[' in actual


def test_wait_on_exit():
    exit_status = 3
    mock_container = mock.Mock(
        spec=Container,
        name='cname',
        wait=mock.Mock(return_value=exit_status))

    expected = '{} exited with code {}\n'.format(mock_container.name, exit_status)
    assert expected == wait_on_exit(mock_container)


def test_build_no_log_generator(mock_container):
    mock_container.has_api_logs = False
    mock_container.log_driver = 'none'
    output, = build_no_log_generator(mock_container, None)
    assert "WARNING: no logs are available with the 'none' log driver\n" in output
    assert "exited with code" not in output
class TestBuildLogGenerator(object):

    def test_no_log_stream(self, mock_container):
        mock_container.log_stream = None
        mock_container.logs.return_value = iter([b"hello\nworld"])
        log_args = {'follow': True}

        generator = build_log_generator(mock_container, log_args)
        assert next(generator) == "hello\n"
        assert next(generator) == "world"
        mock_container.logs.assert_called_once_with(
            stdout=True,
            stderr=True,
            stream=True,
            **log_args)

    def test_with_log_stream(self, mock_container):
        mock_container.log_stream = iter([b"hello\nworld"])
        log_args = {'follow': True}

        generator = build_log_generator(mock_container, log_args)
        assert next(generator) == "hello\n"
        assert next(generator) == "world"

    def test_unicode(self, mock_container):
        glyph = u'\u2022\n'
        mock_container.log_stream = iter([glyph.encode('utf-8')])

        generator = build_log_generator(mock_container, {})
        assert next(generator) == glyph


@pytest.fixture
def thread_map():
    return {'cid': mock.Mock()}


@pytest.fixture
def mock_presenters():
    return itertools.cycle([mock.Mock()])


class TestWatchEvents(object):

    def test_stop_event(self, thread_map, mock_presenters):
        event_stream = [{'action': 'stop', 'id': 'cid'}]
        watch_events(thread_map, event_stream, mock_presenters, ())
        assert not thread_map

    def test_start_event(self, thread_map, mock_presenters):
        container_id = 'abcd'
        event = {'action': 'start', 'id': container_id, 'container': mock.Mock()}
        event_stream = [event]
        thread_args = 'foo', 'bar'

        with mock.patch(
            'compose.cli.log_printer.build_thread',
            autospec=True
        ) as mock_build_thread:
            watch_events(thread_map, event_stream, mock_presenters, thread_args)
            mock_build_thread.assert_called_once_with(
                event['container'],
                next(mock_presenters),
                *thread_args)
        assert container_id in thread_map

    def test_other_event(self, thread_map, mock_presenters):
        container_id = 'abcd'
        event_stream = [{'action': 'create', 'id': container_id}]
        watch_events(thread_map, event_stream, mock_presenters, ())
        assert container_id not in thread_map


class TestConsumeQueue(object):

    def test_item_is_an_exception(self):

        class Problem(Exception):
            pass

        queue = Queue()
        error = Problem('oops')
        for item in QueueItem.new('a'), QueueItem.new('b'), QueueItem.exception(error):
            queue.put(item)

        generator = consume_queue(queue, False)
        assert next(generator) == 'a'
        assert next(generator) == 'b'
        with pytest.raises(Problem):
            next(generator)

    def test_item_is_stop_without_cascade_stop(self):
        queue = Queue()
        for item in QueueItem.stop(), QueueItem.new('a'), QueueItem.new('b'):
            queue.put(item)

        generator = consume_queue(queue, False)
        assert next(generator) == 'a'
        assert next(generator) == 'b'

    def test_item_is_stop_with_cascade_stop(self):
        queue = Queue()
        for item in QueueItem.stop(), QueueItem.new('a'), QueueItem.new('b'):
            queue.put(item)

        assert list(consume_queue(queue, True)) == []

    def test_item_is_none_when_timeout_is_hit(self):
        queue = Queue()
        generator = consume_queue(queue, False)
        assert next(generator) is None

compose-1.8.0/tests/unit/cli/main_test.py

from __future__ import absolute_import
from __future__ import unicode_literals

import logging

import pytest

from compose import container
from compose.cli.errors import UserError
from compose.cli.formatter import ConsoleWarningFormatter
from compose.cli.main import convergence_strategy_from_opts
from compose.cli.main import filter_containers_to_service_names
from compose.cli.main import setup_console_handler
from compose.service import ConvergenceStrategy
from tests import mock


def mock_container(service, number):
    return mock.create_autospec(
        container.Container,
        service=service,
        number=number,
        name_without_project='{0}_{1}'.format(service, number))


@pytest.fixture
def logging_handler():
    stream = mock.Mock()
    stream.isatty.return_value = True
    return logging.StreamHandler(stream=stream)
class TestCLIMainTestCase(object):

    def test_filter_containers_to_service_names(self):
        containers = [
            mock_container('web', 1),
            mock_container('web', 2),
            mock_container('db', 1),
            mock_container('other', 1),
            mock_container('another', 1),
        ]
        service_names = ['web', 'db']
        actual = filter_containers_to_service_names(containers, service_names)
        assert actual == containers[:3]

    def test_filter_containers_to_service_names_all(self):
        containers = [
            mock_container('web', 1),
            mock_container('db', 1),
            mock_container('other', 1),
        ]
        service_names = []
        actual = filter_containers_to_service_names(containers, service_names)
        assert actual == containers


class TestSetupConsoleHandlerTestCase(object):

    def test_with_tty_verbose(self, logging_handler):
        setup_console_handler(logging_handler, True)
        assert type(logging_handler.formatter) == ConsoleWarningFormatter
        assert '%(name)s' in logging_handler.formatter._fmt
        assert '%(funcName)s' in logging_handler.formatter._fmt

    def test_with_tty_not_verbose(self, logging_handler):
        setup_console_handler(logging_handler, False)
        assert type(logging_handler.formatter) == ConsoleWarningFormatter
        assert '%(name)s' not in logging_handler.formatter._fmt
        assert '%(funcName)s' not in logging_handler.formatter._fmt

    def test_with_not_a_tty(self, logging_handler):
        logging_handler.stream.isatty.return_value = False
        setup_console_handler(logging_handler, False)
        assert type(logging_handler.formatter) == logging.Formatter


class TestConvergeStrategyFromOptsTestCase(object):

    def test_invalid_opts(self):
        options = {'--force-recreate': True, '--no-recreate': True}
        with pytest.raises(UserError):
            convergence_strategy_from_opts(options)

    def test_always(self):
        options = {'--force-recreate': True, '--no-recreate': False}
        assert (
            convergence_strategy_from_opts(options) ==
            ConvergenceStrategy.always
        )

    def test_never(self):
        options = {'--force-recreate': False, '--no-recreate': True}
        assert (
            convergence_strategy_from_opts(options) ==
            ConvergenceStrategy.never
        )

    def test_changed(self):
        options = {'--force-recreate': False, '--no-recreate': False}
        assert (
            convergence_strategy_from_opts(options) ==
            ConvergenceStrategy.changed
        )

compose-1.8.0/tests/unit/cli/verbose_proxy_test.py

from __future__ import absolute_import
from __future__ import unicode_literals

import six

from compose.cli import verbose_proxy
from tests import unittest


class VerboseProxyTestCase(unittest.TestCase):

    def test_format_call(self):
        prefix = '' if six.PY3 else 'u'
        expected = "(%(p)s'arg1', True, key=%(p)s'value')" % dict(p=prefix)
        actual = verbose_proxy.format_call(
            ("arg1", True),
            {'key': 'value'})
        self.assertEqual(expected, actual)

    def test_format_return_sequence(self):
        expected = "(list with 10 items)"
        actual = verbose_proxy.format_return(list(range(10)), 2)
        self.assertEqual(expected, actual)

    def test_format_return(self):
        expected = repr({'Id': 'ok'})
        actual = verbose_proxy.format_return({'Id': 'ok'}, 2)
        self.assertEqual(expected, actual)

    def test_format_return_no_result(self):
        actual = verbose_proxy.format_return(None, 2)
        self.assertEqual(None, actual)
compose-1.8.0/tests/unit/cli_test.py

# encoding: utf-8
from __future__ import absolute_import
from __future__ import unicode_literals

import os
import shutil
import tempfile
from io import StringIO

import docker
import py
import pytest

from .. import mock
from .. import unittest
from ..helpers import build_config
from compose.cli.command import get_project
from compose.cli.command import get_project_name
from compose.cli.docopt_command import NoSuchCommand
from compose.cli.errors import UserError
from compose.cli.main import TopLevelCommand
from compose.const import IS_WINDOWS_PLATFORM
from compose.project import Project


class CLITestCase(unittest.TestCase):

    def test_default_project_name(self):
        test_dir = py._path.local.LocalPath('tests/fixtures/simple-composefile')
        with test_dir.as_cwd():
            project_name = get_project_name('.')
            self.assertEquals('simplecomposefile', project_name)

    def test_project_name_with_explicit_base_dir(self):
        base_dir = 'tests/fixtures/simple-composefile'
        project_name = get_project_name(base_dir)
        self.assertEquals('simplecomposefile', project_name)

    def test_project_name_with_explicit_uppercase_base_dir(self):
        base_dir = 'tests/fixtures/UpperCaseDir'
        project_name = get_project_name(base_dir)
        self.assertEquals('uppercasedir', project_name)

    def test_project_name_with_explicit_project_name(self):
        name = 'explicit-project-name'
        project_name = get_project_name(None, project_name=name)
        self.assertEquals('explicitprojectname', project_name)

    @mock.patch.dict(os.environ)
    def test_project_name_from_environment_new_var(self):
        name = 'namefromenv'
        os.environ['COMPOSE_PROJECT_NAME'] = name
        project_name = get_project_name(None)
        self.assertEquals(project_name, name)

    def test_project_name_with_empty_environment_var(self):
        base_dir = 'tests/fixtures/simple-composefile'
        with mock.patch.dict(os.environ):
            os.environ['COMPOSE_PROJECT_NAME'] = ''
            project_name = get_project_name(base_dir)
        self.assertEquals('simplecomposefile', project_name)

    @mock.patch.dict(os.environ)
    def test_project_name_with_environment_file(self):
        base_dir = tempfile.mkdtemp()
        try:
            name = 'namefromenvfile'
            with open(os.path.join(base_dir, '.env'), 'w') as f:
                f.write('COMPOSE_PROJECT_NAME={}'.format(name))
            project_name = get_project_name(base_dir)
            assert project_name == name

            # Environment has priority over .env file
            os.environ['COMPOSE_PROJECT_NAME'] = 'namefromenv'
            assert get_project_name(base_dir) == os.environ['COMPOSE_PROJECT_NAME']
        finally:
            shutil.rmtree(base_dir)

    def test_get_project(self):
        base_dir = 'tests/fixtures/longer-filename-composefile'
        project = get_project(base_dir)
        self.assertEqual(project.name, 'longerfilenamecomposefile')
        self.assertTrue(project.client)
        self.assertTrue(project.services)

    def test_command_help(self):
        with mock.patch('sys.stdout', new=StringIO()) as fake_stdout:
            TopLevelCommand.help({'COMMAND': 'up'})
        assert "Usage: up" in fake_stdout.getvalue()

    def test_command_help_nonexistent(self):
        with pytest.raises(NoSuchCommand):
            TopLevelCommand.help({'COMMAND': 'nonexistent'})
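    # get_project_name normalizes whatever name it derives (base directory,
    # COMPOSE_PROJECT_NAME, or an explicit project name): the tests above show
    # 'simple-composefile' -> 'simplecomposefile' and 'UpperCaseDir' ->
    # 'uppercasedir'. A rough sketch of that normalization, assuming it keeps
    # only lowercase alphanumerics:
    #
    #   re.sub(r'[^a-z0-9]', '', name.lower())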
    @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason="requires dockerpty")
    @mock.patch('compose.cli.main.RunOperation', autospec=True)
    @mock.patch('compose.cli.main.PseudoTerminal', autospec=True)
    def test_run_interactive_passes_logs_false(self, mock_pseudo_terminal, mock_run_operation):
        mock_client = mock.create_autospec(docker.Client)
        project = Project.from_config(
            name='composetest',
            client=mock_client,
            config_data=build_config({
                'service': {'image': 'busybox'}
            }),
        )
        command = TopLevelCommand(project)

        with pytest.raises(SystemExit):
            command.run({
                'SERVICE': 'service',
                'COMMAND': None,
                '-e': [],
                '--user': None,
                '--no-deps': None,
                '-d': False,
                '-T': None,
                '--entrypoint': None,
                '--service-ports': None,
                '--publish': [],
                '--rm': None,
                '--name': None,
                '--workdir': None,
            })

        _, _, call_kwargs = mock_run_operation.mock_calls[0]
        assert call_kwargs['logs'] is False

    def test_run_service_with_restart_always(self):
        mock_client = mock.create_autospec(docker.Client)
        project = Project.from_config(
            name='composetest',
            client=mock_client,
            config_data=build_config({
                'service': {
                    'image': 'busybox',
                    'restart': 'always',
                }
            }),
        )
        command = TopLevelCommand(project)
        command.run({
            'SERVICE': 'service',
            'COMMAND': None,
            '-e': [],
            '--user': None,
            '--no-deps': None,
            '-d': True,
            '-T': None,
            '--entrypoint': None,
            '--service-ports': None,
            '--publish': [],
            '--rm': None,
            '--name': None,
            '--workdir': None,
        })
        self.assertEquals(
            mock_client.create_host_config.call_args[1]['restart_policy']['Name'],
            'always'
        )

        command = TopLevelCommand(project)
        command.run({
            'SERVICE': 'service',
            'COMMAND': None,
            '-e': [],
            '--user': None,
            '--no-deps': None,
            '-d': True,
            '-T': None,
            '--entrypoint': None,
            '--service-ports': None,
            '--publish': [],
            '--rm': True,
            '--name': None,
            '--workdir': None,
        })
        self.assertFalse(
            mock_client.create_host_config.call_args[1].get('restart_policy')
        )

    def test_command_manual_and_service_ports_together(self):
        project = Project.from_config(
            name='composetest',
            client=None,
            config_data=build_config({
                'service': {'image': 'busybox'},
            }),
        )
        command = TopLevelCommand(project)

        with self.assertRaises(UserError):
            command.run({
                'SERVICE': 'service',
                'COMMAND': None,
                '-e': [],
                '--user': None,
                '--no-deps': None,
                '-d': True,
                '-T': None,
                '--entrypoint': None,
                '--service-ports': True,
                '--publish': ['80:80'],
                '--rm': None,
                '--name': None,
            })

compose-1.8.0/tests/unit/config/__init__.py

compose-1.8.0/tests/unit/config/config_test.py

# encoding: utf-8
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals

import os
import shutil
import tempfile
from operator import itemgetter

import py
import pytest

from ...helpers import build_config_details
from compose.config import config
from compose.config.config import resolve_build_args
from compose.config.config import resolve_environment
from compose.config.config import V1
from compose.config.config import V2_0
from compose.config.environment import Environment
from compose.config.errors import ConfigurationError
from compose.config.errors import VERSION_EXPLANATION
from compose.config.types import VolumeSpec
from compose.const import IS_WINDOWS_PLATFORM
from tests import mock
from tests import unittest

DEFAULT_VERSION = V2_0


def make_service_dict(name, service_dict, working_dir, filename=None):
    """Test helper function to construct a ServiceExtendsResolver
    """
    resolver = config.ServiceExtendsResolver(
        config.ServiceConfig(
            working_dir=working_dir,
            filename=filename,
            name=name,
            config=service_dict),
        config.ConfigFile(filename=filename, config={}),
        environment=Environment.from_env_file(working_dir)
    )
    return config.process_service(resolver.run())


def service_sort(services):
    return sorted(services, key=itemgetter('name'))


class ConfigTest(unittest.TestCase):
'image': 'busybox', 'environment': {'FOO': '1'}, }, { 'name': 'foo', 'image': 'busybox', } ]) ) def test_load_v2(self): config_data = config.load( build_config_details({ 'version': '2', 'services': { 'foo': {'image': 'busybox'}, 'bar': {'image': 'busybox', 'environment': ['FOO=1']}, }, 'volumes': { 'hello': { 'driver': 'default', 'driver_opts': {'beep': 'boop'} } }, 'networks': { 'default': { 'driver': 'bridge', 'driver_opts': {'beep': 'boop'} }, 'with_ipam': { 'ipam': { 'driver': 'default', 'config': [ {'subnet': '172.28.0.0/16'} ] } } } }, 'working_dir', 'filename.yml') ) service_dicts = config_data.services volume_dict = config_data.volumes networks_dict = config_data.networks self.assertEqual( service_sort(service_dicts), service_sort([ { 'name': 'bar', 'image': 'busybox', 'environment': {'FOO': '1'}, }, { 'name': 'foo', 'image': 'busybox', } ]) ) self.assertEqual(volume_dict, { 'hello': { 'driver': 'default', 'driver_opts': {'beep': 'boop'} } }) self.assertEqual(networks_dict, { 'default': { 'driver': 'bridge', 'driver_opts': {'beep': 'boop'} }, 'with_ipam': { 'ipam': { 'driver': 'default', 'config': [ {'subnet': '172.28.0.0/16'} ] } } }) def test_valid_versions(self): for version in ['2', '2.0']: cfg = config.load(build_config_details({'version': version})) assert cfg.version == V2_0 def test_v1_file_version(self): cfg = config.load(build_config_details({'web': {'image': 'busybox'}})) assert cfg.version == V1 assert list(s['name'] for s in cfg.services) == ['web'] cfg = config.load(build_config_details({'version': {'image': 'busybox'}})) assert cfg.version == V1 assert list(s['name'] for s in cfg.services) == ['version'] def test_wrong_version_type(self): for version in [None, 1, 2, 2.0]: with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( {'version': version}, filename='filename.yml', ) ) assert 'Version in "filename.yml" is invalid - it should be a string.' 
\ in excinfo.exconly() def test_unsupported_version(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( {'version': '2.1'}, filename='filename.yml', ) ) assert 'Version in "filename.yml" is unsupported' in excinfo.exconly() assert VERSION_EXPLANATION in excinfo.exconly() def test_version_1_is_invalid(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'version': '1', 'web': {'image': 'busybox'}, }, filename='filename.yml', ) ) assert 'Version in "filename.yml" is invalid' in excinfo.exconly() assert VERSION_EXPLANATION in excinfo.exconly() def test_v1_file_with_version_is_invalid(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'version': '2', 'web': {'image': 'busybox'}, }, filename='filename.yml', ) ) assert 'Additional properties are not allowed' in excinfo.exconly() assert VERSION_EXPLANATION in excinfo.exconly() def test_named_volume_config_empty(self): config_details = build_config_details({ 'version': '2', 'services': { 'simple': {'image': 'busybox'} }, 'volumes': { 'simple': None, 'other': {}, } }) config_result = config.load(config_details) volumes = config_result.volumes assert 'simple' in volumes assert volumes['simple'] == {} assert volumes['other'] == {} def test_named_volume_numeric_driver_opt(self): config_details = build_config_details({ 'version': '2', 'services': { 'simple': {'image': 'busybox'} }, 'volumes': { 'simple': {'driver_opts': {'size': 42}}, } }) cfg = config.load(config_details) assert cfg.volumes['simple']['driver_opts']['size'] == '42' def test_volume_invalid_driver_opt(self): config_details = build_config_details({ 'version': '2', 'services': { 'simple': {'image': 'busybox'} }, 'volumes': { 'simple': {'driver_opts': {'size': True}}, } }) with pytest.raises(ConfigurationError) as exc: config.load(config_details) assert 'driver_opts.size contains an invalid type' in exc.exconly() def test_named_volume_invalid_type_list(self): config_details = build_config_details({ 'version': '2', 'services': { 'simple': {'image': 'busybox'} }, 'volumes': [] }) with pytest.raises(ConfigurationError) as exc: config.load(config_details) assert "volume must be a mapping, not an array" in exc.exconly() def test_networks_invalid_type_list(self): config_details = build_config_details({ 'version': '2', 'services': { 'simple': {'image': 'busybox'} }, 'networks': [] }) with pytest.raises(ConfigurationError) as exc: config.load(config_details) assert "network must be a mapping, not an array" in exc.exconly() def test_load_service_with_name_version(self): with mock.patch('compose.config.config.log') as mock_logging: config_data = config.load( build_config_details({ 'version': { 'image': 'busybox' } }, 'working_dir', 'filename.yml') ) assert 'Unexpected type for "version" key in "filename.yml"' \ in mock_logging.warn.call_args[0][0] service_dicts = config_data.services self.assertEqual( service_sort(service_dicts), service_sort([ { 'name': 'version', 'image': 'busybox', } ]) ) def test_load_throws_error_when_not_dict(self): with self.assertRaises(ConfigurationError): config.load( build_config_details( {'web': 'busybox:latest'}, 'working_dir', 'filename.yml' ) ) def test_load_throws_error_when_not_dict_v2(self): with self.assertRaises(ConfigurationError): config.load( build_config_details( {'version': '2', 'services': {'web': 'busybox:latest'}}, 'working_dir', 'filename.yml' ) ) def test_load_throws_error_with_invalid_network_fields(self): with 
self.assertRaises(ConfigurationError): config.load( build_config_details({ 'version': '2', 'services': {'web': 'busybox:latest'}, 'networks': { 'invalid': {'foo', 'bar'} } }, 'working_dir', 'filename.yml') ) def test_load_config_invalid_service_names(self): for invalid_name in ['?not?allowed', ' ', '', '!', '/', '\xe2']: with pytest.raises(ConfigurationError) as exc: config.load(build_config_details( {invalid_name: {'image': 'busybox'}})) assert 'Invalid service name \'%s\'' % invalid_name in exc.exconly() def test_load_config_invalid_service_names_v2(self): for invalid_name in ['?not?allowed', ' ', '', '!', '/', '\xe2']: with pytest.raises(ConfigurationError) as exc: config.load(build_config_details( { 'version': '2', 'services': {invalid_name: {'image': 'busybox'}}, })) assert 'Invalid service name \'%s\'' % invalid_name in exc.exconly() def test_load_with_invalid_field_name(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details( { 'version': '2', 'services': { 'web': {'image': 'busybox', 'name': 'bogus'}, } }, 'working_dir', 'filename.yml', )) assert "Unsupported config option for services.web: 'name'" in exc.exconly() def test_load_with_invalid_field_name_v1(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details( { 'web': {'image': 'busybox', 'name': 'bogus'}, }, 'working_dir', 'filename.yml', )) assert "Unsupported config option for web: 'name'" in exc.exconly() def test_load_invalid_service_definition(self): config_details = build_config_details( {'web': 'wrong'}, 'working_dir', 'filename.yml') with pytest.raises(ConfigurationError) as exc: config.load(config_details) assert "service 'web' must be a mapping not a string." in exc.exconly() def test_load_with_empty_build_args(self): config_details = build_config_details( { 'version': '2', 'services': { 'web': { 'build': { 'context': '.', 'args': None, }, }, }, } ) with pytest.raises(ConfigurationError) as exc: config.load(config_details) assert ( "services.web.build.args contains an invalid type, it should be an " "object, or an array" in exc.exconly() ) def test_config_integer_service_name_raise_validation_error(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( {1: {'image': 'busybox'}}, 'working_dir', 'filename.yml' ) ) assert ( "In file 'filename.yml', the service name 1 must be a quoted string, i.e. '1'" in excinfo.exconly() ) def test_config_integer_service_name_raise_validation_error_v2(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'version': '2', 'services': {1: {'image': 'busybox'}} }, 'working_dir', 'filename.yml' ) ) assert ( "In file 'filename.yml', the service name 1 must be a quoted string, i.e. '1'." 
            in excinfo.exconly()
        )

    def test_load_with_multiple_files_v1(self):
        base_file = config.ConfigFile(
            'base.yaml',
            {
                'web': {
                    'image': 'example/web',
                    'links': ['db'],
                },
                'db': {
                    'image': 'example/db',
                },
            })
        override_file = config.ConfigFile(
            'override.yaml',
            {
                'web': {
                    'build': '/',
                    'volumes': ['/home/user/project:/code'],
                },
            })
        details = config.ConfigDetails('.', [base_file, override_file])

        service_dicts = config.load(details).services
        expected = [
            {
                'name': 'web',
                'build': {'context': os.path.abspath('/')},
                'volumes': [VolumeSpec.parse('/home/user/project:/code')],
                'links': ['db'],
            },
            {
                'name': 'db',
                'image': 'example/db',
            },
        ]
        assert service_sort(service_dicts) == service_sort(expected)

    def test_load_with_multiple_files_and_empty_override(self):
        base_file = config.ConfigFile(
            'base.yml',
            {'web': {'image': 'example/web'}})
        override_file = config.ConfigFile('override.yml', None)
        details = config.ConfigDetails('.', [base_file, override_file])

        with pytest.raises(ConfigurationError) as exc:
            config.load(details)
        error_msg = "Top level object in 'override.yml' needs to be an object"
        assert error_msg in exc.exconly()

    def test_load_with_multiple_files_and_empty_override_v2(self):
        base_file = config.ConfigFile(
            'base.yml',
            {'version': '2', 'services': {'web': {'image': 'example/web'}}})
        override_file = config.ConfigFile('override.yml', None)
        details = config.ConfigDetails('.', [base_file, override_file])

        with pytest.raises(ConfigurationError) as exc:
            config.load(details)
        error_msg = "Top level object in 'override.yml' needs to be an object"
        assert error_msg in exc.exconly()

    def test_load_with_multiple_files_and_empty_base(self):
        base_file = config.ConfigFile('base.yml', None)
        override_file = config.ConfigFile(
            'override.yml',
            {'web': {'image': 'example/web'}})
        details = config.ConfigDetails('.', [base_file, override_file])

        with pytest.raises(ConfigurationError) as exc:
            config.load(details)
        assert "Top level object in 'base.yml' needs to be an object" in exc.exconly()

    def test_load_with_multiple_files_and_empty_base_v2(self):
        base_file = config.ConfigFile('base.yml', None)
        override_file = config.ConfigFile(
            'override.yml',
            {'version': '2', 'services': {'web': {'image': 'example/web'}}}
        )
        details = config.ConfigDetails('.', [base_file, override_file])

        with pytest.raises(ConfigurationError) as exc:
            config.load(details)
        assert "Top level object in 'base.yml' needs to be an object" in exc.exconly()

    def test_load_with_multiple_files_and_extends_in_override_file(self):
        base_file = config.ConfigFile(
            'base.yaml',
            {
                'web': {'image': 'example/web'},
            })
        override_file = config.ConfigFile(
            'override.yaml',
            {
                'web': {
                    'extends': {
                        'file': 'common.yml',
                        'service': 'base',
                    },
                    'volumes': ['/home/user/project:/code'],
                },
            })
        details = config.ConfigDetails('.', [base_file, override_file])

        tmpdir = py.test.ensuretemp('config_test')
        self.addCleanup(tmpdir.remove)
        tmpdir.join('common.yml').write("""
base:
  labels: ['label=one']
""")
        with tmpdir.as_cwd():
            service_dicts = config.load(details).services

        expected = [
            {
                'name': 'web',
                'image': 'example/web',
                'volumes': [VolumeSpec.parse('/home/user/project:/code')],
                'labels': {'label': 'one'},
            },
        ]
        self.assertEqual(service_sort(service_dicts), service_sort(expected))

    def test_load_with_multiple_files_and_invalid_override(self):
        base_file = config.ConfigFile(
            'base.yaml',
            {'web': {'image': 'example/web'}})
        override_file = config.ConfigFile(
            'override.yaml',
            {'bogus': 'thing'})
        details = config.ConfigDetails('.', [base_file, override_file])

        with pytest.raises(ConfigurationError) as
exc: config.load(details) assert "service 'bogus' must be a mapping not a string." in exc.exconly() assert "In file 'override.yaml'" in exc.exconly() def test_load_sorts_in_dependency_order(self): config_details = build_config_details({ 'web': { 'image': 'busybox:latest', 'links': ['db'], }, 'db': { 'image': 'busybox:latest', 'volumes_from': ['volume:ro'] }, 'volume': { 'image': 'busybox:latest', 'volumes': ['/tmp'], } }) services = config.load(config_details).services assert services[0]['name'] == 'volume' assert services[1]['name'] == 'db' assert services[2]['name'] == 'web' def test_config_build_configuration(self): service = config.load( build_config_details( {'web': { 'build': '.', 'dockerfile': 'Dockerfile-alt' }}, 'tests/fixtures/extends', 'filename.yml' ) ).services self.assertTrue('context' in service[0]['build']) self.assertEqual(service[0]['build']['dockerfile'], 'Dockerfile-alt') def test_config_build_configuration_v2(self): # service.dockerfile is invalid in v2 with self.assertRaises(ConfigurationError): config.load( build_config_details( { 'version': '2', 'services': { 'web': { 'build': '.', 'dockerfile': 'Dockerfile-alt' } } }, 'tests/fixtures/extends', 'filename.yml' ) ) service = config.load( build_config_details({ 'version': '2', 'services': { 'web': { 'build': '.' } } }, 'tests/fixtures/extends', 'filename.yml') ).services[0] self.assertTrue('context' in service['build']) service = config.load( build_config_details( { 'version': '2', 'services': { 'web': { 'build': { 'context': '.', 'dockerfile': 'Dockerfile-alt' } } } }, 'tests/fixtures/extends', 'filename.yml' ) ).services self.assertTrue('context' in service[0]['build']) self.assertEqual(service[0]['build']['dockerfile'], 'Dockerfile-alt') def test_load_with_buildargs(self): service = config.load( build_config_details( { 'version': '2', 'services': { 'web': { 'build': { 'context': '.', 'dockerfile': 'Dockerfile-alt', 'args': { 'opt1': 42, 'opt2': 'foobar' } } } } }, 'tests/fixtures/extends', 'filename.yml' ) ).services[0] assert 'args' in service['build'] assert 'opt1' in service['build']['args'] assert isinstance(service['build']['args']['opt1'], str) assert service['build']['args']['opt1'] == '42' assert service['build']['args']['opt2'] == 'foobar' def test_build_args_allow_empty_properties(self): service = config.load( build_config_details( { 'version': '2', 'services': { 'web': { 'build': { 'context': '.', 'dockerfile': 'Dockerfile-alt', 'args': { 'foo': None } } } } }, 'tests/fixtures/extends', 'filename.yml' ) ).services[0] assert 'args' in service['build'] assert 'foo' in service['build']['args'] assert service['build']['args']['foo'] == '' # If build argument is None then it will be converted to the empty # string. Make sure that int zero kept as it is, i.e. 
not converted to # the empty string def test_build_args_check_zero_preserved(self): service = config.load( build_config_details( { 'version': '2', 'services': { 'web': { 'build': { 'context': '.', 'dockerfile': 'Dockerfile-alt', 'args': { 'foo': 0 } } } } }, 'tests/fixtures/extends', 'filename.yml' ) ).services[0] assert 'args' in service['build'] assert 'foo' in service['build']['args'] assert service['build']['args']['foo'] == '0' def test_load_with_multiple_files_mismatched_networks_format(self): base_file = config.ConfigFile( 'base.yaml', { 'version': '2', 'services': { 'web': { 'image': 'example/web', 'networks': { 'foobar': {'aliases': ['foo', 'bar']} } } }, 'networks': {'foobar': {}, 'baz': {}} } ) override_file = config.ConfigFile( 'override.yaml', { 'version': '2', 'services': { 'web': { 'networks': ['baz'] } } } ) details = config.ConfigDetails('.', [base_file, override_file]) web_service = config.load(details).services[0] assert web_service['networks'] == { 'foobar': {'aliases': ['foo', 'bar']}, 'baz': None } def test_load_with_multiple_files_v2(self): base_file = config.ConfigFile( 'base.yaml', { 'version': '2', 'services': { 'web': { 'image': 'example/web', 'depends_on': ['db'], }, 'db': { 'image': 'example/db', } }, }) override_file = config.ConfigFile( 'override.yaml', { 'version': '2', 'services': { 'web': { 'build': '/', 'volumes': ['/home/user/project:/code'], 'depends_on': ['other'], }, 'other': { 'image': 'example/other', } } }) details = config.ConfigDetails('.', [base_file, override_file]) service_dicts = config.load(details).services expected = [ { 'name': 'web', 'build': {'context': os.path.abspath('/')}, 'image': 'example/web', 'volumes': [VolumeSpec.parse('/home/user/project:/code')], 'depends_on': ['db', 'other'], }, { 'name': 'db', 'image': 'example/db', }, { 'name': 'other', 'image': 'example/other', }, ] assert service_sort(service_dicts) == service_sort(expected) def test_undeclared_volume_v2(self): base_file = config.ConfigFile( 'base.yaml', { 'version': '2', 'services': { 'web': { 'image': 'busybox:latest', 'volumes': ['data0028:/data:ro'], }, }, } ) details = config.ConfigDetails('.', [base_file]) with self.assertRaises(ConfigurationError): config.load(details) base_file = config.ConfigFile( 'base.yaml', { 'version': '2', 'services': { 'web': { 'image': 'busybox:latest', 'volumes': ['./data0028:/data:ro'], }, }, } ) details = config.ConfigDetails('.', [base_file]) config_data = config.load(details) volume = config_data.services[0].get('volumes')[0] assert not volume.is_named_volume def test_undeclared_volume_v1(self): base_file = config.ConfigFile( 'base.yaml', { 'web': { 'image': 'busybox:latest', 'volumes': ['data0028:/data:ro'], }, } ) details = config.ConfigDetails('.', [base_file]) config_data = config.load(details) volume = config_data.services[0].get('volumes')[0] assert volume.external == 'data0028' assert volume.is_named_volume def test_config_valid_service_names(self): for valid_name in ['_', '-', '.__.', '_what-up.', 'what_.up----', 'whatup']: services = config.load( build_config_details( {valid_name: {'image': 'busybox'}}, 'tests/fixtures/extends', 'common.yml')).services assert services[0]['name'] == valid_name def test_config_hint(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'foo': {'image': 'busybox', 'privilige': 'something'}, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "(did you mean 'privileged'?)" in excinfo.exconly() def test_load_errors_on_uppercase_with_no_image(self): with 
pytest.raises(ConfigurationError) as exc: config.load(build_config_details({ 'Foo': {'build': '.'}, }, 'tests/fixtures/build-ctx')) assert "Service 'Foo' contains uppercase characters" in exc.exconly() def test_invalid_config_v1(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'foo': {'image': 1}, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "foo.image contains an invalid type, it should be a string" \ in excinfo.exconly() def test_invalid_config_v2(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'version': '2', 'services': { 'foo': {'image': 1}, }, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "services.foo.image contains an invalid type, it should be a string" \ in excinfo.exconly() def test_invalid_config_build_and_image_specified_v1(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'foo': {'image': 'busybox', 'build': '.'}, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "foo has both an image and build path specified." in excinfo.exconly() def test_invalid_config_type_should_be_an_array(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'foo': {'image': 'busybox', 'links': 'an_link'}, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "foo.links contains an invalid type, it should be an array" \ in excinfo.exconly() def test_invalid_config_not_a_dictionary(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( ['foo', 'lol'], 'tests/fixtures/extends', 'filename.yml' ) ) assert "Top level object in 'filename.yml' needs to be an object" \ in excinfo.exconly() def test_invalid_config_not_unique_items(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'web': {'build': '.', 'devices': ['/dev/foo:/dev/foo', '/dev/foo:/dev/foo']} }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "has non-unique elements" in excinfo.exconly() def test_invalid_list_of_strings_format(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'web': {'build': '.', 'command': [1]} }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "web.command contains 1, which is an invalid type, it should be a string" \ in excinfo.exconly() def test_load_config_dockerfile_without_build_raises_error_v1(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details({ 'web': { 'image': 'busybox', 'dockerfile': 'Dockerfile.alt' } })) assert "web has both an image and alternate Dockerfile." 
in exc.exconly() def test_config_extra_hosts_string_raises_validation_error(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( {'web': { 'image': 'busybox', 'extra_hosts': 'somehost:162.242.195.82' }}, 'working_dir', 'filename.yml' ) ) assert "web.extra_hosts contains an invalid type" \ in excinfo.exconly() def test_config_extra_hosts_list_of_dicts_validation_error(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( {'web': { 'image': 'busybox', 'extra_hosts': [ {'somehost': '162.242.195.82'}, {'otherhost': '50.31.209.229'} ] }}, 'working_dir', 'filename.yml' ) ) assert "web.extra_hosts contains {\"somehost\": \"162.242.195.82\"}, " \ "which is an invalid type, it should be a string" \ in excinfo.exconly() def test_config_ulimits_invalid_keys_validation_error(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details( { 'web': { 'image': 'busybox', 'ulimits': { 'nofile': { "not_soft_or_hard": 100, "soft": 10000, "hard": 20000, } } } }, 'working_dir', 'filename.yml')) assert "web.ulimits.nofile contains unsupported option: 'not_soft_or_hard'" \ in exc.exconly() def test_config_ulimits_required_keys_validation_error(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details( { 'web': { 'image': 'busybox', 'ulimits': {'nofile': {"soft": 10000}} } }, 'working_dir', 'filename.yml')) assert "web.ulimits.nofile" in exc.exconly() assert "'hard' is a required property" in exc.exconly() def test_config_ulimits_soft_greater_than_hard_error(self): expected = "'soft' value can not be greater than 'hard' value" with pytest.raises(ConfigurationError) as exc: config.load(build_config_details( { 'web': { 'image': 'busybox', 'ulimits': { 'nofile': {"soft": 10000, "hard": 1000} } } }, 'working_dir', 'filename.yml')) assert expected in exc.exconly() def test_valid_config_which_allows_two_type_definitions(self): expose_values = [["8000"], [8000]] for expose in expose_values: service = config.load( build_config_details( {'web': { 'image': 'busybox', 'expose': expose }}, 'working_dir', 'filename.yml' ) ).services self.assertEqual(service[0]['expose'], expose) def test_valid_config_oneof_string_or_list(self): entrypoint_values = [["sh"], "sh"] for entrypoint in entrypoint_values: service = config.load( build_config_details( {'web': { 'image': 'busybox', 'entrypoint': entrypoint }}, 'working_dir', 'filename.yml' ) ).services self.assertEqual(service[0]['entrypoint'], entrypoint) def test_logs_warning_for_boolean_in_environment(self): config_details = build_config_details({ 'web': { 'image': 'busybox', 'environment': {'SHOW_STUFF': True} } }) with pytest.raises(ConfigurationError) as exc: config.load(config_details) assert "contains true, which is an invalid type" in exc.exconly() def test_config_valid_environment_dict_key_contains_dashes(self): services = config.load( build_config_details( {'web': { 'image': 'busybox', 'environment': {'SPRING_JPA_HIBERNATE_DDL-AUTO': 'none'} }}, 'working_dir', 'filename.yml' ) ).services self.assertEqual(services[0]['environment']['SPRING_JPA_HIBERNATE_DDL-AUTO'], 'none') def test_load_yaml_with_yaml_error(self): tmpdir = py.test.ensuretemp('invalid_yaml_test') self.addCleanup(tmpdir.remove) invalid_yaml_file = tmpdir.join('docker-compose.yml') invalid_yaml_file.write(""" web: this is bogus: ok: what """) with pytest.raises(ConfigurationError) as exc: config.load_yaml(str(invalid_yaml_file)) assert 'line 3, column 32' in exc.exconly() def 
test_validate_extra_hosts_invalid(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details({ 'web': { 'image': 'alpine', 'extra_hosts': "www.example.com: 192.168.0.17", } })) assert "web.extra_hosts contains an invalid type" in exc.exconly() def test_validate_extra_hosts_invalid_list(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details({ 'web': { 'image': 'alpine', 'extra_hosts': [ {'www.example.com': '192.168.0.17'}, {'api.example.com': '192.168.0.18'} ], } })) assert "which is an invalid type" in exc.exconly() def test_normalize_dns_options(self): actual = config.load(build_config_details({ 'web': { 'image': 'alpine', 'dns': '8.8.8.8', 'dns_search': 'domain.local', } })) assert actual.services == [ { 'name': 'web', 'image': 'alpine', 'dns': ['8.8.8.8'], 'dns_search': ['domain.local'], } ] def test_tmpfs_option(self): actual = config.load(build_config_details({ 'version': '2', 'services': { 'web': { 'image': 'alpine', 'tmpfs': '/run', } } })) assert actual.services == [ { 'name': 'web', 'image': 'alpine', 'tmpfs': ['/run'], } ] def test_merge_service_dicts_from_files_with_extends_in_base(self): base = { 'volumes': ['.:/app'], 'extends': {'service': 'app'} } override = { 'image': 'alpine:edge', } actual = config.merge_service_dicts_from_files( base, override, DEFAULT_VERSION) assert actual == { 'image': 'alpine:edge', 'volumes': ['.:/app'], 'extends': {'service': 'app'} } def test_merge_service_dicts_from_files_with_extends_in_override(self): base = { 'volumes': ['.:/app'], 'extends': {'service': 'app'} } override = { 'image': 'alpine:edge', 'extends': {'service': 'foo'} } actual = config.merge_service_dicts_from_files( base, override, DEFAULT_VERSION) assert actual == { 'image': 'alpine:edge', 'volumes': ['.:/app'], 'extends': {'service': 'foo'} } def test_merge_build_args(self): base = { 'build': { 'context': '.', 'args': { 'ONE': '1', 'TWO': '2', }, } } override = { 'build': { 'args': { 'TWO': 'dos', 'THREE': '3', }, } } actual = config.merge_service_dicts( base, override, DEFAULT_VERSION) assert actual == { 'build': { 'context': '.', 'args': { 'ONE': '1', 'TWO': 'dos', 'THREE': '3', }, } } def test_merge_logging_v1(self): base = { 'image': 'alpine:edge', 'log_driver': 'something', 'log_opt': {'foo': 'three'}, } override = { 'image': 'alpine:edge', 'command': 'true', } actual = config.merge_service_dicts(base, override, V1) assert actual == { 'image': 'alpine:edge', 'log_driver': 'something', 'log_opt': {'foo': 'three'}, 'command': 'true', } def test_external_volume_config(self): config_details = build_config_details({ 'version': '2', 'services': { 'bogus': {'image': 'busybox'} }, 'volumes': { 'ext': {'external': True}, 'ext2': {'external': {'name': 'aliased'}} } }) config_result = config.load(config_details) volumes = config_result.volumes assert 'ext' in volumes assert volumes['ext']['external'] is True assert 'ext2' in volumes assert volumes['ext2']['external']['name'] == 'aliased' def test_external_volume_invalid_config(self): config_details = build_config_details({ 'version': '2', 'services': { 'bogus': {'image': 'busybox'} }, 'volumes': { 'ext': {'external': True, 'driver': 'foo'} } }) with pytest.raises(ConfigurationError): config.load(config_details) def test_depends_on_orders_services(self): config_details = build_config_details({ 'version': '2', 'services': { 'one': {'image': 'busybox', 'depends_on': ['three', 'two']}, 'two': {'image': 'busybox', 'depends_on': ['three']}, 'three': {'image': 'busybox'}, }, }) 
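        # Services are expected to load dependency-first: 'three' (no
        # dependencies), then 'two', then 'one'; this mirrors the ordering
        # that test_load_sorts_in_dependency_order checks for links and
        # volumes_from.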
actual = config.load(config_details) assert ( [service['name'] for service in actual.services] == ['three', 'two', 'one'] ) def test_depends_on_unknown_service_errors(self): config_details = build_config_details({ 'version': '2', 'services': { 'one': {'image': 'busybox', 'depends_on': ['three']}, }, }) with pytest.raises(ConfigurationError) as exc: config.load(config_details) assert "Service 'one' depends on service 'three'" in exc.exconly() def test_linked_service_is_undefined(self): with self.assertRaises(ConfigurationError): config.load( build_config_details({ 'version': '2', 'services': { 'web': {'image': 'busybox', 'links': ['db:db']}, }, }) ) def test_load_dockerfile_without_context(self): config_details = build_config_details({ 'version': '2', 'services': { 'one': {'build': {'dockerfile': 'Dockerfile.foo'}}, }, }) with pytest.raises(ConfigurationError) as exc: config.load(config_details) assert 'has neither an image nor a build context' in exc.exconly() class NetworkModeTest(unittest.TestCase): def test_network_mode_standard(self): config_data = config.load(build_config_details({ 'version': '2', 'services': { 'web': { 'image': 'busybox', 'command': "top", 'network_mode': 'bridge', }, }, })) assert config_data.services[0]['network_mode'] == 'bridge' def test_network_mode_standard_v1(self): config_data = config.load(build_config_details({ 'web': { 'image': 'busybox', 'command': "top", 'net': 'bridge', }, })) assert config_data.services[0]['network_mode'] == 'bridge' assert 'net' not in config_data.services[0] def test_network_mode_container(self): config_data = config.load(build_config_details({ 'version': '2', 'services': { 'web': { 'image': 'busybox', 'command': "top", 'network_mode': 'container:foo', }, }, })) assert config_data.services[0]['network_mode'] == 'container:foo' def test_network_mode_container_v1(self): config_data = config.load(build_config_details({ 'web': { 'image': 'busybox', 'command': "top", 'net': 'container:foo', }, })) assert config_data.services[0]['network_mode'] == 'container:foo' def test_network_mode_service(self): config_data = config.load(build_config_details({ 'version': '2', 'services': { 'web': { 'image': 'busybox', 'command': "top", 'network_mode': 'service:foo', }, 'foo': { 'image': 'busybox', 'command': "top", }, }, })) assert config_data.services[1]['network_mode'] == 'service:foo' def test_network_mode_service_v1(self): config_data = config.load(build_config_details({ 'web': { 'image': 'busybox', 'command': "top", 'net': 'container:foo', }, 'foo': { 'image': 'busybox', 'command': "top", }, })) assert config_data.services[1]['network_mode'] == 'service:foo' def test_network_mode_service_nonexistent(self): with pytest.raises(ConfigurationError) as excinfo: config.load(build_config_details({ 'version': '2', 'services': { 'web': { 'image': 'busybox', 'command': "top", 'network_mode': 'service:foo', }, }, })) assert "service 'foo' which is undefined" in excinfo.exconly() def test_network_mode_plus_networks_is_invalid(self): with pytest.raises(ConfigurationError) as excinfo: config.load(build_config_details({ 'version': '2', 'services': { 'web': { 'image': 'busybox', 'command': "top", 'network_mode': 'bridge', 'networks': ['front'], }, }, 'networks': { 'front': None, } })) assert "'network_mode' and 'networks' cannot be combined" in excinfo.exconly() class PortsTest(unittest.TestCase): INVALID_PORTS_TYPES = [ {"1": "8000"}, False, "8000", 8000, ] NON_UNIQUE_SINGLE_PORTS = [ ["8000", "8000"], ] INVALID_PORT_MAPPINGS = [ ["8000-8001:8000"], ] 
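    # "8000-8001:8000" maps a two-port host range onto a single container
    # port; the range lengths differ, so validation is expected to reject it
    # (see test_config_invalid_ports_format_validation below).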
VALID_SINGLE_PORTS = [ ["8000"], ["8000/tcp"], ["8000", "9000"], [8000], [8000, 9000], ] VALID_PORT_MAPPINGS = [ ["8000:8050"], ["49153-49154:3002-3003"], ] def test_config_invalid_ports_type_validation(self): for invalid_ports in self.INVALID_PORTS_TYPES: with pytest.raises(ConfigurationError) as exc: self.check_config({'ports': invalid_ports}) assert "contains an invalid type" in exc.value.msg def test_config_non_unique_ports_validation(self): for invalid_ports in self.NON_UNIQUE_SINGLE_PORTS: with pytest.raises(ConfigurationError) as exc: self.check_config({'ports': invalid_ports}) assert "non-unique" in exc.value.msg def test_config_invalid_ports_format_validation(self): for invalid_ports in self.INVALID_PORT_MAPPINGS: with pytest.raises(ConfigurationError) as exc: self.check_config({'ports': invalid_ports}) assert "Port ranges don't match in length" in exc.value.msg def test_config_valid_ports_format_validation(self): for valid_ports in self.VALID_SINGLE_PORTS + self.VALID_PORT_MAPPINGS: self.check_config({'ports': valid_ports}) def test_config_invalid_expose_type_validation(self): for invalid_expose in self.INVALID_PORTS_TYPES: with pytest.raises(ConfigurationError) as exc: self.check_config({'expose': invalid_expose}) assert "contains an invalid type" in exc.value.msg def test_config_non_unique_expose_validation(self): for invalid_expose in self.NON_UNIQUE_SINGLE_PORTS: with pytest.raises(ConfigurationError) as exc: self.check_config({'expose': invalid_expose}) assert "non-unique" in exc.value.msg def test_config_invalid_expose_format_validation(self): # Valid port mappings ARE NOT valid 'expose' entries for invalid_expose in self.INVALID_PORT_MAPPINGS + self.VALID_PORT_MAPPINGS: with pytest.raises(ConfigurationError) as exc: self.check_config({'expose': invalid_expose}) assert "should be of the format" in exc.value.msg def test_config_valid_expose_format_validation(self): # Valid single ports ARE valid 'expose' entries for valid_expose in self.VALID_SINGLE_PORTS: self.check_config({'expose': valid_expose}) def check_config(self, cfg): config.load( build_config_details( {'web': dict(image='busybox', **cfg)}, 'working_dir', 'filename.yml' ) ) class InterpolationTest(unittest.TestCase): @mock.patch.dict(os.environ) def test_config_file_with_environment_file(self): project_dir = 'tests/fixtures/default-env-file' service_dicts = config.load( config.find( project_dir, None, Environment.from_env_file(project_dir) ) ).services self.assertEqual(service_dicts[0], { 'name': 'web', 'image': 'alpine:latest', 'ports': ['5643', '9999'], 'command': 'true' }) @mock.patch.dict(os.environ) def test_config_file_with_environment_variable(self): project_dir = 'tests/fixtures/environment-interpolation' os.environ.update( IMAGE="busybox", HOST_PORT="80", LABEL_VALUE="myvalue", ) service_dicts = config.load( config.find( project_dir, None, Environment.from_env_file(project_dir) ) ).services self.assertEqual(service_dicts, [ { 'name': 'web', 'image': 'busybox', 'ports': ['80:8000'], 'labels': {'mylabel': 'myvalue'}, 'hostname': 'host-', 'command': '${ESCAPED}', } ]) @mock.patch.dict(os.environ) def test_unset_variable_produces_warning(self): os.environ.pop('FOO', None) os.environ.pop('BAR', None) config_details = build_config_details( { 'web': { 'image': '${FOO}', 'command': '${BAR}', 'container_name': '${BAR}', }, }, '.', None, ) with mock.patch('compose.config.environment.log') as log: config.load(config_details) self.assertEqual(2, log.warn.call_count) warnings = sorted(args[0][0] for args in 
log.warn.call_args_list) self.assertIn('BAR', warnings[0]) self.assertIn('FOO', warnings[1]) @mock.patch.dict(os.environ) def test_invalid_interpolation(self): with self.assertRaises(config.ConfigurationError) as cm: config.load( build_config_details( {'web': {'image': '${'}}, 'working_dir', 'filename.yml' ) ) self.assertIn('Invalid', cm.exception.msg) self.assertIn('for "image" option', cm.exception.msg) self.assertIn('in service "web"', cm.exception.msg) self.assertIn('"${"', cm.exception.msg) def test_empty_environment_key_allowed(self): service_dict = config.load( build_config_details( { 'web': { 'build': '.', 'environment': { 'POSTGRES_PASSWORD': '' }, }, }, '.', None, ) ).services[0] self.assertEquals(service_dict['environment']['POSTGRES_PASSWORD'], '') class VolumeConfigTest(unittest.TestCase): def test_no_binding(self): d = make_service_dict('foo', {'build': '.', 'volumes': ['/data']}, working_dir='.') self.assertEqual(d['volumes'], ['/data']) @mock.patch.dict(os.environ) def test_volume_binding_with_environment_variable(self): os.environ['VOLUME_PATH'] = '/host/path' d = config.load( build_config_details( {'foo': {'build': '.', 'volumes': ['${VOLUME_PATH}:/container/path']}}, '.', None, ) ).services[0] self.assertEqual(d['volumes'], [VolumeSpec.parse('/host/path:/container/path')]) @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix paths') @mock.patch.dict(os.environ) def test_volume_binding_with_home(self): os.environ['HOME'] = '/home/user' d = make_service_dict('foo', {'build': '.', 'volumes': ['~:/container/path']}, working_dir='.') self.assertEqual(d['volumes'], ['/home/user:/container/path']) def test_name_does_not_expand(self): d = make_service_dict('foo', {'build': '.', 'volumes': ['mydatavolume:/data']}, working_dir='.') self.assertEqual(d['volumes'], ['mydatavolume:/data']) def test_absolute_posix_path_does_not_expand(self): d = make_service_dict('foo', {'build': '.', 'volumes': ['/var/lib/data:/data']}, working_dir='.') self.assertEqual(d['volumes'], ['/var/lib/data:/data']) def test_absolute_windows_path_does_not_expand(self): d = make_service_dict('foo', {'build': '.', 'volumes': ['c:\\data:/data']}, working_dir='.') self.assertEqual(d['volumes'], ['c:\\data:/data']) @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix paths') def test_relative_path_does_expand_posix(self): d = make_service_dict( 'foo', {'build': '.', 'volumes': ['./data:/data']}, working_dir='/home/me/myproject') self.assertEqual(d['volumes'], ['/home/me/myproject/data:/data']) d = make_service_dict( 'foo', {'build': '.', 'volumes': ['.:/data']}, working_dir='/home/me/myproject') self.assertEqual(d['volumes'], ['/home/me/myproject:/data']) d = make_service_dict( 'foo', {'build': '.', 'volumes': ['../otherproject:/data']}, working_dir='/home/me/myproject') self.assertEqual(d['volumes'], ['/home/me/otherproject:/data']) @pytest.mark.skipif(not IS_WINDOWS_PLATFORM, reason='windows paths') def test_relative_path_does_expand_windows(self): d = make_service_dict( 'foo', {'build': '.', 'volumes': ['./data:/data']}, working_dir='c:\\Users\\me\\myproject') self.assertEqual(d['volumes'], ['c:\\Users\\me\\myproject\\data:/data']) d = make_service_dict( 'foo', {'build': '.', 'volumes': ['.:/data']}, working_dir='c:\\Users\\me\\myproject') self.assertEqual(d['volumes'], ['c:\\Users\\me\\myproject:/data']) d = make_service_dict( 'foo', {'build': '.', 'volumes': ['../otherproject:/data']}, working_dir='c:\\Users\\me\\myproject') self.assertEqual(d['volumes'], ['c:\\Users\\me\\otherproject:/data']) 
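    # When a volume_driver is set, volume names are passed to the driver
    # verbatim, so '~' must not be expanded to the user's home directory;
    # the next test pins down that behaviour.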
@mock.patch.dict(os.environ) def test_home_directory_with_driver_does_not_expand(self): os.environ['NAME'] = 'surprise!' d = make_service_dict('foo', { 'build': '.', 'volumes': ['~:/data'], 'volume_driver': 'foodriver', }, working_dir='.') self.assertEqual(d['volumes'], ['~:/data']) def test_volume_path_with_non_ascii_directory(self): volume = u'/Füü/data:/data' container_path = config.resolve_volume_path(".", volume) self.assertEqual(container_path, volume) class MergePathMappingTest(object): config_name = "" def test_empty(self): service_dict = config.merge_service_dicts({}, {}, DEFAULT_VERSION) assert self.config_name not in service_dict def test_no_override(self): service_dict = config.merge_service_dicts( {self.config_name: ['/foo:/code', '/data']}, {}, DEFAULT_VERSION) assert set(service_dict[self.config_name]) == set(['/foo:/code', '/data']) def test_no_base(self): service_dict = config.merge_service_dicts( {}, {self.config_name: ['/bar:/code']}, DEFAULT_VERSION) assert set(service_dict[self.config_name]) == set(['/bar:/code']) def test_override_explicit_path(self): service_dict = config.merge_service_dicts( {self.config_name: ['/foo:/code', '/data']}, {self.config_name: ['/bar:/code']}, DEFAULT_VERSION) assert set(service_dict[self.config_name]) == set(['/bar:/code', '/data']) def test_add_explicit_path(self): service_dict = config.merge_service_dicts( {self.config_name: ['/foo:/code', '/data']}, {self.config_name: ['/bar:/code', '/quux:/data']}, DEFAULT_VERSION) assert set(service_dict[self.config_name]) == set(['/bar:/code', '/quux:/data']) def test_remove_explicit_path(self): service_dict = config.merge_service_dicts( {self.config_name: ['/foo:/code', '/quux:/data']}, {self.config_name: ['/bar:/code', '/data']}, DEFAULT_VERSION) assert set(service_dict[self.config_name]) == set(['/bar:/code', '/data']) class MergeVolumesTest(unittest.TestCase, MergePathMappingTest): config_name = 'volumes' class MergeDevicesTest(unittest.TestCase, MergePathMappingTest): config_name = 'devices' class BuildOrImageMergeTest(unittest.TestCase): def test_merge_build_or_image_no_override(self): self.assertEqual( config.merge_service_dicts({'build': '.'}, {}, V1), {'build': '.'}, ) self.assertEqual( config.merge_service_dicts({'image': 'redis'}, {}, V1), {'image': 'redis'}, ) def test_merge_build_or_image_override_with_same(self): self.assertEqual( config.merge_service_dicts({'build': '.'}, {'build': './web'}, V1), {'build': './web'}, ) self.assertEqual( config.merge_service_dicts({'image': 'redis'}, {'image': 'postgres'}, V1), {'image': 'postgres'}, ) def test_merge_build_or_image_override_with_other(self): self.assertEqual( config.merge_service_dicts({'build': '.'}, {'image': 'redis'}, V1), {'image': 'redis'}, ) self.assertEqual( config.merge_service_dicts({'image': 'redis'}, {'build': '.'}, V1), {'build': '.'} ) class MergeListsTest(object): config_name = "" base_config = [] override_config = [] def merged_config(self): return set(self.base_config) | set(self.override_config) def test_empty(self): assert self.config_name not in config.merge_service_dicts({}, {}, DEFAULT_VERSION) def test_no_override(self): service_dict = config.merge_service_dicts( {self.config_name: self.base_config}, {}, DEFAULT_VERSION) assert set(service_dict[self.config_name]) == set(self.base_config) def test_no_base(self): service_dict = config.merge_service_dicts( {}, {self.config_name: self.base_config}, DEFAULT_VERSION) assert set(service_dict[self.config_name]) == set(self.base_config) def test_add_item(self): 
service_dict = config.merge_service_dicts( {self.config_name: self.base_config}, {self.config_name: self.override_config}, DEFAULT_VERSION) assert set(service_dict[self.config_name]) == set(self.merged_config()) class MergePortsTest(unittest.TestCase, MergeListsTest): config_name = 'ports' base_config = ['10:8000', '9000'] override_config = ['20:8000'] def test_duplicate_port_mappings(self): service_dict = config.merge_service_dicts( {self.config_name: self.base_config}, {self.config_name: self.base_config}, DEFAULT_VERSION ) assert set(service_dict[self.config_name]) == set(self.base_config) class MergeNetworksTest(unittest.TestCase, MergeListsTest): config_name = 'networks' base_config = ['frontend', 'backend'] override_config = ['monitoring'] class MergeStringsOrListsTest(unittest.TestCase): def test_no_override(self): service_dict = config.merge_service_dicts( {'dns': '8.8.8.8'}, {}, DEFAULT_VERSION) assert set(service_dict['dns']) == set(['8.8.8.8']) def test_no_base(self): service_dict = config.merge_service_dicts( {}, {'dns': '8.8.8.8'}, DEFAULT_VERSION) assert set(service_dict['dns']) == set(['8.8.8.8']) def test_add_string(self): service_dict = config.merge_service_dicts( {'dns': ['8.8.8.8']}, {'dns': '9.9.9.9'}, DEFAULT_VERSION) assert set(service_dict['dns']) == set(['8.8.8.8', '9.9.9.9']) def test_add_list(self): service_dict = config.merge_service_dicts( {'dns': '8.8.8.8'}, {'dns': ['9.9.9.9']}, DEFAULT_VERSION) assert set(service_dict['dns']) == set(['8.8.8.8', '9.9.9.9']) class MergeLabelsTest(unittest.TestCase): def test_empty(self): assert 'labels' not in config.merge_service_dicts({}, {}, DEFAULT_VERSION) def test_no_override(self): service_dict = config.merge_service_dicts( make_service_dict('foo', {'build': '.', 'labels': ['foo=1', 'bar']}, 'tests/'), make_service_dict('foo', {'build': '.'}, 'tests/'), DEFAULT_VERSION) assert service_dict['labels'] == {'foo': '1', 'bar': ''} def test_no_base(self): service_dict = config.merge_service_dicts( make_service_dict('foo', {'build': '.'}, 'tests/'), make_service_dict('foo', {'build': '.', 'labels': ['foo=2']}, 'tests/'), DEFAULT_VERSION) assert service_dict['labels'] == {'foo': '2'} def test_override_explicit_value(self): service_dict = config.merge_service_dicts( make_service_dict('foo', {'build': '.', 'labels': ['foo=1', 'bar']}, 'tests/'), make_service_dict('foo', {'build': '.', 'labels': ['foo=2']}, 'tests/'), DEFAULT_VERSION) assert service_dict['labels'] == {'foo': '2', 'bar': ''} def test_add_explicit_value(self): service_dict = config.merge_service_dicts( make_service_dict('foo', {'build': '.', 'labels': ['foo=1', 'bar']}, 'tests/'), make_service_dict('foo', {'build': '.', 'labels': ['bar=2']}, 'tests/'), DEFAULT_VERSION) assert service_dict['labels'] == {'foo': '1', 'bar': '2'} def test_remove_explicit_value(self): service_dict = config.merge_service_dicts( make_service_dict('foo', {'build': '.', 'labels': ['foo=1', 'bar=2']}, 'tests/'), make_service_dict('foo', {'build': '.', 'labels': ['bar']}, 'tests/'), DEFAULT_VERSION) assert service_dict['labels'] == {'foo': '1', 'bar': ''} class MemoryOptionsTest(unittest.TestCase): def test_validation_fails_with_just_memswap_limit(self): """ When you set a 'memswap_limit' it is invalid config unless you also set a mem_limit """ with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'foo': {'image': 'busybox', 'memswap_limit': 2000000}, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "foo.memswap_limit is invalid: when defining " \ 
"'memswap_limit' you must set 'mem_limit' as well" \ in excinfo.exconly() def test_validation_with_correct_memswap_values(self): service_dict = config.load( build_config_details( {'foo': {'image': 'busybox', 'mem_limit': 1000000, 'memswap_limit': 2000000}}, 'tests/fixtures/extends', 'common.yml' ) ).services self.assertEqual(service_dict[0]['memswap_limit'], 2000000) def test_memswap_can_be_a_string(self): service_dict = config.load( build_config_details( {'foo': {'image': 'busybox', 'mem_limit': "1G", 'memswap_limit': "512M"}}, 'tests/fixtures/extends', 'common.yml' ) ).services self.assertEqual(service_dict[0]['memswap_limit'], "512M") class EnvTest(unittest.TestCase): def test_parse_environment_as_list(self): environment = [ 'NORMAL=F1', 'CONTAINS_EQUALS=F=2', 'TRAILING_EQUALS=', ] self.assertEqual( config.parse_environment(environment), {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}, ) def test_parse_environment_as_dict(self): environment = { 'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': None, } self.assertEqual(config.parse_environment(environment), environment) def test_parse_environment_invalid(self): with self.assertRaises(ConfigurationError): config.parse_environment('a=b') def test_parse_environment_empty(self): self.assertEqual(config.parse_environment(None), {}) @mock.patch.dict(os.environ) def test_resolve_environment(self): os.environ['FILE_DEF'] = 'E1' os.environ['FILE_DEF_EMPTY'] = 'E2' os.environ['ENV_DEF'] = 'E3' service_dict = { 'build': '.', 'environment': { 'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None }, } self.assertEqual( resolve_environment( service_dict, Environment.from_env_file(None) ), {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': None}, ) def test_resolve_environment_from_env_file(self): self.assertEqual( resolve_environment({'env_file': ['tests/fixtures/env/one.env']}), {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'bar'}, ) def test_resolve_environment_with_multiple_env_files(self): service_dict = { 'env_file': [ 'tests/fixtures/env/one.env', 'tests/fixtures/env/two.env' ] } self.assertEqual( resolve_environment(service_dict), {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'}, ) def test_resolve_environment_nonexistent_file(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details( {'foo': {'image': 'example', 'env_file': 'nonexistent.env'}}, working_dir='tests/fixtures/env')) assert 'Couldn\'t find env file' in exc.exconly() assert 'nonexistent.env' in exc.exconly() @mock.patch.dict(os.environ) def test_resolve_environment_from_env_file_with_empty_values(self): os.environ['FILE_DEF'] = 'E1' os.environ['FILE_DEF_EMPTY'] = 'E2' os.environ['ENV_DEF'] = 'E3' self.assertEqual( resolve_environment( {'env_file': ['tests/fixtures/env/resolve.env']}, Environment.from_env_file(None) ), { 'FILE_DEF': u'bär', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': None }, ) @mock.patch.dict(os.environ) def test_resolve_build_args(self): os.environ['env_arg'] = 'value2' build = { 'context': '.', 'args': { 'arg1': 'value1', 'empty_arg': '', 'env_arg': None, 'no_env': None } } self.assertEqual( resolve_build_args(build, Environment.from_env_file(build['context'])), {'arg1': 'value1', 'empty_arg': '', 'env_arg': 'value2', 'no_env': None}, ) @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash') @mock.patch.dict(os.environ) def test_resolve_path(self): os.environ['HOSTENV'] = '/tmp' os.environ['CONTAINERENV'] = '/host/tmp' service_dict = 
config.load( build_config_details( {'foo': {'build': '.', 'volumes': ['$HOSTENV:$CONTAINERENV']}}, "tests/fixtures/env", ) ).services[0] self.assertEqual( set(service_dict['volumes']), set([VolumeSpec.parse('/tmp:/host/tmp')])) service_dict = config.load( build_config_details( {'foo': {'build': '.', 'volumes': ['/opt${HOSTENV}:/opt${CONTAINERENV}']}}, "tests/fixtures/env", ) ).services[0] self.assertEqual( set(service_dict['volumes']), set([VolumeSpec.parse('/opt/tmp:/opt/host/tmp')])) def load_from_filename(filename): return config.load( config.find('.', [filename], Environment.from_env_file('.')) ).services class ExtendsTest(unittest.TestCase): def test_extends(self): service_dicts = load_from_filename('tests/fixtures/extends/docker-compose.yml') self.assertEqual(service_sort(service_dicts), service_sort([ { 'name': 'mydb', 'image': 'busybox', 'command': 'top', }, { 'name': 'myweb', 'image': 'busybox', 'command': 'top', 'network_mode': 'bridge', 'links': ['mydb:db'], 'environment': { "FOO": "1", "BAR": "2", "BAZ": "2", }, } ])) def test_merging_env_labels_ulimits(self): service_dicts = load_from_filename('tests/fixtures/extends/common-env-labels-ulimits.yml') self.assertEqual(service_sort(service_dicts), service_sort([ { 'name': 'web', 'image': 'busybox', 'command': '/bin/true', 'network_mode': 'host', 'environment': { "FOO": "2", "BAR": "1", "BAZ": "3", }, 'labels': {'label': 'one'}, 'ulimits': {'nproc': 65535, 'memlock': {'soft': 1024, 'hard': 2048}} } ])) def test_nested(self): service_dicts = load_from_filename('tests/fixtures/extends/nested.yml') self.assertEqual(service_dicts, [ { 'name': 'myweb', 'image': 'busybox', 'command': '/bin/true', 'network_mode': 'host', 'environment': { "FOO": "2", "BAR": "2", }, }, ]) def test_self_referencing_file(self): """ We specify a 'file' key that is the filename we're already in. 
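        Extending a service defined in the same file should not be reported
        as a circular reference.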
""" service_dicts = load_from_filename('tests/fixtures/extends/specify-file-as-self.yml') self.assertEqual(service_sort(service_dicts), service_sort([ { 'environment': { 'YEP': '1', 'BAR': '1', 'BAZ': '3' }, 'image': 'busybox', 'name': 'myweb' }, { 'environment': {'YEP': '1'}, 'image': 'busybox', 'name': 'otherweb' }, { 'environment': {'YEP': '1', 'BAZ': '3'}, 'image': 'busybox', 'name': 'web' } ])) def test_circular(self): with pytest.raises(config.CircularReference) as exc: load_from_filename('tests/fixtures/extends/circle-1.yml') path = [ (os.path.basename(filename), service_name) for (filename, service_name) in exc.value.trail ] expected = [ ('circle-1.yml', 'web'), ('circle-2.yml', 'other'), ('circle-1.yml', 'web'), ] self.assertEqual(path, expected) def test_extends_validation_empty_dictionary(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'web': {'image': 'busybox', 'extends': {}}, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert 'service' in excinfo.exconly() def test_extends_validation_missing_service_key(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'web': {'image': 'busybox', 'extends': {'file': 'common.yml'}}, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "'service' is a required property" in excinfo.exconly() def test_extends_validation_invalid_key(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'web': { 'image': 'busybox', 'extends': { 'file': 'common.yml', 'service': 'web', 'rogue_key': 'is not allowed' } }, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "web.extends contains unsupported option: 'rogue_key'" \ in excinfo.exconly() def test_extends_validation_sub_property_key(self): with pytest.raises(ConfigurationError) as excinfo: config.load( build_config_details( { 'web': { 'image': 'busybox', 'extends': { 'file': 1, 'service': 'web', } }, }, 'tests/fixtures/extends', 'filename.yml' ) ) assert "web.extends.file contains 1, which is an invalid type, it should be a string" \ in excinfo.exconly() def test_extends_validation_no_file_key_no_filename_set(self): dictionary = {'extends': {'service': 'web'}} with pytest.raises(ConfigurationError) as excinfo: make_service_dict('myweb', dictionary, working_dir='tests/fixtures/extends') assert 'file' in excinfo.exconly() def test_extends_validation_valid_config(self): service = config.load( build_config_details( { 'web': {'image': 'busybox', 'extends': {'service': 'web', 'file': 'common.yml'}}, }, 'tests/fixtures/extends', 'common.yml' ) ).services self.assertEquals(len(service), 1) self.assertIsInstance(service[0], dict) self.assertEquals(service[0]['command'], "/bin/true") def test_extended_service_with_invalid_config(self): with pytest.raises(ConfigurationError) as exc: load_from_filename('tests/fixtures/extends/service-with-invalid-schema.yml') assert ( "myweb has neither an image nor a build context specified" in exc.exconly() ) def test_extended_service_with_valid_config(self): service = load_from_filename('tests/fixtures/extends/service-with-valid-composite-extends.yml') self.assertEquals(service[0]['command'], "top") def test_extends_file_defaults_to_self(self): """ Test not specifying a file in our extends options that the config is valid and correctly extends from itself. 
""" service_dicts = load_from_filename('tests/fixtures/extends/no-file-specified.yml') self.assertEqual(service_sort(service_dicts), service_sort([ { 'name': 'myweb', 'image': 'busybox', 'environment': { "BAR": "1", "BAZ": "3", } }, { 'name': 'web', 'image': 'busybox', 'environment': { "BAZ": "3", } } ])) def test_invalid_links_in_extended_service(self): with pytest.raises(ConfigurationError) as excinfo: load_from_filename('tests/fixtures/extends/invalid-links.yml') assert "services with 'links' cannot be extended" in excinfo.exconly() def test_invalid_volumes_from_in_extended_service(self): with pytest.raises(ConfigurationError) as excinfo: load_from_filename('tests/fixtures/extends/invalid-volumes.yml') assert "services with 'volumes_from' cannot be extended" in excinfo.exconly() def test_invalid_net_in_extended_service(self): with pytest.raises(ConfigurationError) as excinfo: load_from_filename('tests/fixtures/extends/invalid-net-v2.yml') assert 'network_mode: service' in excinfo.exconly() assert 'cannot be extended' in excinfo.exconly() with pytest.raises(ConfigurationError) as excinfo: load_from_filename('tests/fixtures/extends/invalid-net.yml') assert 'net: container' in excinfo.exconly() assert 'cannot be extended' in excinfo.exconly() @mock.patch.dict(os.environ) def test_load_config_runs_interpolation_in_extended_service(self): os.environ.update(HOSTNAME_VALUE="penguin") expected_interpolated_value = "host-penguin" service_dicts = load_from_filename( 'tests/fixtures/extends/valid-interpolation.yml') for service in service_dicts: assert service['hostname'] == expected_interpolated_value @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash') def test_volume_path(self): dicts = load_from_filename('tests/fixtures/volume-path/docker-compose.yml') paths = [ VolumeSpec( os.path.abspath('tests/fixtures/volume-path/common/foo'), '/foo', 'rw'), VolumeSpec( os.path.abspath('tests/fixtures/volume-path/bar'), '/bar', 'rw') ] self.assertEqual(set(dicts[0]['volumes']), set(paths)) def test_parent_build_path_dne(self): child = load_from_filename('tests/fixtures/extends/nonexistent-path-child.yml') self.assertEqual(child, [ { 'name': 'dnechild', 'image': 'busybox', 'command': '/bin/true', 'environment': { "FOO": "1", "BAR": "2", }, }, ]) def test_load_throws_error_when_base_service_does_not_exist(self): with pytest.raises(ConfigurationError) as excinfo: load_from_filename('tests/fixtures/extends/nonexistent-service.yml') assert "Cannot extend service 'foo'" in excinfo.exconly() assert "Service not found" in excinfo.exconly() def test_partial_service_config_in_extends_is_still_valid(self): dicts = load_from_filename('tests/fixtures/extends/valid-common-config.yml') self.assertEqual(dicts[0]['environment'], {'FOO': '1'}) def test_extended_service_with_verbose_and_shorthand_way(self): services = load_from_filename('tests/fixtures/extends/verbose-and-shorthand.yml') self.assertEqual(service_sort(services), service_sort([ { 'name': 'base', 'image': 'busybox', 'environment': {'BAR': '1'}, }, { 'name': 'verbose', 'image': 'busybox', 'environment': {'BAR': '1', 'FOO': '1'}, }, { 'name': 'shorthand', 'image': 'busybox', 'environment': {'BAR': '1', 'FOO': '2'}, }, ])) @mock.patch.dict(os.environ) def test_extends_with_environment_and_env_files(self): tmpdir = py.test.ensuretemp('test_extends_with_environment') self.addCleanup(tmpdir.remove) commondir = tmpdir.mkdir('common') commondir.join('base.yml').write(""" app: image: 'example/app' env_file: - 'envs' environment: - SECRET - TEST_ONE=common - 
TEST_TWO=common """) tmpdir.join('docker-compose.yml').write(""" ext: extends: file: common/base.yml service: app env_file: - 'envs' environment: - THING - TEST_ONE=top """) commondir.join('envs').write(""" COMMON_ENV_FILE TEST_ONE=common-env-file TEST_TWO=common-env-file TEST_THREE=common-env-file TEST_FOUR=common-env-file """) tmpdir.join('envs').write(""" TOP_ENV_FILE TEST_ONE=top-env-file TEST_TWO=top-env-file TEST_THREE=top-env-file """) expected = [ { 'name': 'ext', 'image': 'example/app', 'environment': { 'SECRET': 'secret', 'TOP_ENV_FILE': 'secret', 'COMMON_ENV_FILE': 'secret', 'THING': 'thing', 'TEST_ONE': 'top', 'TEST_TWO': 'common', 'TEST_THREE': 'top-env-file', 'TEST_FOUR': 'common-env-file', }, }, ] os.environ['SECRET'] = 'secret' os.environ['THING'] = 'thing' os.environ['COMMON_ENV_FILE'] = 'secret' os.environ['TOP_ENV_FILE'] = 'secret' config = load_from_filename(str(tmpdir.join('docker-compose.yml'))) assert config == expected def test_extends_with_mixed_versions_is_error(self): tmpdir = py.test.ensuretemp('test_extends_with_mixed_version') self.addCleanup(tmpdir.remove) tmpdir.join('docker-compose.yml').write(""" version: "2" services: web: extends: file: base.yml service: base image: busybox """) tmpdir.join('base.yml').write(""" base: volumes: ['/foo'] ports: ['3000:3000'] """) with pytest.raises(ConfigurationError) as exc: load_from_filename(str(tmpdir.join('docker-compose.yml'))) assert 'Version mismatch' in exc.exconly() def test_extends_with_defined_version_passes(self): tmpdir = py.test.ensuretemp('test_extends_with_defined_version') self.addCleanup(tmpdir.remove) tmpdir.join('docker-compose.yml').write(""" version: "2" services: web: extends: file: base.yml service: base image: busybox """) tmpdir.join('base.yml').write(""" version: "2" services: base: volumes: ['/foo'] ports: ['3000:3000'] command: top """) service = load_from_filename(str(tmpdir.join('docker-compose.yml'))) self.assertEquals(service[0]['command'], "top") def test_extends_with_depends_on(self): tmpdir = py.test.ensuretemp('test_extends_with_defined_version') self.addCleanup(tmpdir.remove) tmpdir.join('docker-compose.yml').write(""" version: "2" services: base: image: example web: extends: base image: busybox depends_on: ['other'] other: image: example """) services = load_from_filename(str(tmpdir.join('docker-compose.yml'))) assert service_sort(services)[2]['depends_on'] == ['other'] @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash') class ExpandPathTest(unittest.TestCase): working_dir = '/home/user/somedir' def test_expand_path_normal(self): result = config.expand_path(self.working_dir, 'myfile') self.assertEqual(result, self.working_dir + '/' + 'myfile') def test_expand_path_absolute(self): abs_path = '/home/user/otherdir/somefile' result = config.expand_path(self.working_dir, abs_path) self.assertEqual(result, abs_path) def test_expand_path_with_tilde(self): test_path = '~/otherdir/somefile' with mock.patch.dict(os.environ): os.environ['HOME'] = user_path = '/home/user/' result = config.expand_path(self.working_dir, test_path) self.assertEqual(result, user_path + 'otherdir/somefile') class VolumePathTest(unittest.TestCase): def test_split_path_mapping_with_windows_path(self): host_path = "c:\\Users\\msamblanet\\Documents\\anvil\\connect\\config" windows_volume_path = host_path + ":/opt/connect/config:ro" expected_mapping = ("/opt/connect/config:ro", host_path) mapping = config.split_path_mapping(windows_volume_path) assert mapping == expected_mapping def 
test_split_path_mapping_with_windows_path_in_container(self): host_path = 'c:\\Users\\remilia\\data' container_path = 'c:\\scarletdevil\\data' expected_mapping = (container_path, host_path) mapping = config.split_path_mapping('{0}:{1}'.format(host_path, container_path)) assert mapping == expected_mapping def test_split_path_mapping_with_root_mount(self): host_path = '/' container_path = '/var/hostroot' expected_mapping = (container_path, host_path) mapping = config.split_path_mapping('{0}:{1}'.format(host_path, container_path)) assert mapping == expected_mapping @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash') class BuildPathTest(unittest.TestCase): def setUp(self): self.abs_context_path = os.path.join(os.getcwd(), 'tests/fixtures/build-ctx') def test_nonexistent_path(self): with self.assertRaises(ConfigurationError): config.load( build_config_details( { 'foo': {'build': 'nonexistent.path'}, }, 'working_dir', 'filename.yml' ) ) def test_relative_path(self): relative_build_path = '../build-ctx/' service_dict = make_service_dict( 'relpath', {'build': relative_build_path}, working_dir='tests/fixtures/build-path' ) self.assertEquals(service_dict['build'], self.abs_context_path) def test_absolute_path(self): service_dict = make_service_dict( 'abspath', {'build': self.abs_context_path}, working_dir='tests/fixtures/build-path' ) self.assertEquals(service_dict['build'], self.abs_context_path) def test_from_file(self): service_dict = load_from_filename('tests/fixtures/build-path/docker-compose.yml') self.assertEquals(service_dict, [{'name': 'foo', 'build': {'context': self.abs_context_path}}]) def test_valid_url_in_build_path(self): valid_urls = [ 'git://github.com/docker/docker', 'git@github.com:docker/docker.git', 'git@bitbucket.org:atlassianlabs/atlassian-docker.git', 'https://github.com/docker/docker.git', 'http://github.com/docker/docker.git', 'github.com/docker/docker.git', ] for valid_url in valid_urls: service_dict = config.load(build_config_details({ 'validurl': {'build': valid_url}, }, '.', None)).services assert service_dict[0]['build'] == {'context': valid_url} def test_invalid_url_in_build_path(self): invalid_urls = [ 'example.com/bogus', 'ftp://example.com/', '/path/does/not/exist', ] for invalid_url in invalid_urls: with pytest.raises(ConfigurationError) as exc: config.load(build_config_details({ 'invalidurl': {'build': invalid_url}, }, '.', None)) assert 'build path' in exc.exconly() class GetDefaultConfigFilesTestCase(unittest.TestCase): files = [ 'docker-compose.yml', 'docker-compose.yaml', ] def test_get_config_path_default_file_in_basedir(self): for index, filename in enumerate(self.files): self.assertEqual( filename, get_config_filename_for_files(self.files[index:])) with self.assertRaises(config.ComposeFileNotFound): get_config_filename_for_files([]) def test_get_config_path_default_file_in_parent_dir(self): """Test with files placed in the subdir""" def get_config_in_subdir(files): return get_config_filename_for_files(files, subdir=True) for index, filename in enumerate(self.files): self.assertEqual(filename, get_config_in_subdir(self.files[index:])) with self.assertRaises(config.ComposeFileNotFound): get_config_in_subdir([]) def get_config_filename_for_files(filenames, subdir=None): def make_files(dirname, filenames): for fname in filenames: with open(os.path.join(dirname, fname), 'w') as f: f.write('') project_dir = tempfile.mkdtemp() try: make_files(project_dir, filenames) if subdir: base_dir = tempfile.mkdtemp(dir=project_dir) else: base_dir = project_dir 
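# The extends/env_file tests above pin down an effective precedence. This is a
# reading of the expected values in test_extends_with_environment_and_env_files,
# not an authoritative spec, and the helper names below are hypothetical:
#
#   for key in requested_env:
#       value = (own_environment.get(key)       # 'TEST_ONE': 'top'
#                or own_env_files.get(key)      # 'TEST_THREE': 'top-env-file'
#                or base_environment.get(key)   # 'TEST_TWO': 'common'
#                or base_env_files.get(key)     # 'TEST_FOUR': 'common-env-file'
#                or os.environ.get(key))        # bare names: SECRET, THING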
compose-1.8.0/tests/unit/config/interpolation_test.py000066400000000000000000000033701274620702700230650ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import os

import mock
import pytest

from compose.config.environment import Environment
from compose.config.interpolation import interpolate_environment_variables


@pytest.yield_fixture
def mock_env():
    with mock.patch.dict(os.environ):
        os.environ['USER'] = 'jenny'
        os.environ['FOO'] = 'bar'
        yield


def test_interpolate_environment_variables_in_services(mock_env):
    services = {
        'servicea': {
            'image': 'example:${USER}',
            'volumes': ['$FOO:/target'],
            'logging': {
                'driver': '${FOO}',
                'options': {
                    'user': '$USER',
                }
            }
        }
    }
    expected = {
        'servicea': {
            'image': 'example:jenny',
            'volumes': ['bar:/target'],
            'logging': {
                'driver': 'bar',
                'options': {
                    'user': 'jenny',
                }
            }
        }
    }
    assert interpolate_environment_variables(
        services, 'service', Environment.from_env_file(None)
    ) == expected


def test_interpolate_environment_variables_in_volumes(mock_env):
    volumes = {
        'data': {
            'driver': '$FOO',
            'driver_opts': {
                'max': 2,
                'user': '${USER}'
            }
        },
        'other': None,
    }
    expected = {
        'data': {
            'driver': 'bar',
            'driver_opts': {
                'max': 2,
                'user': 'jenny'
            }
        },
        'other': {},
    }
    assert interpolate_environment_variables(
        volumes, 'volume', Environment.from_env_file(None)
    ) == expected
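# A minimal usage sketch of the API exercised above. Environment.from_env_file(None)
# appears to fall back to the process environment (which is why the mock_env
# fixture works); the values here are illustrative only:
#
#   env = Environment.from_env_file(None)
#   interpolate_environment_variables(
#       {'web': {'image': 'repo:${TAG}'}}, 'service', env)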
compose-1.8.0/tests/unit/config/sort_services_test.py000066400000000000000000000152751274620702700230770ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import pytest

from compose.config.errors import DependencyError
from compose.config.sort_services import sort_service_dicts
from compose.config.types import VolumeFromSpec


class TestSortService(object):
    def test_sort_service_dicts_1(self):
        services = [
            {
                'links': ['redis'],
                'name': 'web'
            },
            {
                'name': 'grunt'
            },
            {
                'name': 'redis'
            }
        ]

        sorted_services = sort_service_dicts(services)
        assert len(sorted_services) == 3
        assert sorted_services[0]['name'] == 'grunt'
        assert sorted_services[1]['name'] == 'redis'
        assert sorted_services[2]['name'] == 'web'

    def test_sort_service_dicts_2(self):
        services = [
            {
                'links': ['redis', 'postgres'],
                'name': 'web'
            },
            {
                'name': 'postgres',
                'links': ['redis']
            },
            {
                'name': 'redis'
            }
        ]

        sorted_services = sort_service_dicts(services)
        assert len(sorted_services) == 3
        assert sorted_services[0]['name'] == 'redis'
        assert sorted_services[1]['name'] == 'postgres'
        assert sorted_services[2]['name'] == 'web'

    def test_sort_service_dicts_3(self):
        services = [
            {
                'name': 'child'
            },
            {
                'name': 'parent',
                'links': ['child']
            },
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
        ]

        sorted_services = sort_service_dicts(services)
        assert len(sorted_services) == 3
        assert sorted_services[0]['name'] == 'child'
        assert sorted_services[1]['name'] == 'parent'
        assert sorted_services[2]['name'] == 'grandparent'

    def test_sort_service_dicts_4(self):
        services = [
            {
                'name': 'child'
            },
            {
                'name': 'parent',
                'volumes_from': [VolumeFromSpec('child', 'rw', 'service')]
            },
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
        ]

        sorted_services = sort_service_dicts(services)
        assert len(sorted_services) == 3
        assert sorted_services[0]['name'] == 'child'
        assert sorted_services[1]['name'] == 'parent'
        assert sorted_services[2]['name'] == 'grandparent'

    def test_sort_service_dicts_5(self):
        services = [
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
            {
                'name': 'parent',
                'network_mode': 'service:child'
            },
            {
                'name': 'child'
            }
        ]

        sorted_services = sort_service_dicts(services)
        assert len(sorted_services) == 3
        assert sorted_services[0]['name'] == 'child'
        assert sorted_services[1]['name'] == 'parent'
        assert sorted_services[2]['name'] == 'grandparent'

    def test_sort_service_dicts_6(self):
        services = [
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
            {
                'name': 'parent',
                'volumes_from': [VolumeFromSpec('child', 'ro', 'service')]
            },
            {
                'name': 'child'
            }
        ]

        sorted_services = sort_service_dicts(services)
        assert len(sorted_services) == 3
        assert sorted_services[0]['name'] == 'child'
        assert sorted_services[1]['name'] == 'parent'
        assert sorted_services[2]['name'] == 'grandparent'

    def test_sort_service_dicts_7(self):
        services = [
            {
                'network_mode': 'service:three',
                'name': 'four'
            },
            {
                'links': ['two'],
                'name': 'three'
            },
            {
                'name': 'two',
                'volumes_from': [VolumeFromSpec('one', 'rw', 'service')]
            },
            {
                'name': 'one'
            }
        ]

        sorted_services = sort_service_dicts(services)
        assert len(sorted_services) == 4
        assert sorted_services[0]['name'] == 'one'
        assert sorted_services[1]['name'] == 'two'
        assert sorted_services[2]['name'] == 'three'
        assert sorted_services[3]['name'] == 'four'

    def test_sort_service_dicts_circular_imports(self):
        services = [
            {
                'links': ['redis'],
                'name': 'web'
            },
            {
                'name': 'redis',
                'links': ['web']
            },
        ]

        with pytest.raises(DependencyError) as exc:
            sort_service_dicts(services)
        assert 'redis' in exc.exconly()
        assert 'web' in exc.exconly()

    def test_sort_service_dicts_circular_imports_2(self):
        services = [
            {
                'links': ['postgres', 'redis'],
                'name': 'web'
            },
            {
                'name': 'redis',
                'links': ['web']
            },
            {
                'name': 'postgres'
            }
        ]

        with pytest.raises(DependencyError) as exc:
            sort_service_dicts(services)
        assert 'redis' in exc.exconly()
        assert 'web' in exc.exconly()

    def test_sort_service_dicts_circular_imports_3(self):
        services = [
            {
                'links': ['b'],
                'name': 'a'
            },
            {
                'name': 'b',
                'links': ['c']
            },
            {
                'name': 'c',
                'links': ['a']
            }
        ]

        with pytest.raises(DependencyError) as exc:
            sort_service_dicts(services)
        assert 'a' in exc.exconly()
        assert 'b' in exc.exconly()

    def test_sort_service_dicts_self_imports(self):
        services = [
            {
                'links': ['web'],
                'name': 'web'
            },
        ]

        with pytest.raises(DependencyError) as exc:
            sort_service_dicts(services)
        assert 'web' in exc.exconly()

    def test_sort_service_dicts_depends_on_self(self):
        services = [
            {
                'depends_on': ['web'],
                'name': 'web'
            },
        ]

        with pytest.raises(DependencyError) as exc:
            sort_service_dicts(services)
        assert 'A service can not depend on itself: web' in exc.exconly()
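# Taken together, the cases above show sort_service_dicts treating `links`,
# `volumes_from` (service type), `network_mode: service:<name>`, and
# `depends_on` as dependency edges: dependencies sort before dependents, and
# any cycle, including a self-reference, raises DependencyError naming the
# services involved.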
compose-1.8.0/tests/unit/config/types_test.py000066400000000000000000000104221274620702700213360ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import pytest

from compose.config.config import V1
from compose.config.config import V2_0
from compose.config.errors import ConfigurationError
from compose.config.types import parse_extra_hosts
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import IS_WINDOWS_PLATFORM


def test_parse_extra_hosts_list():
    expected = {'www.example.com': '192.168.0.17'}
    assert parse_extra_hosts(["www.example.com:192.168.0.17"]) == expected

    expected = {'www.example.com': '192.168.0.17'}
    assert parse_extra_hosts(["www.example.com: 192.168.0.17"]) == expected

    assert parse_extra_hosts([
        "www.example.com: 192.168.0.17",
        "static.example.com:192.168.0.19",
        "api.example.com: 192.168.0.18",
        "v6.example.com: ::1"
    ]) == {
        'www.example.com': '192.168.0.17',
        'static.example.com': '192.168.0.19',
        'api.example.com': '192.168.0.18',
        'v6.example.com': '::1'
    }


def test_parse_extra_hosts_dict():
    assert parse_extra_hosts({
        'www.example.com': '192.168.0.17',
        'api.example.com': '192.168.0.18'
    }) == {
        'www.example.com': '192.168.0.17',
        'api.example.com': '192.168.0.18'
    }


class TestVolumeSpec(object):

    def test_parse_volume_spec_only_one_path(self):
        spec = VolumeSpec.parse('/the/volume')
        assert spec == (None, '/the/volume', 'rw')

    def test_parse_volume_spec_internal_and_external(self):
        spec = VolumeSpec.parse('external:interval')
        assert spec == ('external', 'interval', 'rw')

    def test_parse_volume_spec_with_mode(self):
        spec = VolumeSpec.parse('external:interval:ro')
        assert spec == ('external', 'interval', 'ro')

        spec = VolumeSpec.parse('external:interval:z')
        assert spec == ('external', 'interval', 'z')

    def test_parse_volume_spec_too_many_parts(self):
        with pytest.raises(ConfigurationError) as exc:
            VolumeSpec.parse('one:two:three:four')
        assert 'has incorrect format' in exc.exconly()

    @pytest.mark.xfail((not IS_WINDOWS_PLATFORM), reason='does not have a drive')
    def test_parse_volume_windows_absolute_path(self):
        windows_path = "c:\\Users\\me\\Documents\\shiny\\config:\\opt\\shiny\\config:ro"
        assert VolumeSpec.parse(windows_path) == (
            "/c/Users/me/Documents/shiny/config",
            "/opt/shiny/config",
            "ro"
        )


class TestVolumesFromSpec(object):

    services = ['servicea', 'serviceb']

    def test_parse_v1_from_service(self):
        volume_from = VolumeFromSpec.parse('servicea', self.services, V1)
        assert volume_from == VolumeFromSpec('servicea', 'rw', 'service')

    def test_parse_v1_from_container(self):
        volume_from = VolumeFromSpec.parse('foo:ro', self.services, V1)
        assert volume_from == VolumeFromSpec('foo', 'ro', 'container')

    def test_parse_v1_invalid(self):
        with pytest.raises(ConfigurationError):
            VolumeFromSpec.parse('unknown:format:ro', self.services, V1)

    def test_parse_v2_from_service(self):
        volume_from = VolumeFromSpec.parse('servicea', self.services, V2_0)
        assert volume_from == VolumeFromSpec('servicea', 'rw', 'service')

    def test_parse_v2_from_service_with_mode(self):
        volume_from = VolumeFromSpec.parse('servicea:ro', self.services, V2_0)
        assert volume_from == VolumeFromSpec('servicea', 'ro', 'service')

    def test_parse_v2_from_container(self):
        volume_from = VolumeFromSpec.parse('container:foo', self.services, V2_0)
        assert volume_from == VolumeFromSpec('foo', 'rw', 'container')

    def test_parse_v2_from_container_with_mode(self):
        volume_from = VolumeFromSpec.parse('container:foo:ro', self.services, V2_0)
        assert volume_from == VolumeFromSpec('foo', 'ro', 'container')

    def test_parse_v2_invalid_type(self):
        with pytest.raises(ConfigurationError) as exc:
            VolumeFromSpec.parse('bogus:foo:ro', self.services, V2_0)
        assert "Unknown volumes_from type 'bogus'" in exc.exconly()

    def test_parse_v2_invalid(self):
        with pytest.raises(ConfigurationError):
            VolumeFromSpec.parse('unknown:format:ro', self.services, V2_0)
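# A short reference for the parsing behaviour asserted above: VolumeSpec.parse
# splits on ':' into (external, internal, mode), with mode defaulting to 'rw'
# and a missing external part yielding None, e.g.
#
#   VolumeSpec.parse('/host:/container:ro')  # ('/host', '/container', 'ro')
#   VolumeSpec.parse('/container')           # (None, '/container', 'rw')
#
# On Windows, drive-letter host paths are additionally rewritten to /c/... form.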
compose-1.8.0/tests/unit/container_test.py000066400000000000000000000137541274620702700207160ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import docker

from .. import mock
from .. import unittest
from compose.container import Container
from compose.container import get_container_name


class ContainerTest(unittest.TestCase):

    def setUp(self):
        self.container_id = "abcabcabcbabc12345"
        self.container_dict = {
            "Id": self.container_id,
            "Image": "busybox:latest",
            "Command": "top",
            "Created": 1387384730,
            "Status": "Up 8 seconds",
            "Ports": None,
            "SizeRw": 0,
            "SizeRootFs": 0,
            "Names": ["/composetest_db_1", "/composetest_web_1/db"],
            "NetworkSettings": {
                "Ports": {},
            },
            "Config": {
                "Labels": {
                    "com.docker.compose.project": "composetest",
                    "com.docker.compose.service": "web",
                    "com.docker.compose.container-number": 7,
                },
            }
        }

    def test_from_ps(self):
        container = Container.from_ps(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(
            container.dictionary,
            {
                "Id": self.container_id,
                "Image": "busybox:latest",
                "Name": "/composetest_db_1",
            })

    def test_from_ps_prefixed(self):
        self.container_dict['Names'] = [
            '/swarm-host-1' + n for n in self.container_dict['Names']
        ]

        container = Container.from_ps(
            None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.dictionary, {
            "Id": self.container_id,
            "Image": "busybox:latest",
            "Name": "/composetest_db_1",
        })

    def test_environment(self):
        container = Container(None, {
            'Id': 'abc',
            'Config': {
                'Env': [
                    'FOO=BAR',
                    'BAZ=DOGE',
                ]
            }
        }, has_been_inspected=True)
        self.assertEqual(container.environment, {
            'FOO': 'BAR',
            'BAZ': 'DOGE',
        })

    def test_number(self):
        container = Container(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.number, 7)

    def test_name(self):
        container = Container.from_ps(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.name, "composetest_db_1")

    def test_name_without_project(self):
        self.container_dict['Name'] = "/composetest_web_7"
        container = Container(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.name_without_project, "web_7")

    def test_name_without_project_custom_container_name(self):
        self.container_dict['Name'] = "/custom_name_of_container"
        container = Container(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.name_without_project, "custom_name_of_container")

    def test_inspect_if_not_inspected(self):
        mock_client = mock.create_autospec(docker.Client)
        container = Container(mock_client, dict(Id="the_id"))

        container.inspect_if_not_inspected()
        mock_client.inspect_container.assert_called_once_with("the_id")
        self.assertEqual(container.dictionary, mock_client.inspect_container.return_value)
        self.assertTrue(container.has_been_inspected)

        container.inspect_if_not_inspected()
        self.assertEqual(mock_client.inspect_container.call_count, 1)

    def test_human_readable_ports_none(self):
        container = Container(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.human_readable_ports, '')

    def test_human_readable_ports_public_and_private(self):
        self.container_dict['NetworkSettings']['Ports'].update({
            "45454/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49197"}],
            "45453/tcp": [],
        })
        container = Container(None, self.container_dict, has_been_inspected=True)

        expected = "45453/tcp, 0.0.0.0:49197->45454/tcp"
        self.assertEqual(container.human_readable_ports, expected)

    def test_get_local_port(self):
        self.container_dict['NetworkSettings']['Ports'].update({
            "45454/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49197"}],
        })
        container = Container(None, self.container_dict, has_been_inspected=True)

        self.assertEqual(
            container.get_local_port(45454, protocol='tcp'),
            '0.0.0.0:49197')

    def test_get(self):
        container = Container(None, {
            "Status": "Up 8 seconds",
            "HostConfig": {
                "VolumesFrom": ["volume_id"]
            },
        }, has_been_inspected=True)

        self.assertEqual(container.get('Status'), "Up 8 seconds")
        self.assertEqual(container.get('HostConfig.VolumesFrom'), ["volume_id"])
        self.assertEqual(container.get('Foo.Bar.DoesNotExist'), None)

    def test_short_id(self):
        container = Container(None, self.container_dict, has_been_inspected=True)
        assert container.short_id == self.container_id[:12]


class GetContainerNameTestCase(unittest.TestCase):

    def test_get_container_name(self):
        self.assertIsNone(get_container_name({}))
        self.assertEqual(get_container_name({'Name': 'myproject_db_1'}), 'myproject_db_1')
        self.assertEqual(
            get_container_name({'Names': ['/myproject_db_1', '/myproject_web_1/db']}),
            'myproject_db_1')
        self.assertEqual(
            get_container_name({
                'Names': [
                    '/swarm-host-1/myproject_db_1',
                    '/swarm-host-1/myproject_web_1/db'
                ]
            }),
            'myproject_db_1'
        )
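# Two Container behaviours the tests above rely on, stated here as a reading of
# the assertions rather than of the implementation: from_ps keeps only
# Id/Image/Name, picking the name without a '/service/alias' suffix even behind
# a Swarm host prefix, and get() performs dotted-path lookups such as
# 'HostConfig.VolumesFrom', returning None for missing keys.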
compose-1.8.0/tests/unit/interpolation_test.py000066400000000000000000000033251274620702700216200ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import unittest

from compose.config.environment import Environment as bddict
from compose.config.interpolation import interpolate
from compose.config.interpolation import InvalidInterpolation


class InterpolationTest(unittest.TestCase):

    def test_valid_interpolations(self):
        self.assertEqual(interpolate('$foo', bddict(foo='hi')), 'hi')
        self.assertEqual(interpolate('${foo}', bddict(foo='hi')), 'hi')

        self.assertEqual(interpolate('${subject} love you', bddict(subject='i')), 'i love you')
        self.assertEqual(interpolate('i ${verb} you', bddict(verb='love')), 'i love you')
        self.assertEqual(interpolate('i love ${object}', bddict(object='you')), 'i love you')

    def test_empty_value(self):
        self.assertEqual(interpolate('${foo}', bddict(foo='')), '')

    def test_unset_value(self):
        self.assertEqual(interpolate('${foo}', bddict()), '')

    def test_escaped_interpolation(self):
        self.assertEqual(interpolate('$${foo}', bddict(foo='hi')), '${foo}')

    def test_invalid_strings(self):
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('$}', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${}', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${ }', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${ foo}', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${foo }', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${foo!}', bddict()))
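# Summarising the contract pinned down above: interpolate() accepts both $var
# and ${var} forms, '$$' escapes to a literal '$', unset variables resolve to
# the empty string, and malformed references ('${', '${ foo}', '${foo!}', ...)
# raise InvalidInterpolation.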
compose-1.8.0/tests/unit/parallel_test.py000066400000000000000000000036511274620702700205270ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import six
from docker.errors import APIError

from compose.parallel import parallel_execute
from compose.parallel import parallel_execute_iter
from compose.parallel import UpstreamError


web = 'web'
db = 'db'
data_volume = 'data_volume'
cache = 'cache'

objects = [web, db, data_volume, cache]

deps = {
    web: [db, cache],
    db: [data_volume],
    data_volume: [],
    cache: [],
}


def get_deps(obj):
    return deps[obj]


def test_parallel_execute():
    results, errors = parallel_execute(
        objects=[1, 2, 3, 4, 5],
        func=lambda x: x * 2,
        get_name=six.text_type,
        msg="Doubling",
    )

    assert sorted(results) == [2, 4, 6, 8, 10]
    assert errors == {}


def test_parallel_execute_with_deps():
    log = []

    def process(x):
        log.append(x)

    parallel_execute(
        objects=objects,
        func=process,
        get_name=lambda obj: obj,
        msg="Processing",
        get_deps=get_deps,
    )

    assert sorted(log) == sorted(objects)

    assert log.index(data_volume) < log.index(db)
    assert log.index(db) < log.index(web)
    assert log.index(cache) < log.index(web)


def test_parallel_execute_with_upstream_errors():
    log = []

    def process(x):
        if x is data_volume:
            raise APIError(None, None, "Something went wrong")
        log.append(x)

    parallel_execute(
        objects=objects,
        func=process,
        get_name=lambda obj: obj,
        msg="Processing",
        get_deps=get_deps,
    )

    assert log == [cache]

    events = [
        (obj, result, type(exception))
        for obj, result, exception
        in parallel_execute_iter(objects, process, get_deps)
    ]

    assert (cache, None, type(None)) in events
    assert (data_volume, None, APIError) in events
    assert (db, None, UpstreamError) in events
    assert (web, None, UpstreamError) in events
compose-1.8.0/tests/unit/progress_stream_test.py000066400000000000000000000053521274620702700221520ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

from six import StringIO

from compose import progress_stream
from tests import unittest


class ProgressStreamTestCase(unittest.TestCase):
    def test_stream_output(self):
        output = [
            b'{"status": "Downloading", "progressDetail": {"current": '
            b'31019763, "start": 1413653874, "total": 62763875}, '
            b'"progress": "..."}',
        ]
        events = progress_stream.stream_output(output, StringIO())
        self.assertEqual(len(events), 1)

    def test_stream_output_div_zero(self):
        output = [
            b'{"status": "Downloading", "progressDetail": {"current": '
            b'0, "start": 1413653874, "total": 0}, '
            b'"progress": "..."}',
        ]
        events = progress_stream.stream_output(output, StringIO())
        self.assertEqual(len(events), 1)

    def test_stream_output_null_total(self):
        output = [
            b'{"status": "Downloading", "progressDetail": {"current": '
            b'0, "start": 1413653874, "total": null}, '
            b'"progress": "..."}',
        ]
        events = progress_stream.stream_output(output, StringIO())
        self.assertEqual(len(events), 1)

    def test_stream_output_progress_event_tty(self):
        events = [
            b'{"status": "Already exists", "progressDetail": {}, "id": "8d05e3af52b0"}'
        ]

        class TTYStringIO(StringIO):
            def isatty(self):
                return True

        output = TTYStringIO()
        events = progress_stream.stream_output(events, output)
        self.assertTrue(len(output.getvalue()) > 0)

    def test_stream_output_progress_event_no_tty(self):
        events = [
            b'{"status": "Already exists", "progressDetail": {}, "id": "8d05e3af52b0"}'
        ]
        output = StringIO()

        events = progress_stream.stream_output(events, output)
        self.assertEqual(len(output.getvalue()), 0)

    def test_stream_output_no_progress_event_no_tty(self):
        events = [
            b'{"status": "Pulling from library/xy", "id": "latest"}'
        ]
        output = StringIO()

        events = progress_stream.stream_output(events, output)
        self.assertTrue(len(output.getvalue()) > 0)


def test_get_digest_from_push():
    digest = "sha256:abcd"
    events = [
        {"status": "..."},
        {"status": "..."},
        {"progressDetail": {}, "aux": {"Digest": digest}},
    ]
    assert progress_stream.get_digest_from_push(events) == digest


def test_get_digest_from_pull():
    digest = "sha256:abcd"
    events = [
        {"status": "..."},
        {"status": "..."},
        {"status": "Digest: %s" % digest},
    ]
    assert progress_stream.get_digest_from_pull(events) == digest
compose-1.8.0/tests/unit/project_test.py000066400000000000000000000426151274620702700204030ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import datetime

import docker
from docker.errors import \
NotFound from .. import mock from .. import unittest from compose.config.config import Config from compose.config.types import VolumeFromSpec from compose.const import LABEL_SERVICE from compose.container import Container from compose.project import Project from compose.service import ImageType from compose.service import Service class ProjectTest(unittest.TestCase): def setUp(self): self.mock_client = mock.create_autospec(docker.Client) def test_from_config(self): config = Config( version=None, services=[ { 'name': 'web', 'image': 'busybox:latest', }, { 'name': 'db', 'image': 'busybox:latest', }, ], networks=None, volumes=None, ) project = Project.from_config( name='composetest', config_data=config, client=None, ) self.assertEqual(len(project.services), 2) self.assertEqual(project.get_service('web').name, 'web') self.assertEqual(project.get_service('web').options['image'], 'busybox:latest') self.assertEqual(project.get_service('db').name, 'db') self.assertEqual(project.get_service('db').options['image'], 'busybox:latest') self.assertFalse(project.networks.use_networking) def test_from_config_v2(self): config = Config( version=2, services=[ { 'name': 'web', 'image': 'busybox:latest', }, { 'name': 'db', 'image': 'busybox:latest', }, ], networks=None, volumes=None, ) project = Project.from_config('composetest', config, None) self.assertEqual(len(project.services), 2) self.assertTrue(project.networks.use_networking) def test_get_service(self): web = Service( project='composetest', name='web', client=None, image="busybox:latest", ) project = Project('test', [web], None) self.assertEqual(project.get_service('web'), web) def test_get_services_returns_all_services_without_args(self): web = Service( project='composetest', name='web', image='foo', ) console = Service( project='composetest', name='console', image='foo', ) project = Project('test', [web, console], None) self.assertEqual(project.get_services(), [web, console]) def test_get_services_returns_listed_services_with_args(self): web = Service( project='composetest', name='web', image='foo', ) console = Service( project='composetest', name='console', image='foo', ) project = Project('test', [web, console], None) self.assertEqual(project.get_services(['console']), [console]) def test_get_services_with_include_links(self): db = Service( project='composetest', name='db', image='foo', ) web = Service( project='composetest', name='web', image='foo', links=[(db, 'database')] ) cache = Service( project='composetest', name='cache', image='foo' ) console = Service( project='composetest', name='console', image='foo', links=[(web, 'web')] ) project = Project('test', [web, db, cache, console], None) self.assertEqual( project.get_services(['console'], include_deps=True), [db, web, console] ) def test_get_services_removes_duplicates_following_links(self): db = Service( project='composetest', name='db', image='foo', ) web = Service( project='composetest', name='web', image='foo', links=[(db, 'database')] ) project = Project('test', [web, db], None) self.assertEqual( project.get_services(['web', 'db'], include_deps=True), [db, web] ) def test_use_volumes_from_container(self): container_id = 'aabbccddee' container_dict = dict(Name='aaa', Id=container_id) self.mock_client.inspect_container.return_value = container_dict project = Project.from_config( name='test', client=self.mock_client, config_data=Config( version=None, services=[{ 'name': 'test', 'image': 'busybox:latest', 'volumes_from': [VolumeFromSpec('aaa', 'rw', 'container')] }], networks=None, 
volumes=None, ), ) assert project.get_service('test')._get_volumes_from() == [container_id + ":rw"] def test_use_volumes_from_service_no_container(self): container_name = 'test_vol_1' self.mock_client.containers.return_value = [ { "Name": container_name, "Names": [container_name], "Id": container_name, "Image": 'busybox:latest' } ] project = Project.from_config( name='test', client=self.mock_client, config_data=Config( version=None, services=[ { 'name': 'vol', 'image': 'busybox:latest' }, { 'name': 'test', 'image': 'busybox:latest', 'volumes_from': [VolumeFromSpec('vol', 'rw', 'service')] } ], networks=None, volumes=None, ), ) assert project.get_service('test')._get_volumes_from() == [container_name + ":rw"] def test_use_volumes_from_service_container(self): container_ids = ['aabbccddee', '12345'] project = Project.from_config( name='test', client=None, config_data=Config( version=None, services=[ { 'name': 'vol', 'image': 'busybox:latest' }, { 'name': 'test', 'image': 'busybox:latest', 'volumes_from': [VolumeFromSpec('vol', 'rw', 'service')] } ], networks=None, volumes=None, ), ) with mock.patch.object(Service, 'containers') as mock_return: mock_return.return_value = [ mock.Mock(id=container_id, spec=Container) for container_id in container_ids] assert ( project.get_service('test')._get_volumes_from() == [container_ids[0] + ':rw'] ) def test_events(self): services = [Service(name='web'), Service(name='db')] project = Project('test', services, self.mock_client) self.mock_client.events.return_value = iter([ { 'status': 'create', 'from': 'example/image', 'id': 'abcde', 'time': 1420092061, 'timeNano': 14200920610000002000, }, { 'status': 'attach', 'from': 'example/image', 'id': 'abcde', 'time': 1420092061, 'timeNano': 14200920610000003000, }, { 'status': 'create', 'from': 'example/other', 'id': 'bdbdbd', 'time': 1420092061, 'timeNano': 14200920610000005000, }, { 'status': 'create', 'from': 'example/db', 'id': 'ababa', 'time': 1420092061, 'timeNano': 14200920610000004000, }, { 'status': 'destroy', 'from': 'example/db', 'id': 'eeeee', 'time': 1420092061, 'timeNano': 14200920610000004000, }, ]) def dt_with_microseconds(dt, us): return datetime.datetime.fromtimestamp(dt).replace(microsecond=us) def get_container(cid): if cid == 'eeeee': raise NotFound(None, None, "oops") if cid == 'abcde': name = 'web' labels = {LABEL_SERVICE: name} elif cid == 'ababa': name = 'db' labels = {LABEL_SERVICE: name} else: labels = {} name = '' return { 'Id': cid, 'Config': {'Labels': labels}, 'Name': '/project_%s_1' % name, } self.mock_client.inspect_container.side_effect = get_container events = project.events() events_list = list(events) # Assert the return value is a generator assert not list(events) assert events_list == [ { 'type': 'container', 'service': 'web', 'action': 'create', 'id': 'abcde', 'attributes': { 'name': 'project_web_1', 'image': 'example/image', }, 'time': dt_with_microseconds(1420092061, 2), 'container': Container(None, {'Id': 'abcde'}), }, { 'type': 'container', 'service': 'web', 'action': 'attach', 'id': 'abcde', 'attributes': { 'name': 'project_web_1', 'image': 'example/image', }, 'time': dt_with_microseconds(1420092061, 3), 'container': Container(None, {'Id': 'abcde'}), }, { 'type': 'container', 'service': 'db', 'action': 'create', 'id': 'ababa', 'attributes': { 'name': 'project_db_1', 'image': 'example/db', }, 'time': dt_with_microseconds(1420092061, 4), 'container': Container(None, {'Id': 'ababa'}), }, ] def test_net_unset(self): project = Project.from_config( name='test', 
client=self.mock_client, config_data=Config( version=None, services=[ { 'name': 'test', 'image': 'busybox:latest', } ], networks=None, volumes=None, ), ) service = project.get_service('test') self.assertEqual(service.network_mode.id, None) self.assertNotIn('NetworkMode', service._get_container_host_config({})) def test_use_net_from_container(self): container_id = 'aabbccddee' container_dict = dict(Name='aaa', Id=container_id) self.mock_client.inspect_container.return_value = container_dict project = Project.from_config( name='test', client=self.mock_client, config_data=Config( version=None, services=[ { 'name': 'test', 'image': 'busybox:latest', 'network_mode': 'container:aaa' }, ], networks=None, volumes=None, ), ) service = project.get_service('test') self.assertEqual(service.network_mode.mode, 'container:' + container_id) def test_use_net_from_service(self): container_name = 'test_aaa_1' self.mock_client.containers.return_value = [ { "Name": container_name, "Names": [container_name], "Id": container_name, "Image": 'busybox:latest' } ] project = Project.from_config( name='test', client=self.mock_client, config_data=Config( version=None, services=[ { 'name': 'aaa', 'image': 'busybox:latest' }, { 'name': 'test', 'image': 'busybox:latest', 'network_mode': 'service:aaa' }, ], networks=None, volumes=None, ), ) service = project.get_service('test') self.assertEqual(service.network_mode.mode, 'container:' + container_name) def test_uses_default_network_true(self): project = Project.from_config( name='test', client=self.mock_client, config_data=Config( version=2, services=[ { 'name': 'foo', 'image': 'busybox:latest' }, ], networks=None, volumes=None, ), ) assert 'default' in project.networks.networks def test_uses_default_network_false(self): project = Project.from_config( name='test', client=self.mock_client, config_data=Config( version=2, services=[ { 'name': 'foo', 'image': 'busybox:latest', 'networks': {'custom': None} }, ], networks={'custom': {}}, volumes=None, ), ) assert 'default' not in project.networks.networks def test_container_without_name(self): self.mock_client.containers.return_value = [ {'Image': 'busybox:latest', 'Id': '1', 'Name': '1'}, {'Image': 'busybox:latest', 'Id': '2', 'Name': None}, {'Image': 'busybox:latest', 'Id': '3'}, ] self.mock_client.inspect_container.return_value = { 'Id': '1', 'Config': { 'Labels': { LABEL_SERVICE: 'web', }, }, } project = Project.from_config( name='test', client=self.mock_client, config_data=Config( version=None, services=[{ 'name': 'web', 'image': 'busybox:latest', }], networks=None, volumes=None, ), ) self.assertEqual([c.id for c in project.containers()], ['1']) def test_down_with_no_resources(self): project = Project.from_config( name='test', client=self.mock_client, config_data=Config( version='2', services=[{ 'name': 'web', 'image': 'busybox:latest', }], networks={'default': {}}, volumes={'data': {}}, ), ) self.mock_client.remove_network.side_effect = NotFound(None, None, 'oops') self.mock_client.remove_volume.side_effect = NotFound(None, None, 'oops') project.down(ImageType.all, True) self.mock_client.remove_image.assert_called_once_with("busybox:latest") def test_warning_in_swarm_mode(self): self.mock_client.info.return_value = {'Swarm': {'LocalNodeState': 'active'}} project = Project('composetest', [], self.mock_client) with mock.patch('compose.project.log') as fake_log: project.up() assert fake_log.warn.call_count == 1 def test_no_warning_on_stop(self): self.mock_client.info.return_value = {'Swarm': {'LocalNodeState': 'active'}} 
        project = Project('composetest', [], self.mock_client)

        with mock.patch('compose.project.log') as fake_log:
            project.stop()
            assert fake_log.warn.call_count == 0

    def test_no_warning_in_normal_mode(self):
        self.mock_client.info.return_value = {'Swarm': {'LocalNodeState': 'inactive'}}
        project = Project('composetest', [], self.mock_client)

        with mock.patch('compose.project.log') as fake_log:
            project.up()
            assert fake_log.warn.call_count == 0

    def test_no_warning_with_no_swarm_info(self):
        self.mock_client.info.return_value = {}
        project = Project('composetest', [], self.mock_client)

        with mock.patch('compose.project.log') as fake_log:
            project.up()
            assert fake_log.warn.call_count == 0
compose-1.8.0/tests/unit/service_test.py000066400000000000000000001105661274620702700203730ustar00rootroot00000000000000from __future__ import absolute_import
from __future__ import unicode_literals

import docker
import pytest
from docker.errors import APIError

from .. import mock
from .. import unittest
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import LABEL_CONFIG_HASH
from compose.const import LABEL_ONE_OFF
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.project import OneOffFilter
from compose.service import build_ulimits
from compose.service import build_volume_binding
from compose.service import BuildAction
from compose.service import ContainerNetworkMode
from compose.service import get_container_data_volumes
from compose.service import ImageType
from compose.service import merge_volume_bindings
from compose.service import NeedsBuildError
from compose.service import NetworkMode
from compose.service import NoSuchImageError
from compose.service import parse_repository_tag
from compose.service import Service
from compose.service import ServiceNetworkMode
from compose.service import warn_on_masked_volume


class ServiceTest(unittest.TestCase):

    def setUp(self):
        self.mock_client = mock.create_autospec(docker.Client)

    def test_containers(self):
        service = Service('db', self.mock_client, 'myproject', image='foo')
        self.mock_client.containers.return_value = []
        self.assertEqual(list(service.containers()), [])

    def test_containers_with_containers(self):
        self.mock_client.containers.return_value = [
            dict(Name=str(i), Image='foo', Id=i) for i in range(3)
        ]
        service = Service('db', self.mock_client, 'myproject', image='foo')
        self.assertEqual([c.id for c in service.containers()], list(range(3)))

        expected_labels = [
            '{0}=myproject'.format(LABEL_PROJECT),
            '{0}=db'.format(LABEL_SERVICE),
            '{0}=False'.format(LABEL_ONE_OFF),
        ]

        self.mock_client.containers.assert_called_once_with(
            all=False,
            filters={'label': expected_labels})

    def test_container_without_name(self):
        self.mock_client.containers.return_value = [
            {'Image': 'foo', 'Id': '1', 'Name': '1'},
            {'Image': 'foo', 'Id': '2', 'Name': None},
            {'Image': 'foo', 'Id': '3'},
        ]
        service = Service('db', self.mock_client, 'myproject', image='foo')

        self.assertEqual([c.id for c in service.containers()], ['1'])
        self.assertEqual(service._next_container_number(), 2)
        self.assertEqual(service.get_container(1).id, '1')

    def test_get_volumes_from_container(self):
        container_id = 'aabbccddee'
        service = Service(
            'test',
            image='foo',
            volumes_from=[
                VolumeFromSpec(
                    mock.Mock(id=container_id, spec=Container),
                    'rw',
                    'container')])

        self.assertEqual(service._get_volumes_from(), [container_id + ':rw'])

    def test_get_volumes_from_container_read_only(self):
        container_id = \
'aabbccddee' service = Service( 'test', image='foo', volumes_from=[ VolumeFromSpec( mock.Mock(id=container_id, spec=Container), 'ro', 'container')]) self.assertEqual(service._get_volumes_from(), [container_id + ':ro']) def test_get_volumes_from_service_container_exists(self): container_ids = ['aabbccddee', '12345'] from_service = mock.create_autospec(Service) from_service.containers.return_value = [ mock.Mock(id=container_id, spec=Container) for container_id in container_ids ] service = Service( 'test', volumes_from=[VolumeFromSpec(from_service, 'rw', 'service')], image='foo') self.assertEqual(service._get_volumes_from(), [container_ids[0] + ":rw"]) def test_get_volumes_from_service_container_exists_with_flags(self): for mode in ['ro', 'rw', 'z', 'rw,z', 'z,rw']: container_ids = ['aabbccddee:' + mode, '12345:' + mode] from_service = mock.create_autospec(Service) from_service.containers.return_value = [ mock.Mock(id=container_id.split(':')[0], spec=Container) for container_id in container_ids ] service = Service( 'test', volumes_from=[VolumeFromSpec(from_service, mode, 'service')], image='foo') self.assertEqual(service._get_volumes_from(), [container_ids[0]]) def test_get_volumes_from_service_no_container(self): container_id = 'abababab' from_service = mock.create_autospec(Service) from_service.containers.return_value = [] from_service.create_container.return_value = mock.Mock( id=container_id, spec=Container) service = Service( 'test', image='foo', volumes_from=[VolumeFromSpec(from_service, 'rw', 'service')]) self.assertEqual(service._get_volumes_from(), [container_id + ':rw']) from_service.create_container.assert_called_once_with() def test_split_domainname_none(self): service = Service('foo', image='foo', hostname='name', client=self.mock_client) opts = service._get_container_create_options({'image': 'foo'}, 1) self.assertEqual(opts['hostname'], 'name', 'hostname') self.assertFalse('domainname' in opts, 'domainname') def test_memory_swap_limit(self): self.mock_client.create_host_config.return_value = {} service = Service( name='foo', image='foo', hostname='name', client=self.mock_client, mem_limit=1000000000, memswap_limit=2000000000) service._get_container_create_options({'some': 'overrides'}, 1) self.assertTrue(self.mock_client.create_host_config.called) self.assertEqual( self.mock_client.create_host_config.call_args[1]['mem_limit'], 1000000000 ) self.assertEqual( self.mock_client.create_host_config.call_args[1]['memswap_limit'], 2000000000 ) def test_cgroup_parent(self): self.mock_client.create_host_config.return_value = {} service = Service( name='foo', image='foo', hostname='name', client=self.mock_client, cgroup_parent='test') service._get_container_create_options({'some': 'overrides'}, 1) self.assertTrue(self.mock_client.create_host_config.called) self.assertEqual( self.mock_client.create_host_config.call_args[1]['cgroup_parent'], 'test' ) def test_log_opt(self): self.mock_client.create_host_config.return_value = {} log_opt = {'syslog-address': 'tcp://192.168.0.42:123'} logging = {'driver': 'syslog', 'options': log_opt} service = Service( name='foo', image='foo', hostname='name', client=self.mock_client, log_driver='syslog', logging=logging) service._get_container_create_options({'some': 'overrides'}, 1) self.assertTrue(self.mock_client.create_host_config.called) self.assertEqual( self.mock_client.create_host_config.call_args[1]['log_config'], {'Type': 'syslog', 'Config': {'syslog-address': 'tcp://192.168.0.42:123'}} ) def test_split_domainname_fqdn(self): service = Service( 
'foo', hostname='name.domain.tld', image='foo', client=self.mock_client) opts = service._get_container_create_options({'image': 'foo'}, 1) self.assertEqual(opts['hostname'], 'name', 'hostname') self.assertEqual(opts['domainname'], 'domain.tld', 'domainname') def test_split_domainname_both(self): service = Service( 'foo', hostname='name', image='foo', domainname='domain.tld', client=self.mock_client) opts = service._get_container_create_options({'image': 'foo'}, 1) self.assertEqual(opts['hostname'], 'name', 'hostname') self.assertEqual(opts['domainname'], 'domain.tld', 'domainname') def test_split_domainname_weird(self): service = Service( 'foo', hostname='name.sub', domainname='domain.tld', image='foo', client=self.mock_client) opts = service._get_container_create_options({'image': 'foo'}, 1) self.assertEqual(opts['hostname'], 'name.sub', 'hostname') self.assertEqual(opts['domainname'], 'domain.tld', 'domainname') def test_no_default_hostname_when_not_using_networking(self): service = Service( 'foo', image='foo', use_networking=False, client=self.mock_client, ) opts = service._get_container_create_options({'image': 'foo'}, 1) self.assertIsNone(opts.get('hostname')) def test_get_container_create_options_with_name_option(self): service = Service( 'foo', image='foo', client=self.mock_client, container_name='foo1') name = 'the_new_name' opts = service._get_container_create_options( {'name': name}, 1, one_off=OneOffFilter.only) self.assertEqual(opts['name'], name) def test_get_container_create_options_does_not_mutate_options(self): labels = {'thing': 'real'} environment = {'also': 'real'} service = Service( 'foo', image='foo', labels=dict(labels), client=self.mock_client, environment=dict(environment), ) self.mock_client.inspect_image.return_value = {'Id': 'abcd'} prev_container = mock.Mock( id='ababab', image_config={'ContainerConfig': {}}) prev_container.get.return_value = None opts = service._get_container_create_options( {}, 1, previous_container=prev_container) self.assertEqual(service.options['labels'], labels) self.assertEqual(service.options['environment'], environment) self.assertEqual( opts['labels'][LABEL_CONFIG_HASH], '2524a06fcb3d781aa2c981fc40bcfa08013bb318e4273bfa388df22023e6f2aa') assert opts['environment'] == ['also=real'] def test_get_container_create_options_sets_affinity_with_binds(self): service = Service( 'foo', image='foo', client=self.mock_client, ) self.mock_client.inspect_image.return_value = {'Id': 'abcd'} prev_container = mock.Mock( id='ababab', image_config={'ContainerConfig': {'Volumes': ['/data']}}) def container_get(key): return { 'Mounts': [ { 'Destination': '/data', 'Source': '/some/path', 'Name': 'abab1234', }, ] }.get(key, None) prev_container.get.side_effect = container_get opts = service._get_container_create_options( {}, 1, previous_container=prev_container) assert opts['environment'] == ['affinity:container==ababab'] def test_get_container_create_options_no_affinity_without_binds(self): service = Service('foo', image='foo', client=self.mock_client) self.mock_client.inspect_image.return_value = {'Id': 'abcd'} prev_container = mock.Mock( id='ababab', image_config={'ContainerConfig': {}}) prev_container.get.return_value = None opts = service._get_container_create_options( {}, 1, previous_container=prev_container) assert opts['environment'] == [] def test_get_container_not_found(self): self.mock_client.containers.return_value = [] service = Service('foo', client=self.mock_client, image='foo') self.assertRaises(ValueError, service.get_container) 
@mock.patch('compose.service.Container', autospec=True) def test_get_container(self, mock_container_class): container_dict = dict(Name='default_foo_2') self.mock_client.containers.return_value = [container_dict] service = Service('foo', image='foo', client=self.mock_client) container = service.get_container(number=2) self.assertEqual(container, mock_container_class.from_ps.return_value) mock_container_class.from_ps.assert_called_once_with( self.mock_client, container_dict) @mock.patch('compose.service.log', autospec=True) def test_pull_image(self, mock_log): service = Service('foo', client=self.mock_client, image='someimage:sometag') service.pull() self.mock_client.pull.assert_called_once_with( 'someimage', tag='sometag', stream=True) mock_log.info.assert_called_once_with('Pulling foo (someimage:sometag)...') def test_pull_image_no_tag(self): service = Service('foo', client=self.mock_client, image='ababab') service.pull() self.mock_client.pull.assert_called_once_with( 'ababab', tag='latest', stream=True) @mock.patch('compose.service.log', autospec=True) def test_pull_image_digest(self, mock_log): service = Service('foo', client=self.mock_client, image='someimage@sha256:1234') service.pull() self.mock_client.pull.assert_called_once_with( 'someimage', tag='sha256:1234', stream=True) mock_log.info.assert_called_once_with('Pulling foo (someimage@sha256:1234)...') @mock.patch('compose.service.Container', autospec=True) def test_recreate_container(self, _): mock_container = mock.create_autospec(Container) service = Service('foo', client=self.mock_client, image='someimage') service.image = lambda: {'Id': 'abc123'} new_container = service.recreate_container(mock_container) mock_container.stop.assert_called_once_with(timeout=10) mock_container.rename_to_tmp_name.assert_called_once_with() new_container.start.assert_called_once_with() mock_container.remove.assert_called_once_with() @mock.patch('compose.service.Container', autospec=True) def test_recreate_container_with_timeout(self, _): mock_container = mock.create_autospec(Container) self.mock_client.inspect_image.return_value = {'Id': 'abc123'} service = Service('foo', client=self.mock_client, image='someimage') service.recreate_container(mock_container, timeout=1) mock_container.stop.assert_called_once_with(timeout=1) def test_parse_repository_tag(self): self.assertEqual(parse_repository_tag("root"), ("root", "", ":")) self.assertEqual(parse_repository_tag("root:tag"), ("root", "tag", ":")) self.assertEqual(parse_repository_tag("user/repo"), ("user/repo", "", ":")) self.assertEqual(parse_repository_tag("user/repo:tag"), ("user/repo", "tag", ":")) self.assertEqual(parse_repository_tag("url:5000/repo"), ("url:5000/repo", "", ":")) self.assertEqual( parse_repository_tag("url:5000/repo:tag"), ("url:5000/repo", "tag", ":")) self.assertEqual( parse_repository_tag("root@sha256:digest"), ("root", "sha256:digest", "@")) self.assertEqual( parse_repository_tag("user/repo@sha256:digest"), ("user/repo", "sha256:digest", "@")) self.assertEqual( parse_repository_tag("url:5000/repo@sha256:digest"), ("url:5000/repo", "sha256:digest", "@")) def test_create_container(self): service = Service('foo', client=self.mock_client, build={'context': '.'}) self.mock_client.inspect_image.side_effect = [ NoSuchImageError, {'Id': 'abc123'}, ] self.mock_client.build.return_value = [ '{"stream": "Successfully built abcd"}', ] with mock.patch('compose.service.log', autospec=True) as mock_log: service.create_container() assert mock_log.warn.called _, args, _ = 
mock_log.warn.mock_calls[0] assert 'was built because it did not already exist' in args[0] self.mock_client.build.assert_called_once_with( tag='default_foo', dockerfile=None, stream=True, path='.', pull=False, forcerm=False, nocache=False, rm=True, buildargs=None, ) def test_ensure_image_exists_no_build(self): service = Service('foo', client=self.mock_client, build={'context': '.'}) self.mock_client.inspect_image.return_value = {'Id': 'abc123'} service.ensure_image_exists(do_build=BuildAction.skip) assert not self.mock_client.build.called def test_ensure_image_exists_no_build_but_needs_build(self): service = Service('foo', client=self.mock_client, build={'context': '.'}) self.mock_client.inspect_image.side_effect = NoSuchImageError with pytest.raises(NeedsBuildError): service.ensure_image_exists(do_build=BuildAction.skip) def test_ensure_image_exists_force_build(self): service = Service('foo', client=self.mock_client, build={'context': '.'}) self.mock_client.inspect_image.return_value = {'Id': 'abc123'} self.mock_client.build.return_value = [ '{"stream": "Successfully built abcd"}', ] with mock.patch('compose.service.log', autospec=True) as mock_log: service.ensure_image_exists(do_build=BuildAction.force) assert not mock_log.warn.called self.mock_client.build.assert_called_once_with( tag='default_foo', dockerfile=None, stream=True, path='.', pull=False, forcerm=False, nocache=False, rm=True, buildargs=None, ) def test_build_does_not_pull(self): self.mock_client.build.return_value = [ b'{"stream": "Successfully built 12345"}', ] service = Service('foo', client=self.mock_client, build={'context': '.'}) service.build() self.assertEqual(self.mock_client.build.call_count, 1) self.assertFalse(self.mock_client.build.call_args[1]['pull']) def test_config_dict(self): self.mock_client.inspect_image.return_value = {'Id': 'abcd'} service = Service( 'foo', image='example.com/foo', client=self.mock_client, network_mode=ServiceNetworkMode(Service('other')), networks={'default': None}, links=[(Service('one'), 'one')], volumes_from=[VolumeFromSpec(Service('two'), 'rw', 'service')]) config_dict = service.config_dict() expected = { 'image_id': 'abcd', 'options': {'image': 'example.com/foo'}, 'links': [('one', 'one')], 'net': 'other', 'networks': {'default': None}, 'volumes_from': [('two', 'rw')], } assert config_dict == expected def test_config_dict_with_network_mode_from_container(self): self.mock_client.inspect_image.return_value = {'Id': 'abcd'} container = Container( self.mock_client, {'Id': 'aaabbb', 'Name': '/foo_1'}) service = Service( 'foo', image='example.com/foo', client=self.mock_client, network_mode=ContainerNetworkMode(container)) config_dict = service.config_dict() expected = { 'image_id': 'abcd', 'options': {'image': 'example.com/foo'}, 'links': [], 'networks': {}, 'net': 'aaabbb', 'volumes_from': [], } assert config_dict == expected def test_remove_image_none(self): web = Service('web', image='example', client=self.mock_client) assert not web.remove_image(ImageType.none) assert not self.mock_client.remove_image.called def test_remove_image_local_with_image_name_doesnt_remove(self): web = Service('web', image='example', client=self.mock_client) assert not web.remove_image(ImageType.local) assert not self.mock_client.remove_image.called def test_remove_image_local_without_image_name_does_remove(self): web = Service('web', build='.', client=self.mock_client) assert web.remove_image(ImageType.local) self.mock_client.remove_image.assert_called_once_with(web.image_name) def 
test_remove_image_all_does_remove(self): web = Service('web', image='example', client=self.mock_client) assert web.remove_image(ImageType.all) self.mock_client.remove_image.assert_called_once_with(web.image_name) def test_remove_image_with_error(self): self.mock_client.remove_image.side_effect = error = APIError( message="testing", response={}, explanation="Boom") web = Service('web', image='example', client=self.mock_client) with mock.patch('compose.service.log', autospec=True) as mock_log: assert not web.remove_image(ImageType.all) mock_log.error.assert_called_once_with( "Failed to remove image for service %s: %s", web.name, error) def test_specifies_host_port_with_no_ports(self): service = Service( 'foo', image='foo') self.assertEqual(service.specifies_host_port(), False) def test_specifies_host_port_with_container_port(self): service = Service( 'foo', image='foo', ports=["2000"]) self.assertEqual(service.specifies_host_port(), False) def test_specifies_host_port_with_host_port(self): service = Service( 'foo', image='foo', ports=["1000:2000"]) self.assertEqual(service.specifies_host_port(), True) def test_specifies_host_port_with_host_ip_no_port(self): service = Service( 'foo', image='foo', ports=["127.0.0.1::2000"]) self.assertEqual(service.specifies_host_port(), False) def test_specifies_host_port_with_host_ip_and_port(self): service = Service( 'foo', image='foo', ports=["127.0.0.1:1000:2000"]) self.assertEqual(service.specifies_host_port(), True) def test_specifies_host_port_with_container_port_range(self): service = Service( 'foo', image='foo', ports=["2000-3000"]) self.assertEqual(service.specifies_host_port(), False) def test_specifies_host_port_with_host_port_range(self): service = Service( 'foo', image='foo', ports=["1000-2000:2000-3000"]) self.assertEqual(service.specifies_host_port(), True) def test_specifies_host_port_with_host_ip_no_port_range(self): service = Service( 'foo', image='foo', ports=["127.0.0.1::2000-3000"]) self.assertEqual(service.specifies_host_port(), False) def test_specifies_host_port_with_host_ip_and_port_range(self): service = Service( 'foo', image='foo', ports=["127.0.0.1:1000-2000:2000-3000"]) self.assertEqual(service.specifies_host_port(), True) def test_image_name_from_config(self): image_name = 'example/web:latest' service = Service('foo', image=image_name) assert service.image_name == image_name def test_image_name_default(self): service = Service('foo', project='testing') assert service.image_name == 'testing_foo' @mock.patch('compose.service.log', autospec=True) def test_only_log_warning_when_host_ports_clash(self, mock_log): self.mock_client.inspect_image.return_value = {'Id': 'abcd'} name = 'foo' service = Service( name, client=self.mock_client, ports=["8080:80"]) service.scale(0) self.assertFalse(mock_log.warn.called) service.scale(1) self.assertFalse(mock_log.warn.called) service.scale(2) mock_log.warn.assert_called_once_with( 'The "{}" service specifies a port on the host. 

class TestServiceNetwork(object):

    def test_connect_container_to_networks_short_aliase_exists(self):
        mock_client = mock.create_autospec(docker.Client)
        service = Service(
            'db',
            mock_client,
            'myproject',
            image='foo',
            networks={'project_default': {}})
        container = Container(
            None,
            {
                'Id': 'abcdef',
                'NetworkSettings': {
                    'Networks': {
                        'project_default': {
                            'Aliases': ['analias', 'abcdef'],
                        },
                    },
                },
            },
            True)
        service.connect_container_to_networks(container)

        assert not mock_client.disconnect_container_from_network.call_count
        assert not mock_client.connect_container_to_network.call_count


def sort_by_name(dictionary_list):
    return sorted(dictionary_list, key=lambda k: k['name'])


class BuildUlimitsTestCase(unittest.TestCase):

    def test_build_ulimits_with_dict(self):
        ulimits = build_ulimits(
            {
                'nofile': {'soft': 10000, 'hard': 20000},
                'nproc': {'soft': 65535, 'hard': 65535}
            }
        )
        expected = [
            {'name': 'nofile', 'soft': 10000, 'hard': 20000},
            {'name': 'nproc', 'soft': 65535, 'hard': 65535}
        ]
        assert sort_by_name(ulimits) == sort_by_name(expected)

    def test_build_ulimits_with_ints(self):
        ulimits = build_ulimits({'nofile': 20000, 'nproc': 65535})
        expected = [
            {'name': 'nofile', 'soft': 20000, 'hard': 20000},
            {'name': 'nproc', 'soft': 65535, 'hard': 65535}
        ]
        assert sort_by_name(ulimits) == sort_by_name(expected)

    def test_build_ulimits_with_integers_and_dicts(self):
        ulimits = build_ulimits(
            {
                'nproc': 65535,
                'nofile': {'soft': 10000, 'hard': 20000}
            }
        )
        expected = [
            {'name': 'nofile', 'soft': 10000, 'hard': 20000},
            {'name': 'nproc', 'soft': 65535, 'hard': 65535}
        ]
        assert sort_by_name(ulimits) == sort_by_name(expected)


class NetTestCase(unittest.TestCase):

    def test_network_mode(self):
        network_mode = NetworkMode('host')
        self.assertEqual(network_mode.id, 'host')
        self.assertEqual(network_mode.mode, 'host')
        self.assertEqual(network_mode.service_name, None)

    def test_network_mode_container(self):
        container_id = 'abcd'
        network_mode = ContainerNetworkMode(Container(None, {'Id': container_id}))
        self.assertEqual(network_mode.id, container_id)
        self.assertEqual(network_mode.mode, 'container:' + container_id)
        self.assertEqual(network_mode.service_name, None)

    def test_network_mode_service(self):
        container_id = 'bbbb'
        service_name = 'web'
        mock_client = mock.create_autospec(docker.Client)
        mock_client.containers.return_value = [
            {'Id': container_id, 'Name': container_id, 'Image': 'abcd'},
        ]

        service = Service(name=service_name, client=mock_client)
        network_mode = ServiceNetworkMode(service)

        self.assertEqual(network_mode.id, service_name)
        self.assertEqual(network_mode.mode, 'container:' + container_id)
        self.assertEqual(network_mode.service_name, service_name)

    def test_network_mode_service_no_containers(self):
        service_name = 'web'
        mock_client = mock.create_autospec(docker.Client)
        mock_client.containers.return_value = []

        service = Service(name=service_name, client=mock_client)
        network_mode = ServiceNetworkMode(service)

        self.assertEqual(network_mode.id, service_name)
        self.assertEqual(network_mode.mode, None)
        self.assertEqual(network_mode.service_name, service_name)


def build_mount(destination, source, mode='rw'):
    return {'Source': source, 'Destination': destination, 'Mode': mode}


class ServiceVolumesTest(unittest.TestCase):

    def setUp(self):
        self.mock_client = mock.create_autospec(docker.Client)

    def test_build_volume_binding(self):
        binding = build_volume_binding(VolumeSpec.parse('/outside:/inside'))
        assert binding == ('/inside', '/outside:/inside:rw')
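    # Note (added for clarity): the 'Mounts' fixtures below mirror the shape
    # of `docker inspect` output; only the fields compose reads (Source,
    # Destination, Mode, RW, Name) are populated.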
    def test_get_container_data_volumes(self):
        options = [VolumeSpec.parse(v) for v in [
            '/host/volume:/host/volume:ro',
            '/new/volume',
            '/existing/volume',
            'named:/named/vol',
        ]]

        self.mock_client.inspect_image.return_value = {
            'ContainerConfig': {
                'Volumes': {
                    '/mnt/image/data': {},
                }
            }
        }

        container = Container(self.mock_client, {
            'Image': 'ababab',
            'Mounts': [
                {
                    'Source': '/host/volume',
                    'Destination': '/host/volume',
                    'Mode': '',
                    'RW': True,
                    'Name': 'hostvolume',
                }, {
                    'Source': '/var/lib/docker/aaaaaaaa',
                    'Destination': '/existing/volume',
                    'Mode': '',
                    'RW': True,
                    'Name': 'existingvolume',
                }, {
                    'Source': '/var/lib/docker/bbbbbbbb',
                    'Destination': '/removed/volume',
                    'Mode': '',
                    'RW': True,
                    'Name': 'removedvolume',
                }, {
                    'Source': '/var/lib/docker/cccccccc',
                    'Destination': '/mnt/image/data',
                    'Mode': '',
                    'RW': True,
                    'Name': 'imagedata',
                },
            ]
        }, has_been_inspected=True)

        expected = [
            VolumeSpec.parse('existingvolume:/existing/volume:rw'),
            VolumeSpec.parse('imagedata:/mnt/image/data:rw'),
        ]

        volumes = get_container_data_volumes(container, options)
        assert sorted(volumes) == sorted(expected)

    def test_merge_volume_bindings(self):
        options = [
            VolumeSpec.parse('/host/volume:/host/volume:ro'),
            VolumeSpec.parse('/host/rw/volume:/host/rw/volume'),
            VolumeSpec.parse('/new/volume'),
            VolumeSpec.parse('/existing/volume'),
        ]

        self.mock_client.inspect_image.return_value = {
            'ContainerConfig': {'Volumes': {}}
        }

        previous_container = Container(self.mock_client, {
            'Id': 'cdefab',
            'Image': 'ababab',
            'Mounts': [{
                'Source': '/var/lib/docker/aaaaaaaa',
                'Destination': '/existing/volume',
                'Mode': '',
                'RW': True,
                'Name': 'existingvolume',
            }],
        }, has_been_inspected=True)

        expected = [
            '/host/volume:/host/volume:ro',
            '/host/rw/volume:/host/rw/volume:rw',
            'existingvolume:/existing/volume:rw',
        ]

        binds, affinity = merge_volume_bindings(options, previous_container)
        assert sorted(binds) == sorted(expected)
        assert affinity == {'affinity:container': '=cdefab'}

    def test_mount_same_host_path_to_two_volumes(self):
        service = Service(
            'web',
            image='busybox',
            volumes=[
                VolumeSpec.parse('/host/path:/data1'),
                VolumeSpec.parse('/host/path:/data2'),
            ],
            client=self.mock_client,
        )

        self.mock_client.inspect_image.return_value = {
            'Id': 'ababab',
            'ContainerConfig': {
                'Volumes': {}
            }
        }

        service._get_container_create_options(
            override_options={},
            number=1,
        )

        self.assertEqual(
            set(self.mock_client.create_host_config.call_args[1]['binds']),
            set([
                '/host/path:/data1:rw',
                '/host/path:/data2:rw',
            ]),
        )

    def test_get_container_create_options_with_different_host_path_in_container_json(self):
        service = Service(
            'web',
            image='busybox',
            volumes=[VolumeSpec.parse('/host/path:/data')],
            client=self.mock_client,
        )
        volume_name = 'abcdefff1234'

        self.mock_client.inspect_image.return_value = {
            'Id': 'ababab',
            'ContainerConfig': {
                'Volumes': {
                    '/data': {},
                }
            }
        }

        self.mock_client.inspect_container.return_value = {
            'Id': '123123123',
            'Image': 'ababab',
            'Mounts': [
                {
                    'Destination': '/data',
                    'Source': '/mnt/sda1/host/path',
                    'Mode': '',
                    'RW': True,
                    'Driver': 'local',
                    'Name': volume_name,
                },
            ]
        }

        service._get_container_create_options(
            override_options={},
            number=1,
            previous_container=Container(self.mock_client, {'Id': '123123123'}),
        )

        assert (
            self.mock_client.create_host_config.call_args[1]['binds'] ==
            ['{}:/data:rw'.format(volume_name)]
        )

    def test_warn_on_masked_volume_no_warning_when_no_container_volumes(self):
        volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
        container_volumes = []
        service = 'service_name'

        with mock.patch('compose.service.log', autospec=True) as mock_log:
            warn_on_masked_volume(volumes_option, container_volumes, service)

        assert not mock_log.warn.called
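    # Note (added for clarity): "masking" below means a bind mount from the
    # service options hides data that lived at the same destination in the
    # previous container; a warning is expected only when the host sources
    # differ.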
    def test_warn_on_masked_volume_when_masked(self):
        volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
        container_volumes = [
            VolumeSpec('/var/lib/docker/path', '/path', 'rw'),
            VolumeSpec('/var/lib/docker/path', '/other', 'rw'),
        ]
        service = 'service_name'

        with mock.patch('compose.service.log', autospec=True) as mock_log:
            warn_on_masked_volume(volumes_option, container_volumes, service)

        mock_log.warn.assert_called_once_with(mock.ANY)

    def test_warn_on_masked_no_warning_with_same_path(self):
        volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
        container_volumes = [VolumeSpec('/home/user', '/path', 'rw')]
        service = 'service_name'

        with mock.patch('compose.service.log', autospec=True) as mock_log:
            warn_on_masked_volume(volumes_option, container_volumes, service)

        assert not mock_log.warn.called

    def test_warn_on_masked_no_warning_with_container_only_option(self):
        volumes_option = [VolumeSpec(None, '/path', 'rw')]
        container_volumes = [
            VolumeSpec('/var/lib/docker/volume/path', '/path', 'rw')
        ]
        service = 'service_name'

        with mock.patch('compose.service.log', autospec=True) as mock_log:
            warn_on_masked_volume(volumes_option, container_volumes, service)

        assert not mock_log.warn.called

    def test_create_with_special_volume_mode(self):
        self.mock_client.inspect_image.return_value = {'Id': 'imageid'}
        self.mock_client.create_container.return_value = {'Id': 'containerid'}

        volume = '/tmp:/foo:z'
        Service(
            'web',
            client=self.mock_client,
            image='busybox',
            volumes=[VolumeSpec.parse(volume)],
        ).create_container()

        assert self.mock_client.create_container.call_count == 1
        self.assertEqual(
            self.mock_client.create_host_config.call_args[1]['binds'],
            [volume])
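
# Quick reference (added; not part of the original suite): the binding
# strings asserted throughout this class follow docker-py's
# 'host:container:mode' form, e.g.
#
#     build_volume_binding(VolumeSpec.parse('/outside:/inside'))
#     # -> ('/inside', '/outside:/inside:rw')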
compose-1.8.0/tests/unit/split_buffer_test.py000066400000000000000000000026561274620702700214230ustar00rootroot00000000000000
from __future__ import absolute_import
from __future__ import unicode_literals

from .. import unittest
from compose.utils import split_buffer


class SplitBufferTest(unittest.TestCase):

    def test_single_line_chunks(self):
        def reader():
            yield b'abc\n'
            yield b'def\n'
            yield b'ghi\n'

        self.assert_produces(reader, ['abc\n', 'def\n', 'ghi\n'])

    def test_no_end_separator(self):
        def reader():
            yield b'abc\n'
            yield b'def\n'
            yield b'ghi'

        self.assert_produces(reader, ['abc\n', 'def\n', 'ghi'])

    def test_multiple_line_chunk(self):
        def reader():
            yield b'abc\ndef\nghi'

        self.assert_produces(reader, ['abc\n', 'def\n', 'ghi'])

    def test_chunked_line(self):
        def reader():
            yield b'a'
            yield b'b'
            yield b'c'
            yield b'\n'
            yield b'd'

        self.assert_produces(reader, ['abc\n', 'd'])

    def test_preserves_unicode_sequences_within_lines(self):
        string = u"a\u2022c\n"

        def reader():
            yield string.encode('utf-8')

        self.assert_produces(reader, [string])

    def assert_produces(self, reader, expectations):
        split = split_buffer(reader())

        for (actual, expected) in zip(split, expectations):
            self.assertEqual(type(actual), type(expected))
            self.assertEqual(actual, expected)
compose-1.8.0/tests/unit/utils_test.py000066400000000000000000000022331274620702700200660ustar00rootroot00000000000000
# encoding: utf-8
from __future__ import absolute_import
from __future__ import unicode_literals

from compose import utils


class TestJsonSplitter(object):

    def test_json_splitter_no_object(self):
        data = '{"foo": "bar'
        assert utils.json_splitter(data) is None

    def test_json_splitter_with_object(self):
        data = '{"foo": "bar"}\n \n{"next": "obj"}'
        assert utils.json_splitter(data) == ({'foo': 'bar'}, '{"next": "obj"}')


class TestStreamAsText(object):

    def test_stream_with_non_utf_unicode_character(self):
        stream = [b'\xed\xf3\xf3']
        output, = utils.stream_as_text(stream)
        assert output == '���'

    def test_stream_with_utf_character(self):
        stream = ['ěĝ'.encode('utf-8')]
        output, = utils.stream_as_text(stream)
        assert output == 'ěĝ'


class TestJsonStream(object):

    def test_with_falsy_entries(self):
        stream = [
            '{"one": "two"}\n{}\n',
            "[1, 2, 3]\n[]\n",
        ]
        output = list(utils.json_stream(stream))
        assert output == [
            {'one': 'two'},
            {},
            [1, 2, 3],
            [],
        ]
compose-1.8.0/tests/unit/volume_test.py000066400000000000000000000012611274620702700202350ustar00rootroot00000000000000
from __future__ import absolute_import
from __future__ import unicode_literals

import docker
import pytest

from compose import volume
from tests import mock


@pytest.fixture
def mock_client():
    return mock.create_autospec(docker.Client)


class TestVolume(object):

    def test_remove_local_volume(self, mock_client):
        vol = volume.Volume(mock_client, 'foo', 'project')
        vol.remove()
        mock_client.remove_volume.assert_called_once_with('foo_project')

    def test_remove_external_volume(self, mock_client):
        vol = volume.Volume(mock_client, 'foo', 'project', external_name='data')
        vol.remove()
        assert not mock_client.remove_volume.called
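
# Observation (added for clarity): the qualified volume name joins the two
# name arguments with an underscore ('foo_project' above), and external
# volumes are never removed -- compose does not own their lifecycle.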
compose-1.8.0/tox.ini000066400000000000000000000014661274620702700145160ustar00rootroot00000000000000
[tox]
envlist = py27,py34,pre-commit

[testenv]
usedevelop=True
passenv =
    LD_LIBRARY_PATH
    DOCKER_HOST
    DOCKER_CERT_PATH
    DOCKER_TLS_VERIFY
    DOCKER_VERSION
setenv =
    HOME=/tmp
deps =
    -rrequirements.txt
    -rrequirements-dev.txt
commands =
    py.test -v \
        --cov=compose \
        --cov-report html \
        --cov-report term \
        --cov-config=tox.ini \
        {posargs:tests}

[testenv:pre-commit]
skip_install = True
deps =
    pre-commit
commands =
    pre-commit install
    pre-commit run --all-files

# Coverage configuration
[run]
branch = True

[report]
show_missing = true

[html]
directory = coverage-html
# end coverage configuration

[flake8]
max-line-length = 105
# Set this high for now
max-complexity = 11
exclude = compose/packages

[pytest]
addopts = --tb=short -rxs
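
# Usage sketch (added; assumes a working Docker environment for the
# integration tests): run the full matrix with `tox`, or a single
# environment against a subset with e.g. `tox -e py27 -- tests/unit`,
# where everything after `--` is substituted for {posargs:tests}.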