compose-1.5.2/000077500000000000000000000000001263011261000131545ustar00rootroot00000000000000compose-1.5.2/.dockerignore000066400000000000000000000001101263011261000156200ustar00rootroot00000000000000
*.egg-info
.coverage
.git
.tox
build
coverage-html
docs/_site
venv
.tox
compose-1.5.2/.gitignore000066400000000000000000000001521263011261000151420ustar00rootroot00000000000000
*.egg-info
*.pyc
/.coverage
/.tox
/build
/coverage-html
/dist
/docs/_site
/venv
README.rst
compose/GITSHA
compose-1.5.2/.pre-commit-config.yaml000066400000000000000000000011411263011261000174320ustar00rootroot00000000000000
- repo: git://github.com/pre-commit/pre-commit-hooks
  sha: 'v0.4.2'
  hooks:
  - id: check-added-large-files
  - id: check-docstring-first
  - id: check-merge-conflict
  - id: check-yaml
  - id: check-json
  - id: debug-statements
  - id: end-of-file-fixer
  - id: flake8
  - id: name-tests-test
    exclude: 'tests/integration/testcases.py'
  - id: requirements-txt-fixer
  - id: trailing-whitespace
- repo: git://github.com/asottile/reorder_python_imports
  sha: 3d86483455ab5bd06cc1069fdd5ac57be5463f10
  hooks:
  - id: reorder-python-imports
    language_version: 'python2.7'
compose-1.5.2/.travis.yml000066400000000000000000000007441263011261000152700ustar00rootroot00000000000000
sudo: required

language: python

matrix:
  include:
    - os: linux
      services:
        - docker
    - os: osx
      language: generic

install: ./script/travis/install

script:
- ./script/travis/ci
- ./script/travis/build-binary

before_deploy:
- "./script/travis/render-bintray-config.py < ./script/travis/bintray.json.tmpl > ./bintray.json"

deploy:
  provider: bintray
  user: docker-compose-roleuser
  key: '$BINTRAY_API_KEY'
  file: ./bintray.json
  skip_cleanup: true
compose-1.5.2/CHANGELOG.md000066400000000000000000000704651263011261000150010ustar00rootroot00000000000000
Change log
==========

1.5.2 (2015-12-03)
------------------

- Fixed a bug which broke the use of `environment` and `env_file` with `extends`, and caused environment keys without values to have a `None` value, instead of a value from the host environment.
- Fixed a regression in 1.5.1 that caused a warning about volumes to be raised incorrectly when containers were recreated.
- Fixed a bug which prevented building a `Dockerfile` that used `ADD <url>`
- Fixed a bug with `docker-compose restart` which prevented it from starting stopped containers.
- Fixed handling of SIGTERM and SIGINT to properly stop containers
- Add support for using a url as the value of `build`
- Improved the validation of the `expose` option

1.5.1 (2015-11-12)
------------------

- Add the `--force-rm` option to `build`.
- Add the `ulimit` option for services in the Compose file.
- Fixed a bug where `up` would error with "service needs to be built" if a service changed from using `image` to using `build`.
- Fixed a bug that would cause incorrect output of parallel operations on some terminals.
- Fixed a bug that prevented a container from being recreated when the mode of a `volumes_from` was changed.
- Fixed a regression in 1.5.0 where non-utf-8 unicode characters would cause `up` or `logs` to crash.
- Fixed a regression in 1.5.0 where Compose would use a success exit status code when a command fails due to an HTTP timeout communicating with the docker daemon.
- Fixed a regression in 1.5.0 where `name` was being accepted as a valid service option which would override the actual name of the service.
- When using `--x-networking` Compose no longer sets the hostname to the container name.
- When using `--x-networking` Compose will only create the default network if at least one container is using the network.
- When printing logs during `up` or `logs`, flush the output buffer after each line to prevent buffering issues from hiding logs.
- Recreate a container if one of its dependencies is being created. Previously a container was only recreated if its dependencies already existed, but were being recreated as well.
- Add a warning when a `volume` in the Compose file is being ignored and masked by a container volume from a previous container.
- Improve the output of `pull` when run without a tty.
- When using multiple Compose files, validate each before attempting to merge them together. Previously invalid files would result in unhelpful errors.
- Allow dashes in keys in the `environment` service option.
- Improve validation error messages by including the filename as part of the error message.

1.5.0 (2015-11-03)
------------------

**Breaking changes:**

With the introduction of variable substitution support in the Compose file, any Compose file that uses an environment variable (`$VAR` or `${VAR}`) in the `command:` or `entrypoint:` field will break.

Previously these values were interpolated inside the container, with a value from the container environment. In Compose 1.5.0, the values will be interpolated on the host, with a value from the host environment.

To migrate a Compose file to 1.5.0, escape the variables with an extra `$` (ex: `$$VAR` or `$${VAR}`); a short escaping example appears later in this entry. See https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution

Major features:

- Compose is now available for Windows.
- Environment variables can be used in the Compose file. See https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution
- Multiple compose files can be specified, allowing you to override settings in the default Compose file. See https://github.com/docker/compose/blob/8cc8e61/docs/reference/docker-compose.md for more details.
- Compose now produces better error messages when a file contains invalid configuration.
- `up` now waits for all services to exit before shutting down, rather than shutting down as soon as one container exits.
- Experimental support for the new docker networking system can be enabled with the `--x-networking` flag. Read more here: https://github.com/docker/docker/blob/8fee1c20/docs/userguide/dockernetworks.md

New features:

- You can now optionally pass a mode to `volumes_from`, e.g. `volumes_from: ["servicename:ro"]`.
- Since Docker now lets you create volumes with names, you can refer to those volumes by name in `docker-compose.yml`. For example, `volumes: ["mydatavolume:/data"]` will mount the volume named `mydatavolume` at the path `/data` inside the container. If the first component of an entry in `volumes` starts with a `.`, `/` or `~`, it is treated as a path and expansion of relative paths is performed as necessary. Otherwise, it is treated as a volume name and passed straight through to Docker. Read more on named volumes and volume drivers here: https://github.com/docker/docker/blob/244d9c33/docs/userguide/dockervolumes.md
- `docker-compose build --pull` instructs Compose to pull the base image for each Dockerfile before building.
- `docker-compose pull --ignore-pull-failures` instructs Compose to continue if it fails to pull a single service's image, rather than aborting.
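For illustration of the escaping rule in the breaking-changes note above, here is a minimal sketch using a made-up service, where the variable is meant to be resolved inside the container. In a pre-1.5.0 file:

    web:
      image: busybox
      command: echo $HOME

Under 1.5.0, double the `$` to keep that behaviour, so the value is not interpolated from the host environment:

    web:
      image: busybox
      command: echo $$HOME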
- You can now specify an IPC namespace in `docker-compose.yml` with the `ipc` option. - Containers created by `docker-compose run` can now be named with the `--name` flag. - If you install Compose with pip or use it as a library, it now works with Python 3. - `image` now supports image digests (in addition to ids and tags), e.g. `image: "busybox@sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d"` - `ports` now supports ranges of ports, e.g. ports: - "3000-3005" - "9000-9001:8000-8001" - `docker-compose run` now supports a `-p|--publish` parameter, much like `docker run -p`, for publishing specific ports to the host. - `docker-compose pause` and `docker-compose unpause` have been implemented, analogous to `docker pause` and `docker unpause`. - When using `extends` to copy configuration from another service in the same Compose file, you can omit the `file` option. - Compose can be installed and run as a Docker image. This is an experimental feature. Bug fixes: - All values for the `log_driver` option which are supported by the Docker daemon are now supported by Compose. - `docker-compose build` can now be run successfully against a Swarm cluster. 1.4.2 (2015-09-22) ------------------ - Fixed a regression in the 1.4.1 release that would cause `docker-compose up` without the `-d` option to exit immediately. 1.4.1 (2015-09-10) ------------------ The following bugs have been fixed: - Some configuration changes (notably changes to `links`, `volumes_from`, and `net`) were not properly triggering a container recreate as part of `docker-compose up`. - `docker-compose up ` was showing logs for all services instead of just the specified services. - Containers with custom container names were showing up in logs as `service_number` instead of their custom container name. - When scaling a service sometimes containers would be recreated even when the configuration had not changed. 1.4.0 (2015-08-04) ------------------ - By default, `docker-compose up` now only recreates containers for services whose configuration has changed since they were created. This should result in a dramatic speed-up for many applications. The experimental `--x-smart-recreate` flag which introduced this feature in Compose 1.3.0 has been removed, and a `--force-recreate` flag has been added for when you want to recreate everything. - Several of Compose's commands - `scale`, `stop`, `kill` and `rm` - now perform actions on multiple containers in parallel, rather than in sequence, which will run much faster on larger applications. - You can now specify a custom name for a service's container with `container_name`. Because Docker container names must be unique, this means you can't scale the service beyond one container. - You no longer have to specify a `file` option when using `extends` - it will default to the current file. - Service names can now contain dots, dashes and underscores. - Compose can now read YAML configuration from standard input, rather than from a file, by specifying `-` as the filename. This makes it easier to generate configuration dynamically: $ echo 'redis: {"image": "redis"}' | docker-compose --file - up - There's a new `docker-compose version` command which prints extended information about Compose's bundled dependencies. - `docker-compose.yml` now supports `log_opt` as well as `log_driver`, allowing you to pass extra configuration to a service's logging driver. - `docker-compose.yml` now supports `memswap_limit`, similar to `docker run --memory-swap`. 
- When mounting volumes with the `volumes` option, you can now pass in any mode supported by the daemon, not just `:ro` or `:rw`. For example, SELinux users can pass `:z` or `:Z`. - You can now specify a custom volume driver with the `volume_driver` option in `docker-compose.yml`, much like `docker run --volume-driver`. - A bug has been fixed where Compose would fail to pull images from private registries serving plain (unsecured) HTTP. The `--allow-insecure-ssl` flag, which was previously used to work around this issue, has been deprecated and now has no effect. - A bug has been fixed where `docker-compose build` would fail if the build depended on a private Hub image or an image from a private registry. - A bug has been fixed where Compose would crash if there were containers which the Docker daemon had not finished removing. - Two bugs have been fixed where Compose would sometimes fail with a "Duplicate bind mount" error, or fail to attach volumes to a container, if there was a volume path specified in `docker-compose.yml` with a trailing slash. Thanks @mnowster, @dnephin, @ekristen, @funkyfuture, @jeffk and @lukemarsden! 1.3.3 (2015-07-15) ------------------ Two regressions have been fixed: - When stopping containers gracefully, Compose was setting the timeout to 0, effectively forcing a SIGKILL every time. - Compose would sometimes crash depending on the formatting of container data returned from the Docker API. 1.3.2 (2015-07-14) ------------------ The following bugs have been fixed: - When there were one-off containers created by running `docker-compose run` on an older version of Compose, `docker-compose run` would fail with a name collision. Compose now shows an error if you have leftover containers of this type lying around, and tells you how to remove them. - Compose was not reading Docker authentication config files created in the new location, `~/docker/config.json`, and authentication against private registries would therefore fail. - When a container had a pseudo-TTY attached, its output in `docker-compose up` would be truncated. - `docker-compose up --x-smart-recreate` would sometimes fail when an image tag was updated. - `docker-compose up` would sometimes create two containers with the same numeric suffix. - `docker-compose rm` and `docker-compose ps` would sometimes list services that aren't part of the current project (though no containers were erroneously removed). - Some `docker-compose` commands would not show an error if invalid service names were passed in. Thanks @dano, @josephpage, @kevinsimper, @lieryan, @phemmer, @soulrebel and @sschepens! 1.3.1 (2015-06-21) ------------------ The following bugs have been fixed: - `docker-compose build` would always attempt to pull the base image before building. - `docker-compose help migrate-to-labels` failed with an error. - If no network mode was specified, Compose would set it to "bridge", rather than allowing the Docker daemon to use its configured default network mode. 1.3.0 (2015-06-18) ------------------ Firstly, two important notes: - **This release contains breaking changes, and you will need to either remove or migrate your existing containers before running your app** - see the [upgrading section of the install docs](https://github.com/docker/compose/blob/1.3.0rc1/docs/install.md#upgrading) for details. - Compose now requires Docker 1.6.0 or later. 
We've done a lot of work in this release to remove hacks and make Compose more stable: - Compose now uses container labels, rather than names, to keep track of containers. This makes Compose both faster and easier to integrate with your own tools. - Compose no longer uses "intermediate containers" when recreating containers for a service. This makes `docker-compose up` less complex and more resilient to failure. There are some new features: - `docker-compose up` has an **experimental** new behaviour: it will only recreate containers for services whose configuration has changed in `docker-compose.yml`. This will eventually become the default, but for now you can take it for a spin: $ docker-compose up --x-smart-recreate - When invoked in a subdirectory of a project, `docker-compose` will now climb up through parent directories until it finds a `docker-compose.yml`. Several new configuration keys have been added to `docker-compose.yml`: - `dockerfile`, like `docker build --file`, lets you specify an alternate Dockerfile to use with `build`. - `labels`, like `docker run --labels`, lets you add custom metadata to containers. - `extra_hosts`, like `docker run --add-host`, lets you add entries to a container's `/etc/hosts` file. - `pid: host`, like `docker run --pid=host`, lets you reuse the same PID namespace as the host machine. - `cpuset`, like `docker run --cpuset-cpus`, lets you specify which CPUs to allow execution in. - `read_only`, like `docker run --read-only`, lets you mount a container's filesystem as read-only. - `security_opt`, like `docker run --security-opt`, lets you specify [security options](https://docs.docker.com/reference/run/#security-configuration). - `log_driver`, like `docker run --log-driver`, lets you specify a [log driver](https://docs.docker.com/reference/run/#logging-drivers-log-driver). Many bugs have been fixed, including the following: - The output of `docker-compose run` was sometimes truncated, especially when running under Jenkins. - A service's volumes would sometimes not update after volume configuration was changed in `docker-compose.yml`. - Authenticating against third-party registries would sometimes fail. - `docker-compose run --rm` would fail to remove the container if the service had a `restart` policy in place. - `docker-compose scale` would refuse to scale a service beyond 1 container if it exposed a specific port number on the host. - Compose would refuse to create multiple volume entries with the same host path. Thanks @ahromis, @albers, @aleksandr-vin, @antoineco, @ccverak, @chernjie, @dnephin, @edmorley, @fordhurley, @josephpage, @KyleJamesWalker, @lsowen, @mchasal, @noironetworks, @sdake, @sdurrheimer, @sherter, @stephenlawrence, @thaJeztah, @thieman, @turtlemonvh, @twhiteman, @vdemeester, @xuxinkun and @zwily! 1.2.0 (2015-04-16) ------------------ - `docker-compose.yml` now supports an `extends` option, which enables a service to inherit configuration from another service in another configuration file. This is really good for sharing common configuration between apps, or for configuring the same app for different environments. Here's the [documentation](https://github.com/docker/compose/blob/master/docs/yml.md#extends). - When using Compose with a Swarm cluster, containers that depend on one another will be co-scheduled on the same node. This means that most Compose apps will now work out of the box, as long as they don't use `build`. - Repeated invocations of `docker-compose up` when using Compose with a Swarm cluster now work reliably. 
- Directories passed to `build`, filenames passed to `env_file` and volume host paths passed to `volumes` are now treated as relative to the *directory of the configuration file*, not the directory that `docker-compose` is being run in. In the majority of cases, those are the same, but if you use the `-f|--file` argument to specify a configuration file in another directory, **this is a breaking change**. - A service can now share another service's network namespace with `net: container:`. - `volumes_from` and `net: container:` entries are taken into account when resolving dependencies, so `docker-compose up ` will correctly start all dependencies of ``. - `docker-compose run` now accepts a `--user` argument to specify a user to run the command as, just like `docker run`. - The `up`, `stop` and `restart` commands now accept a `--timeout` (or `-t`) argument to specify how long to wait when attempting to gracefully stop containers, just like `docker stop`. - `docker-compose rm` now accepts `-f` as a shorthand for `--force`, just like `docker rm`. Thanks, @abesto, @albers, @alunduil, @dnephin, @funkyfuture, @gilclark, @IanVS, @KingsleyKelly, @knutwalker, @thaJeztah and @vmalloc! 1.1.0 (2015-02-25) ------------------ Fig has been renamed to Docker Compose, or just Compose for short. This has several implications for you: - The command you type is now `docker-compose`, not `fig`. - You should rename your fig.yml to docker-compose.yml. - If you’re installing via PyPi, the package is now `docker-compose`, so install it with `pip install docker-compose`. Besides that, there’s a lot of new stuff in this release: - We’ve made a few small changes to ensure that Compose will work with Swarm, Docker’s new clustering tool (https://github.com/docker/swarm). Eventually you'll be able to point Compose at a Swarm cluster instead of a standalone Docker host and it’ll run your containers on the cluster with no extra work from you. As Swarm is still developing, integration is rough and lots of Compose features don't work yet. - `docker-compose run` now has a `--service-ports` flag for exposing ports on the given service. This is useful for e.g. running your webapp with an interactive debugger. - You can now link to containers outside your app with the `external_links` option in docker-compose.yml. - You can now prevent `docker-compose up` from automatically building images with the `--no-build` option. This will make fewer API calls and run faster. - If you don’t specify a tag when using the `image` key, Compose will default to the `latest` tag, rather than pulling all tags. - `docker-compose kill` now supports the `-s` flag, allowing you to specify the exact signal you want to send to a service’s containers. - docker-compose.yml now has an `env_file` key, analogous to `docker run --env-file`, letting you specify multiple environment variables in a separate file. This is great if you have a lot of them, or if you want to keep sensitive information out of version control. - docker-compose.yml now supports the `dns_search`, `cap_add`, `cap_drop`, `cpu_shares` and `restart` options, analogous to `docker run`’s `--dns-search`, `--cap-add`, `--cap-drop`, `--cpu-shares` and `--restart` options. 
- Compose now ships with Bash tab completion - see the installation and usage docs at https://github.com/docker/compose/blob/1.1.0/docs/completion.md - A number of bugs have been fixed - see the milestone for details: https://github.com/docker/compose/issues?q=milestone%3A1.1.0+ Thanks @dnephin, @squebe, @jbalonso, @raulcd, @benlangfield, @albers, @ggtools, @bersace, @dtenenba, @petercv, @drewkett, @TFenby, @paulRbr, @Aigeruth and @salehe! 1.0.1 (2014-11-04) ------------------ - Added an `--allow-insecure-ssl` option to allow `fig up`, `fig run` and `fig pull` to pull from insecure registries. - Fixed `fig run` not showing output in Jenkins. - Fixed a bug where Fig couldn't build Dockerfiles with ADD statements pointing at URLs. 1.0.0 (2014-10-16) ------------------ The highlights: - [Fig has joined Docker.](https://www.orchardup.com/blog/orchard-is-joining-docker) Fig will continue to be maintained, but we'll also be incorporating the best bits of Fig into Docker itself. This means the GitHub repository has moved to [https://github.com/docker/fig](https://github.com/docker/fig) and our IRC channel is now #docker-fig on Freenode. - Fig can be used with the [official Docker OS X installer](https://docs.docker.com/installation/mac/). Boot2Docker will mount the home directory from your host machine so volumes work as expected. - Fig supports Docker 1.3. - It is now possible to connect to the Docker daemon using TLS by using the `DOCKER_CERT_PATH` and `DOCKER_TLS_VERIFY` environment variables. - There is a new `fig port` command which outputs the host port binding of a service, in a similar way to `docker port`. - There is a new `fig pull` command which pulls the latest images for a service. - There is a new `fig restart` command which restarts a service's containers. - Fig creates multiple containers in service by appending a number to the service name (e.g. `db_1`, `db_2`, etc). As a convenience, Fig will now give the first container an alias of the service name (e.g. `db`). This link alias is also a valid hostname and added to `/etc/hosts` so you can connect to linked services using their hostname. For example, instead of resolving the environment variables `DB_PORT_5432_TCP_ADDR` and `DB_PORT_5432_TCP_PORT`, you could just use the hostname `db` and port `5432` directly. - Volume definitions now support `ro` mode, expanding `~` and expanding environment variables. - `.dockerignore` is supported when building. - The project name can be set with the `FIG_PROJECT_NAME` environment variable. - The `--env` and `--entrypoint` options have been added to `fig run`. - The Fig binary for Linux is now linked against an older version of glibc so it works on CentOS 6 and Debian Wheezy. Other things: - `fig ps` now works on Jenkins and makes fewer API calls to the Docker daemon. - `--verbose` displays more useful debugging output. - When starting a service where `volumes_from` points to a service without any containers running, that service will now be started. - Lots of docs improvements. Notably, environment variables are documented and official repositories are used throughout. Thanks @dnephin, @d11wtq, @marksteve, @rubbish, @jbalonso, @timfreund, @alunduil, @mieciu, @shuron, @moss, @suzaku and @chmouel! Whew. 0.5.2 (2014-07-28) ------------------ - Added a `--no-cache` option to `fig build`, which bypasses the cache just like `docker build --no-cache`. - Fixed the `dns:` fig.yml option, which was causing fig to error out. - Fixed a bug where fig couldn't start under Python 2.6. 
- Fixed a log-streaming bug that occasionally caused fig to exit. Thanks @dnephin and @marksteve! 0.5.1 (2014-07-11) ------------------ - If a service has a command defined, `fig run [service]` with no further arguments will run it. - The project name now defaults to the directory containing fig.yml, not the current working directory (if they're different) - `volumes_from` now works properly with containers as well as services - Fixed a race condition when recreating containers in `fig up` Thanks @ryanbrainard and @d11wtq! 0.5.0 (2014-07-11) ------------------ - Fig now starts links when you run `fig run` or `fig up`. For example, if you have a `web` service which depends on a `db` service, `fig run web ...` will start the `db` service. - Environment variables can now be resolved from the environment that Fig is running in. Just specify it as a blank variable in your `fig.yml` and, if set, it'll be resolved: ``` environment: RACK_ENV: development SESSION_SECRET: ``` - `volumes_from` is now supported in `fig.yml`. All of the volumes from the specified services and containers will be mounted: ``` volumes_from: - service_name - container_name ``` - A host address can now be specified in `ports`: ``` ports: - "0.0.0.0:8000:8000" - "127.0.0.1:8001:8001" ``` - The `net` and `workdir` options are now supported in `fig.yml`. - The `hostname` option now works in the same way as the Docker CLI, splitting out into a `domainname` option. - TTY behaviour is far more robust, and resizes are supported correctly. - Load YAML files safely. Thanks to @d11wtq, @ryanbrainard, @rail44, @j0hnsmith, @binarin, @Elemecca, @mozz100 and @marksteve for their help with this release! 0.4.2 (2014-06-18) ------------------ - Fix various encoding errors when using `fig run`, `fig up` and `fig build`. 0.4.1 (2014-05-08) ------------------ - Add support for Docker 0.11.0. (Thanks @marksteve!) - Make project name configurable. (Thanks @jefmathiot!) - Return correct exit code from `fig run`. 0.4.0 (2014-04-29) ------------------ - Support Docker 0.9 and 0.10 - Display progress bars correctly when pulling images (no more ski slopes) - `fig up` now stops all services when any container exits - Added support for the `privileged` config option in fig.yml (thanks @kvz!) - Shortened and aligned log prefixes in `fig up` output - Only containers started with `fig run` link back to their own service - Handle UTF-8 correctly when streaming `fig build/run/up` output (thanks @mauvm and @shanejonas!) - Error message improvements 0.3.2 (2014-03-05) ------------------ - Added an `--rm` option to `fig run`. (Thanks @marksteve!) - Added an `expose` option to `fig.yml`. 0.3.1 (2014-03-04) ------------------ - Added contribution instructions. (Thanks @kvz!) - Fixed `fig rm` throwing an error. - Fixed a bug in `fig ps` on Docker 0.8.1 when there is a container with no command. 0.3.0 (2014-03-03) ------------------ - We now ship binaries for OS X and Linux. No more having to install with Pip! - Add `-f` flag to specify alternate `fig.yml` files - Add support for custom link names - Fix a bug where recreating would sometimes hang - Update docker-py to support Docker 0.8.0. - Various documentation improvements - Various error message improvements Thanks @marksteve, @Gazler and @teozkr! 
0.2.2 (2014-02-17)
------------------

- Resolve dependencies using Cormen/Tarjan topological sort
- Fix `fig up` not printing log output
- Stop containers in reverse order to starting
- Fix scale command not binding ports

Thanks to @barnybug and @dustinlacewell for their work on this release.

0.2.1 (2014-02-04)
------------------

- General improvements to error reporting (#77, #79)

0.2.0 (2014-01-31)
------------------

- Link services to themselves so run commands can access the running service. (#67)
- Much better documentation.
- Make service dependency resolution more reliable. (#48)
- Load Fig configurations with a `.yaml` extension. (#58)

Big thanks to @cameronmaske, @mrchrisadams and @damianmoore for their help with this release.

0.1.4 (2014-01-27)
------------------

- Add a link alias without the project name. This makes the environment variables a little shorter: `REDIS_1_PORT_6379_TCP_ADDR`. (#54)

0.1.3 (2014-01-23)
------------------

- Fix ports sometimes being configured incorrectly. (#46)
- Fix log output sometimes not displaying. (#47)

0.1.2 (2014-01-22)
------------------

- Add `-T` option to `fig run` to disable pseudo-TTY. (#34)
- Fix `fig up` requiring the ubuntu image to be pulled to recreate containers. (#33) Thanks @cameronmaske!
- Improve reliability, fix arrow keys and fix a race condition in `fig run`. (#34, #39, #40)

0.1.1 (2014-01-17)
------------------

- Fix bug where ports were not exposed correctly (#29). Thanks @dustinlacewell!

0.1.0 (2014-01-16)
------------------

- Containers are recreated on each `fig up`, ensuring config is up-to-date with `fig.yml` (#2)
- Add `fig scale` command (#9)
- Use `DOCKER_HOST` environment variable to find Docker daemon, for consistency with the official Docker client (was previously `DOCKER_URL`) (#19)
- Truncate long commands in `fig ps` (#18)
- Fill out CLI help banners for commands (#15, #16)
- Show a friendlier error when `fig.yml` is missing (#4)
- Fix bug with `fig build` logging (#3)
- Fix bug where builds would time out if a step took a long time without generating output (#6)
- Fix bug where streaming container output over the Unix socket raised an error (#7)

Big thanks to @tomstuart, @EnTeQuAk, @schickling, @aronasorman and @GeoffreyPlitt.

0.0.2 (2014-01-02)
------------------

- Improve documentation
- Try to connect to Docker on `tcp://localdocker:4243` and a UNIX socket in addition to `localhost`.
- Improve `fig up` behaviour
- Add confirmation prompt to `fig rm`
- Add `fig build` command

0.0.1 (2013-12-20)
------------------

Initial release.
compose-1.5.2/CHANGES.md000077700000000000000000000000001263011261000163522CHANGELOG.mdustar00rootroot00000000000000compose-1.5.2/CONTRIBUTING.md000066400000000000000000000063611263011261000154130ustar00rootroot00000000000000
# Contributing to Compose

Compose is a part of the Docker project, and follows the same rules and principles. Take a read of [Docker's contributing guidelines](https://github.com/docker/docker/blob/master/CONTRIBUTING.md) to get an overview.

## TL;DR

Pull requests will need:

- Tests
- Documentation
- [To be signed off](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work)
- A logical series of [well written commits](https://github.com/alphagov/styleguides/blob/master/git.md)

## Development environment

If you're looking to contribute to Compose but you're new to the project or maybe even to Python, here are the steps that should get you started.

1. Fork [https://github.com/docker/compose](https://github.com/docker/compose) to your username.
2. Clone your forked repository locally `git clone git@github.com:yourusername/compose.git`. 3. You must [configure a remote](https://help.github.com/articles/configuring-a-remote-for-a-fork/) for your fork so that you can [sync changes you make](https://help.github.com/articles/syncing-a-fork/) with the original repository. 4. Enter the local directory `cd compose`. 5. Set up a development environment by running `python setup.py develop`. This will install the dependencies and set up a symlink from your `docker-compose` executable to the checkout of the repository. When you now run `docker-compose` from anywhere on your machine, it will run your development version of Compose. ## Install pre-commit hooks This step is optional, but recommended. Pre-commit hooks will run style checks and in some cases fix style issues for you, when you commit code. Install the git pre-commit hooks using [tox](https://tox.readthedocs.org) by running `tox -e pre-commit` or by following the [pre-commit install guide](http://pre-commit.com/#install). To run the style checks at any time run `tox -e pre-commit`. ## Submitting a pull request See Docker's [basic contribution workflow](https://docs.docker.com/project/make-a-contribution/#the-basic-contribution-workflow) for a guide on how to submit a pull request for code or documentation. ## Running the test suite Use the test script to run linting checks and then the full test suite against different Python interpreters: $ script/test Tests are run against a Docker daemon inside a container, so that we can test against multiple Docker versions. By default they'll run against only the latest Docker version - set the `DOCKER_VERSIONS` environment variable to "all" to run against all supported versions: $ DOCKER_VERSIONS=all script/test Arguments to `script/test` are passed through to the `nosetests` executable, so you can specify a test directory, file, module, class or method: $ script/test tests/unit $ script/test tests/unit/cli_test.py $ script/test tests/unit/config_test.py::ConfigTest $ script/test tests/unit/config_test.py::ConfigTest::test_load ## Finding things to work on We use a [ZenHub board](https://www.zenhub.io/) to keep track of specific things we are working on and planning to work on. If you're looking for things to work on, stuff in the backlog is a great place to start. For more information about our project planning, take a look at our [GitHub wiki](https://github.com/docker/compose/wiki). 
compose-1.5.2/Dockerfile000066400000000000000000000037211263011261000151510ustar00rootroot00000000000000FROM debian:wheezy RUN set -ex; \ apt-get update -qq; \ apt-get install -y \ locales \ gcc \ make \ zlib1g \ zlib1g-dev \ libssl-dev \ git \ ca-certificates \ curl \ libsqlite3-dev \ ; \ rm -rf /var/lib/apt/lists/* RUN curl https://get.docker.com/builds/Linux/x86_64/docker-latest \ -o /usr/local/bin/docker && \ chmod +x /usr/local/bin/docker # Build Python 2.7.9 from source RUN set -ex; \ curl -LO https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz; \ tar -xzf Python-2.7.9.tgz; \ cd Python-2.7.9; \ ./configure --enable-shared; \ make; \ make install; \ cd ..; \ rm -rf /Python-2.7.9; \ rm Python-2.7.9.tgz # Build python 3.4 from source RUN set -ex; \ curl -LO https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tgz; \ tar -xzf Python-3.4.3.tgz; \ cd Python-3.4.3; \ ./configure --enable-shared; \ make; \ make install; \ cd ..; \ rm -rf /Python-3.4.3; \ rm Python-3.4.3.tgz # Make libpython findable ENV LD_LIBRARY_PATH /usr/local/lib # Install setuptools RUN set -ex; \ curl -LO https://bootstrap.pypa.io/ez_setup.py; \ python ez_setup.py; \ rm ez_setup.py # Install pip RUN set -ex; \ curl -LO https://pypi.python.org/packages/source/p/pip/pip-7.0.1.tar.gz; \ tar -xzf pip-7.0.1.tar.gz; \ cd pip-7.0.1; \ python setup.py install; \ cd ..; \ rm -rf pip-7.0.1; \ rm pip-7.0.1.tar.gz # Python3 requires a valid locale RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen ENV LANG en_US.UTF-8 RUN useradd -d /home/user -m -s /bin/bash user WORKDIR /code/ RUN pip install tox==2.1.1 ADD requirements.txt /code/ ADD requirements-dev.txt /code/ ADD .pre-commit-config.yaml /code/ ADD setup.py /code/ ADD tox.ini /code/ ADD compose /code/compose/ RUN tox --notest ADD . /code/ RUN chown -R user /code/ ENTRYPOINT ["/code/.tox/py27/bin/docker-compose"] compose-1.5.2/Dockerfile.run000066400000000000000000000005361263011261000157550ustar00rootroot00000000000000 FROM alpine:edge RUN apk -U add \ python \ py-pip COPY requirements.txt /code/requirements.txt RUN pip install -r /code/requirements.txt ADD dist/docker-compose-release.tar.gz /code/docker-compose RUN pip install --no-deps /code/docker-compose/docker-compose-* ENTRYPOINT ["/usr/bin/docker-compose"] compose-1.5.2/LICENSE000066400000000000000000000250061263011261000141640ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS Copyright 2014 Docker, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. compose-1.5.2/MAINTAINERS000066400000000000000000000002761263011261000146560ustar00rootroot00000000000000Aanand Prasad (@aanand) Ben Firshman (@bfirsh) Daniel Nephin (@dnephin) Mazz Mosley (@mnowster) compose-1.5.2/MANIFEST.in000066400000000000000000000005201263011261000147070ustar00rootroot00000000000000include Dockerfile include LICENSE include requirements.txt include requirements-dev.txt include tox.ini include *.md exclude README.md include README.rst include compose/config/*.json include compose/GITSHA recursive-include contrib/completion * recursive-include tests * global-exclude *.pyc global-exclude *.pyo global-exclude *.un~ compose-1.5.2/README.md000066400000000000000000000045241263011261000144400ustar00rootroot00000000000000Docker Compose ============== ![Docker Compose](logo.png?raw=true "Docker Compose Logo") Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application's services. Then, using a single command, you create and start all the services from your configuration. To learn more about all the features of Compose see [the list of features](docs/index.md#features). Compose is great for development, testing, and staging environments, as well as CI workflows. You can learn more about each case in [Common Use Cases](docs/index.md#common-use-cases). Using Compose is basically a three-step process. 1. Define your app's environment with a `Dockerfile` so it can be reproduced anywhere. 2. Define the services that make up your app in `docker-compose.yml` so they can be run together in an isolated environment: 3. 
Lastly, run `docker-compose up` and Compose will start and run your entire app. A `docker-compose.yml` looks like this: web: build: . ports: - "5000:5000" volumes: - .:/code links: - redis redis: image: redis For more information about the Compose file, see the [Compose file reference](docs/compose-file.md) Compose has commands for managing the whole lifecycle of your application: * Start, stop and rebuild services * View the status of running services * Stream the log output of running services * Run a one-off command on a service Installation and documentation ------------------------------ - Full documentation is available on [Docker's website](http://docs.docker.com/compose/). - If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose) Contributing ------------ [![Build Status](http://jenkins.dockerproject.org/buildStatus/icon?job=Compose%20Master)](http://jenkins.dockerproject.org/job/Compose%20Master/) Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md). Releasing --------- Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md). compose-1.5.2/ROADMAP.md000066400000000000000000000041401263011261000145600ustar00rootroot00000000000000# Roadmap ## More than just development environments Over time we will extend Compose's remit to cover test, staging and production environments. This is not a simple task, and will take many incremental improvements such as: - Compose currently will attempt to get your application into the correct state when running `up`, but it has a number of shortcomings: - It should roll back to a known good state if it fails. - It should allow a user to check the actions it is about to perform before running them. - It should be possible to partially modify the config file for different environments (dev/test/staging/prod), passing in e.g. custom ports or volume mount paths. ([#1377](https://github.com/docker/compose/issues/1377)) - Compose should recommend a technique for zero-downtime deploys. - It should be possible to continuously attempt to keep an application in the correct state, instead of just performing `up` a single time. ## Integration with Swarm Compose should integrate really well with Swarm so you can take an application you've developed on your laptop and run it on a Swarm cluster. The current state of integration is documented in [SWARM.md](SWARM.md). ## Applications spanning multiple teams Compose works well for applications that are in a single repository and depend on services that are hosted on Docker Hub. If your application depends on another application within your organisation, Compose doesn't work as well. There are several ideas about how this could work, such as [including external files](https://github.com/docker/fig/issues/318). ## An even better tool for development environments Compose is a great tool for development environments, but it could be even better. For example: - [Compose could watch your code and automatically kick off builds when something changes.](https://github.com/docker/fig/issues/184) - It should be possible to define hostnames for containers which work from the host machine, e.g. “mywebcontainer.local”. 
This is needed by apps comprising multiple web services which generate links to one another (e.g. a frontend website and a separate admin webapp)
compose-1.5.2/SWARM.md000066400000000000000000000036561263011261000144010ustar00rootroot00000000000000
Docker Compose/Swarm integration
================================

Eventually, Compose and Swarm aim to have full integration, meaning you can point a Compose app at a Swarm cluster and have it all just work as if you were using a single Docker host.

However, integration is currently incomplete: Compose can create containers on a Swarm cluster, but the majority of Compose apps won’t work out of the box unless all containers are scheduled on one host, because links between containers do not work across hosts.

Docker networking is [getting overhauled](https://github.com/docker/libnetwork) in such a way that it’ll fit the multi-host model much better. For now, linked containers are automatically scheduled on the same host.

Building
--------

Swarm can build an image from a Dockerfile just like a single-host Docker instance can, but the resulting image will only live on a single node and won't be distributed to other nodes.

If you want to use Compose to scale the service in question to multiple nodes, you'll have to build it yourself, push it to a registry (e.g. the Docker Hub) and reference it from `docker-compose.yml`:

    $ docker build -t myusername/web .
    $ docker push myusername/web

    $ cat docker-compose.yml
    web:
      image: myusername/web

    $ docker-compose up -d
    $ docker-compose scale web=3

Scheduling
----------

Swarm offers a rich set of scheduling and affinity hints, enabling you to control where containers are located. They are specified via container environment variables, so you can use Compose's `environment` option to set them.

    environment:
      # Schedule containers on a node that has the 'storage' label set to 'ssd'
      - "constraint:storage==ssd"

      # Schedule containers where the 'redis' image is already pulled
      - "affinity:image==redis"

For the full set of available filters and expressions, see the [Swarm documentation](https://docs.docker.com/swarm/scheduler/filter/).
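To show where these scheduling hints sit in a complete Compose file, here is a minimal sketch; the service name and the `storage` label value are made up, and `myusername/web` is the image from the build example above:

    web:
      image: myusername/web
      environment:
        # Only schedule this service's containers on nodes labelled storage=ssd
        - "constraint:storage==ssd"

Since the hints are passed as ordinary environment variables, note that they will also be visible inside the container's environment, not just to the Swarm scheduler.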
compose-1.5.2/appveyor.yml000066400000000000000000000014161263011261000155460ustar00rootroot00000000000000 version: '{branch}-{build}' install: - "SET PATH=C:\\Python27-x64;C:\\Python27-x64\\Scripts;%PATH%" - "python --version" - "pip install tox==2.1.1 virtualenv==13.1.2" # Build the binary after tests build: false environment: BINTRAY_USER: "docker-compose-roleuser" BINTRAY_PATH: "docker-compose/master/windows/master/docker-compose-Windows-x86_64.exe" test_script: - "tox -e py27,py34 -- tests/unit" - ps: ".\\script\\build-windows.ps1" deploy_script: - "curl -sS -u \"%BINTRAY_USER%:%BINTRAY_API_KEY%\" -X PUT \"https://api.bintray.com/content/%BINTRAY_PATH%?override=1&publish=1\" --data-binary @dist\\docker-compose-Windows-x86_64.exe" artifacts: - path: .\dist\docker-compose-Windows-x86_64.exe name: "Compose Windows binary" compose-1.5.2/bin/000077500000000000000000000000001263011261000137245ustar00rootroot00000000000000compose-1.5.2/bin/docker-compose000077500000000000000000000000771263011261000165700ustar00rootroot00000000000000#!/usr/bin/env python from compose.cli.main import main main() compose-1.5.2/compose/000077500000000000000000000000001263011261000146215ustar00rootroot00000000000000compose-1.5.2/compose/__init__.py000066400000000000000000000000771263011261000167360ustar00rootroot00000000000000from __future__ import unicode_literals __version__ = '1.5.2' compose-1.5.2/compose/cli/000077500000000000000000000000001263011261000153705ustar00rootroot00000000000000compose-1.5.2/compose/cli/__init__.py000066400000000000000000000000001263011261000174670ustar00rootroot00000000000000compose-1.5.2/compose/cli/colors.py000066400000000000000000000014651263011261000172510ustar00rootroot00000000000000from __future__ import unicode_literals NAMES = [ 'grey', 'red', 'green', 'yellow', 'blue', 'magenta', 'cyan', 'white' ] def get_pairs(): for i, name in enumerate(NAMES): yield(name, str(30 + i)) yield('intense_' + name, str(30 + i) + ';1') def ansi(code): return '\033[{0}m'.format(code) def ansi_color(code, s): return '{0}{1}{2}'.format(ansi(code), s, ansi(0)) def make_color_fn(code): return lambda s: ansi_color(code, s) for (name, code) in get_pairs(): globals()[name] = make_color_fn(code) def rainbow(): cs = ['cyan', 'yellow', 'green', 'magenta', 'red', 'blue', 'intense_cyan', 'intense_yellow', 'intense_green', 'intense_magenta', 'intense_red', 'intense_blue'] for c in cs: yield globals()[c] compose-1.5.2/compose/cli/command.py000066400000000000000000000067621263011261000173730ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import contextlib import logging import os import re import six from requests.exceptions import ConnectionError from requests.exceptions import SSLError from . import errors from . import verbose_proxy from .. 
import config from ..project import Project from .docker_client import docker_client from .utils import call_silently from .utils import get_version_info from .utils import is_mac from .utils import is_ubuntu log = logging.getLogger(__name__) @contextlib.contextmanager def friendly_error_message(): try: yield except SSLError as e: raise errors.UserError('SSL error: %s' % e) except ConnectionError: if call_silently(['which', 'docker']) != 0: if is_mac(): raise errors.DockerNotFoundMac() elif is_ubuntu(): raise errors.DockerNotFoundUbuntu() else: raise errors.DockerNotFoundGeneric() elif call_silently(['which', 'docker-machine']) == 0: raise errors.ConnectionErrorDockerMachine() else: raise errors.ConnectionErrorGeneric(get_client().base_url) def project_from_options(base_dir, options): return get_project( base_dir, get_config_path(options.get('--file')), project_name=options.get('--project-name'), verbose=options.get('--verbose'), use_networking=options.get('--x-networking'), network_driver=options.get('--x-network-driver'), ) def get_config_path(file_option): if file_option: return file_option if 'FIG_FILE' in os.environ: log.warn('The FIG_FILE environment variable is deprecated.') log.warn('Please use COMPOSE_FILE instead.') config_file = os.environ.get('COMPOSE_FILE') or os.environ.get('FIG_FILE') return [config_file] if config_file else None def get_client(verbose=False, version=None): client = docker_client(version=version) if verbose: version_info = six.iteritems(client.version()) log.info(get_version_info('full')) log.info("Docker base_url: %s", client.base_url) log.info("Docker version: %s", ", ".join("%s=%s" % item for item in version_info)) return verbose_proxy.VerboseProxy('docker', client) return client def get_project(base_dir, config_path=None, project_name=None, verbose=False, use_networking=False, network_driver=None): config_details = config.find(base_dir, config_path) api_version = '1.21' if use_networking else None return Project.from_dicts( get_project_name(config_details.working_dir, project_name), config.load(config_details), get_client(verbose=verbose, version=api_version), use_networking=use_networking, network_driver=network_driver) def get_project_name(working_dir, project_name=None): def normalize_name(name): return re.sub(r'[^a-z0-9]', '', name.lower()) if 'FIG_PROJECT_NAME' in os.environ: log.warn('The FIG_PROJECT_NAME environment variable is deprecated.') log.warn('Please use COMPOSE_PROJECT_NAME instead.') project_name = ( project_name or os.environ.get('COMPOSE_PROJECT_NAME') or os.environ.get('FIG_PROJECT_NAME')) if project_name is not None: return normalize_name(project_name) project = os.path.basename(os.path.abspath(working_dir)) if project: return normalize_name(project) return 'default' compose-1.5.2/compose/cli/docker_client.py000066400000000000000000000014071263011261000205510ustar00rootroot00000000000000import logging import os from docker import Client from docker.utils import kwargs_from_env from ..const import HTTP_TIMEOUT log = logging.getLogger(__name__) DEFAULT_API_VERSION = '1.19' def docker_client(version=None): """ Returns a docker-py client configured using environment variables according to the same logic as the official Docker client. """ if 'DOCKER_CLIENT_TIMEOUT' in os.environ: log.warn('The DOCKER_CLIENT_TIMEOUT environment variable is deprecated. 
==> compose-1.5.2/compose/cli/docopt_command.py <==

from __future__ import absolute_import
from __future__ import unicode_literals

import sys
from inspect import getdoc

from docopt import docopt
from docopt import DocoptExit


def docopt_full_help(docstring, *args, **kwargs):
    try:
        return docopt(docstring, *args, **kwargs)
    except DocoptExit:
        raise SystemExit(docstring)


class DocoptCommand(object):
    def docopt_options(self):
        return {'options_first': True}

    def sys_dispatch(self):
        self.dispatch(sys.argv[1:], None)

    def dispatch(self, argv, global_options):
        self.perform_command(*self.parse(argv, global_options))

    def parse(self, argv, global_options):
        options = docopt_full_help(getdoc(self), argv, **self.docopt_options())
        command = options['COMMAND']

        if command is None:
            raise SystemExit(getdoc(self))

        handler = self.get_handler(command)
        docstring = getdoc(handler)

        if docstring is None:
            raise NoSuchCommand(command, self)

        command_options = docopt_full_help(docstring, options['ARGS'], options_first=True)
        return options, handler, command_options

    def get_handler(self, command):
        command = command.replace('-', '_')

        if not hasattr(self, command):
            raise NoSuchCommand(command, self)

        return getattr(self, command)


class NoSuchCommand(Exception):
    def __init__(self, command, supercommand):
        super(NoSuchCommand, self).__init__("No such command: %s" % command)

        self.command = command
        self.supercommand = supercommand

==> compose-1.5.2/compose/cli/errors.py <==

from __future__ import absolute_import

from textwrap import dedent


class UserError(Exception):
    def __init__(self, msg):
        self.msg = dedent(msg).strip()

    def __unicode__(self):
        return self.msg

    __str__ = __unicode__


class DockerNotFoundMac(UserError):
    def __init__(self):
        super(DockerNotFoundMac, self).__init__("""
            Couldn't connect to Docker daemon. You might need to install docker-osx:

            https://github.com/noplay/docker-osx
        """)


class DockerNotFoundUbuntu(UserError):
    def __init__(self):
        super(DockerNotFoundUbuntu, self).__init__("""
            Couldn't connect to Docker daemon. You might need to install Docker:

            http://docs.docker.io/en/latest/installation/ubuntulinux/
        """)


class DockerNotFoundGeneric(UserError):
    def __init__(self):
        super(DockerNotFoundGeneric, self).__init__("""
            Couldn't connect to Docker daemon. You might need to install Docker:

            http://docs.docker.io/en/latest/installation/
        """)


class ConnectionErrorDockerMachine(UserError):
    def __init__(self):
        super(ConnectionErrorDockerMachine, self).__init__(
            "Couldn't connect to Docker daemon - you might need to run "
            "`docker-machine start default`."
        )


class ConnectionErrorGeneric(UserError):
    def __init__(self, url):
        super(ConnectionErrorGeneric, self).__init__("""
            Couldn't connect to Docker daemon at %s - is it running?

            If it's at a non-standard location, specify the URL with the
            DOCKER_HOST environment variable.
        """ % url)
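# Standalone sketch of how DocoptCommand (above) drives subcommand dispatch:
# a subclass provides a docopt usage string as its docstring plus one method
# per command. GreetCommand and its hello handler are made-up names.
from compose.cli.docopt_command import DocoptCommand


class GreetCommand(DocoptCommand):
    """Toy multi-command CLI.

    Usage:
      greet [COMMAND] [ARGS...]
    """

    def perform_command(self, options, handler, command_options):
        handler(command_options)

    def hello(self, command_options):
        """Say hello.

        Usage: hello NAME
        """
        print('Hello, %s!' % command_options['NAME'])


GreetCommand().dispatch(['hello', 'world'], None)   # -> Hello, world!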
""" % url) compose-1.5.2/compose/cli/formatter.py000066400000000000000000000025161263011261000177510ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging import os import texttable from compose.cli import colors def get_tty_width(): tty_size = os.popen('stty size', 'r').read().split() if len(tty_size) != 2: return 0 _, width = tty_size return int(width) class Formatter(object): """Format tabular data for printing.""" def table(self, headers, rows): table = texttable.Texttable(max_width=get_tty_width()) table.set_cols_dtype(['t' for h in headers]) table.add_rows([headers] + rows) table.set_deco(table.HEADER) table.set_chars(['-', '|', '+', '-']) return table.draw() class ConsoleWarningFormatter(logging.Formatter): """A logging.Formatter which prints WARNING and ERROR messages with a prefix of the log level colored appropriate for the log level. """ def get_level_message(self, record): separator = ': ' if record.levelno == logging.WARNING: return colors.yellow(record.levelname) + separator if record.levelno == logging.ERROR: return colors.red(record.levelname) + separator return '' def format(self, record): message = super(ConsoleWarningFormatter, self).format(record) return self.get_level_message(record) + message compose-1.5.2/compose/cli/log_printer.py000066400000000000000000000055401263011261000202720ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import sys from itertools import cycle from . import colors from .multiplexer import Multiplexer from compose import utils from compose.utils import split_buffer class LogPrinter(object): """Print logs from many containers to a single output stream.""" def __init__(self, containers, output=sys.stdout, monochrome=False): self.containers = containers self.output = utils.get_output_stream(output) self.monochrome = monochrome def run(self): if not self.containers: return prefix_width = max_name_width(self.containers) generators = list(self._make_log_generators(self.monochrome, prefix_width)) for line in Multiplexer(generators).loop(): self.output.write(line) self.output.flush() def _make_log_generators(self, monochrome, prefix_width): def no_color(text): return text if monochrome: color_funcs = cycle([no_color]) else: color_funcs = cycle(colors.rainbow()) for color_func, container in zip(color_funcs, self.containers): generator_func = get_log_generator(container) prefix = color_func(build_log_prefix(container, prefix_width)) yield generator_func(container, prefix, color_func) def build_log_prefix(container, prefix_width): return container.name_without_project.ljust(prefix_width) + ' | ' def max_name_width(containers): """Calculate the maximum width of container names so we can make the log prefixes line up like so: db_1 | Listening web_1 | Listening """ return max(len(container.name_without_project) for container in containers) def get_log_generator(container): if container.has_api_logs: return build_log_generator return build_no_log_generator def build_no_log_generator(container, prefix, color_func): """Return a generator that prints a warning about logs and waits for container to exit. 
""" yield "{} WARNING: no logs are available with the '{}' log driver\n".format( prefix, container.log_driver) yield color_func(wait_on_exit(container)) def build_log_generator(container, prefix, color_func): # if the container doesn't have a log_stream we need to attach to container # before log printer starts running if container.log_stream is None: stream = container.attach(stdout=True, stderr=True, stream=True, logs=True) line_generator = split_buffer(stream) else: line_generator = split_buffer(container.log_stream) for line in line_generator: yield prefix + line yield color_func(wait_on_exit(container)) def wait_on_exit(container): exit_code = container.wait() return "%s exited with code %s\n" % (container.name, exit_code) compose-1.5.2/compose/cli/main.py000066400000000000000000000570701263011261000166770ustar00rootroot00000000000000from __future__ import print_function from __future__ import unicode_literals import logging import re import signal import sys from inspect import getdoc from operator import attrgetter from docker.errors import APIError from requests.exceptions import ReadTimeout from .. import __version__ from .. import legacy from ..config import ConfigurationError from ..config import parse_environment from ..const import DEFAULT_TIMEOUT from ..const import HTTP_TIMEOUT from ..const import IS_WINDOWS_PLATFORM from ..progress_stream import StreamOutputError from ..project import NoSuchService from ..service import BuildError from ..service import ConvergenceStrategy from ..service import NeedsBuildError from .command import friendly_error_message from .command import project_from_options from .docopt_command import DocoptCommand from .docopt_command import NoSuchCommand from .errors import UserError from .formatter import ConsoleWarningFormatter from .formatter import Formatter from .log_printer import LogPrinter from .utils import get_version_info from .utils import yesno if not IS_WINDOWS_PLATFORM: import dockerpty log = logging.getLogger(__name__) console_handler = logging.StreamHandler(sys.stderr) INSECURE_SSL_WARNING = """ --allow-insecure-ssl is deprecated and has no effect. It will be removed in a future version of Compose. """ def main(): setup_logging() try: command = TopLevelCommand() command.sys_dispatch() except KeyboardInterrupt: log.error("\nAborting.") sys.exit(1) except (UserError, NoSuchService, ConfigurationError, legacy.LegacyError) as e: log.error(e.msg) sys.exit(1) except NoSuchCommand as e: commands = "\n".join(parse_doc_section("commands:", getdoc(e.supercommand))) log.error("No such command: %s\n\n%s", e.command, commands) sys.exit(1) except APIError as e: log.error(e.explanation) sys.exit(1) except BuildError as e: log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason)) sys.exit(1) except StreamOutputError as e: log.error(e) sys.exit(1) except NeedsBuildError as e: log.error("Service '%s' needs to be built, but --no-build was passed." % e.service.name) sys.exit(1) except ReadTimeout as e: log.error( "An HTTP request took too long to complete. Retry with --verbose to obtain debug information.\n" "If you encounter this issue regularly because of slow network conditions, consider setting " "COMPOSE_HTTP_TIMEOUT to a higher value (current value: %s)." 
% HTTP_TIMEOUT ) sys.exit(1) def setup_logging(): root_logger = logging.getLogger() root_logger.addHandler(console_handler) root_logger.setLevel(logging.DEBUG) # Disable requests logging logging.getLogger("requests").propagate = False def setup_console_handler(handler, verbose): if handler.stream.isatty(): format_class = ConsoleWarningFormatter else: format_class = logging.Formatter if verbose: handler.setFormatter(format_class('%(name)s.%(funcName)s: %(message)s')) handler.setLevel(logging.DEBUG) else: handler.setFormatter(format_class()) handler.setLevel(logging.INFO) # stolen from docopt master def parse_doc_section(name, source): pattern = re.compile('^([^\n]*' + name + '[^\n]*\n?(?:[ \t].*?(?:\n|$))*)', re.IGNORECASE | re.MULTILINE) return [s.strip() for s in pattern.findall(source)] class TopLevelCommand(DocoptCommand): """Define and run multi-container applications with Docker. Usage: docker-compose [-f=...] [options] [COMMAND] [ARGS...] docker-compose -h|--help Options: -f, --file FILE Specify an alternate compose file (default: docker-compose.yml) -p, --project-name NAME Specify an alternate project name (default: directory name) --x-networking (EXPERIMENTAL) Use new Docker networking functionality. Requires Docker 1.9 or later. --x-network-driver DRIVER (EXPERIMENTAL) Specify a network driver (default: "bridge"). Requires Docker 1.9 or later. --verbose Show more output -v, --version Print version and exit Commands: build Build or rebuild services help Get help on a command kill Kill containers logs View output from containers pause Pause services port Print the public port for a port binding ps List containers pull Pulls service images restart Restart services rm Remove stopped containers run Run a one-off command scale Set number of containers for a service start Start services stop Stop services unpause Unpause services up Create and start containers migrate-to-labels Recreate containers to add labels version Show the Docker-Compose version information """ base_dir = '.' def docopt_options(self): options = super(TopLevelCommand, self).docopt_options() options['version'] = get_version_info('compose') return options def perform_command(self, options, handler, command_options): setup_console_handler(console_handler, options.get('--verbose')) if options['COMMAND'] in ('help', 'version'): # Skip looking up the compose file. handler(None, command_options) return project = project_from_options(self.base_dir, options) with friendly_error_message(): handler(project, command_options) def build(self, project, options): """ Build or rebuild services. Services are built once and then tagged as `project_service`, e.g. `composetest_db`. If you change a service's `Dockerfile` or the contents of its build directory, you can run `docker-compose build` to rebuild it. Usage: build [options] [SERVICE...] Options: --force-rm Always remove intermediate containers. --no-cache Do not use cache when building the image. --pull Always attempt to pull a newer version of the image. """ project.build( service_names=options['SERVICE'], no_cache=bool(options.get('--no-cache', False)), pull=bool(options.get('--pull', False)), force_rm=bool(options.get('--force-rm', False))) def help(self, project, options): """ Get help on a command. Usage: help COMMAND """ handler = self.get_handler(options['COMMAND']) raise SystemExit(getdoc(handler)) def kill(self, project, options): """ Force stop service containers. Usage: kill [options] [SERVICE...] Options: -s SIGNAL SIGNAL to send to the container. 
Default signal is SIGKILL. """ signal = options.get('-s', 'SIGKILL') project.kill(service_names=options['SERVICE'], signal=signal) def logs(self, project, options): """ View output from containers. Usage: logs [options] [SERVICE...] Options: --no-color Produce monochrome output. """ containers = project.containers(service_names=options['SERVICE'], stopped=True) monochrome = options['--no-color'] print("Attaching to", list_containers(containers)) LogPrinter(containers, monochrome=monochrome).run() def pause(self, project, options): """ Pause services. Usage: pause [SERVICE...] """ project.pause(service_names=options['SERVICE']) def port(self, project, options): """ Print the public port for a port binding. Usage: port [options] SERVICE PRIVATE_PORT Options: --protocol=proto tcp or udp [default: tcp] --index=index index of the container if there are multiple instances of a service [default: 1] """ index = int(options.get('--index')) service = project.get_service(options['SERVICE']) try: container = service.get_container(number=index) except ValueError as e: raise UserError(str(e)) print(container.get_local_port( options['PRIVATE_PORT'], protocol=options.get('--protocol') or 'tcp') or '') def ps(self, project, options): """ List containers. Usage: ps [options] [SERVICE...] Options: -q Only display IDs """ containers = sorted( project.containers(service_names=options['SERVICE'], stopped=True) + project.containers(service_names=options['SERVICE'], one_off=True), key=attrgetter('name')) if options['-q']: for container in containers: print(container.id) else: headers = [ 'Name', 'Command', 'State', 'Ports', ] rows = [] for container in containers: command = container.human_readable_command if len(command) > 30: command = '%s ...' % command[:26] rows.append([ container.name, command, container.human_readable_state, container.human_readable_ports, ]) print(Formatter().table(headers, rows)) def pull(self, project, options): """ Pulls images for services. Usage: pull [options] [SERVICE...] Options: --ignore-pull-failures Pull what it can and ignores images with pull failures. --allow-insecure-ssl Deprecated - no effect. """ if options['--allow-insecure-ssl']: log.warn(INSECURE_SSL_WARNING) project.pull( service_names=options['SERVICE'], ignore_pull_failures=options.get('--ignore-pull-failures') ) def rm(self, project, options): """ Remove stopped service containers. Usage: rm [options] [SERVICE...] Options: -f, --force Don't ask to confirm removal -v Remove volumes associated with containers """ all_containers = project.containers(service_names=options['SERVICE'], stopped=True) stopped_containers = [c for c in all_containers if not c.is_running] if len(stopped_containers) > 0: print("Going to remove", list_containers(stopped_containers)) if options.get('--force') \ or yesno("Are you sure? [yN] ", default=False): project.remove_stopped( service_names=options['SERVICE'], v=options.get('-v', False) ) else: print("No stopped containers") def run(self, project, options): """ Run a one-off command on a service. For example: $ docker-compose run web python manage.py shell By default, linked services will be started, unless they are already running. If you do not want to start linked services, use `docker-compose run --no-deps SERVICE COMMAND [ARGS...]`. Usage: run [options] [-p PORT...] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...] Options: --allow-insecure-ssl Deprecated - no effect. -d Detached mode: Run container in the background, print new container name. 
--name NAME Assign a name to the container --entrypoint CMD Override the entrypoint of the image. -e KEY=VAL Set an environment variable (can be used multiple times) -u, --user="" Run as specified username or uid --no-deps Don't start linked services. --rm Remove container after run. Ignored in detached mode. -p, --publish=[] Publish a container's port(s) to the host --service-ports Run command with the service's ports enabled and mapped to the host. -T Disable pseudo-tty allocation. By default `docker-compose run` allocates a TTY. """ service = project.get_service(options['SERVICE']) detach = options['-d'] if IS_WINDOWS_PLATFORM and not detach: raise UserError( "Interactive mode is not yet supported on Windows.\n" "Please pass the -d flag when using `docker-compose run`." ) if options['--allow-insecure-ssl']: log.warn(INSECURE_SSL_WARNING) if options['COMMAND']: command = [options['COMMAND']] + options['ARGS'] else: command = service.options.get('command') container_options = { 'command': command, 'tty': not (detach or options['-T'] or not sys.stdin.isatty()), 'stdin_open': not detach, 'detach': detach, } if options['-e']: container_options['environment'] = parse_environment(options['-e']) if options['--entrypoint']: container_options['entrypoint'] = options.get('--entrypoint') if options['--rm']: container_options['restart'] = None if options['--user']: container_options['user'] = options.get('--user') if not options['--service-ports']: container_options['ports'] = [] if options['--publish']: container_options['ports'] = options.get('--publish') if options['--publish'] and options['--service-ports']: raise UserError( 'Service port mapping and manual port mapping ' 'can not be used togather' ) if options['--name']: container_options['name'] = options['--name'] run_one_off_container(container_options, project, service, options) def scale(self, project, options): """ Set number of containers to run for a service. Numbers are specified in the form `service=num` as arguments. For example: $ docker-compose scale web=2 worker=3 Usage: scale [options] [SERVICE=NUM...] Options: -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. (default: 10) """ timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) for s in options['SERVICE=NUM']: if '=' not in s: raise UserError('Arguments to scale should be in the form service=num') service_name, num = s.split('=', 1) try: num = int(num) except ValueError: raise UserError('Number of containers for service "%s" is not a ' 'number' % service_name) project.get_service(service_name).scale(num, timeout=timeout) def start(self, project, options): """ Start existing containers. Usage: start [SERVICE...] """ project.start(service_names=options['SERVICE']) def stop(self, project, options): """ Stop running containers without removing them. They can be started again with `docker-compose start`. Usage: stop [options] [SERVICE...] Options: -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. (default: 10) """ timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) project.stop(service_names=options['SERVICE'], timeout=timeout) def restart(self, project, options): """ Restart running containers. Usage: restart [options] [SERVICE...] Options: -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. (default: 10) """ timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) project.restart(service_names=options['SERVICE'], timeout=timeout) def unpause(self, project, options): """ Unpause services. Usage: unpause [SERVICE...] 
""" project.unpause(service_names=options['SERVICE']) def up(self, project, options): """ Builds, (re)creates, starts, and attaches to containers for a service. Unless they are already running, this command also starts any linked services. The `docker-compose up` command aggregates the output of each container. When the command exits, all containers are stopped. Running `docker-compose up -d` starts the containers in the background and leaves them running. If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, `docker-compose up` picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the `--no-recreate` flag. If you want to force Compose to stop and recreate all containers, use the `--force-recreate` flag. Usage: up [options] [SERVICE...] Options: --allow-insecure-ssl Deprecated - no effect. -d Detached mode: Run containers in the background, print new container names. --no-color Produce monochrome output. --no-deps Don't start linked services. --force-recreate Recreate containers even if their configuration and image haven't changed. Incompatible with --no-recreate. --no-recreate If containers already exist, don't recreate them. Incompatible with --force-recreate. --no-build Don't build an image, even if it's missing -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown when attached or when containers are already running. (default: 10) """ if options['--allow-insecure-ssl']: log.warn(INSECURE_SSL_WARNING) monochrome = options['--no-color'] start_deps = not options['--no-deps'] service_names = options['SERVICE'] timeout = int(options.get('--timeout') or DEFAULT_TIMEOUT) detached = options.get('-d') to_attach = project.up( service_names=service_names, start_deps=start_deps, strategy=convergence_strategy_from_opts(options), do_build=not options['--no-build'], timeout=timeout, detached=detached ) if not detached: log_printer = build_log_printer(to_attach, service_names, monochrome) attach_to_logs(project, log_printer, service_names, timeout) def migrate_to_labels(self, project, _options): """ Recreate containers to add labels If you're coming from Compose 1.2 or earlier, you'll need to remove or migrate your existing containers after upgrading Compose. This is because, as of version 1.3, Compose uses Docker labels to keep track of containers, and so they need to be recreated with labels added. If Compose detects containers that were created without labels, it will refuse to run so that you don't end up with two sets of them. If you want to keep using your existing containers (for example, because they have data volumes you want to preserve) you can migrate them with the following command: docker-compose migrate-to-labels Alternatively, if you're not worried about keeping them, you can remove them - Compose will just create new ones. docker rm -f myapp_web_1 myapp_db_1 ... Usage: migrate-to-labels """ legacy.migrate_project_to_labels(project) def version(self, project, options): """ Show version informations Usage: version [--short] Options: --short Shows only Compose's version number. 
""" if options['--short']: print(__version__) else: print(get_version_info('full')) def convergence_strategy_from_opts(options): no_recreate = options['--no-recreate'] force_recreate = options['--force-recreate'] if force_recreate and no_recreate: raise UserError("--force-recreate and --no-recreate cannot be combined.") if force_recreate: return ConvergenceStrategy.always if no_recreate: return ConvergenceStrategy.never return ConvergenceStrategy.changed def run_one_off_container(container_options, project, service, options): if not options['--no-deps']: deps = service.get_linked_service_names() if deps: project.up( service_names=deps, start_deps=True, strategy=ConvergenceStrategy.never) if project.use_networking: project.ensure_network_exists() try: container = service.create_container( quiet=True, one_off=True, **container_options) except APIError: legacy.check_for_legacy_containers( project.client, project.name, [service.name], allow_one_off=False) raise if options['-d']: container.start() print(container.name) return def remove_container(force=False): if options['--rm']: project.client.remove_container(container.id, force=True) def force_shutdown(signal, frame): project.client.kill(container.id) remove_container(force=True) sys.exit(2) def shutdown(signal, frame): set_signal_handler(force_shutdown) project.client.stop(container.id) remove_container() sys.exit(1) set_signal_handler(shutdown) dockerpty.start(project.client, container.id, interactive=not options['-T']) exit_code = container.wait() remove_container() sys.exit(exit_code) def build_log_printer(containers, service_names, monochrome): if service_names: containers = [ container for container in containers if container.service in service_names ] return LogPrinter(containers, monochrome=monochrome) def attach_to_logs(project, log_printer, service_names, timeout): def force_shutdown(signal, frame): project.kill(service_names=service_names) sys.exit(2) def shutdown(signal, frame): set_signal_handler(force_shutdown) print("Gracefully stopping... (press Ctrl+C again to force)") project.stop(service_names=service_names, timeout=timeout) print("Attaching to", list_containers(log_printer.containers)) set_signal_handler(shutdown) log_printer.run() def set_signal_handler(handler): signal.signal(signal.SIGINT, handler) signal.signal(signal.SIGTERM, handler) def list_containers(containers): return ", ".join(c.name for c in containers) compose-1.5.2/compose/cli/multiplexer.py000066400000000000000000000027371263011261000203250ustar00rootroot00000000000000from __future__ import absolute_import from threading import Thread from six.moves import _thread as thread try: from Queue import Queue, Empty except ImportError: from queue import Queue, Empty # Python 3.x STOP = object() class Multiplexer(object): """ Create a single iterator from several iterators by running all of them in parallel and yielding results as they come in. 
""" def __init__(self, iterators): self.iterators = iterators self._num_running = len(iterators) self.queue = Queue() def loop(self): self._init_readers() while self._num_running > 0: try: item, exception = self.queue.get(timeout=0.1) if exception: raise exception if item is STOP: self._num_running -= 1 else: yield item except Empty: pass # See https://github.com/docker/compose/issues/189 except thread.error: raise KeyboardInterrupt() def _init_readers(self): for iterator in self.iterators: t = Thread(target=_enqueue_output, args=(iterator, self.queue)) t.daemon = True t.start() def _enqueue_output(iterator, queue): try: for item in iterator: queue.put((item, None)) queue.put((STOP, None)) except Exception as e: queue.put((None, e)) compose-1.5.2/compose/cli/utils.py000066400000000000000000000043031263011261000171020ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import division from __future__ import unicode_literals import os import platform import ssl import subprocess import docker from six.moves import input import compose def yesno(prompt, default=None): """ Prompt the user for a yes or no. Can optionally specify a default value, which will only be used if they enter a blank line. Unrecognised input (anything other than "y", "n", "yes", "no" or "") will return None. """ answer = input(prompt).strip().lower() if answer == "y" or answer == "yes": return True elif answer == "n" or answer == "no": return False elif answer == "": return default else: return None def call_silently(*args, **kwargs): """ Like subprocess.call(), but redirects stdout and stderr to /dev/null. """ with open(os.devnull, 'w') as shutup: try: return subprocess.call(*args, stdout=shutup, stderr=shutup, **kwargs) except WindowsError: # On Windows, subprocess.call() can still raise exceptions. Normalize # to POSIXy behaviour by returning a nonzero exit code. return 1 def is_mac(): return platform.system() == 'Darwin' def is_ubuntu(): return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu' def get_version_info(scope): versioninfo = 'docker-compose version {}, build {}'.format( compose.__version__, get_build_version()) if scope == 'compose': return versioninfo if scope == 'full': return ( "{}\n" "docker-py version: {}\n" "{} version: {}\n" "OpenSSL version: {}" ).format( versioninfo, docker.version, platform.python_implementation(), platform.python_version(), ssl.OPENSSL_VERSION) raise ValueError("{} is not a valid version scope".format(scope)) def get_build_version(): filename = os.path.join(os.path.dirname(compose.__file__), 'GITSHA') if not os.path.exists(filename): return 'unknown' with open(filename) as fh: return fh.read().strip() compose-1.5.2/compose/cli/verbose_proxy.py000066400000000000000000000032321263011261000206500ustar00rootroot00000000000000import functools import logging import pprint from itertools import chain import six def format_call(args, kwargs): args = (repr(a) for a in args) kwargs = ("{0!s}={1!r}".format(*item) for item in six.iteritems(kwargs)) return "({0})".format(", ".join(chain(args, kwargs))) def format_return(result, max_lines): if isinstance(result, (list, tuple, set)): return "({0} with {1} items)".format(type(result).__name__, len(result)) if result: lines = pprint.pformat(result).split('\n') extra = '\n...' 
==> compose-1.5.2/compose/cli/utils.py <==

from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals

import os
import platform
import ssl
import subprocess

import docker
from six.moves import input

import compose


def yesno(prompt, default=None):
    """
    Prompt the user for a yes or no.

    Can optionally specify a default value, which will only be
    used if they enter a blank line.

    Unrecognised input (anything other than "y", "n", "yes",
    "no" or "") will return None.
    """
    answer = input(prompt).strip().lower()

    if answer == "y" or answer == "yes":
        return True
    elif answer == "n" or answer == "no":
        return False
    elif answer == "":
        return default
    else:
        return None


def call_silently(*args, **kwargs):
    """
    Like subprocess.call(), but redirects stdout and stderr to /dev/null.
    """
    with open(os.devnull, 'w') as shutup:
        try:
            return subprocess.call(*args, stdout=shutup, stderr=shutup, **kwargs)
        except WindowsError:
            # On Windows, subprocess.call() can still raise exceptions. Normalize
            # to POSIXy behaviour by returning a nonzero exit code.
            return 1


def is_mac():
    return platform.system() == 'Darwin'


def is_ubuntu():
    return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu'


def get_version_info(scope):
    versioninfo = 'docker-compose version {}, build {}'.format(
        compose.__version__,
        get_build_version())

    if scope == 'compose':
        return versioninfo
    if scope == 'full':
        return (
            "{}\n"
            "docker-py version: {}\n"
            "{} version: {}\n"
            "OpenSSL version: {}"
        ).format(
            versioninfo,
            docker.version,
            platform.python_implementation(),
            platform.python_version(),
            ssl.OPENSSL_VERSION)

    raise ValueError("{} is not a valid version scope".format(scope))


def get_build_version():
    filename = os.path.join(os.path.dirname(compose.__file__), 'GITSHA')
    if not os.path.exists(filename):
        return 'unknown'

    with open(filename) as fh:
        return fh.read().strip()

==> compose-1.5.2/compose/cli/verbose_proxy.py <==

import functools
import logging
import pprint
from itertools import chain

import six


def format_call(args, kwargs):
    args = (repr(a) for a in args)
    kwargs = ("{0!s}={1!r}".format(*item) for item in six.iteritems(kwargs))
    return "({0})".format(", ".join(chain(args, kwargs)))


def format_return(result, max_lines):
    if isinstance(result, (list, tuple, set)):
        return "({0} with {1} items)".format(type(result).__name__, len(result))

    if result:
        lines = pprint.pformat(result).split('\n')
        extra = '\n...' if len(lines) > max_lines else ''
        return '\n'.join(lines[:max_lines]) + extra

    return result


class VerboseProxy(object):
    """Proxy all function calls to another class and log method name,
    arguments and return values for each call.
    """

    def __init__(self, obj_name, obj, log_name=None, max_lines=10):
        self.obj_name = obj_name
        self.obj = obj
        self.max_lines = max_lines
        self.log = logging.getLogger(log_name or __name__)

    def __getattr__(self, name):
        attr = getattr(self.obj, name)

        if not six.callable(attr):
            return attr

        return functools.partial(self.proxy_callable, name)

    def proxy_callable(self, call_name, *args, **kwargs):
        self.log.info("%s %s <- %s",
                      self.obj_name,
                      call_name,
                      format_call(args, kwargs))

        result = getattr(self.obj, call_name)(*args, **kwargs)

        self.log.info("%s %s -> %s",
                      self.obj_name,
                      call_name,
                      format_return(result, self.max_lines))
        return result
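# Standalone sketch of VerboseProxy above wrapping an ordinary object; each
# method call is logged with its arguments and (truncated) return value.
import logging

from compose.cli.verbose_proxy import VerboseProxy

logging.basicConfig(level=logging.INFO)

proxied = VerboseProxy('list', [3, 1, 2])
proxied.count(2)   # logs "list count <- (2)" then "list count -> 1"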
==> compose-1.5.2/compose/config/__init__.py <==

# flake8: noqa
from .config import ConfigurationError
from .config import DOCKER_CONFIG_KEYS
from .config import find
from .config import load
from .config import merge_environment
from .config import parse_environment

==> compose-1.5.2/compose/config/config.py <==

from __future__ import absolute_import

import codecs
import logging
import os
import sys
from collections import namedtuple

import six
import yaml

from .errors import CircularReference
from .errors import ComposeFileNotFound
from .errors import ConfigurationError
from .interpolation import interpolate_environment_variables
from .sort_services import get_service_name_from_net
from .sort_services import sort_service_dicts
from .types import parse_extra_hosts
from .types import parse_restart_spec
from .types import VolumeFromSpec
from .types import VolumeSpec
from .validation import validate_against_fields_schema
from .validation import validate_against_service_schema
from .validation import validate_extends_file_path
from .validation import validate_top_level_object

DOCKER_CONFIG_KEYS = [
    'cap_add', 'cap_drop', 'cgroup_parent', 'command', 'cpu_shares',
    'cpuset', 'detach', 'devices', 'dns', 'dns_search', 'domainname',
    'entrypoint', 'env_file', 'environment', 'extra_hosts', 'hostname',
    'image', 'ipc', 'labels', 'links', 'log_driver', 'log_opt',
    'mac_address', 'mem_limit', 'memswap_limit', 'net', 'pid', 'ports',
    'privileged', 'read_only', 'restart', 'security_opt', 'stdin_open',
    'tty', 'user', 'volume_driver', 'volumes', 'volumes_from',
    'working_dir',
]

ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
    'build',
    'container_name',
    'dockerfile',
    'expose',
    'external_links',
]

DOCKER_VALID_URL_PREFIXES = (
    'http://',
    'https://',
    'git://',
    'github.com/',
    'git@',
)

SUPPORTED_FILENAMES = [
    'docker-compose.yml',
    'docker-compose.yaml',
    'fig.yml',
    'fig.yaml',
]

DEFAULT_OVERRIDE_FILENAME = 'docker-compose.override.yml'

log = logging.getLogger(__name__)


class ConfigDetails(namedtuple('_ConfigDetails', 'working_dir config_files')):
    """
    :param working_dir: the directory to use for relative paths in the config
    :type  working_dir: string
    :param config_files: list of configuration files to load
    :type  config_files: list of :class:`ConfigFile`
    """


class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
    """
    :param filename: filename of the config file
    :type  filename: string
    :param config: contents of the config file
    :type  config: :class:`dict`
    """

    @classmethod
    def from_filename(cls, filename):
        return cls(filename, load_yaml(filename))


class ServiceConfig(namedtuple('_ServiceConfig', 'working_dir filename name config')):

    @classmethod
    def with_abs_paths(cls, working_dir, filename, name, config):
        if not working_dir:
            raise ValueError("No working_dir for ServiceConfig.")

        return cls(
            os.path.abspath(working_dir),
            os.path.abspath(filename) if filename else filename,
            name,
            config)


def find(base_dir, filenames):
    if filenames == ['-']:
        return ConfigDetails(
            os.getcwd(),
            [ConfigFile(None, yaml.safe_load(sys.stdin))])

    if filenames:
        filenames = [os.path.join(base_dir, f) for f in filenames]
    else:
        filenames = get_default_config_files(base_dir)

    log.debug("Using configuration files: {}".format(",".join(filenames)))
    return ConfigDetails(
        os.path.dirname(filenames[0]),
        [ConfigFile.from_filename(f) for f in filenames])


def get_default_config_files(base_dir):
    (candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir)

    if not candidates:
        raise ComposeFileNotFound(SUPPORTED_FILENAMES)

    winner = candidates[0]

    if len(candidates) > 1:
        log.warn("Found multiple config files with supported names: %s", ", ".join(candidates))
        log.warn("Using %s\n", winner)

    if winner == 'docker-compose.yaml':
        log.warn("Please be aware that .yml is the expected extension "
                 "in most cases, and using .yaml can cause compatibility "
                 "issues in future.\n")

    if winner.startswith("fig."):
        log.warn("%s is deprecated and will not be supported in future. "
                 "Please rename your config file to docker-compose.yml\n" % winner)

    return [os.path.join(path, winner)] + get_default_override_file(path)


def get_default_override_file(path):
    override_filename = os.path.join(path, DEFAULT_OVERRIDE_FILENAME)
    return [override_filename] if os.path.exists(override_filename) else []


def find_candidates_in_parent_dirs(filenames, path):
    """
    Given a directory path to start, looks for filenames in the
    directory, and then each parent directory successively,
    until found.

    Returns tuple (candidates, path).
    """
    candidates = [filename for filename in filenames
                  if os.path.exists(os.path.join(path, filename))]

    if not candidates:
        parent_dir = os.path.join(path, '..')
        if os.path.abspath(parent_dir) != os.path.abspath(path):
            return find_candidates_in_parent_dirs(filenames, parent_dir)

    return (candidates, path)
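# Standalone sketch (run outside this module) of `find` above: it resolves
# which Compose files apply to a project directory. Assumes ./myproject
# holds a docker-compose.yml (and optionally docker-compose.override.yml).
from compose import config

details = config.find('myproject', None)
print(details.working_dir)
print([f.filename for f in details.config_files])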
""" def build_service(filename, service_name, service_dict): service_config = ServiceConfig.with_abs_paths( config_details.working_dir, filename, service_name, service_dict) resolver = ServiceExtendsResolver(service_config) service_dict = process_service(resolver.run()) # TODO: move to validate_service() validate_against_service_schema(service_dict, service_config.name) validate_paths(service_dict) service_dict = finalize_service(service_config._replace(config=service_dict)) service_dict['name'] = service_config.name return service_dict def build_services(config_file): return sort_service_dicts([ build_service(config_file.filename, name, service_dict) for name, service_dict in config_file.config.items() ]) def merge_services(base, override): all_service_names = set(base) | set(override) return { name: merge_service_dicts_from_files( base.get(name, {}), override.get(name, {})) for name in all_service_names } config_file = process_config_file(config_details.config_files[0]) for next_file in config_details.config_files[1:]: next_file = process_config_file(next_file) config = merge_services(config_file.config, next_file.config) config_file = config_file._replace(config=config) return build_services(config_file) def process_config_file(config_file, service_name=None): validate_top_level_object(config_file) processed_config = interpolate_environment_variables(config_file.config) validate_against_fields_schema(processed_config, config_file.filename) if service_name and service_name not in processed_config: raise ConfigurationError( "Cannot extend service '{}' in {}: Service not found".format( service_name, config_file.filename)) return config_file._replace(config=processed_config) class ServiceExtendsResolver(object): def __init__(self, service_config, already_seen=None): self.service_config = service_config self.working_dir = service_config.working_dir self.already_seen = already_seen or [] @property def signature(self): return self.service_config.filename, self.service_config.name def detect_cycle(self): if self.signature in self.already_seen: raise CircularReference(self.already_seen + [self.signature]) def run(self): self.detect_cycle() if 'extends' in self.service_config.config: service_dict = self.resolve_extends(*self.validate_and_construct_extends()) return self.service_config._replace(config=service_dict) return self.service_config def validate_and_construct_extends(self): extends = self.service_config.config['extends'] if not isinstance(extends, dict): extends = {'service': extends} config_path = self.get_extended_config_path(extends) service_name = extends['service'] extended_file = process_config_file( ConfigFile.from_filename(config_path), service_name=service_name) service_config = extended_file.config[service_name] return config_path, service_config, service_name def resolve_extends(self, extended_config_path, service_dict, service_name): resolver = ServiceExtendsResolver( ServiceConfig.with_abs_paths( os.path.dirname(extended_config_path), extended_config_path, service_name, service_dict), already_seen=self.already_seen + [self.signature]) service_config = resolver.run() other_service_dict = process_service(service_config) validate_extended_service_dict( other_service_dict, extended_config_path, service_name, ) return merge_service_dicts(other_service_dict, self.service_config.config) def get_extended_config_path(self, extends_options): """Service we are extending either has a value for 'file' set, which we need to obtain a full path too or we are extending from a service defined 
def resolve_environment(service_dict):
    """Unpack any environment variables from an env_file, if set.
    Interpolate environment values if set.
    """
    env = {}
    for env_file in service_dict.get('env_file', []):
        env.update(env_vars_from_file(env_file))

    env.update(parse_environment(service_dict.get('environment')))
    return dict(resolve_env_var(k, v) for k, v in six.iteritems(env))


def validate_extended_service_dict(service_dict, filename, service):
    error_prefix = "Cannot extend service '%s' in %s:" % (service, filename)

    if 'links' in service_dict:
        raise ConfigurationError(
            "%s services with 'links' cannot be extended" % error_prefix)

    if 'volumes_from' in service_dict:
        raise ConfigurationError(
            "%s services with 'volumes_from' cannot be extended" % error_prefix)

    if 'net' in service_dict:
        if get_service_name_from_net(service_dict['net']) is not None:
            raise ConfigurationError(
                "%s services with 'net: container' cannot be extended" % error_prefix)


def validate_ulimits(ulimit_config):
    for limit_name, soft_hard_values in six.iteritems(ulimit_config):
        if isinstance(soft_hard_values, dict):
            if not soft_hard_values['soft'] <= soft_hard_values['hard']:
                raise ConfigurationError(
                    "ulimit_config \"{}\" cannot contain a 'soft' value higher "
                    "than 'hard' value".format(ulimit_config))


# TODO: rename to normalize_service
def process_service(service_config):
    working_dir = service_config.working_dir
    service_dict = dict(service_config.config)

    if 'env_file' in service_dict:
        service_dict['env_file'] = [
            expand_path(working_dir, path)
            for path in to_list(service_dict['env_file'])
        ]

    if 'volumes' in service_dict and service_dict.get('volume_driver') is None:
        service_dict['volumes'] = resolve_volume_paths(working_dir, service_dict)

    if 'build' in service_dict:
        service_dict['build'] = resolve_build_path(working_dir, service_dict['build'])

    if 'labels' in service_dict:
        service_dict['labels'] = parse_labels(service_dict['labels'])

    if 'extra_hosts' in service_dict:
        service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts'])

    # TODO: move to a validate_service()
    if 'ulimits' in service_dict:
        validate_ulimits(service_dict['ulimits'])

    return service_dict


def finalize_service(service_config):
    service_dict = dict(service_config.config)

    if 'environment' in service_dict or 'env_file' in service_dict:
        service_dict['environment'] = resolve_environment(service_dict)
        service_dict.pop('env_file', None)

    if 'volumes_from' in service_dict:
        service_dict['volumes_from'] = [
            VolumeFromSpec.parse(vf) for vf in service_dict['volumes_from']]

    if 'volumes' in service_dict:
        service_dict['volumes'] = [
            VolumeSpec.parse(v) for v in service_dict['volumes']]

    if 'restart' in service_dict:
        service_dict['restart'] = parse_restart_spec(service_dict['restart'])

    return service_dict
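# Standalone sketch of resolve_environment above: env_file values are loaded
# first, then the 'environment' key overrides them, and a key with no value
# (None) falls back to os.environ. './app.env' is a made-up path that would
# need to exist (e.g. containing API_KEY=abc123).
import os

from compose.config.config import resolve_environment

os.environ['DEBUG'] = '1'
service_dict = {
    'env_file': ['./app.env'],
    'environment': {'LANG': 'C.UTF-8', 'DEBUG': None},
}
print(resolve_environment(service_dict))
# -> {'API_KEY': 'abc123', 'LANG': 'C.UTF-8', 'DEBUG': '1'}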
""" new_service = merge_service_dicts(base, override) if 'extends' in override: new_service['extends'] = override['extends'] return new_service def merge_service_dicts(base, override): d = base.copy() if 'environment' in base or 'environment' in override: d['environment'] = merge_environment( base.get('environment'), override.get('environment'), ) path_mapping_keys = ['volumes', 'devices'] for key in path_mapping_keys: if key in base or key in override: d[key] = merge_path_mappings( base.get(key), override.get(key), ) if 'labels' in base or 'labels' in override: d['labels'] = merge_labels( base.get('labels'), override.get('labels'), ) if 'image' in override and 'build' in d: del d['build'] if 'build' in override and 'image' in d: del d['image'] list_keys = ['ports', 'expose', 'external_links'] for key in list_keys: if key in base or key in override: d[key] = base.get(key, []) + override.get(key, []) list_or_string_keys = ['dns', 'dns_search', 'env_file'] for key in list_or_string_keys: if key in base or key in override: d[key] = to_list(base.get(key)) + to_list(override.get(key)) already_merged_keys = ['environment', 'labels'] + path_mapping_keys + list_keys + list_or_string_keys for k in set(ALLOWED_KEYS) - set(already_merged_keys): if k in override: d[k] = override[k] return d def merge_environment(base, override): env = parse_environment(base) env.update(parse_environment(override)) return env def parse_environment(environment): if not environment: return {} if isinstance(environment, list): return dict(split_env(e) for e in environment) if isinstance(environment, dict): return dict(environment) raise ConfigurationError( "environment \"%s\" must be a list or mapping," % environment ) def split_env(env): if isinstance(env, six.binary_type): env = env.decode('utf-8', 'replace') if '=' in env: return env.split('=', 1) else: return env, None def resolve_env_var(key, val): if val is not None: return key, val elif key in os.environ: return key, os.environ[key] else: return key, '' def env_vars_from_file(filename): """ Read in a line delimited file of environment variables. """ if not os.path.exists(filename): raise ConfigurationError("Couldn't find env file: %s" % filename) env = {} for line in codecs.open(filename, 'r', 'utf-8'): line = line.strip() if line and not line.startswith('#'): k, v = split_env(line) env[k] = v return env def resolve_volume_paths(working_dir, service_dict): return [ resolve_volume_path(working_dir, volume) for volume in service_dict['volumes'] ] def resolve_volume_path(working_dir, volume): container_path, host_path = split_path_mapping(volume) if host_path is not None: if host_path.startswith('.'): host_path = expand_path(working_dir, host_path) host_path = os.path.expanduser(host_path) return u"{}:{}".format(host_path, container_path) else: return container_path def resolve_build_path(working_dir, build_path): if is_url(build_path): return build_path return expand_path(working_dir, build_path) def is_url(build_path): return build_path.startswith(DOCKER_VALID_URL_PREFIXES) def validate_paths(service_dict): if 'build' in service_dict: build_path = service_dict['build'] if ( not is_url(build_path) and (not os.path.exists(build_path) or not os.access(build_path, os.R_OK)) ): raise ConfigurationError( "build path %s either does not exist, is not accessible, " "or is not a valid URL." 
def merge_path_mappings(base, override):
    d = dict_from_path_mappings(base)
    d.update(dict_from_path_mappings(override))
    return path_mappings_from_dict(d)


def dict_from_path_mappings(path_mappings):
    if path_mappings:
        return dict(split_path_mapping(v) for v in path_mappings)
    else:
        return {}


def path_mappings_from_dict(d):
    return [join_path_mapping(v) for v in d.items()]


def split_path_mapping(volume_path):
    """
    Ascertain if the volume_path contains a host path as well as a container
    path. Using splitdrive so windows absolute paths won't cause issues with
    splitting on ':'.
    """
    # splitdrive has limitations when it comes to relative paths, so when it's
    # relative, handle special case to set the drive to ''
    if volume_path.startswith('.') or volume_path.startswith('~'):
        drive, volume_config = '', volume_path
    else:
        drive, volume_config = os.path.splitdrive(volume_path)

    if ':' in volume_config:
        (host, container) = volume_config.split(':', 1)
        return (container, drive + host)
    else:
        return (volume_path, None)


def join_path_mapping(pair):
    (container, host) = pair
    if host is None:
        return container
    else:
        return ":".join((host, container))


def merge_labels(base, override):
    labels = parse_labels(base)
    labels.update(parse_labels(override))
    return labels


def parse_labels(labels):
    if not labels:
        return {}

    if isinstance(labels, list):
        return dict(split_label(e) for e in labels)

    if isinstance(labels, dict):
        return dict(labels)


def split_label(label):
    if '=' in label:
        return label.split('=', 1)
    else:
        return label, ''


def expand_path(working_dir, path):
    return os.path.abspath(os.path.join(working_dir, os.path.expanduser(path)))


def to_list(value):
    if value is None:
        return []
    elif isinstance(value, six.string_types):
        return [value]
    else:
        return value


def load_yaml(filename):
    try:
        with open(filename, 'r') as fh:
            return yaml.safe_load(fh)
    except (IOError, yaml.YAMLError) as e:
        error_name = getattr(e, '__module__', '') + '.' + e.__class__.__name__
        raise ConfigurationError(u"{}: {}".format(error_name, e))

==> compose-1.5.2/compose/config/errors.py <==

class ConfigurationError(Exception):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return self.msg


class DependencyError(ConfigurationError):
    pass


class CircularReference(ConfigurationError):
    def __init__(self, trail):
        self.trail = trail

    @property
    def msg(self):
        lines = [
            "{} in {}".format(service_name, filename)
            for (filename, service_name) in self.trail
        ]
        return "Circular reference:\n  {}".format("\n  extends ".join(lines))


class ComposeFileNotFound(ConfigurationError):
    def __init__(self, supported_filenames):
        super(ComposeFileNotFound, self).__init__("""
        Can't find a suitable configuration file in this directory or any parent.
        Are you in the right directory?

        Supported filenames: %s
        """ % ", ".join(supported_filenames))
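# Standalone sketch of the volume-string helpers above in config.py:
# split_path_mapping separates a 'host:container' volume into its parts,
# and a named volume passes straight through as the host part.
from compose.config.config import split_path_mapping

print(split_path_mapping('/data'))               # ('/data', None)
print(split_path_mapping('.:/code'))             # ('/code', '.')
print(split_path_mapping('mydatavolume:/data'))  # ('/data', 'mydatavolume')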
==> compose-1.5.2/compose/config/fields_schema.json <==

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "id": "fields_schema.json",

  "patternProperties": {
    "^[a-zA-Z0-9._-]+$": {
      "$ref": "#/definitions/service"
    }
  },
  "additionalProperties": false,

  "definitions": {
    "service": {
      "id": "#/definitions/service",
      "type": "object",

      "properties": {
        "build": {"type": "string"},
        "cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "cgroup_parent": {"type": "string"},
        "command": {
          "oneOf": [
            {"type": "string"},
            {"type": "array", "items": {"type": "string"}}
          ]
        },
        "container_name": {"type": "string"},
        "cpu_shares": {"type": ["number", "string"]},
        "cpuset": {"type": "string"},
        "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "dns": {"$ref": "#/definitions/string_or_list"},
        "dns_search": {"$ref": "#/definitions/string_or_list"},
        "dockerfile": {"type": "string"},
        "domainname": {"type": "string"},
        "entrypoint": {"$ref": "#/definitions/string_or_list"},
        "env_file": {"$ref": "#/definitions/string_or_list"},
        "environment": {"$ref": "#/definitions/list_or_dict"},

        "expose": {
          "type": "array",
          "items": {
            "type": ["string", "number"],
            "format": "expose"
          },
          "uniqueItems": true
        },

        "extends": {
          "oneOf": [
            {
              "type": "string"
            },
            {
              "type": "object",

              "properties": {
                "service": {"type": "string"},
                "file": {"type": "string"}
              },
              "required": ["service"],
              "additionalProperties": false
            }
          ]
        },

        "extra_hosts": {"$ref": "#/definitions/list_or_dict"},
        "external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "hostname": {"type": "string"},
        "image": {"type": "string"},
        "ipc": {"type": "string"},
        "labels": {"$ref": "#/definitions/list_or_dict"},
        "links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "log_driver": {"type": "string"},
        "log_opt": {"type": "object"},
        "mac_address": {"type": "string"},
        "mem_limit": {"type": ["number", "string"]},
        "memswap_limit": {"type": ["number", "string"]},
        "net": {"type": "string"},
        "pid": {"type": ["string", "null"]},

        "ports": {
          "type": "array",
          "items": {
            "type": ["string", "number"],
            "format": "ports"
          },
          "uniqueItems": true
        },

        "privileged": {"type": "boolean"},
        "read_only": {"type": "boolean"},
        "restart": {"type": "string"},
        "security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "stdin_open": {"type": "boolean"},
        "tty": {"type": "boolean"},

        "ulimits": {
          "type": "object",
          "patternProperties": {
            "^[a-z]+$": {
              "oneOf": [
                {"type": "integer"},
                {
                  "type": "object",
                  "properties": {
                    "hard": {"type": "integer"},
                    "soft": {"type": "integer"}
                  },
                  "required": ["soft", "hard"],
                  "additionalProperties": false
                }
              ]
            }
          }
        },

        "user": {"type": "string"},
        "volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "volume_driver": {"type": "string"},
        "volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "working_dir": {"type": "string"}
      },

      "dependencies": {
        "memswap_limit": ["mem_limit"]
      },
      "additionalProperties": false
    },

    "string_or_list": {
      "oneOf": [
        {"type": "string"},
        {"$ref": "#/definitions/list_of_strings"}
      ]
    },

    "list_of_strings": {
      "type": "array",
      "items": {"type": "string"},
      "uniqueItems": true
    },

    "list_or_dict": {
      "oneOf": [
        {
          "type": "object",
          "patternProperties": {
            ".+": {
              "type": ["string", "number", "boolean", "null"],
              "format": "bool-value-in-mapping"
            }
          },
          "additionalProperties": false
        },
        {"type": "array", "items": {"type": "string"}, "uniqueItems": true}
      ]
    }
  }
}
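# Standalone sketch: the JSON schema above backs validate_against_fields_schema
# (defined in validation.py), which - assuming it wraps schema violations the
# way the rest of the config package does - surfaces them as ConfigurationError.
from compose.config.errors import ConfigurationError
from compose.config.validation import validate_against_fields_schema

try:
    validate_against_fields_schema(
        {'web': {'image': 'busybox', 'ports': 'not-a-list'}},
        'docker-compose.yml')
except ConfigurationError as e:
    print(e)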
["string", "number", "boolean", "null"], "format": "bool-value-in-mapping" } }, "additionalProperties": false }, {"type": "array", "items": {"type": "string"}, "uniqueItems": true} ] } } } compose-1.5.2/compose/config/interpolation.py000066400000000000000000000044301263011261000213300ustar00rootroot00000000000000import logging import os from string import Template import six from .errors import ConfigurationError log = logging.getLogger(__name__) def interpolate_environment_variables(config): mapping = BlankDefaultDict(os.environ) return dict( (service_name, process_service(service_name, service_dict, mapping)) for (service_name, service_dict) in config.items() ) def process_service(service_name, service_dict, mapping): return dict( (key, interpolate_value(service_name, key, val, mapping)) for (key, val) in service_dict.items() ) def interpolate_value(service_name, config_key, value, mapping): try: return recursive_interpolate(value, mapping) except InvalidInterpolation as e: raise ConfigurationError( 'Invalid interpolation format for "{config_key}" option ' 'in service "{service_name}": "{string}"' .format( config_key=config_key, service_name=service_name, string=e.string, ) ) def recursive_interpolate(obj, mapping): if isinstance(obj, six.string_types): return interpolate(obj, mapping) elif isinstance(obj, dict): return dict( (key, recursive_interpolate(val, mapping)) for (key, val) in obj.items() ) elif isinstance(obj, list): return [recursive_interpolate(val, mapping) for val in obj] else: return obj def interpolate(string, mapping): try: return Template(string).substitute(mapping) except ValueError: raise InvalidInterpolation(string) class BlankDefaultDict(dict): def __init__(self, *args, **kwargs): super(BlankDefaultDict, self).__init__(*args, **kwargs) self.missing_keys = [] def __getitem__(self, key): try: return super(BlankDefaultDict, self).__getitem__(key) except KeyError: if key not in self.missing_keys: log.warn( "The {} variable is not set. Defaulting to a blank string." .format(key) ) self.missing_keys.append(key) return "" class InvalidInterpolation(Exception): def __init__(self, string): self.string = string compose-1.5.2/compose/config/service_schema.json000066400000000000000000000011471263011261000217440ustar00rootroot00000000000000{ "$schema": "http://json-schema.org/draft-04/schema#", "id": "service_schema.json", "type": "object", "allOf": [ {"$ref": "fields_schema.json#/definitions/service"}, {"$ref": "#/definitions/constraints"} ], "definitions": { "constraints": { "id": "#/definitions/constraints", "anyOf": [ { "required": ["build"], "not": {"required": ["image"]} }, { "required": ["image"], "not": {"anyOf": [ {"required": ["build"]}, {"required": ["dockerfile"]} ]} } ] } } } compose-1.5.2/compose/config/sort_services.py000066400000000000000000000035361263011261000213410ustar00rootroot00000000000000from compose.config.errors import DependencyError def get_service_name_from_net(net_config): if not net_config: return if not net_config.startswith('container:'): return _, net_name = net_config.split(':', 1) return net_name def sort_service_dicts(services): # Topological sort (Cormen/Tarjan algorithm). 
==> compose-1.5.2/compose/config/sort_services.py <==

from compose.config.errors import DependencyError


def get_service_name_from_net(net_config):
    if not net_config:
        return

    if not net_config.startswith('container:'):
        return

    _, net_name = net_config.split(':', 1)
    return net_name


def sort_service_dicts(services):
    # Topological sort (Cormen/Tarjan algorithm).
    unmarked = services[:]
    temporary_marked = set()
    sorted_services = []

    def get_service_names(links):
        return [link.split(':')[0] for link in links]

    def get_service_names_from_volumes_from(volumes_from):
        return [volume_from.source for volume_from in volumes_from]

    def get_service_dependents(service_dict, services):
        name = service_dict['name']
        return [
            service for service in services
            if (name in get_service_names(service.get('links', [])) or
                name in get_service_names_from_volumes_from(service.get('volumes_from', [])) or
                name == get_service_name_from_net(service.get('net')))
        ]

    def visit(n):
        if n['name'] in temporary_marked:
            if n['name'] in get_service_names(n.get('links', [])):
                raise DependencyError('A service can not link to itself: %s' % n['name'])
            if n['name'] in n.get('volumes_from', []):
                raise DependencyError('A service can not mount itself as volume: %s' % n['name'])
            else:
                raise DependencyError('Circular import between %s' % ' and '.join(temporary_marked))
        if n in unmarked:
            temporary_marked.add(n['name'])
            for m in get_service_dependents(n, services):
                visit(m)
            temporary_marked.remove(n['name'])
            unmarked.remove(n)
            sorted_services.insert(0, n)

    while unmarked:
        visit(unmarked[-1])

    return sorted_services
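# Standalone sketch of dependency-ordered sorting above: 'web' links to
# 'db', so 'db' sorts first and is started first.
from compose.config.sort_services import sort_service_dicts

services = [
    {'name': 'web', 'links': ['db:database']},
    {'name': 'db'},
]
print([s['name'] for s in sort_service_dicts(services)])   # -> ['db', 'web']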
Volume paths are expected to be linux style /c/my/path/shiny/ """ if not IS_WINDOWS_PLATFORM: return external_path, internal_path if external_path: drive, tail = os.path.splitdrive(external_path) if drive: external_path = '/' + drive.lower().rstrip(':') + tail external_path = external_path.replace('\\', '/') return external_path, internal_path.replace('\\', '/') class VolumeSpec(namedtuple('_VolumeSpec', 'external internal mode')): @classmethod def parse(cls, volume_config): """Parse a volume_config path and split it into external:internal[:mode] parts to be returned as a valid VolumeSpec. """ if IS_WINDOWS_PLATFORM: # relative paths in Windows expand to include the drive, e.g. C:\ # so we join the first 2 parts back together to count as one drive, tail = os.path.splitdrive(volume_config) parts = tail.split(":") if drive: parts[0] = drive + parts[0] else: parts = volume_config.split(':') if len(parts) > 3: raise ConfigurationError( "Volume %s has incorrect format, should be " "external:internal[:mode]" % volume_config) if len(parts) == 1: external, internal = normalize_paths_for_engine( None, os.path.normpath(parts[0])) else: external, internal = normalize_paths_for_engine( os.path.normpath(parts[0]), os.path.normpath(parts[1])) mode = 'rw' if len(parts) == 3: mode = parts[2] return cls(external, internal, mode) compose-1.5.2/compose/config/validation.py000066400000000000000000000272321263011261000206000ustar00rootroot00000000000000import json import logging import os import re import sys import six from docker.utils.ports import split_port from jsonschema import Draft4Validator from jsonschema import FormatChecker from jsonschema import RefResolver from jsonschema import ValidationError from .errors import ConfigurationError log = logging.getLogger(__name__) DOCKER_CONFIG_HINTS = { 'cpu_share': 'cpu_shares', 'add_host': 'extra_hosts', 'hosts': 'extra_hosts', 'extra_host': 'extra_hosts', 'device': 'devices', 'link': 'links', 'memory_swap': 'memswap_limit', 'port': 'ports', 'privilege': 'privileged', 'priviliged': 'privileged', 'privilige': 'privileged', 'volume': 'volumes', 'workdir': 'working_dir', } VALID_NAME_CHARS = '[a-zA-Z0-9\._\-]' VALID_EXPOSE_FORMAT = r'^\d+(\/[a-zA-Z]+)?$' @FormatChecker.cls_checks(format="ports", raises=ValidationError) def format_ports(instance): try: split_port(instance) except ValueError as e: raise ValidationError(six.text_type(e)) return True @FormatChecker.cls_checks(format="expose", raises=ValidationError) def format_expose(instance): if isinstance(instance, six.string_types): if not re.match(VALID_EXPOSE_FORMAT, instance): raise ValidationError( "should be of the format 'PORT[/PROTOCOL]'") return True @FormatChecker.cls_checks(format="bool-value-in-mapping") def format_boolean_in_environment(instance): """ Check if there is a boolean in the environment and display a warning. Always return True here so the validation won't raise an error. """ if isinstance(instance, bool): log.warn( "There is a boolean value in the 'environment' key.\n" "Environment variables can only be strings.\n" "Please add quotes to any boolean values to make them strings " "(e.g. 'True', 'yes', 'N').\n" "This warning will become an error in a future release.\r\n" ) return True def validate_top_level_service_objects(config_file): """Perform some high-level validation of the service name and value. This validation must happen before interpolation, which must happen before the rest of validation, which is why it's separate from the rest of the service validation.
""" for service_name, service_dict in config_file.config.items(): if not isinstance(service_name, six.string_types): raise ConfigurationError( "In file '{}' service name: {} needs to be a string, eg '{}'".format( config_file.filename, service_name, service_name)) if not isinstance(service_dict, dict): raise ConfigurationError( "In file '{}' service '{}' doesn\'t have any configuration options. " "All top level keys in your docker-compose.yml must map " "to a dictionary of configuration options.".format( config_file.filename, service_name)) def validate_top_level_object(config_file): if not isinstance(config_file.config, dict): raise ConfigurationError( "Top level object in '{}' needs to be an object not '{}'. Check " "that you have defined a service at the top level.".format( config_file.filename, type(config_file.config))) validate_top_level_service_objects(config_file) def validate_extends_file_path(service_name, extends_options, filename): """ The service to be extended must either be defined in the config key 'file', or within 'filename'. """ error_prefix = "Invalid 'extends' configuration for %s:" % service_name if 'file' not in extends_options and filename is None: raise ConfigurationError( "%s you need to specify a 'file', e.g. 'file: something.yml'" % error_prefix ) def get_unsupported_config_msg(service_name, error_key): msg = "Unsupported config option for '{}' service: '{}'".format(service_name, error_key) if error_key in DOCKER_CONFIG_HINTS: msg += " (did you mean '{}'?)".format(DOCKER_CONFIG_HINTS[error_key]) return msg def anglicize_validator(validator): if validator in ["array", "object"]: return 'an ' + validator return 'a ' + validator def handle_error_for_schema_with_id(error, service_name): schema_id = error.schema['id'] if schema_id == 'fields_schema.json' and error.validator == 'additionalProperties': return "Invalid service name '{}' - only {} characters are allowed".format( # The service_name is the key to the json object list(error.instance)[0], VALID_NAME_CHARS) if schema_id == '#/definitions/constraints': if 'image' in error.instance and 'build' in error.instance: return ( "Service '{}' has both an image and build path specified. " "A service can either be built to image or use an existing " "image, not both.".format(service_name)) if 'image' not in error.instance and 'build' not in error.instance: return ( "Service '{}' has neither an image nor a build path " "specified. Exactly one must be provided.".format(service_name)) if 'image' in error.instance and 'dockerfile' in error.instance: return ( "Service '{}' has both an image and alternate Dockerfile. 
" "A service can either be built to image or use an existing " "image, not both.".format(service_name)) if schema_id == '#/definitions/service': if error.validator == 'additionalProperties': invalid_config_key = parse_key_from_error_msg(error) return get_unsupported_config_msg(service_name, invalid_config_key) def handle_generic_service_error(error, service_name): config_key = " ".join("'%s'" % k for k in error.path) msg_format = None error_msg = error.message if error.validator == 'oneOf': msg_format = "Service '{}' configuration key {} {}" error_msg = _parse_oneof_validator(error) elif error.validator == 'type': msg_format = ("Service '{}' configuration key {} contains an invalid " "type, it should be {}") error_msg = _parse_valid_types_from_validator(error.validator_value) # TODO: no test case for this branch, there are no config options # which exercise this branch elif error.validator == 'required': msg_format = "Service '{}' configuration key '{}' is invalid, {}" elif error.validator == 'dependencies': msg_format = "Service '{}' configuration key '{}' is invalid: {}" config_key = list(error.validator_value.keys())[0] required_keys = ",".join(error.validator_value[config_key]) error_msg = "when defining '{}' you must set '{}' as well".format( config_key, required_keys) elif error.cause: error_msg = six.text_type(error.cause) msg_format = "Service '{}' configuration key {} is invalid: {}" elif error.path: msg_format = "Service '{}' configuration key {} value {}" if msg_format: return msg_format.format(service_name, config_key, error_msg) return error.message def parse_key_from_error_msg(error): return error.message.split("'")[1] def _parse_valid_types_from_validator(validator): """A validator value can be either an array of valid types or a string of a valid type. Parse the valid types and prefix with the correct article. """ if not isinstance(validator, list): return anglicize_validator(validator) if len(validator) == 1: return anglicize_validator(validator[0]) return "{}, or {}".format( ", ".join([anglicize_validator(validator[0])] + validator[1:-1]), anglicize_validator(validator[-1])) def _parse_oneof_validator(error): """oneOf has multiple schemas, so we need to reason about which schema, sub schema or constraint the validation is failing on. Inspecting the context value of a ValidationError gives us information about which sub schema failed and which kind of error it is. """ types = [] for context in error.context: if context.validator == 'required': return context.message if context.validator == 'additionalProperties': invalid_config_key = parse_key_from_error_msg(context) return "contains unsupported option: '{}'".format(invalid_config_key) if context.path: invalid_config_key = " ".join( "'{}' ".format(fragment) for fragment in context.path if isinstance(fragment, six.string_types) ) return "{}contains {}, which is an invalid type, it should be {}".format( invalid_config_key, context.instance, _parse_valid_types_from_validator(context.validator_value)) if context.validator == 'uniqueItems': return "contains non unique items, please remove duplicates from {}".format( context.instance) if context.validator == 'type': types.append(context.validator_value) valid_types = _parse_valid_types_from_validator(types) return "contains an invalid type, it should be {}".format(valid_types) def process_errors(errors, service_name=None): """jsonschema gives us an error tree full of information to explain what has gone wrong. 
Process each error and pull out relevant information and re-write helpful error messages that are relevant. """ def format_error_message(error, service_name): if not service_name and error.path: # field_schema errors will have service name on the path service_name = error.path.popleft() if 'id' in error.schema: error_msg = handle_error_for_schema_with_id(error, service_name) if error_msg: return error_msg return handle_generic_service_error(error, service_name) return '\n'.join(format_error_message(error, service_name) for error in errors) def validate_against_fields_schema(config, filename): _validate_against_schema( config, "fields_schema.json", format_checker=["ports", "expose", "bool-value-in-mapping"], filename=filename) def validate_against_service_schema(config, service_name): _validate_against_schema( config, "service_schema.json", format_checker=["ports"], service_name=service_name) def _validate_against_schema( config, schema_filename, format_checker=(), service_name=None, filename=None): config_source_dir = os.path.dirname(os.path.abspath(__file__)) if sys.platform == "win32": file_pre_fix = "///" config_source_dir = config_source_dir.replace('\\', '/') else: file_pre_fix = "//" resolver_full_path = "file:{}{}/".format(file_pre_fix, config_source_dir) schema_file = os.path.join(config_source_dir, schema_filename) with open(schema_file, "r") as schema_fh: schema = json.load(schema_fh) resolver = RefResolver(resolver_full_path, schema) validation_output = Draft4Validator( schema, resolver=resolver, format_checker=FormatChecker(format_checker)) errors = [error for error in sorted(validation_output.iter_errors(config), key=str)] if not errors: return error_msg = process_errors(errors, service_name) file_msg = " in file '{}'".format(filename) if filename else '' raise ConfigurationError("Validation failed{}, reason(s):\n{}".format( file_msg, error_msg)) compose-1.5.2/compose/const.py000066400000000000000000000007521263011261000163250ustar00rootroot00000000000000import os import sys DEFAULT_TIMEOUT = 10 HTTP_TIMEOUT = int(os.environ.get('COMPOSE_HTTP_TIMEOUT', os.environ.get('DOCKER_CLIENT_TIMEOUT', 60))) IS_WINDOWS_PLATFORM = (sys.platform == "win32") LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number' LABEL_ONE_OFF = 'com.docker.compose.oneoff' LABEL_PROJECT = 'com.docker.compose.project' LABEL_SERVICE = 'com.docker.compose.service' LABEL_VERSION = 'com.docker.compose.version' LABEL_CONFIG_HASH = 'com.docker.compose.config-hash' compose-1.5.2/compose/container.py000066400000000000000000000160771263011261000171700ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from functools import reduce import six from .const import LABEL_CONTAINER_NUMBER from .const import LABEL_PROJECT from .const import LABEL_SERVICE class Container(object): """ Represents a Docker container, constructed from the output of GET /containers/:id:/json. """ def __init__(self, client, dictionary, has_been_inspected=False): self.client = client self.dictionary = dictionary self.has_been_inspected = has_been_inspected self.log_stream = None @classmethod def from_ps(cls, client, dictionary, **kwargs): """ Construct a container object from the output of GET /containers/json. 
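        Only 'Id', 'Image' and 'Names' are taken from the ps output; everything
        else is fetched lazily via inspect() the first time it is needed.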
""" name = get_container_name(dictionary) if name is None: return None new_dictionary = { 'Id': dictionary['Id'], 'Image': dictionary['Image'], 'Name': '/' + name, } return cls(client, new_dictionary, **kwargs) @classmethod def from_id(cls, client, id): return cls(client, client.inspect_container(id)) @classmethod def create(cls, client, **options): response = client.create_container(**options) return cls.from_id(client, response['Id']) @property def id(self): return self.dictionary['Id'] @property def image(self): return self.dictionary['Image'] @property def image_config(self): return self.client.inspect_image(self.image) @property def short_id(self): return self.id[:10] @property def name(self): return self.dictionary['Name'][1:] @property def service(self): return self.labels.get(LABEL_SERVICE) @property def name_without_project(self): project = self.labels.get(LABEL_PROJECT) if self.name.startswith('{0}_{1}'.format(project, self.service)): return '{0}_{1}'.format(self.service, self.number) else: return self.name @property def number(self): number = self.labels.get(LABEL_CONTAINER_NUMBER) if not number: raise ValueError("Container {0} does not have a {1} label".format( self.short_id, LABEL_CONTAINER_NUMBER)) return int(number) @property def ports(self): self.inspect_if_not_inspected() return self.get('NetworkSettings.Ports') or {} @property def human_readable_ports(self): def format_port(private, public): if not public: return private return '{HostIp}:{HostPort}->{private}'.format( private=private, **public[0]) return ', '.join(format_port(*item) for item in sorted(six.iteritems(self.ports))) @property def labels(self): return self.get('Config.Labels') or {} @property def log_config(self): return self.get('HostConfig.LogConfig') or None @property def human_readable_state(self): if self.is_paused: return 'Paused' if self.is_running: return 'Ghost' if self.get('State.Ghost') else 'Up' else: return 'Exit %s' % self.get('State.ExitCode') @property def human_readable_command(self): entrypoint = self.get('Config.Entrypoint') or [] cmd = self.get('Config.Cmd') or [] return ' '.join(entrypoint + cmd) @property def environment(self): return dict(var.split("=", 1) for var in self.get('Config.Env') or []) @property def is_running(self): return self.get('State.Running') @property def is_paused(self): return self.get('State.Paused') @property def log_driver(self): return self.get('HostConfig.LogConfig.Type') @property def has_api_logs(self): log_type = self.log_driver return not log_type or log_type != 'none' def attach_log_stream(self): """A log stream can only be attached if the container uses a json-file log driver. """ if self.has_api_logs: self.log_stream = self.attach(stdout=True, stderr=True, stream=True) def get(self, key): """Return a value from the container or None if the value is not set. 
:param key: a string using dotted notation for nested dictionary lookups """ self.inspect_if_not_inspected() def get_value(dictionary, key): return (dictionary or {}).get(key) return reduce(get_value, key.split('.'), self.dictionary) def get_local_port(self, port, protocol='tcp'): port = self.ports.get("%s/%s" % (port, protocol)) return "{HostIp}:{HostPort}".format(**port[0]) if port else None def start(self, **options): return self.client.start(self.id, **options) def stop(self, **options): return self.client.stop(self.id, **options) def pause(self, **options): return self.client.pause(self.id, **options) def unpause(self, **options): return self.client.unpause(self.id, **options) def kill(self, **options): return self.client.kill(self.id, **options) def restart(self, **options): return self.client.restart(self.id, **options) def remove(self, **options): return self.client.remove_container(self.id, **options) def rename_to_tmp_name(self): """Rename the container to a hopefully unique temporary container name by prepending the short id. """ self.client.rename( self.id, '%s_%s' % (self.short_id, self.name) ) def inspect_if_not_inspected(self): if not self.has_been_inspected: self.inspect() def wait(self): return self.client.wait(self.id) def logs(self, *args, **kwargs): return self.client.logs(self.id, *args, **kwargs) def inspect(self): self.dictionary = self.client.inspect_container(self.id) self.has_been_inspected = True return self.dictionary # TODO: only used by tests, move to test module def links(self): links = [] for container in self.client.containers(): for name in container['Names']: bits = name.split('/') if len(bits) > 2 and bits[1] == self.name: links.append(bits[2]) return links def attach(self, *args, **kwargs): return self.client.attach(self.id, *args, **kwargs) def __repr__(self): return '<Container: %s (%s)>' % (self.name, self.id[:6]) def __eq__(self, other): if type(self) != type(other): return False return self.id == other.id def __hash__(self): return self.id.__hash__() def get_container_name(container): if not container.get('Name') and not container.get('Names'): return None # inspect if 'Name' in container: return container['Name'] # ps shortest_name = min(container['Names'], key=lambda n: len(n.split('/'))) return shortest_name.split('/')[-1] compose-1.5.2/compose/legacy.py000066400000000000000000000113711263011261000164420ustar00rootroot00000000000000import logging import re from .const import LABEL_VERSION from .container import Container from .container import get_container_name log = logging.getLogger(__name__) # TODO: remove this section when migrate_project_to_labels is removed NAME_RE = re.compile(r'^([^_]+)_([^_]+)_(run_)?(\d+)$') ERROR_MESSAGE_FORMAT = """ Compose found the following containers without labels: {names_list} As of Compose 1.3.0, containers are identified with labels instead of naming convention. If you want to continue using these containers, run: $ docker-compose migrate-to-labels Alternatively, remove them: $ docker rm -f {rm_args} """ ONE_OFF_ADDENDUM_FORMAT = """ You should also remove your one-off containers: $ docker rm -f {rm_args} """ ONE_OFF_ERROR_MESSAGE_FORMAT = """ Compose found the following containers without labels: {names_list} As of Compose 1.3.0, containers are identified with labels instead of naming convention.
Remove them before continuing: $ docker rm -f {rm_args} """ def check_for_legacy_containers( client, project, services, allow_one_off=True): """Check if there are containers named using the old naming convention and warn the user that those containers may need to be migrated to using labels, so that compose can find them. """ containers = get_legacy_containers(client, project, services, one_off=False) if containers: one_off_containers = get_legacy_containers(client, project, services, one_off=True) raise LegacyContainersError( [c.name for c in containers], [c.name for c in one_off_containers], ) if not allow_one_off: one_off_containers = get_legacy_containers(client, project, services, one_off=True) if one_off_containers: raise LegacyOneOffContainersError( [c.name for c in one_off_containers], ) class LegacyError(Exception): def __unicode__(self): return self.msg __str__ = __unicode__ class LegacyContainersError(LegacyError): def __init__(self, names, one_off_names): self.names = names self.one_off_names = one_off_names self.msg = ERROR_MESSAGE_FORMAT.format( names_list="\n".join(" {}".format(name) for name in names), rm_args=" ".join(names), ) if one_off_names: self.msg += ONE_OFF_ADDENDUM_FORMAT.format(rm_args=" ".join(one_off_names)) class LegacyOneOffContainersError(LegacyError): def __init__(self, one_off_names): self.one_off_names = one_off_names self.msg = ONE_OFF_ERROR_MESSAGE_FORMAT.format( names_list="\n".join(" {}".format(name) for name in one_off_names), rm_args=" ".join(one_off_names), ) def add_labels(project, container): project_name, service_name, one_off, number = NAME_RE.match(container.name).groups() if project_name != project.name or service_name not in project.service_names: return service = project.get_service(service_name) service.recreate_container(container) def migrate_project_to_labels(project): log.info("Running migration to labels for project %s", project.name) containers = get_legacy_containers( project.client, project.name, project.service_names, one_off=False, ) for container in containers: add_labels(project, container) def get_legacy_containers( client, project, services, one_off=False): return list(_get_legacy_containers_iter( client, project, services, one_off=one_off, )) def _get_legacy_containers_iter( client, project, services, one_off=False): containers = client.containers(all=True) for service in services: for container in containers: if LABEL_VERSION in (container.get('Labels') or {}): continue name = get_container_name(container) if has_container(project, service, name, one_off=one_off): yield Container.from_ps(client, container) def has_container(project, service, name, one_off=False): if not name or not is_valid_name(name, one_off): return False container_project, container_service, _container_number = parse_name(name) return container_project == project and container_service == service def is_valid_name(name, one_off=False): match = NAME_RE.match(name) if match is None: return False if one_off: return match.group(3) == 'run_' else: return match.group(3) is None def parse_name(name): match = NAME_RE.match(name) (project, service_name, _, suffix) = match.groups() return (project, service_name, int(suffix)) compose-1.5.2/compose/progress_stream.py000066400000000000000000000047151263011261000204210ustar00rootroot00000000000000from compose import utils class StreamOutputError(Exception): pass def stream_output(output, stream): is_terminal = hasattr(stream, 'isatty') and stream.isatty() stream = utils.get_output_stream(stream) all_events = [] lines = 
{} diff = 0 for event in utils.json_stream(output): all_events.append(event) is_progress_event = 'progress' in event or 'progressDetail' in event if not is_progress_event: print_output_event(event, stream, is_terminal) stream.flush() continue if not is_terminal: continue # if it's a progress event and we have a terminal, then display the progress bars image_id = event.get('id') if not image_id: continue if image_id in lines: diff = len(lines) - lines[image_id] else: lines[image_id] = len(lines) stream.write("\n") diff = 0 # move cursor up `diff` rows stream.write("%c[%dA" % (27, diff)) print_output_event(event, stream, is_terminal) if 'id' in event: # move cursor back down stream.write("%c[%dB" % (27, diff)) stream.flush() return all_events def print_output_event(event, stream, is_terminal): if 'errorDetail' in event: raise StreamOutputError(event['errorDetail']['message']) terminator = '' if is_terminal and 'stream' not in event: # erase current line stream.write("%c[2K\r" % 27) terminator = "\r" elif 'progressDetail' in event: return if 'time' in event: stream.write("[%s] " % event['time']) if 'id' in event: stream.write("%s: " % event['id']) if 'from' in event: stream.write("(from %s) " % event['from']) status = event.get('status', '') if 'progress' in event: stream.write("%s %s%s" % (status, event['progress'], terminator)) elif 'progressDetail' in event: detail = event['progressDetail'] total = detail.get('total') if 'current' in detail and total: percentage = float(detail['current']) / float(total) * 100 stream.write('%s (%.1f%%)%s' % (status, percentage, terminator)) else: stream.write('%s%s' % (status, terminator)) elif 'stream' in event: stream.write("%s%s" % (event['stream'], terminator)) else: stream.write("%s%s\n" % (status, terminator)) compose-1.5.2/compose/project.py000066400000000000000000000320711263011261000166440ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging from functools import reduce from docker.errors import APIError from docker.errors import NotFound from .config import ConfigurationError from .config.sort_services import get_service_name_from_net from .const import DEFAULT_TIMEOUT from .const import LABEL_ONE_OFF from .const import LABEL_PROJECT from .const import LABEL_SERVICE from .container import Container from .legacy import check_for_legacy_containers from .service import ContainerNet from .service import ConvergenceStrategy from .service import Net from .service import Service from .service import ServiceNet from .utils import parallel_execute log = logging.getLogger(__name__) class Project(object): """ A collection of services. """ def __init__(self, name, services, client, use_networking=False, network_driver=None): self.name = name self.services = services self.client = client self.use_networking = use_networking self.network_driver = network_driver def labels(self, one_off=False): return [ '{0}={1}'.format(LABEL_PROJECT, self.name), '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False"), ] @classmethod def from_dicts(cls, name, service_dicts, client, use_networking=False, network_driver=None): """ Construct a ServiceCollection from a list of dicts representing services. 
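        Each service dict is expected to carry its own 'name' key; 'links',
        'volumes_from' and 'net' entries are resolved against the other
        services in the project before each Service object is constructed.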
""" project = cls(name, [], client, use_networking=use_networking, network_driver=network_driver) if use_networking: remove_links(service_dicts) for service_dict in service_dicts: links = project.get_links(service_dict) volumes_from = project.get_volumes_from(service_dict) net = project.get_net(service_dict) project.services.append( Service( client=client, project=name, use_networking=use_networking, links=links, net=net, volumes_from=volumes_from, **service_dict)) return project @property def service_names(self): return [service.name for service in self.services] def get_service(self, name): """ Retrieve a service by name. Raises NoSuchService if the named service does not exist. """ for service in self.services: if service.name == name: return service raise NoSuchService(name) def validate_service_names(self, service_names): """ Validate that the given list of service names only contains valid services. Raises NoSuchService if one of the names is invalid. """ valid_names = self.service_names for name in service_names: if name not in valid_names: raise NoSuchService(name) def get_services(self, service_names=None, include_deps=False): """ Returns a list of this project's services filtered by the provided list of names, or all services if service_names is None or []. If include_deps is specified, returns a list including the dependencies for service_names, in order of dependency. Preserves the original order of self.services where possible, reordering as needed to resolve dependencies. Raises NoSuchService if any of the named services do not exist. """ if service_names is None or len(service_names) == 0: return self.get_services( service_names=self.service_names, include_deps=include_deps ) else: unsorted = [self.get_service(name) for name in service_names] services = [s for s in self.services if s in unsorted] if include_deps: services = reduce(self._inject_deps, services, []) uniques = [] [uniques.append(s) for s in services if s not in uniques] return uniques def get_links(self, service_dict): links = [] if 'links' in service_dict: for link in service_dict.get('links', []): if ':' in link: service_name, link_name = link.split(':', 1) else: service_name, link_name = link, None try: links.append((self.get_service(service_name), link_name)) except NoSuchService: raise ConfigurationError( 'Service "%s" has a link to service "%s" which does not ' 'exist.' % (service_dict['name'], service_name)) del service_dict['links'] return links def get_volumes_from(self, service_dict): volumes_from = [] if 'volumes_from' in service_dict: for volume_from_spec in service_dict.get('volumes_from', []): # Get service try: service = self.get_service(volume_from_spec.source) volume_from_spec = volume_from_spec._replace(source=service) except NoSuchService: try: container = Container.from_id(self.client, volume_from_spec.source) volume_from_spec = volume_from_spec._replace(source=container) except APIError: raise ConfigurationError( 'Service "%s" mounts volumes from "%s", which is ' 'not the name of a service or container.' 
% ( service_dict['name'], volume_from_spec.source)) volumes_from.append(volume_from_spec) del service_dict['volumes_from'] return volumes_from def get_net(self, service_dict): net = service_dict.pop('net', None) if not net: if self.use_networking: return Net(self.name) return Net(None) net_name = get_service_name_from_net(net) if not net_name: return Net(net) try: return ServiceNet(self.get_service(net_name)) except NoSuchService: pass try: return ContainerNet(Container.from_id(self.client, net_name)) except APIError: raise ConfigurationError( 'Service "%s" is trying to use the network of "%s", ' 'which is not the name of a service or container.' % ( service_dict['name'], net_name)) def start(self, service_names=None, **options): for service in self.get_services(service_names): service.start(**options) def stop(self, service_names=None, **options): parallel_execute( objects=self.containers(service_names), obj_callable=lambda c: c.stop(**options), msg_index=lambda c: c.name, msg="Stopping" ) def pause(self, service_names=None, **options): for service in reversed(self.get_services(service_names)): service.pause(**options) def unpause(self, service_names=None, **options): for service in self.get_services(service_names): service.unpause(**options) def kill(self, service_names=None, **options): parallel_execute( objects=self.containers(service_names), obj_callable=lambda c: c.kill(**options), msg_index=lambda c: c.name, msg="Killing" ) def remove_stopped(self, service_names=None, **options): all_containers = self.containers(service_names, stopped=True) stopped_containers = [c for c in all_containers if not c.is_running] parallel_execute( objects=stopped_containers, obj_callable=lambda c: c.remove(**options), msg_index=lambda c: c.name, msg="Removing" ) def restart(self, service_names=None, **options): for service in self.get_services(service_names): service.restart(**options) def build(self, service_names=None, no_cache=False, pull=False, force_rm=False): for service in self.get_services(service_names): if service.can_be_built(): service.build(no_cache, pull, force_rm) else: log.info('%s uses an image, skipping' % service.name) def up(self, service_names=None, start_deps=True, strategy=ConvergenceStrategy.changed, do_build=True, timeout=DEFAULT_TIMEOUT, detached=False): services = self.get_services(service_names, include_deps=start_deps) for service in services: service.remove_duplicate_containers() plans = self._get_convergence_plans(services, strategy) if self.use_networking and self.uses_default_network(): self.ensure_network_exists() return [ container for service in services for container in service.execute_convergence_plan( plans[service.name], do_build=do_build, timeout=timeout, detached=detached ) ] def _get_convergence_plans(self, services, strategy): plans = {} for service in services: updated_dependencies = [ name for name in service.get_dependency_names() if name in plans and plans[name].action in ('recreate', 'create') ] if updated_dependencies and strategy.allows_recreate: log.debug('%s has upstream changes (%s)', service.name, ", ".join(updated_dependencies)) plan = service.convergence_plan(ConvergenceStrategy.always) else: plan = service.convergence_plan(strategy) plans[service.name] = plan return plans def pull(self, service_names=None, ignore_pull_failures=False): for service in self.get_services(service_names, include_deps=False): service.pull(ignore_pull_failures) def containers(self, service_names=None, stopped=False, one_off=False): if service_names: 
self.validate_service_names(service_names) else: service_names = self.service_names containers = list(filter(None, [ Container.from_ps(self.client, container) for container in self.client.containers( all=stopped, filters={'label': self.labels(one_off=one_off)})])) def matches_service_names(container): return container.labels.get(LABEL_SERVICE) in service_names if not containers: check_for_legacy_containers( self.client, self.name, self.service_names, ) return [c for c in containers if matches_service_names(c)] def get_network(self): try: return self.client.inspect_network(self.name) except NotFound: return None def ensure_network_exists(self): # TODO: recreate network if driver has changed? if self.get_network() is None: log.info( 'Creating network "{}" with driver "{}"' .format(self.name, self.network_driver) ) self.client.create_network(self.name, driver=self.network_driver) def remove_network(self): network = self.get_network() if network: self.client.remove_network(network['Id']) def uses_default_network(self): return any(service.net.mode == self.name for service in self.services) def _inject_deps(self, acc, service): dep_names = service.get_dependency_names() if len(dep_names) > 0: dep_services = self.get_services( service_names=list(set(dep_names)), include_deps=True ) else: dep_services = [] dep_services.append(service) return acc + dep_services def remove_links(service_dicts): services_with_links = [s for s in service_dicts if 'links' in s] if not services_with_links: return if len(services_with_links) == 1: prefix = '"{}" defines'.format(services_with_links[0]['name']) else: prefix = 'Some services ({}) define'.format( ", ".join('"{}"'.format(s['name']) for s in services_with_links)) log.warn( '\n{} links, which are not compatible with Docker networking and will be ignored.\n' 'Future versions of Docker will not support links - you should remove them for ' 'forwards-compatibility.\n'.format(prefix)) for s in services_with_links: del s['links'] class NoSuchService(Exception): def __init__(self, name): self.name = name self.msg = "No such service: %s" % self.name def __str__(self): return self.msg compose-1.5.2/compose/service.py000066400000000000000000000763541263011261000166520ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging import re import sys from collections import namedtuple from operator import attrgetter import enum import six from docker.errors import APIError from docker.utils import LogConfig from docker.utils.ports import build_port_bindings from docker.utils.ports import split_port from . 
import __version__ from .config import DOCKER_CONFIG_KEYS from .config import merge_environment from .config.types import VolumeSpec from .const import DEFAULT_TIMEOUT from .const import LABEL_CONFIG_HASH from .const import LABEL_CONTAINER_NUMBER from .const import LABEL_ONE_OFF from .const import LABEL_PROJECT from .const import LABEL_SERVICE from .const import LABEL_VERSION from .container import Container from .legacy import check_for_legacy_containers from .progress_stream import stream_output from .progress_stream import StreamOutputError from .utils import json_hash from .utils import parallel_execute log = logging.getLogger(__name__) DOCKER_START_KEYS = [ 'cap_add', 'cap_drop', 'cgroup_parent', 'devices', 'dns', 'dns_search', 'env_file', 'extra_hosts', 'ipc', 'read_only', 'net', 'log_driver', 'log_opt', 'mem_limit', 'memswap_limit', 'pid', 'privileged', 'restart', 'volumes_from', 'security_opt', ] class BuildError(Exception): def __init__(self, service, reason): self.service = service self.reason = reason class NeedsBuildError(Exception): def __init__(self, service): self.service = service class NoSuchImageError(Exception): pass ServiceName = namedtuple('ServiceName', 'project service number') ConvergencePlan = namedtuple('ConvergencePlan', 'action containers') @enum.unique class ConvergenceStrategy(enum.Enum): """Enumeration for all possible convergence strategies. Values refer to when containers should be recreated. """ changed = 1 always = 2 never = 3 @property def allows_recreate(self): return self is not type(self).never class Service(object): def __init__( self, name, client=None, project='default', use_networking=False, links=None, volumes_from=None, net=None, **options ): self.name = name self.client = client self.project = project self.use_networking = use_networking self.links = links or [] self.volumes_from = volumes_from or [] self.net = net or Net(None) self.options = options def containers(self, stopped=False, one_off=False, filters={}): filters.update({'label': self.labels(one_off=one_off)}) containers = list(filter(None, [ Container.from_ps(self.client, container) for container in self.client.containers( all=stopped, filters=filters)])) if not containers: check_for_legacy_containers( self.client, self.project, [self.name], ) return containers def get_container(self, number=1): """Return a :class:`compose.container.Container` for this service. The container must be active, and match `number`. 
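        Raises ValueError if no active container with that number exists.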
""" labels = self.labels() + ['{0}={1}'.format(LABEL_CONTAINER_NUMBER, number)] for container in self.client.containers(filters={'label': labels}): return Container.from_ps(self.client, container) raise ValueError("No container found for %s_%s" % (self.name, number)) def start(self, **options): for c in self.containers(stopped=True): self.start_container_if_stopped(c, **options) # TODO: remove these functions, project takes care of starting/stopping, def stop(self, **options): for c in self.containers(): log.info("Stopping %s" % c.name) c.stop(**options) def pause(self, **options): for c in self.containers(filters={'status': 'running'}): log.info("Pausing %s" % c.name) c.pause(**options) def unpause(self, **options): for c in self.containers(filters={'status': 'paused'}): log.info("Unpausing %s" % c.name) c.unpause() def kill(self, **options): for c in self.containers(): log.info("Killing %s" % c.name) c.kill(**options) def restart(self, **options): for c in self.containers(stopped=True): log.info("Restarting %s" % c.name) c.restart(**options) # end TODO def scale(self, desired_num, timeout=DEFAULT_TIMEOUT): """ Adjusts the number of containers to the specified number and ensures they are running. - creates containers until there are at least `desired_num` - stops containers until there are at most `desired_num` running - starts containers until there are at least `desired_num` running - removes all stopped containers """ if self.custom_container_name() and desired_num > 1: log.warn('The "%s" service is using the custom container name "%s". ' 'Docker requires each container to have a unique name. ' 'Remove the custom name to scale the service.' % (self.name, self.custom_container_name())) if self.specifies_host_port(): log.warn('The "%s" service specifies a port on the host. If multiple containers ' 'for this service are created on a single host, the port will clash.' 
% self.name) def create_and_start(service, number): container = service.create_container(number=number, quiet=True) container.start() return container running_containers = self.containers(stopped=False) num_running = len(running_containers) if desired_num == num_running: # do nothing as we already have the desired number log.info('Desired container number already achieved') return if desired_num > num_running: # we need to start/create until we have desired_num all_containers = self.containers(stopped=True) if num_running != len(all_containers): # we have some stopped containers, let's start them up again stopped_containers = sorted([c for c in all_containers if not c.is_running], key=attrgetter('number')) num_stopped = len(stopped_containers) if num_stopped + num_running > desired_num: num_to_start = desired_num - num_running containers_to_start = stopped_containers[:num_to_start] else: containers_to_start = stopped_containers parallel_execute( objects=containers_to_start, obj_callable=lambda c: c.start(), msg_index=lambda c: c.name, msg="Starting" ) num_running += len(containers_to_start) num_to_create = desired_num - num_running next_number = self._next_container_number() container_numbers = [ number for number in range( next_number, next_number + num_to_create ) ] parallel_execute( objects=container_numbers, obj_callable=lambda n: create_and_start(service=self, number=n), msg_index=lambda n: n, msg="Creating and starting" ) if desired_num < num_running: num_to_stop = num_running - desired_num sorted_running_containers = sorted(running_containers, key=attrgetter('number')) containers_to_stop = sorted_running_containers[-num_to_stop:] parallel_execute( objects=containers_to_stop, obj_callable=lambda c: c.stop(timeout=timeout), msg_index=lambda c: c.name, msg="Stopping" ) self.remove_stopped() def remove_stopped(self, **options): containers = [c for c in self.containers(stopped=True) if not c.is_running] parallel_execute( objects=containers, obj_callable=lambda c: c.remove(**options), msg_index=lambda c: c.name, msg="Removing" ) def create_container(self, one_off=False, do_build=True, previous_container=None, number=None, quiet=False, **override_options): """ Create a container for this service. If the image doesn't exist, attempt to pull it. 
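        The container is only created, never started; callers (for example
        execute_convergence_plan) are responsible for starting it.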
""" self.ensure_image_exists(do_build=do_build) container_options = self._get_container_create_options( override_options, number or self._next_container_number(one_off=one_off), one_off=one_off, previous_container=previous_container, ) if 'name' in container_options and not quiet: log.info("Creating %s" % container_options['name']) return Container.create(self.client, **container_options) def ensure_image_exists(self, do_build=True): try: self.image() return except NoSuchImageError: pass if self.can_be_built(): if do_build: self.build() else: raise NeedsBuildError(self) else: self.pull() def image(self): try: return self.client.inspect_image(self.image_name) except APIError as e: if e.response.status_code == 404 and e.explanation and 'No such image' in str(e.explanation): raise NoSuchImageError("Image '{}' not found".format(self.image_name)) else: raise @property def image_name(self): if self.can_be_built(): return self.full_name else: return self.options['image'] def convergence_plan(self, strategy=ConvergenceStrategy.changed): containers = self.containers(stopped=True) if not containers: return ConvergencePlan('create', []) if strategy is ConvergenceStrategy.never: return ConvergencePlan('start', containers) if ( strategy is ConvergenceStrategy.always or self._containers_have_diverged(containers) ): return ConvergencePlan('recreate', containers) stopped = [c for c in containers if not c.is_running] if stopped: return ConvergencePlan('start', stopped) return ConvergencePlan('noop', containers) def _containers_have_diverged(self, containers): config_hash = None try: config_hash = self.config_hash except NoSuchImageError as e: log.debug( 'Service %s has diverged: %s', self.name, six.text_type(e), ) return True has_diverged = False for c in containers: container_config_hash = c.labels.get(LABEL_CONFIG_HASH, None) if container_config_hash != config_hash: log.debug( '%s has diverged: %s != %s', c.name, container_config_hash, config_hash, ) has_diverged = True return has_diverged def execute_convergence_plan(self, plan, do_build=True, timeout=DEFAULT_TIMEOUT, detached=False): (action, containers) = plan should_attach_logs = not detached if action == 'create': container = self.create_container(do_build=do_build) if should_attach_logs: container.attach_log_stream() container.start() return [container] elif action == 'recreate': return [ self.recreate_container( container, do_build=do_build, timeout=timeout, attach_logs=should_attach_logs ) for container in containers ] elif action == 'start': for container in containers: self.start_container_if_stopped(container, attach_logs=should_attach_logs) return containers elif action == 'noop': for c in containers: log.info("%s is up-to-date" % c.name) return containers else: raise Exception("Invalid action: {}".format(action)) def recreate_container( self, container, do_build=False, timeout=DEFAULT_TIMEOUT, attach_logs=False): """Recreate a container. The original container is renamed to a temporary name so that data volumes can be copied to the new container, before the original container is removed. 
""" log.info("Recreating %s" % container.name) container.stop(timeout=timeout) container.rename_to_tmp_name() new_container = self.create_container( do_build=do_build, previous_container=container, number=container.labels.get(LABEL_CONTAINER_NUMBER), quiet=True, ) if attach_logs: new_container.attach_log_stream() new_container.start() container.remove() return new_container def start_container_if_stopped(self, container, attach_logs=False): if not container.is_running: log.info("Starting %s" % container.name) if attach_logs: container.attach_log_stream() container.start() return container def remove_duplicate_containers(self, timeout=DEFAULT_TIMEOUT): for c in self.duplicate_containers(): log.info('Removing %s' % c.name) c.stop(timeout=timeout) c.remove() def duplicate_containers(self): containers = sorted( self.containers(stopped=True), key=lambda c: c.get('Created'), ) numbers = set() for c in containers: if c.number in numbers: yield c else: numbers.add(c.number) @property def config_hash(self): return json_hash(self.config_dict()) def config_dict(self): return { 'options': self.options, 'image_id': self.image()['Id'], 'links': self.get_link_names(), 'net': self.net.id, 'volumes_from': [ (v.source.name, v.mode) for v in self.volumes_from if isinstance(v.source, Service) ], } def get_dependency_names(self): net_name = self.net.service_name return (self.get_linked_service_names() + self.get_volumes_from_names() + ([net_name] if net_name else [])) def get_linked_service_names(self): return [service.name for (service, _) in self.links] def get_link_names(self): return [(service.name, alias) for service, alias in self.links] def get_volumes_from_names(self): return [s.source.name for s in self.volumes_from if isinstance(s.source, Service)] def get_container_name(self, number, one_off=False): # TODO: Implement issue #652 here return build_container_name(self.project, self.name, number, one_off) # TODO: this would benefit from github.com/docker/docker/pull/14699 # to remove the need to inspect every container def _next_container_number(self, one_off=False): containers = filter(None, [ Container.from_ps(self.client, container) for container in self.client.containers( all=True, filters={'label': self.labels(one_off=one_off)}) ]) numbers = [c.number for c in containers] return 1 if not numbers else max(numbers) + 1 def _get_links(self, link_to_self): if self.use_networking: return [] links = [] for service, link_name in self.links: for container in service.containers(): links.append((container.name, link_name or service.name)) links.append((container.name, container.name)) links.append((container.name, container.name_without_project)) if link_to_self: for container in self.containers(): links.append((container.name, self.name)) links.append((container.name, container.name)) links.append((container.name, container.name_without_project)) for external_link in self.options.get('external_links') or []: if ':' not in external_link: link_name = external_link else: external_link, link_name = external_link.split(':') links.append((external_link, link_name)) return links def _get_volumes_from(self): volumes_from = [] for volume_from_spec in self.volumes_from: volumes = build_volume_from(volume_from_spec) volumes_from.extend(volumes) return volumes_from def _get_container_create_options( self, override_options, number, one_off=False, previous_container=None): add_config_hash = (not one_off and not override_options) container_options = dict( (k, self.options[k]) for k in DOCKER_CONFIG_KEYS if k in 
self.options) container_options.update(override_options) if self.custom_container_name() and not one_off: container_options['name'] = self.custom_container_name() elif not container_options.get('name'): container_options['name'] = self.get_container_name(number, one_off) if 'detach' not in container_options: container_options['detach'] = True # If a qualified hostname was given, split it into an # unqualified hostname and a domainname unless domainname # was also given explicitly. This matches the behavior of # the official Docker CLI in that scenario. if ('hostname' in container_options and 'domainname' not in container_options and '.' in container_options['hostname']): parts = container_options['hostname'].partition('.') container_options['hostname'] = parts[0] container_options['domainname'] = parts[2] if 'ports' in container_options or 'expose' in self.options: ports = [] all_ports = container_options.get('ports', []) + self.options.get('expose', []) for port_range in all_ports: internal_range, _ = split_port(port_range) for port in internal_range: port = str(port) if '/' in port: port = tuple(port.split('/')) ports.append(port) container_options['ports'] = ports override_options['binds'] = merge_volume_bindings( container_options.get('volumes') or [], previous_container) if 'volumes' in container_options: container_options['volumes'] = dict( (v.internal, {}) for v in container_options['volumes']) container_options['environment'] = merge_environment( self.options.get('environment'), override_options.get('environment')) if previous_container: container_options['environment']['affinity:container'] = ('=' + previous_container.id) container_options['image'] = self.image_name container_options['labels'] = build_container_labels( container_options.get('labels', {}), self.labels(one_off=one_off), number, self.config_hash if add_config_hash else None) # Delete options which are only used when starting for key in DOCKER_START_KEYS: container_options.pop(key, None) container_options['host_config'] = self._get_container_host_config( override_options, one_off=one_off) return container_options def _get_container_host_config(self, override_options, one_off=False): options = dict(self.options, **override_options) log_config = LogConfig( type=options.get('log_driver', ""), config=options.get('log_opt', None) ) return self.client.create_host_config( links=self._get_links(link_to_self=one_off), port_bindings=build_port_bindings(options.get('ports') or []), binds=options.get('binds'), volumes_from=self._get_volumes_from(), privileged=options.get('privileged', False), network_mode=self.net.mode, devices=options.get('devices'), dns=options.get('dns'), dns_search=options.get('dns_search'), restart_policy=options.get('restart'), cap_add=options.get('cap_add'), cap_drop=options.get('cap_drop'), mem_limit=options.get('mem_limit'), memswap_limit=options.get('memswap_limit'), ulimits=build_ulimits(options.get('ulimits')), log_config=log_config, extra_hosts=options.get('extra_hosts'), read_only=options.get('read_only'), pid_mode=options.get('pid'), security_opt=options.get('security_opt'), ipc_mode=options.get('ipc'), cgroup_parent=options.get('cgroup_parent'), ) def build(self, no_cache=False, pull=False, force_rm=False): log.info('Building %s' % self.name) path = self.options['build'] # python2 os.path() doesn't support unicode, so we need to encode it to # a byte string if not six.PY3: path = path.encode('utf8') build_output = self.client.build( path=path, tag=self.image_name, stream=True, rm=True, 
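            # rm removes intermediate containers after a successful build;
            # forcerm removes them even when the build fails (--force-rm).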
forcerm=force_rm, pull=pull, nocache=no_cache, dockerfile=self.options.get('dockerfile', None), ) try: all_events = stream_output(build_output, sys.stdout) except StreamOutputError as e: raise BuildError(self, six.text_type(e)) # Ensure the HTTP connection is not reused for another # streaming command, as the Docker daemon can sometimes # complain about it self.client.close() image_id = None for event in all_events: if 'stream' in event: match = re.search(r'Successfully built ([0-9a-f]+)', event.get('stream', '')) if match: image_id = match.group(1) if image_id is None: raise BuildError(self, event if all_events else 'Unknown') return image_id def can_be_built(self): return 'build' in self.options @property def full_name(self): """ The tag to give to images built for this service. """ return '%s_%s' % (self.project, self.name) def labels(self, one_off=False): return [ '{0}={1}'.format(LABEL_PROJECT, self.project), '{0}={1}'.format(LABEL_SERVICE, self.name), '{0}={1}'.format(LABEL_ONE_OFF, "True" if one_off else "False") ] def custom_container_name(self): return self.options.get('container_name') def specifies_host_port(self): def has_host_port(binding): _, external_bindings = split_port(binding) # there are no external bindings if external_bindings is None: return False # we only need to check the first binding from the range external_binding = external_bindings[0] # non-tuple binding means there is a host port specified if not isinstance(external_binding, tuple): return True # extract actual host port from tuple of (host_ip, host_port) _, host_port = external_binding if host_port is not None: return True return False return any(has_host_port(binding) for binding in self.options.get('ports', [])) def pull(self, ignore_pull_failures=False): if 'image' not in self.options: return repo, tag, separator = parse_repository_tag(self.options['image']) tag = tag or 'latest' log.info('Pulling %s (%s%s%s)...' % (self.name, repo, separator, tag)) output = self.client.pull( repo, tag=tag, stream=True, ) try: stream_output(output, sys.stdout) except StreamOutputError as e: if not ignore_pull_failures: raise else: log.error(six.text_type(e)) class Net(object): """A `standard` network mode (e.g. host, bridge)""" service_name = None def __init__(self, net): self.net = net @property def id(self): return self.net mode = id class ContainerNet(object): """A network mode that uses a container's network stack.""" service_name = None def __init__(self, container): self.container = container @property def id(self): return self.container.id @property def mode(self): return 'container:' + self.container.id class ServiceNet(object): """A network mode that uses a service's network stack.""" def __init__(self, service): self.service = service @property def id(self): return self.service.name service_name = id @property def mode(self): containers = self.service.containers() if containers: return 'container:' + containers[0].id log.warn("Service %s is trying to reuse the network stack " "of another service that is not running." % (self.id)) return None # Names def build_container_name(project, service, number, one_off=False): bits = [project, service] if one_off: bits.append('run') return '_'.join(bits + [str(number)]) # Images def parse_repository_tag(repo_path): """Splits image identification into base image path, tag/digest and its separator.
Example: >>> parse_repository_tag('user/repo@sha256:digest') ('user/repo', 'sha256:digest', '@') >>> parse_repository_tag('user/repo:v1') ('user/repo', 'v1', ':') """ tag_separator = ":" digest_separator = "@" if digest_separator in repo_path: repo, tag = repo_path.rsplit(digest_separator, 1) return repo, tag, digest_separator repo, tag = repo_path, "" if tag_separator in repo_path: repo, tag = repo_path.rsplit(tag_separator, 1) if "/" in tag: repo, tag = repo_path, "" return repo, tag, tag_separator # Volumes def merge_volume_bindings(volumes, previous_container): """Return a list of volume bindings for a container. Container data volumes are replaced by those from the previous container. """ volume_bindings = dict( build_volume_binding(volume) for volume in volumes if volume.external) if previous_container: data_volumes = get_container_data_volumes(previous_container, volumes) warn_on_masked_volume(volumes, data_volumes, previous_container.service) volume_bindings.update( build_volume_binding(volume) for volume in data_volumes) return list(volume_bindings.values()) def get_container_data_volumes(container, volumes_option): """Find the container data volumes that are in `volumes_option`, and return a mapping of volume bindings for those volumes. """ volumes = [] container_volumes = container.get('Volumes') or {} image_volumes = [ VolumeSpec.parse(volume) for volume in container.image_config['ContainerConfig'].get('Volumes') or {} ] for volume in set(volumes_option + image_volumes): # No need to preserve host volumes if volume.external: continue volume_path = container_volumes.get(volume.internal) # New volume, doesn't exist in the old container if not volume_path: continue # Copy existing volume from old container volume = volume._replace(external=volume_path) volumes.append(volume) return volumes def warn_on_masked_volume(volumes_option, container_volumes, service): container_volumes = dict( (volume.internal, volume.external) for volume in container_volumes) for volume in volumes_option: if ( volume.internal in container_volumes and container_volumes.get(volume.internal) != volume.external ): log.warn(( "Service \"{service}\" is using volume \"{volume}\" from the " "previous container. Host mapping \"{host_path}\" has no effect. " "Remove the existing containers (with `docker-compose rm {service}`) " "to use the host volume mapping." ).format( service=service, volume=volume.internal, host_path=volume.external)) def build_volume_binding(volume_spec): return volume_spec.internal, "{}:{}:{}".format(*volume_spec) def build_volume_from(volume_from_spec): """ volume_from can be either a service or a container. We want to return the container.id and format it into a string complete with the mode. 
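    If the source service has no containers yet, one is created first so that
    there is a container whose volumes can be mounted.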
""" if isinstance(volume_from_spec.source, Service): containers = volume_from_spec.source.containers(stopped=True) if not containers: return ["{}:{}".format(volume_from_spec.source.create_container().id, volume_from_spec.mode)] container = containers[0] return ["{}:{}".format(container.id, volume_from_spec.mode)] elif isinstance(volume_from_spec.source, Container): return ["{}:{}".format(volume_from_spec.source.id, volume_from_spec.mode)] # Labels def build_container_labels(label_options, service_labels, number, config_hash): labels = dict(label_options or {}) labels.update(label.split('=', 1) for label in service_labels) labels[LABEL_CONTAINER_NUMBER] = str(number) labels[LABEL_VERSION] = __version__ if config_hash: log.debug("Added config hash: %s" % config_hash) labels[LABEL_CONFIG_HASH] = config_hash return labels # Ulimits def build_ulimits(ulimit_config): if not ulimit_config: return None ulimits = [] for limit_name, soft_hard_values in six.iteritems(ulimit_config): if isinstance(soft_hard_values, six.integer_types): ulimits.append({'name': limit_name, 'soft': soft_hard_values, 'hard': soft_hard_values}) elif isinstance(soft_hard_values, dict): ulimit_dict = {'name': limit_name} ulimit_dict.update(soft_hard_values) ulimits.append(ulimit_dict) return ulimits compose-1.5.2/compose/state.py000066400000000000000000000000001263011261000163010ustar00rootroot00000000000000compose-1.5.2/compose/utils.py000066400000000000000000000117161263011261000163410ustar00rootroot00000000000000import codecs import hashlib import json import json.decoder import logging import sys from threading import Thread import six from docker.errors import APIError from six.moves.queue import Empty from six.moves.queue import Queue log = logging.getLogger(__name__) json_decoder = json.JSONDecoder() def parallel_execute(objects, obj_callable, msg_index, msg): """ For a given list of objects, call the callable passing in the first object we give it. """ stream = get_output_stream(sys.stdout) lines = [] for obj in objects: write_out_msg(stream, lines, msg_index(obj), msg) q = Queue() def inner_execute_function(an_callable, parameter, msg_index): error = None try: result = an_callable(parameter) except APIError as e: error = e.explanation result = "error" except Exception as e: error = e result = 'unexpected_exception' q.put((msg_index, result, error)) for an_object in objects: t = Thread( target=inner_execute_function, args=(obj_callable, an_object, msg_index(an_object)), ) t.daemon = True t.start() done = 0 errors = {} total_to_execute = len(objects) while done < total_to_execute: try: msg_index, result, error = q.get(timeout=1) if result == 'unexpected_exception': errors[msg_index] = result, error if result == 'error': errors[msg_index] = result, error write_out_msg(stream, lines, msg_index, msg, status='error') else: write_out_msg(stream, lines, msg_index, msg) done += 1 except Empty: pass if not errors: return stream.write("\n") for msg_index, (result, error) in errors.items(): stream.write("ERROR: for {} {} \n".format(msg_index, error)) if result == 'unexpected_exception': raise error def get_output_stream(stream): if six.PY3: return stream return codecs.getwriter('utf-8')(stream) def stream_as_text(stream): """Given a stream of bytes or text, if any of the items in the stream are bytes convert them to text. This function can be removed once docker-py returns text streams instead of byte streams. 
""" for data in stream: if not isinstance(data, six.text_type): data = data.decode('utf-8', 'replace') yield data def line_splitter(buffer, separator=u'\n'): index = buffer.find(six.text_type(separator)) if index == -1: return None return buffer[:index + 1], buffer[index + 1:] def split_buffer(stream, splitter=None, decoder=lambda a: a): """Given a generator which yields strings and a splitter function, joins all input, splits on the separator and yields each chunk. Unlike string.split(), each chunk includes the trailing separator, except for the last one if none was found on the end of the input. """ splitter = splitter or line_splitter buffered = six.text_type('') for data in stream_as_text(stream): buffered += data while True: buffer_split = splitter(buffered) if buffer_split is None: break item, buffered = buffer_split yield item if buffered: yield decoder(buffered) def json_splitter(buffer): """Attempt to parse a json object from a buffer. If there is at least one object, return it and the rest of the buffer, otherwise return None. """ try: obj, index = json_decoder.raw_decode(buffer) rest = buffer[json.decoder.WHITESPACE.match(buffer, index).end():] return obj, rest except ValueError: return None def json_stream(stream): """Given a stream of text, return a stream of json objects. This handles streams which are inconsistently buffered (some entries may be newline delimited, and others are not). """ return split_buffer(stream, json_splitter, json_decoder.decode) def write_out_msg(stream, lines, msg_index, msg, status="done"): """ Using special ANSI code characters we can write out the msg over the top of a previous status message, if it exists. """ obj_index = msg_index if msg_index in lines: position = lines.index(obj_index) diff = len(lines) - position # move up stream.write("%c[%dA" % (27, diff)) # erase stream.write("%c[2K\r" % 27) stream.write("{} {} ... {}\r".format(msg, obj_index, status)) # move back down stream.write("%c[%dB" % (27, diff)) else: diff = 0 lines.append(obj_index) stream.write("{} {} ... \r\n".format(msg, obj_index)) stream.flush() def json_hash(obj): dump = json.dumps(obj, sort_keys=True, separators=(',', ':')) h = hashlib.sha256() h.update(dump.encode('utf8')) return h.hexdigest() compose-1.5.2/contrib/000077500000000000000000000000001263011261000146145ustar00rootroot00000000000000compose-1.5.2/contrib/completion/000077500000000000000000000000001263011261000167655ustar00rootroot00000000000000compose-1.5.2/contrib/completion/bash/000077500000000000000000000000001263011261000177025ustar00rootroot00000000000000compose-1.5.2/contrib/completion/bash/docker-compose000066400000000000000000000205731263011261000225460ustar00rootroot00000000000000#!bash # # bash completion for docker-compose # # This work is based on the completion for the docker command. # # This script provides completion of: # - commands and their options # - service names # - filepaths # # To enable the completions either: # - place this file in /etc/bash_completion.d # or # - copy this file to e.g. ~/.docker-compose-completion.sh and add the line # below to your .bashrc after bash completion features are loaded # . ~/.docker-compose-completion.sh # For compatibility reasons, Compose and therefore its completion supports several # stack compositon files as listed here, in descending priority. # Support for these filenames might be dropped in some future version. 
__docker_compose_compose_file() { local file for file in docker-compose.y{,a}ml fig.y{,a}ml ; do [ -e $file ] && { echo $file return } done echo docker-compose.yml } # Extracts all service names from the compose file. ___docker_compose_all_services_in_compose_file() { awk -F: '/^[a-zA-Z0-9]/{print $1}' "${compose_file:-$(__docker_compose_compose_file)}" 2>/dev/null } # All services, even those without an existing container __docker_compose_services_all() { COMPREPLY=( $(compgen -W "$(___docker_compose_all_services_in_compose_file)" -- "$cur") ) } # All services that have an entry with the given key in their compose_file section ___docker_compose_services_with_key() { # flatten sections to one line, then filter lines containing the key and return section name. awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' "${compose_file:-$(__docker_compose_compose_file)}" 2>/dev/null | awk -F: -v key=": +$1:" '$0 ~ key {print $1}' } # All services that are defined by a Dockerfile reference __docker_compose_services_from_build() { COMPREPLY=( $(compgen -W "$(___docker_compose_services_with_key build)" -- "$cur") ) } # All services that are defined by an image __docker_compose_services_from_image() { COMPREPLY=( $(compgen -W "$(___docker_compose_services_with_key image)" -- "$cur") ) } # The services for which containers have been created, optionally filtered # by a boolean expression passed in as argument. __docker_compose_services_with() { local containers names containers="$(docker-compose 2>/dev/null ${compose_file:+-f $compose_file} ${compose_project:+-p $compose_project} ps -q)" names=( $(docker 2>/dev/null inspect --format "{{if ${1:-true}}} {{ .Name }} {{end}}" $containers) ) names=( ${names[@]%_*} ) # strip trailing numbers names=( ${names[@]#*_} ) # strip project name COMPREPLY=( $(compgen -W "${names[*]}" -- "$cur") ) } # The services for which at least one paused container exists __docker_compose_services_paused() { __docker_compose_services_with '.State.Paused' } # The services for which at least one running container exists __docker_compose_services_running() { __docker_compose_services_with '.State.Running' } # The services for which at least one stopped container exists __docker_compose_services_stopped() { __docker_compose_services_with 'not .State.Running' } _docker_compose_build() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--force-rm --help --no-cache --pull" -- "$cur" ) ) ;; *) __docker_compose_services_from_build ;; esac } _docker_compose_docker_compose() { case "$prev" in --file|-f) _filedir "y?(a)ml" return ;; --project-name|-p) return ;; --x-network-driver) COMPREPLY=( $( compgen -W "bridge host none overlay" -- "$cur" ) ) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--file -f --help -h --project-name -p --verbose --version -v --x-networking --x-network-driver" -- "$cur" ) ) ;; *) COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) ) ;; esac } _docker_compose_help() { COMPREPLY=( $( compgen -W "${commands[*]}" -- "$cur" ) ) } _docker_compose_kill() { case "$prev" in -s) COMPREPLY=( $( compgen -W "SIGHUP SIGINT SIGKILL SIGUSR1 SIGUSR2" -- "$(echo $cur | tr '[:lower:]' '[:upper:]')" ) ) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help -s" -- "$cur" ) ) ;; *) __docker_compose_services_running ;; esac } _docker_compose_logs() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --no-color" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_migrate_to_labels() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help" 
-- "$cur" ) ) ;; esac } _docker_compose_pause() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) ;; *) __docker_compose_services_running ;; esac } _docker_compose_port() { case "$prev" in --protocol) COMPREPLY=( $( compgen -W "tcp udp" -- "$cur" ) ) return; ;; --index) return; ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --index --protocol" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_ps() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help -q" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_pull() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --ignore-pull-failures" -- "$cur" ) ) ;; *) __docker_compose_services_from_image ;; esac } _docker_compose_restart() { case "$prev" in --timeout|-t) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --timeout -t" -- "$cur" ) ) ;; *) __docker_compose_services_running ;; esac } _docker_compose_rm() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--force -f --help -v" -- "$cur" ) ) ;; *) __docker_compose_services_stopped ;; esac } _docker_compose_run() { case "$prev" in -e) COMPREPLY=( $( compgen -e -- "$cur" ) ) compopt -o nospace return ;; --entrypoint|--name|--user|-u) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "-d --entrypoint -e --help --name --no-deps --publish -p --rm --service-ports -T --user -u" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_scale() { case "$prev" in =) COMPREPLY=("$cur") return ;; --timeout|-t) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --timeout -t" -- "$cur" ) ) ;; *) COMPREPLY=( $(compgen -S "=" -W "$(___docker_compose_all_services_in_compose_file)" -- "$cur") ) compopt -o nospace ;; esac } _docker_compose_start() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) ;; *) __docker_compose_services_stopped ;; esac } _docker_compose_stop() { case "$prev" in --timeout|-t) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "--help --timeout -t" -- "$cur" ) ) ;; *) __docker_compose_services_running ;; esac } _docker_compose_unpause() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--help" -- "$cur" ) ) ;; *) __docker_compose_services_paused ;; esac } _docker_compose_up() { case "$prev" in --timeout|-t) return ;; esac case "$cur" in -*) COMPREPLY=( $( compgen -W "-d --help --no-build --no-color --no-deps --no-recreate --force-recreate --timeout -t" -- "$cur" ) ) ;; *) __docker_compose_services_all ;; esac } _docker_compose_version() { case "$cur" in -*) COMPREPLY=( $( compgen -W "--short" -- "$cur" ) ) ;; esac } _docker_compose() { local previous_extglob_setting=$(shopt -p extglob) shopt -s extglob local commands=( build help kill logs migrate-to-labels pause port ps pull restart rm run scale start stop unpause up version ) COMPREPLY=() local cur prev words cword _get_comp_words_by_ref -n : cur prev words cword # search subcommand and invoke its handler. 
# special treatment of some top-level options local command='docker_compose' local counter=1 local compose_file compose_project while [ $counter -lt $cword ]; do case "${words[$counter]}" in --file|-f) (( counter++ )) compose_file="${words[$counter]}" ;; --project-name|p) (( counter++ )) compose_project="${words[$counter]}" ;; --x-network-driver) (( counter++ )) ;; -*) ;; *) command="${words[$counter]}" break ;; esac (( counter++ )) done local completions_func=_docker_compose_${command//-/_} declare -F $completions_func >/dev/null && $completions_func eval "$previous_extglob_setting" return 0 } complete -F _docker_compose docker-compose compose-1.5.2/contrib/completion/zsh/000077500000000000000000000000001263011261000175715ustar00rootroot00000000000000compose-1.5.2/contrib/completion/zsh/_docker-compose000066400000000000000000000332501263011261000225700ustar00rootroot00000000000000#compdef docker-compose # Description # ----------- # zsh completion for docker-compose # https://github.com/sdurrheimer/docker-compose-zsh-completion # ------------------------------------------------------------------------- # Version # ------- # 1.5.0 # ------------------------------------------------------------------------- # Authors # ------- # * Steve Durrheimer # ------------------------------------------------------------------------- # Inspiration # ----------- # * @albers docker-compose bash completion script # * @felixr docker zsh completion script : https://github.com/felixr/docker-zsh-completion # ------------------------------------------------------------------------- # For compatibility reasons, Compose and therefore its completion supports several # stack compositon files as listed here, in descending priority. # Support for these filenames might be dropped in some future version. __docker-compose_compose_file() { local file for file in docker-compose.y{,a}ml fig.y{,a}ml ; do [ -e $file ] && { echo $file return } done echo docker-compose.yml } # Extracts all service names from docker-compose.yml. ___docker-compose_all_services_in_compose_file() { local already_selected local -a services already_selected=$(echo $words | tr " " "|") awk -F: '/^[a-zA-Z0-9]/{print $1}' "${compose_file:-$(__docker-compose_compose_file)}" 2>/dev/null | grep -Ev "$already_selected" } # All services, even those without an existing container __docker-compose_services_all() { [[ $PREFIX = -* ]] && return 1 integer ret=1 services=$(___docker-compose_all_services_in_compose_file) _alternative "args:services:($services)" && ret=0 return ret } # All services that have an entry with the given key in their docker-compose.yml section ___docker-compose_services_with_key() { local already_selected local -a buildable already_selected=$(echo $words | tr " " "|") # flatten sections to one line, then filter lines containing the key and return section name. 
awk '/^[a-zA-Z0-9]/{printf "\n"};{printf $0;next;}' "${compose_file:-$(__docker-compose_compose_file)}" 2>/dev/null | awk -F: -v key=": +$1:" '$0 ~ key {print $1}' 2>/dev/null | grep -Ev "$already_selected" } # All services that are defined by a Dockerfile reference __docker-compose_services_from_build() { [[ $PREFIX = -* ]] && return 1 integer ret=1 buildable=$(___docker-compose_services_with_key build) _alternative "args:buildable services:($buildable)" && ret=0 return ret } # All services that are defined by an image __docker-compose_services_from_image() { [[ $PREFIX = -* ]] && return 1 integer ret=1 pullable=$(___docker-compose_services_with_key image) _alternative "args:pullable services:($pullable)" && ret=0 return ret } __docker-compose_get_services() { [[ $PREFIX = -* ]] && return 1 integer ret=1 local kind declare -a running paused stopped lines args services docker_status=$(docker ps > /dev/null 2>&1) if [ $? -ne 0 ]; then _message "Error! Docker is not running." return 1 fi kind=$1 shift [[ $kind =~ (stopped|all) ]] && args=($args -a) lines=(${(f)"$(_call_program commands docker ps $args)"}) services=(${(f)"$(_call_program commands docker-compose 2>/dev/null $compose_options ps -q)"}) # Parse header line to find columns local i=1 j=1 k header=${lines[1]} declare -A begin end while (( j < ${#header} - 1 )); do i=$(( j + ${${header[$j,-1]}[(i)[^ ]]} - 1 )) j=$(( i + ${${header[$i,-1]}[(i) ]} - 1 )) k=$(( j + ${${header[$j,-1]}[(i)[^ ]]} - 2 )) begin[${header[$i,$((j-1))]}]=$i end[${header[$i,$((j-1))]}]=$k done lines=(${lines[2,-1]}) # Container ID local line s name local -a names for line in $lines; do if [[ ${services[@]} == *"${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}"* ]]; then names=(${(ps:,:)${${line[${begin[NAMES]},-1]}%% *}}) for name in $names; do s="${${name%_*}#*_}:${(l:15:: :::)${${line[${begin[CREATED]},${end[CREATED]}]/ ago/}%% ##}}" s="$s, ${line[${begin[CONTAINER ID]},${end[CONTAINER ID]}]%% ##}" s="$s, ${${${line[${begin[IMAGE]},${end[IMAGE]}]}/:/\\:}%% ##}" if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = Exit* ]]; then stopped=($stopped $s) else if [[ ${line[${begin[STATUS]},${end[STATUS]}]} = *\(Paused\)* ]]; then paused=($paused $s) fi running=($running $s) fi done fi done [[ $kind =~ (running|all) ]] && _describe -t services-running "running services" running "$@" && ret=0 [[ $kind =~ (paused|all) ]] && _describe -t services-paused "paused services" paused "$@" && ret=0 [[ $kind =~ (stopped|all) ]] && _describe -t services-stopped "stopped services" stopped "$@" && ret=0 return ret } __docker-compose_pausedservices() { [[ $PREFIX = -* ]] && return 1 __docker-compose_get_services paused "$@" } __docker-compose_stoppedservices() { [[ $PREFIX = -* ]] && return 1 __docker-compose_get_services stopped "$@" } __docker-compose_runningservices() { [[ $PREFIX = -* ]] && return 1 __docker-compose_get_services running "$@" } __docker-compose_services() { [[ $PREFIX = -* ]] && return 1 __docker-compose_get_services all "$@" } __docker-compose_caching_policy() { oldp=( "$1"(Nmh+1) ) # 1 hour (( $#oldp )) } __docker-compose_commands() { local cache_policy zstyle -s ":completion:${curcontext}:" cache-policy cache_policy if [[ -z "$cache_policy" ]]; then zstyle ":completion:${curcontext}:" cache-policy __docker-compose_caching_policy fi if ( [[ ${+_docker_compose_subcommands} -eq 0 ]] || _cache_invalid docker_compose_subcommands) \ && ! 
_retrieve_cache docker_compose_subcommands; then local -a lines lines=(${(f)"$(_call_program commands docker-compose 2>&1)"}) _docker_compose_subcommands=(${${${lines[$((${lines[(i)Commands:]} + 1)),${lines[(I) *]}]}## #}/ ##/:}) _store_cache docker_compose_subcommands _docker_compose_subcommands fi _describe -t docker-compose-commands "docker-compose command" _docker_compose_subcommands } __docker-compose_subcommand() { local opts_help='(: -)--help[Print usage]' integer ret=1 case "$words[1]" in (build) _arguments \ $opts_help \ '--force-rm[Always remove intermediate containers.]' \ '--no-cache[Do not use cache when building the image]' \ '--pull[Always attempt to pull a newer version of the image.]' \ '*:services:__docker-compose_services_from_build' && ret=0 ;; (help) _arguments ':subcommand:__docker-compose_commands' && ret=0 ;; (kill) _arguments \ $opts_help \ '-s[SIGNAL to send to the container. Default signal is SIGKILL.]:signal:_signals' \ '*:running services:__docker-compose_runningservices' && ret=0 ;; (logs) _arguments \ $opts_help \ '--no-color[Produce monochrome output.]' \ '*:services:__docker-compose_services_all' && ret=0 ;; (migrate-to-labels) _arguments -A '-*' \ $opts_help \ '(-):Recreate containers to add labels' && ret=0 ;; (pause) _arguments \ $opts_help \ '*:running services:__docker-compose_runningservices' && ret=0 ;; (port) _arguments \ $opts_help \ '--protocol=-[tcp or udap (defaults to tcp)]:protocol:(tcp udp)' \ '--index=-[index of the container if there are mutiple instances of a service (defaults to 1)]:index: ' \ '1:running services:__docker-compose_runningservices' \ '2:port:_ports' && ret=0 ;; (ps) _arguments \ $opts_help \ '-q[Only display IDs]' \ '*:services:__docker-compose_services_all' && ret=0 ;; (pull) _arguments \ $opts_help \ '--ignore-pull-failures[Pull what it can and ignores images with pull failures.]' \ '*:services:__docker-compose_services_from_image' && ret=0 ;; (rm) _arguments \ $opts_help \ '(-f --force)'{-f,--force}"[Don't ask to confirm removal]" \ '-v[Remove volumes associated with containers]' \ '*:stopped services:__docker-compose_stoppedservices' && ret=0 ;; (run) _arguments \ $opts_help \ '-d[Detached mode: Run container in the background, print new container name.]' \ '--name[Assign a name to the container]:name: ' \ '--entrypoint[Overwrite the entrypoint of the image.]:entry point: ' \ '*-e[KEY=VAL Set an environment variable (can be used multiple times)]:environment variable KEY=VAL: ' \ '(-u --user)'{-u,--user=-}'[Run as specified username or uid]:username or uid:_users' \ "--no-deps[Don't start linked services.]" \ '--rm[Remove container after run. Ignored in detached mode.]' \ "--service-ports[Run command with the service's ports enabled and mapped to the host.]" \ '(-p --publish)'{-p,--publish=-}"[Run command with manually mapped container's port(s) to the host.]" \ '-T[Disable pseudo-tty allocation. By default `docker-compose run` allocates a TTY.]' \ '(-):services:__docker-compose_services' \ '(-):command: _command_names -e' \ '*::arguments: _normal' && ret=0 ;; (scale) _arguments \ $opts_help \ '(-t --timeout)'{-t,--timeout}"[Specify a shutdown timeout in seconds. (default: 10)]:seconds: " \ '*:running services:__docker-compose_runningservices' && ret=0 ;; (start) _arguments \ $opts_help \ '*:stopped services:__docker-compose_stoppedservices' && ret=0 ;; (stop|restart) _arguments \ $opts_help \ '(-t --timeout)'{-t,--timeout}"[Specify a shutdown timeout in seconds. 
(default: 10)]:seconds: " \ '*:running services:__docker-compose_runningservices' && ret=0 ;; (unpause) _arguments \ $opts_help \ '*:paused services:__docker-compose_pausedservices' && ret=0 ;; (up) _arguments \ $opts_help \ '-d[Detached mode: Run containers in the background, print new container names.]' \ '--no-color[Produce monochrome output.]' \ "--no-deps[Don't start linked services.]" \ "--force-recreate[Recreate containers even if their configuration and image haven't changed. Incompatible with --no-recreate.]" \ "--no-recreate[If containers already exist, don't recreate them.]" \ "--no-build[Don't build an image, even if it's missing]" \ '(-t --timeout)'{-t,--timeout}"[Specify a shutdown timeout in seconds. (default: 10)]:seconds: " \ '*:services:__docker-compose_services_all' && ret=0 ;; (version) _arguments \ $opts_help \ "--short[Shows only Compose's version number.]" && ret=0 ;; (*) _message 'Unknown sub command' && ret=1 ;; esac return ret } _docker-compose() { # Support for subservices, which allows for `compdef _docker docker-shell=_docker_containers`. # Based on /usr/share/zsh/functions/Completion/Unix/_git without support for `ret`. if [[ $service != docker-compose ]]; then _call_function - _$service return fi local curcontext="$curcontext" state line integer ret=1 typeset -A opt_args _arguments -C \ '(- :)'{-h,--help}'[Get help]' \ '--verbose[Show more output]' \ '(- :)'{-v,--version}'[Print version and exit]' \ '(-f --file)'{-f,--file}'[Specify an alternate docker-compose file (default: docker-compose.yml)]:file:_files -g "*.yml"' \ '(-p --project-name)'{-p,--project-name}'[Specify an alternate project name (default: directory name)]:project name:' \ '--x-networking[(EXPERIMENTAL) Use new Docker networking functionality. Requires Docker 1.9 or later.]' \ '--x-network-driver[(EXPERIMENTAL) Specify a network driver (default: "bridge"). 
Requires Docker 1.9 or later.]:Network Driver:(bridge host none overlay)' \ '(-): :->command' \ '(-)*:: :->option-or-argument' && ret=0 local compose_file=${opt_args[-f]}${opt_args[--file]} local compose_project=${opt_args[-p]}${opt_args[--project-name]} local compose_options="${compose_file:+--file $compose_file} ${compose_project:+--project-name $compose_project}" case $state in (command) __docker-compose_commands && ret=0 ;; (option-or-argument) curcontext=${curcontext%:*:*}:docker-compose-$words[1]: __docker-compose_subcommand && ret=0 ;; esac return ret } _docker-compose "$@" compose-1.5.2/docker-compose.spec000066400000000000000000000017001263011261000167400ustar00rootroot00000000000000# -*- mode: python -*- block_cipher = None a = Analysis(['bin/docker-compose'], pathex=['.'], hiddenimports=[], hookspath=None, runtime_hooks=None, cipher=block_cipher) pyz = PYZ(a.pure, cipher=block_cipher) exe = EXE(pyz, a.scripts, a.binaries, a.zipfiles, a.datas, [ ( 'compose/config/fields_schema.json', 'compose/config/fields_schema.json', 'DATA' ), ( 'compose/config/service_schema.json', 'compose/config/service_schema.json', 'DATA' ), ( 'compose/GITSHA', 'compose/GITSHA', 'DATA' ) ], name='docker-compose', debug=False, strip=None, upx=True, console=True) compose-1.5.2/docs/000077500000000000000000000000001263011261000141045ustar00rootroot00000000000000compose-1.5.2/docs/Dockerfile000066400000000000000000000013751263011261000161040ustar00rootroot00000000000000FROM docs/base:hugo-github-linking MAINTAINER Mary Anthony (@moxiegirl) RUN svn checkout https://github.com/docker/docker/trunk/docs /docs/content/engine RUN svn checkout https://github.com/docker/swarm/trunk/docs /docs/content/swarm RUN svn checkout https://github.com/docker/machine/trunk/docs /docs/content/machine RUN svn checkout https://github.com/docker/distribution/trunk/docs /docs/content/registry RUN svn checkout https://github.com/kitematic/kitematic/trunk/docs /docs/content/kitematic RUN svn checkout https://github.com/docker/tutorials/trunk/docs /docs/content/tutorials RUN svn checkout https://github.com/docker/opensource/trunk/docs /docs/content # To get the git info for this repo COPY . /src COPY . 
/docs/content/compose/ compose-1.5.2/docs/Makefile000066400000000000000000000045621263011261000155530ustar00rootroot00000000000000.PHONY: all binary build cross default docs docs-build docs-shell shell test test-unit test-integration test-integration-cli test-docker-py validate # env vars passed through directly to Docker's build scripts # to allow things like `make DOCKER_CLIENTONLY=1 binary` easily # `docs/sources/contributing/devenvironment.md ` and `project/PACKAGERS.md` have some limited documentation of some of these DOCKER_ENVS := \ -e BUILDFLAGS \ -e DOCKER_CLIENTONLY \ -e DOCKER_EXECDRIVER \ -e DOCKER_GRAPHDRIVER \ -e TESTDIRS \ -e TESTFLAGS \ -e TIMEOUT # note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds # to allow `make DOCSDIR=1 docs-shell` (to create a bind mount in docs) DOCS_MOUNT := $(if $(DOCSDIR),-v $(CURDIR):/docs/content/compose) # to allow `make DOCSPORT=9000 docs` DOCSPORT := 8000 # Get the IP ADDRESS DOCKER_IP=$(shell python -c "import urlparse ; print urlparse.urlparse('$(DOCKER_HOST)').hostname or ''") HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER_IP)") HUGO_BIND_IP=0.0.0.0 GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null) DOCKER_IMAGE := docker$(if $(GIT_BRANCH),:$(GIT_BRANCH)) DOCKER_DOCS_IMAGE := docs-base$(if $(GIT_BRANCH),:$(GIT_BRANCH)) DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE # for some docs workarounds (see below in "docs-build" target) GITCOMMIT := $(shell git rev-parse --short HEAD 2>/dev/null) default: docs docs: docs-build $(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP) --watch docs-draft: docs-build $(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --buildDrafts="true" --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP) docs-shell: docs-build $(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash docs-build: # ( git remote | grep -v upstream ) || git diff --name-status upstream/release..upstream/docs ./ > ./changed-files # echo "$(GIT_BRANCH)" > GIT_BRANCH # echo "$(AWS_S3_BUCKET)" > AWS_S3_BUCKET # echo "$(GITCOMMIT)" > GITCOMMIT docker build -t "$(DOCKER_DOCS_IMAGE)" . compose-1.5.2/docs/README.md000066400000000000000000000072701263011261000153710ustar00rootroot00000000000000# Contributing to the Docker Compose documentation The documentation in this directory is part of the [https://docs.docker.com](https://docs.docker.com) website. Docker uses [the Hugo static generator](http://gohugo.io/overview/introduction/) to convert project Markdown files to a static HTML site. You don't need to be a Hugo expert to contribute to the compose documentation. If you are familiar with Markdown, you can modify the content in the `docs` files. If you want to add a new file or change the location of the document in the menu, you do need to know a little more. ## Documentation contributing workflow 1. Edit a Markdown file in the tree. 2. Save your changes. 3. Make sure you are in the `docs` subdirectory. 4. Build the documentation. 
$ make docs ---> ffcf3f6c4e97 Removing intermediate container a676414185e8 Successfully built ffcf3f6c4e97 docker run --rm -it -e AWS_S3_BUCKET -e NOCACHE -p 8000:8000 -e DOCKERHOST "docs-base:test-tooling" hugo server --port=8000 --baseUrl=192.168.59.103 --bind=0.0.0.0 ERROR: 2015/06/13 MenuEntry's .Url is deprecated and will be removed in Hugo 0.15. Use .URL instead. 0 of 4 drafts rendered 0 future content 12 pages created 0 paginator pages created 0 tags created 0 categories created in 55 ms Serving pages from /docs/public Web Server is available at http://0.0.0.0:8000/ Press Ctrl+C to stop 5. Open the available server in your browser. The documentation server has the complete menu but only the Docker Compose documentation resolves. You can't access the other project docs from this localized build. ## Tips on Hugo metadata and menu positioning The top of each Docker Compose documentation file contains TOML metadata. The metadata is commented out to prevent it from appearing in GitHub. The metadata alone has this structure: +++ title = "Extending services in Compose" description = "How to use Docker Compose's extends keyword to share configuration between files and projects" keywords = ["fig, composition, compose, docker, orchestration, documentation, docs"] [menu.main] parent="smn_workw_compose" weight=2 +++ The `[menu.main]` section refers to navigation defined [in the main Docker menu](https://github.com/docker/docs-base/blob/hugo/config.toml). This metadata says *add a menu item called* Extending services in Compose *to the menu with the* `smn_workdw_compose` *identifier*. If you locate the menu in the configuration, you'll find *Create multi-container applications* is the menu title. You can move an article in the tree by specifying a new parent. You can shift the location of the item by changing its weight. Higher numbers are heavier and shift the item to the bottom of menu. Low or no numbers shift it up. ## Other key documentation repositories The `docker/docs-base` repository contains [the Hugo theme and menu configuration](https://github.com/docker/docs-base). If you open the `Dockerfile` you'll see the `make docs` relies on this as a base image for building the Compose documentation. The `docker/docs.docker.com` repository contains [build system for building the Docker documentation site](https://github.com/docker/docs.docker.com). Fork this repository to build the entire documentation site. compose-1.5.2/docs/completion.md000066400000000000000000000047001263011261000166000ustar00rootroot00000000000000 # Command-line Completion Compose comes with [command completion](http://en.wikipedia.org/wiki/Command-line_completion) for the bash and zsh shell. ## Installing Command Completion ### Bash Make sure bash completion is installed. If you use a current Linux in a non-minimal installation, bash completion should be available. On a Mac, install with `brew install bash-completion` Place the completion script in `/etc/bash_completion.d/` (`/usr/local/etc/bash_completion.d/` on a Mac), using e.g. curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose Completion will be available upon next login. ### Zsh Place the completion script in your `/path/to/zsh/completion`, using e.g. 
`~/.zsh/completion/` mkdir -p ~/.zsh/completion curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose --version | awk 'NR==1{print $NF}')/contrib/completion/zsh/_docker-compose > ~/.zsh/completion/_docker-compose Include the directory in your `$fpath`, e.g. by adding in `~/.zshrc` fpath=(~/.zsh/completion $fpath) Make sure `compinit` is loaded or do it by adding in `~/.zshrc` autoload -Uz compinit && compinit -i Then reload your shell exec $SHELL -l ## Available completions Depending on what you typed on the command line so far, it will complete - available docker-compose commands - options that are available for a particular command - service names that make sense in a given context (e.g. services with running or stopped instances or services based on images vs. services based on Dockerfiles). For `docker-compose scale`, completed service names will automatically have "=" appended. - arguments for selected options, e.g. `docker-compose kill -s` will complete some signals like SIGHUP and SIGUSR1. Enjoy working with Compose faster and with less typos! ## Compose documentation - [User guide](index.md) - [Installing Compose](install.md) - [Get started with Django](django.md) - [Get started with Rails](rails.md) - [Get started with WordPress](wordpress.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.5.2/docs/compose-file.md000066400000000000000000000315071263011261000170160ustar00rootroot00000000000000 # Compose file reference The compose file is a [YAML](http://yaml.org/) file where all the top level keys are the name of a service, and the values are the service definition. The default path for a compose file is `./docker-compose.yml`. Each service defined in `docker-compose.yml` must specify exactly one of `image` or `build`. Other keys are optional, and are analogous to their `docker run` command-line counterparts. As with `docker run`, options specified in the Dockerfile (e.g., `CMD`, `EXPOSE`, `VOLUME`, `ENV`) are respected by default - you don't need to specify them again in `docker-compose.yml`. ## Service configuration reference This section contains a list of all configuration options supported by a service definition. ### build Either a path to a directory containing a Dockerfile, or a url to a git repository. When the value supplied is a relative path, it is interpreted as relative to the location of the Compose file. This directory is also the build context that is sent to the Docker daemon. Compose will build and tag it with a generated name, and use that image thereafter. build: /path/to/build/dir Using `build` together with `image` is not allowed. Attempting to do so results in an error. ### cap_add, cap_drop Add or drop container capabilities. See `man 7 capabilities` for a full list. cap_add: - ALL cap_drop: - NET_ADMIN - SYS_ADMIN ### command Override the default command. command: bundle exec thin -p 3000 ### cgroup_parent Specify an optional parent cgroup for the container. cgroup_parent: m-executor-abcd ### container_name Specify a custom container name, rather than a generated default name. container_name: my-web-container Because Docker container names must be unique, you cannot scale a service beyond 1 container if you have specified a custom name. Attempting to do so results in an error. ### devices List of device mappings. Uses the same format as the `--device` docker client create option. devices: - "/dev/ttyUSB0:/dev/ttyUSB0" ### dns Custom DNS servers. Can be a single value or a list. 
dns: 8.8.8.8 dns: - 8.8.8.8 - 9.9.9.9 ### dns_search Custom DNS search domains. Can be a single value or a list. dns_search: example.com dns_search: - dc1.example.com - dc2.example.com ### dockerfile Alternate Dockerfile. Compose will use an alternate file to build with. A build path must also be specified using the `build` key. build: /path/to/build/dir dockerfile: Dockerfile-alternate Using `dockerfile` together with `image` is not allowed. Attempting to do so results in an error. ### env_file Add environment variables from a file. Can be a single value or a list. If you have specified a Compose file with `docker-compose -f FILE`, paths in `env_file` are relative to the directory that file is in. Environment variables specified in `environment` override these values. env_file: .env env_file: - ./common.env - ./apps/web.env - /opt/secrets.env Compose expects each line in an env file to be in `VAR=VAL` format. Lines beginning with `#` (i.e. comments) are ignored, as are blank lines. # Set Rails/Rack environment RACK_ENV=development ### environment Add environment variables. You can use either an array or a dictionary. Any boolean values; true, false, yes no, need to be enclosed in quotes to ensure they are not converted to True or False by the YML parser. Environment variables with only a key are resolved to their values on the machine Compose is running on, which can be helpful for secret or host-specific values. environment: RACK_ENV: development SHOW: 'true' SESSION_SECRET: environment: - RACK_ENV=development - SHOW=true - SESSION_SECRET ### expose Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified. expose: - "3000" - "8000" ### extends Extend another service, in the current file or another, optionally overriding configuration. You can use `extends` on any service together with other configuration keys. The `extends` value must be a dictionary defined with a required `service` and an optional `file` key. extends: file: common.yml service: webapp The `service` the name of the service being extended, for example `web` or `database`. The `file` is the location of a Compose configuration file defining that service. If you omit the `file` Compose looks for the service configuration in the current file. The `file` value can be an absolute or relative path. If you specify a relative path, Compose treats it as relative to the location of the current file. You can extend a service that itself extends another. You can extend indefinitely. Compose does not support circular references and `docker-compose` returns an error if it encounters one. For more on `extends`, see the [the extends documentation](extends.md#extending-services). ### external_links Link to containers started outside this `docker-compose.yml` or even outside of Compose, especially for containers that provide shared or common services. `external_links` follow semantics similar to `links` when specifying both the container name and the link alias (`CONTAINER:ALIAS`). external_links: - redis_1 - project_db_1:mysql - project_db_1:postgresql ### extra_hosts Add hostname mappings. Use the same values as the docker client `--add-host` parameter. extra_hosts: - "somehost:162.242.195.82" - "otherhost:50.31.209.229" An entry with the ip address and hostname will be created in `/etc/hosts` inside containers for this service, e.g: 162.242.195.82 somehost 50.31.209.229 otherhost ### image Tag or partial image ID. 
Can be local or remote - Compose will attempt to pull if it doesn't exist locally. image: ubuntu image: orchardup/postgresql image: a4bc65fd ### labels Add metadata to containers using [Docker labels](http://docs.docker.com/userguide/labels-custom-metadata/). You can use either an array or a dictionary. It's recommended that you use reverse-DNS notation to prevent your labels from conflicting with those used by other software. labels: com.example.description: "Accounting webapp" com.example.department: "Finance" com.example.label-with-empty-value: "" labels: - "com.example.description=Accounting webapp" - "com.example.department=Finance" - "com.example.label-with-empty-value" ### links Link to containers in another service. Either specify both the service name and the link alias (`SERVICE:ALIAS`), or just the service name (which will also be used for the alias). links: - db - db:database - redis An entry with the alias' name will be created in `/etc/hosts` inside containers for this service, e.g: 172.17.2.186 db 172.17.2.186 database 172.17.2.187 redis Environment variables will also be created - see the [environment variable reference](env.md) for details. ### log_driver Specify a logging driver for the service's containers, as with the ``--log-driver`` option for docker run ([documented here](https://docs.docker.com/reference/logging/overview/)). The default value is json-file. log_driver: "json-file" log_driver: "syslog" log_driver: "none" > **Note:** Only the `json-file` driver makes the logs available directly from > `docker-compose up` and `docker-compose logs`. Using any other driver will not > print any logs. ### log_opt Specify logging options with `log_opt` for the logging driver, as with the ``--log-opt`` option for `docker run`. Logging options are key value pairs. An example of `syslog` options: log_driver: "syslog" log_opt: syslog-address: "tcp://192.168.0.42:123" ### net Networking mode. Use the same values as the docker client `--net` parameter. net: "bridge" net: "none" net: "container:[name or id]" net: "host" ### pid pid: "host" Sets the PID mode to the host PID mode. This turns on sharing between container and the host operating system the PID address space. Containers launched with this flag will be able to access and manipulate other containers in the bare-metal machine's namespace and vise-versa. ### ports Expose ports. Either specify both ports (`HOST:CONTAINER`), or just the container port (a random host port will be chosen). > **Note:** When mapping ports in the `HOST:CONTAINER` format, you may experience > erroneous results when using a container port lower than 60, because YAML will > parse numbers in the format `xx:yy` as sexagesimal (base 60). For this reason, > we recommend always explicitly specifying your port mappings as strings. ports: - "3000" - "3000-3005" - "8000:8000" - "9090-9091:8080-8081" - "49100:22" - "127.0.0.1:8001:8001" - "127.0.0.1:5000-5010:5000-5010" ### security_opt Override the default labeling scheme for each container. security_opt: - label:user:USER - label:role:ROLE ### ulimits Override the default ulimits for a container. You can either specify a single limit as an integer or soft/hard limits as a mapping. ulimits: nproc: 65535 nofile: soft: 20000 hard: 40000 ### volumes, volume\_driver Mount paths as volumes, optionally specifying a path on the host machine (`HOST:CONTAINER`), or an access mode (`HOST:CONTAINER:ro`). 
volumes: - /var/lib/mysql - ./cache:/tmp/cache - ~/configs:/etc/configs/:ro You can mount a relative path on the host, which will expand relative to the directory of the Compose configuration file being used. Relative paths should always begin with `.` or `..`. If you use a volume name (instead of a volume path), you may also specify a `volume_driver`. volume_driver: mydriver > Note: No path expansion will be done if you have also specified a > `volume_driver`. See [Docker Volumes](https://docs.docker.com/userguide/dockervolumes/) and [Volume Plugins](https://docs.docker.com/extend/plugins_volume/) for more information. ### volumes_from Mount all of the volumes from another service or container, optionally specifying read-only access(``ro``) or read-write(``rw``). volumes_from: - service_name - container_name - service_name:rw ### cpu\_shares, cpuset, domainname, entrypoint, hostname, ipc, mac\_address, mem\_limit, memswap\_limit, privileged, read\_only, restart, stdin\_open, tty, user, working\_dir Each of these is a single value, analogous to its [docker run](https://docs.docker.com/reference/run/) counterpart. cpu_shares: 73 cpuset: 0,1 entrypoint: /code/entrypoint.sh user: postgresql working_dir: /code domainname: foo.com hostname: foo ipc: host mac_address: 02:42:ac:11:65:43 mem_limit: 1000000000 memswap_limit: 2000000000 privileged: true restart: always read_only: true stdin_open: true tty: true ## Variable substitution Your configuration options can contain environment variables. Compose uses the variable values from the shell environment in which `docker-compose` is run. For example, suppose the shell contains `POSTGRES_VERSION=9.3` and you supply this configuration: db: image: "postgres:${POSTGRES_VERSION}" When you run `docker-compose up` with this configuration, Compose looks for the `POSTGRES_VERSION` environment variable in the shell and substitutes its value in. For this example, Compose resolves the `image` to `postgres:9.3` before running the configuration. If an environment variable is not set, Compose substitutes with an empty string. In the example above, if `POSTGRES_VERSION` is not set, the value for the `image` option is `postgres:`. Both `$VARIABLE` and `${VARIABLE}` syntax are supported. Extended shell-style features, such as `${VARIABLE-default}` and `${VARIABLE/foo/bar}`, are not supported. You can use a `$$` (double-dollar sign) when your configuration needs a literal dollar sign. This also prevents Compose from interpolating a value, so a `$$` allows you to refer to environment variables that you don't want processed by Compose. web: build: . command: "$$VAR_NOT_INTERPOLATED_BY_COMPOSE" If you forget and use a single dollar sign (`$`), Compose interprets the value as an environment variable and will warn you: The VAR_NOT_INTERPOLATED_BY_COMPOSE is not set. Substituting an empty string. ## Compose documentation - [User guide](index.md) - [Installing Compose](install.md) - [Get started with Django](django.md) - [Get started with Rails](rails.md) - [Get started with WordPress](wordpress.md) - [Command line reference](./reference/index.md) compose-1.5.2/docs/django.md000066400000000000000000000145031263011261000156730ustar00rootroot00000000000000 # Quickstart Guide: Compose and Django This quick-start guide demonstrates how to use Compose to set up and run a simple Django/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md). 
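To confirm the installation before you begin, you can ask Compose for its version. The `--short` flag prints just the version number; the exact value shown below is only an example and will match whatever release you installed:

    $ docker-compose version --short
    1.5.2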
## Define the project components For this project, you need to create a Dockerfile, a Python dependencies file, and a `docker-compose.yml` file. 1. Create an empty project directory. You can name the directory something easy for you to remember. This directory is the context for your application image. The directory should only contain resources to build that image. 2. Create a new file called `Dockerfile` in your project directory. The Dockerfile defines an application's image content via one or more build commands that configure that image. Once built, you can run the image in a container. For more information on `Dockerfiles`, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/). 3. Add the following content to the `Dockerfile`. FROM python:2.7 ENV PYTHONUNBUFFERED 1 RUN mkdir /code WORKDIR /code ADD requirements.txt /code/ RUN pip install -r requirements.txt ADD . /code/ This `Dockerfile` starts with a Python 2.7 base image. The base image is modified by adding a new `code` directory. The base image is further modified by installing the Python requirements defined in the `requirements.txt` file. 4. Save and close the `Dockerfile`. 5. Create a `requirements.txt` in your project directory. This file is used by the `RUN pip install -r requirements.txt` command in your `Dockerfile`. 6. Add the required software in the file. Django psycopg2 7. Save and close the `requirements.txt` file. 8. Create a file called `docker-compose.yml` in your project directory. The `docker-compose.yml` file describes the services that make your app. In this example those services are a web server and database. The compose file also describes which Docker images these services use, how they link together, any volumes they might need mounted inside the containers. Finally, the `docker-compose.yml` file describes which ports these services expose. See the [`docker-compose.yml` reference](compose-file.md) for more information on how this file works. 9. Add the following configuration to the file. db: image: postgres web: build: . command: python manage.py runserver 0.0.0.0:8000 volumes: - .:/code ports: - "8000:8000" links: - db This file defines two services: The `db` service and the `web` service. 10. Save and close the `docker-compose.yml` file. ## Create a Django project In this step, you create a Django started project by building the image from the build context defined in the previous procedure. 1. Change to the root of your project directory. 2. Create the Django project using the `docker-compose` command. $ docker-compose run web django-admin.py startproject composeexample . This instructs Compose to run `django-admin.py startproject composeeexample` in a container, using the `web` service's image and configuration. Because the `web` image doesn't exist yet, Compose builds it from the current directory, as specified by the `build: .` line in `docker-compose.yml`. Once the `web` service image is built, Compose runs it and executes the `django-admin.py startproject` command in the container. This command instructs Django to create a set of files and directories representing a Django project. 3. After the `docker-compose` command completes, list the contents of your project. 
$ ls -l drwxr-xr-x 2 root root composeexample -rw-rw-r-- 1 user user docker-compose.yml -rw-rw-r-- 1 user user Dockerfile -rwxr-xr-x 1 root root manage.py -rw-rw-r-- 1 user user requirements.txt The files `django-admin` created are owned by root. This happens because the container runs as the `root` user. 4. Change the ownership of the new files. sudo chown -R $USER:$USER . ## Connect the database In this section, you set up the database connection for Django. 1. In your project dirctory, edit the `composeexample/settings.py` file. 2. Replace the `DATABASES = ...` with the following: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'postgres', 'USER': 'postgres', 'HOST': 'db', 'PORT': 5432, } } These settings are determined by the [postgres](https://registry.hub.docker.com/_/postgres/) Docker image specified in `docker-compose.yml`. 3. Save and close the file. 4. Run the `docker-compose up` command. $ docker-compose up Starting composepractice_db_1... Starting composepractice_web_1... Attaching to composepractice_db_1, composepractice_web_1 ... db_1 | PostgreSQL init process complete; ready for start up. ... db_1 | LOG: database system is ready to accept connections db_1 | LOG: autovacuum launcher started .. web_1 | Django version 1.8.4, using settings 'composeexample.settings' web_1 | Starting development server at http://0.0.0.0:8000/ web_1 | Quit the server with CONTROL-C. At this point, your Django app should be running at port `8000` on your Docker host. If you are using a Docker Machine VM, you can use the `docker-machine ip MACHINE_NAME` to get the IP address. ## More Compose documentation - [User guide](../index.md) - [Installing Compose](install.md) - [Getting Started](gettingstarted.md) - [Get started with Rails](rails.md) - [Get started with WordPress](wordpress.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.5.2/docs/env.md000066400000000000000000000034221263011261000152170ustar00rootroot00000000000000 # Compose environment variables reference **Note:** Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the [docker-compose.yml documentation](compose-file.md#links) for details. Compose uses [Docker links] to expose services' containers to one another. Each linked container injects a set of environment variables, each of which begins with the uppercase name of the container. To see what environment variables are available to a service, run `docker-compose run SERVICE env`. name\_PORT
Full URL, e.g. `DB_PORT=tcp://172.17.0.5:5432`

name\_PORT\_num\_protocol<br>
Full URL, e.g. `DB_PORT_5432_TCP=tcp://172.17.0.5:5432`

name\_PORT\_num\_protocol\_ADDR<br>
Container's IP address, e.g. `DB_PORT_5432_TCP_ADDR=172.17.0.5`

name\_PORT\_num\_protocol\_PORT<br>
Exposed port number, e.g. `DB_PORT_5432_TCP_PORT=5432`

name\_PORT\_num\_protocol\_PROTO<br>
Protocol (tcp or udp), e.g. `DB_PORT_5432_TCP_PROTO=tcp`

name\_NAME<br>
Fully qualified container name, e.g. `DB_1_NAME=/myapp_web_1/myapp_db_1` [Docker links]: http://docs.docker.com/userguide/dockerlinks/ ## Related Information - [User guide](index.md) - [Installing Compose](install.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.5.2/docs/extends.md000066400000000000000000000236031263011261000161040ustar00rootroot00000000000000 # Extending services and Compose files Compose supports two methods of sharing common configuration: 1. Extending an entire Compose file by [using multiple Compose files](#multiple-compose-files) 2. Extending individual services with [the `extends` field](#extending-services) ## Multiple Compose files Using multiple Compose files enables you to customize a Compose application for different environments or different workflows. ### Understanding multiple Compose files By default, Compose reads two files, a `docker-compose.yml` and an optional `docker-compose.override.yml` file. By convention, the `docker-compose.yml` contains your base configuration. The override file, as its name implies, can contain configuration overrides for existing services or entirely new services. If a service is defined in both files, Compose merges the configurations using the same rules as the `extends` field (see [Adding and overriding configuration](#adding-and-overriding-configuration)), with one exception. If a service contains `links` or `volumes_from` those fields are copied over and replace any values in the original service, in the same way single-valued fields are copied. To use multiple override files, or an override file with a different name, you can use the `-f` option to specify the list of files. Compose merges files in the order they're specified on the command line. See the [`docker-compose` command reference](./reference/docker-compose.md) for more information about using `-f`. When you use multiple configuration files, you must make sure all paths in the files are relative to the base Compose file (the first Compose file specified with `-f`). This is required because override files need not be valid Compose files. Override files can contain small fragments of configuration. Tracking which fragment of a service is relative to which path is difficult and confusing, so to keep paths easier to understand, all paths must be defined relative to the base file. ### Example use case In this section are two common use cases for multiple compose files: changing a Compose app for different environments, and running administrative tasks against a Compose app. #### Different environments A common use case for multiple files is changing a development Compose app for a production-like environment (which may be production, staging or CI). To support these differences, you can split your Compose configuration into a few different files: Start with a base file that defines the canonical configuration for the services. **docker-compose.yml** web: image: example/my_web_app:latest links: - db - cache db: image: postgres:latest cache: image: redis:latest In this example the development configuration exposes some ports to the host, mounts our code as a volume, and builds the web image. **docker-compose.override.yml** web: build: . volumes: - '.:/code' ports: - 8883:80 environment: DEBUG: 'true' db: command: '-d' ports: - 5432:5432 cache: ports: - 6379:6379 When you run `docker-compose up` it reads the overrides automatically. Now, it would be nice to use this Compose app in a production environment. 
To target production, create another override file (which might be stored in a
different git repo or managed by a different team).

**docker-compose.prod.yml**

    web:
      ports:
        - 80:80
      environment:
        PRODUCTION: 'true'

    cache:
      environment:
        TTL: '500'

To deploy with this production Compose file you can run

    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

This deploys all three services using the configuration in
`docker-compose.yml` and `docker-compose.prod.yml` (but not the
dev configuration in `docker-compose.override.yml`).

See [production](production.md) for more information about Compose in
production.

#### Administrative tasks

Another common use case is running ad hoc or administrative tasks against one
or more services in a Compose app. This example demonstrates running a
database backup.

Start with a **docker-compose.yml**.

    web:
      image: example/my_web_app:latest
      links:
        - db

    db:
      image: postgres:latest

In a **docker-compose.admin.yml** add a new service to run the database
export or backup.

    dbadmin:
      build: database_admin/
      links:
        - db

To start a normal environment run `docker-compose up -d`. To run a database
backup, include the `docker-compose.admin.yml` as well.

    docker-compose -f docker-compose.yml -f docker-compose.admin.yml \
        run dbadmin db-backup

## Extending services

Docker Compose's `extends` keyword enables sharing of common configurations
among different files, or even different projects entirely. Extending services
is useful if you have several services that reuse a common set of configuration
options. Using `extends` you can define a common set of service options in one
place and refer to it from anywhere.

> **Note:** `links` and `volumes_from` are never shared between services using
> `extends`. See
> [Adding and overriding configuration](#adding-and-overriding-configuration)
> for more information.

### Understand the extends configuration

When defining any service in `docker-compose.yml`, you can declare that you
are extending another service like this:

    web:
      extends:
        file: common-services.yml
        service: webapp

This instructs Compose to re-use the configuration for the `webapp` service
defined in the `common-services.yml` file. Suppose that `common-services.yml`
looks like this:

    webapp:
      build: .
      ports:
        - "8000:8000"
      volumes:
        - "/data"

In this case, you'll get exactly the same result as if you wrote
`docker-compose.yml` with the same `build`, `ports` and `volumes` configuration
values defined directly under `web`.

You can go further and define (or re-define) configuration locally in
`docker-compose.yml`:

    web:
      extends:
        file: common-services.yml
        service: webapp
      environment:
        - DEBUG=1
      cpu_shares: 5

    important_web:
      extends: web
      cpu_shares: 10

You can also write other services and link your `web` service to them:

    web:
      extends:
        file: common-services.yml
        service: webapp
      environment:
        - DEBUG=1
      cpu_shares: 5
      links:
        - db

    db:
      image: postgres

### Example use case

Extending an individual service is useful when you have multiple services that
have a common configuration. The example below is a Compose app with two
services: a web application and a queue worker. Both services use the same
codebase and share many configuration options.

In a **common.yml** we define the common configuration:

    app:
      build: .
      environment:
        CONFIG_FILE_PATH: /code/config
        API_KEY: xxxyyy
      cpu_shares: 5

In a **docker-compose.yml** we define the concrete services which use the
common configuration:

    webapp:
      extends:
        file: common.yml
        service: app
      command: /code/run_web_app
      ports:
        - 8080:8080
      links:
        - queue
        - db

    queue_worker:
      extends:
        file: common.yml
        service: app
      command: /code/run_worker
      links:
        - queue

## Adding and overriding configuration

Compose copies configurations from the original service over to the local one,
**except** for `links` and `volumes_from`. These exceptions exist to avoid
implicit dependencies: you always define `links` and `volumes_from` locally.
This ensures dependencies between services are clearly visible when reading the
current file. Defining these locally also ensures changes to the referenced
file don't result in breakage.

If a configuration option is defined in both the original service and the
local service, the local value *replaces* or *extends* the original value.

For single-value options like `image`, `command` or `mem_limit`, the new value
replaces the old value.

    # original service
    command: python app.py

    # local service
    command: python otherapp.py

    # result
    command: python otherapp.py

In the case of `build` and `image`, using one in the local service causes
Compose to discard the other, if it was defined in the original service.

Example of image replacing build:

    # original service
    build: .

    # local service
    image: redis

    # result
    image: redis

Example of build replacing image:

    # original service
    image: redis

    # local service
    build: .

    # result
    build: .

For the **multi-value options** `ports`, `expose`, `external_links`, `dns` and
`dns_search`, Compose concatenates both sets of values:

    # original service
    expose:
      - "3000"

    # local service
    expose:
      - "4000"
      - "5000"

    # result
    expose:
      - "3000"
      - "4000"
      - "5000"

In the case of `environment`, `labels`, `volumes` and `devices`, Compose
"merges" entries together with locally-defined values taking precedence:

    # original service
    environment:
      - FOO=original
      - BAR=original

    # local service
    environment:
      - BAR=local
      - BAZ=local

    # result
    environment:
      - FOO=original
      - BAR=local
      - BAZ=local

## Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)
compose-1.5.2/docs/faq.md000066400000000000000000000135151263011261000152020ustar00rootroot00000000000000

# Frequently asked questions

If you don’t see your question here, feel free to drop by `#docker-compose` on
freenode IRC and ask the community.

## Why do my services take 10 seconds to stop?

Compose stop attempts to stop a container by sending a `SIGTERM`. It then waits
for a [default timeout of 10 seconds](./reference/stop.md). After the timeout,
a `SIGKILL` is sent to the container to forcefully kill it. If you are waiting
for this timeout, it means that your containers aren't shutting down when they
receive the `SIGTERM` signal.

There has already been a lot written about this problem of
[processes handling signals](https://medium.com/@gchudnov/trapping-signals-in-docker-containers-7a57fdda7d86)
in containers.

To fix this problem, try the following:

* Make sure you're using the JSON form of `CMD` and `ENTRYPOINT` in your
Dockerfile. For example use `["program", "arg1", "arg2"]` not
`"program arg1 arg2"`. Using the string form causes Docker to run your process
using `/bin/sh -c`, which doesn't handle signals properly. Compose always uses
the JSON form, so don't worry if you override the command or entrypoint in your
Compose file.

* If you are able, modify the application that you're running to add an
explicit signal handler for `SIGTERM`.

* If you can't modify the application, wrap the application in a lightweight
init system (like [s6](http://skarnet.org/software/s6/)) or a signal proxy
(like [dumb-init](https://github.com/Yelp/dumb-init) or
[tini](https://github.com/krallin/tini)). Either of these wrappers takes care
of handling `SIGTERM` properly.

## How do I run multiple copies of a Compose file on the same host?

Compose uses the project name to create unique identifiers for all of a
project's containers and other resources. To run multiple copies of a project,
set a custom project name using the
[`-p` command line option](./reference/docker-compose.md) or the
[`COMPOSE_PROJECT_NAME` environment variable](./reference/overview.md#compose-project-name).

## What's the difference between `up`, `run`, and `start`?

Typically, you want `docker-compose up`. Use `up` to start or restart all the
services defined in a `docker-compose.yml`. In the default "attached" mode,
you'll see all the logs from all the containers. In "detached" mode (`-d`),
Compose exits after starting the containers, but the containers continue to run
in the background.

The `docker-compose run` command is for running "one-off" or "ad hoc" tasks. It
requires the service name you want to run and only starts containers for
services that the running service depends on. Use `run` to run tests or perform
an administrative task such as removing or adding data to a data volume
container. The `run` command acts like `docker run -ti` in that it opens an
interactive terminal to the container and returns an exit status matching the
exit status of the process in the container.

The `docker-compose start` command is useful only to restart containers that
were previously created, but were stopped. It never creates new containers.

## Can I use json instead of yaml for my Compose file?

Yes. [Yaml is a superset of json](http://stackoverflow.com/a/1729545/444646) so
any JSON file should be valid Yaml. To use a JSON file with Compose, specify the
filename to use, for example:

```bash
docker-compose -f docker-compose.json up
```

## How do I get Compose to wait for my database to be ready before starting my application?

Unfortunately, Compose won't do that for you but for a good reason.

The problem of waiting for a database to be ready is really just a subset of a
much larger problem of distributed systems. In production, your database could
become unavailable or move hosts at any time. The application needs to be
resilient to these types of failures.

To handle this, the application would attempt to re-establish a connection to
the database after a failure. If the application retries the connection, it
should eventually be able to connect to the database.

To wait for the application to be in a good state, you can implement a
healthcheck. A healthcheck makes a request to the application and checks the
response for a success status code. If it is not successful, it waits for a
short period of time, and tries again. After some timeout value, the check
stops trying and reports a failure.

If you need to run tests against your application, you can start by running a
healthcheck. Once the healthcheck gets a successful response, you can start
running your tests.
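As an illustration, here is a minimal shell sketch of such a check. The URL,
retry count, and delay are placeholders to adapt to your own application:

    #!/bin/sh
    # Poll the application until it returns a success status code,
    # giving up after 30 attempts (roughly 30 seconds).
    for i in $(seq 1 30); do
        if curl --silent --fail http://localhost:5000/ > /dev/null; then
            echo "Application is up"
            exit 0
        fi
        echo "Waiting for the application... ($i)"
        sleep 1
    done
    echo "Application did not become ready in time" >&2
    exit 1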
## Should I include my code with `COPY`/`ADD` or a volume?

You can add your code to the image using the `COPY` or `ADD` directive in a
`Dockerfile`. This is useful if you need to move your code along with the
Docker image, for example when you're sending code to another environment
(production, CI, etc).

You should use a `volume` if you want to make changes to your code and see them
reflected immediately, for example when you're developing code and your server
supports hot code reloading or live-reload.

There may be cases where you'll want to use both. You can have the image
include the code using a `COPY`, and use a `volume` in your Compose file to
include the code from the host during development. The volume overrides the
directory contents of the image.

## Where can I find example compose files?

There are [many examples of Compose files on github](https://github.com/search?q=in%3Apath+docker-compose.yml+extension%3Ayml&type=Code).

## Compose documentation

- [Installing Compose](install.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)
compose-1.5.2/docs/gettingstarted.md000066400000000000000000000142261263011261000174630ustar00rootroot00000000000000

# Getting Started

On this page you build a simple Python web application running on Compose. The
application uses the Flask framework and increments a value in Redis. While the
sample uses Python, the concepts demonstrated here should be understandable even
if you're not familiar with it.

## Prerequisites

Make sure you have already
[installed both Docker Engine and Docker Compose](install.md). You don't need
to install Python; it is provided by a Docker image.

## Step 1: Setup

1. Create a directory for the project:

        $ mkdir composetest
        $ cd composetest

2. With your favorite text editor create a file called `app.py` in your project
   directory.

        from flask import Flask
        from redis import Redis
        app = Flask(__name__)
        redis = Redis(host='redis', port=6379)

        @app.route('/')
        def hello():
            redis.incr('hits')
            return 'Hello World! I have been seen %s times.' % redis.get('hits')

        if __name__ == "__main__":
            app.run(host="0.0.0.0", debug=True)

3. Create another file called `requirements.txt` in your project directory and
   add the following:

        flask
        redis

   These define the application's dependencies.

## Step 2: Create a Docker image

In this step, you build a new Docker image. The image contains all the
dependencies the Python application requires, including Python itself.

1. In your project directory create a file named `Dockerfile` and add the
   following:

        FROM python:2.7
        ADD . /code
        WORKDIR /code
        RUN pip install -r requirements.txt
        CMD python app.py

   This tells Docker to:

   * Build an image starting with the Python 2.7 image.
   * Add the current directory `.` into the path `/code` in the image.
   * Set the working directory to `/code`.
   * Install the Python dependencies.
   * Set the default command for the container to `python app.py`

   For more information on how to write Dockerfiles, see the
   [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile)
   and the [Dockerfile reference](http://docs.docker.com/reference/builder/).

2. Build the image.

        $ docker build -t web .

   This command builds an image named `web` from the contents of the current
   directory. The command automatically locates the `Dockerfile`, `app.py`, and
   `requirements.txt` files.
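If you'd like to double-check the build before wiring the app up to Redis, you
can run a throwaway container from the new image. This is a quick sanity check,
not part of the guide proper; the exact version printed depends on the base
image:

    $ docker run --rm web python --version
    Python 2.7.10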
## Step 3: Define services

Define a set of services using `docker-compose.yml`:

1. Create a file called `docker-compose.yml` in your project directory and add
   the following:

        web:
          build: .
          ports:
           - "5000:5000"
          volumes:
           - .:/code
          links:
           - redis
        redis:
          image: redis

This Compose file defines two services, `web` and `redis`. The web service:

* Builds from the `Dockerfile` in the current directory.
* Forwards the exposed port 5000 on the container to port 5000 on the host
  machine.
* Mounts the project directory on the host to `/code` inside the container,
  allowing you to modify the code without having to rebuild the image.
* Links the web service to the Redis service.

The `redis` service uses the latest public
[Redis](https://registry.hub.docker.com/_/redis/) image pulled from the Docker
Hub registry.

## Step 4: Build and run your app with Compose

1. From your project directory, start up your application.

        $ docker-compose up
        Pulling image redis...
        Building web...
        Starting composetest_redis_1...
        Starting composetest_web_1...
        redis_1 | [8] 02 Jan 18:43:35.576 # Server started, Redis version 2.8.3
        web_1   | * Running on http://0.0.0.0:5000/
        web_1   | * Restarting with stat

   Compose pulls a Redis image, builds an image for your code, and starts the
   services you defined.

2. Enter `http://0.0.0.0:5000/` in a browser to see the application running.

   If you're using Docker on Linux natively, then the web app should now be
   listening on port 5000 on your Docker daemon host. If http://0.0.0.0:5000
   doesn't resolve, you can also try http://localhost:5000.

   If you're using Docker Machine on a Mac, use `docker-machine ip MACHINE_VM`
   to get the IP address of your Docker host. Then, `open http://MACHINE_VM_IP:5000`
   in a browser.

   You should see a message in your browser saying:

   `Hello World! I have been seen 1 times.`

3. Refresh the page.

   The number should increment.

## Step 5: Experiment with some other commands

If you want to run your services in the background, you can pass the `-d` flag
(for "detached" mode) to `docker-compose up` and use `docker-compose ps` to see
what is currently running:

    $ docker-compose up -d
    Starting composetest_redis_1...
    Starting composetest_web_1...
    $ docker-compose ps
            Name                    Command             State         Ports
    -------------------------------------------------------------------------
    composetest_redis_1   /usr/local/bin/run          Up
    composetest_web_1     /bin/sh -c python app.py    Up      5000->5000/tcp

The `docker-compose run` command allows you to run one-off commands for your
services. For example, to see what environment variables are available to the
`web` service:

    $ docker-compose run web env

See `docker-compose --help` to see other available commands. You can also
install [command completion](completion.md) for the bash and zsh shell, which
will also show you available commands.

If you started Compose with `docker-compose up -d`, you'll probably want to
stop your services once you've finished with them:

    $ docker-compose stop

At this point, you have seen the basics of how Compose works.

## Where to go next

- Next, try the quick start guide for [Django](django.md), [Rails](rails.md),
  or [WordPress](wordpress.md).
- [Explore the full list of Compose commands](./reference/index.md)
- [Compose configuration file reference](compose-file.md)
compose-1.5.2/docs/index.md000066400000000000000000000162521263011261000155430ustar00rootroot00000000000000

# Overview of Docker Compose

Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services from your
configuration. To learn more about all the features of Compose, see
[the list of features](#features).

Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](#common-use-cases).

Using Compose is basically a three-step process.

1. Define your app's environment with a `Dockerfile` so it can be reproduced
anywhere.
2. Define the services that make up your app in `docker-compose.yml` so they
can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire
app.

A `docker-compose.yml` looks like this:

    web:
      build: .
      ports:
       - "5000:5000"
      volumes:
       - .:/code
      links:
       - redis
    redis:
      image: redis

For more information about the Compose file, see the
[Compose file reference](compose-file.md).

Compose has commands for managing the whole lifecycle of your application:

 * Start, stop and rebuild services
 * View the status of running services
 * Stream the log output of running services
 * Run a one-off command on a service

## Compose documentation

- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Frequently asked questions](faq.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)

## Features

The features of Compose that make it effective are:

* [Multiple isolated environments on a single host](#multiple-isolated-environments-on-a-single-host)
* [Preserve volume data when containers are created](#preserve-volume-data-when-containers-are-created)
* [Only recreate containers that have changed](#only-recreate-containers-that-have-changed)
* [Variables and moving a composition between environments](#variables-and-moving-a-composition-between-environments)

#### Multiple isolated environments on a single host

Compose uses a project name to isolate environments from each other. You can
use this project name:

* on a dev host, to create multiple copies of a single environment (e.g., when
  you want to run a stable copy for each feature branch of a project)
* on a CI server, to keep builds from interfering with each other, by setting
  the project name to a unique build number
* on a shared host or dev host, to prevent different projects, which may use
  the same service names, from interfering with each other

The default project name is the basename of the project directory. You can set
a custom project name by using the
[`-p` command line option](./reference/docker-compose.md) or the
[`COMPOSE_PROJECT_NAME` environment variable](./reference/overview.md#compose-project-name).

#### Preserve volume data when containers are created

Compose preserves all volumes used by your services. When `docker-compose up`
runs, if it finds any containers from previous runs, it copies the volumes from
the old container to the new container. This process ensures that any data
you've created in volumes isn't lost.

#### Only recreate containers that have changed

Compose caches the configuration used to create a container. When you restart a
service that has not changed, Compose re-uses the existing containers. Re-using
containers means that you can make changes to your environment very quickly.

#### Variables and moving a composition between environments

Compose supports variables in the Compose file.
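For example, here is a sketch where `EXTERNAL_PORT` is a placeholder variable
name; its value is read from the shell environment in which you run
`docker-compose`:

    web:
      build: .
      ports:
        - "${EXTERNAL_PORT}:5000"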
You can use these variables to customize your composition for different environments, or different users. See [Variable substitution](compose-file.md#variable-substitution) for more details. You can extend a Compose file using the `extends` field or by creating multiple Compose files. See [extends](extends.md) for more details. ## Common Use Cases Compose can be used in many different ways. Some common use cases are outlined below. ### Development environments When you're developing software, the ability to run an application in an isolated environment and interact with it is crucial. The Compose command line tool can be used to create the environment and interact with it. The [Compose file](compose-file.md) provides a way to document and configure all of the application's service dependencies (databases, queues, caches, web service APIs, etc). Using the Compose command line tool you can create and start one or more containers for each dependency with a single command (`docker-compose up`). Together, these features provide a convenient way for developers to get started on a project. Compose can reduce a multi-page "developer getting started guide" to a single machine readable Compose file and a few commands. ### Automated testing environments An important part of any Continuous Deployment or Continuous Integration process is the automated test suite. Automated end-to-end testing requires an environment in which to run tests. Compose provides a convenient way to create and destroy isolated testing environments for your test suite. By defining the full environment in a [Compose file](compose-file.md) you can create and destroy these environments in just a few commands: $ docker-compose up -d $ ./run_tests $ docker-compose stop $ docker-compose rm -f ### Single host deployments Compose has traditionally been focused on development and testing workflows, but with each release we're making progress on more production-oriented features. You can use Compose to deploy to a remote Docker Engine. The Docker Engine may be a single instance provisioned with [Docker Machine](https://docs.docker.com/machine/) or an entire [Docker Swarm](https://docs.docker.com/swarm/) cluster. For details on using production-oriented features, see [compose in production](production.md) in this documentation. ## Release Notes To see a detailed list of changes for past and current releases of Docker Compose, please refer to the [CHANGELOG](https://github.com/docker/compose/blob/master/CHANGELOG.md). ## Getting help Docker Compose is under active development. If you need help, would like to contribute, or simply want to talk about the project with like-minded individuals, we have a number of open channels for communication. * To report bugs or file feature requests: please use the [issue tracker on Github](https://github.com/docker/compose/issues). * To talk about the project with people in real time: please join the `#docker-compose` channel on freenode IRC. * To contribute code or documentation changes: please submit a [pull request on Github](https://github.com/docker/compose/pulls). For more information and resources, please visit the [Getting Help project page](https://docs.docker.com/project/get-help/). compose-1.5.2/docs/install.md000066400000000000000000000112461263011261000161000ustar00rootroot00000000000000 # Install Docker Compose You can run Compose on OS X and 64-bit Linux. It is currently not supported on the Windows operating system. To install Compose, you'll need to install Docker first. 
To install Compose, do the following: 1. Install Docker Engine version 1.7.1 or greater: * Mac OS X installation (Toolbox installation includes both Engine and Compose) * Ubuntu installation * other system installations 2. Mac OS X users are done installing. Others should continue to the next step. 3. Go to the Compose repository release page on GitHub. 4. Follow the instructions from the release page and run the `curl` command, which the release page specifies, in your terminal. > Note: If you get a "Permission denied" error, your `/usr/local/bin` directory probably isn't writable and you'll need to install Compose as the superuser. Run `sudo -i`, then the two commands below, then `exit`. The following is an example command illustrating the format: curl -L https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose If you have problems installing with `curl`, see [Alternative Install Options](#alternative-install-options). 5. Apply executable permissions to the binary: $ chmod +x /usr/local/bin/docker-compose 6. Optionally, install [command completion](completion.md) for the `bash` and `zsh` shell. 7. Test the installation. $ docker-compose --version docker-compose version: 1.5.2 ## Alternative install options ### Install using pip Compose can be installed from [pypi](https://pypi.python.org/pypi/docker-compose) using `pip`. If you install using `pip` it is highly recommended that you use a [virtualenv](https://virtualenv.pypa.io/en/latest/) because many operating systems have python system packages that conflict with docker-compose dependencies. See the [virtualenv tutorial](http://docs.python-guide.org/en/latest/dev/virtualenvs/) to get started. $ pip install docker-compose > **Note:** pip version 6.0 or greater is required ### Install as a container Compose can also be run inside a container, from a small bash script wrapper. To install compose as a container run: $ curl -L https://github.com/docker/compose/releases/download/1.5.2/run.sh > /usr/local/bin/docker-compose $ chmod +x /usr/local/bin/docker-compose ## Master builds If you're interested in trying out a pre-release build you can download a binary from https://dl.bintray.com/docker-compose/master/. Pre-release builds allow you to try out new features before they are released, but may be less stable. ## Upgrading If you're upgrading from Compose 1.2 or earlier, you'll need to remove or migrate your existing containers after upgrading Compose. This is because, as of version 1.3, Compose uses Docker labels to keep track of containers, and so they need to be recreated with labels added. If Compose detects containers that were created without labels, it will refuse to run so that you don't end up with two sets of them. If you want to keep using your existing containers (for example, because they have data volumes you want to preserve) you can migrate them with the following command: $ docker-compose migrate-to-labels Alternatively, if you're not worried about keeping them, you can remove them. Compose will just create new ones. $ docker rm -f -v myapp_web_1 myapp_db_1 ... ## Uninstallation To uninstall Docker Compose if you installed using `curl`: $ rm /usr/local/bin/docker-compose To uninstall Docker Compose if you installed using `pip`: $ pip uninstall docker-compose >**Note**: If you get a "Permission denied" error using either of the above >methods, you probably do not have the proper permissions to remove >`docker-compose`. 
To force the removal, prepend `sudo` to either of the above commands and run
again.

## Where to go next

- [User guide](/)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with Rails](rails.md)
- [Get started with WordPress](wordpress.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)
compose-1.5.2/docs/networking.md000066400000000000000000000106431263011261000166210ustar00rootroot00000000000000

# Networking in Compose

> **Note:** Compose's networking support is experimental, and must be
> explicitly enabled with the `docker-compose --x-networking` flag.

Compose sets up a single default
[network](/engine/reference/commandline/network_create.md) for your app. Each
container for a service joins the default network and is both *reachable* by
other containers on that network, and *discoverable* by them at a hostname
identical to the container name.

> **Note:** Your app's network is given the same name as the "project name",
> which is based on the name of the directory it lives in. See the
> [Command line overview](reference/docker-compose.md) for how to override it.

For example, suppose your app is in a directory called `myapp`, and your
`docker-compose.yml` looks like this:

    web:
      build: .
      ports:
        - "8000:8000"
    db:
      image: postgres

When you run `docker-compose --x-networking up`, the following happens:

1. A network called `myapp` is created.
2. A container is created using `web`'s configuration. It joins the network
   `myapp` under the name `myapp_web_1`.
3. A container is created using `db`'s configuration. It joins the network
   `myapp` under the name `myapp_db_1`.

Each container can now look up the hostname `myapp_web_1` or `myapp_db_1` and
get back the appropriate container's IP address. For example, `web`'s
application code could connect to the URL `postgres://myapp_db_1:5432` and
start using the Postgres database.

Because `web` explicitly maps a port, it's also accessible from the outside
world via port 8000 on your Docker host's network interface.

> **Note:** in the next release there will be additional aliases for the
> container, including a short name without the project name and container
> index. The full container name will remain as one of the aliases for
> backwards compatibility.

## Updating containers

If you make a configuration change to a service and run `docker-compose up` to
update it, the old container will be removed and the new one will join the
network under a different IP address but the same name. Running containers will
be able to look up that name and connect to the new address, but the old
address will stop working.

If any containers have connections open to the old container, they will be
closed. It is a container's responsibility to detect this condition, look up
the name again and reconnect.

## Configure how services are published

By default, containers for each service are published on the network with the
container name. If you want to change the name, or stop containers from being
discoverable at all, you can use the `container_name` option:

    web:
      build: .
      container_name: "my-web-application"

## Links

Docker links are a one-way, single-host communication system. They should now
be considered deprecated, and you should update your app to use networking
instead. In the majority of cases, this will simply involve removing the
`links` sections from your `docker-compose.yml`.

## Specifying the network driver

By default, Compose uses the `bridge` driver when creating the app’s network.
The Docker Engine provides one other driver out-of-the-box: `overlay`, which implements secure communication between containers on different hosts (see the next section for how to set up and use the `overlay` driver). Docker also allows you to install [custom network drivers](/engine/extend/plugins_network.md). You can specify which one to use with the `--x-network-driver` flag: $ docker-compose --x-networking --x-network-driver=overlay up ## Multi-host networking (TODO: talk about Swarm and the overlay driver) ## Custom container network modes Compose allows you to specify a custom network mode for a service with the `net` option - for example, `net: "host"` specifies that its containers should use the same network namespace as the Docker host, and `net: "none"` specifies that they should have no networking capabilities. If a service specifies the `net` option, its containers will *not* join the app’s network and will not be able to communicate with other services in the app. If *all* services in an app specify the `net` option, a network will not be created at all. compose-1.5.2/docs/production.md000066400000000000000000000064721263011261000166250ustar00rootroot00000000000000 ## Using Compose in production > Compose is still primarily aimed at development and testing environments. > Compose may be used for smaller production deployments, but is probably > not yet suitable for larger deployments. When deploying to production, you'll almost certainly want to make changes to your app configuration that are more appropriate to a live environment. These changes may include: - Removing any volume bindings for application code, so that code stays inside the container and can't be changed from outside - Binding to different ports on the host - Setting environment variables differently (e.g., to decrease the verbosity of logging, or to enable email sending) - Specifying a restart policy (e.g., `restart: always`) to avoid downtime - Adding extra services (e.g., a log aggregator) For this reason, you'll probably want to define an additional Compose file, say `production.yml`, which specifies production-appropriate configuration. This configuration file only needs to include the changes you'd like to make from the original Compose file. The additional Compose file can be applied over the original `docker-compose.yml` to create a new configuration. Once you've got a second configuration file, tell Compose to use it with the `-f` option: $ docker-compose -f docker-compose.yml -f production.yml up -d See [Using multiple compose files](extends.md#different-environments) for a more complete example. ### Deploying changes When you make changes to your app code, you'll need to rebuild your image and recreate your app's containers. To redeploy a service called `web`, you would use: $ docker-compose build web $ docker-compose up --no-deps -d web This will first rebuild the image for `web` and then stop, destroy, and recreate *just* the `web` service. The `--no-deps` flag prevents Compose from also recreating any services which `web` depends on. ### Running Compose on a single server You can use Compose to deploy an app to a remote Docker host by setting the `DOCKER_HOST`, `DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` environment variables appropriately. For tasks like this, [Docker Machine](https://docs.docker.com/machine) makes managing local and remote Docker hosts very easy, and is recommended even if you're not deploying remotely. 
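For example, if you created a remote host with Docker Machine under the
hypothetical name `production`, you could point Compose at it like this (or
export `DOCKER_HOST` and friends by hand):

    $ eval "$(docker-machine env production)"
    $ docker-compose up -d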
Once you've set up your environment variables, all the normal `docker-compose` commands will work with no further configuration. ### Running Compose on a Swarm cluster [Docker Swarm](https://docs.docker.com/swarm), a Docker-native clustering system, exposes the same API as a single Docker host, which means you can use Compose against a Swarm instance and run your apps across multiple hosts. Compose/Swarm integration is still in the experimental stage, and Swarm is still in beta, but if you'd like to explore and experiment, check out the integration guide. ## Compose documentation - [Installing Compose](install.md) - [Command line reference](./reference/index.md) - [Compose file reference](compose-file.md) compose-1.5.2/docs/rails.md000066400000000000000000000120141263011261000155360ustar00rootroot00000000000000 ## Quickstart Guide: Compose and Rails This Quickstart guide will show you how to use Compose to set up and run a Rails/PostgreSQL app. Before starting, you'll need to have [Compose installed](install.md). ### Define the project Start by setting up the three files you'll need to build the app. First, since your app is going to run inside a Docker container containing all of its dependencies, you'll need to define exactly what needs to be included in the container. This is done using a file called `Dockerfile`. To begin with, the Dockerfile consists of: FROM ruby:2.2.0 RUN apt-get update -qq && apt-get install -y build-essential libpq-dev RUN mkdir /myapp WORKDIR /myapp ADD Gemfile /myapp/Gemfile ADD Gemfile.lock /myapp/Gemfile.lock RUN bundle install ADD . /myapp That'll put your application code inside an image that will build a container with Ruby, Bundler and all your dependencies inside it. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/). Next, create a bootstrap `Gemfile` which just loads Rails. It'll be overwritten in a moment by `rails new`. source 'https://rubygems.org' gem 'rails', '4.2.0' You'll need an empty `Gemfile.lock` in order to build our `Dockerfile`. $ touch Gemfile.lock Finally, `docker-compose.yml` is where the magic happens. This file describes the services that comprise your app (a database and a web app), how to get each one's Docker image (the database just runs on a pre-made PostgreSQL image, and the web app is built from the current directory), and the configuration needed to link them together and expose the web app's port. db: image: postgres web: build: . command: bundle exec rails s -p 3000 -b '0.0.0.0' volumes: - .:/myapp ports: - "3000:3000" links: - db ### Build the project With those three files in place, you can now generate the Rails skeleton app using `docker-compose run`: $ docker-compose run web rails new . --force --database=postgresql --skip-bundle First, Compose will build the image for the `web` service using the `Dockerfile`. Then it'll run `rails new` inside a new container, using that image. Once it's done, you should have generated a fresh app: $ ls Dockerfile app docker-compose.yml tmp Gemfile bin lib vendor Gemfile.lock config log README.rdoc config.ru public Rakefile db test The files `rails new` created are owned by root. This happens because the container runs as the `root` user. Change the ownership of the new files. sudo chown -R $USER:$USER . 
Uncomment the line in your new `Gemfile` which loads `therubyracer`, so you've
got a JavaScript runtime:

    gem 'therubyracer', platforms: :ruby

Now that you've got a new `Gemfile`, you need to build the image again. (This,
and changes to the Dockerfile itself, should be the only times you'll need to
rebuild.)

    $ docker-compose build

### Connect the database

The app is now bootable, but you're not quite there yet. By default, Rails
expects a database to be running on `localhost` - so you need to point it at
the `db` container instead. You also need to change the database and username
to align with the defaults set by the `postgres` image.

Replace the contents of `config/database.yml` with the following:

    development: &default
      adapter: postgresql
      encoding: unicode
      database: postgres
      pool: 5
      username: postgres
      password:
      host: db
    test:
      <<: *default
      database: myapp_test

You can now boot the app with:

    $ docker-compose up

If all's well, you should see some PostgreSQL output, and then, after a few
seconds, the familiar refrain:

    myapp_web_1 | [2014-01-17 17:16:29] INFO  WEBrick 1.3.1
    myapp_web_1 | [2014-01-17 17:16:29] INFO  ruby 2.2.0 (2014-12-25) [x86_64-linux-gnu]
    myapp_web_1 | [2014-01-17 17:16:29] INFO  WEBrick::HTTPServer#start: pid=1 port=3000

Finally, you need to create the database. In another terminal, run:

    $ docker-compose run web rake db:create

That's it. Your app should now be running on port 3000 on your Docker daemon
host. If you're using [Docker Machine](https://docs.docker.com/machine), then
`docker-machine ip MACHINE_VM` returns the Docker host IP address.

## More Compose documentation

- [User guide](/)
- [Installing Compose](install.md)
- [Getting Started](gettingstarted.md)
- [Get started with Django](django.md)
- [Get started with WordPress](wordpress.md)
- [Command line reference](./reference/index.md)
- [Compose file reference](compose-file.md)
compose-1.5.2/docs/reference/000077500000000000000000000000001263011261000160425ustar00rootroot00000000000000compose-1.5.2/docs/reference/build.md000066400000000000000000000012311263011261000174600ustar00rootroot00000000000000

# build

```
Usage: build [options] [SERVICE...]

Options:
--force-rm  Always remove intermediate containers.
--no-cache  Do not use cache when building the image.
--pull      Always attempt to pull a newer version of the image.
```

Services are built once and then tagged as `project_service`, e.g.,
`composetest_db`. If you change a service's Dockerfile or the contents of its
build directory, run `docker-compose build` to rebuild it.
compose-1.5.2/docs/reference/docker-compose.md000066400000000000000000000070061263011261000213010ustar00rootroot00000000000000

# docker-compose Command

```
Usage: docker-compose [-f=...] [options] [COMMAND] [ARGS...]
  docker-compose -h|--help

Options:
-f, --file FILE           Specify an alternate compose file (default: docker-compose.yml)
-p, --project-name NAME   Specify an alternate project name (default: directory name)
--verbose                 Show more output
-v, --version             Print version and exit

Commands:
build              Build or rebuild services
help               Get help on a command
kill               Kill containers
logs               View output from containers
pause              Pause services
port               Print the public port for a port binding
ps                 List containers
pull               Pulls service images
restart            Restart services
rm                 Remove stopped containers
run                Run a one-off command
scale              Set number of containers for a service
start              Start services
stop               Stop services
unpause            Unpause services
up                 Create and start containers
migrate-to-labels  Recreate containers to add labels
version            Show the Docker-Compose version information
```

The Docker Compose binary. You use this command to build and manage multiple
services in Docker containers.

Use the `-f` flag to specify the location of a Compose configuration file. You
can supply multiple `-f` configuration files. When you supply multiple files,
Compose combines them into a single configuration. Compose builds the
configuration in the order you supply the files. Subsequent files override and
add to their predecessors.

For example, consider this command line:

```
$ docker-compose -f docker-compose.yml -f docker-compose.admin.yml run backup_db
```

The `docker-compose.yml` file might specify a `webapp` service.

```
webapp:
  image: examples/web
  ports:
    - "8000:8000"
  volumes:
    - "/data"
```

If the `docker-compose.admin.yml` also specifies this same service, any
matching fields override those in the previous file. New values add to the
`webapp` service configuration.

```
webapp:
  build: .
  environment:
    - DEBUG=1
```

Use a `-f` with `-` (dash) as the filename to read the configuration from
stdin. When stdin is used all paths in the configuration are relative to the
current working directory.

The `-f` flag is optional. If you don't provide this flag on the command line,
Compose traverses the working directory and its subdirectories looking for a
`docker-compose.yml` and a `docker-compose.override.yml` file. You must supply
at least the `docker-compose.yml` file. If both files are present, Compose
combines the two files into a single configuration. The configuration in the
`docker-compose.override.yml` file is applied over and in addition to the
values in the `docker-compose.yml` file.

See also the `COMPOSE_FILE` [environment variable](overview.md#compose-file).

Each configuration has a project name. If you supply a `-p` flag, you can
specify a project name. If you don't specify the flag, Compose uses the current
directory name. See also the `COMPOSE_PROJECT_NAME`
[environment variable](overview.md#compose-project-name).

## Where to go next

* [CLI environment variables](overview.md)
* [Command line reference](index.md)
compose-1.5.2/docs/reference/help.md000066400000000000000000000004671263011261000173230ustar00rootroot00000000000000

# help

```
Usage: help COMMAND
```

Displays help and usage instructions for a command.
compose-1.5.2/docs/reference/index.md000066400000000000000000000015431263011261000174760ustar00rootroot00000000000000

## Compose CLI reference

The following pages describe the usage information for the
[docker-compose](docker-compose.md) subcommands. You can also see this
information by running `docker-compose [SUBCOMMAND] --help` from the command
line.

* [build](build.md)
* [help](help.md)
* [kill](kill.md)
* [ps](ps.md)
* [restart](restart.md)
* [run](run.md)
* [start](start.md)
* [up](up.md)
* [logs](logs.md)
* [port](port.md)
* [pull](pull.md)
* [rm](rm.md)
* [scale](scale.md)
* [stop](stop.md)

## Where to go next

* [CLI environment variables](overview.md)
* [docker-compose Command](docker-compose.md)
compose-1.5.2/docs/reference/kill.md000066400000000000000000000010401263011261000173140ustar00rootroot00000000000000

# kill

```
Usage: kill [options] [SERVICE...]

Options:
-s SIGNAL         SIGNAL to send to the container. Default signal is SIGKILL.
```

Forces running containers to stop by sending a `SIGKILL` signal. Optionally the
signal can be passed, for example:

    $ docker-compose kill -s SIGINT
compose-1.5.2/docs/reference/logs.md000066400000000000000000000006041263011261000173300ustar00rootroot00000000000000

# logs

```
Usage: logs [options] [SERVICE...]

Options:
--no-color  Produce monochrome output.
```

Displays log output from services.
compose-1.5.2/docs/reference/overview.md000066400000000000000000000072141263011261000202360ustar00rootroot00000000000000

# Introduction to the CLI

This section describes the subcommands you can use with the `docker-compose`
command. You can run a subcommand against one or more services. To run against
a specific service, you supply the service name from your compose
configuration. If you do not specify the service name, the command runs against
all the services in your configuration.

## Commands

* [docker-compose Command](docker-compose.md)
* [CLI Reference](index.md)

## Environment Variables

Several environment variables are available for you to configure the Docker
Compose command-line behaviour.

Variables starting with `DOCKER_` are the same as those used to configure the
Docker command-line client. If you're using `docker-machine`, then the
`eval "$(docker-machine env my-docker-vm)"` command should set them to their
correct values. (In this example, `my-docker-vm` is the name of a machine you
created.)

### COMPOSE\_PROJECT\_NAME

Sets the project name. This value is prepended along with the service name to
the container name on startup. For example, if your project name is `myapp` and
it includes two services, `db` and `web`, then Compose starts containers named
`myapp_db_1` and `myapp_web_1` respectively.

Setting this is optional. If you do not set this, the `COMPOSE_PROJECT_NAME`
defaults to the `basename` of the project directory. See also the `-p`
[command-line option](docker-compose.md).

### COMPOSE\_FILE

Specify the file containing the compose configuration. If not provided,
Compose looks for a file named `docker-compose.yml` in the current directory
and then each parent directory in succession until a file by that name is
found. See also the `-f` [command-line option](docker-compose.md).

### COMPOSE\_API\_VERSION

The Docker API only supports requests from clients which report a specific
version. If you receive a `client and server don't have same version error`
using `docker-compose`, you can work around this error by setting this
environment variable. Set the version value to match the server version.

Setting this variable is intended as a workaround for situations where you need
to run temporarily with a mismatch between the client and server version. For
example, if you can upgrade the client but need to wait to upgrade the server.

Running with this variable set and a known mismatch does prevent some Docker
features from working properly.
The exact features that fail would depend on the Docker client and server versions. For this reason, running with this variable set is only intended as a workaround and it is not officially supported. If you run into problems running with this set, resolve the mismatch through upgrade and remove this setting to see if your problems resolve before notifying support. ### DOCKER\_HOST Sets the URL of the `docker` daemon. As with the Docker client, defaults to `unix:///var/run/docker.sock`. ### DOCKER\_TLS\_VERIFY When set to anything other than an empty string, enables TLS communication with the `docker` daemon. ### DOCKER\_CERT\_PATH Configures the path to the `ca.pem`, `cert.pem`, and `key.pem` files used for TLS verification. Defaults to `~/.docker`. ### COMPOSE\_HTTP\_TIMEOUT Configures the time (in seconds) a request to the Docker daemon is allowed to hang before Compose considers it failed. Defaults to 60 seconds. ## Related Information - [User guide](../index.md) - [Installing Compose](../install.md) - [Compose file reference](../compose-file.md) compose-1.5.2/docs/reference/pause.md000066400000000000000000000006141263011261000175020ustar00rootroot00000000000000 # pause ``` Usage: pause [SERVICE...] ``` Pauses running containers of a service. They can be unpaused with `docker-compose unpause`. compose-1.5.2/docs/reference/port.md000066400000000000000000000010271263011261000173500ustar00rootroot00000000000000 # port ``` Usage: port [options] SERVICE PRIVATE_PORT Options: --protocol=proto tcp or udp [default: tcp] --index=index index of the container if there are multiple instances of a service [default: 1] ``` Prints the public port for a port binding. compose-1.5.2/docs/reference/ps.md000066400000000000000000000005101263011261000170020ustar00rootroot00000000000000 # ps ``` Usage: ps [options] [SERVICE...] Options: -q Only display IDs ``` Lists containers. compose-1.5.2/docs/reference/pull.md000066400000000000000000000006231263011261000173410ustar00rootroot00000000000000 # pull ``` Usage: pull [options] [SERVICE...] Options: --ignore-pull-failures Pull what it can and ignores images with pull failures. ``` Pulls service images. compose-1.5.2/docs/reference/restart.md000066400000000000000000000006531263011261000200540ustar00rootroot00000000000000 # restart ``` Usage: restart [options] [SERVICE...] Options: -t, --timeout TIMEOUT Specify a shutdown timeout in seconds. (default: 10) ``` Restarts services. compose-1.5.2/docs/reference/rm.md000066400000000000000000000006701263011261000170050ustar00rootroot00000000000000 # rm ``` Usage: rm [options] [SERVICE...] Options: -f, --force Don't ask to confirm removal -v Remove volumes associated with containers ``` Removes stopped service containers. compose-1.5.2/docs/reference/run.md000066400000000000000000000055001263011261000171700ustar00rootroot00000000000000 # run ``` Usage: run [options] [-e KEY=VAL...] SERVICE [COMMAND] [ARGS...] Options: -d Detached mode: Run container in the background, print new container name. --entrypoint CMD Override the entrypoint of the image. -e KEY=VAL Set an environment variable (can be used multiple times) -u, --user="" Run as specified username or uid --no-deps Don't start linked services. --rm Remove container after run. Ignored in detached mode. -p, --publish=[] Publish a container's port(s) to the host --service-ports Run command with the service's ports enabled and mapped to the host. -T Disable pseudo-tty allocation. By default `docker-compose run` allocates a TTY. 
```

Runs a one-time command against a service. For example, the following command
starts the `web` service and runs `bash` as its command.

    $ docker-compose run web bash

Commands you use with `run` start in new containers with the same configuration
as defined by the service's configuration. This means the container has the
same volumes and links as defined in the configuration file. There are two
differences, though.

First, the command passed by `run` overrides the command defined in the service
configuration. For example, if the `web` service configuration is started with
`bash`, then `docker-compose run web python app.py` overrides it with
`python app.py`.

The second difference is the `docker-compose run` command does not create any
of the ports specified in the service configuration. This prevents port
collisions with ports that are already open. If you *do want* the service's
ports created and mapped to the host, specify the `--service-ports` flag:

    $ docker-compose run --service-ports web python manage.py shell

Alternatively, you can specify port mappings manually, just as with Docker's
`run` command, using the `--publish` or `-p` options:

    $ docker-compose run --publish 8080:80 -p 2022:22 -p 127.0.0.1:2021:21 web python manage.py shell

If you start a service configured with links, the `run` command first checks to
see if the linked service is running and starts the service if it is stopped.
Once all the linked services are running, the `run` command executes the
command you passed it. So, for example, you could run:

    $ docker-compose run db psql -h db -U docker

This would open up an interactive PostgreSQL shell for the linked `db`
container.

If you do not want the `run` command to start linked containers, specify the
`--no-deps` flag:

    $ docker-compose run --no-deps web python manage.py shell
compose-1.5.2/docs/reference/scale.md000066400000000000000000000007201263011261000174520ustar00rootroot00000000000000

# scale

```
Usage: scale [SERVICE=NUM...]
```

Sets the number of containers to run for a service. Numbers are specified as
arguments in the form `service=num`. For example:

    $ docker-compose scale web=2 worker=3
compose-1.5.2/docs/reference/start.md000066400000000000000000000005341263011261000175230ustar00rootroot00000000000000

# start

```
Usage: start [SERVICE...]
```

Starts existing containers for a service.
compose-1.5.2/docs/reference/stop.md000066400000000000000000000007761263011261000173610ustar00rootroot00000000000000

# stop

```
Usage: stop [options] [SERVICE...]

Options:
-t, --timeout TIMEOUT      Specify a shutdown timeout in seconds (default: 10).
```

Stops running containers without removing them. They can be started again with
`docker-compose start`.
compose-1.5.2/docs/reference/unpause.md000066400000000000000000000005421263011261000200450ustar00rootroot00000000000000

# unpause

```
Usage: unpause [SERVICE...]
```

Unpauses paused containers of a service.
compose-1.5.2/docs/reference/up.md000066400000000000000000000035521263011261000170150ustar00rootroot00000000000000

# up

```
Usage: up [options] [SERVICE...]

Options:
-d                     Detached mode: Run containers in the background,
                       print new container names.
--no-color             Produce monochrome output.
--no-deps              Don't start linked services.
--force-recreate       Recreate containers even if their configuration and
                       image haven't changed. Incompatible with --no-recreate.
--no-recreate          If containers already exist, don't recreate them.
                       Incompatible with --force-recreate.
--no-build Don't build an image, even if it's missing -t, --timeout TIMEOUT Use this timeout in seconds for container shutdown when attached or when containers are already running. (default: 10) ``` Builds, (re)creates, starts, and attaches to containers for a service. Unless they are already running, this command also starts any linked services. The `docker-compose up` command aggregates the output of each container. When the command exits, all containers are stopped. Running `docker-compose up -d` starts the containers in the background and leaves them running. If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, `docker-compose up` picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the `--no-recreate` flag. If you want to force Compose to stop and recreate all containers, use the `--force-recreate` flag. compose-1.5.2/docs/wordpress.md000066400000000000000000000065321263011261000164640ustar00rootroot00000000000000 # Quickstart Guide: Compose and WordPress You can use Compose to easily run WordPress in an isolated environment built with Docker containers. ## Define the project First, [Install Compose](install.md) and then download WordPress into the current directory: $ curl https://wordpress.org/latest.tar.gz | tar -xvzf - This will create a directory called `wordpress`. If you wish, you can rename it to the name of your project. Next, inside that directory, create a `Dockerfile`, a file that defines what environment your app is going to run in. For more information on how to write Dockerfiles, see the [Docker user guide](https://docs.docker.com/userguide/dockerimages/#building-an-image-from-a-dockerfile) and the [Dockerfile reference](http://docs.docker.com/reference/builder/). In this case, your Dockerfile should be: FROM orchardup/php5 ADD . /code This tells Docker how to build an image defining a container that contains PHP and WordPress. Next you'll create a `docker-compose.yml` file that will start your web service and a separate MySQL instance: web: build: . command: php -S 0.0.0.0:8000 -t /code ports: - "8000:8000" links: - db volumes: - .:/code db: image: orchardup/mysql environment: MYSQL_DATABASE: wordpress A supporting file is needed to get this working. `wp-config.php` is the standard WordPress config file with a single change to point the database configuration at the `db` container: Note: This functionality is in the experimental stage, and contains some hacks and workarounds which will be removed as it matures. ## Prerequisites Before you start, you’ll need to install the experimental build of Docker, and the latest versions of Machine and Compose. - To install the experimental Docker build on a Linux machine, follow the instructions [here](https://github.com/docker/docker/tree/master/experimental#install-docker-experimental). - To install the experimental Docker build on a Mac, run these commands: $ curl -L https://experimental.docker.com/builds/Darwin/x86_64/docker-latest > /usr/local/bin/docker $ chmod +x /usr/local/bin/docker - To install Machine, follow the instructions [here](http://docs.docker.com/machine/). - To install Compose, follow the instructions [here](http://docs.docker.com/compose/install/). You’ll also need a [Docker Hub](https://hub.docker.com/account/signup/) account and a [Digital Ocean](https://www.digitalocean.com/) account. 
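Before continuing, you can confirm that each tool is installed and on your
`PATH` (a sanity check only; the versions printed will vary):

    $ docker --version
    $ docker-machine --version
    $ docker-compose --version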
## Set up a swarm with multi-host networking Set the `DIGITALOCEAN_ACCESS_TOKEN` environment variable to a valid Digital Ocean API token, which you can generate in the [API panel](https://cloud.digitalocean.com/settings/applications). DIGITALOCEAN_ACCESS_TOKEN=abc12345 Start a consul server: docker-machine create -d digitalocean --engine-install-url https://experimental.docker.com consul docker $(docker-machine config consul) run -d -p 8500:8500 -h consul progrium/consul -server -bootstrap (In a real world setting you’d set up a distributed consul, but that’s beyond the scope of this guide!) Create a Swarm token: SWARM_TOKEN=$(docker run swarm create) Create a Swarm master: docker-machine create -d digitalocean --swarm --swarm-master --swarm-discovery=token://$SWARM_TOKEN --engine-install-url="https://experimental.docker.com" --digitalocean-image "ubuntu-14-10-x64" --engine-opt=default-network=overlay:multihost --engine-label=com.docker.network.driver.overlay.bind_interface=eth0 --engine-opt=kv-store=consul:$(docker-machine ip consul):8500 swarm-0 Create a Swarm node: docker-machine create -d digitalocean --swarm --swarm-discovery=token://$SWARM_TOKEN --engine-install-url="https://experimental.docker.com" --digitalocean-image "ubuntu-14-10-x64" --engine-opt=default-network=overlay:multihost --engine-label=com.docker.network.driver.overlay.bind_interface=eth0 --engine-opt=kv-store=consul:$(docker-machine ip consul):8500 --engine-label com.docker.network.driver.overlay.neighbor_ip=$(docker-machine ip swarm-0) swarm-1 You can create more Swarm nodes if you want - it’s best to give them sensible names (swarm-2, swarm-3, etc). Finally, point Docker at your swarm: eval "$(docker-machine env --swarm swarm-0)" ## Run containers and get them communicating Now that you’ve got a swarm up and running, you can create containers on it just like a single Docker instance: $ docker run busybox echo hello world hello world If you run `docker ps -a`, you can see what node that container was started on by looking at its name (here it’s swarm-3): $ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 41f59749737b busybox "echo hello world" 15 seconds ago Exited (0) 13 seconds ago swarm-3/trusting_leakey As you start more containers, they’ll be placed on different nodes across the cluster, thanks to Swarm’s default “spread” scheduling strategy. Every container started on this swarm will use the “overlay:multihost” network by default, meaning they can all intercommunicate. Each container gets an IP address on that network, and an `/etc/hosts` file which will be updated on-the-fly with every other container’s IP address and name. That means that if you have a running container named ‘foo’, other containers can access it at the hostname ‘foo’. Let’s verify that multi-host networking is functioning. Start a long-running container: $ docker run -d --name long-running busybox top If you start a new container and inspect its /etc/hosts file, you’ll see the long-running container in there: $ docker run busybox cat /etc/hosts ... 
172.21.0.6 long-running Verify that connectivity works between containers: $ docker run busybox ping long-running PING long-running (172.21.0.6): 56 data bytes 64 bytes from 172.21.0.6: seq=0 ttl=64 time=7.975 ms 64 bytes from 172.21.0.6: seq=1 ttl=64 time=1.378 ms 64 bytes from 172.21.0.6: seq=2 ttl=64 time=1.348 ms ^C --- long-running ping statistics --- 3 packets transmitted, 3 packets received, 0% packet loss round-trip min/avg/max = 1.140/2.099/7.975 ms ## Run a Compose application Here’s an example of a simple Python + Redis app using multi-host networking on a swarm. Create a directory for the app: $ mkdir composetest $ cd composetest Inside this directory, create 2 files. First, create `app.py` - a simple web app that uses the Flask framework and increments a value in Redis: from flask import Flask from redis import Redis import os app = Flask(__name__) redis = Redis(host='composetest_redis_1', port=6379) @app.route('/') def hello(): redis.incr('hits') return 'Hello World! I have been seen %s times.' % redis.get('hits') if __name__ == "__main__": app.run(host="0.0.0.0", debug=True) Note that we’re connecting to a host called `composetest_redis_1` - this is the name of the Redis container that Compose will start. Second, create a Dockerfile for the app container: FROM python:2.7 RUN pip install flask redis ADD . /code WORKDIR /code CMD ["python", "app.py"] Build the Docker image and push it to the Hub (you’ll need a Hub account). Replace `<username>` with your Docker Hub username: $ docker build -t <username>/counter . $ docker push <username>/counter Next, create a `docker-compose.yml`, which defines the configuration for the web and redis containers. Once again, replace `<username>` with your Hub username: web: image: <username>/counter ports: - "80:5000" redis: image: redis Now start the app: $ docker-compose up -d Pulling web (username/counter:latest)... swarm-0: Pulling username/counter:latest... : downloaded swarm-2: Pulling username/counter:latest... : downloaded swarm-1: Pulling username/counter:latest... : downloaded swarm-3: Pulling username/counter:latest... : downloaded swarm-4: Pulling username/counter:latest... : downloaded Creating composetest_web_1... Pulling redis (redis:latest)... swarm-2: Pulling redis:latest... : downloaded swarm-1: Pulling redis:latest... : downloaded swarm-3: Pulling redis:latest... : downloaded swarm-4: Pulling redis:latest... : downloaded swarm-0: Pulling redis:latest... : downloaded Creating composetest_redis_1... Swarm has created containers for both web and redis, and placed them on different nodes, which you can check with `docker ps`: $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 92faad2135c9 redis "/entrypoint.sh redi 43 seconds ago Up 42 seconds swarm-2/composetest_redis_1 adb809e5cdac username/counter "/bin/sh -c 'python 55 seconds ago Up 54 seconds 45.67.8.9:80->5000/tcp swarm-1/composetest_web_1 You can also see that the web container has exposed port 80 on its swarm node. If you curl that IP, you’ll get a response from the container: $ curl http://45.67.8.9 Hello World! I have been seen 1 times. If you hit it repeatedly, the counter will increment, demonstrating that the web and redis containers are communicating: $ curl http://45.67.8.9 Hello World! I have been seen 2 times. $ curl http://45.67.8.9 Hello World! I have been seen 3 times. $ curl http://45.67.8.9 Hello World! I have been seen 4 times.
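When you're done experimenting, you can tear the demo application down again. This cleanup step is an addition to the original walkthrough, using standard Compose commands:

    $ docker-compose stop
    $ docker-compose rm -f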
compose-1.5.2/logo.png000066400000000000000000001143371263011261000146310ustar00rootroot00000000000000[binary PNG image data omitted]
compose-1.5.2/project/000077500000000000000000000000001263011261000146225ustar00rootroot00000000000000compose-1.5.2/project/ISSUE-TRIAGE.md000066400000000000000000000017371263011261000170150ustar00rootroot00000000000000Triaging of issues ------------------ The docker-compose issue triage process follows https://github.com/docker/docker/blob/master/project/ISSUE-TRIAGE.md with the following additions or exceptions. ### Classify the Issue The following labels are provided in addition to the standard labels: | Kind | Description | |--------------|-------------------------------------------------------------------| | kind/cleanup | A refactor or improvement that is related to quality not function | | kind/parity | A request for feature parity with docker cli | ### Functional areas Most issues should fit into one of the following functional areas: | Area | |-----------------| | area/build | | area/cli | | area/config | | area/logs | | area/networking | | area/packaging | | area/run | | area/scale | | area/tests | | area/up | | area/volumes | compose-1.5.2/project/RELEASE-PROCESS.md000066400000000000000000000102331263011261000173370ustar00rootroot00000000000000Building a Compose release ========================== ## Prerequisites The release scripts require the following tools installed on the host: * https://hub.github.com/ * https://stedolan.github.io/jq/ * http://pandoc.org/ ## To get started with a new release Create a branch, update version, and add release notes by running `make-branch` ./script/release/make-branch $VERSION [$BASE_VERSION] `$BASE_VERSION` will default to master. Use the last version tag for a bug fix release. As part of this script you'll be asked to: 1. Update the version in `docs/install.md` and `compose/__init__.py`. If the next release will be an RC, append `rcN`, e.g. `1.4.0rc1`. 2. Write release notes in `CHANGES.md`. Almost every feature enhancement should be mentioned, with the most visible/exciting ones first. Use descriptive sentences and give context where appropriate. Bug fixes are worth mentioning if it's likely that they've affected lots of people, or if they were regressions in the previous version. Improvements to the code are not worth mentioning. ## When a PR is merged into master that we want in the release 1. Check out the bump branch and run the cherry pick script git checkout bump-$VERSION ./script/release/cherry-pick-pr $PR_NUMBER 2. When you are done cherry-picking branches move the bump version commit to HEAD ./script/release/rebase-bump-commit git push --force $USERNAME bump-$VERSION ## To release a version (whether RC or stable) Check out the bump branch and run the `build-binaries` script git checkout bump-$VERSION ./script/release/build-binaries When prompted build the non-linux binaries and test them. 1. 
Build the Mac binary in a Mountain Lion VM: script/prepare-osx script/build-osx 2. Download the windows binary from AppVeyor https://ci.appveyor.com/project/docker/compose 3. Draft a release from the tag on GitHub (the script will open the window for you) In the "Tag version" dropdown, select the tag you just pushed. 4. Paste in installation instructions and release notes. Here's an example - change the Compose version and Docker version as appropriate: Firstly, note that Compose 1.5.0 requires Docker 1.8.0 or later. Secondly, if you're a Mac user, the **[Docker Toolbox](https://www.docker.com/toolbox)** will install Compose 1.5.0 for you, alongside the latest versions of the Docker Engine, Machine and Kitematic. Otherwise, you can use the usual commands to install/upgrade. Either download the binary: curl -L https://github.com/docker/compose/releases/download/1.5.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose chmod +x /usr/local/bin/docker-compose Or install the PyPi package: pip install -U docker-compose==1.5.0 Here's what's new: ...release notes go here... 5. Attach the binaries and `script/run.sh` 6. Add "Thanks" with a list of contributors. The contributor list can be generated by running `./script/release/contributors`. 7. If everything looks good, it's time to push the release. ./script/release/push-release 8. Publish the release on GitHub. 9. Check that all the binaries download (following the install instructions) and run. 10. Email maintainers@dockerproject.org and engineering@docker.com about the new release. ## If it’s a stable release (not an RC) 1. Merge the bump PR. 2. Make sure `origin/release` is updated locally: git fetch origin 3. Update the `docs` branch on the upstream repo: git push git@github.com:docker/compose.git origin/release:docs 4. Let the docs team know that it’s been updated so they can publish it. 5. Close the release’s milestone. ## If it’s a minor release (1.x.0), rather than a patch release (1.x.y) 1. Open a PR against `master` to: - update `CHANGELOG.md` to bring it in line with `release` - bump the version in `compose/__init__.py` to the *next* minor version number with `dev` appended. For example, if you just released `1.4.0`, update it to `1.5.0dev`. 2. Get the PR merged. ## Finally 1. Celebrate, however you’d like. compose-1.5.2/requirements-build.txt000066400000000000000000000000211263011261000175260ustar00rootroot00000000000000pyinstaller==3.0 compose-1.5.2/requirements-dev.txt000066400000000000000000000000741263011261000172150ustar00rootroot00000000000000coverage==3.7.1 mock>=1.0.1 pytest==2.7.2 pytest-cov==2.1.0 compose-1.5.2/requirements.txt000066400000000000000000000002421263011261000164360ustar00rootroot00000000000000PyYAML==3.11 docker-py==1.5.0 dockerpty==0.3.4 docopt==0.6.1 enum34==1.0.4 jsonschema==2.5.1 requests==2.7.0 six==1.7.3 texttable==0.8.4 websocket-client==0.32.0 compose-1.5.2/script/000077500000000000000000000000001263011261000144605ustar00rootroot00000000000000compose-1.5.2/script/build-image000077500000000000000000000005101263011261000165610ustar00rootroot00000000000000#!/bin/bash set -e if [ -z "$1" ]; then >&2 echo "First argument must be image tag." exit 1 fi TAG=$1 VERSION="$(python setup.py --version)" ./script/write-git-sha python setup.py sdist cp dist/docker-compose-$VERSION.tar.gz dist/docker-compose-release.tar.gz docker build -t docker/compose:$TAG -f Dockerfile.run . 
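# Illustrative usage (not part of the original script): the only argument is the
# image tag, after which the script builds the sdist and tags the resulting image
# as docker/compose:<tag>, e.g.:
#
#   ./script/build-image 1.5.2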
compose-1.5.2/script/build-linux000077500000000000000000000003551263011261000166450ustar00rootroot00000000000000#!/bin/bash set -ex ./script/clean TAG="docker-compose" docker build -t "$TAG" . | tail -n 200 docker run \ --rm --entrypoint="script/build-linux-inner" \ -v $(pwd)/dist:/code/dist \ -v $(pwd)/.git:/code/.git \ "$TAG" compose-1.5.2/script/build-linux-inner000077500000000000000000000004711263011261000177550ustar00rootroot00000000000000#!/bin/bash set -ex TARGET=dist/docker-compose-$(uname -s)-$(uname -m) VENV=/code/.tox/py27 mkdir -p `pwd`/dist chmod 777 `pwd`/dist $VENV/bin/pip install -q -r requirements-build.txt ./script/write-git-sha su -c "$VENV/bin/pyinstaller docker-compose.spec" user mv dist/docker-compose $TARGET $TARGET version compose-1.5.2/script/build-osx000077500000000000000000000006041263011261000163140ustar00rootroot00000000000000#!/bin/bash set -ex PATH="/usr/local/bin:$PATH" rm -rf venv virtualenv -p /usr/local/bin/python venv venv/bin/pip install -r requirements.txt venv/bin/pip install -r requirements-build.txt venv/bin/pip install --no-deps . ./script/write-git-sha venv/bin/pyinstaller docker-compose.spec mv dist/docker-compose dist/docker-compose-Darwin-x86_64 dist/docker-compose-Darwin-x86_64 version compose-1.5.2/script/build-windows.ps1000066400000000000000000000032611263011261000176760ustar00rootroot00000000000000# Builds the Windows binary. # # From a fresh 64-bit Windows 10 install, prepare the system as follows: # # 1. Install Git: # # http://git-scm.com/download/win # # 2. Install Python 2.7.10: # # https://www.python.org/downloads/ # # 3. Append ";C:\Python27;C:\Python27\Scripts" to the "Path" environment variable: # # https://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sysdm_advancd_environmnt_addchange_variable.mspx?mfr=true # # 4. In Powershell, run the following commands: # # $ pip install virtualenv # $ Set-ExecutionPolicy -Scope CurrentUser RemoteSigned # # 5. Clone the repository: # # $ git clone https://github.com/docker/compose.git # $ cd compose # # 6. Build the binary: # # .\script\build-windows.ps1 $ErrorActionPreference = "Stop" # Remove virtualenv if (Test-Path venv) { Remove-Item -Recurse -Force .\venv } # Remove .pyc files Get-ChildItem -Recurse -Include *.pyc | foreach ($_) { Remove-Item $_.FullName } # Create virtualenv virtualenv .\venv # Install dependencies .\venv\Scripts\pip install pypiwin32==219 .\venv\Scripts\pip install -r requirements.txt .\venv\Scripts\pip install --no-deps . .\venv\Scripts\pip install --allow-external pyinstaller -r requirements-build.txt git rev-parse --short HEAD | out-file -encoding ASCII compose\GITSHA # Build binary # pyinstaller has lots of warnings, so we need to run with ErrorAction = Continue $ErrorActionPreference = "Continue" .\venv\Scripts\pyinstaller .\docker-compose.spec $ErrorActionPreference = "Stop" Move-Item -Force .\dist\docker-compose.exe .\dist\docker-compose-Windows-x86_64.exe .\dist\docker-compose-Windows-x86_64.exe --version compose-1.5.2/script/ci000077500000000000000000000011611263011261000150000ustar00rootroot00000000000000#!/bin/bash # This should be run inside a container built from the Dockerfile # at the root of the repo: # # $ TAG="docker-compose:$(git rev-parse --short HEAD)" # $ docker build -t "$TAG" . 
# $ docker run --rm --volume="/var/run/docker.sock:/var/run/docker.sock" --volume="$(pwd)/.git:/code/.git" -e "TAG=$TAG" --entrypoint="script/ci" "$TAG" set -ex docker version export DOCKER_VERSIONS=all STORAGE_DRIVER=${STORAGE_DRIVER:-overlay} export DOCKER_DAEMON_ARGS="--storage-driver=$STORAGE_DRIVER" GIT_VOLUME="--volumes-from=$(hostname)" . script/test-versions >&2 echo "Building Linux binary" . script/build-linux-inner compose-1.5.2/script/clean000077500000000000000000000002131263011261000154640ustar00rootroot00000000000000#!/bin/sh set -e find . -type f -name '*.pyc' -delete find -name __pycache__ -delete rm -rf docs/_site build dist docker-compose.egg-info compose-1.5.2/script/dev000077500000000000000000000007511263011261000151670ustar00rootroot00000000000000#!/bin/bash # This is a script for running Compose inside a Docker container. It's handy for # development. # # $ ln -s `pwd`/script/dev /usr/local/bin/docker-compose # $ cd /a/compose/project # $ docker-compose up # set -e # Follow symbolic links if [ -h "$0" ]; then DIR=$(readlink "$0") else DIR=$0 fi DIR="$(dirname "$DIR")"/.. docker build -t docker-compose $DIR exec docker run -i -t -v /var/run/docker.sock:/var/run/docker.sock -v `pwd`:`pwd` -w `pwd` docker-compose $@ compose-1.5.2/script/prepare-osx000077500000000000000000000024101263011261000166500ustar00rootroot00000000000000#!/bin/bash set -ex python_version() { python -V 2>&1 } openssl_version() { python -c "import ssl; print ssl.OPENSSL_VERSION" } desired_python_version="2.7.9" desired_python_brew_version="2.7.9" python_formula="https://raw.githubusercontent.com/Homebrew/homebrew/1681e193e4d91c9620c4901efd4458d9b6fcda8e/Library/Formula/python.rb" desired_openssl_version="1.0.1j" desired_openssl_brew_version="1.0.1j_1" openssl_formula="https://raw.githubusercontent.com/Homebrew/homebrew/62fc2a1a65e83ba9dbb30b2e0a2b7355831c714b/Library/Formula/openssl.rb" PATH="/usr/local/bin:$PATH" if !(which brew); then ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" fi brew update > /dev/null if !(python_version | grep "$desired_python_version"); then if brew list | grep python; then brew unlink python fi brew install "$python_formula" brew switch python "$desired_python_brew_version" fi if !(openssl_version | grep "$desired_openssl_version"); then if brew list | grep openssl; then brew unlink openssl fi brew install "$openssl_formula" brew switch openssl "$desired_openssl_brew_version" fi echo "*** Using $(python_version)" echo "*** Using $(openssl_version)" if !(which virtualenv); then pip install virtualenv fi compose-1.5.2/script/release/000077500000000000000000000000001263011261000161005ustar00rootroot00000000000000compose-1.5.2/script/release/build-binaries000077500000000000000000000015751263011261000207270ustar00rootroot00000000000000#!/bin/bash # # Build the release binaries # . "$(dirname "${BASH_SOURCE[0]}")/utils.sh" function usage() { >&2 cat << EOM Build binaries for the release. This script requires that 'git config branch.${BRANCH}.release' is set to the release version for the release branch. EOM exit 1 } BRANCH="$(git rev-parse --abbrev-ref HEAD)" VERSION="$(git config "branch.${BRANCH}.release")" || usage REPO=docker/compose # Build the binaries script/clean script/build-linux # TODO: build osx binary # script/prepare-osx # script/build-osx # TODO: build or fetch the windows binary echo "You need to build the osx/windows binaries, that step is not automated yet." 
echo "Building the container distribution" script/build-image $VERSION echo "Create a github release" # TODO: script more of this https://developer.github.com/v3/repos/releases/ browser https://github.com/$REPO/releases/new compose-1.5.2/script/release/cherry-pick-pr000077500000000000000000000010321263011261000206610ustar00rootroot00000000000000#!/bin/bash # # Cherry-pick a PR into the release branch # set -e set -o pipefail function usage() { >&2 cat << EOM Cherry-pick commits from a github pull request. Usage: $0 EOM exit 1 } [ -n "$1" ] || usage if [ -z "$(command -v hub 2> /dev/null)" ]; then >&2 echo "$0 requires https://hub.github.com/." >&2 echo "Please install it and make sure it is available on your \$PATH." exit 2 fi REPO=docker/compose GITHUB=https://github.com/$REPO/pull PR=$1 url="$GITHUB/$PR" hub am -3 $url compose-1.5.2/script/release/contributors000077500000000000000000000007121263011261000205630ustar00rootroot00000000000000#!/bin/bash set -e function usage() { >&2 cat << EOM Print the list of github contributors for the release Usage: $0 EOM exit 1 } [[ -n "$1" ]] || usage PREV_RELEASE=$1 VERSION=HEAD URL="https://api.github.com/repos/docker/compose/compare" curl -sf "$URL/$PREV_RELEASE...$VERSION" | \ jq -r '.commits[].author.login' | \ sort | \ uniq -c | \ sort -nr | \ awk '{print "@"$2","}' | \ xargs echo compose-1.5.2/script/release/make-branch000077500000000000000000000047011263011261000202000ustar00rootroot00000000000000#!/bin/bash # # Prepare a new release branch # . "$(dirname "${BASH_SOURCE[0]}")/utils.sh" function usage() { >&2 cat << EOM Create a new release branch 'release-' Usage: $0 [] Options: version version string for the release (ex: 1.6.0) base_version branch or tag to start from. Defaults to master. For bug-fix releases use the previous stage release tag. EOM exit 1 } [ -n "$1" ] || usage VERSION=$1 BRANCH=bump-$VERSION REPO=docker/compose GITHUB_REPO=git@github.com:$REPO if [ -z "$2" ]; then BASE_VERSION="master" else BASE_VERSION=$2 fi DEFAULT_REMOTE=release REMOTE="$(find_remote "$GITHUB_REPO")" # If we don't have a docker remote add one if [ -z "$REMOTE" ]; then echo "Creating $DEFAULT_REMOTE remote" git remote add ${DEFAULT_REMOTE} ${GITHUB_REPO} fi # handle the difference between a branch and a tag if [ -z "$(git name-rev $BASE_VERSION | grep tags)" ]; then BASE_VERSION=$REMOTE/$BASE_VERSION fi echo "Creating a release branch $VERSION from $BASE_VERSION" read -n1 -r -p "Continue? (ctrl+c to cancel)" git fetch $REMOTE -p git checkout -b $BRANCH $BASE_VERSION echo "Merging remote release branch into new release branch" git merge --strategy=ours --no-edit $REMOTE/release # Store the release version for this branch in git, so that other release # scripts can use it git config "branch.${BRANCH}.release" $VERSION echo "Update versions in docs/install.md, compose/__init__.py, script/run.sh" $EDITOR docs/install.md $EDITOR compose/__init__.py $EDITOR script/run.sh echo "Write release notes in CHANGELOG.md" browser "https://github.com/docker/compose/issues?q=milestone%3A$VERSION+is%3Aclosed" $EDITOR CHANGELOG.md git diff echo "Verify changes before commit. 
Exit the shell to commit changes" $SHELL || true git commit -a -m "Bump $VERSION" --signoff --no-verify echo "Push branch to user remote" GITHUB_USER=$USER USER_REMOTE="$(find_remote $GITHUB_USER/compose)" if [ -z "$USER_REMOTE" ]; then echo "No user remote found for $GITHUB_USER" read -r -p "Enter the name of your github user: " GITHUB_USER # assumes there is already a user remote somewhere USER_REMOTE=$(find_remote $GITHUB_USER/compose) fi if [ -z "$USER_REMOTE" ]; then >&2 echo "No user remote found. You need to 'git push' your branch." exit 2 fi git push $USER_REMOTE browser https://github.com/$REPO/compare/docker:release...$GITHUB_USER:$BRANCH?expand=1 compose-1.5.2/script/release/push-release000077500000000000000000000041211263011261000204210ustar00rootroot00000000000000#!/bin/bash # # Create the official release # . "$(dirname "${BASH_SOURCE[0]}")/utils.sh" function usage() { >&2 cat << EOM Publish a release by building all artifacts and pushing them. This script requires that 'git config branch.${BRANCH}.release' is set to the release version for the release branch. EOM exit 1 } BRANCH="$(git rev-parse --abbrev-ref HEAD)" VERSION="$(git config "branch.${BRANCH}.release")" || usage if [ -z "$(command -v jq 2> /dev/null)" ]; then >&2 echo "$0 requires https://stedolan.github.io/jq/" >&2 echo "Please install it and make sure it is available on your \$PATH." exit 2 fi if [ -z "$(command -v pandoc 2> /dev/null)" ]; then >&2 echo "$0 requires http://pandoc.org/" >&2 echo "Please install it and make sure it is available on your \$PATH." exit 2 fi API=https://api.github.com/repos REPO=docker/compose GITHUB_REPO=git@github.com:$REPO # Check the build status is green sha=$(git rev-parse HEAD) url=$API/$REPO/statuses/$sha build_status=$(curl -s $url | jq -r '.[0].state') if [ -n "$SKIP_BUILD_CHECK" ]; then echo "Skipping build status check..." elif [[ "$build_status" != "success" ]]; then >&2 echo "Build status is $build_status, but it should be success." exit -1 fi echo "Tagging the release as $VERSION" git tag $VERSION git push $GITHUB_REPO $VERSION echo "Uploading the docker image" docker push docker/compose:$VERSION echo "Uploading sdist to pypi" pandoc -f markdown -t rst README.md -o README.rst sed -i -e 's/logo.png?raw=true/https:\/\/github.com\/docker\/compose\/raw\/master\/logo.png?raw=true/' README.rst ./script/write-git-sha python setup.py sdist if [ "$(command -v twine 2> /dev/null)" ]; then twine upload ./dist/docker-compose-${VERSION}.tar.gz else python setup.py upload fi echo "Testing pip package" virtualenv venv-test source venv-test/bin/activate pip install docker-compose==$VERSION docker-compose version deactivate rm -rf venv-test echo "Now publish the github release, and test the downloads." echo "Email maintainers@dockerproject.org and engineering@docker.com about the new release." compose-1.5.2/script/release/rebase-bump-commit000077500000000000000000000015471263011261000215210ustar00rootroot00000000000000#!/bin/bash # # Move the "bump to <version>" commit to the HEAD of the branch # . "$(dirname "${BASH_SOURCE[0]}")/utils.sh" function usage() { >&2 cat << EOM Move the "bump to <version>" commit to the HEAD of the branch This script requires that 'git config branch.${BRANCH}.release' is set to the release version for the release branch. 
EOM exit 1 } BRANCH="$(git rev-parse --abbrev-ref HEAD)" VERSION="$(git config "branch.${BRANCH}.release")" || usage COMMIT_MSG="Bump $VERSION" sha="$(git log --grep "$COMMIT_MSG" --format="%H")" if [ -z "$sha" ]; then >&2 echo "No commit with message \"$COMMIT_MSG\"" exit 2 fi if [[ "$sha" == "$(git rev-parse HEAD)" ]]; then >&2 echo "Bump commit already at HEAD" exit 0 fi commits=$(git log --format="%H" "$sha..HEAD" | wc -l) git rebase --onto $sha~1 HEAD~$commits $BRANCH git cherry-pick $sha compose-1.5.2/script/release/utils.sh000066400000000000000000000006241263011261000175760ustar00rootroot00000000000000#!/bin/bash # # Util functions for release scripts # set -e set -o pipefail function browser() { local url=$1 xdg-open $url || open $url } function find_remote() { local url=$1 for remote in $(git remote); do git config --get remote.${remote}.url | grep $url > /dev/null && echo -n $remote done # Always return true, extra remotes cause it to return false true } compose-1.5.2/script/run.sh000077500000000000000000000021501263011261000156210ustar00rootroot00000000000000#!/bin/bash # # Run docker-compose in a container # # This script will attempt to mirror the host paths by using volumes for the # following paths: # * $(pwd) # * $(dirname $COMPOSE_FILE) if it's set # * $HOME if it's set # # You can add additional volumes (or any docker run options) using # the $COMPOSE_OPTIONS environment variable. # set -e VERSION="1.5.2" IMAGE="docker/compose:$VERSION" # Setup options for connecting to docker host if [ -z "$DOCKER_HOST" ]; then DOCKER_HOST="/var/run/docker.sock" fi if [ -S "$DOCKER_HOST" ]; then DOCKER_ADDR="-v $DOCKER_HOST:$DOCKER_HOST -e DOCKER_HOST" else DOCKER_ADDR="-e DOCKER_HOST -e DOCKER_TLS_VERIFY -e DOCKER_CERT_PATH" fi # Setup volume mounts for compose config and context VOLUMES="-v $(pwd):$(pwd)" if [ -n "$COMPOSE_FILE" ]; then compose_dir=$(dirname $COMPOSE_FILE) fi # TODO: also check --file argument if [ -n "$compose_dir" ]; then VOLUMES="$VOLUMES -v $compose_dir:$compose_dir" fi if [ -n "$HOME" ]; then VOLUMES="$VOLUMES -v $HOME:$HOME" fi exec docker run --rm -ti $DOCKER_ADDR $COMPOSE_OPTIONS $VOLUMES -w $(pwd) $IMAGE $@ compose-1.5.2/script/shell000077500000000000000000000002511263011261000155130ustar00rootroot00000000000000#!/bin/sh set -ex docker build -t docker-compose . exec docker run -v /var/run/docker.sock:/var/run/docker.sock -v `pwd`:/code -ti --rm --entrypoint bash docker-compose compose-1.5.2/script/test000077500000000000000000000004511263011261000153650ustar00rootroot00000000000000#!/bin/bash # See CONTRIBUTING.md for usage. set -ex TAG="docker-compose:$(git rev-parse --short HEAD)" rm -rf coverage-html # Create the host directory so it's owned by $USER mkdir -p coverage-html docker build -t "$TAG" . GIT_VOLUME="--volume=$(pwd)/.git:/code/.git" . script/test-versions compose-1.5.2/script/test-versions000077500000000000000000000025041263011261000172340ustar00rootroot00000000000000#!/bin/bash # This should be run inside a container built from the Dockerfile # at the root of the repo - script/test will do it automatically. 
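# For example (illustrative usage, based on the DOCKER_VERSIONS handling below):
#
#   DOCKER_VERSIONS=all script/test   # run against the two most recent Docker releases
#   script/test                       # run against the default (latest stable) Docker version
#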
set -e >&2 echo "Running lint checks" docker run --rm \ ${GIT_VOLUME} \ --entrypoint="tox" \ "$TAG" -e pre-commit get_versions="docker run --rm --entrypoint=/code/.tox/py27/bin/python $TAG /code/script/versions.py docker/docker" if [ "$DOCKER_VERSIONS" == "" ]; then DOCKER_VERSIONS="$($get_versions default)" elif [ "$DOCKER_VERSIONS" == "all" ]; then DOCKER_VERSIONS="$($get_versions recent -n 2)" fi BUILD_NUMBER=${BUILD_NUMBER-$USER} for version in $DOCKER_VERSIONS; do >&2 echo "Running tests against Docker $version" daemon_container="compose-dind-$version-$BUILD_NUMBER" function on_exit() { if [[ "$?" != "0" ]]; then docker logs "$daemon_container" 2>&1 | tail -n 100 fi docker rm -vf "$daemon_container" } trap "on_exit" EXIT docker run \ -d \ --name "$daemon_container" \ --privileged \ --volume="/var/lib/docker" \ dockerswarm/dind:$version \ docker daemon -H tcp://0.0.0.0:2375 $DOCKER_DAEMON_ARGS \ 2>&1 | tail -n 10 docker run \ --rm \ --link="$daemon_container:docker" \ --env="DOCKER_HOST=tcp://docker:2375" \ --entrypoint="tox" \ "$TAG" \ -e py27,py34 -- "$@" done compose-1.5.2/script/travis/000077500000000000000000000000001263011261000157705ustar00rootroot00000000000000compose-1.5.2/script/travis/bintray.json.tmpl000066400000000000000000000014661263011261000213150ustar00rootroot00000000000000{ "package": { "name": "${TRAVIS_OS_NAME}", "repo": "master", "subject": "docker-compose", "desc": "Automated build of master branch from travis ci.", "website_url": "https://github.com/docker/compose", "issue_tracker_url": "https://github.com/docker/compose/issues", "vcs_url": "https://github.com/docker/compose.git", "licenses": ["Apache-2.0"] }, "version": { "name": "master", "desc": "Automated build of the master branch.", "released": "${DATE}", "vcs_tag": "master" }, "files": [ { "includePattern": "dist/(.*)", "excludePattern": ".*\.tar.gz", "uploadPattern": "$1", "matrixParams": { "override": 1 } } ], "publish": true } compose-1.5.2/script/travis/build-binary000077500000000000000000000003551263011261000203020ustar00rootroot00000000000000#!/bin/bash set -ex if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then script/build-linux script/build-image master # TODO: requires auth # docker push docker/compose:master else script/prepare-osx script/build-osx fi compose-1.5.2/script/travis/ci000077500000000000000000000003171263011261000163120ustar00rootroot00000000000000#!/bin/bash set -e if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then tox -e py27,py34 -- tests/unit else # TODO: we could also install py34 and test against it python -m tox -e py27 -- tests/unit fi compose-1.5.2/script/travis/install000077500000000000000000000002071263011261000173630ustar00rootroot00000000000000#!/bin/bash set -ex if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then pip install tox==2.1.1 else pip install --user tox==2.1.1 fi compose-1.5.2/script/travis/render-bintray-config.py000077500000000000000000000003341263011261000225350ustar00rootroot00000000000000#!/usr/bin/env python from __future__ import print_function import datetime import os.path import sys os.environ['DATE'] = str(datetime.date.today()) for line in sys.stdin: print(os.path.expandvars(line), end='') compose-1.5.2/script/versions.py000077500000000000000000000073531263011261000167150ustar00rootroot00000000000000#!/usr/bin/env python """ Query the github API for the git tags of a project, and return a list of version tags for recent releases, or the default release. The default release is the most recent non-RC version. 
Recent is a list of unique major.minor versions, where each is the most recent version in the series. For example, if the list of versions is: 1.8.0-rc2 1.8.0-rc1 1.7.1 1.7.0 1.7.0-rc1 1.6.2 1.6.1 `default` would return `1.7.1` and `recent -n 3` would return `1.8.0-rc2 1.7.1 1.6.2` """ from __future__ import print_function import argparse import itertools import operator from collections import namedtuple import requests GITHUB_API = 'https://api.github.com/repos' class Version(namedtuple('_Version', 'major minor patch rc')): @classmethod def parse(cls, version): version = version.lstrip('v') version, _, rc = version.partition('-') major, minor, patch = version.split('.', 3) return cls(int(major), int(minor), int(patch), rc) @property def major_minor(self): return self.major, self.minor @property def order(self): """Return a representation that allows this object to be sorted correctly with the default comparator. """ # rc releases should appear before official releases rc = (0, self.rc) if self.rc else (1, ) return (self.major, self.minor, self.patch) + rc def __str__(self): rc = '-{}'.format(self.rc) if self.rc else '' return '.'.join(map(str, self[:3])) + rc def group_versions(versions): """Group versions by `major.minor` releases. Example: >>> group_versions([ Version(1, 0, 0), Version(2, 0, 0, 'rc1'), Version(2, 0, 0), Version(2, 1, 0), ]) [ [Version(1, 0, 0)], [Version(2, 0, 0), Version(2, 0, 0, 'rc1')], [Version(2, 1, 0)], ] """ return list( list(releases) for _, releases in itertools.groupby(versions, operator.attrgetter('major_minor')) ) def get_latest_versions(versions, num=1): """Return a list of the most recent versions for each major.minor version group. """ versions = group_versions(versions) return [versions[index][0] for index in range(num)] def get_default(versions): """Return a :class:`Version` for the latest non-rc version.""" for version in versions: if not version.rc: return version def get_github_releases(project): """Query the Github API for a list of version tags and return them in sorted order. See https://developer.github.com/v3/repos/#list-tags """ url = '{}/{}/tags'.format(GITHUB_API, project) response = requests.get(url) response.raise_for_status() versions = [Version.parse(tag['name']) for tag in response.json()] return sorted(versions, reverse=True, key=operator.attrgetter('order')) def parse_args(argv): parser = argparse.ArgumentParser(description=__doc__) parser.add_argument('project', help="Github project name (ex: docker/docker)") parser.add_argument('command', choices=['recent', 'default']) parser.add_argument('-n', '--num', type=int, default=2, help="Number of versions to return from `recent`") return parser.parse_args(argv) def main(argv=None): args = parse_args(argv) versions = get_github_releases(args.project) if args.command == 'recent': print(' '.join(map(str, get_latest_versions(versions, args.num)))) elif args.command == 'default': print(get_default(versions)) else: raise ValueError("Unknown command {}".format(args.command)) if __name__ == "__main__": main() compose-1.5.2/script/write-git-sha000077500000000000000000000003251263011261000170720ustar00rootroot00000000000000#!/bin/bash # # Write the current commit sha to the file GITSHA. This file is included in # packaging so that `docker-compose version` can include the git sha. 
# set -e git rev-parse --short HEAD > compose/GITSHA compose-1.5.2/setup.py000066400000000000000000000032601263011261000146670ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- from __future__ import absolute_import from __future__ import unicode_literals import codecs import os import re import sys from setuptools import find_packages from setuptools import setup def read(*parts): path = os.path.join(os.path.dirname(__file__), *parts) with codecs.open(path, encoding='utf-8') as fobj: return fobj.read() def find_version(*file_paths): version_file = read(*file_paths) version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M) if version_match: return version_match.group(1) raise RuntimeError("Unable to find version string.") install_requires = [ 'docopt >= 0.6.1, < 0.7', 'PyYAML >= 3.10, < 4', 'requests >= 2.6.1, < 2.8', 'texttable >= 0.8.1, < 0.9', 'websocket-client >= 0.32.0, < 1.0', 'docker-py >= 1.5.0, < 2', 'dockerpty >= 0.3.4, < 0.4', 'six >= 1.3.0, < 2', 'jsonschema >= 2.5.1, < 3', ] tests_require = [ 'pytest', ] if sys.version_info[:2] < (3, 4): tests_require.append('mock >= 1.0.1') install_requires.append('enum34 >= 1.0.4, < 2') setup( name='docker-compose', version=find_version("compose", "__init__.py"), description='Multi-container orchestration for Docker', url='https://www.docker.com/', author='Docker, Inc.', license='Apache License 2.0', packages=find_packages(exclude=['tests.*', 'tests']), include_package_data=True, test_suite='nose.collector', install_requires=install_requires, tests_require=tests_require, entry_points=""" [console_scripts] docker-compose=compose.cli.main:main """, ) compose-1.5.2/tests/000077500000000000000000000000001263011261000143165ustar00rootroot00000000000000compose-1.5.2/tests/__init__.py000066400000000000000000000003061263011261000164260ustar00rootroot00000000000000import sys if sys.version_info >= (2, 7): import unittest # NOQA else: import unittest2 as unittest # NOQA try: from unittest import mock except ImportError: import mock # NOQA compose-1.5.2/tests/acceptance/000077500000000000000000000000001263011261000164045ustar00rootroot00000000000000compose-1.5.2/tests/acceptance/__init__.py000066400000000000000000000000001263011261000205030ustar00rootroot00000000000000compose-1.5.2/tests/acceptance/cli_test.py000066400000000000000000000774701263011261000206030ustar00rootroot00000000000000from __future__ import absolute_import import os import shlex import signal import subprocess import time from collections import namedtuple from operator import attrgetter from docker import errors from .. 
 import mock from compose.cli.command import get_project from compose.cli.docker_client import docker_client from compose.container import Container from tests.integration.testcases import DockerClientTestCase from tests.integration.testcases import pull_busybox ProcessResult = namedtuple('ProcessResult', 'stdout stderr') BUILD_CACHE_TEXT = 'Using cache' BUILD_PULL_TEXT = 'Status: Image is up to date for busybox:latest' def start_process(base_dir, options): proc = subprocess.Popen( ['docker-compose'] + options, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=base_dir) print("Running process: %s" % proc.pid) return proc def wait_on_process(proc, returncode=0): stdout, stderr = proc.communicate() if proc.returncode != returncode: print(stderr.decode('utf-8')) assert proc.returncode == returncode return ProcessResult(stdout.decode('utf-8'), stderr.decode('utf-8')) def wait_on_condition(condition, delay=0.1, timeout=5): start_time = time.time() while not condition(): if time.time() - start_time > timeout: raise AssertionError("Timeout: %s" % condition) time.sleep(delay) class ContainerCountCondition(object): def __init__(self, project, expected): self.project = project self.expected = expected def __call__(self): return len(self.project.containers()) == self.expected def __str__(self): return "waiting for container count == %s" % self.expected class ContainerStateCondition(object): def __init__(self, client, name, running): self.client = client self.name = name self.running = running # State.Running == true def __call__(self): try: container = self.client.inspect_container(self.name) return container['State']['Running'] == self.running except errors.APIError: return False def __str__(self): return "waiting for container to have state %s" % self.running class CLITestCase(DockerClientTestCase): def setUp(self): super(CLITestCase, self).setUp() self.base_dir = 'tests/fixtures/simple-composefile' def tearDown(self): self.project.kill() self.project.remove_stopped() for container in self.project.containers(stopped=True, one_off=True): container.remove(force=True) super(CLITestCase, self).tearDown() @property def project(self): # Hack: allow project to be overridden if not hasattr(self, '_project'): self._project = get_project(self.base_dir) return self._project def dispatch(self, options, project_options=None, returncode=0): project_options = project_options or [] proc = start_process(self.base_dir, project_options + options) return wait_on_process(proc, returncode=returncode) def test_help(self): old_base_dir = self.base_dir self.base_dir = 'tests/fixtures/no-composefile' result = self.dispatch(['help', 'up'], returncode=1) assert 'Usage: up [options] [SERVICE...]' in result.stderr # self.project.kill() fails during teardown # unless there is a composefile. 
self.base_dir = old_base_dir def test_ps(self): self.project.get_service('simple').create_container() result = self.dispatch(['ps']) assert 'simplecomposefile_simple_1' in result.stdout def test_ps_default_composefile(self): self.base_dir = 'tests/fixtures/multiple-composefiles' self.dispatch(['up', '-d']) result = self.dispatch(['ps']) self.assertIn('multiplecomposefiles_simple_1', result.stdout) self.assertIn('multiplecomposefiles_another_1', result.stdout) self.assertNotIn('multiplecomposefiles_yetanother_1', result.stdout) def test_ps_alternate_composefile(self): config_path = os.path.abspath( 'tests/fixtures/multiple-composefiles/compose2.yml') self._project = get_project(self.base_dir, [config_path]) self.base_dir = 'tests/fixtures/multiple-composefiles' self.dispatch(['-f', 'compose2.yml', 'up', '-d']) result = self.dispatch(['-f', 'compose2.yml', 'ps']) self.assertNotIn('multiplecomposefiles_simple_1', result.stdout) self.assertNotIn('multiplecomposefiles_another_1', result.stdout) self.assertIn('multiplecomposefiles_yetanother_1', result.stdout) def test_pull(self): result = self.dispatch(['pull']) assert sorted(result.stderr.split('\n'))[1:] == [ 'Pulling another (busybox:latest)...', 'Pulling simple (busybox:latest)...', ] def test_pull_with_digest(self): result = self.dispatch(['-f', 'digest.yml', 'pull']) assert 'Pulling simple (busybox:latest)...' in result.stderr assert ('Pulling digest (busybox@' 'sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b520' '04ee8502d)...') in result.stderr def test_pull_with_ignore_pull_failures(self): result = self.dispatch([ '-f', 'ignore-pull-failures.yml', 'pull', '--ignore-pull-failures']) assert 'Pulling simple (busybox:latest)...' in result.stderr assert 'Pulling another (nonexisting-image:latest)...' 
in result.stderr assert 'Error: image library/nonexisting-image:latest not found' in result.stderr def test_build_plain(self): self.base_dir = 'tests/fixtures/simple-dockerfile' self.dispatch(['build', 'simple']) result = self.dispatch(['build', 'simple']) assert BUILD_CACHE_TEXT in result.stdout assert BUILD_PULL_TEXT not in result.stdout def test_build_no_cache(self): self.base_dir = 'tests/fixtures/simple-dockerfile' self.dispatch(['build', 'simple']) result = self.dispatch(['build', '--no-cache', 'simple']) assert BUILD_CACHE_TEXT not in result.stdout assert BUILD_PULL_TEXT not in result.stdout def test_build_pull(self): # Make sure we have the latest busybox already pull_busybox(self.client) self.base_dir = 'tests/fixtures/simple-dockerfile' self.dispatch(['build', 'simple'], None) result = self.dispatch(['build', '--pull', 'simple']) assert BUILD_CACHE_TEXT in result.stdout assert BUILD_PULL_TEXT in result.stdout def test_build_no_cache_pull(self): # Make sure we have the latest busybox already pull_busybox(self.client) self.base_dir = 'tests/fixtures/simple-dockerfile' self.dispatch(['build', 'simple']) result = self.dispatch(['build', '--no-cache', '--pull', 'simple']) assert BUILD_CACHE_TEXT not in result.stdout assert BUILD_PULL_TEXT in result.stdout def test_build_failed(self): self.base_dir = 'tests/fixtures/simple-failing-dockerfile' self.dispatch(['build', 'simple'], returncode=1) labels = ["com.docker.compose.test_failing_image=true"] containers = [ Container.from_ps(self.project.client, c) for c in self.project.client.containers( all=True, filters={"label": labels}) ] assert len(containers) == 1 def test_build_failed_forcerm(self): self.base_dir = 'tests/fixtures/simple-failing-dockerfile' self.dispatch(['build', '--force-rm', 'simple'], returncode=1) labels = ["com.docker.compose.test_failing_image=true"] containers = [ Container.from_ps(self.project.client, c) for c in self.project.client.containers( all=True, filters={"label": labels}) ] assert not containers def test_up_detached(self): self.dispatch(['up', '-d']) service = self.project.get_service('simple') another = self.project.get_service('another') self.assertEqual(len(service.containers()), 1) self.assertEqual(len(another.containers()), 1) # Ensure containers don't have stdin and stdout connected in -d mode container, = service.containers() self.assertFalse(container.get('Config.AttachStderr')) self.assertFalse(container.get('Config.AttachStdout')) self.assertFalse(container.get('Config.AttachStdin')) def test_up_attached(self): self.base_dir = 'tests/fixtures/echo-services' result = self.dispatch(['up', '--no-color']) assert 'simple_1 | simple' in result.stdout assert 'another_1 | another' in result.stdout def test_up_without_networking(self): self.require_api_version('1.21') self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['up', '-d'], None) client = docker_client(version='1.21') networks = client.networks(names=[self.project.name]) self.assertEqual(len(networks), 0) for service in self.project.get_services(): containers = service.containers() self.assertEqual(len(containers), 1) self.assertNotEqual(containers[0].get('Config.Hostname'), service.name) web_container = self.project.get_service('web').containers()[0] self.assertTrue(web_container.get('HostConfig.Links')) def test_up_with_networking(self): self.require_api_version('1.21') self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['--x-networking', 'up', '-d'], None) client = docker_client(version='1.21') services = 
self.project.get_services() networks = client.networks(names=[self.project.name]) for n in networks: self.addCleanup(client.remove_network, n['Id']) self.assertEqual(len(networks), 1) self.assertEqual(networks[0]['Driver'], 'bridge') network = client.inspect_network(networks[0]['Id']) self.assertEqual(len(network['Containers']), len(services)) for service in services: containers = service.containers() self.assertEqual(len(containers), 1) self.assertIn(containers[0].id, network['Containers']) web_container = self.project.get_service('web').containers()[0] self.assertFalse(web_container.get('HostConfig.Links')) def test_up_with_links(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['up', '-d', 'web'], None) web = self.project.get_service('web') db = self.project.get_service('db') console = self.project.get_service('console') self.assertEqual(len(web.containers()), 1) self.assertEqual(len(db.containers()), 1) self.assertEqual(len(console.containers()), 0) def test_up_with_no_deps(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['up', '-d', '--no-deps', 'web'], None) web = self.project.get_service('web') db = self.project.get_service('db') console = self.project.get_service('console') self.assertEqual(len(web.containers()), 1) self.assertEqual(len(db.containers()), 0) self.assertEqual(len(console.containers()), 0) def test_up_with_force_recreate(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertEqual(len(service.containers()), 1) old_ids = [c.id for c in service.containers()] self.dispatch(['up', '-d', '--force-recreate'], None) self.assertEqual(len(service.containers()), 1) new_ids = [c.id for c in service.containers()] self.assertNotEqual(old_ids, new_ids) def test_up_with_no_recreate(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertEqual(len(service.containers()), 1) old_ids = [c.id for c in service.containers()] self.dispatch(['up', '-d', '--no-recreate'], None) self.assertEqual(len(service.containers()), 1) new_ids = [c.id for c in service.containers()] self.assertEqual(old_ids, new_ids) def test_up_with_force_recreate_and_no_recreate(self): self.dispatch( ['up', '-d', '--force-recreate', '--no-recreate'], returncode=1) def test_up_with_timeout(self): self.dispatch(['up', '-d', '-t', '1']) service = self.project.get_service('simple') another = self.project.get_service('another') self.assertEqual(len(service.containers()), 1) self.assertEqual(len(another.containers()), 1) # Ensure containers don't have stdin and stdout connected in -d mode config = service.containers()[0].inspect()['Config'] self.assertFalse(config['AttachStderr']) self.assertFalse(config['AttachStdout']) self.assertFalse(config['AttachStdin']) def test_up_handles_sigint(self): proc = start_process(self.base_dir, ['up', '-t', '2']) wait_on_condition(ContainerCountCondition(self.project, 2)) os.kill(proc.pid, signal.SIGINT) wait_on_condition(ContainerCountCondition(self.project, 0)) def test_up_handles_sigterm(self): proc = start_process(self.base_dir, ['up', '-t', '2']) wait_on_condition(ContainerCountCondition(self.project, 2)) os.kill(proc.pid, signal.SIGTERM) wait_on_condition(ContainerCountCondition(self.project, 0)) def test_run_service_without_links(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['run', 'console', '/bin/true']) self.assertEqual(len(self.project.containers()), 0) # Ensure stdin/out was open container = 
self.project.containers(stopped=True, one_off=True)[0] config = container.inspect()['Config'] self.assertTrue(config['AttachStderr']) self.assertTrue(config['AttachStdout']) self.assertTrue(config['AttachStdin']) def test_run_service_with_links(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['run', 'web', '/bin/true'], None) db = self.project.get_service('db') console = self.project.get_service('console') self.assertEqual(len(db.containers()), 1) self.assertEqual(len(console.containers()), 0) def test_run_with_no_deps(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['run', '--no-deps', 'web', '/bin/true']) db = self.project.get_service('db') self.assertEqual(len(db.containers()), 0) def test_run_does_not_recreate_linked_containers(self): self.base_dir = 'tests/fixtures/links-composefile' self.dispatch(['up', '-d', 'db']) db = self.project.get_service('db') self.assertEqual(len(db.containers()), 1) old_ids = [c.id for c in db.containers()] self.dispatch(['run', 'web', '/bin/true'], None) self.assertEqual(len(db.containers()), 1) new_ids = [c.id for c in db.containers()] self.assertEqual(old_ids, new_ids) def test_run_without_command(self): self.base_dir = 'tests/fixtures/commands-composefile' self.check_build('tests/fixtures/simple-dockerfile', tag='composetest_test') self.dispatch(['run', 'implicit']) service = self.project.get_service('implicit') containers = service.containers(stopped=True, one_off=True) self.assertEqual( [c.human_readable_command for c in containers], [u'/bin/sh -c echo "success"'], ) self.dispatch(['run', 'explicit']) service = self.project.get_service('explicit') containers = service.containers(stopped=True, one_off=True) self.assertEqual( [c.human_readable_command for c in containers], [u'/bin/true'], ) def test_run_service_with_entrypoint_overridden(self): self.base_dir = 'tests/fixtures/dockerfile_with_entrypoint' name = 'service' self.dispatch(['run', '--entrypoint', '/bin/echo', name, 'helloworld']) service = self.project.get_service(name) container = service.containers(stopped=True, one_off=True)[0] self.assertEqual( shlex.split(container.human_readable_command), [u'/bin/echo', u'helloworld'], ) def test_run_service_with_user_overridden(self): self.base_dir = 'tests/fixtures/user-composefile' name = 'service' user = 'sshd' self.dispatch(['run', '--user={user}'.format(user=user), name], returncode=1) service = self.project.get_service(name) container = service.containers(stopped=True, one_off=True)[0] self.assertEqual(user, container.get('Config.User')) def test_run_service_with_user_overridden_short_form(self): self.base_dir = 'tests/fixtures/user-composefile' name = 'service' user = 'sshd' self.dispatch(['run', '-u', user, name], returncode=1) service = self.project.get_service(name) container = service.containers(stopped=True, one_off=True)[0] self.assertEqual(user, container.get('Config.User')) def test_run_service_with_environment_overridden(self): name = 'service' self.base_dir = 'tests/fixtures/environment-composefile' self.dispatch([ 'run', '-e', 'foo=notbar', '-e', 'allo=moto=bobo', '-e', 'alpha=beta', name, '/bin/true', ]) service = self.project.get_service(name) container = service.containers(stopped=True, one_off=True)[0] # env overridden self.assertEqual('notbar', container.environment['foo']) # keep environment from yaml self.assertEqual('world', container.environment['hello']) # added option from command line self.assertEqual('beta', container.environment['alpha']) # make sure a value with a = doesn't
crash out self.assertEqual('moto=bobo', container.environment['allo']) def test_run_service_without_map_ports(self): # create one off container self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch(['run', '-d', 'simple']) container = self.project.get_service('simple').containers(one_off=True)[0] # get port information port_random = container.get_local_port(3000) port_assigned = container.get_local_port(3001) # close all one off containers we just created container.stop() # check the ports self.assertEqual(port_random, None) self.assertEqual(port_assigned, None) def test_run_service_with_map_ports(self): # create one off container self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch(['run', '-d', '--service-ports', 'simple']) container = self.project.get_service('simple').containers(one_off=True)[0] # get port information port_random = container.get_local_port(3000) port_assigned = container.get_local_port(3001) port_range = container.get_local_port(3002), container.get_local_port(3003) # close all one off containers we just created container.stop() # check the ports self.assertNotEqual(port_random, None) self.assertIn("0.0.0.0", port_random) self.assertEqual(port_assigned, "0.0.0.0:49152") self.assertEqual(port_range[0], "0.0.0.0:49153") self.assertEqual(port_range[1], "0.0.0.0:49154") def test_run_service_with_explicitly_mapped_ports(self): # create one off container self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch(['run', '-d', '-p', '30000:3000', '--publish', '30001:3001', 'simple']) container = self.project.get_service('simple').containers(one_off=True)[0] # get port information port_short = container.get_local_port(3000) port_full = container.get_local_port(3001) # close all one off containers we just created container.stop() # check the ports self.assertEqual(port_short, "0.0.0.0:30000") self.assertEqual(port_full, "0.0.0.0:30001") def test_run_service_with_explicitly_mapped_ip_ports(self): # create one off container self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch(['run', '-d', '-p', '127.0.0.1:30000:3000', '--publish', '127.0.0.1:30001:3001', 'simple'], None) container = self.project.get_service('simple').containers(one_off=True)[0] # get port information port_short = container.get_local_port(3000) port_full = container.get_local_port(3001) # close all one off containers we just created container.stop() # check the ports self.assertEqual(port_short, "127.0.0.1:30000") self.assertEqual(port_full, "127.0.0.1:30001") def test_run_with_custom_name(self): self.base_dir = 'tests/fixtures/environment-composefile' name = 'the-container-name' self.dispatch(['run', '--name', name, 'service', '/bin/true']) service = self.project.get_service('service') container, = service.containers(stopped=True, one_off=True) self.assertEqual(container.name, name) def test_run_with_networking(self): self.require_api_version('1.21') client = docker_client(version='1.21') self.base_dir = 'tests/fixtures/simple-dockerfile' self.dispatch(['--x-networking', 'run', 'simple', 'true'], None) service = self.project.get_service('simple') container, = service.containers(stopped=True, one_off=True) networks = client.networks(names=[self.project.name]) for n in networks: self.addCleanup(client.remove_network, n['Id']) self.assertEqual(len(networks), 1) self.assertEqual(container.human_readable_command, u'true') def test_run_handles_sigint(self): proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top']) wait_on_condition(ContainerStateCondition(
self.project.client, 'simplecomposefile_simple_run_1', running=True)) os.kill(proc.pid, signal.SIGINT) wait_on_condition(ContainerStateCondition( self.project.client, 'simplecomposefile_simple_run_1', running=False)) def test_run_handles_sigterm(self): proc = start_process(self.base_dir, ['run', '-T', 'simple', 'top']) wait_on_condition(ContainerStateCondition( self.project.client, 'simplecomposefile_simple_run_1', running=True)) os.kill(proc.pid, signal.SIGTERM) wait_on_condition(ContainerStateCondition( self.project.client, 'simplecomposefile_simple_run_1', running=False)) def test_rm(self): service = self.project.get_service('simple') service.create_container() service.kill() self.assertEqual(len(service.containers(stopped=True)), 1) self.dispatch(['rm', '--force'], None) self.assertEqual(len(service.containers(stopped=True)), 0) service = self.project.get_service('simple') service.create_container() service.kill() self.assertEqual(len(service.containers(stopped=True)), 1) self.dispatch(['rm', '-f'], None) self.assertEqual(len(service.containers(stopped=True)), 0) def test_stop(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) self.dispatch(['stop', '-t', '1'], None) self.assertEqual(len(service.containers(stopped=True)), 1) self.assertFalse(service.containers(stopped=True)[0].is_running) def test_pause_unpause(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertFalse(service.containers()[0].is_paused) self.dispatch(['pause'], None) self.assertTrue(service.containers()[0].is_paused) self.dispatch(['unpause'], None) self.assertFalse(service.containers()[0].is_paused) def test_logs_invalid_service_name(self): self.dispatch(['logs', 'madeupname'], returncode=1) def test_kill(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) self.dispatch(['kill'], None) self.assertEqual(len(service.containers(stopped=True)), 1) self.assertFalse(service.containers(stopped=True)[0].is_running) def test_kill_signal_sigstop(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) self.dispatch(['kill', '-s', 'SIGSTOP'], None) self.assertEqual(len(service.containers()), 1) # The container is still running. 
It has only been paused self.assertTrue(service.containers()[0].is_running) def test_kill_stopped_service(self): self.dispatch(['up', '-d'], None) service = self.project.get_service('simple') self.dispatch(['kill', '-s', 'SIGSTOP'], None) self.assertTrue(service.containers()[0].is_running) self.dispatch(['kill', '-s', 'SIGKILL'], None) self.assertEqual(len(service.containers(stopped=True)), 1) self.assertFalse(service.containers(stopped=True)[0].is_running) def test_restart(self): service = self.project.get_service('simple') container = service.create_container() container.start() started_at = container.dictionary['State']['StartedAt'] self.dispatch(['restart', '-t', '1'], None) container.inspect() self.assertNotEqual( container.dictionary['State']['FinishedAt'], '0001-01-01T00:00:00Z', ) self.assertNotEqual( container.dictionary['State']['StartedAt'], started_at, ) def test_restart_stopped_container(self): service = self.project.get_service('simple') container = service.create_container() container.start() container.kill() self.assertEqual(len(service.containers(stopped=True)), 1) self.dispatch(['restart', '-t', '1'], None) self.assertEqual(len(service.containers(stopped=False)), 1) def test_scale(self): project = self.project self.dispatch(['scale', 'simple=1']) self.assertEqual(len(project.get_service('simple').containers()), 1) self.dispatch(['scale', 'simple=3', 'another=2']) self.assertEqual(len(project.get_service('simple').containers()), 3) self.assertEqual(len(project.get_service('another').containers()), 2) self.dispatch(['scale', 'simple=1', 'another=1']) self.assertEqual(len(project.get_service('simple').containers()), 1) self.assertEqual(len(project.get_service('another').containers()), 1) self.dispatch(['scale', 'simple=1', 'another=1']) self.assertEqual(len(project.get_service('simple').containers()), 1) self.assertEqual(len(project.get_service('another').containers()), 1) self.dispatch(['scale', 'simple=0', 'another=0']) self.assertEqual(len(project.get_service('simple').containers()), 0) self.assertEqual(len(project.get_service('another').containers()), 0) def test_port(self): self.base_dir = 'tests/fixtures/ports-composefile' self.dispatch(['up', '-d'], None) container = self.project.get_service('simple').get_container() def get_port(number): result = self.dispatch(['port', 'simple', str(number)]) return result.stdout.rstrip() self.assertEqual(get_port(3000), container.get_local_port(3000)) self.assertEqual(get_port(3001), "0.0.0.0:49152") self.assertEqual(get_port(3002), "0.0.0.0:49153") def test_port_with_scale(self): self.base_dir = 'tests/fixtures/ports-composefile-scale' self.dispatch(['scale', 'simple=2'], None) containers = sorted( self.project.containers(service_names=['simple']), key=attrgetter('name')) def get_port(number, index=None): if index is None: result = self.dispatch(['port', 'simple', str(number)]) else: result = self.dispatch(['port', '--index=' + str(index), 'simple', str(number)]) return result.stdout.rstrip() self.assertEqual(get_port(3000), containers[0].get_local_port(3000)) self.assertEqual(get_port(3000, index=1), containers[0].get_local_port(3000)) self.assertEqual(get_port(3000, index=2), containers[1].get_local_port(3000)) self.assertEqual(get_port(3002), "") def test_env_file_relative_to_compose_file(self): config_path = os.path.abspath('tests/fixtures/env-file/docker-compose.yml') self.dispatch(['-f', config_path, 'up', '-d'], None) self._project = get_project(self.base_dir, [config_path]) containers = self.project.containers(stopped=True) 
self.assertEqual(len(containers), 1) self.assertIn("FOO=1", containers[0].get('Config.Env')) @mock.patch.dict(os.environ) def test_home_and_env_var_in_volume_path(self): os.environ['VOLUME_NAME'] = 'my-volume' os.environ['HOME'] = '/tmp/home-dir' self.base_dir = 'tests/fixtures/volume-path-interpolation' self.dispatch(['up', '-d'], None) container = self.project.containers(stopped=True)[0] actual_host_path = container.get('Volumes')['/container-path'] components = actual_host_path.split('/') assert components[-2:] == ['home-dir', 'my-volume'] def test_up_with_default_override_file(self): self.base_dir = 'tests/fixtures/override-files' self.dispatch(['up', '-d'], None) containers = self.project.containers() self.assertEqual(len(containers), 2) web, db = containers self.assertEqual(web.human_readable_command, 'top') self.assertEqual(db.human_readable_command, 'top') def test_up_with_multiple_files(self): self.base_dir = 'tests/fixtures/override-files' config_paths = [ 'docker-compose.yml', 'docker-compose.override.yml', 'extra.yml', ] self._project = get_project(self.base_dir, config_paths) self.dispatch( [ '-f', config_paths[0], '-f', config_paths[1], '-f', config_paths[2], 'up', '-d', ], None) containers = self.project.containers() self.assertEqual(len(containers), 3) web, other, db = containers self.assertEqual(web.human_readable_command, 'top') self.assertTrue({'db', 'other'} <= set(web.links())) self.assertEqual(db.human_readable_command, 'top') self.assertEqual(other.human_readable_command, 'top') def test_up_with_extends(self): self.base_dir = 'tests/fixtures/extends' self.dispatch(['up', '-d'], None) self.assertEqual( set([s.name for s in self.project.services]), set(['mydb', 'myweb']), ) # Sort by name so we get [db, web] containers = sorted( self.project.containers(stopped=True), key=lambda c: c.name, ) self.assertEqual(len(containers), 2) web = containers[1] self.assertEqual(set(web.links()), set(['db', 'mydb_1', 'extends_mydb_1'])) expected_env = set([ "FOO=1", "BAR=2", "BAZ=2", ]) self.assertTrue(expected_env <= set(web.get('Config.Env'))) compose-1.5.2/tests/fixtures/000077500000000000000000000000001263011261000161675ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/UpperCaseDir/000077500000000000000000000000001263011261000205155ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/UpperCaseDir/docker-compose.yml000066400000000000000000000001371263011261000241530ustar00rootroot00000000000000simple: image: busybox:latest command: top another: image: busybox:latest command: top compose-1.5.2/tests/fixtures/build-ctx/000077500000000000000000000000001263011261000200625ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/build-ctx/Dockerfile000066400000000000000000000001201263011261000220450ustar00rootroot00000000000000FROM busybox:latest LABEL com.docker.compose.test_image=true CMD echo "success" compose-1.5.2/tests/fixtures/build-path/000077500000000000000000000000001263011261000202205ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/build-path/docker-compose.yml000066400000000000000000000000341263011261000236520ustar00rootroot00000000000000foo: build: ../build-ctx/ compose-1.5.2/tests/fixtures/commands-composefile/000077500000000000000000000000001263011261000222735ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/commands-composefile/docker-compose.yml000066400000000000000000000001431263011261000257260ustar00rootroot00000000000000implicit: image: composetest_test explicit: image: composetest_test command: [ "/bin/true" ] 
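The commands-composefile fixture above pairs with test_run_without_command in cli_test: a service that omits `command` falls back to the image's CMD (the composetest_test image prints "success" via `/bin/sh -c`), while an explicit `command` wins. A minimal, self-contained sketch of that fallback rule; the dicts here are hypothetical stand-ins for the image config and service options, not Compose's actual resolution code:

# Illustrative sketch only: hypothetical dicts, not compose internals.
# A service with no `command` inherits the image's CMD; an explicit
# `command` overrides it. Expected values mirror the fixture above.
image_config = {'Cmd': ['/bin/sh', '-c', 'echo "success"']}

def effective_command(service_options, image_config):
    return service_options.get('command') or image_config['Cmd']

assert effective_command({}, image_config) == ['/bin/sh', '-c', 'echo "success"']
assert effective_command({'command': ['/bin/true']}, image_config) == ['/bin/true']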
compose-1.5.2/tests/fixtures/dockerfile-with-volume/000077500000000000000000000000001263011261000225545ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/dockerfile-with-volume/Dockerfile000066400000000000000000000001221263011261000245410ustar00rootroot00000000000000FROM busybox:latest LABEL com.docker.compose.test_image=true VOLUME /data CMD top compose-1.5.2/tests/fixtures/dockerfile_with_entrypoint/000077500000000000000000000000001263011261000236245ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/dockerfile_with_entrypoint/Dockerfile000066400000000000000000000001501263011261000256120ustar00rootroot00000000000000FROM busybox:latest LABEL com.docker.compose.test_image=true ENTRYPOINT echo "From prebuilt entrypoint" compose-1.5.2/tests/fixtures/dockerfile_with_entrypoint/docker-compose.yml000066400000000000000000000000241263011261000272550ustar00rootroot00000000000000service: build: . compose-1.5.2/tests/fixtures/echo-services/000077500000000000000000000000001263011261000207265ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/echo-services/docker-compose.yml000066400000000000000000000001601263011261000243600ustar00rootroot00000000000000simple: image: busybox:latest command: echo simple another: image: busybox:latest command: echo another compose-1.5.2/tests/fixtures/env-file/000077500000000000000000000000001263011261000176745ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/env-file/docker-compose.yml000066400000000000000000000001021263011261000233220ustar00rootroot00000000000000web: image: busybox command: /bin/true env_file: ./test.env compose-1.5.2/tests/fixtures/env-file/test.env000066400000000000000000000000051263011261000213600ustar00rootroot00000000000000FOO=1compose-1.5.2/tests/fixtures/env/000077500000000000000000000000001263011261000167575ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/env/one.env000066400000000000000000000001731263011261000202530ustar00rootroot00000000000000# Keep the blank lines and comments in this file, please ONE=2 TWO=1 # (thanks) THREE=3 FOO=bar # FOO=somethingelse compose-1.5.2/tests/fixtures/env/resolve.env000066400000000000000000000000551263011261000211500ustar00rootroot00000000000000FILE_DEF=bär FILE_DEF_EMPTY= ENV_DEF NO_DEF compose-1.5.2/tests/fixtures/env/two.env000066400000000000000000000000201263011261000202720ustar00rootroot00000000000000FOO=baz DOO=dah compose-1.5.2/tests/fixtures/environment-composefile/000077500000000000000000000000001263011261000230365ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/environment-composefile/docker-compose.yml000066400000000000000000000001361263011261000264730ustar00rootroot00000000000000service: image: busybox:latest command: top environment: foo: bar hello: world compose-1.5.2/tests/fixtures/environment-interpolation/000077500000000000000000000000001263011261000234205ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/environment-interpolation/docker-compose.yml000066400000000000000000000004121263011261000270520ustar00rootroot00000000000000web: # unbracketed name image: $IMAGE # array element ports: - "${HOST_PORT}:8000" # dictionary item value labels: mylabel: "${LABEL_VALUE}" # unset value hostname: "host-${UNSET_VALUE}" # escaped interpolation command: "$${ESCAPED}" compose-1.5.2/tests/fixtures/extends/000077500000000000000000000000001263011261000176415ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/extends/circle-1.yml000066400000000000000000000002231263011261000217600ustar00rootroot00000000000000foo: image: busybox bar: image: busybox 
web: extends: file: circle-2.yml service: other baz: image: busybox quux: image: busybox compose-1.5.2/tests/fixtures/extends/circle-2.yml000066400000000000000000000002231263011261000217610ustar00rootroot00000000000000foo: image: busybox bar: image: busybox other: extends: file: circle-1.yml service: web baz: image: busybox quux: image: busybox compose-1.5.2/tests/fixtures/extends/common.yml000066400000000000000000000001221263011261000216470ustar00rootroot00000000000000web: image: busybox command: /bin/true environment: - FOO=1 - BAR=1 compose-1.5.2/tests/fixtures/extends/docker-compose.yml000066400000000000000000000003461263011261000233010ustar00rootroot00000000000000myweb: extends: file: common.yml service: web command: top links: - "mydb:db" environment: # leave FOO alone # override BAR BAR: "2" # add BAZ BAZ: "2" mydb: image: busybox command: top compose-1.5.2/tests/fixtures/extends/invalid-links.yml000066400000000000000000000001521263011261000231260ustar00rootroot00000000000000myweb: build: '.' extends: service: web command: top web: build: '.' links: - "mydb:db" compose-1.5.2/tests/fixtures/extends/invalid-net.yml000066400000000000000000000001471263011261000226000ustar00rootroot00000000000000myweb: build: '.' extends: service: web command: top web: build: '.' net: "container:db" compose-1.5.2/tests/fixtures/extends/invalid-volumes.yml000066400000000000000000000001541263011261000235020ustar00rootroot00000000000000myweb: build: '.' extends: service: web command: top web: build: '.' volumes_from: - "db" compose-1.5.2/tests/fixtures/extends/nested-intermediate.yml000066400000000000000000000001371263011261000243170ustar00rootroot00000000000000webintermediate: extends: file: common.yml service: web environment: - "FOO=2" compose-1.5.2/tests/fixtures/extends/nested.yml000066400000000000000000000001561263011261000216500ustar00rootroot00000000000000myweb: extends: file: nested-intermediate.yml service: webintermediate environment: - "BAR=2" compose-1.5.2/tests/fixtures/extends/no-file-specified.yml000066400000000000000000000001631263011261000236460ustar00rootroot00000000000000myweb: extends: service: web environment: - "BAR=1" web: image: busybox environment: - "BAZ=3" compose-1.5.2/tests/fixtures/extends/nonexistent-path-base.yml000066400000000000000000000001371263011261000246050ustar00rootroot00000000000000dnebase: build: nonexistent.path command: /bin/true environment: - FOO=1 - BAR=1 compose-1.5.2/tests/fixtures/extends/nonexistent-path-child.yml000066400000000000000000000002171263011261000247550ustar00rootroot00000000000000dnechild: extends: file: nonexistent-path-base.yml service: dnebase image: busybox command: /bin/true environment: - BAR=2 compose-1.5.2/tests/fixtures/extends/nonexistent-service.yml000066400000000000000000000000621263011261000243760ustar00rootroot00000000000000web: image: busybox extends: service: foo compose-1.5.2/tests/fixtures/extends/service-with-invalid-schema.yml000066400000000000000000000001111263011261000256500ustar00rootroot00000000000000myweb: extends: file: valid-composite-extends.yml service: web compose-1.5.2/tests/fixtures/extends/service-with-valid-composite-extends.yml000066400000000000000000000001301263011261000275340ustar00rootroot00000000000000myweb: build: '.' 
extends: file: 'valid-composite-extends.yml' service: web compose-1.5.2/tests/fixtures/extends/specify-file-as-self.yml000066400000000000000000000004221263011261000242710ustar00rootroot00000000000000myweb: extends: file: specify-file-as-self.yml service: web environment: - "BAR=1" web: extends: file: specify-file-as-self.yml service: otherweb image: busybox environment: - "BAZ=3" otherweb: image: busybox environment: - "YEP=1" compose-1.5.2/tests/fixtures/extends/valid-common-config.yml000066400000000000000000000001441263011261000242130ustar00rootroot00000000000000myweb: build: '.' extends: file: valid-common.yml service: common-config command: top compose-1.5.2/tests/fixtures/extends/valid-common.yml000066400000000000000000000000521263011261000227460ustar00rootroot00000000000000common-config: environment: - FOO=1 compose-1.5.2/tests/fixtures/extends/valid-composite-extends.yml000066400000000000000000000000241263011261000251270ustar00rootroot00000000000000web: command: top compose-1.5.2/tests/fixtures/extends/valid-interpolation-2.yml000066400000000000000000000000671263011261000245120ustar00rootroot00000000000000web: build: '.' hostname: "host-${HOSTNAME_VALUE}" compose-1.5.2/tests/fixtures/extends/valid-interpolation.yml000066400000000000000000000001261263011261000243470ustar00rootroot00000000000000myweb: extends: service: web file: valid-interpolation-2.yml command: top compose-1.5.2/tests/fixtures/extends/verbose-and-shorthand.yml000066400000000000000000000002611263011261000245600ustar00rootroot00000000000000base: image: busybox environment: - "BAR=1" verbose: extends: service: base environment: - "FOO=1" shorthand: extends: base environment: - "FOO=2" compose-1.5.2/tests/fixtures/links-composefile/000077500000000000000000000000001263011261000216125ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/links-composefile/docker-compose.yml000066400000000000000000000002341263011261000252460ustar00rootroot00000000000000db: image: busybox:latest command: top web: image: busybox:latest command: top links: - db:db console: image: busybox:latest command: top compose-1.5.2/tests/fixtures/longer-filename-composefile/000077500000000000000000000000001263011261000235365ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/longer-filename-composefile/docker-compose.yaml000066400000000000000000000000741263011261000273350ustar00rootroot00000000000000definedinyamlnotyml: image: busybox:latest command: top compose-1.5.2/tests/fixtures/multiple-composefiles/000077500000000000000000000000001263011261000225105ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/multiple-composefiles/compose2.yml000066400000000000000000000000631263011261000247610ustar00rootroot00000000000000yetanother: image: busybox:latest command: top compose-1.5.2/tests/fixtures/multiple-composefiles/docker-compose.yml000066400000000000000000000001371263011261000261460ustar00rootroot00000000000000simple: image: busybox:latest command: top another: image: busybox:latest command: top compose-1.5.2/tests/fixtures/no-composefile/000077500000000000000000000000001263011261000211065ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/no-composefile/.gitignore000066400000000000000000000000001263011261000230640ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/override-files/000077500000000000000000000000001263011261000211065ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/override-files/docker-compose.override.yml000066400000000000000000000000611263011261000263560ustar00rootroot00000000000000 web: command: "top" 
db: command: "top" compose-1.5.2/tests/fixtures/override-files/docker-compose.yml000066400000000000000000000002111263011261000245350ustar00rootroot00000000000000 web: image: busybox:latest command: "sleep 200" links: - db db: image: busybox:latest command: "sleep 200" compose-1.5.2/tests/fixtures/override-files/extra.yml000066400000000000000000000001431263011261000227520ustar00rootroot00000000000000 web: links: - db - other other: image: busybox:latest command: "top" compose-1.5.2/tests/fixtures/ports-composefile-scale/000077500000000000000000000000001263011261000227265ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/ports-composefile-scale/docker-compose.yml000066400000000000000000000001211263011261000263550ustar00rootroot00000000000000 simple: image: busybox:latest command: /bin/sleep 300 ports: - '3000' compose-1.5.2/tests/fixtures/ports-composefile/000077500000000000000000000000001263011261000216415ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/ports-composefile/docker-compose.yml000066400000000000000000000001671263011261000253020ustar00rootroot00000000000000 simple: image: busybox:latest command: top ports: - '3000' - '49152:3001' - '49153-49154:3002-3003' compose-1.5.2/tests/fixtures/simple-composefile/000077500000000000000000000000001263011261000217635ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/simple-composefile/digest.yml000066400000000000000000000002371263011261000237670ustar00rootroot00000000000000simple: image: busybox:latest command: top digest: image: busybox@sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d command: top compose-1.5.2/tests/fixtures/simple-composefile/docker-compose.yml000066400000000000000000000001371263011261000254210ustar00rootroot00000000000000simple: image: busybox:latest command: top another: image: busybox:latest command: top compose-1.5.2/tests/fixtures/simple-composefile/ignore-pull-failures.yml000066400000000000000000000001511263011261000265500ustar00rootroot00000000000000simple: image: busybox:latest command: top another: image: nonexisting-image:latest command: top compose-1.5.2/tests/fixtures/simple-dockerfile/000077500000000000000000000000001263011261000215655ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/simple-dockerfile/Dockerfile000066400000000000000000000001201263011261000235500ustar00rootroot00000000000000FROM busybox:latest LABEL com.docker.compose.test_image=true CMD echo "success" compose-1.5.2/tests/fixtures/simple-dockerfile/docker-compose.yml000066400000000000000000000000231263011261000252170ustar00rootroot00000000000000simple: build: . compose-1.5.2/tests/fixtures/simple-failing-dockerfile/000077500000000000000000000000001263011261000231745ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/simple-failing-dockerfile/Dockerfile000066400000000000000000000004571263011261000251740ustar00rootroot00000000000000FROM busybox:latest LABEL com.docker.compose.test_image=true LABEL com.docker.compose.test_failing_image=true # With the following label the container will be cleaned up automatically # Must be kept in sync with LABEL_PROJECT from compose/const.py LABEL com.docker.compose.project=composetest RUN exit 1 compose-1.5.2/tests/fixtures/simple-failing-dockerfile/docker-compose.yml000066400000000000000000000000231263011261000266240ustar00rootroot00000000000000simple: build: .
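The simple-failing-dockerfile fixture above pairs with test_build_failed: the failing RUN step leaves a container behind, and the test locates it by the label baked into the Dockerfile. A minimal sketch of that label query, assuming a local Docker daemon and the docker-py Client these tests already use:

# Illustrative sketch, assuming a local Docker daemon. Finds containers
# left behind by the failing build via the fixture's label (the same
# filter test_build_failed applies).
from docker import Client

client = Client()
leftovers = client.containers(
    all=True,
    filters={'label': ['com.docker.compose.test_failing_image=true']})
print([c['Id'] for c in leftovers])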
compose-1.5.2/tests/fixtures/user-composefile/000077500000000000000000000000001263011261000214505ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/user-composefile/docker-compose.yml000066400000000000000000000001001263011261000250740ustar00rootroot00000000000000service: image: busybox:latest user: notauser command: id compose-1.5.2/tests/fixtures/volume-path-interpolation/000077500000000000000000000000001263011261000233155ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/volume-path-interpolation/docker-compose.yml000066400000000000000000000001321263011261000267460ustar00rootroot00000000000000test: image: busybox command: top volumes: - "~/${VOLUME_NAME}:/container-path" compose-1.5.2/tests/fixtures/volume-path/000077500000000000000000000000001263011261000204305ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/volume-path/common/000077500000000000000000000000001263011261000217205ustar00rootroot00000000000000compose-1.5.2/tests/fixtures/volume-path/common/services.yml000066400000000000000000000001021263011261000242570ustar00rootroot00000000000000db: image: busybox volumes: - ./foo:/foo - ./bar:/bar compose-1.5.2/tests/fixtures/volume-path/docker-compose.yml000066400000000000000000000001311263011261000240600ustar00rootroot00000000000000db: extends: file: common/services.yml service: db volumes: - ./bar:/bar compose-1.5.2/tests/integration/000077500000000000000000000000001263011261000166415ustar00rootroot00000000000000compose-1.5.2/tests/integration/__init__.py000066400000000000000000000000001263011261000207400ustar00rootroot00000000000000compose-1.5.2/tests/integration/legacy_test.py000066400000000000000000000155041263011261000215230ustar00rootroot00000000000000import unittest from docker.errors import APIError from .. import mock from .testcases import DockerClientTestCase from compose import legacy from compose.project import Project class UtilitiesTestCase(unittest.TestCase): def test_has_container(self): self.assertTrue( legacy.has_container("composetest", "web", "composetest_web_1", one_off=False), ) self.assertFalse( legacy.has_container("composetest", "web", "composetest_web_run_1", one_off=False), ) def test_has_container_one_off(self): self.assertFalse( legacy.has_container("composetest", "web", "composetest_web_1", one_off=True), ) self.assertTrue( legacy.has_container("composetest", "web", "composetest_web_run_1", one_off=True), ) def test_has_container_different_project(self): self.assertFalse( legacy.has_container("composetest", "web", "otherapp_web_1", one_off=False), ) self.assertFalse( legacy.has_container("composetest", "web", "otherapp_web_run_1", one_off=True), ) def test_has_container_different_service(self): self.assertFalse( legacy.has_container("composetest", "web", "composetest_db_1", one_off=False), ) self.assertFalse( legacy.has_container("composetest", "web", "composetest_db_run_1", one_off=True), ) def test_is_valid_name(self): self.assertTrue( legacy.is_valid_name("composetest_web_1", one_off=False), ) self.assertFalse( legacy.is_valid_name("composetest_web_run_1", one_off=False), ) def test_is_valid_name_one_off(self): self.assertFalse( legacy.is_valid_name("composetest_web_1", one_off=True), ) self.assertTrue( legacy.is_valid_name("composetest_web_run_1", one_off=True), ) def test_is_valid_name_invalid(self): self.assertFalse( legacy.is_valid_name("foo"), ) self.assertFalse( legacy.is_valid_name("composetest_web_lol_1", one_off=True), ) def test_get_legacy_containers(self): client = mock.Mock() client.containers.return_value = [ { "Id": 
"abc123", "Image": "def456", "Name": "composetest_web_1", "Labels": None, }, { "Id": "ghi789", "Image": "def456", "Name": None, "Labels": None, }, { "Id": "jkl012", "Image": "def456", "Labels": None, }, ] containers = legacy.get_legacy_containers(client, "composetest", ["web"]) self.assertEqual(len(containers), 1) self.assertEqual(containers[0].id, 'abc123') class LegacyTestCase(DockerClientTestCase): def setUp(self): super(LegacyTestCase, self).setUp() self.containers = [] db = self.create_service('db') web = self.create_service('web', links=[(db, 'db')]) nginx = self.create_service('nginx', links=[(web, 'web')]) self.services = [db, web, nginx] self.project = Project('composetest', self.services, self.client) # Create a legacy container for each service for service in self.services: service.ensure_image_exists() container = self.client.create_container( name='{}_{}_1'.format(self.project.name, service.name), **service.options ) self.client.start(container) self.containers.append(container) # Create a single one-off legacy container self.containers.append(self.client.create_container( name='{}_{}_run_1'.format(self.project.name, db.name), **self.services[0].options )) def tearDown(self): super(LegacyTestCase, self).tearDown() for container in self.containers: try: self.client.kill(container) except APIError: pass try: self.client.remove_container(container) except APIError: pass def get_legacy_containers(self, **kwargs): return legacy.get_legacy_containers( self.client, self.project.name, [s.name for s in self.services], **kwargs ) def test_get_legacy_container_names(self): self.assertEqual(len(self.get_legacy_containers()), len(self.services)) def test_get_legacy_container_names_one_off(self): self.assertEqual(len(self.get_legacy_containers(one_off=True)), 1) def test_migration_to_labels(self): # Trying to get the container list raises an exception with self.assertRaises(legacy.LegacyContainersError) as cm: self.project.containers(stopped=True) self.assertEqual( set(cm.exception.names), set(['composetest_db_1', 'composetest_web_1', 'composetest_nginx_1']), ) self.assertEqual( set(cm.exception.one_off_names), set(['composetest_db_run_1']), ) # Migrate the containers legacy.migrate_project_to_labels(self.project) # Getting the list no longer raises an exception containers = self.project.containers(stopped=True) self.assertEqual(len(containers), len(self.services)) def test_migration_one_off(self): # We've already migrated legacy.migrate_project_to_labels(self.project) # Trying to create a one-off container results in a Docker API error with self.assertRaises(APIError) as cm: self.project.get_service('db').create_container(one_off=True) # Checking for legacy one-off containers raises an exception with self.assertRaises(legacy.LegacyOneOffContainersError) as cm: legacy.check_for_legacy_containers( self.client, self.project.name, ['db'], allow_one_off=False, ) self.assertEqual( set(cm.exception.one_off_names), set(['composetest_db_run_1']), ) # Remove the old one-off container c = self.client.inspect_container('composetest_db_run_1') self.client.remove_container(c) # Checking no longer raises an exception legacy.check_for_legacy_containers( self.client, self.project.name, ['db'], allow_one_off=False, ) # Creating a one-off container no longer results in an API error self.project.get_service('db').create_container(one_off=True) self.assertIsInstance(self.client.inspect_container('composetest_db_run_1'), dict) 
compose-1.5.2/tests/integration/project_test.py000066400000000000000000000376471263011261000217410ustar00rootroot00000000000000from __future__ import unicode_literals from .testcases import DockerClientTestCase from compose.cli.docker_client import docker_client from compose.config import config from compose.config.types import VolumeFromSpec from compose.config.types import VolumeSpec from compose.const import LABEL_PROJECT from compose.container import Container from compose.project import Project from compose.service import ConvergenceStrategy from compose.service import Net def build_service_dicts(service_config): return config.load( config.ConfigDetails( 'working_dir', [config.ConfigFile(None, service_config)])) class ProjectTest(DockerClientTestCase): def test_containers(self): web = self.create_service('web') db = self.create_service('db') project = Project('composetest', [web, db], self.client) project.up() containers = project.containers() self.assertEqual(len(containers), 2) def test_containers_with_service_names(self): web = self.create_service('web') db = self.create_service('db') project = Project('composetest', [web, db], self.client) project.up() containers = project.containers(['web']) self.assertEqual( [c.name for c in containers], ['composetest_web_1']) def test_containers_with_extra_service(self): web = self.create_service('web') web_1 = web.create_container() db = self.create_service('db') db_1 = db.create_container() self.create_service('extra').create_container() project = Project('composetest', [web, db], self.client) self.assertEqual( set(project.containers(stopped=True)), set([web_1, db_1]), ) def test_volumes_from_service(self): service_dicts = build_service_dicts({ 'data': { 'image': 'busybox:latest', 'volumes': ['/var/data'], }, 'db': { 'image': 'busybox:latest', 'volumes_from': ['data'], }, }) project = Project.from_dicts( name='composetest', service_dicts=service_dicts, client=self.client, ) db = project.get_service('db') data = project.get_service('data') self.assertEqual(db.volumes_from, [VolumeFromSpec(data, 'rw')]) def test_volumes_from_container(self): data_container = Container.create( self.client, image='busybox:latest', volumes=['/var/data'], name='composetest_data_container', labels={LABEL_PROJECT: 'composetest'}, ) project = Project.from_dicts( name='composetest', service_dicts=build_service_dicts({ 'db': { 'image': 'busybox:latest', 'volumes_from': ['composetest_data_container'], }, }), client=self.client, ) db = project.get_service('db') self.assertEqual(db._get_volumes_from(), [data_container.id + ':rw']) def test_get_network_does_not_exist(self): self.require_api_version('1.21') client = docker_client(version='1.21') project = Project('composetest', [], client) assert project.get_network() is None def test_get_network(self): self.require_api_version('1.21') client = docker_client(version='1.21') network_name = 'network_does_exist' project = Project(network_name, [], client) client.create_network(network_name) self.addCleanup(client.remove_network, network_name) assert project.get_network()['Name'] == network_name def test_net_from_service(self): project = Project.from_dicts( name='composetest', service_dicts=build_service_dicts({ 'net': { 'image': 'busybox:latest', 'command': ["top"] }, 'web': { 'image': 'busybox:latest', 'net': 'container:net', 'command': ["top"] }, }), client=self.client, ) project.up() web = project.get_service('web') net = project.get_service('net') self.assertEqual(web.net.mode, 'container:' + net.containers()[0].id) def 
test_net_from_container(self): net_container = Container.create( self.client, image='busybox:latest', name='composetest_net_container', command='top', labels={LABEL_PROJECT: 'composetest'}, ) net_container.start() project = Project.from_dicts( name='composetest', service_dicts=build_service_dicts({ 'web': { 'image': 'busybox:latest', 'net': 'container:composetest_net_container' }, }), client=self.client, ) project.up() web = project.get_service('web') self.assertEqual(web.net.mode, 'container:' + net_container.id) def test_start_pause_unpause_stop_kill_remove(self): web = self.create_service('web') db = self.create_service('db') project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(web.containers()), 0) self.assertEqual(len(db.containers()), 0) web_container_1 = web.create_container() web_container_2 = web.create_container() db_container = db.create_container() project.start(service_names=['web']) self.assertEqual(set(c.name for c in project.containers()), set([web_container_1.name, web_container_2.name])) project.start() self.assertEqual(set(c.name for c in project.containers()), set([web_container_1.name, web_container_2.name, db_container.name])) project.pause(service_names=['web']) self.assertEqual(set([c.name for c in project.containers() if c.is_paused]), set([web_container_1.name, web_container_2.name])) project.pause() self.assertEqual(set([c.name for c in project.containers() if c.is_paused]), set([web_container_1.name, web_container_2.name, db_container.name])) project.unpause(service_names=['db']) self.assertEqual(len([c.name for c in project.containers() if c.is_paused]), 2) project.unpause() self.assertEqual(len([c.name for c in project.containers() if c.is_paused]), 0) project.stop(service_names=['web'], timeout=1) self.assertEqual(set(c.name for c in project.containers()), set([db_container.name])) project.kill(service_names=['db']) self.assertEqual(len(project.containers()), 0) self.assertEqual(len(project.containers(stopped=True)), 3) project.remove_stopped(service_names=['web']) self.assertEqual(len(project.containers(stopped=True)), 1) project.remove_stopped() self.assertEqual(len(project.containers(stopped=True)), 0) def test_project_up(self): web = self.create_service('web') db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db']) self.assertEqual(len(project.containers()), 1) self.assertEqual(len(db.containers()), 1) self.assertEqual(len(web.containers()), 0) def test_project_up_starts_uncreated_services(self): db = self.create_service('db') web = self.create_service('web', links=[(db, 'db')]) project = Project('composetest', [db, web], self.client) project.up(['db']) self.assertEqual(len(project.containers()), 1) project.up() self.assertEqual(len(project.containers()), 2) self.assertEqual(len(db.containers()), 1) self.assertEqual(len(web.containers()), 1) def test_recreate_preserves_volumes(self): web = self.create_service('web') db = self.create_service('db', volumes=[VolumeSpec.parse('/etc')]) project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db']) self.assertEqual(len(project.containers()), 1) old_db_id = project.containers()[0].id db_volume_path = project.containers()[0].get('Volumes./etc') project.up(strategy=ConvergenceStrategy.always) self.assertEqual(len(project.containers()), 2) 
db_container = [c for c in project.containers() if 'db' in c.name][0] self.assertNotEqual(db_container.id, old_db_id) self.assertEqual(db_container.get('Volumes./etc'), db_volume_path) def test_project_up_with_no_recreate_running(self): web = self.create_service('web') db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db']) self.assertEqual(len(project.containers()), 1) old_db_id = project.containers()[0].id db_volume_path = project.containers()[0].inspect()['Volumes']['/var/db'] project.up(strategy=ConvergenceStrategy.never) self.assertEqual(len(project.containers()), 2) db_container = [c for c in project.containers() if 'db' in c.name][0] self.assertEqual(db_container.id, old_db_id) self.assertEqual(db_container.inspect()['Volumes']['/var/db'], db_volume_path) def test_project_up_with_no_recreate_stopped(self): web = self.create_service('web') db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) project = Project('composetest', [web, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db']) project.kill() old_containers = project.containers(stopped=True) self.assertEqual(len(old_containers), 1) old_db_id = old_containers[0].id db_volume_path = old_containers[0].inspect()['Volumes']['/var/db'] project.up(strategy=ConvergenceStrategy.never) new_containers = project.containers(stopped=True) self.assertEqual(len(new_containers), 2) self.assertEqual([c.is_running for c in new_containers], [True, True]) db_container = [c for c in new_containers if 'db' in c.name][0] self.assertEqual(db_container.id, old_db_id) self.assertEqual(db_container.inspect()['Volumes']['/var/db'], db_volume_path) def test_project_up_without_all_services(self): console = self.create_service('console') db = self.create_service('db') project = Project('composetest', [console, db], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up() self.assertEqual(len(project.containers()), 2) self.assertEqual(len(db.containers()), 1) self.assertEqual(len(console.containers()), 1) def test_project_up_starts_links(self): console = self.create_service('console') db = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) web = self.create_service('web', links=[(db, 'db')]) project = Project('composetest', [web, db, console], self.client) project.start() self.assertEqual(len(project.containers()), 0) project.up(['web']) self.assertEqual(len(project.containers()), 2) self.assertEqual(len(web.containers()), 1) self.assertEqual(len(db.containers()), 1) self.assertEqual(len(console.containers()), 0) def test_project_up_starts_depends(self): project = Project.from_dicts( name='composetest', service_dicts=build_service_dicts({ 'console': { 'image': 'busybox:latest', 'command': ["top"], }, 'data': { 'image': 'busybox:latest', 'command': ["top"] }, 'db': { 'image': 'busybox:latest', 'command': ["top"], 'volumes_from': ['data'], }, 'web': { 'image': 'busybox:latest', 'command': ["top"], 'links': ['db'], }, }), client=self.client, ) project.start() self.assertEqual(len(project.containers()), 0) project.up(['web']) self.assertEqual(len(project.containers()), 3) self.assertEqual(len(project.get_service('web').containers()), 1) self.assertEqual(len(project.get_service('db').containers()), 1) self.assertEqual(len(project.get_service('data').containers()), 1) 
self.assertEqual(len(project.get_service('console').containers()), 0) def test_project_up_with_no_deps(self): project = Project.from_dicts( name='composetest', service_dicts=build_service_dicts({ 'console': { 'image': 'busybox:latest', 'command': ["top"], }, 'data': { 'image': 'busybox:latest', 'command': ["top"] }, 'db': { 'image': 'busybox:latest', 'command': ["top"], 'volumes_from': ['data'], }, 'web': { 'image': 'busybox:latest', 'command': ["top"], 'links': ['db'], }, }), client=self.client, ) project.start() self.assertEqual(len(project.containers()), 0) project.up(['db'], start_deps=False) self.assertEqual(len(project.containers(stopped=True)), 2) self.assertEqual(len(project.get_service('web').containers()), 0) self.assertEqual(len(project.get_service('db').containers()), 1) self.assertEqual(len(project.get_service('data').containers()), 0) self.assertEqual(len(project.get_service('data').containers(stopped=True)), 1) self.assertEqual(len(project.get_service('console').containers()), 0) def test_project_up_with_custom_network(self): self.require_api_version('1.21') client = docker_client(version='1.21') network_name = 'composetest-custom' client.create_network(network_name) self.addCleanup(client.remove_network, network_name) web = self.create_service('web', net=Net(network_name)) project = Project('composetest', [web], client, use_networking=True) project.up() assert project.get_network() is None def test_unscale_after_restart(self): web = self.create_service('web') project = Project('composetest', [web], self.client) project.start() service = project.get_service('web') service.scale(1) self.assertEqual(len(service.containers()), 1) service.scale(3) self.assertEqual(len(service.containers()), 3) project.up() service = project.get_service('web') self.assertEqual(len(service.containers()), 3) service.scale(1) self.assertEqual(len(service.containers()), 1) project.up() service = project.get_service('web') self.assertEqual(len(service.containers()), 1) # does scale=0 make any sense? after recreating, at least 1 container is running service.scale(0) project.up() service = project.get_service('web') self.assertEqual(len(service.containers()), 1) compose-1.5.2/tests/integration/resilience_test.py000066400000000000000000000032741263011261000224020ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from ..
import mock from .testcases import DockerClientTestCase from compose.config.types import VolumeSpec from compose.project import Project from compose.service import ConvergenceStrategy class ResilienceTest(DockerClientTestCase): def setUp(self): self.db = self.create_service( 'db', volumes=[VolumeSpec.parse('/var/db')], command='top') self.project = Project('composetest', [self.db], self.client) container = self.db.create_container() container.start() self.host_path = container.get('Volumes')['/var/db'] def test_successful_recreate(self): self.project.up(strategy=ConvergenceStrategy.always) container = self.db.containers()[0] self.assertEqual(container.get('Volumes')['/var/db'], self.host_path) def test_create_failure(self): with mock.patch('compose.service.Service.create_container', crash): with self.assertRaises(Crash): self.project.up(strategy=ConvergenceStrategy.always) self.project.up() container = self.db.containers()[0] self.assertEqual(container.get('Volumes')['/var/db'], self.host_path) def test_start_failure(self): with mock.patch('compose.container.Container.start', crash): with self.assertRaises(Crash): self.project.up(strategy=ConvergenceStrategy.always) self.project.up() container = self.db.containers()[0] self.assertEqual(container.get('Volumes')['/var/db'], self.host_path) class Crash(Exception): pass def crash(*args, **kwargs): raise Crash() compose-1.5.2/tests/integration/service_test.py000066400000000000000000001144011263011261000217130ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import os import shutil import tempfile from os import path from docker.errors import APIError from six import StringIO from six import text_type from .. import mock from .testcases import DockerClientTestCase from .testcases import pull_busybox from compose import __version__ from compose.config.types import VolumeFromSpec from compose.config.types import VolumeSpec from compose.const import LABEL_CONFIG_HASH from compose.const import LABEL_CONTAINER_NUMBER from compose.const import LABEL_ONE_OFF from compose.const import LABEL_PROJECT from compose.const import LABEL_SERVICE from compose.const import LABEL_VERSION from compose.container import Container from compose.service import ConvergencePlan from compose.service import ConvergenceStrategy from compose.service import Net from compose.service import Service def create_and_start_container(service, **override_options): container = service.create_container(**override_options) container.start() return container class ServiceTest(DockerClientTestCase): def test_containers(self): foo = self.create_service('foo') bar = self.create_service('bar') create_and_start_container(foo) self.assertEqual(len(foo.containers()), 1) self.assertEqual(foo.containers()[0].name, 'composetest_foo_1') self.assertEqual(len(bar.containers()), 0) create_and_start_container(bar) create_and_start_container(bar) self.assertEqual(len(foo.containers()), 1) self.assertEqual(len(bar.containers()), 2) names = [c.name for c in bar.containers()] self.assertIn('composetest_bar_1', names) self.assertIn('composetest_bar_2', names) def test_containers_one_off(self): db = self.create_service('db') container = db.create_container(one_off=True) self.assertEqual(db.containers(stopped=True), []) self.assertEqual(db.containers(one_off=True, stopped=True), [container]) def test_project_is_added_to_container_name(self): service = self.create_service('web') create_and_start_container(service) 
self.assertEqual(service.containers()[0].name, 'composetest_web_1') def test_start_stop(self): service = self.create_service('scalingtest') self.assertEqual(len(service.containers(stopped=True)), 0) service.create_container() self.assertEqual(len(service.containers()), 0) self.assertEqual(len(service.containers(stopped=True)), 1) service.start() self.assertEqual(len(service.containers()), 1) self.assertEqual(len(service.containers(stopped=True)), 1) service.stop(timeout=1) self.assertEqual(len(service.containers()), 0) self.assertEqual(len(service.containers(stopped=True)), 1) service.stop(timeout=1) self.assertEqual(len(service.containers()), 0) self.assertEqual(len(service.containers(stopped=True)), 1) def test_kill_remove(self): service = self.create_service('scalingtest') create_and_start_container(service) self.assertEqual(len(service.containers()), 1) service.remove_stopped() self.assertEqual(len(service.containers()), 1) service.kill() self.assertEqual(len(service.containers()), 0) self.assertEqual(len(service.containers(stopped=True)), 1) service.remove_stopped() self.assertEqual(len(service.containers(stopped=True)), 0) def test_create_container_with_one_off(self): db = self.create_service('db') container = db.create_container(one_off=True) self.assertEqual(container.name, 'composetest_db_run_1') def test_create_container_with_one_off_when_existing_container_is_running(self): db = self.create_service('db') db.start() container = db.create_container(one_off=True) self.assertEqual(container.name, 'composetest_db_run_1') def test_create_container_with_unspecified_volume(self): service = self.create_service('db', volumes=[VolumeSpec.parse('/var/db')]) container = service.create_container() container.start() self.assertIn('/var/db', container.get('Volumes')) def test_create_container_with_volume_driver(self): service = self.create_service('db', volume_driver='foodriver') container = service.create_container() container.start() self.assertEqual('foodriver', container.get('Config.VolumeDriver')) def test_create_container_with_cpu_shares(self): service = self.create_service('db', cpu_shares=73) container = service.create_container() container.start() self.assertEqual(container.get('HostConfig.CpuShares'), 73) def test_create_container_with_extra_hosts_list(self): extra_hosts = ['somehost:162.242.195.82', 'otherhost:50.31.209.229'] service = self.create_service('db', extra_hosts=extra_hosts) container = service.create_container() container.start() self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts)) def test_create_container_with_extra_hosts_dicts(self): extra_hosts = {'somehost': '162.242.195.82', 'otherhost': '50.31.209.229'} extra_hosts_list = ['somehost:162.242.195.82', 'otherhost:50.31.209.229'] service = self.create_service('db', extra_hosts=extra_hosts) container = service.create_container() container.start() self.assertEqual(set(container.get('HostConfig.ExtraHosts')), set(extra_hosts_list)) def test_create_container_with_cpu_set(self): service = self.create_service('db', cpuset='0') container = service.create_container() container.start() self.assertEqual(container.get('HostConfig.CpusetCpus'), '0') def test_create_container_with_read_only_root_fs(self): read_only = True service = self.create_service('db', read_only=read_only) container = service.create_container() container.start() self.assertEqual(container.get('HostConfig.ReadonlyRootfs'), read_only, container.get('HostConfig')) def test_create_container_with_security_opt(self): security_opt = 
['label:disable'] service = self.create_service('db', security_opt=security_opt) container = service.create_container() container.start() self.assertEqual(set(container.get('HostConfig.SecurityOpt')), set(security_opt)) def test_create_container_with_mac_address(self): service = self.create_service('db', mac_address='02:42:ac:11:65:43') container = service.create_container() container.start() self.assertEqual(container.inspect()['Config']['MacAddress'], '02:42:ac:11:65:43') def test_create_container_with_specified_volume(self): host_path = '/tmp/host-path' container_path = '/container-path' service = self.create_service( 'db', volumes=[VolumeSpec(host_path, container_path, 'rw')]) container = service.create_container() container.start() volumes = container.inspect()['Volumes'] self.assertIn(container_path, volumes) # Match the last component ("host-path"), because boot2docker symlinks /tmp actual_host_path = volumes[container_path] self.assertTrue(path.basename(actual_host_path) == path.basename(host_path), msg=("Last component differs: %s, %s" % (actual_host_path, host_path))) def test_recreate_preserves_volume_with_trailing_slash(self): """When the Compose file specifies a trailing slash in the container path, make sure we copy the volume over when recreating. """ service = self.create_service('data', volumes=[VolumeSpec.parse('/data/')]) old_container = create_and_start_container(service) volume_path = old_container.get('Volumes')['/data'] new_container = service.recreate_container(old_container) self.assertEqual(new_container.get('Volumes')['/data'], volume_path) def test_duplicate_volume_trailing_slash(self): """ When an image specifies a volume, and the Compose file specifies a host path but adds a trailing slash, make sure that we don't create duplicate binds. 
""" host_path = '/tmp/data' container_path = '/data' volumes = [VolumeSpec.parse('{}:{}/'.format(host_path, container_path))] tmp_container = self.client.create_container( 'busybox', 'true', volumes={container_path: {}}, labels={'com.docker.compose.test_image': 'true'}, ) image = self.client.commit(tmp_container)['Id'] service = self.create_service('db', image=image, volumes=volumes) old_container = create_and_start_container(service) self.assertEqual( old_container.get('Config.Volumes'), {container_path: {}}, ) service = self.create_service('db', image=image, volumes=volumes) new_container = service.recreate_container(old_container) self.assertEqual( new_container.get('Config.Volumes'), {container_path: {}}, ) self.assertEqual(service.containers(stopped=False), [new_container]) def test_create_container_with_volumes_from(self): volume_service = self.create_service('data') volume_container_1 = volume_service.create_container() volume_container_2 = Container.create( self.client, image='busybox:latest', command=["top"], labels={LABEL_PROJECT: 'composetest'}, ) host_service = self.create_service( 'host', volumes_from=[ VolumeFromSpec(volume_service, 'rw'), VolumeFromSpec(volume_container_2, 'rw') ] ) host_container = host_service.create_container() host_container.start() self.assertIn(volume_container_1.id + ':rw', host_container.get('HostConfig.VolumesFrom')) self.assertIn(volume_container_2.id + ':rw', host_container.get('HostConfig.VolumesFrom')) def test_execute_convergence_plan_recreate(self): service = self.create_service( 'db', environment={'FOO': '1'}, volumes=[VolumeSpec.parse('/etc')], entrypoint=['top'], command=['-d', '1'] ) old_container = service.create_container() self.assertEqual(old_container.get('Config.Entrypoint'), ['top']) self.assertEqual(old_container.get('Config.Cmd'), ['-d', '1']) self.assertIn('FOO=1', old_container.get('Config.Env')) self.assertEqual(old_container.name, 'composetest_db_1') old_container.start() old_container.inspect() # reload volume data volume_path = old_container.get('Volumes')['/etc'] num_containers_before = len(self.client.containers(all=True)) service.options['environment']['FOO'] = '2' new_container, = service.execute_convergence_plan( ConvergencePlan('recreate', [old_container])) self.assertEqual(new_container.get('Config.Entrypoint'), ['top']) self.assertEqual(new_container.get('Config.Cmd'), ['-d', '1']) self.assertIn('FOO=2', new_container.get('Config.Env')) self.assertEqual(new_container.name, 'composetest_db_1') self.assertEqual(new_container.get('Volumes')['/etc'], volume_path) self.assertIn( 'affinity:container==%s' % old_container.id, new_container.get('Config.Env')) self.assertEqual(len(self.client.containers(all=True)), num_containers_before) self.assertNotEqual(old_container.id, new_container.id) self.assertRaises(APIError, self.client.inspect_container, old_container.id) def test_execute_convergence_plan_when_containers_are_stopped(self): service = self.create_service( 'db', environment={'FOO': '1'}, volumes=[VolumeSpec.parse('/var/db')], entrypoint=['top'], command=['-d', '1'] ) service.create_container() containers = service.containers(stopped=True) self.assertEqual(len(containers), 1) container, = containers self.assertFalse(container.is_running) service.execute_convergence_plan(ConvergencePlan('start', [container])) containers = service.containers() self.assertEqual(len(containers), 1) container.inspect() self.assertEqual(container, containers[0]) self.assertTrue(container.is_running) def 
test_execute_convergence_plan_with_image_declared_volume(self): service = Service( project='composetest', name='db', client=self.client, build='tests/fixtures/dockerfile-with-volume', ) old_container = create_and_start_container(service) self.assertEqual(list(old_container.get('Volumes').keys()), ['/data']) volume_path = old_container.get('Volumes')['/data'] new_container, = service.execute_convergence_plan( ConvergencePlan('recreate', [old_container])) self.assertEqual(list(new_container.get('Volumes')), ['/data']) self.assertEqual(new_container.get('Volumes')['/data'], volume_path) def test_execute_convergence_plan_when_image_volume_masks_config(self): service = self.create_service( 'db', build='tests/fixtures/dockerfile-with-volume', ) old_container = create_and_start_container(service) self.assertEqual(list(old_container.get('Volumes').keys()), ['/data']) volume_path = old_container.get('Volumes')['/data'] service.options['volumes'] = [VolumeSpec.parse('/tmp:/data')] with mock.patch('compose.service.log') as mock_log: new_container, = service.execute_convergence_plan( ConvergencePlan('recreate', [old_container])) mock_log.warn.assert_called_once_with(mock.ANY) _, args, kwargs = mock_log.warn.mock_calls[0] self.assertIn( "Service \"db\" is using volume \"/data\" from the previous container", args[0]) self.assertEqual(list(new_container.get('Volumes')), ['/data']) self.assertEqual(new_container.get('Volumes')['/data'], volume_path) def test_start_container_passes_through_options(self): db = self.create_service('db') create_and_start_container(db, environment={'FOO': 'BAR'}) self.assertEqual(db.containers()[0].environment['FOO'], 'BAR') def test_start_container_inherits_options_from_constructor(self): db = self.create_service('db', environment={'FOO': 'BAR'}) create_and_start_container(db) self.assertEqual(db.containers()[0].environment['FOO'], 'BAR') def test_start_container_creates_links(self): db = self.create_service('db') web = self.create_service('web', links=[(db, None)]) create_and_start_container(db) create_and_start_container(db) create_and_start_container(web) self.assertEqual( set(web.containers()[0].links()), set([ 'composetest_db_1', 'db_1', 'composetest_db_2', 'db_2', 'db']) ) def test_start_container_creates_links_with_names(self): db = self.create_service('db') web = self.create_service('web', links=[(db, 'custom_link_name')]) create_and_start_container(db) create_and_start_container(db) create_and_start_container(web) self.assertEqual( set(web.containers()[0].links()), set([ 'composetest_db_1', 'db_1', 'composetest_db_2', 'db_2', 'custom_link_name']) ) def test_start_container_with_external_links(self): db = self.create_service('db') web = self.create_service('web', external_links=['composetest_db_1', 'composetest_db_2', 'composetest_db_3:db_3']) for _ in range(3): create_and_start_container(db) create_and_start_container(web) self.assertEqual( set(web.containers()[0].links()), set([ 'composetest_db_1', 'composetest_db_2', 'db_3']), ) def test_start_normal_container_does_not_create_links_to_its_own_service(self): db = self.create_service('db') create_and_start_container(db) create_and_start_container(db) c = create_and_start_container(db) self.assertEqual(set(c.links()), set([])) def test_start_one_off_container_creates_links_to_its_own_service(self): db = self.create_service('db') create_and_start_container(db) create_and_start_container(db) c = create_and_start_container(db, one_off=True) self.assertEqual( set(c.links()), set([ 'composetest_db_1', 'db_1', 
'composetest_db_2', 'db_2', 'db']) ) def test_start_container_builds_images(self): service = Service( name='test', client=self.client, build='tests/fixtures/simple-dockerfile', project='composetest', ) container = create_and_start_container(service) container.wait() self.assertIn(b'success', container.logs()) self.assertEqual(len(self.client.images(name='composetest_test')), 1) def test_start_container_uses_tagged_image_if_it_exists(self): self.check_build('tests/fixtures/simple-dockerfile', tag='composetest_test') service = Service( name='test', client=self.client, build='this/does/not/exist/and/will/throw/error', project='composetest', ) container = create_and_start_container(service) container.wait() self.assertIn(b'success', container.logs()) def test_start_container_creates_ports(self): service = self.create_service('web', ports=[8000]) container = create_and_start_container(service).inspect() self.assertEqual(list(container['NetworkSettings']['Ports'].keys()), ['8000/tcp']) self.assertNotEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8000') def test_build(self): base_dir = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, base_dir) with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f: f.write("FROM busybox\n") self.create_service('web', build=base_dir).build() self.assertEqual(len(self.client.images(name='composetest_web')), 1) def test_build_non_ascii_filename(self): base_dir = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, base_dir) with open(os.path.join(base_dir, 'Dockerfile'), 'w') as f: f.write("FROM busybox\n") with open(os.path.join(base_dir.encode('utf8'), b'foo\xE2bar'), 'w') as f: f.write("hello world\n") self.create_service('web', build=text_type(base_dir)).build() self.assertEqual(len(self.client.images(name='composetest_web')), 1) def test_build_with_git_url(self): build_url = "https://github.com/dnephin/docker-build-from-url.git" service = self.create_service('buildwithurl', build=build_url) self.addCleanup(self.client.remove_image, service.image_name) service.build() assert service.image() def test_start_container_stays_unpriviliged(self): service = self.create_service('web') container = create_and_start_container(service).inspect() self.assertEqual(container['HostConfig']['Privileged'], False) def test_start_container_becomes_priviliged(self): service = self.create_service('web', privileged=True) container = create_and_start_container(service).inspect() self.assertEqual(container['HostConfig']['Privileged'], True) def test_expose_does_not_publish_ports(self): service = self.create_service('web', expose=["8000"]) container = create_and_start_container(service).inspect() self.assertEqual(container['NetworkSettings']['Ports'], {'8000/tcp': None}) def test_start_container_creates_port_with_explicit_protocol(self): service = self.create_service('web', ports=['8000/udp']) container = create_and_start_container(service).inspect() self.assertEqual(list(container['NetworkSettings']['Ports'].keys()), ['8000/udp']) def test_start_container_creates_fixed_external_ports(self): service = self.create_service('web', ports=['8000:8000']) container = create_and_start_container(service).inspect() self.assertIn('8000/tcp', container['NetworkSettings']['Ports']) self.assertEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8000') def test_start_container_creates_fixed_external_ports_when_it_is_different_to_internal_port(self): service = self.create_service('web', ports=['8001:8000']) container = 
create_and_start_container(service).inspect() self.assertIn('8000/tcp', container['NetworkSettings']['Ports']) self.assertEqual(container['NetworkSettings']['Ports']['8000/tcp'][0]['HostPort'], '8001') def test_port_with_explicit_interface(self): service = self.create_service('web', ports=[ '127.0.0.1:8001:8000', '0.0.0.0:9001:9000/udp', ]) container = create_and_start_container(service).inspect() self.assertEqual(container['NetworkSettings']['Ports'], { '8000/tcp': [ { 'HostIp': '127.0.0.1', 'HostPort': '8001', }, ], '9000/udp': [ { 'HostIp': '0.0.0.0', 'HostPort': '9001', }, ], }) def test_create_with_image_id(self): # Get image id for the current busybox:latest pull_busybox(self.client) image_id = self.client.inspect_image('busybox:latest')['Id'][:12] service = self.create_service('foo', image=image_id) service.create_container() def test_scale(self): service = self.create_service('web') service.scale(1) self.assertEqual(len(service.containers()), 1) # Ensure containers don't have stdout or stdin connected container = service.containers()[0] config = container.inspect()['Config'] self.assertFalse(config['AttachStderr']) self.assertFalse(config['AttachStdout']) self.assertFalse(config['AttachStdin']) service.scale(3) self.assertEqual(len(service.containers()), 3) service.scale(1) self.assertEqual(len(service.containers()), 1) service.scale(0) self.assertEqual(len(service.containers()), 0) def test_scale_with_stopped_containers(self): """ Given there are some stopped containers and scale is called with a desired number that is the same as the number of stopped containers, test that those containers are restarted and not removed/recreated. """ service = self.create_service('web') next_number = service._next_container_number() valid_numbers = [next_number, next_number + 1] service.create_container(number=next_number) service.create_container(number=next_number + 1) with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout: service.scale(2) for container in service.containers(): self.assertTrue(container.is_running) self.assertTrue(container.number in valid_numbers) captured_output = mock_stdout.getvalue() self.assertNotIn('Creating', captured_output) self.assertIn('Starting', captured_output) def test_scale_with_stopped_containers_and_needing_creation(self): """ Given there are some stopped containers and scale is called with a desired number that is greater than the number of stopped containers, test that those containers are restarted and the required number of new containers are created. """ service = self.create_service('web') next_number = service._next_container_number() service.create_container(number=next_number, quiet=True) for container in service.containers(): self.assertFalse(container.is_running) with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout: service.scale(2) self.assertEqual(len(service.containers()), 2) for container in service.containers(): self.assertTrue(container.is_running) captured_output = mock_stdout.getvalue() self.assertIn('Creating', captured_output) self.assertIn('Starting', captured_output) def test_scale_with_api_returns_errors(self): """ Test that, when scaling, an error returned by the API is handled and the remaining threads continue.
""" service = self.create_service('web') next_number = service._next_container_number() service.create_container(number=next_number, quiet=True) with mock.patch( 'compose.container.Container.create', side_effect=APIError(message="testing", response={}, explanation="Boom")): with mock.patch('sys.stdout', new_callable=StringIO) as mock_stdout: service.scale(3) self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) self.assertIn("ERROR: for 2 Boom", mock_stdout.getvalue()) def test_scale_with_api_returns_unexpected_exception(self): """ Test that when scaling if the API returns an error, that is not of type APIError, that error is re-raised. """ service = self.create_service('web') next_number = service._next_container_number() service.create_container(number=next_number, quiet=True) with mock.patch( 'compose.container.Container.create', side_effect=ValueError("BOOM") ): with self.assertRaises(ValueError): service.scale(3) self.assertEqual(len(service.containers()), 1) self.assertTrue(service.containers()[0].is_running) @mock.patch('compose.service.log') def test_scale_with_desired_number_already_achieved(self, mock_log): """ Test that calling scale with a desired number that is equal to the number of containers already running results in no change. """ service = self.create_service('web') next_number = service._next_container_number() container = service.create_container(number=next_number, quiet=True) container.start() self.assertTrue(container.is_running) self.assertEqual(len(service.containers()), 1) service.scale(1) self.assertEqual(len(service.containers()), 1) container.inspect() self.assertTrue(container.is_running) captured_output = mock_log.info.call_args[0] self.assertIn('Desired container number already achieved', captured_output) @mock.patch('compose.service.log') def test_scale_with_custom_container_name_outputs_warning(self, mock_log): """Test that calling scale on a service that has a custom container name results in warning output. 
""" # Disable this test against earlier versions because it is flaky self.require_api_version('1.21') service = self.create_service('app', container_name='custom-container') self.assertEqual(service.custom_container_name(), 'custom-container') service.scale(3) captured_output = mock_log.warn.call_args[0][0] self.assertEqual(len(service.containers()), 1) self.assertIn( "Remove the custom name to scale the service.", captured_output ) def test_scale_sets_ports(self): service = self.create_service('web', ports=['8000']) service.scale(2) containers = service.containers() self.assertEqual(len(containers), 2) for container in containers: self.assertEqual(list(container.inspect()['HostConfig']['PortBindings'].keys()), ['8000/tcp']) def test_network_mode_none(self): service = self.create_service('web', net=Net('none')) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.NetworkMode'), 'none') def test_network_mode_bridged(self): service = self.create_service('web', net=Net('bridge')) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.NetworkMode'), 'bridge') def test_network_mode_host(self): service = self.create_service('web', net=Net('host')) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.NetworkMode'), 'host') def test_pid_mode_none_defined(self): service = self.create_service('web', pid=None) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.PidMode'), '') def test_pid_mode_host(self): service = self.create_service('web', pid='host') container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.PidMode'), 'host') def test_dns_no_value(self): service = self.create_service('web') container = create_and_start_container(service) self.assertIsNone(container.get('HostConfig.Dns')) def test_dns_list(self): service = self.create_service('web', dns=['8.8.8.8', '9.9.9.9']) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.Dns'), ['8.8.8.8', '9.9.9.9']) def test_restart_always_value(self): service = self.create_service('web', restart={'Name': 'always'}) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.RestartPolicy.Name'), 'always') def test_restart_on_failure_value(self): service = self.create_service('web', restart={ 'Name': 'on-failure', 'MaximumRetryCount': 5 }) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.RestartPolicy.Name'), 'on-failure') self.assertEqual(container.get('HostConfig.RestartPolicy.MaximumRetryCount'), 5) def test_cap_add_list(self): service = self.create_service('web', cap_add=['SYS_ADMIN', 'NET_ADMIN']) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.CapAdd'), ['SYS_ADMIN', 'NET_ADMIN']) def test_cap_drop_list(self): service = self.create_service('web', cap_drop=['SYS_ADMIN', 'NET_ADMIN']) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.CapDrop'), ['SYS_ADMIN', 'NET_ADMIN']) def test_dns_search(self): service = self.create_service('web', dns_search=['dc1.example.com', 'dc2.example.com']) container = create_and_start_container(service) self.assertEqual(container.get('HostConfig.DnsSearch'), ['dc1.example.com', 'dc2.example.com']) def test_working_dir_param(self): service = self.create_service('container', working_dir='/working/dir/sample') container = service.create_container() 
self.assertEqual(container.get('Config.WorkingDir'), '/working/dir/sample') def test_split_env(self): service = self.create_service('web', environment=['NORMAL=F1', 'CONTAINS_EQUALS=F=2', 'TRAILING_EQUALS=']) env = create_and_start_container(service).environment for k, v in {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''}.items(): self.assertEqual(env[k], v) def test_env_from_file_combined_with_env(self): service = self.create_service( 'web', environment=['ONE=1', 'TWO=2', 'THREE=3'], env_file=['tests/fixtures/env/one.env', 'tests/fixtures/env/two.env']) env = create_and_start_container(service).environment for k, v in { 'ONE': '1', 'TWO': '2', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah' }.items(): self.assertEqual(env[k], v) @mock.patch.dict(os.environ) def test_resolve_env(self): os.environ['FILE_DEF'] = 'E1' os.environ['FILE_DEF_EMPTY'] = 'E2' os.environ['ENV_DEF'] = 'E3' service = self.create_service( 'web', environment={ 'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': None, 'NO_DEF': None } ) env = create_and_start_container(service).environment for k, v in { 'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': '' }.items(): self.assertEqual(env[k], v) def test_with_high_enough_api_version_we_get_default_network_mode(self): # TODO: remove this test once minimum docker version is 1.8.x with mock.patch.object(self.client, '_version', '1.20'): service = self.create_service('web') service_config = service._get_container_host_config({}) self.assertEquals(service_config['NetworkMode'], 'default') def test_labels(self): labels_dict = { 'com.example.description': "Accounting webapp", 'com.example.department': "Finance", 'com.example.label-with-empty-value': "", } compose_labels = { LABEL_CONTAINER_NUMBER: '1', LABEL_ONE_OFF: 'False', LABEL_PROJECT: 'composetest', LABEL_SERVICE: 'web', LABEL_VERSION: __version__, } expected = dict(labels_dict, **compose_labels) service = self.create_service('web', labels=labels_dict) labels = create_and_start_container(service).labels.items() for pair in expected.items(): self.assertIn(pair, labels) def test_empty_labels(self): labels_dict = {'foo': '', 'bar': ''} service = self.create_service('web', labels=labels_dict) labels = create_and_start_container(service).labels.items() for name in labels_dict: self.assertIn((name, ''), labels) def test_custom_container_name(self): service = self.create_service('web', container_name='my-web-container') self.assertEqual(service.custom_container_name(), 'my-web-container') container = create_and_start_container(service) self.assertEqual(container.name, 'my-web-container') one_off_container = service.create_container(one_off=True) self.assertNotEqual(one_off_container.name, 'my-web-container') def test_log_driver_invalid(self): service = self.create_service('web', log_driver='xxx') expected_error_msg = "logger: no log driver named 'xxx' is registered" with self.assertRaisesRegexp(APIError, expected_error_msg): create_and_start_container(service) def test_log_driver_empty_default_jsonfile(self): service = self.create_service('web') log_config = create_and_start_container(service).log_config self.assertEqual('json-file', log_config['Type']) self.assertFalse(log_config['Config']) def test_log_driver_none(self): service = self.create_service('web', log_driver='none') log_config = create_and_start_container(service).log_config self.assertEqual('none', log_config['Type']) self.assertFalse(log_config['Config']) def test_devices(self): service = self.create_service('web',
devices=["/dev/random:/dev/mapped-random"]) device_config = create_and_start_container(service).get('HostConfig.Devices') device_dict = { 'PathOnHost': '/dev/random', 'CgroupPermissions': 'rwm', 'PathInContainer': '/dev/mapped-random' } self.assertEqual(1, len(device_config)) self.assertDictEqual(device_dict, device_config[0]) def test_duplicate_containers(self): service = self.create_service('web') options = service._get_container_create_options({}, 1) original = Container.create(service.client, **options) self.assertEqual(set(service.containers(stopped=True)), set([original])) self.assertEqual(set(service.duplicate_containers()), set()) options['name'] = 'temporary_container_name' duplicate = Container.create(service.client, **options) self.assertEqual(set(service.containers(stopped=True)), set([original, duplicate])) self.assertEqual(set(service.duplicate_containers()), set([duplicate])) def converge(service, strategy=ConvergenceStrategy.changed, do_build=True): """Create a converge plan from a strategy and execute the plan.""" plan = service.convergence_plan(strategy) return service.execute_convergence_plan(plan, do_build=do_build, timeout=1) class ConfigHashTest(DockerClientTestCase): def test_no_config_hash_when_one_off(self): web = self.create_service('web') container = web.create_container(one_off=True) self.assertNotIn(LABEL_CONFIG_HASH, container.labels) def test_no_config_hash_when_overriding_options(self): web = self.create_service('web') container = web.create_container(environment={'FOO': '1'}) self.assertNotIn(LABEL_CONFIG_HASH, container.labels) def test_config_hash_with_custom_labels(self): web = self.create_service('web', labels={'foo': '1'}) container = converge(web)[0] self.assertIn(LABEL_CONFIG_HASH, container.labels) self.assertIn('foo', container.labels) def test_config_hash_sticks_around(self): web = self.create_service('web', command=["top"]) container = converge(web)[0] self.assertIn(LABEL_CONFIG_HASH, container.labels) web = self.create_service('web', command=["top", "-d", "1"]) container = converge(web)[0] self.assertIn(LABEL_CONFIG_HASH, container.labels) compose-1.5.2/tests/integration/state_test.py000066400000000000000000000226731263011261000214040ustar00rootroot00000000000000""" Integration tests which cover state convergence (aka smart recreate) performed by `docker-compose up`. 
""" from __future__ import unicode_literals import py from .testcases import DockerClientTestCase from compose.config import config from compose.project import Project from compose.service import ConvergenceStrategy class ProjectTestCase(DockerClientTestCase): def run_up(self, cfg, **kwargs): kwargs.setdefault('timeout', 1) kwargs.setdefault('detached', True) project = self.make_project(cfg) project.up(**kwargs) return set(project.containers(stopped=True)) def make_project(self, cfg): details = config.ConfigDetails( 'working_dir', [config.ConfigFile(None, cfg)]) return Project.from_dicts( name='composetest', client=self.client, service_dicts=config.load(details)) class BasicProjectTest(ProjectTestCase): def setUp(self): super(BasicProjectTest, self).setUp() self.cfg = { 'db': {'image': 'busybox:latest'}, 'web': {'image': 'busybox:latest'}, } def test_no_change(self): old_containers = self.run_up(self.cfg) self.assertEqual(len(old_containers), 2) new_containers = self.run_up(self.cfg) self.assertEqual(len(new_containers), 2) self.assertEqual(old_containers, new_containers) def test_partial_change(self): old_containers = self.run_up(self.cfg) old_db = [c for c in old_containers if c.name_without_project == 'db_1'][0] old_web = [c for c in old_containers if c.name_without_project == 'web_1'][0] self.cfg['web']['command'] = '/bin/true' new_containers = self.run_up(self.cfg) self.assertEqual(len(new_containers), 2) preserved = list(old_containers & new_containers) self.assertEqual(preserved, [old_db]) removed = list(old_containers - new_containers) self.assertEqual(removed, [old_web]) created = list(new_containers - old_containers) self.assertEqual(len(created), 1) self.assertEqual(created[0].name_without_project, 'web_1') self.assertEqual(created[0].get('Config.Cmd'), ['/bin/true']) def test_all_change(self): old_containers = self.run_up(self.cfg) self.assertEqual(len(old_containers), 2) self.cfg['web']['command'] = '/bin/true' self.cfg['db']['command'] = '/bin/true' new_containers = self.run_up(self.cfg) self.assertEqual(len(new_containers), 2) unchanged = old_containers & new_containers self.assertEqual(len(unchanged), 0) new = new_containers - old_containers self.assertEqual(len(new), 2) class ProjectWithDependenciesTest(ProjectTestCase): def setUp(self): super(ProjectWithDependenciesTest, self).setUp() self.cfg = { 'db': { 'image': 'busybox:latest', 'command': 'tail -f /dev/null', }, 'web': { 'image': 'busybox:latest', 'command': 'tail -f /dev/null', 'links': ['db'], }, 'nginx': { 'image': 'busybox:latest', 'command': 'tail -f /dev/null', 'links': ['web'], }, } def test_up(self): containers = self.run_up(self.cfg) self.assertEqual( set(c.name_without_project for c in containers), set(['db_1', 'web_1', 'nginx_1']), ) def test_change_leaf(self): old_containers = self.run_up(self.cfg) self.cfg['nginx']['environment'] = {'NEW_VAR': '1'} new_containers = self.run_up(self.cfg) self.assertEqual( set(c.name_without_project for c in new_containers - old_containers), set(['nginx_1']), ) def test_change_middle(self): old_containers = self.run_up(self.cfg) self.cfg['web']['environment'] = {'NEW_VAR': '1'} new_containers = self.run_up(self.cfg) self.assertEqual( set(c.name_without_project for c in new_containers - old_containers), set(['web_1', 'nginx_1']), ) def test_change_root(self): old_containers = self.run_up(self.cfg) self.cfg['db']['environment'] = {'NEW_VAR': '1'} new_containers = self.run_up(self.cfg) self.assertEqual( set(c.name_without_project for c in new_containers - old_containers), 
set(['db_1', 'web_1', 'nginx_1']), ) def test_change_root_no_recreate(self): old_containers = self.run_up(self.cfg) self.cfg['db']['environment'] = {'NEW_VAR': '1'} new_containers = self.run_up( self.cfg, strategy=ConvergenceStrategy.never) self.assertEqual(new_containers - old_containers, set()) def test_service_removed_while_down(self): next_cfg = { 'web': { 'image': 'busybox:latest', 'command': 'tail -f /dev/null', }, 'nginx': self.cfg['nginx'], } containers = self.run_up(self.cfg) self.assertEqual(len(containers), 3) project = self.make_project(self.cfg) project.stop(timeout=1) containers = self.run_up(next_cfg) self.assertEqual(len(containers), 2) def test_service_recreated_when_dependency_created(self): containers = self.run_up(self.cfg, service_names=['web'], start_deps=False) self.assertEqual(len(containers), 1) containers = self.run_up(self.cfg) self.assertEqual(len(containers), 3) web, = [c for c in containers if c.service == 'web'] nginx, = [c for c in containers if c.service == 'nginx'] self.assertEqual(web.links(), ['composetest_db_1', 'db', 'db_1']) self.assertEqual(nginx.links(), ['composetest_web_1', 'web', 'web_1']) class ServiceStateTest(DockerClientTestCase): """Test cases for Service.convergence_plan.""" def test_trigger_create(self): web = self.create_service('web') self.assertEqual(('create', []), web.convergence_plan()) def test_trigger_noop(self): web = self.create_service('web') container = web.create_container() web.start() web = self.create_service('web') self.assertEqual(('noop', [container]), web.convergence_plan()) def test_trigger_start(self): options = dict(command=["top"]) web = self.create_service('web', **options) web.scale(2) containers = web.containers(stopped=True) containers[0].stop() containers[0].inspect() self.assertEqual([c.is_running for c in containers], [False, True]) self.assertEqual( ('start', containers[0:1]), web.convergence_plan(), ) def test_trigger_recreate_with_config_change(self): web = self.create_service('web', command=["top"]) container = web.create_container() web = self.create_service('web', command=["top", "-d", "1"]) self.assertEqual(('recreate', [container]), web.convergence_plan()) def test_trigger_recreate_with_nonexistent_image_tag(self): web = self.create_service('web', image="busybox:latest") container = web.create_container() web = self.create_service('web', image="nonexistent-image") self.assertEqual(('recreate', [container]), web.convergence_plan()) def test_trigger_recreate_with_image_change(self): repo = 'composetest_myimage' tag = 'latest' image = '{}:{}'.format(repo, tag) image_id = self.client.images(name='busybox')[0]['Id'] self.client.tag(image_id, repository=repo, tag=tag) self.addCleanup(self.client.remove_image, image) web = self.create_service('web', image=image) container = web.create_container() # update the image c = self.client.create_container(image, ['touch', '/hello.txt']) self.client.commit(c, repository=repo, tag=tag) self.client.remove_container(c) web = self.create_service('web', image=image) self.assertEqual(('recreate', [container]), web.convergence_plan()) def test_trigger_recreate_with_build(self): context = py.test.ensuretemp('test_trigger_recreate_with_build') self.addCleanup(context.remove) base_image = "FROM busybox\nLABEL com.docker.compose.test_image=true\n" dockerfile = context.join('Dockerfile') dockerfile.write(base_image) web = self.create_service('web', build=str(context)) container = web.create_container() dockerfile.write(base_image + 'CMD echo hello world\n') web.build() web = 
self.create_service('web', build=str(context)) self.assertEqual(('recreate', [container]), web.convergence_plan()) def test_image_changed_to_build(self): context = py.test.ensuretemp('test_image_changed_to_build') self.addCleanup(context.remove) context.join('Dockerfile').write(""" FROM busybox LABEL com.docker.compose.test_image=true """) web = self.create_service('web', image='busybox') container = web.create_container() web = self.create_service('web', build=str(context)) plan = web.convergence_plan() self.assertEqual(('recreate', [container]), plan) containers = web.execute_convergence_plan(plan) self.assertEqual(len(containers), 1) compose-1.5.2/tests/integration/testcases.py000066400000000000000000000035751263011261000212230ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals from docker.utils import version_lt from pytest import skip from .. import unittest from compose.cli.docker_client import docker_client from compose.config.config import resolve_environment from compose.const import LABEL_PROJECT from compose.progress_stream import stream_output from compose.service import Service def pull_busybox(client): client.pull('busybox:latest', stream=False) class DockerClientTestCase(unittest.TestCase): @classmethod def setUpClass(cls): cls.client = docker_client() def tearDown(self): for c in self.client.containers( all=True, filters={'label': '%s=composetest' % LABEL_PROJECT}): self.client.kill(c['Id']) self.client.remove_container(c['Id']) for i in self.client.images( filters={'label': 'com.docker.compose.test_image'}): self.client.remove_image(i) def create_service(self, name, **kwargs): if 'image' not in kwargs and 'build' not in kwargs: kwargs['image'] = 'busybox:latest' if 'command' not in kwargs: kwargs['command'] = ["top"] kwargs['environment'] = resolve_environment(kwargs) labels = dict(kwargs.setdefault('labels', {})) labels['com.docker.compose.test-name'] = self.id() return Service(name, client=self.client, project='composetest', **kwargs) def check_build(self, *args, **kwargs): kwargs.setdefault('rm', True) build_output = self.client.build(*args, **kwargs) stream_output(build_output, open('/dev/null', 'w')) def require_api_version(self, minimum): api_version = self.client.version()['ApiVersion'] if version_lt(api_version, minimum): skip("API version is too low ({} < {})".format(api_version, minimum)) compose-1.5.2/tests/unit/000077500000000000000000000000001263011261000152755ustar00rootroot00000000000000compose-1.5.2/tests/unit/__init__.py000066400000000000000000000000001263011261000173740ustar00rootroot00000000000000compose-1.5.2/tests/unit/cli/000077500000000000000000000000001263011261000160445ustar00rootroot00000000000000compose-1.5.2/tests/unit/cli/__init__.py000066400000000000000000000000001263011261000201430ustar00rootroot00000000000000compose-1.5.2/tests/unit/cli/command_test.py000066400000000000000000000012301263011261000210670ustar00rootroot00000000000000from __future__ import absolute_import import pytest from requests.exceptions import ConnectionError from compose.cli import errors from compose.cli.command import friendly_error_message from tests import mock from tests import unittest class FriendlyErrorMessageTestCase(unittest.TestCase): def test_dispatch_generic_connection_error(self): with pytest.raises(errors.ConnectionErrorGeneric): with mock.patch( 'compose.cli.command.call_silently', autospec=True, side_effect=[0, 1] ): with friendly_error_message(): raise ConnectionError() 
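# ---------------------------------------------------------------------------
# Illustrative sketch (editorial, not part of the test suite): how the
# convergence API exercised by state_test.py above fits together. This is a
# minimal sketch under assumptions: a reachable Docker daemon via
# docker_client(), a local 'busybox:latest' image, and a hypothetical
# 'example' project with a 'web' service.
#
#     from compose.cli.docker_client import docker_client
#     from compose.service import ConvergenceStrategy, Service
#
#     client = docker_client()
#     web = Service('web', client=client, project='example',
#                   image='busybox:latest', command=['top'])
#     # convergence_plan() picks one of the actions seen in ServiceStateTest:
#     # 'create', 'start', 'recreate' or 'noop'.
#     plan = web.convergence_plan(ConvergenceStrategy.changed)
#     web.execute_convergence_plan(plan)
# ---------------------------------------------------------------------------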
compose-1.5.2/tests/unit/cli/docker_client_test.py000066400000000000000000000012111263011261000222570ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import os from compose.cli import docker_client from tests import mock from tests import unittest class DockerClientTestCase(unittest.TestCase): def test_docker_client_no_home(self): with mock.patch.dict(os.environ): del os.environ['HOME'] docker_client.docker_client() def test_docker_client_with_custom_timeout(self): timeout = 300 with mock.patch('compose.cli.docker_client.HTTP_TIMEOUT', 300): client = docker_client.docker_client() self.assertEqual(client.timeout, int(timeout)) compose-1.5.2/tests/unit/cli/formatter_test.py000066400000000000000000000017571263011261000214660ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import logging from compose.cli import colors from compose.cli.formatter import ConsoleWarningFormatter from tests import unittest MESSAGE = 'this is the message' def makeLogRecord(level): return logging.LogRecord('name', level, 'pathname', 0, MESSAGE, (), None) class ConsoleWarningFormatterTestCase(unittest.TestCase): def setUp(self): self.formatter = ConsoleWarningFormatter() def test_format_warn(self): output = self.formatter.format(makeLogRecord(logging.WARN)) expected = colors.yellow('WARNING') + ': ' assert output == expected + MESSAGE def test_format_error(self): output = self.formatter.format(makeLogRecord(logging.ERROR)) expected = colors.red('ERROR') + ': ' assert output == expected + MESSAGE def test_format_info(self): output = self.formatter.format(makeLogRecord(logging.INFO)) assert output == MESSAGE compose-1.5.2/tests/unit/cli/log_printer_test.py000066400000000000000000000051471263011261000220060ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import pytest import six from compose.cli.log_printer import LogPrinter from compose.cli.log_printer import wait_on_exit from compose.container import Container from tests import mock def build_mock_container(reader): return mock.Mock( spec=Container, name='myapp_web_1', name_without_project='web_1', has_api_logs=True, log_stream=None, attach=reader, wait=mock.Mock(return_value=0), ) @pytest.fixture def output_stream(): output = six.StringIO() output.flush = mock.Mock() return output @pytest.fixture def mock_container(): def reader(*args, **kwargs): yield b"hello\nworld" return build_mock_container(reader) class TestLogPrinter(object): def test_single_container(self, output_stream, mock_container): LogPrinter([mock_container], output=output_stream).run() output = output_stream.getvalue() assert 'hello' in output assert 'world' in output # Call count is 2 log lines + the "container exited" line assert output_stream.flush.call_count == 3 def test_monochrome(self, output_stream, mock_container): LogPrinter([mock_container], output=output_stream, monochrome=True).run() assert '\033[' not in output_stream.getvalue() def test_polychrome(self, output_stream, mock_container): LogPrinter([mock_container], output=output_stream).run() assert '\033[' in output_stream.getvalue() def test_unicode(self, output_stream): glyph = u'\u2022' def reader(*args, **kwargs): yield glyph.encode('utf-8') + b'\n' container = build_mock_container(reader) LogPrinter([container], output=output_stream).run() output = output_stream.getvalue() if six.PY2: output = output.decode('utf-8') assert glyph in output def test_wait_on_exit(self):
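# wait_on_exit() is expected to return a line of the form "<name> exited with code <status>\n"; the assertion below pins that format.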
exit_status = 3 mock_container = mock.Mock( spec=Container, name='cname', wait=mock.Mock(return_value=exit_status)) expected = '{} exited with code {}\n'.format(mock_container.name, exit_status) assert expected == wait_on_exit(mock_container) def test_generator_with_no_logs(self, mock_container, output_stream): mock_container.has_api_logs = False mock_container.log_driver = 'none' LogPrinter([mock_container], output=output_stream).run() output = output_stream.getvalue() assert "WARNING: no logs are available with the 'none' log driver\n" in output compose-1.5.2/tests/unit/cli/main_test.py000066400000000000000000000100551263011261000204020ustar00rootroot00000000000000from __future__ import absolute_import import logging from compose import container from compose.cli.errors import UserError from compose.cli.formatter import ConsoleWarningFormatter from compose.cli.log_printer import LogPrinter from compose.cli.main import attach_to_logs from compose.cli.main import build_log_printer from compose.cli.main import convergence_strategy_from_opts from compose.cli.main import setup_console_handler from compose.project import Project from compose.service import ConvergenceStrategy from tests import mock from tests import unittest def mock_container(service, number): return mock.create_autospec( container.Container, service=service, number=number, name_without_project='{0}_{1}'.format(service, number)) class CLIMainTestCase(unittest.TestCase): def test_build_log_printer(self): containers = [ mock_container('web', 1), mock_container('web', 2), mock_container('db', 1), mock_container('other', 1), mock_container('another', 1), ] service_names = ['web', 'db'] log_printer = build_log_printer(containers, service_names, True) self.assertEqual(log_printer.containers, containers[:3]) def test_build_log_printer_all_services(self): containers = [ mock_container('web', 1), mock_container('db', 1), mock_container('other', 1), ] service_names = [] log_printer = build_log_printer(containers, service_names, True) self.assertEqual(log_printer.containers, containers) def test_attach_to_logs(self): project = mock.create_autospec(Project) log_printer = mock.create_autospec(LogPrinter, containers=[]) service_names = ['web', 'db'] timeout = 12 with mock.patch('compose.cli.main.signal', autospec=True) as mock_signal: attach_to_logs(project, log_printer, service_names, timeout) assert mock_signal.signal.mock_calls == [ mock.call(mock_signal.SIGINT, mock.ANY), mock.call(mock_signal.SIGTERM, mock.ANY), ] log_printer.run.assert_called_once_with() class SetupConsoleHandlerTestCase(unittest.TestCase): def setUp(self): self.stream = mock.Mock() self.stream.isatty.return_value = True self.handler = logging.StreamHandler(stream=self.stream) def test_with_tty_verbose(self): setup_console_handler(self.handler, True) assert type(self.handler.formatter) == ConsoleWarningFormatter assert '%(name)s' in self.handler.formatter._fmt assert '%(funcName)s' in self.handler.formatter._fmt def test_with_tty_not_verbose(self): setup_console_handler(self.handler, False) assert type(self.handler.formatter) == ConsoleWarningFormatter assert '%(name)s' not in self.handler.formatter._fmt assert '%(funcName)s' not in self.handler.formatter._fmt def test_with_not_a_tty(self): self.stream.isatty.return_value = False setup_console_handler(self.handler, False) assert type(self.handler.formatter) == logging.Formatter class ConvergeStrategyFromOptsTestCase(unittest.TestCase): def test_invalid_opts(self): options = {'--force-recreate': True, '--no-recreate': 
True} with self.assertRaises(UserError): convergence_strategy_from_opts(options) def test_always(self): options = {'--force-recreate': True, '--no-recreate': False} self.assertEqual( convergence_strategy_from_opts(options), ConvergenceStrategy.always ) def test_never(self): options = {'--force-recreate': False, '--no-recreate': True} self.assertEqual( convergence_strategy_from_opts(options), ConvergenceStrategy.never ) def test_changed(self): options = {'--force-recreate': False, '--no-recreate': False} self.assertEqual( convergence_strategy_from_opts(options), ConvergenceStrategy.changed ) compose-1.5.2/tests/unit/cli/verbose_proxy_test.py000066400000000000000000000017631263011261000223720ustar00rootroot00000000000000from __future__ import absolute_import from __future__ import unicode_literals import six from compose.cli import verbose_proxy from tests import unittest class VerboseProxyTestCase(unittest.TestCase): def test_format_call(self): prefix = '' if six.PY3 else 'u' expected = "(%(p)s'arg1', True, key=%(p)s'value')" % dict(p=prefix) actual = verbose_proxy.format_call( ("arg1", True), {'key': 'value'}) self.assertEqual(expected, actual) def test_format_return_sequence(self): expected = "(list with 10 items)" actual = verbose_proxy.format_return(list(range(10)), 2) self.assertEqual(expected, actual) def test_format_return(self): expected = repr({'Id': 'ok'}) actual = verbose_proxy.format_return({'Id': 'ok'}, 2) self.assertEqual(expected, actual) def test_format_return_no_result(self): actual = verbose_proxy.format_return(None, 2) self.assertEqual(None, actual) compose-1.5.2/tests/unit/cli_test.py000066400000000000000000000157021263011261000174620ustar00rootroot00000000000000# encoding: utf-8 from __future__ import absolute_import from __future__ import unicode_literals import os import docker import py import pytest from .. import mock from .. 
import unittest from compose.cli.command import get_project from compose.cli.command import get_project_name from compose.cli.docopt_command import NoSuchCommand from compose.cli.errors import UserError from compose.cli.main import TopLevelCommand from compose.const import IS_WINDOWS_PLATFORM from compose.service import Service class CLITestCase(unittest.TestCase): def test_default_project_name(self): test_dir = py._path.local.LocalPath('tests/fixtures/simple-composefile') with test_dir.as_cwd(): project_name = get_project_name('.') self.assertEquals('simplecomposefile', project_name) def test_project_name_with_explicit_base_dir(self): base_dir = 'tests/fixtures/simple-composefile' project_name = get_project_name(base_dir) self.assertEquals('simplecomposefile', project_name) def test_project_name_with_explicit_uppercase_base_dir(self): base_dir = 'tests/fixtures/UpperCaseDir' project_name = get_project_name(base_dir) self.assertEquals('uppercasedir', project_name) def test_project_name_with_explicit_project_name(self): name = 'explicit-project-name' project_name = get_project_name(None, project_name=name) self.assertEquals('explicitprojectname', project_name) def test_project_name_from_environment_old_var(self): name = 'namefromenv' with mock.patch.dict(os.environ): os.environ['FIG_PROJECT_NAME'] = name project_name = get_project_name(None) self.assertEquals(project_name, name) def test_project_name_from_environment_new_var(self): name = 'namefromenv' with mock.patch.dict(os.environ): os.environ['COMPOSE_PROJECT_NAME'] = name project_name = get_project_name(None) self.assertEquals(project_name, name) def test_get_project(self): base_dir = 'tests/fixtures/longer-filename-composefile' project = get_project(base_dir) self.assertEqual(project.name, 'longerfilenamecomposefile') self.assertTrue(project.client) self.assertTrue(project.services) def test_help(self): command = TopLevelCommand() with self.assertRaises(SystemExit): command.dispatch(['-h'], None) def test_command_help(self): with self.assertRaises(SystemExit) as ctx: TopLevelCommand().dispatch(['help', 'up'], None) self.assertIn('Usage: up', str(ctx.exception)) def test_command_help_dashes(self): with self.assertRaises(SystemExit) as ctx: TopLevelCommand().dispatch(['help', 'migrate-to-labels'], None) self.assertIn('Usage: migrate-to-labels', str(ctx.exception)) def test_command_help_nonexistent(self): with self.assertRaises(NoSuchCommand): TopLevelCommand().dispatch(['help', 'nonexistent'], None) @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason="requires dockerpty") @mock.patch('compose.cli.main.dockerpty', autospec=True) def test_run_with_environment_merged_with_options_list(self, mock_dockerpty): command = TopLevelCommand() mock_client = mock.create_autospec(docker.Client) mock_project = mock.Mock(client=mock_client) mock_project.get_service.return_value = Service( 'service', client=mock_client, environment=['FOO=ONE', 'BAR=TWO'], image='someimage') command.run(mock_project, { 'SERVICE': 'service', 'COMMAND': None, '-e': ['BAR=NEW', 'OTHER=bär'.encode('utf-8')], '--user': None, '--no-deps': None, '--allow-insecure-ssl': None, '-d': True, '-T': None, '--entrypoint': None, '--service-ports': None, '--publish': [], '--rm': None, '--name': None, }) _, _, call_kwargs = mock_client.create_container.mock_calls[0] self.assertEqual( call_kwargs['environment'], {'FOO': 'ONE', 'BAR': 'NEW', 'OTHER': u'bär'}) def test_run_service_with_restart_always(self): command = TopLevelCommand() mock_client = mock.create_autospec(docker.Client) mock_project 
= mock.Mock(client=mock_client) mock_project.get_service.return_value = Service( 'service', client=mock_client, restart={'Name': 'always', 'MaximumRetryCount': 0}, image='someimage') command.run(mock_project, { 'SERVICE': 'service', 'COMMAND': None, '-e': [], '--user': None, '--no-deps': None, '--allow-insecure-ssl': None, '-d': True, '-T': None, '--entrypoint': None, '--service-ports': None, '--publish': [], '--rm': None, '--name': None, }) self.assertEquals( mock_client.create_host_config.call_args[1]['restart_policy']['Name'], 'always' ) command = TopLevelCommand() mock_client = mock.create_autospec(docker.Client) mock_project = mock.Mock(client=mock_client) mock_project.get_service.return_value = Service( 'service', client=mock_client, restart='always', image='someimage') command.run(mock_project, { 'SERVICE': 'service', 'COMMAND': None, '-e': [], '--user': None, '--no-deps': None, '--allow-insecure-ssl': None, '-d': True, '-T': None, '--entrypoint': None, '--service-ports': None, '--publish': [], '--rm': True, '--name': None, }) self.assertFalse( mock_client.create_host_config.call_args[1].get('restart_policy') ) def test_command_manual_and_service_ports_together(self): command = TopLevelCommand() mock_client = mock.create_autospec(docker.Client) mock_project = mock.Mock(client=mock_client) mock_project.get_service.return_value = Service( 'service', client=mock_client, restart='always', image='someimage', ) with self.assertRaises(UserError): command.run(mock_project, { 'SERVICE': 'service', 'COMMAND': None, '-e': [], '--user': None, '--no-deps': None, '--allow-insecure-ssl': None, '-d': True, '-T': None, '--entrypoint': None, '--service-ports': True, '--publish': ['80:80'], '--rm': None, '--name': None, }) compose-1.5.2/tests/unit/config/000077500000000000000000000000001263011261000165420ustar00rootroot00000000000000compose-1.5.2/tests/unit/config/__init__.py000066400000000000000000000000001263011261000206410ustar00rootroot00000000000000compose-1.5.2/tests/unit/config/config_test.py000066400000000000000000001635741263011261000214360ustar00rootroot00000000000000# encoding: utf-8 from __future__ import print_function import os import shutil import tempfile from operator import itemgetter import py import pytest from compose.config import config from compose.config.config import resolve_environment from compose.config.errors import ConfigurationError from compose.config.types import VolumeSpec from compose.const import IS_WINDOWS_PLATFORM from tests import mock from tests import unittest def make_service_dict(name, service_dict, working_dir, filename=None): """ Test helper function to construct a ServiceExtendsResolver """ resolver = config.ServiceExtendsResolver(config.ServiceConfig( working_dir=working_dir, filename=filename, name=name, config=service_dict)) return config.process_service(resolver.run()) def service_sort(services): return sorted(services, key=itemgetter('name')) def build_config_details(contents, working_dir='working_dir', filename='filename.yml'): return config.ConfigDetails( working_dir, [config.ConfigFile(filename, contents)]) class ConfigTest(unittest.TestCase): def test_load(self): service_dicts = config.load( build_config_details( { 'foo': {'image': 'busybox'}, 'bar': {'image': 'busybox', 'environment': ['FOO=1']}, }, 'tests/fixtures/extends', 'common.yml' ) ) self.assertEqual( service_sort(service_dicts), service_sort([ { 'name': 'bar', 'image': 'busybox', 'environment': {'FOO': '1'}, }, { 'name': 'foo', 'image': 'busybox', } ]) ) def
    def test_load_throws_error_when_not_dict(self):
        with self.assertRaises(ConfigurationError):
            config.load(
                build_config_details(
                    {'web': 'busybox:latest'},
                    'working_dir',
                    'filename.yml'
                )
            )

    def test_load_config_invalid_service_names(self):
        for invalid_name in ['?not?allowed', ' ', '', '!', '/', '\xe2']:
            with pytest.raises(ConfigurationError) as exc:
                config.load(build_config_details(
                    {invalid_name: {'image': 'busybox'}},
                    'working_dir',
                    'filename.yml'))
            assert 'Invalid service name \'%s\'' % invalid_name in exc.exconly()

    def test_load_with_invalid_field_name(self):
        config_details = build_config_details(
            {'web': {'image': 'busybox', 'name': 'bogus'}},
            'working_dir',
            'filename.yml')
        with pytest.raises(ConfigurationError) as exc:
            config.load(config_details)
        error_msg = "Unsupported config option for 'web' service: 'name'"
        assert error_msg in exc.exconly()
        assert "Validation failed in file 'filename.yml'" in exc.exconly()

    def test_load_invalid_service_definition(self):
        config_details = build_config_details(
            {'web': 'wrong'},
            'working_dir',
            'filename.yml')
        with pytest.raises(ConfigurationError) as exc:
            config.load(config_details)
        error_msg = "service 'web' doesn't have any configuration options"
        assert error_msg in exc.exconly()

    def test_config_integer_service_name_raise_validation_error(self):
        expected_error_msg = ("In file 'filename.yml' service name: 1 needs to "
                              "be a string, eg '1'")
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {1: {'image': 'busybox'}},
                    'working_dir',
                    'filename.yml'
                )
            )

    @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
    def test_load_with_multiple_files(self):
        base_file = config.ConfigFile(
            'base.yaml',
            {
                'web': {
                    'image': 'example/web',
                    'links': ['db'],
                },
                'db': {
                    'image': 'example/db',
                },
            })
        override_file = config.ConfigFile(
            'override.yaml',
            {
                'web': {
                    'build': '/',
                    'volumes': ['/home/user/project:/code'],
                },
            })
        details = config.ConfigDetails('.', [base_file, override_file])

        service_dicts = config.load(details)
        expected = [
            {
                'name': 'web',
                'build': '/',
                'links': ['db'],
                'volumes': [VolumeSpec.parse('/home/user/project:/code')],
            },
            {
                'name': 'db',
                'image': 'example/db',
            },
        ]
        self.assertEqual(service_sort(service_dicts), service_sort(expected))

    def test_load_with_multiple_files_and_empty_override(self):
        base_file = config.ConfigFile(
            'base.yml',
            {'web': {'image': 'example/web'}})
        override_file = config.ConfigFile('override.yml', None)
        details = config.ConfigDetails('.', [base_file, override_file])

        with pytest.raises(ConfigurationError) as exc:
            config.load(details)
        error_msg = "Top level object in 'override.yml' needs to be an object"
        assert error_msg in exc.exconly()

    def test_load_with_multiple_files_and_empty_base(self):
        base_file = config.ConfigFile('base.yml', None)
        override_file = config.ConfigFile(
            'override.yml',
            {'web': {'image': 'example/web'}})
        details = config.ConfigDetails('.', [base_file, override_file])

        with pytest.raises(ConfigurationError) as exc:
            config.load(details)
        assert "Top level object in 'base.yml' needs to be an object" in exc.exconly()

    def test_load_with_multiple_files_and_extends_in_override_file(self):
        base_file = config.ConfigFile(
            'base.yaml',
            {
                'web': {'image': 'example/web'},
            })
        override_file = config.ConfigFile(
            'override.yaml',
            {
                'web': {
                    'extends': {
                        'file': 'common.yml',
                        'service': 'base',
                    },
                    'volumes': ['/home/user/project:/code'],
                },
            })
        details = config.ConfigDetails('.', [base_file, override_file])

        tmpdir = py.test.ensuretemp('config_test')
        self.addCleanup(tmpdir.remove)
        tmpdir.join('common.yml').write("""
base:
  labels: ['label=one']
""")
        with tmpdir.as_cwd():
            service_dicts = config.load(details)

        expected = [
            {
                'name': 'web',
                'image': 'example/web',
                'volumes': [VolumeSpec.parse('/home/user/project:/code')],
                'labels': {'label': 'one'},
            },
        ]
        self.assertEqual(service_sort(service_dicts), service_sort(expected))

    def test_load_with_multiple_files_and_invalid_override(self):
        base_file = config.ConfigFile(
            'base.yaml',
            {'web': {'image': 'example/web'}})
        override_file = config.ConfigFile(
            'override.yaml',
            {'bogus': 'thing'})
        details = config.ConfigDetails('.', [base_file, override_file])

        with pytest.raises(ConfigurationError) as exc:
            config.load(details)
        assert "service 'bogus' doesn't have any configuration" in exc.exconly()
        assert "In file 'override.yaml'" in exc.exconly()

    def test_load_sorts_in_dependency_order(self):
        config_details = build_config_details({
            'web': {
                'image': 'busybox:latest',
                'links': ['db'],
            },
            'db': {
                'image': 'busybox:latest',
                'volumes_from': ['volume:ro']
            },
            'volume': {
                'image': 'busybox:latest',
                'volumes': ['/tmp'],
            }
        })
        services = config.load(config_details)

        assert services[0]['name'] == 'volume'
        assert services[1]['name'] == 'db'
        assert services[2]['name'] == 'web'

    def test_config_valid_service_names(self):
        for valid_name in ['_', '-', '.__.', '_what-up.', 'what_.up----', 'whatup']:
            services = config.load(
                build_config_details(
                    {valid_name: {'image': 'busybox'}},
                    'tests/fixtures/extends',
                    'common.yml'))
            assert services[0]['name'] == valid_name

    def test_config_hint(self):
        expected_error_msg = "(did you mean 'privileged'?)"
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {
                        'foo': {'image': 'busybox', 'privilige': 'something'},
                    },
                    'tests/fixtures/extends',
                    'filename.yml'
                )
            )
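    # Editorial sketch (hedged, not from the original file): when several
    # Compose files are merged, scalar options from the later file win, while
    # path-mapped lists such as `volumes` merge by container path, roughly:
    #
    #   base = {'image': 'example/web', 'volumes': ['/foo:/code']}
    #   override = {'volumes': ['/bar:/code']}
    #   # config.merge_service_dicts(base, override)
    #   # => {'image': 'example/web', 'volumes': ['/bar:/code']}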
    def test_invalid_config_build_and_image_specified(self):
        expected_error_msg = "Service 'foo' has both an image and build path specified."
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {
                        'foo': {'image': 'busybox', 'build': '.'},
                    },
                    'tests/fixtures/extends',
                    'filename.yml'
                )
            )

    def test_invalid_config_type_should_be_an_array(self):
        expected_error_msg = "Service 'foo' configuration key 'links' contains an invalid type, it should be an array"
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {
                        'foo': {'image': 'busybox', 'links': 'an_link'},
                    },
                    'tests/fixtures/extends',
                    'filename.yml'
                )
            )

    def test_invalid_config_not_a_dictionary(self):
        expected_error_msg = ("Top level object in 'filename.yml' needs to be "
                              "an object.")
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    ['foo', 'lol'],
                    'tests/fixtures/extends',
                    'filename.yml'
                )
            )

    def test_invalid_config_not_unique_items(self):
        expected_error_msg = "has non-unique elements"
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {
                        'web': {'build': '.', 'devices': ['/dev/foo:/dev/foo', '/dev/foo:/dev/foo']}
                    },
                    'tests/fixtures/extends',
                    'filename.yml'
                )
            )

    def test_invalid_list_of_strings_format(self):
        expected_error_msg = "Service 'web' configuration key 'command' contains 1"
        expected_error_msg += ", which is an invalid type, it should be a string"
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {
                        'web': {'build': '.', 'command': [1]}
                    },
                    'tests/fixtures/extends',
                    'filename.yml'
                )
            )

    def test_config_image_and_dockerfile_raise_validation_error(self):
        expected_error_msg = "Service 'web' has both an image and alternate Dockerfile."
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {'web': {'image': 'busybox', 'dockerfile': 'Dockerfile.alt'}},
                    'working_dir',
                    'filename.yml'
                )
            )

    def test_config_extra_hosts_string_raises_validation_error(self):
        expected_error_msg = "Service 'web' configuration key 'extra_hosts' contains an invalid type"
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {'web': {
                        'image': 'busybox',
                        'extra_hosts': 'somehost:162.242.195.82'
                    }},
                    'working_dir',
                    'filename.yml'
                )
            )

    def test_config_extra_hosts_list_of_dicts_validation_error(self):
        expected_error_msg = "key 'extra_hosts' contains {'somehost': '162.242.195.82'}, which is an invalid type, it should be a string"
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {'web': {
                        'image': 'busybox',
                        'extra_hosts': [
                            {'somehost': '162.242.195.82'},
                            {'otherhost': '50.31.209.229'}
                        ]
                    }},
                    'working_dir',
                    'filename.yml'
                )
            )

    def test_config_ulimits_invalid_keys_validation_error(self):
        expected = ("Service 'web' configuration key 'ulimits' 'nofile' contains "
                    "unsupported option: 'not_soft_or_hard'")
        with pytest.raises(ConfigurationError) as exc:
            config.load(build_config_details(
                {
                    'web': {
                        'image': 'busybox',
                        'ulimits': {
                            'nofile': {
                                "not_soft_or_hard": 100,
                                "soft": 10000,
                                "hard": 20000,
                            }
                        }
                    }
                },
                'working_dir',
                'filename.yml'))
        assert expected in exc.exconly()

    def test_config_ulimits_required_keys_validation_error(self):
        with pytest.raises(ConfigurationError) as exc:
            config.load(build_config_details(
                {
                    'web': {
                        'image': 'busybox',
                        'ulimits': {'nofile': {"soft": 10000}}
                    }
                },
                'working_dir',
                'filename.yml'))
        assert "Service 'web' configuration key 'ulimits' 'nofile'" in exc.exconly()
        assert "'hard' is a required property" in exc.exconly()
required property" in exc.exconly() def test_config_ulimits_soft_greater_than_hard_error(self): expected = "cannot contain a 'soft' value higher than 'hard' value" with pytest.raises(ConfigurationError) as exc: config.load(build_config_details( { 'web': { 'image': 'busybox', 'ulimits': { 'nofile': {"soft": 10000, "hard": 1000} } } }, 'working_dir', 'filename.yml')) assert expected in exc.exconly() def test_valid_config_which_allows_two_type_definitions(self): expose_values = [["8000"], [8000]] for expose in expose_values: service = config.load( build_config_details( {'web': { 'image': 'busybox', 'expose': expose }}, 'working_dir', 'filename.yml' ) ) self.assertEqual(service[0]['expose'], expose) def test_valid_config_oneof_string_or_list(self): entrypoint_values = [["sh"], "sh"] for entrypoint in entrypoint_values: service = config.load( build_config_details( {'web': { 'image': 'busybox', 'entrypoint': entrypoint }}, 'working_dir', 'filename.yml' ) ) self.assertEqual(service[0]['entrypoint'], entrypoint) @mock.patch('compose.config.validation.log') def test_logs_warning_for_boolean_in_environment(self, mock_logging): expected_warning_msg = "There is a boolean value in the 'environment' key." config.load( build_config_details( {'web': { 'image': 'busybox', 'environment': {'SHOW_STUFF': True} }}, 'working_dir', 'filename.yml' ) ) self.assertTrue(mock_logging.warn.called) self.assertTrue(expected_warning_msg in mock_logging.warn.call_args[0][0]) def test_config_valid_environment_dict_key_contains_dashes(self): services = config.load( build_config_details( {'web': { 'image': 'busybox', 'environment': {'SPRING_JPA_HIBERNATE_DDL-AUTO': 'none'} }}, 'working_dir', 'filename.yml' ) ) self.assertEqual(services[0]['environment']['SPRING_JPA_HIBERNATE_DDL-AUTO'], 'none') def test_load_yaml_with_yaml_error(self): tmpdir = py.test.ensuretemp('invalid_yaml_test') self.addCleanup(tmpdir.remove) invalid_yaml_file = tmpdir.join('docker-compose.yml') invalid_yaml_file.write(""" web: this is bogus: ok: what """) with pytest.raises(ConfigurationError) as exc: config.load_yaml(str(invalid_yaml_file)) assert 'line 3, column 32' in exc.exconly() def test_validate_extra_hosts_invalid(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details({ 'web': { 'image': 'alpine', 'extra_hosts': "www.example.com: 192.168.0.17", } })) assert "'extra_hosts' contains an invalid type" in exc.exconly() def test_validate_extra_hosts_invalid_list(self): with pytest.raises(ConfigurationError) as exc: config.load(build_config_details({ 'web': { 'image': 'alpine', 'extra_hosts': [ {'www.example.com': '192.168.0.17'}, {'api.example.com': '192.168.0.18'} ], } })) assert "which is an invalid type" in exc.exconly() class PortsTest(unittest.TestCase): INVALID_PORTS_TYPES = [ {"1": "8000"}, False, "8000", 8000, ] NON_UNIQUE_SINGLE_PORTS = [ ["8000", "8000"], ] INVALID_PORT_MAPPINGS = [ ["8000-8001:8000"], ] VALID_SINGLE_PORTS = [ ["8000"], ["8000/tcp"], ["8000", "9000"], [8000], [8000, 9000], ] VALID_PORT_MAPPINGS = [ ["8000:8050"], ["49153-49154:3002-3003"], ] def test_config_invalid_ports_type_validation(self): for invalid_ports in self.INVALID_PORTS_TYPES: with pytest.raises(ConfigurationError) as exc: self.check_config({'ports': invalid_ports}) assert "contains an invalid type" in exc.value.msg def test_config_non_unique_ports_validation(self): for invalid_ports in self.NON_UNIQUE_SINGLE_PORTS: with pytest.raises(ConfigurationError) as exc: self.check_config({'ports': invalid_ports}) assert "non-unique" in 
    def test_config_invalid_ports_format_validation(self):
        for invalid_ports in self.INVALID_PORT_MAPPINGS:
            with pytest.raises(ConfigurationError) as exc:
                self.check_config({'ports': invalid_ports})
            assert "Port ranges don't match in length" in exc.value.msg

    def test_config_valid_ports_format_validation(self):
        for valid_ports in self.VALID_SINGLE_PORTS + self.VALID_PORT_MAPPINGS:
            self.check_config({'ports': valid_ports})

    def test_config_invalid_expose_type_validation(self):
        for invalid_expose in self.INVALID_PORTS_TYPES:
            with pytest.raises(ConfigurationError) as exc:
                self.check_config({'expose': invalid_expose})
            assert "contains an invalid type" in exc.value.msg

    def test_config_non_unique_expose_validation(self):
        for invalid_expose in self.NON_UNIQUE_SINGLE_PORTS:
            with pytest.raises(ConfigurationError) as exc:
                self.check_config({'expose': invalid_expose})
            assert "non-unique" in exc.value.msg

    def test_config_invalid_expose_format_validation(self):
        # Valid port mappings ARE NOT valid 'expose' entries
        for invalid_expose in self.INVALID_PORT_MAPPINGS + self.VALID_PORT_MAPPINGS:
            with pytest.raises(ConfigurationError) as exc:
                self.check_config({'expose': invalid_expose})
            assert "should be of the format" in exc.value.msg

    def test_config_valid_expose_format_validation(self):
        # Valid single ports ARE valid 'expose' entries
        for valid_expose in self.VALID_SINGLE_PORTS:
            self.check_config({'expose': valid_expose})

    def check_config(self, cfg):
        config.load(
            build_config_details(
                {'web': dict(image='busybox', **cfg)},
                'working_dir',
                'filename.yml'
            )
        )


class InterpolationTest(unittest.TestCase):
    @mock.patch.dict(os.environ)
    def test_config_file_with_environment_variable(self):
        os.environ.update(
            IMAGE="busybox",
            HOST_PORT="80",
            LABEL_VALUE="myvalue",
        )

        service_dicts = config.load(
            config.find('tests/fixtures/environment-interpolation', None),
        )

        self.assertEqual(service_dicts, [
            {
                'name': 'web',
                'image': 'busybox',
                'ports': ['80:8000'],
                'labels': {'mylabel': 'myvalue'},
                'hostname': 'host-',
                'command': '${ESCAPED}',
            }
        ])

    @mock.patch.dict(os.environ)
    def test_unset_variable_produces_warning(self):
        os.environ.pop('FOO', None)
        os.environ.pop('BAR', None)
        config_details = build_config_details(
            {
                'web': {
                    'image': '${FOO}',
                    'command': '${BAR}',
                    'container_name': '${BAR}',
                },
            },
            '.',
            None,
        )

        with mock.patch('compose.config.interpolation.log') as log:
            config.load(config_details)

            self.assertEqual(2, log.warn.call_count)
            warnings = sorted(args[0][0] for args in log.warn.call_args_list)
            self.assertIn('BAR', warnings[0])
            self.assertIn('FOO', warnings[1])

    @mock.patch.dict(os.environ)
    def test_invalid_interpolation(self):
        with self.assertRaises(config.ConfigurationError) as cm:
            config.load(
                build_config_details(
                    {'web': {'image': '${'}},
                    'working_dir',
                    'filename.yml'
                )
            )

        self.assertIn('Invalid', cm.exception.msg)
        self.assertIn('for "image" option', cm.exception.msg)
        self.assertIn('in service "web"', cm.exception.msg)
        self.assertIn('"${"', cm.exception.msg)

    def test_empty_environment_key_allowed(self):
        service_dict = config.load(
            build_config_details(
                {
                    'web': {
                        'build': '.',
                        'environment': {
                            'POSTGRES_PASSWORD': ''
                        },
                    },
                },
                '.',
                None,
            )
        )[0]
        self.assertEquals(service_dict['environment']['POSTGRES_PASSWORD'], '')


class VolumeConfigTest(unittest.TestCase):
    def test_no_binding(self):
        d = make_service_dict('foo', {'build': '.', 'volumes': ['/data']}, working_dir='.')
        self.assertEqual(d['volumes'], ['/data'])
    @mock.patch.dict(os.environ)
    def test_volume_binding_with_environment_variable(self):
        os.environ['VOLUME_PATH'] = '/host/path'
        d = config.load(build_config_details(
            {'foo': {'build': '.', 'volumes': ['${VOLUME_PATH}:/container/path']}},
            '.',
        ))[0]
        self.assertEqual(d['volumes'], [VolumeSpec.parse('/host/path:/container/path')])

    @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix paths')
    @mock.patch.dict(os.environ)
    def test_volume_binding_with_home(self):
        os.environ['HOME'] = '/home/user'
        d = make_service_dict('foo', {'build': '.', 'volumes': ['~:/container/path']}, working_dir='.')
        self.assertEqual(d['volumes'], ['/home/user:/container/path'])

    def test_name_does_not_expand(self):
        d = make_service_dict('foo', {'build': '.', 'volumes': ['mydatavolume:/data']}, working_dir='.')
        self.assertEqual(d['volumes'], ['mydatavolume:/data'])

    def test_absolute_posix_path_does_not_expand(self):
        d = make_service_dict('foo', {'build': '.', 'volumes': ['/var/lib/data:/data']}, working_dir='.')
        self.assertEqual(d['volumes'], ['/var/lib/data:/data'])

    def test_absolute_windows_path_does_not_expand(self):
        d = make_service_dict('foo', {'build': '.', 'volumes': ['c:\\data:/data']}, working_dir='.')
        self.assertEqual(d['volumes'], ['c:\\data:/data'])

    @pytest.mark.skipif(IS_WINDOWS_PLATFORM, reason='posix paths')
    def test_relative_path_does_expand_posix(self):
        d = make_service_dict('foo', {'build': '.', 'volumes': ['./data:/data']}, working_dir='/home/me/myproject')
        self.assertEqual(d['volumes'], ['/home/me/myproject/data:/data'])

        d = make_service_dict('foo', {'build': '.', 'volumes': ['.:/data']}, working_dir='/home/me/myproject')
        self.assertEqual(d['volumes'], ['/home/me/myproject:/data'])

        d = make_service_dict('foo', {'build': '.', 'volumes': ['../otherproject:/data']}, working_dir='/home/me/myproject')
        self.assertEqual(d['volumes'], ['/home/me/otherproject:/data'])

    @pytest.mark.skipif(not IS_WINDOWS_PLATFORM, reason='windows paths')
    def test_relative_path_does_expand_windows(self):
        d = make_service_dict('foo', {'build': '.', 'volumes': ['./data:/data']}, working_dir='c:\\Users\\me\\myproject')
        self.assertEqual(d['volumes'], ['c:\\Users\\me\\myproject\\data:/data'])

        d = make_service_dict('foo', {'build': '.', 'volumes': ['.:/data']}, working_dir='c:\\Users\\me\\myproject')
        self.assertEqual(d['volumes'], ['c:\\Users\\me\\myproject:/data'])

        d = make_service_dict('foo', {'build': '.', 'volumes': ['../otherproject:/data']}, working_dir='c:\\Users\\me\\myproject')
        self.assertEqual(d['volumes'], ['c:\\Users\\me\\otherproject:/data'])
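    # Editorial summary of the cases above (hedged): relative host paths are
    # expanded against working_dir, absolute POSIX or Windows paths are kept
    # as-is, '~' expands to the user's home unless a volume_driver is set
    # (see the next test), and bare names like 'mydatavolume' pass straight
    # through to Docker as named volumes.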
    @mock.patch.dict(os.environ)
    def test_home_directory_with_driver_does_not_expand(self):
        os.environ['NAME'] = 'surprise!'
        d = make_service_dict('foo', {
            'build': '.',
            'volumes': ['~:/data'],
            'volume_driver': 'foodriver',
        }, working_dir='.')
        self.assertEqual(d['volumes'], ['~:/data'])

    def test_volume_path_with_non_ascii_directory(self):
        volume = u'/Füü/data:/data'
        container_path = config.resolve_volume_path(".", volume)
        self.assertEqual(container_path, volume)


class MergePathMappingTest(object):
    def config_name(self):
        return ""

    def test_empty(self):
        service_dict = config.merge_service_dicts({}, {})
        self.assertNotIn(self.config_name(), service_dict)

    def test_no_override(self):
        service_dict = config.merge_service_dicts(
            {self.config_name(): ['/foo:/code', '/data']},
            {},
        )
        self.assertEqual(set(service_dict[self.config_name()]), set(['/foo:/code', '/data']))

    def test_no_base(self):
        service_dict = config.merge_service_dicts(
            {},
            {self.config_name(): ['/bar:/code']},
        )
        self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code']))

    def test_override_explicit_path(self):
        service_dict = config.merge_service_dicts(
            {self.config_name(): ['/foo:/code', '/data']},
            {self.config_name(): ['/bar:/code']},
        )
        self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code', '/data']))

    def test_add_explicit_path(self):
        service_dict = config.merge_service_dicts(
            {self.config_name(): ['/foo:/code', '/data']},
            {self.config_name(): ['/bar:/code', '/quux:/data']},
        )
        self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code', '/quux:/data']))

    def test_remove_explicit_path(self):
        service_dict = config.merge_service_dicts(
            {self.config_name(): ['/foo:/code', '/quux:/data']},
            {self.config_name(): ['/bar:/code', '/data']},
        )
        self.assertEqual(set(service_dict[self.config_name()]), set(['/bar:/code', '/data']))


class MergeVolumesTest(unittest.TestCase, MergePathMappingTest):
    def config_name(self):
        return 'volumes'


class MergeDevicesTest(unittest.TestCase, MergePathMappingTest):
    def config_name(self):
        return 'devices'


class BuildOrImageMergeTest(unittest.TestCase):
    def test_merge_build_or_image_no_override(self):
        self.assertEqual(
            config.merge_service_dicts({'build': '.'}, {}),
            {'build': '.'},
        )

        self.assertEqual(
            config.merge_service_dicts({'image': 'redis'}, {}),
            {'image': 'redis'},
        )

    def test_merge_build_or_image_override_with_same(self):
        self.assertEqual(
            config.merge_service_dicts({'build': '.'}, {'build': './web'}),
            {'build': './web'},
        )

        self.assertEqual(
            config.merge_service_dicts({'image': 'redis'}, {'image': 'postgres'}),
            {'image': 'postgres'},
        )

    def test_merge_build_or_image_override_with_other(self):
        self.assertEqual(
            config.merge_service_dicts({'build': '.'}, {'image': 'redis'}),
            {'image': 'redis'}
        )

        self.assertEqual(
            config.merge_service_dicts({'image': 'redis'}, {'build': '.'}),
            {'build': '.'}
        )


class MergeListsTest(unittest.TestCase):
    def test_empty(self):
        service_dict = config.merge_service_dicts({}, {})
        self.assertNotIn('ports', service_dict)

    def test_no_override(self):
        service_dict = config.merge_service_dicts(
            {'ports': ['10:8000', '9000']},
            {},
        )
        self.assertEqual(set(service_dict['ports']), set(['10:8000', '9000']))

    def test_no_base(self):
        service_dict = config.merge_service_dicts(
            {},
            {'ports': ['10:8000', '9000']},
        )
        self.assertEqual(set(service_dict['ports']), set(['10:8000', '9000']))

    def test_add_item(self):
        service_dict = config.merge_service_dicts(
            {'ports': ['10:8000', '9000']},
            {'ports': ['20:8000']},
        )
        self.assertEqual(set(service_dict['ports']), set(['10:8000', '9000', '20:8000']))


class MergeStringsOrListsTest(unittest.TestCase):
    def test_no_override(self):
        service_dict = config.merge_service_dicts(
            {'dns': '8.8.8.8'},
            {},
        )
        self.assertEqual(set(service_dict['dns']), set(['8.8.8.8']))
    def test_no_base(self):
        service_dict = config.merge_service_dicts(
            {},
            {'dns': '8.8.8.8'},
        )
        self.assertEqual(set(service_dict['dns']), set(['8.8.8.8']))

    def test_add_string(self):
        service_dict = config.merge_service_dicts(
            {'dns': ['8.8.8.8']},
            {'dns': '9.9.9.9'},
        )
        self.assertEqual(set(service_dict['dns']), set(['8.8.8.8', '9.9.9.9']))

    def test_add_list(self):
        service_dict = config.merge_service_dicts(
            {'dns': '8.8.8.8'},
            {'dns': ['9.9.9.9']},
        )
        self.assertEqual(set(service_dict['dns']), set(['8.8.8.8', '9.9.9.9']))


class MergeLabelsTest(unittest.TestCase):
    def test_empty(self):
        service_dict = config.merge_service_dicts({}, {})
        self.assertNotIn('labels', service_dict)

    def test_no_override(self):
        service_dict = config.merge_service_dicts(
            make_service_dict('foo', {'build': '.', 'labels': ['foo=1', 'bar']}, 'tests/'),
            make_service_dict('foo', {'build': '.'}, 'tests/'),
        )
        self.assertEqual(service_dict['labels'], {'foo': '1', 'bar': ''})

    def test_no_base(self):
        service_dict = config.merge_service_dicts(
            make_service_dict('foo', {'build': '.'}, 'tests/'),
            make_service_dict('foo', {'build': '.', 'labels': ['foo=2']}, 'tests/'),
        )
        self.assertEqual(service_dict['labels'], {'foo': '2'})

    def test_override_explicit_value(self):
        service_dict = config.merge_service_dicts(
            make_service_dict('foo', {'build': '.', 'labels': ['foo=1', 'bar']}, 'tests/'),
            make_service_dict('foo', {'build': '.', 'labels': ['foo=2']}, 'tests/'),
        )
        self.assertEqual(service_dict['labels'], {'foo': '2', 'bar': ''})

    def test_add_explicit_value(self):
        service_dict = config.merge_service_dicts(
            make_service_dict('foo', {'build': '.', 'labels': ['foo=1', 'bar']}, 'tests/'),
            make_service_dict('foo', {'build': '.', 'labels': ['bar=2']}, 'tests/'),
        )
        self.assertEqual(service_dict['labels'], {'foo': '1', 'bar': '2'})

    def test_remove_explicit_value(self):
        service_dict = config.merge_service_dicts(
            make_service_dict('foo', {'build': '.', 'labels': ['foo=1', 'bar=2']}, 'tests/'),
            make_service_dict('foo', {'build': '.', 'labels': ['bar']}, 'tests/'),
        )
        self.assertEqual(service_dict['labels'], {'foo': '1', 'bar': ''})


class MemoryOptionsTest(unittest.TestCase):
    def test_validation_fails_with_just_memswap_limit(self):
        """
        When you set a 'memswap_limit' it is invalid config unless you also
        set a mem_limit
        """
        expected_error_msg = (
            "Service 'foo' configuration key 'memswap_limit' is invalid: when "
            "defining 'memswap_limit' you must set 'mem_limit' as well"
        )
        with self.assertRaisesRegexp(ConfigurationError, expected_error_msg):
            config.load(
                build_config_details(
                    {
                        'foo': {'image': 'busybox', 'memswap_limit': 2000000},
                    },
                    'tests/fixtures/extends',
                    'filename.yml'
                )
            )

    def test_validation_with_correct_memswap_values(self):
        service_dict = config.load(
            build_config_details(
                {'foo': {'image': 'busybox', 'mem_limit': 1000000, 'memswap_limit': 2000000}},
                'tests/fixtures/extends',
                'common.yml'
            )
        )
        self.assertEqual(service_dict[0]['memswap_limit'], 2000000)

    def test_memswap_can_be_a_string(self):
        service_dict = config.load(
            build_config_details(
                {'foo': {'image': 'busybox', 'mem_limit': "1G", 'memswap_limit': "512M"}},
                'tests/fixtures/extends',
                'common.yml'
            )
        )
        self.assertEqual(service_dict[0]['memswap_limit'], "512M")
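# Editorial sketch (hedged, mirroring MemoryOptionsTest above): both raw byte
# counts and unit strings are accepted for the memory options, e.g.
#   {'mem_limit': '1G', 'memswap_limit': '512M'}
# while 'memswap_limit' on its own is rejected at validation time.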
class EnvTest(unittest.TestCase):
    def test_parse_environment_as_list(self):
        environment = [
            'NORMAL=F1',
            'CONTAINS_EQUALS=F=2',
            'TRAILING_EQUALS=',
        ]
        self.assertEqual(
            config.parse_environment(environment),
            {'NORMAL': 'F1', 'CONTAINS_EQUALS': 'F=2', 'TRAILING_EQUALS': ''},
        )

    def test_parse_environment_as_dict(self):
        environment = {
            'NORMAL': 'F1',
            'CONTAINS_EQUALS': 'F=2',
            'TRAILING_EQUALS': None,
        }
        self.assertEqual(config.parse_environment(environment), environment)

    def test_parse_environment_invalid(self):
        with self.assertRaises(ConfigurationError):
            config.parse_environment('a=b')

    def test_parse_environment_empty(self):
        self.assertEqual(config.parse_environment(None), {})

    @mock.patch.dict(os.environ)
    def test_resolve_environment(self):
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'

        service_dict = {
            'build': '.',
            'environment': {
                'FILE_DEF': 'F1',
                'FILE_DEF_EMPTY': '',
                'ENV_DEF': None,
                'NO_DEF': None
            },
        }
        self.assertEqual(
            resolve_environment(service_dict),
            {'FILE_DEF': 'F1', 'FILE_DEF_EMPTY': '', 'ENV_DEF': 'E3', 'NO_DEF': ''},
        )

    def test_resolve_environment_from_env_file(self):
        self.assertEqual(
            resolve_environment({'env_file': ['tests/fixtures/env/one.env']}),
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'bar'},
        )

    def test_resolve_environment_with_multiple_env_files(self):
        service_dict = {
            'env_file': [
                'tests/fixtures/env/one.env',
                'tests/fixtures/env/two.env'
            ]
        }
        self.assertEqual(
            resolve_environment(service_dict),
            {'ONE': '2', 'TWO': '1', 'THREE': '3', 'FOO': 'baz', 'DOO': 'dah'},
        )

    def test_resolve_environment_nonexistent_file(self):
        with pytest.raises(ConfigurationError) as exc:
            config.load(build_config_details(
                {'foo': {'image': 'example', 'env_file': 'nonexistent.env'}},
                working_dir='tests/fixtures/env'))

        assert 'Couldn\'t find env file' in exc.exconly()
        assert 'nonexistent.env' in exc.exconly()

    @mock.patch.dict(os.environ)
    def test_resolve_environment_from_env_file_with_empty_values(self):
        os.environ['FILE_DEF'] = 'E1'
        os.environ['FILE_DEF_EMPTY'] = 'E2'
        os.environ['ENV_DEF'] = 'E3'
        self.assertEqual(
            resolve_environment({'env_file': ['tests/fixtures/env/resolve.env']}),
            {
                'FILE_DEF': u'bär',
                'FILE_DEF_EMPTY': '',
                'ENV_DEF': 'E3',
                'NO_DEF': ''
            },
        )

    @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
    @mock.patch.dict(os.environ)
    def test_resolve_path(self):
        os.environ['HOSTENV'] = '/tmp'
        os.environ['CONTAINERENV'] = '/host/tmp'

        service_dict = config.load(
            build_config_details(
                {'foo': {'build': '.', 'volumes': ['$HOSTENV:$CONTAINERENV']}},
                "tests/fixtures/env",
            )
        )[0]
        self.assertEqual(
            set(service_dict['volumes']),
            set([VolumeSpec.parse('/tmp:/host/tmp')]))

        service_dict = config.load(
            build_config_details(
                {'foo': {'build': '.', 'volumes': ['/opt${HOSTENV}:/opt${CONTAINERENV}']}},
                "tests/fixtures/env",
            )
        )[0]
        self.assertEqual(
            set(service_dict['volumes']),
            set([VolumeSpec.parse('/opt/tmp:/opt/host/tmp')]))


def load_from_filename(filename):
    return config.load(config.find('.', [filename]))


class ExtendsTest(unittest.TestCase):
    def test_extends(self):
        service_dicts = load_from_filename('tests/fixtures/extends/docker-compose.yml')

        self.assertEqual(service_sort(service_dicts), service_sort([
            {
                'name': 'mydb',
                'image': 'busybox',
                'command': 'top',
            },
            {
                'name': 'myweb',
                'image': 'busybox',
                'command': 'top',
                'links': ['mydb:db'],
                'environment': {
                    "FOO": "1",
                    "BAR": "2",
                    "BAZ": "2",
                },
            }
        ]))

    def test_nested(self):
        service_dicts = load_from_filename('tests/fixtures/extends/nested.yml')

        self.assertEqual(service_dicts, [
            {
                'name': 'myweb',
                'image': 'busybox',
                'command': '/bin/true',
                'environment': {
                    "FOO": "2",
                    "BAR": "2",
                },
            },
        ])
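    # Editorial note (hedged): test_nested pins down the override rule for an
    # extends chain -- the extending service's own values win, roughly
    #   merged_env = dict(parent_env, **child_env)
    # which is why FOO ends up as "2" while BAR is inherited unchanged.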
""" service_dicts = load_from_filename('tests/fixtures/extends/specify-file-as-self.yml') self.assertEqual(service_sort(service_dicts), service_sort([ { 'environment': { 'YEP': '1', 'BAR': '1', 'BAZ': '3' }, 'image': 'busybox', 'name': 'myweb' }, { 'environment': {'YEP': '1'}, 'image': 'busybox', 'name': 'otherweb' }, { 'environment': {'YEP': '1', 'BAZ': '3'}, 'image': 'busybox', 'name': 'web' } ])) def test_circular(self): with pytest.raises(config.CircularReference) as exc: load_from_filename('tests/fixtures/extends/circle-1.yml') path = [ (os.path.basename(filename), service_name) for (filename, service_name) in exc.value.trail ] expected = [ ('circle-1.yml', 'web'), ('circle-2.yml', 'other'), ('circle-1.yml', 'web'), ] self.assertEqual(path, expected) def test_extends_validation_empty_dictionary(self): with self.assertRaisesRegexp(ConfigurationError, 'service'): config.load( build_config_details( { 'web': {'image': 'busybox', 'extends': {}}, }, 'tests/fixtures/extends', 'filename.yml' ) ) def test_extends_validation_missing_service_key(self): with self.assertRaisesRegexp(ConfigurationError, "'service' is a required property"): config.load( build_config_details( { 'web': {'image': 'busybox', 'extends': {'file': 'common.yml'}}, }, 'tests/fixtures/extends', 'filename.yml' ) ) def test_extends_validation_invalid_key(self): expected_error_msg = ( "Service 'web' configuration key 'extends' " "contains unsupported option: 'rogue_key'" ) with self.assertRaisesRegexp(ConfigurationError, expected_error_msg): config.load( build_config_details( { 'web': { 'image': 'busybox', 'extends': { 'file': 'common.yml', 'service': 'web', 'rogue_key': 'is not allowed' } }, }, 'tests/fixtures/extends', 'filename.yml' ) ) def test_extends_validation_sub_property_key(self): expected_error_msg = ( "Service 'web' configuration key 'extends' 'file' contains 1, " "which is an invalid type, it should be a string" ) with self.assertRaisesRegexp(ConfigurationError, expected_error_msg): config.load( build_config_details( { 'web': { 'image': 'busybox', 'extends': { 'file': 1, 'service': 'web', } }, }, 'tests/fixtures/extends', 'filename.yml' ) ) def test_extends_validation_no_file_key_no_filename_set(self): dictionary = {'extends': {'service': 'web'}} def load_config(): return make_service_dict('myweb', dictionary, working_dir='tests/fixtures/extends') self.assertRaisesRegexp(ConfigurationError, 'file', load_config) def test_extends_validation_valid_config(self): service = config.load( build_config_details( { 'web': {'image': 'busybox', 'extends': {'service': 'web', 'file': 'common.yml'}}, }, 'tests/fixtures/extends', 'common.yml' ) ) self.assertEquals(len(service), 1) self.assertIsInstance(service[0], dict) self.assertEquals(service[0]['command'], "/bin/true") def test_extended_service_with_invalid_config(self): expected_error_msg = "Service 'myweb' has neither an image nor a build path specified" with self.assertRaisesRegexp(ConfigurationError, expected_error_msg): load_from_filename('tests/fixtures/extends/service-with-invalid-schema.yml') def test_extended_service_with_valid_config(self): service = load_from_filename('tests/fixtures/extends/service-with-valid-composite-extends.yml') self.assertEquals(service[0]['command'], "top") def test_extends_file_defaults_to_self(self): """ Test not specifying a file in our extends options that the config is valid and correctly extends from itself. 
""" service_dicts = load_from_filename('tests/fixtures/extends/no-file-specified.yml') self.assertEqual(service_sort(service_dicts), service_sort([ { 'name': 'myweb', 'image': 'busybox', 'environment': { "BAR": "1", "BAZ": "3", } }, { 'name': 'web', 'image': 'busybox', 'environment': { "BAZ": "3", } } ])) def test_invalid_links_in_extended_service(self): expected_error_msg = "services with 'links' cannot be extended" with self.assertRaisesRegexp(ConfigurationError, expected_error_msg): load_from_filename('tests/fixtures/extends/invalid-links.yml') def test_invalid_volumes_from_in_extended_service(self): expected_error_msg = "services with 'volumes_from' cannot be extended" with self.assertRaisesRegexp(ConfigurationError, expected_error_msg): load_from_filename('tests/fixtures/extends/invalid-volumes.yml') def test_invalid_net_in_extended_service(self): expected_error_msg = "services with 'net: container' cannot be extended" with self.assertRaisesRegexp(ConfigurationError, expected_error_msg): load_from_filename('tests/fixtures/extends/invalid-net.yml') @mock.patch.dict(os.environ) def test_valid_interpolation_in_extended_service(self): os.environ.update( HOSTNAME_VALUE="penguin", ) expected_interpolated_value = "host-penguin" service_dicts = load_from_filename('tests/fixtures/extends/valid-interpolation.yml') for service in service_dicts: self.assertTrue(service['hostname'], expected_interpolated_value) @pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash') def test_volume_path(self): dicts = load_from_filename('tests/fixtures/volume-path/docker-compose.yml') paths = [ VolumeSpec( os.path.abspath('tests/fixtures/volume-path/common/foo'), '/foo', 'rw'), VolumeSpec( os.path.abspath('tests/fixtures/volume-path/bar'), '/bar', 'rw') ] self.assertEqual(set(dicts[0]['volumes']), set(paths)) def test_parent_build_path_dne(self): child = load_from_filename('tests/fixtures/extends/nonexistent-path-child.yml') self.assertEqual(child, [ { 'name': 'dnechild', 'image': 'busybox', 'command': '/bin/true', 'environment': { "FOO": "1", "BAR": "2", }, }, ]) def test_load_throws_error_when_base_service_does_not_exist(self): err_msg = r'''Cannot extend service 'foo' in .*: Service not found''' with self.assertRaisesRegexp(ConfigurationError, err_msg): load_from_filename('tests/fixtures/extends/nonexistent-service.yml') def test_partial_service_config_in_extends_is_still_valid(self): dicts = load_from_filename('tests/fixtures/extends/valid-common-config.yml') self.assertEqual(dicts[0]['environment'], {'FOO': '1'}) def test_extended_service_with_verbose_and_shorthand_way(self): services = load_from_filename('tests/fixtures/extends/verbose-and-shorthand.yml') self.assertEqual(service_sort(services), service_sort([ { 'name': 'base', 'image': 'busybox', 'environment': {'BAR': '1'}, }, { 'name': 'verbose', 'image': 'busybox', 'environment': {'BAR': '1', 'FOO': '1'}, }, { 'name': 'shorthand', 'image': 'busybox', 'environment': {'BAR': '1', 'FOO': '2'}, }, ])) def test_extends_with_environment_and_env_files(self): tmpdir = py.test.ensuretemp('test_extends_with_environment') self.addCleanup(tmpdir.remove) commondir = tmpdir.mkdir('common') commondir.join('base.yml').write(""" app: image: 'example/app' env_file: - 'envs' environment: - SECRET - TEST_ONE=common - TEST_TWO=common """) tmpdir.join('docker-compose.yml').write(""" ext: extends: file: common/base.yml service: app env_file: - 'envs' environment: - THING - TEST_ONE=top """) commondir.join('envs').write(""" COMMON_ENV_FILE TEST_ONE=common-env-file 
TEST_TWO=common-env-file
TEST_THREE=common-env-file
TEST_FOUR=common-env-file
""")
        tmpdir.join('envs').write("""
TOP_ENV_FILE
TEST_ONE=top-env-file
TEST_TWO=top-env-file
TEST_THREE=top-env-file
""")

        expected = [
            {
                'name': 'ext',
                'image': 'example/app',
                'environment': {
                    'SECRET': 'secret',
                    'TOP_ENV_FILE': 'secret',
                    'COMMON_ENV_FILE': 'secret',
                    'THING': 'thing',
                    'TEST_ONE': 'top',
                    'TEST_TWO': 'common',
                    'TEST_THREE': 'top-env-file',
                    'TEST_FOUR': 'common-env-file',
                },
            },
        ]

        with mock.patch.dict(os.environ):
            os.environ['SECRET'] = 'secret'
            os.environ['THING'] = 'thing'
            os.environ['COMMON_ENV_FILE'] = 'secret'
            os.environ['TOP_ENV_FILE'] = 'secret'
            config = load_from_filename(str(tmpdir.join('docker-compose.yml')))

        assert config == expected


@pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
class ExpandPathTest(unittest.TestCase):
    working_dir = '/home/user/somedir'

    def test_expand_path_normal(self):
        result = config.expand_path(self.working_dir, 'myfile')
        self.assertEqual(result, self.working_dir + '/' + 'myfile')

    def test_expand_path_absolute(self):
        abs_path = '/home/user/otherdir/somefile'
        result = config.expand_path(self.working_dir, abs_path)
        self.assertEqual(result, abs_path)

    def test_expand_path_with_tilde(self):
        test_path = '~/otherdir/somefile'
        with mock.patch.dict(os.environ):
            os.environ['HOME'] = user_path = '/home/user/'
            result = config.expand_path(self.working_dir, test_path)
        self.assertEqual(result, user_path + 'otherdir/somefile')


class VolumePathTest(unittest.TestCase):
    @pytest.mark.xfail((not IS_WINDOWS_PLATFORM), reason='does not have a drive')
    def test_split_path_mapping_with_windows_path(self):
        windows_volume_path = "c:\\Users\\msamblanet\\Documents\\anvil\\connect\\config:/opt/connect/config:ro"
        expected_mapping = (
            "/opt/connect/config:ro",
            "c:\\Users\\msamblanet\\Documents\\anvil\\connect\\config"
        )

        mapping = config.split_path_mapping(windows_volume_path)
        self.assertEqual(mapping, expected_mapping)


@pytest.mark.xfail(IS_WINDOWS_PLATFORM, reason='paths use slash')
class BuildPathTest(unittest.TestCase):
    def setUp(self):
        self.abs_context_path = os.path.join(os.getcwd(), 'tests/fixtures/build-ctx')

    def test_nonexistent_path(self):
        with self.assertRaises(ConfigurationError):
            config.load(
                build_config_details(
                    {
                        'foo': {'build': 'nonexistent.path'},
                    },
                    'working_dir',
                    'filename.yml'
                )
            )

    def test_relative_path(self):
        relative_build_path = '../build-ctx/'
        service_dict = make_service_dict(
            'relpath',
            {'build': relative_build_path},
            working_dir='tests/fixtures/build-path'
        )
        self.assertEquals(service_dict['build'], self.abs_context_path)

    def test_absolute_path(self):
        service_dict = make_service_dict(
            'abspath',
            {'build': self.abs_context_path},
            working_dir='tests/fixtures/build-path'
        )
        self.assertEquals(service_dict['build'], self.abs_context_path)

    def test_from_file(self):
        service_dict = load_from_filename('tests/fixtures/build-path/docker-compose.yml')
        self.assertEquals(service_dict, [{'name': 'foo', 'build': self.abs_context_path}])

    def test_valid_url_in_build_path(self):
        valid_urls = [
            'git://github.com/docker/docker',
            'git@github.com:docker/docker.git',
            'git@bitbucket.org:atlassianlabs/atlassian-docker.git',
            'https://github.com/docker/docker.git',
            'http://github.com/docker/docker.git',
            'github.com/docker/docker.git',
        ]
        for valid_url in valid_urls:
            service_dict = config.load(build_config_details({
                'validurl': {'build': valid_url},
            }, '.', None))
            assert service_dict[0]['build'] == valid_url

    def test_invalid_url_in_build_path(self):
        invalid_urls = [
            'example.com/bogus',
            'ftp://example.com/',
            '/path/does/not/exist',
        ]
        for invalid_url in invalid_urls:
            with pytest.raises(ConfigurationError) as exc:
                config.load(build_config_details({
                    'invalidurl': {'build': invalid_url},
                }, '.', None))
            assert 'build path' in exc.exconly()


class GetDefaultConfigFilesTestCase(unittest.TestCase):

    files = [
        'docker-compose.yml',
        'docker-compose.yaml',
        'fig.yml',
        'fig.yaml',
    ]

    def test_get_config_path_default_file_in_basedir(self):
        for index, filename in enumerate(self.files):
            self.assertEqual(
                filename,
                get_config_filename_for_files(self.files[index:]))

        with self.assertRaises(config.ComposeFileNotFound):
            get_config_filename_for_files([])

    def test_get_config_path_default_file_in_parent_dir(self):
        """Test with files placed in the subdir"""

        def get_config_in_subdir(files):
            return get_config_filename_for_files(files, subdir=True)

        for index, filename in enumerate(self.files):
            self.assertEqual(filename, get_config_in_subdir(self.files[index:]))

        with self.assertRaises(config.ComposeFileNotFound):
            get_config_in_subdir([])


def get_config_filename_for_files(filenames, subdir=None):
    def make_files(dirname, filenames):
        for fname in filenames:
            with open(os.path.join(dirname, fname), 'w') as f:
                f.write('')

    project_dir = tempfile.mkdtemp()
    try:
        make_files(project_dir, filenames)
        if subdir:
            base_dir = tempfile.mkdtemp(dir=project_dir)
        else:
            base_dir = project_dir
        filename, = config.get_default_config_files(base_dir)
        return os.path.basename(filename)
    finally:
        shutil.rmtree(project_dir)

compose-1.5.2/tests/unit/config/sort_services_test.py

from compose.config.errors import DependencyError
from compose.config.sort_services import sort_service_dicts
from compose.config.types import VolumeFromSpec
from tests import unittest


class SortServiceTest(unittest.TestCase):
    def test_sort_service_dicts_1(self):
        services = [
            {
                'links': ['redis'],
                'name': 'web'
            },
            {
                'name': 'grunt'
            },
            {
                'name': 'redis'
            }
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 3)
        self.assertEqual(sorted_services[0]['name'], 'grunt')
        self.assertEqual(sorted_services[1]['name'], 'redis')
        self.assertEqual(sorted_services[2]['name'], 'web')

    def test_sort_service_dicts_2(self):
        services = [
            {
                'links': ['redis', 'postgres'],
                'name': 'web'
            },
            {
                'name': 'postgres',
                'links': ['redis']
            },
            {
                'name': 'redis'
            }
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 3)
        self.assertEqual(sorted_services[0]['name'], 'redis')
        self.assertEqual(sorted_services[1]['name'], 'postgres')
        self.assertEqual(sorted_services[2]['name'], 'web')

    def test_sort_service_dicts_3(self):
        services = [
            {
                'name': 'child'
            },
            {
                'name': 'parent',
                'links': ['child']
            },
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 3)
        self.assertEqual(sorted_services[0]['name'], 'child')
        self.assertEqual(sorted_services[1]['name'], 'parent')
        self.assertEqual(sorted_services[2]['name'], 'grandparent')

    def test_sort_service_dicts_4(self):
        services = [
            {
                'name': 'child'
            },
            {
                'name': 'parent',
                'volumes_from': [VolumeFromSpec('child', 'rw')]
            },
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 3)
        self.assertEqual(sorted_services[0]['name'], 'child')
        self.assertEqual(sorted_services[1]['name'], 'parent')
        self.assertEqual(sorted_services[2]['name'], 'grandparent')
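    # Editorial note (hedged): the ordering cases above and below all reduce
    # to a topological sort over the edges induced by `links`, `volumes_from`
    # and `net: container:<name>`, so a service is always placed after
    # anything it depends on through any of those three keys.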
    def test_sort_service_dicts_5(self):
        services = [
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
            {
                'name': 'parent',
                'net': 'container:child'
            },
            {
                'name': 'child'
            }
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 3)
        self.assertEqual(sorted_services[0]['name'], 'child')
        self.assertEqual(sorted_services[1]['name'], 'parent')
        self.assertEqual(sorted_services[2]['name'], 'grandparent')

    def test_sort_service_dicts_6(self):
        services = [
            {
                'links': ['parent'],
                'name': 'grandparent'
            },
            {
                'name': 'parent',
                'volumes_from': [VolumeFromSpec('child', 'ro')]
            },
            {
                'name': 'child'
            }
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 3)
        self.assertEqual(sorted_services[0]['name'], 'child')
        self.assertEqual(sorted_services[1]['name'], 'parent')
        self.assertEqual(sorted_services[2]['name'], 'grandparent')

    def test_sort_service_dicts_7(self):
        services = [
            {
                'net': 'container:three',
                'name': 'four'
            },
            {
                'links': ['two'],
                'name': 'three'
            },
            {
                'name': 'two',
                'volumes_from': [VolumeFromSpec('one', 'rw')]
            },
            {
                'name': 'one'
            }
        ]

        sorted_services = sort_service_dicts(services)
        self.assertEqual(len(sorted_services), 4)
        self.assertEqual(sorted_services[0]['name'], 'one')
        self.assertEqual(sorted_services[1]['name'], 'two')
        self.assertEqual(sorted_services[2]['name'], 'three')
        self.assertEqual(sorted_services[3]['name'], 'four')

    def test_sort_service_dicts_circular_imports(self):
        services = [
            {
                'links': ['redis'],
                'name': 'web'
            },
            {
                'name': 'redis',
                'links': ['web']
            },
        ]

        try:
            sort_service_dicts(services)
        except DependencyError as e:
            self.assertIn('redis', e.msg)
            self.assertIn('web', e.msg)
        else:
            self.fail('Should have thrown a DependencyError')

    def test_sort_service_dicts_circular_imports_2(self):
        services = [
            {
                'links': ['postgres', 'redis'],
                'name': 'web'
            },
            {
                'name': 'redis',
                'links': ['web']
            },
            {
                'name': 'postgres'
            }
        ]

        try:
            sort_service_dicts(services)
        except DependencyError as e:
            self.assertIn('redis', e.msg)
            self.assertIn('web', e.msg)
        else:
            self.fail('Should have thrown a DependencyError')

    def test_sort_service_dicts_circular_imports_3(self):
        services = [
            {
                'links': ['b'],
                'name': 'a'
            },
            {
                'name': 'b',
                'links': ['c']
            },
            {
                'name': 'c',
                'links': ['a']
            }
        ]

        try:
            sort_service_dicts(services)
        except DependencyError as e:
            self.assertIn('a', e.msg)
            self.assertIn('b', e.msg)
        else:
            self.fail('Should have thrown a DependencyError')

    def test_sort_service_dicts_self_imports(self):
        services = [
            {
                'links': ['web'],
                'name': 'web'
            },
        ]

        try:
            sort_service_dicts(services)
        except DependencyError as e:
            self.assertIn('web', e.msg)
        else:
            self.fail('Should have thrown a DependencyError')

compose-1.5.2/tests/unit/config/types_test.py

import pytest

from compose.config.errors import ConfigurationError
from compose.config.types import parse_extra_hosts
from compose.config.types import VolumeSpec
from compose.const import IS_WINDOWS_PLATFORM


def test_parse_extra_hosts_list():
    expected = {'www.example.com': '192.168.0.17'}
    assert parse_extra_hosts(["www.example.com:192.168.0.17"]) == expected

    expected = {'www.example.com': '192.168.0.17'}
    assert parse_extra_hosts(["www.example.com: 192.168.0.17"]) == expected

    assert parse_extra_hosts([
        "www.example.com: 192.168.0.17",
        "static.example.com:192.168.0.19",
        "api.example.com: 192.168.0.18"
    ]) == {
        'www.example.com': '192.168.0.17',
        'static.example.com': '192.168.0.19',
        'api.example.com': '192.168.0.18'
    }
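# Editorial note (hedged): both the "host:ip" and "host: ip" spellings above
# normalise to the same mapping -- parse_extra_hosts tolerates the optional
# space after the colon.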
def test_parse_extra_hosts_dict():
    assert parse_extra_hosts({
        'www.example.com': '192.168.0.17',
        'api.example.com': '192.168.0.18'
    }) == {
        'www.example.com': '192.168.0.17',
        'api.example.com': '192.168.0.18'
    }


class TestVolumeSpec(object):

    def test_parse_volume_spec_only_one_path(self):
        spec = VolumeSpec.parse('/the/volume')
        assert spec == (None, '/the/volume', 'rw')

    def test_parse_volume_spec_internal_and_external(self):
        spec = VolumeSpec.parse('external:interval')
        assert spec == ('external', 'interval', 'rw')

    def test_parse_volume_spec_with_mode(self):
        spec = VolumeSpec.parse('external:interval:ro')
        assert spec == ('external', 'interval', 'ro')

        spec = VolumeSpec.parse('external:interval:z')
        assert spec == ('external', 'interval', 'z')

    def test_parse_volume_spec_too_many_parts(self):
        with pytest.raises(ConfigurationError) as exc:
            VolumeSpec.parse('one:two:three:four')
        assert 'has incorrect format' in exc.exconly()

    @pytest.mark.xfail((not IS_WINDOWS_PLATFORM), reason='does not have a drive')
    def test_parse_volume_windows_absolute_path(self):
        windows_path = "c:\\Users\\me\\Documents\\shiny\\config:/opt/shiny/config:ro"
        assert VolumeSpec.parse(windows_path) == (
            "/c/Users/me/Documents/shiny/config",
            "/opt/shiny/config",
            "ro"
        )

compose-1.5.2/tests/unit/container_test.py

from __future__ import unicode_literals

import docker

from .. import mock
from .. import unittest
from compose.container import Container
from compose.container import get_container_name


class ContainerTest(unittest.TestCase):

    def setUp(self):
        self.container_dict = {
            "Id": "abc",
            "Image": "busybox:latest",
            "Command": "top",
            "Created": 1387384730,
            "Status": "Up 8 seconds",
            "Ports": None,
            "SizeRw": 0,
            "SizeRootFs": 0,
            "Names": ["/composetest_db_1", "/composetest_web_1/db"],
            "NetworkSettings": {
                "Ports": {},
            },
            "Config": {
                "Labels": {
                    "com.docker.compose.project": "composetest",
                    "com.docker.compose.service": "web",
                    "com.docker.compose.container-number": 7,
                },
            }
        }

    def test_from_ps(self):
        container = Container.from_ps(None,
                                      self.container_dict,
                                      has_been_inspected=True)
        self.assertEqual(
            container.dictionary,
            {
                "Id": "abc",
                "Image": "busybox:latest",
                "Name": "/composetest_db_1",
            })

    def test_from_ps_prefixed(self):
        self.container_dict['Names'] = [
            '/swarm-host-1' + n for n in self.container_dict['Names']
        ]

        container = Container.from_ps(None,
                                      self.container_dict,
                                      has_been_inspected=True)
        self.assertEqual(container.dictionary, {
            "Id": "abc",
            "Image": "busybox:latest",
            "Name": "/composetest_db_1",
        })

    def test_environment(self):
        container = Container(None, {
            'Id': 'abc',
            'Config': {
                'Env': [
                    'FOO=BAR',
                    'BAZ=DOGE',
                ]
            }
        }, has_been_inspected=True)
        self.assertEqual(container.environment, {
            'FOO': 'BAR',
            'BAZ': 'DOGE',
        })

    def test_number(self):
        container = Container(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.number, 7)

    def test_name(self):
        container = Container.from_ps(None,
                                      self.container_dict,
                                      has_been_inspected=True)
        self.assertEqual(container.name, "composetest_db_1")

    def test_name_without_project(self):
        self.container_dict['Name'] = "/composetest_web_7"
        container = Container(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.name_without_project, "web_7")

    def test_name_without_project_custom_container_name(self):
        self.container_dict['Name'] = "/custom_name_of_container"
        container = Container(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.name_without_project, "custom_name_of_container")
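    # Editorial aside (hedged): the two name tests above rely on
    # name_without_project stripping the '<project>_' prefix when present and
    # otherwise falling back to the raw container name, as with the custom
    # name case.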
    def test_inspect_if_not_inspected(self):
        mock_client = mock.create_autospec(docker.Client)
        container = Container(mock_client, dict(Id="the_id"))

        container.inspect_if_not_inspected()
        mock_client.inspect_container.assert_called_once_with("the_id")
        self.assertEqual(container.dictionary,
                         mock_client.inspect_container.return_value)
        self.assertTrue(container.has_been_inspected)

        container.inspect_if_not_inspected()
        self.assertEqual(mock_client.inspect_container.call_count, 1)

    def test_human_readable_ports_none(self):
        container = Container(None, self.container_dict, has_been_inspected=True)
        self.assertEqual(container.human_readable_ports, '')

    def test_human_readable_ports_public_and_private(self):
        self.container_dict['NetworkSettings']['Ports'].update({
            "45454/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49197"}],
            "45453/tcp": [],
        })
        container = Container(None, self.container_dict, has_been_inspected=True)

        expected = "45453/tcp, 0.0.0.0:49197->45454/tcp"
        self.assertEqual(container.human_readable_ports, expected)

    def test_get_local_port(self):
        self.container_dict['NetworkSettings']['Ports'].update({
            "45454/tcp": [{"HostIp": "0.0.0.0", "HostPort": "49197"}],
        })
        container = Container(None, self.container_dict, has_been_inspected=True)

        self.assertEqual(
            container.get_local_port(45454, protocol='tcp'),
            '0.0.0.0:49197')

    def test_get(self):
        container = Container(None, {
            "Status": "Up 8 seconds",
            "HostConfig": {
                "VolumesFrom": ["volume_id"]
            },
        }, has_been_inspected=True)

        self.assertEqual(container.get('Status'), "Up 8 seconds")
        self.assertEqual(container.get('HostConfig.VolumesFrom'), ["volume_id"])
        self.assertEqual(container.get('Foo.Bar.DoesNotExist'), None)


class GetContainerNameTestCase(unittest.TestCase):

    def test_get_container_name(self):
        self.assertIsNone(get_container_name({}))
        self.assertEqual(get_container_name({'Name': 'myproject_db_1'}), 'myproject_db_1')
        self.assertEqual(
            get_container_name({'Names': ['/myproject_db_1', '/myproject_web_1/db']}),
            'myproject_db_1')
        self.assertEqual(
            get_container_name({
                'Names': [
                    '/swarm-host-1/myproject_db_1',
                    '/swarm-host-1/myproject_web_1/db'
                ]
            }),
            'myproject_db_1'
        )

compose-1.5.2/tests/unit/interpolation_test.py

import unittest

from compose.config.interpolation import BlankDefaultDict as bddict
from compose.config.interpolation import interpolate
from compose.config.interpolation import InvalidInterpolation


class InterpolationTest(unittest.TestCase):

    def test_valid_interpolations(self):
        self.assertEqual(interpolate('$foo', bddict(foo='hi')), 'hi')
        self.assertEqual(interpolate('${foo}', bddict(foo='hi')), 'hi')

        self.assertEqual(interpolate('${subject} love you', bddict(subject='i')), 'i love you')
        self.assertEqual(interpolate('i ${verb} you', bddict(verb='love')), 'i love you')
        self.assertEqual(interpolate('i love ${object}', bddict(object='you')), 'i love you')

    def test_empty_value(self):
        self.assertEqual(interpolate('${foo}', bddict(foo='')), '')

    def test_unset_value(self):
        self.assertEqual(interpolate('${foo}', bddict()), '')

    def test_escaped_interpolation(self):
        self.assertEqual(interpolate('$${foo}', bddict(foo='hi')), '${foo}')

    def test_invalid_strings(self):
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('$}', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${}', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${ }', bddict()))
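        # (Editorial comment, hedged: the remaining rejected forms below add
        # whitespace or punctuation inside the braces, which the interpolation
        # grammar does not allow.)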
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${ foo}', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${foo }', bddict()))
        self.assertRaises(InvalidInterpolation, lambda: interpolate('${foo!}', bddict()))

compose-1.5.2/tests/unit/multiplexer_test.py

import unittest

from compose.cli.multiplexer import Multiplexer


class MultiplexerTest(unittest.TestCase):
    def test_no_iterators(self):
        mux = Multiplexer([])
        self.assertEqual([], list(mux.loop()))

    def test_empty_iterators(self):
        mux = Multiplexer([
            (x for x in []),
            (x for x in []),
        ])
        self.assertEqual([], list(mux.loop()))

    def test_aggregates_output(self):
        mux = Multiplexer([
            (x for x in [0, 2, 4]),
            (x for x in [1, 3, 5]),
        ])
        self.assertEqual(
            [0, 1, 2, 3, 4, 5],
            sorted(list(mux.loop())),
        )

    def test_exception(self):
        class Problem(Exception):
            pass

        def problematic_iterator():
            yield 0
            yield 2
            raise Problem(":(")

        mux = Multiplexer([
            problematic_iterator(),
            (x for x in [1, 3, 5]),
        ])

        with self.assertRaises(Problem):
            list(mux.loop())

compose-1.5.2/tests/unit/progress_stream_test.py

from __future__ import absolute_import
from __future__ import unicode_literals

from six import StringIO

from compose import progress_stream
from tests import unittest


class ProgressStreamTestCase(unittest.TestCase):
    def test_stream_output(self):
        output = [
            b'{"status": "Downloading", "progressDetail": {"current": '
            b'31019763, "start": 1413653874, "total": 62763875}, '
            b'"progress": "..."}',
        ]
        events = progress_stream.stream_output(output, StringIO())
        self.assertEqual(len(events), 1)

    def test_stream_output_div_zero(self):
        output = [
            b'{"status": "Downloading", "progressDetail": {"current": '
            b'0, "start": 1413653874, "total": 0}, '
            b'"progress": "..."}',
        ]
        events = progress_stream.stream_output(output, StringIO())
        self.assertEqual(len(events), 1)

    def test_stream_output_null_total(self):
        output = [
            b'{"status": "Downloading", "progressDetail": {"current": '
            b'0, "start": 1413653874, "total": null}, '
            b'"progress": "..."}',
        ]
        events = progress_stream.stream_output(output, StringIO())
        self.assertEqual(len(events), 1)

    def test_stream_output_progress_event_tty(self):
        events = [
            b'{"status": "Already exists", "progressDetail": {}, "id": "8d05e3af52b0"}'
        ]

        class TTYStringIO(StringIO):
            def isatty(self):
                return True

        output = TTYStringIO()
        events = progress_stream.stream_output(events, output)
        self.assertTrue(len(output.getvalue()) > 0)

    def test_stream_output_progress_event_no_tty(self):
        events = [
            b'{"status": "Already exists", "progressDetail": {}, "id": "8d05e3af52b0"}'
        ]
        output = StringIO()

        events = progress_stream.stream_output(events, output)
        self.assertEqual(len(output.getvalue()), 0)

    def test_stream_output_no_progress_event_no_tty(self):
        events = [
            b'{"status": "Pulling from library/xy", "id": "latest"}'
        ]
        output = StringIO()

        events = progress_stream.stream_output(events, output)
        self.assertTrue(len(output.getvalue()) > 0)
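# Editorial note (hedged): the tty/no-tty cases above pin down that
# stream_output only writes progress events when the destination stream
# reports isatty() == True; on a plain pipe the progress noise is dropped and
# only plain status lines come through.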
compose-1.5.2/tests/unit/project_test.py
from __future__ import unicode_literals

import docker

from .. import mock
from .. import unittest
from compose.config.types import VolumeFromSpec
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.project import Project
from compose.service import ContainerNet
from compose.service import Net
from compose.service import Service


class ProjectTest(unittest.TestCase):
    def setUp(self):
        self.mock_client = mock.create_autospec(docker.Client)

    def test_from_dict(self):
        project = Project.from_dicts('composetest', [
            {
                'name': 'web',
                'image': 'busybox:latest',
            },
            {
                'name': 'db',
                'image': 'busybox:latest',
            },
        ], None)
        self.assertEqual(len(project.services), 2)
        self.assertEqual(project.get_service('web').name, 'web')
        self.assertEqual(project.get_service('web').options['image'], 'busybox:latest')
        self.assertEqual(project.get_service('db').name, 'db')
        self.assertEqual(project.get_service('db').options['image'], 'busybox:latest')

    def test_from_config(self):
        dicts = [
            {
                'name': 'web',
                'image': 'busybox:latest',
            },
            {
                'name': 'db',
                'image': 'busybox:latest',
            },
        ]
        project = Project.from_dicts('composetest', dicts, None)
        self.assertEqual(len(project.services), 2)
        self.assertEqual(project.get_service('web').name, 'web')
        self.assertEqual(project.get_service('web').options['image'], 'busybox:latest')
        self.assertEqual(project.get_service('db').name, 'db')
        self.assertEqual(project.get_service('db').options['image'], 'busybox:latest')

    def test_get_service(self):
        web = Service(
            project='composetest',
            name='web',
            client=None,
            image="busybox:latest",
        )
        project = Project('test', [web], None)
        self.assertEqual(project.get_service('web'), web)

    def test_get_services_returns_all_services_without_args(self):
        web = Service(
            project='composetest',
            name='web',
            image='foo',
        )
        console = Service(
            project='composetest',
            name='console',
            image='foo',
        )
        project = Project('test', [web, console], None)
        self.assertEqual(project.get_services(), [web, console])

    def test_get_services_returns_listed_services_with_args(self):
        web = Service(
            project='composetest',
            name='web',
            image='foo',
        )
        console = Service(
            project='composetest',
            name='console',
            image='foo',
        )
        project = Project('test', [web, console], None)
        self.assertEqual(project.get_services(['console']), [console])

    def test_get_services_with_include_links(self):
        db = Service(
            project='composetest',
            name='db',
            image='foo',
        )
        web = Service(
            project='composetest',
            name='web',
            image='foo',
            links=[(db, 'database')]
        )
        cache = Service(
            project='composetest',
            name='cache',
            image='foo'
        )
        console = Service(
            project='composetest',
            name='console',
            image='foo',
            links=[(web, 'web')]
        )
        project = Project('test', [web, db, cache, console], None)
        self.assertEqual(
            project.get_services(['console'], include_deps=True),
            [db, web, console]
        )

    def test_get_services_removes_duplicates_following_links(self):
        db = Service(
            project='composetest',
            name='db',
            image='foo',
        )
        web = Service(
            project='composetest',
            name='web',
            image='foo',
            links=[(db, 'database')]
        )
        project = Project('test', [web, db], None)
        self.assertEqual(
            project.get_services(['web', 'db'], include_deps=True),
            [db, web]
        )

    def test_use_volumes_from_container(self):
        container_id = 'aabbccddee'
        container_dict = dict(Name='aaa', Id=container_id)
        self.mock_client.inspect_container.return_value = container_dict
        project = Project.from_dicts('test', [
            {
                'name': 'test',
                'image': 'busybox:latest',
                'volumes_from': [VolumeFromSpec('aaa', 'rw')]
            }
        ], self.mock_client)
        self.assertEqual(project.get_service('test')._get_volumes_from(), [container_id + ":rw"])
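    # A minimal sketch of the Project.from_dicts() call pattern used
    # throughout this class, assuming the same service-dict shape
    # ('name' plus service options) the tests above rely on:
    def _example_project_from_dicts(self):
        project = Project.from_dicts('composetest', [
            {'name': 'web', 'image': 'busybox:latest'},
        ], None)
        # Services are addressable by name once the project is built.
        return project.get_service('web')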
    def test_use_volumes_from_service_no_container(self):
        container_name = 'test_vol_1'
        self.mock_client.containers.return_value = [
            {
                "Name": container_name,
                "Names": [container_name],
                "Id": container_name,
                "Image": 'busybox:latest'
            }
        ]
        project = Project.from_dicts('test', [
            {
                'name': 'vol',
                'image': 'busybox:latest'
            },
            {
                'name': 'test',
                'image': 'busybox:latest',
                'volumes_from': [VolumeFromSpec('vol', 'rw')]
            }
        ], self.mock_client)
        self.assertEqual(project.get_service('test')._get_volumes_from(), [container_name + ":rw"])

    def test_use_volumes_from_service_container(self):
        container_ids = ['aabbccddee', '12345']

        project = Project.from_dicts('test', [
            {
                'name': 'vol',
                'image': 'busybox:latest'
            },
            {
                'name': 'test',
                'image': 'busybox:latest',
                'volumes_from': [VolumeFromSpec('vol', 'rw')]
            }
        ], None)
        with mock.patch.object(Service, 'containers') as mock_return:
            mock_return.return_value = [
                mock.Mock(id=container_id, spec=Container)
                for container_id in container_ids]
            self.assertEqual(
                project.get_service('test')._get_volumes_from(),
                [container_ids[0] + ':rw'])

    def test_net_unset(self):
        project = Project.from_dicts('test', [
            {
                'name': 'test',
                'image': 'busybox:latest',
            }
        ], self.mock_client)
        service = project.get_service('test')
        self.assertEqual(service.net.id, None)
        self.assertNotIn('NetworkMode', service._get_container_host_config({}))

    def test_use_net_from_container(self):
        container_id = 'aabbccddee'
        container_dict = dict(Name='aaa', Id=container_id)
        self.mock_client.inspect_container.return_value = container_dict
        project = Project.from_dicts('test', [
            {
                'name': 'test',
                'image': 'busybox:latest',
                'net': 'container:aaa'
            }
        ], self.mock_client)
        service = project.get_service('test')
        self.assertEqual(service.net.mode, 'container:' + container_id)

    def test_use_net_from_service(self):
        container_name = 'test_aaa_1'
        self.mock_client.containers.return_value = [
            {
                "Name": container_name,
                "Names": [container_name],
                "Id": container_name,
                "Image": 'busybox:latest'
            }
        ]
        project = Project.from_dicts('test', [
            {
                'name': 'aaa',
                'image': 'busybox:latest'
            },
            {
                'name': 'test',
                'image': 'busybox:latest',
                'net': 'container:aaa'
            }
        ], self.mock_client)

        service = project.get_service('test')
        self.assertEqual(service.net.mode, 'container:' + container_name)

    def test_uses_default_network_true(self):
        web = Service('web', project='test', image="alpine", net=Net('test'))
        db = Service('db', project='test', image="alpine", net=Net('other'))
        project = Project('test', [web, db], None)
        assert project.uses_default_network()

    def test_uses_default_network_custom_name(self):
        web = Service('web', project='test', image="alpine", net=Net('other'))
        project = Project('test', [web], None)
        assert not project.uses_default_network()

    def test_uses_default_network_host(self):
        web = Service('web', project='test', image="alpine", net=Net('host'))
        project = Project('test', [web], None)
        assert not project.uses_default_network()

    def test_uses_default_network_container(self):
        container = mock.Mock(id='test')
        web = Service(
            'web',
            project='test',
            image="alpine",
            net=ContainerNet(container))
        project = Project('test', [web], None)
        assert not project.uses_default_network()

    def test_container_without_name(self):
        self.mock_client.containers.return_value = [
            {'Image': 'busybox:latest', 'Id': '1', 'Name': '1'},
            {'Image': 'busybox:latest', 'Id': '2', 'Name': None},
            {'Image': 'busybox:latest', 'Id': '3'},
        ]
        self.mock_client.inspect_container.return_value = {
            'Id': '1',
            'Config': {
                'Labels': {
                    LABEL_SERVICE: 'web',
                },
            },
        }
        project = Project.from_dicts(
            'test',
            [{
                'name': 'web',
                'image': 'busybox:latest',
            }],
            self.mock_client,
        )
        self.assertEqual([c.id for c in project.containers()], ['1'])
compose-1.5.2/tests/unit/service_test.py
from __future__ import absolute_import
from __future__ import unicode_literals

import docker

from .. import mock
from .. import unittest
from compose.config.types import VolumeFromSpec
from compose.config.types import VolumeSpec
from compose.const import LABEL_CONFIG_HASH
from compose.const import LABEL_ONE_OFF
from compose.const import LABEL_PROJECT
from compose.const import LABEL_SERVICE
from compose.container import Container
from compose.service import build_ulimits
from compose.service import build_volume_binding
from compose.service import ContainerNet
from compose.service import get_container_data_volumes
from compose.service import merge_volume_bindings
from compose.service import NeedsBuildError
from compose.service import Net
from compose.service import NoSuchImageError
from compose.service import parse_repository_tag
from compose.service import Service
from compose.service import ServiceNet
from compose.service import warn_on_masked_volume


class ServiceTest(unittest.TestCase):
    def setUp(self):
        self.mock_client = mock.create_autospec(docker.Client)

    def test_containers(self):
        service = Service('db', self.mock_client, 'myproject', image='foo')
        self.mock_client.containers.return_value = []
        self.assertEqual(list(service.containers()), [])

    def test_containers_with_containers(self):
        self.mock_client.containers.return_value = [
            dict(Name=str(i), Image='foo', Id=i) for i in range(3)
        ]
        service = Service('db', self.mock_client, 'myproject', image='foo')
        self.assertEqual([c.id for c in service.containers()], list(range(3)))

        expected_labels = [
            '{0}=myproject'.format(LABEL_PROJECT),
            '{0}=db'.format(LABEL_SERVICE),
            '{0}=False'.format(LABEL_ONE_OFF),
        ]

        self.mock_client.containers.assert_called_once_with(
            all=False,
            filters={'label': expected_labels})

    def test_container_without_name(self):
        self.mock_client.containers.return_value = [
            {'Image': 'foo', 'Id': '1', 'Name': '1'},
            {'Image': 'foo', 'Id': '2', 'Name': None},
            {'Image': 'foo', 'Id': '3'},
        ]
        service = Service('db', self.mock_client, 'myproject', image='foo')

        self.assertEqual([c.id for c in service.containers()], ['1'])
        self.assertEqual(service._next_container_number(), 2)
        self.assertEqual(service.get_container(1).id, '1')

    def test_get_volumes_from_container(self):
        container_id = 'aabbccddee'
        service = Service(
            'test',
            image='foo',
            volumes_from=[VolumeFromSpec(mock.Mock(id=container_id, spec=Container), 'rw')])

        self.assertEqual(service._get_volumes_from(), [container_id + ':rw'])

    def test_get_volumes_from_container_read_only(self):
        container_id = 'aabbccddee'
        service = Service(
            'test',
            image='foo',
            volumes_from=[VolumeFromSpec(mock.Mock(id=container_id, spec=Container), 'ro')])

        self.assertEqual(service._get_volumes_from(), [container_id + ':ro'])

    def test_get_volumes_from_service_container_exists(self):
        container_ids = ['aabbccddee', '12345']
        from_service = mock.create_autospec(Service)
        from_service.containers.return_value = [
            mock.Mock(id=container_id, spec=Container)
            for container_id in container_ids
        ]
        service = Service('test', volumes_from=[VolumeFromSpec(from_service, 'rw')], image='foo')

        self.assertEqual(service._get_volumes_from(), [container_ids[0] + ":rw"])
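    # A short sketch of how a volumes_from entry resolves to a bind string,
    # assuming the VolumeFromSpec(source, mode) shape used in these tests:
    def _example_volumes_from_mode(self):
        container = mock.Mock(id='cafebabe', spec=Container)
        service = Service(
            'test',
            image='foo',
            volumes_from=[VolumeFromSpec(container, 'ro')])
        # Resolves to '<container id>:<mode>', here ['cafebabe:ro'].
        return service._get_volumes_from()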
    def test_get_volumes_from_service_container_exists_with_flags(self):
        for mode in ['ro', 'rw', 'z', 'rw,z', 'z,rw']:
            container_ids = ['aabbccddee:' + mode, '12345:' + mode]
            from_service = mock.create_autospec(Service)
            from_service.containers.return_value = [
                mock.Mock(id=container_id.split(':')[0], spec=Container)
                for container_id in container_ids
            ]
            service = Service('test', volumes_from=[VolumeFromSpec(from_service, mode)], image='foo')

            self.assertEqual(service._get_volumes_from(), [container_ids[0]])

    def test_get_volumes_from_service_no_container(self):
        container_id = 'abababab'
        from_service = mock.create_autospec(Service)
        from_service.containers.return_value = []
        from_service.create_container.return_value = mock.Mock(
            id=container_id,
            spec=Container)
        service = Service('test', image='foo', volumes_from=[VolumeFromSpec(from_service, 'rw')])

        self.assertEqual(service._get_volumes_from(), [container_id + ':rw'])
        from_service.create_container.assert_called_once_with()

    def test_split_domainname_none(self):
        service = Service('foo', image='foo', hostname='name', client=self.mock_client)
        opts = service._get_container_create_options({'image': 'foo'}, 1)
        self.assertEqual(opts['hostname'], 'name', 'hostname')
        self.assertFalse('domainname' in opts, 'domainname')

    def test_memory_swap_limit(self):
        self.mock_client.create_host_config.return_value = {}

        service = Service(
            name='foo',
            image='foo',
            hostname='name',
            client=self.mock_client,
            mem_limit=1000000000,
            memswap_limit=2000000000)
        service._get_container_create_options({'some': 'overrides'}, 1)

        self.assertTrue(self.mock_client.create_host_config.called)
        self.assertEqual(
            self.mock_client.create_host_config.call_args[1]['mem_limit'],
            1000000000
        )
        self.assertEqual(
            self.mock_client.create_host_config.call_args[1]['memswap_limit'],
            2000000000
        )

    def test_cgroup_parent(self):
        self.mock_client.create_host_config.return_value = {}

        service = Service(
            name='foo',
            image='foo',
            hostname='name',
            client=self.mock_client,
            cgroup_parent='test')
        service._get_container_create_options({'some': 'overrides'}, 1)

        self.assertTrue(self.mock_client.create_host_config.called)
        self.assertEqual(
            self.mock_client.create_host_config.call_args[1]['cgroup_parent'],
            'test'
        )

    def test_log_opt(self):
        self.mock_client.create_host_config.return_value = {}

        log_opt = {'syslog-address': 'tcp://192.168.0.42:123'}
        service = Service(
            name='foo',
            image='foo',
            hostname='name',
            client=self.mock_client,
            log_driver='syslog',
            log_opt=log_opt)
        service._get_container_create_options({'some': 'overrides'}, 1)

        self.assertTrue(self.mock_client.create_host_config.called)
        self.assertEqual(
            self.mock_client.create_host_config.call_args[1]['log_config'],
            {'Type': 'syslog', 'Config': {'syslog-address': 'tcp://192.168.0.42:123'}}
        )

    def test_split_domainname_fqdn(self):
        service = Service(
            'foo',
            hostname='name.domain.tld',
            image='foo',
            client=self.mock_client)
        opts = service._get_container_create_options({'image': 'foo'}, 1)
        self.assertEqual(opts['hostname'], 'name', 'hostname')
        self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')

    def test_split_domainname_both(self):
        service = Service(
            'foo',
            hostname='name',
            image='foo',
            domainname='domain.tld',
            client=self.mock_client)
        opts = service._get_container_create_options({'image': 'foo'}, 1)
        self.assertEqual(opts['hostname'], 'name', 'hostname')
        self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')
    def test_split_domainname_weird(self):
        service = Service(
            'foo',
            hostname='name.sub',
            domainname='domain.tld',
            image='foo',
            client=self.mock_client)
        opts = service._get_container_create_options({'image': 'foo'}, 1)
        self.assertEqual(opts['hostname'], 'name.sub', 'hostname')
        self.assertEqual(opts['domainname'], 'domain.tld', 'domainname')

    def test_no_default_hostname_when_not_using_networking(self):
        service = Service(
            'foo',
            image='foo',
            use_networking=False,
            client=self.mock_client,
        )
        opts = service._get_container_create_options({'image': 'foo'}, 1)
        self.assertIsNone(opts.get('hostname'))

    def test_get_container_create_options_with_name_option(self):
        service = Service(
            'foo',
            image='foo',
            client=self.mock_client,
            container_name='foo1')
        name = 'the_new_name'
        opts = service._get_container_create_options(
            {'name': name},
            1,
            one_off=True)
        self.assertEqual(opts['name'], name)

    def test_get_container_create_options_does_not_mutate_options(self):
        labels = {'thing': 'real'}
        environment = {'also': 'real'}
        service = Service(
            'foo',
            image='foo',
            labels=dict(labels),
            client=self.mock_client,
            environment=dict(environment),
        )
        self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
        prev_container = mock.Mock(
            id='ababab',
            image_config={'ContainerConfig': {}})

        opts = service._get_container_create_options(
            {},
            1,
            previous_container=prev_container)

        self.assertEqual(service.options['labels'], labels)
        self.assertEqual(service.options['environment'], environment)

        self.assertEqual(
            opts['labels'][LABEL_CONFIG_HASH],
            '3c85881a8903b9d73a06c41860c8be08acce1494ab4cf8408375966dccd714de')
        self.assertEqual(
            opts['environment'],
            {
                'affinity:container': '=ababab',
                'also': 'real',
            }
        )

    def test_get_container_not_found(self):
        self.mock_client.containers.return_value = []
        service = Service('foo', client=self.mock_client, image='foo')

        self.assertRaises(ValueError, service.get_container)

    @mock.patch('compose.service.Container', autospec=True)
    def test_get_container(self, mock_container_class):
        container_dict = dict(Name='default_foo_2')
        self.mock_client.containers.return_value = [container_dict]
        service = Service('foo', image='foo', client=self.mock_client)

        container = service.get_container(number=2)
        self.assertEqual(container, mock_container_class.from_ps.return_value)
        mock_container_class.from_ps.assert_called_once_with(
            self.mock_client, container_dict)

    @mock.patch('compose.service.log', autospec=True)
    def test_pull_image(self, mock_log):
        service = Service('foo', client=self.mock_client, image='someimage:sometag')
        service.pull()
        self.mock_client.pull.assert_called_once_with(
            'someimage',
            tag='sometag',
            stream=True)
        mock_log.info.assert_called_once_with('Pulling foo (someimage:sometag)...')

    def test_pull_image_no_tag(self):
        service = Service('foo', client=self.mock_client, image='ababab')
        service.pull()
        self.mock_client.pull.assert_called_once_with(
            'ababab',
            tag='latest',
            stream=True)

    @mock.patch('compose.service.log', autospec=True)
    def test_pull_image_digest(self, mock_log):
        service = Service('foo', client=self.mock_client, image='someimage@sha256:1234')
        service.pull()
        self.mock_client.pull.assert_called_once_with(
            'someimage',
            tag='sha256:1234',
            stream=True)
        mock_log.info.assert_called_once_with('Pulling foo (someimage@sha256:1234)...')

    @mock.patch('compose.service.Container', autospec=True)
    def test_recreate_container(self, _):
        mock_container = mock.create_autospec(Container)
        service = Service('foo', client=self.mock_client, image='someimage')
        service.image = lambda: {'Id': 'abc123'}
        new_container = service.recreate_container(mock_container)

        mock_container.stop.assert_called_once_with(timeout=10)
        mock_container.rename_to_tmp_name.assert_called_once_with()

        new_container.start.assert_called_once_with()
        mock_container.remove.assert_called_once_with()
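    # The recreate flow asserted above, in order: stop the old container,
    # rename it out of the way, start the replacement, then remove the
    # original. The stop timeout defaults to 10 seconds unless overridden,
    # as the next test shows.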
    @mock.patch('compose.service.Container', autospec=True)
    def test_recreate_container_with_timeout(self, _):
        mock_container = mock.create_autospec(Container)
        self.mock_client.inspect_image.return_value = {'Id': 'abc123'}
        service = Service('foo', client=self.mock_client, image='someimage')
        service.recreate_container(mock_container, timeout=1)

        mock_container.stop.assert_called_once_with(timeout=1)

    def test_parse_repository_tag(self):
        self.assertEqual(parse_repository_tag("root"), ("root", "", ":"))
        self.assertEqual(parse_repository_tag("root:tag"), ("root", "tag", ":"))
        self.assertEqual(parse_repository_tag("user/repo"), ("user/repo", "", ":"))
        self.assertEqual(parse_repository_tag("user/repo:tag"), ("user/repo", "tag", ":"))
        self.assertEqual(parse_repository_tag("url:5000/repo"), ("url:5000/repo", "", ":"))
        self.assertEqual(parse_repository_tag("url:5000/repo:tag"), ("url:5000/repo", "tag", ":"))
        self.assertEqual(parse_repository_tag("root@sha256:digest"), ("root", "sha256:digest", "@"))
        self.assertEqual(parse_repository_tag("user/repo@sha256:digest"), ("user/repo", "sha256:digest", "@"))
        self.assertEqual(parse_repository_tag("url:5000/repo@sha256:digest"), ("url:5000/repo", "sha256:digest", "@"))

    def test_create_container_with_build(self):
        service = Service('foo', client=self.mock_client, build='.')
        self.mock_client.inspect_image.side_effect = [
            NoSuchImageError,
            {'Id': 'abc123'},
        ]
        self.mock_client.build.return_value = [
            '{"stream": "Successfully built abcd"}',
        ]

        service.create_container(do_build=True)
        self.mock_client.build.assert_called_once_with(
            tag='default_foo',
            dockerfile=None,
            stream=True,
            path='.',
            pull=False,
            forcerm=False,
            nocache=False,
            rm=True,
        )

    def test_create_container_no_build(self):
        service = Service('foo', client=self.mock_client, build='.')
        self.mock_client.inspect_image.return_value = {'Id': 'abc123'}

        service.create_container(do_build=False)
        self.assertFalse(self.mock_client.build.called)

    def test_create_container_no_build_but_needs_build(self):
        service = Service('foo', client=self.mock_client, build='.')
        self.mock_client.inspect_image.side_effect = NoSuchImageError

        with self.assertRaises(NeedsBuildError):
            service.create_container(do_build=False)

    def test_build_does_not_pull(self):
        self.mock_client.build.return_value = [
            b'{"stream": "Successfully built 12345"}',
        ]

        service = Service('foo', client=self.mock_client, build='.')
        service.build()

        self.assertEqual(self.mock_client.build.call_count, 1)
        self.assertFalse(self.mock_client.build.call_args[1]['pull'])

    def test_config_dict(self):
        self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
        service = Service(
            'foo',
            image='example.com/foo',
            client=self.mock_client,
            net=ServiceNet(Service('other')),
            links=[(Service('one'), 'one')],
            volumes_from=[VolumeFromSpec(Service('two'), 'rw')])

        config_dict = service.config_dict()
        expected = {
            'image_id': 'abcd',
            'options': {'image': 'example.com/foo'},
            'links': [('one', 'one')],
            'net': 'other',
            'volumes_from': [('two', 'rw')],
        }
        self.assertEqual(config_dict, expected)

    def test_config_dict_with_net_from_container(self):
        self.mock_client.inspect_image.return_value = {'Id': 'abcd'}
        container = Container(
            self.mock_client,
            {'Id': 'aaabbb', 'Name': '/foo_1'})
        service = Service(
            'foo',
            image='example.com/foo',
            client=self.mock_client,
            net=container)

        config_dict = service.config_dict()
        expected = {
            'image_id': 'abcd',
            'options': {'image': 'example.com/foo'},
            'links': [],
            'net': 'aaabbb',
            'volumes_from': [],
        }
        self.assertEqual(config_dict, expected)
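    # The port strings exercised below follow docker's publish syntax,
    # [HOST_IP:[HOST_PORT:]]CONTAINER_PORT, with ranges allowed on either
    # side; specifies_host_port() is true only when a host port (or host
    # port range) is given explicitly.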
    def test_specifies_host_port_with_no_ports(self):
        service = Service(
            'foo',
            image='foo')
        self.assertEqual(service.specifies_host_port(), False)

    def test_specifies_host_port_with_container_port(self):
        service = Service(
            'foo',
            image='foo',
            ports=["2000"])
        self.assertEqual(service.specifies_host_port(), False)

    def test_specifies_host_port_with_host_port(self):
        service = Service(
            'foo',
            image='foo',
            ports=["1000:2000"])
        self.assertEqual(service.specifies_host_port(), True)

    def test_specifies_host_port_with_host_ip_no_port(self):
        service = Service(
            'foo',
            image='foo',
            ports=["127.0.0.1::2000"])
        self.assertEqual(service.specifies_host_port(), False)

    def test_specifies_host_port_with_host_ip_and_port(self):
        service = Service(
            'foo',
            image='foo',
            ports=["127.0.0.1:1000:2000"])
        self.assertEqual(service.specifies_host_port(), True)

    def test_specifies_host_port_with_container_port_range(self):
        service = Service(
            'foo',
            image='foo',
            ports=["2000-3000"])
        self.assertEqual(service.specifies_host_port(), False)

    def test_specifies_host_port_with_host_port_range(self):
        service = Service(
            'foo',
            image='foo',
            ports=["1000-2000:2000-3000"])
        self.assertEqual(service.specifies_host_port(), True)

    def test_specifies_host_port_with_host_ip_no_port_range(self):
        service = Service(
            'foo',
            image='foo',
            ports=["127.0.0.1::2000-3000"])
        self.assertEqual(service.specifies_host_port(), False)

    def test_specifies_host_port_with_host_ip_and_port_range(self):
        service = Service(
            'foo',
            image='foo',
            ports=["127.0.0.1:1000-2000:2000-3000"])
        self.assertEqual(service.specifies_host_port(), True)

    def test_get_links_with_networking(self):
        service = Service(
            'foo',
            image='foo',
            links=[(Service('one'), 'one')],
            use_networking=True)
        self.assertEqual(service._get_links(link_to_self=True), [])


def sort_by_name(dictionary_list):
    return sorted(dictionary_list, key=lambda k: k['name'])


class BuildUlimitsTestCase(unittest.TestCase):

    def test_build_ulimits_with_dict(self):
        ulimits = build_ulimits(
            {
                'nofile': {'soft': 10000, 'hard': 20000},
                'nproc': {'soft': 65535, 'hard': 65535}
            }
        )
        expected = [
            {'name': 'nofile', 'soft': 10000, 'hard': 20000},
            {'name': 'nproc', 'soft': 65535, 'hard': 65535}
        ]
        assert sort_by_name(ulimits) == sort_by_name(expected)

    def test_build_ulimits_with_ints(self):
        ulimits = build_ulimits({'nofile': 20000, 'nproc': 65535})
        expected = [
            {'name': 'nofile', 'soft': 20000, 'hard': 20000},
            {'name': 'nproc', 'soft': 65535, 'hard': 65535}
        ]
        assert sort_by_name(ulimits) == sort_by_name(expected)

    def test_build_ulimits_with_integers_and_dicts(self):
        ulimits = build_ulimits(
            {
                'nproc': 65535,
                'nofile': {'soft': 10000, 'hard': 20000}
            }
        )
        expected = [
            {'name': 'nofile', 'soft': 10000, 'hard': 20000},
            {'name': 'nproc', 'soft': 65535, 'hard': 65535}
        ]
        assert sort_by_name(ulimits) == sort_by_name(expected)


class NetTestCase(unittest.TestCase):

    def test_net(self):
        net = Net('host')
        self.assertEqual(net.id, 'host')
        self.assertEqual(net.mode, 'host')
        self.assertEqual(net.service_name, None)

    def test_net_container(self):
        container_id = 'abcd'
        net = ContainerNet(Container(None, {'Id': container_id}))
        self.assertEqual(net.id, container_id)
        self.assertEqual(net.mode, 'container:' + container_id)
        self.assertEqual(net.service_name, None)
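    # For reference, the three net flavours under test here map onto
    # docker's --net values roughly as follows: Net('host') -> 'host',
    # ContainerNet(container) -> 'container:<id>', and ServiceNet(service)
    # -> 'container:<id of the service's first container>', or a None mode
    # when the service has no containers, as the next tests show.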
    def test_net_service(self):
        container_id = 'bbbb'
        service_name = 'web'
        mock_client = mock.create_autospec(docker.Client)
        mock_client.containers.return_value = [
            {'Id': container_id, 'Name': container_id, 'Image': 'abcd'},
        ]

        service = Service(name=service_name, client=mock_client)
        net = ServiceNet(service)

        self.assertEqual(net.id, service_name)
        self.assertEqual(net.mode, 'container:' + container_id)
        self.assertEqual(net.service_name, service_name)

    def test_net_service_no_containers(self):
        service_name = 'web'
        mock_client = mock.create_autospec(docker.Client)
        mock_client.containers.return_value = []

        service = Service(name=service_name, client=mock_client)
        net = ServiceNet(service)

        self.assertEqual(net.id, service_name)
        self.assertEqual(net.mode, None)
        self.assertEqual(net.service_name, service_name)


class ServiceVolumesTest(unittest.TestCase):

    def setUp(self):
        self.mock_client = mock.create_autospec(docker.Client)

    def test_build_volume_binding(self):
        binding = build_volume_binding(VolumeSpec.parse('/outside:/inside'))
        assert binding == ('/inside', '/outside:/inside:rw')

    def test_get_container_data_volumes(self):
        options = [VolumeSpec.parse(v) for v in [
            '/host/volume:/host/volume:ro',
            '/new/volume',
            '/existing/volume',
        ]]

        self.mock_client.inspect_image.return_value = {
            'ContainerConfig': {
                'Volumes': {
                    '/mnt/image/data': {},
                }
            }
        }
        container = Container(self.mock_client, {
            'Image': 'ababab',
            'Volumes': {
                '/host/volume': '/host/volume',
                '/existing/volume': '/var/lib/docker/aaaaaaaa',
                '/removed/volume': '/var/lib/docker/bbbbbbbb',
                '/mnt/image/data': '/var/lib/docker/cccccccc',
            },
        }, has_been_inspected=True)

        expected = [
            VolumeSpec.parse('/var/lib/docker/aaaaaaaa:/existing/volume:rw'),
            VolumeSpec.parse('/var/lib/docker/cccccccc:/mnt/image/data:rw'),
        ]

        volumes = get_container_data_volumes(container, options)
        assert sorted(volumes) == sorted(expected)

    def test_merge_volume_bindings(self):
        options = [
            VolumeSpec.parse('/host/volume:/host/volume:ro'),
            VolumeSpec.parse('/host/rw/volume:/host/rw/volume'),
            VolumeSpec.parse('/new/volume'),
            VolumeSpec.parse('/existing/volume'),
        ]

        self.mock_client.inspect_image.return_value = {
            'ContainerConfig': {'Volumes': {}}
        }

        intermediate_container = Container(self.mock_client, {
            'Image': 'ababab',
            'Volumes': {'/existing/volume': '/var/lib/docker/aaaaaaaa'},
        }, has_been_inspected=True)

        expected = [
            '/host/volume:/host/volume:ro',
            '/host/rw/volume:/host/rw/volume:rw',
            '/var/lib/docker/aaaaaaaa:/existing/volume:rw',
        ]

        binds = merge_volume_bindings(options, intermediate_container)
        self.assertEqual(set(binds), set(expected))

    def test_mount_same_host_path_to_two_volumes(self):
        service = Service(
            'web',
            image='busybox',
            volumes=[
                VolumeSpec.parse('/host/path:/data1'),
                VolumeSpec.parse('/host/path:/data2'),
            ],
            client=self.mock_client,
        )

        self.mock_client.inspect_image.return_value = {
            'Id': 'ababab',
            'ContainerConfig': {
                'Volumes': {}
            }
        }

        service._get_container_create_options(
            override_options={},
            number=1,
        )

        self.assertEqual(
            set(self.mock_client.create_host_config.call_args[1]['binds']),
            set([
                '/host/path:/data1:rw',
                '/host/path:/data2:rw',
            ]),
        )

    def test_different_host_path_in_container_json(self):
        service = Service(
            'web',
            image='busybox',
            volumes=[VolumeSpec.parse('/host/path:/data')],
            client=self.mock_client,
        )

        self.mock_client.inspect_image.return_value = {
            'Id': 'ababab',
            'ContainerConfig': {
                'Volumes': {
                    '/data': {},
                }
            }
        }

        self.mock_client.inspect_container.return_value = {
            'Id': '123123123',
            'Image': 'ababab',
            'Volumes': {
                '/data': '/mnt/sda1/host/path',
            },
        }

        service._get_container_create_options(
            override_options={},
            number=1,
            previous_container=Container(self.mock_client, {'Id': '123123123'}),
        )

        self.assertEqual(
            self.mock_client.create_host_config.call_args[1]['binds'],
            ['/mnt/sda1/host/path:/data:rw'],
        )
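    # VolumeSpec.parse() normalizes the 'external:internal:mode' string form
    # used throughout this class; a minimal sketch of the triple it yields,
    # assuming the parse() behavior these tests depend on (mode appears to
    # default to 'rw' when omitted, per test_build_volume_binding above):
    def _example_volume_spec(self):
        assert VolumeSpec.parse('/host/path:/data:ro') == \
            VolumeSpec('/host/path', '/data', 'ro')
        return VolumeSpec.parse('/host/path:/data')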
    def test_warn_on_masked_volume_no_warning_when_no_container_volumes(self):
        volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
        container_volumes = []
        service = 'service_name'

        with mock.patch('compose.service.log', autospec=True) as mock_log:
            warn_on_masked_volume(volumes_option, container_volumes, service)

        assert not mock_log.warn.called

    def test_warn_on_masked_volume_when_masked(self):
        volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
        container_volumes = [
            VolumeSpec('/var/lib/docker/path', '/path', 'rw'),
            VolumeSpec('/var/lib/docker/path', '/other', 'rw'),
        ]
        service = 'service_name'

        with mock.patch('compose.service.log', autospec=True) as mock_log:
            warn_on_masked_volume(volumes_option, container_volumes, service)

        mock_log.warn.assert_called_once_with(mock.ANY)

    def test_warn_on_masked_no_warning_with_same_path(self):
        volumes_option = [VolumeSpec('/home/user', '/path', 'rw')]
        container_volumes = [VolumeSpec('/home/user', '/path', 'rw')]
        service = 'service_name'

        with mock.patch('compose.service.log', autospec=True) as mock_log:
            warn_on_masked_volume(volumes_option, container_volumes, service)

        assert not mock_log.warn.called

    def test_create_with_special_volume_mode(self):
        self.mock_client.inspect_image.return_value = {'Id': 'imageid'}
        self.mock_client.create_container.return_value = {'Id': 'containerid'}

        volume = '/tmp:/foo:z'
        Service(
            'web',
            client=self.mock_client,
            image='busybox',
            volumes=[VolumeSpec.parse(volume)],
        ).create_container()

        assert self.mock_client.create_container.call_count == 1
        self.assertEqual(
            self.mock_client.create_host_config.call_args[1]['binds'],
            [volume])
compose-1.5.2/tests/unit/split_buffer_test.py
from __future__ import absolute_import
from __future__ import unicode_literals

from .. import unittest
from compose.utils import split_buffer


class SplitBufferTest(unittest.TestCase):
    def test_single_line_chunks(self):
        def reader():
            yield b'abc\n'
            yield b'def\n'
            yield b'ghi\n'

        self.assert_produces(reader, ['abc\n', 'def\n', 'ghi\n'])

    def test_no_end_separator(self):
        def reader():
            yield b'abc\n'
            yield b'def\n'
            yield b'ghi'

        self.assert_produces(reader, ['abc\n', 'def\n', 'ghi'])

    def test_multiple_line_chunk(self):
        def reader():
            yield b'abc\ndef\nghi'

        self.assert_produces(reader, ['abc\n', 'def\n', 'ghi'])

    def test_chunked_line(self):
        def reader():
            yield b'a'
            yield b'b'
            yield b'c'
            yield b'\n'
            yield b'd'

        self.assert_produces(reader, ['abc\n', 'd'])

    def test_preserves_unicode_sequences_within_lines(self):
        string = u"a\u2022c\n"

        def reader():
            yield string.encode('utf-8')

        self.assert_produces(reader, [string])

    def assert_produces(self, reader, expectations):
        split = split_buffer(reader())

        for (actual, expected) in zip(split, expectations):
            self.assertEqual(type(actual), type(expected))
            self.assertEqual(actual, expected)
compose-1.5.2/tests/unit/utils_test.py
# encoding: utf-8
from __future__ import unicode_literals

from compose import utils


class TestJsonSplitter(object):

    def test_json_splitter_no_object(self):
        data = '{"foo": "bar'
        assert utils.json_splitter(data) is None

    def test_json_splitter_with_object(self):
        data = '{"foo": "bar"}\n \n{"next": "obj"}'
        assert utils.json_splitter(data) == ({'foo': 'bar'}, '{"next": "obj"}')


class TestStreamAsText(object):

    def test_stream_with_non_utf_unicode_character(self):
        stream = [b'\xed\xf3\xf3']
        output, = utils.stream_as_text(stream)
        assert output == '���'

    def test_stream_with_utf_character(self):
        stream = ['ěĝ'.encode('utf-8')]
        output, = utils.stream_as_text(stream)
        assert output == 'ěĝ'


class TestJsonStream(object):

    def test_with_falsy_entries(self):
        stream = [
            '{"one": "two"}\n{}\n',
            "[1, 2, 3]\n[]\n",
        ]
        output = list(utils.json_stream(stream))
        assert output == [
            {'one': 'two'},
            {},
            [1, 2, 3],
            [],
        ]
compose-1.5.2/tox.ini
[tox]
envlist = py27,py34,pre-commit

[testenv]
usedevelop = True
passenv =
    LD_LIBRARY_PATH
    DOCKER_HOST
    DOCKER_CERT_PATH
    DOCKER_TLS_VERIFY
setenv =
    HOME=/tmp
deps =
    -rrequirements.txt
    -rrequirements-dev.txt
commands =
    py.test -v -rxs \
        --cov=compose \
        --cov-report html \
        --cov-report term \
        --cov-config=tox.ini \
        {posargs:tests}

[testenv:pre-commit]
skip_install = True
deps =
    pre-commit
commands =
    pre-commit install
    pre-commit run --all-files

# Coverage configuration
[run]
branch = True

[report]
show_missing = true

[html]
directory = coverage-html
# end coverage configuration

[flake8]
# Allow really long lines for now
max-line-length = 140
# Set this high for now
max-complexity = 20
exclude = compose/packages
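
# A typical local workflow, assuming the envlist above (not an exhaustive
# reference): run a single environment with `tox -e py27`, run only the
# lint/format hooks with `tox -e pre-commit`, and forward extra pytest
# arguments through posargs, e.g. `tox -e py27 -- tests/unit`.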