Disclaimer: all code snippets below work only with Docker 1.13+.

TL;DR: Docker 1.13 simplifies deployment of a composed application to a swarm (mode) cluster. You can do it without creating a new dab (Distribution Application Bundle) file, just using the familiar and well-known docker-compose.yml syntax (with some additions) and the --compose-file option.

Swarm cluster

Docker Engine 1.12 introduced a new swarm mode for natively managing a cluster of Docker Engines, called a swarm. Docker swarm mode implements the Raft Consensus Algorithm and no longer requires an external key-value store, such as Consul or etcd.

If you want to run a swarm cluster on a developer's machine, there are several options.

The first option, and the most widely known, is to use the docker-machine tool with some virtual driver (VirtualBox, Parallels, or other).

But in this post I will use another approach: a docker-in-docker Docker image with Docker for Mac. See more details in my post "Docker Swarm cluster with docker-in-docker on MacOS".

Docker Registry mirror

When you deploy a new service on a local swarm cluster, I recommend setting up a local Docker registry mirror and running all swarm nodes with the --registry-mirror option, pointing to the local Docker registry mirror. By running a local Docker registry mirror, you can keep most of the redundant image fetch traffic on your local network and speed up service deployment.

Docker Swarm cluster bootstrap script

I've prepared a shell script to bootstrap a 4-node swarm cluster with a Docker registry mirror and the very nice swarm visualizer application.

The script initializes the Docker engine as a swarm master, then starts 3 new docker-in-docker containers and joins them to the swarm cluster as worker nodes. All worker nodes run with the --registry-mirror option.

```bash
#!/bin/bash

# vars
[ -z "$NUM_WORKERS" ] && NUM_WORKERS=3

# init swarm (need for service command); if not created
docker node ls 2> /dev/null | grep "Leader"
if [ $? -ne 0 ]; then
  docker swarm init > /dev/null 2>&1
fi

# get join token
SWARM_TOKEN=$(docker swarm join-token -q worker)

# get Swarm master IP (Docker for Mac xhyve VM IP)
SWARM_MASTER=$(docker info --format "{{.Swarm.NodeAddr}}")
echo "Swarm master IP: ${SWARM_MASTER}"
sleep 10

# start Docker registry mirror
docker run -d --restart=always -p 4000:5000 --name v2_mirror \
  -v $PWD/rdata:/var/lib/registry \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2.5

# run NUM_WORKERS workers with SWARM_TOKEN
for i in $(seq "${NUM_WORKERS}"); do
  # remove node from cluster if exists
  docker node rm --force \
    $(docker node ls --filter "name=worker-${i}" -q) \
    > /dev/null 2>&1
  # remove worker container with same name if exists
  docker rm --force \
    $(docker ps -q --filter "name=worker-${i}") > /dev/null 2>&1
  # run new worker container
  docker run -d --privileged --name worker-${i} \
    --hostname=worker-${i} \
    -p ${i}2375:2375 \
    -p ${i}5000:5000 \
    -p ${i}5001:5001 \
    -p ${i}5601:5601 \
    docker:1.13-rc-dind \
    --registry-mirror http://${SWARM_MASTER}:4000
  # add worker container to the cluster
  docker --host=localhost:${i}2375 swarm join \
    --token ${SWARM_TOKEN} ${SWARM_MASTER}:2377
done

# show swarm cluster
printf "\nLocal Swarm Cluster\n===================\n"
docker node ls

# run swarm visualizer
printf "\nLocal Swarm Visualizer\n===================\n"
docker run -it -d --name swarm_visualizer \
  -p 8000:8080 -e HOST=localhost \
  -v /var/run/docker.sock:/var/run/docker.sock \
  manomarks/visualizer:beta
```

Deploy multi-container application — the “old” way

Docker compose is a tool (and deployment specification format) for defining and running composed multi-container Docker applications. Before Docker 1.12, you could use the docker-compose tool to deploy such applications to a swarm cluster. With the 1.12 release, that is no longer possible: docker-compose can deploy your application only on a single Docker host.
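To make that single-host limitation concrete: docker-compose always talks to exactly one Engine, selected via the DOCKER_HOST environment variable. Here is a minimal sketch that assumes the port convention from the bootstrap script above (worker N publishes its Docker API on localhost:N2375); the worker_docker_host helper is my own illustration, not part of the script:

```shell
# Hypothetical helper: build a DOCKER_HOST value for worker N, following
# the bootstrap script's "-p ${i}2375:2375" port-mapping convention.
worker_docker_host() {
  echo "localhost:${1}2375"
}

worker_docker_host 1   # -> localhost:12375
worker_docker_host 3   # -> localhost:32375

# docker-compose would then deploy to that one Engine only, e.g.:
# DOCKER_HOST=$(worker_docker_host 2) docker-compose up -d
```

Even pointed at a swarm member this way, docker-compose schedules containers on that single node; it does not spread them across the cluster.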
In order to deploy it to a swarm cluster, you need to create a special deployment specification file (also known as a Distribution Application Bundle) in dab format (see more here).

The way to create this file is to run the docker-compose bundle command. The output of this command is a JSON file that describes a multi-container composed application with Docker images referenced by @sha256 instead of tags. Currently the dab file format does not support multiple settings from docker-compose.yml and does not allow using supported options from the docker service create command.

Such a pity story: the dab bundle format looks promising, but currently it is totally useless (at least in Docker 1.12).

Deploy multi-container application — the “new” way

With Docker 1.13, the “new” way to deploy a multi-container composed application is to use docker-compose.yml again (hurrah!). Kudos to the Docker team!

Note: you do not need the docker-compose tool, only a yaml file in docker-compose format (version: "3"):

```bash
$ docker deploy --compose-file docker-compose.yml
```

Docker compose v3 (version: "3")

So, what's new in docker compose version 3?

First, I suggest you take a deeper look at the docker-compose v3 schema. It is an extension of the well-known docker-compose format.

Note: the docker-compose tool (ver. 1.9.0) does not support docker-compose.yaml version: "3" yet.

The most visible change is around swarm service deployment. Now you can specify all options supported by the docker service create/update commands:

- number of service replicas (or global service)
- service labels
- hard and soft limits for service (container) CPU and memory
- service restart policy
- service rolling update policy
- deployment placement constraints

Docker compose v3 example

I've created a “new” compose file (v3) for the classic “Cats vs. Dogs” example. This example application contains 5 services with the following deployment configurations:

- voting-app - a Python webapp which lets you vote between two options; requires redis
- redis - Redis queue which collects new votes; deployed on a swarm manager node
- db - Postgres database backed by a Docker volume; deployed on a swarm manager node
- result-app - Node.js webapp which shows the results of the voting in real time; 2 replicas, deployed on swarm worker nodes
- worker - .NET worker which consumes votes and stores them in db

The worker service is deployed with:

- # of replicas: 2 replicas
- hard limit: max 25% CPU and 512MB memory
- soft limit: max 25% CPU and 256MB memory
- placement: on swarm worker nodes only
- restart policy: restart on-failure, with 5 seconds delay, up to 3 attempts
- update policy: one by one, with 10 seconds delay and 0.3 failure rate to tolerate during the update

```yaml
version: "3"

services:
  redis:
    image: redis:3.2-alpine
    ports:
      - "6379"
    networks:
      - voteapp
    deploy:
      placement:
        constraints: [node.role == manager]
  db:
    image: postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - voteapp
    deploy:
      placement:
        constraints: [node.role == manager]
  voting-app:
    image: gaiadocker/example-voting-app-vote:good
    ports:
      - 5000:80
    networks:
      - voteapp
    depends_on:
      - redis
    deploy:
      mode: replicated
      replicas: 2
      labels: [APP=VOTING]
      placement:
        constraints: [node.role == worker]
  result-app:
    image: gaiadocker/example-voting-app-result:latest
    ports:
      - 5001:80
    networks:
      - voteapp
    depends_on:
      - db
  worker:
    image: gaiadocker/example-voting-app-worker:latest
    networks:
      voteapp:
        aliases:
          - workers
    depends_on:
      - db
      - redis
    # service deployment
    deploy:
      mode: replicated
      replicas: 2
      labels: [APP=VOTING]
      # service resource management
      resources:
        # Hard limit - Docker does not allow to allocate more
        limits:
          cpus: '0.25'
          memory: 512M
        # Soft limit - Docker makes best effort to return to it
        reservations:
          cpus: '0.25'
          memory: 256M
      # service restart policy
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
      # service update configuration
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: continue
        monitor: 60s
        max_failure_ratio: 0.3
      # placement constraint - in this case on 'worker' nodes only
      placement:
        constraints: [node.role == worker]

networks:
  voteapp:

volumes:
  db-data:
```

Run the docker deploy --compose-file docker-compose.yml VOTE command to deploy my version of the “Cats vs. Dogs” application on a swarm cluster.

Cats vs. Dogs on Swarm cluster

Hope you find this post useful. I look forward to your comments and any questions you have.

Originally published at Codefresh Blog.