Docker Machine is the single easiest way to provision a new Docker host. I use it to set up new remote staging servers, and it takes a minute at most: it picks an appropriate Linux distribution for the Docker Engine and installs the Docker daemon, all in one go.
It can do all of this with 15 different cloud providers! You can set up a server on any of them using the same simple command, docker-machine create.
For example, if you’re using AWS, Docker Machine calls the AWS API on your behalf to create an EC2 instance in your AWS account.
— it’s blazing fast
— the command line is dead simple
— it manages all of the SSH keys and TLS certificates even if you have dozens of servers
— it makes your servers immediately ready for Docker deployments
But there’s one pain: it can only store its configuration locally, on your own computer.
So after you’ve set up your servers and deployed your project, your teammates won’t be able to connect to the machine and redeploy by themselves.
How can you share access to the Docker Host with your teammates? They need both the SSH keys and the TLS certificates that Machine created to connect to the remote Docker daemon.
You need at least the TLS certificates to connect to your remote Docker host from your local Docker client: they are what secures the connection to the Docker HTTP API exposed on port 2376.
Btw, how does it work?
Docker Machine stores all of its keys and certificates on your local computer. It’s all in ~/.docker/machine.
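As a sketch of that layout (these are the typical files; the exact set varies by driver and Machine version), the snippet below recreates it in a temporary directory rather than touching your real ~/.docker:

```shell
# Sketch of Docker Machine's local storage layout, recreated in a
# temp dir. "my_machine" is an illustrative machine name.
STORE=$(mktemp -d)/machine
mkdir -p "$STORE/certs" "$STORE/machines/my_machine"
# Master (CA) certificates, shared by every machine you create:
touch "$STORE/certs/ca.pem" "$STORE/certs/ca-key.pem" \
      "$STORE/certs/cert.pem" "$STORE/certs/key.pem"
# Per-machine files: client certs, SSH key, and config.json holding
# the host IP, driver settings and absolute certificate paths:
touch "$STORE/machines/my_machine/config.json" \
      "$STORE/machines/my_machine/cert.pem" \
      "$STORE/machines/my_machine/key.pem" \
      "$STORE/machines/my_machine/id_rsa"
find "$STORE" -type f | sort
```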
Fail #1: Using the Docker Machine generic driver
Docker Machine’s “generic” driver creates machines from an existing VM/host over SSH. It’s also great if you’ve provisioned your servers with Terraform, for example, as it will connect to them and install the Docker daemon for you.
The generic driver therefore sounds like a valid way for your teammates to add the existing machine on their side:
docker-machine create \
  --driver generic \
  --generic-ip-address YOUR_SERVER_IP \
  --generic-ssh-key PATH_TO_YOUR_SSH_KEY \
  my_machine
This will connect to the remote server, restart the Docker daemon, stop all the running containers and… regenerate the remote TLS certificates. Which means your teammates can connect, but you can’t anymore!
Well, you could regenerate both the local and the remote certificates from your end with docker-machine regenerate-certs, but then your teammates would have to do the same next time, and so on. FAIL
Fail #2: Copying folders
Now that you know how Machine stores the machine configuration internally, the second idea is simply to copy the
~/.docker/machine/machines/xxx folder directly onto your teammate’s computer.
Let’s try it and connect to the Docker host:
Wow, Machine detects that the client certificates you copied weren’t created/signed by yourself (with your own master certificates stored in
~/.docker/machine/certs) and refuses to connect. FAIL
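Here’s that check in miniature, with two throwaway openssl CAs standing in for your master certificates and your teammate’s (all file names here are illustrative): a client certificate only verifies against the CA that actually signed it.

```shell
# A client cert signed by "your" CA fails verification against an
# unrelated CA, just as Machine rejects certs not signed by your own
# master certificates. Everything lives in a temp dir.
DIR=$(mktemp -d); cd "$DIR"
# "Your" master CA, and a client cert signed by it:
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca1-key.pem \
  -out ca1.pem -days 1 -subj "/CN=your-ca" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout client-key.pem \
  -out client.csr -subj "/CN=client" 2>/dev/null
openssl x509 -req -in client.csr -CA ca1.pem -CAkey ca1-key.pem \
  -CAcreateserial -out client.pem -days 1 2>/dev/null
# "Your teammate's" unrelated master CA:
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca2-key.pem \
  -out ca2.pem -days 1 -subj "/CN=teammate-ca" 2>/dev/null
openssl verify -CAfile ca1.pem client.pem          # succeeds
openssl verify -CAfile ca2.pem client.pem || true  # fails to verify
```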
Fail #3: Use Docker Machine “none” driver
This one looks like it fits the bill but seems to have been broken for a while.
The solution is to import not only the client certificates and SSH keys for this Docker host, but also the master certificates from the developer who initially created the Docker machine.
Your teammates don’t want to overwrite their own master certificates, so we’ll save yours inside the machine folder and modify the machine’s
config.json file to point to these specific master certificates.
1. Copy your machine folder into a temporary directory, and your master certificates into it:
$ cp -R ~/.docker/machine/machines/my_machine . && \
cp ~/.docker/machine/certs/* ./my_machine/certs
2. Update
./my_machine/config.json to match your teammate’s Docker storage path (Docker Machine only takes absolute paths, so /Users/your_username/.docker must become
/Users/her_username/.docker, assuming OSX):
$ sed -i.bak 's/machine\/certs/machine\/machines\/my_machine\/certs/' ./my_machine/config.json
$ sed -i.bak 's/your_username/her_username/' ./my_machine/config.json
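To sanity-check those rewrites, here is the same pair of sed calls applied to a minimal stand-in config.json (the real file has many more keys; the usernames are the same placeholders as above):

```shell
# Minimal stand-in for config.json with the two master-cert paths;
# the real file has many more keys.
WORK=$(mktemp -d); cd "$WORK"; mkdir my_machine
cat > my_machine/config.json <<'EOF'
{
  "AuthOptions": {
    "CaCertPath": "/Users/your_username/.docker/machine/certs/ca.pem",
    "CaPrivateKeyPath": "/Users/your_username/.docker/machine/certs/ca-key.pem"
  }
}
EOF
# Point the CA paths at the certs bundled inside the machine folder,
# then swap in your teammate's username:
sed -i.bak 's/machine\/certs/machine\/machines\/my_machine\/certs/' my_machine/config.json
sed -i.bak 's/your_username/her_username/' my_machine/config.json
# Both paths now live under her_username/.docker/machine/machines/my_machine/certs
grep CaCertPath my_machine/config.json
```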
3. Archive your machine configuration:
$ tar -zcf my_machine.tar.gz my_machine
Get them to extract the archive into their own ~/.docker/machine/machines folder.
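End to end, the pack-and-unpack round trip looks like this (temp dirs stand in for the two home directories, and the file contents are dummies):

```shell
# Round trip: you archive the prepared machine folder, your teammate
# extracts it under ~/.docker/machine/machines. Temp dirs stand in
# for both home directories.
YOURS=$(mktemp -d); THEIRS=$(mktemp -d)
mkdir -p "$YOURS/my_machine/certs"
echo '{"Name": "my_machine"}' > "$YOURS/my_machine/config.json"
# You: archive the machine folder...
tar -C "$YOURS" -zcf "$YOURS/my_machine.tar.gz" my_machine
# Teammate: ...extract it into the Machine storage path:
mkdir -p "$THEIRS/.docker/machine/machines"
tar -C "$THEIRS/.docker/machine/machines" -zxf "$YOURS/my_machine.tar.gz"
ls "$THEIRS/.docker/machine/machines/my_machine"
```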