Hey, another quick tip regarding Docker, especially the Docker for AWS offering (edge edition), which ships with cloudstor. That's a volume plugin that enables containers to attach volumes backed by AWS EFS, allowing one to benefit from both shared and durable storage (think of a network filesystem that is backed by AWS and wrapped in a nice interface by Docker).
Here I go through how you can see for yourself how the plugin puts stuff into EFS.
tl;dr: the plugin runs inside a container that has a mount point backed by EFS; for each named volume it creates a directory there (which gets propagated across the machines).
Note: I'm not from Docker, nor do I have all the information about how things work there. Some of this is proprietary and not very well documented, but we can still get an idea of how things work. Feel free to correct me in case I get something wrong. 😁
First things first: if you've already set up docker-for-aws once (using their standard CloudFormation template), you probably noticed that you don't really log into the machine, but into a container with SSH that is running on the host. This is one of a set of containers that are kept running there to interact with AWS and make the whole thing cohesive.
list of docker-for-aws containers running in the manager
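In case you want to reproduce that listing yourself, it boils down to SSHing into a manager and asking the daemon what's running (the key path and IP below are placeholders):

```sh
# SSH into a manager node; this drops us into the SSH container,
# not directly onto the host (key path and IP are placeholders).
ssh -i ~/.ssh/my-key.pem docker@<manager-public-ip>

# List the system containers that docker-for-aws keeps running.
docker ps --format 'table {{.Names}}\t{{.Image}}'
```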
There's one container missing from that list, though: cloudstor:aws. That's because plugins like cloudstor:aws (the volume plugin mentioned above) run inside containers of their own (see https://docs.docker.com/engine/extend/plugin_api) and react to commands issued against the Docker daemon on a given host. So, now the questions: where is that container, and how can I see it?
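A quick sanity check from the SSH container shows that the plugin is indeed installed, even though no corresponding container appears in docker ps:

```sh
# cloudstor is managed by Docker's plugin subsystem, so it shows up
# here rather than in the regular container listing.
docker plugin ls
```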
To find it, we must get out of the "jail" that the SSH container puts us in when we SSH into the machine. We can do that, guess what, with another container:
using `nsenter1` to get into all the PID1 namespaces
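For reference, that's the well-known nsenter1 one-liner:

```sh
# Run a fully privileged container that shares the host's PID
# namespace, then use nsenter1 to enter the namespaces of PID 1.
docker run -it --rm --privileged --pid=host justincormack/nsenter1
```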
What the command above does is give us a container with all privileges granted, sharing the PID namespace of the host (thus we have access to the host's processes), using the image justincormack/nsenter1, which essentially gives us a way of entering the mount namespace of the host's PID 1 (thus giving us access to all the files of the host).
(see nsenter1.c to get into what nsenter1 does)
Once we're there (with the whole view of the host's files), we have access to docker-runc (see runc.io), which is essentially the lowest-level engine that Docker uses to create and run Linux containers. Just execute docker-runc list:
execution of `docker-runc list` shows all the runc linux containers running
Those bundle directories give us the place where we can find the config.json files used by runc to create the containers. Whenever we want to exec into one of those containers, we can use docker-runc exec -t <cid> /my/binary. Interesting, but still not there.
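A rough sketch of both steps, with the bundle directory and container id taken from the docker-runc list output (both are placeholders here):

```sh
# The BUNDLE column tells us where each container's OCI config lives.
cat <bundle-dir>/config.json

# Exec a binary inside one of the running containers (assuming the
# container's rootfs ships a shell).
docker-runc exec -t <container-id> /bin/sh
```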
retrieving the configuration of the container running the plugin
We're interested in the container run by the Docker volume plugin, so we must first get the id of that container. To do so, issue docker plugin ls --no-trunc, which gives us the full id of each plugin in the ID column. Now we can use that ID to get to the directory where we can find the configuration of the container running the plugin.
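In shell terms (assuming a single plugin is installed on the host):

```sh
# List plugins with full (untruncated) IDs.
docker plugin ls --no-trunc

# Grab the full ID and find the matching runc container.
PLUGIN_ID=$(docker plugin ls --no-trunc --quiet | head -n1)
docker-runc list | grep "$PLUGIN_ID"
```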
The configuration file (`config.json`) describes the bare minimum set of properties needed for a Linux container to run (you can read more about it at https://github.com/opencontainers/runc). It's interesting, but it doesn't tell us much about the plugin, as it's too low level.
Using the plugin API (docker plugin inspect <pluginID>) we can find something useful: there's something being mounted at /mnt. As https://github.com/moby/moby/pull/26398 states, "Volume plugins that do mounts and want it to propagate to the host namespace, need to mount inside /mnt." So, let's look at what's under that /mnt:
inspection of the plugin configuration
`exec` into the plugin container and viewing the container filesystem
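Roughly, the commands behind those two screenshots (the plugin ID is a placeholder, taken from the docker plugin ls --no-trunc output):

```sh
# Look at the plugin's configuration, including its mounts.
docker plugin inspect <pluginID>

# Enter the plugin's container and see what's under /mnt; each named
# volume shows up as a directory there.
docker-runc exec -t <pluginID> ls -la /mnt
```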
So there it is! The named volumes we create end up as directories under /mnt inside the plugin containers that run on our hosts. As they all have this /mnt backed by EFS, the storage is shared across all these machines. Pretty neat 🕺🏻
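If you want to see the whole loop in action, create a named volume with the cloudstor driver and watch the directory show up under /mnt (the volume name is arbitrary, and the exact cloudstor options may vary by version, so double-check the Docker for AWS storage docs):

```sh
# Create a named volume backed by cloudstor.
docker volume create -d "cloudstor:aws" mysharedvol

# Back inside the plugin's container, a directory named after the
# volume should now exist under /mnt (on every node in the swarm).
docker-runc exec -t <pluginID> ls /mnt
```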