Hey, another quick tip regarding Docker, especially the Docker for AWS offering (edge edition), which ships with cloudstor. That's a volume plugin that enables containers to attach volumes backed by AWS EFS, letting you benefit from both shared and durable storage (think of a network filesystem that is backed by AWS and wrapped in a nice interface by Docker).

Here I go through how you can visualize how the plugin puts stuff into EFS. tl;dr: the plugin runs inside a container with a mount point to EFS, where for each named volume it creates a directory (which gets propagated across the machines). 😁

Note: I'm not from Docker and I don't have all the information about how things work there. Some of this is proprietary and not very well documented, but we can still get an idea of how things work. Feel free to correct me in case I get something wrong.

First things first: if you've already set up docker-for-aws (using their standard CloudFormation template), you probably noticed that you don't really log into the machine, but into a container with SSH that is running on the host. This is one of a set of containers that are kept running there to interact with AWS and make the whole thing cohesive.

*(image: list of docker-for-aws containers running in the manager)*

There's one container missing from that list, though: cloudstor:aws. That's because plugins like cloudstor:aws (the volume plugin mentioned above) run inside containers (see https://docs.docker.com/engine/extend/plugin_api) and react to commands issued against the Docker daemon on a given host. So, now the question: where is that container? How can I see it?

To answer that we must get out of the "jail" imposed by the SSH container we land in when SSHing. We can do that, guess what, with a container:

*(image: using `nsenter1` to get into all the PID 1 namespaces)*

What the command above does is give us a container with all privileges granted, sharing the PID namespace of the host (thus we have access to the host's processes); the justincormack/nsenter1 image then essentially gives us a way of entering the mount namespace of the host's PID 1 (thus giving us access to all the files of the host). See nsenter1.c to understand what nsenter1 does.

Once you're there (with the whole view of the host's files), we have access to docker-runc (see runc.io), which is essentially the lowest-level engine that Docker uses to create and run Linux containers. Just execute `docker-runc list`:

*(image: execution of `docker-runc list` shows all the runc linux containers running)*

Those bundle directories give us the place where we can find the `config.json` files used by runc to create the containers. Whenever we want to exec into one of those containers, we can use `docker-runc exec -t <cid> /my/binary`. Interesting, but still not there.

We're interested in the container run for the Docker volume plugin, so we must first get the id of that container. To do so, issue `docker plugin ls --no-trunc`, which gives us the full id of each plugin in the ID column. Now we can use that ID to get to the directory where we can find the configuration of the container running the plugin.

*(image: retrieving the configuration of the container running the plugin)*

The configuration file (`config.json`) describes the bare minimum set of properties needed for a Linux container to be run (you can read more about it at https://github.com/opencontainers/runc). It's interesting, but it doesn't tell us much about the plugin, as that's too low level.
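Putting the steps so far together, here's roughly what that session looks like. Treat it as a sketch: I'm assuming the default `docker` SSH user of Docker for AWS, the plugin ID and bundle directory are placeholders taken from the command output, and whether the plugin's rootfs ships a shell is an assumption, so paths and output will differ on your swarm.

```sh
# SSH into a manager node (you land inside the SSH sidecar container)
ssh docker@<manager-public-ip>

# Escape into the host: a fully privileged container sharing the host's PID
# namespace; nsenter1 then enters the namespaces of the host's PID 1
docker run --rm -it --privileged --pid=host justincormack/nsenter1

# List the low-level runc containers (plugins show up here too);
# the output includes the bundle directory of each container
docker-runc list

# Grab the full (untruncated) ID of the cloudstor plugin
docker plugin ls --no-trunc

# Peek at the OCI config of the plugin's container, using the bundle
# directory shown by `docker-runc list` above
cat <bundle-dir>/config.json

# Or get a shell inside the plugin's container (assuming its rootfs has sh)
docker-runc exec -t <plugin-id> sh
```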
Using the plugin API (`docker plugin inspect <pluginID>`) we can find something useful: there's something being mounted at `/mnt`. As https://github.com/moby/moby/pull/26398 states, "Volume plugins that do mounts and want it to propagate to the host namespace, need to mount inside /mnt." So let's look at what's under that `/mnt`:

*(image: inspection of the plugin configuration)*

*(image: `exec` into the plugin container and viewing the container filesystem)*

So there it is! The named volumes we create end up as directories under `/mnt` inside the plugin containers that run on our hosts. As they all have `/mnt` on EFS, the storage is shared across all these machines. Pretty neat 🕺🏻
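To see that end to end, here's a rough example. The `backing=shared` option is how I understand cloudstor selects EFS-backed (as opposed to EBS-backed) volumes in recent Docker for AWS releases, and I'm assuming the plugin's rootfs includes basic utilities like `ls`, so double-check against the docs for your version.

```sh
# Inspect the plugin: its configuration shows the EFS filesystem mounted at /mnt
docker plugin inspect cloudstor:aws

# Create a named volume backed by the shared (EFS) storage
docker volume create -d cloudstor:aws --opt backing=shared mydata

# Back inside the plugin's container (via docker-runc exec, as above),
# each named volume appears as a directory under /mnt
docker-runc exec -t <plugin-id> ls /mnt
```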