Docker is an open platform that makes it easier to create, deploy, and run applications using containers. Containers let us separate applications from the underlying infrastructure so we can deliver software faster.
We can manage our infrastructure in the same way we manage our applications. Docker is similar to a virtual machine, but instead of creating a whole new virtual machine for each application, it lets every container share the host's Linux kernel.
The advantage of the Docker platform is that it lets us ship, test, and deploy code more quickly, which effectively reduces the time between writing code and running it in production.
Another important point about Docker is that it is open source: anyone can use it and contribute to Docker to make it easier to use and to add features it does not yet have.
Docker lets us package an application and run it in a sandboxed environment called a container.
Docker uses operating-system-level virtualization to package the components of an application so that they run on any standard Linux machine.
Its isolation and security guarantees let us run many containers in parallel on a single system.
Containers are lightweight because they do not need the extra overhead of a hypervisor such as Hyper-V or VMware; they run directly on the host machine's kernel. We can even run Docker containers inside machines that are themselves virtual machines.
The core of Docker consists of the Docker Engine, Docker containers, Docker images, the Docker client, and the Docker daemon. Let us discuss each of these components.
The Docker Engine is the part of Docker that creates and runs Docker containers. A Docker container is a live, running instance of a Docker image. Docker Engine is a client-server application with the following components -
The command-line client uses the Docker REST API to interact with the Docker daemon through CLI commands. Many other Docker applications also use the same API and CLI. The daemon process creates and manages Docker images, containers, networks, and volumes.
The Docker daemon process (dockerd) controls and manages the containers. It listens for Docker API requests and manages Docker images, containers, networks, and volumes. It can also communicate with other daemons to manage Docker services.
The Docker client is the primary interface through which users communicate with Docker. When we run a command such as "docker run", the client sends it to dockerd, which carries it out.
The commands the client issues use the Docker API, and a single Docker client can interact with more than one daemon process.
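As a minimal sketch of this flow (the remote address below is a hypothetical example):

    # The client asks the daemon to pull the hello-world image and run it
    # as a container; the request travels over the Docker API.
    docker run hello-world

    # The same client can target a different daemon by setting DOCKER_HOST.
    DOCKER_HOST=tcp://192.168.1.10:2375 docker ps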
Docker images are the building blocks of Docker: an image is a read-only template with instructions for creating a Docker container. Images are the build component of the Docker life cycle.
Most images are based on another image, with some additional customization.
For example, we can build an image based on the centos image that installs the Nginx web server along with the application and configuration details needed to make the application run.
We can create our own images or use only those created by others and published in a registry. Building our own image is simple: we create a Dockerfile whose instructions describe the steps needed to create the image and run it.
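A minimal sketch of such a Dockerfile for the centos-plus-Nginx example above (the site/ directory and nginx.conf file are hypothetical placeholders for the application content):

    # Start from the official centos base image.
    FROM centos:7

    # Install the Nginx web server (available via the EPEL repository).
    RUN yum install -y epel-release && yum install -y nginx

    # Copy our site content and server configuration into the image.
    COPY site/ /usr/share/nginx/html/
    COPY nginx.conf /etc/nginx/nginx.conf

    # Document the listening port and run Nginx in the foreground.
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]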
Each instruction in a Dockerfile creates a new layer in the image. If we modify the Dockerfile and rebuild the image, only the layers that have changed are rebuilt.
This is part of what makes images so lightweight, small, and fast compared with other virtualization technologies.
A Docker registry stores Docker images. We can also run our own private registry.
When we run the docker pull and docker run commands, the required images are pulled from our configured registry.
Using the docker push command, an image can be uploaded to our configured registry.
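For instance, assuming a hypothetical private registry at registry.example.com, the round trip looks like this:

    # Pull the official nginx image from the default registry (Docker Hub).
    docker pull nginx:latest

    # Re-tag it under the private registry's address, then upload it.
    docker tag nginx:latest registry.example.com/nginx:latest
    docker push registry.example.com/nginx:latest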
A container is a runnable instance of an image. We can create, run, stop, or delete a container using the Docker CLI. We can connect a container to one or more networks, or even create a new image based on its current state.
By default, a container is well isolated from other containers and from its host machine. A container is defined by its image and by the configuration options we provide when creating or running it.
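A sketch of that lifecycle (the container name web and the image tag mysite:v1 are illustrative):

    # Create and start a container from the nginx image, publishing port 80.
    docker run -d --name web -p 8080:80 nginx

    # Stop it, then capture its current state as a new image.
    docker stop web
    docker commit web mysite:v1

    # Remove the container once it is no longer needed.
    docker rm web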
Docker uses a Linux kernel feature called namespaces to provide the isolated workspace we call a container. When we run a container, Docker creates a set of namespaces for that particular container. These namespaces provide a layer of isolation. Some of the namespace layers are -
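- The pid namespace: process isolation (PID: Process ID).
- The net namespace: managing network interfaces (NET: Networking).
- The ipc namespace: managing access to IPC resources (IPC: InterProcess Communication).
- The mnt namespace: managing filesystem mount points (MNT: Mount).
- The uts namespace: isolating kernel and version identifiers (UTS: Unix Timesharing System).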
On Linux, Docker Engine also relies on control groups (cgroups). A control group limits an application to a predefined set of resources.
Docker Engine uses control groups to share the available hardware resources among containers.
Using control groups, we can, for example, limit the memory available to a specific container.
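For example, the following flags (the limits shown are illustrative) ask the daemon to enforce control-group limits on a container:

    # Cap the container at 512 MB of RAM and half a CPU core.
    docker run -d --memory=512m --cpus=0.5 nginx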
A union file system (UnionFS) is a file system that operates by creating layers, making it lightweight and fast. Docker Engine uses union file systems to provide the building blocks for containers.
Docker Engine can use several UnionFS variants, including AUFS, btrfs, vfs, and Device Mapper.
Docker Engine combines the namespaces, control groups, and UnionFS into a wrapper called a container format. The default container format is libcontainer.
A Dockerfile is a text file that contains all the commands a user could call on the command line to build an image: it starts from a base image, adds and copies files, runs commands, and exposes ports.
The Dockerfile can be thought of as the source code, and the image as the compiled artifact, for our running container. Dockerfiles are portable files that can be shared, stored, and updated as required. Some of the Dockerfile instructions are -
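- FROM: sets the base image for subsequent instructions.
- RUN: executes a command and commits the result as a new layer.
- COPY / ADD: copy files from the build context into the image.
- ENV: sets an environment variable.
- EXPOSE: documents the port(s) on which the container listens.
- CMD / ENTRYPOINT: define the command run when the container starts.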
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing Docker containers.
We can also connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API over a UNIX socket or a network interface.
Docker security should be considered before deploying and running it. Areas to examine include the security level of the kernel and how well it supports namespaces and control groups.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, we create a Compose file to configure the application's services. Then, with a single command, we create and start all the services from our configuration.
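A minimal sketch of such a Compose file (the service names, ports, and image tags are illustrative):

    # docker-compose.yml
    version: "3"
    services:
      web:
        build: .            # build the web service from the Dockerfile here
        ports:
          - "8080:80"       # map host port 8080 to container port 80
      db:
        image: postgres:12
        environment:
          POSTGRES_PASSWORD: example   # illustrative credential only

Running docker-compose up -d then creates and starts both services with a single command.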
Docker Compose is a very helpful tool for development, testing, and staging environments.
Using Docker Compose is basically a three-step process -
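- Define the application's environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up the application in docker-compose.yml so they can run together in an isolated environment.
- Run docker-compose up, and Compose starts and runs the entire application.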
The features of Docker Compose that make it unique are -
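- Multiple isolated environments on a single host.
- Preservation of volume data when containers are created.
- Recreation of only those containers that have changed.
- Variables for moving a composition between environments.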
Docker Engine also includes swarm mode for managing a cluster of Docker Engines, called a swarm. With the Docker CLI we can create a swarm, deploy application services to it, and manage its behavior.
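A minimal sketch (the addresses and service names are illustrative):

    # On the first node, initialize the swarm; this node becomes a manager.
    docker swarm init --advertise-addr 192.168.1.10

    # Other nodes join using the token printed by the command above:
    # docker swarm join --token <token> 192.168.1.10:2377

    # Deploy a replicated service across the swarm.
    docker service create --name web --replicas 3 -p 80:80 nginx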
To secure a Docker Swarm cluster, we have the following options -
All nodes in a Swarm cluster must bind their Docker Engine daemons to a network port. This brings with it all of the usual network-related security concerns, such as man-in-the-middle attacks.
These risks are compounded when the network is untrusted, such as the internet. To mitigate them, Swarm and the Engine use TLS for authentication.
Engine daemons, including the Swarm manager, that are configured to use TLS only accept commands from Docker Engine clients that sign their communications. The Engine and Swarm also support third-party certificate authorities as well as internal corporate CAs.
The Docker Engine and Swarm ports for TLS are -
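- 2376/tcp: Docker Engine daemon with TLS.
- 3376/tcp: Swarm manager with TLS.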
Production networks are networks in which everything is locked down so that only permitted traffic can flow. The lists below show the network ports and protocols used by the different components of a Swarm cluster.
We should configure firewalls and other network access control lists to allow traffic on the ports listed below; the lists that follow reflect the usual classic Swarm defaults for a non-TLS configuration.
Swarm Manager -
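- inbound 80/tcp (used by docker pull).
- inbound 2375/tcp (Docker Engine CLI commands direct to the Engine daemon).
- inbound 3375/tcp (Docker Engine CLI commands to the Swarm manager).
- inbound 22/tcp (remote management over SSH).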
Swarm Nodes -
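- inbound 80/tcp (used by docker pull).
- inbound 2375/tcp (Docker Engine CLI commands direct to the Engine daemon).
- inbound 22/tcp (remote management over SSH).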
Custom, Cross-host container networks -
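- inbound 7946/tcp and 7946/udp (container network discovery).
- inbound 4789/udp (container overlay network, VXLAN).
- inbound and outbound traffic on the key-value store's configured port.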
For added security, we should configure these port rules to allow connections only from interfaces on known Swarm devices.
For a Swarm cluster configured for TLS, replace 2375 and 3375 with 2376 and 3376.
The Swarm manager is the single point that accepts all commands for the Swarm cluster; it also schedules resources across the cluster.
If the Swarm manager becomes unavailable, cluster operations stop working until it comes back up, which is unacceptable in critical scenarios.
Swarm therefore provides High Availability features to guard against failures of the Swarm manager: we can use Swarm's HA feature to configure multiple Swarm managers for a single cluster.
These Swarm managers operate in an active/passive formation: a single Swarm manager is the primary, and all the others are secondaries.
The secondary managers operate as warm standbys, running in the background behind the primary Swarm manager.
The secondary Swarm managers can accept commands issued to the cluster, acting like a primary Swarm manager; however, any commands received by a secondary are forwarded to the primary, where they are actually run.
If the primary Swarm manager fails, a new primary is selected from the available secondaries.
When creating highly available Swarm managers, we should take care to distribute them across as many failure domains as possible. A failure domain is a section of a network that can be negatively affected when a critical device or service experiences problems.
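A sketch of starting a replicated manager with the classic Swarm image (the Consul address and the advertise address are illustrative, and TLS flags are omitted for brevity):

    # Start a Swarm manager with replication enabled; run the same command
    # on each manager host so they elect a primary among themselves.
    docker run -d -p 4000:4000 swarm manage -H :4000 --replication \
        --advertise 192.168.1.10:4000 consul://192.168.1.20:8500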
Docker container networks are overlay networks created across multiple Engine hosts. A container network needs a key-value store to hold the network's configuration and state.
This key-value store can be the same one used by the Swarm cluster's discovery service.
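For example, once the daemons are configured with a key-value store such as Consul, an overlay network can be created and used like this (the network name and subnet are illustrative):

    # Create an overlay network that spans the Engine hosts in the cluster.
    docker network create --driver overlay --subnet 10.0.9.0/24 my-overlay

    # Run a container attached to that network.
    docker run -d --net my-overlay nginx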
Docker also developed an Enterprise Edition solution for development and IT teams that build, ship, and run applications in production at scale.
Docker Enterprise Edition is a certified solution that provides enterprises with the most secure container platform in the industry for deploying any application. Docker EE can run on any infrastructure.
Docker provides an integrated, tested platform for apps running on Linux or Windows operating systems and on cloud providers. Docker Certified Infrastructure, Containers, and Plugins are premium features available with Docker EE, with cooperative support from Docker.
In today's world, every app is dynamic in nature and requires a security model defined for the app.