Roman Burdiuzha here, Cloud Architect and Co-Founder & CTO at Gart Solutions. Today I'll talk about Docker, containerization, how Docker works, why you need it as a developer, and how to create and run a container.
Let's start with what Docker is.
Docker is a platform for developing, shipping, and running containerized applications. Everything except "containerized applications" seems clear, so it's worth understanding what a container is.
A container is a unit of software that packages code together with all its dependencies, so the application runs quickly and reliably on any machine. Accordingly, containerization (or dockerization) is simply the process of packaging an application into containers.
An image is a lightweight, standalone, executable package of software that includes everything an application needs to run: code, runtime, system tools, libraries, and settings.
An image becomes a container at runtime, when it is run on the Docker Engine. Docker is available for both Linux and Windows applications, and a containerized application always behaves the same regardless of the environment, because containers isolate the program from its surroundings.
It is important to understand that a container behaves like its own isolated environment with a program running inside it, even though it shares the host's kernel rather than running a full operating system of its own. Because a container does not depend on the host's configuration, you can run, delete, and reconfigure any number of identical containers, and none of this will affect the machine they run on.
Some novice developers, when creating a project, install all the necessary applications - databases, some other services - locally on their computer.
The first obvious drawback is that there may simply be no compatible version of the application for your operating system. What do you do then?
The second: you install and run a single instance of the application. Locally you can run only one Postgres on its default port, not 5 or 10.
The third: if you want to share the application with a friend, they will have to install all these dependencies themselves. And if you were developing the application on Windows and then needed to run it on Linux, you would spend a lot of effort making it all work. Not very convenient, is it?
This is why Docker was invented. By putting everything into containers, you free yourself from the hassle of checking if everything works, what you need, or how to configure these dependencies.
Some of you may say that virtual machines do roughly the same thing. Their functions are indeed similar to those of containers, but there is a drawback: each virtual machine needs its own copy of an operating system to interact with the computer's processor and memory. That can take up tens of gigabytes, compared to a Postgres container image of around a hundred megabytes.
Another reason to use containers is microservices and their orchestration. When you build large applications made up of dozens of microservices, an orchestrator manages the life cycle of each container rather than of each separately launched application.
There are two ways to get an image.
The first is to download an image from a special place, a Registry: a repository of Docker images. Many companies have their own registry, but most people use DockerHub. By going to hub.docker.com, you can find any publicly published image. Sometimes you can see blue checkmarks next to a name, which indicates that the publisher is verified and can be trusted. Let's look at the official beginner's image from Docker, docker/welcome-to-docker.
We see a blue checkmark, the number of downloads of this image, its description, and the "tags" tab.
Tags are human-readable identifiers that point to a specific version or variant of an image. Just as when installing ordinary programs, you can choose the version you need; if you leave it unspecified, the latest version is selected. I recommend explicitly pinning versions, as this guarantees that you will download the same image even a long time from now.
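To illustrate the difference between pinning a tag and relying on the default, here is a short sketch; the postgres image and version number are assumed examples, not part of this tutorial:

```shell
# Pinned tag: always pulls the same image contents
docker pull postgres:16.2

# No tag: implicitly postgres:latest, whose contents may change over time
docker pull postgres
```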
Here, as we can see, there is one tag - latest. Let's download this image to our computer. You must have Docker Desktop installed and running in order to do this. So check that this is the case.
Many operations in Docker are performed through the console, and I want you to be comfortable using it rather than afraid of it. Let's open the console and type the command we see on DockerHub.
The command to download the image is:
docker pull docker/welcome-to-docker
We can see that the image was not found locally, so the download started.
It's worth noting that with such a simple command, Docker searches for the image on DockerHub, but as I mentioned earlier, you can use different image repositories. To do this, you need to specify the path to the repository before the image name.
For example, to download an image from the Google Cloud Registry, you would use the following command:
docker pull gcr.io/google-containers/hello-world
If we simply download the image as shown here, a new container will not be created. We just downloaded the image locally.
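To confirm from the console that the image was only downloaded and no container exists yet, you can list images and containers:

```shell
# List locally stored images; the freshly pulled one should appear here
docker image ls

# List containers, including stopped ones; no new container should appear yet
docker ps -a
```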
By running the command from the Overview, we can start the container.
The DockerHub Overview tells us to go to http://localhost:8088. The Overview doesn't always explain in detail how to start a container or how it works, so sometimes we have to figure it out ourselves. Here we are greeted with congratulations on getting our first container up and running.
Let's go to Docker Desktop and see what we have.
On the Images tab, we can see the image that was just downloaded and launched.
On the Containers tab, we see a running container named welcome-to-docker.
In Docker, you can give containers your own names; otherwise Docker assigns a randomly generated one. Let's click on the container and open it.
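As a sketch of the two naming options, using the same image (the name my-welcome and the host ports are assumed examples):

```shell
# Explicit name via --name
docker run -d -p 8090:80 --name my-welcome docker/welcome-to-docker

# Without --name, Docker assigns a random name such as "quirky_hopper"
docker run -d -p 8091:80 docker/welcome-to-docker

# Show only the names of running containers
docker ps --format '{{.Names}}'
```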
First of all, we get to the Logs. Here we can see some information about what the container did when it was running. When you run your applications, you will be able to see the logs here if something goes wrong.
On the second tab, Inspect, you can see the container's environment variables and the ports it uses. Many applications expose a port through which they can be accessed. That port exists inside the container, and since containers are isolated from one another, identical internal ports do not conflict. But we want to reach the application from outside. To do this, you need to map a port inside the container to a port outside it, on the host machine.
In our case, the internal port is 80. And when starting the application, we specified a mapping to port 8088 (-p 8088:80).
docker run -d -p 8088:80 --name welcome-to-docker docker/welcome-to-docker
This is how we reach the application inside the container. That is, when you access port 8088 on the local machine, the request is forwarded to Docker, which passes it on to port 80 of this container. If we run the command from DockerHub again, the new container will not start, because port 8088 is already occupied by the existing one.
Replacing it with 8089, for example, lets us start another container. Only publish ports from a container when you actually need to; otherwise it occupies free ports on the host and may be unsafe.
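Putting that together, the second container can be started like this (the container name welcome-to-docker-2 is an assumed example):

```shell
# Same image, same internal port 80, but published on host port 8089
docker run -d -p 8089:80 --name welcome-to-docker-2 docker/welcome-to-docker
```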
The third tab is Terminal. This is the console inside the container, where you can execute various commands available for this container. Since nginx is installed here, we can run the command nginx -v and see the current version of nginx.
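The same terminal is reachable from your own console via docker exec; this sketch assumes the sh shell is present in the image:

```shell
# Open an interactive shell inside the running container
docker exec -it welcome-to-docker sh

# Then, inside the container, check the nginx version
nginx -v
```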
On the next tab, Files, we can see the entire file system of the container.
The last tab, Stats, contains statistics on the container's resource usage. Here you can track the consumption of RAM, CPU, and the number of read and write operations to the local disk.
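The Stats tab has a console equivalent as well; --no-stream prints a one-off snapshot instead of a live view:

```shell
# Snapshot of CPU, memory, network, and disk I/O for the container
docker stats --no-stream welcome-to-docker
```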
Let's go back to the Containers tab and delete this container, and then delete its image. An image cannot be deleted while at least one container, running or stopped, still references it.
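The same cleanup can be done from the console, in the same order:

```shell
# Remove the container; -f force-removes it even if it is still running
docker rm -f welcome-to-docker

# Then remove the image itself
docker rmi docker/welcome-to-docker
```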
The second way to get an image is to build one from your own application. You could even build an empty image with nothing inside it; that's not very useful, but it is possible. You are free to choose what goes into the image.
To create an image that can then be run and become a container, you need to describe the application in a special file - Dockerfile. By reading this file, Docker can create an image with the configuration you specified.
Let's go back to DockerHub and find the official image for the Getting Started guide.
After copying the command from Overview to the console, we start the container and go to http://localhost.
We are greeted by the Getting Started Guide. Go to the Our Application tab and start creating a Dockerfile. First, let's download the test Node.js project that we will dockerize, which means putting it into a container.
On the second step, we need to create a Dockerfile without an extension, just Dockerfile, and put the following commands in it:
# Base image: Node.js 18 on Alpine Linux
FROM node:18-alpine
# Working directory inside the image
WORKDIR /app
# Copy the project files into the image
COPY . .
# Install production dependencies only
RUN yarn install --production
# Command to run when a container starts from this image
CMD ["node", "src/index.js"]
Let's start building the image using the command (in the console from the same directory where the Dockerfile is located):
docker build -t getting-started .
The -t flag tags the image as getting-started so we can easily refer to it. After that, let's run this image:
docker run -dp 3000:3000 getting-started
Going to localhost:3000, we can use the application. As you can see, all you need to use it is Docker.
This container can also be opened in Docker Desktop to view its variables and files, statistics and logs.
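The same information is available from the console; in this sketch, &lt;container&gt; stands for the ID or auto-generated name shown by docker ps:

```shell
# Find the container's ID or auto-generated name
docker ps

# Application logs (what the Logs tab shows)
docker logs <container>

# Environment variables, ports, mounts (the Inspect tab)
docker inspect <container>

# One-off resource usage snapshot (the Stats tab)
docker stats --no-stream <container>
```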