Docker Tutorial Series Part 1: Images, Layers and Dockerfile

Written by adityadixit06 | Published 2018/08/06
Tech Story Tags: docker | continuous-deployment | docker-image | docker-layers | dockerfiles


This article is the first in a series of articles on Docker. Today, we are going to introduce the core concepts of Docker. Then we will build our first Docker image using a Dockerfile and instantiate a container from it.

If you want to get a brief overview of containers and virtual machines, see this article on medium.com:

Age of Docker: Containers vs Virtual Machines: "Virtual Machines and Containers perform the same task: they isolate an application and its dependencies, allowing it…"

Docker is a platform that allows you to separate your application from your infrastructure. This decoupling makes it easier to develop, scale and run applications. Docker is based on a client-server architecture. The ‘Docker daemon’ does the heavy lifting of building, deploying and managing objects, while the ‘Docker client’ is a command-line interface that interacts with the Docker daemon.

Before we get started, I would like to introduce a few terms that we will be using in this article on Docker:

  • Image: Read-only template with instructions to create a Docker container.
  • Container: A runnable instance of an image. It is configurable and isolated from other containers.
  • Service: Allows you to define the expected state of an application.
  • Dockerfile: A text file with instructions to create an image.
  • Docker Store: A public repository of Docker images.

The first step in using Docker is to pull a base version of an image and then install the application's files and libraries on top of this base image. We could perform this process manually, but as complexity grows this approach quickly becomes unmanageable. A much better alternative is to use a Dockerfile. A Dockerfile is a series of instructions to create an image. It is usually placed in the root folder of the application. This Dockerfile is consumed by the build command to produce a custom image of the application.

The simplest Dockerfile will perform the following operations:

  1. Pull base image of OS and programs from Docker Store
  2. Install libraries and binaries
  3. Copy application files to container
  4. Set working directory
  5. Open ports to communicate with outside world
  6. Set commands to run when container starts
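A minimal Dockerfile following the six steps above might look like this. This is a sketch assuming a Node.js application; the base image, file names, and port number are illustrative, not part of the original article:

```dockerfile
# 1. Pull a base image of OS and runtime from the public registry
FROM node:8-alpine

# 4. Set the working directory inside the image
WORKDIR /app

# 2. Install libraries and binaries (dependencies first, for layer caching)
COPY package.json .
RUN npm install

# 3. Copy application files into the image
COPY . .

# 5. Open a port to communicate with the outside world
EXPOSE 3000

# 6. Set the command to run when the container starts
CMD ["node", "server.js"]
```

Each instruction here corresponds to one of the numbered steps; the order differs slightly (the working directory is set early) so that later COPY and RUN instructions operate in the right place.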

The build process consumes the Dockerfile and produces our image. This image is then used to run a container that holds our application.

  docker image build -t [image-name] .
  docker container run [image-name]

An image is not a single monolithic block. It is composed of several layers, with each Dockerfile instruction generally producing a separate layer. Each layer is immutable and builds on top of the previous layers; when a container runs, a thin writable layer on top keeps track of any changes made. So, when we rebuild an image, the process does not start from scratch. If the files needed to build a layer remain unchanged, the layer is not touched and a cached version is used. The rebuild starts at the first layer that differs from the previous version. This caching mechanism makes the build process extremely fast.

Please keep in mind while writing a Dockerfile that instructions which change often, such as copying source code files that are updated several times a day, should appear later in the Dockerfile, whereas installing application dependencies such as third-party modules and libraries should occur earlier. Failure to keep this in mind means you will end up reinstalling all dependencies on every image build, significantly slowing down the build process.
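To illustrate this ordering, compare the two hypothetical Node.js Dockerfile fragments below; only the second reuses the cached dependency layer when a source file changes:

```dockerfile
# Cache-unfriendly: copying everything first invalidates the layer
# (and every layer after it) whenever ANY source file changes,
# so npm install runs again on every build
COPY . .
RUN npm install

# Cache-friendly: dependencies are installed in their own layers,
# which are rebuilt only when package.json itself changes
COPY package.json .
RUN npm install
COPY . .
```

The same principle applies in any language: copy only the dependency manifest first, install, then copy the rest of the source tree.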

Originally published at blog.adityadixit.me.


Published by HackerNoon on 2018/08/06