Containerization in its entirety is an incredibly useful concept. Being able to execute applications in isolation and port them easily with all of their dependencies and configuration is everything a developer could ask for. After getting somewhat familiar with this concept, I decided to get my hands on it. So, let me walk you through the whole process of containerizing a frontend Next.js application using Docker.

Note that this is an absolute beginner approach; I am not advising you to use it in production. This is something new to me and I am still exploring Docker. If you're reading this, maybe you can help me make some improvements to this approach. Anyways, this tutorial is for someone who wants to explore and get hands-on experience with Docker.

By the way, a huge shoutout to Nana Janashia, who is an incredible DevOps instructor. I learned everything about Docker from her YouTube channel. 💙

## Content Overview

- Why use Docker?
- Understanding Docker
- Docker Files
- Containerization of a Simple Next.js Application

## Why use Docker?

You might wonder, "What's the point of containerizing an application?" After all, going through the process of installing dependencies, setting up a database, and dealing with various configurations every single time can be cumbersome. Wouldn't it be more convenient to configure it once and ship it, enabling it to run on any machine effortlessly? And not only that: some dependencies pollute your local environment, but with Docker, everything runs in an isolated environment, giving you more control.

You might ask: why can't we use a virtual machine if the end goal is to isolate everything? The problem is that VMs are heavy, and each one runs its own OS and kernel. Docker, by contrast, uses the resources of its host but has its own application layer and file system. You only need the Docker Engine to run a containerized application.
Apart from that, it makes it so much easier to collaborate with project team members, testers, and DevOps teams, who can run the application regardless of their operating system.

Watch the video below to learn what problems Docker solves in development as well as in the deployment process. ✨

https://youtu.be/pg19Z8LL06w?t=232&embedable=true

Then, install Docker from docker.com. 🏃

## Understanding Docker

Docker revolves around three core components: containers, images, and volumes. Let's understand what they are and how they work together.

If you want to containerize your application, first you have to build an image of it. An image is nothing but a combination of your app code, dependencies, and configuration. It's like a complete package of your application, ready to be shipped.

A container is just a running instance of an image. It lets you run the application in an isolated environment. You can run multiple containers based on different images.

Volumes are often used to store persistent data. For example, if you want to access the host's files from the container, you can use volumes to map a host path to a container path.

## Docker Files

There are a bunch of Docker-specific files which are used to configure Docker. It's good to understand what each of them does, so here we go:

- `Dockerfile`: Used to build an image of an application.
- `docker-compose.yaml`: A structured way to execute Docker commands with a lot of options in order to handle containers.
- `.dockerignore`: Similar to `.gitignore`, it is used to tell Docker which files to ignore.

## Containerization of a Simple Next.js Application

Create a Next.js app using the steps in their documentation.

While creating the Next.js application, I went for directory-based routing, and thus the `app` directory will act like the `src` directory. Now, you need to build an image of your application so that you can run it inside a container.

### Building a Docker image

To create your own Docker image, you need to create a `Dockerfile`.
This file will have everything you need to package your application, including dependencies and initial commands. Create the `Dockerfile` at the root of your project:

```dockerfile
FROM node:18-bullseye-slim

RUN mkdir -p /home/yourapp

COPY . /home/yourapp

WORKDIR /home/yourapp

RUN npm install

CMD ["npm", "run", "dev"]
```

Directives such as `FROM`, `COPY`, `RUN`, etc., are specific to the Dockerfile.

To execute and run your application, you need Node.js installed in your container. That's why, with `FROM`, we are directing Docker to pull the official `node` image from Docker Hub, which the application can use.

Then, with the `RUN` directive, you are telling Docker to run a command inside the image's file system. Here it creates a `yourapp` directory in the home directory.

Next, the `COPY` directive will copy the files from the current directory of your local system to the `yourapp` directory of the image's file system.

To ignore certain files or directories such as `.env`, `node_modules`, etc., you can create a `.dockerignore` file and list them there.

With the `WORKDIR` directive, the current directory will be set to the root directory of your project in the image's file system. If you don't specify it, `npm` will install dependencies in the root directory of the file system and not in your project.

The `RUN` directive is used to execute commands, often to install dependencies. `npm install` will create the `node_modules` folder, so you don't need to copy it from your local system.

The `CMD` directive is like an entrypoint command for your built image. Docker will execute this command by default when you run your built image in a container.

Now that your image-building instructions are ready, you can execute the following command to create an image:

```shell
docker build -t image_name:tag .
```

The `-t` flag is for specifying a tag for the image. You can specify something like `1.0` or anything you want by replacing the `:tag` placeholder. And the `.` is actually the context path that Docker will use to find your project files.
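Since the Dockerfile above copies the whole project directory into the image, the `.dockerignore` mentioned earlier is worth setting up before the first build. As a minimal sketch (the exact entries depend on your project), it is just a list of paths, one per line:

```
# .dockerignore — keep these out of the build context
node_modules
.next
.env
.git
```

Excluding `node_modules` also keeps the `COPY` step fast, since that folder gets recreated inside the image by `npm install` anyway.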
If you are at the root of your project, you can specify `.`; otherwise, you have to give a path relative to where your terminal is currently pointing.

After the command gets executed successfully, you should get something like this in your terminal:

```
[+] Building 55.4s (11/11) FINISHED                                docker:default
 => [internal] load build definition from Dockerfile                        0.1s
 => => transferring dockerfile: 208B                                        0.0s
 => [internal] load .dockerignore                                           0.0s
 => => transferring context: 72B                                            0.0s
 => [internal] load metadata for docker.io/library/node:18-bullseye-slim    2.5s
 => [auth] library/node:pull token for registry-1.docker.io                 0.0s
 => [1/5] FROM docker.io/library/node:18-bullseye-slim@sha256:d2617c7df857596e4f29715c7a4d8e861852  0.0s
 => [internal] load build context                                           0.1s
 => => transferring context: 18.36kB                                        0.1s
 => CACHED [2/5] RUN mkdir -p /app/path                                     0.0s
 => [3/5] COPY . /app/path                                                  0.1s
 => [4/5] WORKDIR /app/path                                                 0.0s
 => [5/5] RUN npm install                                                  41.8s
 => exporting to image                                                     10.3s
 => => exporting layers                                                    10.3s
 => => writing image sha256:fb05dcdb130301a8a1f8afa39c01a4430c59127e266cb49bcb763b5dd73d7aef  0.0s
 => => naming to docker.io/image_name:tag                                   0.0s
```

And if you have Docker Desktop installed, you will see an entry for your image in the Images tab.

That's the basic process for building an image in Docker from a Dockerfile. You can port this image to any other machine, and it will execute exactly the same as it does on your machine.

But still, your app isn't running! Why? Because you didn't run the image. And that's where containers come into action.

### Running the Image in a Container

As you know, a container is nothing but a running instance of an image. So how do you run an image? Using `docker run`:

```shell
docker run -d -p 5000:3000 --name container_name image_name:tag
```

What's going on with this command, you may ask? Well, nothing much. The first flag, `-d`, is used for detached mode, which basically means the container will keep running in the background, freeing your terminal.
If you don't specify it, the container will log everything to the terminal, and once you kill the process, the container will exit and stop running.

The second flag, `-p`, is used for port binding. Port binding in Docker is a way to map a container port to a port on your machine, i.e. a host port. Let's say you are running your application at port `3000` inside the container. Because the container is isolated, it has its own ports, and thus you have to specify which port on your local machine it will forward the application content to.

Here, with `-p 5000:3000`, you are binding port `3000` of your container to port `5000` of your local machine. So, the syntax is `-p HOST:CONTAINER`. The next two arguments are quite straightforward: you have to specify a container name and the name of the image you want to run, with its tag.

If you have some environment variables, you can specify them in an env file and then use the `--env-file` flag to specify its path.

After running the command, you will see a container being created, the command we specified in the `CMD` directive will get executed, and the app will be live at port `5000`!

To see all the running containers, just run `docker ps`, and to view all containers, run `docker ps -a`.

But there's a catch: every time you need to run an image, you have to run the `docker run` command with a long list of options. This isn't feasible, and it's time-consuming when you have multiple services or containers talking to each other in a complex application. To overcome this, there's a file named `docker-compose.yaml` that you can use to specify the containers with all of their required options and env variables.
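To make the `--env-file` flag concrete, here is a sketch. The file name `.env.docker` and the variables in it are my own placeholders, not something this project requires:

```
# .env.docker — hypothetical example file with KEY=value pairs
# API_URL=http://localhost:4000
# ANALYTICS_ID=abc123

# Pass the file to the container at run time:
docker run -d -p 5000:3000 --env-file ./.env.docker --name container_name image_name:tag
```

Each variable in the file becomes an environment variable inside the container, without ever being baked into the image.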
With only one container, your docker-compose file can look something like this:

```yaml
services:
  your_container_name:
    container_name: your_container_name
    image: image_name:tag
    env_file: <<path to env file>>
    ports:
      - 5000:3000
    command: npm run dev
```

To run Docker Compose, execute `docker-compose up -d`, with the detached flag so the container runs in the background. To stop and remove the container, run `docker-compose down`!

### Persistent Data: Volumes

Try updating your Next.js code locally and see if you can see those changes being reflected in your containerized application. Does it update? No, it won't. The reason is that your local code is not in sync with the code in the container, i.e. the code running in the container hasn't been updated and is still the same.

To overcome this issue, you need to keep some of your container files in sync with your local files. Volumes can be used to do that. You need to map your local directory to the container's directory with the help of a volume, which will make Docker listen for changes in the local directory and update the container directory!

In your `docker-compose.yaml` file, add the following:

```yaml
services:
  container_name:
    # ...
    volumes:
      - ./app:your_container_app_dir/app
    # ...
```

For the current Next.js application, you can persist the `app` directory (if you opted for directory-based navigation) or the `src` directory. The path before the colon is your local directory's path relative to the project, and the path after it is the absolute path to where the project lives in the container, along with its app directory.

All of that takes care of live changes, but hot reloading still won't work when you make any changes to the app directory. For that to work, you have to add a webpack config to the `next.config.js` file.
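Putting the pieces together, the whole compose file for this setup could look like the sketch below. I am assuming the container path `/home/yourapp` from the Dockerfile earlier; the service name `nextjs_app` and the `.env` path are placeholders you would swap for your own:

```yaml
services:
  nextjs_app:
    container_name: nextjs_app
    image: image_name:tag
    env_file: .env                  # placeholder: point this at your env file
    ports:
      - 5000:3000                   # HOST:CONTAINER
    volumes:
      - ./app:/home/yourapp/app     # keep the app directory in sync
    command: npm run dev
```

With this in place, `docker-compose up -d` replaces the long `docker run` invocation and the volume mapping in one step.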
```javascript
// next.config.js
const nextConfig = {
  webpack: (config) => {
    config.watchOptions = {
      poll: 1000,
      aggregateTimeout: 300,
      ignored: ['**/node_modules'],
    }
    return config
  },
}

module.exports = nextConfig
```

After updating your Next config, you need to rebuild the image, because we didn't persist that file to reflect our local changes; it was added as a one-time static file using the `COPY` directive. A better approach would be to add the whole project directory to the volume so that you don't have to rebuild the image, but it's debatable.

That's it, you just built your own Docker image and ran it in a container. If you want, you can also publish your Docker image to Docker Hub.

This post was originally published in Syntackle.

The lead image for this article was generated by HackerNoon's AI Image Generator via the prompt "an app in a container".