
Docker for Beginners: Containerizing a Next.js Application

by Murtuza, July 30th, 2023

Containerization in its entirety is an incredibly useful concept. From executing applications in isolation to porting them easily with all of their dependencies and configuration, it's everything a developer could ask for.


After getting somewhat familiar with the concept, I decided to get hands-on with it. So, let me walk you through the whole process of containerizing a frontend Next.js application using Docker.


Note that this is an absolute beginner's approach, and I am not advising you to use it in production. This is something new to me, and I am still exploring Docker. If you're reading this, maybe you can help me make some improvements to this approach.


Anyway, this tutorial is for anyone who wants to explore Docker and get hands-on experience with it.


By the way, a huge shoutout to Nana Janashia, who is an incredible DevOps instructor. I learned everything about Docker from her YouTube channel. 💙


Content Overview

  • Why use Docker?
  • Understanding Docker
  • Docker Files
  • Containerization of a Simple Next.js Application

Why use Docker?

You might wonder, "What's the point of containerizing an application?" Well, going through the process of installing dependencies, setting up a database, and dealing with various configurations every single time is cumbersome. Wouldn't it be more convenient to configure it once and ship it, so that it runs on any machine effortlessly?


And not only that: some dependencies pollute your local environment, but with Docker, everything runs in an isolated environment, giving you more control.


You might ask, why can't we use a virtual machine if the end goal is to isolate everything?


The problem is that VMs are heavy: each one runs its own OS and kernel. A Docker container, on the other hand, shares the host's kernel and only brings its own application layer and file system. You only need the Docker Engine to run a containerized application.


Apart from that, it makes it so much easier for project team members, testers, and DevOps engineers to collaborate and run the application regardless of their operating system.


Before moving on, install Docker from docker.com 🏃


Understanding Docker

Docker revolves around three core components: containers, images, and volumes. Let's understand what they are and how they work together.


If you want to containerize your application, first you have to build an image of it. An image is nothing but a combination of your app code, dependencies, and configuration. It's like a complete package of your application, ready to be shipped.


A container is just a running instance of an image. It lets you run the application in an isolated environment. You can run multiple containers based on different images.


Volumes are often used to store persistent data. For example, if you want to access the host's files from the container, you can use volumes to map a host path to a container path.
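

For example, here's a rough sketch of mapping a host folder into a container with the -v flag (the paths and image name are placeholders):


# map ./data on the host to /home/yourapp/data inside the container
docker run -v "$(pwd)/data:/home/yourapp/data" image_name:tag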


Docker Files

There are a bunch of Docker-specific files that are used to configure Docker. It's good to understand what each of them does, so here we go:


  • Dockerfile: It's used to build an image of an application.
  • docker-compose.yaml: A structured way to execute docker commands with a lot of options in order to manage containers.
  • .dockerignore: Similar to .gitignore, it lists files and directories that Docker should exclude from the build context.


Containerization of a Simple Next.js Application


Create a Next.js app using the steps in their documentation.


While creating the Next.js application, I went for directory-based routing, so the app directory acts like the src directory. Now, you need to build an image of your application so that you can run it inside a container.


Building a Docker image

To create your own Docker image, you need to create a Dockerfile. This file contains everything needed to package your application, including dependencies and startup commands. Create the Dockerfile at the root of your project.


FROM node:18-bullseye-slim

# create the app directory and copy the project into it
RUN mkdir -p /home/yourapp
COPY . /home/yourapp

# run all subsequent commands from the project directory
WORKDIR /home/yourapp

# install dependencies inside the image
RUN npm install

# default command executed when a container starts from this image
CMD ["npm", "run", "dev"]


Directives such as FROM, COPY, RUN, etc., are specific to the Dockerfile. To execute your application, you need Node.js installed in the container, which is why we direct Docker to pull the official Node.js image from Docker Hub for the application to use.


Then, with the RUN directive, you are telling Docker to run a command inside the image's file system. Here, it creates a yourapp directory inside the home directory.


Next, the COPY directive copies the files from the current directory of your local system into the yourapp directory of the image's file system.


To ignore certain files or directories such as .env, node_modules, etc., you can create a .dockerignore file and list them there.
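

For example, a minimal .dockerignore for a setup like this might look as follows (the exact entries depend on your project):


node_modules
.next
.git
.env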


With the WORKDIR directive, the current working directory is set to your project's directory in the image's file system. If you don't specify it, npm will install dependencies in the root of the file system instead of your project directory.


The RUN directive executes commands at build time, often to install dependencies. npm install creates the node_modules folder, so you don't need to copy it from your local system. The CMD directive is like an entrypoint command for your built image: Docker will execute it by default when you run the image in a container.


Now that your image-building instructions are ready, you can execute the following command to create an image.


docker build -t image_name:tag .


The -t flag names the image and optionally tags it. You can use something like 1.0, or anything you want, by replacing the :tag placeholder. And the . is the build context path that Docker uses to find your project files. If you are at the root of your project, you can specify .; otherwise, you have to give a path relative to your terminal's current directory.
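

For instance, with a hypothetical name and tag, a concrete invocation from the project root could be:


docker build -t yourapp:1.0 .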


After the command gets executed successfully, you should get something like this in your terminal.


[+] Building 55.4s (11/11) FINISHED                                                       docker:default
 => [internal] load build definition from Dockerfile                                                0.1s
 => => transferring dockerfile: 208B                                                                0.0s 
 => [internal] load .dockerignore                                                                   0.0s 
 => => transferring context: 72B                                                                    0.0s 
 => [internal] load metadata for docker.io/library/node:18-bullseye-slim                            2.5s 
 => [auth] library/node:pull token for registry-1.docker.io                                         0.0s
 => [1/5] FROM docker.io/library/node:18-bullseye-slim@sha256:d2617c7df857596e4f29715c7a4d8e861852  0.0s
 => [internal] load build context                                                                   0.1s
 => => transferring context: 18.36kB                                                                0.1s
 => CACHED [2/5] RUN mkdir -p /home/yourapp                                              0.0s
 => [3/5] COPY . /home/yourapp                                                           0.1s
 => [4/5] WORKDIR /home/yourapp                                                          0.0s
 => [5/5] RUN npm install                                                                          41.8s
 => exporting to image                                                                             10.3s
 => => exporting layers                                                                            10.3s
 => => writing image sha256:fb05dcdb130301a8a1f8afa39c01a4430c59127e266cb49bcb763b5dd73d7aef        0.0s
 => => naming to docker.io/image_name:tag                                  0.0s


And if you have Docker Desktop installed, you will see the image listed in the Images tab.

That's the basic process for building an image in Docker from a Dockerfile. You can port this image to any other machine, and it will run exactly the same as it does on yours.
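

One way to port an image without a registry is the docker save and docker load pair; here's a quick sketch using the image name from earlier:


# on this machine: export the image to a tar archive
docker save -o yourapp.tar image_name:tag

# on the other machine: load the archive back into Docker
docker load -i yourapp.tar
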

But still, your app isn't running! Why? Because you didn't run the image. And that's where containers come into action.

Running the Image in a Container

As you know, a container is nothing but a running instance of an image. So how do you run an image? Using docker run:


docker run -d -p 5000:3000 --name container_name image_name:tag


What's going on with this command, you may ask! Well, nothing much. The first flag, -d, runs the container in detached mode, which means it keeps running in the background and frees up your terminal. If you don't specify it, the container logs everything to the terminal, and once you kill that process, the container exits and stops running.


The second flag, -p, is used for port binding. Port binding in Docker is a way to map a container port to a port on your machine, i.e., a host port. Let's say you are running your application at port 3000 inside the container; because the container is isolated, it has its own ports, so you have to specify which port on your local machine the application should be forwarded to.


Here, with -p 5000:3000, you are binding port 3000 of your container to port 5000 of your local machine. So, the syntax is -p HOST:CONTAINER. The next two arguments are quite straightforward: you specify a container name and the name of the image you want to run, along with its tag.


If you have some env variables, you can put them in an env file and then pass its path with the --env-file flag.
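

For example, assuming a hypothetical .env file at the project root, the run command becomes:


# .env holds lines like API_KEY=value
docker run -d -p 5000:3000 --env-file ./.env --name container_name image_name:tag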


After running the command, you will see a container being created, the command specified in the CMD directive will be executed, and the app will be live at port 5000!
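

A quick sanity check is to request the page from the host port:


curl http://localhost:5000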


To see all the running containers, just run docker ps, and to view all containers (including stopped ones), run docker ps -a.
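

And once you're done with a container, you can stop and remove it by name:


docker stop container_name
docker rm container_name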


But there's a catch: every time you need to run an image, you have to invoke docker run with a long list of options. That isn't feasible, and it's time-consuming when you have multiple services or containers talking to each other in a complex application. To overcome this, there's a file named docker-compose.yaml that you can use to specify the containers with all of their required options and env variables. With only one container, your docker-compose file can look something like this:


services:
  your_container_name:
    container_name: your_container_name
    image: image_name:tag
    env_file: <<path to env file>>
    ports:
      - 5000:3000
    command: npm run dev


To run Docker Compose, execute docker-compose up -d; the detached flag makes the container run in the background. To stop and remove the container, run docker-compose down!
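

So the full cycle looks like this (on newer Docker installs, Compose ships as a plugin, so docker compose with a space works too):


docker-compose up -d    # create and start the container in the background
docker-compose down     # stop and remove the container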


Persistent Data - Volumes

Try updating your Next.js code locally and see if those changes are reflected in your containerized application. Does it update? No, it won't. The reason is that your local code is not in sync with the code in the container: the code was copied into the image once, at build time, so the container keeps running that old copy.


To overcome this issue, you need to keep some of your container's files in sync with your local files, and volumes can be used to do that. You map your local directory to the container's directory with the help of a volume, which lets Docker pick up changes in the local directory and reflect them in the container directory!


In your docker-compose.yaml file, add the following:


services:
  your_container_name:
    # ...
    volumes:
      - ./app:/home/yourapp/app
    # ...


For the current Next.js application, you can sync the app directory (if you opted for directory-based routing) or the src directory. The path before the colon is the relative path to your local app directory, and the path after it is the absolute path to the project's app directory inside the container.
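

Putting the pieces together, the full compose file might look like this; the env file path is an assumption, and the container path matches the Dockerfile above:


services:
  your_container_name:
    container_name: your_container_name
    image: image_name:tag
    env_file: ./.env              # assumed path; use your own
    ports:
      - 5000:3000
    volumes:
      - ./app:/home/yourapp/app   # matches WORKDIR /home/yourapp
    command: npm run dev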


All of that takes care of live changes, but hot reloading still won't work when you make any changes to the app directory. For that to work, you have to add a webpack config to the next.config.js file.


const nextConfig = {
  webpack: (config) => {
    // poll for file changes instead of relying on native file-system
    // events, which don't propagate into the container through the volume
    config.watchOptions = {
      poll: 1000,
      aggregateTimeout: 300,
      ignored: ['**/node_modules']
    }
    return config
  }
}

module.exports = nextConfig


After updating your next config, you need to rebuild the image, because next.config.js isn't covered by the volume: it was copied into the image once by the COPY directive. A better approach would be to mount the whole project directory as a volume so that you don't have to rebuild the image, but it's debatable.
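

If you go down that route, a common pattern is to mount the project root and shield node_modules with an anonymous volume, so the container keeps the dependencies installed in the image; here's a sketch, assuming the /home/yourapp path from the Dockerfile:


services:
  your_container_name:
    # ...
    volumes:
      - .:/home/yourapp              # sync the whole project
      - /home/yourapp/node_modules   # anonymous volume preserves the image's node_modules
    # ...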


That's it! You just built your own Docker image and ran it in a container. If you want, you can also publish your image to Docker Hub.
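

Publishing boils down to tagging the image with your Docker Hub username and pushing it; your_dockerhub_user here is a placeholder:


docker login
docker tag image_name:tag your_dockerhub_user/yourapp:1.0
docker push your_dockerhub_user/yourapp:1.0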


This post was originally published on Syntackle.


The lead image for this article was generated by HackerNoon's AI Image Generator via the prompt "an app in a container".