Create lean Node.js image with Docker multi-stage build

by Alexei Ledenev, May 4th, 2017

Starting from Docker 17.05+, you can create a single Dockerfile that can build multiple helper images with compilers, tools, and tests and use files from above images to produce the final Docker image.

The “core principle” of Dockerfile

Docker builds images by reading the instructions from a Dockerfile. A Dockerfile is a text file that contains all the commands needed to build a new Docker image. The Dockerfile syntax is pretty simple, and the Docker team tries to keep it stable between Docker engine releases.

The core principle is very simple: 1 Dockerfile -> 1 Docker Image.
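
As a minimal sketch of that principle (the copied file and the image tag are hypothetical examples, not from the original post), one Dockerfile produces one image:

# minimal Dockerfile: puts some "static" content into an image
FROM alpine:3.5
# index.html is assumed to exist in the build context
COPY index.html /usr/share/www/index.html
# build the single resulting image with:
#   docker build -t local/static-demo .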

This principle works just fine for basic use cases, where you only need to demonstrate Docker capabilities or put some “static” content into a Docker image.

Once you advance with Docker and would like to create secure and lean Docker images, a single Dockerfile is not enough.

People who insist on following the above principle end up with slow Docker builds, huge Docker images (several GB in size), slow deployment times and lots of CVE violations embedded into those images.

The Docker Build Container pattern

Docker Pattern: The Build Container

The basic idea behind the Build Container pattern is simple:

Create additional Docker images with the required tools (compilers, linters, testing tools) and use these images to produce a lean, secure and production-ready Docker image.

An example of the Build Container pattern for a typical Node.js application:

  1. Derive FROM a Node base image (for example node:6.10-alpine) with node and npm installed (Dockerfile.build)
  2. Add package.json
  3. Install all node modules from dependencies and devDependencies
  4. Copy application code
  5. Run compilers, code coverage, linters, code analysis and testing tools
  6. Create the production Docker image; derive FROM the same or another Node base image
  7. Install only the node modules required at runtime (npm install --only=production)
  8. Expose the application PORT and define a default CMD (the command to run your application)
  9. Push the production image to a Docker registry

This flow assumes that you are using two or more Dockerfiles and a shell script or flow tool to orchestrate all the steps above.
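
As a rough sketch of such an orchestration script (not from the original post; file names, image tags and the registry push step are illustrative assumptions):

#!/bin/sh
set -e

# 1. Build the "builder" image with compilers, linters and devDependencies
docker build -t local/chat:build -f Dockerfile.build .

# 2. Run linters, setup and tests inside the builder container
docker run --rm local/chat:build

# 3. Build the lean production image from the second Dockerfile
docker build -t local/chat:latest -f Dockerfile .

# 4. Push the production image to some Docker registry
docker push local/chat:latest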

Example

I use a fork of the Let’s Chat Node.js application.

Builder Docker image with eslint, mocha and gulp

FROM alpine:3.5

# install node
RUN apk add --no-cache nodejs

# set working directory
WORKDIR /root/chat

# copy project file
COPY package.json .

# install node packages
RUN npm set progress=false && \
    npm config set depth 0 && \
    npm install

# copy app files
COPY . .

# run linter, setup and tests
CMD npm run lint && npm run setup && npm run test

Production Docker image with ‘production’ node modules only

FROM alpine:3.5

# install node
RUN apk add --no-cache nodejs tini

# set working directory
WORKDIR /root/chat

# copy project file
COPY package.json .

# install node packages
RUN npm set progress=false && \
    npm config set depth 0 && \
    npm install --only=production && \
    npm cache clean

# copy app files
COPY . .

# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]

# application server port
EXPOSE 5000

# default run command
CMD npm run start
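
To try the production image, a build and run along these lines should work (the image tag is an illustrative assumption; port 5000 matches the EXPOSE above):

$ # build the production image from the Dockerfile above
$ docker build -t local/chat:latest .
$ # run it, publishing the exposed application port
$ docker run -d -p 5000:5000 local/chat:latest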

What is Docker multi-stage build?

Docker 17.05 extends the Dockerfile syntax to support the new multi-stage build by enhancing two commands: FROM and COPY.

A multi-stage build allows using multiple FROM commands in the same Dockerfile. The last FROM command produces the final Docker image; all other stages are intermediate images (no final image is produced from them, but all their layers are cached).

The FROM syntax also supports the AS keyword. Use it to give the current stage a logical name, so you can reference that stage later by this name.

To copy files from an intermediate image, use COPY --from=<stage_AS_name|stage_number>, where the number starts from 0 (but it is better to use a logical name defined with the AS keyword).
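
As a minimal illustration of this syntax (the stage name builder and the file path are arbitrary examples; the full Node.js Dockerfile follows in the next section):

# first stage, given a logical name with AS
FROM alpine:3.5 AS builder
RUN echo "built artifact" > /artifact.txt

# final stage: copy a file produced in the "builder" stage
FROM alpine:3.5
COPY --from=builder /artifact.txt /artifact.txt
# COPY --from=0 /artifact.txt /artifact.txt would reference the same stage by number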

Creating a multi-stage Dockerfile for Node.js application

The Dockerfile below makes the Build Container pattern obsolete, allowing you to achieve the same result with a single file.

# ---- Base Node ----
FROM alpine:3.5 AS base
# install node
RUN apk add --no-cache nodejs-npm tini
# set working directory
WORKDIR /root/chat
# Set tini as entrypoint
ENTRYPOINT ["/sbin/tini", "--"]
# copy project file
COPY package.json .

# ---- Dependencies ----
FROM base AS dependencies
# install node packages
RUN npm set progress=false && npm config set depth 0
RUN npm install --only=production
# copy production node_modules aside
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install

# ---- Test ----
# run linters, setup and tests
FROM dependencies AS test
COPY . .
RUN npm run lint && npm run setup && npm run test

# ---- Release ----
FROM base AS release
# copy production node_modules
COPY --from=dependencies /root/chat/prod_node_modules ./node_modules
# copy app sources
COPY . .
# expose port and define CMD
EXPOSE 5000
CMD npm run start

The Dockerfile above creates 3 intermediate Docker images and a single release Docker image (the final FROM).

  1. The first image, FROM alpine:3.5 AS base, is a base Node image with node, npm, tini (an init app) and package.json
  2. The second image, FROM base AS dependencies, contains all node modules from dependencies and devDependencies, plus a separate copy of the production-only dependencies needed for the final image
  3. The third image, FROM dependencies AS test, runs linters, setup and tests (with mocha); if this command fails, no final image is produced
  4. The final image, FROM base AS release, is a base Node image with the application code and only the node modules from dependencies
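
A nice bonus of naming stages: you can stop the build at a particular stage with the --target flag (available alongside multi-stage builds in Docker 17.05; the image tags below are illustrative):

$ # build only up to the "test" stage: runs linters, setup and tests, no release image
$ docker build --target test -t local/chat:test .

$ # build the whole Dockerfile and produce the release image
$ docker build -t local/chat:multi-stage .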

Try Docker multi-stage build today

In order to try the Docker multi-stage build, you need Docker 17.05, which is going to be released in May and is currently available on the beta channel.

So, you have two options:

  1. Use the beta channel to get Docker 17.05
  2. Run a dind container (Docker-in-Docker)

Running Docker-in-Docker 17.05 (beta)

Running Docker 17.05 (beta) in a Docker container (--privileged is required):


$ docker run -d --rm --privileged -p 23751:2375 --name dind \
    docker:17.05.0-ce-dind --storage-driver overlay2

Try a multi-stage build. Add --host=:23751 to every Docker command, or set the DOCKER_HOST environment variable.


$ # using --host
$ docker --host=:23751 build -t local/chat:multi-stage .



$ # OR: setting DOCKER_HOST
$ export DOCKER_HOST=localhost:23751
$ docker build -t local/chat:multi-stage .

Summary

With the Docker multi-stage build feature, it’s possible to implement an advanced Docker image build pipeline using a single Dockerfile.

Kudos to the Docker team for such a useful feature!

I hope you find this post useful. I look forward to your comments and any questions you have.

Originally published at codefresh.io on April 24, 2017.