In this article I want to show how to use Docker for development and testing, and to argue that now is the time to move from development to engineering, from a single stack to the full stack. And of course full stack is not only frontend and backend; it is the environment too. Docker is a great tool for this.
I also believe that in the near future the full stack will include machine learning, so I'll show how easy it is to use Docker in that area as well.
Docker is an open source software development platform. Its main benefit is packaging applications into "containers," making them portable across any system running the Linux operating system (OS).
Think of a Docker container as another form of virtualization. Virtual Machines (VM) allow a piece of hardware to be split up into different VMs — or virtualized — so that the hardware power can be shared among different users and appear as separate servers or machines. Docker containers virtualize the OS, splitting it up into virtualized compartments to run container applications.
This approach allows pieces of code to be put into smaller, easily transportable pieces that can run anywhere Linux is running. It’s a way to make applications even more distributed, and strip them down into specific functions.
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.
After installing Docker, you work with it from the command line.
You can start from official node documentation article: Dockerizing a Node.js web app.
I created a repository on GitHub, so you can get the source:
git clone https://github.com/evheniy/yeps-docker-example.git
FROM node:latest

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source
COPY . /usr/src/app

EXPOSE 3000

CMD [ "npm", "start" ]
To create your own image, you extend an existing one using the FROM instruction. Here we extend the latest version of the official node image from Docker Hub.
Next we create a working directory inside the container. With Node.js it is good practice to copy package.json and install all dependencies before copying the rest of the files, so the npm install layer can be cached. So I copy it, run npm install to fetch the dependencies, and only then copy the remaining files into the container.
The EXPOSE instruction makes the container listen on a port, and CMD starts our server.
There can be only one CMD instruction. That matches the Docker philosophy: one process per container.
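One detail the Dockerfile above glosses over: `COPY . /usr/src/app` would also copy node_modules and build artifacts into the image. The official Node.js dockerizing guide recommends adding a .dockerignore file next to the Dockerfile; a minimal example (the exact entries are my suggestion, not taken from the example repository):

```
node_modules
npm-debug.log
```

This keeps the image smaller and ensures dependencies are installed inside the container rather than copied from the host.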
After building, we could store our image on Docker Hub or in a private image registry, but I won't cover that in this article.
To build it run:
docker build -t yeps .
All examples and commands for working with this image are described in the README.md file. The -t option sets the image name, in our case yeps.
To run container:
docker run -p 3000:3000 --name yeps -d yeps
The -p option maps a port on the host machine to a port in the container, and -d runs the container detached, as a service. Open http://localhost:3000/ and see the working Node.js application.
There are several ways to stop it. The simplest is docker stop <containerID>; to find the container ID, run docker ps -a. If you run docker image ls you will see the images node and yeps; to delete an image, use docker image rm. But there is another way to stop and remove the container in one step:
docker rm -f yeps
Docker also lets you work with images without building your own. On Docker Hub you can find many interesting official and unofficial images. You can extend them to make your own image or just run them as they are.
One interesting use case is testing. The official Docker Hub page for Node.js lists images for many different Node.js versions, which is handy for testing. As an example, I'll show how to test a Node.js application this way, using the YEPS framework. First, get the code from GitHub:
git clone https://github.com/evheniy/yeps.git
Then we run the npm test command with any Node version, like this:
docker run -it --rm -v "$PWD":/www -w /www node:8 /bin/bash -c "node -v && npm -v && npm i && npm t"
Here we run the node:8 image (the latest 8.x version; you can specify any other). The -it options run it in interactive mode, and --rm cleans up all container data after it finishes. I'll describe how to clean old containers and images from disk later in this tutorial.
The -v option maps the current directory to /www, and -w is analogous to the cd command (change directory): it makes our commands run in that directory.
Finally, we run our Node.js commands node -v && npm -v && npm i && npm t via /bin/bash with the -c flag.
If you need to run the same commands with Node.js 7, just change the node image tag:
docker run -it --rm -v "$PWD":/www -w /www node:7 /bin/bash -c "node -v && npm -v && npm i && npm t"
You can find all official node images on Docker Hub. As a good practice, if images based on Alpine Linux are available, prefer them to save disk space. So to test our app with the latest Node version on Alpine, just run (on Alpine Linux we use /bin/sh instead of bash):
docker run -it --rm -v "$PWD":/www -w /www node:alpine /bin/sh -c "node -v && npm -v && npm i && npm t"
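Since only the image tag changes between these runs, a tiny shell helper (my own convenience function, not part of the article's repository) can generate the command for any version, which is handy when scripting test matrices:

```shell
# Print the docker test command for a given node image tag.
# Uses /bin/sh so the same command works on alpine-based images.
node_test_cmd() {
  echo "docker run -it --rm -v \"\$PWD\":/www -w /www node:$1 /bin/sh -c \"node -v && npm -v && npm i && npm t\""
}

node_test_cmd 8
node_test_cmd alpine
```

You could then pipe the output to sh, or loop over a list of tags to test against several Node versions in one go.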
I also use Docker to test YEPS packages against different databases, for example yeps-redis. To start and stop the database for tests, I added commands to the scripts section of package.json:
"db:start": "docker run -d --name redis -p 6379:6379 redis:latest", "db:stop": "docker rm -f redis"
The same for yeps-mysql:
"db:start": "docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=yeps mysql:latest",
"db:stop": "docker rm -f mysql"
Here the -e option sets environment variables, such as the root password and the database name.
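These scripts can also be wired into the npm test lifecycle so the database starts and stops automatically around the tests. A sketch of such a package.json scripts section (the pretest/posttest wiring and the mocha runner are my assumptions, not necessarily how the yeps-mysql repository does it):

```json
{
  "scripts": {
    "db:start": "docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=yeps mysql:latest",
    "db:stop": "docker rm -f mysql",
    "pretest": "npm run db:start",
    "test": "mocha",
    "posttest": "npm run db:stop"
  }
}
```

Note that npm only runs posttest when the tests succeed, so after a failing run you may still need npm run db:stop by hand.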
And for yeps-mongoose:
"db:start": "docker run -d --name mongo -p 27017:27017 mongo", "db:stop": "docker rm -f mongo"
- npm install
- docker version
- node --version
- npm --version
- npm run lint
- npm run test
- npm run report
docker run --name jenkins -p 8080:8080 -d jenkins
To access the admin UI you need the initial admin password. Since we ran the container with the -d option, we can only get it from /var/jenkins_home/secrets/initialAdminPassword:
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
Here I use the exec command. It runs a command such as cat /var/jenkins_home/secrets/initialAdminPassword inside a running container.
And the command docker rm -f jenkins will stop and remove it.
Almost the same works for TeamCity, but here we need to run the main process (the server) and the build agents in separate containers.
First we need to create directories where we can store our configs and logs:
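The exact mkdir command isn't shown in the original, but based on the volume paths in the docker run commands below, something like this should prepare the directories (names are my assumption):

```shell
# Directories for the TeamCity server data/logs and one config
# directory per build agent, matching the volume mappings used later.
mkdir -p datadir logs agent1 agent2 agent3
```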
To start TeamCity server just run:
docker run -it --name teamcity-server-instance -v "$PWD"/datadir:/data/teamcity_server/datadir -v "$PWD"/logs:/opt/teamcity/logs -p 8111:8111 -d jetbrains/teamcity-server
And almost the same for the build agents (the free version allows at most 3 build agents):
docker run -it -d -e SERVER_URL="teamcity-server-instance:8111" -v "$PWD"/agent1:/data/teamcity_agent/conf --link teamcity-server-instance:teamcity-server-instance --privileged jetbrains/teamcity-agent
docker run -it -d -e SERVER_URL="teamcity-server-instance:8111" -v "$PWD"/agent2:/data/teamcity_agent/conf --link teamcity-server-instance:teamcity-server-instance --privileged jetbrains/teamcity-agent
docker run -it -d -e SERVER_URL="teamcity-server-instance:8111" -v "$PWD"/agent3:/data/teamcity_agent/conf --link teamcity-server-instance:teamcity-server-instance --privileged jetbrains/teamcity-agent
I specified a different name and directory for each agent. As you can see, I mapped directories from the server and agents into the local teamcity directory.
Docker is great if you need to build and run a single image, but most real applications involve several things at once: databases, instances in cluster mode, microservices... Docker Compose is the perfect tool for this.
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.
Compose has commands for managing the whole lifecycle of your application:
Let's create a Compose version of our TeamCity cluster in docker-compose.yml:
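A sketch of that file, reconstructed from the docker run commands above (the service names and the version line are my assumptions; the images, ports, and volumes come from those commands):

```yaml
version: '3'
services:
  teamcity-server-instance:
    image: jetbrains/teamcity-server
    ports:
      - "8111:8111"
    volumes:
      - ./datadir:/data/teamcity_server/datadir
      - ./logs:/opt/teamcity/logs
  agent1:
    image: jetbrains/teamcity-agent
    privileged: true
    environment:
      - SERVER_URL=teamcity-server-instance:8111
    volumes:
      - ./agent1:/data/teamcity_agent/conf
  agent2:
    image: jetbrains/teamcity-agent
    privileged: true
    environment:
      - SERVER_URL=teamcity-server-instance:8111
    volumes:
      - ./agent2:/data/teamcity_agent/conf
  agent3:
    image: jetbrains/teamcity-agent
    privileged: true
    environment:
      - SERVER_URL=teamcity-server-instance:8111
    volumes:
      - ./agent3:/data/teamcity_agent/conf
```

Note that Compose puts all services on a shared network, so the explicit --link options from the docker run version are no longer needed.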
We use almost the same parameters (ports, images, volumes), with some Compose-specific updates.
To start it, run docker-compose up, or run it as a service:
docker-compose up -d
And to stop it:
docker-compose stop
To bring everything down (with volumes):
docker-compose down --volumes
Or, if you also want to remove the Docker images:
docker-compose down --rmi all
Let's update our previous Node.js YEPS example to a cluster of Node.js instances with nginx as a load balancer.
I created a GitHub repository, so just clone it:
docker-compose up -d
In this example we have docker-compose.yml and two directories: nginx and node. You can open the links and check each file. I use the same idea as in the previous Compose example, but this time I build my own image for nginx:
# Set nginx base image
FROM nginx

# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf
and for node I use the existing Dockerfile.
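The contents of the nginx.conf being copied aren't shown here; a minimal load-balancing configuration might look like this (the upstream server names and ports are my assumptions, not taken from the repository):

```nginx
events {}

http {
  # Round-robin between the Node.js instances defined in docker-compose.yml
  upstream node_app {
    server node1:3000;
    server node2:3000;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://node_app;
    }
  }
}
```

With Compose, the upstream hostnames resolve to the node service containers on the shared network.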
Docker and Docker Compose really help with development and testing of modern applications, including those with a microservice architecture. But you can use them for more than development. Next I'll show how to use Docker for data science.
The same idea is useful beyond software development: it works for data science too, and running machine learning experiments in a Docker container is a good practice.
To work with this container just clone git repository and build image:
$ git clone https://github.com/evheniy/python-docker.git
$ cd python-docker
So to build, just run npm run build or "docker build -t python .". And to start: npm start or "docker run --name python -p 8888:8888 -v $PWD/python:/opt/notebooks -d python". Here I use the -d parameter to daemonize the process and -v to map the current directory, so all my data is kept after the container stops.
Then open http://localhost:8888 and log in with the password root.
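The npm scripts mentioned above presumably live in the repository's package.json and simply wrap the docker commands; a sketch of what that section would look like (reconstructed from the commands in the text, not copied from the repository):

```json
{
  "scripts": {
    "build": "docker build -t python .",
    "start": "docker run --name python -p 8888:8888 -v $PWD/python:/opt/notebooks -d python",
    "stop": "docker rm -f python"
  }
}
```

The stop script is my addition, following the same docker rm -f pattern used for the database containers earlier.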
In the Jupyter Notebook web UI you can find some examples; run any of them, for example plot_face_recognition:
Docker makes it easy to wrap your applications and services in containers so you can run them anywhere. As you work with Docker, however, it’s also easy to accumulate an excessive number of unused images, containers, and data volumes that clutter the output and consume disk space.
Docker doesn’t provide direct cleanup commands, but it does give you all the tools you need to clean up your system from the command line. In this tutorial you can find a quick reference to commands that are useful for freeing disk space and keeping your system organized by removing unused Docker images, containers, and volumes.
Here are some useful commands.
List all images:
docker image ls
List all containers:
docker ps -a
One line to stop and remove all containers:
docker rm -f $(docker ps -a -q)
And to remove all images:
docker rmi -f $(docker images -q)
To delete all dangling volumes:
docker volume rm $(docker volume ls -f dangling=true -q)
In this tutorial I have shown some useful examples and commands for working with Docker. There are many other combinations and flags for each command. For more information, read the Docker documentation; I can also recommend finishing a Udemy course on Docker.
As I said before, Docker is a great tool for development and testing. In production you will rely on DevOps to configure web services such as AWS. So with Docker you can use anything you need right now, and let DevOps take care of production.