In this article I want to show how to use Docker for development and testing, and to show that now is the time to switch from development to engineering, from a single stack to the full stack. And of course full stack is not only frontend and backend, it's the environment too. Docker is a great tool for this stuff. I'll also share some thoughts on why, in the near future, full stack will include machine learning, and show how easy it is to use Docker in this area.

## Docker philosophy

> Docker is an open source software development platform. Its main benefit is to package applications in "containers," allowing them to be portable among any system running the Linux operating system (OS).

Think of a Docker container as another form of virtualization. Virtual machines (VM) allow a piece of hardware to be split up into different VMs — or virtualized — so that the hardware power can be shared among different users and appear as separate servers or machines. Docker containers virtualize the OS, splitting it up into virtualized compartments to run container applications. This approach allows pieces of code to be put into smaller, easily transportable pieces that can run anywhere Linux is running. It's a way to make applications even more distributed, and strip them down into specific functions.

> A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments, and help reduce conflicts between teams running different software on the same infrastructure.

After installation, you work with docker from the command line. There are a lot of parameters and options; we will work mostly with build, images, run, exec, rm, and rmi.

## Dockerizing a node.js app using YEPS

You can start from the official node documentation article: Dockerizing a Node.js web app.

To work with a docker container you need to get an image from docker hub or create your own image using a Dockerfile and the docker build command. Let's create our own image for a node.js app built with the YEPS framework.

I created a repository on github so you can get the source:

```bash
git clone https://github.com/evheniy/yeps-docker-example.git
cd yeps-docker-example
```

Dockerfile:

```dockerfile
FROM node:latest

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source
COPY . /usr/src/app

EXPOSE 3000
CMD [ "npm", "start" ]
```

To create your own image you extend an existing one using FROM. This image extends the latest version of the official node image from docker hub.

Next we create a working directory in the container. With node.js it is good practice to copy package.json and install all dependencies before copying the other files (this way the npm install layer is cached and only re-runs when dependencies change). So I copy package.json, run npm install to get all dependencies, and only then copy the rest of the files into the container.

The EXPOSE instruction makes the docker container listen on a port, and CMD runs our server. There can be only one CMD instruction. It's the philosophy of docker — one process per container.

After building we can store our image on docker hub or in our own private image registry, but I won't describe that in this article. To build the image run:

```bash
docker build -t yeps .
```

The -t option sets a name for the image, in our case yeps. All the examples and commands for working with this image are described in the README.md file.
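One detail worth borrowing from the official Dockerizing guide linked above: a .dockerignore file next to the Dockerfile keeps locally installed modules and debug logs out of the build context (I assume the example repo already handles this; if not, it's worth adding):

```
node_modules
npm-debug.log
```

This prevents your local node_modules from being copied over the modules installed inside the image.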
To run the container:

```bash
docker run -p 3000:3000 --name yeps -d yeps
```

The -p option maps a host machine port to a container port, and -d runs the container as a service (in detached mode). Open http://localhost:3000/ and see the working node application.

There are several ways to stop it. The simplest is docker stop <containerID>; to find the container id, run docker ps -a. If you run docker image ls you can see the images: node and yeps. To delete an image use docker image rm. But there is a shorter way to stop and remove the container in one step:

```bash
docker rm -f yeps
```

## Interactive mode

Docker also lets you work with images without building your own. On docker hub you can find a lot of interesting official and non-official images. You can extend them to make a new image of your own, or just run them as they are. One interesting use case is testing: on the official docker hub page for node.js there are images for many different node.js versions. For example, I'll show how to test any node.js application. Let's try it with the YEPS framework. First we get the code from github:

```bash
git clone https://github.com/evheniy/yeps.git
cd yeps
```

Then we run the tests using any node version like this:

```bash
docker run -it --rm -v "$PWD":/www -w /www node:8 /bin/bash -c "node -v && npm -v && npm i && npm t"
```

Here we use the node:8 image (the latest 8.x version; you can specify any other). The -it parameter runs the container in interactive mode, and --rm removes the container when it finishes, so no data is left behind. (I'll describe how to clean the disk of old containers and images later in this tutorial.)

The -v option maps the current directory to /www inside the container, and -w is an analog of the cd command (change directory): it makes our commands run in that directory. Finally, we run our node.js commands (node -v && npm -v && npm i && npm t) via /bin/bash with the -c flag.

If you need to run the same commands on node.js 7, just change the node image:

```bash
docker run -it --rm -v "$PWD":/www -w /www node:7 /bin/bash -c "node -v && npm -v && npm i && npm t"
```

All node images from the official repository can be found on docker hub. As a good practice, if there are images based on alpine linux, prefer them: they take much less disk space. So to test our app on the latest node version, run (on alpine linux we use /bin/sh instead of bash):

```bash
docker run -it --rm -v "$PWD":/www -w /www node:alpine /bin/sh -c "node -v && npm -v && npm i && npm t"
```

## Database as a service

For testing packages against different databases I also use docker; for example, for yeps-redis. To start and stop the database I added commands to the scripts section of package.json:

```json
"db:start": "docker run -d --name redis -p 6379:6379 redis:latest",
"db:stop": "docker rm -f redis"
```

The same for yeps-mysql:

```json
"db:start": "docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=yeps mysql:latest",
"db:stop": "docker rm -f mysql"
```

Here the -e option sets environment variables such as the root password and the database name.

And for yeps-mongoose:

```json
"db:start": "docker run -d --name mongo -p 27017:27017 mongo",
"db:stop": "docker rm -f mongo"
```
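One practical note: a database container needs a moment to boot before tests can connect. Here is a minimal sketch of a guarded test run for the redis case — the until loop and the wiring around npm test are my own assumption, not part of the yeps-* repositories:

```bash
# start Redis, poll until it answers PING, run the tests, then clean up
npm run db:start
until docker exec redis redis-cli ping 2>/dev/null | grep -q PONG; do
  sleep 1
done
npm test
npm run db:stop
```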
As I use TravisCI for testing, using docker as a service there is easy, because TravisCI itself is based on docker. Just register the repository and create a .travis.yml like I made for yeps-mongoose:

```yaml
sudo: required
language: node_js
node_js:
  - "7"
  - "8"
services:
  - docker
before_install:
  - npm install
script:
  - docker version
  - node --version
  - npm --version
  - npm run lint
  - npm run test
  - npm run report
```

If you need a private CI service you can use Jenkins or TeamCity. Jenkins has an official repository on docker hub where you can find documentation on how to run it. For example:

```bash
docker run --name jenkins -p 8080:8080 -d jenkins
```

To get access to the admin UI you need the admin password. As we run our container with the -d option, we can only get it from /var/jenkins_home/secrets/initialAdminPassword:

```bash
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
```

Here I use the exec command. It runs a command, in this case cat /var/jenkins_home/secrets/initialAdminPassword, inside a running container. And docker rm -f jenkins will stop and remove it.

It's almost the same for TeamCity, except that the main process (server) and the build agents run in separate containers. First we create a directory to store the configs and logs:

```bash
mkdir teamcity
cd teamcity
```

To start the TeamCity server just run:

```bash
docker run -it --name teamcity-server-instance -v "$PWD"/datadir:/data/teamcity_server/datadir -v "$PWD"/logs:/opt/teamcity/logs -p 8111:8111 -d jetbrains/teamcity-server
```

And almost the same for the build agents (the free version allows only 3 build agents):

```bash
docker run -it -d -e SERVER_URL="teamcity-server-instance:8111" -v "$PWD"/agent1:/data/teamcity_agent/conf --link teamcity-server-instance:teamcity-server-instance --privileged jetbrains/teamcity-agent

docker run -it -d -e SERVER_URL="teamcity-server-instance:8111" -v "$PWD"/agent2:/data/teamcity_agent/conf --link teamcity-server-instance:teamcity-server-instance --privileged jetbrains/teamcity-agent

docker run -it -d -e SERVER_URL="teamcity-server-instance:8111" -v "$PWD"/agent3:/data/teamcity_agent/conf --link teamcity-server-instance:teamcity-server-instance --privileged jetbrains/teamcity-agent
```

I specified a different config directory (agent1, agent2, agent3) for each agent. As you can see, the server and agent directories are all mapped into the local teamcity directory.
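Since the three agent commands differ only in the directory name, a small shell loop is equivalent (a sketch of my own; the commands above start them one by one):

```bash
# start three build agents, each with its own config directory (agent1..agent3)
for i in 1 2 3; do
  docker run -it -d \
    -e SERVER_URL="teamcity-server-instance:8111" \
    -v "$PWD/agent$i":/data/teamcity_agent/conf \
    --link teamcity-server-instance:teamcity-server-instance \
    --privileged jetbrains/teamcity-agent
done
```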
## Docker compose

Docker is a good tool when you need to build and run a single image, but most real apps have several pieces running at the same time: databases, instances in cluster mode, microservices… And Docker compose is the perfect tool for this.

> Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the list of features.

Compose has commands for managing the whole lifecycle of your application:

- Start, stop, and rebuild services
- View the status of running services
- Stream the log output of running services
- Run a one-off command on a service

Let's create a compose version of our TeamCity cluster, docker-compose.yml:

```yaml
version: '2'
services:
  server:
    restart: unless-stopped
    image: jetbrains/teamcity-server
    ports:
      - "8111:8111"
    volumes:
      - "./server/datadir:/data/teamcity_server/datadir"
      - "./server/logs:/opt/teamcity/logs"
  agent1:
    restart: unless-stopped
    image: jetbrains/teamcity-agent
    links:
      - "server:server"
    environment:
      SERVER_URL: server:8111
    volumes:
      - "./agent1:/data/teamcity_agent/conf"
  agent2:
    restart: unless-stopped
    image: jetbrains/teamcity-agent
    links:
      - "server:server"
    environment:
      SERVER_URL: server:8111
    volumes:
      - "./agent2:/data/teamcity_agent/conf"
  agent3:
    restart: unless-stopped
    image: jetbrains/teamcity-agent
    links:
      - "server:server"
    environment:
      SERVER_URL: server:8111
    volumes:
      - "./agent3:/data/teamcity_agent/conf"
```

We use almost the same parameters (ports, images, volumes) but in compose-specific form.

To start compose just run docker-compose up, or run it as a service:

```bash
docker-compose up -d
```

And to stop:

```bash
docker-compose stop
```

To bring everything down (including volumes):

```bash
docker-compose down --volumes
```

Or, if you also want to remove the docker images:

```bash
docker-compose down --rmi all
```

Now let's update our previous YEPS node.js example to a cluster of node.js instances with nginx as a load balancer. I created a github repository, so just clone it:

```bash
git clone https://github.com/evheniy/yeps-docker-compose-example.git
cd yeps-docker-compose-example
docker-compose up -d
```

In this example we have docker-compose.yml and two directories: nginx and node. You can open the links and check each file. I use the same idea as in the previous compose example, but this time I build my own image for nginx:

```dockerfile
# Set nginx base image
FROM nginx:latest

# Copy custom configuration file from the current directory
COPY nginx.conf /etc/nginx/nginx.conf
```

and for node I use the existing Dockerfile.

Docker and Docker compose really help with development and testing of real modern applications, even those with a microservice architecture. But you can use them for more than development. Next I'll show how to use them for data science experiments.

## Machine learning

The same idea is useful not only for development (I mean computer science), it's useful for data science too. Running machine learning experiments in a docker container works just as well. For my machine learning experiments I created a github repository with a python docker image based on anaconda.

To work with this container, clone the git repository and build the image:

```bash
git clone https://github.com/evheniy/python-docker.git
cd python-docker
```

As I work a lot with node.js and npm, I put the commands in the package.json scripts section and described them in the README.md file. So to build, run npm run build, or:

```bash
docker build -t python .
```

And to start, npm start, or:

```bash
docker run --name python -p 8888:8888 -v "$PWD"/python:/opt/notebooks -d python
```

Here the -d parameter daemonizes the process, and -v maps the current directory so all my data survives stopping the container. Then open http://localhost:8888 with the password root.

In the jupyter notebook web UI you can find some examples and run any of them, for example plot_face_recognition.

Docker helps you work with the same environment (python, scikit-learn, SkPy, …) in any place. After finishing, stop the container with npm run stop and free the disk space with npm run rm.
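For reference, here is a rough sketch of what such an anaconda-based jupyter image might look like. The base image, conda command, and jupyter flags below are my assumptions; the actual Dockerfile lives in the python-docker repository:

```dockerfile
# assumption: the official Anaconda 3 base image from Continuum Analytics
FROM continuumio/anaconda3

# make sure jupyter is available (it ships with anaconda, this just pins it)
RUN conda install -y jupyter

EXPOSE 8888

# serve notebooks from the directory that docker run maps in via -v
# (the real image would also configure the notebook password)
CMD ["jupyter", "notebook", "--notebook-dir=/opt/notebooks", \
     "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]
```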
## Cleaning

> Docker makes it easy to wrap your applications and services in containers so you can run them anywhere. As you work with Docker, however, it's also easy to accumulate an excessive number of unused images, containers, and data volumes that clutter the output and consume disk space. Docker doesn't provide direct cleanup commands, but it does give you all the tools you need to clean up your system from the command line.

In this tutorial you can find a quick reference to commands that are useful for freeing disk space and keeping your system organized by removing unused Docker images, containers, and volumes. Some useful commands I'm going to provide here.

List all images:

```bash
docker image ls
```

List all containers (running and stopped):

```bash
docker ps -a
```

One line to stop and remove all containers:

```bash
docker rm -f $(docker ps -a -q)
```

And remove all images:

```bash
docker rmi -f $(docker images -q)
```

Delete all dangling volumes:

```bash
docker volume rm $(docker volume ls -f dangling=true -q)
```

## Conclusion

In this tutorial I showed some useful examples and commands for working with docker. There are many other combinations and flags that can be used with each one. For more information you can read the docker documentation, and I recommend finishing a Udemy course.

Docker can help with researching and testing new tools, databases, and machine learning and big data tools like Hadoop and Apache Spark: you can run them locally and keep your PC clean after stopping.

As I said before, docker is a great tool for development and testing. In production you will use devops help to configure web services like AWS. So with docker you can try anything now and let devops take care of production.