Using Jenkins, Docker and CI/CD for Serverless Applications

by ali yuksel, January 12th, 2020
Too Long; Didn't Read

I am developing a freelance project with AWS Lambda and use Jenkins and Docker for CI/CD. I prefer to run Jenkins in a Docker container, which keeps the setup easy and clean; an official Jenkins image is available at https://hub.docker.com/r/jenkins/jenkins/. Docker runs containers over images: for example, this image contains Linux and Jenkins, so a container created from it is a machine with both installed. To keep Jenkins state across restarts, we have to define a volume for the jenkins_home directory inside the container.

Hi, I am developing a freelance project with AWS Lambda, and I used Jenkins and Docker for CI/CD. Jenkins runs its pipelines in containers. In this tutorial I will show you how I set up my environment.

I am using a Mac, so first I installed Docker on my machine, following https://docs.docker.com/docker-for-mac/install/. You can also find the Windows version of Docker on that site.

Next, I need Jenkins. I prefer to run Jenkins in a Docker container; it keeps the setup easy and clean. There is an official Docker image for Jenkins at https://hub.docker.com/r/jenkins/jenkins/. If you are not familiar with Docker images, have a look at the Docker documentation.

Basically, Docker runs a container as a lightweight machine, and it creates containers from images. For example, this image contains Linux and Jenkins, so when Docker creates a container from it, we get a machine with Linux and Jenkins already installed.

How can we set up this container? Let's begin by pulling the image from Docker Hub.

docker pull jenkins/jenkins:lts
lts: Pulling from jenkins/jenkins
844c33c7e6ea: Pull complete 
ada5d61ae65d: Pull complete 
f8427fdf4292: Pull complete 
f025bafc4ab8: Pull complete 
67b8714e1225: Pull complete 
64b12da521a3: Pull complete 
2e38df533772: Pull complete 
b1842c00e465: Pull complete 
b08450b01d3d: Pull complete 
2c6efeb9f289: Pull complete 
0805b9b9cdc4: Pull complete 
f129619fc383: Pull complete 
cd27f3a82cdf: Pull complete 
f31251f493ed: Pull complete 
2c902f1f4dfa: Pull complete 
2fe1d2cb7aab: Pull complete 
908723de775f: Pull complete 
54aa3899e429: Pull complete 
f48cf8764dc1: Pull complete 
Digest: sha256:d5069c543e80454279caacd13457d012fb32c5229b5037a163d8bf61ffa6b80b
Status: Downloaded newer image for jenkins/jenkins:lts

Docker downloaded the image and extracted it. Now we can list the images.

alis-MBP:scms aliyuksel$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
jenkins/jenkins     lts                 a3f949e5ebfd        2 weeks ago         582MB

We can run a container from this image with the following command.

docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

It starts correctly, and we can use Jenkins until we shut down the container. But then there is a problem: on the next run, none of the pipelines or configurations we created are there. Docker runs the container on top of the image file, and the image file is immutable. When we define a pipeline, Jenkins saves it into the container's file system, not into the image, so when we shut down the container, all of it is lost. We have to use volumes to fix this.

Volumes persist a container's file system onto our real machine. We have to define a new volume to keep the jenkins_home directory that lives inside the container. It is very simple; we only need to change the run command.

docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts

-v means volume. If the volume does not exist, Docker creates it automatically (see the Docker documentation for more). Basically, we define a new volume named jenkins_home and mount it at the container path /var/jenkins_home. After that, our changes no longer disappear when we shut the container down, because everything under /var/jenkins_home is kept in the volume.

Now we can open the Jenkins URL in a browser (http://localhost:8080, given the port mapping above). Jenkins says we should unlock it, and for that we need the initialAdminPassword. When we executed the command above, the logs printed this initialAdminPassword.

*************************************************************
*************************************************************
*************************************************************

Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:

fb7b0666b23f40db985817700d1d0821

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

*************************************************************
*************************************************************
*************************************************************
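If the startup logs have already scrolled away, the same password can be read out of the running container. A sketch, assuming the container started above (use docker ps to find its name or ID):

```
docker exec <container-id> cat /var/jenkins_home/secrets/initialAdminPassword
```

The path is the one Jenkins itself reports in the log banner above.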

We enter this password on the Jenkins page. Then we go through the following steps:

  • Install suggested plugins
  • Create first admin user
  • Instance Configuration
  • Start using Jenkins.

That finishes the first part of the tutorial: we have a Jenkins instance.

We can start implementing the pipeline. This pipeline will do the steps below:

  • run all of the following steps in another Docker container,
  • pull my project from Git (Bitbucket), a Lambda project written in Node.js,
  • deploy the Lambda to AWS with the Serverless Framework.

We need a new image for this container; a simple Linux machine is enough. We pull a new image from Docker Hub.

docker pull alpine

We have Docker on the local machine, but Jenkins is running inside a container, so Jenkins needs a way to reach our Docker daemon. We have to create a bridge between the container and Docker. We can solve this with volume definitions.

-v /var/run/docker.sock:/var/run/docker.sock  -v $(which docker):$(which docker) 

The docker.sock file is the Unix socket the Docker daemon listens on. With these volume options, docker.sock inside the container is mounted from docker.sock on the local machine, so the Jenkins container talks to the host's Docker daemon. Using -v with a real path on the local machine means a bind mount, and $( ) is shell command substitution: $(which docker) is resolved on the local machine before docker run executes, so the host's docker binary is mounted into the container at the same path. We have to add this parameter because Jenkins runs "which docker" inside the container to find the docker client.
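The evaluation order is worth spelling out, since it explains why the host's docker binary ends up inside the container at all. A minimal sketch, using sh instead of docker so it runs anywhere:

```shell
# $(...) is expanded by the *host* shell before docker run ever executes.
BIN=$(which sh)        # absolute path of a host binary, e.g. /bin/sh
ARG="-v $BIN:$BIN"     # the already-expanded argument that docker run receives
echo "$ARG"
```

By the time docker run sees the arguments, the substitution is gone: it receives a plain host-path:container-path bind mount.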

We also have to change the permissions of this file, because the jenkins user cannot access it. We need to connect to the container as root.

We can run this command to start the container.

 docker run -p 8080:8080 -p 50000:50000 --name jenkins  -v jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock  -v $(which docker):$(which docker)  jenkins/jenkins:lts

Then we open a terminal in the container by running a bash shell as the root user, and change the permissions of the file.
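The exact commands are not shown in the original text, but a typical sketch (assuming the container was started with --name jenkins as above) looks like this:

```
docker exec -it -u root jenkins bash
chmod 666 /var/run/docker.sock
exit
```

Opening the socket to all users is a quick workaround with security implications, so treat it as a development-machine shortcut.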

One important point: a container name cannot be used twice. When you shut down the container, you have to remove it.

docker container rm jenkins

After that you can use --name jenkins again next time.

We can implement pipeline now.

Click "New Item" in the menu, enter an item name, select Pipeline and click OK.

In the Pipeline section of the screen, write the first script. This pipeline creates a new container and executes all of its commands inside that container.

pipeline {
   agent {
        docker { image 'alpine' }
    }

   stages {
      stage('pull project') {
         steps {
             sh 'pwd'
         }
      }
   }
}

You can look at the logs in Jenkins; the pwd output confirms the stage ran inside the container.

We can pull project from bitbucket.

First, we should define credentials in Jenkins for a user who can access Bitbucket.

Click Credentials/System in the menu, then click Global Credentials on the screen. The menu changes and we can see Add Credentials. Click Add Credentials and define the user that is already registered on Bitbucket.

The important point is the ID field. We will use this field's value when we pull the project from Bitbucket.

We go to the pipeline's configuration page and write a pipeline script that pulls the project from Bitbucket.

pipeline {
   agent {
        docker { image 'alpine' }
    }

   stages {
      stage('pull project') {
         steps {
             git credentialsId : 'aliyksel', url:'https://[email protected]/allscms/scms.git'
         }
      }
   }
}

Save it and run the pipeline by clicking "Build Now".

The pipeline runs and pulls the project from Bitbucket.

Now we have finished step 2. The next steps need npm. Normally we could use an npm plug-in, but since we need to create a new image for Serverless anyway, I prefer to use the npm inside that new image instead of the plug-in. So we will create an image that contains npm and the Serverless Framework.

First, we pull an image that has Node.js.

docker pull node

We will add serverless framework into the image.

We run a container from the node image.

docker run -it --name node  node bash

Now we are inside the container, and we install the Serverless Framework.

npm install serverless -g

Now we can create a new image from this container by running the command below on the local machine.

docker commit node  serverlessimg

Then we exit the container.

exit
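As an aside, docker commit captures the container's current state, but the same image can be rebuilt reproducibly from a Dockerfile. A sketch that should be equivalent (the name serverlessimg matches the one used above):

```
FROM node
RUN npm install serverless -g
```

Then docker build -t serverlessimg . produces the image without any manual container steps.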

We have a new image for Serverless, and we can use it in Jenkins in a pipeline that deploys our serverless application to AWS.

You should change the image name in the agent block and add the steps below to your pipeline.

   agent {
        docker { image 'serverlessimg' }
    }
...
stage('pull and deploy project : scms') {
         steps {
               git credentialsId : 'aliyksel', url:'https://[email protected]/allscms/scms.git'
               sh 'npm install'
               sh 'serverless config credentials --provider aws --key AKIAIOSFODNN7EXAMPLE --secret wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
               sh 'serverless deploy'
             }
         }
      }

We have to define user credentials to access AWS services. If you don't have an AWS user, you can look at the AWS documentation on creating IAM users. I already have a user, and I set up my credentials like this.

serverless config credentials --provider aws --key AKIAIOSFODNN7EXAMPLE --secret wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
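Hard-coding AWS keys in the pipeline script works, but it exposes them to anyone who can read the job configuration. A safer sketch uses the Jenkins Credentials Binding plug-in; the credential ID aws-deploy here is a hypothetical example, not from the original setup:

```
stage('deploy') {
   steps {
       withCredentials([usernamePassword(credentialsId: 'aws-deploy',
                                         usernameVariable: 'AWS_KEY',
                                         passwordVariable: 'AWS_SECRET')]) {
           sh 'serverless config credentials --provider aws --key $AWS_KEY --secret $AWS_SECRET'
           sh 'serverless deploy'
       }
   }
}
```

The keys are then stored once in Jenkins credentials and injected as masked environment variables at build time.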

So we created a new image for the Serverless Framework and deployed an application to AWS. I hope this will be useful for you.