Kubernetes Tutorial: Using The System For Personal Projects

Written by danielcrouch | Published 2021/04/15
Tech Story Tags: kubernetes | docker | yaml | rest-api | containers | devops | backend | deployment

TLDR In this article, we are going to deploy a simple REST API built with ExpressJS and expose it using a service and an ingress. We will use Katacoda as a playground to try out Kubernetes and run tests. We will containerize the application with Docker, publish a port so the container can be accessed on the host machine, and push the image to Docker Hub so that our deployment can pull it.

In this article, we are going to deploy a simple REST API built with ExpressJS and expose it using a service and an ingress. After reading the article, you will have learned how to deploy your personal projects on Kubernetes and understood the YAML configuration objects for deployments, services, and ingresses. You will also have learned how to containerize a NodeJS application using Docker.
If you are new to coding and are wondering what Kubernetes is, here is a list of prerequisites before we get started. 

Prerequisites

  • Node.js
  • Docker
  • Docker Hub account
  • Curl (a command-line tool for making requests from the terminal; if you do not have it installed, you can use Postman)
  • Note that:
    • I am using a Linux environment (Ubuntu 20.04).
    • Directory here also refers to a folder.
    • Terminal in this article refers to the command line.

Setting up the application

To set up the application in this article, clone the davidshare/simple-express-app repository from GitHub (the same repository that the manifest URLs later in this article point to) so that you have the Express application code and the Kubernetes manifests locally.
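The application code itself lives in the repository, but to make the rest of the tutorial easier to follow, here is a minimal sketch of what an index.js matching the routes we test later (a GET on the root path and a POST on /welcome/<name>) could look like. This is an illustration, not the exact code from the repository:
    // index.js - a minimal Express server listening on port 3000
    const express = require('express');
    const app = express();

    // Root route used by the GET test later in the article
    app.get('/', (req, res) => {
      res.send('Welcome to the simple express app');
    });

    // Greeting route used by the POST test later in the article
    app.post('/welcome/:name', (req, res) => {
      res.send(`Welcome ${req.params.name}`);
    });

    app.listen(3000, () => console.log('App listening on port 3000'));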

Containerizing the application with Docker

In this step, we will containerize our application using Docker.
In the project directory, create a file with the name Dockerfile. In the Dockerfile, add this code:
    FROM node:15-alpine
    WORKDIR /usr/app
    COPY . .
    RUN yarn install
    EXPOSE 3000
    ENTRYPOINT ["node", "index.js"]
In the Dockerfile, we specify a base image to use for our container, set the working directory inside the image, copy our application files into the image, install the npm modules, declare 3000 as the port our application listens on, and then specify the start command for the container.
Create a file with the name .dockerignore in the root directory and add a single line of text to it—node_modules—to prevent it from being added to the image during the build. 
Build the docker image for the application on your terminal with the following command: docker build -t simple-express-app . (the dot at the end tells Docker to use the current directory as the build context).
Run the docker images command to list the images you have.
Run the docker image: docker run -d -p 4100:3000 simple-express-app
The -p 4100:3000 flag publishes the container's port 3000 on port 4100 of the host machine so that the application can be accessed from the host.
Check that the container is running.
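You can confirm this with docker ps, which lists running containers; the simple-express-app container should appear with the port mapping 0.0.0.0:4100->3000/tcp (the container ID and name will differ on your machine):
    docker ps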
Now let us test our running application. Instead of port 3000, we will use port 4100, the host port we published.
  1. curl -X GET "localhost:4100" 
  2. curl -X POST "localhost:4100/welcome/DAVID"

Hosting the docker image

In this stage, we will push the image we have built to the docker repository so that we can access it from our deployment.
On your terminal, log in to Docker Hub with docker login -u <username> -p <password>. You will get Login Succeeded as a response.
Run the command docker tag simple-express-app davidshare/simple-express-app.
  • We give the docker image a new tag in the format <docker_username>/<image_name>, so replace davidshare with your own Docker Hub username.
Run the command docker push davidshare/simple-express-app, where simple-express-app is the name of the docker image we created.
When we check Docker Hub, we see that our image has been pushed.

Deploying the application

In this step, we will deploy our application to a Kubernetes cluster. 
For this tutorial, I am going to use Katacoda. It offers a playground to try out Kubernetes or run tests. You will need to log in to access the full features that it offers.
Below is the YAML definition for a deployment in Kubernetes.
apiVersion: apps/v1
kind: Deployment
metadata:
   name: simple-express-app
   labels:
      app: simple-express-app
spec:
   selector:
      matchLabels:
         app: simple-express-app
   strategy:
      type: RollingUpdate
      rollingUpdate:
         maxSurge: 1
         maxUnavailable: 0
   replicas: 3
   template:
       metadata:
          labels:
             app: simple-express-app
       spec: 
          containers:
          - name: simple-express-app
            image: davidshare/simple-express-app
            ports:
               - containerPort: 3000
  1. API Version: Users interact with Kubernetes clusters using an API. Whenever there are major changes to the Kubernetes API, the API version is changed accordingly. The API version field in Kubernetes object declarations reflects the API version being used. Some of the API versions are v1, apps/v1, and extensions/v1beta1.
  2. Kind: This indicates the kind of object to be created. In this case, it is a deployment.
  3. Metadata: It provides descriptive information about the object you want to create. It usually contains the name of the object to be deployed, the labels for the object, and the annotations. You can have many labels in your metadata. 
  4. Spec: The spec section of an object declaration defines the properties of your object. In our case, the top-level spec defines the characteristics of the deployment, while the spec inside the template section defines the properties of the pods that will be managed by this deployment.
  5. Selector: The selector section is used for specifying which pods are going to be managed by the deployment. The matchLabels section in our deployment has a label—app: simple-express-app. This label must match the label specified in the metadata section of the pod template. In the case of our deployment, the label under matchLabels must match the label under the template's metadata.
  6. Strategy: This determines how the old pods in a deployment will be replaced with new ones. There are two strategies for deployments in Kubernetes: Recreate and RollingUpdate. The Recreate strategy kills all the existing pods and then creates new ones to replace them. The problem with this strategy is that downtime will occur. The RollingUpdate strategy first creates new pods and then deletes the old ones after the new ones are in the ready state.
  7. maxSurge: It specifies the maximum number of new pods that will be created besides the old pods during a deployment update. We set the value of the maxSurge to 1. During the update, 1 new pod will be created in addition to the already existing pod(s), and 1 old pod will only be deleted after the new pod has reached the ready state. You can also specify the value of the maxSurge in percentage.
  8. maxUnavailable: This specifies the number of pods that can be deleted/unavailable when your deployment is being updated. We specified 0 as the value. No pod will be deleted until the new pod that is created has reached the ready state. The value of the maxUnavailable can also be specified in percentages.
  9. Replicas: This states the number of pods that the deployment will create and manage. It is the desired state of the deployment. So, at any given time, the deployment will make sure that there are 3 pods running.
  10. Template: The template section defines the configuration for the pods that will be created by the deployment. As you can see, it has the metadata for the pod and the spec. 
  11. Containers: It is an array which means that you can have many container specifications. Containers help us specify the following properties of the pod:
  • Image: the application image that the pod will run
  • Name: the name of the container 
  • Ports: this is an array, and we can specify as many ports as are needed. The ports have a containerPort and an optional name. In our deployment we specify a containerPort. The containerPort is the port that is exposed for our running application.

Test the deployment

The code for this article is in the repository you cloned before.
To create the deployment, we will use the kubectl apply command with a link to the deployment file in the repository, as shown below.
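A minimal sketch of the commands, assuming the deployment manifest lives at kubernetes/deployment.yaml in the repository (the same directory that the service manifest URL later in this article points to):
    # Create the deployment from the manifest in the repository
    kubectl apply -f https://raw.githubusercontent.com/davidshare/simple-express-app/master/kubernetes/deployment.yaml

    # Confirm that the deployment and its three pods are running
    kubectl get deployment simple-express-app
    kubectl get pods -l app=simple-express-app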

Exposing your deployment

In Kubernetes, services are used to enable network access to pods. Each pod managed by the deployment has its own unique IP address and name. This IP address can be used to communicate with the pod within the cluster but cannot be accessed externally.
Each time a deployment is updated or a pod is deleted, new ones are created to replace the old. If you are connecting directly to the pods, it means that you have to find a way to keep track of the changing IP addresses. This is why we need services. Services provide a single point of access to a group of pods. The service keeps track of these pods using labels. Kubernetes assigns cluster IPs to these services, so all the pods mapped to a service can be accessed using that IP. Below is the YAML definition for a NodePort service that exposes our deployment.
apiVersion: v1
kind: Service
metadata: 
   name: simple-express-app
spec:
   type: NodePort
   ports:
      - port: 80
        targetPort: 3000
        protocol: TCP
   selector:
      app: simple-express-app
There are different kinds of services. The most popular are ClusterIP, which is the default, NodePort, and LoadBalancer.
  • A ClusterIP service can only be accessed within the cluster and not externally. 
  • A NodePort service uses the IP address of each node in the cluster, but adds a specific static port to those IPs through which the service can be accessed. To access the service, you use the following format: <node_ip>:<nodeport>.
  • A LoadBalancer service uses a cloud service provider's load balancer to expose the service externally.
Create the service by applying the service manifest from the repository:
    kubectl apply -f https://raw.githubusercontent.com/davidshare/simple-express-app/master/kubernetes/service.yaml
The command responds with service/simple-express-app created. You can then get and describe the service you have created:
    kubectl get service simple-express-app
    kubectl describe service simple-express-app
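Since the service is of type NodePort, you can reach the application from outside the cluster through any node's IP address and the port that Kubernetes assigned to the service (a value in the 30000-32767 range by default). A hedged sketch (replace the placeholders with the values from your own cluster):
    # The PORT(S) column shows the assigned node port, e.g. 80:3xxxx/TCP
    kubectl get service simple-express-app

    # The INTERNAL-IP column shows each node's IP address
    kubectl get nodes -o wide

    # Test the application through the node IP and node port
    curl -X GET "<node_ip>:<node_port>"
    curl -X POST "<node_ip>:<node_port>/welcome/DAVID"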

Using an ingress

An ingress is used to allow access to the Kubernetes service from outside the cluster. It uses rules that determine where traffic will be directed to. An ingress needs an ingress controller for it to function properly.
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
   name: simple-express-app
   annotations:
      kubernetes.io/ingress.class: 'nginx'
spec:
   rules:
     - host: simple-express-app.com
       http:
          paths:
            - path: /
              backend:
                 serviceName: simple-express-app
                 servicePort: 80
The code above is a YAML description of an ingress with a rule that directs network traffic coming to the root path of a URL to the simple-express-app service. We can specify as many rules and paths as we want. In the example below, we add a second path and direct traffic coming to the /app2 path to the simple-express-app2 service.
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
   name: simple-express-app
   annotations:
      kubernetes.io/ingress.class: 'nginx'
spec:
   rules:
     - host: simple-express-app.com
       http:
          paths: 
            - path: /
              backend:
                 serviceName: simple-express-app
                 servicePort: 80
            - path: /app2
              backend:
                 serviceName: simple-express-app2
                 servicePort: 80
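Once an ingress controller is running in the cluster, you can apply the ingress definition and test the rule. A minimal sketch, assuming you saved the first ingress definition as ingress.yaml and that the controller is reachable on a node or load balancer IP (the exact address depends on your environment):
    # Create the ingress and confirm it exists
    kubectl apply -f ingress.yaml
    kubectl get ingress simple-express-app

    # The rule only matches requests for simple-express-app.com, so set the Host header explicitly
    curl -H "Host: simple-express-app.com" "<ingress_controller_ip>/"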

Improvements

As a way of practising what you have learned here, you can:
  • Set up a domain name for your application.
  • Use a Kubernetes service provider. This will also help you set up the LoadBalancer.
  • Use the most recent version of the Ingress API from the Kubernetes documentation (networking.k8s.io/v1); a sketch of what that looks like follows below.
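For reference, here is a hedged sketch of the first ingress rewritten for the networking.k8s.io/v1 API: each path now needs a pathType, serviceName and servicePort become a nested service block, and the kubernetes.io/ingress.class annotation is replaced by the ingressClassName field.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
   name: simple-express-app
spec:
   ingressClassName: nginx
   rules:
     - host: simple-express-app.com
       http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                 service:
                    name: simple-express-app
                    port:
                       number: 80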

Conclusion

In this article, we learned how to build a simple express application, containerize it and deploy it to a Kubernetes cluster. I hope this was helpful and I look forward to hearing exciting stories of how you applied what you learned here.
Cover Photo by Christopher Gower on Unsplash

Written by danielcrouch | Occasional Thoughts on Coding, Security, and Management
Published by HackerNoon on 2021/04/15