
Using Codeship to Deploy a Dotnet app on Oracle Kubernetes

by Naseem MohammedMay 5th, 2020

Too Long; Didn't Read

CodeShip Pro spawns single-tenant AWS instances for you whenever you push a build, so you are not sharing your instance's CPU, memory, etc. with anyone else. On every push, Codeship builds a Docker image from the checked-in Dockerfile, tags it with the GitHub commit ID, pushes it to Azure Container Registry (ACR), and then runs kubectl against Oracle Kubernetes Engine to apply a Kubernetes Deployment and Service that use that image.


So, I was looking at an alternative to Azure DevOps and Jenkins to build a CI/CD pipeline for a new project. A friend had asked me for a recommendation; he wanted to host microservices on Oracle Kubernetes Engine.

  1. I hesitated to recommend Azure DevOps, mainly because of the Azure branding. (Why would they name it like that? It just gives the impression that you are tied to Azure Cloud, when in reality you can use it anywhere.)
  2. As for Jenkins: I don't know, I just wanted a break. I thought there should be a different way.

So I spent some time on Google and finally decided to invest 2-3 days in Codeship. In the end, I think it was totally worth it.

There are two versions of Codeship, Basic and Pro. Pro is a bit more expensive. According to their FAQ, below is the reason.

Why is CodeShip Pro more expensive than Codeship Basic?

CodeShip Pro spawns single-tenant AWS instances for you whenever you push a build. You are not sharing your instance’s CPU, Memory, etc. with anyone else.

Anyway, this is what I set out to achieve.

Flow

  1. A developer checks in code to a GitHub repo.
  2. This triggers a build in Codeship. Codeship uses the Dockerfile checked in and tries to build a Docker image.
  3. The resulting image is tagged by Codeship with the GitHub commit ID and pushed to Azure Container Registry (ACR). (Yes, I will figure out Oracle Registry later.)
  4. Codeship then issues a kubectl command to the Oracle Kubernetes Engine (OKE) master API server. The command applies a Kubernetes Deployment and a Service; the Docker image that we built and pushed earlier is the image used in this Deployment.

    Simple, straightforward, nothing fancy.


Prerequisites for achieving this

  • Github Account
  • Codeship Pro Account (there is a free tier, which is what I am using).
  • Azure Account and Azure Container Registry (You can replace this with Oracle Container Registry).
  • Oracle Cloud account and a running instance of Oracle Kubernetes Engine.

Codeship Structure and my Setup

In Codeship everything revolves around two configuration files: the codeship-services and codeship-steps files.

* The correlation I built in my head between Codeship Services & Steps.

Services

Codeship Services provide the functionality to accomplish the CI/CD pipeline's steps (or tasks). They provide this functionality through Docker images; a Service, in the end, is just a Docker container.

1) Services used to build and push the microservice

Two Services are used to accomplish the build-and-push-to-registry functionality; that is, two corresponding Docker images are required. These two Services are mapped to a Codeship step (or task) with an attribute called type (= push). Below is the Codeship documentation for that step, which provides the build and push functionality.

https://documentation.codeship.com/pro/builds-and-configuration/steps/

2) Utility Service (prebuilt by Codeship)

The Services (Docker images) may be prebuilt, by Codeship or by us, and pulled from a registry like Docker Hub at the time of CI/CD execution.

codeship/azure-dockercfg-generator is an example of a Codeship prebuilt Docker image.

You can see more of Codeship's prebuilt Docker images here. It is interesting to see that the AWS image has been downloaded 500k+ times while the Azure one has only been downloaded 10k+ times.

3) Utility Service (Custom built at pipeline execution time)

A Service can also be custom built at runtime: we provide the Codeship Service a Dockerfile. This is what I did with the appkubectl Service, a custom-built image with the Oracle CLI (OCI) and kubectl packaged within it.

This is what my codeship-services.yml file looks like:

app:
  build:
    image: dockerstore.azurecr.io/aksistioinsurance
    dockerfile_path: Dockerfileweb
azure_dockercfg:
  image: codeship/azure-dockercfg-generator
  add_docker: true
  encrypted_env_file: az_config_encrypted
appkubectl:
  build:
    image: dockerstore.azurecr.io/oci_kubectl:0.0.4
    dockerfile_path: Dockerfile
    encrypted_args_file: config_encrypted
    #encrypted_env_file: config_encrypted
    args:
      CommitID: "{{.CommitID }}"

Steps (Or Tasks)

The codeship-steps.yml file is where you specify the steps; think of these as the tasks. Each step uses one of the Services. Usually it is a 1:many relationship between a Service and the tasks, but some steps, like building and pushing Docker images to a registry, have two Services mapped to them.

Example steps (or tasks) can look like the list below; we define and build the functionality as per our requirements.

  • Build Microservice & Push to Container Registry
  • Integration Test
  • Deploy to Oracle Kubernetes Engine

To accomplish these tasks, we use our custom Services (Docker images) or Codeship-provided Services. The steps file below shows the relationships I used in my build pipeline.

# codeship-steps.yml
- name: Build and push to Azure Docker Registry
  service: app
  type: push
  tag: master
  image_name: dockerstore.azurecr.io/aksistioinsurance
  image_tag: "{{ .CommitID }}"
  registry: dockerstore.azurecr.io
  dockercfg_service: azure_dockercfg
- name: Check response to kubectl config
  command: kubectl get nodes
  service: appkubectl
- name: Check OCI Version
  command: oci -v
  service: appkubectl
- name: Deploy to Oracle Kubernetes Engine
  command: kubectl apply -f /config/.kube/insurance.yaml
  service: appkubectl
- name: Print out the environment variables
  service: appkubectl
  command: printenv

The above is my codeship-steps.yml file.

Desktop utility

There is a nice command-line utility that Codeship ships called Jet. (Ship & Jet, hmm...) You can use it for the following (a sample local workflow is sketched after this list):

  • Encrypting/decrypting your configuration files/variables. These may be DB passwords or Container Registry credentials.
  • Local testing of your Codeship steps (tasks) before pushing to GitHub.
  • Validation. (Didn't use this much.)
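
For example, a typical local workflow with Jet might look roughly like this. It is only a sketch: the plaintext file names are assumptions (only the encrypted names, az_config_encrypted and config_encrypted, appear in my services file), and the AES key used for encryption comes from your Codeship project settings.

# Encrypt credential files before committing them to the repo
jet encrypt az_config az_config_encrypted
jet encrypt config config_encrypted

# Run the whole pipeline locally before pushing to GitHub
jet steps

# Validate the codeship-services.yml and codeship-steps.yml files
jet validate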

Ok, let's look at the pipeline in action.

Flow

1) When I check in the code to GitHub, I expect a commit ID to be provided by GitHub.

As you can see from the screenshot above, the commit ID starts with the characters aa02a67.
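
If you want to confirm the same short commit ID locally (just an aside, not part of the pipeline), git can print it:

# Print the abbreviated commit ID of the latest commit
git rev-parse --short HEAD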

2) Now we expect this to have triggered a build in Codeship.

3) Codeship will build and push a new Docker image to Azure Container Registry. This image will be tagged with the new GitHub commit ID. The screenshot below confirms that the tagging has happened.
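
If you prefer to verify this from the command line instead of the registry screenshot, the Azure CLI can list the tags in the repository. A sketch, assuming the Azure CLI is installed and you are logged in; the registry name (dockerstore) and repository (aksistioinsurance) come from my setup:

# List the tags in the repository; the newest one should match the commit ID
az acr repository show-tags --name dockerstore --repository aksistioinsurance --output table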

4) Next we expect Codeship to issue a kubectl command against the yaml file below, which contains a Kubernetes Deployment and a related Service object.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: insurance-api
  namespace: nm
spec:
  replicas: 1
  selector:
    matchLabels:
      app:  insurance-api
      version: old
  template:
    metadata:
      labels:
        app:  insurance-api
        version: old
    spec:
      containers:
      - name: insurance-api
        image: dockerstore.azurecr.io/aksistioinsurance:##tag##
        resources:
          requests:
            memory: "32Mi"
            cpu: "25m"
          limits:
            memory: "64Mi"
            cpu: "100m"
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /health
            port: 80
            httpHeaders:
            - name: X-Custom-Header
              value: Awesome
          initialDelaySeconds: 90
          periodSeconds: 10
        env:              
        - name: "DeviceName" 
          value: "aStrangeDevice"        
      imagePullSecrets:
      - name: topsecretregistryconnection
---
kind: Service
apiVersion: v1
metadata:
  name: insurance-api-service
  namespace: nm
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80      
  selector:
    app:  insurance-api 

There are two things I want to bring to your attention about the above yaml file.

1) The imagePullSecrets entry. This actually holds the Docker username and password for the Azure Container Registry. The secret was set up initially, at the time the Oracle Kubernetes Engine cluster was created, using the command below.

kubectl create secret docker-registry topsecretregistryconnection \
  --docker-server dockerstore.azurecr.io \
  --docker-email "###" --docker-username "###" \
  --docker-password "##$#####"

For details, check the Kubernetes documentation.
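
One thing to watch: the Deployment above references the secret from the nm namespace, so the secret has to exist in that namespace too. Assuming it was created there, a quick check looks like this:

# Verify the image pull secret exists in the namespace the Deployment runs in
kubectl -n nm get secret topsecretregistryconnection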

2) The image tag has a placeholder, ##tag##. This placeholder is dynamically replaced with the commit ID in the Dockerfile:

cp insurance.yaml $HOME/.kube/insurance.yaml && \
sed -i 's/##tag##/'$CommitID'/1' $HOME/.kube/insurance.yaml && \
cat $HOME/.kube/insurance.yaml

Once Codeship runs the kubectl apply command against the yaml file above, we would expect this image to be deployed to the Kubernetes cluster within OKE. Let's check and find out.

As you can see from the screenshot above, the image with the right tag has been picked up and deployed to OKE.
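
If you want to double-check from the command line rather than a screenshot, a couple of kubectl commands confirm both the rollout and the exact image tag the Deployment picked up (names and namespace are taken from the yaml above):

# Wait for the Deployment to finish rolling out
kubectl -n nm rollout status deployment/insurance-api

# Print the image (including the commit-ID tag) the Deployment is running
kubectl -n nm get deployment insurance-api -o jsonpath='{.spec.template.spec.containers[0].image}'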

One more thing: the Codeship dashboard is minimal but has enough to help you with debugging.

And one last thing: be careful with Codeship build arguments and environment variables.

I wish Codeship were a little more consistent in their naming. For instance, in places they have CI_Commit_ID, and for build arguments they have CommitID (no underscores). The error messages from jet steps will help a bit, but still, either stick with underscores or not, consistently, across environment variables and build arguments.