Deploying Flogo apps to Kubernetes

Written by retgits | Published 2017/11/14


With Project Flogo you can visually create Ultralight Edge Microservices and run them anywhere. But what if you want to run those incredibly light microservices using one of the most powerful container management platforms, Kubernetes?

Prerequisites

As described on the Kubernetes website:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.

If you haven’t set up your own Kubernetes cluster yet, I can absolutely recommend looking at minikube. The team has done an amazing job of making it super easy to run your own cluster locally with minimal installation effort.
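To follow along, here is a quick sketch of spinning up and verifying a local cluster, assuming you have minikube and kubectl installed:

# Start a local single-node Kubernetes cluster
minikube start

# Verify that kubectl can talk to the cluster
kubectl cluster-info
kubectl get nodes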

As Kubernetes is meant for containerized apps, we’ll have to create a Docker image from our Flogo app and push it to a registry accessible to the Kubernetes cluster. In the examples below I’ll make use of Docker Cloud, but depending on your preference you can pick any container registry.

The Flogo app

As this post is more about running the app on Kubernetes than about how to create it, I’ve simply used the tutorial app from the Flogo documentation. The app has a simple HTTP receiver listening on port 8080 that sends back a default string. If you want to use a different app, that is of course possible as well!
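Before putting the app in a container it’s worth checking that it runs on its own. A quick sketch, assuming the binary is called `flogoapp.dms` (as in the Dockerfile below) and exposes the tutorial’s `/helloworld` endpoint:

# Run the Flogo app locally; it listens on port 8080
./flogoapp.dms &

# Invoke the HTTP receiver
curl http://localhost:8080/helloworld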

Create a Docker image

Flogo describes itself as an _Ultralight Edge Microservices Framework_, so containerizing the apps built with it shouldn’t add too much overhead. Luckily there is a whole bunch of small base images available today, ranging from alpine to debian (with jessie-slim). My three favorites are:

$ docker images
REPOSITORY        TAG           IMAGE ID       CREATED       SIZE
debian            jessie-slim   a870c469749c   10 days ago   79.1MB
alpine            latest        053cde6e8953   11 days ago   3.97MB
bitnami/minideb   latest        c5693017e0d4   3 weeks ago   53.6MB

The app I have, compiled to run on Linux, is about 7.4MB and because I want to keep the overhead as low as possible I’ll use alpine for this one. Combining alpine with my Flogo app should result in an image of about 12MB, which I think is pretty good. To build an image we need a Dockerfile:

# Dockerfile for flogoapp
# VERSION 0.0.1

# The FROM instruction initializes a new build stage and sets the base image for subsequent instructions.
# We're using alpine because of its small size
FROM alpine

# The ADD instruction copies new files, directories or remote file URLs from <src> and adds them to the filesystem of the image at the path <dest>.
# We'll add the flogoapp, built using the Web UI, to the working directory
ADD flogoapp.dms .

# The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime.
# The app we're using listens on port 8080 by default
EXPOSE 8080

# The main purpose of a CMD is to provide defaults for an executing container.
# In our case we simply want to run the app
CMD ./flogoapp.dms

To build an image from this Dockerfile you can simply run:

docker build . -t <your username>/flogoalpine

In my case that ended up with quite a small image, at roughly the size I expected it to be!

REPOSITORY            TAG      IMAGE ID       CREATED             SIZE
retgits/flogoalpine   latest   e7bc672e009e   About an hour ago   11.7MB

As mentioned, I’ll push my images to Docker Cloud so that the Kubernetes cluster can access them. One simple command makes the image available :-)

docker push <your username>/flogoalpine
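If you’re running minikube you can also skip the registry entirely and build the image straight against the cluster’s Docker daemon. A minimal sketch, assuming the same Dockerfile and image name as above:

# Point the local Docker client at minikube's Docker daemon
eval $(minikube docker-env)

# Build the image inside the cluster, so no push is needed
docker build . -t <your username>/flogoalpine

If you go this route, set `imagePullPolicy: IfNotPresent` in the Deployment below, otherwise Kubernetes will still try to pull the image from a registry.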

That takes care of the Docker part, let’s get over to Kubernetes!

Create a “Deployment”

A Deployment in Kubernetes is a controller that provides declarative updates for Pods and ReplicaSets. Essentially, it lets you update your apps declaratively, which means rolling updates with zero downtime!

A sample `deployment.yaml` file could look like the one below. It will create a Deployment on Kubernetes with a single replica (so one instance of our app running), where the container is named `flogoapp` and pulls `<image name>` as the container image to run. Pay special attention to the `containerPort`, as that makes sure the port is accessible outside the container (though still within the cluster).

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: flogoapp-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: flogoapp
    spec:
      containers:
      - name: flogoapp
        image: <image name>
        imagePullPolicy: Always
        ports:
        - containerPort: 8080

To create the Deployment you can now run:

kubectl create -f deployment.yaml

Using the kubectl CLI, or the dashboard, you can see the status of your deployments:

$ kubectl get deployments
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
flogoapp-deployment   1         1         1            1           50m
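To see the declarative, zero-downtime updates mentioned earlier in action, here is a minimal sketch. It assumes you’ve pushed a new tag of the image (called `v2` here, which is not part of the tutorial):

# List the Pods backing the Deployment, using the label from deployment.yaml
kubectl get pods -l app=flogoapp

# Point the Deployment at the new image; Kubernetes replaces the Pods one by one
kubectl set image deployment/flogoapp-deployment flogoapp=<your username>/flogoalpine:v2

# Watch the rollout until it completes
kubectl rollout status deployment/flogoapp-deployment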

Our app is running! Now we need to make sure we can access it from the outside as well…

Create a “Service”

The Kubernetes documentation has an excellent explanation of why you need Services, so I’ll let them tell the story:

Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. ReplicationControllers in particular create and destroy Pods dynamically (e.g. when scaling up or down or when doing rolling updates). While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. This leads to a problem: if some set of Pods (let’s call them backends) provides functionality to other Pods (let’s call them frontends) inside the Kubernetes cluster, how do those frontends find out and keep track of which backends are in that set?

So a Service logically groups Pods together and makes sure that even when a Pod goes away you don’t have to change IP addresses. A Service has a lot of different capabilities and many more configuration options, so let’s create one that is fairly simple.

The `service.yaml` file below simply defines a service called `flogoapp` that maps port 8080 of the app we deployed to node port 30061, which we can access from outside the cluster.

apiVersion: v1
kind: Service
metadata:
  name: flogoapp
  labels:
    app: flogoapp
spec:
  selector:
    app: flogoapp
  ports:
  - port: 8080
    protocol: TCP
    nodePort: 30061
  type: LoadBalancer

To create the service in Kubernetes you can simply run:

kubectl create -f service.yaml

Using the kubectl CLI, or the dashboard, you can see the status of your services just like your deployments:

$ kubectl get services
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
flogoapp     LoadBalancer   10.0.0.110   <pending>     8989:30061/TCP   1h
kubernetes   ClusterIP      10.0.0.1     <none>        443/TCP          1d

And that takes care of exposing the app outside of the cluster as well. So we have one final task!

Access your app!

Accessing the app is quite simple now. First we need the external IP address of the Kubernetes cluster. If you’re running minikube you can get that by running `minikube ip`. With cURL you can now invoke the API of the app and see the internal Flogo ID of the app.

$ curl http://192.168.99.100:30061/helloworld
{"id":"006257ffaf5fb1e9621914dcd0203af8"}
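If you don’t want to look up the IP address and node port by hand, minikube can also print the URL for the service directly (assuming the service name `flogoapp` from above):

# Prints the reachable URL, e.g. http://192.168.99.100:30061
minikube service flogoapp --url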

Conclusion

We’ve taken a simple Flogo app, packaged it into a Docker container and deployed it to Kubernetes. By itself Flogo is incredibly powerful and lightweight. Combining that with the power and flexibility of Kubernetes lets you run ultralight microservices on a very cool and powerful platform. If you want to try out Project Flogo, visit our web page or GitHub project.

