Rethinking Programming: From Code to Cloud

by Lakmal Warusawithana, March 17th, 2020

Earlier, developers simply wrote their program, built it, and ran it. Today, developers also need to think about the various ways of running it: as a binary on a machine (most likely a virtual one), packaged into a container, as part of a bigger container orchestration deployment (Kubernetes), or deployed into a serverless environment or a service mesh. However, these deployment options are not part of the programming experience for a developer. The developer has to write code in a certain way to work well in a given execution environment, and keeping this concern outside the programming problem isn't good.

Ballerina is an open source programming language that specializes in moving from code to cloud while providing a unique developer experience. Its compiler can be extended to read annotations defined in the source code and generate artifacts to deploy your code into different clouds. These artifacts can be Dockerfiles, Docker images, Kubernetes YAML files or serverless functions.

From Code to Docker

Agility is a key benefit that we expect from microservices-based application development, and Docker plays a major role here. Docker helps to package applications and their dependencies into a binary image that can run in various locations, whether on-premises, in a public cloud, or in a private cloud. To create a Docker image, developers have to write a Dockerfile: choose a suitable base image, bundle all dependencies, copy the application binary, and set the execution command with the proper permissions. To create optimized images, developers also have to follow a set of best practices; otherwise, the resulting image will be large, less secure, and have many other shortcomings.

The Ballerina compiler is capable of creating optimized Docker images out of the application source code. The following code illustrates how to bundle, package and run a Ballerina hello service as a Docker container.

import ballerina/http;
import ballerina/log;
import ballerina/docker;

@docker:Expose {}
listener http:Listener helloWorldEP = new(9090);

@docker:Config {
    name: "helloworld"
}
service hello on helloWorldEP {

    resource function sayHello(http:Caller caller, http:Request request) {
        var result = caller->respond("Hello World!");
        if (result is error) {
            log:printError("Error in responding ", err = result);
        }
    }
}

Adding the @docker:Config {} annotation to a service generates a Dockerfile and a Docker image, and adding the @docker:Expose {} annotation to the listener object exposes the endpoint port by allowing incoming traffic into the container.

Let’s build the source file.

$ ballerina build hello.bal
Compiling source
    hello.bal

Generating executables
    hello.jar

Generating docker artifacts...
    @docker          - complete 2/2

    Run the following command to start a Docker container:
    docker run -d -p 9090:9090 helloworld:latest

Created Docker image:

$ docker images

REPOSITORY           TAG             IMAGE ID            CREATED             SIZE
helloworld         latest           0f68d7fea5e8        1 minutes ago      133MB

Generated Dockerfile:

FROM ballerina/jre8:v1

LABEL maintainer="[email protected]"

RUN addgroup troupe \
    && adduser -S -s /bin/bash -g 'ballerina' -G troupe -D ballerina \
    && apk add --update --no-cache bash \
    && chown -R ballerina:troupe /usr/bin/java \
    && rm -rf /var/cache/apk/*

WORKDIR /home/ballerina

COPY hello.jar /home/ballerina

EXPOSE  9090
USER ballerina

CMD java -jar hello.jar

The created Docker image follows image-building best practices, and the developer can simply run the Docker container by using the docker run command.

$ docker run -d -p 9090:9090 helloworld:latest
aa63c1e101317630c9e86b9ae0b424f406fde81073e859c66d7173b965a2039a

$ curl http://localhost:9090/hello/sayHello
Hello World!

Ballerina has comprehensive support for Docker functionality, with working samples available for many different use cases.

From Code to Kubernetes

Docker helps to package the application and to perform some developer testing. But to run an application with multiple microservices in production, I would recommend using a platform like Kubernetes. Kubernetes is an open source platform for automating the deployment, scaling, and management of containerized applications. It defines a set of unique building blocks that collectively provide mechanisms to deploy, maintain, and scale applications. A Pod is a logical group of containers that are guaranteed to run co-located on the same host machine. A Kubernetes Service provides discovery, routing, and load balancing capabilities to the set of Pods it fronts. A Kubernetes Deployment manages a set of Pods together with a defined replica count, health checks, and rolling update mechanisms. All of these Kubernetes objects need to be defined as YAML files and deployed into the Kubernetes cluster.

Even though developers want to run their application in a Kubernetes platform, in many cases, creating these YAML files is out of a developer’s comfort zone. The Ballerina compiler is capable of creating these YAML files while compiling the source code. Let's modify the above sample to generate Kubernetes artifacts.

import ballerina/http;
import ballerina/log;
import ballerina/kubernetes;

@kubernetes:Service {
    serviceType: "NodePort"
}
listener http:Listener helloWorldEP = new(9090);

@kubernetes:Deployment {
    name: "helloworld"
}
service hello on helloWorldEP {

    resource function sayHello(http:Caller caller, http:Request request) {
        var result = caller->respond("Hello World!");
        if (result is error) {
            log:printError("Error in responding ", err = result);
        }
    }
}

Adding the @kubernetes:Deployment{} annotation to the Ballerina service will generate the Kubernetes Deployment YAML that is required to deploy our hello application into Kubernetes. Adding the @kubernetes:Service{} annotation will generate the Kubernetes Service YAML. In this scenario, we have set serviceType as `NodePort` to access the hello service via the nodeIP:Port.

$ ballerina build hello.bal
Compiling source
    hello.bal

Generating executables
    hello.jar

Generating artifacts...

    @kubernetes:Service           - complete 1/1
    @kubernetes:Deployment        - complete 1/1
    @kubernetes:Docker            - complete 2/2
    @kubernetes:Helm              - complete 1/1

    Run the following command to deploy the Kubernetes artifacts:
    kubectl apply -f hello/kubernetes

    Run the following command to install the application using Helm:
    helm install --name helloworld hello/kubernetes/helloworld

The Ballerina compiler generates the Dockerfile, Docker image, hello.yaml file (with Kubernetes Deployment and Service), and helm chart YAML in addition to the hello.jar binary.

$ tree
.
├── docker
│   └── Dockerfile
├── hello.bal
├── hello.jar
└── kubernetes
    ├── hello.yaml
    └── helloworld
        ├── Chart.yaml
        └── templates
            └── hello.yaml

4 directories, 6 files

Generated hello.yaml file:

---
apiVersion: "v1"
kind: "Service"
metadata:
  annotations: {}
  labels:
    app: "hello"
  name: "helloworldep-svc"
spec:
  ports:
  - name: "http-helloworldep-svc"
    port: 9090
    protocol: "TCP"
    targetPort: 9090
  selector:
    app: "hello"
  type: "NodePort"
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  annotations: {}
  labels:
    app: "hello"
  name: "helloworld"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "hello"
  template:
    metadata:
      annotations: {}
      labels:
        app: "hello"
    spec:
      containers:
      - image: "hello:latest"
        imagePullPolicy: "IfNotPresent"
        name: "helloworld"
        ports:
        - containerPort: 9090
          protocol: "TCP"
      nodeSelector: {}

Developers can use the generated Kubernetes artifacts to deploy applications in any Kubernetes platform.

$ kubectl apply -f hello/kubernetes
service/helloworldep-svc created
deployment.apps/helloworld created

$ kubectl get all
NAME                              READY   STATUS    RESTARTS       AGE
pod/helloworld-696ff58c79-txlbk   1/1     Running       0          41s

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/helloworldep-svc   NodePort    10.110.217.216   <none>        9090:31833/TCP   41s
service/kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP          44d

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/helloworld   1/1     1            1           41s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/helloworld-696ff58c79   1             1         1   41s

Let’s access the hello service via the node port:

$ curl http://localhost:31833/hello/sayHello
Hello World!

Ballerina provides the comprehensive Kubernetes support that is required to run an application on a Kubernetes platform, with working samples available for the different Kubernetes scenarios required in production.

In addition to generic Kubernetes support, if you wish to deploy a Ballerina application into OpenShift, read this sample, which illustrates OpenShift Build Configs and Routes.

From Code to Istio

Microservice architecture offers developers many advantages that make development agile and lead to faster innovation, but it comes with its own complexity. Docker and Kubernetes solve some of these complexities. A service mesh is a modern software architecture that runs on top of platforms like Kubernetes and tries to reduce some of the remaining complexity. Istio is an open source service mesh implementation. Service discovery, load balancing, failure recovery, metrics, and monitoring are its main focus areas. Istio also supports complex operational requirements like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.

Istio introduces a few unique concepts in addition to the Kubernetes objects, and VirtualService and Gateway play a major role among them. A VirtualService defines a set of traffic routing rules used to achieve the complex operational requirements mentioned above. An Istio Gateway is a load balancer operating at the edge of the mesh, receiving incoming or outgoing HTTP/TCP connections.
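
For reference, a minimal hand-written Gateway and VirtualService pair for the hello service looks roughly like the following. This is only a sketch based on Istio's networking API: the object names, host wildcard, and port values are illustrative and not the exact artifacts the Ballerina compiler emits.

---
apiVersion: "networking.istio.io/v1alpha3"
kind: "Gateway"
metadata:
  name: "helloworldep-istio-gw"
spec:
  selector:
    istio: "ingressgateway"   # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: "http"
      protocol: "HTTP"
    hosts:
    - "*"
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "VirtualService"
metadata:
  name: "helloworldep-istio-vs"
spec:
  hosts:
  - "*"
  gateways:
  - "helloworldep-istio-gw"
  http:
  - route:
    - destination:
        host: "helloworldep-svc"   # the Kubernetes Service generated earlier
        port:
          number: 9090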

Ballerina is capable of generating artifacts to deploy VirtualService and Gateway by adding two annotations on top of the Ballerina listener object. The following code snippet shows two annotations that define the Istio support: 

@istio:Gateway {}
@istio:VirtualService {}
@kubernetes:Service {
   serviceType: "NodePort"
}
listener http:Listener helloWorldEP = new(9090);

Building the source code will generate the Istio VirtualService and Gateway artifacts. 

$ ballerina build hello.bal
Compiling source
    hello.bal

Generating executables
    hello.jar

Generating artifacts...

    @kubernetes:Service           - complete 1/1
    @kubernetes:Deployment        - complete 1/1
    @kubernetes:Docker            - complete 2/2
    @kubernetes:Helm              - complete 1/1
    @istio:Gateway                - complete 1/1
    @istio:VirtualService         - complete 1/1

    Run the following command to deploy the Kubernetes artifacts:
    kubectl apply -f kubernetes

    Run the following command to install the application using Helm:
    helm install --name helloworld kubernetes/helloworld

You can find the full working source code in the accompanying git repository.

From Code to Knative

Knative is a serverless platform created originally by Google with contributions from over 50 different companies. It uses Kubernetes platform capabilities to build a serverless platform that enables developers to focus on writing code without the need to worry about the “boring but difficult” parts of building, deploying, and managing their application. One of the key functionalities of Knative is scaling automatically from zero replicas and sizing workloads based on demand.

Knative also has its own object model and artifacts. A Knative Service is defined by a Route and a Configuration, contained in a YAML file with the same name as the Service. Every time the Configuration is updated, a new Revision is created.

Ballerina is capable of generating these necessary artifacts while compiling the source code. The only requirement is to add a simple annotation in the code.

import ballerina/http;
import ballerina/log;
import ballerina/knative;

@knative:Service {
    name: "helloworld"
}
listener http:Listener helloWorldEP = new(9090);

service hello on helloWorldEP {

    resource function sayHello(http:Caller caller, http:Request request) {
        var result = caller->respond("Hello World!");
        if (result is error) {
            log:printError("Error in responding ", err = result);
        }
    }
}

Adding this annotation generates the following artifacts, which are required to deploy our application in serverless mode on a Knative cluster.

$ ballerina build hello.bal
Compiling source
    hello.bal

Generating executables
    hello.jar

Generating Knative artifacts...

    @knative:Service              - complete 1/1
    @knative:Docker               - complete 2/2

    Run the following command to deploy the Knative artifacts:
    kubectl apply -f kubernetes

    Run the following command to install the application using Helm:
    helm install --name helloworld kubernetes/helloworld

Generated hello.yaml for the Knative deployment:

---
apiVersion: "serving.knative.dev/v1alpha1"
kind: "Service"
metadata:
  annotations: {}
  labels: {}
  name: "helloworld"
spec:
  template:
    spec:
      containerConcurrency: 100
      containers:
      - image: "hello:latest"
        name: "helloworld"
        ports:
        - containerPort: 9090
          protocol: "TCP"

From Code to AWS Lambda

AWS Lambda is an event-driven, serverless computing platform. Ballerina functions can be deployed in AWS Lambda by annotating them with @awslambda:Function; the annotated function should have the signature function (awslambda:Context, json) returns json|error.
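
For illustration, a minimal Lambda-deployable function of that shape could look like the snippet below. This is only a sketch: the echo behaviour is illustrative, and the awslambda module import (shown here as ballerinax/awslambda) may differ between Ballerina versions, so check the official sample for the exact form.

import ballerinax/awslambda;

// A minimal echo function. The @awslambda:Function annotation marks it for
// packaging as an AWS Lambda function; the body simply returns the incoming
// JSON payload.
@awslambda:Function
public function echo(awslambda:Context ctx, json input) returns json|error {
    return input;
}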

You can find a comprehensive sample in Ballerina by Example.

CI/CD with GitHub Actions

In a microservice architecture, continuous integration and continuous delivery (CI/CD) are critical to creating an agile environment for incorporating incremental changes into your system. Different technologies provide this CI/CD functionality, and GitHub recently introduced GitHub Actions, which is now available for general usage. GitHub Actions provides a convenient mechanism for implementing CI/CD pipelines with its workflows concept, right from our GitHub repositories.

With the Ballerina GitHub Action, which is available in the GitHub Marketplace, we can create a Ballerina development environment with built-in CI/CD; a separate article provides a comprehensive guide, and a rough sketch of such a workflow is shown below.
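
The workflow below is only an illustration: it assumes the Ballerina GitHub Action is published as ballerina-platform/ballerina-action and accepts an args input, so verify the exact action name, version, and inputs against the Marketplace listing before using it.

name: "Ballerina CI"

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Assumed Marketplace action reference and input; check the listing.
      - name: Ballerina build
        uses: ballerina-platform/ballerina-action@master
        with:
          args: build hello.bal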

Support for SaaS Connectors

We have discussed how Ballerina supports different technologies to automate cloud deployments. To get the full strength of the cloud, applications should also be able to integrate with the Software-as-a-Service (SaaS) offerings provided by different cloud vendors.

Ballerina provides a simple workflow to connect and integrate with these SaaS services. For example, the following code snippet shows how to initialize and send out a tweet with the Twitter SaaS service:

import ballerina/config;
import ballerina/log;
import wso2/twitter;
// Twitter package defines this type of endpoint
// that incorporates the twitter API.
// We need to initialize it with OAuth data from apps.twitter.com.
// Instead of providing this confidential data in the code
// we read it from a toml file.
twitter:Client twitterClient = new ({
  clientId: config:getAsString("clientId"),
  clientSecret: config:getAsString("clientSecret"),
  accessToken: config:getAsString("accessToken"),
  accessTokenSecret: config:getAsString("accessTokenSecret"),
  clientConfig: {}
});
 
public function main() {
 
  twitter:Status|error status = twitterClient->tweet("Hello World!");
  if (status is error) {
      log:printError("Tweet Failed", status);
  } else {
      log:printInfo("Tweeted: " + <@untainted>status.id.toString());
  }
}
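
As the comments above note, the confidential values are read from a configuration file rather than hard-coded. A placeholder ballerina.conf along these lines (an illustrative sketch; the real credentials come from apps.twitter.com) would supply them:

# ballerina.conf -- placeholder values only
clientId="<consumer-key>"
clientSecret="<consumer-secret>"
accessToken="<access-token>"
accessTokenSecret="<access-token-secret>"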

Ballerina has many out-of-the-box SaaS connectors; the Twitter connector shown above is just one of them.

Key Takeaways

  • Earlier, developers simply wrote their program, built it and ran it. But today, developers have various ways to run it.
  • Cloud-native technologies like Docker, Kubernetes, service meshes, and serverless platforms play a major role in modern deployment automation.
  • However, these deployment options are not a part of the programming experience for a developer.
  • Ballerina is an open source programming language that specializes in moving from code to cloud while providing a unique developer experience. 
  • Its compiler can be extended to read annotations defined in the source code and generate artifacts to deploy your code into different clouds. These artifacts can be Dockerfiles, Docker images, Kubernetes YAML files or serverless functions.