How to Automate Kubernetes Deployments with Postman

by Kevin Swiber
Kubernetes is the most widely used container-based orchestration platform today. Practitioners are often segmented by specialty:

  • Application Developer

  • Administrator

  • Security Specialist

One relevant finding from the in-progress Kubernetes Usage Report 2021 is that Kubernetes deployments do not live in isolation: most organizations run a combination of bare metal, VMs, and Kubernetes.

Due to the persistent need to manage heterogeneous infrastructure, Kubernetes specialists are often tasked with other job responsibilities. How do we wrangle the dissonance in application deployments, administration, security, and orchestration? The reality is that we're saddled with the necessity of integration.

APIs all the way down

Today, the communication and interaction possibilities are endless:

  • B2B partner services

  • Data migration

  • Legacy system reuse

APIs are not just glue code. They enable value chain optimizations through platform ecosystems. APIs are the raw materials we piece together to make this magic happen. This is what's driving so much adoption of quirky mashup delights. It's not just shiny object obsession, though if we're honest with ourselves, our little raccoon paws have a mind of their own. 🦝

Here's what we're going to accomplish:

  1. Get a Kubernetes cluster up and running

  2. Build a server to fetch local Kubernetes resource files

  3. Run a proxy to the Kubernetes API Server

  4. Use Postman to deploy our application

Get ready!

Oof. Feeling a bit yikes?

We're using a lot of buzzwords here. Some familiarity with Kubernetes, CI/CD pipelines, and GitHub will really help in following along.

Pulling the strings of Kubernetes

Okay, okay, okay... Let's get to it. Here's an overview of what we're looking to accomplish:

Overview

Kubernetes is an API-driven container orchestration platform. Most people are familiar with using the kubectl CLI to send commands to their cluster. Fun fact: kubectl is just using APIs under the hood. Don't believe me? Tack -v=8 to the tail end of any kubectl command, and let the HTTP wash over us. 🌊

Now that we know we can make API calls to pull the strings of Kubernetes, let's see what we can do in Postman. Here's a public workspace to follow along!

Kubernetes Deployment collection in Postman

Let's step through what's going on here.

0. Prerequisites

First and foremost, we'll need a Kubernetes cluster. Cloud providers offer managed options that are relatively quick and easy to get up and running. For this walkthrough, we'll be using Amazon Elastic Kubernetes Service (Amazon EKS). One of the easiest ways to get an Amazon EKS cluster up and running is to use the Amazon EKS Quickstart CloudFormation template.

Once our cluster is up and running, we'll need to create an SSH tunnel to our bastion host. Using default security options with the Amazon EKS Quickstart template, only the bastion host has access to kube-apiserver. This prevents outside access to our precious Pods. However, we'd like to use kubectl on our own workstations so we can test our integrations. Scott Lowe has an easy-to-follow blog post on how to do this: Using kubectl via an SSH Tunnel.

Before we get started with local testing, there are a couple of servers we need to run. If the Postman collection is going to deploy Kubernetes resources, it needs a way to access those resources. Furthermore, we need a way of accessing the Kubernetes API. These will require setup for both local testing and testing in our CI/CD pipeline.

Proxying kube-apiserver

In Kubernetes, kube-apiserver validates and configures data for the cluster. It's exposed via a REST API and is the front door by which we'll enter. The kubectl CLI has a built-in way to run a local proxy, and we'll utilize the default URL http://localhost:8001 for our local testing.

When we run our Postman collection in our CI/CD workflow, we'll be accessing the API from a Pod where we'll use a different URL, https://kubernetes.default.svc. Don't worry. These URLs can be set as variables in Postman and changed depending on our execution environment.
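
The switch between these two base URLs can be sketched as a tiny helper (purely illustrative; in the collection itself it's just a variable override):

```javascript
// Illustrative: pick the Kubernetes API base URL per execution environment.
function kubeBaseUrl(runningInCluster) {
  return runningInCluster
    ? "https://kubernetes.default.svc" // CI/CD: running from a Pod in the cluster
    : "http://localhost:8001";         // local: via `kubectl proxy`
}

console.log(kubeBaseUrl(false)); // http://localhost:8001
```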

Local kube-resource-server

We're going to run a small server that conveniently serves our Kubernetes resources over HTTP! It runs in our CI/CD pipeline, and we can also use it locally when testing our collection run in Postman. Here's a quick implementation in Node.js. After initializing a new Node.js application, be sure to install a couple of dependencies.

mkdir -p ci/kube-resource-server && cd ci/kube-resource-server
npm init
npm install express js-yaml

After we have our new Node.js application and our dependencies, let's write a basic HTTP server that will aggregate Kubernetes resource YAML files and return them as JSON. Our Postman collection will use these JSON representations to deploy to Kubernetes.

// ci/kube-resource-server/server.js

const fs = require("fs/promises");
const path = require("path");
const express = require("express");
const yaml = require("js-yaml");

// NOTE: Change this directory to match where our
// Kubernetes resources live. This assumes they're
// in the root of our project under the "kubernetes"
// directory.
const resourceDirectory = path.join(__dirname, "..", "kubernetes");

const app = express();

app.get("/resources", async (req, res) => {
  try {
    // fs/promises access() rejects when the path is inaccessible,
    // so the check must be awaited inside a try/catch.
    await fs.access(resourceDirectory);
  } catch {
    console.error(
      `The resource directory cannot be accessed: ${resourceDirectory}. Returning an empty array ([]).`
    );
    return res.json([]);
  }

  try {
    const files = await fs.readdir(resourceDirectory);
    const resources = files
      .filter((file) => /\.ya?ml$/i.test(file)) // only pick up YAML files
      .map(async (file) => {
        const f = await fs.readFile(path.join(resourceDirectory, file), "utf8");
        const converted = yaml.load(f); // parse YAML into a plain object
        return converted;
      });

    res.json(await Promise.all(resources));
  } catch (err) {
    console.error(err);
    res.json([]);
  }
});

const port = process.env.PORT || 3001;
app.listen(port, () => {
  console.log(`Listening on http://localhost:${port}`);
});

Now let's start this server before we begin testing.

# in our ci/kube-resource-server directory
node ./server

Remember the URL for the kube-resource-server (default: http://localhost:3001). We'll need it later. This one stays the same whether running locally or in our CI/CD workflow.

While we're at it, let's run that Kubernetes proxy, too.

kubectl proxy

Next, we'll walk through each part of the Postman collection. Then we'll work on actually running it!

1. Initialize

This step is only necessary when running in the Postman Collection Runner. With Postman, we want to enable both a real-time developer experience and a way to automate collection runs via the command line. To do that, we'll use Newman. For now, let's see if we can make some API calls!

Kubernetes Deployment collection - Initialize

This step resets all collection variables that are required for the collection run. When running in Newman, we'll skip this step by only running the Run folder from this collection.

2. Fetch resources from the filesystem

Finally, we get to use our nifty little Node.js server! If it isn't already running from the instructions earlier, go ahead and fire it up now.

Kubernetes Deployment collection - Fetch Application Resources

In order for this to be successful, we need to add a couple of Kubernetes YAML files to our ci/kubernetes directory.

The deployment...

# ci/kubernetes/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jsonplaceholder
  labels:
    app: jsonplaceholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jsonplaceholder
  template:
    metadata:
      labels:
        app: jsonplaceholder
    spec:
      containers:
        - name: app
          image: svenwal/jsonplaceholder
          ports:
            - containerPort: 3000

And the service...

# ci/kubernetes/service.yaml

apiVersion: v1
kind: Service
metadata:
  name: jsonplaceholder-svc
  labels:
    app: jsonplaceholder
spec:
  ports:
    - port: 3000
      protocol: TCP
  selector:
    app: jsonplaceholder
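
With both files in place, the /resources endpoint returns a JSON array containing the two converted objects. Sketched as a JavaScript literal (abridged; some nested fields elided):

```javascript
// Approximate shape of the /resources response for the two files above.
const resources = [
  {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: { name: "jsonplaceholder", labels: { app: "jsonplaceholder" } },
    // spec.selector and spec.template elided for brevity
    spec: { replicas: 1 },
  },
  {
    apiVersion: "v1",
    kind: "Service",
    metadata: { name: "jsonplaceholder-svc", labels: { app: "jsonplaceholder" } },
    spec: {
      ports: [{ port: 3000, protocol: "TCP" }],
      selector: { app: "jsonplaceholder" },
    },
  },
];

console.log(resources.map((r) => r.kind).join(", ")); // Deployment, Service
```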

3. Deploy using the Kubernetes API

Once we've got everything in place, we're ready to hit the Kubernetes API and deploy our application! The Kubernetes API can be fairly complex at times. Luckily, we already have these requests ready in Postman, and we can re-use them across all of our projects.

Kubernetes Deployment collection - Deploy to Kubernetes
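
Under the hood, each of these requests is a Kubernetes server-side apply: a PATCH with the application/apply-patyaml... rather, the application/apply-patch+yaml content type, sent to the resource's URL. A rough sketch of how such a request could be assembled (the helper and its naive pluralization are illustrative, not part of the collection):

```javascript
// Illustrative sketch of assembling one server-side apply request.
// The variable names mirror the collection variables (kubeBaseUrl,
// kubeNamespace, fieldManager); the helper itself is hypothetical.
function buildApplyRequest(resource, vars) {
  const { apiVersion, kind, metadata } = resource;
  const namespace = metadata.namespace || vars.kubeNamespace;
  // Core resources ("v1") live under /api; everything else under /apis.
  const prefix = apiVersion.includes("/")
    ? `/apis/${apiVersion}`
    : `/api/${apiVersion}`;
  const plural = `${kind.toLowerCase()}s`; // naive, but fine for Deployment/Service
  return {
    method: "PATCH",
    url:
      `${vars.kubeBaseUrl}${prefix}/namespaces/${namespace}` +
      `/${plural}/${metadata.name}?fieldManager=${vars.fieldManager}`,
    headers: { "Content-Type": "application/apply-patch+yaml" },
    body: resource,
  };
}

const req = buildApplyRequest(
  { apiVersion: "apps/v1", kind: "Deployment", metadata: { name: "jsonplaceholder" } },
  { kubeBaseUrl: "http://localhost:8001", kubeNamespace: "default", fieldManager: "postman" }
);
console.log(req.url);
// http://localhost:8001/apis/apps/v1/namespaces/default/deployments/jsonplaceholder?fieldManager=postman
```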

Apps, roll out

Now that we have an understanding of the collection workflow, let's take a look at the variables that make this all work.

Variables

These variables are required to access the Kubernetes API server and the resource server that will run in the CI/CD pipeline.

  • kubeBaseUrl - The URL for Kubernetes. If you're testing locally, you should have kube proxy running, and the URL should be http://localhost:8001. If you're running this collection in Kubernetes, the URL should be https://kubernetes.default.svc.

  • kubeNamespace - The default namespace to create/update resources. If the resource has a metadata.namespace field, that takes precedence.

  • resourceBaseUrl - The URL to the service that will run in your CI/CD environment. It can return resources that can be deployed (e.g., deployment and service YAML files stored in your code repository).

  • fieldManager - The name that identifies this client to Kubernetes server-side apply, which requires field management.

  • forceUpdates - Only needed if resources were previously created/updated by something other than Postman, such as kubectl. More information here: Server-Side Apply.

These can all be overridden in an environment. Here's a snapshot for a local configuration.
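
For reference, a minimal set of local values might look like this (the fieldManager value is an arbitrary illustrative choice; any stable client name works):

```javascript
// Illustrative local values for the collection variables above.
const localEnvironment = {
  kubeBaseUrl: "http://localhost:8001",     // kubectl proxy default
  kubeNamespace: "default",
  resourceBaseUrl: "http://localhost:3001", // kube-resource-server default port
  fieldManager: "postman",                  // hypothetical manager name
  forceUpdates: false,
};

console.log(localEnvironment.kubeBaseUrl); // http://localhost:8001
```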

Local variables used to run the Kubernetes Deployment collection

The big reveal

And finally, cue the drum roll... 🥁

When we run the collection, not only do we see all the API calls being made, we also see test execution happening for each call. The test results we see are verifying the success of our application deployment.

A successful Collection Run

Now, if everything worked correctly, the service should be accessible via the proxy at http://localhost:8001/api/v1/namespaces/default/services/jsonplaceholder-svc/proxy/posts/.

Our brand new service running in Kubernetes

We did it! Finally, we have API testing for our integrated deployment environment.

To put a bow on all this, we can actually automate Postman collection runs in our CI/CD pipelines using an open source CLI called Newman. All we need to do is integrate our Git repository with an API version in Postman.

Here's an example of running this same collection from within a Pod in our Kubernetes cluster! There's a lot more to this, but that's a whole 'nother post.

newman run \
  --folder=Run \
  --env-var=kubeBaseUrl=https://kubernetes.default.svc \
  --env-var=token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) \
  --ssl-extra-ca-certs /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --verbose \
  "./postman/collections/deploy-to-kubernetes.json"

Going beyond

This just scratches the surface of what's possible with API-driven deployments. Here are some ideas for upgrading our workflow:

  • Canary deployments with manual job approval.
  • More complex orchestration with databases and message brokers.
  • Environment promotion with Slack notifications and the CircleCI API.

API-driven automation allows us to use existing tools and infrastructure, leveraging the investments we've already made. This isn't shiny object syndrome. It's perseverance.

Still, it probably wouldn't hurt to just try the next shiny thing... 'Til next time! ✨

Disclaimer: This post mentions Postman. I work there. 🚀
