Kubernetes is the most widely used container-based orchestration platform today. Practitioners are often segmented by specialty:

- Application Developer
- Administrator
- Security Specialist

According to the currently in-progress Kubernetes Usage Report 2021, one relevant discovery is that Kubernetes deployments do not live in isolation. Most organizations are running a combination of bare metal, VMs, and Kubernetes. Due to the persistent need to manage heterogeneous infrastructure, Kubernetes specialists are often tasked with other job responsibilities.

How do we wrangle the dissonance in application deployments, administration, security, and orchestration? The reality is that we're saddled with the necessity of integration: APIs all the way down. Today, the communication and interaction possibilities are endless:

- B2B partner services
- Data migration
- Legacy system reuse
- (blah blah blah)

APIs are the raw materials we piece together to make this magic happen. This is what's driving so much adoption of quirky mashup delights. It's not just shiny object obsession, though if we're honest with ourselves, our little raccoon paws have a wandering mind of their own. 🦝

APIs are not just glue code. They enable value chain optimizations through platform ecosystems. Here's what we're going to accomplish:

1. Get a Kubernetes cluster up and running
2. Build a server to fetch local Kubernetes resource files
3. Run a proxy to the Kubernetes API Server
4. Use Postman to deploy our application

Get ready! Oof. Feeling a bit yikes? We're using a lot of buzzwords here. It's important to note that some familiarity with Kubernetes, CI/CD pipelines, and GitHub would really be beneficial to following along.

## Pulling the strings of Kubernetes

Okay, okay, okay... let's get to it. Here's our inventory:

- Kubernetes
- kubectl
- Postman
- Deploy to Kubernetes Postman collection
- Sample repository

Here's what we're looking to accomplish. Most people are familiar with using the `kubectl` CLI to send commands to their cluster.
Fun fact: Kubernetes is an API-driven container orchestration platform, and `kubectl` is just using those APIs under the hood. Don't believe me? Tack `-v=8` to the tail end of any `kubectl` command, and let the HTTP wash over us. 🌊

Now that we know we can make API calls to pull the strings of Kubernetes, let's see what we can do in Postman. Here's a public workspace to follow along!

Let's step through what's going on here.

## 0. Pre-requisites

First and foremost, we'll need a Kubernetes cluster. Cloud providers offer managed options that are relatively quick and easy to get up and running. For this walkthrough, we'll be using Amazon Elastic Kubernetes Service (Amazon EKS). One of the easiest ways to get an Amazon EKS cluster up and running is to use the Amazon EKS Quickstart CloudFormation template.

Once our cluster is up and running, we'll need to create an SSH tunnel to our bastion host. Using the default security options with the Amazon EKS Quickstart template, only the bastion host has access to `kube-apiserver`. This prevents outside access to our precious Pods. However, we'd like to use `kubectl` on our own workstations so we can test our integrations. Scott Lowe has an easy-to-follow blog post on how to do this: Using kubectl via an SSH Tunnel.

Before we get started with local testing, there are a couple of servers we need to run. If the Postman collection is going to deploy Kubernetes resources, it needs a way to access those resources. Furthermore, we need a way of accessing the Kubernetes API. These will require setup for both local testing and testing in our CI/CD pipeline.

## Proxying kube-apiserver

In Kubernetes, `kube-apiserver` validates and configures data for the cluster. It's exposed via a REST API and is the front door by which we'll enter. The `kubectl` CLI has a built-in way to run a local proxy, and we'll utilize its default URL, `http://localhost:8001`, for our local testing.
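To make that front door a little more concrete, here's a minimal sketch (our own illustration, not part of the collection) of how Kubernetes REST paths are composed — the same kind of URLs our requests will hit through the proxy. Core-group resources (`apiVersion: v1`) live under `/api`, while named groups like `apps/v1` live under `/apis`. The naive pluralization below is an assumption that happens to hold for common kinds like Deployment and Service, but not for every resource.

```javascript
// Hypothetical helper: compose the kube-apiserver path for a
// namespaced resource, given fields from its YAML manifest.
function resourcePath(apiVersion, kind, namespace, name) {
  // Core group ("v1") lives under /api; named groups live under /apis.
  const prefix = apiVersion.includes("/")
    ? `/apis/${apiVersion}`
    : `/api/${apiVersion}`;
  // Naive pluralization — fine for Deployment/Service, not every kind.
  const plural = `${kind.toLowerCase()}s`;
  return `${prefix}/namespaces/${namespace}/${plural}/${name}`;
}

console.log(resourcePath("apps/v1", "Deployment", "default", "jsonplaceholder"));
// -> /apis/apps/v1/namespaces/default/deployments/jsonplaceholder
console.log(resourcePath("v1", "Service", "default", "jsonplaceholder-svc"));
// -> /api/v1/namespaces/default/services/jsonplaceholder-svc
```

Prefix those paths with `http://localhost:8001` locally, and you're speaking directly to `kube-apiserver`.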
When we run our Postman collection in our CI/CD workflow, we'll be accessing the API from a Pod, where we'll use a different URL: `https://kubernetes.default.svc`. Don't worry. These URLs can be set as variables in Postman and changed depending on our execution environment.

## Local kube-resource-server

We're going to run a local server in our CI/CD pipeline that's going to conveniently serve the Kubernetes resources over HTTP! One benefit of this is that we can actually use this server when testing our collection run in Postman. Here's a quick implementation in Node.js. After initializing a new Node.js application, be sure to install a couple of dependencies.

```shell
mkdir -p ci/kube-resource-server && cd ci/kube-resource-server
npm init
npm install express js-yaml
```

After we have our new Node.js application and our dependencies, let's write a basic HTTP server that will aggregate Kubernetes resource YAML files and return them as JSON. Our Postman collection will use these JSON representations to deploy to Kubernetes.

```js
// ci/kube-resource-server/server.js
const fs = require("fs/promises");
const path = require("path");
const express = require("express");
const yaml = require("js-yaml");

// NOTE: Change this directory to match where our
// Kubernetes resources live. This assumes they're
// in our project's "ci/kubernetes" directory.
const resourceDirectory = path.join(__dirname, "..", "kubernetes");

const app = express();

app.get("/resources", async (req, res) => {
  try {
    // fs.access rejects if the directory can't be accessed.
    await fs.access(resourceDirectory);
  } catch (err) {
    console.error(
      `The resource directory cannot be accessed: ${resourceDirectory}. Returning an empty array ([]).`
    );
    return res.json([]);
  }

  try {
    const files = await fs.readdir(resourceDirectory);
    const resources = files.map(async (file) => {
      const f = await fs.readFile(path.join(resourceDirectory, file), "utf8");
      const converted = yaml.load(f); // convert YAML into JSON
      return converted;
    });
    res.json(await Promise.all(resources));
  } catch (err) {
    console.error(err);
    res.json([]);
  }
});

const port = process.env.PORT || 3001;
app.listen(port, () => {
  console.log(`Listening on http://localhost:${port}`);
});
```

Now let's start this server before we begin testing.

```shell
# in our ci/kube-resource-server directory
node ./server
```

Remember the URL for the kube-resource-server (default: `http://localhost:3001`). We'll need it later. This one stays the same whether running locally or in our CI/CD workflow.

While we're at it, let's run that Kubernetes proxy, too.

```shell
kubectl proxy
```

Next, we'll walk through each part of the Postman collection. Then we'll work on actually running it!

## 1. Initialize

This step is only necessary when running in the Postman Collection Runner. With Postman, we want to enable both a real-time developer experience and a way to automate collection runs via the command line. To do that, we'll use Newman. For now, let's see if we can make some API calls!

This step resets all collection variables that are required for the collection run. When running in Newman, we'll skip this step by only running the Run folder from this collection.

## 2. Fetch resources from the filesystem

Finally, we get to use our nifty little Node.js server! If it isn't already running from the instructions earlier, go ahead and fire it up now. In order for this to be successful, we need to add a couple of Kubernetes YAML files to our `ci/kubernetes` directory.

The deployment...
```yaml
# ci/kubernetes/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jsonplaceholder
  labels:
    app: jsonplaceholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jsonplaceholder
  template:
    metadata:
      labels:
        app: jsonplaceholder
    spec:
      containers:
        - name: app
          image: svenwal/jsonplaceholder
          ports:
            - containerPort: 3000
```

And the service...

```yaml
# ci/kubernetes/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jsonplaceholder-svc
  labels:
    app: jsonplaceholder
spec:
  ports:
    - port: 3000
      protocol: TCP
  selector:
    app: jsonplaceholder
```

## 3. Deploy using the Kubernetes API

Once we've got everything in place, we're ready to hit the Kubernetes API and deploy our application! The Kubernetes API can be fairly complex at times. Luckily, we already have these requests ready in Postman, and we can re-use them across all of our projects. Apps, roll out!

Now that we have an understanding of the collection workflow, let's take a look at the variables that make this all work.

## Variables

These are required to access the Kubernetes server and the resource server that will run in the CI/CD pipeline.

- `kubeBaseUrl` - The URL for Kubernetes. If you're testing locally, you should have `kubectl proxy` running, and the URL should be `http://localhost:8001`. If you're running this collection in Kubernetes, the URL should be `https://kubernetes.default.svc`.
- `kubeNamespace` - The default namespace to create/update resources. If the resource has a `metadata.namespace` field, that takes precedence.
- `resourceBaseUrl` - The URL to the service that will run in your CI/CD environment. It can return resources that can be deployed (e.g., deployment and service YAML files stored in your code repository).
- `fieldManager` - Kubernetes server-side apply requires field management.
- `forceUpdates` - Only needed if resources were previously created/updated by something other than Postman, such as `kubectl`. More information here: Server-Side Apply.

These can all be overridden in an environment.
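To make those variables concrete, a local Postman environment might look roughly like this (a sketch only — the variable values come from the walkthrough above, the `fieldManager` value is our own choice, and the exact export format can vary between Postman versions):

```json
{
  "name": "kubernetes-local",
  "values": [
    { "key": "kubeBaseUrl", "value": "http://localhost:8001", "enabled": true },
    { "key": "kubeNamespace", "value": "default", "enabled": true },
    { "key": "resourceBaseUrl", "value": "http://localhost:3001", "enabled": true },
    { "key": "fieldManager", "value": "postman", "enabled": true },
    { "key": "forceUpdates", "value": "false", "enabled": true }
  ]
}
```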
Here's a snapshot for a local configuration.

## The big reveal

And finally, cue the drum roll... 🥁

When we run the collection, not only do we see all the API calls being made, we also see test execution happening for each call. The test results we see are verifying the success of our application deployment.

Now, if everything worked correctly, the service should be accessible via the proxy at `http://localhost:8001/api/v1/namespaces/default/services/jsonplaceholder-svc/proxy/posts/`.

We did it! Finally, we have API testing for our integrated deployment environment.

To put a bow on all this, we can actually automate Postman collection runs in our CI/CD pipelines using an open source CLI called Newman. All we need to do is integrate our Git repository with an API version in Postman. Here's an example of running this same collection from within a Pod in our Kubernetes cluster! There's a lot more to this, but that's a whole 'nother post.

```shell
newman run \
  --folder=Run \
  --env-var=kubeBaseUrl=https://kubernetes.default.svc \
  --env-var=token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) \
  --ssl-extra-ca-certs /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --verbose \
  "./postman/collections/deploy-to-kubernetes.json"
```

## Going beyond

This just scratches the surface of what's possible with API-driven deployments. Here are some ideas for upgrading our workflow:

- Canary deployments with manual job approval.
- More complex orchestration with databases and message brokers.
- Environment promotion with Slack notifications and the CircleCI API.

API-driven automation allows us to use existing tools and infrastructure, leveraging the investments we've already made. This isn't shiny object syndrome. It's perseverance. Still, it probably wouldn't hurt to just try the next shiny thing... 'Til next time! ✨

*Disclaimer: This post mentions Postman. I work there. 🚀*