
Kuma and Prometheus for Observability in Kubernetes Microservices Clusters

by Kevin Chen (@devadvocado)

A year ago, Harry Bagdi wrote an amazingly helpful blog post (link at bottom of article) on observability for microservices. And by comparing titles, it becomes obvious that my blog post draws inspiration from his work.

When he published it, our company, Kong, was doing an amazing job at one thing: API gateways. So naturally, the blog post only featured leveraging the Prometheus monitoring stack in conjunction with Kong Gateway. But to quote Bob Dylan, “the times they are a-changin [and sometimes an API gateway is just not enough]”. So, we released Kuma (which was donated to the Cloud Native Computing Foundation as a Sandbox project in June 2020), an open source service mesh to work in conjunction with Kong Gateway.

How does this change observability for the microservices in our Kubernetes cluster? Well, let me show you.


The first thing to do is to set up Kuma and Kong. But why reinvent the wheel when my previous blog post already covers exactly how to do this? Follow the steps there to set up Kong and Kuma in a Kubernetes cluster.

Install Prometheus Monitoring Stack

Once the prerequisite cluster is set up, getting the Prometheus monitoring stack running is a breeze. Just run the `kumactl install metrics` command shown below and it will deploy the stack. This is the same `kumactl` binary we used in the prerequisite step; if you do not have it set up, you can download it on Kuma’s installation page:

$ kumactl install metrics | kubectl apply -f -
namespace/kuma-metrics created
podsecuritypolicy.policy/grafana created
configmap/grafana created
configmap/prometheus-alertmanager created
configmap/provisioning-datasource created
configmap/provisioning-dashboards created
configmap/prometheus-server created
persistentvolumeclaim/prometheus-alertmanager created
persistentvolumeclaim/prometheus-server created

To check if everything has been deployed, check the pods in the `kuma-metrics` namespace:
$ kubectl get pods -n kuma-metrics
NAME                                             READY   STATUS    RESTARTS   AGE
grafana-c987548d6-5l7h7                          1/1     Running   0          2m18s
prometheus-alertmanager-655d8568-frxhc           2/2     Running   0          2m18s
prometheus-kube-state-metrics-5c45f8b9df-h9qh9   1/1     Running   0          2m18s
prometheus-node-exporter-ngqvm                   1/1     Running   0          2m18s
prometheus-pushgateway-6c894bb86f-2gflz          1/1     Running   0          2m18s
prometheus-server-65895587f-kqzrf                3/3     Running   0          2m18s

Enable Metrics on Mesh

Once the pods are all up and running, we need to edit the Kuma mesh object to include the `metrics: prometheus` section you see below. It is not included by default, so apply the updated mesh object with `kubectl` like so:

$ cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    ca:
      builtin: {}
  metrics:
    prometheus: {}
EOF

Accessing Grafana Dashboards

We can visualize our metrics with Kuma’s prebuilt Grafana dashboards. And the best part is that Grafana was also installed alongside the Prometheus stack, so if you port-forward the Grafana server pod in the `kuma-metrics` namespace, you will see all your metrics:

$ kubectl port-forward grafana-c987548d6-5l7h7 -n kuma-metrics 3000
Forwarding from -> 3000
Forwarding from [::1]:3000 -> 3000

Next step is to visit the Grafana dashboard to query the metrics that Prometheus is scraping from Envoy sidecar proxies within the mesh. If you are prompted to log in, just use admin for both the username and password.

There will be three Kuma dashboards: 

  • Kuma Mesh: High level overview of the entire service mesh
  • Kuma Dataplane: In-depth metrics on a particular Envoy dataplane
  • Kuma Service to Service: Metrics on connection/traffic between two services

But we can do better…by stealing more ideas from Harry’s blog. In the remainder of this tutorial, I will explain how you can extend the Prometheus monitoring stack we just deployed to work in conjunction with Kong. 

To start, while we are still in Grafana, let’s add the official Kong dashboard to our Grafana server. Visit the import page in Grafana to import a new dashboard. On that page, enter the official Kong dashboard ID into the top field; if you entered the ID correctly, the page will automatically take you to the import options below:

Here, you need to select the Prometheus data source. The drop-down should only have one option, named “Prometheus,” so select that. Click the green “Import” button when you are done. But before we go explore our newly imported dashboard, we need to set up the Prometheus plugin on the Kong API gateway.
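As an aside, if you find yourself rebuilding this stack often, Grafana can also auto-load dashboards at startup through its provisioning mechanism instead of a manual import. A minimal sketch of such a provider file, where the provider name and dashboard directory are examples rather than part of the Kuma-installed config:

```yaml
apiVersion: 1
providers:
- name: kong-dashboards               # hypothetical provider name
  type: file
  options:
    path: /var/lib/grafana/dashboards # directory holding dashboard JSON files
```

You would mount the Kong dashboard JSON into that directory; the `provisioning-dashboards` ConfigMap created by `kumactl install metrics` uses this same mechanism for the Kuma dashboards.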

Enabling Prometheus Plugins on Kong Ingress Controller

We need the Prometheus plugin to expose metrics related to Kong and proxied upstream services in Prometheus exposition format. But you may ask, “wait, didn’t we just set up Prometheus by enabling the metrics option on the entire Kuma mesh? And if Kong sits within this mesh, why do we need an additional Prometheus plugin?” I know it may seem redundant, but let me explain. When enabling the metrics option on the mesh, Prometheus only has access to metrics exposed by the data planes (Envoy sidecar proxies) that sit alongside the services in the mesh, not from the actual services. So, Kong Gateway has a lot more metrics available that we can gain insight into if we can reuse the same Prometheus server.
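To make the distinction concrete, here is a small illustration of the Prometheus exposition format the plugin emits. The metric name `kong_http_status` comes from the Kong Prometheus plugin; the label values and counts below are made up for demonstration, not real output:

```shell
# Write a sample of Prometheus exposition format, similar in shape to what
# the Kong Prometheus plugin exposes (values here are illustrative).
cat <<'EOF' > /tmp/kong_metrics_sample.txt
# HELP kong_http_status HTTP status codes per service in Kong
# TYPE kong_http_status counter
kong_http_status{code="200",service="items"} 1234
kong_http_status{code="404",service="items"} 7
EOF

# Lines starting with '#' are HELP/TYPE metadata; the rest are the samples
# Prometheus actually stores on each scrape.
grep -v '^#' /tmp/kong_metrics_sample.txt
```

These gateway-level series (status codes per service, latencies, bandwidth) are exactly what the Envoy sidecar metrics alone cannot give you.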

Doing so is quite simple: we will create a custom resource in Kubernetes to enable the Prometheus plugin in Kong. This configures Kong to collect metrics for all requests proxied via Kong and expose them to Prometheus.

Execute the following to enable the Prometheus plugin for all requests:

echo "apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  labels:
    global: \"true\"
  name: prometheus
plugin: prometheus
" | kubectl apply -f -
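The `global: "true"` label applies the plugin to every request through the gateway. If you only wanted Kong-level metrics for a particular route, the Kong Ingress Controller can instead attach a KongPlugin to a single Ingress through an annotation. A minimal sketch, where the Ingress name and backend service are hypothetical:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress                 # hypothetical Ingress
  annotations:
    konghq.com/plugins: prometheus   # attach the KongPlugin by name
spec:
  rules:
  - http:
      paths:
      - path: /items
        backend:
          serviceName: frontend      # hypothetical backend service
          servicePort: 8080
```

For this tutorial, though, the global plugin is what we want, since we are about to generate traffic across several endpoints.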

Export the PROXY_IP once again since we’ll be using it to generate some consistent traffic. 

export PROXY_IP=$(minikube service -p kuma-demo -n kuma-demo kong-proxy --url | head -1)

This is the same PROXY_IP we used in the prerequisite blog post. If nothing shows up when you run `echo $PROXY_IP`, you will need to revisit the prerequisite and make sure Kong is set up correctly within your mesh. But if you can access the application via the PROXY_IP, run this loop to throw traffic into our mesh:

while true; do
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items?q=dress"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items/8/reviews"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/dead_endpoint"
  sleep 0.01
done
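The loop prints a stream of bare status codes. If you want a quick sanity check that the traffic mix looks right before heading back to Grafana, you can tally the codes with standard shell tools. A small sketch, where the piped-in codes are a stand-in for the loop's real output:

```shell
# Tally a stream of HTTP status codes, one per line, into "code: count" pairs.
tally_codes() {
  sort | uniq -c | awk '{print $2 ": " $1}'
}

# Stand-in input; in practice you would capture the curl loop's output
# (one status code per line) and pipe it in.
printf '200\n200\n200\n404\n' | tally_codes
```

Note that the loop above prints codes without newlines; to feed it into a helper like this, add `\n` to the `-w` format string (`-w "%{http_code}\n"`) so each code lands on its own line.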

“Show Me the Metrics!”

Go back to the Kong Grafana dashboard and watch those sweet metrics trickle in:

You now have Kuma and Kong metrics using one Prometheus monitoring stack. That’s all for this blog. Thanks, Harry, for the idea! And thank you for following along. Let me know what you would like to see next by tweeting at me at @devadvocado or sending me an email.


Harry Bagdi's article:

