A year ago, Harry Bagdi wrote an amazingly helpful blog post (link at the bottom of this article) on observability for microservices. By comparing titles, it becomes obvious that my post draws inspiration from his work. When he published it, our company, Kong, was doing an amazing job at one thing: API gateways. So naturally, his post only covered leveraging the Prometheus monitoring stack in conjunction with Kong Gateway. But to quote Bob Dylan, “the times they are a-changin [and sometimes an API gateway is just not enough]”. So we released Kuma, an open source service mesh that works in conjunction with Kong Gateway, and donated it to the Cloud Native Computing Foundation as a Sandbox project in June 2020.

How does this change observability for the microservices in our Kubernetes cluster? Well, let me show you.

## Prerequisites

The first thing to do is to set up Kuma and Kong. But why reinvent the wheel when my previous blog post already covered exactly how to do this? Follow the steps there to set up Kong and Kuma in a Kubernetes cluster.

## Install the Prometheus Monitoring Stack

Once the prerequisite cluster is set up, getting the Prometheus monitoring stack running is a breeze. Just run the `kumactl install [..]` command below and it will deploy the stack. `kumactl` is the same binary we used in the prerequisite step; if you do not have it set up, you can download it from Kuma’s installation page.

```
$ kumactl install metrics | kubectl apply -f -
namespace/kuma-metrics created
podsecuritypolicy.policy/grafana created
configmap/grafana created
configmap/prometheus-alertmanager created
configmap/provisioning-datasource created
configmap/provisioning-dashboards created
configmap/prometheus-server created
persistentvolumeclaim/prometheus-alertmanager created
persistentvolumeclaim/prometheus-server created
...
```

To check that everything has been deployed, look at the `kuma-metrics` namespace:

```
$ kubectl get pods -n kuma-metrics
NAME                                             READY   STATUS    RESTARTS   AGE
grafana-c987548d6-5l7h7                          1/1     Running   0          2m18s
prometheus-alertmanager-655d8568-frxhc           2/2     Running   0          2m18s
prometheus-kube-state-metrics-5c45f8b9df-h9qh9   1/1     Running   0          2m18s
prometheus-node-exporter-ngqvm                   1/1     Running   0          2m18s
prometheus-pushgateway-6c894bb86f-2gflz          1/1     Running   0          2m18s
prometheus-server-65895587f-kqzrf                3/3     Running   0          2m18s
```

## Enable Metrics on the Mesh

Once the pods are all up and running, we need to edit the Kuma mesh object to include the `metrics: prometheus` section you see below. It is not included by default, so edit the mesh object using `kubectl` like so:

```
$ cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    ca:
      builtin: {}
  metrics:
    prometheus: {}
EOF
```

## Accessing Grafana Dashboards

We can visualize our metrics with Kuma’s prebuilt Grafana dashboards. The best part is that Grafana was installed alongside the Prometheus stack, so if you port-forward the Grafana server pod in the `kuma-metrics` namespace, you will see all your metrics:

```
$ kubectl port-forward grafana-c987548d6-5l7h7 -n kuma-metrics 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
```

The next step is to visit the Grafana dashboard to query the metrics that Prometheus is scraping from the Envoy sidecar proxies within the mesh. If you are prompted to log in, just use admin for both the username and password.
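Before diving into the dashboards, you can optionally confirm from the command line that Prometheus is actually scraping the mesh. The sketch below is not part of the original setup; the `prometheus-server` service name and its port 80 mapping are assumptions based on the stock Prometheus chart (verify with `kubectl get svc -n kuma-metrics`), the Envoy counter name is just one example of what the sidecars expose, and `jq` is assumed to be installed.

```sh
# Optional sanity check. Assumption: a service called prometheus-server
# listening on port 80 in the kuma-metrics namespace; verify the name and
# port with `kubectl get svc -n kuma-metrics` before running this.
kubectl port-forward svc/prometheus-server 9090:80 -n kuma-metrics &
sleep 2  # give the port-forward a moment to establish

# `up` is a built-in Prometheus metric: one result per scrape target with a
# value of "1" means each Envoy sidecar (dataplane) is being scraped cleanly.
curl -s 'http://localhost:9090/api/v1/query?query=up' \
  | jq '.data.result[] | {job: .metric.job, healthy: .value[1]}'

# Envoy request counters (for example envoy_cluster_upstream_rq_total) should
# also start appearing once traffic flows through the mesh.
curl -s -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=envoy_cluster_upstream_rq_total' \
  | jq '.data.result | length'
```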
There will be three Kuma dashboards:

- Kuma Mesh: a high-level overview of the entire service mesh
- Kuma Dataplane: in-depth metrics on a particular Envoy dataplane
- Kuma Service to Service: metrics on the connections and traffic between two services

But we can do better…by stealing more ideas from Harry’s blog. In the remainder of this tutorial, I will explain how you can extend the Prometheus monitoring stack we just deployed to work in conjunction with Kong.

To start, while we are still in Grafana, let’s add the official Kong dashboard to our Grafana server. Visit the import page in Grafana to import a new dashboard:

On this page, enter the Kong Grafana dashboard ID, 7424, into the top field. The page will automatically redirect you to the screen shown below if you entered the ID correctly:

Here, you need to select the Prometheus data source. The drop-down should only have one option named “Prometheus,” so select that. Click the green “Import” button when you are done.

But before we go explore that new dashboard we just created, we need to set up the Prometheus plugin on the Kong API gateway.

## Enabling the Prometheus Plugin on Kong Ingress Controller

We need the Prometheus plugin to expose metrics related to Kong and its proxied upstream services in Prometheus exposition format. But you may ask, “Wait, didn’t we just set up Prometheus by enabling the metrics option on the entire Kuma mesh? And if Kong sits within this mesh, why do we need an additional Prometheus plugin?” I know it may seem redundant, but let me explain. When the metrics option is enabled on the mesh, Prometheus only has access to the metrics exposed by the data planes (Envoy sidecar proxies) that sit alongside the services in the mesh, not to metrics from the services themselves. Kong Gateway has a lot more metrics we can gain insight into if we reuse the same Prometheus server.

Doing so is quite simple. We will create a Custom Resource in Kubernetes to enable the Prometheus plugin in Kong. This configures Kong to collect metrics for all requests proxied via Kong and expose them to Prometheus.

Execute the following to enable the Prometheus plugin for all requests:

```
echo "apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  labels:
    global: \"true\"
  name: prometheus
plugin: prometheus
" | kubectl apply -f -
```

Export the PROXY_IP once again, since we’ll be using it to generate some consistent traffic:

```
export PROXY_IP=$(minikube service -p kuma-demo -n kuma-demo kong-proxy --url | head -1)
```

This is the same PROXY_IP we used in the prerequisite blog post. If nothing shows up when you `echo $PROXY_IP`, you will need to revisit the prerequisite and make sure Kong is set up correctly within your mesh. But if you can access the application via the PROXY_IP, run this loop to throw traffic into our mesh:

```
while true; do
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items?q=dress"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items/8/reviews"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/items"
  curl -s -o /dev/null -w "%{http_code}" "${PROXY_IP}/dead_endpoint"
  sleep 0.01
done
```

## “Show Me the Metrics!”

Go back to the Kong Grafana dashboard and watch those sweet metrics trickle in:
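If you would rather verify from the terminal that Kong’s metrics are reaching Prometheus, here is a minimal sketch. The `kong_http_status` counter name reflects the Prometheus plugin around the time of writing and may differ in newer Kong versions, and the `prometheus-server` service and port are the same assumptions as in the earlier check, so adjust both to your deployment.

```sh
# Optional check that Kong's Prometheus plugin metrics are being collected.
# Assumptions: the prometheus-server service on port 80 (as above) and the
# kong_http_status metric name, which varies across Kong versions.
kubectl port-forward svc/prometheus-server 9090:80 -n kuma-metrics &
sleep 2  # give the port-forward a moment to establish

# Break down proxied requests by HTTP status code; the /dead_endpoint calls
# from the traffic loop above should surface here as 404s.
curl -s -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(kong_http_status) by (code)' \
  | jq '.data.result'
```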
You now have Kuma and Kong metrics using one Prometheus monitoring stack. That’s all for this blog. Thanks, Harry, for the idea! And thank you for following along.

Let me know what you would like to see next by tweeting at me at @devadvocado or emailing me at kevin.chen@konghq.com.

Previously published at https://konghq.com/blog/observability-for-your-kubernetes-microservices-using-kuma-and-prometheus/

Harry Bagdi's article: https://konghq.com/blog/observability-kubernetes-kong/