Kuma Meshes Head-On - A Beginner's Guide

To quickly start learning Kuma, one of the most important things we need is a cluster. Then, we also need a command to find out the status of our pods in Kubernetes (aka k8s), we need to be able to install Kuma, and finally, we also need to be able to issue some Kuma commands.

This is a long way of saying that we need to install 4 essential commands in order to make everything ready for Kuma. These commands are:

- kind - This is also known as Kubernetes in Docker. It is a command that lets us create Kubernetes clusters using only Docker containers as nodes.
- kubectl - Probably the most expected one on this list if you are already used to working with k8s. This is how we issue commands to our k8s cluster.
- helm - Helm allows us to execute some very handy scripts that allow, among other things, the installation of the Kuma control plane.
- kumactl - We will not be using this command very often in this guide, but it is important to be aware of how to use it.

This guide will let you know how to do this in Ubuntu. All of this has been tested on an Ubuntu system. If you are interested in a guide on how to install this on Mac-OS or Windows or any other operating system you may have, please give me a shout-out at my JESPROTECH community YouTube channel.

I. Installing the Commands

kind (k8s in Docker)

In order to install kind, we need to issue these commands:

```bash
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```

It is important to note that the kind command will be installed in /usr/local/bin/kind. This may vary per system, even within Linux distributions.

Installing Certificates and GPG Keys

Both the helm and kubectl commands need to be installed with the presence of certain GPG keys. This is how we can add them to the local apt repository of our Linux distribution:

```bash
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list

sudo apt-get update
```

kubectl

The installation of kubectl is very easy once the previous step is complete:

```bash
sudo apt-get install -y kubelet kubeadm kubectl
```

The kubelet and kubeadm commands aren't mandatory for this guide, but it is a good idea to install them alongside kubectl.

helm

As you may have already guessed, helm is also now very easy to install:

```bash
sudo apt-get install -y helm
```

kuma

Kuma's installation can be a bit cumbersome because it involves one manual step, but first, we need to download our dependencies:

```bash
cd ~ || exit;
curl -L https://kuma.io/installer.sh | VERSION=2.6.1 sh -
```

Be sure to be in your HOME folder before issuing this command. It is important to have Kuma installed in a place where it is easily accessible and easily spotted should we, for example, decide to remove it.
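Before moving on, it can be worth confirming that the tools installed so far actually respond. A quick sanity check, assuming the installer unpacked Kuma into ~/kuma-2.6.1 as above, might look like this:

```bash
# Confirm the CLIs installed above are available.
# kumactl is called by its full path because it is not on the PATH yet at this point.
kind --version
kubectl version --client
helm version
~/kuma-2.6.1/bin/kumactl version
```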
Once we are done with that, it is also very important to add the bin folder to our PATH:

```bash
export PATH=~/kuma-2.6.1/bin:$PATH;
```

Adding this line to the end of (or anywhere in) your start-up script will make this process easy. Your startup script may be any of these: .bashrc, .zshrc, .profile, or it may possibly take another form.

k9s

Installing k9s is also quite different from other applications. In this case, we can either use pacman or brew for Linux. I have used brew mostly for Mac-OS and hardly ever needed it in Linux, but in this case, it is very much needed, and so, to do that, we first need to install brew like this:

```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

Once the brew installation is complete, all we have to do is install k9s ("kanines"):

```bash
brew install derailed/k9s/k9s
```

One thing that is important to take into account, and you'll probably notice this once you install and start running k9s for the first time, is that k9s will crash if a cluster that it is monitoring gets removed and/or added.

II. Creating the Cluster

```bash
kind create cluster --name=wlsm-mesh-zone
kubectl cluster-info --context kind-wlsm-mesh-zone
```

The first command creates a cluster named wlsm-mesh-zone. This is just a cluster that we will use to install Kuma. The second command is used to check the status of the cluster.
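If we want to double-check that the cluster really came up before moving on, a couple of read-only commands (using the context name that kind creates for us) should be enough:

```bash
# List the kind clusters on this machine and the nodes of the one we just created
kind get clusters
kubectl get nodes --context kind-wlsm-mesh-zone
```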
III. Creating a Local Docker Registry

As I mentioned before, we can create a docker registry quite easily. As easy as it may sound, the script to do this is a handful. So, the best thing to do is to just copy and paste the one that kind already makes available on their website. Here, we can download this script:

```bash
#!/bin/sh
# Original Source
# https://creativecommons.org/licenses/by/4.0/
# https://kind.sigs.k8s.io/docs/user/local-registry/
set -o errexit

# 1. Create registry container unless it already exists
reg_name='kind-registry'
reg_port='5001'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --network bridge --name "${reg_name}" \
    registry:2
fi

# 2. Create kind cluster with containerd registry config dir enabled
# TODO: kind will eventually enable this by default and this patch will
# be unnecessary.
#
# See:
# https://github.com/kubernetes-sigs/kind/issues/2875
# https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
# See: https://github.com/containerd/containerd/blob/main/docs/hosts.md
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d"
EOF

# 3. Add the registry config to the nodes
#
# This is necessary because localhost resolves to loopback addresses that are
# network-namespace local.
# In other words: localhost in the container is not localhost on the host.
#
# We want a consistent name that works from both ends, so we tell containerd to
# alias localhost:${reg_port} to the registry container when pulling images
REGISTRY_DIR="/etc/containerd/certs.d/localhost:${reg_port}"
for node in $(kind get nodes); do
  docker exec "${node}" mkdir -p "${REGISTRY_DIR}"
  cat <<EOF | docker exec -i "${node}" cp /dev/stdin "${REGISTRY_DIR}/hosts.toml"
[host."http://${reg_name}:5000"]
EOF
done

# 4. Connect the registry to the cluster network if not already connected
# This allows kind to bootstrap the network but ensures they're on the same network
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
  docker network connect "kind" "${reg_name}"
fi

# 5. Document the local registry
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
```

This script can be found in the root folder of the project. And to install the local docker registry, we only need to run this bash script.

IV. How the Code Has Been Created

There may be a lot to be said about the code that I have provided as the example for this blog post. However, in this case, let's just focus on a few key aspects. Let's start from the listener service, move to the collector, and then to the database. When we run the services locally, or even use a docker-compose configuration to get the containers going, we usually rely on the DNS-attributed names, which automatically get assigned to be the container name or the name we configure with hostname.

With k8s, there is also a set of rules that makes hostnames available throughout the cluster. Let's have a look at the listener and collector examples:

Listener Example

The listener is an application developed in Java using the Spring framework. Like all applications created this way, there is also an application.properties file:

```properties
spring.application.name=wlsm-listener-service
server.port=8080
spring.main.web-application-type=reactive
spring.webflux.base-path=/app/v1/listener
wslm.url.collector=http://localhost:8081/api/v1/collector
```

Of all these properties, the most important one to focus on for the moment is the wslm.url.collector property. With the default configuration, we can run this service locally without the need for any containerized environment. However, in the k8s cluster, we need to be able to access the collector, and for that, we have a prod profile with the definition file application-prod.properties:

```properties
wslm.url.collector=http://wlsm-collector-deployment.wlsm-namespace.svc.cluster.local:8081/api/v1/collector
```

This property tries to reach the host wlsm-collector-deployment.wlsm-namespace.svc.cluster.local. This hostname follows this configuration:

```
<Service Name>.<Namespace>.svc.cluster.local
```

We've got 5 dot-separated elements. The last three are static, and the first two depend on the machine we are trying to reach. On the left, we place the service name, followed by the namespace. This is important to understand how the containers are connected to each other within the cluster.
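If you want to see this naming rule in action once the deployments from section V are applied, one way is to resolve the name from a throwaway pod inside the cluster. This is just a sketch using a busybox image, not something the project itself ships:

```bash
# Resolve the collector's cluster-local DNS name from inside the cluster
kubectl run dns-check --rm -it --restart=Never -n wlsm-namespace \
  --image=busybox:1.36 -- \
  nslookup wlsm-collector-deployment.wlsm-namespace.svc.cluster.local
```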
The part of the code that is interesting to have a look at is, of course, the controller and the service. The controller looks like this:

```java
@RestController
@RequestMapping
public class ListenerController {

    private final ListenerService listenerService;

    ListenerController(ListenerService listenerService) {
        this.listenerService = listenerService;
    }

    @GetMapping("info")
    public String info() {
        return "Listener Service V1";
    }

    @PostMapping("create")
    public Mono<AnimalLocationDto> sendAnimalLocation(
            @RequestBody AnimalLocationDto animalLocationDto) {
        return listenerService.persist(animalLocationDto);
    }
}
```

And the service looks like this:

```java
@Service
public class ListenerService {

    @Value("${wslm.url.collector:http://localhost:8080}")
    private String collectorUrl;

    private final WebClient client = WebClient.create(collectorUrl);

    HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();

    List<AnimalLocationDto> cache = hazelcastInstance.getList("data");

    public Mono<AnimalLocationDto> persist(AnimalLocationDto animalLocationDto) {
        cache.add(animalLocationDto);
        return client.post()
                .uri(collectorUrl.concat("/animals"))
                .contentType(MediaType.APPLICATION_JSON)
                .bodyValue(animalLocationDto)
                .retrieve()
                .bodyToMono(AnimalLocationDto.class);
    }
}
```

As you may have already noticed, this first application, like all of the applications implemented using the Spring Framework in this repository, is reactive, and they all use netty instead of tomcat. For the moment, we can ignore the hazelcast usage in this code. It will be used in later versions of this project.

Collector Example

The collector works in exactly the same way as the listener at this point. Its only duty for now is to relay data from the listener to the database, and to do that, the collector only needs to know exactly where the database is. Let's make the same analysis on the application.properties file of this project:

```properties
spring.application.name=wlsm-collector-service
server.port=8081
spring.main.web-application-type=reactive
spring.webflux.base-path=/api/v1/collector
spring.r2dbc.url=r2dbc:postgresql://localhost:5432/wlsm
spring.r2dbc.username=admin
spring.r2dbc.password=admin
spring.data.r2dbc.repositories.naming-strategy=org.springframework.data.relational.core.mapping.BasicRelationalPersistentEntityNamingStrategy
spring.data.r2dbc.repositories.naming-strategy.table=org.springframework.data.relational.core.mapping.SnakeCaseNamingStrategy
spring.data.r2dbc.repositories.naming-strategy.column=org.springframework.data.relational.core.mapping.SnakeCaseNamingStrategy
```

These properties are the minimum required to get the service going. However, this is only to be able to run it locally. For this service, we also have a prod profile file, application-prod.properties, and we can have a look at it over here:

```properties
spring.r2dbc.url=r2dbc:postgresql://wlsm-database-deployment.wlsm-namespace.svc.cluster.local:5432/wlsm
```

The database connection in this case refers to the host of the database: wlsm-database-deployment.wlsm-namespace.svc.cluster.local

This again follows the same analysis as we have seen before. To the left, we see the service name, followed by the namespace, with svc.cluster.local appended at the end.

And for this service, we also use a controller and a service.
The controller looks like this:

```kotlin
@RestController
@RequestMapping
class CollectorController(
    val collectorService: CollectorService
) {
    @PostMapping("animals")
    suspend fun listenAnimalLocation(@RequestBody animalLocationDto: AnimalLocationDto): AnimalLocationDto = run {
        collectorService.persist(animalLocationDto)
        animalLocationDto
    }
}
```

And the service looks like this:

```kotlin
@Service
class CollectorService(
    val applicationEventPublisher: ApplicationEventPublisher
) {
    fun persist(animalLocationDto: AnimalLocationDto) =
        applicationEventPublisher.publishEvent(AnimalLocationEvent(animalLocationDto))
}
```

The service uses an event publisher called applicationEventPublisher, which follows an event streaming architecture. The event gets handled later on in this event listener, which, as we can readily see, uses r2dbc to stay within the reactive architecture paradigm:

```kotlin
@Service
class EventHandlerService(
    val animalLocationDao: AnimalLocationDao
) {
    @EventListener
    fun processEvent(animalLocationEvent: AnimalLocationEvent) {
        println(animalLocationEvent)
        runBlocking(Dispatchers.IO) {
            animalLocationDao.save(animalLocationEvent.animalLocationDto.toEntity())
        }
    }
}
```

V. Deploy Scripts

Deploying is normally a very straightforward task to do with k8s. However, it is also important to have a look at the configuration needed for our services. For example, let's have a look at the listener implementation:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: wlsm-namespace
  labels:
    kuma.io/sidecar-injection: enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wlsm-listener
  namespace: wlsm-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wlsm-listener
  template:
    metadata:
      labels:
        app: wlsm-listener
    spec:
      containers:
        - name: wlsm-listener-service
          image: localhost:5001/wlsm-listener-service:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: wlsm-listener-deployment
  namespace: wlsm-namespace
spec:
  selector:
    app: wlsm-listener
  ports:
    - protocol: TCP
      appProtocol: http
      port: 8080
```

There are three blocks in this configuration. The first block is the namespace block. The namespace configuration is crucial to allow Kuma to inject the envoy sidecars that it needs to apply policies. Without a defined namespace, kuma will not be able to do this. The other thing that we need to pay attention to when configuring kuma is that the namespace must contain the proper label that kuma will recognize: kuma.io/sidecar-injection: enabled. The namespace definition with the correct label is vital to get Kuma working.

In the second block, we find the definition of the deployment. This is how we define what the deployment of our pod is going to look like in our Kubernetes cluster. What is important to focus on here is the image, the imagePullPolicy, and the containerPort. The image is the complete tag of the Docker image we are using.

The port that gets configured for our docker registry created with kind is 5001, and this is included in the tag for our image. It works as a tag but also as a connection to our Docker registry. That way, we can pull the images and create our containers to run in our Kubernetes environment.
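Once images have been pushed (which we will do next), one way to confirm that the registry on port 5001 is actually holding them is to query the standard registry API. This is just an optional check, not part of the project's scripts:

```bash
# List the repositories currently stored in the local kind registry
curl http://localhost:5001/v2/_catalog
```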
But, of course, to be able to use images, we need to create them, and for that, let's take a look at how that is done in the listener example and the database example. The docker image for the listener is defined like this:

```dockerfile
FROM eclipse-temurin:21-jdk-alpine
WORKDIR /root
ENV LANG=C.UTF-8
COPY entrypoint.sh /root
COPY build/libs/wlsm-listener-service.jar /root/wlsm-listener-service.jar
ENTRYPOINT ["/root/entrypoint.sh"]
```

This all starts from a base image called eclipse-temurin:21-jdk-alpine. After this, we copy the jar created by building the project into our image. Before that, we copy the entrypoint.sh to the container as well and define the ENTRYPOINT to use it. The entrypoint simply calls the jar like this:

```bash
#!/usr/bin/env sh
java -jar -Dspring.profiles.active=prod wlsm-listener-service.jar
```

The database service is quite different because it uses a few scripts that are open source and available online:

```dockerfile
FROM postgres:15
COPY . /docker-entrypoint-initdb.d
COPY ./multiple /docker-entrypoint-initdb.d/multiple
ENV POSTGRES_USER=admin
ENV POSTGRES_PASSWORD=admin
ENV POSTGRES_MULTIPLE_DATABASES=wlsm
EXPOSE 5432
```

This Dockerfile makes a copy of the following file and folder into the docker init directory: create-multiple-postgresql-databases.sh and multiple. Finally, we simply define the variables used in those scripts to define our database and username/password combination.

The database is created using the following schema:

```sql
CREATE TABLE families(
    id uuid DEFAULT gen_random_uuid(),
    name VARCHAR(100),
    PRIMARY KEY(id)
);

CREATE TABLE genuses(
    id uuid DEFAULT gen_random_uuid(),
    name VARCHAR(100),
    PRIMARY KEY(id)
);

CREATE TABLE species(
    id uuid DEFAULT gen_random_uuid(),
    common_name VARCHAR(100),
    family uuid,
    genus uuid,
    PRIMARY KEY(id),
    CONSTRAINT fk_species FOREIGN KEY(family) REFERENCES families(id),
    CONSTRAINT fk_genus FOREIGN KEY(genus) REFERENCES genuses(id)
);

CREATE TABLE animal (
    id uuid DEFAULT gen_random_uuid(),
    name VARCHAR(100),
    species_id uuid,
    PRIMARY KEY(id),
    CONSTRAINT fk_species FOREIGN KEY(species_id) REFERENCES species(id)
);

CREATE TABLE animal_location (
    id uuid DEFAULT gen_random_uuid(),
    animal_id uuid,
    latitude BIGINT,
    longitude BIGINT,
    PRIMARY KEY(id),
    CONSTRAINT fk_animal FOREIGN KEY(animal_id) REFERENCES animal(id)
);
```

And, as a data example, we will register one animal by the name of Piquinho. Piquinho is simply the name of a traveling albatross that is flying around the world; it has a sensor attached to it, and we are reading the data that the sensor is sending to us. There are two tables that define a species: the family and the genus. These are the tables families and genuses. The species table defines the species that the animal belongs to. Finally, we define an animal in the table of the same name, where the species and the name of the animal get registered. The database looks like this:

In order to build, create the images, and start our project, we can run the following commands, which are available in the Makefile:

```bash
make
make create-and-push-images
make k8s-apply-deployment
```

The first make is just a gradle build command. The second command uses the MODULE_TAGS variable:

```makefile
MODULE_TAGS := aggregator \
    collector \
    listener \
    management \
    database
```

to run:
```makefile
docker images "*/*wlsm*" --format '{{.Repository}}' | xargs -I {} docker rmi {}
@for tag in $(MODULE_TAGS); do \
    export CURRENT=$(shell pwd); \
    echo "Building Image $$tag..."; \
    cd "wlsm-"$$tag"-service"; \
    docker build . --tag localhost:5001/"wlsm-"$$tag"-service"; \
    docker push localhost:5001/"wlsm-"$$tag"-service"; \
    cd $$CURRENT; \
done
```

This simply goes through every module and uses a standard, generic docker build command that changes per value given in MODULE_TAGS to create the images and push them to the local registry on port 5001. Following the same strategy, we can then use the third command to deploy our pods. This third command uses a different loop that looks like this:

```makefile
@for tag in $(MODULE_TAGS); do \
    export CURRENT=$(shell pwd); \
    echo "Applying File $$tag..."; \
    cd "wlsm-"$$tag"-service"; \
    kubectl apply -f $$tag-deployment.yaml --force; \
    cd $$CURRENT; \
done
```

In this case, it applies the deployment script of every single one of the services. If we run the command kubectl get pods --all-namespaces, we should get this output:

```
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-76f75df574-dmt5m                     1/1     Running   0          5m21s
kube-system          coredns-76f75df574-jtrfr                     1/1     Running   0          5m21s
kube-system          etcd-kind-control-plane                      1/1     Running   0          5m38s
kube-system          kindnet-7frts                                1/1     Running   0          5m21s
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          5m36s
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          5m36s
kube-system          kube-proxy-njzvl                             1/1     Running   0          5m21s
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          5m36s
kuma-system          kuma-control-plane-5f47fdb4c6-7sqmp          1/1     Running   0          17s
local-path-storage   local-path-provisioner-7577fdbbfb-5qnxr      1/1     Running   0          5m21s
wlsm-namespace       wlsm-aggregator-64fc4599b-hg9qw              1/1     Running   0          4m23s
wlsm-namespace       wlsm-collector-5d44b54dbc-swf84              1/1     Running   0          4m23s
wlsm-namespace       wlsm-database-666d794c87-pslzp               1/1     Running   0          4m22s
wlsm-namespace       wlsm-listener-7bfbcf799-f44f5                1/1     Running   0          4m23s
wlsm-namespace       wlsm-management-748cf7b48f-8cjh9             1/1     Running   0          4m23s
```

What we should observe at this point is the presence of the kuma-control-plane, the kube-controller-manager, and all the services running in our own custom wlsm-namespace. Our cluster is isolated from the outside, and in order to be able to access the different ports, we need to create a port-forwarding for every pod we want to access. We can also have a look at this by looking at k9s. To create the port-forwards, we can issue these commands in separate tabs:

```bash
kubectl port-forward svc/wlsm-collector-deployment -n wlsm-namespace 8081:8081
kubectl port-forward svc/wlsm-listener-deployment -n wlsm-namespace 8080:8080
kubectl port-forward svc/wlsm-database-deployment -n wlsm-namespace 5432:5432
kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681
```

VI. Running the Application

In order to run the application, we should open all the ports, and when all of them are open, we should see something like this on our screens:

We can connect to the database using localhost and port 5432. The connection string is this one: jdbc:postgresql://localhost:5432/wlsm. And to access it, we then use the username/password combination of admin/admin.

The first thing we need to do before we perform any test is to know the id of Piquinho, and we can do that by using IntelliJ's database tools.
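If you prefer the command line to IntelliJ's database tools, a psql one-liner against the forwarded port should do the same job, assuming the seed data registered the albatross under the exact name 'Piquinho' (the password is admin, as configured above):

```bash
# Look up Piquinho's id through the forwarded database port
psql -h localhost -p 5432 -U admin wlsm -c "SELECT id FROM animal WHERE name = 'Piquinho';"
```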
In the root folder of the project, there is a file called test-requests.http. This is a scratch file to create REST requests against our open ports:

```
###
GET http://localhost:8080/app/v1/listener/info

###
POST http://localhost:8080/app/v1/listener/create
Content-Type: application/json

{
  "animalId": "2ffc17b7-1956-4105-845f-b10a766789da",
  "latitude": 52505252,
  "longitude": 2869152
}

###
POST http://localhost:8081/api/v1/collector/animals
Content-Type: application/json

{
  "animalId": "2ffc17b7-1956-4105-845f-b10a766789da",
  "latitude": 52505252,
  "longitude": 2869152
}
```

In order to be able to use this file, we only need to replace the ID (in this example, from 2ffc17b7-1956-4105-845f-b10a766789da to d5ad0824-71c0-4786-a04a-ac2b9a032da4). In this case, we can make requests to the collector or to the listener. Both requests should work, and we should see afterward this kind of response per request:

```
{
  "animalId": "d5ad0824-71c0-4786-a04a-ac2b9a032da4",
  "latitude": 52505252,
  "longitude": 2869152
}

Response file saved.
> 2024-04-12T001024.200.json

Response code: 200 (OK); Time: 7460ms (7 s 460 ms); Content length: 91 bytes (91 B)
```

Because both ports are open and they, at this point, share the same payload type, we can perform the same requests to the listener and the collector. After making those two requests, we should find results in the animal_location table:

So far, this only confirms that the cluster is running correctly, and now we are ready to test policies with our Kuma mesh.

VII. MeshTrafficPermission - Part I

The MeshTrafficPermission is one of the features we can choose in Kuma, and it is probably the most used one.

But first, let's take a moment to explore the Kuma control plane. With all the port-forwarding on, we can just go to localhost:5681/gui and visualize our Kuma meshes. On the main page, we should see something like this:

There is not much to see at the moment, but let's now apply the MeshTrafficPermission:

```bash
echo "apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: mtp
spec:
  targetRef:
    kind: Mesh
  from:
    - targetRef:
        kind: Mesh
      default:
        action: Allow" | kubectl apply -f -
```

Once we apply this, we should get a response like this: meshtrafficpermission.kuma.io/mtp created.

VIII. Mesh

Applying the mesh doesn't change much when it comes to the setup of our cluster. What it does do is allow us to set up traffic routing policies. There are many things that we can choose from, but one of the most obvious is mTLS, otherwise referred to as mutual TLS, which, in very short terms, means that certificates are mutually accepted and validated in order to establish identity between parties and establish encrypted data traffic.

This can be automatically done for us using this simple Mesh configuration:

```bash
echo "apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin" | kubectl apply -f -
```

After applying this policy, we may come across a warning like this one:

```
Warning: resource meshes/default is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
```

For now, we can ignore this warning.
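If you want to confirm that the mesh resource really picked up the mTLS backend, reading it back through kubectl is a simple, read-only way to do it (the Mesh resource is cluster-scoped, so no namespace is needed):

```bash
# Read the default mesh back and check the mtls section
kubectl get mesh default -o yaml
```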
IX. MeshTrafficPermission - Part II

Now comes the fun part, and the first thing we are going to do is disable all traffic between all pods:

```bash
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: wlsm-namespace
  name: mtp
spec:
  targetRef:
    kind: Mesh
  from:
    - targetRef:
        kind: Mesh
      default:
        action: Deny" | kubectl apply -f -
```

And after we get the confirmation message meshtrafficpermission.kuma.io/mtp configured, if we try to make any request using any of the port-forwardings, we'll get:

```
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 133

{
  "timestamp": "2024-04-12T07:09:26.718+00:00",
  "path": "/create",
  "status": 500,
  "error": "Internal Server Error",
  "requestId": "720749ce-56"
}

Response file saved.
> 2024-04-12T090926.500.json

Response code: 500 (Internal Server Error); Time: 10ms (10 ms); Content length: 133 bytes (133 B)
```

This means that all traffic between pods is being denied. What we now have is an internal system protected against possible bad actors within our organization, but we have also blocked traffic between all pods. So, mTLS is a great thing, but blocking all traffic is not at all.

The way to make this perfect is simply to make exceptions to that DENY-all rule, and to do that, we need a policy that will allow traffic between the listener and the collector, and between the collector and the database. Let's start with the traffic between the collector and the database:

```bash
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: wlsm-database
spec:
  targetRef:
    kind: MeshService
    name: wlsm-database-deployment_wlsm-namespace_svc_5432
  from:
    - targetRef:
        kind: MeshService
        name: wlsm-collector-deployment_wlsm-namespace_svc_8081
      default:
        action: Allow" | kubectl apply -f -
```

In this case, what we are doing is allowing data traffic to flow from the collector to the database. If you don't know this already, it is important to note how Kuma interprets the targetRef. The targetRef name, much like a hostname, is used for functional purposes.

The generic way to build these names is like this:

```
<service name>_<namespace>_svc_<service port>
```

In this case, the separator is an underscore, and creating a name this way lets Kuma know exactly what is permitted. If we apply this policy, we'll be able to send requests to the collector after getting this response: meshtrafficpermission.kuma.io/wlsm-database created.

And when making them, the response should now be 200, confirming that the location record has been sent to the collector:

```
POST http://localhost:8081/api/v1/collector/animals

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 91

{
  "animalId": "a3a1bc1c-f284-4876-a84f-f75184b6998f",
  "latitude": 52505252,
  "longitude": 2869152
}

Response file saved.
> 2024-04-12T091754.200.json

Response code: 200 (OK); Time: 1732ms (1 s 732 ms); Content length: 91 bytes (91 B)
```

However, we still didn't define exceptions to the traffic between the listener and the collector, so making a request that way will result in this:

```
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 133

{
  "timestamp": "2024-04-12T07:18:54.149+00:00",
  "path": "/create",
  "status": 500,
  "error": "Internal Server Error",
  "requestId": "e8973d33-62"
}

Response file saved.
> 2024-04-12T091854-1.500.json

Response code: 500 (Internal Server Error); Time: 10ms (10 ms); Content length: 133 bytes (133 B)
```

And this is of course expected.
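Before adding the next exception, it can help to see which traffic-permission policies are currently applied and where. Assuming the CRD follows the usual lowercase plural naming, something like this should list them:

```bash
# List every MeshTrafficPermission currently applied, across all namespaces
kubectl get meshtrafficpermissions --all-namespaces
```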
Let's now apply another policy for this data traffic:

```bash
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: wlsm-collector
spec:
  targetRef:
    kind: MeshService
    name: wlsm-collector-deployment_wlsm-namespace_svc_8081
  from:
    - targetRef:
        kind: MeshService
        name: wlsm-listener-deployment_wlsm-namespace_svc_8080
      default:
        action: Allow" | kubectl apply -f -
```

This makes it possible to now perform requests from the listener to the collector:

```
POST http://localhost:8080/app/v1/listener/create

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 91

{
  "animalId": "a3a1bc1c-f284-4876-a84f-f75184b6998f",
  "latitude": 52505252,
  "longitude": 2869152
}

Response file saved.
> 2024-04-12T092039-2.200.json

Response code: 200 (OK); Time: 14ms (14 ms); Content length: 91 bytes (91 B)
```

X. MeshFaultInjection

Finally, and just to provide another feature as an example, we can also use a feature called MeshFaultInjection, which can be very useful when performing tests with Kuma. We can simulate potential problems within our mesh and check, for example, whether the error handling is being done correctly.

We can also check other things, like how the circuit breakers we may have configured react to faulty connections or high-rate requests. So, let's try it. One way to apply MeshFaultInjection is like this:

```bash
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshFaultInjection
metadata:
  name: default
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: MeshService
    name: wlsm-collector-deployment_wlsm-namespace_svc_8081
  from:
    - targetRef:
        kind: MeshService
        name: wlsm-listener-deployment_wlsm-namespace_svc_8080
      default:
        http:
          - abort:
              httpStatus: 500
              percentage: 50" | kubectl apply -f -
```

With this policy, we are saying that the traffic outbound from the listener and inbound to the collector will have a 50% chance of success. The individual request results are unpredictable, so after applying this policy, we may expect either errors or successful requests to the listener endpoint.

```
POST http://localhost:8080/app/v1/listener/create

HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 133

{
  "timestamp": "2024-04-12T07:28:00.008+00:00",
  "path": "/create",
  "status": 500,
  "error": "Internal Server Error",
  "requestId": "2206f29e-78"
}

Response file saved.
> 2024-04-12T092800.500.json

Response code: 500 (Internal Server Error); Time: 8ms (8 ms); Content length: 133 bytes (133 B)


POST http://localhost:8080/app/v1/listener/create

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 91

{
  "animalId": "a3a1bc1c-f284-4876-a84f-f75184b6998f",
  "latitude": 52505252,
  "longitude": 2869152
}

Response file saved.
> 2024-04-12T092819.200.json

Response code: 200 (OK); Time: 13ms (13 ms); Content length: 91 bytes (91 B)
```

Finally, just out of interest, we can have a look at how our animal_location table looks now:

XI. Conclusion

I hope you were able to follow this article so far and that you were able to have a cluster running on your machine. Thanks anyway for reading this article and for giving up a bit of your time to understand and learn a bit more about Kuma. I personally see great usage for this and a great future for Kuma, as it makes it possible to configure and take much more granular control of our network and our environment.

Its enterprise version, Kong-Mesh, seems quite complete. Kuma is open source, and it seems great for testing and for enterprise use alike.
I find the subject of meshes very interesting, and I think Kuma provides a great way to learn about how meshes work and to get a feel for how we can better control the data flow within our network.

If we want to see the status of our services, we can just go to our Kuma control plane at this localhost location: http://localhost:5681/gui/meshes/default/services?page=1&size=50

In the Kuma control plane, we can also have a look at the policies installed, check the status of our pods, monitor what is going on in the background, and generally just have an overview of what is happening in our mesh and how it is configured. I invite you to go through the application and see if you can check the status of the policies we have installed. The Kuma control plane, a.k.a. the GUI, is made precisely to be easy to understand and to follow up on our mesh.

XII. Resources

- What is mutual TLS (mTLS)?
- K9s
- North-South traffic
- East-West traffic
- Getting Started With Kuma Service Mesh
- Deploy Kuma on Kubernetes
- Local Registry with Kind

I have also made a video about it on my JESPROTECH YouTube channel right over here: https://youtu.be/KE3VTYtLvnI