This is an excerpt from Traffic Management with Istio module — you can download the 20+ page PDF and supporting YAML files by signing up at 👉 www.LearnIstio.com 👈
By default, any service running inside the service mesh is not automatically exposed outside of the cluster, which means we can't reach it from the public Internet. Similarly, services within the mesh don't have access to anything running outside of the cluster.
To allow incoming traffic to the frontend service that runs inside the cluster, we need to create an external load balancer first. As part of the installation, Istio creates an `istio-ingressgateway` service of type `LoadBalancer` that, together with a corresponding Istio `Gateway` resource, can be used to allow traffic into the cluster.
If you run `kubectl get svc istio-ingressgateway -n istio-system`, you will get output similar to this:
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)
istio-ingressgateway   LoadBalancer   10.107.249.46   <pending>     ...
The above output shows the Istio ingress gateway of type `LoadBalancer`. If you're using a Minikube cluster, you will notice that the external IP column shows `<pending>` — that is because we don't actually have a real external load balancer, as everything runs locally. With a cluster running at any cloud provider, we would see a real IP address there; that IP address is where incoming traffic enters the cluster.
We will be accessing the service in the cluster frequently, so we need to know which address to use. The address we are going to use depends on where the Kubernetes cluster is running.
If using Minikube
Use the script below to set the `GATEWAY` environment variable we will use to access the services.
export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export GATEWAY=$INGRESS_HOST:$INGRESS_PORT
If you run `echo $GATEWAY`, you should get an IP address with a port, such as `192.168.99.100:31380`.
If using Minikube (v0.32.0 or higher)
Minikube `v0.32.0` and higher has a command called `minikube tunnel`. This command creates networking routes from your machine into the Kubernetes cluster and allocates IPs to services of type `LoadBalancer`. This means you can access your exposed service using an external IP address, just like you would when running Kubernetes in the cloud.
To use the tunnel command, open a new terminal window and run `minikube tunnel`. You should see output similar to this:
$ minikube tunnel
Status:
    machine: minikube
    pid: 43606
    route: 10.96.0.0/12 -> 192.168.99.104
    minikube: Running
    services: [istio-ingressgateway]
    errors:
        minikube: no errors
        router: no errors
        loadbalancer emulator: no errors
If you run the `kubectl get svc istio-ingressgateway -n istio-system` command to get the ingress gateway service, you will notice an actual IP address in the `EXTERNAL-IP` column. It should look something like this:
$ kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)
istio-ingressgateway   LoadBalancer   10.107.235.182   10.107.235.182   ...
Now you can use the external IP address (`10.107.235.182` above) as the public entry point to your cluster. Run the command below to store the external IP in the `GATEWAY` variable:
export GATEWAY=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
If using Docker for Mac/Windows
When using Docker for Mac/Windows, the Istio ingress gateway is exposed on `localhost:80`:
export GATEWAY=localhost
If using hosted Kubernetes
If you're using hosted Kubernetes, run the `kubectl get svc istio-ingressgateway -n istio-system` command and use the value from the `EXTERNAL-IP` column.
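The variable can be set with the same `jsonpath` query used in the Minikube tunnel scenario, assuming your provider reports the load balancer address as an IP (some providers report a hostname under `.hostname` instead):

export GATEWAY=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')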
For the rest of the module, we will use the `GATEWAY` environment variable in all examples when accessing the services.
Now that we have the `GATEWAY` address, we can try to access it. Unfortunately, we get back something like this:
$ curl $GATEWAY
curl: (7) Failed to connect to 192.168.99.100 port 31380: Connection refused
Yes, we have the IP and it's the correct one. However, this IP address alone is not enough; we also need an Ingress or `Gateway` resource to configure what happens with requests when they hit the cluster. This resource operates at the edge of the service mesh and is used to enable ingress (incoming) traffic to the cluster.
Here's what a minimal `Gateway` resource looks like:
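A sketch of such a resource, matching the description below (the resource name `http-gateway` is illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - '*'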
With the above snippet, we are creating a gateway that will proxy all requests to pods labeled with `istio: ingressgateway`. You can run `kubectl get pod --selector="istio=ingressgateway" --all-namespaces` to get all pods with that label. The command returns the Istio ingress gateway pod that's running in the `istio-system` namespace. This ingress gateway pod will then, in turn, proxy traffic further to different Kubernetes services.
Under `servers` we define which hosts this gateway will proxy. We are using `*`, which means we want to proxy all requests, regardless of the host name.
In the real world, the host would be set to the actual domain name (e.g. www.example.com) where cluster services will be accessible. The `*` value should only be used for testing and local scenarios, not in production.
With the host and port combination above, we are allowing incoming HTTP traffic on port `80` for any host (`*`). Let's deploy this resource:
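Assuming the Gateway definition is saved to a file (the name gateway.yaml here is illustrative), it can be applied like this:

kubectl apply -f gateway.yaml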
If you run the `curl` command now, you will get a slightly different response:
$ curl -v $GATEWAY
* Rebuilt URL to: 192.168.99.100:31380/
*   Trying 192.168.99.100...
* TCP_NODELAY set
* Connected to 192.168.99.100 (192.168.99.100) port 31380 (#0)
> GET / HTTP/1.1
> Host: 192.168.99.100:31380
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< location: http://192.168.99.100:31380/
< date: Tue, 18 Dec 2018 00:05:17 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host 192.168.99.100 left intact
Instead of a connection refused response, we get a 404. If you think about it, that response makes sense: with the `Gateway` resource we only defined the port and hosts, but we haven't actually defined which service the requests should be routed to. This is where the second Istio resource, the `VirtualService`, comes into play.
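As a preview, a minimal `VirtualService` that binds to a gateway and routes all matching traffic to a single service could be sketched like this (the gateway name `http-gateway` and the destination host `frontend` are illustrative):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
    - '*'
  gateways:
    - http-gateway
  http:
    - route:
        - destination:
            host: frontend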