Photo by Ashim D’Silva on Unsplash

This is an excerpt from the Traffic Management with Istio module — you can download the 20+ page PDF and supporting YAML files by signing up at 👉 www.LearnIstio.com 👈

By default, any service running inside the service mesh is not automatically exposed outside of the cluster, which means that we can’t get to it from the public Internet. Similarly, services within the mesh don’t have access to anything running outside of the cluster either.

To allow incoming traffic to the frontend service that runs inside the cluster, we need to create an external load balancer first. As part of the installation, Istio creates an istio-ingressgateway service that is of type LoadBalancer and, with the corresponding Istio Gateway resource, can be used to allow traffic into the cluster.

If you run kubectl get svc istio-ingressgateway -n istio-system, you will get an output similar to this one:

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)
istio-ingressgateway   LoadBalancer   10.107.249.46   <pending>     ...

The above output shows the Istio ingress gateway of type LoadBalancer. If you’re using a Minikube cluster, you will notice how the EXTERNAL-IP column shows the <pending> text — that is because we don’t actually have a real external load balancer, as everything runs locally. With a cluster running in the cloud from any cloud provider, we would see a real IP address there — that IP address is where the incoming traffic enters the cluster.

We will be accessing the services in the cluster frequently, so we need to know which address to use. The address we are going to use depends on where the Kubernetes cluster is running.

If using Minikube

Use the script below to set the GATEWAY environment variable we will be using to access the services.

export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export GATEWAY=$INGRESS_HOST:$INGRESS_PORT

If you run echo $GATEWAY you should get an IP address with a port, such as: 192.168.99.100:31380.

If using Minikube (v0.32.0 or higher)

Minikube version `v0.32.0` and higher has a command called minikube tunnel. This command creates networking routes from your machine into the Kubernetes cluster, as well as allocates IPs to services marked with LoadBalancer. What this means is that you can access your exposed service using an external IP address, just like you would when you’re running Kubernetes in the cloud.

To use the tunnel command, open a new terminal window, run minikube tunnel, and you should see an output similar to this one:

$ minikube tunnel
Status:
    machine: minikube
    pid: 43606
    route: 10.96.0.0/12 -> 192.168.99.104
    minikube: Running
    services: [istio-ingressgateway]
    errors:
        minikube: no errors
        router: no errors
        loadbalancer emulator: no errors

If you run kubectl get svc istio-ingressgateway -n istio-system to get the ingress gateway service, you will notice an actual IP address in the EXTERNAL-IP column. It should look something like this:

$ kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)
istio-ingressgateway   LoadBalancer   10.107.235.182   10.107.235.182   ...

Now you can use the external IP address (10.107.235.182 above) as the public entry point to your cluster.
Run the command below to set the external IP value to the GATEWAY variable:

export GATEWAY=$(kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

If using Docker for Mac/Windows

When using Docker for Mac/Windows, the Istio ingress gateway is exposed on localhost:80:

export GATEWAY=localhost

If using hosted Kubernetes

If you’re using hosted Kubernetes, run kubectl get svc istio-ingressgateway -n istio-system and use the external IP value.

For the rest of the module, we will use the GATEWAY environment variable in all examples when accessing the services.

Gateways

Now that we have the GATEWAY variable set, we can try to access it. Unfortunately, we get back something like this:

$ curl $GATEWAY
curl: (7) Failed to connect to 192.168.99.100 port 31380: Connection refused

Yes, we have the IP and it’s the correct one; however, this IP address alone is not enough — we also need an Ingress or Gateway resource to configure what happens with the requests when they hit the cluster. This resource operates at the edge of the service mesh and is used to enable ingress (incoming) traffic to the cluster.

Here’s what a minimal Gateway resource looks like (the resource name gateway below is arbitrary):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

With the above snippet, we are creating a gateway that will proxy all requests to pods that are labeled with the istio: ingressgateway label. You can run kubectl get pod --selector="istio=ingressgateway" --all-namespaces to get all the pods with that label. The command will return the Istio ingress gateway pod that’s running in the istio-system namespace. This ingress gateway pod will then, in turn, proxy traffic further to different Kubernetes services.

Under servers, we define which hosts this gateway will proxy — we are using * which means we want to proxy all requests, regardless of the host name.

In the real world, the host would be set to the actual domain name (e.g. www.example.com) where cluster services will be accessible from. The * should only be used for testing and in local scenarios, not in production.

With the host and port combination above, we are allowing incoming HTTP traffic to port 80 for any host (*). Let’s deploy this resource by saving the snippet above to a file (e.g. gateway.yaml) and running kubectl apply -f gateway.yaml.

If you run the curl command now, you will get a bit of a different response:

$ curl -v $GATEWAY
* Rebuilt URL to: 192.168.99.100:31380/
*   Trying 192.168.99.100...
* TCP_NODELAY set
* Connected to 192.168.99.100 (192.168.99.100) port 31380 (#0)
> GET / HTTP/1.1
> Host: 192.168.99.100:31380
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< location: http://192.168.99.100:31380/
< date: Tue, 18 Dec 2018 00:05:17 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host 192.168.99.100 left intact

Instead of getting a connection refused response, we get a 404. If you think about it, that response makes sense: we only defined the port and hosts with the Gateway resource, but haven’t actually defined anywhere which service we want to route the requests to. This is where the second Istio resource — the VirtualService — comes into play (a rough preview sketch follows at the end of this excerpt).

This is an excerpt from the Traffic Management with Istio module — you can download the 20+ page PDF and supporting YAML files by signing up at 👉 www.LearnIstio.com 👈
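As that preview, here is a rough sketch of what such a VirtualService could look like. This is a minimal illustration rather than the module’s actual resource: it assumes the Gateway above is named gateway and that the application is exposed through a Kubernetes service called frontend on port 80 (both names are assumptions made only for this example).

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - "*"
  gateways:
  - gateway              # bind this VirtualService to the Gateway defined earlier (name assumed)
  http:
  - route:
    - destination:
        host: frontend   # assumed name of the Kubernetes service to route to
        port:
          number: 80     # assumed service port

With a resource along these lines applied, requests hitting the ingress gateway on port 80 would be routed to the frontend service instead of returning a 404.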