
Deploying Java Applications with Kubernetes and an API Gateway

by Richard Li, March 5th, 2018

In this article you’ll learn how to deploy three simple Java services into Kubernetes (running locally via the new Docker for Mac/Windows Kubernetes integration), and expose the frontend service to end-users via the Kubernetes-native Ambassador API Gateway. So, grab your caffeinated beverage of choice and get comfy in front of your terminal!

A Quick Recap: Architecture and Deployment

In October last year, Daniel Bryant extended his simple Java microservice-based “Docker Java Shopping” container deployment demonstration with Kubernetes support. If you found the time to complete that tutorial, you will have packaged three simple Java services — the shopfront and stockmanager Spring Boot services, and the productcatalogue Dropwizard-based Java service — within Docker images, and deployed the resulting containers into a local minikube-powered Kubernetes cluster. He also showed you how to open the shopfront service to end-users by mapping and exposing a Kubernetes cluster port using a NodePort Service. Although this was functional for the demonstration, many of you asked how you could deploy the application behind an API Gateway. This is a great question, and accordingly we were keen to add another article to this tutorial series (with Daniel’s help) with the goal of deploying the “Docker Java Shopping” application behind the open source Kubernetes-native Ambassador API Gateway.

Figure 1. “Docker Java Shopping” application deployed with Ambassador API Gateway

Quick Aside: Why Use an API Gateway?

Many of you will have used (or at least bumped into) the concept of an API Gateway before. Chris Richardson has written a good overview of the details at microservices.io, and the team behind the creation of the Ambassador API Gateway, Datawire, have also talked about the benefits of using a Kubernetes-native API Gateway. An API Gateway allows you to centralise a lot of the cross-cutting concerns for your application, such as load balancing, security and rate-limiting. In addition, an API Gateway can be a useful tool to help accelerate continuous delivery. Running a Kubernetes-native API Gateway also allows you to offload several of the operational issues associated with deploying and maintaining a gateway — such as implementing resilience and scalability — to Kubernetes itself.

There are several API Gateway choices for Java developers, such as Netflix’s Zuul, Spring Cloud Gateway, Mashape’s Kong, a cloud vendor’s implementation (such as Amazon’s API Gateway), and of course the traditional favourites of NGINX and HAProxy, and some of the more modern variants like Traefik. Choosing an API Gateway can involve a lot of work, as this is a critical piece of your infrastructure (touching every bit of traffic into your application), and there are many tradeoffs to be considered. In particular, watch out for potential high-coupling points — for example, the ability to dynamically deploy “Filter” Groovy scripts into Netflix’s Zuul enables business logic to become spread between the service and the gateway — and also the need to deploy complicated datastores as the end-user traffic increases — for example, Kong requires a Cassandra cluster or Postgres installation to scale horizontally.

For the sake of simplicity in this article we’re going to use the open source Kubernetes-native API Gateway, Ambassador. Ambassador has a straightforward implementation which reduces the ability to accidentally couple any business logic to it. It also lets you specify service routing via a declarative approach that is consistent with the “cloud native” approach of Kubernetes and other modern infrastructure. The added bonus is that routes can be easily stored in version control and pushed down the CI/CD build pipeline with all the other code changes.

Getting Started: NodePorts and LoadBalancers 101

First, ensure you are starting with a fresh (empty) Kubernetes cluster. This demonstration will use the new Kubernetes integration within Docker for Mac. If you want to follow along you will need to ensure that you have installed the Edge version of Docker for Mac or Docker for Windows, and also enabled Kubernetes support by following the instructions within the Docker Kubernetes documentation. We’re going to set up ingress first with a NodePort before switching to Ambassador. If you’re interested in learning more about the nuances of Kubernetes ingress, this article has more detail.
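
If you want to double-check that kubectl is pointing at the local Docker-provided cluster before going any further, a couple of read-only commands will confirm this (the context name shown in the comment is the one Docker for Mac/Windows typically creates; yours may differ):

$ kubectl config current-context   # should print something like "docker-for-desktop"
$ kubectl get nodes                # should list a single Ready node for the local cluster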

Next clone the “Docker Java Shopping” GitHub repository. If you want to explore the directory structure and learn more about each of the three services that make up the application, then take a look at the previous article in this series or the associated mini-book “Containerizing Continuous Delivery in Java” that started all of this. When the repo has been successfully cloned you can navigate into the kubernetes directory. If you are following along with the tutorial then you will be making modifications within this directory, and so you are welcome to fork your own copy of the repo and create a branch that you can push your work to. I don’t recommend skipping ahead (or cheating), but the [kubernetes-ambassador](https://github.com/danielbryantuk/oreilly-docker-java-shopping/tree/master/kubernetes-ambassador) directory contains the complete solution, in case you want to check your work!

$ git clone git@github.com:danielbryantuk/oreilly-docker-java-shopping.git
$ cd oreilly-docker-java-shopping/kubernetes
(master) kubernetes $ ls -lsa
total 24
0 drwxr-xr-x  5 danielbryant staff 160 5 Feb 18:18 .
0 drwxr-xr-x 18 danielbryant staff 576 5 Feb 18:17 ..
8 -rw-r--r--  1 danielbryant staff 710 5 Feb 18:22 productcatalogue-service.yaml
8 -rw-r--r--  1 danielbryant staff 658 5 Feb 18:11 shopfront-service.yaml
8 -rw-r--r--  1 danielbryant staff 677 5 Feb 18:22 stockmanager-service.yaml

If you open up the [shopfront-service.yaml](https://github.com/danielbryantuk/oreilly-docker-java-shopping/blob/master/kubernetes/shopfront-service.yaml) in your editor/IDE of choice, you will see that we are exposing the shopfront service as a NodePort accessible via TCP port 8010. This means that the service can be accessed via port 8010 on any of the cluster node IPs that are made public (and not blocked by a firewall).

---
apiVersion: v1
kind: Service
metadata:
  name: shopfront
  labels:
    app: shopfront
spec:
  type: NodePort
  selector:
    app: shopfront
  ports:
  - protocol: TCP
    port: 8010
    name: http

When running this service via minikube, NodePort allows you to access the service via the cluster's external IP. When running the service via Docker, NodePort allows you to access the service via localhost and the Kubernetes-allocated port. Assuming that Docker for Mac or Windows has been configured to run Kubernetes successfully, you can now deploy this service:

(master) kubernetes $ kubectl apply -f shopfront-service.yaml
service "shopfront" created
replicationcontroller "shopfront" created
(master) kubernetes $ kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          19h
shopfront    NodePort    10.110.74.43   <none>        8010:31497/TCP   0s

You can see the shopfront service has been created, and although there is no external-ip listed, you can see that the port specified in shopfront-service.yaml (8010) has been mapped to port 31497 (your port number may differ here). If you are using Docker for Mac or Windows you can now curl data from localhost (as the Docker app works some magic behind the scenes), and if you are using minikube you can get the cluster IP address by typing minikube ip in your terminal.
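
If you don't fancy scanning the full table for the randomly allocated NodePort each time, one way to pull just that value out is a jsonpath query against the shopfront Service you just created (and, as mentioned above, minikube ip gives you the node address to pair it with if you are on minikube):

$ kubectl get service shopfront -o jsonpath='{.spec.ports[0].nodePort}'   # e.g. 31497 in this walkthrough
$ minikube ip   # only relevant if you are running minikube rather than Docker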

Assuming you are using Docker, and that you have only deployed the single shopfront service, you should see this response from a curl using the port number shown in the kubectl get svc output (31497 for me):

(master) kubernetes $ curl -v localhost:31497
* Rebuilt URL to: localhost:31497/
*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 31497 (#0)
> GET / HTTP/1.1
> Host: localhost:31497
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 500
< X-Application-Context: application:8010
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Tue, 06 Feb 2018 17:20:19 GMT
< Connection: close
<

* Closing connection 0

{"timestamp":1517937619690,"status":500,"error":"Internal Server Error","exception":"org.springframework.web.client.ResourceAccessException","message":"I/O error on GET request for \"http://productcatalogue:8020/products\": productcatalogue; nested exception is java.net.UnknownHostException: productcatalogue","path":"/"}

You’ll notice that you are getting an HTTP 500 error response with this curl, and this is to be expected as you haven’t deployed all of the supporting services yet. However, before you deploy the rest of the services you’ll want to change the NodePort configuration to ClusterIP for all of your services. This means that each service will only be accessible over the network from within the cluster. You could of course use a firewall to restrict a service exposed by NodePort, but by using ClusterIP in your local development environment you are prevented from cheating and accessing your services via anything other than the API gateway we will deploy.

Open shopfront-service.yaml in your editor, and change the NodePort to ClusterIP. You can see the relevant part of the file contents below:

---
apiVersion: v1
kind: Service
metadata:
  name: shopfront
  labels:
    app: shopfront
spec:
  type: ClusterIP
  selector:
    app: shopfront
  ports:
  - protocol: TCP
    port: 8010
    name: http

Now you can modify the services contained within the productcatalogue-service.yaml and stockmanager-service.yaml files to also be ClusterIP.
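
As a rough sketch of what that change looks like (assuming productcatalogue-service.yaml mirrors the shopfront file, with its own app label and the 8020 port you will see in the kubectl output shortly), the service definition ends up along these lines:

---
apiVersion: v1
kind: Service
metadata:
  name: productcatalogue
  labels:
    app: productcatalogue
spec:
  type: ClusterIP        # previously NodePort
  selector:
    app: productcatalogue
  ports:
  - protocol: TCP
    port: 8020
    name: http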

You can also now delete the existing shopfront service, ready for the deployment of the full stack in the next section of the tutorial.



(master *) kubernetes $ kubectl delete -f shopfront-service.yaml
service "shopfront" deleted
replicationcontroller "shopfront" deleted

Deploying the Full Stack

With the Kubernetes cluster once again empty, you can now deploy the full three-service stack and get the associated Kubernetes information on each service:

(master *) kubernetes $ kubectl apply -f .
service "productcatalogue" created
replicationcontroller "productcatalogue" created
service "shopfront" created
replicationcontroller "shopfront" created
service "stockmanager" created
replicationcontroller "stockmanager" created

(master *) kubernetes $ kubectl get services
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes         ClusterIP   10.96.0.1    <none>        443/TCP    2h
productcatalogue   ClusterIP   10.106.8.5   <none>        8020/TCP   1s
shopfront          ClusterIP   10.9.19.20   <none>        8010/TCP   1s
stockmanager       ClusterIP   10.96.27.5   <none>        8030/TCP   1s

You can see that the port that was declared in each service is available as specified (i.e. 8010, 8020, 8030) — each running pod gets its own IP address and associated port range (i.e. each pod gets its own “network namespace”). We can’t access these ports from outside the cluster (as we could with NodePort), but within the cluster everything works as expected.
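
If you want to prove to yourself that the services really are reachable inside the cluster, one option (not part of the original tutorial) is to start a throwaway pod with curl on board and hit the shopfront Service by its DNS name; the image used here is just one convenient choice:

# Launch a temporary interactive pod (removed again when you exit the shell) with curl available
$ kubectl run curl-test --image=radial/busyboxplus:curl -i --tty --rm
# ...then, from the shell inside that pod, the ClusterIP services resolve by name via cluster DNS:
$ curl -s http://shopfront:8010 | head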

You can also confirm that using ClusterIP does not expose the service externally by trying to curl the endpoint (this time you should receive a “connection refused”):

(master *) kubernetes $ curl -v localhost:8010
* Rebuilt URL to: localhost:8010/
*   Trying ::1...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 8010 failed: Connection refused
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connection failed
* connect to 127.0.0.1 port 8010 failed: Connection refused
* Failed to connect to localhost port 8010: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8010: Connection refused

Deploying the API Gateway

Now is the time to deploy the Ambassador API gateway in order to expose your shopfront service to end-users. The other two services can remain private within the cluster, as they are supporting services, and don’t have to be exposed publicly.

First, create a LoadBalancer service that uses Kubernetes annotations to route requests from outside the cluster to the appropriate services. Save the following content within a new file named ambassador-service.yaml. Note the getambassador.io/config annotation. You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects, and clients such as Ambassador can retrieve this metadata.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: shopfront
      prefix: /shopfront/
      service: shopfront:8010
spec:
  type: LoadBalancer
  ports:
  - name: ambassador
    port: 80
    targetPort: 80
  selector:
    service: ambassador

The Ambassador annotation is key to how the gateway works — how it routes “ingress” traffic from outside the cluster (e.g. an end-user request) to services within the cluster. Let’s break this down:

  • getambassador.io/config: | specifies that this annotation is for Ambassador
  • apiVersion: ambassador/v0 specifies the Ambassador API/schema version
  • kind: Mapping specifies that you are creating a “mapping” (routing) configuration
  • name: shopfront is the name for this mapping (which will show up in the debug UI)
  • prefix: /shopfront/ is the external prefix of the URI that you want to route internally
  • service: shopfront:8010 is the Kubernetes service you want to route to

In a nutshell, this annotation states that any request to the external IP of the LoadBalancer service (which will be localhost in your Docker for Mac/Windows example) with the prefix /shopfront/ will be routed to the Kubernetes shopfront service running on the (ClusterIP) port 8010. In your example, when you enter http://localhost/shopfront/ in your web browser, you should see the UI provided by the shopfront service. Hopefully this all makes sense, but if it doesn’t then please visit the Ambassador Gitter and ask any questions, or ping me on Twitter!

You can deploy the Ambassador service:


(master *) kubernetes $ kubectl apply -f ambassador-service.yaml
service "ambassador" created

You will also need to deploy the Ambassador Admin service (and associated pods/containers) that are responsible for the heavy-lifting associated with the routing. It’s worth noting that the routing is conducted by a “sidecar” proxy, which in this case is the Envoy proxy. Envoy is responsible for all of the production network traffic within Lyft. Its creator, Matt Klein, has written lots of very interesting content about the details. Today, it’s the fastest growing alternative to NGINX and HAProxy for a variety of reasons. You may have also heard about the emerging “service mesh” technologies, and the popular Istio project also uses Envoy.

Anyway, back to the tutorial! You can find a pre-prepared Kubernetes config file for Ambassador Admin on the getambassador.io website (for this demo you will be using the “no RBAC” version of the service, but you can also find an RBAC-enabled version of the config file if you are running a Kubernetes cluster with Role-Based Access Control enabled). You can download a copy of the config file and look at it before applying, or you can apply the service directly via the Interwebs:



(master *) kubernetes $ kubectl apply -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml
service "ambassador-admin" created
deployment "ambassador" created

If you issue a kubectl get svc, you can see that your Ambassador LoadBalancer and Ambassador Admin services have been deployed successfully:

(master *) kubernetes $ kubectl get svc
NAME               TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
ambassador         LoadBalancer   10.12.1.42    <pending>     80:31053/TCP     5m
ambassador-admin   NodePort       10.15.58.25   <none>        8877:31516/TCP   1m
kubernetes         ClusterIP      10.96.0.1     <none>        443/TCP          20h
productcatalogue   ClusterIP      10.106.8.5    <none>        8020/TCP         22m
shopfront          ClusterIP      10.98.1.20    <none>        8010/TCP         22m
stockmanager       ClusterIP      10.96.2.45    <none>        8030/TCP         22m

You will notice that the ambassador service's external-ip is listed as <pending>; this is a known bug with the Docker for Mac/Windows Kubernetes integration. You can still access a LoadBalancer service via localhost — although you may need to wait a minute or two while everything deploys successfully behind the scenes.

Let’s now try to access the shopfront using the /shopfront/ route you configured previously within the Ambassador annotations. You can curl localhost/shopfront/ (with no need to specify a port, as you configured the Ambassador LoadBalancer service to listen on port 80):


(master *) kubernetes $ curl localhost/shopfront/
<!DOCTYPE html>
<html lang="en" xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->

...

<!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.12.4/jquery.min.js"></script>
<!-- Include all compiled plugins (below), or include individual files as needed -->
<script src="js/bootstrap.min.js"></script>
</body>
</html>

That’s it! You are now accessing the shopfront service that is hidden away in the Kubernetes cluster via Ambassador. You can also visit the shopfront UI via your browser, and this provides a much more friendly view!

Bonus: Ambassador Diagnostics

If you want to look at the Ambassador Diagnostic UI then you can use port-forwarding. We’ll explain more about how to use this in a future post, but for the moment you can have a look around by yourself. First you will need to find the name of an ambassador pod:

(master *) kubernetes $ kubectl get pods
NAME                          READY     STATUS    RESTARTS   AGE
ambassador-6d9f98bc6c-5sppl   2/2       Running   0          19m
ambassador-6d9f98bc6c-nw6z9   2/2       Running   0          19m
ambassador-6d9f98bc6c-qr87m   2/2       Running   0          19m
productcatalogue-sdtlc        1/1       Running   0          22m
shopfront-gr794               1/1       Running   0          22m
stockmanager-bp7zq            1/1       Running   1          22m

Here we’ll pick ambassador-6d9f98bc6c-5sppl. You can now port-forward from your local network adapter to inside the cluster and expose the Ambassador Diagnostic UI that is running on port 8877.

(master *) kubernetes $ kubectl port-forward ambassador-6d9f98bc6c-5sppl 8877:8877

You can now visit http://localhost:8877/ambassador/v0/diag in your browser and have a look around!

When you are finished you can exit the port-forward via Ctrl-C. You can also delete all of the services you have deployed into your Kubernetes cluster by issuing a kubectl delete -f . within the kubernetes directory. You will also need to delete the ambassador-admin service you have deployed.

(master *) kubernetes $ kubectl delete -f .
service "ambassador" deleted
service "productcatalogue" deleted
replicationcontroller "productcatalogue" deleted
service "shopfront-canary" deleted
replicationcontroller "shopfront-canary" deleted
service "shopfront" deleted
replicationcontroller "shopfront" deleted
service "stockmanager" deleted
replicationcontroller "stockmanager" deleted



(master *) kubernetes $ kubectl delete -f https://getambassador.io/yaml/ambassador/ambassador-no-rbac.yaml
service "ambassador-admin" deleted
deployment "ambassador" deleted

What’s Next?

Ambassador makes canary testing very easy, so look for a future article that explores that topic with Java microservices. Other topics that we’ll explore are integrating all of this into a CD pipeline and how best to set up a local development workflow. In addition, Ambassador supports gRPC, Istio, and statsd-style monitoring, which are all hot topics in cloud-native environments today. If you have any thoughts or feedback, please feel free to comment!

Note: This article is based on Daniel Bryant’s original work with Java microservices and API Gateways.