Microservice Patterns to Design and Implement Any Java-Based Event-Driven Microservices Application

by Yagnesh Aegis, April 5th, 2022

Too Long; Didn't Read

This blog describes various partitioning strategies that you can use. One of the goals of partitioning is to enable parallel or concurrent development, so that the development of different services can proceed concurrently with the development of other services as much as possible. Partitioning means splitting not only the actual business logic but also the database. The challenge is how to enforce data consistency across multiple microservices without using two-phase commit. The term "responsibility" has a rather ambiguous definition, and various people may interpret it in different ways.


When & Why to Go for Microservices

One of the key decisions that you have to make when architecting a system using microservices is figuring out how to partition your application into a set of microservices. So, in this blog, I have described various partitioning strategies that you can use. It is important to remember that one of the goals of partitioning is to enable parallel or concurrent development: the development of different services should be able to proceed concurrently with the development of other services as much as possible.


If you can accomplish this, it will significantly increase the velocity of your team. There are a few different partitioning strategies that you can use. One approach is to:


  1. Partition by a noun: So, you create a service that is in charge of all operations on a specific type of business object. The Product Catalog service, which keeps track of product information, is an example of this. The catalog service is where all product information is kept. It would also have a REST API for adding and updating products, as well as searching and retrieving them.
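As a rough illustration, here is a minimal sketch of what the REST API of such a noun-oriented catalog service might look like in Spring Boot. The class, endpoint, and repository names are hypothetical, not taken from any particular implementation, and the Product entity and ProductRepository are assumed to exist.

```java
import java.util.List;
import org.springframework.web.bind.annotation.*;

// Hypothetical sketch of a noun-oriented Product Catalog service.
// Product and ProductRepository are assumed supporting types.
@RestController
@RequestMapping("/products")
public class ProductCatalogController {

    private final ProductRepository repository; // assumed Spring Data repository

    public ProductCatalogController(ProductRepository repository) {
        this.repository = repository;
    }

    @PostMapping                 // add a product
    public Product add(@RequestBody Product product) {
        return repository.save(product);
    }

    @PutMapping("/{id}")         // update a product
    public Product update(@PathVariable Long id, @RequestBody Product product) {
        product.setId(id);
        return repository.save(product);
    }

    @GetMapping("/{id}")         // retrieve a product
    public Product get(@PathVariable Long id) {
        return repository.findById(id).orElseThrow();
    }

    @GetMapping                  // search products by name (assumed derived query)
    public List<Product> search(@RequestParam String name) {
        return repository.findByNameContaining(name);
    }
}
```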


Another way of partitioning your system is to:


  1. Partition by the verb: There, you would have a service that's responsible for a particular use case. For example, you could have a web application that implements the Add-to-Cart UI: it implements just those web pages that are part of the add-to-cart process. Or you could have a shipping service that's responsible for all aspects of shipping. Those are verb-oriented services.


Another way of Partitioning is:


  1. Partition by Subdomain: This takes a few ideas from domain-driven design. If you consider everything your company does, that is your domain. Within that domain, however, there are various subdomains, each representing a different functional area of your company. As a result, you may create a service that implements the business logic for a specific subdomain.


Then, we could also think about some ideas from object-oriented design.


  1. Single Responsibility Principle: Bob Martin, an object-oriented design expert, developed a set of recommendations or principles. One of these was the Single Responsibility Principle. What he meant was that a class should only have one reason to change; in other words, it should only be responsible for one thing. We may apply that principle to service design and suggest that each service should have a single responsibility. As a result, we'd end up with services that are quite small and cohesive. The term "responsibility" has a rather ambiguous definition, though, so various people may interpret it in different ways.


  2. UNIX: If we think about the Unix command line, instead of one monolithic command-line process, what you have is a set of utilities -- cat, grep, find, etc. Each one of those performs a fairly focused task. We could adopt that metaphor for designing our services: have services that just do one focused thing well, and then tie them together with communication mechanisms to accomplish larger tasks.


Major Challenges in Partitioning

But one of the interesting challenges with partitioning is that, as well as partitioning our actual business logic, we are also partitioning the database. And this introduces a really interesting problem -- how to enforce data consistency or, in other words, how to enforce invariants across multiple microservices without using two-phase commit. Here's one of the examples that I have used.


Assume you have orders, and each order has an order total. Then assume you have customers, and each customer has a credit limit. The rule here is that a customer's outstanding orders must not exceed the credit limit. So, it's an example of an invariant: a business rule that must hold at all times within the application. If it's a monolithic application, you can simply use ACID transactions to enforce it.


You can think of one of the goals of ACID transactions as ensuring consistency or, in other words, enforcing these invariants. But then, if we decompose the system into services, where an order management service holds the orders and a customer management service holds the customers, how do we enforce an invariant that spans those two services? We want to be able to partition our system, but we also need to enforce consistency.
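To make the contrast concrete, here is a minimal sketch of how the monolithic, ACID version of this invariant might look with Spring. All the names (OrderService, sumOutstandingTotals, CreditLimitExceededException, and so on) are hypothetical helpers assumed for illustration; the point is that once orders and customers live in separate services with separate databases, this single @Transactional method is no longer possible.

```java
import java.math.BigDecimal;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical sketch: in a monolith, the credit-limit invariant fits
// inside one ACID transaction because both tables share one database.
@Service
public class OrderService {

    private final OrderRepository orders;       // assumed repositories
    private final CustomerRepository customers;

    public OrderService(OrderRepository orders, CustomerRepository customers) {
        this.orders = orders;
        this.customers = customers;
    }

    @Transactional // atomic: the check and the insert both happen, or neither does
    public Order placeOrder(long customerId, BigDecimal orderTotal) {
        Customer customer = customers.findById(customerId).orElseThrow();
        BigDecimal outstanding = orders.sumOutstandingTotals(customerId); // assumed query
        // The invariant: outstanding orders must not exceed the credit limit
        if (outstanding.add(orderTotal).compareTo(customer.getCreditLimit()) > 0) {
            throw new CreditLimitExceededException();
        }
        return orders.save(new Order(customerId, orderTotal));
    }
}
```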


Another difficulty with partitioning is that it necessitates a thorough understanding of your system's design. You need to understand exactly what you're breaking apart. Assuming you already have a monolith, figuring out how to partition it is rather simple: you already have a collection of modules with interdependencies, so you have a good idea of what you want to break apart. What may be difficult is untangling it and correctly modularizing it in order to accomplish this.


Common Closure Principle:

What the Common Closure Principle says is that components that change for the same reason should be packaged together. So, you can think of putting components that change for a similar reason into the same microservice, because if they were in different microservices and they changed in lockstep, you would have to update and redeploy multiple microservices.


Consider a ride-hailing application as an example. When a trip is completed, a billing service bills the traveler. There's also a payment service that pays the driver when the trip is finished, and so on. As you can see, we've divided this ride application into a number of services, each of which is in charge of a specific function. In a nutshell, that's partitioning. And, as I previously stated, it is very much an art, and it is very reliant on your specific application. In practice, though, once you get the hang of partitioning, it's usually simple to come up with a system that works well for your application.

Deployment Patterns & Strategies:

So, once you've created your microservices, you need a way of deploying them. In this blog, I have outlined some of the issues that you need to address. You've built your microservices; now you need to deploy them. And there are various forces and issues that you need to think about.


  • So, services could very well be written using a variety of different languages. They can be using different frameworks.
  • And in some cases, even if they're written in Java, they might be using different versions of Java or different versions of the Spring framework, for example.
  • So, there's lots of variability in terms of what your services look like.
  • At runtime, each service will have many service instances. You may have a general idea of a service, such as a catalog service or a payment service -- a piece of code. However, you'll almost certainly want to run numerous copies of that program at runtime, not only to handle the demand but also for high availability.
  • You must have enough remaining instances to take over and handle the load if one of those instances fails. The entire process of building and launching a service must also be quick; consider the whole idea of continuous deployment.
  • If someone makes a change, it gets automatically tested and deployed. That whole process needs to be fast. The services also need to be deployed and scaled independently; that's one of the primary motivations behind Java or any other microservices.
  • Independent deployment and scaling of each service requires isolation between service instances. If one service is misbehaving, ideally we don't want it to impact any other services.
  • And ideally, we want to be able to constrain the resources available to a given service. We want to be able to say: it can use this much CPU, this much memory, and this much bandwidth. We don't want a service growing arbitrarily and eating up all of the available memory.
  • And then also, the whole process of deploying changes into production needs to be reliable.
  • There are a few different strategies that you can use. At a high level, there's the more traditional multiple-services-per-host pattern, where you run multiple service instances on a given host. And then there's a more modern approach, where each host -- which could be a virtual machine, a physical machine, or a container -- runs exactly one service instance.

Service per Container:

In this section, I have explained the pattern known as Service per Container. The big idea with this approach is that you take the code for your service and package it up as a container image. Then, when you deploy your service instances, each service instance is a running container, and you may very well have multiple containers running on the same virtual machine.


In a nutshell, that's the strategy, but what does it entail in practice? Docker, as you may know, is synonymous with the concept of containers. Docker is all about containers, but it may be insufficient on its own, and you'll need to employ some kind of clustering solution on top of it. For example, you could use Kubernetes, Google's Docker clustering solution, or Marathon, which is a layer on top of Mesos that allows you to manage your containers.


There's also a tool called DCHQ, which provides a user-friendly interface for deploying Docker containers on virtual machines. In all these cases, the idea is to have a pool of VMs that the clustering solution just treats as basically one large pool of resources, and it's responsible for taking our containers and positioning them on the machines and then managing them and keeping them up and running.


Everything is packaged as Docker containers, which are subsequently run on a cluster of machines. There is a slew of advantages to this strategy. It actually shares many of the advantages of virtual machines, such as excellent isolation. The underlying technology is different, though: containers are an OS-level virtualization technique, which means each container is a collection of isolated, sandboxed processes.

So regardless of what technology you've used to implement your service, you package it up as a container and hand it over to be deployed -- the interface is always the same: start the container, stop the container. That makes deployment a very reliable process.


The underlying operating system is shared by all of the containers running on the same machine. Because they're so lightweight, we get excellent resource usage and fast deployment times. Building a Docker image, for example, takes five seconds in the environment you work in, whereas publishing it to a registry takes 30 seconds, and it could take another 30 seconds to pull it down into the production environment. The application then starts relatively quickly because there is no OS boot-up required: when a container starts, the Java process starts.

Communication Pattern (API Gateway)

I have described the API gateway pattern here. The problem that it addresses is this: how do clients of the system interact with the microservices that make up the system? If you think about it, in a monolithic architecture, there's really one thing that exposes some HTTP endpoints, and it's very clear who a client should talk to.


But in a microservice application, you could have hundreds of services. In that case, which one of those services should the client talk to? There are some issues there. The example given here is the Flipkart Product Details page.


The client, in this case, may be a web application that renders this page. In order to render the page, it must collect numerous pieces of data from multiple microservices: fundamental product information, as well as reviews, recommendations, and sales ranking, among other things. So, it must collect a large number of distinct data snippets. When defining how clients interact with a microservices-based application, there are a number of forces or issues to consider.


There is usually a mismatch between the fine-grained microservices and the client's requirements. The client that is displaying a web page needs a large amount of information about a product, but that information is dispersed over a number of services. The review service holds the review data, the product info service has the product info, and the recommendation service has the recommendations. As a result, the data that the web application requires is dispersed among numerous microservices. Furthermore, various clients may want different data.


So, a web application that's rendering a browser page could actually need a lot more data than a client that's running on a mobile device that has a small screen. There are also differences in terms of network characteristics. So for instance, a web application that's running on a LAN has access to a network that's got much higher throughput and much lower latency than an application that's running across the internet, a WAN, or a mobile application that's having to use a mobile network.

In addition, the number of service instances and their network locations -- their IP addresses and ports -- can change on a regular basis. When you shut down a service instance and restart it, it may or may not be running on the same host or on the same port. If you're using auto-scaling, the number of instances you have can change dramatically depending on the load. As a result, it's an extremely dynamic and fluid environment.


The system's actual partitioning into services can also change over time. You might opt to split a service into two; you might decide to combine two services into one. And you'd like to be able to hide that from your clients. Clients may be using devices over which you have no control, or you may not be able to force them to upgrade in lockstep as your system evolves. As a result, you must keep the partitioning encapsulated.


So, here's an example of the situation. On the left, you've got a traditional server-side web application that's accessing the microservices over a local area network. You also have a mobile client: it might be some JavaScript running in a mobile browser, or it could be a native app, and it's accessing the application over a mobile network. The application itself is comprised of various microservices, some of which have a REST interface while others have a non-HTTP interface -- in this case, a TCP binary protocol.


One method of communication is for clients to speak directly to the microservices, and there is a slew of problems with that. For starters, the API might be somewhat chatty: gathering the data needed to show a Product Details page necessitates making multiple back-end service calls. That is unlikely to be an issue for a typical web application running on a LAN; however, for a mobile client connected to a mobile network, this is unlikely to work well. Another problem is that some services may expose protocols that aren't web-friendly -- they don't really go through firewalls.

API Gateway in Java Microservices:

The API gateway pattern has a whole bunch of benefits. It sits between all of the clients and the various microservices. It provides a clean REST API, and when a request comes in, in the simple case it forwards it to the appropriate microservice. It can also provide a client-specific API for each client. So, for a web application that's accessing the services locally on a LAN, a chatty interface that maps one-to-one to the microservices could suffice.


However, it could give a coarse-grained API to a client running on a mobile device. For example, it could expose a single product-details endpoint, which it handles by making a request to each of the back-end services, aggregating the results, and passing them back to the mobile client. So, the mobile client only has to make one round trip, which is a vast improvement if it's using a mobile network.
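To illustrate, here is a minimal sketch of such a coarse-grained gateway endpoint using Spring WebFlux's WebClient. The service URLs, the DTO classes (ProductInfo, Reviews, Recommendations), and the ProductDetails aggregate are hypothetical names assumed for the example.

```java
import org.springframework.web.bind.annotation.*;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

// Hypothetical sketch of an API gateway endpoint that fans out to several
// back-end services and aggregates the results for a mobile client.
@RestController
public class ProductDetailsGateway {

    private final WebClient webClient = WebClient.create();

    @GetMapping("/productdetails/{productId}")
    public Mono<ProductDetails> productDetails(@PathVariable String productId) {
        Mono<ProductInfo> info = webClient.get()
                .uri("http://product-info-service/products/{id}", productId)
                .retrieve().bodyToMono(ProductInfo.class);
        Mono<Reviews> reviews = webClient.get()
                .uri("http://review-service/reviews/{id}", productId)
                .retrieve().bodyToMono(Reviews.class);
        Mono<Recommendations> recs = webClient.get()
                .uri("http://recommendation-service/recommendations/{id}", productId)
                .retrieve().bodyToMono(Recommendations.class);

        // Three back-end calls in parallel, one round trip for the client
        return Mono.zip(info, reviews, recs)
                .map(t -> new ProductDetails(t.getT1(), t.getT2(), t.getT3()));
    }
}
```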


Protocol translation is something else the API gateway can accomplish. It can translate between the web HTTP world and whatever internal protocols are in use. One example is a message broker that uses AMQP internally: the API gateway can convert those messages into WebSocket messages, which can then be sent to a browser. Netflix is a fantastic example of a company that has adopted this strategy.


Rather than giving a one-size-fits-all API to all of their streaming clients, they decided to create their own. In what you could consider to be an API gateway, they run client-specific server-side code that provides each client with its own custom API, well-suited to the needs of that client. When a request comes in, that gateway code translates the request into calls to various back-end services -- it fans out into seven different calls to back-end services.

Major Benefits:

  1. It decouples the clients from the partitioning. The client just sees the API that the gateway provides; the gateway contains the code that adapts when the partitions shift around. It also protects clients from needing to know where all of the service instances are on the network.
  2. The API Gateway selects the best API for each client. It makes the client's life easier by ensuring that the API is suited to the client's network, and in most cases this means fewer round trips between the client and the server, which is important on a mobile network.
  3. An API Gateway also simplifies the client by moving the logic for making multiple requests out of the client: the client makes a single call to the API gateway, where the fan-out logic lives. It also translates from other protocols to the HTTP-centric web protocols.


The major drawback is that there's increased complexity. You have to operate and maintain this API gateway, which needs to be highly available and so on.

Service Discovery Pattern in Microservices:

Assume you're writing some code that will use HTTP to call a service. To do so, your client must first determine the network location of the service instance it is attempting to invoke. In a modern microservice context, this is quite a difficult problem to solve. So, I've presented some patterns in this article that you can apply to overcome it.


A variety of patterns can be chosen: there's client-side discovery, server-side discovery, and the service registry pattern, to name a few. The problem of discovery is this: you have a client that wants to invoke a service, and that service consists of various service instances. There are a couple of different issues that make knowing the network location a little challenging.

IP addresses and port numbers are assigned dynamically. It's just the nature of today's deployment systems, where we may use containers or virtual machines. The actual number of service instances can also change on the fly: you may be auto-scaling up and down depending on load. As a result, not only are network locations dynamically assigned, but so is the set of service instances.


You need to know where they are, which is tricky, and then you also have to figure out a way of load balancing across them. Here are two common solutions to this problem.


  • The first one is known as client-side service discovery, where we use a smart HTTP client. Service instances, when they start up, register their network locations with a service registry. So, an Order service instance would start up and tell the service registry that it is here, it is an Order service, and it is running on this IP address and port.

  • The Product service's instances would do the same thing. When a client wishes to make an HTTP request to the product catalog, it first queries the service registry to see which service instances are available, and the service registry indicates which IP addresses and ports they're running on. The client can then load-balance the request across those service instances. So, the pattern relies on services registering with the service registry, which operates as a database, and on clients having the intelligence to query the service registry to determine the actual network locations.

  • Netflix open source provides a wonderful illustration of this. Eureka is the service registry that they use: it's a server with an HTTP interface via which services can register and clients can query the database of available service instances.

  • They also provide Ribbon, a service registry-aware smart HTTP client. When you use Ribbon to submit a request, it queries Eureka to find out where the services you're trying to access are located on the network (see the sketch after this list).



  • It allows you to do application-specific load balancing flexibly. The client learns about the available instances and can apply application-specific logic to choose which one to send the request to. Perhaps it uses a consistent hashing strategy, such as sending requests for the same object to the same service instance to take advantage of caching.
  • You have a lot of flexibility. Also, there are very few network hops: the clients talk directly to the services, and other than having a service registry, there's no other infrastructure component involved. One drawback is that the client has to know about the service registry.
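As a rough illustration, here is a minimal sketch of client-side discovery using Spring Cloud, assuming a Eureka client is on the classpath and a Eureka server is running. The logical service name order-service and the Order class are hypothetical.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

// Hypothetical sketch of client-side discovery with Spring Cloud.
// With a Eureka client on the classpath, this application registers itself
// with the registry at startup. The @LoadBalanced RestTemplate resolves
// logical service names against the registry and load-balances across the
// instances it finds.
@SpringBootApplication
public class DiscoveryClientExampleApplication {

    @Bean
    @LoadBalanced // makes the RestTemplate registry-aware
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(DiscoveryClientExampleApplication.class, args);
    }
}

// Usage elsewhere: the host part is a logical service name, not a fixed
// network location. The client library picks an actual instance.
//   Order order = restTemplate.getForObject(
//           "http://order-service/orders/42", Order.class);
```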

Server-side Discovery Architecture:

The whole issue of having to have a smart client available for a range of languages and frameworks is a bit of a pain. Server-side discovery is an alternative approach that overcomes this issue. Part of it operates the same way as before: service instances register with the service registry, so there's a service registry that is aware of all of the instances. But the client simply sends a request to a routing component. It doesn't have to do anything clever; all it has to do is communicate with this routing component. The routing component then queries the service registry and performs the load balancing.


So, we've moved the logic out of the client into the router, which means the client can just go back to being a dumb HTTP client. A good example of this is Amazon's Elastic Load Balancer, which load-balances both traffic coming in from the internet and internal traffic within your system. There are other examples as well: it's quite common to use something like NGINX to act as this smart router.


As a result, this strategy has several advantages. First and foremost, it removes the intelligence from the client. The client doesn't need to be smart, so you won't have to worry about creating a smart client that works across many languages and frameworks. It's also simply a built-in feature of some environments, so you get it for nothing: in AWS, you can utilize an Elastic Load Balancer, and if you're using Kubernetes or Marathon, each machine in the cluster has a routing component -- a local proxy -- that receives a request from a client and transmits it to one of the available service instances. It's simply free and pre-installed.
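Here is a minimal sketch of what the "dumb" client looks like under this pattern, using only the JDK's built-in HTTP client. The load-balancer address orders.internal.example.com is an illustrative assumption; the point is that the client calls one stable address and the router picks an instance.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch: with server-side discovery, the client has no
// registry logic. It calls a single stable address (an assumed internal
// load-balancer DNS name); the router consults the service registry and
// forwards the request to a healthy service instance.
public class OrderClient {

    private final HttpClient httpClient = HttpClient.newHttpClient();

    public String getOrder(String orderId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://orders.internal.example.com/orders/" + orderId))
                .GET()
                .build();
        return httpClient.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```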

Event Sourcing Domain Model in Microservices

Next: how to create a domain model that takes advantage of event sourcing. We'll look at a real code example in this part, the place-order use case, which involves two services: an order service and a customer service. And you'll see what the code in these microservices looks like. As I mentioned in the last section, there are various distinct programming models, some of which are functional and some of which are object-oriented.


I have explained the object-oriented version in this part. So, it's Java code with mutable domain objects in the traditional sense. When it comes to object-oriented programming, the main concept is that you have objects that contain state and behavior.

Implementations:

So, in the Customer aggregate, there are fields like a credit limit and credit reservations, which is a hash map of order ID to order total; it shows how many reservations have been made against that credit limit. The behavior is then expressed by two types of methods: process methods, which accept a command and return a list of events, and apply methods, which accept an event and alter the aggregate's state.


As a result, we've organized the domain logic in a somewhat different manner. You might consider how you would go about doing this conventionally: you'd use a method like reserveCredit, which takes the order ID and the amount to be reserved.


Here's the actual Customer aggregate. You can see that it has a credit limit field and a credit reservations field that's a map. Then there's some business logic, such as availableCredit, which just uses the nice Java 8 Streams API to sum up the credit reservations and subtract that from the credit limit. You can see that it extends ReflectiveMutableCommandProcessingAggregate and defines some process command methods.


The create customer command is passed to a process method, which returns a customer-created event. That's all there is to it; it's just pass-through logic. The process method for the reserve credit command is a little more interesting, as it contains some business logic: it returns a customer credit reserved event if the available credit is greater than or equal to the order total; otherwise, it returns a customer credit limit exceeded event, because the credit limit check failed. So, we've got some business logic in there.


When we look at the apply methods, we can see that there are a few of them. There's an apply method that takes a customer-created event and acts like a constructor: the credit limit is set, and the credit reservations are initialized to an empty map.

The apply method for the customer credit reserved event updates the credit reservations hash map. Then the apply method for the customer credit limit exceeded event does nothing: that event does not imply a state change, so it doesn't have to change anything, but the aggregate does need to be able to apply that event. So that's an example of an aggregate.
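Putting the pieces together, here is a minimal sketch of the Customer aggregate as described above, assuming the Eventuate client framework's ReflectiveMutableCommandProcessingAggregate base class and EventUtil helper. The command and event classes (CreateCustomerCommand, ReserveCreditCommand, and the three events) are assumed to be simple data holders, and details such as how the credit limit is initialized are illustrative.

```java
import io.eventuate.Event;
import io.eventuate.EventUtil;
import io.eventuate.ReflectiveMutableCommandProcessingAggregate;

import java.math.BigDecimal;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the event-sourced Customer aggregate. Process methods take a
// command and return events; apply methods take an event and mutate state.
// CustomerCommand is an assumed marker interface extending io.eventuate.Command.
public class Customer
        extends ReflectiveMutableCommandProcessingAggregate<Customer, CustomerCommand> {

    private BigDecimal creditLimit;
    private Map<String, BigDecimal> creditReservations; // order ID -> order total

    // Business logic: credit limit minus the sum of all reservations
    BigDecimal availableCredit() {
        return creditLimit.subtract(creditReservations.values().stream()
                .reduce(BigDecimal.ZERO, BigDecimal::add));
    }

    public List<Event> process(CreateCustomerCommand cmd) {
        // Pass-through logic
        return EventUtil.events(new CustomerCreatedEvent(cmd.getCreditLimit()));
    }

    public List<Event> process(ReserveCreditCommand cmd) {
        if (availableCredit().compareTo(cmd.getOrderTotal()) >= 0) {
            return EventUtil.events(
                    new CustomerCreditReservedEvent(cmd.getOrderId(), cmd.getOrderTotal()));
        }
        return EventUtil.events(new CustomerCreditLimitExceededEvent(cmd.getOrderId()));
    }

    public void apply(CustomerCreatedEvent event) {
        // Acts like a constructor
        this.creditLimit = event.getCreditLimit();
        this.creditReservations = new HashMap<>();
    }

    public void apply(CustomerCreditReservedEvent event) {
        creditReservations.put(event.getOrderId(), event.getOrderTotal());
    }

    public void apply(CustomerCreditLimitExceededEvent event) {
        // No state change; the event must still be applicable
    }
}
```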