Scaling is a natural milestone for any software product. When a monolithic architecture prevents you from growing further, it’s time to consider migrating to microservices.
Although microservices offer significant advantages, implementing them can be tricky because of the complexity involved and the absence of a silver-bullet migration strategy. In this article, I’ll share my expertise as a solution architect in tackling the challenges of migrating from a monolith to microservices and explain how to carry out the project while preserving its security and reliability.
Let’s get straight to what can make the process of moving from a monolith to microservices as seamless as possible, starting with the five main steps this transformation involves.
Ensuring a successful migration to microservices begins by justifying the shift from a business perspective. Since every project possesses unique technical capabilities and limitations, it’s vital to consider that the move impacts the entire product architecture, which needs to be effective for future growth. It’s thus advisable to collaborate with both a business analyst and a technical expert to accurately evaluate current system requirements and draft an efficient development roadmap.
While microservices have been in the limelight for years, the truth is they aren’t revolutionary for all projects. Few achieve the microservices ideal without some trade-offs. Although they’ve become somewhat of an industry buzzword, no one has established a perfect microservices environment. Yet, this shouldn’t deter businesses from aiming to transition if it aligns with their goals. It’s essential to manage expectations appropriately and assess if the envisioned tech implementation can meet the predetermined objectives.
Moving to microservices is a sought-after app modernization strategy. However, starting a product directly with microservices might not be the optimal approach. Instead, consider beginning with a monolith, or placing the core business logic within one. From there, extracting services becomes more manageable. Efforts to achieve perfectly isolated microservices can be overambitious and lead to unnecessary complications.
For projects involving large teams, microservices can be the right choice. They provide scalability, not just in system architecture but also in team dynamics. A notable advantage of this approach is the ability to incorporate different technologies.
Before moving to microservices, a comprehensive technical audit is imperative. This audit should pinpoint the current technology underpinning the product. It’s crucial to discern if this technology might pose limitations for future endeavors or if an alternative should be adopted for the microservices. Engaging with experts familiar with the particular technology can provide invaluable insights, ensuring a smooth and efficient architectural transformation.
The next step is to understand what parts of the system can be transitioned to microservices and plan how these microservices will interact (dependencies will describe how changes in one module affect other modules). Essentially, it’s about designing the product’s overarching architecture and prioritizing which services to extract.
Microservices are most effective when they handle specific functionalities. Examples include real-time communication, data processing pipelines, background job processing, or interactions with external services. These functionalities might need data access from the monolith but are technologically distinct.
There are two main approaches to breaking a monolith into microservices.
In the first one, we start by separating the desired functionality within the monolith and gradually disconnect it from other parts but keep it within the monolith. After disconnecting all ties and designing a new API, we move it out to launch as a separate microservice. This approach requires significant changes in the monolith during the process.
The second approach involves making an independent copy of the desired functionality and developing it into a microservice while the original still runs in the monolith. Once the new microservice is fully functional and tested, we remove the original from the monolith.
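The second approach is often paired with a "shadow" period, during which both implementations run and their outputs are compared before the original is removed. Here is a minimal sketch of that idea; the pricing functions and their names are hypothetical, not taken from any specific codebase:

```python
# Sketch of the "copy, verify, then remove" approach.
# All function names here are hypothetical examples.

def monolith_price_total(items):
    """Original pricing logic still living in the monolith."""
    return sum(qty * price for qty, price in items)

def pricing_service_total(items):
    """Independent copy, developed as the future microservice."""
    return sum(qty * price for qty, price in items)

def shadow_compare(items):
    """Run both implementations; serve the monolith's answer,
    but flag any divergence before we trust the new service."""
    old = monolith_price_total(items)
    new = pricing_service_total(items)
    if old != new:
        print(f"divergence detected: monolith={old} service={new}")
    return old
```

Once the comparison shows no divergence under real traffic, the monolith's copy can be safely deleted.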
A key principle of microservices architecture is that each microservice should have its own dedicated database. Yet, breaking up a monolithic database can be challenging due to potential overlaps and interdependencies between database objects.
Different microservices can use distinct databases, programming languages, and data storage solutions. Working with certain databases may be more complex than others, making it impractical to consolidate all data into a single database. Hence, specialized storage systems are often used for specific data types.
Data management operations for microservices can be grouped according to the involved data-related patterns:
DATABASE-PER-SERVICE
The Database-per-Service pattern is a pivotal approach in microservice architecture, underscoring the importance of autonomy and encapsulation. By assigning a dedicated database to each microservice, it ensures data consistency and isolation and reduces contention among services.
However, data integration and cross-service querying can become complex, requiring efficient communication protocols and well-defined interfaces. Additionally, database schema evolutions must be handled with care to avoid unintended service disruptions. But, with the right strategies in place, the Database-per-Service pattern can profoundly enhance scalability and fault tolerance in microservice ecosystems.
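The key property of the pattern is that each service's storage is private, and other services reach the data only through a public interface. A minimal sketch, with in-memory dicts standing in for real databases and all service names invented for illustration:

```python
class OrderService:
    def __init__(self):
        self._db = {}  # private storage; no other service touches it

    def place_order(self, order_id, customer_id, total):
        self._db[order_id] = {"customer_id": customer_id, "total": total}

    def get_order(self, order_id):
        """Public API: the only way other services read order data."""
        return dict(self._db[order_id])

class InvoiceService:
    def __init__(self, order_api):
        self._db = {}           # its own separate storage
        self._order_api = order_api  # talks via API, never via the other DB

    def create_invoice(self, order_id):
        order = self._order_api(order_id)
        self._db[order_id] = {"amount": order["total"]}
        return self._db[order_id]

orders = OrderService()
orders.place_order("o1", "c1", 99.0)
invoices = InvoiceService(orders.get_order)
invoice = invoices.create_invoice("o1")
```

In production the `order_api` callable would be an HTTP or messaging client, but the isolation principle is the same.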
SAGA
The Saga pattern is a critical strategy in microservice transactions, addressing the challenges of maintaining data consistency across distributed systems. Instead of relying on traditional database transactions, it breaks operations into a series of isolated, compensatable actions. If a step fails, compensating transactions are triggered to ensure system consistency.
While this decentralized approach bolsters system resilience and scalability, it demands careful orchestration and error handling to effectively manage potential failures.
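The compensation flow can be sketched in a few lines of Python. This is an illustration only: the step names (reserving stock, charging a payment) are hypothetical, and a real saga would run across network boundaries:

```python
def run_saga(steps):
    """Execute (action, compensation) pairs in order; on failure,
    run the compensations of completed steps in reverse."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            for comp in reversed(done):
                comp()
            return False
    return True

log = []

def reserve_stock():
    log.append("reserve_stock")

def charge_payment():
    raise RuntimeError("card declined")  # simulated failure

steps = [
    (reserve_stock, lambda: log.append("release_stock")),
    (charge_payment, lambda: log.append("refund_payment")),
]
ok = run_saga(steps)
```

Here the payment step fails, so the saga aborts and the stock reservation is compensated, leaving the system consistent.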
API COMPOSITION
The API Composition pattern is a foundational method within microservices architecture that manages the challenge of data retrieval from multiple services. In an environment where each microservice is responsible for its unique piece of data, direct client-side data querying can be both complex and inefficient.
The API Composition pattern addresses this by using an intermediary – often called an API composer or aggregator – to assemble the required data from various microservices into a unified response. By doing so, it provides clients with a singular access point, simplifying queries and streamlining data delivery. While this approach enhances the client experience and reduces direct microservice interactions, developers must ensure the composer remains efficient and doesn’t become a bottleneck or a single point of failure.
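A minimal composer might look like the sketch below. The two downstream services are stubbed as plain functions, and all names and fields are invented for the example:

```python
def user_service(user_id):
    """Stub for the service owning user data."""
    return {"id": user_id, "name": "Ada"}

def order_service(user_id):
    """Stub for the service owning order data."""
    return [{"order_id": 1, "total": 42.0}]

def profile_composer(user_id):
    """Single entry point that fans out to both services
    and merges their responses into one payload."""
    user = user_service(user_id)
    orders = order_service(user_id)
    return {**user, "orders": orders}

profile = profile_composer(7)
```

The client makes one call and never needs to know how many services sit behind the composer.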
CQRS
Given that microservices advocate for decentralized data management, complexities can arise, especially when one service needs to both update data and query it.
CQRS addresses this by segregating the responsibility of command operations (writes) from query operations (reads). In a microservices environment, this means a service can be optimized for its most common task: some services might be read-heavy, while others deal primarily with write operations. As a result, each can scale independently based on its workload.
CQRS in microservices offers many advantages but introduces additional complexity, particularly in ensuring data consistency across services. Thus, its adoption should be thoughtfully considered within the context of the system’s specific requirements.
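The separation itself is simple to illustrate: the write side records commands and publishes change events, while the read side maintains a denormalized view built from those events. The class and field names below are hypothetical:

```python
class CommandSide:
    """Handles writes and publishes change events."""
    def __init__(self, publish):
        self._state = {}
        self._publish = publish

    def set_price(self, sku, price):
        self._state[sku] = price
        self._publish({"sku": sku, "price": price})

class QuerySide:
    """Maintains a read-optimized view, updated from events."""
    def __init__(self):
        self._view = {}

    def apply(self, event):
        self._view[event["sku"]] = event["price"]

    def get_price(self, sku):
        return self._view.get(sku)

reads = QuerySide()
writes = CommandSide(reads.apply)
writes.set_price("sku-1", 9.99)
price = reads.get_price("sku-1")
```

In a real system the `publish` callable would go through a message broker, which is exactly where the eventual-consistency trade-off mentioned above comes from.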
EVENT SOURCING
Event Sourcing emphasizes the capture and storage of all changes to the application state as a sequence of events. Instead of storing the current state of data in a domain, it stores a series of state transitions, enabling a system to reconstruct its state by playing back the events.
In the context of microservices, this allows each service to maintain its own history, promoting autonomy and decoupling between services. As events become the single source of truth, they can be leveraged for multiple purposes, from analytics to audit trails.
Additionally, this approach simplifies error handling and recovery mechanisms, as one can revert to a previous state or reprocess events. While powerful, the pattern necessitates careful consideration of event versioning and storage scalability.
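The core mechanic, replaying events to reconstruct state, fits in a few lines. The account-balance domain below is a made-up example:

```python
def apply_event(balance, event):
    """Pure function: current state + one event -> next state."""
    kind, amount = event
    if kind == "deposit":
        return balance + amount
    if kind == "withdraw":
        return balance - amount
    raise ValueError(f"unknown event: {kind}")

def replay(events):
    """Rebuild the current state from the full event history."""
    balance = 0
    for event in events:
        balance = apply_event(balance, event)
    return balance

history = [("deposit", 100), ("withdraw", 30), ("deposit", 5)]
```

Because the history is the source of truth, reverting to an earlier state is just replaying a prefix of the events.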
SHARED DATABASE
The Shared Database anti-pattern arises when multiple microservices or components in an architecture interact directly with a common database instead of through APIs or messaging. This direct access not only compromises the autonomy of services but also introduces potential data integrity concerns.
A system built on this approach may face security challenges, as services can inadvertently expose sensitive data. Such a design also complicates system evolution, turning seemingly minor database changes into complex tasks that demand coordination across multiple services.
NEXT STEPS
Whatever data management strategy you choose, avoid ending up with a distributed monolith. If services cannot be fully isolated, the split often creates more issues than it solves: the solution remains monolithic in structure and databases, the supposedly separated services retain many connections, and most of the advantages of microservices are lost.
Next, we need to develop API interfaces through which the microservice will communicate with the monolith and other microservices. The referenced service provides data via an API that the requesting service requires.
When designing interservices communication, consider the interaction process. Service interactions can be classified into two groups:
One service processes each client request (one‑to‑one interaction)
Each request is processed by a number of services (one‑to‑many interactions)
Also, take into account the synchronous or asynchronous nature of the interaction:
If, from a business logic perspective, something can be done asynchronously, it’s better to do it that way. This ensures greater stability and facilitates load balancing.
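Asynchronous interaction usually means the caller enqueues a message and returns immediately, while a consumer service processes it at its own pace. A minimal single-process sketch using Python's standard library, with the "order service" and "email worker" roles invented for illustration:

```python
import queue
import threading

jobs = queue.Queue()
sent = []

def email_worker():
    """Consumer service: drains the queue at its own pace."""
    while True:
        msg = jobs.get()
        if msg is None:  # shutdown signal for this demo
            break
        sent.append(f"emailed {msg}")

worker = threading.Thread(target=email_worker)
worker.start()

# The "order service" enqueues and returns immediately, instead of
# waiting for the email to be sent synchronously.
jobs.put("order-1 confirmation")
jobs.put("order-2 confirmation")
jobs.put(None)
worker.join()
```

In production the in-memory queue would be a broker such as RabbitMQ or Kafka, which also gives you buffering when the consumer is temporarily overloaded, which is precisely the load-balancing benefit mentioned above.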
Testing microservices also has some specifics. In traditional monolithic structures, the entire application could be tested as a cohesive unit. In contrast, a microservices-based application may encompass numerous services that might not always be available for simultaneous testing. This interdependence makes end-to-end testing complex and demands alternative testing methods.
There are testing types tailored specifically to microservices, yet the distributed nature of the system still complicates the process. For instance, errors in one microservice can set off a chain reaction, complicating root cause analysis. Because microservices often communicate through various channels and protocols, testing them demands specialized skills and knowledge. Furthermore, the necessity of testing multiple endpoints, coupled with the imperative for automated testing, requires proficiency in script writing and automated testing tools.
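One alternative to fragile end-to-end tests is contract testing: each consumer declares the shape of the response it expects, and the provider is checked against that contract in isolation. A simplified sketch, with the contract fields invented for the example:

```python
def check_contract(response, contract):
    """Verify a provider response contains every field the consumer
    expects, with the expected type. Extra fields are tolerated."""
    for field, field_type in contract.items():
        if field not in response:
            return False, f"missing field: {field}"
        if not isinstance(response[field], field_type):
            return False, f"wrong type for field: {field}"
    return True, "ok"

# The consumer's expectation of the order service's response.
consumer_contract = {"order_id": str, "total": float}

# A sample provider response; the extra field does not break the contract.
provider_response = {"order_id": "o-1", "total": 19.5, "currency": "USD"}
```

Because each side is tested against the contract independently, the two services never need to be deployed together just to run the tests.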
Microservices development requires a whole set of approaches to tackle typical challenges. Awareness of these difficulties allows you to level them out at the early stages and prepare the environment for the successful implementation of microservices.
Ensuring accurate transactions and data consistency across different services can be challenging. While there’s no one-size-fits-all solution, there are some general guidelines for managing data within a microservices structure.
For areas demanding robust consistency, one service can be the primary source of truth for a specific entity. Other services can access this primary source via an API. Some services might maintain their own version of the data or a portion of it. This data can be consistent with the primary data but isn’t viewed as the main source.
For instance, in an e-commerce system, there might be a customer order service and a recommendation service. While the recommendation service could be privy to the order service’s events, in cases like a customer seeking a refund, it’s the order service that retains the comprehensive transaction record.
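The relationship between a primary source of truth and a derived copy can be sketched as follows. The order service keeps the authoritative record and publishes events; the recommendation service keeps only a derived view. All names are hypothetical:

```python
class OrderService:
    """Primary source of truth for orders."""
    def __init__(self):
        self.orders = {}       # the authoritative record
        self.subscribers = []

    def place(self, order_id, sku):
        self.orders[order_id] = sku
        for notify in self.subscribers:  # publish the event
            notify(order_id, sku)

class RecommendationService:
    """Keeps a derived copy of order data, never the main record."""
    def __init__(self):
        self.purchased_skus = []

    def on_order(self, order_id, sku):
        self.purchased_skus.append(sku)

orders = OrderService()
recs = RecommendationService()
orders.subscribers.append(recs.on_order)
orders.place("o-1", "sku-42")
```

When a refund or dispute arises, the system consults `orders.orders`, not the recommendation service's derived list.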
Since there is no universal way to achieve data consistency seamlessly, experienced developers should select the approaches best suited to the nuances of each specific case.
While developing microservices, each team can operate with its own management, methodology, etc. Therefore, it’s essential to have a clear structure for synchronization and communication between teams. This is especially relevant if you plan to split the workload between in-house and outsourced engineers.
The problem with highly formalized structures is that a team might efficiently build its own service, but if it doesn’t consider how other teams will use that service, the data provided by different services may arrive in incompatible formats. Hence, the project needs an optimizer role: someone who coordinates the actions of the teams.
Basically, we need to employ Conway’s Law here, which states that the architecture of software systems is similar to the communication structure of teams working on them. Therefore, if you want to create an architecture of autonomous services, you should first organize small autonomous development teams that can interact with each other effectively.
Working with microservices requires DevOps efforts, as the infrastructure is more complex. Deployments must be coordinated so that all versions of the microservices can interact seamlessly, since they are interconnected and may introduce backward-incompatible changes. It’s crucial to prepare all dependencies in advance and employ flexible tooling (like Kubernetes and Docker). A DevOps engineer can help launch and maintain this infrastructure.
Microservice architecture troubleshooting also becomes more challenging as pinpointing the root cause of an error is tougher. In a monolithic structure, this process is relatively straightforward. With microservices, it’s more complicated since one service might receive data and then pass it to another, making it harder and more time-consuming to determine where in the chain things broke down. To address this, a centralized log aggregator, a deployment orchestration system, and a separate distributed tracing system need to be integrated, making the entire setup more manageable.
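The foundation of distributed tracing is a correlation ID: the entry service generates one per request and every downstream service includes it in its logs, so the aggregator can reconstruct the full path. A toy sketch with hypothetical service names, using plain tuples in place of a real log pipeline:

```python
import uuid

def handle_request(logs):
    """Entry service: generates a correlation ID and passes it along."""
    corr_id = str(uuid.uuid4())
    logs.append((corr_id, "gateway", "received request"))
    check_inventory(corr_id, logs)
    return corr_id

def check_inventory(corr_id, logs):
    """Downstream service: logs with the same correlation ID."""
    logs.append((corr_id, "inventory", "stock verified"))

def trace(logs, corr_id):
    """Aggregator side: reconstruct one request's path across services."""
    return [(svc, msg) for cid, svc, msg in logs if cid == corr_id]

logs = []
corr_id = handle_request(logs)
```

Real systems delegate this to tools such as OpenTelemetry, but the principle of propagating a shared identifier across service boundaries is the same.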
When migrating to microservices, a company needs to take into account many factors, including which parts of the system to bring to microservices, data management, establishing an effective infrastructure, and organizing the work of the team. Working with experienced engineers and solution architects is key here to achieve your goals in the best possible way.
Author:
Yevhen Kuzminov
Ruby Team Leader/Solution Architect at MobiDev