Something I really like about living in the city is the fact that it is made for the masses. Despite its many defects (the rain not being one), Seattle is architected to enable hundreds of thousands of people to go through their busy days. It has a transportation system that interconnects different areas, it mandates land-use policies for parks, residences, commerce and schools, and it provides restricted parking zones. It is designed for walking (assuming you like hills), it provides easy access to hospitals, and it is served by police and fire departments.
But Seattle wasn’t always a big city; its growth is very much a work in progress. Like many other cities, including all-mighty New York, Seattle is constantly under development and re-planning so it can scale to support even more people. It needs more efficient transportation (think subway), bigger highways, more parking, and more recreational areas and residential zones.
The similarities between city planning and software engineering are fascinating to me; they are well described by Sam Newman in his book “Building Microservices”. Just like cities started as small towns, most services started as simple servers sitting under someone’s desk, processing a few hundred requests per day. Given some time, and if the idea is right, a service may become popular. That’s a great thing to happen, except that the challenge of increasing demand quickly turns into a problem of dealing with higher customer expectations. This is similar to how we expect better transportation and more effective policing when a town evolves into a city.
I personally like the story of Twitter’s Fail Whale. As Yao Yue mentions, the whale is a story of growing up. Twitter started with a monolithic service called Monorail: a gigantic box of functionality with enormous scope. While this might be the best way of getting started, Monorail soon became too complex to reason about. Availability and performance problems surfaced as Twitter’s engineering team grew and more people were constantly adding features. Fixing this required more resources and a more principled architecture with better failure handling. In the meantime, Twitter nicely covered up its system errors with the image of a failing whale.
If any of that sounds familiar, perhaps one should consider following Twitter’s approach of embracing a microservices-based architecture.
Evolving a monolithic architecture into a set of microservices is about splitting a big I-can-do-everything box into more manageable boxes with scoped responsibility. It is also about splitting teams into smaller teams that can each focus on a subset of those services (Conway’s law). The resulting services are autonomous, like the teams who manage them, so they can be deployed independently.
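As a minimal sketch of that split (all class and method names here are hypothetical, chosen just for illustration), here is an I-can-do-everything box decomposed into two services with scoped responsibility that talk through a narrow interface:

```python
# A monolith owns every concern at once: user management and billing
# live in one box, coupled through shared state.
class Monolith:
    def __init__(self):
        self.users = {}
        self.invoices = []

    def register_user(self, name):
        self.users[name] = {"active": True}

    def bill_user(self, name, amount):
        if self.users.get(name, {}).get("active"):
            self.invoices.append((name, amount))


# The same behavior split into two autonomous services, each with a
# single scoped responsibility; each could be deployed and scaled
# independently by its own team.
class UserService:
    def __init__(self):
        self._users = {}

    def register(self, name):
        self._users[name] = {"active": True}

    def is_active(self, name):
        return self._users.get(name, {}).get("active", False)


class BillingService:
    def __init__(self, user_service):
        # In a real deployment this dependency would be a network call
        # (HTTP/gRPC), not an in-process object reference.
        self._users = user_service
        self.invoices = []

    def bill(self, name, amount):
        if self._users.is_active(name):
            self.invoices.append((name, amount))


users = UserService()
billing = BillingService(users)
users.register("ada")
billing.bill("ada", 42)
print(billing.invoices)  # [('ada', 42)]
```

The point is not the code itself but the boundary: `BillingService` only knows the narrow `is_active` contract, so either side can change its internals (or be redeployed) without touching the other.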
Getting a microservice architecture right is quite an engineering journey; it requires discipline and patience. The Microservices Maestro (i.e. you) must be able to orchestrate principles of separation of concerns, the system’s overall cohesion, graceful failure degradation, security and privacy.
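Failure degradation deserves a concrete illustration. One common pattern is to fall back to a cached or generic answer when a downstream service fails, rather than propagating the error to the user. A minimal sketch (the function names and the fallback values are hypothetical, not from any particular system):

```python
# Graceful degradation: if the downstream call raises, serve a
# generic fallback instead of surfacing an error page.
def get_recommendations(user_id, fetch, fallback=("popular-1", "popular-2")):
    try:
        return fetch(user_id)
    except Exception:
        # Degraded mode: generic content beats a failing whale.
        return list(fallback)


def flaky_backend(user_id):
    # Stands in for a recommendation service that is timing out.
    raise TimeoutError("recommendation service unavailable")


print(get_recommendations("ada", flaky_backend))
# ['popular-1', 'popular-2']
```

Production systems usually layer timeouts, retries and circuit breakers on top of this, but the core idea is the same: a dependency failure should degrade the experience, not destroy it.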
I have found that establishing a few key architecture principles up front, and ensuring they are upheld, can help alleviate some of these challenges.
From the engineering and operational readiness standpoints:
From the internal architecture standpoint:
In general, I think scale is hard and often underestimated; it is an area where I expect dramatic innovation over the next few years.
Image credit: Steve Dennis