How Microservices Saved the Internet

by Will Wang, August 22nd, 2018

An Introduction to Building Applications for Scale

Introduction

For how impactful they are today, distributed systems remain one of the most overlooked topics, at least at the college level. Not many students understand concepts like containerization and fault tolerance, and you'll never see a systems project win a hackathon. Despite this, I think it's very important to at least have a simple understanding of how large-scale systems work today.

This story serves as the first of a series aimed at beginners, such as someone with 1–2 years of a general computer science education or someone who has extensively self-studied web development. The first few articles will be high-level introductions to the major concepts, with a few in-depth dives into the technical details. Later I hope to explore topics in networking, Kubernetes, and cool things I see in my research. This first one is quite simple and aims to explain the motivation and basic concept behind microservices.

Distributed Systems Keep The Internet Running

Back in the olden days, applications were built monolithically. An application likely consisted of the web server itself and perhaps some kind of data storage system, and it was built and packaged into one or two binaries. These binaries were then uploaded to a server rack and run directly on the machine. This was fine for the internet of the 80s and 90s, but today Google receives 3.5 billion search queries every day, and no server, no matter how big, will be able to handle that.

In the past, engineers scaled vertically by buying better servers, better cables, and so on. With the growth of the internet and the end of Moore's law, this quickly became unsustainable, and the need for horizontal scaling became painfully obvious: instead of buying better, more expensive hardware, simply buy a ton of cheap servers and distribute the load across all of them.

The earliest horizontal scaling was just running duplicates of the web server. More recently, with the rise of the cloud, microservice architecture has rapidly come to dominate. The idea is to split a huge application into individual components, called services, that each perform a specialized task. So instead of having a single web server process a request from start to finish, developers break the application into services like a user authenticator, a page server, an API server, a database model service, and so on.
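To make that decomposition concrete, here is a minimal sketch of a gateway that hands each piece of a request off to a dedicated service over HTTP. The service names, ports, and routes (`auth-service:5001`, `page-service:5002`, `/verify`, `/render`) are invented for illustration, and Flask and requests simply stand in for whatever framework each team actually prefers.

```python
# Hypothetical API gateway: it forwards each piece of work to a dedicated service.
# The service hostnames, ports, and routes below are made up for illustration.
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

AUTH_SERVICE = "http://auth-service:5001"   # user authenticator
PAGE_SERVICE = "http://page-service:5002"   # page server

@app.route("/page/<page_id>")
def serve_page(page_id):
    # 1. Ask the auth service whether the caller's token is valid.
    token = request.headers.get("Authorization", "")
    auth = requests.get(f"{AUTH_SERVICE}/verify", headers={"Authorization": token})
    if auth.status_code != 200:
        return jsonify({"error": "unauthorized"}), 401

    # 2. Ask the page service to render the page; the gateway itself never
    #    touches the database or the templates.
    page = requests.get(f"{PAGE_SERVICE}/render/{page_id}")
    return jsonify(page.json()), page.status_code

if __name__ == "__main__":
    app.run(port=5000)
```

Each of those services can now be deployed, scaled, and rewritten independently of the gateway.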

Each microservice consists of one or more replicas. For a website with a backend running on Django, we can horizontally scale by increasing the number of copies of the Django server that we are running; those copies are the replicas. If we are running a database, we can scale it out the same way by splitting it into more pieces, which for a database are usually called shards. Think of a shard as a piece of the overall database that can run on its own and holds a portion of the overall data. By increasing the number of shards, we can horizontally scale the database. These replicas can then be run on different machines, so if we want a huge web server running on 100 machines, we can just make 100 replicas and run each on a separate machine.
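As a rough illustration of the sharding idea, here is a toy router that decides which shard a given record lives on. The shard count and connection strings are made up, and real databases use more elaborate schemes (consistent hashing, range partitioning), but the core trick of mapping a key to one of N pieces is the same.

```python
import hashlib

# Hypothetical connection strings; in practice each shard is its own
# database instance running on a separate machine.
SHARDS = [
    "postgres://db-shard-0:5432/app",
    "postgres://db-shard-1:5432/app",
    "postgres://db-shard-2:5432/app",
]

def shard_for(key: str) -> str:
    """Map a record key (e.g. a user id) to the shard that stores it."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    index = int(digest, 16) % len(SHARDS)
    return SHARDS[index]

print(shard_for("user:42"))   # every caller agrees on the same shard for user:42
```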

Let's look at a complete example. A Google search query may be accepted by a load balancer and forwarded to one of thousands of replicas of the API server. The API server then forwards the request to the indexer, the ad generator, and the ML neural net all at the same time. Each of these services can be running thousands of replicas as well. The individual services each complete their own task, a portion of the original request, and the API server then aggregates the results and returns them to you in the form of a search result.
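Here is a minimal sketch of that fan-out-and-aggregate pattern, using Python's `asyncio` so the indexer, ad generator, and neural net are queried concurrently rather than one after another. The `fetch` helper and the service names are placeholders; the point is only that the API server waits on all of its sub-requests and merges their partial answers.

```python
import asyncio

# Placeholder for an HTTP call to another service; in a real system this
# would use an async client library instead of sleeping.
async def fetch(service: str, query: str) -> dict:
    await asyncio.sleep(0.05)                 # simulate network latency
    return {"service": service, "results": f"{service} results for {query!r}"}

async def handle_search(query: str) -> dict:
    # Fan out to the backend services concurrently...
    indexer, ads, ranker = await asyncio.gather(
        fetch("indexer", query),
        fetch("ad-generator", query),
        fetch("neural-net", query),
    )
    # ...then aggregate their partial answers into one response.
    return {"query": query, "pages": indexer, "ads": ads, "ranking": ranker}

print(asyncio.run(handle_search("distributed systems")))
```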

Aggregation Style Microservice Architecture (Source: Arun Gupta)

For more information on microservice architecture patterns, visit Arun Gupta's Blog. He is a Principal Technologist at AWS.

Advantages

Below are some of the biggest advantages of using microservices. While there are plenty more, these four are the most important.

  • The biggest and most undeniable advantage of the microservice architecture is the ability to horizontally scale any of its components. If one service (say the neural net) is under heavy load, you can simply run more replicas of that particular service.
  • Services are independent. Each component can live in a separate repository and be maintained by a dedicated team of engineers. The component is written in the language best suited for its purpose, so a database service might be written in C and the web server in Python. These services can be updated, and new components written, without affecting the rest of the pipeline.
  • The architecture is pluggable. This one builds on the services' independence but takes it to a whole new level. A company does not even need to write all of its services itself: third-party systems like MySQL, Elasticsearch, and Redis can be plugged in as services and easily integrated into your system.
  • Fault tolerance (by failure isolation). This advantage only holds if your system is well designed. The failure of one replica of a service should not cause your whole pipeline, or even that service, to fail. Likewise, a bug in a particular service does not impact the uptime of other, unrelated services. This is not always easy, and fault tolerance is an interesting problem in both industry and research, especially for topics like distributed databases and networks (a small failover sketch follows this list).
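As a toy illustration of failure isolation, the sketch below tries one replica of a service and, if that replica happens to be down, falls back to another instead of failing the whole request. The replica addresses are hypothetical and the failures are simulated; real systems add timeouts, backoff, and circuit breakers on top of this idea.

```python
import random

# Hypothetical addresses for three replicas of the same service.
REPLICAS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

class ReplicaDown(Exception):
    pass

def call_replica(addr: str, payload: str) -> str:
    # Stand-in for a real RPC; randomly fail to simulate a crashed replica.
    if random.random() < 0.3:
        raise ReplicaDown(addr)
    return f"handled {payload!r} on {addr}"

def call_with_failover(payload: str) -> str:
    """Try replicas in random order; one bad replica shouldn't fail the request."""
    for addr in random.sample(REPLICAS, len(REPLICAS)):
        try:
            return call_replica(addr, payload)
        except ReplicaDown:
            continue                      # isolate the failure, try the next replica
    raise RuntimeError("all replicas are down")

print(call_with_failover("GET /search?q=microservices"))
```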

And of course, as with everything, there are some disadvantages. There is overhead and complexity involved in bouncing requests from one service to another. Distributed systems can often have dependency, replication, and shared-fate issues. Deployment and debugging difficulty grow quickly with the number and complexity of microservices.

The good news is that a ton of researchers, corporations, and startups are working on all aspects of these problems. Projects like Jenkins, Spinnaker, and Jaeger are among the most popular open source solutions today, with new innovations being made daily in both industry and academia.

So while distributed systems are still relatively new, their impact can be felt every day. I mean, just think about how you're even reading this article.