It’s The Future, Again

Written by ben11kehoe | Published 2017/10/25
Tech Story Tags: serverless | docker | microservices | software-development | api


(with apologies to Paul Biggar and assistance from Corey Quinn of Last Week in AWS)

…so I just need to split my simple CRUD app into 12 microservices, each with their own APIs which call each others’ APIs but handle failure resiliently, put them into Docker containers, launch a fleet of 8 machines which are Docker hosts running CoreOS, “orchestrate” them using a small Kubernetes cluster running etcd, figure out the “open questions” of networking and storage, and then I continuously deliver multiple redundant copies of each microservice to my fleet. Is that it?

No, everything’s changed. You need to use serverless now. It’s the future.

What’s that?

Everything you said after “microservices” goes away! Isn’t it amazing? You just write a little code and upload it to a service that runs it for you.

Oh, like PaaS?

No, it’s much better because there are no servers involved.

No servers? What does the code run on?

Well, uh, obviously, there are servers, but you don’t need to worry about them. Mostly. Until you do.

Ok, so I just write my Ruby code —

No, Ruby is dead. You must use one of the supported languages. Try JavaScript — the correct version of JavaScript.

Ok, so I just rewrite everything in JavaScript, upload it, and it runs it on a web server — I mean, a web…something?

No! There’s no web server. Your code just runs as functions. It’s simpler that way.
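(If you’re wondering what one of these functions actually looks like, here’s a minimal sketch assuming an AWS Lambda-style Node.js handler; the handler name and event shape are made up for illustration.)

```javascript
// A minimal "function" in the serverless sense: no web server, no routing,
// just an exported handler that the platform invokes for you.
// Sketch assumes an AWS Lambda-style Node.js async handler; names are illustrative.
exports.handler = async (event) => {
  const name = (event && event.name) || "world";
  return { message: `Hello, ${name}!` };
};
```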

Ok, so I write functions, and it listens for requests?

Absolutely not. It doesn’t listen, and apparently neither do you. It runs in response to events. Event-driven architectures are the new thing. Nobody likes synchronous invocations anymore.
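(A sketch of what “runs in response to events” means in practice, assuming a Lambda-style handler subscribed to S3 “object created” notifications; the handler name is invented, and the record shape follows AWS’s documented notification format.)

```javascript
// Event-driven: the platform hands your function an event and you react to it.
// Sketch assumes a Lambda-style handler subscribed to S3 "object created"
// notifications; the record shape follows AWS's documented notification format.
exports.onUpload = async (event) => {
  for (const record of event.Records || []) {
    const bucket = record.s3.bucket.name;
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    console.log(`New object: s3://${bucket}/${key}`);
    // ...kick off whatever asynchronous work the event calls for
  }
};
```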

So how does it get called from a client?

You create REST HTTP endpoints that synchronously call your functions, obviously. It keeps your code in nice separate units.
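(Roughly what that synchronous path looks like, assuming API Gateway’s Lambda proxy integration; the route itself lives in the endpoint configuration, and the “widget” resource below is made up.)

```javascript
// The synchronous path: an HTTP endpoint (e.g. API Gateway in Lambda proxy mode)
// invokes the function and relays whatever it returns. Field names follow the
// proxy integration format; the "widget" resource is illustrative.
exports.getWidget = async (event) => {
  const id = event.pathParameters && event.pathParameters.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ error: "missing id" }) };
  }
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id, name: `widget ${id}` }),
  };
};
```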

Isn’t REST being supplanted by GraphQL?

Sure! For that you can skip all the different endpoints and cram everything together in one function.
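(A sketch of the “cram everything into one function” approach, assuming the graphql-js reference implementation; the schema and resolvers are toy examples.)

```javascript
// One endpoint, one function, everything crammed together.
// Sketch assumes the graphql-js reference implementation (the npm "graphql"
// package); the schema and resolvers are toy examples.
const { graphql, buildSchema } = require("graphql");

const schema = buildSchema(`
  type Query {
    widget(id: ID!): String
    widgets: [String]
  }
`);

const rootValue = {
  widget: ({ id }) => `widget ${id}`,
  widgets: () => ["widget 1", "widget 2"],
};

exports.graphqlHandler = async (event) => {
  const { query, variables } = JSON.parse(event.body || "{}");
  const result = await graphql({
    schema,
    source: query,
    rootValue,
    variableValues: variables,
  });
  return { statusCode: 200, body: JSON.stringify(result) };
};
```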

…I thought you said different functions were better? Anyway, once I upload the code and it runs, where do I deploy my database?

No, you don’t deploy your database. You use a service for a database. And god help you if you want SQL — your connection will probably get disconnected between function invocations, so it’ll make your warm starts as long as your cold starts.

Warm and cold starts? What are those?

You know how I said your function runs in response to events? It runs in a container — but you don’t have to worry about that. Except, sometimes the same container gets reused. But sometimes it doesn’t, and then it takes longer. And if it’s reused, it’s frozen in between.
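(What that means in code: anything at module scope is paid for on a cold start and reused on warm starts, which is also where that stale database connection comes from. The createConnection helper below is a hypothetical stand-in for a real client.)

```javascript
// Anything at module scope runs once, when the container is created (the cold
// start), and is reused on warm starts while the container sits frozen between
// invocations. That reuse is where stale database connections come from.
// createConnection() is a hypothetical stand-in for your real database client.
async function createConnection() {
  return { query: async (sql) => `pretend result for: ${sql}` };
}

const connectionPromise = createConnection(); // paid for on cold starts only

exports.handler = async (event) => {
  const db = await connectionPromise; // warm start: reused, if it's still alive
  return db.query("SELECT 1");
};
```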

So my function is a Docker container?

No! Docker is three hype cycles old! It’s just a function. Don’t worry about what runs it.

Ok, so back to my database. I need a service for that? Why can’t I use Cassandra?

Cassandra runs on servers! Everything that’s not your functions needs to be from a service.

But I have really specific needs for my database. Can’t I use services for queues and blob storage and auth and monitoring and everything else, and just use servers for my database?

No! If you have even one server everything is ruined!
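(In practice, “everything from a service” means your function makes API calls instead of talking to machines you run. The sketch below assumes the AWS SDK for JavaScript v2 and made-up resource names.)

```javascript
// "Everything from a service": queues, blob storage, and the rest become API
// calls rather than boxes you run. Sketch assumes the AWS SDK for JavaScript
// (v2) and made-up resource names; swap in your own bucket and queue URL.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();
const sqs = new AWS.SQS();

exports.handler = async (event) => {
  await s3
    .putObject({
      Bucket: "my-blob-bucket", // hypothetical bucket name
      Key: "payload.json",
      Body: JSON.stringify(event),
    })
    .promise();
  await sqs
    .sendMessage({
      QueueUrl: process.env.QUEUE_URL, // hypothetical queue, configured elsewhere
      MessageBody: "payload stored",
    })
    .promise();
  return { ok: true };
};
```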

Ok…so I use all these services. How do I debug that locally when I’m developing?

Hahaha, you can’t integration test locally! Local debugging is only barely supported anyway. Real integration testing can only be done on the deployed system.

Ok, so how do I deploy?

Well, you’ve got to choose from one of about a dozen different opinionated frameworks. Lots of people choose Serverless.

I thought we were already talking about serverless.

No — listen, this is really easy: the Serverless Framework, from Serverless, Inc. Obviously that’s different from the general concept, so no one else has ever gotten confused.

Ok, so I deploy each of my services. How do I control canaries and blue-green deployments?

Well…

Also, how is orchestration handled? How do my microservices find each other?

Um…people are working on that, I think.

Why is this better?

Don’t you see? Without the operational overhead of maintaining servers, you’re free to spend your time solving all these rough edges!

So let me see if I’ve got this straight: I need to take my microservices, which were already small pieces of my original application, and split them into even smaller functions (unless I’m using GraphQL), in a supported language, and this runs on servers that I don’t need to care about, except when I do, especially for cold starts. I replace all of the rest of my application with services, which are probably similar to the things I’m using, but if they are different, I don’t have a choice anyway. Deploying anything beyond a simple application is super complicated, nobody’s figured out how to update them properly, and local debugging is barely possible. Is that it?

Yes! Isn’t it glorious?

I’m going back to Heroku.

Note: if you look at my other posts, you’ll see I believe that, despite the current drawbacks, serverless architecture is actually the future! I’m always happy to call out the bad parts, but in terms of total cost of ownership and feature velocity, serverless is completely worth the trouble. Ping me on Twitter if you want to know more.

