
Dev without Ops - Why we are building The Vercel for Backend

by Stefan Avram September 28th, 2022




When I started my career as a developer, my continuous integration workflow looked like this:

  1. Open FTP client

  2. Connect to the server

  3. Drag index.php from left to right

  4. Close FTP client


My internet connection was slow, but it only took a few seconds to "deploy" my PHP application. I was new to programming, but I was able to get my code into "production" in just a few seconds. It was a great feeling to see my first website live on the internet. I have to admit, testing wasn't really automated, but my human CD pipeline was fast and reliable.


If you think about it, this was actually "Serverless". I didn't have to worry about servers. I just had to upload my code to the "webspace" and it was live.


Next, I wanted to add a database to my application. Here's the simplest code example I can find today:


<?php
$servername = "localhost";
$username = "username";
$password = "password";
$dbname = "myDB";

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);

// Check connection
if ($conn->connect_error) {
  die("Connection failed: " . $conn->connect_error);
}

$sql = "SELECT id, firstname, lastname FROM MyGuests";
$result = $conn->query($sql);

if ($result->num_rows > 0) {
  // output data of each row
  while ($row = $result->fetch_assoc()) {
    echo "id: " . $row["id"] . " - Name: " . $row["firstname"] . " " . $row["lastname"] . "<br>";
  }
} else {
  echo "0 results";
}

$conn->close();
?>

If you copy this file to the web server, you have a fully functional application that can read data from a database. It's incredible how simple this concept was. A complete application in just one file. The "application" starts when the user opens the URL in the browser, and it ends when we're done sending the response to the user. That's as "Serverless" as it gets.


Nowadays, we've gone absolutely crazy. We have AWS, Kubernetes, CDK, Terraform, Docker, Serverless, GraphQL, gRPC, REST, and so much more. You need many years of experience to be able to set up a project with this "modern stack". It's pure madness.


You need Webpack or another tool to bundle, minify, and optimize your code. You build and publish Docker images, or upload zip files to AWS Lambda. And then there's YAML all over the place to configure your infrastructure.


Architects proudly show off their "serverless" architecture diagrams with hundreds of components at conferences. I think to myself: "Why is this so complicated? Why are there so many moving parts?" There are specialised AWS consultants who earn a lot of money because nobody understands the cloud anymore.


Our goal is to change this. We want to make using the cloud as easy as it was in the old days. But first, let's take a step back and look at the problem from a different angle.


The rise of NextJS and Vercel


From the early days on, I really enjoyed working with ReactJS. But for the first few years, there was a huge problem: I was constantly on the lookout for the perfect "boilerplate". Getting ReactJS right wasn't easy, especially if you wanted server-side rendering, hot reloading, and all the other features that make for a great developer experience.


Then, in 2016, a new project called NextJS was released. It took NextJS a while to win the hearts of the community, but today it's the most popular ReactJS framework.

With NextJS, I was able to stop my search for the perfect boilerplate. The framework might not be perfect in every way, but it's good enough for most projects. I can focus on the actual application instead of spending hours on figuring out how to do X.


That said, NextJS wasn't alone. Along with NextJS came Zeit, a company founded by Guillermo Rauch that was later renamed to Vercel.


In Germany, we like to say "Es wird höchste Zeit" which means "It's about time". Vercel was the first company to understand that it's about "Zeit" to make deploying frontend applications easy.


Vercel developed a simple but powerful playbook:

  1. Create an opinionated open source framework to consolidate the work of the community

  2. Add features that focus on developer experience, productivity and ergonomics

  3. Build a hosting platform that deploys your code on git push

  4. Abstract away the complexity of the underlying infrastructure so developers can focus on their code


With this playbook, we're almost back to the simplicity of the PHP example above. Unsurprisingly, NextJS adds more complexity than a single PHP file, but we get a lot of value in return.


Most importantly, we're back to simplicity in terms of DevOps. Instead of dragging files from left to right, we just push our code to a git repository. The rest is handled by the hosting platform, and we're not just talking about a simple hosted NodeJS server.


Vercel creates Serverless functions for server-side rendering, provides a CDN for static assets, runs middleware on the edge, integrates with GitHub, and builds previews on every git push.

Now that we have NextJS and Vercel, we can say that "Frontend" is a solved problem. But what about the backend?


Finding the right platform for the backend is hard


When you ask developers where to host their backend, you'll get a lot of different answers.

What all of them have in common is that they require a lot of DevOps and infrastructure knowledge. They offer you Redis and Postgres; they talk about Docker and containers. They have endless documentation and tutorials on how to set up the database and so on.

Compare this to the simplicity of NextJS and Vercel. Backend seems to be years behind frontend in terms of simplicity and developer experience.


That's why we decided to build WunderGraph the way we did. We want to create a "pure" backend framework that is as simple as NextJS, combined with a hosting platform that abstracts away the complexity of the underlying infrastructure.

WunderGraph, an opinionated approach to backend development

Similarly to NextJS, WunderGraph needs to be opinionated to create a great developer experience.


Here's an overview of the most important decisions we made:

  • The primary programming language is TypeScript
  • WunderGraph supports multiple API styles
    • server-side GraphQL as an integration layer
    • JSON-RPC to expose internal APIs
    • REST to expose external APIs
    • Event-driven APIs for asynchronous workflows
  • Configuration through code
  • Touching infrastructure is optional
  • Type-safety whenever possible



TypeScript

TypeScript strikes a good balance between adoption, language features, and performance. It's not ideal in every way, but it's a great fit for configuration as code and for implementing middleware and APIs.

Multi API style

No one API style is perfect for every use case. For us, GraphQL is the perfect language to integrate and combine multiple APIs. However, we're not directly exposing GraphQL, but only using it as a server-side integration layer.


WunderGraph exposes internal APIs via JSON-RPC, while using REST when we'd like to expose APIs to third parties.


For internal workflows and complex use cases, we're also offering event-driven APIs with asynchronous event handlers.


Instead of forcing our users into a single API style, we're offering different options to choose from.
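
To make the multi-API-style idea concrete, here's a rough sketch (the operation, upstream API, and endpoint shape are illustrative assumptions, not the authoritative WunderGraph API): a GraphQL operation is defined once on the server and then consumed by clients as a plain JSON-RPC endpoint over HTTP.

// operations/Weather.graphql (stays on the server; GraphQL itself is never exposed):
// query ($city: String!) {
//   weather_getCityByName(name: $city) {
//     weather { summary { title } }
//   }
// }

// A client calls the generated JSON-RPC endpoint over plain HTTP.
// The URL shape below is an assumption for illustration purposes.
async function fetchWeather(city: string) {
  const res = await fetch(
    `http://localhost:9991/operations/Weather?city=${encodeURIComponent(city)}`
  );
  if (!res.ok) {
    throw new Error(`request failed with status ${res.status}`);
  }
  return res.json(); // in the real setup, a fully typed client is generated instead
}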

Configuration through code

Middleware, API handlers, and event handlers are implemented in TypeScript, and we use the same language for configuration as well. It's a single codebase for everything; not a single YAML or JSON file is needed.
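
Here's a minimal sketch of what configuration through code can look like, loosely based on the WunderGraph SDK; treat the exact option names as illustrative:

// wundergraph.config.ts - configuration is plain TypeScript, no YAML in sight.
// A minimal sketch; option names are illustrative and may differ from the current SDK.
import { configureWunderGraphApplication, introspect } from '@wundergraph/sdk';

// introspect an upstream GraphQL API (placeholder URL)
const weather = introspect.graphql({
  apiNamespace: 'weather',
  url: 'https://weather.example.com/graphql',
});

// introspect a database and merge it into the same virtual graph (placeholder credentials)
const db = introspect.postgresql({
  apiNamespace: 'db',
  databaseURL: 'postgresql://user:password@localhost:5432/app',
});

configureWunderGraphApplication({
  apis: [weather, db],
});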


Touching infrastructure is optional

It's possible to go from zero to production without touching any infrastructure, but we don't want to magically hide the underlying infrastructure from the user. You can always plug in your own infrastructure, like a Postgres database or Redis cache, but you don't have to.


Type-safety whenever possible

We want to avoid runtime errors as much as possible, and the type system of TypeScript helps us a lot in achieving this goal. Sometimes a bit more flexibility might seem favourable, but we try to trade flexibility for type-safety wherever we can.

Infraless backend development

What are the most critical building blocks of a backend (in no particular order)?

  1. authentication & authorization

  2. database

  3. key-value store

  4. caching

  5. file storage

  6. cron jobs

  7. long-running operations

  8. job queues and async workflows

  9. logging

  10. monitoring

  11. automated deployments


We're not reinventing the wheel here. We're using existing open source projects to implement the interfaces and offer them as one unified set of lego blocks. Let's dive into the details.

Authentication & Authorization

This is a problem that WunderGraph already solves. For authentication, we rely on OpenID Connect. For authorization, we offer RBAC and ABAC, as well as injecting claims into API Operations, which allows for a lot of flexibility.
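
As a rough sketch of how this can look in the configuration (the provider and option names are illustrative, following the OpenID Connect and RBAC approach described above):

// Excerpt from wundergraph.config.ts - a hedged sketch, not the authoritative config API.
import { configureWunderGraphApplication, authProviders } from '@wundergraph/sdk';

configureWunderGraphApplication({
  apis: [],
  authentication: {
    cookieBased: {
      providers: [
        // any OpenID Connect compliant provider can be plugged in here (placeholder values)
        authProviders.openIdConnect({
          id: 'oidc',
          issuer: 'https://issuer.example.com',
          clientId: 'client-id',
          clientSecret: 'client-secret',
        }),
      ],
    },
  },
  authorization: {
    // RBAC: individual operations can then require one or more of these roles,
    // and claims from the ID token can be injected into operations.
    roles: ['admin', 'user'],
  },
});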

Database

We're using Prisma as our database abstraction layer. Prisma not only allows us to offer simple migrations, but also to introspect the database schema and generate TypeScript types for it.
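
To give a feel for the payoff (a hedged illustration; the generated client and the operation name below are assumptions): because the schema is introspected, database access from application code is type-checked end to end.

// Illustrative only: the generated client and the 'Users' operation are assumptions.
import { createClient } from './generated/client';

async function listUserEmails() {
  const client = createClient();
  // 'Users' would be an operation against the introspected database schema.
  // Its input and response types are generated, so typos fail at compile time.
  const { data, error } = await client.query({ operationName: 'Users' });
  if (error || !data) {
    throw new Error('failed to load users');
  }
  return data.users.map((user) => user.email);
}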

Key-value store

A typical key-value interface might look like this:

const client = new Client()
await client.set('foo', 'bar')
const value = await client.get('foo')

The problem with this interface is that it violates one of our principles: type-safety. A more type-safe approach might look like this:

// define key-value store with zod schema
const store = new Store('fooStore', {
    foo: z.string(),
})
await store.set('foo', 'bar')
const value = await store.get('foo')
console.log(value.foo) // bar

This store is now fully type-safe. By giving it a name, we're able to evolve the schema over time, as long as we don't introduce breaking changes.


Caching

Caching is a very important part of a backend. We'd like to offer multiple caching strategies, HTTP-layer caching (already implemented), and application-layer caching in the form of a key-value store with TTLs and eviction policies. We'll probably use one of the existing well-known solutions to implement this.
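
As a thought experiment for the application-layer variant (purely hypothetical, since this part isn't built yet), a TTL and an eviction policy could simply be extra options on the type-safe store shown above:

// Hypothetical application-layer cache, extending the type-safe Store sketch from above.
const cache = new Store('weatherCache', {
    temperatureCelsius: z.number(),
}, {
    ttl: '60s',       // entries expire after 60 seconds
    eviction: 'lru',  // least-recently-used eviction when the cache is full
})

await cache.set('temperatureCelsius', 21)
const value = await cache.get('temperatureCelsius') // undefined once the TTL has expired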

File storage

File storage is already implemented in WunderGraph. WunderGraph is compatible with any S3-compatible storage.
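
For example, pointing WunderGraph at an S3-compatible bucket is just another piece of configuration (a sketch with placeholder credentials; the exact option names may differ):

// Excerpt from wundergraph.config.ts - S3-compatible upload provider, placeholder values only.
import { configureWunderGraphApplication } from '@wundergraph/sdk';

configureWunderGraphApplication({
  apis: [],
  s3UploadProvider: [
    {
      name: 'minio',              // any S3-compatible storage works, e.g. MinIO or AWS S3
      endpoint: '127.0.0.1:9000', // placeholder endpoint
      accessKeyID: 'access-key',  // placeholder credentials
      secretAccessKey: 'secret-key',
      bucketName: 'uploads',
      useSSL: false,
    },
  ],
});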

Cron jobs

It's very common to have cron jobs in a backend that should run at a specific time or interval. If you do some research on how to run cron jobs with AWS Lambda, here's what you'll find:

  1. You need to create a CloudWatch Event Rule
  2. You need to create a Lambda function
  3. You need to create a CloudWatch Event Target
  4. You need to create a CloudWatch Event Permission
  5. You need to create a CloudWatch Event Rule Input
  6. You need to create a CloudWatch Event Rule Input Target
  7. You need to create a CloudWatch Event Rule Input Target Permission

That's a lot of steps to run a simple cron job.


What if we could just write this:

const cron = new Cron('myCronJob', {
    schedule: '0 0 * * *',
    handler: async () => {
        // do something
    },
})

Git-push this code to WunderGraph Cloud and you're done. Even with CDK, the AWS setup above would still involve a lot of boilerplate code.
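
For comparison, here is roughly what the same daily cron job looks like when wired up with CDK (a sketch against aws-cdk-lib v2; details such as the Lambda runtime and asset path are illustrative):

// Roughly the same daily cron job with CDK - noticeably more ceremony.
import { Stack, StackProps } from 'aws-cdk-lib';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class CronStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // the handler has to live in its own Lambda function...
    const fn = new lambda.Function(this, 'MyCronJobFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'), // separate directory with the handler code
    });

    // ...wired to an EventBridge rule that encodes the schedule
    new events.Rule(this, 'MyCronJobRule', {
      schedule: events.Schedule.cron({ minute: '0', hour: '0' }),
      targets: [new targets.LambdaFunction(fn)],
    });
  }
}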


Long-running operations

There are many use cases where you'd like to implement workflows that run for a long time. Temporal (which grew out of Uber's Cadence) is a great source of inspiration for this, as are AWS Step Functions.


Here's an example that made me fall in love with Temporal:


func RemindUserWorkflow(ctx workflow.Context, userID string, intervals []int) error {
   // Send reminder emails, e.g. after 1, 7, and 30 days
   for _, interval := range intervals {
      _ = workflow.Sleep(ctx, days(interval)) // Sleep for days!
      _ = workflow.ExecuteActivity(ctx, SendEmail, userID).Get(ctx, nil)
      // Activities have timeouts, and will be retried by default!
   }
   return nil
}


The idea that you can write synchronous code which can sleep for days is amazing. However, this love quickly turned into confusion when I dug into the documentation. There are too many new concepts to understand before you can get started with Temporal: activities, workflows, signals, queries, and so on.


The same applies to AWS Step Functions. They don't feel intuitive to me. Configuring state machines via JSON is not how I envision writing workflows.


I'd like to offer a similar experience to our users, but without this steep learning curve.

Here's an example of how a workflow to onboard users could be implemented:


const workflow = new Workflow<{ userID: string }>('onboarding workflow', {
    states: {
        welcome: {
            handler: async (ctx: Context) => {
                console.log("send welcome email");
                const ok = await ctx.internalClient.sendEmail('userID', 'welcome');
                if (ok) {
                    ctx.next('sendReminder');
                }
            }
        },
        sendReminder: {
            waitBeforeExecution: '5 days',
            handler: async (ctx: Context) => {
                const loggedIn = await ctx.internalClient.queries.userLoggedIn('userID');
                if (!loggedIn) {
                    console.log("send reminder email");
                    ctx.internalClient.sendEmail('userID', 'reminder');
                }
                ctx.done();
            }
        }
    }
});

const id = await workflow.start({userID: '1234'}); // id: 1
const state = await workflow.getState(id); // state: 'welcome'
// after one day
const state2 = await workflow.getState(id); // state: 'sendReminder' (4 days remaining)
// after 5 days
const state3 = await workflow.getState(id); // state: 'sendReminder'
// after 5 days and 1 second
const state4 = await workflow.getState(id); // state: 'finished'


What's the beauty of this approach? Other solutions to this problem already exist, but WunderGraph Workflows will be, like everything else, type-safe and easy to use. Most importantly, they are fully integrated into the rest of your application. Start workflows from an API Operation, trigger them from a cron job, or run them in the background. You can get the state of a workflow from anywhere in your application, such as an API handler.


Finally, you're able to run workflows locally and debug them just by running wunderctl up. Deploy your WunderGraph application to WunderGraph cloud and the state of your workflows will automatically be persisted.


Final note: Workflows will be Serverless, meaning that they sleep when they're not running. And when a workflow sleeps, it really sleeps: you don't pay for a workflow that isn't running.


Job queues and async workflows

Another common use case is to run jobs asynchronously. For this to work, you usually need a job queue and a worker that processes the jobs. This is a very common pattern, but it's not easy to implement. You have to configure a queue, a worker, and a way to communicate between them. For the queue, you can use AWS SQS, RabbitMQ, NATS, or similar solutions. Then you have to configure workers to process jobs, report state, handle errors, and so on. Finally, you need to implement the logic to enqueue jobs and handle the responses.


But isn't this just a workflow with a different name? Yes, it is. Workflows will automatically be backed by a job queue, and the communication is already handled by the workflow engine. All you have to do is define a handler, the possible states, and the transitions between them. Workflows don't need multiple states; they can also be a single handler that runs asynchronously, as in the sketch below.
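
For illustration, a classic background job expressed with the same hypothetical Workflow API from above could look like this (the internal client call is an assumption):

// A job-queue use case expressed as a one-state workflow.
const resizeImage = new Workflow<{ imageID: string }>('resize image', {
    states: {
        resize: {
            handler: async (ctx: Context) => {
                // do the actual work asynchronously, e.g. via an internal operation (assumed name)
                await ctx.internalClient.mutations.resizeImage('imageID');
                ctx.done();
            }
        }
    }
});

// enqueue the job; queue, worker, and retries are handled by the platform
const jobID = await resizeImage.start({ imageID: 'img_42' });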


All the steps to implement a job queue and a worker are already handled by WunderGraph. We're trying to reduce the complexity you have to deal with to a minimum.

Insights: Logging, Monitoring, Metrics, Alerting and Tracing

Another important aspect of developing APIs and services is to have insights into what's going on. This includes logging, monitoring, metrics, alerting, and tracing.

WunderGraph will offer a unified way to collect and analyze all of these insights. In addition to that, we'll offer you ways of integrating with your existing tools.


Automated Deployments / CI-CD

A critical part of achieving a great developer experience is to not have to worry about deployments. One of our users reported that they were able to build a POC with WunderGraph within one hour, but then it took them multiple days to figure out how to properly deploy it to AWS, including health checks, DNS configuration, autoscaling, and so on.


We believe that this should be abstracted away, just like Vercel does it. You should be able to push your code to a git repository and have it deployed automatically. No additional configuration should be required. Good defaults should be used, but you can always override them if you want to.


We've invested heavily in building the perfect CI-CD pipeline for WunderGraph. It's built on top of Firecracker and is able to build and deploy WunderGraph applications in less than a minute. We'll cover this topic in depth in a future post.

Summary

In this post, we demonstrated how an opinionated approach can radically simplify the existing workflow of building software. You don't have to understand the entire AWS ecosystem to build rich backend applications. It's clear that we're making trade-offs here, but we believe that the benefits outweigh the downsides.


As I said earlier in the post, it's about "Zeit" (time) to simplify cloud-native software development. Serverless was a great start, but I think that we should go one layer above, abstracting away infrastructure completely. Back to the old days of PHP, but with a modern twist.

We'd love to hear your thoughts on this topic. What do you think about abstracting away infrastructure as proposed in this post? Do you want to control every aspect of your infrastructure, or do you prefer a more opinionated approach?


Additionally, we'd love to invite you to the WunderGraph community. Please share your opinion on our Discord server, and join the WunderGraph Cloud Early Access program if you're interested in trying out WunderGraph Cloud as soon as it's available.

Are you ready for the next generation of Serverless Infraless API Development? Join the waitlist for WunderGraph Cloud Early Access and be the first to try it out.


Also Published Here