Time is crucial in a startup. You are in a constant race against it: your market might be time sensitive, and you need to grow fast. Most startups fail because they run out of cash and time. Serverless helps you spend your time only on the things that matter, instead of the stuff that seems shiny but isn’t essential.
“Get the important things right” — N. P. Calderwood.
Anthony Casalena wants to empower every single person out there to build personal websites without having to code. In his own words, Squarespace is “working to solve the problem of self-expression”.
To get started, he needed $30,000. That’s how much he spent on the two servers that would host the first version of Squarespace. After that initial investment, he started out by hosting the servers in his dorm room.
It didn’t take him long to move on to renting space in a data center. And not long after that, he experienced his first outages and the panic of losing all his user data, and with it his business. Eventually, he had to buy new servers to replace the originals.
$30,000: that was the initial cost of starting a business for Anthony, and probably for a lot of other entrepreneurs like him. Virtualisation, offered as Infrastructure as a Service, freed a lot of startups from the burden of that initial investment.
Serverless is the latest step on the path of taking away the burden of infrastructure, and has a similar impact on getting you started with your business. Only now it frees you up from spending time on planning, configuring and managing servers.
Much like high-level programming languages are an abstraction of machine code, Serverless is an abstraction for cloud infrastructure. When programming in a low-level language, we need to understand the memory our code requires and explicitly allocate and de-allocate it.
It’s the same with traditionally hosted applications: we need to estimate the workload at any given moment and provision the infrastructure required to run it. Just as high-level languages abstract away that memory management and let us build applications faster, Serverless abstracts away the configuration, letting us focus on the code that’s relevant to our product without having to babysit servers.
And if you ever do need to drop down to lower-level abstractions like containers, or even deploy VMs, starting with serverless means your application has a better chance of being factored well enough that you can focus those optimisation efforts on just the parts that need them.
Serverless gives us the ability to focus on the important things right from the beginning. And, in our case, important means solving customer problems first.
“It’s not that we need to do more but rather that we need to focus on less” — Nathan W Morris.
Every five years, millions of Australians fill out the census. For 2016, the Australian government decided to run the census online. They spent 9 million dollars building the survey form and told everyone to go home and fill it out. And that’s exactly what happened: everyone went home and filled in the survey at the same time.
Next thing you know, the website crashed and #censusfail was trending on social media. The government managed to gather very little data and ended up spending tens of millions more to investigate and fix the issues related to the failure.
The weekend after the failure, two students at a hackathon rebuilt the census website. They did it in 72 hours, using serverless. By the end of the three days they had also run load tests, and it turned out they could serve more requests than the load initially predicted for the census website.
With serverless, we can be sure that our applications will scale automatically to meet the current workload.
Scalability describes a system’s ability to cope with increased load. What happens when our application grows from 10,000 concurrent users to 100,000?
How about from 1 million to 100 million? Our system will need to handle more concurrent requests and process larger volumes of data. Autoscaling adjusts the number of computing resources up or down to match the actual load.
And the goal is to maintain reliable performance even when our load parameters have changed. With Serverless, the platform will dynamically add and remove resources based on the number of incoming events.
Scalability is a hard problem to solve. By outsourcing the job of monitoring and spinning up new instances, we get to focus on understanding how components in our system communicate and optimise for that.
Finally, probably one of the most critical aspects of Serverless is that you only pay for the resources you’re using. Say in a month you have a million invocations using up to 128MB of memory and running for less than a second.
That’s going to cost you ZERO dollars — yes, you read that correctly — ZERO dollars. Nowadays, you can have someone else do all the hard work of spinning up and configuring servers for you, scaling them with the load, and running your code a million times, and pay absolutely nothing for it.
What a great time to be building applications! Now let’s increase the number of requests from one million to five million, with the same amount of memory and CPU: it will now cost you less than $5, less than the cost of a Starbucks coffee! I call that a win!
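To make the arithmetic concrete, here is a rough back-of-the-envelope estimate. The rates and free grant in the snippet are approximate consumption-plan figures and may have changed since, so treat them as assumptions rather than a quote:

```csharp
using System;

// Rough Azure Functions consumption-plan estimate. Assumed rates (may be out of date):
// ~$0.20 per million executions, ~$0.000016 per GB-second,
// with a monthly free grant of 1M executions and 400,000 GB-seconds.
double executions = 5_000_000;
double memoryGb = 128 / 1024.0;     // 128 MB expressed in GB
double durationSeconds = 1;         // each execution runs for about a second

double gbSeconds = executions * memoryGb * durationSeconds;                    // 625,000 GB-s
double computeCost = Math.Max(0, gbSeconds - 400_000) * 0.000016;              // ≈ $3.60
double requestCost = Math.Max(0, executions - 1_000_000) / 1_000_000 * 0.20;   // ≈ $0.80

Console.WriteLine($"Estimated monthly bill: ${computeCost + requestCost:F2}"); // ≈ $4.40
```

Run the same numbers with one million executions and both terms fall inside the free grant, which is where the zero-dollar bill comes from.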
And if your app gets really popular and the cost model of serverless becomes an ongoing concern, you’ll know exactly where to focus your efforts on moving to a lower-level abstraction, while leaving the rest of your system at the higher level.
“Serverless architectures are application designs that incorporate third-party “Backend as a Service” (BaaS) services, and/or that include custom code run in managed, ephemeral containers on a “Functions as a Service” (FaaS) platform.” — Mike Roberts
You might already be using some serverless components. When your application relies on services like Azure Storage, Azure Event Grid, or API Management, you’re already benefiting from a Serverless architecture.
Fully managed and highly scalable services are core tenets of any serverless system. They clear the path for us to focus on the features that are truly relevant to our business by removing the need to learn, configure and host those services ourselves.
At the core of Serverless computing are cloud functions. They enable us to run code in ephemeral containers in reaction to an event. The execution can be triggered by any of the managed services or some custom sources you might define.
Inside your functions there are a couple of things to be aware of. You’ll mostly end up writing code the same way you did before, in the programming languages you normally code in.
But because Serverless is abstracting away the server management for you, it also means that you don’t have access to any of the low-level OS APIs. To communicate with some of them, you’ll need to use a platform API.
Remember, in serverless you share a physical machine with other users, so the platform needs to ensure strict isolation for your environment. And because your code runs in ephemeral containers, and the platform needs to be able to scale it out freely, you’ll need to write stateless code.
This means that you cannot rely on any state being preserved between function calls. If you do end up having to save state, then you’ll have to use a data store like a message queue or a database.
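As an illustration, here is a small sketch in C# (using the in-process Azure Functions model; the names are made up) of why in-memory state can’t be trusted:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class VisitCounter
{
    // Anti-pattern: a static field lives inside one ephemeral container. Every new
    // instance the platform spins up gets its own copy, and a recycled container
    // starts again from zero.
    private static int _visits;

    [FunctionName("CountVisit")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req)
    {
        _visits++; // looks fine on your machine, silently wrong once the app scales out
        // To count reliably, write each visit to a queue or a database instead and let
        // a downstream consumer aggregate it.
        return new OkObjectResult($"This instance has seen {_visits} visits");
    }
}
```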
The anatomy of the serverless processing model looks like this: a function includes the code it executes, plus metadata describing the event it listens for and the other resources it connects to.
Our code runs in response to specific triggers: an HTTP trigger reacts to HTTP requests, and a blob trigger runs code when a file is uploaded to a storage account. Other commonly used triggers are queue triggers, which process a message placed on a queue, and timer triggers, which run code at specified time intervals.
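In C#, with the in-process model and the Storage extension installed, each trigger is just an attribute on a function parameter. A sketch with placeholder container, queue and schedule values:

```csharp
using System;
using System.IO;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class Triggers
{
    // HTTP trigger: runs whenever a request hits the function's URL.
    [FunctionName("OnHttpRequest")]
    public static IActionResult OnHttpRequest(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req)
        => new OkObjectResult("hello");

    // Blob trigger: runs when a file lands in the 'uploads' container (placeholder name).
    [FunctionName("OnBlobUploaded")]
    public static void OnBlobUploaded(
        [BlobTrigger("uploads/{name}")] Stream blob, string name, ILogger log)
        => log.LogInformation("New blob {Name}, {Length} bytes", name, blob.Length);

    // Queue trigger: runs for every message placed on the 'orders' queue (placeholder name).
    [FunctionName("OnQueueMessage")]
    public static void OnQueueMessage(
        [QueueTrigger("orders")] string message, ILogger log)
        => log.LogInformation("Processing {Message}", message);

    // Timer trigger: runs on a schedule, here every five minutes (CRON expression).
    [FunctionName("OnSchedule")]
    public static void OnSchedule(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
        => log.LogInformation("Scheduled run at {Now}", DateTime.UtcNow);
}
```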
Function bindings are used to read data from and write data to sources and services such as databases or email providers. They make life simpler for developers and improve performance and security, because the Serverless platform manages the connections and reuses them between executions.
It can also enable more advanced features like data prefetching and caching. By using these constructs, you avoid hardcoding access to other services. Your function receives data (for example, the content of a queue message) in function parameters.
You send data (for example, to create a queue message) by using the return value of the function.
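A minimal sketch of that pattern, assuming the C# in-process model and made-up queue names: the trigger delivers the incoming message as a parameter, and the return value is written out by a queue output binding.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ForwardOrder
{
    // The queue trigger hands us the incoming message as a plain string parameter;
    // the [return: Queue(...)] output binding turns the return value into a new message.
    [FunctionName("ForwardOrder")]
    [return: Queue("processed-orders")]
    public static string Run(
        [QueueTrigger("incoming-orders")] string order,
        ILogger log)
    {
        log.LogInformation("Received order: {Order}", order);
        return order.ToUpperInvariant();
    }
}
```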
In Azure Functions there are two types of bindings, input and output: you use input bindings to read from a data source and output bindings to write to one. Currently you can connect to services like Azure Cosmos DB, Table Storage, Event Grid, and Blob and Queue Storage.
An incoming HTTP request is intercepted by the platform’s scale controller, which makes sure a function instance is available; the runtime then creates the database connection, executes the query to retrieve the data, and injects the request and the binding data into our function. Once the execution has completed, it takes the return value from our function and includes it in the HTTP response.
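Put together, a read endpoint can look roughly like this. It assumes the in-process C# model with the Cosmos DB extension installed; the database, container, connection-setting and route names are all placeholders, and newer versions of the extension rename some of these attribute parameters:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class GetProduct
{
    // GET /api/products/{id}: the platform resolves the Cosmos DB input binding
    // (fetching the document whose id matches the route) before our code runs,
    // and whatever we return becomes the HTTP response.
    [FunctionName("GetProduct")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "products/{id}")] HttpRequest req,
        [CosmosDB(
            databaseName: "store",
            collectionName: "products",
            ConnectionStringSetting = "CosmosDbConnection",
            Id = "{id}",
            PartitionKey = "{id}")] Product product)   // assumes the id doubles as the partition key
    {
        return product == null
            ? (IActionResult)new NotFoundResult()
            : new OkObjectResult(product);
    }
}
```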
Using Azure Functions, you can write Serverless code in C#, JavaScript, F#, Java, TypeScript and Python. With every new technology, we need to figure out what tools are available to us and how to integrate them into our existing toolset. When getting started with serverless, we have a few options to consider.
First, we can use the good old browser to create, write and test functions. It’s powerful, and it enables us to code wherever we are; all we need is a computer and a browser running. The browser is a good starting point for writing our very first serverless function.
Next, as you get more accustomed to the new concepts and become more productive, you might want to use your local environment to continue with your development. Typically you’ll want support for a few things: creating new functions from templates, running and debugging them locally, and deploying them to the cloud.
With Azure Functions, you’ll have support for all these features when working with the Azure Functions Core Tools and Visual Studio Code.
Serverless is a great fit for building backend APIs due to its ability to listen to HTTP requests at a given URL. At a very high level, a web application is a collection of URLs that we can call from our client applications.
But there are many other types of use cases that work well with Serverless. In the Cloud Native Computing Foundation whitepaper, the authors have identified four different types of workloads that are a good fit:
- Asynchronous, concurrent, easy to parallelize into independent units of work
- Infrequent or with sporadic demand, with large, unpredictable variance in scaling requirements
- Stateless, ephemeral, without a major need for instantaneous cold start time
- Highly dynamic in terms of changing business requirements that drive a need for accelerated developer velocity
In practice, we see Serverless being used for:
Remember, you want to spend your time solving exactly the problem you need to address, and as little as possible on anything else. If you’re in the business of configuring and managing databases or servers, then by all means spend all your time on that.
But if you want to work out an idea and see whether there’s something to it, with minimal cost and a minimum of the ceremony of running servers, then Serverless might be your best choice.
And of course, in a startup, the last thing you want to do is spend time on anything other than figuring out what your business is.
Recommendations
“The single responsibility principle is a computer programming principle that states that every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class.”
The single responsibility principle is very important when building serverless applications. A function (app) should do one thing. How you separate functionality into functions will determine how your application scales and impact your deployment strategy. You want to find the critical components in your application and isolate them in separate functions.
You need a clear understanding of how your system scales and where your bottlenecks are. There is no use in an application that can accept 500 HTTP requests per second if its database only accepts ten connections.
If they don’t match, your system will overload very quickly and you’ll end up with very unhappy users. The recommendation is that all the components in your application scale at a similar pace; you can achieve that either by using managed services or by putting smart throttling strategies in place.
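One common throttling approach (not the only one) is queue-based load levelling: the HTTP-facing function only buffers work on a queue, and a queue-triggered function drains it at a pace the database can sustain, with the queue extension’s batchSize setting in host.json bounding how many messages each instance processes at once. A sketch with placeholder names and a stand-in for the real data access:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class OrderIntake
{
    // Front door: accept the request quickly and park it on a queue. This tier can
    // absorb bursts of hundreds of requests per second without touching the database.
    [FunctionName("AcceptOrder")]
    [return: Queue("pending-orders")]
    public static async Task<string> Accept(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
        => await new StreamReader(req.Body).ReadToEndAsync();

    // Back end: drain the queue at a pace the database can handle. The queue
    // extension's host.json settings (e.g. batchSize) bound how much work each
    // instance takes on at once.
    [FunctionName("PersistOrder")]
    public static void Persist(
        [QueueTrigger("pending-orders")] string order, ILogger log)
    {
        // Stand-in for the real write; in practice this would go through your
        // database client and its bounded connection pool.
        log.LogInformation("Writing order to the database: {Order}", order);
    }
}
```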
Serverless enables us to solve problems creatively, and at a fraction of the cost we usually pay when using traditional platforms. Our serverless applications truly scale with our organisation, letting us onboard more users as we grow.
I have nothing but gratitude and hope for how your companies are going to change the world. Based on my experience with Serverless, I know this technology will serve you well in achieving your goals. I’m so excited that it’s here to help you keep making the world a better place!