Empower the cloud to scale applications with ease

by Sebastian Barthel, August 8th, 2017
As a developer I want to build and ship features continuously. If something goes wrong, I want to be able to roll back. Additionally, central logging should work out of the box. On top of that there should be something like a self-healing mechanism, so I don't need to be on call 24/7. When I create applications with scalability in mind, I want to be able to scale automatically. That allows me to reduce costs and to handle a lot of visitors if someone shares my application on a social network.

Things have changed over the last decade. Storage is amazingly cheap. Clouds have become affordable and reduce the effort required to handle the changing traffic peaks of applications. Furthermore, there are production-ready ways to run applications in isolation, like Docker.

I will show you how to combine cloud solutions with best practices of application development.

1. Containerize applications

Containers are an old concept and Docker is nothing really new. What has changed is the tooling around containers. If I put my application in a container, I get a lot of flexibility and avoid vendor lock-in.

For example, the serverless offering from AWS (Lambda) is a great thing. I can deploy little functions to the cloud and connect them to build a highly scalable system. In this case I don't need to manage anything. But what happens if I want to switch to another offering? Usually this is hard. Often it's recommended to use their database as well, which is also fully managed and highly scalable. If I agree to use all their beneficial tooling, they have me locked in.

They can then change their prices and offerings however they want.

Putting my code in a container protects me from such a jail. I also try to run the same application in dev, test and production environments to reduce complexity. By using environment variables I can control debug settings and other environment specifics. I will demonstrate how easy this is.
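As a small illustration of environment-driven configuration (the variable names DEBUG and PORT are my own examples, not mandated by any platform), settings can be read from process.env so the same container image runs everywhere:

```javascript
// config.js - a sketch of environment-driven configuration;
// the variable names DEBUG and PORT are illustrative examples
function loadConfig(env) {
  return {
    // DEBUG=true enables verbose logging in dev; leave it unset in production
    debug: env.DEBUG === 'true',
    // same image in every environment, only the injected values differ
    port: parseInt(env.PORT || '8080', 10)
  }
}

const config = loadConfig(process.env)
console.log(config)
```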

A Node.js application can be containerised by using the right Docker image. I create a simple file that contains some application code:



// server.js
var http = require('http');
var ifaces = require('os').networkInterfaces();

http.createServer((req, res) => {
  // simulate some io like a db access
  setTimeout(() => {
    res.end('Hello it\'s ' + new Date())
  }, 50)
}).listen(8080)

To have a valid Node.js application, a project needs to be initialized:

$ npm init

(...)

{"name": "gcloud","version": "1.0.0","description": "","main": "server.js","scripts": {"test": "echo \"Error: no test specified\" && exit 1","start": "node server.js"},"author": "","license": "ISC"}

Is this ok? (yes) yes

The next step is to create a Dockerfile to put the application in a container. I use the node:onbuild image, which contains every step required to start the application.



# Dockerfile
FROM node:onbuild
EXPOSE 8080

That's it! Now the container can be built and run locally.


$ docker build -t my-app:v1 .
$ docker run -p 80:8080 my-app:v1

Afterwards the application is running at http://localhost (host port 80 is mapped to container port 8080).

This means I can now:

  • Run this container in every environment (Windows, MacOS, Linux, …),
  • Start it multiple times (use different ports: -p 81:8080, -p 82:8080, …),
  • Pause, run and renew it,
  • Share this container with others,
  • Run it without any side-effect to the system.

This flexibility allows me to freely choose the best environment for my application. Feel free to experiment with it: put it on AWS, Google Cloud, Azure or an old-fashioned bare-metal server.

2. Create stateless applications

The foundation for scaling is being able to distribute traffic across multiple machines that run my application. Sure, I can replace my current server with a more powerful one, but that is more expensive, and at some point scaling up a single machine is not possible anymore.

Clouds come with load balancers that work out of the box and distribute traffic across multiple machines. Keeping state in the database or at the client makes scaling possible: it does not matter which machine handles which request of a specific visitor.

To demonstrate how to set up such an environment I use Google App Engine. Similar results can be achieved with other providers like AWS.

To follow my instructions you need to create a Google Cloud account. There is a free tier you can use for one year, so you can just play around. But take care: always shut all instances down, otherwise there will be unexpected costs. To be safe, define some spending limits before you start playing around. For me, the fear of unexpected costs was an annoying burden.

Playing around with cloud environments is important. Just do it and explore how modern applications are hosted these days. It is not as dangerous as your gut tells you :).

Especially on Google Cloud there are excellent interactive tutorials that always remind you to clean up the resources you used.

After installing the Google Cloud SDK you should be ready. I will use the flexible environment of Google App Engine, the fully managed variant of their products. If I need more control, I use their Container Engine instead.

I can upload a Docker container and get a running setup. There is a little helper tool to prepare my application for the cloud:

$ gcloud beta app gen-config --custom

This creates an app.yaml file and updates my Dockerfile. Now I am ready to deploy my app to Google App Engine.

$ gcloud app deploy

That’s it. This will take a while because this is my first deployment. Afterwards I have 2 running instances.

$ gcloud app instances list



SERVICE  VERSION       ID                                VM_STATUS
default  20170813t112  aef-default-20170813t112927-cgvs  RUNNING
default  20170813t112  aef-default-20170813t112927-k4qg  RUNNING

Now I have a highly scalable environment based on my container that gives me a lot of useful functionality. To name a few features:

  • Scale up and down automatically as traffic changes,
  • Redeploy my application with no downtime,
  • Split traffic between different versions to test new features,
  • Roll back to older versions when something goes wrong,
  • Central logging and
  • Health checks with automatic restarts when something goes wrong.

I get a lot of beneficial tooling long before I even have to think about it, and it will matter in the future as my application evolves.
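For example, automatic scaling can be bounded in the generated app.yaml (the exact values below are my own illustration; the App Engine flexible environment documentation lists all options):

```yaml
env: flex
runtime: custom

automatic_scaling:
  min_num_instances: 2
  max_num_instances: 10
  cpu_utilization:
    target_utilization: 0.6
```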

3. Use a CDN for static content

Besides container-related features, clouds offer much more. One useful feature is the simple integration of a content delivery network (CDN). Cloud Storage is not a real CDN, but it has comparable benefits and is very easy to use. In general, CDNs serve static content: the files are stored at many locations to shorten response times, and the nearest available location is chosen automatically to handle each request. Modern applications in particular have a lot of such assets:

  • Images,
  • CSS,
  • Fonts and
  • Dynamic client scripts (JavaScript).

IO (database access, file access) is always a bottleneck for web applications, so it's always good to reduce the number of requests that actually reach the application. All static files that can be cached should be cached. This results in a vast reduction of work for the application. I outsource simple work as much as I can to systems that are highly specialized in what they do.
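A sketch of that caching rule in the Node.js app itself (the extension list and max-age value are my own choices): static assets get a long-lived Cache-Control header so browsers and CDN edges can answer repeat requests without touching the application.

```javascript
// cache-headers.js - decide the Cache-Control header by pathname;
// the extension list and max-age are illustrative choices
function cacheHeadersFor(pathname) {
  const isStatic = /\.(css|js|png|jpg|svg|woff2?)$/.test(pathname)
  return isStatic
    ? { 'Cache-Control': 'public, max-age=31536000' } // static: cache for a year
    : { 'Cache-Control': 'no-cache' }                 // dynamic: always revalidate
}
```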

If I open any complex site like Amazon, I see a huge amount of requests. The ratio looks like this:

287 requests, 5.8 MB transferred, Finished after 1.4 seconds

initial request is about 103 KB

~99% cacheable

Let’s do some math for creating a page that can handle 1 million users:


198 / 200 requests are static
1,000,000 users

# a user will go from page to page every 10 seconds
1,000,000 users / 10 s = 100,000 full page loads / s

20,000,000 requests per second without a CDN
200,000 requests per second with a CDN

Handling 200,000 requests per second sounds achievable; handling 20,000,000 without a CDN does not. And this is only the load at traffic peaks, but especially then I want to ride the crowd wave. The worst thing that can happen is that my application crashes during such an exciting moment.
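The back-of-the-envelope numbers above can be checked in a few lines (200 requests per page load, 198 of them static, is my reading of the example):

```javascript
// reproduce the load estimate from the text
const users = 1000000
const requestsPerPage = 200
const staticRequests = 198              // served by the CDN, never reach the app
const pageLoadsPerSecond = users / 10   // each user navigates every 10 seconds

const withoutCdn = pageLoadsPerSecond * requestsPerPage
const withCdn = pageLoadsPerSecond * (requestsPerPage - staticRequests)

console.log(withoutCdn) // 20000000 requests/s reach the app
console.log(withCdn)    // 200000 requests/s reach the app
```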

Back to my example: I add a JavaScript and a CSS file to my basic site, so every page load now needs two additional requests. I can offload my static files using Google Cloud Storage. First I create a bucket, allow public read access and sync my static resource folder:



$ gsutil mb gs://medium-example
$ gsutil defacl set public-read gs://medium-example
$ gsutil -m rsync -r ./public gs://medium-example/public

I can see the exact location by opening the Google Cloud Storage overview.

I probably only want to use the storage-hosted resources in my production environment. Therefore I adapt some resource paths in my application like this:

const resourcePath = process.env.ENVIRONMENT === "production" ? 'https://storage.googleapis.com/medium-example' : '.'

const simulateSomeWorkToDo = (pathname, res) => {
  // simulate some io like a db access
  setTimeout(() => {
    res.end(`<html>
  <head>
    <link rel="stylesheet" type="text/css"
      href="${resourcePath}/public/general.css">
  </head>
  <body>
    Hello, it's ${new Date()}.
  </body>
  <script src="${resourcePath}/public/general.js"></script>
</html>`)
  }, 50)
}

To set the variable in my production application I can adapt the app.yaml file.


env: flex
runtime: custom

env_variables:
  ENVIRONMENT: 'production'

That's how I use today's clouds to stay focused on app development instead of operations. I use containers to run my application wherever I want. By using a container management solution like Google App Engine I get a preconfigured cluster with a lot of functionality. And by putting static content on a CDN I get low response times, save costs and increase scalability even further.

It's so much fun to be able to update applications quickly, or to ship them to a production-ready environment by executing a single command. I can make use of canary releases or roll back if something goes wrong. I get HTTPS out of the box without doing anything, plus central logging, self-healing mechanisms, automated scaling and much more. And the best thing: you don't need to manage your server. No updates, no restarts, no issues on your server. If something goes wrong, the cloud automatically replaces the defective instance with a new one.

All cloud providers have spent so much time automating operations. It is not a developer's main business, so I use existing solutions. That way I can spend more time doing what is important for my business.

If you enjoyed the reading and want to see more, feel free to hit the little clap button. And if you are having a really good day, follow me on Twitter.
