Serverless: From the trenches

by Efi Merdler-Kravitz, July 23rd, 2018

Serverless multi-cloud

“photo of two orange flying biplanes” by Andrew Palmer on Unsplash

From the trenches series

The right decision

I was wondering whether I had made a good choice. Creating a serverless architecture that combines two cloud providers seemed crazy. I was mostly afraid of the integration between the two platforms. Surprisingly, however, I found out that integration was the easy part; provisioning them in an automated way was much harder and eventually became the tricky part.

According to the latest Serverless Framework survey, more than a quarter of the respondents used two or more cloud providers. Looking at those results alongside the journey I took with multi-cloud environments, the numbers make sense: each cloud provider has its strong points. Google is known for its AI and mobile support, for instance, so why not use those capabilities?

Serverless, in its purest form, means outsourcing anything that is not related to your core business to external parties: VCS, CI, compute engines, ML, and so on. Concentrate on what you do best.

Our use case

Our system is composed of two main parts: a mobile client, which collects data and uploads it for digestion, and a backend, which digests the data, updates the mobile DB, and sends push notifications.

For the mobile part, we chose Firebase as our “tooling.” It provides crash reporting (the amazing Crashlytics), analytics, and push notifications, and we had good prior experience with it. Thus, for us, it was a no-brainer. Two major components were still open, and we did not know what to choose: authentication and the mobile DB.

Originally, we chose Auth0. We liked the fact that it talks natively with both Google Cloud and AWS, but its authentication UI on mobile was not as good as the one Firebase provided, specifically the ability to sign in via SMS without requiring any SMS permissions. For us, this was a big win. In the end, we preferred components that are part of the same package; Firebase was already integrated in our system, so we simply added authentication and Firestore to the mix.

Authentication

Using Lambda for authentication

  1. Everything starts with registration; we use SMS-based registration.
  2. If the registration is successful, a token is saved in the device’s internal memory.
  3. An authentication Lambda receives the token as part of the payload and makes an API call against Firebase to validate the token.
  4. If the token is validated successfully, a pre-signed POST URL is returned to the client.
  5. Using the URL, the client is then able to upload the collected data.
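Steps 4 and 5 can be sketched with boto3. The bucket layout (an `uploads/<device-id>/` key prefix) and the function names below are illustrative assumptions, not our exact code:

```python
def upload_key(device_id: str) -> str:
    """Key template under which a given device may upload.
    The 'uploads/<device-id>/' layout is an assumption for illustration;
    ${filename} is expanded by S3 from the POST form field."""
    return f"uploads/{device_id}/${{filename}}"


def create_upload_url(bucket: str, device_id: str, expires_in: int = 300) -> dict:
    """Return a pre-signed POST policy the mobile client can use to
    upload its collected data directly to S3 (step 4 above)."""
    import boto3  # imported lazily so the helper above stays dependency-free

    s3 = boto3.client("s3")
    return s3.generate_presigned_post(
        Bucket=bucket,
        Key=upload_key(device_id),
        # Restrict the client to its own prefix, not arbitrary keys.
        Conditions=[["starts-with", "$key", f"uploads/{device_id}/"]],
        ExpiresIn=expires_in,
    )
```

The client then performs a plain multipart POST against the returned URL and fields; no AWS credentials ever reach the device.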

The following gist demonstrates a decorator which you can put on any of your API calls to validate that the token is a valid Firebase token.
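A minimal sketch of such a decorator, assuming the Firebase Admin SDK for Python (`firebase_admin`) and API-Gateway-style Lambda events; the `get_upload_url` handler below is a hypothetical example, not our production code:

```python
import functools


def require_firebase_token(handler):
    """Reject any event that does not carry a valid Firebase ID token
    in its Authorization header before the wrapped handler runs."""
    @functools.wraps(handler)
    def wrapper(event, context):
        token = (event.get("headers") or {}).get("Authorization")
        if not token:
            return {"statusCode": 401, "body": "missing token"}
        # Imported lazily so the missing-token path needs no SDK installed.
        from firebase_admin import auth
        try:
            decoded = auth.verify_id_token(token)
        except Exception:
            return {"statusCode": 401, "body": "invalid token"}
        event["uid"] = decoded["uid"]  # expose the caller's identity
        return handler(event, context)
    return wrapper


@require_firebase_token
def get_upload_url(event, context):
    # Here you would build and return the pre-signed POST URL
    # for the device identified by event["uid"].
    return {"statusCode": 200, "body": "ok"}
```

Note that `verify_id_token` assumes `firebase_admin.initialize_app` was called with the service-account credentials you download from the Firebase console.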

For more cool uses of S3, go ahead and read:


S3 the best of 2 worlds: “S3 can be used for more than just storing data. View novel ways to extend its functionality.” (hackernoon.com)

I’ll be waiting for you right here.

Updating mobile DB

Previously, the mobile DB, or MD for short, was considered the backbone of any respectable BaaS (backend as a service) platform; it allows any mobile device to keep information in the cloud, usually in a NoSQL database. When moving to serverless, one of the major paradigm shifts is the move to a richer client, i.e., clients that do more on their side and are not just a simple presentation layer. Our MD, Firestore, holds a summary of the data digestion that the backend performs, plus per-device configuration.

S3, in our case, acts as a queue. You put a message into the bucket containing the relevant device ID, the piece of data you want to write as a JSON document, and the target path inside Firestore. A Lambda function picks up the message and writes or updates the relevant fields in Firestore.
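A sketch of that Lambda, assuming a message layout of `{"device_id", "path", "data"}` and a local service-account key file for Firebase (both illustrative assumptions):

```python
import json


def parse_message(raw) -> tuple:
    """Split a queue message into (Firestore document path, fields to write).
    The message layout is an assumption: {"device_id", "path", "data"}."""
    msg = json.loads(raw)
    path = f"users/{msg['device_id']}/{msg['path']}"
    return path, msg["data"]


def handler(event, context):
    """Triggered by s3:ObjectCreated; mirrors each message into Firestore."""
    import boto3
    import firebase_admin
    from firebase_admin import credentials, firestore

    if not firebase_admin._apps:  # initialise once per warm container
        firebase_admin.initialize_app(
            credentials.Certificate("serviceAccountKey.json"))  # assumed path
    db = firestore.client()
    s3 = boto3.client("s3")

    for record in event["Records"]:
        obj = s3.get_object(
            Bucket=record["s3"]["bucket"]["name"],
            Key=record["s3"]["object"]["key"])
        path, data = parse_message(obj["Body"].read())
        # merge=True updates existing fields instead of replacing the document
        db.document(path).set(data, merge=True)
```

The `merge=True` flag is what makes the bucket safe to use as a queue: replays of the same message converge on the same document state.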

For those who are not familiar with it, Firestore contains documents and collections of documents, and each document may contain another collection of documents. This structure allows you to build complex hierarchies. In our case, at the root of Firestore we keep a single collection called users, and each user is identified by a unique ID, which serves as the document name.

Firestore hierarchy

A crucial part of the process is access permission. You do not want all devices in the system to access each other’s content. We use a shared device identifier that is created the first time a device successfully registers; note that the device identifier is not the token used for authentication. Each device is allowed to access only the documents and collections underneath its device identifier. Firestore exposes this identifier as part of its permission rules, so you can use something like the gist below to allow per-device reads.
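Such a rule, assuming the device identifier doubles as the Firebase auth UID (`request.auth.uid`), looks roughly like this:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Each signed-in device may read only the subtree under its own ID.
    match /users/{deviceId}/{document=**} {
      allow read: if request.auth != null && request.auth.uid == deviceId;
    }
  }
}
```

Writes are deliberately left disallowed here; in our setup only the backend writes, via the Admin SDK, which bypasses these rules.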

Deployment

This Is Tough by Robert Baker on Unsplash

The serverless world is notoriously difficult when moving from a single cloud function to multiple cloud functions. Now add another cloud provider to the mix, and things become really tough. Our requirement is dead simple: allow each developer to deploy the full product in a real cloud environment.


Serverless testing from the trenches: “A quick overview on serverless testing paradigms.” (hackernoon.com)

We have four parts to the process:

  • Create an account for each developer in both AWS and Firebase.
  • Provision resources on AWS and Firebase.
  • Push code changes.
  • Update Firestore rules.

I’m going to skip anything that is AWS related and concentrate on Firebase.

Right now, the biggest problem with Firebase is the inability to create and provision resources automatically. Things might change in the future, but for now the process is manual:

  • Go to https://firebase.google.com/ and click on sign-in.
  • Add a new project.
  • Click on the small cog, choose Project settings, and then SERVICE ACCOUNTS.
  • Choose Firebase Admin SDK and download a private key. This key will be used by the backend to authenticate against Firebase.
  • Next is database configuration. Go to Database on the left panel and choose Create database. Choose Start in locked mode; we will update the rules later on.

When done, install the Firebase CLI and make sure to log in.
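Concretely, that last step is the standard npm install of the Firebase CLI:

```
npm install -g firebase-tools
firebase login
```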

Update Firestore rules

Rules, for those who have forgotten, are used to restrict access: mobile devices can access only documents belonging to them. The update-rules script operates in two modes. The first enables each developer to manually update their own environment; all they have to supply is the application name. The second is used by a CI environment, in which a special authentication token is supplied to do exactly the same.
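Both modes boil down to the same CLI call; the project names and the `FIREBASE_TOKEN` variable (obtained once via `firebase login:ci`) below are assumptions about the setup, shown only as a sketch:

```
# Developer mode: relies on the interactive `firebase login` session
firebase deploy --only firestore:rules --project my-app-dev

# CI mode: a long-lived token generated once with `firebase login:ci`
firebase deploy --only firestore:rules --project my-app-ci --token "$FIREBASE_TOKEN"
```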

Fin

Multiple cloud providers give you, as a developer, the best of all worlds: the ability to choose the serverless solution that best fits the job. Right now, the Achilles’ heel of the entire process is the inability to create a development environment in a seamless way. It is a question of tooling, which I believe will improve as time goes by.

Let’s hear a bit more about your experience with multi-cloud environments. Share with us.