From the trenches series
- Serverless testing from the trenches
- Serverless multi-cloud from the trenches
- Development flow in serverless environment from the trenches
The right decision
I was wondering whether I had made a good choice. Creating a serverless architecture that combines two cloud providers seemed crazy. I was mostly afraid of the integration between the two platforms. Surprisingly, integration turned out to be the easy part; provisioning both platforms in an automated way was much harder and eventually became the tricky part.
According to the latest Serverless framework survey, more than a quarter of the respondents use two or more cloud providers. Looking at those results alongside my own journey with multi-cloud environments, that makes sense. Each cloud provider has its strong points: Google is known for its AI and mobile support, for instance, so why not use those capabilities?
Serverless, in its purest form, means outsourcing anything that is not related to your core business to external parties: VCS, CI, compute engines, ML, and so on. Concentrate on what you do best.
Our use case
Our system is composed of two main parts: a mobile client, which collects data and uploads it for digestion, and a backend, which digests the data, updates the mobile DB, and sends push notifications.
For the mobile part, we chose Firebase as our “tooling.” It provides crash reporting (the amazing Crashlytics), analytics, and push notifications, and we had good prior experience with it, so for us it was a no-brainer. Two major components were still open, though, and we did not know what to choose: authentication and mobile DB.
Originally, we chose Auth0. We liked the fact that it talks natively with both Google Cloud and AWS, but its authentication UI on mobile was not as good as the one Firebase provided, specifically the ability to sign in via SMS without requiring any SMS permissions. For us, this was a big win. In the end, we preferred components that are part of the same package; Firebase was already integrated in our system, so we simply added authentication and Firestore to the mix.
- Everything starts with registration, and we use SMS-based registration.
- If the registration is successful, a token is saved in the device’s internal memory.
- An authentication Lambda receives the token as part of the payload and makes an API call against Firebase to validate the token.
- If the token is validated successfully, a pre-signed POST URL is returned to the client.
- Using the URL, the client is then able to upload the collected data.
The following gist demonstrates a decorator which you can put on any of your API calls to validate that the token is a valid Firebase token.
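The embedded gist is not reproduced here, but a minimal sketch of such a decorator could look like this. It assumes the firebase_admin Admin SDK; the verifier is injectable so the decorator can be exercised without Firebase credentials, and the event/response shapes are illustrative.

```python
import functools


def firebase_auth(verify_token=None):
    # Decorator factory: validate the Firebase ID token found in
    # event["token"] before the wrapped Lambda handler runs.
    # verify_token defaults to the Admin SDK's verifier and is
    # injectable for offline testing.
    if verify_token is None:
        from firebase_admin import auth  # lazy import
        verify_token = auth.verify_id_token

    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(event, context):
            token = event.get("token")
            if not token:
                return {"statusCode": 401, "body": "missing token"}
            try:
                claims = verify_token(token)
            except Exception:
                return {"statusCode": 401, "body": "invalid token"}
            # Expose the verified claims (uid etc.) to the handler.
            event["claims"] = claims
            return handler(event, context)
        return wrapper
    return decorator
```

Any API-facing Lambda can then be wrapped with `@firebase_auth()` and rely on `event["claims"]` being a verified Firebase identity.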
For more cool ways to use S3, go ahead and read “S3 can be used for more than just storing data” on Hacker Noon, which shows novel ways to extend its functionality.
I’ll be waiting for you right here.
Updating mobile DB
The mobile DB, or MD for short, has long been considered the backbone of any respectable BaaS (backend as a service) platform: it allows any mobile device to keep information in the cloud, usually in a NoSQL database. When moving to serverless, one of the major paradigm shifts is the move to a richer client, i.e., a client that does more on its side and is not a simple presentation layer. Our MD, Firestore, holds a summary of the data digestion that the backend performs, plus per-device configuration.
S3, in our case, acts as a queue. You put a message into the bucket containing the relevant device ID, the piece of data you want to write as a JSON document, and the target path inside Firestore. A Lambda function picks up the message and writes or updates the relevant fields in Firestore.
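The consumer side of that S3-as-queue flow can be sketched as below. The message layout (device_id, path, data keys) and the users/… document path are assumptions for illustration; the fetch and write callables stand in for boto3's get_object and a Firestore document set.

```python
import json


def handle_s3_event(event, fetch_object, write_document):
    """Sketch of the queue-consumer Lambda. fetch_object(bucket, key)
    and write_document(path, data) are injected; in production they
    would wrap boto3 and the Firestore client respectively."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Each message names the target device, the Firestore path
        # relative to that device's document, and the payload.
        message = json.loads(fetch_object(bucket, key))
        path = "users/%s/%s" % (message["device_id"], message["path"])
        write_document(path, message["data"])
```

Injecting the two callables keeps the routing logic testable without AWS or Firebase credentials.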
For those who are not familiar with it, Firestore may contain either a document or a collection of documents, and each document may contain another collection of documents. This structure allows you to build complex hierarchies. In our case, at the root of Firestore we keep a single collection called users, and each user is identified by a unique ID, which in turn becomes the document name.
A crucial part of the process is access permission. You do not want all devices in the system to access each other’s content. We use a shared device identifier that is created the first time a device successfully registers; note that the device identifier is not the token used for authentication. Each device is allowed to access any documents or collections underneath its device identifier. Firestore exposes this identifier in its permission rules, so you can use something like the gist below to allow per-device reads.
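A rules fragment in that spirit might look like the following. The users/{deviceId} layout mirrors the hierarchy described above; treat it as a sketch, not our exact production rules.

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Assumed layout: users/{deviceId}/... — a device may read only
    // documents underneath its own identifier.
    match /users/{deviceId}/{document=**} {
      allow read: if request.auth.uid == deviceId;
    }
  }
}
```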
Provisioning a development environment
The serverless world is notoriously difficult once you move from a single cloud function to multiple cloud functions. Add another cloud provider to the mix, and things become really tough. Our requirement is dead simple: allow each developer to deploy the full product in a real cloud environment.
We have four parts to the process:
- Create account for each developer in both AWS and Firebase.
- Provision resources on AWS and Firebase.
- Push code changes.
- Update Firestore rules.
I’m going to skip anything AWS-related and concentrate on Firebase.
Right now, the biggest problem with Firebase is the inability to create and provision resources automatically. Things might change in the future, but for now the process is manual:
- Go to https://firebase.google.com/ and click on sign-in.
- Add a new project.
- Click on the small cog, choose Project settings, then Firebase Admin SDK, and download a private key. This key will be used by the backend to authenticate against Firebase.
- Next is database configuration. Go to Database on the left panel and choose Create database, then Start in locked mode; we will update the rules later on.
When done, install the Firebase CLI (npm install -g firebase-tools) and make sure to log in with firebase login.
Update Firestore rules
Rules, for those who have forgotten, are used to restrict access: mobile devices can access only documents belonging to them. The update-rules script operates in two modes. The first enables each developer to manually update their own environment; all they have to supply is the application name. The second is used by a CI environment, in which a special authentication token is supplied to do exactly the same.
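The two modes can be sketched with a small wrapper around the Firebase CLI. The function name and dry_run flag are illustrative, but the CLI flags (deploy --only firestore:rules, --project, --token, with the token coming from firebase login:ci) are the standard ones.

```python
import subprocess


def deploy_rules(project, token=None, dry_run=False):
    """Build (and optionally run) the Firebase CLI command that pushes
    Firestore security rules. Developer mode needs only the project
    (application) name; CI mode also passes a token obtained via
    `firebase login:ci`."""
    cmd = ["firebase", "deploy", "--only", "firestore:rules",
           "--project", project]
    if token:
        # CI mode: authenticate non-interactively.
        cmd += ["--token", token]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd
```

A developer runs it with just their project name; the CI pipeline supplies the token as a secret environment variable.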
Multiple cloud providers give you, as a developer, the best of all worlds: the ability to choose the serverless solution that best fits the job. Right now, the Achilles’ heel of the entire process is creating a development environment in a seamless way. It is a question of tooling, which I believe will improve as time goes by.
Let’s hear a bit more about your experience with multi-cloud environments. Share with us.