Query a custom AutoML model with Cloud Functions and Firebase

Written by srobtweets | Published 2018/08/21
Tech Story Tags: firebase | google-cloud-platform | machine-learning | serverless | automl


If you haven’t heard about AutoML yet, it’s the newest ML offering on Google Cloud and lets you build custom ML models trained on your own data — no model code required. It’s currently available for images, text, and translation models. There are lots of resources out there to help you prepare your data and train models in AutoML, so in this post I want to focus on the prediction (or serving) part of AutoML.

I’ll walk you through building a simple web app to generate predictions on your trained model. It makes use of Firebase and Cloud Functions so it’s entirely serverless (yes, I put serverless and ML in the same blog post 🙄). Here’s the app architecture:

Want to skip to the code? It’s all available in this GitHub repo.

The AutoML API

I was particularly excited to discover that in addition to providing an entire UI for building and training models, AutoML has an API for adding training data, deploying models, generating predictions, and more. Let’s say you’re crowdsourcing training data for your model: with the AutoML API you could dynamically add new data to your project’s dataset and regularly train updated versions of your model. I’ll cover that in a future post; here I’ll focus on the prediction piece.

For this demo we’ll build a web app for generating predictions on a trained AutoML Vision model (though it could easily be adapted to AutoML NL since they use the same API). The particular model I’ll be querying can detect the type of cloud in an image. On the frontend, users will be able to upload an image for prediction. Our app will upload that image to Firebase Storage, which will kick off a Cloud Function. Inside the function we’ll call the AutoML API and return the prediction data to our frontend client. The finished product looks like this:

Setting up your Firebase project

Firebase is a great way to get apps up and running quickly without worrying about managing servers. It provides a variety of SDKs that make it easy to do things like upload images, save data, and authenticate users directly from client-side JavaScript.

For this blog post I’ll assume you already have a trained AutoML Vision model that’s ready for predictions. The next step is to associate this project with Firebase. Head over to the Firebase console and click Add project. Then click on the dropdown and select the Cloud project where you’ve created your AutoML model. If you’ve never used Firebase before, you’ll also need to install the CLI.

Next, clone the code from this GitHub repo and cd into the directory where you’ve downloaded it. To initialize Firebase in that directory run firebase init and select Firestore, Functions, Hosting, and Storage when prompted (this demo uses all four):

Now we’re ready to go. In the next step we’ll set up and deploy the Cloud Function that calls AutoML.

AutoML + Cloud Functions for Firebase

You can use Cloud Functions independently of Firebase, but since I’m using so many Firebase features in my app already, I’ll make use of the handy Firebase SDK for Cloud Functions. Take a look at the [functions/index.js](https://github.com/sararob/automl-api-demo/blob/master/functions/index.js) file and update the 3 variables at the top to reflect the info for your project:
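Those three variables look roughly like this (a sketch — the exact names may differ in the repo, and the values here are placeholders for your own project’s info):

```javascript
// A sketch of the three variables at the top of functions/index.js
// (names assumed; replace the placeholder values with your project's info):
const projectId = 'your-gcp-project-id'; // Cloud project that owns the AutoML model
const computeRegion = 'us-central1';     // region where the model is deployed
const modelId = 'your-automl-model-id';  // model ID from the AutoML UI
```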

Our Cloud Function is defined in exports.callCustomModel. To trigger this function whenever a file is added to our Storage bucket we use: functions.storage.object().onFinalize(). Here’s what’s happening in the function:

  1. Download the image to our Cloud Functions file system (we can use the /tmp dir to do this)
  2. Base64 encode the image to prepare it for the AutoML prediction request
  3. Make the AutoML prediction request using the handy nodejs-automl package
  4. Write the prediction response to Cloud Firestore

We can create an AutoML prediction client with 2 lines of code:
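Those two lines are a require and a constructor call — a sketch, assuming the @google-cloud/automl package is a dependency in functions/package.json (wrapped in a helper here so the require only runs when it’s called):

```javascript
// Sketch: creating the prediction client, assuming the @google-cloud/automl
// package is installed. In functions/index.js these would just be two
// top-level lines; a helper is used here for illustration.
function createPredictionClient() {
  const automl = require('@google-cloud/automl');
  return new automl.v1beta1.PredictionServiceClient();
}
```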

The request JSON to make an AutoML prediction looks like this:
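A sketch of that request shape, with field names following the AutoML v1beta1 API (the model path and threshold value here are placeholders):

```javascript
// A sketch of an AutoML Vision prediction request:
const request = {
  // full resource path of the deployed model
  name: 'projects/YOUR_PROJECT/locations/us-central1/models/YOUR_MODEL_ID',
  payload: {
    image: {
      imageBytes: 'BASE64_ENCODED_IMAGE', // the base64 string from step 2
    },
  },
  params: {
    score_threshold: '0.5', // only return labels scoring above this
  },
};
```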

All we need to do to send this to the AutoML API is create a prediction client and call predict():
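A sketch of that call. The client is passed in as a parameter here so the logic can be exercised without GCP credentials; in the deployed function it would be a PredictionServiceClient from the @google-cloud/automl package, and the response field names follow the AutoML v1beta1 API:

```javascript
// Send a prediction request for a base64-encoded image and return the
// predicted labels. getPrediction is a hypothetical helper name.
async function getPrediction(predictionClient, modelFullId, imageBase64) {
  const [response] = await predictionClient.predict({
    name: modelFullId,
    payload: { image: { imageBytes: imageBase64 } },
    params: { score_threshold: '0.5' },
  });
  // response.payload is an array of annotations, each with a displayName
  // (the label) and a classification.score
  return response.payload;
}
```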

Time to deploy the function. From the root directory of this project, run firebase deploy --only functions. When the deploy completes, you can test it out by navigating to the Storage section of your Firebase console and uploading an image:

Uploading an image to Firebase Storage

Then, head over to the Functions part of the console to look at the logs. If the prediction request completed successfully, you should see the prediction response JSON in the logs:

Function logs

Inside the function, we also write the prediction metadata to Firestore so that our app can display this data on the client. In the Firestore console, you should see the metadata saved in an images/ collection:

Prediction metadata in Cloud Firestore

With the function working, it’s time to set up the app frontend.

Putting it all together

To test the frontend locally, run the command firebase serve from the root directory of your project and navigate to localhost:5000. Click on the Upload a cloud photo button in the top right. If the image you uploaded returned a prediction from your model, you should see that displayed in the app. Remember that this app is configured for my cloud detector model, but you can easily modify the code to make it work for your own domain. When you upload a photo, check your Functions, Firestore, and Storage dashboards to ensure everything is working.

Finally, let’s make use of Firebase Hosting to deploy the frontend so we can share it with others! Deploying the app is as simple as running the command firebase deploy --only hosting. When the deploy finishes, your app will be live at your own firebaseapp.com domain.

That’s it! We’re getting predictions from a custom ML model with entirely serverless technology. To dive into the details of everything covered in this post, check out these resources:

Let me know what you think in the comments or find me on Twitter at @SRobTweets.


Published by HackerNoon on 2018/08/21