Adding AI to Your Mobile App

by Frank Chen, June 5th, 2017

[This post is republished from the a16z AI Playbook (http://aiplaybook.a16z.com/). Part 1 of this coding tutorial is at http://aiplaybook.a16z.com/docs/guides/dl-start. —Frank]


Now let’s add AI superpowers to our iPhone app by retraining the model with our own data and connecting the trained model to the app.

Set Up and Test the iOS App

0. Open Terminal

As in the initial TensorFlow setup, we start by keeping a trusty Terminal window open by our side; please follow that link if you need a reminder of the steps. We will continue our convention of using full paths instead of bash shortcuts to keep problems to a minimum, so whenever you see /Users/joe/ you will have to replace it with your own home path (here's why we do this: /docs/guides/dl-start#openTerminal).
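If you are not sure what your own home path is, Terminal can tell you; this just prints an environment variable that every macOS shell sets:

$ echo $HOME   # prints something like /Users/joe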

Retraining the Model

Retraining the model follows the same steps described earlier, with some additional optimizations to the dataset that help when running on mobile.

Below, you will find each step with a brief description of what it accomplishes. To make things easier, however, we have created a shell script that runs the entire process; you can download the script here as a zipfile, which you can then uncompress in your home directory.

At the top of the script you’ll find the following line:

TARGET_ROOT_FOLDER=/Users/joe

Change that value to the folder you have been using to store both the tf_files and tensorflow directories, and then you can run the script:

$ ./runtraining.sh
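Depending on how your unzip tool handles permissions, the script may not be marked executable. If the shell refuses to run it, marking it executable first should fix that (this assumes you uncompressed it into /Users/joe as suggested above):

$ chmod +x /Users/joe/runtraining.sh
$ /Users/joe/runtraining.sh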

While the script is running, you can keep reading to see what the different commands are. In case of errors, the script should direct you to the right section of this FAQ.

If you followed the previous examples you should already have the Inception model downloaded. However, if you want to make sure you have an unmodified copy, you can download it again by running:

curl -o /Users/joe/tf_files/inception.zip \
  https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip \
  && unzip /Users/joe/tf_files/inception.zip -d /Users/joe/tf_files/inception

Then, download the training data for the app we are going to build:

curl -o /Users/joe/tf_files/a16zset.zip \
  https://cryptic-alpha.herokuapp.com/a16zset.zip \
  && unzip /Users/joe/tf_files/a16zset.zip -d /Users/joe/tf_files/a16zset

In the end, you should have both the /tf_files/inception folder and a new /tf_files/a16zset folder, which contains two main datasets: business_card and not_business_card.
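The retraining step expects one sub-folder per label, so a quick listing is an easy sanity check (the folder names below come straight from the dataset described above):

$ ls /Users/joe/tf_files/a16zset   # should list business_card and not_business_card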

Note: as mentioned previously, our dataset is based on the Stanford Mobile Dataset, built as follows:

  • We selected the dataset that includes images of business cards, book covers, and CD covers, but did not use all 500 images for each category.
  • We then expanded the sets with additional images obtained via web searches, verifying that they had been marked for reuse (according to Google Search), and added a new category for credit cards, which can look very similar to business cards.
  • Finally, we normalized the images to 640x480 with medium JPEG compression. This isn’t required by the process, but we wanted to keep the dataset a relatively small download and have a reference image size that we could use later with the mobile app (one way to do this on macOS is sketched right after this list).
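If you want to normalize your own images the same way, the sips tool that ships with macOS is one option. This is only a sketch: the folder paths are hypothetical and the exact compression setting we used isn't recorded here, so treat the "normal" quality level below as an assumption.

# Resample each JPEG to 640x480 and re-save it with medium ("normal") JPEG quality
mkdir -p /Users/joe/my_images_640x480
for f in /Users/joe/my_images/*.jpg; do
  sips -z 480 640 -s format jpeg -s formatOptions normal "$f" --out "/Users/joe/my_images_640x480/$(basename "$f")"
done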

This showcases the flexibility afforded by DL for real-world applications while keeping the complexity of the training process to a minimum. In our app, the image recognition process is intended to speed up, rather than completely replace, human interaction, so even lower-probability matches can be useful.

To retrain the model, we use the same command as before, pointing it to the new dataset:

$ bazel-bin/tensorflow/examples/image_retraining/retrain \
  --bottleneck_dir=/Users/joe/tf_files/a16z_bottlenecks \
  --model_dir=/Users/joe/tf_files/inception \
  --output_graph=/Users/joe/tf_files/a16z_retrained_graph.pb \
  --output_labels=/Users/joe/tf_files/a16z_retrained_labels.txt \
  --image_dir /Users/joe/tf_files/a16zset
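Before a retrained Inception graph is used on a phone, it is common to strip the training-only nodes so the mobile build of TensorFlow doesn't need ops it doesn't include. The sketch below uses TensorFlow's strip_unused tool with the usual input/output node names for a retrained Inception graph (Mul and final_result) and a made-up output filename; it is not necessarily what runtraining.sh does, so check the script for the exact step:

$ bazel build tensorflow/python/tools:strip_unused
$ bazel-bin/tensorflow/python/tools/strip_unused \
  --input_graph=/Users/joe/tf_files/a16z_retrained_graph.pb \
  --output_graph=/Users/joe/tf_files/a16z_stripped_graph.pb \
  --input_node_names=Mul \
  --output_node_names=final_result \
  --input_binary=true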

Using the Model in the iOS App

Our iOS app uses the iOS static TensorFlow library along with our own Swift code.
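The static library is built from the TensorFlow source tree you already checked out. In the 2017-era source layout, the contrib makefile included an all-in-one iOS build script; a sketch (the build takes a while, and the script path is an assumption based on that layout, so double-check it against your checkout):

$ cd /Users/joe/tensorflow
$ tensorflow/contrib/makefile/build_all_ios.sh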

Originally published in Andreessen Horowitz’s AI Playbook.
