Now let’s add AI superpowers to our iPhone app by retraining the model with our own data and connecting the trained model to the app.
Setup and Test the iOS App
0. Open Terminal
Like in the initial TensorFlow setup, we start by having a trusty Terminal always open by our side — please follow that link if you need a reminder of the steps. We will continue our convention of using full paths instead of bash shortcuts to keep problems to a minimum, so whenever you see
/Users/joe/ you will have to replace it with your own home path ([here's why we do this](/docs/guides/dl-start#openTerminal)).
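For example, the substitution for a path from this guide can be done automatically with `whoami` (this one-liner is just an illustration, not part of the setup):

```shell
# Wherever a command shows /Users/joe/..., substitute your own home directory.
# $(whoami) expands to your username, so this prints e.g. /Users/alice/tf_files
echo "/Users/$(whoami)/tf_files"
```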
Retraining the Model
Below, you will find each step with a brief description of what it accomplishes. To make things easier, however, we have created a shell script that runs the entire process. You can download the script here as a zipfile, which you can then uncompress in your home directory.
At the top of the script you’ll find the following line
Change that value to the folder you have been using to store both tensorflow directories, and then you can run the script:
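As a sketch of what that top-of-script line looks like, consider the following; the variable name ROOT_DIR is hypothetical, so check the actual script for the name it uses:

```shell
# Hypothetical sketch: the real script may use a different variable name.
ROOT_DIR="/Users/joe"   # <- change this to your own home path

# The script builds its working paths from that value, e.g.:
echo "tf_files lives in: $ROOT_DIR/tf_files"
```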
While the script is running, you can keep reading to see what the different commands are. In case of errors, the script should direct you to the right section of this FAQ.
If you followed the previous examples, you should already have the Inception model downloaded. However, if you want to make sure you have an unmodified copy, you can download it again by running:
curl -o /Users/joe/tf_files/inception.zip \
&& unzip /Users/joe/tf_files/inception.zip -d /Users/joe/tf_files/inception
Then, download the training data for the app we are going to build:
curl -o /Users/joe/tf_files/a16zset.zip \
&& unzip /Users/joe/tf_files/a16zset.zip -d /Users/joe/tf_files/a16zset
In the end, you should have both the /tf_files/inception folder and a new folder /tf_files/a16zset, which contains two main datasets:
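Before retraining, it is worth confirming that both folders actually exist (this check is our own suggestion, not part of the script; the paths assume the /Users/joe home used throughout):

```shell
# Sanity check: confirm both folders were unzipped where expected.
for d in /Users/joe/tf_files/inception /Users/joe/tf_files/a16zset; do
  if [ -d "$d" ]; then
    echo "ok: $d"
  else
    echo "missing: $d"
  fi
done
```

If either line reports `missing`, re-run the corresponding curl/unzip step above.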
Note: as mentioned previously, our dataset is based on the Stanford Mobile Dataset, built as follows:
- We selected the dataset that includes images of business cards, book covers and CD covers but did not use all 500 images for each category.
- We then expanded the sets with additional images obtained via web searches, verifying that they had been marked for reuse (according to Google Search), and we added a new category for credit cards, which can look very similar to business cards.
- Finally we normalized the images to dimensions of 640x480 with medium JPEG compression. This isn’t required by the process but we wanted to keep the dataset a relatively small download and have a reference image size that we could use later with the mobile app.
This showcases the flexibility afforded by DL for real-world applications while keeping complexity of the training process to a minimum. In our app the image recognition process is intended to speed up, rather than completely replace, human interaction, so even lower-probability matches can be useful.
To retrain the model, we use the same command as before, pointing it to the new dataset:
$ bazel-bin/tensorflow/examples/image_retraining/retrain \
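For reference, a plausible full invocation looks like the following. The flag names come from TensorFlow's stock image_retraining example; the bottleneck and output paths are illustrative assumptions, so adjust them to your setup:

```shell
# Sketch of a full retrain invocation (flag names from the stock
# TensorFlow retrain example; output paths are assumptions):
bazel-bin/tensorflow/examples/image_retraining/retrain \
  --bottleneck_dir=/Users/joe/tf_files/bottlenecks \
  --model_dir=/Users/joe/tf_files/inception \
  --output_graph=/Users/joe/tf_files/retrained_graph.pb \
  --output_labels=/Users/joe/tf_files/retrained_labels.txt \
  --image_dir=/Users/joe/tf_files/a16zset
```

The retrained graph (retrained_graph.pb) and labels file are what the iOS app will load in the next section.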
Using the Model in the iOS App
Our iOS app uses the static TensorFlow library for iOS along with our own Swift code.
Originally published in Andreessen Horowitz’s AI Playbook.