
Integrating TensorFlow Model in an iOS App

by Mohammad Azam, December 18th, 2017

Apple released the Core ML framework at WWDC 2017, allowing developers to integrate machine learning into their iOS applications. To help developers get started, Apple provided a list of models that are compatible with Core ML. These models are ready to use and can be integrated directly into an iOS application.

But what if you have to train your own model using deep learning? Luckily, Google provides an open source library called TensorFlow, which can express a deep learning model as a graph of numerical computations. This means you can develop a custom deep learning model that fits your needs. Many models have been created with TensorFlow, but unfortunately, using them in an iOS application used to require a lot of work. Recently, Google released a tool called "tfcoreml" that allows developers to convert TensorFlow models to Core ML models.

In this post I will explain how to use the tfcoreml tool to convert a TensorFlow model into a Core ML model. The process is complicated and took me a few days to figure out. Special thanks to the Google tfcoreml team for their continuous support.

Installing tfcoreml

There are multiple ways of installing the tfcoreml tool. The quickest way is to install it using pip:

pip install -U tfcoreml

At the time of this writing, however, I recommend against the above method. Several fixes to tfcoreml have landed on the master branch and are only available if you build the tool from source.

To install from source, you must first clone the tfcoreml repository:

git clone https://github.com/tf-coreml/tf-coreml.git

Once the repository is cloned, go inside the directory and run the following command:

python setup.py bdist_wheel

Finally, install the package using the following command:

pip install -e .

Congratulations, you have successfully installed the tfcoreml tool!

Converting TensorFlow Model

Before performing the actual conversion, let's get hold of a TensorFlow model. You can find several TensorFlow-compatible models listed at the end of the tfcoreml documentation. We are going to use the "Inception v1 (Slim)" model for our demo. Download the model and you will notice that it contains two files:

  • inception_v1_2016_08_28_frozen.pb
  • imagenet_slim_labels.txt

The file "inception_v1_2016_08_28_frozen.pb" is the actual model, and "imagenet_slim_labels.txt" contains the class labels. You can think of a class label as the label/title that will be attached to each prediction.
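If you prefer the command line, the model can be downloaded and extracted like this (a sketch; the archive URL is the one I believe the model listing points to, so double-check it against the documentation):

curl -L -O https://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz
tar -xzf inception_v1_2016_08_28_frozen.pb.tar.gz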

In the documentation you will see the following conversion code:

import tfcoreml as tf_converter

tf_converter.convert(tf_model_path = 'my_model.pb',
                     mlmodel_path = 'my_model.mlmodel',
                     output_feature_names = ['softmax:0'])

I created a file called "convertor.py" and placed the above code in that file, substituting the placeholder file names with the names of the downloaded model files, as shown below.
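Here is a minimal sketch of what my "convertor.py" looked like at this stage; the model and output file names come from the files above, while the output feature name is still the placeholder from the documentation:

import tfcoreml as tf_converter

# Sketch: file names substituted for the downloaded Inception v1 (Slim) model;
# 'softmax:0' is still the placeholder operator name from the documentation.
tf_converter.convert(tf_model_path = 'inception_v1_2016_08_28_frozen.pb',
                     mlmodel_path = 'InceptionV1.mlmodel',
                     output_feature_names = ['softmax:0'])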

Go to the terminal, change into the folder where the TensorFlow model exists, and execute the Python script:

python convertor.py

Welcome to the wonderful world of converting TensorFlow models to Core ML! Instead of a converted model, you are greeted with an error. I know what you are thinking: what is that supposed to mean? The good people at Google helped me out by explaining that I needed to pass the correct operator name for the tfcoreml tool to work. To find that operator, you can dump the TensorFlow model to a text summary and search for the operator in the resulting text file.

The tf-coreml repository already contains a Python script that converts the model to a text-based summary. You can check out the implementation of the script at the following location:

tf-coreml/utils/inspect_pb.py

I copied the inspect_pb.py file into my local folder so it is easier to reference, and used it to dump the TensorFlow model to a text summary.
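In case you want to generate a similar summary without the repository script, here is a rough, minimal stand-in (it assumes the TensorFlow 1.x Python API and only records the node list and operator counts; the real inspect_pb.py is more detailed):

from collections import Counter
import tensorflow as tf

GRAPH_PATH = 'inception_v1_2016_08_28_frozen.pb'

# Load the frozen graph definition from disk.
graph_def = tf.GraphDef()
with tf.gfile.GFile(GRAPH_PATH, 'rb') as f:
    graph_def.ParseFromString(f.read())

with open('text_summary.txt', 'w') as out:
    # One line per node: operator type, node name, and its inputs.
    for node in graph_def.node:
        out.write('%s : %s, inputs: %s\n' % (node.op, node.name, list(node.input)))
    # Append the operator counts at the end of the file.
    out.write('\nOPS counts:\n')
    for op, count in Counter(n.op for n in graph_def.node).items():
        out.write('%s : %d\n' % (op, count))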

Running the script generates a "text_summary.txt" file. At the end of the file you will see a count of every operator used by this particular model:

OPS counts:
Squeeze : 1
Softmax : 1
BiasAdd : 1
Placeholder : 1
AvgPool : 1
Reshape : 2
ConcatV2 : 9
MaxPool : 13
Sub : 57
Rsqrt : 57
Relu : 57
Conv2D : 58
Add : 114
Mul : 114
Identity : 231
Const : 298

Now, search the summary for "Softmax" and you will find several different entries. The one we need is the entry that corresponds to the output feature of the graph, i.e. the node that produces the final predictions.
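A quick way to list the candidates is a plain text search. In my summary, the entry that corresponds to the final predictions was the node named "InceptionV1/Logits/Predictions/Softmax" (treat the exact name as specific to this graph and verify it in your own text_summary.txt):

grep "Softmax" text_summary.txt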

Now, update your convertor file with the new “output_feature_names” argument as shown below:
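Here is the updated sketch of "convertor.py"; the operator name comes from the summary above, with ":0" appended to refer to that node's output tensor:

import tfcoreml as tf_converter

# Sketch: output feature name taken from text_summary.txt for this graph.
tf_converter.convert(tf_model_path = 'inception_v1_2016_08_28_frozen.pb',
                     mlmodel_path = 'InceptionV1.mlmodel',
                     output_feature_names = ['InceptionV1/Logits/Predictions/Softmax:0'])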

Run convertor.py again from the terminal. This time the conversion goes through successfully.

The Core ML model "InceptionV1.mlmodel" will be generated. Go ahead and double-click the generated model; this will open it up in Xcode.

Although the model has been created successfully, it is not very useful yet: we want the model to take an image as its input parameter and to provide class labels identifying the detected object.

Luckily, we already have the class labels file "imagenet_slim_labels.txt", and we can use the text_summary.txt file to find the operator name required for "image_input_names". The updated code of "convertor.py" is shown below:
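Here is a sketch of the updated "convertor.py". The "input:0" name and the [1, 224, 224, 3] shape are what I found for this graph's Placeholder node, so confirm them against your own text_summary.txt:

import tfcoreml as tf_converter

# Sketch: expose the Placeholder as an image input and attach the class labels.
tf_converter.convert(tf_model_path = 'inception_v1_2016_08_28_frozen.pb',
                     mlmodel_path = 'InceptionV1.mlmodel',
                     output_feature_names = ['InceptionV1/Logits/Predictions/Softmax:0'],
                     input_name_shape_dict = {'input:0': [1, 224, 224, 3]},
                     image_input_names = ['input:0'],
                     class_labels = 'imagenet_slim_labels.txt')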

Hop onto the terminal and run the convertor.py file as shown below:

python convertor.py

Open the newly generated model in Xcode and check out its inputs and outputs.

Woohoo!!! We got our input as an image and our outputs as a dictionary of predictions plus a classLabel.

Let's go ahead and import this model into our iOS project and see the predictions. I already have a Core ML iOS project set up and ready to go, which you can download from GitHub. I simply plugged this model in, and here is the result I got.

Bananas! Yup! Unfortunately, the converted model's prediction is way off. I talked to the Google developers again, and they explained that the images need to be preprocessed before they can be used. This means we need to update our "convertor.py" to include image preprocessing, as shown below.
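Here is the final sketch of "convertor.py" with preprocessing added. The Slim Inception models expect pixel values scaled to the [-1, 1] range, which is why I am assuming an image_scale of 2/255 and a bias of -1 per channel; verify these values for your model:

import tfcoreml as tf_converter

# Sketch: scale pixels to [-1, 1], matching the Slim preprocessing (assumed values).
tf_converter.convert(tf_model_path = 'inception_v1_2016_08_28_frozen.pb',
                     mlmodel_path = 'InceptionV1.mlmodel',
                     output_feature_names = ['InceptionV1/Logits/Predictions/Softmax:0'],
                     input_name_shape_dict = {'input:0': [1, 224, 224, 3]},
                     image_input_names = ['input:0'],
                     class_labels = 'imagenet_slim_labels.txt',
                     red_bias = -1.0,
                     green_bias = -1.0,
                     blue_bias = -1.0,
                     image_scale = 2.0/255.0)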

Run "convertor.py" again from the terminal and it will generate a new model. Import that model into the iOS app and you will see that the predictions now work correctly.

Congratulations, you have successfully converted a TensorFlow model to Core ML and integrated it into your app!

If you want to support my writing and learn more about Core ML, then please check out my course "Mastering Core ML for iOS". Make sure to rate and review the course, as that helps me add more amazing content.

Github

Thank you,

Azam