This is Part 2 of a two-part article on deploying ML models on mobile. In Part 1, we saw how to convert our ML models to the TfLite format. For those of you who came here first, I recommend you click on the above link to get the whole picture. If you just want the Android part: the demo app we are building has a GAN model generating handwritten digits and a classifier model predicting the generated digit. Find the code for the Android app here. Tl;dr: here.

Part 2: The Android Story

Now that we have the TfLite model ready, along with information about the inputs and outputs to the model, we need to add it to our Android Studio project.

Step 1: Create a new Android Studio project.

Step 2: Go to Project -> app -> src -> main in your Android project folder and create a directory called assets.

Step 3: Paste your .tflite file(s) into the assets directory. The structure will look something like this -

Let us proceed to adding dependencies in Android Studio. Add the following in build.gradle (Module: app):

```groovy
android {
    aaptOptions {
        noCompress "tflite"
        noCompress "lite"
    }
}

dependencies {
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
}
```

To load the models in Android, we must import and use the TfLite Interpreter. The TfLite Interpreter is used for the following:

- Loading the model
- Transforming data
- Running inference
- Interpreting output

The model can be loaded as follows: create an Interpreter object and use the loadModelFile() function to load the TfLite model as a MappedByteBuffer. Here, MODEL_FILE holds the name of your model as saved in the assets folder.
```java
Interpreter tflite_gan;
String gan_model = "gan_model.tflite";

try {
    tflite_gan = new Interpreter(loadModelFile(context, gan_model));
} catch (IOException e) {
    e.printStackTrace();
}

private MappedByteBuffer loadModelFile(Context c, String MODEL_FILE) throws IOException {
    AssetFileDescriptor fileDescriptor = c.getAssets().openFd(MODEL_FILE);
    FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
```

For the GAN model, let us use ByteBuffers for the input and output, which makes pixel manipulation for image display easy. First, let us generate random latent points as a [1x100] float array to provide as input, as shown in the model details in Part 1 of the article.

```java
private float[][] generateLatentPoints(int n, int dim) {
    Random rn = new Random();
    float[][] gan_input = new float[1][100];
    for (int i = 0; i < 100; i++) {
        float random_number = rn.nextFloat();
        gan_input[0][i] = random_number;
    }
    return gan_input;
}
```

Next, we will copy this data into a ByteBuffer and also allocate a ByteBuffer to store the output. As discussed, the size allocation is based on the model information. The last line in the code runs the ML model.

```java
float[][] input = generateLatentPoints(N, DIM);

ByteBuffer input_data = ByteBuffer.allocate(100 * BYTE_SIZE_OF_FLOAT);
input_data.order(ByteOrder.nativeOrder());
ByteBuffer gan_output = ByteBuffer.allocate(1 * 28 * 28 * 1 * BYTE_SIZE_OF_FLOAT);
gan_output.order(ByteOrder.nativeOrder());

input_data.clear();
gan_output.clear();
input_data.rewind();
for (int i = 0; i < 100; i++) {
    input_data.putFloat(input[0][i]);
}
tflite_gan.run(input_data, gan_output);
```

In the above code, tflite_gan.run(input_data, gan_output) takes the input and gives the output as a ByteBuffer, which we shall render as an image.
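The ByteBuffer plumbing is plain Java, so it can be sanity-checked off-device. Here is a minimal sketch of packing a float array into a native-order buffer, mirroring how the GAN input is packed above (the class name `BufferDemo` and helper `pack` are my own for illustration, not part of the app):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BufferDemo {
    static final int BYTE_SIZE_OF_FLOAT = 4; // a Java float occupies 4 bytes

    // Pack a float[] into a native-order ByteBuffer, as done for the GAN input
    static ByteBuffer pack(float[] values) {
        ByteBuffer buffer = ByteBuffer.allocate(values.length * BYTE_SIZE_OF_FLOAT);
        buffer.order(ByteOrder.nativeOrder());
        for (float v : values) {
            buffer.putFloat(v);
        }
        buffer.rewind(); // the interpreter must read from position 0
        return buffer;
    }

    public static void main(String[] args) {
        ByteBuffer buf = pack(new float[]{0.25f, 0.5f, 0.75f});
        System.out.println(buf.capacity());  // 12 (3 floats * 4 bytes)
        System.out.println(buf.getFloat());  // 0.25
    }
}
```

Note the rewind() at the end: forgetting it is a common source of all-zero model outputs, because the interpreter would start reading at the buffer's current position.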
The output is essentially a [1x28x28x1] array. The following snippet of code can be used to generate a Bitmap.

```java
gan_output.rewind();
Bitmap bitmap = Bitmap.createBitmap(IMAGE_WIDTH, IMAGE_HEIGHT, Bitmap.Config.RGB_565);
int[] pixels = new int[IMAGE_WIDTH * IMAGE_HEIGHT];
for (int i = 0; i < IMAGE_WIDTH * IMAGE_HEIGHT; i++) {
    int a = 0xFF;
    float r = gan_output.getFloat() * 255.0f;
    pixels[i] = a << 24 | (((int) r) << 8); // This is a colored image which you can display
}
bitmap.setPixels(pixels, 0, 28, 0, 0, IMAGE_WIDTH, IMAGE_HEIGHT);
```

All you have to do is set this image to an ImageView or wherever you want to display it in your app.

Yay! You just deployed a GAN model on Android.

Let us proceed to the MNIST classifier model. The process for loading and running the TfLite model remains the same; just replace MODEL_FILE with the classifier model's name. The input to the classifier is the output from the GAN, which is a 28x28 image. We have to convert this image to black and white, which is what the classifier is trained to identify. This can be done as follows -

```java
// Create a black & white image by taking HSV values from the RGB image
Bitmap bwBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), Bitmap.Config.RGB_565);
float[] hsv = new float[3];
for (int col = 0; col < bitmap.getWidth(); col++) {
    for (int row = 0; row < bitmap.getHeight(); row++) {
        Color.colorToHSV(bitmap.getPixel(col, row), hsv);
        if (hsv[2] > 0.5f) {
            bwBitmap.setPixel(col, row, 0xff000000);
        } else {
            bwBitmap.setPixel(col, row, 0xffffffff);
        }
    }
}
return bwBitmap;
```

Again, we shall provide the input as a ByteBuffer, which has to be processed. Let us do so by setting 0 for white and 255 for black pixels.
```java
public ByteBuffer Process(Bitmap bitmap) {
    ByteBuffer inputBuffer = ByteBuffer.allocateDirect(BYTE_SIZE_OF_FLOAT * DIM_BATCH_SIZE
            * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE);
    inputBuffer.order(ByteOrder.nativeOrder());

    // Load bitmap pixels into the temporary pixels variable
    int[] pixels = new int[28 * 28];
    bitmap.getPixels(pixels, 0, 28, 0, 0, 28, 28);

    // Set 0 for white and 255 for black pixels
    for (int i = 0; i < pixels.length; ++i) {
        int pixel = pixels[i];
        int channel = pixel & 0xff;
        inputBuffer.putFloat(0xff - channel);
    }
    return inputBuffer;
}
```

We have reached the last step of the process - getting the predicted digit from the classifier based on the most probable value. The output is a 2D array holding a probability for each index. Iterating through it, we find the most probable value, and that index will be our answer. Since we know that the output is a [1x10] float array, we will save it in the same form and iterate through it.

```java
public float Classify(ByteBuffer input) {
    float[][] mnistOutput = new float[DIM_BATCH_SIZE][NUMBER_LENGTH];
    tflite_mnist.run(input, mnistOutput);
    float max_probability = Float.MIN_VALUE;
    float digit = 0;
    for (int i = 0; i < mnistOutput[0].length; i++) {
        float value = mnistOutput[0][i];
        if (value > max_probability) {
            max_probability = value;
            digit = i;
        }
    }
    return digit;
}
```

That digit returned right there is your predicted output!

You can deploy as many models as you like in the same way. The key is to know and understand the inputs and outputs required by each of them, and you are good to go.

We can also get multiple outputs from the Interpreter using the following piece of code.

```java
import org.tensorflow.lite.Interpreter;

Interpreter tflite;
tflite.runForMultipleInputsOutputs(inputs, outputs);
```

Make sure the data structure used to save your outputs is able to accommodate multiple arrays. For instance, you can define the output as a HashMap of arrays as shown.
```java
float[][][][] output1 = new float[1][28][28][1];
float[][][][] output2 = new float[1][14][14][34];
Map<Integer, Object> outputs = new HashMap<>();
outputs.put(0, output1);
outputs.put(1, output2);
```

So finally, the app looks something like this -

The code for the whole app is mentioned right at the beginning. We have come to the end of the story.

For more resources on TfLite on Android, visit the official quick-start site.

Explore some more example applications with slightly more complexity, involving camera activities etc., here.

For exploring more on GANs, visit here.
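As a closing sanity check: the digit-picking logic at the heart of the classifier step is pure Java, so it is easy to unit-test off-device before wiring it into the app. A minimal sketch (the class name `ArgmaxDemo` and the sample probabilities are illustrative, not from the app):

```java
public class ArgmaxDemo {
    // Return the index of the highest probability, as Classify does for the [1x10] output
    static int argmax(float[] probabilities) {
        float max_probability = Float.MIN_VALUE;
        int digit = 0;
        for (int i = 0; i < probabilities.length; i++) {
            if (probabilities[i] > max_probability) {
                max_probability = probabilities[i];
                digit = i;
            }
        }
        return digit;
    }

    public static void main(String[] args) {
        // A fake [1x10] classifier output where digit 7 is most probable
        float[] mnistOutput = {0.01f, 0.02f, 0.0f, 0.05f, 0.1f, 0.0f, 0.02f, 0.7f, 0.05f, 0.05f};
        System.out.println(argmax(mnistOutput)); // 7
    }
}
```

Keeping such glue logic in plain, framework-free methods like this makes each model's pre- and post-processing testable on the JVM without an emulator.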