How To Use Firebase Machine Learning Kit

By @mohit (Mohit) · https://www.warmodroid.xyz

There was a time when learning and implementing machine learning was not an easy task. Running it on mobile devices was even harder, because executing heavy algorithms needs high computing power. But as we know, mobile technology has grown exponentially in the past few years, and several platforms now bring machine learning to phones.

Firebase is one of those platforms. It recently announced a new feature: the Firebase Machine Learning Kit. In this tutorial, I will explain everything about it in detail. I will also show you how to integrate the Firebase Machine Learning Kit into your Android app.

What is the Firebase Machine Learning Kit?

ML Kit is one of the newer features of Firebase. It is a mobile SDK that brings machine learning capabilities to mobile apps. Firebase has made machine learning for mobile devices so easy that even if you have no experience with neural networks, you can build on it with only a few lines of code.

Firebase Machine learning kit comes in three variants. They are:

  • APIs: These are models pre-trained by Google. We just pass in the input data and get the result. Let's take an example: if you use the 'Text Recognition' API, you just pass an image to the API and it returns all the text recognized by the model.
  • Custom: With this feature, you can upload your own custom model to the Firebase console and then serve it to your mobile devices.
  • AutoML: Train high-quality custom machine learning models with minimum effort and machine learning expertise.
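The pre-trained API flow from the first bullet can be sketched in a few lines of Kotlin. This is a minimal sketch of the pattern the rest of this tutorial fleshes out: wrap the input in ML Kit's common image type, get a recognizer client, and receive the result asynchronously.

```kotlin
import android.content.Context
import android.net.Uri
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition

// Minimal pre-trained-API pattern: input -> client -> async result.
fun recognizeText(context: Context, imageUri: Uri) {
    // Wrap the image in ML Kit's common input type.
    val image = InputImage.fromFilePath(context, imageUri)
    // Client for the default on-device text recognizer.
    val recognizer = TextRecognition.getClient()
    recognizer.process(image)
        .addOnSuccessListener { result ->
            println(result.text) // all recognized text
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```

The same input/client/listener shape applies to the other pre-trained APIs (face detection, barcode scanning, and so on); only the client and result types change.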

Let’s build a text recognition android app

STEP 1 Create an Android project and connect your Firebase account. Follow my latest Firebase tutorial to learn how to connect Android Studio with Firebase.

STEP 2 Open the app-level Gradle file and add the ML Kit dependency.

dependencies {
  // ...

  implementation 'com.google.android.gms:play-services-mlkit-text-recognition:16.0.0'
}

STEP 3 Open AndroidManifest.xml and add this just before the closing application tag.

<application ...>
  ...
  <meta-data
      android:name="com.google.mlkit.vision.DEPENDENCIES"
      android:value="ocr" />
  <!-- To use multiple models: android:value="ocr,model2,model3" -->
</application>

Google says this step is optional but recommended. With this meta-data entry, the OCR model is downloaded automatically when the app is installed from the Play Store, instead of on the first use of the API.

Now add the permission to access the image from the gallery.

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
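Note that the storage permissions are "dangerous" permissions, so on Android 6.0 (API 23) and above the manifest entry alone is not enough: the app must also request them at runtime. A minimal sketch (the request-code constant is my own choice, not part of any API):

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val STORAGE_PERMISSION_REQUEST = 42 // arbitrary request code

// Call this before opening the gallery, e.g. from onCreate().
fun AppCompatActivity.ensureStoragePermission() {
    val granted = ContextCompat.checkSelfPermission(
        this, Manifest.permission.READ_EXTERNAL_STORAGE
    ) == PackageManager.PERMISSION_GRANTED
    if (!granted) {
        ActivityCompat.requestPermissions(
            this,
            arrayOf(Manifest.permission.READ_EXTERNAL_STORAGE),
            STORAGE_PERMISSION_REQUEST
        )
    }
}
```

The result of the request arrives in the activity's onRequestPermissionsResult() callback, where you can retry or explain why the permission is needed.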

STEP 4 Build the UI for the app. Open activity_main.xml and add this code.

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <LinearLayout
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:baselineAligned="true"
        android:orientation="vertical"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <ImageView
            android:id="@+id/imageView"
            android:layout_width="300dp"
            android:layout_height="300dp"
            android:layout_gravity="center"
            android:visibility="visible"
            app:srcCompat="@android:drawable/ic_menu_gallery"
            tools:srcCompat="@tools:sample/backgrounds/scenic" />

        <Button
            android:id="@+id/button"
            android:layout_width="wrap_content"
            android:layout_height="48dp"
            android:layout_gravity="center"
android:text="Open Gallery" />

        <TextView
            android:id="@+id/textView"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:layout_gravity="center"
            android:gravity="center"
android:text="Choose an image from the gallery." />
    </LinearLayout>
</androidx.constraintlayout.widget.ConstraintLayout>

The UI should look like the screenshot below. It has one ImageView, one Button to open the gallery, and one TextView to display the text identified by the model.

STEP 5 This is the step where we actually call the ML Kit API, in MainActivity.kt. Copy the code below into your file.

package com.example.textrecognition

import android.content.Intent
import android.database.Cursor
import android.graphics.BitmapFactory
import android.net.Uri
import android.os.Bundle
import android.provider.MediaStore
import android.util.Log
import androidx.appcompat.app.AppCompatActivity
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.Text
import com.google.mlkit.vision.text.TextRecognition
import kotlinx.android.synthetic.main.activity_main.*


const val RESULT_LOAD_IMAGE = 1 // request code for the gallery picker

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        button.setOnClickListener {
            val i = Intent(
                    Intent.ACTION_PICK,
                    MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
            startActivityForResult(i, RESULT_LOAD_IMAGE)
        }
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)

        if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && data != null) {
            // The picker returns a content URI; display it directly instead of
            // resolving a file path via a Cursor (the MediaStore.Images.Media.DATA
            // column is deprecated and unreliable for content URIs).
            val uri = data.data ?: return
            imageView.setImageURI(uri)
            runTextRecognition(uri)
        }
    }

    private fun runTextRecognition(uri: Uri) {
        val image = InputImage.fromFilePath(this, uri)
        val recognizer = TextRecognition.getClient()
        button.isEnabled = false
        recognizer.process(image)
                .addOnSuccessListener { texts ->
                    button.isEnabled = true
                    processTextRecognitionResult(texts)
                }
                .addOnFailureListener { e -> // Task failed with an exception
                    button.isEnabled = true
                    e.printStackTrace()
                }
    }

    private fun processTextRecognitionResult(texts: Text) {
        Log.i("my", texts.text)
        //Toast.makeText(this, texts.toString(), Toast.LENGTH_LONG).show();
        textView.text = texts.text
    }
}
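The Text result used above is hierarchical: besides the full string returned by texts.text, it can be broken down into blocks, lines, and elements (roughly words), each with its own bounding box. A minimal sketch of walking that structure, which is useful if you want to highlight recognized words on the image:

```kotlin
import android.util.Log
import com.google.mlkit.vision.text.Text

// Walk the Text result hierarchy: blocks -> lines -> elements (words).
fun logTextStructure(texts: Text) {
    for (block in texts.textBlocks) {
        Log.i("my", "Block: ${block.text} at ${block.boundingBox}")
        for (line in block.lines) {
            Log.i("my", "  Line: ${line.text}")
            for (element in line.elements) {
                Log.i("my", "    Word: ${element.text}")
            }
        }
    }
}
```

You could call this from processTextRecognitionResult() in place of the single Log.i line to see exactly how the model segmented the image.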

Thanks for reading 🤩

This blog is originally published @ https://www.warmodroid.xyz/tutorial/firebase/firebase-machine-learning-kit-tutorial/
