There was a time when learning and implementing machine learning was not an easy task. Running it on mobile devices was even harder, because executing heavy algorithms needs high computing power. But mobile technology has grown exponentially in the past few years, and Firebase is part of that growth: it recently announced a new feature, the Firebase Machine Learning Kit. In this tutorial, I will explain everything about it in detail. I will also show you how to integrate the Firebase machine learning kit into your Android app.

What is the Firebase Machine Learning Kit?

ML Kit is one of the newer features of Firebase. It is a mobile SDK that brings machine learning features to mobile apps. Firebase has made machine learning for mobile devices so easy that even if you have no experience with neural networks, you can build with it in just a few lines of code.

The Firebase Machine Learning Kit comes in three variants. They are:

Base APIs: These are pre-trained models by Google. We just need to pass the input data and we get the result. Let's take an example: if you are using the Text Recognition API, you just need to pass an image to the API and it will return all the text recognized by the model.

Custom: In this variant, you can upload your own custom model to the Firebase portal and then serve it to your mobile devices.

AutoML: Train high-quality custom machine learning models with minimal effort and machine learning expertise.

Let's build a text recognition Android app

STEP 1: Create an Android project and connect your Firebase account. Follow my latest Firebase tutorial to learn how to connect Android Studio with Firebase.

STEP 2: Open the app-level Gradle file and add the Firebase dependency.

```groovy
dependencies {
    // ...
    implementation 'com.google.android.gms:play-services-mlkit-text-recognition:16.0.0'
}
```
STEP 3: Open AndroidManifest.xml and add this before the closing application tag.

```xml
<application ...>
    ...
    <meta-data
        android:name="com.google.mlkit.vision.DEPENDENCIES"
        android:value="ocr" />
    <!-- To use multiple models: android:value="ocr,model2,model3" -->
</application>
```

Google says this step is optional but recommended. If you do it, the model will be downloaded when the app starts for the first time. Now add the permissions to access images from the gallery.

```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```

STEP 4: Build the UI for the app. Open activity_main.xml and add this code.

```xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <LinearLayout
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:baselineAligned="true"
        android:orientation="vertical"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <ImageView
            android:id="@+id/imageView"
            android:layout_width="300dp"
            android:layout_height="300dp"
            android:layout_gravity="center"
            android:visibility="visible"
            app:srcCompat="@android:drawable/ic_menu_gallery"
            tools:srcCompat="@tools:sample/backgrounds/scenic" />

        <Button
            android:id="@+id/button"
            android:layout_width="wrap_content"
            android:layout_height="48dp"
            android:layout_gravity="center"
            android:text="Open Gallery" />

        <TextView
            android:id="@+id/textView"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:layout_gravity="center"
            android:gravity="center"
            android:text="Choose an image from the gallery." />

    </LinearLayout>

</androidx.constraintlayout.widget.ConstraintLayout>
```
The UI should look like the screenshot below. It has one image view, one button to open the gallery, and one text view to display all the text identified by the model.

STEP 5: This is the step where we actually use the API. Copy the code below into your MainActivity.kt file.

```kotlin
package com.example.textrecognition

import android.content.Intent
import android.database.Cursor
import android.graphics.BitmapFactory
import android.net.Uri
import android.os.Bundle
import android.provider.MediaStore
import android.util.Log
import androidx.appcompat.app.AppCompatActivity
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.Text
import com.google.mlkit.vision.text.TextRecognition
import kotlinx.android.synthetic.main.activity_main.*

var RESULT_LOAD_IMAGE = 1

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        button.setOnClickListener {
            // Open the gallery so the user can pick an image
            val i = Intent(
                Intent.ACTION_PICK,
                MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
            startActivityForResult(i, RESULT_LOAD_IMAGE)
        }
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && null != data) {
            val uri = data.data

            // Resolve the file path of the picked image and show it in the ImageView
            val filePathColumn = arrayOf(MediaStore.Images.Media.DATA)
            val cursor: Cursor? = contentResolver.query(uri!!, filePathColumn, null, null, null)
            cursor?.moveToFirst()
            val columnIndex: Int = cursor!!.getColumnIndex(filePathColumn[0])
            val picturePath: String = cursor.getString(columnIndex)
            cursor.close()
            imageView.setImageBitmap(BitmapFactory.decodeFile(picturePath))

            runTextRecognition(uri)
        }
    }

    private fun runTextRecognition(uri: Uri) {
        // Build an InputImage from the picked file and run the recognizer
        val image = InputImage.fromFilePath(this, uri)
        val recognizer = TextRecognition.getClient()
        button.isEnabled = false
        recognizer.process(image)
            .addOnSuccessListener { texts ->
                button.isEnabled = true
                processTextRecognitionResult(texts)
            }
            .addOnFailureListener { e ->
                // Task failed with an exception
                button.isEnabled = true
                e.printStackTrace()
            }
    }

    private fun processTextRecognitionResult(texts: Text) {
        Log.i("my", texts.text)
        textView.text = texts.text
        //Toast.makeText(this, texts.toString(), Toast.LENGTH_LONG).show();
    }
}
```
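The `Text` result is more than a flat string: ML Kit organizes it into blocks, lines, and elements (roughly paragraphs, lines, and words), each with its own bounding box. If you need the position of each piece of recognized text rather than the full dump from `texts.text`, you could walk that hierarchy. Here is a sketch of an alternative result handler (`logTextStructure` is a hypothetical name, not part of the tutorial's code):

```kotlin
// Sketch: walk the Text result hierarchy instead of reading texts.text.
// Each TextBlock contains Lines, each Line contains Elements, and every
// level exposes its own text and a boundingBox within the input image.
private fun logTextStructure(texts: Text) {
    for (block in texts.textBlocks) {
        Log.i("my", "Block: ${block.text}")
        for (line in block.lines) {
            Log.i("my", "  Line: ${line.text}")
            for (element in line.elements) {
                Log.i("my", "    Element: ${element.text} at ${element.boundingBox}")
            }
        }
    }
}
```

You could call this from `processTextRecognitionResult` if, for example, you wanted to highlight each recognized word on top of the image.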
Thanks for reading 🤩

This blog was originally published at https://www.warmodroid.xyz/tutorial/firebase/firebase-machine-learning-kit-tutorial/