In the ever-evolving landscape of mobile app development, the ability to process and analyse images in real time has become increasingly important. If you are a Flutter developer, you have probably relied on Google ML Kit for Flutter to bring powerful machine learning capabilities to your apps. Today, I’m excited to announce a significant enhancement to this package: subject segmentation.

Subject segmentation allows developers to easily separate multiple subjects from the background in a picture, enabling use cases such as sticker creation, background swapping, or adding cool effects to subjects. A subject is a primary person, animal, or object that appears in the foreground of an image. If two subjects are very close to or touching each other, they are treated as a single subject. Subject segmentation processes an input image and produces an output mask or bitmap for the foreground.

If you are new to Google ML Kit for Flutter, it is a set of Flutter plugins that enable Flutter apps to use Google’s standalone ML Kit, making it easy to use these powerful ML features in Flutter apps. Before this addition, google_ml_kit for Flutter already offered a range of capabilities, including text recognition, face detection, pose estimation, and more. These features have enabled developers to create sophisticated apps with minimal effort spent implementing complex ML algorithms.

Using Subject Segmentation in your Flutter app

To use the new subject segmentation in your app, follow these simple steps.

Firstly, what are the requirements?

iOS: This feature is still in beta, and it is only available on Android. Stay tuned for updates on Google’s website, and request the feature here.

Android:
- minSdkVersion: 24
- targetSdkVersion: 33
- compileSdkVersion: 34

You can configure your app to automatically download the model to the device after your app is installed from the Play Store.
To do so, add the following declaration to your app’s AndroidManifest.xml file:

```xml
<application ...>
  ...
  <meta-data
      android:name="com.google.mlkit.vision.DEPENDENCIES"
      android:value="subject_segment" />
  <!-- To use multiple models: android:value="subject_segment,model2,model3" -->
</application>
```

Secondly, add google_mlkit_subject_segmentation to your pubspec.yaml file and run flutter pub get:

```yaml
dependencies:
  google_mlkit_subject_segmentation: ^0.0.1
```

Or run this command in your terminal:

```shell
flutter pub add google_mlkit_subject_segmentation
```

Now, in your Dart code, you can import it:

```dart
import 'package:google_mlkit_subject_segmentation/google_mlkit_subject_segmentation.dart';
```

Usage

Create an instance of InputImage in any of these three ways:

From path:

```dart
final inputImage = InputImage.fromFilePath(filePath);
```

From file:

```dart
final inputImage = InputImage.fromFile(file);
```

From bytes:

```dart
final inputImage = InputImage.fromBytes(bytes: bytes, metadata: metadata);
```

Create an instance of SubjectSegmenter:

```dart
final options = SubjectSegmenterOptions(
  enableForegroundConfidenceMask: true,
  enableForegroundBitmap: false,
  enableMultipleSubjects: SubjectResultOptions(
    enableConfidenceMask: false,
    enableSubjectBitmap: false,
  ),
);
final segmenter = SubjectSegmenter(options: options);
```

Let’s discuss the options. There are four of them. Don’t worry; I will explain them one after the other.

Foreground confidence mask

The foreground confidence mask lets you distinguish the foreground subject from the background.
To enable the confidence mask, pass true to enableForegroundConfidenceMask:

```dart
enableForegroundConfidenceMask: true,
```

Foreground bitmap

Similarly, you can also get a bitmap of the foreground subject. To enable that, pass true to enableForegroundBitmap:

```dart
enableForegroundBitmap: true,
```

Multi-subject confidence mask

As with the foreground options, you can use SubjectResultOptions to enable the confidence mask for each foreground subject as follows:

```dart
SubjectResultOptions(
  enableConfidenceMask: true,
  enableSubjectBitmap: false,
)
```

Multi-subject bitmap

Similarly, you can enable the bitmap for each subject:

```dart
SubjectResultOptions(
  enableConfidenceMask: false,
  enableSubjectBitmap: true,
)
```

Process image

```dart
final result = await segmenter.processImage(inputImage);
```

Release resources with close

```dart
segmenter.close();
```

https://vimeo.com/1019321142

In the example above, I used the foreground bitmap. You can also check out the source code below:

https://github.com/bensonarafat/subject_segmentation

I can’t wait to see what you all build with this. Cheers 🍻 🥂.
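To tie the steps together, here is a minimal end-to-end sketch that segments the foreground of an image from a file path and inspects the confidence mask. The exact shape of the result object is an assumption on my part (I use a foregroundConfidenceMask field here, matching the option we enabled); field names may differ between package versions, so check the package’s API docs.

```dart
// A minimal end-to-end sketch combining the steps above.
// ASSUMPTION: the processImage result exposes `foregroundConfidenceMask`
// as a per-pixel confidence list; verify the exact field names against
// the google_mlkit_subject_segmentation API docs.
import 'package:google_mlkit_subject_segmentation/google_mlkit_subject_segmentation.dart';

Future<void> segmentForeground(String filePath) async {
  // 1. Build the input image from a file path.
  final inputImage = InputImage.fromFilePath(filePath);

  // 2. Configure the segmenter: here we only ask for the foreground mask.
  final options = SubjectSegmenterOptions(
    enableForegroundConfidenceMask: true,
    enableForegroundBitmap: false,
    enableMultipleSubjects: SubjectResultOptions(
      enableConfidenceMask: false,
      enableSubjectBitmap: false,
    ),
  );
  final segmenter = SubjectSegmenter(options: options);

  try {
    // 3. Run segmentation on the image (requires an Android device
    //    with the subject segmentation model available).
    final result = await segmenter.processImage(inputImage);

    // 4. Use the confidence mask, e.g. count pixels that are
    //    likely foreground (confidence above 0.5).
    final mask = result.foregroundConfidenceMask;
    if (mask != null) {
      final foregroundPixels = mask.where((c) => c > 0.5).length;
      print('Foreground pixels: $foregroundPixels / ${mask.length}');
    }
  } finally {
    // 5. Always release native resources when done.
    segmenter.close();
  }
}
```

Wrapping the work in try/finally makes sure close() runs even if processImage throws, so the native segmenter is never leaked.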