https://firebase.google.com/products/ml-kit/
Yesterday at Google I/O, the Firebase team released the machine learning capabilities of the Firebase platform in the form of Firebase ML Kit. Firebase ML Kit comes with ready-to-use APIs for common mobile use cases, including recognizing text, detecting faces, scanning barcodes, labeling images, and recognizing landmarks.
Firebase ML Kit can be integrated with apps using either on-device APIs or cloud APIs. On-device usage is free of charge, while cloud usage comes with a generous free quota. In this post we will cover how to create a simple iOS app that integrates with the Firebase ML framework to label images.
After creating your Xcode project, you need to install the required Firebase CocoaPods. Before doing that, make sure you have the latest version of CocoaPods installed. Create a Podfile and specify which pods you want to install, as shown below:
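Here is a minimal Podfile sketch. I am assuming the ML Kit pod names from the launch announcement (Firebase/Core, Firebase/MLVision, and Firebase/MLVisionLabelModel for the on-device label model) and a hypothetical target name MLKitDemo; adjust both to match your project and the current Firebase documentation.

```ruby
# Podfile
platform :ios, '11.0'
use_frameworks!

target 'MLKitDemo' do # hypothetical target name
  pod 'Firebase/Core'               # core Firebase SDK
  pod 'Firebase/MLVision'           # ML Kit Vision APIs
  pod 'Firebase/MLVisionLabelModel' # on-device label detection model
end
```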
Execute the pod install command from the terminal, which will install the pods specified in the Podfile. After you have installed the required pods, make sure to open the .xcworkspace file for your project and not the .xcodeproj file.
In order to use most Firebase features, you must create a Firebase project in the Firebase Console. The console provides a nice user interface where you can create and manage your Firebase projects, and each project can be configured to use many of the services provided by the Firebase platform.
The Firebase setup generates a GoogleService-Info.plist file, which you download and copy into your Xcode project. Finally, configure Firebase by calling FirebaseApp.configure() inside the application(_:didFinishLaunchingWithOptions:) function, as shown below:
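A minimal sketch of the AppDelegate wiring, assuming a standard UIKit app template:

```swift
import UIKit
import Firebase

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {

    var window: UIWindow?

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Reads GoogleService-Info.plist and connects the app to your Firebase project.
        FirebaseApp.configure()
        return true
    }
}
```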
Firebase ML On-Device API:
Now that we have set up the Firebase platform, the next step is to integrate the Firebase ML API. Believe it or not, the code below is all you need to integrate with the Firebase ML on-device API. We have a list of images that we want to label; the images are part of the Xcode project and their names are stored in an array.
The Vision instance's labelDetector function returns a label detector, which detects the labels associated with an image. A VisionImage instance is created from a UIImage or a CMSampleBufferRef. The label detector then calls its detect function, passing in the visionImage object, which results in either predictions about the image or an error. If there is no error, we simply find the label with the highest confidence using the max function and set it as the text of a UILabel.
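A sketch of that flow, based on the ML Kit API surface at launch; the imageNames array and the resultLabel outlet are assumptions standing in for your own project's assets and UI:

```swift
import UIKit
import Firebase

class ViewController: UIViewController {

    @IBOutlet weak var resultLabel: UILabel! // assumed UILabel outlet

    // Hypothetical image names; these assets are assumed to exist in the project.
    let imageNames = ["beach", "city", "puppy"]

    lazy var vision = Vision.vision()

    func labelImage(named name: String) {
        guard let uiImage = UIImage(named: name) else { return }

        // On-device label detector; free of charge and works offline.
        let labelDetector = vision.labelDetector()
        let visionImage = VisionImage(image: uiImage)

        labelDetector.detect(in: visionImage) { labels, error in
            guard error == nil, let labels = labels, !labels.isEmpty else {
                print(error?.localizedDescription ?? "No labels detected")
                return
            }
            // Pick the prediction with the highest confidence and show it.
            if let best = labels.max(by: { $0.confidence < $1.confidence }) {
                self.resultLabel.text = best.label
            }
        }
    }
}
```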
The result is shown below:
Firebase ML Cloud API:
The Firebase ML Cloud API performs the image labeling in the cloud. This gives it access to a much larger dataset, one that is continuously being updated and refined.
The code for the Cloud API is very similar to the on-device API. The only minor change is that we use the cloudLabelDetector instead of the default labelDetector, as shown below:
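A sketch of the cloud variant under the same assumptions as before; note that the cloud label type exposes its label and confidence as optionals, so they are unwrapped defensively:

```swift
func labelImageInCloud(_ uiImage: UIImage) {
    // Cloud label detector; requires cloud usage to be enabled for the project.
    let cloudDetector = vision.cloudLabelDetector()
    let visionImage = VisionImage(image: uiImage)

    cloudDetector.detect(in: visionImage) { labels, error in
        guard error == nil, let labels = labels, !labels.isEmpty else {
            print(error?.localizedDescription ?? "No labels detected")
            return
        }
        // Cloud labels carry optional confidence values, so compare with a fallback of 0.
        if let best = labels.max(by: { ($0.confidence?.floatValue ?? 0) < ($1.confidence?.floatValue ?? 0) }) {
            self.resultLabel.text = best.label
        }
    }
}
```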
If you run the above code before enabling cloud usage, you will see error messages in the console; most probably they are related to the Firebase ML Cloud account not being set up. Follow the link in the error message and set up your Firebase ML Cloud account. Google will credit $300 to your account, which you can use during your first year.
The result is shown below:
I hope you liked the post, happy coding!
If you are interested in learning more about integrating Firebase with your iOS apps then check out my course “Mastering Firebase for iOS Using Swift Language” below. Thanks for your support!
Mastering Firebase for iOS Using Swift Language: Learn to integrate Firebase with your iOS apps by building real world projects! (www.udemy.com)