Yesterday at Google I/O, the team released machine learning capabilities for the Firebase platform in the form of Firebase ML Kit (https://firebase.google.com/products/ml-kit/). Firebase ML Kit comes with ready-to-use APIs for common mobile use cases, which include recognizing text, detecting faces, scanning barcodes, labeling images, and recognizing landmarks.

Firebase ML Kit can be integrated with apps using on-device APIs or Cloud APIs. On-device usage is free of charge, and cloud usage comes with a generous free limit.

In this post we will cover how to create a simple iOS app which integrates with the Firebase ML framework and labels images.

Installing Firebase CocoaPods:

After creating your Xcode project you need to install the required Firebase CocoaPods. Before doing that, make sure you have the latest version of CocoaPods installed. Create a Podfile and specify which pods you want to install. Execute the pod install command from the terminal, which will install the pods specified in the Podfile. After you have installed the required pods, make sure to open the xcworkspace file for your project and not the xcodeproj file.

Configuring a Firebase Project on the Firebase Console:

In order to use most Firebase features you must create a Firebase project on the Firebase Console. The console provides a nice user interface where you can create your Firebase projects, and each project can be configured to use many of the services provided by the Firebase platform.

The Firebase setup generates a GoogleService-Info.plist file, which you download and copy into your Xcode project. Finally, configure Firebase by calling FirebaseApp.configure() inside the didFinishLaunchingWithOptions function.

Firebase ML On-Device API:

Now that we have set up the Firebase platform, the next step is to integrate with the Firebase ML API. Believe it or not, the code below is all you need to integrate with the Firebase ML on-device API.
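The original code listing did not survive in this copy of the post. A minimal sketch of the on-device labeling flow described next, assuming the ML Kit for Firebase API (Vision.vision(), labelDetector(), VisionImage) and a hypothetical imageNames array of images bundled with the project, might look like this:

```swift
import UIKit
import Firebase

class ViewController: UIViewController {

    @IBOutlet weak var resultLabel: UILabel!

    // Hypothetical list of image names bundled with the Xcode project
    let imageNames = ["cat", "bridge", "pizza"]

    lazy var vision = Vision.vision()

    func detectLabels(for uiImage: UIImage) {
        // The label detector performs on-device image labeling
        let labelDetector = vision.labelDetector()

        // VisionImage wraps a UIImage (or a CMSampleBufferRef)
        let visionImage = VisionImage(image: uiImage)

        labelDetector.detect(in: visionImage) { labels, error in
            guard error == nil, let labels = labels, !labels.isEmpty else {
                print(error?.localizedDescription ?? "No labels found")
                return
            }

            // Pick the prediction with the highest confidence
            if let best = labels.max(by: { $0.confidence < $1.confidence }) {
                self.resultLabel.text = best.label
            }

            // Not required: print every prediction with its confidence
            for label in labels {
                print("\(label.label): \(label.confidence)")
            }
        }
    }
}
```

Exact type and property names may differ slightly between ML Kit SDK versions; the structure (create a detector from the shared Vision instance, wrap the image, call detect with a completion handler) is the part that matters.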
We have a list of images that we want to label. The images are part of the Xcode project, and their names are stored in an array. The vision instance exposes a labelDetector function, which returns a label detector; the label detector finds the labels associated with an image. The VisionImage class is initialized with a UIImage or a CMSampleBufferRef. The label detector then calls its detect function, passing in the visionImage object, and the completion handler receives either predictions about the image or an error.

If there are no errors, we simply find the label with the highest confidence using the max function and set it as the text of the UILabel. You don't need to run the final loop; I am just displaying all the different predictions along with their confidence as received from Firebase ML.

The result is shown below:

Firebase ML Cloud API:

The Firebase ML Cloud API performs the image labeling on the cloud. This also gives it access to a very large dataset which is continuously being updated and refined. The code for the Cloud API is very similar to the on-device API; the only minor change is that we use the cloudLabelDetector instead of the default labelDetector.

If you run the code you will see error messages in the console. Most probably the error messages are related to the Firebase ML Cloud account not being set up. Follow the link and set up your Firebase ML Cloud account. Google will credit $300 to your account, which you can use during your first year.

The result is shown below:

I hope you liked the post. Happy coding!

[Download Sample Code]

If you are interested in learning more about integrating Firebase with your iOS apps, then check out my course "Mastering Firebase for iOS Using Swift Language" on www.udemy.com. Thanks for your support!
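For reference, the cloud listing also did not survive in this copy. A minimal sketch of the cloud variant described above, assuming the ML Kit for Firebase cloudLabelDetector() API (where a cloud label's confidence is an optional NSNumber), might look like this:

```swift
import UIKit
import Firebase

func detectLabelsInCloud(for uiImage: UIImage, into resultLabel: UILabel) {
    let vision = Vision.vision()

    // The only change from the on-device version: use the cloud detector
    let cloudDetector = vision.cloudLabelDetector()

    let visionImage = VisionImage(image: uiImage)

    cloudDetector.detect(in: visionImage) { labels, error in
        guard error == nil, let labels = labels, !labels.isEmpty else {
            // An error here usually means the Cloud account is not set up yet
            print(error?.localizedDescription ?? "No labels found")
            return
        }

        // Cloud labels report confidence as an optional NSNumber
        if let best = labels.max(by: {
            ($0.confidence?.floatValue ?? 0) < ($1.confidence?.floatValue ?? 0)
        }) {
            resultLabel.text = best.label
        }
    }
}
```

Everything else, including the VisionImage setup and the completion-handler shape, stays the same as in the on-device version.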