Make an Eye tracking and Face detection app as a beginner

by Pradyuman, January 10th, 2019

We all know how cool an Android app looks when it can detect our face or track whether our eyes are closed or open. It becomes even cooler when the app can also detect whether we are smiling, reading on the phone, or not looking at it.


I believe... whatever appeals to me, I simply build it!

Sorry, tried a “The Dark Knight” pun :)

So let us make an Android eye tracking and face detection app using the Google Vision API.

From Google:

Cloud Vision API enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API. It quickly classifies images into thousands of categories (such as, “sailboat”), detects individual objects and faces within images, and reads printed words contained within images. You can build metadata on your image catalog, moderate offensive content, or enable new marketing scenarios through image sentiment analysis.

Here we will be making an Android app that can track our face and detect whether our eyes are closed or open.

Sounds cool? It’s not even that hard.

So let’s dive into understanding how this would work in a simple flow chart.

Our Android app → Uses camera → Detects face → Starts an operation → Checks if the viewer’s eyes are open → Continues the operation → If eyes are closed → Stops the operation.

This is the basic idea for our Android app. For learning purposes we will do just this much, but far more advanced features can be added using the Google Vision API.
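The “checks if eyes are open” step in the flow above boils down to a threshold test on the per-eye open probabilities that the Face API reports (a value between 0 and 1, or -1 when the probability could not be computed). As a standalone sketch, with the threshold and names chosen for illustration:

```java
public class EyeLogic {
    // Probabilities above this count as "eye open" (illustrative value).
    static final float THRESHOLD = 0.75f;

    // True when at least one eye is confidently open. A value of -1
    // ("probability unknown") is safely treated as not open.
    public static boolean shouldPlay(float leftOpenProb, float rightOpenProb) {
        return leftOpenProb > THRESHOLD || rightOpenProb > THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(shouldPlay(0.9f, 0.1f));   // true: left eye open
        System.out.println(shouldPlay(0.2f, 0.3f));   // false: both closed
        System.out.println(shouldPlay(-1f, -1f));     // false: unknown
    }
}
```

This is exactly the check our app will run on every camera frame to decide whether the video keeps playing.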

The operation that we’d be carrying out in our app is playing a simple video. Well, I chose a video from PIXAR Animation.

You can find the full source code for the app here.

Start a new Android project in your IDE, such as Android Studio.

Go to the manifest file and add the following permission, as this app needs permission to use the front camera of the device.

<uses-permission android:name="android.permission.CAMERA" />

We also have to import the Google Vision API into our app, so head to the app-level Gradle file, usually listed as build.gradle (Module: app).

You will see a file with text that resembles this:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 28
    defaultConfig {
        applicationId "com.pd.trackeye"
        minSdkVersion 15
        targetSdkVersion 28
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
        }
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'com.android.support.constraint:constraint-layout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'com.android.support.test:runner:1.0.2'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
    implementation 'com.google.android.gms:play-services-vision:17.0.2'
}

If it looks like this, you’re in the correct file. The key line is the play-services-vision dependency; use the latest version available.

You’ll get a top-bar pop-up stating to sync files. Click on Sync Now and wait for the project to build.

After you get all green OK signs in your Build log, you can move ahead.

Now let’s write some Android code that will do the magic. Head to the MainActivity file and declare the following variables:

private static final String TAG = "MainActivity";
VideoView videoView;
EditText textView;

// For looking at logs
ArrayAdapter<String> adapter;
ArrayList<String> list = new ArrayList<>();

CameraSource cameraSource;

These lines of code should be beneath the line that states something like this:

public class MainActivity extends AppCompatActivity {

Now let’s do the magic code inside the onCreate() method that you will see on the same file.

Let’s first add code that asks the user to grant the camera permission, as without it our app will crash and not work.

if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.CAMERA}, 1);
    Toast.makeText(this, "Grant Permission and restart app", Toast.LENGTH_SHORT).show();
} else {
    // The ids must match the views declared in activity_main.xml
    videoView = findViewById(R.id.videoView);
    textView = findViewById(R.id.textView);
    adapter = new ArrayAdapter<>(this, android.R.layout.simple_list_item_1, list);
    videoView.setVideoURI(Uri.parse("android.resource://" + getPackageName() + "/" + R.raw.videoplayback));
    videoView.start();
    createCameraSource();
}

Add the above code inside the **onCreate()** method that we talked about above.

Now we should add the code that tracks the eyes using the Google Vision API. We will make a new private class for that, named **EyesTracker**. The code for the class looks like this:

private class EyesTracker extends Tracker<Face> {

    // Eye-open probabilities above this value count as "open".
    private final float THRESHOLD = 0.75f;

    private EyesTracker() {
    }

    @Override
    public void onUpdate(Detector.Detections<Face> detections, Face face) {
        if (face.getIsLeftEyeOpenProbability() > THRESHOLD || face.getIsRightEyeOpenProbability() > THRESHOLD) {
            Log.i(TAG, "onUpdate: Eyes Detected");
            showStatus("Eyes Detected and open, so video continues");
            if (!videoView.isPlaying()) {
                videoView.start();
            }
        } else {
            if (videoView.isPlaying()) {
                videoView.pause();
            }
            showStatus("Eyes Detected and closed, so video paused");
        }
    }

    @Override
    public void onMissing(Detector.Detections<Face> detections) {
        showStatus("Face Not Detected yet!");
    }

    @Override
    public void onDone() {
    }
}
Here videoView is a reference to the VideoView declared in activity_main.xml, which makes up the UI of the app.
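EyesTracker also calls a showStatus() helper that isn’t shown in this excerpt. A minimal sketch of it, assuming it only appends the message to the log list and displays it in the EditText (tracker callbacks may arrive off the main thread, so the UI update is posted with runOnUiThread()):

```java
private void showStatus(final String message) {
    // UI views must only be touched from the main thread.
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            list.add(message);
            adapter.notifyDataSetChanged();
            textView.setText(message);
        }
    });
}
```

Place this method inside MainActivity alongside the tracker classes.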

Now we can add the code that detects the user’s face in a new private class named **FaceTrackerFactory**. The code will look something like this:

private class FaceTrackerFactory implements MultiProcessor.Factory<Face> {

    private FaceTrackerFactory() {
    }

    @Override
    public Tracker<Face> create(Face face) {
        return new EyesTracker();
    }
}

The createCameraSource() method then builds a FaceDetector, attaches the factory, and starts the front camera:

public void createCameraSource() {
    FaceDetector detector = new FaceDetector.Builder(this)
            .setTrackingEnabled(true)
            .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
            .setMode(FaceDetector.FAST_MODE)
            .build();
    detector.setProcessor(new MultiProcessor.Builder<>(new FaceTrackerFactory()).build());

    cameraSource = new CameraSource.Builder(this, detector)
            .setRequestedPreviewSize(1024, 768)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30.0f)
            .build();

    try {
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
                != PackageManager.PERMISSION_GRANTED) {
            // Permission is requested in onCreate(); bail out if it is still missing.
            return;
        }
        cameraSource.start();
    } catch (IOException e) {
        Log.e(TAG, "createCameraSource: " + e.getMessage());
    }
}

We also need to pause the video and release the camera, which we take care of in the onDestroy() and onPause() methods of MainActivity.
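A minimal sketch of that cleanup, assuming cameraSource and videoView are the fields declared earlier (CameraSource.release() stops the camera and frees its resources):

```java
@Override
protected void onPause() {
    super.onPause();
    if (videoView != null && videoView.isPlaying()) {
        videoView.pause();
    }
}

@Override
protected void onDestroy() {
    super.onDestroy();
    if (cameraSource != null) {
        cameraSource.release();  // stop the camera and free its resources
        cameraSource = null;
    }
}
```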

Now let’s add some XML code for basic UI of the app.

Head to the file named activity_main.xml and replace the code in it with the following.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity"
    android:orientation="vertical">

    <EditText
        android:id="@+id/textView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content" />

    <VideoView
        android:id="@+id/videoView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</LinearLayout>
All this code does is add an EditText that shows the user what is and isn’t currently being detected, and a VideoView that plays the video.

You can find the full source code for the app here.

And that’s pretty much it. You’ve built your own eye tracking and face detecting android app.

Once everything is done, it looks like this:

Eyes open

Eyes closed

Now go and show off!

Read my previous post about ChocoBar, an Android library that helps you customise a Snackbar with very few lines of code.