
Building a Dashboard that recognizes you

by Joe Zeoli, March 1st, 2017

At 20nine, we recently got a big wall-mounted TV to use in our common area — mostly for our daily status meetings and the occasional YouTube clip. But I noticed that when it came to linking up with our internal resource management tool (we call it The Forge), it was off 90% of the time. To remedy this, I thought it would be an exciting R&D project to create a real-time dashboard that would grab data from our resource management app and display a high-level weekly overview of what’s due, who’s working on it, and the statuses of our open projects. To encourage the team to engage with it, I thought it would be fun to build in facial recognition so that when someone got close to the screen, it would show a welcome message, status updates and content relevant to them.

Creating a public API and the dashboard

The Forge is our proprietary resource management tool, and it runs mainly on NodeJS. It helps us manage things like individual workload, milestones, deliverables and project timelines. I needed to create a public API that could not only expose that data, but also push updates in real time via Socket IO whenever someone makes an edit.

I created the dashboard with NodeJS and Angular 1 to connect seamlessly to the main application. I’m also pulling information from our conference room calendars to show in real time whether they’re being used, by whom, and for what.

A mockup of the dashboard layout

Facial Recognition

After some research, I landed on using OpenCV and Python to handle the face detection and recognition. I relied heavily on Bytefish’s tutorial and code (link below) to get a basic system up and running. The basic idea is that the program first finds a face in the frame using Haar cascade frontal face detection, then checks that face against a trained model and returns the closest match using the k-Nearest Neighbor model with k=1. k-Nearest Neighbor is a relatively simple algorithm used for classification and regression.


GitHub - bytefish/facerec: Implements face recognition algorithms for MATLAB/GNU Octave and Python (https://github.com/bytefish/facerec)
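
For reference, here’s a minimal sketch of that detect-then-classify loop in plain Python with OpenCV and NumPy, not the facerec library itself; the cascade path, crop size and distance metric are assumptions on my part.

import cv2
import numpy as np

CASCADE_PATH = "haarcascade_frontalface_default.xml"  # assumed path to the Haar cascade XML
FACE_SIZE = (100, 100)                                # assumed normalization size

detector = cv2.CascadeClassifier(CASCADE_PATH)

def find_faces(frame):
    """Return normalized grayscale crops for every face the Haar cascade finds."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE) for (x, y, w, h) in boxes]

def predict(face, train_vectors, train_labels):
    """1-nearest-neighbor: return the closest training label and its distance."""
    vec = face.flatten().astype(np.float32)
    distances = np.linalg.norm(train_vectors - vec, axis=1)
    nearest = int(np.argmin(distances))
    return train_labels[nearest], float(distances[nearest])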

I realized very early on that manually finding, cropping and creating training images for each user was going to be way too time consuming. To avoid that tedious labor, I used the same face-detection step to save a cropped picture of every face it found, and had the team stand in front of the camera a few times to capture samples. Below are some Facebook-profile-ready, quality examples of myself.

What the OpenCV crop captures
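
A rough sketch of that capture step might look like the following, reusing the find_faces() helper from the detection sketch above; the camera index, sample count and output folder are assumptions.

import os
import cv2

def capture_training_images(user_id, num_samples=20, out_dir="training"):
    """Save cropped face images for one person standing in front of the camera."""
    target = os.path.join(out_dir, str(user_id))
    os.makedirs(target, exist_ok=True)
    cam = cv2.VideoCapture(0)  # assumes the camera shows up as video device 0
    saved = 0
    while saved < num_samples:
        ok, frame = cam.read()
        if not ok:
            continue
        for face in find_faces(frame):  # find_faces() from the detection sketch above
            cv2.imwrite(os.path.join(target, "%03d.png" % saved), face)
            saved += 1
            if saved >= num_samples:
                break
    cam.release()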

To build out the training data, I created a folder for each user in The Forge, named with their user id. If a match is detected, the script pulls the folder name (the user id) and makes a POST request to the dashboard’s Node API, which emits an event via Socket IO to the client. The client then makes a GET request to the public API, grabs only the milestones and projects associated with that user, and updates the screen via the Angular scope.
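
Something along these lines, loading the per-user folders and notifying the dashboard, is roughly how the Python side fits together; the endpoint URL and payload shape here are placeholders, not the actual Forge API.

import os
import cv2
import numpy as np
import requests

def load_training_data(root="training", size=(100, 100)):
    """Each subfolder name is a Forge user id; its images are that user's samples."""
    vectors, labels = [], []
    for user_id in sorted(os.listdir(root)):
        folder = os.path.join(root, user_id)
        if not os.path.isdir(folder):
            continue
        for name in sorted(os.listdir(folder)):
            img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            vectors.append(cv2.resize(img, size).flatten().astype(np.float32))
            labels.append(user_id)
    return np.array(vectors), labels

def notify_dashboard(user_id):
    """POST the matched user id so the Node app can emit a Socket IO event to the client."""
    requests.post("http://localhost:3000/api/recognized", json={"userId": user_id})  # placeholder URL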

Out of the box, the algorithm always returned a match for any detected face, regardless of how far away the nearest neighbor was. Through some trial and error, I set a limit so it returns false if the match isn’t reasonably close. In this case, accuracy was more important than always finding a match.
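
The cutoff itself is just a simple distance check on top of the predict() helper from the earlier sketch; the limit value below is made up and would need tuning against real training images.

DISTANCE_LIMIT = 2500.0  # placeholder cutoff, tuned by trial and error in practice

def recognize(face, train_vectors, train_labels):
    """Return a user id only if the nearest neighbor is reasonably close."""
    user_id, distance = predict(face, train_vectors, train_labels)
    if distance > DISTANCE_LIMIT:
        return None  # too far from every training image: treat as no match
    return user_id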

The hardware

I grabbed a Raspberry Pi 3 setup and a Pi camera to host the dashboard and facial recognition apps. Setting up the dependencies was definitely more time consuming than I had initially expected, but for the most part everything ran smoothly. The only issue I ran into was compatibility between OpenCV’s video capture and the Pi camera. For time’s sake, I ended up adding the command below to the Pi startup script to load the camera driver, but I plan to use the Pi Camera utility in the future.

# load the V4L2 driver so OpenCV's video capture can see the Pi camera
sudo modprobe bcm2835-v4l2

The Pi is set up to first run the Node application via Forever (https://github.com/foreverjs/forever) and then launch Chromium in development mode. I also installed XScreenSaver as an easy way to prevent the Pi from turning the screen off after a certain period of time. Below is a video testing out the dashboard and facial recognition.

Thanks for reading! If you have any questions, feel free to reach out on Twitter @joezeoli.