Moodify in action
March was a pretty productive month, all thanks to this major event hosted by the Association for Computing Machinery, NIT Surat Student Chapter. The premise: build whatever you can in a month and show it off in a presentation. The key was innovation and implementation in every aspect, from the technologies involved to the design and the code itself.
Well, the idea (which Dhanush came up with) was an app that recognizes your mood from a snap you take and generates a suitable playlist in the music player. Pretty neat, eh?
To start off, we were actually lost for quite a while because the idea involved technologies that were poorly documented, and of course there were no tutorials to follow. But after some research and experimentation, we decided to use OpenCV with Python to write scripts that could extract the mood from a captured image, and since we were already coding in Python, Flask was chosen as the framework to wrap all of this up as a web app.
The development work was split into components: the camera, the music player, and the server that handles the routes and the database. The front end was built with AngularJS to give it an app-like flow, while the UI was designed on top of the Materialize CSS library.
Talking about the individual components:
The Camera and the Music Player
Camera: WebcamJS handles the camera entirely on the front end; the captured image is encoded as base64 and passed to the server.
Music Player: the SoundManager 2 API, wrapped with AngularJS, handles the audio files.
Server: a basic Flask server that connects to the MongoDB database and handles the routes. The basic routes for this app are:
WebApp file structure and Database JSON data
You start at /cam, where you can take a snap, retake it, and proceed. This saves the snap to the server and moves on to the /emotion route, which calls the Python script that extracts the mood and returns it, then redirects to /player with the mood as a parameter. /player in turn calls /songs with that mood parameter to fetch songs matching the detected mood, which are then displayed in the music player.
Okay, this part isn't exactly mine, but I'll try to explain it in simple terms the way I understood it. Basically, the /emotion route receives the image from the camera and stores it on the server, and the mood-recognition script then runs on that image.
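To make that flow concrete, here is a minimal Flask sketch of how such routes could be wired up. This is not the project's actual server code: the database name, template names, and the save_snap / get_emotion helpers are placeholders made up for illustration.

```python
# Minimal sketch of the route flow described above (names are placeholders,
# not the actual project code).
import base64
from flask import Flask, render_template, request, redirect, url_for, jsonify
from pymongo import MongoClient

from emotion_script import get_emotion   # hypothetical wrapper around the OpenCV script

app = Flask(__name__)
db = MongoClient()["moodify"]             # assumed database name

def save_snap(data_url, path):
    # Strip the data-URL header that WebcamJS prepends and decode the base64 payload
    encoded = data_url.split(",", 1)[-1]
    with open(path, "wb") as f:
        f.write(base64.b64decode(encoded))

@app.route("/cam")
def cam():
    # Page with the WebcamJS widget for taking a snap
    return render_template("cam.html")

@app.route("/emotion", methods=["POST"])
def emotion():
    # Receive the snap, save it, and run the mood-recognition script on it
    save_snap(request.form["image"], "snap.jpg")
    mood = get_emotion("snap.jpg")
    return redirect(url_for("player", mood=mood))

@app.route("/player")
def player():
    # Music player page; the AngularJS front end fetches tracks via /songs
    return render_template("player.html", mood=request.args.get("mood"))

@app.route("/songs")
def songs():
    # Return the songs tagged with the detected mood from MongoDB
    mood = request.args.get("mood")
    return jsonify(list(db.songs.find({"mood": mood}, {"_id": 0})))

if __name__ == "__main__":
    app.run(debug=True)
```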
The script first optimizes the image so that detection is as accurate as possible: it crops the image down to just the face and converts it to grayscale. With the image ready to work on, OpenCV analyses the contours and detects the features, or landmarks, of the face, as depicted in the picture.
Detecting facial landmarks
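For the curious, here is a rough sketch of what that preprocessing and landmark step could look like, assuming OpenCV's bundled frontal-face Haar cascade and dlib's standard 68-point shape predictor; the actual script's parameters and model files may differ.

```python
# Sketch of the preprocessing + landmark step: detect the face, crop to it,
# convert to grayscale, then locate the 68 dlib landmarks within the crop.
import cv2
import dlib
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Detect the face and crop the image down to just that region
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]

    # Locate the facial landmarks inside the cropped face
    shape = predictor(face, dlib.rectangle(0, 0, w, h))
    return np.array([(p.x, p.y) for p in shape.parts()])
```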
Using this data, along with machine learning, the mood can be identified. Models are built by training on many faces: multiple faces showing the same mood are fed in as training data, so the model can identify the nearest mood when it is given a new face.
Training data for anger mood
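The training itself isn't shown here, but conceptually it works something like the sketch below: flatten each labelled face's landmark coordinates into a feature vector and fit a classifier on them. The folder layout, mood labels, and the choice of a scikit-learn SVM are all assumptions for illustration, and it reuses extract_landmarks from the previous sketch.

```python
# Illustrative training sketch (not the project's actual model): landmark
# coordinates from labelled face images are used as features for an SVM.
import glob
import numpy as np
from sklearn.svm import SVC

MOODS = ["anger", "happiness", "sadness", "neutral"]   # assumed label set

def build_dataset(root="training_data"):
    # Collect one feature vector per image, grouped as training_data/<mood>/*.jpg
    X, y = [], []
    for mood in MOODS:
        for path in glob.glob(f"{root}/{mood}/*.jpg"):
            landmarks = extract_landmarks(path)        # from the sketch above
            if landmarks is not None:
                X.append(landmarks.flatten())          # 68 (x, y) points -> 136 features
                y.append(mood)
    return np.array(X), np.array(y)

# (In practice the coordinates would be normalised to the face size first.)
X, y = build_dataset()
clf = SVC(kernel="linear")
clf.fit(X, y)

# Classify a new snap as its nearest mood
features = extract_landmarks("snap.jpg").flatten().reshape(1, -1)
print(clf.predict(features)[0])
```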
This is achieved using OpenCV, more precisely Haar Cascades for face detection, and dlib predictors for facial landmark analysis.
The project’s official repository:
ajayns/amoc-project: Moodify recognizes emotion from a face and generates a suitable playlist in the music player (github.com)
Developers:
ajayns (Ajay NS) on GitHub (github.com)
dhanushkamath (Dhanush Kamath) on GitHub (github.com)
Also, feel free to contact me at [email protected] or check out my GitHub. Do recommend and share this post if you found it useful.