Ashot Gabrelyanov


How we used AI to hybridize humans with cartoon animals and made a business out of it.

Have you ever imagined yourself as a cartoon character? Well, now this is more than real.

We are a team of 20 engineers and art designers who have developed a machine learning technology that morphs human faces with animated characters.

The process starts by constructing a user’s 3D face model from just a single selfie shot. Importantly, our technology even works with older, regular smartphone cameras. With this single photo, our neural network builds a 3D mesh of the user’s head that looks like this:

The neural network regresses a 3D model from a 2D photo
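The post doesn't describe the model's architecture, but a common approach to single-photo reconstruction is a 3D morphable model (3DMM): a network regresses a small vector of coefficients, and the mesh is decoded as the average face plus a weighted sum of learned deformation bases. The sketch below shows only that linear decode step, with toy shapes and random stand-in data; the coefficient count, basis, and `reconstruct_mesh` helper are illustrative assumptions, not the actual system.

```python
import numpy as np

# Toy 3D morphable model (3DMM). A real model would have tens of
# thousands of vertices and a PCA basis learned from face scans.
N_VERTS = 5       # vertices in the toy mesh
N_COEFFS = 3      # identity coefficients predicted by the network

rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(N_VERTS, 3))        # average face shape
basis = rng.normal(size=(N_COEFFS, N_VERTS, 3))   # deformation basis

def reconstruct_mesh(coeffs):
    """Linear 3DMM decode: mean shape plus weighted deformations."""
    return mean_shape + np.tensordot(coeffs, basis, axes=1)

# In production the coefficients would come from a CNN applied to the
# selfie; here we use a fixed vector just to show the decode step.
coeffs = np.array([0.5, -0.2, 0.1])
mesh = reconstruct_mesh(coeffs)
print(mesh.shape)  # (5, 3): x, y, z per vertex
```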

Next, three other neural networks swing into action. The first draws eyebrows, the second detects and matches eye color, and the third detects and draws glasses if the user is wearing them. When these elements are ready, we morph the user with one of our cartoon characters. This takes a fraction of a second.

Each neural network does its job to instantly personalize the character to the user
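The three-stage personalization described above can be sketched as a simple pipeline: each network contributes one trait, and the traits are merged onto the chosen character. The `detect_*` stubs below stand in for the real neural networks, whose actual outputs are not described in the post; all names and return values here are hypothetical.

```python
# Hedged sketch of the three-network personalization pipeline.
# Each detect_* function is a stub standing in for a trained model.

def detect_eyebrows(photo):
    return {"eyebrows": "arched"}       # stub: eyebrow-drawing network

def detect_eye_color(photo):
    return {"eye_color": "brown"}       # stub: eye-color network

def detect_glasses(photo):
    return {"glasses": True}            # stub: glasses-detection network

def personalize(photo, character):
    """Run all trait networks and merge their outputs into the avatar."""
    traits = {}
    for net in (detect_eyebrows, detect_eye_color, detect_glasses):
        traits.update(net(photo))
    return {**character, **traits}

avatar = personalize("selfie.jpg", {"species": "fox"})
print(avatar)
```

Because each network handles one independent trait, they can run in parallel in practice, which is how the whole step can finish in a fraction of a second.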

As you can see outlined in the red boxes, we only morph the restricted area of the user’s face.

We do this because human beings and animals have very different face topologies. Here’s what would happen if we tried to make the characters more human-like:

Here’s how the avatar would look if we morphed larger areas
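Restricting the morph to a small region can be thought of as masked blending: the user's face contributes to the result only inside a mask covering the central face area. The sketch below uses tiny arrays and a simple linear blend as an illustrative assumption; the actual morphing operates on 3D meshes and is not described in the post.

```python
import numpy as np

# Toy sketch of region-restricted morphing: blend the user's texture
# into the character only inside a mask (the "red box" area).
character = np.zeros((4, 4))    # stand-in for the character's texture
user_face = np.ones((4, 4))     # stand-in for the user's face texture

mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0            # restricted morph area

alpha = 0.6                     # morph strength inside the mask
result = character * (1 - mask * alpha) + user_face * (mask * alpha)
print(result)                   # 0.6 inside the box, 0.0 elsewhere
```

Outside the mask the character is untouched, which is what preserves the animal's own face topology.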

Another interesting challenge was how to integrate eyewear into the animal characters. If you’ve ever tried to put sunglasses on a dog, you might understand this issue first hand.

It wasn’t easy to integrate eyewear on an animal character that doesn’t have a human-like nose.

Feeding AI with 3D faces

It took us three years to collect a proprietary dataset of 15,000 face scans. We ensured our dataset captured a range of races, ages, and genders.

We built a special rig of 24 cameras to collect the dataset

We used this data to train our neural networks. This is the backbone of our personalization and facial performance capture technologies.

Our machine learning models process 60 frames per second and recognize a vast number of micro-expression combinations in real time.

The AI-based technology detects emotions in real time using a smartphone
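At 60 fps, per-frame expression coefficients (e.g. weights for smile, jaw open, brow raise) are typically smoothed across frames so the avatar doesn't jitter. The post doesn't say how this is done; the sketch below uses an exponential moving average with an illustrative smoothing constant as one plausible approach.

```python
# Hedged sketch: smoothing a 60 fps stream of expression coefficients
# with an exponential moving average (EMA). alpha is an assumption.

def smooth_stream(frames, alpha=0.5):
    """EMA over per-frame coefficient dicts; returns smoothed frames."""
    state = None
    smoothed = []
    for coeffs in frames:
        if state is None:
            state = dict(coeffs)   # first frame: no history yet
        else:
            state = {k: alpha * coeffs[k] + (1 - alpha) * state[k]
                     for k in coeffs}
        smoothed.append(dict(state))
    return smoothed

frames = [{"smile": 0.0}, {"smile": 1.0}, {"smile": 1.0}]
smoothed = smooth_stream(frames)
print(smoothed)  # smile ramps smoothly: 0.0 -> 0.5 -> 0.75
```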

The neural network can even track tongue positioning in many directions.

By comparison, other avatar apps lack this functionality and offer only two tongue states.

Many of our users asked us to add this feature

We integrated all of this technology into a mobile messaging app called Chudo. We launched it a couple of months ago to introduce our technological developments to the world. Now Chudo is growing 25x faster than early Snapchat.

Accessibility is one of the reasons it’s growing so fast. Unlike other avatar apps that run only on expensive smartphones, our technology supports budget Android devices, old iPhones, and even an iPod.

We made our technology work on an iPod

Unreal characters, real money

Most of the characters in Chudo are free. As an experiment, we started selling a few avatars to help pay the bills. Here’s the leaderboard of our top-selling creatures:

Use cases

There are many industries where people would love to see themselves as fictional characters:

  • Computer Gaming
  • Cartoon Animation
  • Toys
  • Publishing
  • Communication

With Chudo, our team has begun taking over the communication industry.

Do you know any other industries where this technology could be applied?

Share your ideas in the comments or shoot me an email at
