AI Is Getting Emotional. Here’s Why That’s A Good Thing For Companies And Consumers Alike

by Jon Shalowitz, February 6th, 2019

On season two of the trendy HBO drama Westworld, the company co-founders host a party for investors. At the end of the party, the founder asks one of the investors if he can tell which of the attendees is a robot. The investor sizes everyone up and finally guesses one woman, who looks too impeccable to be a human. But ultimately, all the guests are robots. They were so lifelike the investor couldn’t tell they were machines.

Of course, we’re a long way from living among robots that lifelike. But emotional AI — or emotion-recognition technology — is the next frontier of artificial intelligence.

That said, it isn’t new. MIT professor Joseph Weizenbaum created ELIZA, a computer program that conversed with humans, as early as the 1960s. But what once required specialized labs and supercomputers can now be deployed to the cloud far more easily and at a fraction of the cost.

Democratized access to the hardware needed to run AI has made it increasingly popular with all kinds of companies in recent years.

Annette Zimmermann, vice president of research at Gartner, says that by the year 2022, “your personal device will know more about your emotional state than your own family.” And a recent study from Ohio State University claimed that its algorithm was better at detecting emotions than people are. In the near future, a variety of AI systems and devices will recognize, interpret, process, and simulate human emotions.

Not everyone is ready to welcome AI with open arms.

Many fear that AI will become conscious and seek to destroy us. Others worry that bad actors will use AI to achieve evil ends. But these fears that AI will develop awareness and overthrow humanity are often grounded in serious misconceptions.

Any technology can seem daunting when it’s new, but emotional AI isn’t scary.

As it becomes more a part of everyday life and everyday experiences online, it’s going to improve user experience for customers and help companies provide the best service they can.

Here’s what you need to know:

Humans are basically biological algorithms.

People who are intimidated by AI often find solace in the fact that it can’t replicate human emotion — but that’s not exactly true.

In Homo Deus, Professor Yuval Noah Harari writes that humans are essentially a collection of biological algorithms shaped by millions of years of evolution. He says that there is no reason to think that non-organic algorithms couldn’t replicate and surpass everything that organic algorithms can do.

This sentiment is echoed by Max Tegmark in his book Life 3.0: Being Human in the Age of Artificial Intelligence.

The idea is that our emotions and feelings are the product of organic algorithms, which are shaped by our cultural history, upbringing, and life experiences. They can, therefore, be reverse-engineered. If Professor Harari and Tegmark are right, computers will eventually become better at manipulating human emotions than humans themselves.

People already touch their phones an average of 2,617 times a day, a level of engagement that indicates a future controlled by technology is fast approaching.

As artificial intelligence gets more sophisticated, people will happily allow algorithms to handle the more cumbersome parts of their lives.

Emotional AI is the new lie detector.

The old-fashioned polygraph monitors physiological changes in the body (blood pressure, pulse) to determine if someone is telling the truth. Machine learning does the same thing, but without all the wires and cumbersome machinery.
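
To make that concrete, here’s a toy sketch, not any vendor’s actual system, of how a classifier could be trained on physiological signals such as pulse, blood pressure, and eye movement to flag potentially deceptive answers. The features, data, and model are invented purely for illustration (Python with scikit-learn).

```python
# Toy illustration only: the feature set, synthetic data, and model below are
# assumptions for the example, not how AVATAR or any real system works.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic dataset: each row is [pulse_bpm, systolic_bp, gaze_shifts_per_min].
truthful = rng.normal(loc=[70, 120, 8], scale=[8, 10, 3], size=(200, 3))
deceptive = rng.normal(loc=[85, 135, 15], scale=[10, 12, 4], size=(200, 3))

X = np.vstack([truthful, deceptive])
y = np.array([0] * 200 + [1] * 200)  # 0 = truthful, 1 = deceptive

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "lie detector" is just a classifier over physiological features.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy on toy data: {model.score(X_test, y_test):.2f}")
```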

In fact, the U.S. Department of Homeland Security and authorities in Canada and the European Union are testing a system called AVATAR, which uses AI to monitor people for lies during border crossings. “The system can detect changes in the eyes, voice, gestures, and posture to determine potential risk,” Aaron Elkins, an assistant professor of management information systems at San Diego State University, has said. “It can even tell when you’re curling your toes.”

AVATAR isn’t the only digital lie detection system.

A company called Converus announced last month that its EyeDetect system, which administers a 30-minute test judging truthfulness based on a computer’s observations of eye movement, would be accepted as evidence in a New Mexico court.

As the tech becomes more sophisticated, the average smartphone user will be able to benefit from emotional AI.

The iPhone XR, which I recently bought, has facial recognition software, which means no touch interaction is needed for things like payments, login, and verification. You don’t have to remember passwords or PINs, and it makes everything more secure.

Overall, it makes using your phone much faster and easier.

Dozens of phones now come with face unlock features, like Google’s Pixel 2, Samsung’s Galaxy Note 9, and Motorola’s Moto G6. And using what are called deep neural networks — vast networks of hardware and software that approximate the web of neurons in the human brain — companies like Google and Facebook are working on similar face recognition technology and have already rolled it into their online services.
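
As a rough illustration of what “deep neural network” means here, the sketch below defines a tiny convolutional network in Python with PyTorch that maps a 48×48 grayscale face image to one of seven emotion labels. The architecture, input size, and label count are assumptions for the example, not the networks Google or Facebook actually run.

```python
# Minimal sketch of an emotion-recognition CNN. Everything here (layer sizes,
# 48x48 input, seven emotion classes) is illustrative, not a production model.
import torch
import torch.nn as nn

class TinyEmotionNet(nn.Module):
    def __init__(self, num_emotions: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 48x48 -> 24x24
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learn higher-level facial patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 24x24 -> 12x12
        )
        self.classifier = nn.Linear(32 * 12 * 12, num_emotions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = x.flatten(start_dim=1)
        return self.classifier(x)

# Usage with a dummy batch of four fake 48x48 grayscale "face" images.
model = TinyEmotionNet()
dummy_faces = torch.randn(4, 1, 48, 48)
logits = model(dummy_faces)       # shape: (4, 7)
print(logits.argmax(dim=1))       # most likely emotion index for each image
```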

It’s only a matter of time until emotional AI is fully mainstream.

AI can pick up on emotional cues to improve your online experience.

Consumers tend to rely more on their emotions than anything else to make purchase decisions.

When you go into a store and it’s obvious you’re having a bad day, the clerk can likely look at your face and know that. It will dictate how they interact with you, how they approach you, and what they try to sell you.

But with so much of our purchasing activity moving online, it’s more important than ever for software to pick up on emotional cues in the same way a store clerk does.

That’s where emotional AI comes in.

AI and machine learning allow retailers to provide a better experience by personalizing each interaction in ways human employees cannot replicate. Emotional AI can empathize with a customer’s frustration, absorb abuse, and come back with a smile. It can also pick up on your desires and respond to your precise emotional needs.

AI already does this by leveraging our online history: advanced learning algorithms developed at Facebook and Google have been applied to a treasure trove of data from billions of people. By analyzing our communications, friends, and cultural context, these algorithms can already identify many of our desires and emotional triggers.
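
For a sense of how text can be turned into an emotional signal, here is a minimal sketch in Python using scikit-learn: a bag-of-words classifier trained on a handful of invented messages labeled “frustrated” or “happy.” It is nothing like the scale or sophistication of Facebook’s or Google’s systems, but the underlying idea, mapping what someone writes to how they probably feel, is the same.

```python
# Toy sentiment/emotion classifier. The training messages and labels are
# invented for illustration; real systems learn from vastly more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "this is the third time my order has been wrong",
    "i have been on hold for an hour, this is ridiculous",
    "nothing works and nobody is helping me",
    "thanks so much, that fixed it right away",
    "love the new update, everything feels faster",
    "great service, the agent was really helpful",
]
labels = ["frustrated", "frustrated", "frustrated", "happy", "happy", "happy"]

# TF-IDF turns each message into weighted word features; logistic regression
# learns which words signal which emotional state.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

# A support or retail system could use a prediction like this to decide
# how to respond to the customer.
print(classifier.predict(["why is this still broken after two weeks"]))
```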

As with the early days of the Web, there remains much work to do — and many ethical and regulatory conversations still to be had — before emotional AI can be seamlessly integrated into the daily lives of consumers.

But things are a lot farther along than most people expected even a few years ago. Emotional AI isn’t about being human; it’s about making our online experience as human-like as possible.

It’s not something to be intimidated by, and for tech companies (and consumers), it’s a major opportunity.