Voice is the Safest and Most Accurate for Emotion AI Analysis

by Rana Gujral August 19th, 2019

Too Long; Didn't Read

The use of biometric identifiers is ubiquitous in the business world. Facial recognition is achieved by applying AI software to facial geometry data, which results in a rapid match. Conversational AI instead matches an employee's voice to a different biometric data set, a voice print, derived from a previous recording of the employee's speech. Businesses are justifiably concerned about the risks associated with biometric data and may opt for conversational AI over facial recognition, which has been the subject of criticism in recent weeks.

Voice is one of several unique, innate, and immutable biometric identifiers. Others include retinal scans, iris scans, and facial geometry scans. As technology evolves, so does the public's concern over privacy. Consider, for example, the turmoil DNA evidence introduced to our legal system in the late 1990s, or the controversial introduction of fingerprint evidence at the 1910 trial for the murder of Clarence Hiller in Chicago, Illinois.

Today, the use of biometric identifiers is ubiquitous in the business world. Fingerprints, for example, unlock your iPhone, grant access to your laptop, and may soon authenticate your credit card purchases. Many businesses use biometric data to ensure the accuracy of employee timekeeping or to limit employee access to specific areas of the workplace.

Predictably, privacy advocates have raised concerns that biometric data has the potential to undermine anonymity or exploit consumers for monetary gain. As a result, state and federal legislatures are considering laws to protect the public from potential harm. In the case of DNA, the result was the Genetic Information Nondiscrimination Act (GINA); for biometrics, it was Illinois' 2008 passage of the Biometric Information Privacy Act (BIPA), which requires a customer's affirmative consent before a company can collect biometric markers, including fingerprints and facial geometry scans.

Additionally, some courts have determined that BIPA's protections for facial geometry extend to facial recognition technology, which brings us to the nexus of artificial intelligence (AI) and biometric data, and the implications of that technology for business use.

Facial Recognition vs. Conversational AI

Facial recognition is achieved by applying AI software to facial geometry data, which results in a rapid match to an individual whose facial geometry data is on file. For example, an employee might pass by a camera as he enters his employer’s building.

That image would then be compared via AI to the facial geometry data on file with the employer’s HR department. If the AI software fails to generate a match, the individual could be barred from entry. It doesn’t require a great deal of imagination to see the possibilities here and to understand why business is so welcoming of this technology.
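As a rough sketch of the entry check described above (not any particular vendor's pipeline), the comparison usually comes down to measuring how far a freshly captured facial-geometry embedding sits from the template on file and applying a threshold. Everything below, from the function name to the threshold value, is a hypothetical illustration:

```python
import numpy as np

# Hypothetical setup: embeddings would come from a face-analysis model applied
# to the camera image; here they are just fixed-length vectors for illustration.
MATCH_THRESHOLD = 0.6  # assumed value; real deployments tune this carefully

def is_match(enrolled: np.ndarray, live: np.ndarray,
             threshold: float = MATCH_THRESHOLD) -> bool:
    """Return True if the live capture is close enough to the enrolled template."""
    distance = np.linalg.norm(enrolled - live)
    return distance < threshold

# Example: the facial-geometry template on file with HR vs. a capture at the door.
template = np.random.rand(128)
capture = template + np.random.normal(0, 0.01, 128)  # likely the same person

print("Entry permitted" if is_match(template, capture) else "Entry denied")
```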

Conversational AI, on the other hand, would match an employee’s voice to a different biometric data set, a voice print, derived from a previous recording of the employee’s speech. By way of example, an employee entering the building would speak into a microphone, saying, for example, “Good Morning! My name is Joe Worker, from accounting.” Again, if the speech matches the data on file, entry is permitted—if not, entry is denied.
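The voice path follows the same enroll-and-verify pattern, just with a voice print in place of a facial template; a common approach is to score the new utterance against the stored print with cosine similarity. Again, the names, vector sizes, and threshold below are assumptions for illustration only:

```python
import numpy as np

# Hypothetical setup: the "voice print" is an embedding derived from a previous
# recording of the employee's speech; real systems use speaker-verification models.
SIMILARITY_THRESHOLD = 0.8  # assumed value

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(voice_print: np.ndarray, utterance: np.ndarray) -> bool:
    """Accept only if the new utterance matches the stored voice print."""
    return cosine_similarity(voice_print, utterance) >= SIMILARITY_THRESHOLD

# "Good Morning! My name is Joe Worker, from accounting." -> embedding of that utterance
stored_print = np.random.rand(192)
greeting = stored_print + np.random.normal(0, 0.05, 192)

print("Entry permitted" if verify_speaker(stored_print, greeting) else "Entry denied")
```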

The simplistic examples given above only begin to scratch the surface of what is possible when AI and biometric data join forces. However, they are sufficient to illustrate the level of privacy an individual concedes in each scenario.

Several companies have touted facial recognition AI as a means to evaluate and respond to human emotion. But a recent study published by the Association for Psychological Science illustrates that there is limited scientific data to back up these claims.

After two years of evaluating more than 1,000 studies, the team found that emotions are too nuanced to be identified strictly by facial expressions. Conversational AI is different: rather than relying on the movement of facial muscles, it carefully evaluates a series of spoken and sometimes unspoken cues that represent emotion and intent.
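To make "spoken cues" concrete, systems in this space typically look at prosodic signals such as loudness, pitch, and speaking rate alongside the words themselves. The sketch below computes a couple of those features from a raw waveform with plain NumPy; it is illustrative only and does not reflect any specific product's feature set:

```python
import numpy as np

def frame_signal(y: np.ndarray, frame_len: int, hop: int) -> np.ndarray:
    """Split a waveform into overlapping frames."""
    n_frames = 1 + (len(y) - frame_len) // hop
    return np.stack([y[i * hop: i * hop + frame_len] for i in range(n_frames)])

def rms_energy(frames: np.ndarray) -> np.ndarray:
    """Loudness proxy: root-mean-square energy per frame."""
    return np.sqrt((frames ** 2).mean(axis=1))

def pitch_autocorr(frame: np.ndarray, sr: int, fmin: float = 75.0, fmax: float = 400.0) -> float:
    """Crude fundamental-frequency estimate from the autocorrelation peak."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

# Example: a synthetic 150 Hz "voiced" signal standing in for one second of real speech.
sr = 16000
t = np.arange(sr) / sr
y = 0.5 * np.sin(2 * np.pi * 150 * t)

frames = frame_signal(y, frame_len=1024, hop=512)
print("mean energy:", rms_energy(frames).mean())
print("estimated pitch (Hz):", pitch_autocorr(frames[0], sr))
```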

Mitigating Risk

Business is no stranger to risk; without the acceptance of risk, there would be no business. Nonetheless, business is a huge fan of mitigating risk. Companies are justifiably concerned about the risks associated with the use of biometric data, and those that are may opt for conversational AI rather than facial recognition, which has been the subject of criticism in recent weeks.

London’s Metropolitan Police conducted a test of its facial recognition software recently. Trial results were less than impressive, with an 80% failure rate, prompting the University of Essex researchers monitoring the trial to suggest that the Metropolitan Police stop using facial recognition immediately.

What distinguishes facial recognition from conversational AI from a privacy perspective? To start, we wear our faces everywhere. A thousand cameras could be capturing your every move and you'd never be the wiser. Your voice, by contrast, you control: you consciously decide to speak and can control what you say and, to some degree, how you say it.

While you can certainly be recorded without your knowledge, you have control over what is said and when. Both public and private use of facial recognition have come under fire in recent weeks, as have the revelations that tech company employees and contractors have access to recordings from voice assistants. Both methods will need to be regulated to meet basic privacy requirements, but in the long term it will be easier for people to feel they have control over who uses their voice than their face. 

At the same time, users are overwhelmingly willing to share data if it means a more personalized experience, as long as the companies with which they share that data are transparent about its use. With 38% of information conveyed by speech and tone of voice, the more personalized a voice interface becomes, the more accurate it will be. Trust and transparency will make this viable and acceptable to many consumers if implemented properly.  

Voice Biometrics Can Do What Other Technologies Cannot

While facial recognition software lays dubious claim to the ability to determine the emotional state of a human being by virtue of overt and subtle facial expressions, Conversational AI makes a strong case for predicting human behavior.

For example, in a trial one company conducted with a bank, customer voice profiles were used to identify a group whose profiles suggested a low risk of loan default. That group showed a 6% default rate. A second group, identified as high risk, showed a 27% default rate.
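The comparison behind those numbers is simple bookkeeping: segment borrowers by the voice-derived risk label and measure the observed default rate in each segment. A minimal sketch of that tally, with made-up records rather than the trial's data, might look like this:

```python
from collections import defaultdict

# Hypothetical records: (voice-derived risk label, did the borrower default?)
loans = [
    ("low", False), ("low", False), ("low", True), ("low", False),
    ("high", True), ("high", False), ("high", True), ("high", False),
]

totals, defaults = defaultdict(int), defaultdict(int)
for risk_label, defaulted in loans:
    totals[risk_label] += 1
    defaults[risk_label] += int(defaulted)

for label, count in totals.items():
    print(f"{label}-risk group: {defaults[label] / count:.0%} default rate ({count} loans)")
```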

Bias and Privacy for Private Citizens

Experts in the field suggest that error rates in facial recognition software may be the direct result of programmer biases. Computer programmers come from varied backgrounds, which gives each programmer a world view unique to their life experiences.

Invariably, this plays a subconscious role in the way they code. Joy Buolamwini, of the Algorithmic Justice League, is a pioneer in researching this phenomenon. Her research found that facial recognition systems were significantly more accurate on lighter skin than on darker skin, in fact wrongly identifying 96% of females of color as male.
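In practice, disparities like this are surfaced by disaggregating a model's error rate by subgroup instead of reporting a single overall accuracy figure. The sketch below shows that bookkeeping on invented predictions; it is not Buolamwini's methodology, just the general idea:

```python
from collections import defaultdict

# Hypothetical (subgroup, predicted gender, actual gender) records from a face-analysis model.
results = [
    ("lighter-skinned female", "female", "female"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # misclassification
    ("darker-skinned female", "female", "female"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, predicted, actual in results:
    totals[group] += 1
    errors[group] += int(predicted != actual)

for group, count in totals.items():
    print(f"{group}: {errors[group] / count:.0%} error rate over {count} samples")
```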

For businesses concerned about the risk they take on in managing large volumes of consumer data, and for consumers who worry about what can be done with their voice data, conversational AI offers a clear advantage. Facial recognition is a passive technology: if your face is visible, it is captured, and for that reason privacy advocates and some governments have been wary of its implications.

Voice isn't free of these concerns. Users must be aware that their voice is being analyzed before they can make an explicit decision not to speak, but in some circumstances they do have that choice. Combined with the higher degree of accuracy we already see in voice recognition compared to facial recognition, this is why voice technologies face less intense scrutiny as a whole.

From a privacy standpoint, there are substantial issues to be addressed with both technologies, but voice offers a more secure, consistent experience that is generally less invasive than facial capture at this time.