The recent use of facial recognition systems in Louisiana resulted in the mistaken-identity arrest of Randall Reid, a black man from Georgia. Local authorities used facial recognition technology to identify him as the suspect in the theft of purses from a store in a New Orleans suburb – a crime committed in a state Reid says he has never even visited.
Reid is yet another in a growing list of people of color wrongfully arrested after being misidentified by facial recognition technologies (FRTs).
After police in Woodbridge, New Jersey had a fake ID belonging to a suspected shoplifter assessed by FRTs in early 2019, Nijeer Parks, who lived and worked 30 miles away in Paterson, N.J., spent ten days in jail for a crime he did not commit before the case against him was dismissed.
Michael Oliver was wrongfully accused of larceny in Detroit in 2019 after facial recognition software matched his photo to smartphone footage of the incident; the charge was dropped once the mismatch became clear.
In January 2020, Robert Williams spent more than a day in jail after allegedly being caught on video stealing nearly $4,000 worth of luxury watches from a Shinola store in Detroit. The charges were dropped two months later, after new evidence revealed he had been singing on Instagram Live 50 miles away at the time of the crime.
These cases are some of the United States’ most significant misidentifications of people of color within the past five years. They are a direct reflection of the current state of facial recognition technologies and their limited ability to distinguish between individuals of color.
Facial recognition technology thrives, or falters, on the vital biometric data – photos of various faces and other physical characteristics – it is given to assess. The set of data the system receives is what ultimately determines the overall effectiveness of the system as a whole.
Put simply, these systems cannot reliably recognize faces belonging to members of a given race if the datasets used to support and train them contain minimal data on that race.
Yashar Behzadi, CEO and Founder of the synthetic-data company Synthesis AI, has made a similar argument: a model inherits whatever gaps exist in its training data. In other words, the less biometric data there is on people of color, the less likely facial recognition technology is to successfully identify them.
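This dynamic is easy to simulate. The sketch below is a toy model, not data from any real system: impostor (different-person) similarity scores for two hypothetical demographic groups are drawn from invented Gaussian distributions, with group B’s scores sitting closer to the genuine-match range to mimic a model trained mostly on group-A faces. Calibrating the match threshold on group A alone then quietly inflates group B’s false-match rate.

```python
import random

random.seed(0)

def impostor_scores(n, mean, sd):
    # Similarity scores for pairs of *different* people (invented distributions).
    return [random.gauss(mean, sd) for _ in range(n)]

# Group B's impostor scores sit closer to the match range, mimicking a
# model that saw little group-B data during training.
imp_a = impostor_scores(10_000, mean=0.30, sd=0.10)
imp_b = impostor_scores(10_000, mean=0.45, sd=0.12)

# Calibrate the match threshold on group A alone, targeting a 0.1%
# false-match rate -- the majority-only tuning described above.
threshold = sorted(imp_a)[int(len(imp_a) * 0.999)]

def false_match_rate(scores, thr):
    # Fraction of different-person pairs wrongly declared a "match".
    return sum(s >= thr for s in scores) / len(scores)

fmr_a = false_match_rate(imp_a, threshold)
fmr_b = false_match_rate(imp_b, threshold)
print(f"group A false-match rate: {fmr_a:.4f}")
print(f"group B false-match rate: {fmr_b:.4f}")
```

With these invented numbers, group B’s false-match rate comes out many times higher than the rate the threshold was tuned for – the statistical shape of a wrongful arrest.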
Until recently, FRTs were “primarily developed and tested on datasets that had a majority of fair-skin individuals,” according to Tatevik Baghdasaryan, Content Marketer at SuperAnnotate.
“As a result, the algorithms used in facial recognition technology perform worse on people with darker skin tones and specific facial features such as broader noses and fuller lips,” says Baghdasaryan. “This leads to higher rates of false positives and false negatives.”
For instance, a 2018 landmark study by Joy Buolamwini and Timnit Gebru found that algorithms responsible for analyzing key facial features misclassified darker-skinned women at error rates of up to roughly 35%, while error rates for lighter-skinned men remained under 1%.
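The false-positive/false-negative distinction Baghdasaryan describes can be made concrete with per-group confusion counts. The numbers below are invented for illustration, shaped so the underrepresented group’s false-negative rate echoes the roughly 35% error rate reported by Buolamwini and Gebru:

```python
# Hypothetical confusion counts per demographic group (invented numbers,
# shaped to echo the disparities the 2018 study reported).
counts = {
    "lighter-skinned men":  {"tp": 990, "fn": 8,   "tn": 995, "fp": 5},
    "darker-skinned women": {"tp": 653, "fn": 347, "tn": 910, "fp": 90},
}

for group, c in counts.items():
    fnr = c["fn"] / (c["fn"] + c["tp"])  # missed genuine matches
    fpr = c["fp"] / (c["fp"] + c["tn"])  # wrongful matches
    print(f"{group}: false-negative rate {fnr:.1%}, false-positive rate {fpr:.1%}")
```

In a policing context, the false positives are the dangerous column: each one is a potential mistaken-identity arrest like those described above.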
Facial recognition technology has become ubiquitous in the tech world and is now in use in nearly 100 countries across the globe.
Singapore, famously known for its Smart Nation initiative, has woven the technology into everyday public services.
In late 2020, Smart Nation added a facial verification feature to SingPass, the country’s national digital identity service, allowing residents to access government and banking services with a face scan.
However, while the use of facial recognition technologies has become widely accepted, a handful of countries still limit their use or, in some cases, reject it outright. Countries such as Belgium and Luxembourg fall into the latter category, opting to ban FRTs entirely.
Argentina serves as a unique example: a country that at first adopted the technology with open arms, only for a Buenos Aires court to suspend its use in 2022 after the system was linked to wrongful detentions.
At this point, it is clear that facial recognition technology’s biggest issues stem from the quality and representativeness of the data its systems receive.
If the system’s training data does not represent a diverse range of demographics – for example, if it includes only images of people with lighter skin – or if the images it assesses are of poor quality – blurry, dimly lit, or taken from suboptimal angles – errors such as false positives involving people of color become far more likely.
Thus, the simplest solution to this long-standing problem with FRTs is to incorporate higher volumes of data that represent those with a variety of skin tones and facial features.
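Why more representative data helps can be shown with a deliberately simplified sketch. Below, a nearest-centroid classifier learns two invented clusters of 2-D “face features”; the only thing that changes is how many training examples it sees per class, standing in for how well a demographic is represented in the training set. Averaged over many runs, the data-starved classifier is reliably less accurate:

```python
import random

random.seed(2)

# Two invented "face feature" clusters (this is a toy model, not real biometrics).
MEAN0, MEAN1, SD = (0.0, 0.0), (1.0, 1.0), 0.6

def sample(mean, n):
    # Draw n noisy 2-D feature vectors around a class mean.
    return [(random.gauss(mean[0], SD), random.gauss(mean[1], SD)) for _ in range(n)]

def centroid(pts):
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def accuracy(train_per_class, test_per_class=500):
    # Estimate each class centroid from the training data, then classify
    # fresh test points by nearest centroid.
    c0 = centroid(sample(MEAN0, train_per_class))
    c1 = centroid(sample(MEAN1, train_per_class))

    def dist2(p, c):
        return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2

    correct = 0
    for label, mean in ((0, MEAN0), (1, MEAN1)):
        for p in sample(mean, test_per_class):
            pred = 0 if dist2(p, c0) < dist2(p, c1) else 1
            correct += pred == label
    return correct / (2 * test_per_class)

# Average many runs so the comparison is stable, not a fluke of one draw.
few = sum(accuracy(2) for _ in range(100)) / 100
many = sum(accuracy(200) for _ in range(100)) / 100
print(f"accuracy with   2 training faces per class: {few:.3f}")
print(f"accuracy with 200 training faces per class: {many:.3f}")
```

The mechanism is mundane: with only a couple of examples, the learned centroid lands far from the true one, so the decision boundary is misplaced for exactly the group the system saw least.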
If we, as a public, must trust and rely on this technology in matters of fairly dispensed justice, the least we can do is understand facial recognition’s core flaws and how they affect its ability to correctly identify people of color.