
Face Recognition Tech: With All the Press, What Are the Real Risks?

by Bennat Berger, December 16th, 2019

In July, a cutesy face-morphing app threw most Americans with a smartphone into utter panic. The goofy photo filter, appropriately (if unimaginatively) dubbed FaceApp, attracted hordes of users with its ability to add years of eerily realistic age onto photographed faces. It rocketed into Internet virality, its edited photos sweeping across social media platforms and photo-sharing sites. It was a blast -- until suspicious internet investigators realized that the firm behind FaceApp was Russian.

Chaos erupted. Across social sites and message boards, people began fretting: who were these Russians, why did they want our faces, and what could they do with the data they'd collected? Newscasters issued warnings; one Democratic politician called for an investigation. All around the country, people wondered whether shady Russian foreign intelligence agents were using their duck-faced selfies to hone their nefarious facial-recognition programs.

As it turns out, the connection between the firm behind FaceApp and the Russian government was a tad overblown -- that is to say, nonexistent. An investigation by the Washington Post found that the firm had no ties to international intelligence, did not enable unauthorized surveillance, and was not using the photos it stored to hone a facial-recognition program. The whole episode was a tempest in a digital teapot.

The uproar, however, is hardly surprising. The FaceApp debacle is only the latest in a series of concerns stemming from online services that seem fun and convenient but may prove exploitative. We wonder: does the ability to automatically tag ourselves in group photos on Facebook let the company profile, track, and take advantage of us? When we walk into a cashier-less Amazon store, will the retail giant somehow use the surveillance footage it gathers against us?

Mostly, we don’t know -- and that’s a problem. It raises the question: what can current AI programs do with our faces, exactly? How many of our fears are based in myth, rather than truth? Do the technology’s actual capabilities stack up to our concerns? 

Let’s consider. 

Facial Recognition -- What Is It, Exactly? 

Before we delve into the risks of facial recognition, it’s crucial to understand how such programs work. Facial recognition technology (FRT) functions primarily by identifying and creating templates based on measurements of our nodal points: the width of the nose, the length of the jaw, the distance between the eyes, and so on. These measurements are transcribed into a template with a unique code, which can then be compared to and potentially matched against known photographs within a database. Casinos, for example, might use FRT to cross-reference photos of incoming guests against internal files that list problem gamblers. Investigators can source their photo references from driver’s license databases, mug shots, government records, or even social media platforms like Facebook. However, the breadth of their photo pool -- and, as we’ll get into later, the accuracy of their search -- depends on how many sources they have the authority to use.
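To make that compare-and-match step concrete, here is a minimal sketch in Python. The `extract_template` stub and every name, threshold, and template size below are illustrative assumptions, not any vendor's actual system; real FRT products use proprietary, trained feature extractors and their own tunings.

```python
import numpy as np

# Hypothetical feature extractor: in a real system, a trained model
# measures nodal points (nose width, jaw length, inter-eye distance,
# ...) and encodes them as a numeric template with a unique code.
def extract_template(face_image: np.ndarray) -> np.ndarray:
    raise NotImplementedError("stand-in for a real FRT feature extractor")

def find_match(probe, database, threshold=1.0):
    """Compare a probe template against every known template and
    return the closest identity, if it is close enough to trust."""
    best_id, best_dist = None, float("inf")
    for identity, template in database.items():
        # Euclidean distance: smaller means the faces measure alike.
        dist = np.linalg.norm(probe - template)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    # Reject weak matches: a large distance means "no one we know."
    return best_id if best_dist <= threshold else None

# Illustrative use with made-up 128-number templates:
db = {"alice": np.random.rand(128), "bob": np.random.rand(128)}
print(find_match(db["alice"] + 0.01, db))  # -> "alice"
```

Notice that everything hinges on two things the sketch makes explicit: the size and quality of the `database`, and where the match `threshold` sits. Both come up again below.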

What Can (and Can’t) FRT Do? 

Contrary to what a binge-watch of almost any spy drama would suggest, FRT rarely returns accurate results instantaneously or in complex environments. The cliche of a government agency finding a person of interest in grainy street surveillance footage within seconds is mostly overstated.

While facial recognition surveillance is indeed possible, it is fraught with inaccuracies and subject to environmental factors. Camera quality, algorithm design, time of day, distance, database size, and even demographic factors like race and gender can all affect the accuracy of FRT searches. Memorably, one test conducted by the ACLU in 2018 found that Amazon’s FRT service, Rekognition, falsely matched 28 members of Congress with police mugshots. A separate investigation by the MIT Media Lab found that the same program struggled to identify gender, mistaking women for men 19% of the time and misidentifying women with darker skin as men over a third of the time. Amazon pointed to poor calibration as the reason behind the inaccuracies.

Amazon may be right, at least in part. After all, researchers have found that Facebook is likely more accurate in its facial identification than the FBI, given that the social media platform has considerably more photos to reference and actively asks its users to refine its algorithm by verifying its auto-tagging results. This points to a truth that is often misunderstood about Rekognition, and indeed about all FRT software: it’s only ever as good as the reference pool it uses.

As one Amazon spokesperson told the Verge, “While 80% confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty.” A fair point, although the comment is somewhat ironic given that the company has shopped Rekognition to law enforcement and ICE without instituting a mandatory confidence threshold. This consistent inaccuracy has made guardrails on how FRT can be deployed in law enforcement necessary. For example, the technology cannot yet be used as the sole basis for an arrest, but it can be used to develop leads during investigations.
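That threshold, for what it’s worth, is not hidden away: it is an explicit parameter in Rekognition’s CompareFaces API. A minimal sketch follows, assuming configured AWS credentials and two illustrative local image files ("probe.jpg" and "mugshot.jpg" are placeholders). The point it shows is that the caller simply chooses the cutoff -- the default sits at the permissive 80%, and nothing in the API enforces the 99% Amazon recommends for identifying individuals.

```python
import boto3

# Sketch of a call to Amazon Rekognition's CompareFaces API.
# Assumes AWS credentials are configured; file names are placeholders.
client = boto3.client("rekognition")

with open("probe.jpg", "rb") as probe, open("mugshot.jpg", "rb") as known:
    response = client.compare_faces(
        SourceImage={"Bytes": probe.read()},
        TargetImage={"Bytes": known.read()},
        # The caller picks this cutoff. 80 is the permissive default;
        # Amazon recommends 99 for identifying individuals, but the
        # API does not require it.
        SimilarityThreshold=99,
    )

for match in response["FaceMatches"]:
    print(f"Possible match at {match['Similarity']:.1f}% similarity")
```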

Make no mistake, though: while FRT in the United States is not currently capable of picking a target out of grainy surveillance photos on a dark street, it is likely only a matter of time before it gains that ability. China, a country known for its focus on citizen surveillance, recently made headlines for tracking down a BBC reporter via FRT within a mere seven minutes. This offers a preview of how the technology may one day be used in enforcement in the United States.

One point is clear: while facial recognition technology in the United States is not yet consistently accurate, there are entities within the country dedicated to improving and using it. For now, the problem we face is twofold: first, that people may face undeserved censure or detainment due to a faulty identification from unreliable technology; second, that as the technology becomes both more accurate and more widespread, a surveillance trend may develop here as it has in China.

Where Does This Leave Us, the FaceApp Users? 

The development of FRT programs is inevitable. They may not be entirely accurate or widely deployed now, but they will be one day. In the meantime, we need to establish how, when, and to what degree these technologies can be used -- and set hard limits now, before the usage of FRT spreads beyond the point of easy regulatory restriction. As one writer for the Washington Post put the matter: “Today, facial recognition may be pleasantly useful when it can admit you to a baseball game, but may seem far less so if a distant database thinks, on the basis of preprogrammed visual assumptions, that you are likely to be criminally violent.”

Thus, where we stand as FaceApp users is more a philosophical question than anything else. A goofy app that adds realistic wrinkles to our faces might be fun, but the data it collects could ultimately be used against us. Sure, those photos might languish on a Russian server somewhere, untouched and unused -- or they could become reference points for a more effective and accurate FRT system, one capable of surveilling civilians. This episode points at the heart of our digital insecurities and pushes us to ask: are we harming ourselves in the long run by being so trusting with even our most innocuous data?

The lesson here? The next time a goofy photo filter or online game or digital trend goes viral, take a beat to consider whether you want to participate. Even if you think your duck-face selfie is languishing on a server in digital -- or literal -- Siberia, assume that someone’s using it. 

Because, in all honesty, unless someone sparks a viral panic over it, you’ll really never know.