
Will Mask-Scanning Tech Help Stop COVID or Create A Privacy Disaster?

by Bennat Berger, January 15th, 2021

Too Long; Didn't Read

Will the surveillance measures we take to encourage protective behaviors and prevent the spread of disease ultimately expose us to even greater (privacy) risks?


2021 has arrived, and with it an ever-pressing need for disease tracing and mitigation measures. We face a difficult time, but will the surveillance measures we take to encourage protective behaviors and prevent the spread of disease ultimately expose us to even greater (privacy) risks?

Beyond implementing social distancing and testing measures, health authorities have pressed the general public to wear masks in public settings and around people outside their household, especially where other precautions (e.g., maintaining a six-foot distance) are difficult to keep up. Doing so has proven benefits; one recent study in Germany found that mask mandates slowed the growth of infections by about 40 percent.

“I think the biggest thing with COVID now that shapes all of this guidance on masks is that we can’t tell who’s infected,” Dr. Peter Chin-Hong, an infectious disease specialist at UC San Francisco, recently told the university’s news bulletin. “You can’t look in a crowd and say, oh, that person should wear a mask. There’s a lot of asymptomatic infection, so everybody has to wear a mask.”

To that end, some have proposed mask-scanning technology as a way to enforce mask-wearing and limit disease spread. In September, National Geographic reported that the San Francisco tech company LeewayHertz had pioneered a mask recognition algorithm that could be used to identify non-compliance and facilitate enforcement efforts.

As reporters for the magazine wrote: “LeewayHertz’s algorithm [...] could be used in real-time and integrated with closed-circuit television (CCTV) cameras. From a given frame in a video, it isolates images and organizes them into two categories, people who are wearing masks and those who are not.” 
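
Though LeewayHertz has not published its code, the pipeline the magazine describes is simple enough to sketch. The snippet below is a minimal Python illustration, not the company’s actual implementation: it assumes an off-the-shelf OpenCV face detector and a hypothetical is_masked classifier standing in for the proprietary model, and it simply sorts the faces found in a single CCTV frame into the two categories described above.

```python
# A minimal sketch of frame-level mask/no-mask sorting. Function names and
# structure here are illustrative assumptions, not LeewayHertz's pipeline.
import cv2

# OpenCV's stock frontal-face Haar cascade stands in for the detector; a
# production system would use a detector trained to also find masked faces.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def sort_faces(frame, is_masked):
    """Split the faces in one CCTV frame into masked / unmasked bounding boxes.

    `is_masked` is a hypothetical callable (e.g., a small trained CNN) that
    takes a cropped face image and returns True if a mask appears present.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    masked, unmasked = [], []
    for (x, y, w, h) in face_detector.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5
    ):
        crop = frame[y:y + h, x:x + w]
        (masked if is_masked(crop) else unmasked).append((x, y, w, h))
    return masked, unmasked
```

Notably, nothing in this sketch records who a flagged person is; the output is just bounding boxes in two buckets, which is why the privacy exposure - for the moment - stays limited.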

LeewayHertz’s mask-recognition software has been deployed in “stealth mode” in several settings across the United States and Europe. Several restaurants, hotels, and even one East Coast airport have used the algorithm to ensure that their staff members comply with mask-wearing policies. 

The benefits of such technology are evident at a glance. LeewayHertz’s algorithm could lift the burden of identifying maskless shoppers and personnel and allow authorities to better use their time for targeted enforcement efforts. This tactic would empower public health authorities to enforce mask-wearing, limit non-compliance, and minimize disease spread in heavily-trafficked public spaces. 

Of course, anyone remotely concerned with data privacy would also immediately wonder how invasive such technology could be. The answer? Not very - at least, not yet. 

The key distinction is that mask-recognition software doesn’t identify faces; it only detects whether a given face is covered. In fact, research indicates that masks can drastically limit the efficacy of facial recognition technology. According to one study by the US National Institute of Standards and Technology (NIST), masks cause the error rates of the most widely used facial recognition algorithms to spike to between 5 and 50 percent.

Technically, mask recognition sidesteps the privacy quagmire by not identifying those it flags - for now.

We find ourselves in an awkward spot. On the one hand, the idea of sending enforcers after non-compliant shoppers or staff when flagged by mask-recognition-empowered CCTV surveillance feels a little too close to an Orwellian dystopia for comfort. On the other, the sheer scale of the pandemic compels public health authorities to do what they can to limit the spread of potentially deadly diseases. 

“There’s a willingness to relax the rules when it comes to anything related to COVID,” James Lewis, director of the Technology Policy Program at the Center for Strategic and International Studies, recently told reporters. “The issue is, when this is over, will we go back?”

Lewis raises an important question, not least because the capability mask recognition currently lacks - identifying faces - is already under research and development. In August, CNN Business reported that the California-based company Trueface is working to tailor its facial recognition technology to focus on the upper (unmasked) part of the face, in the hope that the tech will be better able to identify a masked subject. At the time of the CNN article’s publication, the company’s research team planned to roll out its advancements within two months -- that is, around now.
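
Trueface has not detailed its approach, but the general idea CNN describes - matching on the exposed upper part of the face rather than the whole face - can be roughly sketched as follows. The embed function, the crop fraction, and the similarity threshold are assumptions for illustration, standing in for a real landmark-aligned periocular recognition model.

```python
# A rough sketch of upper-face (periocular) matching, assuming a hypothetical
# `embed` model; this is not Trueface's implementation.
import numpy as np

def crop_upper_face(face_crop: np.ndarray, keep: float = 0.45) -> np.ndarray:
    """Keep only the top portion of a detected face (roughly brow and eyes).

    The 0.45 fraction is an assumed value; a real system would align on eye
    landmarks rather than rely on a fixed crop.
    """
    height = face_crop.shape[0]
    return face_crop[: int(height * keep), :]

def identify_masked_subject(face_crop, gallery, embed, threshold=0.6):
    """Match an upper-face embedding against enrolled identities.

    `embed` is a hypothetical model mapping an image to a unit-length vector,
    `gallery` maps identity -> stored upper-face embedding, and `threshold`
    is an assumed cosine-similarity cutoff below which no match is reported.
    """
    query = embed(crop_upper_face(face_crop))
    best_id, best_score = None, -1.0
    for identity, reference in gallery.items():
        score = float(np.dot(query, reference))  # cosine similarity of unit vectors
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```

If that crop-and-match step proves reliable, little technical distance remains between asking “is this face masked?” and “whose masked face is this?”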

With this in mind, it is possible to envision a world in which our already-deployed mask-recognition technology gains an identification capability. That prospect is troubling, given how previous attempts to ban some uses of authority-deployed surveillance technology while permitting others have played out.

In 2019, Wired reported that when San Francisco’s anti-surveillance laws and facial recognition ban were proposed, police officials for the city claimed that they had shelved all facial recognition testing as of 2017. What the authorities didn’t publicly mention, however, is that the police department had contracted with a facial recognition firm that same year to maintain a mug shot database, facial recognition software, and a facial recognition server through the summer of 2020. 

After the ban took effect, the department rushed to dismantle the software; however, the notion that the city’s police force could deploy facial recognition technology without public oversight is troubling and stands as a concerning case study. 

Of course, you could argue that mask-recognition tech lacks the privacy concerns that facial-recognition tech poses. Some cities have already made a case to this effect. Last August, Portland, Oregon became the first US city to ban both public and private use of facial recognition - yet, according to National Geographic, “Hector Dominguez, the Smart City Open Data Coordinator for Portland, sees mask recognition as different from facial recognition with regards to its privacy risks.”

This argument positions mask-recognition software as an exception to the rule of facial recognition bans -- and does so with some merit. After all, the technology does not currently pose privacy risks and could serve a valuable purpose in limiting disease spread by enforcing mask-wearing. However, it also normalizes a pattern of public tracking - and our experiences in San Francisco and Oregon suggest that authorities may press the moral boundaries of such technology once it is available.

Suppose we accept mask-recognition software as a (temporary) means of identifying non-compliance during COVID-19. It then becomes easy to argue that adding newly developed facial-recognition capabilities to that software would help public health authorities find and identify virus-exposed people during contact tracing efforts. It would be a logical, helpful move. But at that point of acceptance, we establish a precedent - intended or not - of surveilling and tracking people “for their own good.”

The slippery slope very nearly speaks for itself. Mask-recognition software presents a short-term public health opportunity that could open the door to a long-term privacy nightmare. Our fears around COVID-19 are warranted and deserve addressing, but the measures we take to protect ourselves shouldn’t expose us to privacy ills.