A couple of weeks ago, Canada released the COVID Alert app to the general public. The app is intended to support contact tracing efforts and help curb the spread of COVID-19. Understandably, there are some questions regarding its privacy and security implications. Although I focus here on the Canadian version of the app, the underlying architecture is used by many other public health authorities around the world. You can find the full list of countries here.
The first important thing to know about COVID Alert is that, although the app itself is branded and released by Health Canada (and other public health authorities), much of the core technology is provided by Apple and Google for iOS and Android respectively.
Earlier this year, Apple and Google, in a joint effort, released a framework called the Exposure Notification System (ENS) designed specifically for building COVID tracing apps. The framework provides a set of tools for developers on both platforms and is intended to be used by public health authorities all over the world. It has been created with the explicit goal of preserving user privacy by design in an effort to encourage broader adoption. More on that here.
As of now, the system is 100% opt-in. You don’t have to download the app if you don’t feel comfortable with it, nor will you be forced to opt in at a later stage. However, the success of the program does depend on a certain percentage of the population signing up.
The mechanism itself is fairly straightforward and uses a decentralized approach. Let’s say we are living in an ideal scenario where the majority of people have the app on their phones. Once you install it, it runs in the background and broadcasts randomized “codes” over Bluetooth, rotating to a new code every 10–15 minutes.
These codes are essentially just gibberish and not tied to you in any way. Over the course of the day, you would generate dozens of these codes. For simplicity, let’s say that your code never changes and is fixed to something ridiculously simple like 12345.
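To make that concrete, here is a rough Python sketch of the rotation idea. This is not how ENS actually derives its identifiers (the real system generates them cryptographically from a daily key), and `send_over_bluetooth` is a made-up stand-in for the Bluetooth layer:

```python
import os
import time

CODE_ROTATION_SECONDS = 15 * 60  # switch to a fresh code roughly every 10-15 minutes


def generate_code() -> str:
    """Produce a random, meaningless identifier that is not tied to the user."""
    return os.urandom(16).hex()


def send_over_bluetooth(code: str) -> None:
    """Hypothetical stand-in for the Bluetooth broadcast layer."""
    print(f"broadcasting {code}")


def broadcast_loop() -> None:
    """Broadcast the current code repeatedly, rotating to a new one on schedule."""
    while True:
        code = generate_code()
        deadline = time.time() + CODE_ROTATION_SECONDS
        while time.time() < deadline:
            send_over_bluetooth(code)
            time.sleep(1)  # real devices broadcast far more frequently than this
```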
Phones near you (within a distance of about 10 m) that also have the app installed are listening for these codes. If you’re hanging out with a friend, or at a busy supermarket, your phone will be exchanging codes with those around you. Your phone will store the codes that it “came in contact with”, and likewise, all the phones around you will store your code. At this point, all data is decentralized, meaning that the information is stored purely on phones and not on a server somewhere.
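On the receiving side, each phone just keeps a local log of the codes it has heard and throws them away after a couple of weeks. Again, a simplified Python sketch rather than the actual app code:

```python
from datetime import datetime, timedelta

# Codes observed from nearby phones, kept only on this device.
observed_codes: dict[str, datetime] = {}

RETENTION_DAYS = 14


def record_observed_code(code: str) -> None:
    """Remember a code heard over Bluetooth, along with when it was heard."""
    observed_codes[code] = datetime.now()


def prune_old_codes() -> None:
    """Discard codes older than the retention window."""
    cutoff = datetime.now() - timedelta(days=RETENTION_DAYS)
    for code, seen_at in list(observed_codes.items()):
        if seen_at < cutoff:
            del observed_codes[code]
```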
Now, hypothetically, one of the people you ran into at the supermarket later tests positive. That person can then tell the app that they’ve been infected. In most cases, they would be required to submit some sort of medical proof or confirmation from a public health authority. In Germany, for example, you would scan a QR code issued by the health authority confirming your diagnosis. This prevents people from falsely claiming an infection just to create false alarms.
The app would then upload all of the codes issued to that person over the last 14 days to a web server. Or in our simplistic model, the single code that they were assigned, say 56789. The server is only storing a master list of these “infected” codes.
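The upload step could look something like the sketch below. The real system uploads cryptographic keys rather than raw codes, and the verification flow differs by country; the URL, field names, and `one_time_key` parameter here are made up purely for illustration:

```python
import json
import urllib.request


def report_positive(own_recent_codes: list[str], one_time_key: str) -> None:
    """Upload this phone's own recent codes after a verified positive diagnosis.

    `one_time_key` stands in for the proof issued by the health authority
    (for example, a one-time code or scanned QR code).
    """
    payload = json.dumps(
        {"codes": own_recent_codes, "verification": one_time_key}
    ).encode()
    request = urllib.request.Request(
        "https://health-authority.example/api/report",  # illustrative URL only
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # The server validates the verification key before adding the codes
    # to its master list of "infected" codes.
    urllib.request.urlopen(request)
```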
As the final step, your app downloads the list of infected codes from the server a few times a day and compares it against all the codes of other people that you have run into. If it finds a match, it means you’ve been in contact with someone who has tested positive; you’re considered at higher risk and are notified immediately. This would then likely translate into a higher testing priority for you. And likewise, if you test positive, the people who came into contact with you are notified, and so on.
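Conceptually, that final check is just a set intersection performed entirely on your device. A toy Python version using the simplified codes from the example above:

```python
def check_exposure(codes_heard: set[str], infected_codes: set[str]) -> bool:
    """Compare codes this phone has heard against the published list of
    codes reported by people who tested positive. Runs entirely on-device."""
    return bool(codes_heard & infected_codes)


# Toy example using the codes from the text: you heard 56789 at the
# supermarket, and that code later appears on the server's infected list.
heard = {"56789"}
published = {"56789"}
if check_exposure(heard, published):
    print("Possible exposure: notify the user and suggest getting tested")
```

Because the comparison runs locally, the server never learns which codes your phone has seen.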
The “privacy-protecting” approach comes from the fact that the system does not, by design, need access to your location or GPS at any point. If your location is needed for any reason, you will be prompted for permission. Nor does it need access to any sort of personal information. The only thing the system really needs is a Bluetooth-capable device. Additionally, your codes never leave your phone unless you test positive and choose to report it, meaning that nobody gets access to your data, including the health authorities.
Whether or not this technology is too far-reaching, or not extensive enough, is a whole other debate. But at the very least, I hope that having a better understanding of the underlying tech can help folks make an informed decision about whether or not they’d like to opt-in.
In Canada, the source code for the application is freely available on GitHub. I highly recommend taking a look if you want to get your hands dirty and dive into some code or send in some patches! :)
Originally published on The Digital Archive.