Increased trust makes every response to COVID-19 stronger. Lack of trust and confidence can undermine everything. Should we trust governments and industry with their app solutions at this moment of global crisis?
Key findings
- There are many apps across the world with many different uses.
- Some apps won’t work in the real world; those that do work best alongside more virus testing.
- They all require trust, and that has yet to be built after years of abuses and exploitation.
- Apple and Google’s contributions are essential to making contact tracing work for their users; these are positive steps towards privacy-by-design, but strict governance is required for making, maintaining, and removing this capability.
‘Let’s build an app for that’ has become the response to so many things. It’s no surprise it’s happening now.
Apps are notorious for their lack of security and privacy safeguards, exploiting people’s data and devices. Now we’re being asked to trust governments with their proposed apps, of which there are many. These are the very same governments that have been keen to exploit data in the past; PI, for instance, currently has four outstanding legal cases arising from previous occasions when governments overreached in their use of surveillance powers.
We cannot ignore these previous misdeeds. In reality, we have little reason to trust what these apps and capabilities claim, nor the emergency speed of development and adoption they all demand. They must be vigilantly monitored, scrutinised, and tested. If done well, these apps could protect people and their data and enhance public trust, enabling us all to engage in public life once again. We are cautiously optimistic that apps can play a role in dealing with COVID-19, but we need trust and confidence.
The starting point must be what helps health — apps are a small part of a public health response to this pandemic. Any single measure must be integrated with a comprehensive healthcare response, must prioritise people, and minimise data. It must empower people so that they know that their data and their devices are secure, and any new functionality must be destroyed at the end of this global pandemic.
First and foremost, they have to work. This is a fundamental step that’s often ignored in all the excitement. Some key questions are:
- Do the apps actually combat the spread of the virus, or do they merely satisfy a desire to feel that something is being done?
- Do the apps function on our phones, and are they able to work at the same time as we use our phones to do our work, stay in touch with friends and family, or play games?
- Do they significantly drain our batteries, and what happens when the phone is turned off?
- Do they over-report and over-notify, or is there intelligence in the system to assess your actual risk?
There is also huge variety among the apps. We have seen three types of Coronavirus-response app:
- apps that inform people, i.e. self-assessment apps
- apps that report on people, i.e. quarantine enforcement apps
- apps that make us aware of our interactions with the virus, i.e. contact-tracing apps
To be clear, the line between each of these categories can be thin. Those lines must be made far thicker, and red.
Apps that inform people: self-assessment
Some apps are for people to inform themselves: people who want to know more about their health, the illness, and what they can do. Governments, health authorities, and other third parties around the world have rolled out apps designed to let people assess whether or not they are likely to be infected, or simply to access information about the outbreak.
For example, in Afghanistan the Ministry of Public Health and the Ministry of Telecommunications and Information Technology have launched the “corona.asan.gov.af” software to provide health advice in three languages: English, Dari, and Pashto. Using the questions embedded in the software, users can assess themselves for the virus.
These could be a crucial way to keep people informed. They may yet pose risks if they are unreliable in function or in content (misleading or poor information), or if they transfer or leak data to third parties, intentionally or otherwise.
While these apps need neither access to data that identifies an individual nor to transfer any data to third parties, a number of them do just that. A web form to screen COVID-19 cases, developed by the Mexico City government, collects a wide range of personal information: name, age, telephone number, home address, social-network username, and cellphone number. The privacy notice establishes that such data may be transferred to a vast array of federal and local judicial and administrative authorities.
The self-testing web app issued by Argentina’s Secretariat of Public Innovation asked for national ID number, email, and phone number as mandatory fields in order to submit the test, while the Android version required numerous permissions, including contacts, geolocation data (both network-based and GPS), and access to the microphone and camera.
Such wide-ranging access to data can quickly transform such apps from tools of advice to tools of surveillance.
Apps that inform on people: quarantining
Some apps are for governments to keep track of people in order to enforce quarantine. Governments are using apps and other techniques to track people’s movements, and to make sure they do not move about or interact with others. As already occurs in Hong Kong, and is now proposed in Russia, governments are requiring travellers and visitors to submit to quarantines enforced by these tracking apps.
In Poland, people under quarantine are tracked using an app that requires them, upon request by the police, to send their location and a photo to verify their identity. Other examples include Kazakhstan’s tracking app, and a proposed app in Egypt that would provide names and details of people.
In Taiwan, there are even reports of authorities demanding answers if mobile phones are turned off.
Firms are keen to enter this market. In Argentina, one company is proposing a quarantine-enforcement app; in Estonia, another firm is piloting an app combined with an ankle bracelet, akin to its other products used for house arrest.
Apps that inform us: contact tracing
Some apps may help us understand whether we’ve been in contact with the virus. These ‘contact-tracing apps’ could tell us whether we’ve interacted with someone who has tested positive for the Coronavirus (assuming they have actually been tested). They could help us manage our risks and health in relation to the people around us. They could also support the general public health response, and accelerate assistance to people in need. Some policy-makers aspire for them to speed a phased reduction of the worldwide lockdowns now in effect.
These apps may rely on different types of location technology. Early reports on the Norwegian contact tracing app, Infection Stop, state that it relies on GPS to track and store users’ locations.
Others propose to use Bluetooth, with your phone monitoring and collecting data on other app-enabled Bluetooth devices that come into proximity.
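To make that concrete, below is a minimal sketch of the kind of logic a Bluetooth proximity app might run locally. It is illustrative only, not any country’s actual protocol: the key derivation, the 15-minute rotation interval, and the two-week retention window are all assumptions for the example.

```python
import hashlib
import hmac
import secrets
import time

ROTATION_SECONDS = 15 * 60   # assumption: broadcast identifiers rotate every 15 minutes
RETENTION_DAYS = 14          # assumption: encounters are kept two weeks, then deleted

class ProximityBeacon:
    """Illustrative rotating-identifier scheme, loosely modelled on the
    decentralised proposals: a secret daily key never leaves the phone,
    and only short-lived ephemeral IDs are broadcast over Bluetooth."""

    def __init__(self):
        self.daily_key = secrets.token_bytes(16)  # regenerated each day
        self.encounter_log = []                   # (ephemeral_id, timestamp) of devices seen

    def current_ephemeral_id(self, now=None):
        """Derive the ID to broadcast for the current rotation window."""
        window = int((now or time.time()) // ROTATION_SECONDS)
        return hmac.new(self.daily_key, window.to_bytes(8, "big"),
                        hashlib.sha256).digest()[:16]

    def record_sighting(self, other_id, now=None):
        """Store an ID seen nearby: no location, no identity, local only."""
        self.encounter_log.append((other_id, now or time.time()))

    def prune(self, now=None):
        """Drop encounters older than the retention window."""
        cutoff = (now or time.time()) - RETENTION_DAYS * 86400
        self.encounter_log = [(i, t) for i, t in self.encounter_log if t >= cutoff]
```

The point of such a design is that the secret key never leaves the phone, and the identifiers broadcast over Bluetooth change often enough that passers-by cannot easily track a device over time.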
In our early assessment, these Bluetooth proximity apps face challenges in three domains.
First, from a technical perspective, do they even work? Apps can only ever be useful as part of a broader approach, and cannot replace manual tracing or other priorities such as testing. As Professor Ross Anderson at Cambridge has indicated, if an app becomes that important to the health and economy of a country, it will be under constant attack and abuse, meaning it also has to be secure.
Second, from a public-adoption perspective, will they be used? Any app that uses Bluetooth essentially has to be running on your phone all the time. This will affect your battery, your phone and data security, and the functionality of your phone. It will only be of use to people with smartphones, and it’s unclear whether it will work on all the smartphones in a given country.
Third, from a privacy perspective, do any of these apps leak data, inadvertently or by design? Apps are notorious for leaking data to companies and platforms like Google and Facebook. Any app has to be closely scrutinised, and every update reviewed carefully, to make sure nothing in the code has changed that would make it less secure or leaky. We would also have to monitor that governments haven’t changed the purposes of the apps through a simple update.
All of these dynamics will affect adoption rates. For such apps to be useful, they would need to be used by a very large proportion of the population; the figures we’ve heard suggest that more than 60% of the population would need to use them. This will only happen if people trust them, and if the apps actually work in useful ways on their devices. In Singapore, for instance, though its app is much heralded as a model for other countries, adoption was limited to 13%, which may be related to the fact that the app had to be running in the foreground all the time.
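A rough back-of-the-envelope calculation shows why adoption matters so much: both parties to an encounter must be running the app for that encounter to be logged, so (assuming roughly independent uptake) coverage scales with the square of the adoption rate.

```python
# Both parties to an encounter must run the app for it to be logged,
# so the share of encounters covered is roughly the adoption rate squared.
for adoption in (0.60, 0.13):  # the oft-cited target vs. Singapore's reported uptake
    print(f"{adoption:.0%} adoption -> ~{adoption ** 2:.0%} of encounters logged")
# 60% adoption -> ~36% of encounters logged
# 13% adoption -> ~2% of encounters logged
```

At Singapore’s reported uptake, in other words, only around one in fifty encounters would even be visible to the system.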
To address the specific question of whether these Bluetooth apps could work, a number of different approaches have been proposed. In Europe, a key battle is now between developers who foresee an approach that would send identifiers to a centralised server, and those opting for a decentralised approach in which the processing occurs locally on individual smartphones. Apple and Google have also now become involved, creating an interface that any national app could use. Interestingly, Apple and Google have made their own announcements about the privacy considerations they’ve taken into account, including some surprisingly welcome developments. Their design choices will have knock-on effects on how any national app could generate and use our data, potentially with some additional strong safeguards, which may not be what governments had in mind when they first imagined the data empires they could build.
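The centralised/decentralised split comes down to where the matching happens. Continuing the hypothetical beacon sketch above, a decentralised design might work roughly like this: a diagnosed user uploads only their daily keys; everyone else re-derives the ephemeral IDs those keys would have produced and checks them against the encounter log held on their own phone. The function names and interval choices here are our assumptions, not any published specification.

```python
def ids_for_day(daily_key, day_start, day_seconds=86400):
    """Re-derive every ephemeral ID a published daily key would have
    broadcast during one day, one per rotation window (illustrative)."""
    windows = range(int(day_start // ROTATION_SECONDS),
                    int((day_start + day_seconds) // ROTATION_SECONDS))
    return {hmac.new(daily_key, w.to_bytes(8, "big"),
                     hashlib.sha256).digest()[:16] for w in windows}

def check_exposure(encounter_log, published_keys_with_days):
    """Match keys published by diagnosed users against the local log.
    Runs entirely on the phone: the server never learns who met whom."""
    exposed = set()
    for daily_key, day_start in published_keys_with_days:
        exposed |= ids_for_day(daily_key, day_start)
    return [seen_at for ephemeral_id, seen_at in encounter_log
            if ephemeral_id in exposed]
```

In the centralised alternative, the same matching happens on a government server, which therefore learns the contact graph of everyone who uploads; that is precisely the design choice the European battle is about.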
Even so, the path isn’t entirely clear. These apps’ functionality has to work on both expensive and cheaper smartphones, for people who can afford the latest models and for those who cannot. The wide variety of operating-system versions, hardware variants, battery-management regimes, security patches, and other running apps makes this a complex space.
Even if these apps can be made to work; and even if the privacy and security problems can be resolved; and even if people are actually able to use them on their own devices, the apps must still be meaningful enough for people to download them.
(Of course, the apps could be made mandatory, like quarantine apps. But trust and compulsion do not go hand in hand, and problems may still arise.)
It is absolutely vital that the responses given by the apps are genuinely meaningful to people. Are the apps linked to testing, or merely to people self-reporting that they are unwell? Without testing, the apps are likely to over-report: if everyone who feels unwell triggers an alarm amongst everyone they’ve interacted with over a week-long period, and if this happens repeatedly on mere suspicion of being unwell, or even through abusive reporting, then people may begin to question and ignore the notifications. The apps must also guide people’s next steps: if you are notified that you’ve been in proximity to someone who has the virus, but your employer won’t accept what an app says, will you have to go to work anyway?
For the contact-tracing apps to be successful, they must be wildly popular. Early surveys have shown that, in the UK for instance, nearly two thirds of people support the use of some sort of contact-tracing app. For quarantining apps, however, support drops dramatically to less than half. Though none of these initiatives will be decided by polling, the numbers do not look good for governments: for voluntary compliance to work, support for contact tracing would have to be far higher to deliver the number of users required.
This isn’t about functionality, it’s about trust
Fundamentally, people want access to healthcare, and they want to be free to go back out safely and for their lives to return to normality. Any apps that exploit these motivations but primarily serve institutional interests will seriously, and possibly irrevocably, destroy trust and confidence at a time when we need it most.
In policy-makers’ minds, voluntary apps would all work perfectly and, as if by the wave of a magic wand, free us from our containment. Achieving that, however, would require far more than what governments and companies are actually offering.
For decades, governments and companies have exploited data and systems to their advantage. It took hard work to uncover and expose this, and to hold them to account; that past behaviour is why it now takes so much effort, and such a suspension of disbelief, to take them at their word.
- Apps could be designed for emergency and healthcare use only; except we would have to verify that governments and companies aren’t exploiting our data now, and keep watching how that may change in the future.
- Data-intensive healthcare responses could be developed; except the last few years have given us so many indications of poor security, expansive collection, and data sharing with industry, including for advertising.
- Policy promises could be made; if only we could trust the ministers and policy-makers who love to change their intentions once they have our data in their grasp, and who then conduct and expand surveillance in secret.
In some imaginary and entirely benevolent world, we might be able to centralise all data and use it to solve significant social problems. However, this world is a fantasy. In our real world, such simplistic approaches will always create new problems, wicked risks, and side-effects, fundamentally because our institutions are human.
Most of the issues these apps raise aren’t about privacy. The fundamental problems all hinge on the fact that we must trust these apps, hand over our autonomy and health data for them to work, and hope that somehow, miraculously, it will all work out. For the first time ever.