How governments can protect personal privacy in contact tracing while saving people's lives

Shuman Ghosemajumder, Global Head of Artificial Intelligence, F5 Networks

May 12, 2020


When Jean-Jacques Rousseau wrote The Social Contract in 1762, he argued that the people alone possess sovereign power, and that they alone may choose which freedoms they surrender in exchange for the benefits and stability of government. Now, for the first time in more than a century, we are debating amending or rebalancing aspects of the social contract in order to deal with a deadly pandemic.

One of the key challenges associated with containing the spread of the coronavirus that causes COVID-19 is contact tracing: identifying other individuals and groups with whom a COVID-19-positive individual may have been in contact. Under normal circumstances, the mere idea of using any form of mobile phone data to track users en masse for a purpose they never consented to would be anathema to the spirit of regulations like GDPR and CCPA. But, of course, these are not normal circumstances.

COVID-19 contact tracing is different in that complete anonymization is not possible when identifying COVID-19-positive individuals. To protect others, health systems already track COVID-19 cases and do everything in their power to perform contact tracing. The question is: How can technology help in a way that doesn't fundamentally violate our expectations around privacy?

Privacy vs. Public Health
Governments could use or access mobile phone location information without user consent. The drawback of this is clear: If governments can justify accessing this data in this circumstance, in what other contexts might they also unilaterally decide to use it after COVID-19 has passed? There are also purely opt-in approaches, where individuals who want to participate in contact tracing can download an app. But a contact-tracing program built around an app is only effective if a very large number of people run it.

Google and Apple have proposed an intriguing middle ground. By building new capabilities into the iOS and Android operating systems specifically to enable close-proximity contact tracing with some anonymization built in, they are using technology that helps limit data collection and analysis to just what is essential. For example, according to MIT Technology Review, the system will rely on Bluetooth signals, which have an inherently limited range and can only establish relative proximity to other devices, and will ban the use of location tracking, which would record a device's absolute geographic position. When these capabilities are released, if they are turned on by default, they could allow apps built on the platform to gain more users and result in more effective contact-tracing programs.
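
To make the rotating-identifier idea concrete, here is a minimal Python sketch of the general concept: devices broadcast short-lived identifiers derived from a private key, remember the identifiers they hear over Bluetooth, and check for exposure entirely on-device once a diagnosed user consents to publish their keys. The names, key sizes, and rotation scheme are illustrative assumptions, not the actual Exposure Notification API.

```python
# Simplified illustration of rotating-identifier proximity tracing. The names,
# key sizes, and rotation scheme are assumptions for clarity, not the real
# Google/Apple Exposure Notification API.
import hashlib
import os
from dataclasses import dataclass, field


@dataclass
class Device:
    """A phone that broadcasts rotating random IDs and remembers IDs it hears."""
    daily_key: bytes = field(default_factory=lambda: os.urandom(16))
    heard_ids: set[bytes] = field(default_factory=set)

    def rolling_id(self, interval: int) -> bytes:
        # Short-lived identifier derived from the daily key; without the key,
        # observers cannot link identifiers across intervals or to a person.
        return hashlib.sha256(self.daily_key + interval.to_bytes(4, "big")).digest()[:16]

    def observe(self, other_id: bytes) -> None:
        # Bluetooth's limited range means only nearby devices are ever heard;
        # no absolute location is recorded, only the anonymous identifier.
        self.heard_ids.add(other_id)

    def check_exposure(self, published_keys: list[bytes], intervals: range) -> bool:
        # Matching happens entirely on-device: re-derive identifiers from the
        # keys that diagnosed users chose to publish and look for overlaps.
        for key in published_keys:
            for i in intervals:
                rederived = hashlib.sha256(key + i.to_bytes(4, "big")).digest()[:16]
                if rederived in self.heard_ids:
                    return True
        return False


# Two devices pass each other during time interval 42.
alice, bob = Device(), Device()
bob.observe(alice.rolling_id(42))

# Alice later tests positive and consents to publishing her daily key.
print(bob.check_exposure([alice.daily_key], range(144)))  # True
```

The property worth noticing is that other users, and any central server, only ever see random identifiers; a match can be established without anyone uploading a location history or a name.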

With greater access to this information, governments could perform accurate contact tracing, which would improve society's entire approach to the prevention, containment, and mitigation of COVID-19. But the pivotal word here is "could." Privacy advocates are quick to point out that there is no way of knowing with certainty that this data would improve contact-tracing efficacy enough to save many lives. But there is really no way of knowing until access is granted, and no practical way to get the people of an entire country, let alone an entire planet, to consent to their data being used.

The Andon Cord, Digital Edition
The possibility that technology we already have could protect us against the pandemic is too great a potential public health benefit not to explore. The question is how to guard against the obvious downsides of eroding data privacy controls. The answer lies in recognizing that this is an extraordinary circumstance and should be handled as such. Many industries have spent a fair bit of time thinking about how to deal with emergencies. In the heyday of manufacturing, employees in a Toyota factory were empowered to halt the assembly line the moment anyone discovered a problem. The trigger was pulling the "andon cord" so that team leaders and workers could huddle together, solve the problem, and restart production following formalized steps.

Today, governments need a similar system they can utilize when a dire emergency overrides privacy concerns. But an andon cord of data usage should have three major components:

1. A point of instigation. The protocol should define the factors for rating a catastrophe on a spectrum; at the highest level, for instance, the continuation of our species would be at risk.

2. A point of demarcation. Privacy limits need to be reimposed after de-escalation, and the date and time for doing so should be set at the outset, when the andon cord is pulled.

3. A point of privacy. Any additional data that is collected should, where possible, be collected in a privacy-preserving fashion. Approaches such as MIT's Private Kit, a contact-tracing app, allow infected persons to share their location trails with health officials, but that information is anonymized and patient data is stored locally, as sketched below.
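
To illustrate that last point, here is a minimal Python sketch of the pattern, under assumed names and thresholds, not Private Kit's actual implementation: the full location trail lives only on the patient's device, and only a coarsened, identity-free version is exported for health officials.

```python
# Hypothetical sketch of a privacy-preserving location trail: the full history
# stays on the device, and only a coarsened, identity-free export is shared.
# This illustrates the principle, not Private Kit's actual implementation.
from dataclasses import dataclass


@dataclass
class LocationPoint:
    lat: float
    lon: float
    timestamp: int  # Unix time, in seconds


class LocalTrail:
    """Location history stored only on the patient's device."""

    def __init__(self) -> None:
        self._points: list[LocationPoint] = []

    def record(self, point: LocationPoint) -> None:
        self._points.append(point)

    def export_for_health_officials(self) -> list[dict]:
        # No name, device ID, or account is attached; coordinates are rounded
        # to roughly a kilometer and timestamps bucketed to the hour, so the
        # trail is useful for tracing overlaps without pinpointing movements.
        return [
            {"lat": round(p.lat, 2), "lon": round(p.lon, 2), "hour": p.timestamp // 3600}
            for p in self._points
        ]


trail = LocalTrail()
trail.record(LocationPoint(42.360091, -71.094160, 1589241600))
print(trail.export_for_health_officials())
# [{'lat': 42.36, 'lon': -71.09, 'hour': 441456}]
```

How coarse the export should be is a policy choice; the point is that the raw trail never has to leave the device in identifiable form.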

It's necessary for governments to outline the three components above publicly, emphasizing the steps they will take to maintain privacy and the exact date on which they plan to stop using the data. With the Patriot Act of 2001, Congress agreed to extend the government's powers in exchange for greater national security for a limited period, but the sunset provisions kept being postponed. That cannot happen in an andon cord situation. Any time a government violates a social contract, it risks losing the trust of the public. Without transparency around pulling a data andon cord, there will be backlash. And that backlash would likely end with people actively blocking tracking on their devices, defeating the whole point of using that data for public health.


About the Author

Shuman Ghosemajumder

Global Head of Artificial Intelligence, F5 Networks

Shuman Ghosemajumder is global head of artificial intelligence at F5 Networks (NASDAQ: FFIV). Shuman was previously chief technology officer of Shape Security, which was acquired by F5 in 2020. Shape's technology platform is the primary application defense for the world's largest banks, airlines, retailers, and government agencies. It was named by Fortune as one of the world's leading AI companies and by CNBC as one of the 50 most disruptive companies in the world.

Shuman previously led global product management for click fraud at Google, where his team safeguarded and enabled the $23 billion annual revenue AdWords business. He joined Google in 2003 as one of the early product managers for AdSense, held key product management roles in growing that business to $2 billion in annual revenue, and helped launch Gmail.

Shuman is the co-author of CGI Programming Unleashed, a contributing author to Crimeware, and a regular guest lecturer at Stanford. In 2011, the Boston Globe named him to its MIT150 list as one of the top innovators in the history of MIT.
