What do the GDPR and the "right to explanation" mean for AI?

Security teams increasingly rely on machine learning and artificial intelligence to protect assets. Will a requirement to explain how those systems make decisions render them less effective?

"But I'm not guilty," said K., "there's been a mistake. How is it even possible for someone to be guilty? We're all human beings here, one like the other."

"That is true," said the priest, "but that is how the guilty speak."

Sound Kafkaesque? That's because it's Kafka's The Trial, a nightmare story of an innocent man caught in an inscrutable bureaucracy, condemned to this or that fate, and with no way to challenge the decisions rendered against him. Machine learning has been compared to automated bureaucracy, and European regulators are clearly concerned that unfettered proliferation of machine learning could lead to a world in which we are all K.

But what does the GDPR, the sweeping overhaul of the 1995 European Data Protection Directive that affects any company that does business with Europeans, say about machine learning and artificial intelligence? Not a lot, it turns out, prompting legal scholars to debate what rights EU citizens have under the new law--and what GDPR compliance ought to look like for global companies operating in Europe.

The debate centers on the phrase "right to explanation," which appears exactly once, in Recital 71 -- part of the GDPR's preamble, which is not itself legally enforceable. However, the GDPR states that data controllers must notify consumers how their data will be used, including "the existence of automated decision-making, and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject." [our emphasis]

A common-sense reading would mean that if a computer is making real-world decisions without a human in the loop, then there should be some accountability for how those decisions are made. For example, if a bank's machine learning model denies you credit, and does so without meaningful human intervention, then, some scholars argue, the bank owes you an explanation of how it arrived at that decision.
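To make that concrete, consider a hypothetical, deliberately simple scorecard. With an interpretable linear model, the "explanation" falls out almost for free: each feature's weighted contribution to the score can be reported as a reason code, much like the adverse-action notices US lenders already send. The sketch below uses made-up feature names and weights; it is an illustration of the idea, not any bank's actual model.

```python
# Hypothetical sketch: deriving "reason codes" for a credit denial from an
# interpretable (linear) scoring model. Feature names, weights, and the
# threshold are invented for illustration.

WEIGHTS = {
    "years_of_credit_history": 0.8,
    "on_time_payment_rate":    2.5,
    "debt_to_income_ratio":   -3.0,
    "recent_hard_inquiries":  -0.7,
}
BIAS = -1.0          # model intercept
THRESHOLD = 0.0      # scores below this are denied

def score(applicant: dict) -> float:
    """Linear score: intercept plus the sum of weight * feature value."""
    return BIAS + sum(WEIGHTS[name] * value for name, value in applicant.items())

def explain_denial(applicant: dict, top_n: int = 2) -> list[str]:
    """Report the features that pushed the score down the most."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in applicant.items()}
    worst = sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
    return [f"{name} lowered your score by {abs(c):.2f}"
            for name, c in worst if c < 0]

applicant = {
    "years_of_credit_history": 1.0,   # normalized feature values
    "on_time_payment_rate":    0.6,
    "debt_to_income_ratio":    0.9,
    "recent_hard_inquiries":   3.0,
}

if score(applicant) < THRESHOLD:
    print("Denied. Main factors:")
    for reason in explain_denial(applicant):
        print(" -", reason)
```

For a model this simple, producing "meaningful information about the logic involved" is trivial. The fight is over what happens when the model is not this simple.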

"People are selling what are essentially black box systems," Andrew Selbst, a researcher at Data & Society in New York, says. "When things go wrong, the programmers are hiding behind a lack of transparency, saying 'nobody can understand these systems.' Ultimately, that's not going to be an acceptable answer. It's an expression of human dignity, of human autonomy, not to have decisions made about us which have no reviewability," he adds.

Despite strong arguments that a right to explanation should exist, it remains less clear whether such a right does exist under European law--and if it does, it likely has loopholes an autonomous truck could drive through.

Is there a GDPR "right to explanation" for AI?

In a touchstone legal analysis published last year, Sandra Wachter, a research fellow at the Oxford Internet Institute in the UK, criticized the notion, popular at the time, that AI would be heavily regulated by a "right to explanation" when the GDPR comes into force.

"From a legal perspective it's quite obvious what's going to be legally mandated as of May," Wachter says. "The phrase 'right to explanation' is mentioned once in a recital. Recitals are not legally binding, they are not separately enforceable."

Recitals, she explains, are meant to clarify any ambiguities in the law--but in this case, she says, there is no ambiguity. "I wrote that paper because I wanted to alarm people that we have problems that need to be fixed," she adds. "I'm very much in favor of a legally-binding right to explanation."

Other scholars dispute Wachter's interpretation of the GDPR, however. "The intent of the legislation is very clear," says Julia Powles, a research fellow at NYU Law who co-authored, with Selbst, a paper arguing that a right to explanation does exist in the GDPR. "It's about giving meaningful rights to individuals about when [their data] is used and how it's accessed, how it affects life and life chances."

Two core principles of the GDPR are transparency and accountability for how data is processed. One can therefore infer a right to explanation in the GDPR, Powles and Selbst argue. However, the technical difficulty of implementing a "right to explanation" for increasingly complex AI systems has led some to question whether compliance with such a requirement would even be possible.

Is a "right to explanation" technically possible?

Nobody is quite sure. There are well-recognized trade-offs between accuracy and explainability when developing a machine learning model: the more accurate the model, the harder it tends to be to trace how it arrived at any particular decision. That opacity makes it difficult for humans to trust those decisions, especially when the stakes are high.
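To see the trade-off in miniature, the sketch below trains two models on the same data using scikit-learn (the dataset and model choices are our illustration, not anything drawn from DARPA or the GDPR debate): a depth-2 decision tree whose entire logic can be printed in a few lines, and a 200-tree random forest that typically scores somewhat higher on held-out data but offers no comparably compact account of any single decision.

```python
# A minimal sketch of the accuracy/explainability trade-off, using scikit-learn.
# A depth-2 decision tree yields rules a human can read; a 200-tree random
# forest is usually more accurate, but its "reasoning" is spread across
# hundreds of trees and cannot be summarized as simply.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("shallow tree accuracy: ", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))

# The whole tree fits on a few lines -- this printout *is* its explanation.
print(export_text(tree, feature_names=list(data.feature_names)))
```

On a small dataset the accuracy gap may be modest, but the pattern generalizes: the forest's decision logic is distributed across hundreds of trees and thousands of splits, with no short, human-readable rule to show a regulator.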

The stakes don't get much higher than the battlefield, and the Pentagon's skunkworks is investigating how to solve the problem. DARPA -- the Defense Advanced Research Projects Agency, the same folks who brought you ARPANET, the precursor to the internet -- launched an "Explainable AI" initiative in 2016, giving researchers a pile of money to spend four years banging away at the problem.

"[AI] systems offer tremendous benefits, but their effectiveness will be limited by the machine's inability to explain its decisions and actions to human users," DARPA's RFP says. "This issue is especially important for the Department of Defense...Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners."

[Chart: a DARPA illustration of a machine learning system classifying an image of a cat. Credit: DARPA]

Machine learning analyzes the attributes of an image to determine what it depicts. Because the system is constantly adjusting what it knows about patterns in the data, the "reasons" behind its decisions can be unclear.

DARPA-funded research may well trickle down into GDPR compliance, but in the meantime, private industry faces related but different concerns: explaining decisions might disclose closely held trade secrets or otherwise violate intellectual property rights. Finding the right balance between accountability and encouraging innovation remains an unsolved problem. Nevertheless, Powles argues, we must try.

"The idea that because it's difficult, because the system is complicated, as many are, that's an excuse to not have to explain how the real-world consequences are derived. That seems backwards as a direction for technology and any sort of social policy," she says. "Because of the promise unproven of greater good we abandon these deeply held and hugely significant rights to autonomy and dignity and privacy and so on?"

While the GDPR is deliberately vague on regulating AI, the spirit of the law seems to be to encourage innovation while maintaining enough regulatory muscle to intervene if necessary. "Europe does not want to ban machine learning," Wachter says. "That's not what they set out to do. The whole aim of the framework is to balance the interests of data controllers with the interests of data subjects."

GDPR right to explanation: vague on purpose

The GDPR's vagueness is a feature, not a bug, says Shannon Yavorsky, an attorney at Venable LLP in San Francisco who has worked on European data protection law for the last 15 years. "There is definitely a sense that the GDPR lacks specificity on a number of different points," she says. "Part of it is intentional; it wants to leave room for technology to evolve."

The GDPR replaces the 1995 Data Protection Directive, and the old law was out of date, Yavorsky says. European lawmakers learned their lesson and now aim to regulate technology using high-level guiding principles, rather than getting down into the weeds of specific technological design choices.

While that vagueness may leave affected companies with a sense of unease, enforcing a legally ambiguous right to explanation is by no means a priority for European regulators when the GDPR comes into force in May, Yavorsky says. "Artificial intelligence and machine learning are sort of in their infancy," she says. "It could be this becomes more critical as the technology evolves and it does have a greater impact on society."

GDPR enforcement of right to explanation

European regulators are sending signals that they intend to enforce the GDPR and will likely single out egregiously bad actors in the first six to 12 months to set an example. However, the existence of a right to explanation, and what it means if it exists, is uncertain enough that we probably won't know where the line gets drawn until it goes to court.

"What's going to happen is that at some point somebody will sue," Wachter says. A test case will work its way through the courts of a member state, probably Ireland, where many international companies have their European headquarters. "The final say will be the European Court of Justice, the highest court in European law," she adds. "They will make a decision and say how the framework will be interpreted. We will have clarity at that point."
