or even outright discriminate against them. Luxury goods may be advertised to people with
certain profiles on social media and not to others, creating a consumer “underclass.”
A severe obstacle to challenging such systems is that their outputs, which translate with or
without human intervention into decisions, are made not by humans or even human-legible
rules, but by less scrutable mathematical techniques. A loan applicant denied credit by a
credit-scoring ML algorithm cannot easily understand if her data was wrongly entered, or
what she can do to have a greater chance of acceptance in the future, let alone prove the
system is illegally discriminating against her (perhaps based on race, sex, or age). This opacity
has been described as creating a “black box” society.
2 Enter the Right to an Explanation
Since the 1990s, the law in Europe has been concerned with this kind of opaque and difficult-
to-challenge decision making by automated systems. In consequence, the Data Protection
Directive (DPD), a measure that harmonized relevant law across EU member states in 1995,
provided that a “significant” decision could not be based solely on automated data processing
(Article 15). Some EU member states interpreted this as a strict prohibition, others as giving
citizens a right to challenge such a decision and ask for a “human in the loop.” A second
right, embedded within Article 12, which generally gives users rights to obtain information
about whether and how their particular personal data was processed, gave users the specific
right to obtain “knowledge of the logic involved in any automatic processing” of their data.
Both of these provisions, especially the latter, attracted little notice, even among lawyers,
and were scarcely ever litigated, but they have regained significance in the latest iteration of EU
data protection (DP) law, the General Data Protection Regulation (GDPR), which was passed in
2016 and will come into operation across Europe in 2018.
In the GDPR, Article 15 has been transformed into Article 22, which has arguably created
what the media and some of the technical press have portrayed as a new “right to an explanation”
of algorithms. The former Article 12 has also been revamped into a new Article 15 and now
includes a right of access to “meaningful information about the logic involved, as well as
the significance and the envisaged consequences of such processing” (Article 15(1)(h)). This
provision, notably, applies only to automated decision making of the kind referred to in
Article 22. This leaves it unclear whether all the constraints on Article 22 (discussed below) are
ported into Article 15 (though our view is that they are not). Sadly, all this adds up to a reality
considerably foggier than the media portrayal.
Several factors undermine the idea that Article 22 contains a right to an explanation.
Primarily, Article 22 in its main thrust does not even contain a right to an explanation, but
merely a right to stop processing unless a human is introduced to review the decision on
challenge. However, Article 22 does refer at points to a requirement of “safeguards,” both
where the right to prevent processing (paradoxically) does not operate, and where it does
but sensitive personal data is processed. In relation to the first case, safeguards are partly
listed in Article 22(3), but in the second case, the only guidance is in Recital 71. (“Sensitive”
personal data in DP law refers to a restricted list of factors regarded as particularly important,
such as health, race, sex, sexuality, and religious beliefs.)