Enslaving the Algorithm: From a “Right to an
Explanation” to a “Right to Better Decisions”?
Lilian Edwards, University of Strathclyde [l.edwards@strath.ac.uk]
Michael Veale, University College London [m.veale@ucl.ac.uk]
Published in
IEEE Security & Privacy (2018) 16(3), 46–54, doi:10.1109/MSP.2018.2701152
As concerns about unfairness and discrimination in “black box” machine learning systems rise, a legal “right to an explanation” has emerged as a compellingly attractive approach for challenge and redress. We outline recent debates on the limited provisions in European data protection law, and introduce and analyze newer explanation rights in French administrative law and the draft modernized Council of Europe Convention 108. While individual rights can be useful, in privacy law they have historically unreasonably burdened the average data subject. “Meaningful information” about algorithmic logics is more technically possible than commonly thought, but this exacerbates a new “transparency fallacy”—an illusion of remedy rather than anything substantively helpful. While rights-based approaches deserve a firm place in the toolbox, other forms of governance, such as impact assessments, “soft law,” judicial review, and model repositories deserve more attention, alongside catalyzing agencies acting for users to control algorithmic system design.
1 Introduction
Businesses and governments are increasingly deploying machine learning (ML) systems to
make and support decisions that have a crucial impact on everyday life: decisions about
(inter alia) criminal sentencing and release on bail, medical treatment, eligibility for welfare
benefits, what entertainment we see and can access, the price and availability of goods and
services delivered online, and the political information to which we are exposed. These ML
systems—colloquially entering public consciousness as just algorithms, or even just AI—have been extensively criticized in the past few years as a result of a number of well-known “war stories” that have revealed patterns of discrimination embedded in such systems but invisible to casual users.[1]
Because algorithms are trained on historical data, they risk replicating unwanted historical
patterns of unfairness and/or discrimination. For example, in hiring systems, a lack of women
being hired in the past may mean the systems fail to recognize the worth of female applicants,
Electronic copy available at: https://ssrn.com/abstract=3052831
or even outright discriminate against them. Luxury goods may be advertised to people with
certain profiles on social media and not to others, creating a consumer “under class.”
A severe obstacle to challenging such systems is that outputs, which translate with or
without human intervention to decisions, are made not by humans or even human-legible
rules, but by less scrutable mathematical techniques. A loan applicant denied credit by a
credit-scoring ML algorithm cannot easily understand if her data was wrongly entered, or
what she can do to have a greater chance of acceptance in the future, let alone prove the
system is illegally discriminating against her (perhaps based on race, sex, or age). This opacity
has been described as creating a “black box” society.[2]
2 Enter the Right to an Explanation
Since the 1990s, the law in Europe has been concerned with this kind of opaque and difficult-
to-challenge decision making by automated systems. In consequence, the Data Protection
Directive (DPD), a measure that harmonized relevant law across EU member states in 1995,
provided that a “significant” decision could not be based solely on automated data processing (article 15). Some EU members interpreted this as a strict prohibition, others as giving
citizens a right to challenge such a decision and ask for a “human in the loop.” A second
right, embedded within article 12, which generally gives users rights to obtain information
about whether and how their particular personal data was processed, gave users the specific
right to obtain “knowledge of the logic involved in any automatic processing” of their data.
Both these provisions, but especially the latter, were not much noticed, even by lawyers,
and scarcely ever litigated, but have revived in significance in the latest iteration of EU data
protection (DP) law within the General Data Protection Regulation (GDPR), which passed in
2016 and will come into operation across Europe in 2018.
In the GDPR, article 15 has been transformed into Article 22 and has arguably created
what the media and some technical press have portrayed as a new “right to an explanation”
of algorithms. The former article 12 has also been revamped to a new article 15 and now
includes a right to access to “meaningful information about the logic involved, as well as
the significance and the envisaged consequences of such processing” (article 15(1)(h)). This
provision, notably, applies only in the context of “automated decision making in the context
of” Article 22. This leaves it unclear whether all the constraints on Article 22 (discussed below) are ported into article 15 (though our view is that they are not). Sadly, all this adds up to a reality
considerably foggier than the media portrayal.
Several factors undermine the idea that Article 22 contains a right to an explanation.
Primarily, Article 22 does not in its main thrust even contain a right to an explanation, but
is merely a right to stop processing unless a human is introduced to review the decision on
challenge. However, Article 22 does refer at points to a requirement of “safeguards,” both
where the right to prevent processing (paradoxically) does not operate, and where it does
but sensitive personal data is processed. In relation to the first case, safeguards are partly
listed in Article 22(3), but in the second case, the only guidance is in Recital 71. (“Sensitive”
personal data in DP law refers to a restricted list of factors regarded as particularly important
such as health, race, sex, sexuality, and religious beliefs.)
It is important to note that, in European legislation, the articles in the main text are binding
on member states but are accompanied by “recitals,” which are designed to help states
interpret the articles and understand their purpose. Recitals are usually regarded as helpful
rather than binding, but this is contested and differs among states. Unfortunately, in relation
to Article 22, Recital 71 mentions some key matters not included in the main text. Article
22(3) mandates that safeguards include “at least the right to obtain human intervention on
the part of the controller, to express his or her point of view and to contest the decision,” but
the safeguards listed in Recital 71 “should include specific information to the data subject
and the right to obtain human intervention, to express his or her point of view, to obtain an
explanation of the decision reached after such assessment and to challenge the decision” (italics
added).
This strange mishmash of texts thus cannot firmly be said to mandate a right to explanation
in all or indeed any circumstances and may not be interpreted the same way from state to
state.
This is a serious, but not the only, problem with Article 22.
• Article 22 applies only to systems where decisions are made in a “solely” automated way—that is, there is no human in the loop—and there are very few of these and fewer that are “significant” (see below). How meaningful this human input has to be is subject to recent regulatory guidance,[3] but remains unclear and untested.
• What is a “decision”? The GDPR gives us no help with this at all other than that it includes a “measure” (Recital 71). Is sending a targeted ad to a user using an algorithmic system a decision? It produces no binding effect; the advert may be ignored; and in many cases, it is hard to see what action causally flows from it. Yet as in the well-publicized Latanya Sweeney example,[4] sending adverts promoting help with criminal arrests solely to “black-sounding” names was worrying and offensive—and potentially dangerous if these characterizations were inherited by systems selecting individuals for stop and search or airport screening. Although a single advert delivery decision might not have a significant effect on an individual’s life, the cumulative effect on an entire group or class may be worrying. Such group privacy impacts are not dealt with well by DP law—an area based on individualistic human rights—and are exacerbated by a continuing lack of provision for class actions in EU states.
• Article 22 applies only to a decision that produces legal or other “significant” effects. This is vague in the extreme. Some would argue this could only apply to systems that make important, binding decisions on things like criminal justice, risk assessment, credit scoring, education applications, or employment. Yet such systems are rarely if ever entirely automated, even if the human’s involvement is often nominal. Furthermore, some commercial decisions may seem trivial as a one-off, but are significant in aggregate. Mendoza and Bygrave argue that advertising decisions can never be significant,[5] while European regulators recently produced guidance indicating the opposite.[3] Might systems recommending buying choices or targeting adverts not limit a user’s worldview or choices, or disseminate “fake news” via algorithmic filter bubbles? Arguably, such phenomena are becoming deeply and significantly destructive to our democracy. We