MAY 2023
Generating Harm: Generative AI’s Impact &
Paths Forward
CONTRIBUTIONS BY
Grant Fergusson
Caitriona Fitzgerald
Chris Frascella
Megan Iorio
Tom McBrien
Calli Schroeder
Ben Winters
Enid Zhou
EDITED BY
Grant Fergusson, Calli Schroeder, Ben Winters, and Enid Zhou
Thank you to Sarah Myers West and Katharina Kopp for your generous
comments on an earlier draft of the paper.
Notes on this Paper:
This is version 1 of this paper, reflecting documented and anticipated
harms of Generative AI as of May 15, 2023. Given the fast-changing pace of
the development, use, and harms of Generative AI, this is an inherently
dynamic paper, subject to future revision.
Throughout this paper, we use a standard format to explain the typology of
harms that generative AI can produce. Each section first explains relevant
background information and potential risks imposed by generative AI, then
highlights specific harms and interventions that scholars and regulators
have pursued to remedy each harm. This paper draws on two taxonomies of
A.I. harms to guide our analysis:
1. Danielle Citron and Daniel Solove’s Typology of Privacy Harms,
comprising physical, economic, reputational, psychological, autonomy,
discrimination, and relationship harms;¹ and
2. Joy Buolamwini’s Taxonomy of Algorithmic Harms, comprising loss of
opportunity, economic loss, and social stigmatization, including loss of
liberty, increased surveillance, stereotype reinforcement, and other
dignitary harms.²
These taxonomies do not necessarily cover all potential AI harms, and our
use of these taxonomies is meant to help readers visualize and
contextualize AI harms without limiting the types and variety of AI harms that
readers consider.
Table of Contents
Introduction ......................................................................................................................... i
Turbocharging Information Manipulation .................................................................. 1
Harassment, Impersonation, and Extortion .............................................................. 9
Spotlight: Section 230 .................................................................................................. 19
Profits Over Privacy: Increased Opaque Data Collection ................................... 24
Increasing Data Security Risk .................................................................................... 30
Confronting Creativity: Impact on Intellectual Property Rights ......................... 33
Exacerbating Effects of Climate Change ................................................................ 40
Labor Manipulation, Theft, and Displacement ....................................................... 44
Spotlight: Discrimination .............................................................................................. 53
The Potential Application of Products Liability Law ............................................. 54
Exacerbating Market Power and Concentration ................................................... 57
Recommendations ....................................................................................................... 60
Appendix of Harms ....................................................................................................... 64
References ..................................................................................................................... 68
EPIC | Generating Harm: Generative AI’s Impact and Paths Forward
Introduction
OpenAI’s decision to release ChatGPT, a chatbot built on the Large
Language Model GPT-3, last November thrust AI tools to the forefront of
public consciousness. In the last six months, new AI tools used to generate
text, images, video, and audio based on user prompts have exploded in
popularity. Suddenly, phrases like Stable Diffusion, Hallucinations, and Value
Alignment were everywhere. Each day, new stories about the different
capabilities of generative AI—and their potential for harm—emerged without
any clear indication of what would come next or what impacts these tools
would have.
While generative AI may be new, its harms are not. AI scholars have been
warning us of the problems that large AI models can cause for years.³ These
old problems are exacerbated by the industry’s shift in goals from research
and transparency to profit, opacity, and concentration of power. The
widespread availability and hype of these tools have led to increased harm
both individually and on a massive scale. AI replicates racial, gender, and
disability discrimination, and these harms are woven inextricably through
every issue highlighted in this report.
OpenAI and other companies’ decisions to rapidly integrate generative AI
technology into consumer-facing products and services have undermined
longstanding efforts to make AI development transparent and accountable,
leaving many regulators scrambling to prepare for the repercussions. And it
is clear that generative AI systems can significantly amplify risks to both
individual privacy and to democracy and cybersecurity generally. In the
words of the OpenAI CEO, who indeed had the power not to accelerate the
release of this technology, “I’m especially concerned that these models
could be used for widespread misinformation…[and] offensive cyberattacks.”
This rapid deployment of generative AI systems without adequate
safeguards is clear evidence that self-regulation has failed. Hundreds of
entities, from corporations to media outlets and government agencies, are
developing and looking to rapidly integrate these untested AI tools into a
wide range of systems. And this rapid rollout will have disastrous results
without necessary fairness, accountability, and transparency protections
built in from the beginning.
We are at a critical juncture as policymakers and industry around the globe
are focusing on the substantial risks and opportunities posed by AI. There is
an opportunity to make this technology work for people. Companies should
be required to show their work, make it clear when AI is in use, and offer
informed consent throughout the training, development, and use process.
One thread of public concern focuses on AI’s “existential” risks: speculative
long-term scenarios in which robots replace humans at work and in society,
ultimately taking over, à la “I, Robot.” Some legislators at the state and
federal level have begun to take AI regulation more seriously; however, it
remains to be seen whether their focus will extend beyond supporting
companies’ development of AI tools and imposing marginal disclosure and
transparency requirements. Enacting clear prohibitions on
high-risk uses, addressing the easy spread of disinformation, requiring
meaningful and proactive disclosures that facilitate informed consent, and
bolstering consumer protection agencies are necessary to address the
harms and risks specific to generative AI. This paper strives to provide a
broad outline of different issues that the use of generative AI brings up,
educate lawmakers and the public, and offer some paths forward to mitigate
harm.
- Ben Winters, Senior Counsel