For more information on this publication, visit www.rand.org/t/RRA1773-1
About RAND Europe
RAND Europe is a not-for-profit research organisation that helps improve policy and
decision making through research and analysis. To learn more about RAND Europe, visit
www.randeurope.org.
Research Integrity
Our mission to help improve policy and decision making through research and analysis is
enabled through our core values of quality and objectivity and our unwavering commitment to
the highest level of integrity and ethical behaviour. To help ensure our research and analysis
are rigorous, objective, and nonpartisan, we subject our research publications to a robust and
exacting quality-assurance process; avoid both the appearance and reality of financial and
other conflicts of interest through staff training, project screening, and a policy of mandatory
disclosure; and pursue transparency in our research engagements through our commitment to
the open publication of our research findings and recommendations, disclosure of the source
of funding of published research, and policies to ensure intellectual independence. For more
information, visit www.rand.org/about/principles.
RAND’s publications do not necessarily reflect the opinions of its research clients and sponsors.
Published by the RAND Corporation, Santa Monica, Calif., and Cambridge, UK
© 2022 RAND Corporation
R® is a registered trademark.
Cover: Adobe Stock
Limited Print and Electronic Distribution Rights
This publication and trademark(s) contained herein are protected by law. This representation
of RAND intellectual property is provided for noncommercial use only. Unauthorised posting of
this publication online is prohibited; linking directly to its webpage on rand.org is encouraged.
Permission is required from RAND to reproduce, or reuse in another form, any of its research
products for commercial purposes. For information on reprint and reuse permissions, please
visit www.rand.org/pubs/permissions.
Preface
Artificial intelligence (AI) is recognised as
a strategically important technology that
can contribute to a wide array of societal
and economic benefits. However, it is also a
technology that may present serious risks,
challenges and unintended consequences.
Within this context, trust in AI systems and
products is recognised as a key prerequisite
for the broader uptake of these technologies
in society. It is therefore vital that AI products,
services and systems are developed and
implemented responsibly, safely and ethically.
This research aimed to bring together evidence
on the use of labelling initiatives and schemes,
codes of conduct and other voluntary, self-
regulatory mechanisms for the ethical and
safe development of AI applications. Through
a literature review, a crowdsourcing exercise
and a series of interviews we identified and
analysed such mechanisms across diverse
geographical contexts, sectors, AI applications
and stages of development. We draw out a
set of common themes, highlight notable
divergences between these initiatives,
and outline anticipated opportunities and
challenges associated with developing and
implementing them.
We also offer a series of topics for further
consideration to best balance these
opportunities and challenges. These topics
present a set of key learnings that stakeholders
can take forward to understand the potential
implications for future action when designing
and implementing voluntary, self-regulatory
mechanisms. The analysis is intended to stimulate further discussion and debate across stakeholders as applications of AI continue to multiply across the globe, particularly in light of the European Commission’s (EC’s) recently published draft proposal for AI regulation. In this regard, the research presented in this report will be of interest to a range of stakeholders, including policymakers, regulators and those in academia and industry, as well as anyone – including the public – who is interested in the development, adoption and impact of AI and other emerging technologies.
This research was prepared for Microsoft.
However, RAND Europe had full editorial control
and independence of the analyses performed
and presented in this report, which has been
peer-reviewed in accordance with RAND’s
quality assurance standards. This work is
intended to inform the public good and should
not be taken as a commercial endorsement of
any product or service.
We were able to undertake this research
because of the support and contributions
of many individuals. First, we would like to
thank the team at Microsoft Belgium for their
support throughout the study, in particular,
Cornelia Kutterer, Vassilis Rovilos and Evdoxia
Nerantzi. We are also grateful for the expertise
and insights provided by the numerous
stakeholders we engaged with through the
interviews and crowdsourcing exercise over
the course of the project. We would like to
thank Jessica Plumridge at RAND Europe for
her contributions to the design of this report.
Finally, we would like to thank our reviewers
at RAND Europe, Susan Guthrie and Erik
Silfversten, for their helpful and constructive
comments on this report during the quality
assurance process.
RAND Europe is a not-for-profit research
organisation that aims to improve policy and
decision making in the public interest, through
research and analysis. RAND Europe’s clients
include European governments, institutions,
non-governmental organisations and firms
with a need for rigorous, independent,
multidisciplinary analysis.
For more information about RAND Europe or
this document, please contact:
Dr Salil Gunashekar (Associate Research Group
Director, Science and Emerging Technology)
RAND Europe
Rue de la Loi 82 / Bte 3
1040 Brussels
Belgium
Westbrook Centre, Milton Road
Cambridge CB4 1YG
United Kingdom
Email: sgunashe@randeurope.org
Summary
Background and context
AI has emerged as a critical area of interest
to numerous stakeholders across the world.
It has been recognised as a strategically
important technology that can contribute to a
wide array of economic and societal benefits
across the entire spectrum of industries and
social activities. While numerous benefits
of AI have been widely acknowledged, there
are a number of potential barriers that may hinder the adoption of AI, including concerns around trust and transparency, ethics, liability and
security. Notably, trust in AI systems is widely
recognised as a key prerequisite for the broader
uptake of AI in society. Given the expected
impact that AI can have on our society, and
the need to build trust and trustworthiness, it
is vital that AI applications are developed and
implemented responsibly, safely and ethically.
With these issues in mind, over the past few
years multiple actors around the world have
been considering approaches for the regulation
of AI. Notably, the EC has recently developed a
concrete proposal to regulate AI. Furthermore,
a growing number of voluntary, self-regulatory
initiatives for the ethical and safe development
of AI have been put forward by stakeholders
from the private sector, civil society, and
scientific and policymaking spheres.
Objectives of the study
Against the backdrop of a complex and
evolving environment in which the applications
of AI are increasingly impacting the way we
live and work, the aim of this research was to
bring together evidence on the use of labelling
initiatives and schemes, codes of conduct and
other voluntary, self-regulatory mechanisms
for the ethical and safe development of AI
applications. While the focus of the study was
on labelling initiatives and codes of conduct,
the scope also extended to other voluntary
mechanisms – such as seals and certifications
– that are used or proposed as potential
tools to signal to users and consumers
that AI-enabled products and services are
trustworthy, and which enable users and
consumers to make informed decisions about
how to engage with AI-based applications.
Research approach
We adopted a mixed-methods approach
to the study:
In Phase 1 of the research, we carried
out a series of semi-structured scoping
interviews with key stakeholders with
knowledge of developments within
the wider AI accountability ecosystem
(including labelling and codes of conduct)
to refine the scope of the research and
to develop a better understanding of the
existing state of play.