Preface
Artificial intelligence (AI) is recognised as
a strategically important technology that
can contribute to a wide array of societal
and economic benefits. However, it is also a
technology that may present serious risks,
challenges and unintended consequences.
Within this context, trust in AI systems and
products is recognised as a key prerequisite
for the broader uptake of these technologies
in society. It is therefore vital that AI products,
services and systems are developed and
implemented responsibly, safely and ethically.
This research aimed to bring together evidence
on the use of labelling initiatives and schemes,
codes of conduct and other voluntary, self-
regulatory mechanisms for the ethical and
safe development of AI applications. Through
a literature review, a crowdsourcing exercise
and a series of interviews, we identified and
analysed such mechanisms across diverse
geographical contexts, sectors, AI applications
and stages of development. We draw out a
set of common themes, highlight notable
divergences between these initiatives,
and outline anticipated opportunities and
challenges associated with developing and
implementing them.
We also offer a series of topics for further
consideration to best balance these
opportunities and challenges. These topics
present a set of key learnings that stakeholders
can take forward to understand the potential
implications for future action when designing
and implementing voluntary, self-regulatory
mechanisms. The analysis is intended to
stimulate further discussion and debate across
stakeholders as applications of AI continue
to multiply across the globe and particularly
considering the European Commission’s
(EC’s) recently published draft proposal for
AI regulation. In this regard, the research
presented in this report will be of interest to a
range of stakeholders, including policymakers,
regulators, academics and industry practitioners,
as well as anyone – including the
public – who is interested in the development,
adoption and impact of AI and other emerging
technologies.
This research was prepared for Microsoft.
However, RAND Europe had full editorial control
and independence of the analyses performed
and presented in this report, which has been
peer-reviewed in accordance with RAND’s
quality assurance standards. This work is
intended to inform the public good and should
not be taken as a commercial endorsement of
any product or service.
We were able to undertake this research
because of the support and contributions
of many individuals. First, we would like to
thank the team at Microsoft Belgium for their
support throughout the study, in particular,
Cornelia Kutterer, Vassilis Rovilos and Evdoxia
Nerantzi. We are also grateful for the expertise