What Is Artificial General Intelligence?
Clarifying The Goal For Engineering And Evaluation
Mark R. Waser
Books International
22883 Quicksilver Drive, Dulles, VA 20166
MWaser@BooksIntl.com
Abstract
Artificial general intelligence (AGI) has no consensus
definition but everyone believes that they will recognize it
when it appears. Unfortunately, in reality, there is great
debate over specific examples that range the gamut from
exact human brain simulations to infinitely capable systems.
Indeed, it has even been argued whether specific instances
of humanity are truly generally intelligent. Lack of a
consensus definition seriously hampers effective discussion,
design, development, and evaluation of generally intelligent
systems. We will address this by proposing a goal for AGI,
rigorously defining one specific class of general intelligence
architecture that fulfills this goal that a number of the
currently active AGI projects appear to be converging
towards, and presenting a simplified view intended to
promote new research in order to facilitate the creation of a
safe artificial general intelligence.
Classifying Artificial Intelligence
Defining and redefining “Artificial Intelligence” (AI) has
become a perennial academic exercise so it shouldn’t be
surprising that “Artificial General Intelligence” is now
undergoing exactly the same fate. Pei Wang addressed this
problem (Wang 2008) by dividing the definitions of AI
into five broad classes based upon how a given artificial
intelligence would be similar to human intelligence: in
structure, in behavior, in capability, in function, or in
principle. Wang states that
These working definitions of AI are all valid, in the
sense that each of them corresponds to a description
of the human intelligence at a certain level of
abstraction, and sets a precise research goal, which is
achievable to various extents. Each of them is also
fruitful, in the sense that it has guided the research to
produce results with intellectual and practical values.
On the other hand, these working definitions are
different, since they set different goals, require
different methods, produce different results, and
evaluate progress according to different criteria.
Copyright © 2008, The Second Conference on Artificial General
Intelligence (agi-09.org). All rights reserved.
We contend that replacing the fourth level of abstraction
(Functional-AI) with “similarity of architecture of mind (as
opposed to brain)” and altering its boundary with the fifth
would greatly improve the accuracy and usability of this
scheme for AGI. Stan Franklin proposed (Franklin 2007) that
his LIDA architecture was "ideally suited to provide a
working ontology that would allow for the discussion,
design, and comparison of AGI systems" because it
implemented and fleshed out a number of psychological and
neuroscience theories of cognition. The feasibility of this
claim was quickly demonstrated when Franklin and the
principals involved in NARS (Wang 2006), Novamente
(Looks, Goertzel and Pennachin 2004), and Cognitive
Constructor (Samsonovitch et al. 2008) put together a
comparative treatment of their four systems based upon that
architecture (Franklin et al. 2007); accordingly, we would
place all of those systems in the new category.
Making these changes leaves three classes based upon
different levels of architecture, with Structure-AI equating
to brain architecture and Principle-AI equating to the
architecture of problem-solving, and two classes based
upon emergent properties, behavior and capability.
However, it must be noted that both of Wang’s examples
of the behavioral category have moved to more of an
architectural approach with Wang noting the migration of
Soar (Lehman, Laird and Rosenbloom 2006; Laird 2008)
and the recent combination of the symbolic system ACT-R
(Anderson and Lebiere 1998, Anderson et al. 2004) with
the connectionist [L]eabra (O'Reilly and Munakata 2000),
to produce SAL (Lebiere et al. 2008) as the [S]ynthesis of
[A]CT-R and [L]eabra. Further, the capability category
contains only examples of "Narrow AI" and Cyc (Lenat
1995), which arguably belongs to the Principle-AI category.
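The revised classification scheme can be summarized as a simple data structure. The following Python sketch is purely illustrative (the enum names and the system-to-category placements are one reading of the discussion above, not part of Wang's formal scheme):

```python
from enum import Enum

class AIClass(Enum):
    """Revised taxonomy: three architectural levels plus two
    classes based upon emergent properties."""
    STRUCTURE = "brain architecture"                 # Wang's Structure-AI
    MIND = "architecture of mind"                    # replaces Wang's Functional-AI
    PRINCIPLE = "architecture of problem-solving"    # Wang's Principle-AI
    BEHAVIOR = "emergent behavior"
    CAPABILITY = "emergent capability"

# Placements argued for in the text. Soar and SAL are listed under the
# mind-architecture class because the text notes their migration away
# from a purely behavioral approach.
placements = {
    "LIDA": AIClass.MIND,
    "NARS": AIClass.MIND,
    "Novamente": AIClass.MIND,
    "Cognitive Constructor": AIClass.MIND,
    "Soar": AIClass.MIND,
    "SAL": AIClass.MIND,
    "Cyc": AIClass.PRINCIPLE,  # "arguably", per the text
}

# Systems converging on a common architecture of mind:
converging = sorted(name for name, c in placements.items()
                    if c is AIClass.MIND)
```

A structure of this kind makes the disagreement with Wang concrete: under his view each class would be a separate research goal, whereas here several active projects land in a single architectural category.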
Viewing them this way, we must argue vehemently with
Wang’s contentions that “these five trails lead to different
summits, rather than to the same one”, or that “to mix them
together in one project is not a good idea.” To accept these
arguments is analogous to resigning ourselves to being
blind men who will attempt only to engineer an example of
elephantness by focusing solely on a single view of
elephantness, to the exclusion of all other views and to the
extent of throwing out valuable information. While we
certainly agree with the observations that “Many current
AI projects have no clearly specified research goal, and