Front Inform Technol Electron Eng 2017 18(1):122-138
Frontiers of Information Technology & Electronic Engineering
www.zju.edu.cn/jzus; engineering.cae.cn; www.springerlink.com
ISSN 2095-9184 (print); ISSN 2095-9230 (online)
E-mail: jzus@zju.edu.cn
A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars*
Jian-ru XUE†, Di WANG, Shao-yi DU, Di-xiao CUI, Yong HUANG, Nan-ning ZHENG
(Lab of Visual Cognitive Computing and Intelligent Vehicle, Xi’an Jiaotong University, Xi’an 710049, China)
† E-mail: jrxue@xjtu.edu.cn
Received Dec. 29, 2016; Revision accepted Jan. 8, 2017; Crosschecked Jan. 10, 2017
* Project supported by the National Key Program Project of China (No. 2016YFB1001004) and the National Natural Science Foundation of China (Nos. 91320301 and 61273252)
ORCID: Jian-ru XUE, http://orcid.org/0000-0002-4994-9343
© Zhejiang University and Springer-Verlag Berlin Heidelberg 2017
Abstract: Most state-of-the-art robotic cars’ perception systems are quite different from the way a human driver
understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual
perception, while the machine perception of traffic environments needs to fuse information from several different kinds
of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results
for its autonomous driving, while an experienced human driver works well with dynamic traffic environments, in
which machine perception could easily produce noisy perception results. In this paper, we propose a vision-centered
multi-sensor fusing framework for a traffic environment perception approach to autonomous driving, which fuses
camera, LIDAR, and GIS information consistently via both geometrical and semantic constraints for efficient self-
localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully
integrated with the framework and address multiple levels of machine vision techniques, from collecting training
data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment
mapping. The proposed framework has been tested extensively in actual urban scenes with our self-developed robotic
cars for eight years. The empirical results validate its robustness and efficiency.
Key words: Visual perception; Self-localization; Mapping; Motion planning; Robotic car
http://dx.doi.org/10.1631/FITEE.1601873 CLC number: TP181
1 Introduction
Rapid integration of artificial intelligence with
applications has provided some notable break-
throughs in recent years (Pan, 2016). The robotic car
is one such disruptive technology that may enter real
life in the near future, and this is also a good example
of a hybrid artificial intelligence system (Zheng et al.,
2017). A robotic car needs to answer three questions at all times while driving: where it is, where it is going, and how to get there. To answer these questions, the robotic car needs to integrate three coupled, consequential tasks: self-localization, decision making and motion planning, and motion control.
Among these tasks, the ability to understand the
robot’s surroundings lies at the core, and the robotic
car’s performance heavily depends on the accuracy
and reliability of its environment perception tech-
nologies including self-localization and perception of
obstacles.
Almost all relevant information required for au-
tonomous driving can be acquired through vision
sensors. This includes but goes well beyond lane
geometry, drivable road segments, traffic signs, traf-
fic lights, obstacle positions and velocity, and obsta-
cle class. However, exploiting this potential of vision sensors is more difficult than working with LIDAR, radar, or ultrasonic sensors: the sensing data of LIDAR, radar, and ultrasonic sensors directly provide distance and/or velocity, i.e., the information needed for vehicle control, whereas the same quantities must first be inferred from images. Nevertheless, camera-based driver assistance systems have entered the automotive market (Ulrich, 2016). However, computer vision based approaches to autonomous driving in urban environments remain an open research issue, since state-of-the-art vision technologies are still incapable of providing the high success rate demanded by autonomous driving. Fortunately, recent approaches to scene understanding using deep learning technologies suggest a promising future for a vision-centered approach to robotic cars (Hoiem et al., 2015).
In this paper, we propose a vision-centered
multi-sensor fusing framework for the robotic cars’
perception problem, which fuses camera, LIDAR,
and GIS information consistently via geometrical
constraints and driving knowledge. The framework
consists of self-localization and processing of obsta-
cles surrounding the robotic car. At first glance these
two problems seem to have been well studied, and
early works in this field were quickly rewarded with
promising results. However, the large variety of sce-
narios and the high rates of success demanded by
autonomous driving have kept this research alive.
Specifically, integrating computer vision algorithms
within a compact and consistent machine perception
system is still a challenging problem in the field of
robotic cars.
Self-localization is the first of the aforementioned critical problems. The capability to accurately and efficiently determine its position at all times is fundamental for a robotic car to interact with the environment. Different accuracies and update frequencies of self-localization are required by various applications
of a robotic car. Taking parking as an example, the
accuracy needed is at the centimeter level, and the
update frequency is about 100 Hz. In contrast, for
routing and guidance, the required accuracy is re-
duced to 10–100 m level and the update frequency
is about 0.01 Hz. To address the critical problems of GPS measurements, namely their low accuracy and their susceptibility to signal degradation and loss, the map-based method has become one of the most popular approaches for robotic cars, in which a map is used to improve upon GPS measurements and to fill in when signals are unavailable or degraded. In the map-based localization approach, an ideal map should provide not only a geometrical representation of the traffic environment, but also some kind of sensor-based description of the environment to alleviate the difficulty of self-localization as well as of motion planning (Fuentes-Pacheco et al., 2015). However, traditional road maps made for human drivers cannot be used directly by a robotic car, since they consist of evenly sampled spatial points connected via polylines and have low accuracy, with errors of about 5–20 m, especially in urban areas. Inevitably, building a high-definition map becomes one of the core competencies of robotic cars.
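To make the map-based idea concrete, here is a minimal sketch in Python (not the authors' implementation; the lane polyline, the fallback rule, and all numbers are illustrative assumptions): a noisy GPS fix is snapped to the nearest point of a mapped lane centerline, and simple dead reckoning from odometry fills in when the signal drops out.

# Minimal sketch (illustrative only) of the map-based idea: a lane-level map
# corrects a noisy GPS fix, and odometry fills in when GPS is unavailable.
import numpy as np

def project_to_polyline(p, polyline):
    """Return the point on a lane polyline closest to position p (meters)."""
    best, best_d = None, np.inf
    for a, b in zip(polyline[:-1], polyline[1:]):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + t * ab                      # closest point on this segment
        d = np.linalg.norm(p - q)
        if d < best_d:
            best, best_d = q, d
    return best

def localize(gps_fix, odom_delta, last_pose, lane_polyline):
    """Use the map to correct GPS when present; otherwise dead-reckon from odometry."""
    if gps_fix is None:                     # GPS outage: fall back to odometry
        return last_pose + odom_delta
    return project_to_polyline(np.asarray(gps_fix), lane_polyline)

lane = np.array([[0.0, 0.0], [50.0, 0.5], [100.0, 2.0]])       # hypothetical lane centerline
print(localize(np.array([40.0, 3.0]), np.zeros(2), np.zeros(2), lane))  # noisy fix snapped to the lane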
Mapping approaches build geometric represen-
tations of environments. They adopt sensor-based
environment description models, which integrate vi-
sual and geometric features and have been designed
in conjunction with Bayesian filters so that the
sensor-based description can be updated over time
(Douillard et al., 2009). Mapping for robotic cars
through local perception information is a challenging
problem for a number of reasons. Firstly, maps are
defined over a continuous space; the solution space of
map estimation has infinitely many dimensions. Secondly, learning a map is a ‘chicken-and-egg’ problem: building the map requires knowing the robot's poses, while estimating those poses requires the map. For this reason it is often referred to as the simultaneous localization and mapping (SLAM) or concurrent mapping and localization problem (Thrun and Leonard, 2008). More specifically, the difficulty of the mapping problem is increased by a collection of factors, including map size, noise in perception and actuation, perceptual ambiguity, and the alignment of spatio-temporal sensing data acquired asynchronously by different types of sensors. With a given map of the traffic environment, self-localization becomes the problem of determining the car's pose in that map.
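As one concrete, deliberately simplified instance of such a sensor-based description maintained by a Bayesian filter, an occupancy grid can store a log-odds value per cell and accumulate evidence from every scan. The sketch below is only illustrative: the inverse-sensor-model increments and the cell indices are assumptions, not values from the paper.

# Minimal sketch of the Bayesian (log-odds) update behind grid mapping:
# each cell's occupancy belief is updated whenever a scan observes it,
# so the sensor-based map description evolves over time.
import numpy as np

L_OCC, L_FREE = 0.85, -0.4           # assumed inverse-sensor-model log-odds increments

def update_grid(log_odds, hit_cells, free_cells):
    """One Bayesian log-odds update of an occupancy grid from a single scan."""
    for ij in hit_cells:
        log_odds[ij] += L_OCC         # evidence that the cell is occupied
    for ij in free_cells:
        log_odds[ij] += L_FREE        # evidence that the cell is free
    return log_odds

def occupancy_prob(log_odds):
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))   # convert log-odds back to probability

grid = np.zeros((100, 100))                        # prior: p = 0.5 everywhere
grid = update_grid(grid, hit_cells=[(50, 60)], free_cells=[(50, k) for k in range(60)])
print(occupancy_prob(grid[50, 60]), occupancy_prob(grid[50, 10]))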
Another critical problem is the need for high
reliability in processing obstacles surrounding the
robotic car. This guarantees the robotic car’s safety
in driving through real traffic. The robotic car needs
to know positions, sizes, and velocities of the sur-
rounding obstacles to make high-level driving de-
cisions. However, real-time detection and tracking
algorithms relying on a single sensor often suffer
from low accuracy and poor robustness when confronted with difficult, real-world data (Xue et al., 2008). For example, most state-of-the-art object trackers produce noisy estimates of obstacle velocities, and obstacles are difficult to track under heavy occlusion and viewpoint changes in real traffic environments (Ess et al., 2010; Mertz et al., 2013). Additionally,
without robust estimates of velocities of nearby ob-
stacles, merging onto or off highways or changing
lanes becomes a formidable task. Similar issues
will be encountered by any robot that must act au-
tonomously in crowded, dynamic environments.
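The velocity-estimation difficulty can be made concrete with a standard constant-velocity Kalman filter, one common way (not necessarily the authors') that a tracker turns noisy per-frame position detections into smoothed position and velocity estimates; the sensor period and noise covariances below are illustrative assumptions.

# Minimal sketch: constant-velocity Kalman filter tracking one obstacle from
# noisy position detections. All noise values are illustrative.
import numpy as np

dt = 0.1                                            # assumed sensor period (s)
F = np.array([[1, 0, dt, 0],                        # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                         # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.05 * np.eye(4)                                # process noise (illustrative)
R = 0.5 * np.eye(2)                                 # measurement noise (illustrative)

def kalman_step(x, P, z):
    """One predict/update cycle of a linear Kalman filter."""
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)                         # update with the detection z
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for t in range(50):                                 # synthetic detections of a car moving at 10 m/s
    z = np.array([10.0 * dt * t, 3.5]) + np.random.randn(2) * 0.5
    x, P = kalman_step(x, P, z)
print("estimated velocity:", x[2:])                 # should approach [10, 0]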
Fusing multiple LIDARs and radars is an essen-
tial module of a robotic car and of advanced driver
assistance systems. With the improvement of vision-
based object detection and tracking technologies, in-
tegrating vision technologies with LIDAR and radars
makes it possible to make higher-level driving decisions than previous methods that fuse only LIDARs with radars.
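A minimal sketch of the geometric constraint underlying camera-LIDAR fusion is given below: LIDAR points are transformed into the camera frame and projected through the intrinsic matrix so that range measurements can be associated with image detections. The calibration matrices and toy points are hypothetical, and the points are assumed to be expressed already in a camera-like axis convention.

# Minimal sketch (hypothetical calibration) of projecting LIDAR points into a camera image.
import numpy as np

K = np.array([[700.0, 0.0, 640.0],            # illustrative pinhole intrinsics
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
R_cl = np.eye(3)                              # LIDAR-to-camera rotation (assumed identity here)
t_cl = np.array([0.1, -0.3, 0.0])             # LIDAR-to-camera translation in meters (illustrative)

def project_lidar_to_image(points_lidar):
    """Project Nx3 LIDAR points to pixels; keep only points in front of the camera."""
    pts_cam = points_lidar @ R_cl.T + t_cl    # rigid transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.5]    # drop points behind or too close to the camera
    uv = pts_cam @ K.T
    return uv[:, :2] / uv[:, 2:3]             # perspective division gives pixel coordinates

scan = np.array([[2.0, 0.5, 10.0], [-1.0, 0.2, 20.0]])   # toy points, camera-like axes assumed
print(project_lidar_to_image(scan))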
In this paper, we summarize our 8-year effort
on a vision-centered multi-sensor fusing approach
to the aforementioned problems, as well as lessons
we have learned through the long-term and exten-
sive test of the proposed approach with our robotic
car autonomously driving in real urban traffic (Xue
et al., 2008; Du et al., 2010; Cui et al., 2014; 2016).
Fig. 1 illustrates the timeline of the robotic cars we
developed for the test of the vision-centered multi-
sensor fusing approach.
Fig. 1 The timeline of the robotic cars (2009-2016) used in the long-term test of the vision-centered multi-sensor fusing approach
2 Related works
In this section, we present a brief survey of re-
cent works on self-localization, and obstacle detec-
tion and tracking.
2.1 Self-localization
The core problem of self-localization is mapping,
and mapping and localization were initially stud-
ied independently. More specifically, mapping for
robotic cars is realized as a procedure of integrat-
ing local, partial, and sequential measurements of
the car’s surroundings into a consistent representa-
tion, which forms the basis for further navigation.
The key to this integration lies in the joint alignment of spatio-temporal sensing data from the multiple heterogeneous sensors mounted on the robotic car, which is usually performed off-line.
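The temporal part of this alignment can be illustrated with a small sketch: because the sensors run asynchronously, the vehicle pose is interpolated to each sensor's timestamp before any spatial alignment is solved. The timestamps, poses, and interpolation scheme below are illustrative assumptions, not the authors' pipeline.

# Minimal sketch of temporal alignment: interpolate the vehicle pose to the
# timestamp of an asynchronously arriving sensor measurement.
import numpy as np

pose_t = np.array([0.00, 0.10, 0.20, 0.30])             # pose timestamps (s)
pose_xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, 0.3]])

def pose_at(t):
    """Linearly interpolate the 2D pose to an arbitrary sensor timestamp t."""
    x = np.interp(t, pose_t, pose_xy[:, 0])
    y = np.interp(t, pose_t, pose_xy[:, 1])
    return np.array([x, y])

lidar_stamp = 0.137                                      # a scan that arrived between two poses
print(pose_at(lidar_stamp))                              # pose to which this scan should be anchored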
With a given map, one needs to establish correspondences between the map and the car's local perception, and then determine the transformation between the map coordinate system and the local perception coordinate system based on these correspondences. Knowing this transformation enables the
robotic car to locate the surrounding obstacles of in-
terest within its own coordinate frame—a necessary
prerequisite for the robotic car to navigate through
the obstacles. This means that the localization is ac-
tually a registration problem (Du et al., 2010), and
can be solved via map-matching technologies (Cui
et al., 2014). With its localization in the global map,
the robot can obtain navigation information from
the map. Additionally, the navigation information
can be further used as a prior in verifying the local
perception results, for the purpose of increasing the
accuracy and reliability of the local perception (Cui
et al., 2014; 2016).
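The registration view of localization can be illustrated with the textbook closed-form (SVD-based) alignment of corresponded point sets, sketched below; this is a generic building block rather than the specific matching method of Du et al. (2010) or Cui et al. (2014), and the landmark correspondences are synthetic.

# Minimal sketch: recover the rigid transform (R, t) between the local frame and
# the map frame from corresponded points, in closed form (Kabsch/SVD).
import numpy as np

def rigid_transform(local_pts, map_pts):
    """Least-squares R, t such that R @ local + t ~ map (Nxd point arrays)."""
    mu_l, mu_m = local_pts.mean(axis=0), map_pts.mean(axis=0)
    H = (local_pts - mu_l).T @ (map_pts - mu_m)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_l
    return R, t

# Hypothetical correspondences between locally perceived landmarks and map landmarks
local = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
mapped = local @ R_true.T + np.array([5.0, -2.0])
R, t = rigid_transform(local, mapped)
print(np.round(R, 3), np.round(t, 3))                # recovers the 30-degree rotation and the offset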
Joint mapping and localization eventually became known as SLAM (Dissanayake et al., 2001). SLAM methods are able to reduce the accumulated drift relative to the initial position of the robotic car by
using landmarks and jointly optimizing over all or
a selection of poses and landmarks. Efficient opti-
mization strategies using incremental sparse matrix
factorization (Montemerlo et al., 2002) or relative
structure representation (Grisetti et al., 2010) have
been proposed to make these algorithms tractable
and scalable. Thus, at a theoretical and conceptual level, SLAM is now considered a solved problem when LIDARs are used to build 2D maps of small, static indoor environments (Thrun and
Leonard, 2008). Comprehensive surveys and tuto-
rial papers on SLAM can be found in the literature
(Durrant-Whyte and Bailey, 2006).
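A toy example of the joint optimization mentioned above is sketched below: all poses are estimated together so that a single loop-closure edge removes the drift accumulated by odometry. Orientations are ignored, and the edge weights, measurements, and drift values are illustrative assumptions rather than anything from the cited systems.

# Minimal sketch (toy 2D positions) of the least-squares view of SLAM back-ends:
# jointly optimizing all poses lets one loop-closure edge correct odometry drift.
import numpy as np
from scipy.optimize import least_squares

# Odometry edges (i, j, measured displacement from pose i to pose j), driving a square
odom_edges = [(0, 1, [10.2, 0.0]), (1, 2, [0.0, 10.3]),
              (2, 3, [-10.2, 0.0]), (3, 4, [0.0, -10.4])]
loop_edge = (4, 0, [0.0, 0.0])                 # the robot recognizes its starting place

def residuals(flat):
    x = flat.reshape(-1, 2)                    # 5 poses, 2D each
    res = [10.0 * x[0]]                        # strong prior anchoring pose 0 at the origin
    for i, j, z in odom_edges:
        res.append((x[j] - x[i]) - np.asarray(z))
    i, j, z = loop_edge
    res.append(5.0 * ((x[j] - x[i]) - np.asarray(z)))   # loop closures weighted more heavily
    return np.concatenate(res)

sol = least_squares(residuals, np.zeros(10)).x.reshape(-1, 2)
print(np.round(sol, 2))                        # the optimized trajectory (almost) closes the loop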
For large-scale localization and mapping,
metric-topological mapping constructs maps that support navigation between places that can be recognized perceptually (Blanco et al., 2007). A popular rep-
resentation is to use sub-maps that are metrically
consistent, and connect them with topological con-
straints. Generating such a metric-topological map
is based on the reconstruction of the robot path in a
(The remaining 16 pages are not included in this preview.)