Frontiers of Information Technology & Electronic Engineering, 2017 18(1):122-138
www.zju.edu.cn/jzus; engineering.cae.cn; www.springerlink.com
ISSN 2095-9184 (print); ISSN 2095-9230 (online)
E-mail: jzus@zju.edu.cn
A vision-centered multi-sensor fusing approach to self-localization and obstacle perception for robotic cars*

Jian-ru XUE†, Di WANG, Shao-yi DU, Di-xiao CUI, Yong HUANG, Nan-ning ZHENG
(Lab of Visual Cognitive Computing and Intelligent Vehicle, Xi'an Jiaotong University, Xi'an 710049, China)
†E-mail: jrxue@xjtu.edu.cn
Received Dec. 29, 2016; Revision accepted Jan. 8, 2017; Crosschecked Jan. 10, 2017

* Project supported by the National Key Program Project of China (No. 2016YFB1001004) and the National Natural Science Foundation of China (Nos. 91320301 and 61273252)
ORCID: Jian-ru XUE, http://orcid.org/0000-0002-4994-9343
© Zhejiang University and Springer-Verlag Berlin Heidelberg 2017
Abstract: Most state-of-the-art robotic cars' perception systems are quite different from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while the machine perception of traffic environments needs to fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for its autonomous driving, while an experienced human driver works well with dynamic traffic environments, in which machine perception could easily produce noisy perception results. In this paper, we propose a vision-centered multi-sensor fusing framework for a traffic environment perception approach to autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometrical and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework and address multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment mapping. The proposed framework has been tested extensively in actual urban scenes with our self-developed robotic cars for eight years. The empirical results validate its robustness and efficiency.

Key words: Visual perception; Self-localization; Mapping; Motion planning; Robotic car
http://dx.doi.org/10.1631/FITEE.1601873
CLC number: TP181
1 Introduction
Rapid integration of artificial intelligence with applications has provided some notable breakthroughs in recent years (Pan, 2016). The robotic car is one such disruptive technology that may enter real life in the near future, and it is also a good example of a hybrid artificial intelligence system (Zheng et al., 2017). A robotic car needs to answer three questions at all times while driving: where it is, where it is going, and how to get there. To answer these questions, the robotic car needs to integrate three coupled consequential tasks: self-localization, decision making and motion planning, and motion control. Among these tasks, the ability to understand the robot's surroundings lies at the core, and the robotic car's performance heavily depends on the accuracy and reliability of its environment perception technologies, including self-localization and perception of obstacles.
Almost all relevant information required for autonomous driving can be acquired through vision sensors. This includes, but goes well beyond, lane geometry, drivable road segments, traffic signs, traffic lights, obstacle positions and velocities, and obstacle classes. However, exploiting this potential of vision sensors poses more difficulties than using LIDAR, radar, or ultrasonic sensors. The sensing data of LIDAR, radar, or ultrasonic sensors directly involve distance and/or velocity, i.e., information necessary for vehicle control. Nevertheless, camera-based driver assistance systems have entered the automotive market (Ulrich, 2016). However, computer vision based approaches to autonomous driving in urban environments remain an open research issue, since state-of-the-art vision technologies are still incapable of providing the high rate of success demanded by autonomous driving. Fortunately, recent approaches to scene understanding using deep learning technologies suggest a promising future for vision-centered approaches to robotic cars (Hoiem et al., 2015).
In this paper, we propose a vision-centered multi-sensor fusing framework for the robotic car's perception problem, which fuses camera, LIDAR, and GIS information consistently via geometrical constraints and driving knowledge. The framework consists of self-localization and processing of obstacles surrounding the robotic car. At first glance these two problems seem to have been well studied, and early works in this field were quickly rewarded with promising results. However, the large variety of scenarios and the high rate of success demanded by autonomous driving have kept this research alive. Specifically, integrating computer vision algorithms within a compact and consistent machine perception system is still a challenging problem in the field of robotic cars.
Self-localization is the first of the aforementioned critical challenges. Accurately and efficiently determining its position at all times is one of the fundamental tasks essential for a robotic car to interact with the environment. Different accuracies and update frequencies of self-localization are required by different applications of a robotic car. Taking parking as an example, the accuracy needed is at the centimeter level, and the update frequency is about 100 Hz. In contrast, for routing and guidance, the required accuracy is reduced to the 10–100 m level and the update frequency is about 0.01 Hz.

To address the critical problems of GPS measurements, namely low accuracy and susceptibility to signal loss and degradation, the map-based method has become one of the most popular methods for robotic cars, in which a map is used to improve upon GPS measurements and to fill in when signals are unavailable or degraded. Along this line of map-based localization, an ideal map should provide not only a geometrical representation of the traffic environment, but also some kind of sensor-based description of the environment to alleviate the difficulty of self-localization as well as of motion planning (Fuentes-Pacheco et al., 2015). However, traditional road maps made for human drivers cannot be used directly by a robotic car, since such a map is composed of evenly sampled spatial points connected via polylines, with a low accuracy of about 5–20 m, especially in urban areas. Inevitably, building a high-definition map becomes one of the core competencies of robotic cars.
Mapping approaches build geometric representations of environments. They adopt sensor-based environment description models, which integrate visual and geometric features and have been designed in conjunction with Bayesian filters so that the sensor-based description can be updated over time (Douillard et al., 2009). Mapping for robotic cars through local perception information is a challenging problem for a number of reasons. Firstly, maps are defined over a continuous space; the solution space of map estimation has infinitely many dimensions. Secondly, learning a map is a 'chicken-and-egg' problem, for which reason it is often referred to as the simultaneous localization and mapping (SLAM) or concurrent mapping and localization problem (Thrun and Leonard, 2008). More specifically, the difficulty of the mapping problem is increased by a collection of factors, including map size, noise in perception and actuation, perceptual ambiguity, and the alignment of spatial-temporal sensing data acquired from different types of sensors running asynchronously. With a given map of the traffic environment, self-localization becomes the problem of determining the robotic car's pose in the map.
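As a rough illustration of the Bayesian-filter update of a sensor-based description mentioned above, the sketch below shows a standard log-odds occupancy-grid update. This is a generic textbook formulation, not the authors' implementation; the inverse-sensor-model probabilities and the toy grid are assumed values for the example.

```python
import numpy as np

# Minimal log-odds occupancy-grid update. Cells hit by a range measurement
# become more likely occupied; cells the beam passed through become more
# likely free. Probabilities below are assumptions for illustration only.
P_HIT, P_MISS, P_PRIOR = 0.7, 0.4, 0.5
L_HIT = np.log(P_HIT / (1.0 - P_HIT))
L_MISS = np.log(P_MISS / (1.0 - P_MISS))
L_PRIOR = np.log(P_PRIOR / (1.0 - P_PRIOR))      # = 0 for a 0.5 prior

def update_grid(log_odds, hit_cells, free_cells):
    """Bayesian update of the grid in log-odds form."""
    for r, c in hit_cells:
        log_odds[r, c] += L_HIT - L_PRIOR
    for r, c in free_cells:
        log_odds[r, c] += L_MISS - L_PRIOR
    return log_odds

def occupancy(log_odds):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

# Toy usage: a 5x5 grid, one beam hitting cell (2, 4) after passing (2, 1..3).
grid = np.zeros((5, 5))
grid = update_grid(grid, hit_cells=[(2, 4)], free_cells=[(2, 1), (2, 2), (2, 3)])
print(np.round(occupancy(grid), 2))
```

Working in log-odds turns the Bayesian product of per-measurement likelihoods into a simple per-cell addition, which is why such sensor-based descriptions can be updated cheaply as new data arrive.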
Another critical problem is the need for high reliability in processing obstacles surrounding the robotic car, which guarantees the robotic car's safety when driving through real traffic. The robotic car needs to know the positions, sizes, and velocities of the surrounding obstacles to make high-level driving decisions. However, real-time detection and tracking algorithms relying on a single sensor often suffer from low accuracy and poor robustness when confronted with difficult, real-world data (Xue et al., 2008). For example, most state-of-the-art object trackers produce noisy estimates of obstacle velocities, and obstacles are difficult to track due to heavy occlusion and viewpoint changes in real traffic environments (Ess et al., 2010; Mertz et al., 2013). Additionally, without robust estimates of the velocities of nearby obstacles, merging onto or off highways or changing lanes becomes a formidable task. Similar issues will be encountered by any robot that must act autonomously in crowded, dynamic environments.
Fusing multiple LIDARs and radars is an essential module of a robotic car and of advanced driver assistance systems. With the improvement of vision-based object detection and tracking technologies, integrating vision technologies with LIDARs and radars makes it possible to make higher-level driving decisions than previous methods that fuse only LIDARs with radars.
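As a hedged illustration of how measurements from different sensors can be combined to obtain the obstacle velocity estimates discussed above, the sketch below runs a constant-velocity Kalman filter over a 1D obstacle position, fusing measurements from two sensors that differ only in their assumed noise levels. It is a toy, not the paper's fusion module; the noise figures and motion model are arbitrary assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter over a 1D obstacle state [position, velocity].
# Sensor noise values are illustrative assumptions, not calibrated figures.
R_LIDAR, R_CAMERA = 0.05**2, 0.30**2   # measurement variances (m^2), assumed
Q = np.diag([1e-4, 1e-2])              # process noise, assumed

def predict(x, P, dt):
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    return F @ x, F @ P @ F.T + Q

def correct(x, P, z, r):
    H = np.array([[1.0, 0.0]])         # both sensors measure position only
    S = H @ P @ H.T + r                # innovation covariance (1x1)
    K = P @ H.T / S                    # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy usage: obstacle moving at 2 m/s; measurements alternate between sensors.
x, P, t = np.array([0.0, 0.0]), np.eye(2), 0.0
for k in range(1, 11):
    dt, t = 0.1, t + 0.1
    x, P = predict(x, P, dt)
    truth = 2.0 * t
    if k % 2:                          # odd steps: LIDAR, even steps: camera
        z, r = truth + np.random.randn() * 0.05, R_LIDAR
    else:
        z, r = truth + np.random.randn() * 0.30, R_CAMERA
    x, P = correct(x, P, z, r)
print("estimated position/velocity:", np.round(x, 2))
```

The point of the example is only that the same recursive update accepts measurements from heterogeneous sensors, each weighted by its own noise model; a real system would also handle data association, asynchronous timestamps, and full 2D/3D obstacle states.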
In this paper, we summarize our 8-year effort on a vision-centered multi-sensor fusing approach to the aforementioned problems, as well as the lessons we have learned through the long-term and extensive test of the proposed approach with our robotic cars autonomously driving in real urban traffic (Xue et al., 2008; Du et al., 2010; Cui et al., 2014; 2016). Fig. 1 illustrates the timeline of the robotic cars we developed for the test of the vision-centered multi-sensor fusing approach.
Fig. 1 The timeline of the robotic cars (2009, 2010, 2011, 2012, 2013–2015, and 2016) for the long-term test of the vision-centered multi-sensor fusing approach
2 Related works
In this section, we present a brief survey of recent works on self-localization, and obstacle detection and tracking.
2.1 Self-localization
The core problem of self-localization is mapping, and mapping and localization were initially studied independently. More specifically, mapping for robotic cars is realized as a procedure of integrating local, partial, and sequential measurements of the car's surroundings into a consistent representation, which forms the basis for further navigation. The key to the integration lies in the joint alignment of spatial-temporal sensing data from the multiple heterogeneous sensors mounted on the robotic car, which is usually performed off-line.
With a given map, one needs to establish correspondences between the map and the local perception, and then determine the transformation between the map coordinate system and the local perception coordinate system based on these correspondences. Knowing this transformation enables the robotic car to locate the surrounding obstacles of interest within its own coordinate frame, a necessary prerequisite for the robotic car to navigate through the obstacles. This means that localization is actually a registration problem (Du et al., 2010), and can be solved via map-matching technologies (Cui et al., 2014). With its localization in the global map, the robot can obtain navigation information from the map. Additionally, the navigation information can be further used as a prior in verifying the local perception results, for the purpose of increasing the accuracy and reliability of the local perception (Cui et al., 2014; 2016).
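To make the "localization as registration" view concrete, the sketch below estimates the rigid transform between a local scan and the map by a standard SVD-based least-squares fit, under the simplifying assumption that point correspondences are already known. Registration methods such as ICP iterate exactly this step with a nearest-neighbour correspondence search; the example points and the ground-truth transform are made up for illustration.

```python
import numpy as np

def rigid_transform_2d(local_pts, map_pts):
    """Least-squares rigid transform (R, t) with map_pts ~ R @ local_pts + t.

    local_pts, map_pts : (N, 2) arrays of corresponding points.
    Correspondences are assumed known here; in practice they are found
    iteratively, e.g. by nearest-neighbour search inside an ICP loop.
    """
    mu_l = local_pts.mean(axis=0)
    mu_m = map_pts.mean(axis=0)
    H = (local_pts - mu_l).T @ (map_pts - mu_m)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_l
    return R, t

# Toy usage: local scan points and the same points expressed in the map frame.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
local = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
world = local @ R_true.T + t_true
R, t = rigid_transform_2d(local, world)
print("heading (deg):", np.round(np.rad2deg(np.arctan2(R[1, 0], R[0, 0])), 1))
print("translation  :", np.round(t, 2))
```

The recovered rotation and translation are exactly the vehicle pose in the map frame, which is why solving the registration problem and solving the localization problem amount to the same thing once a map is available.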
Mapping and localization eventually became jointly known as SLAM (Dissanayake et al., 2001). SLAM methods are able to reduce the accumulative drift relative to the initial position of the robotic car by using landmarks and jointly optimizing over all, or a selection of, poses and landmarks. Efficient optimization strategies using incremental sparse matrix factorization (Montemerlo et al., 2002) or relative structure representation (Grisetti et al., 2010) have been proposed to make these algorithms tractable and scalable. Thus, at a theoretical and conceptual level, SLAM is now considered a solved problem in the case that LIDARs are used to build 2D maps of small static indoor environments (Thrun and Leonard, 2008). Comprehensive surveys and tutorial papers on SLAM can be found in the literature (Durrant-Whyte and Bailey, 2006).
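The joint optimization over poses that reduces drift can be illustrated with a tiny pose-graph example. The sketch below optimizes five 2D positions given noisy relative-motion constraints and one loop closure, using a dense least-squares solver; it is illustrative only, not the incremental or relative formulations cited above, and the constraint values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Tiny pose graph: 2D positions p_0..p_4, relative-motion constraints from
# odometry plus one loop closure (p_4 -> p_0). Jointly optimizing all poses
# spreads the accumulated odometry error around the loop. Real systems use
# sparse/incremental solvers and full SE(2)/SE(3) poses.
edges = [                             # (i, j, measured displacement p_j - p_i)
    (0, 1, np.array([1.0, 0.0])),
    (1, 2, np.array([1.0, 0.1])),     # odometry slightly biased
    (2, 3, np.array([0.0, 1.0])),
    (3, 4, np.array([-1.0, 1.0])),
    (4, 0, np.array([-1.0, -2.0])),   # loop closure back to the start
]

def residuals(flat):
    p = flat.reshape(-1, 2)
    res = [p[0]]                           # anchor p_0 at the origin
    for i, j, meas in edges:
        res.append((p[j] - p[i]) - meas)   # disagreement with each constraint
    return np.concatenate(res)

init = np.zeros(5 * 2)                     # start all poses at the origin
sol = least_squares(residuals, init)
print(np.round(sol.x.reshape(-1, 2), 2))
```

Anchoring the first pose and minimizing all constraint residuals together is what allows the loop closure to redistribute the accumulated odometry error, which is the drift-reduction behaviour described above.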
For large-scale localization and mapping, metric-topological mapping constructs maps that support navigation between places that can be recognized perceptually (Blanco et al., 2007). A popular representation is to use sub-maps that are metrically consistent, and to connect them with topological constraints. Generating such a metric-topological map is based on the reconstruction of the robot path in a