IEEE ROBOTICS AND AUTOMATION LETTERS. PREPRINT VERSION. ACCEPTED APRIL, 2024
Pixel to Elevation: Learning to Predict Elevation Maps at Long
Range using Images for Autonomous Offroad Navigation
Chanyoung Chung¹,², Georgios Georgakis¹, Patrick Spieler¹, Curtis Padgett¹, Ali Agha², Shehryar Khattak¹
Abstract—Understanding terrain topology at long range is crucial for the success of off-road robotic missions, especially when navigating at high speeds. LiDAR sensors, which are currently heavily relied upon for geometric mapping, provide sparse measurements when mapping at greater distances. To address this challenge, we present a novel learning-based approach capable of predicting terrain elevation maps at long range in real time, using only onboard egocentric images. Our proposed method comprises three main elements. First, a transformer-based encoder is introduced that learns cross-view associations between the egocentric views and prior bird's-eye-view elevation map predictions. Second, an orientation-aware positional encoding is proposed to incorporate 3D vehicle pose information over complex unstructured terrain with multi-view visual image features. Lastly, a history-augmented learnable map embedding is proposed to achieve better temporal consistency between elevation map predictions and to facilitate downstream navigation tasks. We experimentally validate the applicability of our proposed approach for autonomous off-road robotic navigation in complex and unstructured terrain using real-world off-road driving data. Furthermore, the method is qualitatively and quantitatively compared against current state-of-the-art methods. Extensive field experiments demonstrate that our method surpasses baseline models in accurately predicting terrain elevation while effectively capturing the overall terrain topology at long range. Finally, ablation studies are conducted to highlight and understand the effect of key components of the proposed approach and to validate their suitability for improving off-road robotic navigation capabilities.
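As a rough illustration of two of the components named above, the sketch below shows a sinusoidal orientation-aware positional code for the vehicle's roll, pitch, and yaw, and a single-head cross-attention step in which bird's-eye-view map queries attend to image tokens. This is a generic NumPy sketch under assumed shapes and a standard sinusoidal/attention formulation, not the authors' implementation; all function names and dimensions are illustrative.

```python
import numpy as np

def sinusoidal_encoding(values, dim):
    """Map each scalar in `values` to a `dim`-dimensional sinusoidal code
    (dim must be even): half sine terms, half cosine terms."""
    freqs = 1.0 / (10000 ** (np.arange(0, dim, 2) / dim))
    angles = np.outer(np.asarray(values, dtype=float), freqs)   # (n, dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def orientation_aware_encoding(pose_rpy, dim):
    """Illustrative orientation-aware positional code: encode roll, pitch,
    and yaw separately, then sum into one dim-d vector that could be added
    to the image features of a view."""
    return sinusoidal_encoding(pose_rpy, dim).sum(axis=0)

def cross_view_attention(map_queries, image_tokens):
    """Single-head scaled dot-product cross-attention: each BEV map query
    aggregates image tokens weighted by similarity."""
    d = map_queries.shape[-1]
    scores = map_queries @ image_tokens.T / np.sqrt(d)          # (q, t)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)              # softmax rows
    return weights @ image_tokens                               # (q, d)
```

In a full model these pieces would be stacked inside transformer layers with learned projections; here the projections are omitted to keep the roles of the pose encoding and the cross-view association visible.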
I. INTRODUCTION
Autonomous off-road robotic navigation over unstructured and complex terrain is of considerable interest across a diverse set of robotic application domains, ranging from planetary exploration to agriculture and search and rescue. In these off-road robotic missions, in particular for high-speed (>10 m s⁻¹) traversal, on-board autonomy should be able to reason about the terrain geometry at long range (∼100 m) to efficiently plan trajectories that ensure both safe and optimal navigation.
Figure 1: Top row shows the robot navigating in an off-road natural environment during a field experiment conducted in Paso Robles, USA. The red box indicates the mounting position of the visual cameras used in this work, with the inset figure showing a zoomed-in view. Middle row shows the images taken by the left, front, and right cameras during an instance of the experiment. Bottom row shows the elevation map output of the proposed method and compares it to the ground truth provided by the USGS. Using only visual camera images as input, our method is able to reliably predict the elevation map up to a distance of 100 m to facilitate high-speed off-road robot navigation.

Manuscript received: December 14, 2023; Revised: February 9, 2024; Accepted: April 7, 2024.
This paper was recommended for publication by Editor Javier Civera upon evaluation of the Associate Editor and Reviewers' comments.
¹ NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA.
² Field AI, Mission Viejo, CA, USA.
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). This work was partially supported by the Defense Advanced Research Projects Agency (DARPA).
©2024. California Institute of Technology. Government sponsorship acknowledged. All rights reserved.
Digital Object Identifier (DOI): see top of this page.

Existing off-road autonomous systems primarily rely on LiDAR sensors to navigate environments, either utilizing maps provided by a Simultaneous Localization and Mapping (SLAM) system [1], [2] or explicitly building them in real time [3]. These maps are then processed by employing heuristics to estimate elevation maps within the mapped space and to understand the traversability of the surrounding area. However, despite the precision of LiDAR sensors in measuring depth, LiDAR-based approaches suffer from issues associated with the sparsity of LiDAR returns at longer distances. Consequently, this limited perceptual range results in sub-optimal mapping for downstream navigation tasks such as path planning and mission execution. Furthermore, the problem is exacerbated at higher speeds, as LiDAR sparsity coupled with larger robot motion between LiDAR scans results in fewer depth measurements per unit area, thus reducing the reliability of the constructed map.
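As a minimal illustration of the kind of elevation-map heuristic described above, the sketch below rasterizes a LiDAR-style 3D point cloud into a 2D grid by keeping the maximum height per cell; cells without returns stay empty, which is exactly the sparsity problem at range. This is a generic sketch, not the code of the cited systems; the function name, cell size, and max-height rule are assumptions.

```python
import numpy as np

def elevation_from_points(points, cell=0.5, extent=20.0):
    """Rasterize an (n, 3) array of x, y, z points into a square elevation
    grid covering [-extent, extent) in x and y with resolution `cell`.
    Each cell stores the maximum z observed; cells with no returns stay NaN,
    illustrating the sparsity of LiDAR-based maps at longer distances."""
    n = int(2 * extent / cell)
    grid = np.full((n, n), np.nan)
    ij = np.floor((points[:, :2] + extent) / cell).astype(int)
    keep = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)  # drop out-of-range points
    for (i, j), z in zip(ij[keep], points[keep, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid
```

With only a handful of returns per distant cell, most of the grid remains NaN, which is why the paper argues for predicting elevation directly from images instead of relying solely on such geometric binning.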
To overcome these limitations, recent works [4]–[7] have
arXiv:2401.17484v3 [cs.RO] 20 Apr 2024