COL 10(Suppl.), S20401(2012) CHINESE OPTICS LETTERS December 30, 2012
Method of pose estimation for UAV landing
Likui Zhuang¹, Yadong Han², Yanming Fan³, Yunfeng Cao¹,²*, Biao Wang², and Qin Zhang²

¹Academy of Frontier Science, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
²College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
³Shenyang Aircraft Design and Research Institute, AVIC, Shenyang 110035, China
*Corresponding author: cyfac@nuaa.edu.cn
Received May 22, 2012; accepted June 29, 2012; posted online September 23, 2012
In order to achieve the goal of the autonomous landing of fixed-wing unmanned aerial vehicles (UAVs), a
new method is put forward that uses the monocular camera on board to provide the information needed
for landing. It is not necessary to install additional equipment at the airport. The vision-based method
only makes use of the two edge lines on both sides of the main runway and the front edge line of the
airport, without using the horizon. When the runway width is known, the method can produce the
attitude and position parameters for landing. The results of the hardware-in-the-loop simulation show
that the proposed method has better accuracy and faster computation speed.
OCIS codes: 040.1490, 080.1235, 080.2720, 200.4560.
doi: 10.3788/COL201210.S20401.
In recent years, the study of the autonomous landing of unmanned aerial vehicles (UAVs) has been a
research focus, especially vision-based landing. This approach has two attractive features. Firstly, video
cameras/sensors are passive sensors; hence, they cannot easily be detected or artificially interfered with.
Secondly, most UAVs are already equipped with video cameras (e.g., for ground reconnaissance); hence,
vision-based navigation does not require much additional hardware or payload. The key technology of
vision-based navigation is the measurement of the landing attitude and position of the UAV using the
runway images taken by the camera on the UAV.
Some work has already been done on the use of machine vision for the landing of UAVs. Sara et al.[1]
used a single camera to capture the runway edge lines and the horizon for pose estimation. If the horizon
cannot be captured, the attitude parameters can be obtained from the inertial navigation system (INS).
Liu et al.[2] used three points on the runway and the horizon to estimate the attitude and position
parameters. However, two cameras were installed in the wings, so the vibration of the UAV had a strong
influence on the precision. The extraction of the feature points was not stable, and the image processing
system had to have high computing power. These studies show that there are many methods to calculate
the position of the UAV relative to the runway.
In order to estimate the attitude and position of the UAV, the features of the runway image must be
extracted first. These features include points and lines. Feature points can be used to obtain high-precision
attitude and position parameters, but feature point extraction is very sensitive to noise. The image
resolution must be high; otherwise, the probability of feature point mismatch increases greatly and leads
to a failure of the attitude and position estimation. However, these disadvantages can be overcome by the
use of feature lines. In general, a military airport has two parallel runways: one is the main runway, the
other is the auxiliary runway. Aprons connect the ends of the main runway and the auxiliary runway,
which gives the military airport a rectangular structure. Thus, the runway image has obvious edge line
features. For these reasons, many researchers use estimation methods based on feature lines.
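As a rough illustration of this kind of line-based feature extraction (not the extraction pipeline used in
this letter), the Python sketch below finds straight edge candidates in a runway image with a Canny
detector and the probabilistic Hough transform from OpenCV; the file name and thresholds are
placeholders chosen for illustration only.

```python
import cv2
import numpy as np

# Load a runway image (placeholder file name) and find straight
# edge candidates with Canny + probabilistic Hough transform.
img = cv2.imread("runway_frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=100, maxLineGap=10)

# Keep the longest segments as candidates for the runway edge lines;
# a real pipeline would add geometric consistency checks.
if segments is not None:
    longest = sorted(segments[:, 0, :],
                     key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]),
                     reverse=True)[:3]
    for x1, y1, x2, y2 in longest:
        print((x1, y1), "->", (x2, y2))
```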
In this letter, a new method is designed to address the main requirements of UAV autonomous landing:
the processing results must have high precision, and the processing must run in real time. The two edge
lines on both sides of the main runway and the front edge line of the airport are used to estimate the
attitude and position parameters.
In this letter, the pin-hole model is chosen as the camera imaging model. It is assumed that there is no
relative motion between the camera and the UAV. Before take-off, the intrinsic parameters and lens
distortion coefficients of the camera are calibrated offline.
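For reference, a minimal sketch of the pin-hole projection assumed here is given below. The intrinsic
matrix K, rotation R, and translation t are the generic pin-hole quantities; the numeric values are purely
illustrative and are not the calibration results from this letter.

```python
import numpy as np

def project_pinhole(K, R, t, X_w):
    """Project a 3-D world point X_w (3,) to pixel coordinates
    using the pin-hole model: x ~ K [R | t] X_w (homogeneous)."""
    X_c = R @ X_w + t              # world frame -> camera frame
    x = K @ X_c                    # camera frame -> image plane
    return x[:2] / x[2]            # perspective division

# Illustrative intrinsics (focal length in pixels, principal point);
# placeholder numbers, not the offline calibration from the paper.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # camera aligned with the world axes
t = np.array([0.0, 0.0, 100.0])    # camera 100 m from the scene

print(project_pinhole(K, R, t, np.array([10.0, 5.0, 0.0])))
```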
As shown in Fig. 1, it is assumed that the two edge lines on both sides of the main runway, L1 and L2,
and the front edge line of the airport, L3, are detected. The three lines have two intersections, P1 and P2.
The width of the main runway is W.
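One standard way to obtain the two intersections P1 and P2 from the detected lines is the
projective-geometry identity that the intersection of two image lines is the cross product of their
homogeneous coefficients. The sketch below, with placeholder coefficients, illustrates this identity; it is
not code from this letter.

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersection of two image lines given as homogeneous
    coefficients (a, b, c) with a*x + b*y + c = 0."""
    p = np.cross(l1, l2)           # the intersection point in homogeneous form
    if abs(p[2]) < 1e-9:           # lines are (nearly) parallel
        return None
    return p[:2] / p[2]            # back to inhomogeneous pixel coordinates

# Placeholder coefficients standing in for a detected runway edge L1
# and the front edge line L3.
L1 = np.array([1.0, -0.5, -100.0])
L3 = np.array([0.0,  1.0, -400.0])
P1 = line_intersection(L1, L3)
print(P1)                          # -> [300. 400.]
```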
World coordinate system (OW XW YW ZW): the middle
Fig. 1. Runway.