1348 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, VOL. 48, NO. 8, AUGUST 2018
Contour Primitives of Interest Extraction
Method for Microscopic Images and Its
Application on Pose Measurement
Fangbo Qin, Fei Shen, Dapeng Zhang, Xilong Liu, and De Xu, Senior Member, IEEE
Abstract—This paper proposes a suite of methods to real-
ize high precision pose measurement in 3-D Cartesian space
based on a multicamera microscopic vision system. Since it is
inefficient to develop a specific image algorithm for each kind
of object and the imaging condition might be unsatisfactory,
we propose a method of contour primitives of interest extraction, which allows flexible reconfiguration for novel object images and is robust under different imaging conditions. The
object is detected in a grayscale image based on a template of
contour primitives. Edges are extracted according to derivatives
along the normal vectors of these contour primitives. The posi-
tions and directional derivatives of these edges are used for
feature extraction and autofocus, respectively. The point fea-
tures and line features extracted from multiview images are
utilized to measure 3-D vectors and orientations, respectively,
based on image Jacobian matrices. Cameras’ linear motions
are considered in the imaging model, so that the measurement
range is expanded beyond the limitation of microscopes’ shal-
low depths of field. The affine epipolar constraint and focused
planes intersection constraint between cameras are applied to
improve the real-time performance of image feature extraction
and multicamera autofocus, respectively. A series of experiments
are conducted to verify the effectiveness of the proposed methods.
The root mean square errors of pose measurement are evaluated as 3 µm in position and 0.05° in orientation, while the measurement range is about 5000 µm in position and 20° in orientation.
Index Terms—Geometric constraint, image feature extraction,
microscopic vision, pose measurement, precision assembly.
I. INTRODUCTION
MICROSCOPIC vision systems have been widely
used for noncontact, real-time, and high precision
measurement of objects with millimeters or microme-
ters sizes, such as biological cells, microelectromechanical
systems (MEMSs), micro optical devices, microstructures,
etc. [1], [2]. A microscopic vision system mainly consists
Manuscript received June 22, 2016; accepted February 1, 2017. Date of
publication March 1, 2017; date of current version July 17, 2018. This work
was supported by the National Natural Science Foundation of China under
Grant 61227804, Grant 61421004, Grant 61503378, and Grant 61673383. This
paper was recommended by Associate Editor Z. Liu.
The authors are with the Research Center of Precision Sensing and Control,
Institute of Automation, Chinese Academy of Sciences, Beijing 100190,
China, and also with the School of Computer and Control Engineering,
University of Chinese Academy of Sciences, Beijing 101408, China (e-mail:
de.xu@ia.ac.cn).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TSMC.2017.2669219
of cameras, microscopes, light sources, and adjusting stages.
A microscope offers the advantages of high magnification and negligible distortion. A telecentric microscope, whose magnification remains constant as the object depth changes within a specific range, is preferred for high-accuracy measurement of 3-D objects [3]. With the advances of micro-scale technology, objects now have more precise and complex structures.
It remains challenging to improve the accuracy, flexibility,
and robustness of microscopic vision measurement. This paper
concerns the measurement of pose information, which can be
applied in precision assembly, structure inspection, and motion
monitoring.
Image feature extraction, i.e., transforming the source image into a set of informative features of interest, is an essential issue in microscopic vision measurement. The
feasibility, robustness, and accuracy of measurement highly
depend on image feature extraction [26], [27]. Low-level fea-
tures like edge and corner are detected according to local
pixels. Edge feature is popular in pose measurement. Canny
algorithm [4] detects single-pixel edges with nonmaximum
suppression, and eliminates noise edges by edge tracking with
hysteresis. A challenging problem is to extract the object-
related features that lie in an image at unknown poses. With
various object images to process, extraction methods allowing flexible reconfiguration for novel objects are preferred to those designed for specific tasks. Hough transform-based
methods are widely used for detection of line, circle, and even
arbitrary shape [5], [6]. Shark et al. [7] proposed the feature
matching method based on line segments. Both the Hough
transform-based and line segments-based methods rely on the
preprocessing step of edge extraction. However, microscopic images are prone to defocus, so edge extraction is not robust as a preliminary step. Grayscale template-based matching methods search for the object in a grayscale image [8].
The template contains a region of pixels and can describe
complex objects. However, grayscale template matching is
computationally expensive. The fast affine template matching (fast-match) algorithm accelerated the matching by random sampling and a branch-and-bound scheme [9]. The
shape-based matching (SBM) method was presented in [10].
It constructs a point set as the shape model by edge extraction
from template image, and searches for the object based on
the consistency of image gradient. SBM is robust against illu-
mination change and occlusion. However, the shape model in
SBM might involve edges that are produced by features of no
2168-2216 © 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
interest, such as texture. Although changes of magnification
and viewpoint are minor in microscopic vision, bad imaging
conditions, such as overlap, illumination change, occlusion,
blur, and weak feature, often cause failures of image feature
extraction. In addition, a microscopic image is large in size, and algorithms need to be fast for real-time operation.
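To make the Canny hysteresis step mentioned above concrete, the following is a minimal sketch on a 1-D edge-strength profile; real Canny implementations track 2-D edges with 8-connectivity, and the threshold values and data here are purely illustrative.

```python
def hysteresis(strength, low, high):
    """Keep weak responses (>= low) only if connected to a strong one (>= high).

    1-D simplification of Canny's edge tracking with hysteresis.
    """
    keep = [s >= high for s in strength]   # seed with strong edges
    changed = True
    while changed:                          # grow into adjacent weak responses
        changed = False
        for i, s in enumerate(strength):
            if keep[i] or s < low:
                continue
            if (i > 0 and keep[i - 1]) or (i + 1 < len(strength) and keep[i + 1]):
                keep[i] = True
                changed = True
    return keep

# The weak response at index 1 survives because it touches the strong
# edge at index 2; the isolated weak response at index 4 is rejected.
print(hysteresis([0.1, 0.4, 0.9, 0.2, 0.5, 0.1], low=0.3, high=0.8))
```

This two-threshold scheme is what suppresses isolated noise responses while keeping weak but connected edge segments.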
Autofocusing is necessary for a microscopic camera, which acquires clear images by automatically adjusting its object distance. It is implemented by translating the camera along a linear motion stage to search for the position where the focus measure (FM) is maximum.
versus camera position curve is expected to have a sharp
global peak at the exact best focus position and no false local
peaks at other positions. The mountain-climbing algorithm is a widely adopted search strategy with high efficiency [11].
Sun et al. [12] presented a systematic investigation of 18 FM
algorithms, among which the normalized variance algorithm
provided the best overall performance. However, autofocus might not be accurate for a 3-D object, because its visible parts lie at different depths. Only the part of interest needs to be focused on, which depends on the selection of the image region.
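The two ingredients just described, a focus measure and the mountain-climbing search, can be sketched as follows; this is a simplified illustration in which the image, stage positions, and simulated FM curve are hypothetical.

```python
def normalized_variance(image):
    """Normalized variance FM: intensity variance divided by mean intensity.

    A sharper image has higher contrast and therefore a larger value.
    """
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return var / mean

def mountain_climb(positions, fm_at):
    """Advance along stage positions while the FM keeps rising."""
    best_pos, best_fm = positions[0], fm_at(positions[0])
    for pos in positions[1:]:
        fm = fm_at(pos)
        if fm <= best_fm:
            break              # FM dropped: the previous position was the peak
        best_pos, best_fm = pos, fm
    return best_pos

# Simulated single-peak FM curve with its maximum at position 30.
curve = {p: 100 - abs(p - 30) for p in range(0, 60, 10)}
print(mountain_climb(sorted(curve), curve.get))   # → 30
```

Note that the sketch assumes a unimodal FM curve; a false local peak, as warned above, would stop the climb prematurely.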
Unlike a conventional lens, a microscope has a narrow field of view and a shallow depth of field, which pose challenges to microscopic vision. A monocular microscopic camera is suitable for planar measurement [13]–[15]. Wang et al. [16] presented an automated 3-D micro-grasping method, in which the microscopic
camera offered 2-D position feedback. The 3-D information of an object can be recovered from multiview images using stereo
vision [17], [18]. Yamamoto and Sano [19] used a stereo-
scopic microscope to measure the needle tip’s 3-D position
in a micromanipulation system. In [20], a multicamera visual
tracking system was developed for microassembly, which pro-
vided six degree-of-freedom (DOF) pose feedback of MEMS
components in real time. It relied on the CAD model and
initial value estimation. Its position measurement accuracy
decreased with the object’s motion distance. Shen et al. [21]
and Liu et al. [22] implemented pose alignment control based
on visual feedbacks of multiple microscopic cameras in the
3-D precision assembly tasks. The objects’ relative poses were
measured using image Jacobian matrices and image features.
However, their relative position measurements were limited to the optical depth of field, i.e., a submillimeter range. In
many cases, the relative depth between two objects exceeds
the optical depth of field, so that the camera needs to be
translated to focus on them sequentially. The measurement
accuracy cannot be guaranteed when the camera motion is
not considered. In addition, the relative attitudes between
objects were obtained in the joint space. However, some
tasks require attitudes that are uniformly represented in 3-D
Cartesian space.
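To illustrate the image-Jacobian-based measurement discussed above, the sketch below stacks the 2×3 image Jacobians of two hypothetical telecentric cameras and recovers a 3-D translation by linear least squares; the Jacobian entries (magnification 2 px/µm, axis-aligned views) are assumptions for illustration, not the paper's calibration.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        pivot = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[pivot] = M[pivot], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def recover_translation(jacobians, image_shifts):
    """Least-squares 3-D translation from stacked 2x3 image Jacobians.

    Each camera contributes two equations, J_i @ dX = dp_i; stacking all
    cameras gives an overdetermined system solved via the normal
    equations (J^T J) dX = J^T dp.
    """
    rows, rhs = [], []
    for J, dp in zip(jacobians, image_shifts):
        rows += J
        rhs += dp
    JtJ = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Jtb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(3)]
    return solve3(JtJ, Jtb)

# Hypothetical setup: camera 1 images the x-y plane, camera 2 the x-z
# plane, both at 2 px/µm (telecentric, so the Jacobians are constant).
# A true translation of (1, 2, 3) µm yields image shifts (2, 4) and (2, 6) px.
J1 = [[2, 0, 0], [0, 2, 0]]
J2 = [[2, 0, 0], [0, 0, 2]]
print(recover_translation([J1, J2], [[2, 4], [2, 6]]))   # → [1.0, 2.0, 3.0]
```

Because a telecentric Jacobian does not depend on depth, the same stacked system remains valid as the cameras translate along their stages, provided the commanded stage motion is added back to the measured vector.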
The motivation of this paper is to design a microscopic
vision system for high precision pose measurement in 3-D
Cartesian space. A method of contour primitives of interest
extraction (CPIE) is proposed to obtain object-related features
from grayscale image in real time. The object detection and
edge extraction are based on the object’s contour primitives
of interest, which do not involve parts that are irrelevant
to measurement. The method can output image features
and contour sharpness for pose measurement and autofocus,
respectively. Given a novel object, the user can reconfigure the method using a drawing interface instead of reprogramming. The 3-D vector and orientation measurement methods
are proposed, which are based on the image Jacobian matrices
of multiple telecentric cameras. Camera motions along linear
stages are considered to expand the measurement range beyond
the limitation of depth of field. The affine epipolar constraint
and focused planes intersection constraint between cameras
are applied to reduce the time cost of corresponding feature
extraction and multicamera autofocus, respectively. The effec-
tiveness of the proposed methods is verified by the experiments
on a precision assembly system. The main contributions of this
paper are as follows.
1) A robust feature extraction method allowing reconfigu-
ration for novel object images is proposed.
2) The pose measurement is realized in a range larger than
the depth of field.
3) The geometric constraints between cameras are utilized
to improve the measurement efficiency.
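As a minimal sketch of the edge-extraction idea underlying contribution 1 (an edge is located by the intensity derivative sampled along a contour primitive's normal), the code below uses a caller-supplied `sample` function as a hypothetical intensity lookup; a real implementation would interpolate sub-pixel values. It returns both an edge position and a sharpness value, mirroring the two outputs the CPIE method feeds to measurement and autofocus.

```python
def edge_along_normal(sample, origin, normal, half_len=5):
    """Locate an edge by the extremal 1-D derivative along a normal vector.

    sample(x, y) -> intensity; origin is a point on the contour primitive
    and normal its unit normal. Returns (edge_point, sharpness), where the
    sharpness (derivative magnitude) can also serve as a focus measure.
    """
    ox, oy = origin
    nx, ny = normal
    # sample an intensity profile along the normal direction
    profile = [sample(ox + t * nx, oy + t * ny)
               for t in range(-half_len, half_len + 1)]
    # central-difference derivative of the profile
    deriv = [(profile[i + 1] - profile[i - 1]) / 2.0
             for i in range(1, len(profile) - 1)]
    k = max(range(len(deriv)), key=lambda i: abs(deriv[i]))
    t = k + 1 - half_len        # profile offset of the extremal derivative
    return (ox + t * nx, oy + t * ny), abs(deriv[k])

# Synthetic vertical step edge at x = 3, probed along the x-axis normal.
point, sharpness = edge_along_normal(lambda x, y: 0.0 if x < 3 else 1.0,
                                     origin=(0.0, 0.0), normal=(1.0, 0.0))
print(point, sharpness)
```

Restricting the search to a short segment along each primitive's normal is what keeps the extraction fast and indifferent to texture edges elsewhere in the image.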
The remainder of this paper is organized as follows.
Section II is the system overview. The CPIE method is
proposed in Section III. Section IV describes the imaging
model and the pose measurement methods. The geometric
constraints between cameras and their applications are given
in Section V. Section VI provides the experimental results.
Finally, this paper is concluded in Section VII.
II. SYSTEM OVERVIEW
The precision assembly system consists of three telecen-
tric microscopic cameras, three linear motion stages, and two
manipulators, as shown in Fig. 1(a). Each camera is mounted
on a linear motion stage, whose translation axis is approxi-
mately parallel to the camera’s optical axis. The linear stage
is used to adjust the camera’s object distance, so that fea-
tures at different depths can be clearly imaged sequentially.
A ring light and a backlight are installed for each camera, so that either a surface image or a silhouette image can be acquired. Manipulator 1 has three translational DOFs. Manipulator 2 has three rotational DOFs and one vertical translational DOF.
The camera coordinates {C1}, {C2}, and {C3} are established at the top left corners of the charge-coupled devices of cameras 1, 2, and 3, respectively. Their z_c axes are parallel to the cameras' optical axes and point to the scene. Their x_c axes correspond to the horizontal axes of the image coordinates. The world coordinates {W} are established to be identical with the manipulator 1 coordinates, whose origin is at manipulator 1's base. The manipulator 2 coordinates are established at manipulator 2's base, whose axes are parallel to those of {W}.
The block diagram of the microscopic vision system is
given in Fig. 1(b). It consists of autofocusing, image cap-
ture, feature extraction, and pose measurement modules. The
autofocusing module is implemented via mountain-climbing
search according to the contour sharpness given by the feature