# Grasp Pose Detection (GPD)
* [Author's website](http://www.ccs.neu.edu/home/atp/)
* [License](https://github.com/atenpas/gpd/blob/master/LICENSE.md)
* [ROS wrapper](https://github.com/atenpas/gpd_ros/)
Grasp Pose Detection (GPD) is a package to detect 6-DOF grasp poses (3-DOF
position and 3-DOF orientation) for a 2-finger robot hand (e.g., a parallel
jaw gripper) in 3D point clouds. GPD takes a point cloud as input and produces
pose estimates of viable grasps as output. The main strengths of GPD are:
- works for novel objects (no CAD models required for detection),
- works in dense clutter, and
- outputs 6-DOF grasp poses (enabling more than just top-down grasps).
<a href="http://www.youtube.com/watch?feature=player_embedded&v=kfe5bNt35ZI
" target="_blank"><img src="readme/ur5_video.jpg"
alt="UR5 demo" width="320" height="240" border="0" /></a>
GPD consists of two main steps: sampling a large number of grasp candidates, and classifying these candidates as viable grasps or not.
##### Example Input and Output
<img src="readme/clutter.png" height=170px/>
The reference for this package is:
[Grasp Pose Detection in Point Clouds](http://arxiv.org/abs/1706.09911).
## Table of Contents
1. [Requirements](#requirements)
1. [Installation](#install)
1. [Generate Grasps for a Point Cloud File](#pcd)
1. [Parameters](#parameters)
1. [Views](#views)
1. [Input Channels for Neural Network](#cnn_channels)
1. [CNN Frameworks](#cnn_frameworks)
1. [Network Training](#net_train)
1. [Grasp Image](#descriptor)
1. [References](#References)
1. [Troubleshooting](#troubleshooting)
<a name="requirements"></a>
## 1) Requirements
1. [PCL 1.9 or newer](http://pointclouds.org/)
2. [Eigen 3.0 or newer](https://eigen.tuxfamily.org)
3. [OpenCV 3.4 or newer](https://opencv.org)
<a name="install"></a>
## 2) Installation
The following instructions have been tested on **Ubuntu 16.04**. Similar
instructions should work for other Linux distributions.
1. Install [PCL](http://pointclouds.org/) and
[Eigen](https://eigen.tuxfamily.org). If you have ROS Indigo or Kinetic
installed, you should be good to go.
2. Install OpenCV 3.4 ([tutorial](https://www.python36.com/how-to-install-opencv340-on-ubuntu1604/)).
3. Clone the repository into some folder:
```
git clone https://github.com/atenpas/gpd
```
4. Build the package:
```
cd gpd
mkdir build && cd build
cmake ..
make -j
```
You can optionally install GPD with `sudo make install` so that it can be used by other projects as a shared library.
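If you install GPD, another CMake project can link against the shared library roughly as follows. This is a sketch that assumes the default `/usr/local` install prefix and the library name `gpd`; the target name `my_grasp_app` is hypothetical, so adjust paths and names for your setup:

```cmake
# Sketch: using an installed GPD as a shared library in another project
find_library(GPD_LIB NAMES gpd PATHS /usr/local/lib)
find_path(GPD_INCLUDE_DIR NAMES gpd/grasp_detector.h PATHS /usr/local/include)

add_executable(my_grasp_app main.cpp)
target_include_directories(my_grasp_app PRIVATE ${GPD_INCLUDE_DIR})
target_link_libraries(my_grasp_app ${GPD_LIB})
```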
If building the package does not work, try modifying the compiler flags (`CMAKE_CXX_FLAGS`) in CMakeLists.txt.
<a name="pcd"></a>
## 3) Generate Grasps for a Point Cloud File
Run GPD on a point cloud file (PCD or PLY):
```
./detect_grasps ../cfg/eigen_params.cfg ../tutorials/krylon.pcd
```
The output should look similar to the screenshot shown below. The window is the PCL viewer. You can press [q] to close the window and [h] to see a list of other commands.
<img src="readme/file.png" alt="" width="30%" border="0" />
Below is a visualization of the convention that GPD uses for the grasp pose (position and orientation) of a grasp. The grasp position is indicated by the orange cross and the orientation by the colored arrows.
<img src="readme/hand_frame.png" alt="" width="30%" border="0" />
<a name="parameters"></a>
## 4) Parameters
Brief explanations of parameters are given in [cfg/eigen_params.cfg](cfg/eigen_params.cfg).
The two parameters that you typically want to play with to **improve the
number of grasps found** are *workspace* and *num_samples*. The first defines the
volume of space in which to search for grasps as a cuboid of dimensions [minX,
maxX, minY, maxY, minZ, maxZ], centered at the origin of the point cloud frame.
The second is the number of samples that are drawn from the point cloud to
detect grasps. You should set the workspace as small as possible and the number
of samples as large as possible.
Most of the code is parallelized. To **improve runtime**, set *num_threads* to
the number of (physical) CPU cores that your computer has available.
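As a sketch, the relevant lines in a config file might look like this (the values are illustrative, not recommendations; see [cfg/eigen_params.cfg](cfg/eigen_params.cfg) for the actual defaults):

```
# Search region [minX, maxX, minY, maxY, minZ, maxZ] in the point cloud frame
workspace = -1.0 1.0 -1.0 1.0 -1.0 1.0
# Number of samples drawn from the cloud; more samples find more grasps
num_samples = 500
# Set to the number of physical CPU cores for best runtime
num_threads = 4
```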
<a name="views"></a>
## 5) Views
![rviz screenshot](readme/views.png "Single View and Two Views")
You can use this package with a single or with two depth sensors. The package
comes with Caffe model files for both. You can find these files in
*models/caffe/15channels*. For a single sensor, use
*single_view_15_channels.caffemodel* and for two depth sensors, use
*two_views_15_channels_[angle]*. The *[angle]* is the angle between the two
sensor views, as illustrated in the picture below. In the two-views setting, you
want to register the two point clouds together before sending them to GPD.
Providing the camera position in the configuration file (*.cfg) is important
because it allows PCL to orient the estimated surface normals correctly (they
should point toward the camera). Alternatively, using the
[ROS wrapper](https://github.com/atenpas/gpd_ros/), multiple camera positions
can be provided.
![rviz screenshot](readme/view_angle.png "Angle Between Sensor Views")
To switch between one and two sensor views, change the parameter `weight_file`
in your config file.
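For example, switching from a single view to two views at a 53-degree angle is a one-line change (paths assume the model files live in *models/caffe/15channels*):

```
# Single sensor view:
weight_file = models/caffe/15channels/single_view_15_channels.caffemodel
# Two sensor views, 53 degrees apart:
# weight_file = models/caffe/15channels/two_views_15_channels_53_deg.caffemodel
```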
<a name="cnn_channels"></a>
## 6) Input Channels for Neural Network
The package comes with weight files for two different input representations for
the neural network that is used to decide if a grasp is viable or not: 3 or 15
channels. The default is 15 channels; the 3-channel representation runs faster
at the cost of some grasp quality. For more details, please see
the references below.
<a name="cnn_frameworks"></a>
## 7) CNN Frameworks
GPD comes with a number of different classifier frameworks that
exploit different hardware and have different dependencies. Switching
between the frameworks requires running CMake with additional arguments.
For example, to use the OpenVino framework:
```
cmake .. -DUSE_OPENVINO=ON
```
You can use `ccmake` to check out all possible CMake options.
GPD supports the following three frameworks:
1. [OpenVino](https://software.intel.com/en-us/openvino-toolkit): [installation instructions](https://github.com/opencv/dldt/blob/2018/inference-engine/README.md) for open source version
(CPUs, GPUs, FPGAs from Intel)
1. [Caffe](https://caffe.berkeleyvision.org/) (GPUs from Nvidia or CPUs)
1. Custom LeNet implementation using the Eigen library (CPU)
Additional classifiers can be added by sub-classing the `classifier` interface.
##### OpenVINO
OpenVINO is **recommended for speed**. To use OpenVINO, you need to run the following command before compiling GPD.
```
export InferenceEngine_DIR=/path/to/dldt/inference-engine/build/
```
<a name="net_train"></a>
## 8) Network Training
To create training data with the C++ code, you need to install [OpenCV 3.4 Contribs](https://www.python36.com/how-to-install-opencv340-on-ubuntu1604/).
Next, you need to compile GPD with the `BUILD_DATA_GENERATION` flag enabled, like this:
```
cd gpd
mkdir build && cd build
cmake .. -DBUILD_DATA_GENERATION=ON
make -j
```
There are four steps to train a network to predict grasp poses. First, we need to create grasp images.
```
./generate_data ../cfg/generate_data.cfg
```
You should modify `generate_data.cfg` according to your needs.
Next, you need to resize the created databases to `train_offset` and `test_offset` (see the terminal output of `generate_data`). For example, to resize the training set, use the following commands with `size` set to the value of `train_offset`.
```
cd pytorch
python reshape_hdf5.py pathToTrainingSet.h5 out.h5 size
```
The third step is to train a neural network. The easiest way to train the network is with the existing code, which requires the **PyTorch** framework. To train a network, use the following commands.
```
cd pytorch
python train_net3.py pathToTrainingSet.h5 pathToTestSet.h5 num_channels
```
The fourth step is to convert the trained model into a format that GPD can load at detection time, and then point the `weight_file` parameter in your config file at the converted model.