# LiDAR-Camera Calibration using 3D-3D Point correspondences
[Ankit Dhall](https://ankitdhall.github.io/ "Ankit Dhall"), [Kunal Chelani](http://www.chalmers.se/en/Staff/Pages/chelani.aspx "Kunal Chelani"), Vishnu Radhakrishnan, KM Krishna
![ROS Noetic](https://github.com/ankitdhall/lidar_camera_calibration/actions/workflows/noetic.yml/badge.svg)
![ROS Melodic](https://github.com/ankitdhall/lidar_camera_calibration/actions/workflows/melodic.yml/badge.svg)
![ROS Kinetic](https://github.com/ankitdhall/lidar_camera_calibration/actions/workflows/kinetic.yml/badge.svg)
![ROS2 Humble](https://github.com/ankitdhall/lidar_camera_calibration/actions/workflows/humble.yml/badge.svg)
Did you find this package useful and would like to contribute? :smile:
See how you can contribute and make this package better for future users. Go to [Contributing section](#contributing). :hugs:
---
## ROS package to calibrate a camera and a LiDAR.
![alt text](images/pcl.png "Pointcloud of the setup")
The package is used to calibrate a LiDAR (configured to support Hesai and Velodyne hardware) with a camera (works for both monocular and stereo).
The package finds a rotation and translation that transform all the points in the LiDAR frame to the (monocular) camera frame. Please see [Usage](#usage) for a video tutorial. The `lidar_camera_calibration/pointcloud_fusion` provides a script to fuse point clouds obtained from two stereo cameras, both of which were extrinsically calibrated using a LiDAR and `lidar_camera_calibration`. We demonstrate the accuracy of the proposed pipeline by fusing point clouds, with near-perfect alignment, from multiple cameras placed at various positions. See [Fusion using `lidar_camera_calibration`](#fusion-using-lidar_camera_calibration) for results of the point cloud fusion (videos).
For more details please refer to our [paper](http://arxiv.org/abs/1705.09785).
### Citing `lidar_camera_calibration`
Please cite our work if `lidar_camera_calibration` and our approach helps your research.
```
@article{2017arXiv170509785D,
author = {{Dhall}, A. and {Chelani}, K. and {Radhakrishnan}, V. and {Krishna}, K.~M.},
title = "{LiDAR-Camera Calibration using 3D-3D Point correspondences}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1705.09785},
primaryClass = "cs.RO",
keywords = {Computer Science - Robotics, Computer Science - Computer Vision and Pattern Recognition},
year = 2017,
month = may
}
```
## Contents
1. [Setup and Installation](#setup-and-installation) :hammer_and_wrench:
2. [Contributing](#contributing) :hugs:
3. [Getting Started](#getting-started) :zap:
4. [Usage](#usage) :beginner:
5. [Results and point cloud fusion using `lidar_camera_calibration`](#fusion-using-lidar_camera_calibration) :checkered_flag:
## Setup and Installation
Please follow the installation instructions for your Ubuntu distribution **[here](https://github.com/ankitdhall/lidar_camera_calibration/wiki/Welcome-to-%60lidar_camera_calibration%60-Wiki!)** on the Wiki.
## Contributing
This is an open-source project, so your contributions matter! If you would like to contribute and improve this project, consider submitting a pull request.
That way future users can find this package as useful as you did.
Here is a non-exhaustive list of features that can be a good starting point:
- [x] Iterative process with ~~weighted~~ average over multiple runs
- [x] Passing Workflows for Kinetic, Melodic and Noetic
- [x] Hesai and Velodyne LiDAR options (see [Getting Started](#getting-started))
- [ ] Integrate LiDAR hardware from other manufacturers
- [ ] Automate process of marking line-segments
- [ ] Github Workflow with functional test on dummy data
- [ ] Support for upcoming Linux Distros
- [ ] Support for running the package in ROS2
- [ ] Tests to improve the quality of the project
## Getting Started
<img src="images/setup_view1.jpg" width="432"/> <img src="images/setup_view2.jpg" width="432"/>
There are a couple of configuration files that need to be specified in order to calibrate the camera and the LiDAR. The config files are available in the `lidar_camera_calibration/conf` directory. The `find_transform.launch` file is available in the `lidar_camera_calibration/launch` directory.
### config_file.txt
>1280 720
>-2.5 2.5
>-4.0 4.0
>0.0 2.5
>0.05
>2
>0
>611.651245 0.0 642.388357 0.0
>0.0 688.443726 365.971718 0.0
>0.0 0.0 1.0 0.0
>100
>1.57 -1.57 0.0
>0
The file contains specifications about the following:
>image_width image_height
>x- x+
>y- y+
>z- z+
>cloud_intensity_threshold
>number_of_markers
>use_camera_info_topic?
>fx 0 cx 0
>0 fy cy 0
>0 0 1 0
>MAX_ITERS
>initial_rot_x initial_rot_y initial_rot_z
>lidar_type
`x-` and `x+`, `y-` and `y+`, `z-` and `z+` are used to remove unwanted points in the cloud and are specified in meters. The filtered point cloud makes it easier to mark the board edges. The filtered point cloud contains all points
(x, y, z) such that,
x in [`x-`, `x+`]
y in [`y-`, `y+`]
z in [`z-`, `z+`]
The `cloud_intensity_threshold` is used to filter out points whose intensity is lower than the specified value. The default value of `0.05` works well in most cases. However, while marking, if there seem to be missing or too few points on the cardboard edges, tweaking this value might help.
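The axis limits and intensity threshold described above amount to a simple box-and-intensity filter. The sketch below illustrates the idea in Python/NumPy (the package itself does this in C++; the `(x, y, z, intensity)` column order and the `filter_cloud` helper are assumptions for this illustration):

```python
import numpy as np

# Illustrative sketch: keep only points inside the [x-, x+] x [y-, y+] x
# [z-, z+] box whose intensity meets the threshold, mirroring the values
# in the example config_file.txt above.
def filter_cloud(points, x_lim=(-2.5, 2.5), y_lim=(-4.0, 4.0),
                 z_lim=(0.0, 2.5), intensity_threshold=0.05):
    x, y, z, i = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    mask = ((x_lim[0] <= x) & (x <= x_lim[1]) &
            (y_lim[0] <= y) & (y <= y_lim[1]) &
            (z_lim[0] <= z) & (z <= z_lim[1]) &
            (i >= intensity_threshold))
    return points[mask]

cloud = np.array([[1.0, 0.0, 1.0, 0.90],   # inside the box, bright enough
                  [9.0, 0.0, 1.0, 0.90],   # outside the x range
                  [1.0, 0.0, 1.0, 0.01]])  # intensity below threshold
print(len(filter_cloud(cloud)))  # 1 point survives
```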
The `use_camera_info_topic?` is a boolean flag and takes the value `1` or `0`. (**Although you can set it to `1` and use the `camera_info` topic, we strongly recommend setting it to `0` and using the calibration file, unless you are certain that the `camera_info` topic's values are consistent with the calibration file, or differ from it only very slightly; otherwise, you will not get the result you want.**) The `find_transform.launch` node uses camera parameters to process the points and display them for marking. If you wish to read the parameters off the `camera_info` topic, set this to `1`; otherwise, the camera parameters explicitly provided in `config_file.txt` are used.
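The camera matrix rows in `config_file.txt` form the standard 3x4 pinhole projection. As a quick sanity check of the values you provide, a 3D point in the camera frame can be projected to pixel coordinates like this (an illustrative sketch only; the package performs the projection internally in C++):

```python
import numpy as np

# The 3x4 intrinsic matrix from the example config_file.txt above.
K = np.array([[611.651245, 0.0, 642.388357, 0.0],
              [0.0, 688.443726, 365.971718, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

def project(point_3d):
    """Project a 3D point in the camera frame to (u, v) pixel coordinates."""
    p = K @ np.append(point_3d, 1.0)  # homogeneous projection
    return p[:2] / p[2]               # divide by depth

u, v = project([0.5, 0.2, 2.0])  # ~ (795.3, 434.8), inside a 1280x720 image
```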
`MAX_ITERS` is the number of iterations you wish to run. The current pipeline assumes a static experimental setup: the boards are almost stationary, and the camera and the LiDAR are fixed. The node will ask the user to mark the line-segments (see the video tutorial on how to go about marking in [Usage](#usage)) for the first iteration only. Once the line-segments for each board have been marked, the algorithm runs for `MAX_ITERS` iterations, collecting live data and producing n=`MAX_ITERS` sets of rotations and translations in the form of 4x4 matrices. Since the marking is only done initially, the quadrilaterals should be drawn large enough that if the boards move slightly in the iterations that follow (say, due to a gentle breeze), the edge points still fall within their respective quadrilaterals. After running `MAX_ITERS` times, the node outputs an average translation vector (3x1) and an average rotation matrix (3x3). Averaging the translation vectors is trivial; the rotation matrices are converted to quaternions, averaged, and then converted back to a 3x3 rotation matrix.
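The averaging step described above can be sketched as follows. This is an illustration, not the package's C++ implementation: it uses SciPy's quaternion-based `Rotation.mean()` as a stand-in for the quaternion averaging, and the `average_transforms` helper name is made up for this sketch:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def average_transforms(rotation_matrices, translations):
    """Average per-iteration (R, t) estimates as described above."""
    # Rotations: convert matrices to quaternions internally, average,
    # and convert the mean back to a 3x3 rotation matrix.
    R_avg = Rotation.from_matrix(rotation_matrices).mean().as_matrix()
    # Translations: a plain element-wise mean.
    t_avg = np.mean(translations, axis=0)
    return R_avg, t_avg

# Two rotations of +/-10 degrees about z should average to the identity.
Rs = [Rotation.from_euler('z', 10, degrees=True).as_matrix(),
      Rotation.from_euler('z', -10, degrees=True).as_matrix()]
ts = [np.array([0.1, 0.0, 0.0]), np.array([0.3, 0.0, 0.0])]
R_avg, t_avg = average_transforms(Rs, ts)
```

Averaging quaternions rather than naively averaging matrix entries keeps the result a valid rotation.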
`initial_rot_x initial_rot_y initial_rot_z` is used to specify the initial orientation of the LiDAR with respect to the camera, in radians. The default values are for the case when both the LiDAR and the camera point forward. The final transformation estimated by the package accounts for this initial rotation.
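To build intuition for the default values `1.57 -1.57 0.0`: they roughly map the LiDAR's forward axis (+x) onto the camera's optical axis (+z). The sketch below assumes extrinsic rotations applied in x, y, z order; this axis convention is an assumption for the illustration, and the package applies its own convention internally:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Default initial rotation from config_file.txt, interpreted here as
# extrinsic rotations about fixed axes x, then y, then z (assumed order).
R_init = Rotation.from_euler('xyz', [1.57, -1.57, 0.0])

# The LiDAR's forward axis (+x) lands approximately on the camera's
# optical axis (+z); the small residual comes from 1.57 vs. pi/2.
forward_camera = R_init.apply([1.0, 0.0, 0.0])
```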
`lidar_type` is used to specify the lidar type. `0` for Velodyne; `1` for Hesai-Pandar40P.
The Hesai driver by default **does not** publish wall time as the time stamp. To fix this, modify the `lidarCallback` function in `/path/to/catkin_ws/src/HesaiLidar-ros/src/main.cc` as follows:
```
void lidarCallback(boost::shared_ptr<PPointCloud> cld, double timestamp)
{
  pcl_conversions::toPCL(ros::Time(timestamp), cld->header.stamp);
  sensor_msgs::PointCloud2 output;
  pcl::toROSMsg(*cld, output);
  // Overwrite the sensor timestamp with wall time before publishing
  output.header.stamp = ros::Time::now();
  lidarPublisher.publish(output);
}
```