# ROS Wrapper for Intel® RealSense™ Devices
These are packages for using Intel RealSense cameras (D400 series, SR300 camera and T265 Tracking Module) with ROS.
## Installation Instructions
The following instructions support ROS Indigo, on **Ubuntu 14.04**, and ROS Kinetic, on **Ubuntu 16.04**.
#### The simplest way to install on a clean machine is to follow the instructions in the [.travis.yml](https://github.com/intel-ros/realsense/blob/development/.travis.yml) file. It basically summarizes the elaborate instructions in the following 3 steps:
### Step 1: Install the latest Intel® RealSense™ SDK 2.0
- #### Install from [Debian Package](https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages) - In that case, treat yourself as a developer: make sure you follow the instructions to also install the librealsense2-dev and librealsense2-dkms packages.
#### OR
- #### Build from sources by downloading the latest [Intel® RealSense™ SDK 2.0](https://github.com/IntelRealSense/librealsense/releases/tag/v2.24.0) and following the instructions under [Linux Installation](https://github.com/IntelRealSense/librealsense/blob/master/doc/installation.md)
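For reference, a minimal sketch of the Debian-package route (assuming the Intel apt repository and public key have already been registered as described in the linked distribution page):
```bash
# Assumes the Intel RealSense apt repository has already been added
# (see the Debian Package link above for the repository and key setup).
sudo apt-get update
# Kernel modules, utilities (realsense-viewer) and development headers:
sudo apt-get install librealsense2-dkms librealsense2-utils librealsense2-dev
```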
### Step 2: Install the ROS distribution
- #### Install [ROS Kinetic](http://wiki.ros.org/kinetic/Installation/Ubuntu), on Ubuntu 16.04
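As a rough sketch (assuming the packages.ros.org apt source is already configured as described in the linked wiki page), the desktop-full variant can be installed with:
```bash
# Assumes the ROS apt repository is already set up (see the ROS Kinetic link above).
sudo apt-get update
sudo apt-get install ros-kinetic-desktop-full
# Initialize rosdep and make the ROS environment available in new shells.
sudo rosdep init
rosdep update
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
```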
### Step 3: Install Intel® RealSense™ ROS from Sources
- Create a [catkin](http://wiki.ros.org/catkin#Installing_catkin) workspace
```bash
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src/
```
- Clone the latest Intel® RealSense™ ROS from [here](https://github.com/intel-ros/realsense/releases) into 'catkin_ws/src/'
```bash
git clone https://github.com/IntelRealSense/realsense-ros.git
cd realsense-ros/
git checkout `git tag | sort -V | grep -P "^\d+\.\d+\.\d+" | tail -1`
cd ..
```
- Make sure all dependent packages are installed. You can check the .travis.yml file for reference.
- Specifically, make sure that the ros package *ddynamic_reconfigure* is installed. If *ddynamic_reconfigure* cannot be installed using APT, you may clone it into your workspace 'catkin_ws/src/' from [here](https://github.com/pal-robotics/ddynamic_reconfigure/tree/kinetic-devel) (Version 0.2.0)
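For reference, a hedged example of pulling in the dependencies (the `ros-kinetic-ddynamic-reconfigure` package name is assumed from the standard ros-<distro>-<package> naming convention; `rosdep` resolves whatever the packages in the workspace declare):
```bash
# Install ddynamic_reconfigure from APT.
sudo apt-get install ros-kinetic-ddynamic-reconfigure
# Alternatively, resolve all dependencies declared by the packages in the workspace:
rosdep install --from-paths ~/catkin_ws/src --ignore-src -r -y
```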
```bash
catkin_init_workspace
cd ..
catkin_make clean
catkin_make -DCATKIN_ENABLE_TESTING=False -DCMAKE_BUILD_TYPE=Release
catkin_make install
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
```
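As a quick sanity check (not part of the official instructions), you can verify that ROS now resolves the package:
```bash
# Should print the path of the wrapper package if the build succeeded.
source ~/catkin_ws/devel/setup.bash
rospack find realsense2_camera
```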
## Usage Instructions
### Start the camera node
To start the camera node in ROS:
```bash
roslaunch realsense2_camera rs_camera.launch
```
This will stream all camera sensors and publish on the appropriate ROS topics.
Other stream resolutions and frame rates can optionally be provided as parameters to the 'rs_camera.launch' file.
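For example (parameter names as documented under "Launch parameters" below; the values here are only illustrative):
```bash
# Request 640x480 @ 30fps for both the depth and color streams.
roslaunch realsense2_camera rs_camera.launch \
    depth_width:=640 depth_height:=480 depth_fps:=30 \
    color_width:=640 color_height:=480 color_fps:=30
```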
### Published Topics
The published topics differ according to the device and parameters.
After running the above command with a D435i attached, the following list of topics will be available (this is a partial list; for the full list, type `rostopic list`):
- /camera/color/camera_info
- /camera/color/image_raw
- /camera/depth/camera_info
- /camera/depth/image_rect_raw
- /camera/extrinsics/depth_to_color
- /camera/extrinsics/depth_to_infra1
- /camera/extrinsics/depth_to_infra2
- /camera/infra1/camera_info
- /camera/infra1/image_rect_raw
- /camera/infra2/camera_info
- /camera/infra2/image_rect_raw
- /camera/gyro/imu_info
- /camera/gyro/sample
- /camera/accel/imu_info
- /camera/accel/sample
The "/camera" prefix is the default and can be changed. Check the rs_multiple_devices.launch file for an example.
If using a D435 or D415, the gyro and accel topics won't be available. Likewise, a different set of topics is available when using the T265 (see below).
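The standard ROS introspection tools can be used to examine these topics, e.g.:
```bash
# List everything the node publishes, then inspect a couple of streams.
rostopic list
rostopic hz /camera/color/image_raw            # measured publishing rate
rostopic echo -n 1 /camera/color/camera_info   # intrinsics of the color stream
```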
### Launch parameters
The following parameters are available in the wrapper (usage examples follow the list):
- **serial_no**: will attach to the device with the given serial number. By default, attaches to an available RealSense device at random.
- **rosbag_filename**: Will publish topics from the given rosbag file.
- **initial_reset**: On occasion the device is not closed properly and, due to firmware issues, needs to be reset. If set to true, the device will reset prior to usage.
- **align_depth**: If set to true, will publish additional topics with all the images aligned to the depth image.<br/>
The topics are of the form: ```/camera/aligned_depth_to_color/image_raw``` etc.
- **filters**: any of the following options, separated by commas:<br/>
- ```colorizer```: will color the depth image. On the depth topic, an RGB image will be published instead of the 16-bit depth values.
- ```pointcloud```: will add a pointcloud topic `/camera/depth/color/points`. The texture of the pointcloud can be modified in rqt_reconfigure (see below) or using the parameters: `pointcloud_texture_stream` and `pointcloud_texture_index`. Run rqt_reconfigure to see available values for these parameters.<br/>
The depth FOV and the texture FOV are not identical. By default, the pointcloud is limited to the section of depth that contains the texture. You can get a full-depth pointcloud, with the regions beyond the texture colored with zeros, by setting `allow_no_texture_points` to true.
- The following filters have detailed descriptions in: https://github.com/IntelRealSense/librealsense/blob/master/doc/post-processing-filters.md
- ```disparity``` - convert depth to disparity before applying other filters and back.
- ```spatial``` - filter the depth image spatially.
- ```temporal``` - filter the depth image temporally.
- ```hole_filling``` - apply hole-filling filter.
- ```decimation``` - reduces depth scene complexity.
- **enable_sync**: gathers the closest frames from the different sensors (infrared, color and depth) to be sent with the same timestamp. This happens automatically when filters such as pointcloud are enabled.
- ***<stream_type>*_width**, ***<stream_type>*_height**, ***<stream_type>*_fps**: <stream_type> can be any of *infra, color, fisheye, depth, gyro, accel, pose*. Sets the required format of the device. If the specified combination of parameters is not supported by the device, the stream will not be published. Setting a value to 0 will choose the first format in the internal list (i.e. consistent between runs but not otherwise defined). Note: for gyro, accel and pose, only the _fps option is meaningful.
- **enable_*<stream_name>***: Choose whether to enable a specified stream or not. Default is true. <stream_name> can be any of *infra1, infra2, color, depth, fisheye, fisheye1, fisheye2, gyro, accel, pose*.
- **tf_prefix**: By default all frame IDs have the same prefix - `camera_`. This parameter allows changing it per camera.
- **base_frame_id**: defines the frame_id all static transformations refer to.
- **odom_frame_id**: defines the origin coordinate system in ROS convention (X-Forward, Y-Left, Z-Up). The pose topic defines the pose relative to that system.
- **All the rest of the frame_ids can be found in the template launch file: [nodelet.launch.xml](realsense2_camera/launch/includes/nodelet.launch.xml)**
- **unite_imu_method**: The D435i and T265 cameras have built-in IMU components which produce 2 unrelated streams: *gyro* - which shows angular velocity, and *accel* - which shows linear acceleration, each with its own frequency. By default, 2 corresponding topics are available, each with only the relevant fields of the sensor_msgs::Imu message filled out.
Setting *unite_imu_method* creates a new topic, *imu*, that replaces the default *gyro* and *accel* topics. Under the new topic, all the fields in the Imu message are filled out.
- **linear_interpolation**: Each message contains the last original value of item A, interpolated with the previous value of item A, combined with the last original value of item B at item B's most recent timestamp. Items A and B are accel and gyro interchangeably, according to which type most recently arrived from the sensor. The idea is to give the most recent information, united and without repetitions.
- **copy**: For each new message of one stream, the most recent message of the other stream is attached as-is, without interpolation.
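A few illustrative command lines combining the parameters above (the values are examples only, not recommendations):
```bash
# Attach to a specific camera (placeholder serial) and publish depth aligned to the other streams.
roslaunch realsense2_camera rs_camera.launch serial_no:="<serial number of the camera>" align_depth:=true

# Enable post-processing filters and a textured pointcloud,
# keeping points that fall outside the texture FOV.
roslaunch realsense2_camera rs_camera.launch filters:=spatial,temporal,pointcloud \
    allow_no_texture_points:=true

# D435i / T265: publish a single united imu topic instead of separate gyro/accel topics.
roslaunch realsense2_camera rs_camera.launch unite_imu_method:=linear_interpolation
```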