I tried editing the minimalist launch file example at the very end of the older post to fit with my topics:
Camera: RGB image -> /usb_cam/image_raw
Camera: RGB Camera info -> /usb_cam/camera_info
Lidar: Point cloud -> /velodyne_points (pointcloud2 message)
But I run into all kinds of issues when I try running the pointcloud_to_depthimage node by launching it in a separate shell from the launch file, mainly because I'm not very experienced with RTAB-Map. How can I incorporate the pointcloud_to_depthimage node into the launch file? I would really appreciate the help.
You may have to play with the decimation and fill_holes_size parameters to fill holes in the depth image. Note that this code is slightly different from the code that generated the image from this post, which used this, while pointcloud_to_depthimage is using this (more general).
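To run the node from the same launch file instead of a separate shell, a node entry along these lines can be added. This is a sketch: the remapped topic names match the ones listed above, but fixed_frame_id set to velodyne and the parameter values are assumptions to tune for your setup.

```xml
<!-- Sketch: run pointcloud_to_depthimage inside the launch file.
     Topic remaps match the topics above; fixed_frame_id and the
     decimation/fill_holes_size values are assumptions to tune. -->
<node pkg="rtabmap_ros" type="pointcloud_to_depthimage" name="pointcloud_to_depthimage">
  <remap from="cloud" to="/velodyne_points"/>
  <remap from="camera_info" to="/usb_cam/camera_info"/>
  <param name="fixed_frame_id" value="velodyne"/>
  <param name="decimation" value="2"/>
  <param name="fill_holes_size" value="3"/>
</node>
```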
What you need is a static_transform_publisher publishing the transform between the lidar and the camera. For example, if the camera is just above the lidar, looking forward:
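A minimal sketch of such a publisher, assuming the frame names velodyne and usb_cam from your TF tree; the 0.1 m z-offset and the optical-frame rotation (yaw = -90°, roll = -90°, the usual base-to-camera-optical rotation) are illustrative values to replace with your real mounting:

```xml
<!-- Sketch only: args are "x y z yaw pitch roll frame_id child_frame_id period_ms".
     The 0.1 m offset and the optical-frame rotation are assumptions. -->
<node pkg="tf" type="static_transform_publisher" name="velodyne_to_camera"
      args="0 0 0.1 -1.5707963 0 -1.5707963 velodyne usb_cam 100"/>
```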
Thank you for the response! That's what I was looking for: a way to define the TF, which turned out to be pretty simple. The image above shows a new warning I get when I launch the launch file. I don't have a base_link frame, so I'm going through the TF course on Robot Ignite Academy now. I should be able to get this working soon.
I'm now getting this on repeat when I launch. My TF tree is simply /velodyne and /usb_cam. Shouldn't there also be an /odom frame?
I realized there is a line cut off in the photo:

Could not get transform from odom to velodyne after 0.200000 seconds (for stamp=1558621071.290520)! Error="canTransform: target_frame odom does not exist.. canTransform returned after 0.200338 timeout was 0.2.".
<launch>
  <include file="$(find rtabmap_ros)/launch/rtabmap.launch">
    <arg name="frame_id" value="velodyne"/>
    <arg name="subscribe_scan_cloud" value="true"/>
    <arg name="rgb_topic" value="/usb_cam/image_raw"/>
    <arg name="depth_topic" value="/image_raw"/>
    <arg name="camera_info_topic" value="/usb_cam/camera_info"/>
    <arg name="scan_cloud_topic" value="/velodyne_points"/>

    <!-- ICP parameters based on KITTI dataset -->
    <arg name="rtabmap_args" value="--delete_db_on_start --Reg/Strategy 1 --Icp/MaxTranslation 0 --Icp/VoxelSize 0.3 --Icp/DownsamplingStep 10 --Icp/MaxCorrespondenceDistance 0.3 --Icp/Iterations 10 --Icp/Epsilon 0.01 --Icp/CorrespondenceRatio 0.1 --Icp/PointToPlane true"/>

    <!-- Default visual-only odometry; the ICP ROS interface is not yet implemented -->
    <arg name="odom_args" value="--Reg/Strategy 0"/>
  </include>
</launch>
Here is the launch file in case it helps.
Edit: Okay, so I found out the /image_raw topic from the pointcloud_to_depthimage node is publishing, but it's just publishing a blank image with no depth data, and I'm trying to figure out why I'm not getting a good depth image. The lidar should be shooting out 300,000 points and my camera totals 307,200 pixels (640x480), so I assume my decimation should be 0?
What does "/image_raw" look like in rqt_image_view? Can you record a small ROS bag and share a link (Dropbox, Google Drive, ...)?
$ rosbag record /usb_cam/image_raw /usb_cam/camera_info /velodyne_points
While the Velodyne gives 300,000 points, they are not all in the field of view of the camera. With a VLP-16 and without using the hole-filling parameters of pointcloud_to_depthimage, you would get only 16 lines in the image.
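For intuition about why the image looks mostly empty, here is a minimal sketch (not rtabmap_ros's actual code) of the pinhole projection that a point-cloud-to-depth-image conversion performs: each 3D point in the camera's optical frame lands on a single pixel, so a 16-ring lidar fills at most 16 rows of the image unless holes are filled, and decimation shrinks the output resolution rather than matching point counts to pixel counts. The intrinsics below are hypothetical values.

```python
import numpy as np

def cloud_to_depth(points, fx, fy, cx, cy, width, height, decimation=1):
    """Project Nx3 points (camera optical frame, z forward) into a depth image.

    decimation >= 1 divides the output resolution; it is not related to the
    number of lidar points, which is why 0 is not a valid value.
    """
    w, h = width // decimation, height // decimation
    depth = np.zeros((h, w), dtype=np.float32)  # 0 = no data (the "blank" pixels)
    for x, y, z in points:
        if z <= 0:  # behind the camera: outside the field of view
            continue
        u = int((fx * x / z + cx) / decimation)  # pinhole projection, column
        v = int((fy * y / z + cy) / decimation)  # pinhole projection, row
        if 0 <= u < w and 0 <= v < h:
            if depth[v, u] == 0 or z < depth[v, u]:  # keep the closest point
                depth[v, u] = z
    return depth
```

A single point 2 m straight ahead fills exactly one pixel of a 640x480 image; that illustrates why a sparse 16-ring scan leaves most of the depth image at zero until hole filling or decimation is applied.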