I am using a Realsense D435 and an RPLidar 360° planar scanner with RTAB-Map. With just the Realsense, the map was not very clean, but after adding the RPLidar it produced much better point clouds. Now I am trying to generate an OctoMap from the data. I read through the other questions on this issue and got the OctoMap working, but I had to set the parameter Grid/FromDepth to true, which seems to ignore the scan from the RPLidar, so I am back to noisy maps without straight walls.
Is there a way to build an OctoMap while still using the RPLidar to generate the occupancy grid, and use that data to help create the OctoMap?
To have both the camera and lidar point clouds in the OctoMap, you should merge them before sending the merged PointCloud2 to rtabmap's point cloud input (scan_cloud:=/mergedPointcloud with subscribe_scan_cloud:=true). With Grid/FromDepth=false, the OctoMap will be created from that cloud. You can also get the OctoMap's projected 2D occupancy grid map from the /rtabmap/octomap_grid topic.
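As a minimal launch sketch of that configuration (argument names assume the standard rtabmap.launch from rtabmap_ros and may differ in your version; /mergedPointcloud is a placeholder topic you must publish yourself):

```shell
# Hedged sketch: subscribe rtabmap to a pre-merged cloud instead of the 2D scan.
roslaunch rtabmap_ros rtabmap.launch \
    subscribe_scan_cloud:=true \
    subscribe_scan:=false \
    scan_cloud_topic:=/mergedPointcloud \
    rtabmap_args:="--Grid/FromDepth false"
```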
However, why do you need an OctoMap? Is the 2D occupancy grid from the RPLidar not enough? If you need to avoid 3D obstacles not detected by the lidar, that can be done by feeding the camera cloud to the local costmap of move_base (possibly using the obstacles_detection nodelet to produce an obstacle cloud that marks obstacles and a ground cloud that clears obstacles, similar to this configuration).
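For the move_base route, a sketch along these lines could split the camera cloud into obstacle and ground clouds (the remap names and the D435 topic below are assumptions to verify against your setup and rtabmap_ros version):

```shell
# Hedged sketch: segment the camera cloud for costmap marking/clearing layers.
# /camera/depth/color/points is a typical D435 topic; adjust to your driver.
rosrun nodelet nodelet standalone rtabmap_ros/obstacles_detection \
    cloud:=/camera/depth/color/points \
    obstacles:=/camera/obstacles \
    ground:=/camera/ground
```

The /camera/obstacles and /camera/ground topics would then be listed as marking and clearing observation sources in the local costmap configuration.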
I tried what you suggested by setting subscribe_scan_cloud:=true, subscribe_scan:=false, and Grid/FromDepth:=false, while subscribing to scan_cloud:=mergedPointcloud. But nothing was being published on mergedPointcloud, so it did not work. Do I need to do something else to make mergedPointcloud publish data?
The laser scan I am using is just a two-dimensional scan: it has a single beam that sweeps 360 degrees.
The reason I am using OctoMap is that I want to build a 3D map first and then use it for 3D path planning with a multirotor UAV, not a ground robot. I believe voxels will be better to plan with than a point cloud because they account for all of the volume in the space.
Yes, "mergedPointcloud" is a topic that you need to provide yourself, with all point clouds (from the lidar and camera) merged together.
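One possible way to produce that topic, assuming your rtabmap_ros version ships the point_cloud_aggregator nodelet (the remap and parameter names below are assumptions to check against your version): since the RPLidar publishes a LaserScan, it must first be converted to a PointCloud2, e.g. with the laserscan_to_pointcloud_node from the pointcloud_to_laserscan package.

```shell
# Hedged sketch: convert the 2D scan to a cloud, then aggregate both clouds.
# All topic names are placeholders; adjust to your drivers.
rosrun pointcloud_to_laserscan laserscan_to_pointcloud_node \
    scan:=/scan cloud:=/lidar_cloud &
rosrun nodelet nodelet standalone rtabmap_ros/point_cloud_aggregator \
    _count:=2 \
    cloud1:=/lidar_cloud \
    cloud2:=/camera/depth/color/points \
    combined_cloud:=/mergedPointcloud
```

Note that both input clouds need valid TF between their frames so the aggregator can express them in a common frame.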
For your application, it is possible to use a 2D occupancy grid configuration with the planar lidar on a ground robot, then convert the resulting database to a 3D map with rtabmap-databaseViewer and the saved images. That database could then be used with your drone. However, as the points of view of the map will differ between the ground robot and the drone, the drone may have difficulty localizing robustly in the map.
To convert the database to a 3D map:
1) Backup the database
2) Open database in rtabmap-databaseViewer
3) Open View->Core Parameters
4) Select the Grid parameters tab
5) Set Grid/FromDepth to true (assuming it was false with the ground robot and 2D lidar)
6) Do Edit->Regenerate local grid maps...
You now have a database containing the 3D map. OctoMap topics should be published.
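A command-line alternative to the GUI steps above may be rtabmap-reprocess, which replays the database with new parameters (the flag syntax is an assumption; check rtabmap-reprocess --help for your version, and the output paths below are placeholders):

```shell
# Hedged sketch: back up the database, then regenerate it with 3D local grids.
cp ~/.ros/rtabmap.db ~/.ros/rtabmap_backup.db
rtabmap-reprocess --Grid/FromDepth true \
    ~/.ros/rtabmap_backup.db ~/.ros/rtabmap_3d.db
```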