For the general point cloud created in rtabmapviz or the MapCloud rviz plugin, objects won't be cleared. For the occupancy grid, missing objects should be cleared, as long as the sensor can see the floor under them or the laser scans can hit another object/wall behind the area of the missing object. If you have screenshots or even a database to share, it would be easier to see the problem.
I have another question/remark concerning dynamic objects in a dynamic environment.
I think the best way is to remove the dynamic object from the depth image (by setting the values to 0 within a bounding box that surrounds the object). To do this, an object detection algorithm could be used:
1. Detect the object and its bounding box
2. Mask the depth image with the corresponding bounding box
3. Feed rtabmap with the RGB image and the masked depth image
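The masking step above can be sketched as follows (a minimal sketch, assuming a 16UC1 depth image as commonly published in ROS, where a value of 0 means "no measurement"; the function name and box format are hypothetical):

```python
import numpy as np

def mask_depth(depth, boxes):
    """Zero out depth pixels inside detected bounding boxes.

    depth: HxW uint16 depth image (0 = invalid, as in 16UC1 ROS depth images)
    boxes: list of (x1, y1, x2, y2) pixel bounding boxes from the detector
    """
    masked = depth.copy()
    h, w = masked.shape
    for x1, y1, x2, y2 in boxes:
        # Clamp the box to the image bounds before masking
        x1, x2 = max(0, x1), min(w, x2)
        y1, y2 = max(0, y1), min(h, y2)
        # Zeroed pixels are treated as missing depth by rtabmap
        masked[y1:y2, x1:x2] = 0
    return masked

# Example: a 480x640 depth image with one detection at (100,50)-(200,150)
depth = np.full((480, 640), 1500, dtype=np.uint16)
out = mask_depth(depth, [(100, 50, 200, 150)])
```

The masked image can then be republished on the depth topic that rtabmap subscribes to, while the RGB image is passed through unchanged.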
But what about the problem if the object detector does not always detect the object? It could be that in one frame a specific object has been detected, but in the next frame it has NOT been detected. This would affect the 3D map (cloud_map) as well as the odometry...
Can you give me some hints on how to deal with this problem, especially in ROS and with the published topic "cloud_map"?
For odometry, as long as most of the tracked features come from static structures, performance would not be affected too much.
For mapping, if spurious objects are added to the map, they will indeed appear in cloud_map. You would then need to use octomap_occupied_space instead, in which dynamic objects can be cleared. This however has a cost, as 3D ray tracing must be done.
To clear moving people, the sensor should be able to see the background (e.g., a wall behind where the person was). As Table 10 in the paper shows, the computation time is quite high when a loop should be closed.
If you don't need 3D point clouds and a 2D map from projected 3D obstacles would be enough, consider using the 2D occupancy grid with ray tracing instead (grid_map):
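As one possible configuration, this could be set through rtabmap's grid parameters in a launch file (a sketch only; I'm assuming the `Grid/3D` and `Grid/RayTracing` parameter names here, so check `rtabmap --params` for the exact names and defaults in your version):

```xml
<launch>
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
    <!-- Build a 2D grid from projected 3D obstacles instead of a 3D map -->
    <param name="Grid/3D" type="string" value="false"/>
    <!-- Enable ray tracing so cells behind a vanished object get cleared -->
    <param name="Grid/RayTracing" type="string" value="true"/>
  </node>
</launch>
```

With ray tracing enabled, rays are traced from the sensor to the obstacles, so cells previously occupied by a dynamic object can be marked free again once the sensor sees past where the object was.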