Using KinectFusion's iterative closest point (ICP) function, I've tried to map an environment using just the previous and current point clouds to find the transforms. This fails due to drift: even when the Kinect is stationary, the ICP algorithm reports movement. The full KinectFusion pipeline is stable because it builds a voxel representation of the world and aligns each new frame against that model instead of against the previous frame alone.
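To illustrate the drift I'm seeing, here is a toy simulation (not KinectFusion code, and the noise level is made up): each frame-to-frame ICP result is modeled as the identity transform plus a tiny random translation error, and chaining the estimates turns that error into a random walk even though the sensor never moves.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_icp_result(sigma=0.002):
    """Fake per-frame ICP estimate: identity rotation, small random
    translation noise (metres). The true motion is zero."""
    T = np.eye(4)
    T[:3, 3] = rng.normal(0.0, sigma, 3)
    return T

pose = np.eye(4)                # accumulated sensor pose
for _ in range(1000):           # 1000 frames with a stationary sensor
    pose = pose @ noisy_icp_result()

drift = np.linalg.norm(pose[:3, 3])
print(f"drift after 1000 stationary frames: {drift:.4f} m")
```

The per-frame error never cancels out; it compounds, which matches the behaviour I'm observing.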
How does RTAB-Map keep a stationary sensor from thinking it's moving? Is RANSAC-based cloud alignment simply that much more stable? Or is there a minimum movement threshold or something similar, and wouldn't that cause other issues?
RTAB-Map's default odometry is not really Frame-to-Frame; it is closer to Frame-to-Map (Odom/Strategy=0). It maintains a local map of features extracted from recent images. The size of this local map is set by the "OdomBow/LocalHistorySize" parameter (default: the last 1000 features). So if the camera is not moving, new features are always compared against the same features in the local map. It is like buffering a small static window of the environment to compare against.
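A toy sketch of the Frame-to-Map idea (this is not RTAB-Map's implementation; the nearest-neighbour matching and mean displacement stand in for the real feature matching and motion estimation): features are matched against a bounded local map of past features rather than against the previous frame, so a stationary camera keeps matching the same anchors and reports zero motion with no accumulation.

```python
import numpy as np

LOCAL_MAP_SIZE = 1000  # analogous in spirit to OdomBow/LocalHistorySize

class FrameToMapOdometry:
    def __init__(self):
        self.local_map = np.empty((0, 2))  # stored 2-D feature positions

    def process(self, features):
        if len(self.local_map) == 0:
            self.local_map = features.copy()   # bootstrap the local map
            return np.zeros(2)
        # Match each feature to its nearest neighbour in the local map and
        # estimate motion as the mean displacement (a crude stand-in for
        # the real RANSAC-based estimation).
        d = np.linalg.norm(features[:, None] - self.local_map[None], axis=2)
        nearest = self.local_map[d.argmin(axis=1)]
        motion = (features - nearest).mean(axis=0)
        # Only add genuinely new features, and keep the map bounded.
        new = features[d.min(axis=1) > 0.5]
        self.local_map = np.vstack([self.local_map, new])[-LOCAL_MAP_SIZE:]
        return motion

odom = FrameToMapOdometry()
scene = np.random.default_rng(1).uniform(0, 10, (50, 2))
for _ in range(5):                     # stationary camera: same features
    motion = odom.process(scene)
print("estimated motion while stationary:", motion)  # stays at zero
```

Because the reference (the local map) does not change while the camera is still, repeated estimates cannot drift the way chained frame-to-frame estimates do.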
If you set the odometry strategy to Frame-to-Frame (Odom/Strategy=1), there is a parameter that makes the keyframe change only when the number of tracked features drops below a ratio of those in the keyframe. With that enabled, the keyframe never changes while the camera is not moving, which avoids the drift.
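The keyframe rule can be sketched like this (the threshold name and set-based "tracking" are illustrative, not RTAB-Map's actual API): the reference keyframe is replaced only when the fraction of its features still tracked falls below a threshold, so a stationary camera keeps being compared against the very same keyframe.

```python
KEYFRAME_RATIO = 0.3   # replace keyframe when < 30% of features tracked

def update_keyframe(keyframe_feats, frame_feats):
    """Return the keyframe to use for the next frame. Feature sets are
    represented as sets of feature ids for simplicity."""
    tracked = keyframe_feats & frame_feats
    ratio = len(tracked) / max(len(keyframe_feats), 1)
    # Keep the old keyframe while tracking is good: repeated estimates
    # against the same fixed reference cannot accumulate drift.
    return frame_feats if ratio < KEYFRAME_RATIO else keyframe_feats

keyframe = frozenset(range(100))     # feature ids seen in the keyframe
for _ in range(50):                  # stationary: same features each frame
    keyframe = update_keyframe(keyframe, frozenset(range(100)))
print(keyframe == frozenset(range(100)))  # True: keyframe never changed
```

The trade-off is the one the question hints at: while the keyframe is held, small real motions are still estimated against it, so slow motion is not lost; the keyframe only advances once the view has changed enough.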