I ran an experimental test with a Kinect2 against a ground truth accurate to about a centimeter, but my results show that in some parts my error is more than 30 cm! Is that usual? Is there any way I can increase the accuracy of my localization (visual odometry)?
x and y show the error between the Kinect2 localization and my ground truth.
How is the ground truth acquired? How are the trajectories compared? Are the timestamps between the ground truth and rtabmap synchronized? I suggest using TUM's RGB-D evaluation tool so that you don't have to align the trajectories yourself. For convenience, it is also possible to feed the ground truth to the rtabmap node over tf (set the ground_truth_frame_id and ground_truth_base_frame_id parameters). The RMSE will then be computed automatically, and in databaseViewer or rtabmapviz you will be able to see the two paths aligned.
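To show what that evaluation does under the hood, here is a minimal sketch (not the actual TUM script) of absolute trajectory error: the estimated path is rigidly aligned to the ground truth with a closed-form Kabsch/Umeyama fit (no scale), then the RMSE of the residuals is reported. It assumes the two trajectories are already timestamp-matched into Nx3 position arrays.

```python
import numpy as np

def align_and_rmse(gt, est):
    """Rigidly align `est` onto `gt` (rotation + translation, no scale)
    and return the absolute trajectory RMSE.
    gt, est: Nx3 arrays of timestamp-matched positions."""
    gt_c = gt - gt.mean(axis=0)
    est_c = est - est.mean(axis=0)
    # Kabsch: optimal rotation from the SVD of the cross-covariance matrix
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = gt.mean(axis=0) - R @ est.mean(axis=0)
    aligned = est @ R.T + t
    err = np.linalg.norm(aligned - gt, axis=1)
    return np.sqrt(np.mean(err ** 2))
```

Because the alignment already removes any global rotation/translation offset between the two coordinate frames, the remaining RMSE reflects actual drift of the odometry, which is why you don't have to care how the two trajectories were originally expressed.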
In the last screenshot, the point cloud looks good, though I cannot really compare it with the real environment. The offset between the ground truth and the trajectory computed by rtabmap can also be caused by a scale factor. The scale is defined by the camera calibration: if the calibrated focal length is wrongly larger than reality, for example, the point cloud will look bigger than the real environment, and thus the trajectory will be larger too. If the point cloud represents the environment well, it could indeed be a scale problem. If the point cloud doesn't represent the environment well in some places, then the error is really coming from the motion estimation.
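The focal-length/scale coupling can be illustrated with a toy disparity-based depth model, z = f · baseline / disparity. This is an assumption for illustration only (it does not model the Kinect2's time-of-flight depth exactly, and the numbers below are made up), but it shows how an error in the calibrated focal length scales every depth, and therefore the whole map and trajectory, by the same ratio.

```python
# Toy model (not RTAB-Map code): depth from disparity, z = f * b / d.
# All values below are illustrative assumptions, not real calibration data.
f_true = 365.0             # true focal length in pixels
f_calib = 1.10 * f_true    # calibration wrongly 10% too large
baseline = 0.075           # baseline in meters
disparity = 30.0           # measured disparity in pixels

z_true = f_true * baseline / disparity
z_est = f_calib * baseline / disparity

# The ratio equals f_calib / f_true: every point, and hence the map and
# the estimated trajectory, comes out ~10% larger than reality.
print(round(z_est / z_true, 3))
```

This is why comparing the reconstructed point cloud against known dimensions in the real environment (a wall length, a doorway) is a quick way to tell a calibration scale error apart from genuine odometry drift.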