Providing a mask to discard features in given portions of the image
I’m using visual odometry (rgbd_odometry in ROS) based on a Kinect to reconstruct the trajectory of a semi-autonomous wheelchair. I have the following setup: the Kinect is mounted 30 cm above the seat and is slightly inclined towards the ground (~20 deg) to detect tables that the laser scanner is not able to safely recognize. A small screen is placed in front of the seat, fixed to the wheelchair by means of a support arm and unfortunately in the field of view of the Kinect. I cannot change this configuration. In addition, there is no static map of the environment.
The visual odometry is not working properly (it underestimates the current position of the wheelchair by several meters), and my best guess at the reason is that it is acquiring features from the portion of the image showing the small screen (which is fixed with respect to the wheelchair, so those features never move).
Is it possible to provide rgbd_odometry with a mask in order to discard features belonging to a portion of the image (specifically the part covered by the fixed screen)?
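If there is no built-in mask parameter, one workaround I’m considering is republishing the depth image with the screen region set to zero before it reaches rgbd_odometry, since features without valid depth should not contribute to the pose estimation. A minimal sketch of the masking step (the region coordinates below are hypothetical placeholders for where the screen appears in my frames; in the real node this would run inside a cv_bridge callback on the 16UC1 depth topic):

```python
import numpy as np

def mask_depth(depth, x0, y0, x1, y1):
    """Return a copy of the depth image with a rectangular region invalidated.

    For Kinect 16UC1 depth images, 0 means "no depth", so zeroing the
    region covering the fixed screen should keep features there from
    being used (assumption: the odometry discards zero-depth features).
    """
    masked = depth.copy()
    masked[y0:y1, x0:x1] = 0  # 0 = invalid depth in 16UC1 encoding
    return masked

# Hypothetical 480x640 frame at 1.5 m depth everywhere,
# with the screen assumed to occupy the bottom-left corner.
depth = np.full((480, 640), 1500, dtype=np.uint16)
masked = mask_depth(depth, 0, 300, 250, 480)
```

In the real setup this would be wrapped in a small node subscribing to the Kinect depth topic and publishing the masked image on a remapped topic that rgbd_odometry listens to, leaving the RGB image untouched.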