RGBD Outdoor Mapping - Offline Database Processing - Localization


RGBD Outdoor Mapping - Offline Database Processing - Localization

DavideR
INTRODUCTION

First of all, I want to thank Mathieu for the astonishing contribution he is making to the community. His efforts are really impressive!

I would like to write this post in detail, so as to help as many people as possible who may be working with a scenario similar to mine.

HARDWARE CONFIGURATION

ROBOT PLATFORM : 2 wheeled differential drive robot
RGBD SENSOR : Stereolabs ZED Mini Camera
LIDAR SENSOR : Quanergy 3D LiDAR (360° FOV)
IMU SOURCES : Standalone central unit / ZED Mini embedded IMU
GPS SENSOR

COMPUTING PLATFORM

Two-machine ROS-based network
COMMUNICATION LINK : Local Ethernet Network
RGBD PROCESSING UNIT : NVIDIA JETSON TX1 [ L4T (JETPACK 3.3) / ZED SDK 2.7.1 / ROS Kinetic ]
ROS MASTER : Lenovo Thinkpad Laptop Intel i5-5300U CPU @ 2.30GHz / 8GB DDR3 RAM [Linux Ubuntu 16.04 / ROS Kinetic]

SOFTWARE CONFIGURATION

The Jetson TX1 is used for image/depth processing only [Input: raw image streams, IMU / Output: RGB + depth map, odometry (published topics)]
RTABMAP: Version 0.18.0 [ROS Node Location : Laptop / Standalone Location : Laptop]
ZED SDK : Version 2.7.1 [Installation Location : Jetson TX1]
ZED ROS WRAPPER : [ ROS Node Location : Jetson TX1]
LIDAR DRIVER NODE : [ROS Node Location : Laptop]

PROJECT'S GOAL

A. Record data while navigating an urban outdoor environment [~1 km start-to-finish path, few or no loop closures] [2D occupancy grid map required for further navigation purposes].
B. Process the database offline to obtain the most accurate map, free of online processing constraints.
C. Use the created map for navigation purposes, exploiting appearance-based localization and possibly visual odometry fusion.

QUESTIONS

1. RECORDING DATA

Recorded topics : RGB Image stream - Depth Map - Lidar 2D Laser Scan - ZED Camera IMU Odometry - Robot Wheel Odometry - GPS

a. Given offline map creation, am I still subject to any quality/performance trade-off? Is it possible to store raw data at maximum quality/precision/input rate?

b. My intended pipeline is: record a ROS bag file at maximum quality [RTAB-Map OFF] > process the bag by means of rtabmap-dataRecorder [OFFLINE] > process the resulting database by means of the rtabmap standalone app and configure parameters offline. Is that feasible?

c. Is there any threshold (regarding input rate, quality, etc.) above which a raw data quality improvement would result in useless overkill?


2. PROCESSING THE DATABASE OFFLINE

Assuming the pipeline is feasible, I'm wondering whether I'll be able to fully tune rtabmap parameters offline so as to get the best accuracy I can (given good odometry performance).

3. LOCALIZATION MODE

I'm aware of the integration of move_base within the rtabmap framework. I am wondering if it is possible to run rtabmap in localization mode during navigation within the created map while having multiple localization sources (e.g. lidar-based Monte Carlo localization and appearance-based rtabmap localization).
I'm interested in evaluating the performance of the different localization approaches.


Thank you in advance for your support!

Re: RGBD Outdoor Mapping - Offline Database Processing - Localization

matlabbe
Administrator
Hi,

It is an interesting setup. Does the Quanergy LiDAR give a point cloud similar to a Velodyne's? If so, among your recorded topics you could record the 3D point cloud instead of the 2D laser scan if you want to keep all the data. You have both wheel odometry and ZED odometry; you would have to use only one, or combine both with the robot_localization package.
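As a minimal sketch of such a fusion (the /wheel_odom and /zed_vio_odom topic names are taken from this setup; the EKF configuration itself is an assumption that you would have to tune for your own frames and covariances):

<launch>
  <!-- Sketch: fuse wheel odometry and ZED visual odometry into a single odom estimate. -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
    <param name="frequency"       value="30"/>
    <param name="odom_frame"      value="odom"/>
    <param name="base_link_frame" value="base_link"/>
    <param name="world_frame"     value="odom"/>
    <!-- Wheel odometry: fuse forward and yaw velocities only. -->
    <param name="odom0" value="/wheel_odom"/>
    <rosparam param="odom0_config">[false, false, false,
                                    false, false, false,
                                    true,  false, false,
                                    false, false, true,
                                    false, false, false]</rosparam>
    <!-- ZED visual odometry: fuse the full 6DoF pose. -->
    <param name="odom1" value="/zed_vio_odom"/>
    <rosparam param="odom1_config">[true,  true,  true,
                                    true,  true,  true,
                                    false, false, false,
                                    false, false, false,
                                    false, false, false]</rosparam>
  </node>
</launch>

For your questions: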

1. RECORDING DATA
a. Given offline map creation, am I still subject to any quality/performance trade-off? Is it possible to store raw data at maximum quality/precision/input rate?
Yes, it is possible. You have the choice of recording a rosbag, which gives you the most flexibility if you want to try other ROS approaches, or of recording data in RTAB-Map's database format directly using data_recorder.launch. When data is recorded in RTAB-Map's database format, we can reprocess it with different parameters using the rtabmap standalone app (use the recorded database as input source) or the rtabmap-reprocess tool (which can reprocess faster than real-time). With a rosbag, it is like replaying the experiment online. For this reason, it may be most useful to record a rosbag with everything at first, so that you can test different rtabmap input configurations afterwards.
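If you go the rosbag route first, the record step could be as simple as the following sketch (the ZED, Quanergy and odometry topic names come from this thread; the IMU and GPS topic names are assumptions to adapt to your drivers):

<launch>
  <!-- Sketch: record all raw data at full rate for later offline processing. -->
  <node pkg="rosbag" type="record" name="bag_record" output="screen"
        args="record -o /data/outdoor_run
              /zed/rgb/image_rect_color
              /zed/depth/depth_registered
              /zed/rgb/camera_info
              /quanergy_cloud
              /zed_vio_odom
              /wheel_odom
              /imu/data
              /gps/fix
              /tf /tf_static"/>
</launch>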

Note about data_recorder.launch: if you want to use it, I suggest using the rtabmap_ros/rgbd_sync nodelet to pre-synchronize the depth/RGB images together before feeding them to the recorder node. Example:
<launch>
  <node pkg="nodelet" type="nodelet" name="rgbd_sync" args="standalone rtabmap_ros/rgbd_sync">
    <remap from="rgb/image"         to="/zed/rgb/image_rect_color"/>
    <remap from="depth/image"       to="/zed/depth/depth_registered"/>
    <remap from="rgb/camera_info"   to="/zed/rgb/camera_info"/>
    <param name="approx_sync"       value="false"/>
  </node>
  <include file="$(find rtabmap_ros)/launch/data_recorder.launch">
    <arg name="subscribe_odometry"   value="true"/>
    <arg name="subscribe_scan_cloud" value="true"/>
    <arg name="subscribe_depth"      value="false"/>
    <arg name="subscribe_rgbd"       value="true"/>
    <arg name="rgbd_topic"           value="/rgbd_image" />
    <arg name="scan_cloud_topic"     value="/quanergy_cloud"/>
    <arg name="odom_topic"           value="/zed_vio_odom"/> <!-- other choices: /wheel_odom or /odom_combined (robot_localization) -->
    <arg name="frame_id"             value="base_link"/>
    <arg name="max_rate"             value="1"/>       <!-- 0 = Record as fast as possible --> 
  </include>
</launch>
Note that an odometry covariance compatible with RTAB-Map is very important. To use arbitrary fixed values, you would have to set "odom_frame_id" to use TF instead. data_recorder.launch lacks arguments to set the covariance values, so you would have to edit the file to add, under the data_recorder node, something like:
<arg name="odom_tf_linear_variance" default="0.01"/>
<arg name="odom_tf_angular_variance" default="0.005"/>

b. My intended pipeline is: record a ROS bag file at maximum quality [RTAB-Map OFF] > process the bag by means of rtabmap-dataRecorder [OFFLINE] > process the resulting database by means of the rtabmap standalone app and configure parameters offline. Is that feasible?
As discussed in the previous point, "record a ROS bag file at maximum quality [RTAB-Map OFF]" is a good way to start. After processing it once with rtabmap, you will get a database. This database can then be used as input in the standalone app to tune parameters offline and replay the database at a different frame rate (e.g. faster than real-time for convenience).

c. Is there any threshold (regarding input rate, quality, etc.) above which a raw data quality improvement would result in useless overkill?
It is more of a hard drive space issue. Recording all raw data at the maximum frame rate can give you very big rosbags. If odometry is already computed, RTAB-Map by default requires data at only 1 Hz. The recording approach using data_recorder.launch would give much smaller files if the rate is limited to 1 Hz, for example.
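If you still want a smaller rosbag as well, one simple option is to throttle the camera topics before recording them, e.g. with topic_tools (a sketch; the throttled output topic names are assumptions):

<launch>
  <!-- Sketch: republish the RGB and depth streams at about 1 Hz for recording. -->
  <node pkg="topic_tools" type="throttle" name="rgb_throttle"
        args="messages /zed/rgb/image_rect_color 1.0 /zed/rgb/image_rect_color_throttled"/>
  <node pkg="topic_tools" type="throttle" name="depth_throttle"
        args="messages /zed/depth/depth_registered 1.0 /zed/depth/depth_registered_throttled"/>
</launch>

Note that topics throttled independently lose their exact synchronization, which is another reason to prefer rgbd_sync with data_recorder.launch for the 1 Hz case.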


2. PROCESSING THE DATABASE OFFLINE

Assuming the pipeline is feasible, I'm wondering whether I'll be able to fully tune rtabmap parameters offline so as to get the best accuracy I can (given good odometry performance).
Yes, we can change all parameters offline.

3. LOCALIZATION MODE

I'm aware of the integration of move_base within the rtabmap framework. I am wondering if it is possible to run rtabmap in localization mode during navigation within the created map while having multiple localization sources (e.g. lidar-based Monte Carlo localization and appearance-based rtabmap localization).
I'm interested in evaluating the performance of the different localization approaches.
If you are creating a 3D map, it could be possible to make rtabmap just publish the OctoMap for a lidar-based localization approach that uses it, while disabling appearance-based rtabmap localization (Kp/MaxFeatures=-1). In that case, the other localization approach would publish /map -> /odom. Make sure to set publish_tf to false for rtabmap so that /map -> /odom is not published by multiple nodes, as in the sketch below. To use another appearance-based localization approach, we would need more info about the format that is used. For example, it is possible to export a visual map in Bundler format (see this post) and use ACG_localizer.
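As a minimal sketch of that configuration (the database path is an assumption; publish_tf, Mem/IncrementalMemory and Kp/MaxFeatures are existing rtabmap_ros/RTAB-Map parameters):

<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <param name="database_path" value="~/.ros/outdoor_map.db"/>
  <!-- Let the other localization approach own /map -> /odom. -->
  <param name="publish_tf" value="false"/>
  <!-- Localization mode: do not add new data to the map. -->
  <param name="Mem/IncrementalMemory" type="string" value="false"/>
  <!-- Disable appearance-based loop closure detection/localization. -->
  <param name="Kp/MaxFeatures" type="string" value="-1"/>
</node>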

cheers,
Mathieu



Re: RGBD Outdoor Mapping - Offline Database Processing - Localization

DavideR
Thank you, Mathieu, for your support!

I've tried to figure out how RTAB-Map's localization mode should work. I'm not sure I completely understood what is going on. Could you confirm that I got it right?

So, localization mode is based on the same algorithm that is used for loop closure detection during mapping. Therefore, when I move the robot in an already-mapped environment, the RGB-D images streamed by the camera are used for appearance-based place recognition. This is done through the bag-of-visual-words technique.

[Here I am not sure how the depth information comes into play:
1. Does every keyframe feature stored in the map carry its depth information?
2. Is the depth of the features used for estimating the pose of the camera through SfM?
3. The point cloud should not be necessary when localizing through the RGB visual-words approach. Is it true anyway that the point cloud can be exploited to refine the pose estimate by ICP processing?]


The localization output is a map->odom transform, so it should estimate the position of the robot within the map.

Given your experience, what should I expect (if good overall conditions are met)?

A. A reliable "continuous" global localization.
B. Just a "discrete" global localization, useful only for "localization recovery" in case other localization methods fail (I'm thinking of kidnapped-robot situations).

Can I use appearance-based localization to locate the robot within a 2D occupancy map?

Excuse me for my naive questions; I'm quite a newbie in this field.

Thank you again for your support

Cheers

Re: RGBD Outdoor Mapping - Offline Database Processing - Localization

matlabbe
Administrator
Hi,

1. Does every keyframe feature stored in the map carry its depth information?
Yes.

2. Is the depth of the features used for estimating the pose of the camera through SfM?
It depends; the pose estimation is done by PnP, which is a 2D<->3D estimation. For localization, the depth is not required. However, for the map, the depth of the features must be known.

3. The point cloud should not be necessary when localizing through the RGB visual-words approach. Is it true anyway that the point cloud can be exploited to refine the pose estimate by ICP processing?
The point cloud is not necessary for appearance-based localization. However, if a laser scan is available, it can be used to refine the transformation with ICP.
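In rtabmap, this refinement is selected through the registration parameters; a minimal sketch (Reg/Strategy and subscribe_scan are existing RTAB-Map/rtabmap_ros parameters; 0 = visual only, 1 = ICP only, 2 = visual followed by ICP):

<!-- Sketch: let rtabmap refine visual localizations with ICP on the laser scan. -->
<param name="subscribe_scan" value="true"/>
<param name="Reg/Strategy"   type="string" value="2"/>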

You will find more details in the paper "RTAB-Map as an Open-Source Lidar and Visual SLAM Library for Large-Scale and Long-Term Online Operation".

Can I use appearance-based localization to locate the robot within a 2D occupancy map?
Using rtabmap, yes: rtabmap keeps the 3D features even if you output a 2D occupancy grid.

For localization, as rtabmap is updated at 1 Hz by default, you will get discrete jumps in the /map -> /odom transform. This follows the standard; see REP 105. Localizations may not happen on every frame, so the robot pose may "jump" when relocalized to correct the odometry drift.
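For reference, a sketch of switching an existing map to localization mode with rtabmap.launch (the database path is an assumption; the localization and database_path arguments exist in rtabmap_ros):

<include file="$(find rtabmap_ros)/launch/rtabmap.launch">
  <arg name="localization"  value="true"/>  <!-- sets Mem/IncrementalMemory=false -->
  <arg name="database_path" value="~/.ros/outdoor_map.db"/>
</include>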

cheers,
Mathieu