Actually, the Tango is taped on top of the lidar (as in design02.jpg). The lidar point source is 4 cm behind the Tango IR source (assuming the Tango IR camera is the center of the Tango device in RTAB-Map, I centered it on the lidar). Not the best setup, but I'm designing two 3D-printed stands to mount the phone properly.
The main goal of the lidar is to capture what the Tango doesn't see. From all my tests with the Tango, I know I'm able to scan a large environment, as long as I have good "unique scenes" across the environment to get good loop closures. And the results are unbelievable. The only problem is the field of capture: I have to go back and forth a lot between two "unique scenes" to scan every angle of the environment.
In your opinion, which design is the best?
I think I'm missing something.
First, "/tango/camera/color_1/camera_info \" appears twice in the rosbag command you asked me to test. Is that normal?
I launch the lidar command first, then this command, and then Tango ROS Streamer on the phone:
Note that we set Rtabmap/DetectionRate to 0 as we are receiving the assembled scans at 1 Hz. Then export the full cloud to PLY (with rtabmap's File->Export Clouds...) and view it in CloudCompare:
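As a side note, a minimal sketch of how that parameter can be passed on the command line, assuming the standard rtabmap.launch from rtabmap_ros:

# 0 disables the detection rate cap, so a node is created for every incoming assembled scan
roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--Rtabmap/DetectionRate 0"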
There are still some small rotation errors, but I think you are on the right track. For synchronizing the lidar with the TF coming from the Tango, the best would be to somehow synchronize the clocks between the phone and the computer feeding the lidar, though I'm not sure how to do that. For the 1 second lag in the previous post, I think I have seen it with Tango too: the Tango's clock slowly drifts out of sync with the remote computer after a couple of minutes; the fix is to reboot the phone.
I did some other tests; the Tango points and the lidar points are better aligned.
The only things I changed are the static transform publisher (the last argument of static_transform_publisher is the publishing period in milliseconds, so 100 = 10 Hz and 5 = 200 Hz):
from : rosrun tf static_transform_publisher 0 0 .04 1.57079632679 3.14159265359 0 device Lidar 100
to: rosrun tf static_transform_publisher 0 0 .04 1.57079632679 3.14159265359 0 device Lidar 5
and the launch command sequence.
My TF tree looks like this:
Things look better, but it's not perfect. I don't know if it's a general error or just the precision of the Tango IMU that shifts the lidar points when they are far from the source. When I show the IMU in RViz, it seems to shake a lot in rotation.
Some other things also bother me. When I launch "rosrun tf tf_monitor", I get this:
RESULTS: for all Frames
Frame: Lidar published by unknown_publisher Average Delay: -0.00470797 Max Delay: 0.00404124
Frame: camera_color published by unknown_publisher(static) Average Delay: 0 Max Delay: 0
Frame: camera_depth published by unknown_publisher(static) Average Delay: 0 Max Delay: 0
Frame: camera_fisheye published by unknown_publisher(static) Average Delay: 0 Max Delay: 0
Frame: device published by unknown_publisher Average Delay: -0.158295 Max Delay: 0
Frame: imu published by unknown_publisher(static) Average Delay: 0 Max Delay: 0
Frame: laser published by unknown_publisher(static) Average Delay: 0 Max Delay: 0
Frame: start_of_service published by unknown_publisher(static) Average Delay: 0 Max Delay: 0
Node: unknown_publisher 198.213 Hz, Average Delay: -0.158295 Max Delay: 0
Node: unknown_publisher(static) 198.109 Hz, Average Delay: 0 Max Delay: 0
Is it normal that they are all named "unknown_publisher"?
And sometimes during a scan, this message pops up:
Warning: TF_OLD_DATA ignoring data from the past for frame start_of_service at time 1.55437e+09 according to authority unknown_publisher
Possible reasons are listed at http://wiki.ros.org/tf/Errors%20explained at line 277 in /tmp/binarydeb/ros-kinetic-tf2-0.5.20/src/buffer_core.cpp
I tried the Reset button at the bottom of RViz, but I don't see any difference.
One last thing:
In Tango ROS Streamer, I can't access the "enable color camera" setting. It says "requires API >23". Is that why my bag is missing the images?
I'm thinking about an Arduino IMU module to test its precision compared to the Tango IMU. Do you think that could help?
I have doubts about this mapping method. Here we map with the Tango IMU's precision. That is why I have noise on the lidar points (the same noise as the Tango point cloud), while the lidar is a very precise device.
Can we do the pose estimation another way?
Is there a way to find features in the lidar points to estimate their transform in the 3D world? Then check every second whether this lidar pose estimate and the Tango IMU pose are close enough to validate the lidar transform and points, so we use the lidar's precision rather than the IMU's. Then use the images, and when RTAB-Map finds a good loop closure, set the lidar pose estimate and the IMU pose to the same location.
I read the "RTAB setup with a 3D LiDAR" thread. Is it similar to what I'm thinking?
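For reference, a rough sketch of the direction I mean, assuming the rtabmap_ros icp_odometry node from that thread is available in my version; the topic and frame names are only guesses from my setup:

# lidar-only odometry: registers each new cloud against the previous ones (hypothetical topic/frame names)
rosrun rtabmap_ros icp_odometry scan_cloud:=/Lidar/points _frame_id:=Lidar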
I've found that the covariance for the Tango odometry was not set, so the loop closures were adding more error to the map than they could correct. I updated the tutorial. You need to start rtabmap.launch with the following (based on what I've set for RTAB-Map Tango):
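A minimal sketch of what that looks like, assuming rtabmap.launch's odom_tf_linear_variance and odom_tf_angular_variance arguments; the variance values below are placeholders, the actual ones are in the tutorial:

# fixed covariance applied to the TF-based tango odometry (placeholder values)
roslaunch rtabmap_ros rtabmap.launch \
   odom_tf_linear_variance:=0.0005 \
   odom_tf_angular_variance:=0.001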
At the beginning, the camera is looking straight up toward the ceiling, where there are not a lot of discriminative features, so Tango drifts a lot more. Beware that Tango drifts more when the camera is looking at textureless areas (or ones with very repetitive textures). These are the point clouds created with Tango odometry only; see how there are multiple layers on the ceiling.
It is possible to refine the odometry poses with the laser scans. However, in this database, when rotating there is not always a lot of overlap with the previous scan. Increase Rtabmap/DetectionRate to record more frames/scans (if you set it to 0, it will record at 5 Hz, matching the maximum Tango cloud rate).
Also, there still seems to be poor synchronization between the Tango and the lidar. I'm not sure if it is a timestamp problem (stamps not matching between the devices) or some TF delay somewhere. For the static transform publisher, I'm not sure publishing at 200 Hz instead of 10 Hz is the solution, as static transforms don't change over time. You may have been lucky that time and the stamps between the Tango and the host computer were more synchronized.
Here is a comparison between Tango odometry alone and the poses refined using scan matching:
In the second image the scans were downsampled for the matching, but we can see that while the scans match, the Tango clouds don't, which points to a synchronization/TF problem between the lidar and the Tango.
I don't think the Arduino IMU would be more accurate than what the Tango is using, though you may get better synchronization with the lidar.
I tried to modify the launch file of the Velodyne example but got a lot of errors. I don't understand why... please HELP!!!
I'm able to get sensor_msgs/Imu messages from my MPU6050 with this: roslaunch mpu6050_serial_to_imu demo.launch
The data looks OK in RViz.
I launch my lidar: roslaunch quanergy_client_ros client2.launch
Static transform between the IMU and the lidar device: rosrun tf static_transform_publisher 0 0 0 0 0 0 imu_link Lidar 100
Then when I launch rtabmap, I get this message:
roslaunch rtabmap_ros rtabmap_LidarImu.launch
... logging to /home/sdeboffle/.ros/log/fdfbd2bc-6120-11e9-95cf-9829a6388ce0/roslaunch-sdeboffle-Predator-G3-572-30044.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
You need the latest rtabmap version built from source to make it work (as it is a recent addition). The rtabmap binaries should be uninstalled, libpointmatcher should be installed for this lidar example, then rebuild rtabmap (make sure you see "-- *With libpointmatcher = YES (License: BSD)" during the cmake step of rtabmap).
See https://github.com/introlab/rtabmap_ros#build-from-source for details.
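A condensed sketch of those steps under ROS Kinetic (the repository URLs are the usual upstream ones; the link above has the authoritative instructions):

# remove the rtabmap binaries
sudo apt-get remove ros-kinetic-rtabmap ros-kinetic-rtabmap-ros
# build libpointmatcher and its dependency libnabo from source
git clone https://github.com/ethz-asl/libnabo.git
cd libnabo && mkdir build && cd build && cmake .. && make && sudo make install && cd ../..
git clone https://github.com/ethz-asl/libpointmatcher.git
cd libpointmatcher && mkdir build && cd build && cmake .. && make && sudo make install && cd ../..
# rebuild rtabmap; the cmake output should show "With libpointmatcher = YES"
git clone https://github.com/introlab/rtabmap.git
cd rtabmap/build && cmake .. && make && sudo make install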
Does my imu-->Lidar static TF transform look logical to you? Is it better to use imu_base or imu_link?
Is there a way to add a Kinect to that setup? Is there an advantage to doing so, for visual loop closures?
After a couple of minutes, rtabmap starts to become very slow. How can I make it run better for long scans? (I tried increasing Icp/VoxelSize but saw no improvement.)
I tried to modify the "Kinect + Odometry + 2D laser" (setup your robot) tutorial for my setup, but I get this message:
[ WARN] [1555599958.490518678]: /rtabmap/rgbd_sync: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set.
/rtabmap/rgbd_sync subscribed to (approx sync):
In that setup you don't seem to be using the IMU (the use_imu argument is false); I recommend using one if available. For the IMU->lidar transform, the orientation should be accurate: for example, make sure the IMU frame orientation matches the lidar frame.
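For illustration, a hypothetical case: if the IMU's x-axis pointed along the lidar's y-axis, the static transform would need a 90-degree yaw instead of the identity:

# x y z yaw pitch roll parent child period_ms (hypothetical 90-degree yaw correction)
rosrun tf static_transform_publisher 0 0 0 1.5707963 0 0 imu_link Lidar 100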
The odometry seems to drift quite a lot; can you share a rosbag (lidar, TF, IMU)? It would be easier to debug.
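Something like this could record it (the lidar and IMU topic names are guesses based on your setup, adjust to match):

# record TF plus the raw sensor topics for offline debugging
rosbag record /tf /tf_static /imu/data /Lidar/points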
You can use a Kinect at the same time; the rgbd_sync approach is indeed the way to go. The warning here means that rgbd_sync didn't receive any Kinect data. Verify that the Kinect topics are published:
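For example, assuming the default openni/freenect topic names:

# each command should report a steady publishing rate
rostopic hz /camera/rgb/image_rect_color
rostopic hz /camera/depth_registered/image_raw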
I think I had some Kinect driver issues. I reinstalled them, and now all topics are published.
Now there is another error:
[ INFO] [1555687462.966648576]: Odom: ratio=0.245394, std dev=0.001585m|0.001585rad, update time=1.454655s
[ INFO] [1555687464.427559120]: Odom: ratio=0.240180, std dev=0.001576m|0.001576rad, update time=1.445957s
[ INFO] [1555687465.925904932]: Odom: ratio=0.242494, std dev=0.001599m|0.001599rad, update time=1.482368s
[ WARN] [1555687466.392625679]: /rtabmap/rtabmapviz: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. If topics are coming from different computers, make sure the clocks of the computers are synchronized ("ntpdate"). Parameter "approx_sync" is false, which means that input topics should have all the exact timestamp for the callback to be called.
/rtabmap/rtabmapviz subscribed to (exact sync):
[ INFO] [1555687467.388982669]: Odom: ratio=0.243336, std dev=0.001566m|0.001566rad, update time=1.442059s
[ WARN] [1555687467.748107115]: /rtabmap/rtabmap: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. If topics are coming from different computers, make sure the clocks of the computers are synchronized ("ntpdate"). If topics are not published at the same rate, you could increase "queue_size" parameter (current=100).
/rtabmap/rtabmap subscribed to (approx sync):
I think I made a mistake in my launch file, but I don't understand where.
If you have any ideas...
Looking at the point cloud generated from the laser scans, there are strange overlapping walls (see the section inside the blue rectangle):
I looked in the database and all the links are pretty good. The problem here is the glass wall in the room on the right. Here is an example of a single scan and the corresponding RGB view: see how the glass wall visible in the RGB image causes reflections in the scan (doubling the room behind the glass wall), shown in the blue rectangle in the top-right view of the scans (the green triangle is the camera frustum):
This kind of situation (e.g., glass, windows, reflective surfaces) causes the mirrored-room problem seen in the first image of this post. Ideally, it would be nice to select those points behind the wall in DatabaseViewer and delete them, so they are not added to the global map afterwards. However, that feature is only implemented for depth images (like removing mirrors in this tutorial)...
Another problem in the database: the TF between the camera and the lidar seems a little off in yaw, so the walls in the RGB-D image don't match the walls in the scan:
EDIT: For your question:
I think it happens when I pass through a door; the visual odometry breaks.
You may try going through the doors backward, so that the scans can still be registered with the previous ones.
You answered my question with the "mirror effect". I thought that the error in the point cloud was due to visual odometry error, but your answer explains my whole problem. I had tried to reject all "space proximity" loop closures with the database editor and checked all the remaining visual loop closures, but nothing changed. That's why. Thanks.
For the TF between the lidar and the camera, I think the Kinect moved slightly at some point between my tests. I have to attach it more rigidly to the lidar, but I wanted to get good results before doing that.
Even with this yaw error, the Kinect cloud should be good, no? With a little rotation in CloudCompare, it should line up with the lidar cloud.
Many thanks to you. I'm clearly not an expert in the ROS environment or SLAM technology, and my coding knowledge is limited (as is my English vocabulary), but you have made it possible for me to do great things. YOU ROCK!!!
I can't reopen the database on my desktop due to the version. Do you plan to build a v19 for Windows? Or do you have a trick to open a v19 database in the v18 standalone rtabmap? :)
The Kinect clouds will appear with a small angular error relative to the lidar point cloud. The position will be good, but not the rotation of each individual Kinect frame. Even if you say you are not an expert in ROS, you have done great so far.