I have a question about implementing RTAB-Map with two D435i cameras. Assume we have a robot with two RGB-D cameras and two IMUs (two Intel RealSense D435i), and we want to build a 3D map and localize the robot using RTAB-Map. My question is: what is the best and most feasible way to do that?
One solution could be to build a map from each set of sensors and then fuse the results (I am not sure whether this is feasible).
Another possible solution is to perform SLAM using all sensors (RGB-D and IMU) together. In that case, since the two IMUs have different frames, how can we configure RTAB-Map and fuse the IMU data with its output?
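For what it's worth, rtabmap_ros documents support for feeding multiple RGB-D cameras into a single RTAB-Map instance (`subscribe_rgbd` with the `rgbd_cameras` parameter), so you may not need to build and fuse two separate maps at all. A rough ROS 1 launch sketch, assuming hypothetical camera namespaces `/cam0` and `/cam1` and that the TF between the two camera frames is calibrated and published:

```xml
<launch>
  <!-- Synchronize RGB + depth + camera_info from each camera into a
       single rgbd_image message (one rgbd_sync per camera). -->
  <node pkg="nodelet" type="nodelet" name="rgbd_sync0"
        args="standalone rtabmap_ros/rgbd_sync">
    <remap from="rgb/image"       to="/cam0/color/image_raw"/>
    <remap from="depth/image"     to="/cam0/aligned_depth_to_color/image_raw"/>
    <remap from="rgb/camera_info" to="/cam0/color/camera_info"/>
    <remap from="rgbd_image"      to="/rgbd_image0"/>
  </node>
  <node pkg="nodelet" type="nodelet" name="rgbd_sync1"
        args="standalone rtabmap_ros/rgbd_sync">
    <remap from="rgb/image"       to="/cam1/color/image_raw"/>
    <remap from="depth/image"     to="/cam1/aligned_depth_to_color/image_raw"/>
    <remap from="rgb/camera_info" to="/cam1/color/camera_info"/>
    <remap from="rgbd_image"      to="/rgbd_image1"/>
  </node>

  <!-- Single rtabmap node subscribing to both synchronized streams. -->
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
    <param name="subscribe_rgbd" value="true"/>
    <param name="rgbd_cameras"   value="2"/>
    <remap from="rgbd_image0" to="/rgbd_image0"/>
    <remap from="rgbd_image1" to="/rgbd_image1"/>
  </node>
</launch>
```

The topic names are assumptions; check the actual topics published by your realsense2_camera driver before wiring this up.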
Are the cameras the only sensors on the robot? Another option is to use the laser scan coming from a LIDAR, which is the most common way of building a 2D map.
I haven't looked into RTAB-Map's mapping capabilities in detail, but I assume you can build a map with a single sensor, so fusing the results of two cameras seems like overkill for map creation.
As for the second question, I think you are looking for something like a Kalman filter that fuses multiple inputs to improve localization. For this, I recommend taking this course:
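To illustrate the idea: the snippet below is a minimal 1-D Kalman update fusing heading readings from two IMUs into one estimate. It is a sketch only; in practice a package like robot_localization's EKF does this for the full 3D state, and the variances here are made-up values.

```python
# Minimal 1-D Kalman filter fusing heading estimates from two IMUs.
# Sketch only: sensor variances below are hypothetical, and each IMU
# reading is assumed to be already transformed into a common frame.

def kalman_update(x, p, z, r):
    """Fuse measurement z (variance r) into estimate x (variance p)."""
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # corrected estimate
    p = (1 - k) * p          # reduced uncertainty
    return x, p

# Prior heading estimate (radians) with high uncertainty.
x, p = 0.0, 1.0

# Sequentially fuse one heading reading from each IMU.
x, p = kalman_update(x, p, 0.10, 0.05)  # IMU 1
x, p = kalman_update(x, p, 0.14, 0.05)  # IMU 2

print(round(x, 3), round(p, 3))  # fused estimate and its variance
```

Note that the fused variance ends up smaller than either sensor's alone, which is the whole point of fusing the two IMUs rather than picking one.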