ROS Navigation: Sensor Fusion

Hello ROS community,

I'm looking for resources that cover the material around sensor fusion in ROS well, in particular the robot_localization package. I'm aware of the series of notebooks offered here, and I did find them helpful and relevant. However, I'd like to better measure the impact of the sensor fusion.

Currently, I'm given a file where I can specify a number of sensor modalities and, for each one, whether to include its velocity or its position. From there I find it challenging to decide what exactly I should include and, moreover, once I do include a parameter, how I can see its effect on the system.

The one test I've performed so far is the one covered in the lecture: visualizing the odometry in RViz. I ran a case with injected noise alongside a base case without noise, and for both I compared runs with and without the EKF filtering. It would be great to be able to say a bit more than this about how the sensor fusion really impacts system performance, however. I appreciate any suggestions.

The file I'm referring to is the robot_localization EKF parameter file (e.g. ekf.yaml).
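For anyone unfamiliar with the format: each sensor input in robot_localization takes a 15-element boolean vector selecting which state variables the EKF fuses from that source. Here is a sketch of that vector in Python for readability; the values are illustrative, not my actual settings:

```python
# The 15 state variables a robot_localization sensor config can fuse, in
# the order used by the odomN_config/imuN_config parameters:
STATE_VARS = ["x", "y", "z", "roll", "pitch", "yaw",
              "vx", "vy", "vz", "vroll", "vpitch", "vyaw",
              "ax", "ay", "az"]

# Illustrative choice for a wheel-odometry input: fuse the planar
# velocities and yaw rate, ignore absolute pose and accelerations.
odom0_config = [False, False, False,   # x, y, z
                False, False, False,   # roll, pitch, yaw
                True,  True,  False,   # vx, vy, vz
                False, False, True,    # vroll, vpitch, vyaw
                False, False, False]   # ax, ay, az

for name, used in zip(STATE_VARS, odom0_config):
    print(f"{name:6s} fused: {used}")
```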

How would you better demonstrate the impacts of these parameters on overall performance?

Regards,
Mike

Have you tried our sensor fusion course?

Yes, I have, but as far as I recall that course gives no clear indication of how to measure the effect of the sensor fusion on the navigation process. It does show some ways to visualize in RViz whether or not the fusion is working. I was wondering if there is a better way to confirm that the sensor data is in fact being fused and fed into the appropriate channels for locomotion. Would it be better to look at the output of certain topics? Beyond the RViz odometry view and the topic check sketched below, I'm not sure how to verify that the data is actually being fused.
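The only topic-level check I know so far is subscribing to the filter's output and watching the pose and covariance it reports. A minimal sketch, assuming the default odometry/filtered output topic of robot_localization's EKF node:

```python
#!/usr/bin/env python
# Probe the EKF output: print each fused pose and its x/y variances.
import rospy
from nav_msgs.msg import Odometry

def callback(msg):
    p = msg.pose.pose.position
    cov = msg.pose.covariance  # 6x6 row-major pose covariance
    rospy.loginfo("x=%.3f y=%.3f  var_x=%.4f var_y=%.4f",
                  p.x, p.y, cov[0], cov[7])

rospy.init_node("fusion_probe")
rospy.Subscriber("/odometry/filtered", Odometry, callback)
rospy.spin()
```

But I'm not sure what in this output would tell me the fusion is doing its job, beyond the pose updating at all.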

I’ll refer you to an expert on the course, @albertoezquerro.

Hello @mykullrizzo,

As you mentioned, in our courses we cover how to visualize the impact of the Kalman filter using RViz. In particular, in the Kalman Filters course, Unit 4, section 4.6, the EKF localization demo shows how the filter's effect can be visualized in RViz. A further step towards measuring the impact of the filter would be to conduct a quantitative comparative study of sensor fusion performance using different sensors. Our courses do not conduct that type of quantitative analysis.

However, here are some tasks to consider when conducting such a study:

  • First, select the sensors that you want to compare. Investigate which sensors are good candidates to test in terms of your budget, required accuracy, etc.

  • You must also determine the specific scenario or environment where the sensors will be tested. This could involve creating a Gazebo simulation or conducting experiments with a real robot; keep in mind that results in simulation are not 100% transferable to the real world.

  • Then define the performance metrics that will be used to measure the performance of the different sensor configurations, e.g. the RMSE of the estimated position against ground truth (see the first sketch after this list).

  • Next, collect data from the sensors in the defined scenario, for example by recording rosbags of each run. Collect enough data to provide sufficient information for the analysis.

  • Finally, analyze the collected data using statistical methods, and compare the different sensor configurations in terms of the metrics you defined (see the second sketch after this list).
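As an example of the metrics step, here is a minimal sketch that computes the position RMSE of the filtered estimate against ground truth from a recorded bag. The topic names are assumptions for illustration: /odometry/filtered is the default EKF output of robot_localization, and a ground-truth Odometry topic such as /ground_truth/state could come from Gazebo's p3d plugin.

```python
# Sketch: position RMSE of the EKF output against ground truth, read from
# a rosbag. Topic names are assumptions; adapt them to your setup.
import numpy as np
import rosbag

def xy_track(bag, topic):
    """Return (timestamps, x, y) arrays for a nav_msgs/Odometry topic."""
    t, x, y = [], [], []
    for _, msg, stamp in bag.read_messages(topics=[topic]):
        t.append(stamp.to_sec())
        x.append(msg.pose.pose.position.x)
        y.append(msg.pose.pose.position.y)
    return np.array(t), np.array(x), np.array(y)

with rosbag.Bag("run.bag") as bag:
    t_est, x_est, y_est = xy_track(bag, "/odometry/filtered")
    t_gt, x_gt, y_gt = xy_track(bag, "/ground_truth/state")

# Interpolate ground truth at the estimate timestamps, then compute RMSE.
x_ref = np.interp(t_est, t_gt, x_gt)
y_ref = np.interp(t_est, t_gt, y_gt)
rmse = np.sqrt(np.mean((x_est - x_ref) ** 2 + (y_est - y_ref) ** 2))
print("position RMSE: %.3f m" % rmse)
```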
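And for the analysis step, once you have one such number per run, the comparison can be as simple as a paired test over the per-run errors. The RMSE values below are invented purely for illustration:

```python
# Sketch: paired t-test over per-run position RMSEs of two sensor
# configurations. The numbers are invented for illustration only.
import numpy as np
from scipy import stats

rmse_odom_only = np.array([0.42, 0.38, 0.45, 0.40, 0.44])  # metres
rmse_odom_imu = np.array([0.31, 0.29, 0.35, 0.30, 0.33])   # metres

t_stat, p_value = stats.ttest_rel(rmse_odom_only, rmse_odom_imu)
print("mean RMSE: %.3f m vs %.3f m"
      % (rmse_odom_only.mean(), rmse_odom_imu.mean()))
print("paired t-test: t=%.2f, p=%.4f" % (t_stat, p_value))
```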

Based on the results of the analysis, you will be able to draw conclusions about which sensor(s) produce the most accurate sensor fusion performance in the given scenario.

Hope this helps,

Roberto

