Reactive Navigation in Tree Orchards Using a Stereo Camera

Hi! I’ve gone through the ROS Navigation in 5 Days lectures, but I’ve seen that the course is mostly focused on navigating with a map. I’m trying to achieve reactive navigation instead, where the robot doesn’t have a map and drives using the data coming in from the sensors in real time.

My main goal is to have a robot with a stereo camera navigate down a tree orchard row. The robot should be able to navigate in different orchards (e.g. for apples, raspberries, etc.). To do that, I want to extract two lines that represent the tree rows from the 3D pointcloud provided by the RGBD camera, so I can then find the centreline of the path. I’ve seen this could be done using RANSAC to fit two lines to the points on the sides, but I’m not quite sure how to achieve that without regions of interest.

Any guidance as to how to approach the problem? Should I try and create clusters/segment the pointcloud and then fit the line to those clusters using RANSAC?
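
For context, here’s a rough sketch of the kind of RANSAC line fit I had in mind, after flattening the cloud onto the ground plane. Splitting the points by the sign of y, the thresholds and the placeholder data are all just assumptions on my part, not working code for my setup:

```python
import numpy as np

def ransac_line(points, iters=200, inlier_dist=0.15):
    """Fit a 2D line (point + unit direction) to Nx2 points with a basic RANSAC loop."""
    best_inliers = None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-6:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line
        diff = points - p
        dist = np.abs(diff[:, 0] * d[1] - diff[:, 1] * d[0])
        inliers = points[dist < inlier_dist]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refine with a least-squares fit on the inliers
    centroid = best_inliers.mean(axis=0)
    _, _, vt = np.linalg.svd(best_inliers - centroid)
    return centroid, vt[0]  # point on the line, unit direction

# Flatten the cloud to the ground plane (x forward, y left) and split by side.
# 'cloud_xy' would come from the RGBD pointcloud after removing ground points;
# random data here is only a placeholder.
cloud_xy = np.random.uniform([-0.5, -3.0], [8.0, 3.0], size=(500, 2))
left = cloud_xy[cloud_xy[:, 1] > 0.0]
right = cloud_xy[cloud_xy[:, 1] < 0.0]

left_pt, left_dir = ransac_line(left)
right_pt, right_dir = ransac_line(right)
# The centreline would then be some average of the two fitted lines.
```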

Hi! Welcome to the RobotIgniteAcademy forum!

There are loads of ways you could approach this problem, and I’m sure each user here will give you a different solution.

From my point of view, you could do the following:

  1. First: navigation without a map can be done with local costmaps, as it’s done in the navigation course. So try that first.

  2. If the environment always has the same structure, you could use a discrete map that contains a sample orchard and resets each time you change orchards.

  3. Create a pattern recognition system, with AI or a classical algorithm, that detects the tree rows and the distance between them, and positions virtual obstacles based on that for the navigation to use (see the sketch after this list).
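
As a rough illustration of point 3 (just a sketch: the topic name, the frame and the detect_trees() helper are placeholders, not a tested implementation), a small node could publish the detected trunk positions as a pointcloud that the local costmap’s obstacle layer is configured to subscribe to in its observation_sources:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def detect_trees():
    # Placeholder: return (x, y, z) positions of detected trunks in the robot frame.
    return [(2.0, 1.5, 0.0), (2.0, -1.5, 0.0), (4.0, 1.6, 0.0), (4.0, -1.4, 0.0)]

def main():
    rospy.init_node('virtual_orchard_obstacles')
    # The local costmap obstacle layer would need this topic listed as an observation source.
    pub = rospy.Publisher('virtual_obstacles', PointCloud2, queue_size=1)
    rate = rospy.Rate(5)
    while not rospy.is_shutdown():
        header = Header(stamp=rospy.Time.now(), frame_id='base_link')
        cloud = pc2.create_cloud_xyz32(header, detect_trees())
        pub.publish(cloud)
        rate.sleep()

if __name__ == '__main__':
    main()
```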

I would recommend creating a ROSject in ROSDevelopmentStudio for this project, so that you can share it here and users can give you more insights into your solution.

Thanks for the suggestions!

I’m currently thinking of converting the pointcloud into a laser scan, then fitting a line to each side of the corridor/row and finding the centreline of the path. I’m trying to figure out how to segment the scan so that I only keep two lines, one for the left side and one for the right, and no more. (I think it’s kind of similar to a wall follower algorithm, except that I want to stay in the centre between two walls.)
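
To make the idea concrete, this is roughly what I’m picturing once the scan is available (e.g. from pointcloud_to_laserscan). A plain least-squares fit stands in for RANSAC here, and splitting left/right by the sign of y is just a guess for my setup:

```python
import numpy as np
from sensor_msgs.msg import LaserScan

def scan_to_points(scan):
    """Convert a LaserScan to Nx2 Cartesian points in the scan frame (x forward, y left)."""
    angles = scan.angle_min + np.arange(len(scan.ranges)) * scan.angle_increment
    ranges = np.asarray(scan.ranges)
    valid = np.isfinite(ranges) & (ranges > scan.range_min) & (ranges < scan.range_max)
    return np.column_stack((ranges[valid] * np.cos(angles[valid]),
                            ranges[valid] * np.sin(angles[valid])))

def fit_side(points):
    """Least-squares fit y = a*x + b to one side of the row (RANSAC could replace this)."""
    a, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return a, b

def centreline(scan):
    pts = scan_to_points(scan)
    left = pts[pts[:, 1] > 0.0]    # assume the left row has positive y
    right = pts[pts[:, 1] < 0.0]
    a_l, b_l = fit_side(left)
    a_r, b_r = fit_side(right)
    # Average the two lines to get the centreline, then express it as
    # a heading error and a lateral offset relative to the robot.
    a_c, b_c = (a_l + a_r) / 2.0, (b_l + b_r) / 2.0
    heading_error = np.arctan(a_c)   # angle of the centreline w.r.t. the robot's x axis
    lateral_offset = b_c             # signed distance to the centreline at x = 0
    return heading_error, lateral_offset
```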

If I used local costmaps, is there a way to tell the robot to stay in the centre of the path?

Sure there is.

One idea is to get the relative distance between the two rows and set the middle point between them as the position target.
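
As a minimal sketch of that (the distances, lookahead, topic name and frame are placeholder assumptions, not a tested solution), you could publish a goal a couple of metres ahead, shifted so it sits halfway between the two rows:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def midpoint_goal(left_dist, right_dist, lookahead=2.0, frame='base_link'):
    """Build a goal ahead of the robot, centred between the two rows.

    left_dist / right_dist are the lateral distances (positive numbers) to the
    left and right rows, e.g. taken from your line fits; both are placeholders.
    """
    goal = PoseStamped()
    goal.header.stamp = rospy.Time.now()
    goal.header.frame_id = frame
    goal.pose.position.x = lookahead
    # Shift sideways by half the difference so the goal lands on the centreline.
    goal.pose.position.y = (left_dist - right_dist) / 2.0
    goal.pose.orientation.w = 1.0
    return goal

if __name__ == '__main__':
    rospy.init_node('row_midpoint_goal')
    pub = rospy.Publisher('move_base_simple/goal', PoseStamped, queue_size=1)
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        # The distances would come from the fitted left/right lines; fixed values as placeholders.
        pub.publish(midpoint_goal(left_dist=1.5, right_dist=1.5))
        rate.sleep()
```

If the goal is expressed in the robot frame like this, move_base should transform it into the costmap’s global frame via tf, but you may find it cleaner to express it directly in odom.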