Controlling a real UR5 CB3 with ROS2

I’m trying to control a real UR5 CB3 using ROS2 to perform a vision-based pick-and-place movement. I’m using a depth camera (Intel D435) with the YOLO algorithm to extract the X, Y, Z coordinates of the object from an image, and use that to plan and move the robot. I took the ROS manipulation class and noticed that they don’t use the ROS2 driver for the UR, but only the MoveIt2 package. What’s the difference between the two methods? What are the advantages and disadvantages of each? Also, are there any resources or classes that teach how to use the ROS2 driver for UR? The documentation for the driver only shows how to use the MoveIt GUI to plan and execute trajectories, not how to do it from scripts. Thank you
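For the X, Y, Z extraction step, the math behind going from a YOLO pixel detection plus a depth reading to a 3D point is just the pinhole camera model (this is what the RealSense SDK’s deprojection does internally). A minimal sketch - the intrinsics values below are placeholders; the real ones come from the camera’s CameraInfo topic or the SDK:

```python
# Sketch (not the RealSense SDK): convert a detected pixel plus its depth
# value into a 3D point in the camera frame, using pinhole intrinsics
# (fx, fy = focal lengths in pixels; cx, cy = principal point).

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pixel (u, v) + depth in metres -> (X, Y, Z) in the camera frame."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: object centre detected at pixel (320, 240) at 0.5 m depth, with
# the principal point exactly at that pixel -> point on the optical axis.
point = deproject(320, 240, 0.5, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```

Note the resulting point is in the *camera* frame; you still need a TF transform (camera to robot base) before handing the pose to the planner.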

The UR driver is really just that - the driver that interfaces with the actual hardware, enabling the UR controller to take ROS commands and feed information back to ROS.

The official driver already provides a MoveIt configuration package, but only for the bare manipulators.
Any change to the robot model, such as adding an end-effector, needs to be reflected in MoveIt as well - hence you need a new MoveIt configuration.
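For example, adding a gripper usually means a small xacro wrapper that attaches it to the UR flange frame, which you then feed to the Setup Assistant. A rough sketch - `my_gripper` and its link are placeholders for your actual end-effector:

```xml
<?xml version="1.0"?>
<!-- Illustrative sketch: attach a gripper to the UR5 flange with a fixed joint.
     "my_gripper_base" is a placeholder for your real end-effector model. -->
<robot xmlns:xacro="http://wiki.ros.org/xacro" name="ur5_with_gripper">
  <xacro:include filename="$(find ur_description)/urdf/ur_macro.xacro"/>
  <!-- ... instantiate the UR5 macro here ... -->
  <link name="my_gripper_base"/>
  <joint name="gripper_mount" type="fixed">
    <parent link="tool0"/>  <!-- tool0 is the UR flange/tool frame -->
    <child link="my_gripper_base"/>
  </joint>
</robot>
```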

Since MoveIt is taking care of the motion planning, head over to the MoveIt tutorials (*Tutorials — MoveIt Documentation: Rolling documentation*) on their website. There are a lot of tutorials on how to use MoveIt.

Note: I haven’t taken the manipulation class yet, but I guess that if you don’t use actual hardware, you can just feed any URDF to MoveIt’s Setup Assistant to create a configuration for motion planning, without requiring any software from the robot manufacturer.

Thank you @KlausLex for the detailed response. I have a couple more follow-up questions, if you don’t mind: am I able to fully control the entire robot, including the end-effectors, using the MoveIt2 package alone, without the UR driver? If so, I’m slightly confused about the purpose of the ROS2 driver for Universal Robots, since it just seems like a less flexible MoveIt2 configuration. Is it there to allow more specific control of the robot, such as using the force sensor, speed control, I/Os etc.? And is the motion planning in the ROS2 UR driver done with the MoveIt package by default (in a script, not in the GUI)? Sorry if I’m not understanding your response too well - this is my second week working with the robot.

MoveIt is the motion planning framework. It is responsible for computing trajectories on the software side.
This works by passing it the URDF - the Unified Robot Description Format - which does not take actual hardware into consideration.
So MoveIt works for any robot described in virtual space, but cannot control your real robot.

To control the physical robot, there needs to be an interface that tells the robot how to interpret commands sent by MoveIt, as well as allows ROS to get feedback from the control box.

Take the joint_state_publisher for example.
You can use this node to individually actuate each joint of a virtual robot, but this does not affect the real robot: you publish the joint states manually, basically simulating them instead of reading the actual joint states from the physical robot.
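Either way, what travels over the `/joint_states` topic is the same `sensor_msgs/msg/JointState` message; the only difference is whether the numbers are simulated or read from the arm. Roughly what an echo looks like for a UR5 (position values here are made up):

```yaml
# sensor_msgs/msg/JointState as seen on /joint_states (values illustrative)
header:
  stamp: {sec: 0, nanosec: 0}
name: [shoulder_pan_joint, shoulder_lift_joint, elbow_joint,
       wrist_1_joint, wrist_2_joint, wrist_3_joint]
position: [0.0, -1.57, 1.57, 0.0, 0.0, 0.0]
velocity: []
effort: []
```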

Using the UR driver, ROS is able to read the real joint states of the physical robot from the UR control box.
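Bringing that up is just a launch call; roughly like this (the robot IP is a placeholder, and argument names should be checked against the README of the driver version you have installed):

```shell
# Start the driver against the real arm (IP is a placeholder):
ros2 launch ur_robot_driver ur_control.launch.py ur_type:=ur5 robot_ip:=192.168.1.102

# In a second terminal, start MoveIt for the same arm:
ros2 launch ur_moveit_config ur_moveit.launch.py ur_type:=ur5
```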

To recap:

  1. The driver sits in between the physical robot and the virtual environment.
    MoveIt does not know how to interact with hardware.

  2. Yes, as you can read on their GitHub:

> MoveIt! support is built-in into this driver already. Watch MoveIt in action with the Universal Robots ROS2 driver:

The GUI you see in RViz is also just scripts in the background - their tutorials show how to implement those as well.
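As a taste of what "scripted" planning looks like, here is a rough sketch using the `moveit_py` Python bindings (API details are from memory - double-check against the MoveIt2 tutorials; `ur_manipulator` is the planning group name from the UR MoveIt config, and this assumes the driver and `ur_moveit_config` are already running):

```python
# Sketch only - requires a sourced ROS2 + MoveIt2 workspace; not standalone.
import rclpy
from moveit.planning import MoveItPy  # moveit_py bindings

rclpy.init()
ur = MoveItPy(node_name="moveit_py_demo")
arm = ur.get_planning_component("ur_manipulator")

arm.set_start_state_to_current_state()
# Goal given as a named configuration defined in the SRDF, e.g. "home":
arm.set_goal_state(configuration_name="home")

plan_result = arm.plan()
if plan_result:
    ur.execute(plan_result.trajectory, controllers=[])
```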
