Hi! So happy you like the course ;).
You asked a lot of questions, so let me answer them one by one:
- When the Mira robot is static, why can I still read the velocity from JointState?
I suppose you mean that although there is no control over the joints, there is still joint state info? That's because when we spawn the URDF, we also launch the joint_state_publisher. RViz needs it to know where the non-fixed joints are. The velocities in the JointState message normally aren't important, especially when the control is done by position.
- I don’t understand the code from lines 33-42. Does it have the same function as the callback self.mira_joints_callback? If so, why do we have to repeat the process to create a dictionary that contains the joint names and positions?
Yeah, it's another way to get a topic's info. You can use callbacks, or you can query the topic each time you need the data. That way you only ask for info when the call is made, not every time the topic gets a new message.
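To make the difference concrete, here is a ROS-free sketch of the two access patterns. `FakeTopic` and its methods are made-up stand-ins, not rospy API; in real rospy code the push style is `rospy.Subscriber` with a callback, and the pull style is usually `rospy.wait_for_message`.

```python
class JointState:
    """Stand-in for sensor_msgs/JointState: parallel name/position lists."""
    def __init__(self, name, position):
        self.name = name          # list of joint names
        self.position = position  # list of joint positions (rad)

class FakeTopic:
    """Hypothetical topic that remembers its latest message."""
    def __init__(self):
        self._latest = None
        self._callbacks = []

    def subscribe(self, cb):      # push style: cb runs on EVERY message
        self._callbacks.append(cb)

    def wait_for_message(self):   # pull style: ask only when you need it
        return self._latest

    def publish(self, msg):
        self._latest = msg
        for cb in self._callbacks:
            cb(msg)

joints = {}

def mira_joints_callback(msg):
    # Push style: this dict is refreshed on every incoming message.
    joints.update(zip(msg.name, msg.position))

topic = FakeTopic()
topic.subscribe(mira_joints_callback)
topic.publish(JointState(["roll", "pitch", "yaw"], [0.1, 0.0, -0.2]))

# Pull style: build the same name->position dict, but only on demand.
msg = topic.wait_for_message()
polled = dict(zip(msg.name, msg.position))
```

Both end up with the same `{'roll': 0.1, 'pitch': 0.0, 'yaw': -0.2}` dictionary; the only difference is when the work happens.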
- Do these three functions: move_mira_roll_joint, move_mira_pitch_joint, move_mira_yaw_joint, need to be used in U1-4 or U1-5?
Exactly ;). To track something in 3D space you need to move the head around those three axes. Essentially, Mira's head behaves like a spherical joint.
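As a side note on the geometry: for a point target, yaw and pitch are enough to aim the head (roll just spins around the line of sight). A minimal sketch of that math, where `look_at` is a hypothetical helper, not part of the course code, assuming x forward, y left, z up in the head frame:

```python
import math

def look_at(x, y, z):
    """Yaw/pitch angles (rad) that point the head at (x, y, z).

    Hypothetical helper for illustration: x forward, y left, z up.
    Roll is unconstrained by a point target, so we leave it at 0.
    """
    yaw = math.atan2(y, x)                   # rotate left/right first
    pitch = math.atan2(z, math.hypot(x, y))  # then tilt up/down
    roll = 0.0
    return roll, pitch, yaw

# A target straight ahead and the same distance up -> 45 degrees of pitch:
roll, pitch, yaw = look_at(1.0, 0.0, 1.0)
```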
- Where does the program call the function mira_check_joint_value?
It doesn't; it's there just in case you need it for checking a joint value.
- Where does the program call the function mira_movement_look?
mira_movement_look is also there for testing and other applications; it's not needed here.
- I actually don’t understand the logic of the function blob_info_callback. Why can we keep the ball at the center of the screen just by adding the turning speed to the current position, without any processing to get the red ball’s position? Could someone explain this to me?
Good question here. As you might have seen in the code, you are NOT getting any IMAGE information. Isn’t that weird? That’s because the only info you are getting is the /mira/commands/velocity topic, which is published by the U1-2 exercise script. That script, in turn, gets the position of the ball through the /blobs topic. It's /blobs that carries the position of the ball in the image, and that position is transformed into Mira's cmd_vel commands, which you then use in this script for U1-3. It was made that way to show that in ROS, sensory information is transformed by many nodes at the same time, made by different teams… That’s normally how things go in big projects.
Hope that answered your questions. Feel free to ask more if something is unclear. Happy learning!