Unit 7: Training a Fetch Robot, Part 1 + Part 2. Lack of consistency between steps => request for corrections

Hi all, in my view the conceptual aspects are well explained.

However, a step-by-step procedure should also be explained.
Why?
Part 1 showed how to create a Robot Environment and bind it to the Gazebo environment in a local package.

Part 2, however, refers to the ready-made environments contained in the openai_ros package.
I think the goal is to do everything from scratch, but Unit 7 skips over some aspects because it assumes certain things are already known (message creation, recompilation, CMake changes, other configuration … of course these are thoroughly explained in other learning paths, but it forces the learner to dig deep elsewhere and makes the abstract learning very difficult).

If the student/learner wants to understand more deeply what they are doing in the step-by-step procedure, it is up to them to explore the other learning paths.

Here are some further reasons.
Parts 1-2 are a full round-up of different concepts, such as:

  • ROS core
  • messages + catkin/CMake
  • topics
  • publish-subscribe (ROS basics)
  • mechanical definition (URDF modelling + the collision engine for Gazebo)
  • decoupling Python 2.7 (only for the ROS components) from Python 3.5
  • RL concepts (available in OpenAI)

I think the target of this unit is to show the learner how to integrate any RL algorithm in a ROS environment, and all the aspects that have to be taken into account.

I don’t clearly see how I can apply this unit analogously to another ROS environment, with another robot and another task (except by delving deep into the other resources), because this unit is a bit “foggy”.

Think about the following issue: imagine I would like to apply a PPO reinforcement learning algorithm to another ROS model and environment with a defined target.

This is just my opinion on these last units.

So, in other words, can we have the corrections for Exercises 7.3, 7.4 and 7.5? :slight_smile:

In any case, congratulations to the whole team.
Sugreev

@bayodesegun Perhaps you could take a look at this :slight_smile:

I will try to answer this as well.

I agree that in Unit 7 Part 2 we should use the environments we just created, instead of the ones included in the openai_ros package.

If you are diving into RL with ROS, you really are required to know all the basics. If you want to apply OpenAI-ROS to your own work, you will have to do a lot of custom work, consisting of precisely those things. It is not easy and you will have to work hard, but I’m sure you can figure it out. This is how you really learn and understand the course. If you haven’t done so yet, I highly recommend doing the ‘ROS for beginners Path’. It will save you lots of time in the long run.

The second part of your sentence says it all: ‘all the aspects that have to be taken into account’. Unfortunately this includes all these steps.

To use OpenAI-ROS you always need 3 things:

  1. A training script (use baselines or your own scripts here)
  2. A task environment
  3. A robot environment

If your robot is already supported by OpenAI-ROS, then you can skip 3. and just use the included robot environment; if not, you have to create your own (a lot of work).
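
As a rough illustration (this is not course code), a minimal training script for point 1 might look something like the sketch below, assuming your task environment registers a Gym id when its module is imported. The package name, module name and the `MyRobotTask-v0` id are placeholders for whatever you created in Part 1/2:

```python
#!/usr/bin/env python
# Minimal sketch (not course code) of a training script for an
# openai_ros-style environment. Everything marked "placeholder" is an
# assumption about your own package, not a real course identifier.
import rospy
import gym

# Placeholder: importing your task environment module is what registers
# the Gym id used below (openai_ros task envs usually register on import).
import my_robot_training.my_task_env  # placeholder package/module

rospy.init_node('my_training_node', anonymous=True)
env = gym.make('MyRobotTask-v0')  # placeholder id registered above

for episode in range(10):
    obs = env.reset()
    done = False
    cumulated_reward = 0.0
    while not done:
        # Replace this random policy with your RL agent (Q-learning, PPO, ...).
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        cumulated_reward += reward
    rospy.loginfo("Episode %d finished, reward: %.2f", episode, cumulated_reward)

env.close()
```

The point is that the training script only talks to the Gym interface, so you can swap in baselines or any other RL library without touching the environments.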

Change the task environment so that the actions from the agent do what you want, and so that get_obs and get_reward calculate what you want.
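
To make that concrete: in openai_ros the task environment typically overrides hooks such as `_set_action`, `_get_obs`, `_compute_reward` and `_is_done` on top of the robot environment you built in Part 1. A hypothetical skeleton (all class, module and id names below are placeholders, not the course's code) could look roughly like this:

```python
# Sketch of a custom task environment following the openai_ros convention
# of overriding hooks on top of a robot environment. Names are placeholders.
import numpy
from gym import spaces
from gym.envs.registration import register

# Registering the task environment gives it a Gym id for gym.make().
# The entry_point string is a placeholder for your own module and class.
register(
    id='MyRobotTask-v0',
    entry_point='my_robot_training.my_task_env:MyRobotTaskEnv',
    max_episode_steps=1000,
)


class MyRobotTaskEnv(object):  # in practice: inherit from your robot env
    def __init__(self):
        # Define what the agent is allowed to do and what it observes.
        self.action_space = spaces.Discrete(4)
        self.observation_space = spaces.Box(
            low=-numpy.inf, high=numpy.inf, shape=(6,), dtype=numpy.float32)

    def _set_action(self, action):
        # Translate the agent's action into robot commands
        # (e.g. publish a joint or velocity command via the robot env).
        pass

    def _get_obs(self):
        # Build the observation vector from the sensor data
        # exposed by the robot environment.
        return numpy.zeros(6, dtype=numpy.float32)

    def _compute_reward(self, observations, done):
        # Reward progress towards your defined target.
        return 0.0

    def _is_done(self, observations):
        # Decide when the episode ends (goal reached, failure, timeout).
        return False
```

Adapting the unit to another robot and another task is essentially filling in these four methods for your own case.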

That is basically it.
