OpenAI with ROS: Chapter 8: Add instructions for working on a local PC

It might be a good idea to add some information on how to make TF work in a local Python 3 environment. I got it to work on my local PC by building the tf package from source, as explained in this issue. Unfortunately, although the DQL algorithm trains, the reward stays very negative and training ends after 4 tries. I think detailed instructions on how to make the tutorial work on a local PC, either here or in the lesson, would greatly improve my learning experience.

Hi @rick.a.staa,
Getting ROS to properly work in Python 3 is a pain. I recommend running two separate virtual environments, one with ROS and Python 2, the other with Python 3 and your OpenAI algorithms. You can connect the two using the pyro4 package.
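
For what it's worth, here is a minimal sketch of such a bridge (the class name CubeEnvServer, its step method, and the port are placeholders of mine, not course code):

# server.py - run inside the ROS + Python 2 environment
import Pyro4

@Pyro4.expose
class CubeEnvServer(object):
    # Hypothetical wrapper that forwards RL calls to the ROS environment.
    def step(self, action):
        # A real implementation would publish the action to ROS and return
        # the resulting (observation, reward, done) tuple.
        return [0.0], 0.0, False

daemon = Pyro4.Daemon(host="localhost", port=9090)
uri = daemon.register(CubeEnvServer(), objectId="cube_env")
print(uri)  # PYRO:cube_env@localhost:9090
daemon.requestLoop()

# client.py - run inside the Python 3 + OpenAI environment
import Pyro4

env = Pyro4.Proxy("PYRO:cube_env@localhost:9090")
obs, reward, done = env.step(0)  # remote call into the Python 2 side

This keeps every ROS import on the Python 2 side, so the Python 3 side never has to touch tf at all.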

Hi @simon.steinmann91,
Thanks a lot for your response! Sorry for being unclear in my initial question; it is really about how to get the DQL network of the openai:baselines package to work on a local computer. I am currently trying to do this using the following steps:

  1. I create a .catkin_ws_python3/openai_venv virtual environment.
  2. I then clone https://github.com/ros/geometry and https://github.com/ros/geometry2 into the .catkin_ws_python3 catkin workspace.
  3. I then install some prerequisites for using Python 3 with ROS:
sudo apt update
sudo apt install python3-catkin-pkg-modules python3-rospkg-modules python3-empy
  4. I then prepare the .catkin_ws_python3 workspace:
mkdir -p ~/.catkin_ws_python3/src; cd ~/.catkin_ws_python3
catkin_make
source devel/setup.bash
wstool init
wstool set -y src/geometry2 --git https://github.com/ros/geometry2 -v 0.6.5
wstool set -y src/geometry --git https://github.com/ros/geometry -v 1.12.0
wstool up
rosdep install --from-paths src --ignore-src -y -r
  5. Next, I compile the geometry and geometry2 packages for Python 3:
catkin_make --cmake-args \
            -DCMAKE_BUILD_TYPE=Release \
            -DPYTHON_EXECUTABLE=/usr/bin/python3 \
            -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m \
            -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so
  6. In one terminal I then load the cube environment using the ROS + Python 2 virtual environment:
source ~/catkin_ws/devel/setup.bash
roslaunch moving_cube_description main.launch
  7. In another terminal I then activate the openai_venv virtual environment and source the right setup.bash files according to the lesson (a quick import check that I use to verify the Python 3 tf build is shown after these commands):
source ~/.catkin_ws_python3/openai_venv/bin/activate
source ~/.catkin_ws_python3/devel/setup.bash
cd /home/user/catkin_ws
rm -rf build devel
catkin_make
source devel/setup.bash
roslaunch my_moving_cube_pkg my_start_training_deepq_version.launch
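
As a quick sanity check (my own addition, not from the lesson), I verify from the Python 3 side that tf resolves to the source build instead of the Python 2 packages under /opt/ros/kinetic:

# run with python3 inside the activated openai_venv, after sourcing
# ~/.catkin_ws_python3/devel/setup.bash
import tf
print(tf.__file__)  # should point into ~/.catkin_ws_python3/devel/...
from tf.transformations import euler_from_quaternion
print(euler_from_quaternion([0, 0, 0, 1]))  # identity quaternion -> zero angles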

Unfortunately, I get the following error after doing all this:

Logging to /tmp/openai-2020-02-07-13-56-37-575004
Traceback (most recent call last):
  File "/home/user/catkin_ws/src/my_moving_cube_pkg/scripts/my_start_deepq.py", line 8, in <module>
    import my_one_disk_walk
  File "/home/user/catkin_ws/src/my_moving_cube_pkg/scripts/my_one_disk_walk.py", line 10, in <module>
    from tf.transformations import euler_from_quaternion
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/tf/__init__.py", line 28, in <module>
    from tf2_ros import TransformException as Exception, ConnectivityException, LookupException, ExtrapolationException
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/tf2_ros/__init__.py", line 38, in <module>
    from tf2_py import *
  File "/opt/ros/kinetic/lib/python2.7/dist-packages/tf2_py/__init__.py", line 38, in <module>
    from ._tf2 import *
ImportError: dynamic module does not define module export function (PyInit__tf2)

This error appears both on my local PC and in the simulation provided alongside the lesson. I can solve this issue by editing the PYTHONPATH from:

/home/simulations/public_sim_ws/devel/lib/python2.7/dist-packages:/opt/ros/kinetic/lib/python2.7/dist-packages:/home/user/.catkin_ws_python3/devel/lib/python3/dist-packages:/home/simulations/public_sim_ws/src/all/ros_basics_examples/python_course_class

to:

/home/simulations/public_sim_ws/devel/lib/python2.7/dist-packages:/home/user/.catkin_ws_python3/devel/lib/python3/dist-packages:/opt/ros/kinetic/lib/python2.7/dist-packages:/home/simulations/public_sim_ws/src/all/ros_basics_examples/python_course_class
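
Alternatively, instead of editing PYTHONPATH by hand, the same reordering can be done at the top of the training script before tf is first imported (a workaround sketch of mine, not course code):

# place this before the first `import tf` in my_start_deepq.py
import sys

PY3_DEVEL = "/home/user/.catkin_ws_python3/devel/lib/python3/dist-packages"
if PY3_DEVEL in sys.path:
    sys.path.remove(PY3_DEVEL)
sys.path.insert(0, PY3_DEVEL)  # now ahead of /opt/ros/kinetic/lib/python2.7/dist-packages

from tf.transformations import euler_from_quaternion  # resolves to the Python 3 build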

Unfortunately, in both the online simulation and the local simulation, the results of the DQL are terrible and nothing like the GIF shown in the course:

[GIF: Results given by the DQL in the course]

[GIF: Results given by the DQL in the simulation]

Do you have any idea what is going wrong? Maybe the lesson is outdated and the max_timesteps: 1000 parameter needs to be increased? I used the exact code from the lesson.

I ran into the same issues trying to use reinforcement learning with deep learning and ROS. It actually turned into the topic of my master's thesis. The way I overcame this and many other issues is to completely separate the AI and simulation parts: have them run in two completely separate environments with a virtual connection between the two. If you want, I can help you set that up.


Dear @simon.steinmann91, thanks for your response and the tip to completely separate the AI and the simulation. Also, I really appreciate you offering to help me, as I'm currently also doing my master's thesis in RL. I added you on GitHub and starred some of your repositories! I'll let you know if I run into problems. I will keep this issue open so that the authors of the course are notified about the problems I experience with the lesson.

@simon.steinmann91 @duckfrost It appears that after increasing max_timesteps, the DQL does find solutions. I therefore guess that only the max_timesteps in the configuration file and the instructions for launching my_start_training_deepq_version.launch (to achieve the right PYTHONPATH) need to be modified, plus maybe a note about the changed openai:baselines save function, as mentioned in this topic.
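
For context, this is roughly where that knob sits in the pre-refactor baselines deepq API that the course scripts follow (a sketch with illustrative values; the env id and save path are my assumptions, and newer baselines releases renamed max_timesteps to total_timesteps and changed the save call):

import gym
from baselines import deepq

env = gym.make("MovingCubeOneDiskWalkEnv-v0")  # hypothetical id for the registered course env
act = deepq.learn(
    env,
    q_func=deepq.models.mlp([64]),
    max_timesteps=10000,  # raising this from 1000 is what made training find solutions for me
    exploration_fraction=0.1,
    exploration_final_eps=0.02,
)
act.save("/tmp/moving_cube_model.pkl")  # older save API; see the linked topic for newer versions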

Keep in mind that the success of a training session is hugely dependent on the hyperparameters you set. Two obvious ones are the steps per episode and the maximum number of episodes. If the chance of success through random exploration is not high enough, your training may never converge.
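
A toy calculation makes that last point concrete: if a purely random policy succeeds in a single episode with probability p, the chance of seeing at least one success in n episodes is 1 - (1 - p)**n:

def p_at_least_one_success(p, n):
    # probability of at least one successful episode in n tries
    return 1 - (1 - p) ** n

print(p_at_least_one_success(0.01, 100))   # ~0.63: a learning signal is likely
print(p_at_least_one_success(0.001, 100))  # ~0.10: training will probably never converge

If the second case describes your setup, you either need more (or longer) episodes or a reward shaped so that random exploration produces some signal.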
