Iri_wam does not start to learn

Hello everyone,

I have been trying to train an iri_wam robot (the link to repo: Bitbucket) in the Gazebo 11 environment with ROS Noetic. However, the robot does not start to learn.

I think all of the required files and dependent packages were successfully downloaded from the proper branches, for instance, the “noetic” branch for the “iri_wam” package and the “version2” branch for the “openai_ros” package. However, after I build those packages and roslaunch “start_training_v2.launch,” the code never escapes the following loop:

while state_result < self.DONE:
    rospy.loginfo("Doing Stuff while waiting for the Server to give a result....")
    state_result = self.client.get_state()
    rospy.loginfo("state_result: " + str(state_result))

that is implemented in the file. As I understand it, get_state() should return DONE once the IriWamExecTrajectory client’s goal has been processed by the paired action server. However, the function only ever returns ACTIVE after initially returning PENDING. Has anybody faced a similar situation and solved it?
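For context, get_state() on a SimpleActionClient returns a GoalStatus code, and the loop above treats anything below self.DONE as “still running” (assuming self.DONE follows the usual convention of 2, the first terminal GoalStatus value). A minimal sketch of that check, with the status codes mirrored as plain integers (in real code, use actionlib_msgs.msg.GoalStatus instead of these literals):

```python
# Status codes mirrored from actionlib_msgs/GoalStatus.
PENDING, ACTIVE = 0, 1
PREEMPTED, SUCCEEDED, ABORTED = 2, 3, 4
DONE = 2  # any status >= 2 is a terminal state

def goal_is_done(state_result):
    """Return True once get_state() reports a terminal status."""
    return state_result >= DONE

# A stuck run looks like: PENDING once, then ACTIVE forever.
print(goal_is_done(PENDING))    # False
print(goal_is_done(ACTIVE))     # False
print(goal_is_done(SUCCEEDED))  # True
```

So a goal that keeps reporting ACTIVE means the server accepted it but never transitioned it to any terminal state.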

After investigating the implementation of the ActionClient module, I suppose the server is implemented on the Gazebo side within the overall simulation environment. However, I cannot identify where the server is declared within the “openai_ros” or “iri_wam” packages, nor how it behaves.
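One way to hunt for the server from the outside: every actionlib server advertises a fixed set of topics under its namespace (goal, cancel, status, result, feedback), so listing the published topics and grouping them by those suffixes reveals which action servers actually exist. A sketch of that idea (in a live node you would feed it rospy.get_published_topics(); the topic list below is hypothetical, for illustration only):

```python
def find_action_namespaces(topics):
    """Given (topic, type) pairs, return namespaces that look like
    complete actionlib servers (status + result + feedback all present)."""
    suffixes = ("/status", "/result", "/feedback")
    seen = {}
    for name, _msg_type in topics:
        for suffix in suffixes:
            if name.endswith(suffix):
                ns = name[: -len(suffix)]
                seen.setdefault(ns, set()).add(suffix)
    return sorted(ns for ns, found in seen.items() if len(found) == 3)

# Hypothetical topic list; on a running system use rospy.get_published_topics().
topics = [
    ("/iri_wam/controller/follow_joint_trajectory/status", "actionlib_msgs/GoalStatusArray"),
    ("/iri_wam/controller/follow_joint_trajectory/result", "control_msgs/FollowJointTrajectoryActionResult"),
    ("/iri_wam/controller/follow_joint_trajectory/feedback", "control_msgs/FollowJointTrajectoryActionFeedback"),
    ("/iri_wam/joint_states", "sensor_msgs/JointState"),
]
print(find_action_namespaces(topics))
# -> ['/iri_wam/controller/follow_joint_trajectory']
```

If the namespace your client connects to is missing from that list while the simulation is running, the server side never came up at all.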

One concern is that the message “libcurl: (7) Failed to connect to port 443: No route to host” appears on my terminal when I launch the code. I asked about this issue on the Gazebo community forum, but I have not received any response yet. I don’t think this failure is critical to my situation, but it might have a bad influence; I am not sure.

I would really appreciate any advice.

Thank you,


So version2 hasn’t been tested and updated thoroughly for Noetic/Ubuntu 20.04, so a lot of the examples probably won’t work as expected.

I’ve tested on my local PC and at least got the IriWam moving, although the done conditions and rewards have to be debugged to make it learn correctly.

pip3 install gitpython

pip3 install gym

source /opt/ros/noetic/setup.bash

mkdir -p ~/catkin_ws/src

cd ~/catkin_ws/src

git clone Bitbucket

git clone -b version2 Bitbucket

cd ~/catkin_ws

Solve any dependencies you might have.
Change the config file iriwam_openai_ros_example/iriwam_openai_qlearn_params_v2.yaml to use your workspace path.
Launch the iriwam example:

roslaunch iriwam_openai_ros_example start_training_v2.launch

Hope this helps. By all means, if you get it to work and train correctly, send a pull request or tell us through the forum and we will update the git repo accordingly :wink:

Hello @duckfrost,

Although I investigated the code this past weekend, I could not figure out how to debug the done conditions and the rewards. Besides, designing the conditions and rewards seems irrelevant to this issue, since the error appears to be caused by a broken connection between the action client and server. How did you connect the done-condition and reward design to this problem?

Besides, is there any environment other than Ubuntu Focal/ROS Noetic on which the iri_wam example works properly? Before working on this simulation in the Focal/Noetic environment, I tried to implement it in the Bionic/Melodic environment. However, I faced a critical issue caused by the differences between Python 2 and 3, as discussed in this link (tf2_py with Python3), and gave up simulating it in that environment. If the simulation works properly elsewhere, I am willing to switch away from Focal/Noetic.

Thank you,


The IriWam might work in the default branch of openai_ros (version2 development was stopped in its tracks months ago and unfortunately hasn’t been maintained since).

git clone --depth 1

The only thing is that the default version doesn’t download and automatically launch the simulations.
Have a look at it and see if you can make it work that way.

Hi @duckfrost,

Thanks for your instruction. I tried to follow it, but that entailed further investigation into the packages and seemed complicated. So, I have been working on solving the issue with the version 2 repo.

I found that the problem occurs at

self.client.send_goal(my_goal, feedback_cb=self.feedback_callback)

on line 459 as well. I set done_cb so that I would be notified when the corresponding action server finished processing the goal; however, that callback was never called. I also replaced send_goal with send_goal_and_wait, which is likewise provided by the simple action client, as follows:

self.client.send_goal_and_wait(my_goal)

and launched the training. However, even though I waited for a couple of minutes, the training got stuck at the send_goal_and_wait line. So I suppose the issue is caused by the action server in Gazebo failing to process my_goal, rather than by self.client.get_state.
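If that diagnosis is right, the status code observed after a generous wait narrows down where the pipeline breaks. A small interpretation helper as a sketch, with GoalStatus values mirrored as plain integers and the messages being my own phrasing of the likely causes, not anything from the packages:

```python
# GoalStatus codes mirrored from actionlib_msgs/GoalStatus.
PENDING, ACTIVE, PREEMPTED, SUCCEEDED, ABORTED = 0, 1, 2, 3, 4
REJECTED, LOST = 5, 9

def diagnose_goal_state(state):
    """Map a goal status observed after a long wait to a likely cause."""
    if state == PENDING:
        return "server saw the goal but never started it (server-side queue stuck)"
    if state == ACTIVE:
        return "server accepted the goal but never finished it (execution stuck in Gazebo)"
    if state == SUCCEEDED:
        return "goal completed normally"
    if state in (PREEMPTED, ABORTED, REJECTED):
        return "server terminated the goal without success (check server logs)"
    if state == LOST:
        return "client never heard back; the action server is probably not running"
    return "unexpected status"

print(diagnose_goal_state(ACTIVE))
```

In the real client, calling self.client.wait_for_server(rospy.Duration(10)) before send_goal is also a quick check: it returns False if no server connects within the timeout, which would confirm the server side never came up.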

Do you have any idea which code within your packages may affect the action server’s behaviour in Gazebo? I could not find such lines on my own, so I would like to pick the package developers’ brains.

Thank you,