Openai_ros using torch GPU

Hello friends, I am learning from examples using openai_ros, and I need my tensors to run on CUDA. But when I call `.to('cuda')`, I cannot see the Python process loaded onto the GPU. It seems as if running the node with `rosrun pkg_name python_code` blocks the possibility of running on the GPU. Does anyone have some insights? Thank you.

Hi,

Check the configuration of your run, because it might be configured to NOT use CUDA.


Thanks for the quick response! I have now settled this, so I will answer my own question:

  1. `rosrun` does NOT block CUDA functionality when using PyTorch.
  2. The reason my code was not loaded onto the GPU was a fault in my own Python code: I forgot to send the whole set of tensors to CUDA, so it kept reporting errors that the tensors were not on the same device. If you run into the same issue, check your code first; `rosrun` is basically equivalent to `/bin/python python_ros_node`.
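The device-mismatch fix described in point 2 can be sketched as follows. This is a minimal PyTorch example, not code from the original post; the network name `policy_net` is a hypothetical stand-in for whatever model the ROS node uses. The key is that both the model's parameters and every input tensor must be moved to the same device:

```python
import torch
import torch.nn as nn

# Pick the GPU if CUDA is visible to this process, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical model: moving it sends all of its parameters to `device`.
policy_net = nn.Linear(4, 2).to(device)

# A tensor built from, e.g., a ROS message starts out on the CPU...
obs = torch.zeros(1, 4)
# ...so it must be moved too, or the forward pass raises a
# "Expected all tensors to be on the same device" RuntimeError.
obs = obs.to(device)

action = policy_net(obs)  # model and input now share one device
print(action.device.type)
```

Whether the node is launched via `rosrun` or plain `python` makes no difference here; only the tensor placement in the code matters.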

I just want to ask about another ROS/Gazebo openai_ros issue:

  • openai_ros needs to reset Gazebo, the controllers, and the other plugins a huge number of times. One issue I hit: if you turn the node's log level up to debug, then before or after a reset it prints `Connecting to <ros_master id>` and gets stuck there for a few seconds or even longer, and Gazebo also freezes. This slows down the training process. Could you please provide some help with this issue? Many thanks!

No idea why that issue is happening. It might be too much logging for the system.
But I would have to take a deeper look into that.
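If excessive debug output is indeed the bottleneck, one configuration tweak worth trying (a sketch, not a confirmed fix for this freeze; `training_node` is a hypothetical node name) is to initialize the node at INFO level rather than DEBUG, so per-reset debug messages are not emitted at all:

```python
import rospy

# Keep the node at INFO during long training runs; rospy.DEBUG makes
# every internal connection/reset message get formatted and published.
rospy.init_node('training_node', log_level=rospy.INFO)
```

The same effect can be had at runtime through each node's `set_logger_level` service, without restarting the node.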

Thank you. Probably the web-based ROS environment restricts the number of CPU cores; even when running locally rather than on the web, you can probably still hit this issue. Just try the pendulum example, it has this issue.