Using Ray RLlib with custom simulator


I'm very new to Ray RLlib and have an issue with a custom simulator my team made. We're trying to integrate this custom Python-based simulator into Ray RLlib for single-agent DQN training. However, I'm uncertain how to expose the simulator to RLlib as an environment.

According to the image below from the Ray documentation, it seems like I have two options:

  1. Standard environment: following the Carla simulator example, it seems like I can simply wrap my custom simulator with the gym.Env class API and register it as an environment using the ray.tune.registry.register_env function (see the sketch after this list).
  2. External environment: however, the image below and the RLlib documentation confuse me further, since they suggest that external simulators which run independently, outside the control of RLlib, should be connected via the ExternalEnv class.
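
For reference, here is roughly what I had in mind for option 1. It's only a minimal sketch: MySimulator, the my_simulator module, and the 4-dimensional observation / 2-action spaces are placeholders for our simulator, and it's written against the older gym-based RLlib API, so details may differ between Ray versions.

```python
import gym
import numpy as np
from gym import spaces
from ray.tune.registry import register_env

from my_simulator import MySimulator  # hypothetical import of our simulator


class MySimEnv(gym.Env):
    def __init__(self, env_config):
        # env_config is the dict RLlib passes in from config["env_config"]
        self.sim = MySimulator(**env_config)
        # placeholder spaces; DQN needs a discrete action space
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)

    def reset(self):
        obs = self.sim.reset()  # hypothetical: returns the initial observation
        return np.asarray(obs, dtype=np.float32)

    def step(self, action):
        # hypothetical: simulator advances one tick and reports the outcome
        obs, reward, done, info = self.sim.advance(action)
        return np.asarray(obs, dtype=np.float32), reward, done, info


# Register under a name the trainer config can reference via "env".
register_env("my_sim_env", lambda env_config: MySimEnv(env_config))
```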

If anyone can suggest what I should do, it would be very much appreciated! Thanks!

[Image: Ray RLlib Environments]


There are 2 answers

paypaytr (BEST ANSWER)

If your environment can indeed be structured to fit the Gym style (__init__, reset, and step functions), you can use the first option.
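
As a rough illustration (assuming the env is registered under a name like "my_sim_env" as in the question, and an older Ray release that still ships ray.rllib.agents.dqn.DQNTrainer; newer versions use ray.rllib.algorithms.dqn.DQNConfig instead), DQN training would then look something like this:

```python
import ray
from ray.rllib.agents.dqn import DQNTrainer

ray.init()

trainer = DQNTrainer(config={
    "env": "my_sim_env",   # the name passed to register_env
    "env_config": {},      # forwarded to the env's __init__
    "num_workers": 1,
    "framework": "torch",
})

for i in range(10):
    result = trainer.train()
    print(i, result["episode_reward_mean"])
```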

The external environment option is mostly for RL environments that don't fit this style, for example web-browser-based applications (test automation etc.) or continuously running finance apps.
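
If you do need that route, a minimal ExternalEnv sketch could look like the following (again assuming a hypothetical MySimulator with reset/advance methods and the older gym-based RLlib API). The key difference is that your simulator drives the loop and asks RLlib for actions:

```python
import gym
import numpy as np
from ray.rllib.env.external_env import ExternalEnv

from my_simulator import MySimulator  # hypothetical import


class MySimExternalEnv(ExternalEnv):
    def __init__(self, env_config):
        super().__init__(
            action_space=gym.spaces.Discrete(2),
            observation_space=gym.spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32),
        )
        self.sim = MySimulator(**env_config)

    def run(self):
        # The simulator owns the control loop and queries RLlib for actions.
        while True:
            episode_id = self.start_episode()
            obs = self.sim.reset()
            done = False
            while not done:
                action = self.get_action(episode_id, obs)
                obs, reward, done, _ = self.sim.advance(action)  # hypothetical
                self.log_returns(episode_id, reward)
            self.end_episode(episode_id, obs)
```

You would register the subclass with register_env just like a gym.Env and pass the registered name as "env" in the trainer config.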

Klaus

Since you wrote that you work with a custom Python-based simulator, I would say you can employ the PolicyClient and PolicyServerInput API. Implement a PolicyClient on your simulator (env) side and feed it the data from the simulator (observations, rewards etc.). This is what I think may help you.
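
A rough sketch of how the two sides could fit together (assuming an older RLlib release that ships ray.rllib.env.policy_client and ray.rllib.env.policy_server_input, a hypothetical MySimulator with reset/advance methods, and placeholder host/port and space values):

```python
# --- server process: runs the DQN trainer and serves the policy ---
import gym
import numpy as np
import ray
from ray.rllib.agents.dqn import DQNTrainer
from ray.rllib.env.policy_server_input import PolicyServerInput

ray.init()

trainer = DQNTrainer(config={
    "env": None,  # no env on the server; the simulator lives on the client
    "observation_space": gym.spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32),
    "action_space": gym.spaces.Discrete(2),
    "input": lambda ioctx: PolicyServerInput(ioctx, "localhost", 9900),
    "input_evaluation": [],  # no off-policy estimation without a real env
    "num_workers": 0,
    "framework": "torch",
})

while True:
    trainer.train()
```

And on the simulator side, in a separate process:

```python
# --- client process: runs next to (or inside) the simulator ---
from ray.rllib.env.policy_client import PolicyClient

from my_simulator import MySimulator  # hypothetical import

client = PolicyClient("http://localhost:9900", inference_mode="remote")
sim = MySimulator()

episode_id = client.start_episode(training_enabled=True)
obs = sim.reset()
done = False
while not done:
    action = client.get_action(episode_id, obs)
    obs, reward, done, _ = sim.advance(action)  # hypothetical
    client.log_returns(episode_id, reward)
client.end_episode(episode_id, obs)
```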