How to vectorize an RLlib MultiAgentEnv similar to Gym/Gymnasium's VectorEnv?


I have a multi-agent RL environment implemented as an RLlib MultiAgentEnv. I would like to do training and analysis with this environment, but with my own custom algorithm code and without the RLlib training framework.
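For context, here is a toy sketch of the kind of environment I mean (my real environment is more complex, but it follows the same dict-based MultiAgentEnv API; exact attribute names vary a bit between RLlib versions):

```python
import gymnasium as gym
from ray.rllib.env.multi_agent_env import MultiAgentEnv

class ToyMultiAgentEnv(MultiAgentEnv):
    """Minimal two-agent environment following RLlib's dict-based API."""

    def __init__(self, config=None):
        super().__init__()
        self.agents = self.possible_agents = ["agent_0", "agent_1"]
        self.observation_spaces = {a: gym.spaces.Discrete(4) for a in self.agents}
        self.action_spaces = {a: gym.spaces.Discrete(2) for a in self.agents}
        self._t = 0

    def reset(self, *, seed=None, options=None):
        self._t = 0
        # Per-agent observation dict plus a per-agent infos dict.
        return {a: 0 for a in self.agents}, {}

    def step(self, action_dict):
        self._t += 1
        obs = {a: self._t % 4 for a in action_dict}
        rewards = {a: 1.0 for a in action_dict}
        # "__all__" signals whether the whole episode is over.
        terminateds = {"__all__": self._t >= 10}
        truncateds = {"__all__": False}
        return obs, rewards, terminateds, truncateds, {}
```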

In Gym/Gymnasium, the VectorEnv class lets you run multiple instances of an environment synchronously or asynchronously and collect batches of experiences from them. Does RLlib have an equivalent for environments in its MultiAgentEnv format? I would not be opposed to using Ray or something else to do this; I just don't want to deal with the RLlib training API, since I'm trying to use my own custom code.
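Below is a rough sketch of what I'm imagining: wrap each MultiAgentEnv copy in a Ray actor and fan actions out to them in parallel. All the names here (EnvWorker, VectorMultiAgentEnv, make_env) are my own placeholders, not anything from RLlib; this is just to illustrate the interface I'm after:

```python
import ray

@ray.remote
class EnvWorker:
    """Holds one MultiAgentEnv instance and steps it on request."""

    def __init__(self, make_env):
        self.env = make_env()

    def reset(self, seed=None):
        return self.env.reset(seed=seed)

    def step(self, action_dict):
        return self.env.step(action_dict)


class VectorMultiAgentEnv:
    """Naive vectorizer: fans action dicts out to N env workers."""

    def __init__(self, make_env, num_envs):
        self.workers = [EnvWorker.remote(make_env) for _ in range(num_envs)]

    def reset(self, seed=None):
        # Offset the seed per sub-env so the copies don't run in lockstep.
        futures = [
            w.reset.remote(None if seed is None else seed + i)
            for i, w in enumerate(self.workers)
        ]
        # One (obs_dict, infos_dict) tuple per sub-env.
        return ray.get(futures)

    def step(self, action_dicts):
        # action_dicts: one {agent_id: action} dict per sub-env.
        futures = [w.step.remote(a) for w, a in zip(self.workers, action_dicts)]
        # One (obs, rewards, terminateds, truncateds, infos) tuple per sub-env.
        return ray.get(futures)

# Usage: step four copies of the toy env above in parallel.
# ray.init()
# vec_env = VectorMultiAgentEnv(lambda: ToyMultiAgentEnv(), num_envs=4)
# first_obs = vec_env.reset(seed=0)
```

This works for me in principle, but I'd rather not reinvent the batching and auto-reset logic if RLlib or the wider Ray ecosystem already provides an equivalent.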

I did come across this VectorEnv-related code in the VMAS repo, but I am not sure how to make use of it: https://github.com/proroklab/VectorizedMultiAgentSimulator/blob/main/vmas/simulator/environment/rllib.py

Thanks in advance for any assistance!
