Custom Reinforcement Learning Environment with Neural Network

I'm currently working on a reinforcement learning project in which the dynamics of the environment are modeled with a neural network surrogate model. I would like to build a custom environment around this neural network, which runs on a GPU. The environment should therefore accept and return PyTorch tensors. This would make it possible to run the whole training process on the GPU without constantly moving data between CPU and GPU (i.e., without converting back and forth between NumPy arrays and PyTorch tensors).
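To make the idea concrete, here is a minimal sketch of what such a tensor-native environment could look like. It is a plain Python class, not tied to the Gymnasium API, and the names (`SurrogateEnv`, `dynamics_model`, the random reset, the placeholder reward) are illustrative assumptions, not from any library:

```python
import torch
import torch.nn as nn


class SurrogateEnv:
    """Sketch of a GPU-resident environment.

    The transition dynamics are approximated by a neural-network
    surrogate; observations, actions, rewards, and done flags all
    stay as torch tensors on the chosen device, so no CPU/NumPy
    round-trips are needed during rollouts.
    """

    def __init__(self, dynamics_model: nn.Module, obs_dim: int,
                 batch_size: int = 1,
                 device: str = "cuda" if torch.cuda.is_available() else "cpu"):
        self.device = torch.device(device)
        self.dynamics = dynamics_model.to(self.device).eval()
        self.obs_dim = obs_dim
        self.batch_size = batch_size
        self.state = None

    def reset(self) -> torch.Tensor:
        # Placeholder: start each episode from a random state, on device.
        self.state = torch.randn(self.batch_size, self.obs_dim,
                                 device=self.device)
        return self.state

    @torch.no_grad()
    def step(self, action: torch.Tensor):
        # The surrogate predicts the next state from (state, action).
        inp = torch.cat([self.state, action.to(self.device)], dim=-1)
        self.state = self.dynamics(inp)
        # Placeholder reward/termination; replace with task-specific logic.
        reward = -self.state.pow(2).sum(dim=-1)
        done = torch.zeros(self.batch_size, dtype=torch.bool,
                           device=self.device)
        return self.state, reward, done, {}


# Example usage with a toy surrogate (obs_dim=4, action_dim=2):
model = nn.Linear(4 + 2, 4)
env = SurrogateEnv(model, obs_dim=4, batch_size=128)
obs = env.reset()
obs, reward, done, info = env.step(torch.zeros(128, 2, device=env.device))
```

Because everything is batched along the first dimension, a single instance of this class can stand in for many parallel environments on one GPU, which is usually cheaper than multi-process vectorization.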

So far I know about OpenAI Gym / Gymnasium environments, which work with NumPy arrays. Is there a framework for training RL agents and building environments that supports batched or multi-process execution on the GPU? I'm thankful for any opinion, hint, or experience. Thanks!
