Is there a way to allocate the remaining GPU memory to your code in PyTorch?

  1. Is there a way to allocate the remaining memory in each GPU for your task?
  2. Can I split my task across multiple GPU's?

`nvidia-smi` output for reference (screenshot not included).


1 answer

Jason Adhinarta answered:
  1. Yes. PyTorch can use whatever memory is still free on a GPU, provided it is enough for your task; a GPU being partially occupied by other processes does not prevent allocation. You only need to specify which GPUs to use: https://stackoverflow.com/a/39661999/10702372
  2. Yes. Multi-GPU data parallelism is provided by PyTorch's DistributedDataParallel.
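For point 1, the usual approach (as in the linked answer) is to restrict which GPUs the process can see via `CUDA_VISIBLE_DEVICES` before CUDA is initialized; the GPU indices `2,3` below are placeholders for whichever devices have free memory on your machine:

```python
import os

# Make only GPUs 2 and 3 visible to this process. Inside the process,
# PyTorch renumbers them as cuda:0 and cuda:1. This must be set before
# the first `import torch` (or at least before CUDA is initialized).
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"

# import torch
# torch.cuda.device_count()  # would now report 2
# model = model.to("cuda:0")  # i.e. physical GPU 2
```

Alternatively, without masking devices, you can place tensors and modules on an explicit device with `.to("cuda:1")`; PyTorch allocates from that GPU's remaining free memory.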
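For point 2, a minimal sketch of the DistributedDataParallel pattern: one worker process per GPU, each holding a replica of the model, with gradients averaged across processes automatically. The `Linear(10, 2)` model is a stand-in for your own network:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def worker(rank, world_size):
    # Rendezvous settings for single-machine training (placeholder port).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Wrap the model; DDP synchronizes gradients across GPUs in backward().
    model = DDP(torch.nn.Linear(10, 2).to(rank), device_ids=[rank])

    # ... training loop goes here: each rank feeds its own shard of data ...

    dist.destroy_process_group()

if __name__ == "__main__":
    n_gpus = torch.cuda.device_count()
    if n_gpus > 1:
        # Launch one process per GPU.
        mp.spawn(worker, args=(n_gpus,), nprocs=n_gpus)
```

Each process should also use a `DistributedSampler` in its `DataLoader` so the dataset is partitioned rather than duplicated across GPUs.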