I'm trying to use aitextgen to fine-tune the 774M GPT-2 model on a dataset. Unfortunately, no matter what I do, training fails because only about 80 MB of VRAM are available. How can I clear the VRAM without restarting the runtime, and maybe prevent the VRAM from filling up in the first place?
Can I clear up GPU VRAM in Colab?
One solution is to use the code snippets below. First, install numba:

!pip install numba

Then:
from numba import cuda

# ... all of your code and execution ...

cuda.select_device(0)  # bind this thread to GPU 0
cuda.close()           # destroy the CUDA context, freeing its VRAM
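As a minimal sketch, the pattern above can be wrapped in a helper that fails gracefully when no GPU is present. The name `free_gpu_vram` is hypothetical, not from numba; and note the caveat reported below, that after `cuda.close()` the same process usually cannot use the GPU again.

```python
def free_gpu_vram(device_id=0):
    """Try to release the CUDA context for `device_id`; return success."""
    try:
        from numba import cuda
    except ImportError:
        return False                  # numba is not installed
    if not cuda.is_available():
        return False                  # no CUDA-capable GPU visible
    cuda.select_device(device_id)     # bind this thread to the device
    cuda.close()                      # destroy the context -> VRAM is freed
    return True
```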
Your problem is discussed in the official TensorFlow GitHub repository: https://github.com/tensorflow/tensorflow/issues/36465
Update: @alchemy reported this to be unrecoverable, in the sense that the GPU cannot be turned back on in the same process afterwards. You can try the code below instead.
device = cuda.get_current_device()
device.reset()  # reset the device, releasing the memory it had allocated
Run

!nvidia-smi

inside a notebook cell to find the ID of the process holding GPU memory, then kill it with:

!kill process_id

That should help.
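The same lookup can be scripted, as a sketch: query nvidia-smi for the PIDs of compute processes holding GPU memory, so you can decide which one to kill. This assumes the `nvidia-smi` CLI is on PATH (it is on Colab GPU runtimes); the helper name `gpu_process_ids` is hypothetical.

```python
import subprocess

def gpu_process_ids():
    """Return PIDs of compute processes currently using the GPU."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-compute-apps=pid",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []                     # no NVIDIA driver / no GPU here
    return [int(p) for p in out.split() if p.strip().isdigit()]
```

On a machine without an NVIDIA driver this simply returns an empty list instead of raising.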