Google Colab crashes when I start training my StyleGAN2 model, right after tick 0. The error "Your session crashed after using all available RAM" appears.
I'm using a fork of StyleGAN2 that adds support for non-square images. The dataset consists of jpg images that are quite small (640x384); however, there are 12,195 of them. The tfrecords file is 1.47 GB. The training command is:
!python run_training.py --num-gpus=1 --data-dir=./dataset --config=config-f --dataset=cg --mirror-augment=true --metric=none --total-kimg=20000 --min-h=3 --min-w=5 --res-log2=7 --result-dir="/content/drive/My Drive/results"
I'd like to know whether there is a way to keep working with this dataset in Colab after changing some parameters.
Well, the message is clear: the training exceeds the available RAM. I remember there is a way to get more RAM in Google Colab, but I couldn't find it again. Another thing you can try is modifying the forked code so that the model is trained batch by batch over the total set of images, i.e. with a smaller batch size (unless that is already implemented and you only have to pass it as a parameter; check the StyleGAN2 documentation). A sketch of where the batch size is usually set follows below.
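In the official NVlabs run_training.py the minibatch schedule is set inside run(), so if the fork keeps that layout, lowering these values is the most direct way to shrink the per-step batch. The names and defaults below come from the official repo and are an assumption for your fork, so check its own run_training.py first:

# Inside run() in run_training.py (layout of the official NVlabs StyleGAN2 repo;
# your non-square fork may differ -- verify against its own code before editing).
from dnnlib import EasyDict  # already imported at the top of the original file

sched = EasyDict()
sched.minibatch_size_base = 32  # total images per training step; try lowering to 16 or 8
sched.minibatch_gpu_base = 4    # images kept in memory per GPU at once; try lowering to 2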
If lowering the batch size doesn't work, you can try adding a callback-style hook that saves the weights into a file, e.g. in pickle format, before the RAM crashes; a rough sketch of that is below as well. I know I'm not giving an exact solution, but this may work as a guide for you :)
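For the saving part, here is a minimal sketch, assuming the network objects are picklable the way they are in the official repo (where the training loop calls them G, D and Gs; the names in your fork may differ). Writing the file to the mounted Drive, as the question already does for --result-dir, keeps the snapshot even if the runtime dies:

import pickle

# Hypothetical helper: dump the network objects to disk so a snapshot survives
# a Colab crash. G, D, Gs are the names used in the official StyleGAN2 training
# loop; adjust to whatever your fork exposes.
def save_snapshot(networks, path="/content/drive/My Drive/results/manual-snapshot.pkl"):
    with open(path, "wb") as f:
        pickle.dump(networks, f)

# Call it every few ticks, e.g.: save_snapshot((G, D, Gs))

If I remember correctly, the official training loop already writes network-snapshot-*.pkl files at an interval controlled by network_snapshot_ticks and can resume from one via resume_pkl, so check whether the fork kept that before adding your own hook.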