How to find out the amount of VRAM used by the model itself? (LSTM)

742 views

How to find out the VRAM usage of this model? (It's not about the data being trained, but about the model and its weights being loaded into VRAM.)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Dropout,LSTM,BatchNormalization
import tensorflow as tf


model = Sequential()
model.add(LSTM(700, input_shape=(10000,5000,20), return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())


model.add(LSTM(700, return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())


model.add(LSTM(700))
model.add(Dropout(0.2))
model.add(BatchNormalization())


model.add(Dense(64, activation='sigmoid'))
model.add(Dropout(0.2))


model.add(Dense(32, activation='sigmoid'))
model.add(Dropout(0.2))

1 Answer

alwaysmvp45

You can approximate it based on the number of parameters and the parameters' datatypes.

model.summary()

will tell you how many parameters you have; then just multiply by the size of each parameter. For example, if you found you had 5000 parameters and they are 32-bit floating point, then your model is at least 5000 * 32 / 8 = 20,000 bytes (about 20 KB).
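The arithmetic above can be sketched as a small helper (pure Python, no TensorFlow needed; the function name is my own):

```python
def approx_model_bytes(n_params: int, bits_per_param: int = 32) -> int:
    """Lower-bound estimate of the memory needed just to hold the weights."""
    return n_params * bits_per_param // 8

# 5,000 float32 parameters -> 20,000 bytes (~20 KB)
print(approx_model_bytes(5000))  # 20000
```

In Keras, `model.count_params()` returns the total parameter count directly, so you can feed it straight into a calculation like this instead of reading the number off `model.summary()`.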

Naturally, there is more that goes into it, but this is often an easy way to get a fairly close lower bound. You could also save the model and see how large the saved file is, which would include the model structure as well.

tf.saved_model.save(model, path)
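Since `tf.saved_model.save` writes a directory rather than a single file, measuring "how large the saved file is" means summing every file under it. A standard-library sketch (`dir_size_bytes` is my own helper name):

```python
import os

def dir_size_bytes(path: str) -> int:
    """Total size of all files under a directory (e.g. a SavedModel dir)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

# After tf.saved_model.save(model, "saved_lstm"):
# print(dir_size_bytes("saved_lstm"))
```

Note the on-disk size includes the serialized graph and metadata, so it is an upper-ish bound on the weight memory alone.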

Do be careful: if your model is large, then training it will need even more memory, since the framework also stores gradients (and optimizer state) for all the learnable parameters, plus activations for backpropagation. So you can easily imagine how a 1 GB model may need 5-10 GB to train.
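That multiplier can be roughed out too. Assuming float32 weights, one gradient per parameter, and Adam's two moment buffers per parameter (activations, which often dominate for LSTMs, are deliberately ignored here), a hedged floor estimate looks like:

```python
def approx_training_bytes(n_params: int, bytes_per_param: int = 4,
                          optimizer_slots: int = 2) -> int:
    """Rough floor for training memory: weights + gradients + optimizer
    state (Adam keeps 2 extra buffers per parameter). Activations are
    NOT included, so real usage is typically higher still."""
    copies = 1 + 1 + optimizer_slots  # weights, gradients, optimizer slots
    return n_params * bytes_per_param * copies

# A ~1 GB model (~250M float32 params) needs at least ~4 GB to train with Adam:
print(approx_training_bytes(250_000_000))  # 4000000000
```

This is why the 5-10x rule of thumb above is plausible: 4x comes from weights, gradients, and Adam state alone, and activation storage pushes it higher.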