I am trying to run one-hot encoding on the IMDB data I loaded from keras.datasets. But when I run tf.keras.utils.to_categorical(train_seq) in the following context, I get a MemoryError (attached image), even though nvidia-smi showed 7961 MiB free on my RTX 4060 Ti when I checked after the failure. I am using a venv virtual environment on Ubuntu 22.04 LTS with tensorflow==2.13, and my machine has 32 GB × 2 of RAM. I want to one-hot encode train_seq into a (20000, 100, 500) ndarray. Thanks in advance.
(screenshot: MemoryError traceback)

(screenshot: nvidia-smi output after the failure)
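For reference, here is my back-of-the-envelope estimate of the memory the target array should take, assuming to_categorical's default float32 dtype (4 bytes per element):

# Expected size of the (20000, 100, 500) one-hot array
expected_bytes = 20000 * 100 * 500 * 4   # 4 bytes per float32 element
print(expected_bytes / 1024 ** 3)        # ~3.73 GiB

That seems like it should fit comfortably in 64 GB of system RAM, which is part of why the MemoryError confuses me.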

Full code
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split

# Load IMDB and hold out 20% of the training split for validation
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data()
x_train, x_valid, y_train, y_valid = train_test_split(x_train, y_train, test_size=0.2, random_state=42)

# Pad/truncate every review to 100 tokens
train_seq = pad_sequences(x_train, maxlen=100)
valid_seq = pad_sequences(x_valid, maxlen=100)

# This is the call that raises MemoryError
tf.keras.utils.to_categorical(train_seq)
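In case it is relevant: as far as I understand, when num_classes is not passed, to_categorical infers the one-hot depth as the largest value in the array plus one, so this is the check I would run right after pad_sequences above (diagnostic only, not part of the failing run):

# Depth to_categorical would infer when num_classes is not passed: max token id + 1
print(train_seq.shape)            # I expect (20000, 100)
print(int(train_seq.max()) + 1)   # last-axis size of the one-hot output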
I've tried sudo kill -9 <PID>, restarted my Python kernel, and ran the code again. I also ran the same code on Google Colab with a T4 runtime, and it worked fine there, using only 4.6 / 15.0 GB of GPU RAM when I checked after the run.