torch.cuda.OutOfMemoryError: CUDA out of memory. Using model.train() with ultralytics


I'm working on an API with Flask that uses deep learning models. A first model (YOLOv5) segments an image and returns a list of images (the detected elements). I then pass the whole list to a YOLOv8 model so it can classify them:

results = model.predict(segmentation(img))

However, I get the following error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 418.00 MiB. GPU 0 has a total capacty of 5.80 GiB of which 86.19 MiB is free. Process 59994 has 480.00 MiB memory in use. Including non-PyTorch memory, this process has 5.19 GiB memory in use. Of the allocated memory 4.12 GiB is allocated by PyTorch, and 858.55 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I'm running Ubuntu 23.04 with an NVIDIA RTX 3060 Mobile (6 GB), inside a venv.
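One likely cause is that passing the entire list of crops to predict() in one call tries to batch everything onto the 6 GB GPU at once. A minimal sketch of a workaround, assuming segmentation(img) returns a list of crops and the classifier is an ultralytics YOLO model (classify_crops and the batch size are illustrative names, not part of either library's API):

```python
# Sketch: classify detected crops in small batches instead of all at once,
# so peak GPU memory stays bounded by the batch size rather than the number
# of detections.

def chunked(items, size):
    """Yield successive slices of at most `size` elements."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def classify_crops(model, crops, batch_size=8):
    import torch
    results = []
    with torch.inference_mode():  # no autograd buffers during inference
        for batch in chunked(crops, batch_size):
            results.extend(model.predict(batch, verbose=False))
            torch.cuda.empty_cache()  # return cached blocks between batches
    return results

# Illustrative usage:
# from ultralytics import YOLO
# classifier = YOLO("yolov8n-cls.pt")
# results = classify_crops(classifier, segmentation(img))
```

If the segmentation model also stays resident on the GPU, moving one of the two models to CPU (or reloading it per request) is another way to fit both pipelines into 6 GB.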

I tried the following command and nothing worked: export 'PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:2048'
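For what it's worth, the quoting in that export is valid bash, so the shell syntax is probably not the problem; the variable just has to be set before the process that initializes CUDA starts, and the split size only matters if it is smaller than the failing allocation. A sketch with a smaller value (128 is an assumption, not a recommendation from the docs):

```shell
# Set the allocator config before launching the Flask app, in the same shell.
# max_split_size_mb must be below the size of the allocations that fail
# (here 418 MiB) to have any effect on fragmentation.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Setting it inside Python with os.environ would also work, but only if done before torch touches the GPU.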


There are 0 answers