I am trying to apply deep learning to my classification problem using the Keras library. I am running it on a GPU, but it runs out of memory (my GPU is old), even with a batch size of 1. When I reduce the image size it works, but the resized images are so blurred that I am losing important information, so the accuracy is not good. I am wondering whether it is feasible in Keras to load only one batch of images (not the full dataset) into the shared variables in each iteration, and then in the next iteration load the new images and update the shared variables again? Something like the sketch below is what I have in mind. I read some Keras tutorials, but I am not sure whether this is feasible and, if so, how to do it.
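For illustration, this is roughly what I imagine; here `image_paths` and `labels` are placeholders for my own data, and I am not sure whether a generator passed to `fit_generator` is the right mechanism for this:

```python
import numpy as np
from keras.preprocessing.image import load_img, img_to_array

def batch_generator(image_paths, labels, batch_size):
    """Yield one batch at a time so only batch_size images are in memory."""
    while True:  # Keras expects the generator to loop indefinitely
        for start in range(0, len(image_paths), batch_size):
            batch_paths = image_paths[start:start + batch_size]
            # Load and convert only this batch of images to arrays
            x = np.array([img_to_array(load_img(p)) for p in batch_paths])
            y = labels[start:start + batch_size]
            yield x, y

# model.fit_generator(batch_generator(train_paths, train_labels, batch_size=1),
#                     steps_per_epoch=len(train_paths), epochs=10)
```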
I would be thankful for any help with this problem.