I am relatively new to deep learning and recently started a project to train a MobileNet model on a custom dataset. In the process I ran into an image-size mismatch: my dataset consists of 768x768 images, whereas the model's default input size is 320x320. Training completed without errors (40,000 steps), but the resulting mean Average Precision (mAP) was markedly low, around 20%.
I suspect this difference in image sizes may be contributing to the poor results, so I tried to address it by changing the image size parameter in the configuration file to match my dataset (768x768). Even after this adjustment, however, the outcome was essentially unchanged.
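To clarify what I changed: my configuration follows the TensorFlow Object Detection API pipeline format (please note this as an assumption if your setup differs, since the field names would then be different), and the edit was to the image resizer block, roughly like this:

```
model {
  ssd {
    image_resizer {
      fixed_shape_resizer {
        # changed from the default 320 x 320 to match my dataset
        height: 768
        width: 768
      }
    }
  }
}
```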
I am seeking advice on whether this difference in image dimensions could impact the model's performance and, if so, how best to address this issue to achieve better results. Any guidance or insights you can provide would be immensely valuable.