YOLOv8 custom model not making predictions

I use a custom-trained YOLOv8 model to predict whether a physical door is closed or open. I trained YOLOv8 on a custom dataset, but it does not make any detections, even when given the same data it was trained on.

I used a dataset of approximately 300 images. This is my code:

import os

from ultralytics import YOLO
import cv2


VIDEOS_DIR = os.path.join('.', 'videos')

video_path = os.path.join(VIDEOS_DIR, 'sample door.mp4')
video_path_out = '{}_out.mp4'.format(video_path)

# read the first frame to get the frame size for the output video
cap = cv2.VideoCapture(video_path)
ret, frame = cap.read()
H, W, _ = frame.shape
out = cv2.VideoWriter(video_path_out, cv2.VideoWriter_fourcc(*'mp4v'),
                      int(cap.get(cv2.CAP_PROP_FPS)), (W, H))

model_path = os.path.join('.', 'runs', 'detect', 'train', 'weights', 'last.pt')

model = YOLO(model_path)  # load a custom model


while ret:

    # run inference on the current frame; [0] takes the single Results object
    results = model(frame)[0]
    for result in results.boxes.data.tolist():
        x1, y1, x2, y2, score, class_id = result
        print(x1, y1, x2, y2)

        # draw the bounding box and class label on the frame
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 4)
        cv2.putText(frame, results.names[int(class_id)].upper(), (int(x1), int(y1 - 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.3, (0, 255, 0), 3, cv2.LINE_AA)

    out.write(frame)
    ret, frame = cap.read()

cap.release()
out.release()
cv2.destroyAllWindows()

The following are the results of the training: https://i.stack.imgur.com/huyZR.png

1 Answer

hanna_liavoshka

As the training results show, the model does not perform well: a recall below 0.2 means it recognizes fewer than 20% of the target objects in the dataset it was trained on. We should expect an even lower result on real data. Consider re-training the model according to these tips: https://docs.ultralytics.com/yolov5/tutorials/tips_for_best_training_results/?h=tips. Please also make sure you have a basic understanding of model performance metrics: https://docs.ultralytics.com/guides/yolo-performance-metrics/?h=metrics#object-detection-metrics.
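
For reference, a minimal re-training sketch with the Ultralytics API; data.yaml is a hypothetical path to your dataset config, and the epoch count and image size are starting points you should adjust to your data:

from ultralytics import YOLO

# start from a pretrained checkpoint rather than from scratch
model = YOLO('yolov8n.pt')

# data.yaml is a placeholder for your dataset config;
# a small dataset (~300 images) usually benefits from more data and more epochs
model.train(data='data.yaml', epochs=100, imgsz=640)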

Also, the training process saves the best-performing checkpoint as best.pt. Try it instead of the last.pt you are using now.
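
With the directory layout from your question, that would be, for example:

model = YOLO(os.path.join('.', 'runs', 'detect', 'train', 'weights', 'best.pt'))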

For a quick check of whether low model performance is the cause of the missing detections, you can lower the confidence threshold, which defaults to 0.25; objects detected with confidence below this threshold are discarded. So you can try something like results = model(frame, conf=0.01)[0], just as an experiment. If you then get detections on the training data, even incorrect ones, the cause is likely low model performance.
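
A minimal sketch of that experiment, reusing a frame read as in your code, which prints every raw detection with its confidence so you can see whether the model produces anything at all:

# run inference with a very low confidence threshold, just for the experiment
results = model(frame, conf=0.01)[0]

print('detections:', len(results.boxes))
for x1, y1, x2, y2, score, class_id in results.boxes.data.tolist():
    print(results.names[int(class_id)], round(score, 3), (x1, y1, x2, y2))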