Ultralytics doesn't find source


So I was trying to run a simple script to test out Ultralytics and see how its YOLOv8 model works. I followed the steps from this video, but when I ran this script:

from ultralytics import YOLO
import cv2

model = YOLO('yolov8n.pt')

video = './Vine_Golvo.mp4'
capture = cv2.VideoCapture(video)

condition = True

# read frames
while condition:
    condition, frame = capture.read()   # condition is updated to True or False, depending on whether or not
                                        # a frame was successfully retrieved (it will be False after the last frame of the video)
    # frame saves the data of the current frame
    
    # track objects
    results = model.track(frame, persist=True)
    
    # plot results
    frame_ = results[0].plot()
    
    # visualize
    cv2.imshow('Paul', frame_)
    if cv2.waitKey(25) & 0xFF == ord('q'):  # close the window when 'q' is pressed
        break

this error popped out:

C:\Users\Costin\Desktop\Virtualenvs\yolov8\Scripts\python.exe "C:\Users\Costin\Desktop\Programare\Materiale limbaje de programare\Python\Image recognition\Icyu\main.py"
WARNING ⚠️ 'source' is missing. Using 'source=C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\assets'.

image 1/2 C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\assets\bus.jpg: 640x480 3 persons, 1 bus, 63.0ms
Traceback (most recent call last):
  File "C:\Users\Costin\Desktop\Programare\Materiale limbaje de programare\Python\Image recognition\Icyu\main.py", line 18, in <module>
    results = model.track(frame, persist=True)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\engine\model.py", line 481, in track
    return self.predict(source=source, stream=stream, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\engine\model.py", line 441, in predict
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
                                                                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\engine\predictor.py", line 168, in __call__
    return list(self.stream_inference(source, model, *args, **kwargs))  # merge list of Result into one
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\torch\utils\_contextlib.py", line 56, in generator_context
    response = gen.send(request)
               ^^^^^^^^^^^^^^^^^
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\engine\predictor.py", line 256, in stream_inference
    self.run_callbacks("on_predict_postprocess_end")
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\engine\predictor.py", line 393, in run_callbacks
    callback(self)
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\trackers\track.py", line 69, in on_predict_postprocess_end
    tracks = tracker.update(det, im0s[i])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\trackers\byte_tracker.py", line 293, in update
    warp = self.gmc.apply(img, dets)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\trackers\utils\gmc.py", line 102, in apply
    return self.applySparseOptFlow(raw_frame)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Costin\Desktop\Virtualenvs\yolov8\Lib\site-packages\ultralytics\trackers\utils\gmc.py", line 329, in applySparseOptFlow
    matchedKeypoints, status, _ = cv2.calcOpticalFlowPyrLK(self.prevFrame, frame, self.prevKeyPoints, None)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
cv2.error: OpenCV(4.9.0) D:\a\opencv-python\opencv-python\opencv\modules\video\src\lkpyramid.cpp:1394: error: (-215:Assertion failed) prevPyr[level * lvlStep1].size() == nextPyr[level * lvlStep2].size() in function 'cv::`anonymous-namespace'::SparsePyrLKOpticalFlowImpl::calc'

Process finished with exit code 1

I initially ran this in VS Code, but since the tutorial used PyCharm (and I've had problems with VS Code in the past anyway), I thought maybe the editor and environment were the issue. They weren't: the same error appears in PyCharm when using a virtual environment.

1 Answer

Answer by hanna_liavoshka:

If this error occurs at the start of the script, before any video frames have been successfully processed, the problem is likely an incorrect video path. Try specifying the full path instead of a relative one, and test different videos to see whether the error persists.
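One way to rule out a bad path up front is to fail fast before the loop. A minimal sketch (the `open_video` helper name is just an illustration, not part of OpenCV):

```python
import os

def open_video(path):
    """Open a video, raising a clear error for each common failure mode."""
    if not os.path.isfile(path):
        # cv2.VideoCapture does not raise on a missing file; it silently returns
        # a capture whose every read() fails, so check the path explicitly
        raise FileNotFoundError(f"Video not found: {os.path.abspath(path)}")
    import cv2  # imported here so the path check alone does not require OpenCV
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise RuntimeError(f"OpenCV could not open: {path}")
    return cap

# capture = open_video('./Vine_Golvo.mp4')
```

With this in place, a wrong path produces an immediate `FileNotFoundError` naming the absolute path that was tried, instead of a confusing tracker crash many calls deeper.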

If the error occurs at the end of the video, the problem is in the while-loop logic: even when capture.read() returns no frame (as it does after the last frame), your code still tries to track that nonexistent frame, because the loop condition is only checked at the top of the next iteration:

while condition:
    condition, frame = capture.read() 
    results = model.track(frame, persist=True)
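You can see this off-by-one without OpenCV at all. Here `FakeCapture` is a stand-in for `cv2.VideoCapture` (not a real OpenCV object) that serves two frames and then returns `(False, None)`, exactly like a capture at end of video:

```python
class FakeCapture:
    """Mimics cv2.VideoCapture.read(): frames, then (False, None) forever."""
    def __init__(self, frames):
        self._frames = list(frames)

    def read(self):
        if self._frames:
            return True, self._frames.pop(0)
        return False, None

def frames_processed_buggy(cap):
    """Mirrors the loop above: `condition` is only checked at the top,
    so the final (False, None) read is still handed to the tracker."""
    processed = []
    condition = True
    while condition:
        condition, frame = cap.read()
        processed.append(frame)   # model.track(frame) would run here, even on None
    return processed

def frames_processed_fixed(cap):
    """Checks success before using the frame, as in the docs example."""
    processed = []
    while True:
        success, frame = cap.read()
        if not success:
            break
        processed.append(frame)
    return processed

print(frames_processed_buggy(FakeCapture(["f1", "f2"])))  # ['f1', 'f2', None]
print(frames_processed_fixed(FakeCapture(["f1", "f2"])))  # ['f1', 'f2']
```

The buggy version passes a trailing `None` into tracking, which is what trips the optical-flow assertion inside the tracker.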

So I would propose using the example from the Ultralytics documentation: https://docs.ultralytics.com/modes/track/#persisting-tracks-loop. This code ends the process in both cases, when the video ends and when "q" is pressed, and never tries to process frames after the end of the video.

import cv2
from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO('yolov8n.pt')

# Open the video file
video_path = "./Vine_Golvo.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)
        # Visualize the results on the frame
        annotated_frame = results[0].plot()
        # Display the annotated frame
        cv2.imshow("Paul", annotated_frame)
        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()