I am working on a video editing application in Python using PyQt. I want to be able to make "clips" out of video files and have the ability to concatenate these clips to render a new video. I have played back an .mp4 file by instantiating a QVideoWidget and using the QMediaPlayer class with a video surface, but I want to play back video from a numpy array (preferred) or some other mutable object. I have looked through some open source video editors (OpenShot, Vidcutter, Pitivi), but I cannot seem to find what I need. Many use C++ frameworks, with which I am unfamiliar.
I have used multithreading and for-loops to try to hack my way to a solution with a QImage object: I loop through a video frame by frame, extract the numpy array representation of each frame, convert it to a QImage, and call repaint(). Unfortunately, even when this runs on a new thread, it does not render at the desired speed. The approach was inspired by moviepy's method of rendering clips with pygame. I have also looked through the PyQt documentation, but those classes either do not seem to meet my needs or I do not understand them. Graphical user interface programming is new to me.
I know that fps is not the issue: if you run the code below with an updated VideoFileClip parameter, the video displays (without sound) at approximately 75% of the original frame rate, and the window will "not respond" if there is no multithreading. I have tried fps values of 30 and 60, but the for-loop method is still undesirable because other tasks will be performed elsewhere, and the computational load will only increase.
Here is a simplified version that reproduces the issue:
import sys
import threading
import time

import numpy as np
from moviepy.editor import VideoFileClip
from PyQt5.QtGui import QImage, QPainter
from PyQt5.QtWidgets import QApplication, QWidget


class Demo(QWidget):
    def __init__(self):
        super().__init__()
        self.video = VideoFileClip(r'C:\Users\jklew\Videos\Music\Fractalia.MP4')  # I am using a real video
        im_np = self.video.get_frame(0)
        self.image = QImage(im_np, im_np.shape[1], im_np.shape[0],
                            QImage.Format_RGB888)
        self.stopEvent = threading.Event()
        self.thread = threading.Thread(target=self.display_clip, args=())
        self.thread.start()

    def paintEvent(self, event):
        painter = QPainter(self)
        painter.drawImage(self.rect(), self.image)

    def display_clip(self, fps=60):
        clip = self.video
        img = clip.get_frame(0)  # returns numpy array of frame at time 0
        t0 = time.time()
        for t in np.arange(1.0 / fps, clip.duration - .001, 1.0 / fps):
            img = clip.get_frame(t)  # returns numpy array of frame at time t
            t1 = time.time()
            time.sleep(max(0, t - (t1 - t0)))  # loop at the specified framerate
            self.imdisplay(img)

    def imdisplay(self, img_array):
        # fill the widget with the image array
        self.image = QImage(img_array, img_array.shape[1], img_array.shape[0],
                            QImage.Format_RGB888)
        self.repaint()


def main():
    app = QApplication(sys.argv)
    demo = Demo()
    demo.show()
    sys.exit(app.exec_())


if __name__ == "__main__":
    main()
This problem extends to mutable audio data as well.
Try to do it with QPixmaps. Before you show the frames, save them into a list:
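A minimal sketch of that pre-conversion, reusing the moviepy clip from the question (build_pixmaps is an illustrative name, and QPixmap objects must be created on the GUI thread):

import numpy as np
from PyQt5.QtGui import QImage, QPixmap


def build_pixmaps(clip, fps=30):
    # Convert every frame to a QPixmap up front, before playback starts.
    # Run this on the GUI thread: QPixmap cannot be created outside it.
    pixmaps = []
    for t in np.arange(0, clip.duration - .001, 1.0 / fps):
        frame = clip.get_frame(t)  # numpy array, shape (height, width, 3), RGB
        # the explicit bytes-per-line argument avoids scanline padding
        # issues when width * 3 is not a multiple of 4
        image = QImage(frame, frame.shape[1], frame.shape[0],
                       frame.shape[1] * 3, QImage.Format_RGB888)
        # QPixmap.fromImage copies the data, so the pixmap stays valid
        # after the numpy array is garbage collected
        pixmaps.append(QPixmap.fromImage(image))
    return pixmaps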
and then, when you play the image sequence back:
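Again a sketch: a QTimer drives playback on the GUI thread, so no extra thread or cross-thread repaint() calls are needed (it assumes the widget stores the list as self.pixmaps and shows frames on a QLabel named self.label):

from PyQt5.QtCore import QTimer


def start_playback(self, fps=30):
    self.frame_index = 0
    self.timer = QTimer(self)
    self.timer.timeout.connect(self.show_next_frame)
    self.timer.start(int(1000 / fps))  # one timeout per frame interval


def show_next_frame(self):
    if self.frame_index >= len(self.pixmaps):
        self.timer.stop()  # end of clip
        return
    self.label.setPixmap(self.pixmaps[self.frame_index])
    self.frame_index += 1

Note the trade-off: pre-converting holds every decoded frame in memory as a pixmap, which can get large for long clips.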