I hope you are loud and proud.
I'm a newbie to PyAV, and I'm using aiortc as a WebRTC media server. In an aiortc live session I have av.VideoFrame objects available for each video frame, and I want to create HLS video segments from those frames in real time.
As you can see in this project, they pipe OpenCV video frame bytes to the FFmpeg CLI for HLS streaming.
My question is: how can I use PyAV/Python to consume av.VideoFrame objects and extract 2-second video segments (60 frames each) for HLS streaming? Or is there any Python package appropriate for assembling VideoFrames into an HLS stream?
Thanks in advance.
You are mixing up some terms. I assume that HLS refers to HTTP Live Streaming. WebRTC is a different protocol for sending video (peer-to-peer).
aiortc is not a media server, even though you can build something similar with it. You can use aiortc as a WebRTC client that sends a video track to a browser (another WebRTC client). The connection can be established using an HTTP server as the signaling server. In the following I assume that you want to stream your video using aiortc (WebRTC).

How to transform frames of a track (e.g. from webcam)?
If I understand you correctly, your actual question is "How to transform frames of a track (e.g. from a webcam)?". Here is a runnable example that subclasses MediaStreamTrack to implement VideoTransformTrack. However, that example receives the webcam image from the browser (the other client). Have a look at this example instead, which uses the webcam of your server (the Python application). Just add the video track returned by create_local_tracks() to the RTCPeerConnection (pc in the snippet above) via pc.addTrack(video).
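The transform-track pattern can be sketched with plain asyncio (a minimal stand-in so it runs without aiortc installed; SourceTrack, the string "frames", and str.upper are illustrative placeholders — in real aiortc code you would subclass MediaStreamTrack, and recv() would await the wrapped track and return av.VideoFrame objects):

```python
import asyncio

class SourceTrack:
    """Stands in for an incoming track (e.g. the webcam)."""
    def __init__(self, frames):
        self._frames = iter(frames)

    async def recv(self):
        # In aiortc this would produce the next av.VideoFrame.
        return next(self._frames)

class TransformTrack:
    """Wraps another track and transforms each frame in recv()."""
    kind = "video"

    def __init__(self, track, transform):
        self.track = track
        self.transform = transform

    async def recv(self):
        frame = await self.track.recv()  # pull the next frame from the source
        return self.transform(frame)     # per-frame processing goes here

async def main():
    src = SourceTrack(["frame0", "frame1"])
    out = TransformTrack(src, str.upper)  # str.upper stands in for real image processing
    return [await out.recv(), await out.recv()]

print(asyncio.run(main()))  # ['FRAME0', 'FRAME1']
```

The key design point carries over directly: the transformed track exposes the same `recv()` interface as its source, so the consumer (aiortc's sender) never needs to know a transform is in the middle.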
How to create track from frames?
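One common way to feed your own frames into a track is a queue-backed track: your session pushes frames in, and `recv()` awaits them. Below is a minimal stdlib sketch of that idea (QueueTrack and the string "frames" are illustrative, not aiortc API; a real implementation would subclass MediaStreamTrack and set pts/time_base on each av.VideoFrame):

```python
import asyncio

class QueueTrack:
    """Track-like object that yields frames pushed in from elsewhere."""
    kind = "video"

    def __init__(self):
        self._queue = asyncio.Queue()

    def push(self, frame):
        # Called by your producer (e.g. the aiortc session) for each frame.
        self._queue.put_nowait(frame)

    async def recv(self):
        # Consumer awaits the next available frame.
        return await self._queue.get()

async def main():
    track = QueueTrack()
    for i in range(3):
        track.push(f"frame{i}")
    return [await track.recv() for _ in range(3)]

print(asyncio.run(main()))  # ['frame0', 'frame1', 'frame2']
```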