Pass ffmpeg Stream to OpenCV


I would like to use a pipe to feed the stream from ffmpeg into cv2 so that I can detect and mark faces in the stream, and then publish the result as a second stream:

one stream without face detection (withoutfacedetect) and one with it (withfacedetect).

raspivid -w 1920 -h 1080 -fps 30 -o - -t 0 -vf -hf -b 6000000 | ffmpeg -f h264 -i - -vcodec copy -g 50 -strict experimental -f tee -map 0:v "[f=flv]rtmp://xx.xx.xx.xx/live/withoutfacedetect |[f=h264]pipe:1" > test.mp4

I then read up on cv2 and came across this article:

https://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Image_Object_Detection_Face_Detection_Haar_Cascade_Classifiers.php

I then ran the script on a picture of myself and was amazed to see a square drawn around my face.

But now back to business. What is the best way to do this?

Thanks to @Mark Setchell. I forgot to mention that I'm using a Raspberry Pi 4.


1 Answer

Mark Setchell (best answer)

I'm still not 100% certain what you are really trying to do, and have more thoughts than I can express in a comment. I have not tried all of what I think you are trying to do, and I may be over-thinking it, but if I put down my thought-train, maybe others will add in some helpful thoughts/corrections...

Ok, the video stream comes from the camera into the Raspberry Pi initially as RGB or YUV. It seems wasteful to have ffmpeg encode that to h264 just to pass it to OpenCV on its stdin, when AFAIK OpenCV cannot easily decode h264 from stdin back into the BGR frames it naturally wants for face detection.

So, I think I would alter the parameters to raspivid so that it generates RGB data-frames, and remove all the h264 bitrate stuff i.e.

raspivid -rf rgb -w 1920 -h 1080 -fps 30 -o - | ffmpeg ...

Now we have RGB coming into ffmpeg, so you need to use tee and map much as you already do: send the RGB to OpenCV on its stdin, and h264-encode the second stream to rtmp as before.
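One possible shape for that command (an untested sketch, the sizes and frame rate are taken from the question, the rtmp address is the placeholder from the question, and facedetect.py is a hypothetical name for the OpenCV script). Note I've used two explicit ffmpeg outputs rather than the tee muxer, since tee duplicates a single encoded stream and here the two outputs need different codecs:

```shell
# Sketch, not tested on a Pi: raspivid emits raw rgb24 frames; ffmpeg
# h264-encodes one copy to the rtmp stream and passes the raw frames
# unchanged to stdout for the OpenCV script. Width, height and fps
# must match on both sides of the pipe.
raspivid -rf rgb -w 1920 -h 1080 -fps 30 -t 0 -o - | \
ffmpeg -f rawvideo -pixel_format rgb24 -video_size 1920x1080 -framerate 30 -i - \
       -c:v libx264 -f flv rtmp://xx.xx.xx.xx/live/withoutfacedetect \
       -c:v copy -f rawvideo pipe:1 | \
python3 facedetect.py
```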

Then in OpenCV, you just need to do a read() from stdin of 1920x1080x3 bytes to get each frame. The frame will be in RGB, but you can use:

cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)

to re-order the channels to BGR as OpenCV requires.

When you read the data from stdin you need to do:

frame = sys.stdin.buffer.read(1920*1080*3)

rather than:

frame = sys.stdin.read(1920*1080*3)

which mangles binary data such as images.