I'm using CV2 (OpenCV) for Python, and the Pyglet Python libraries to create a small application which will display live video from a webcam and have some text or static images overlaid. I've already made an application with CV2 that just displays the webcam image in a frame, but now I'd like to get that frame inside a pyglet window.
Here's what I've cobbled together so far:
import pyglet
from pyglet.window import key
import cv2
import numpy

window = pyglet.window.Window()
camera = cv2.VideoCapture(0)

def getCamFrame(color, camera):
    retval, frame = camera.read()
    if not color:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame = numpy.rot90(frame)
    return frame

frame = getCamFrame(True, camera)
video = pyglet.resource.media(frame, streaming=True)

@window.event
def on_key_press(symbol, modifiers):
    if symbol == key.ESCAPE:
        print 'Application Exited with Key Press'
        window.close()

@window.event
def on_draw():
    window.clear()
    video.blit(10, 10)

pyglet.app.run()
When run, I get the following error:
Traceback, line 20 in <module>
video = pyglet.resource.media(frame, streaming=True)
TypeError: unhashable type: 'numpy.ndarray'
I'm also open to other options that would let me display text over my live video. I originally used pygame, but I'll eventually need multiple monitor support, which is why I'm using pyglet.
There are a number of problems with your approach, but the trickiest part is converting numpy arrays to textures. I use the approach below, which I discovered at some point elsewhere on SO. In short, you have to use the ctypes types and structures exposed by pyglet.gl to generate an array of GLubytes, and then copy the contents of the image (a numpy array) into it. Because you then have a 1-d array of values, you have to tell Pyglet how to build the image, pImg here, by specifying the pixel format and pitch. If you get the example below working, pImg should update on each call of on_draw, and you should be done.
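A minimal sketch of that conversion follows, assuming a color BGR frame from cv2.VideoCapture; the helper name frame_to_image and the blit coordinates are illustrative, not from any particular source:

import pyglet
from pyglet.gl import GLubyte
from pyglet.window import key
import cv2
import numpy

window = pyglet.window.Window()
camera = cv2.VideoCapture(0)

def getCamFrame(camera):
    # grab a frame and convert OpenCV's BGR channel order to RGB for pyglet
    retval, frame = camera.read()
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    return frame

def frame_to_image(frame):
    # pack the numpy array into a ctypes array of GLubytes
    height, width = frame.shape[:2]
    raw = frame.astype('uint8').ravel()
    data = (GLubyte * raw.size).from_buffer_copy(raw.tobytes())
    # a negative pitch tells pyglet the rows are stored top-to-bottom,
    # so the image is not drawn upside down
    return pyglet.image.ImageData(width, height, 'RGB', data, pitch=-width * 3)

@window.event
def on_key_press(symbol, modifiers):
    if symbol == key.ESCAPE:
        window.close()

@window.event
def on_draw():
    window.clear()
    pImg = frame_to_image(getCamFrame(camera))
    pImg.blit(0, 0)

pyglet.app.run()

Note that with a negative pitch you shouldn't need the numpy.rot90 call from your original code, since pyglet is told to read the rows from the top down.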