I'm trying to build a system that detects emotions in online meetings using Python. I have already created a deep learning model for emotion detection.
I'm using this code to view the screen:
import time

import cv2
import mss
import numpy

with mss.mss() as sct:
    # Capture a 1000x1000 region from the top-left corner of the screen.
    monitor = {"top": 0, "left": 0, "width": 1000, "height": 1000}

    while True:
        last_time = time.time()

        # Grab the screen region and convert it to a NumPy array.
        img = numpy.array(sct.grab(monitor))

        cv2.imshow("OpenCV/Numpy normal", img)
        print(f"fps: {1 / (time.time() - last_time)}")

        # Press "q" to quit.
        if cv2.waitKey(25) & 0xFF == ord("q"):
            cv2.destroyAllWindows()
            break
Is there any way I can connect my model to this screen capture and see the emotions on the live screen?
The output should be the faces on the screen with the emotion text drawn on them.
You can pass the img variable to your model and get the emotions (as text) during inference. After that, it's a matter of drawing the text onto the image with OpenCV's putText and then displaying the updated image with imshow.
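
Here's a minimal sketch of how the pieces could fit together. It assumes a Keras-style model trained on 48x48 grayscale face crops and an EMOTIONS label list; those names, the input size, and the preprocessing are placeholders you'd adapt to your own model. Face detection uses OpenCV's bundled Haar cascade, since your model presumably expects a cropped face rather than the whole screen:

import cv2
import mss
import numpy

# Hypothetical: load your trained model and its label list here, e.g.
# model = tensorflow.keras.models.load_model("emotion_model.h5")
# EMOTIONS = ["angry", "happy", "neutral", ...]  # must match your model's classes

# OpenCV ships Haar cascades; this one finds frontal faces in a frame.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

with mss.mss() as sct:
    monitor = {"top": 0, "left": 0, "width": 1000, "height": 1000}

    while True:
        # mss returns raw BGRA pixels; drop the alpha channel for OpenCV.
        frame = numpy.array(sct.grab(monitor))
        frame = cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            # Crop the face and preprocess to whatever shape your model expects
            # (48x48 grayscale, scaled to [0, 1], is assumed here).
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
            face = face.astype("float32") / 255.0
            face = face.reshape(1, 48, 48, 1)

            # Hypothetical inference call; adapt to your framework.
            # emotion = EMOTIONS[numpy.argmax(model.predict(face))]
            emotion = "happy"  # placeholder so this sketch runs without a model

            # Draw a box around the face and the predicted emotion above it.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, emotion, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

        cv2.imshow("Emotion detection", frame)
        if cv2.waitKey(25) & 0xFF == ord("q"):
            cv2.destroyAllWindows()
            break

detectMultiScale tends to be the bottleneck in a loop like this; if the display feels sluggish, you can run face detection on a downscaled copy of the frame, or only every few frames, and reuse the boxes in between.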