I have an ASUS Xtion Pro Live camera connected to a Raspberry Pi. I have written Python code that grabs frames from the camera, displays them, and saves them.
import time
import cv2
import cv2.cv as cv

def get_frames():
    # Open the OpenNI device (the Xtion) through OpenCV's OpenNI backend
    capture = cv2.VideoCapture(cv.CV_CAP_OPENNI)
    capture.set(cv.CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE,
                cv.CV_CAP_OPENNI_VGA_30HZ)
    while True:
        if not capture.grab():
            print "Unable to grab frames from camera"
            break
        okay, color_image = capture.retrieve(0, cv.CV_CAP_OPENNI_BGR_IMAGE)
        if not okay:
            print "Unable to retrieve color image from device"
            break
        cv2.imshow("rgb camera", color_image)
        name = "images/" + str(time.time()) + ".png"
        cv2.imwrite(name, color_image)
        if cv2.waitKey(10) == 27:  # Esc to quit
            break
    capture.release()
I want to use similar code on my computer, but in this case I need to access the Raspberry Pi and use the camera connected to it: get real-time video data from the camera in a similar way and use it in my code.

How can I manage to do that?
It looks like you're only using the RGB stream, which shouldn't be a huge amount of data. If you're planning to stream depth + RGB, you should look for a way to compress the data before sending it over the network, then decompress it at the other end.
I remember this is a problem people were tackling when the Kinect came out. For example, check out Fabrizio Pece's paper, Adapting Standard Video Codecs for Depth Streaming (PDF). You should be able to find similar papers and implementations.
If you're not interested in streaming depth and only want RGB, more like a webcam, I imagine there are Python libraries that will let you create an HTTP or RTP stream on your Raspberry Pi which you can then read on your other computer.