Invoking the mouse function of OpenGL using Kinect


I am creating an app in C++ (OpenGL) using the Kinect. Whenever we click in an OpenGL window, the function invoked is:

void myMouseFunction( int button, int state, int mouseX, int mouseY ) 
{

}

But can we invoke them using Kinect? Maybe we have to use the depth buffer for it, but how?


There are 2 answers

datenwolf (best answer)

First: you don't "click in OpenGL", because OpenGL doesn't deal with user input. OpenGL is purely a rendering API. What you're referring to is probably a callback used with GLUT; GLUT is not part of OpenGL but a free-standing framework that also does some user-input event processing.

The Kinect does not generate input events. What the Kinect does is return a depth image of what it "sees". You need to process this depth image somehow. There are frameworks like OpenNI which process the depth image and translate it into gesture data or similar. You can then take such gesture data and process it further to interpret it as user input.

In your tags you referred to "openkinect", the open-source drivers for the Kinect. However, OpenKinect does no gesture extraction and interpretation; it only provides the depth image. You can of course perform simple tests on the depth data yourself, for example testing whether there's some object within the bounds of some defined volume and interpreting that as a sort of event.

F. P.

I think you are confusing what the Kinect really does. The Kinect feeds depth and video data to your computer, which then has to process it. OpenKinect only does very minimal processing for you -- no skeleton tracking. Skeleton tracking allows you to get a 3D representation of where each of your user's joints is.

If you're just doing some random hacking, you could perhaps switch to the KinectSDK -- with the caveat that you will only be able to develop and deploy on Windows.

KinectSDK works with OpenGL and C++, too, and it gives you the user's "skeleton".

OpenNI -- which is multiplatform and free as in freedom -- also supports skeleton tracking, but I haven't used it so I can't recommend it.

After you have some sort of skeleton tracking up and running, you can focus on the user's hands and process their movements to get your "mouse clicks" working. This will not go through GLUT's mouse handlers, though.