I am creating an app in C++ (OpenGL) using the Kinect. Whenever we click in OpenGL, this function is invoked:
void myMouseFunction( int button, int state, int mouseX, int mouseY )
{
}
But can we invoke it using the Kinect? Maybe we have to use the depth buffer for that, but how?
First: You don't "click in OpenGL", because OpenGL doesn't deal with user input. OpenGL is purely a rendering API. What you're referring to is probably a callback used with GLUT; GLUT is not part of OpenGL, but a freestanding framework that also does some user input event processing.
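For reference, this is roughly how GLUT wires up that callback; the window setup is standard boilerplate and not Kinect-specific:

#include <GL/glut.h>

// The callback from your question: GLUT, not OpenGL, invokes it
// whenever a mouse button event occurs in the window.
void myMouseFunction(int button, int state, int mouseX, int mouseY)
{
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("Kinect demo");
    glutDisplayFunc(display);
    glutMouseFunc(myMouseFunction);  // the registration that makes GLUT call it
    glutMainLoop();
    return 0;
}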
The Kinect does not generate input events. What it does is return a depth image of what it "sees", and you need to process that depth image somehow. There are frameworks like OpenNI which process the depth image and translate it into gesture data or similar. You can then process that gesture data further and interpret it as user input.
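As a rough illustration only (this is not actual OpenNI API; the HandSample struct and its fields are hypothetical stand-ins for whatever hand-tracking output your framework gives you), you could forward a tracked "push" gesture to your existing callback like this:

#include <GL/glut.h>

// Hypothetical hand-tracking result: these fields are assumptions; in
// practice they would come from a framework such as OpenNI.
struct HandSample {
    int  x, y;    // hand position already projected to window coordinates
    bool pushed;  // true while the hand performs a "push" gesture
};

void myMouseFunction(int button, int state, int mouseX, int mouseY);

// Call this once per tracker update; it turns a push gesture into the
// same button-down/button-up calls GLUT would make for a real click.
void forwardHandAsMouse(const HandSample &hand)
{
    static bool wasPushed = false;

    if (hand.pushed && !wasPushed)
        myMouseFunction(GLUT_LEFT_BUTTON, GLUT_DOWN, hand.x, hand.y);
    else if (!hand.pushed && wasPushed)
        myMouseFunction(GLUT_LEFT_BUTTON, GLUT_UP, hand.x, hand.y);

    wasPushed = hand.pushed;
}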
In your tags you referred to "openkinect", the open source drivers for the Kinect. However, OpenKinect does no gesture extraction and interpretation; it only provides the depth image. You can of course perform simple tests on the depth data yourself, for example testing if there's some object within the bounds of some defined volume and interpreting that as a sort of event.
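A minimal sketch of such a volume test, assuming you already receive a 640x480 buffer of raw depth values (as libfreenect's depth callback delivers them; the region bounds and thresholds here are made-up example values):

#include <cstdint>

const int WIDTH  = 640;   // dimensions of the Kinect depth frame
const int HEIGHT = 480;

// Returns true if enough pixels inside a rectangular image region fall
// within a near/far depth band, i.e. "something is inside the volume".
bool objectInVolume(const uint16_t *depth,
                    int x0, int y0, int x1, int y1,     // region in pixels
                    uint16_t nearRaw, uint16_t farRaw,  // raw depth band
                    int minHits)                        // noise threshold
{
    int hits = 0;
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x) {
            uint16_t d = depth[y * WIDTH + x];
            if (d >= nearRaw && d <= farRaw)
                ++hits;
        }
    return hits >= minHits;
}

You would call this on every depth frame and, on a false-to-true transition, treat it like a button press, much like the gesture forwarding shown above.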