Languages to develop applications for the Xbox 360 Kinect


I know this sounds stupid and I'm probably very late to the party, but here's the thing: I want to program a gesture recognition application (along the lines of this hand detection or this actual finger detection) for the Xbox 360 Kinect. The SDK (version 1.8) is found, installed and works, and preliminary research is done - I only forgot to check which language to write the code in. Following the link from the SDK to the documentation would be the obvious first step, but unfortunately it is a dead end.
From the provided examples it seems to be either C++ or C#, although some old posts also claim Java. My question is: is there documentation not tied to the SDK, and which pitfalls are there with regard to developing in this specific case under C++/C#/Java? A post from 2011 barely covers the beginning.

Addendum: On further looking I was pointed to the Samples page of the developer toolkit - which can be reached, yet all listed and linked examples are dead ends too.

Addendum: For reference, I used these instructions - ultimately proving futile.

Found a version of NiTE here

1 Answer

Answered by George Profenza (best answer)

I've provided this answer in the past.

Personally I've used the Xbox 360 sensor with OpenNI the most (because it's cross-platform). The NITE middleware, alongside OpenNI, also provides some basic hand detection and even gesture detection (swipes, a circle gesture, a "button" push, etc.).

While OpenNI is open source, NITE isn't, so you'd be limited to what it provides.
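To give a feel for what that swipe detection boils down to, here is a toy, self-contained sketch - this is not the NITE API; the function name and threshold are made up for illustration - that classifies a horizontal swipe from a sequence of tracked hand x-positions, the kind of per-frame palm coordinates a hand tracker reports:

```cpp
#include <string>
#include <vector>

// Toy swipe classifier (illustrative only, not the NITE API).
// Takes hand x-positions in millimetres, one per frame, and decides
// whether the overall motion was a left swipe, a right swipe, or neither.
std::string classifySwipe(const std::vector<float>& xPositions,
                          float minTravelMm = 200.0f) {
    if (xPositions.size() < 2) return "none";
    // Net horizontal travel over the whole trajectory.
    float travel = xPositions.back() - xPositions.front();
    if (travel >  minTravelMm) return "swipe_right";
    if (travel < -minTravelMm) return "swipe_left";
    return "none";
}
```

For example, `classifySwipe({0.0f, 120.0f, 260.0f})` yields `"swipe_right"`. Real middleware does considerably more (velocity and timing checks, per-hand tracking IDs), but the principle is the same: reduce a tracked trajectory to a discrete gesture label.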

The links you've shared use OpenCV. You can install OpenNI and compile OpenCV from source with OpenNI support. Alternatively, you can manually wrap the OpenNI frame data into an OpenCV cv::Mat and carry on with the OpenCV operations from there.

Here's a basic example that uses OpenNI to get the depth data and passes that to OpenCV:

#include <OpenNI.h>

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/videoio/videoio.hpp"

#include <iostream>

using namespace cv;
using namespace std;

int main() {
    // setup OpenNI
    openni::Status rc = openni::STATUS_OK;
    openni::Device device;
    openni::VideoStream depth, color;
    const char* deviceURI = openni::ANY_DEVICE;
    rc = openni::OpenNI::initialize();

    printf("After initialization:\n%s\n", openni::OpenNI::getExtendedError());
    rc = device.open(deviceURI);
    if (rc != openni::STATUS_OK)
    {
        printf("Device open failed:\n%s\n", openni::OpenNI::getExtendedError());
        openni::OpenNI::shutdown();
        return 1;
    }

    rc = depth.create(device, openni::SENSOR_DEPTH);
    if (rc == openni::STATUS_OK)
    {
        rc = depth.start();
        if (rc != openni::STATUS_OK)
        {
            printf("Couldn't start depth stream:\n%s\n", openni::OpenNI::getExtendedError());
            depth.destroy();
        }
    }
    else
    {
        printf("Couldn't find depth stream:\n%s\n", openni::OpenNI::getExtendedError());
    }

    if (!depth.isValid())
    {
        printf("No valid depth stream. Exiting\n");
        openni::OpenNI::shutdown();
        return 2;
    }

    openni::VideoMode vm = depth.getVideoMode();
    int cols, rows;
    cols = vm.getResolutionX();
    rows = vm.getResolutionY();

    openni::VideoFrameRef frame;
    // main loop
    for (;;) {
        // read OpenNI frame
        depth.readFrame(&frame);
        // get depth pixel data
        openni::DepthPixel* dData = (openni::DepthPixel*)frame.getData();
        // wrap the data in an OpenCV Mat
        Mat depthMat(rows, cols, CV_16UC1, dData);
        // for visualisation only remap depth values
        Mat depthShow;
        const float scaleFactor = 0.05f;
        depthMat.convertTo(depthShow, CV_8UC1, scaleFactor);
        if(!depthShow.empty())
        {
            imshow("depth", depthShow);
        }
        
        if (waitKey(30) == 27) break; // ESC to quit

    }
    // OpenNI exit cleanup
    depth.stop();
    depth.destroy();
    openni::OpenNI::shutdown();
    return 0;
}
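A note on that `scaleFactor` of 0.05: the Kinect reports depth as 16-bit values in millimetres, and `convertTo` with `CV_8UC1` multiplies each pixel by the factor and saturates into 0..255, so 0.05 maps roughly 5 metres onto the full grey range (anything farther clips to white). Here is approximately that per-pixel arithmetic in plain C++ - a sketch of what the `convertTo` call does, with the rounding/clamping simplified compared to `cv::saturate_cast`:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Approximate equivalent of depthMat.convertTo(depthShow, CV_8UC1, 0.05):
// scale raw 16-bit depth (millimetres) into an 8-bit grey value,
// rounding and clamping to [0, 255].
std::vector<uint8_t> depthToDisplay(const std::vector<uint16_t>& depthMm,
                                    float scaleFactor = 0.05f) {
    std::vector<uint8_t> out;
    out.reserve(depthMm.size());
    for (uint16_t d : depthMm) {
        float v = d * scaleFactor;
        int rounded = static_cast<int>(v + 0.5f); // round to nearest
        out.push_back(static_cast<uint8_t>(std::min(255, std::max(0, rounded))));
    }
    return out;
}
```

So a pixel 1 metre away (1000 mm) displays as grey level 50, while anything past about 5.1 m saturates at 255. Pick the factor to match the depth range you care about; for near-range hand work a larger factor spreads the useful range over more grey levels.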

One of the tutorials you've linked to (https://github.com/royshil/OpenHPE) uses libfreenect, which is another great cross-platform option for interfacing with the old Kinect.

FWIW, the Kinect for Xbox One has better depth data, deals better with direct sunlight, and its SDK supports custom gesture recognition (see this tutorial for example).