Convert 4 tracking points to a 3D representation


I have built this gizmo shape with LEDs. The nice diffuse light comes from Styrofoam balls.

gizmo
(source: rapidus.net)

Then, I wrote a Windows C++ program that starts a connected Kinect, detects the light balls, and puts trackers on the screen (using SDL). If the program does not detect a Kinect, it falls back to your webcam and tracks the light balls there instead (using OpenCV).

Kinect:

kinect
(source: rapidus.net)

Webcam:

webcam
(source: rapidus.net)

So I end up with 4 coordinates (x, y) that are fed to me continuously.

I want to create a 3D representation from those coordinates and obtain a vector. I would do a reset positioning, then compare rotation and translation against that reference position to evaluate changes in the gizmo's pose.
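To illustrate what I mean by "compare against the reset position": if I could get three of the four tracker points into 3D camera space, I imagine building an orthonormal frame from them and comparing the frame at reset time with the current one. A rough sketch of the frame construction (all names are mine, not from any SDK; this assumes the 3D positions are already known):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Vec3 normalize(Vec3 v) {
    double n = std::sqrt(dot(v, v));
    return {v.x / n, v.y / n, v.z / n};
}

// Build an orthonormal, right-handed frame from three gizmo points.
struct Frame { Vec3 ex, ey, ez; };

Frame frameFromPoints(Vec3 p0, Vec3 p1, Vec3 p2) {
    Vec3 ex = normalize(sub(p1, p0));             // first axis along p0 -> p1
    Vec3 ez = normalize(cross(ex, sub(p2, p0)));  // normal to the points' plane
    Vec3 ey = cross(ez, ex);                      // completes the frame
    return {ex, ey, ez};
}
```

Comparing the reset frame with the current frame would then give the rotation, and the shift of the points' centroid would give the translation, if I understand the idea correctly.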

We know from Wikipedia that the Kinect's low-resolution mode is 640x480 and that the sensor has an angular field of view of 57° horizontally and 43° vertically. So I will stick with the Kinect at first, since its parameters are known for sure.
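From what I gather, those numbers should let me treat the Kinect as a pinhole camera: the focal length in pixels comes out of the field of view, and then a pixel plus a depth can be turned back into a 3D point. Here is my attempt at that math (function names are mine; I am not certain this is the standard way to do it):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

constexpr double kPi = 3.14159265358979323846;

// Focal length in pixels from a field of view and an image dimension:
// f = (size / 2) / tan(fov / 2)
double focalFromFov(double fovDegrees, int sizePixels) {
    double fovRad = fovDegrees * kPi / 180.0;
    return (sizePixels / 2.0) / std::tan(fovRad / 2.0);
}

// Back-project pixel (u, v) at depth z into camera space, given the
// focal lengths (fx, fy) and the principal point (cx, cy).
Vec3 backProject(double u, double v, double z,
                 double fx, double fy, double cx, double cy) {
    return { (u - cx) * z / fx, (v - cy) * z / fy, z };
}
```

With 57° over 640 pixels this gives a focal length of roughly 589 pixels horizontally (and about 609 vertically from 43° over 480), and a pixel at the image center back-projects straight down the optical axis.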

I have tried to understand how to do it from threads like this one, but frankly, I did not get much out of it: How do I reverse-project 2D points into 3D?

So, if someone has basic information on how to start tackling this challenge, I would love to hear it. I know a bit about matrices and 3D math (dot product, cross product), but I am no mathematician. Please keep it simple if possible. Thanks.
