I want to generate a 3D model of a building or a room as I walk through it with a Kinect sensor. Is this possible?
My idea is to capture a 3D scan at intervals (say, one scan every 3 meters) and combine them. But how can I do that? How can I match and stitch these partial scans into one 3D model, like in the picture below?
This image was taken from here.
According to this, it is possible. Is there any code (C#, for example) I can refer to?
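From what I've read so far, the usual way to stitch overlapping scans is pairwise registration, e.g. ICP (Iterative Closest Point): repeatedly match each point to its nearest neighbour in the previous scan and move the cloud to reduce the distances. Below is the rough, simplified version I have in mind in C# (translation-only and brute-force matching, just to illustrate the idea; a real solution would also estimate rotation, e.g. via KinectFusion or PCL). Is this the right direction?

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of an ICP-style alignment step between two overlapping scans.
// Translation-only for readability; real ICP also solves for rotation.
struct Point3
{
    public double X, Y, Z;
    public Point3(double x, double y, double z) { X = x; Y = y; Z = z; }

    public double DistSq(Point3 o)
    {
        double dx = X - o.X, dy = Y - o.Y, dz = Z - o.Z;
        return dx * dx + dy * dy + dz * dz;
    }
}

class IcpSketch
{
    // Brute-force nearest neighbour; a real implementation would use a k-d tree.
    static Point3 Nearest(Point3 p, List<Point3> cloud) =>
        cloud.OrderBy(q => p.DistSq(q)).First();

    // One iteration: pair each source point with its closest target point,
    // then shift the whole source cloud by the average paired offset.
    static List<Point3> AlignStep(List<Point3> source, List<Point3> target)
    {
        double ox = 0, oy = 0, oz = 0;
        foreach (var p in source)
        {
            var q = Nearest(p, target);
            ox += q.X - p.X; oy += q.Y - p.Y; oz += q.Z - p.Z;
        }
        int n = source.Count;
        return source
            .Select(p => new Point3(p.X + ox / n, p.Y + oy / n, p.Z + oz / n))
            .ToList();
    }

    static void Main()
    {
        // Two tiny fake "scans": the second is the first shifted by 3 m in X,
        // as if taken after moving 3 m. Iterating AlignStep pulls them together.
        var scanA = new List<Point3>
        {
            new Point3(0, 0, 0), new Point3(1, 0, 0), new Point3(0, 1, 0)
        };
        var scanB = scanA.Select(p => new Point3(p.X + 3, p.Y, p.Z)).ToList();

        for (int i = 0; i < 5; i++)
            scanB = AlignStep(scanB, scanA);

        scanB.ForEach(p => Console.WriteLine($"({p.X:F2}, {p.Y:F2}, {p.Z:F2})"));
    }
}
```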
EDIT: I found mapping real world object using kinect, where the author managed to map a real-world 3D environment tagged with GPS locations. For my project I'm planning to mount the sensor on a remote-controlled car, but GPS won't work indoors. How can I manage this problem?
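Since GPS is out, the two substitutes I've seen mentioned for indoors are (a) wheel odometry from encoders on the car, which gives a rough pose by dead reckoning, and (b) using the scan registration itself as the position estimate (essentially SLAM, which I believe is what KinectFusion does). Here is a minimal dead-reckoning sketch for a differential-drive car; all the constants and the `Update` inputs are hypothetical placeholders that I would calibrate on the actual car:

```csharp
using System;

// Minimal dead-reckoning sketch for a differential-drive RC car.
// WheelRadius, TicksPerRev and WheelBase are made-up placeholder values.
class DeadReckoning
{
    const double WheelRadius = 0.03;   // wheel radius in metres (hypothetical)
    const double TicksPerRev = 360.0;  // encoder ticks per wheel revolution
    const double WheelBase   = 0.15;   // distance between the wheels, metres

    double x, y, heading;              // pose estimate in the room frame

    // Update the pose from the ticks counted on each wheel since last call.
    public void Update(int leftTicks, int rightTicks)
    {
        double dLeft   = 2 * Math.PI * WheelRadius * leftTicks  / TicksPerRev;
        double dRight  = 2 * Math.PI * WheelRadius * rightTicks / TicksPerRev;
        double dCenter = (dLeft + dRight) / 2.0;

        heading += (dRight - dLeft) / WheelBase;  // rotation of the car
        x += dCenter * Math.Cos(heading);         // advance along the heading
        y += dCenter * Math.Sin(heading);
    }

    public override string ToString() =>
        $"x={x:F2} m, y={y:F2} m, heading={heading * 180 / Math.PI:F1}°";

    static void Main()
    {
        var pose = new DeadReckoning();
        for (int step = 0; step < 10; step++)
            pose.Update(leftTicks: 100, rightTicks: 105); // gentle left curve
        Console.WriteLine(pose);
    }
}
```

My understanding is that odometry like this drifts over distance, so it would only seed the scan alignment above, which then corrects the pose against the previous scan. Is that a workable plan, or is there a better indoor substitute for GPS?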