Kinect v2 normalizing depth values


I am using Kinect v2 to capture depth frames. In Kinect SDK 1.x code samples in C++, I saw this line:

BYTE depth = 255 - (BYTE)(256*realDepth/0x0fff);

I want to know what the purpose of this line is, and whether I also need it for Kinect v2. If I do, my code is in C#, and I get an error on the multiplication 256*realDepth:
Error: Operator '*' cannot be applied to operands of type 'int' and 'ushort'.

For those who downvote, please explain the reason.


1 Answer

Answered by Vito Gentile (accepted answer)

That line of code normalizes depth values, which are encoded in 11 bits in the C++ API. It converts the 11-bit representation to an 8-bit one, which makes it possible to display the depth map as a grayscale image.

Anyway, you don't need that line of code if you are developing your application in C#, because the API can do the conversion for you.