Finding Depth value for Color Pixels in Kinect


I have a point (X, Y) among the 1920×1080 color pixels, and I don't know how to map that point into the 512×424 depth data. I know I need to use the coordinate mapper, but I can't figure out how to do it. I'm a beginner with Kinect and I'm using C#. Could someone please help me with this?


There are 2 answers

Vangos (BEST ANSWER)

If you want to map FROM the Color frame TO the Depth frame, you need to use the method MapColorFrameToDepthSpace:

ushort[] depthData = ... // Data from the Depth Frame (512 x 424 values)
DepthSpacePoint[] result = new DepthSpacePoint[1920 * 1080]; // One DepthSpacePoint per Color pixel

_sensor.CoordinateMapper.MapColorFrameToDepthSpace(depthData, result);

You need to provide this method with 2 parameters:

  1. The complete depth frame data (like this).
  2. A DepthSpacePoint array with one element per color pixel (1920 × 1080).

Given those parameters, the method fills the array with the depth space coordinates that correspond to each color pixel, as sketched below.
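
For example, to get the depth value at a specific color pixel (which is what the question asks), you can index the result array with that pixel and then use the returned DepthSpacePoint to index the depth data. This is a minimal sketch; colorX and colorY are just an example pixel, not part of the SDK:

int colorX = 960, colorY = 540;              // example color pixel (X, Y)
int colorIndex = colorY * 1920 + colorX;     // row-major index into the 1920x1080 color frame

DepthSpacePoint point = result[colorIndex];  // where that color pixel falls in depth space

// Color pixels with no corresponding depth data (e.g. outside the depth camera's
// field of view) come back as negative infinity, so check before indexing.
if (!float.IsNegativeInfinity(point.X) && !float.IsNegativeInfinity(point.Y))
{
    int depthX = (int)point.X;
    int depthY = (int)point.Y;
    ushort depthInMillimeters = depthData[depthY * 512 + depthX];
    Console.WriteLine("Depth at ({0}, {1}) is {2} mm", colorX, colorY, depthInMillimeters);
}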

Otherwise, Rafaf's answer is what you need.

Rafaf Tahsin

Following is an example in which I've converted a CameraSpacePoint to a ColorSpacePoint, and the same CameraSpacePoint to a DepthSpacePoint.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Kinect;

namespace Coordinate_Mapper
{
    class Program
    {
        public static KinectSensor ks;
        public static MultiSourceFrameReader msfr;
        public static Body[] bodies;
        public static CameraSpacePoint[] cameraSpacePoints;
        public static DepthSpacePoint[] depthSpacePoints;
        public static ColorSpacePoint[] colorSpacePoints;

        static void Main(string[] args)
        {
            ks = KinectSensor.GetDefault();
            ks.Open();
            bodies = new Body[ks.BodyFrameSource.BodyCount];
            cameraSpacePoints = new CameraSpacePoint[1];
            colorSpacePoints = new ColorSpacePoint[1];
            depthSpacePoints = new DepthSpacePoint[1];

            msfr = ks.OpenMultiSourceFrameReader(FrameSourceTypes.Depth | FrameSourceTypes.Color | FrameSourceTypes.Body);
            msfr.MultiSourceFrameArrived += msfr_MultiSourceFrameArrived;

            Console.ReadKey();
        }

        static void msfr_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
        {
            if (e.FrameReference == null) return;
            MultiSourceFrame multiframe = e.FrameReference.AcquireFrame();
            if (multiframe == null) return;

            if (multiframe.BodyFrameReference != null)
            {
                using (var bf = multiframe.BodyFrameReference.AcquireFrame())
                {
                    if (bf == null) return; // the frame may no longer be available
                    bf.GetAndRefreshBodyData(bodies);
                    foreach (var body in bodies)
                    {
                        if (!body.IsTracked) continue;
                        // CameraSpacePoint of a tracked joint (JointType.SpineBase here)
                        cameraSpacePoints[0] = body.Joints[JointType.SpineBase].Position;
                        Console.WriteLine("{0} {1} {2}", cameraSpacePoints[0].X, cameraSpacePoints[0].Y, cameraSpacePoints[0].Z);

                        // CameraSpacePoints => ColorSpacePoints
                        ks.CoordinateMapper.MapCameraPointsToColorSpace(cameraSpacePoints, colorSpacePoints);
                        Console.WriteLine("ColorSpacePoint : {0} {1}", colorSpacePoints[0].X, colorSpacePoints[0].Y);

                        // CameraSpacePoints => DepthSpacePoints
                        ks.CoordinateMapper.MapCameraPointsToDepthSpace(cameraSpacePoints, depthSpacePoints);
                        Console.WriteLine("DepthSpacePoint : {0} {1}", depthSpacePoints[0].X, depthSpacePoints[0].Y);
                    }
                }
            }
        }
    }
}

N.B.:

  1. I had to store the CameraSpacePoint, DepthSpacePoint, and ColorSpacePoint values in arrays because the methods in the CoordinateMapper class take arrays as parameters (single-point variants also exist; see the sketch after this list). For more, please check out the CoordinateMapper Method Reference.

  2. This blog post may help you: Understanding Kinect Coordinate Mapping.
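
On that first note: if I'm reading the CoordinateMapper reference correctly, there are also single-point overloads (MapCameraPointToColorSpace and MapCameraPointToDepthSpace), so a single joint can be mapped without the intermediate arrays. A minimal, hedged sketch reusing ks and body from the example above:

// Hedged sketch: assumes ks (KinectSensor) and body (a tracked Body) as in the example above.
CameraSpacePoint head = body.Joints[JointType.Head].Position;

ColorSpacePoint colorPoint = ks.CoordinateMapper.MapCameraPointToColorSpace(head);
DepthSpacePoint depthPoint = ks.CoordinateMapper.MapCameraPointToDepthSpace(head);

Console.WriteLine("Color: {0} {1}  Depth: {2} {3}", colorPoint.X, colorPoint.Y, depthPoint.X, depthPoint.Y);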