Monday, March 24, 2014

Drawing Kinect V2 Body Joints

So far the posts in my Kinect for Windows v2 series have concentrated on the Depth, Color and BodyIndex data sources. Today I want to highlight how to access the Body data stream.
[Image: ColorExample]
This is an early preview of the new Kinect for Windows, so the device, software and documentation are all preliminary and subject to change.
The new Kinect sensor and SDK have improved skeleton tracking so significantly that the Microsoft team has renamed their Skeleton class to Body. In addition to being able to track six skeletons (instead of two), the API is shaping up to include features for Activities (facial features such as left eye open), Expressions (happy or neutral), Appearance (wearing glasses), Leaning (left or right) and the ability to track whether the user is actively looking at the sensor. I’ll dive into those in upcoming posts, but for now I want to focus on joint information.
Generally speaking, the body-tracking capability and joint positions are improved over previous versions of the SDK. Specifically, the positions for hips and shoulders are more accurate. Plus, version 2 introduces five new joints (Neck, HandTipLeft, ThumbLeft, HandTipRight, ThumbRight), bringing the total number of joints to 25.
[Image: Body-Joints]
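The new joints surface as additional JointType values. Once you have a tracked Body (shown below), they can be read like any other joint. Here’s a quick sketch using the JointType names from the current SDK drop (which, like everything else in the preview, could still change):

// joints added in v2: Neck, HandTipLeft, ThumbLeft, HandTipRight, ThumbRight
Joint rightHandTip = body.Joints[JointType.HandTipRight];
Joint rightThumb = body.Joints[JointType.ThumbRight];

if (rightHandTip.TrackingState == TrackingState.Tracked &&
    rightThumb.TrackingState == TrackingState.Tracked)
{
    // e.g. compare the two positions to get a rough pointing direction
}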
Getting the Body data from the sensor is slightly different than the other data streams. First you must initialize an array of Body with a specific size and then pass it into the GetAndRefreshBodyData method on the BodyFrame. The SDK uses a memory-saving optimization that updates the items in this array rather than creating a new set each time, so if you want to hang onto an instance of Body between frames, you need to copy it to another variable and replace the item in the array with null (I’ll show a sketch of that right after the setup code below).
The following shows how to set up and populate an array of Body objects per frame:

private void SetupCamera()
{
    _sensor = KinectSensor.Default;
    _sensor.Open();

    // one slot for each body the sensor can track
    _bodies = new Body[_sensor.BodyFrameSource.BodyCount];

    _reader = _sensor.BodyFrameSource.OpenReader();
    _reader.FrameArrived += FrameArrived;
}

private void FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null)
            return;

        frame.GetAndRefreshBodyData(_bodies);
    }
}
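If you do need to hold onto a particular Body beyond the current frame, the pattern described above looks roughly like this (a sketch only; the _trackedBody field is mine, not part of the SDK):

private Body _trackedBody;

private void FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null)
            return;

        frame.GetAndRefreshBodyData(_bodies);

        for (int i = 0; i < _bodies.Length; i++)
        {
            if (_bodies[i] != null && _bodies[i].IsTracked)
            {
                // keep this instance past the current frame...
                _trackedBody = _bodies[i];

                // ...and null the slot so GetAndRefreshBodyData creates a
                // fresh Body here instead of overwriting the one we kept
                _bodies[i] = null;
                break;
            }
        }
    }
}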
Once you have a Body to work with, the joints are exposed as a read-only dictionary keyed by JointType, which lets us look up each joint by name. For example:
Joint head = body.Joints[JointType.Head];
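Each Joint carries a Position (a CameraSpacePoint whose X, Y and Z are expressed in meters, relative to the sensor) along with its TrackingState, so checking and reading a joint looks like this:

if (head.TrackingState == TrackingState.Tracked)
{
    // camera-space coordinates are in meters, relative to the sensor
    CameraSpacePoint position = head.Position;
    Debug.WriteLine("Head at ({0:F2}, {1:F2}, {2:F2})", position.X, position.Y, position.Z);
}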
Given the tracking state on both the Body and the individual joints, we can loop through the joints and only display the ones that are actively being tracked. The following example iterates over the Body and Joint collections and uses the CoordinateMapper to translate each joint position into coordinates on our color image. For simplicity’s sake, I’m coloring the pixels surrounding each joint to illustrate its position.
private void DrawBodies(BodyFrame bodyFrame, int colorWidth, int colorHeight)
{
    bodyFrame.GetAndRefreshBodyData(_bodies);

    foreach(Body body in _bodies)
    {
        if (!body.IsTracked)
            continue;

        IReadOnlyDictionary<JointType, Joint> joints = body.Joints;

        var jointPoints = new Dictionary<JointType, Point>();

        foreach(JointType jointType in joints.Keys)
        {
            Joint joint = joints[jointType];
            if (joint.TrackingState == TrackingState.Tracked)
            {
                ColorSpacePoint csp = _coordinateMapper.MapCameraPointToColorSpace(joint.Position);
                jointPoints[jointType] = new Point(csp.X, csp.Y);                                                                        
            }
        }

        foreach(Point point in jointPoints.Values)
        {
            DrawJoint(ref _colorData, point, colorWidth, colorHeight, 10);
        }
    }
}

private void DrawJoint(ref byte[] colorData, Point point, int colorWidth, int colorHeight, int size = 10)
{
    int colorX = (int)Math.Floor(point.X + 0.5);
    int colorY = (int)Math.Floor(point.Y + 0.5);

    if (!IsWithinColorFrame(colorX, colorY, colorWidth, colorHeight))
        return;

    int halfSize = size/2;

    // loop through pixels around the point and make them red
    for (int x = colorX - halfSize; x < colorX + halfSize; x++)
    {
        for(int y = colorY - halfSize; y < colorY + halfSize; y++)
        {
            if (IsWithinColorFrame(x,y, colorWidth, colorHeight))
            {
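                // assumes the color buffer is 4 bytes per pixel (BGRA),
                // so offset +2 is the red channel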
                int index = ((colorWidth * y) + x) * bytesPerPixel;
                colorData[index + 0] = 0;
                colorData[index + 1] = 0;
                colorData[index + 2] = 255;
            }
        }
    }
}

private bool IsWithinColorFrame(int x, int y, int width, int height)
{
    return (x >=0 && x < width && y >=0 && y < height);
}
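The modified _colorData buffer still has to be pushed to the screen, which works the same way as in the earlier color posts. For completeness, here’s a minimal WPF sketch, assuming a _colorBitmap field of type WriteableBitmap created for the color frame’s dimensions and BGRA format (the field and its setup are my own, not part of the SDK):

// _colorBitmap = new WriteableBitmap(colorWidth, colorHeight, 96, 96, PixelFormats.Bgra32, null);
_colorBitmap.WritePixels(
    new Int32Rect(0, 0, colorWidth, colorHeight),
    _colorData,
    colorWidth * bytesPerPixel,
    0);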
Happy Coding.
