Sunday, June 14, 2015

Debugging Google Authentication + Azure Mobile Services

Let’s say you’re building a mobile application using Azure Mobile Services with Google Authentication, but you want to debug and run the solution in your local developer environment. This post will walk you through setting up your local environment and provide an example of how to test your local services.

Before we start, the getting-started documentation on how to add Google Authentication to your mobile application is pretty good, and if you follow the samples it’s fairly easy to get up and running. For the purposes of this post, I’m not going to repeat those setup instructions, so if you’re trying to get it running you should definitely read that documentation before reading any further. I should also point out that I’m using a .NET backend for my Mobile Service, so there are a few minor differences if you’re using Node.

Step 1: add your Google Client ID and Client Secret to your web.config. By default, the configuration for your Mobile Service is defined through the Azure Mobile Services dashboard, so local values in your web.config are ignored in the Azure environment. In order to debug Google Authentication locally, simply provide values for MS_GoogleClientID and MS_GoogleClientSecret in your web.config:

[Image: Google_Auth_WebConfig]
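
For reference, here’s roughly what that looks like (the key names come from the Mobile Services .NET backend; the values shown are placeholders for your own credentials):

<appSettings>
  <!-- placeholder values: use the Client ID / Secret from your Google Developer Console project -->
  <add key="MS_GoogleClientID" value="000000000000.apps.googleusercontent.com" />
  <add key="MS_GoogleClientSecret" value="your-client-secret" />
</appSettings>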

Step 2: modify the settings of your Google project to support your local environment. At this point, attempts to authenticate in your local environment will fail because the redirect URL for your local environment doesn’t match the settings defined in the Google Developer Console. Fortunately, the good folks at Google had the smarts to allow multiple redirect URLs, so simply add the URL of your local developer environment. In my case, I’m running in IIS Express and I’m using the URL http://localhost:61915

  1. Navigate to https://console.developers.google.com/project and select your Project
  2. In the Credentials section, click the Edit Settings button and add your local URL to the Authorized Redirect URIs.

[Image: Google_Auth_WebConfig_RedirectUri]

Testing it out

So we’ve made a few simple changes and now it’s time to test it out.

  1. Navigate to http://localhost:61915/login/google in your browser.
  2. If all things are properly configured, you should be redirected to Google’s OAuth page to provide consent to your application. Provide consent, dummy.
  3. Your browser will be redirected to a special URL (http://localhost:61915/done#token=<json-data>).
  4. Copy the URL and decode it. I like to use this online tool: http://meyerweb.com/eric/tools/dencoder/
  5. Once decoded, make note of the authentication token.

[Image: Google_Auth_RedirectUri_Decoded]
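
If you’d rather decode in code than in the browser, System.Net’s WebUtility can do the same thing. Here’s a quick sketch, where the fragment string is a placeholder for the <json-data> portion of the redirect URL:

using System;
using System.Net;

class TokenDecoder
{
    static void Main()
    {
        // placeholder: paste the <json-data> portion of the /done#token= URL here
        string fragment = "%7B%22user%22%3A%20...%20%7D";

        // converts the URL-encoded fragment back into readable JSON
        Console.WriteLine(WebUtility.UrlDecode(fragment));
    }
}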

  6. Now, using your favourite REST client (mine is the Chrome Advanced REST Client), supply the header X-ZUMO-AUTH: <your-auth-token> and call any Web API method that requires authentication.
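
The same smoke test from C#, as a minimal sketch (the /api/values endpoint is a placeholder; substitute any of your own controllers that require authentication):

using System;
using System.Net.Http;

class AuthSmokeTest
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("http://localhost:61915");

            // paste the authentication token from the decoded payload here
            client.DefaultRequestHeaders.Add("X-ZUMO-AUTH", "<your-auth-token>");

            // any authenticated Web API method will do
            var response = client.GetAsync("/api/values").Result;
            Console.WriteLine(response.StatusCode);
        }
    }
}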

Cheers!


Tuesday, September 30, 2014

Mobile Services: Apply Database Migrations on Startup

As per my last post, if you’re using Entity Framework with Azure Mobile Services you’ll want to move away from automatic database upgrades and adopt code-based migrations. There are several strategies posted online; here’s the one that worked best for our project.

One of the challenges that my team identified early in the process was that our database content changed more frequently than our database schema. The default database initializer strategies only run the database migration and seeder scripts when the schema changes, so we found it more useful to manually apply the database migrations on start-up. This adds a bit of performance overhead to the start-up process, but since our database seeder scripts were relatively small, this seemed like an acceptable trade-off.

First, you’ll want to enable code-based migrations if you haven’t already. This involves opening the Package Manager Console window and typing the following:

Enable-Migrations

This PowerShell command will add a few additional NuGet packages and make a few changes to your project. Most notably, the script will add a Migrations folder to your project and a Configuration class. We’ll make a few changes to the Configuration class to enable automatic migrations and allow data loss. The data-loss setting will allow you to drop database columns even if they contain data, so pay attention to your migration scripts.

internal sealed class Configuration : DbMigrationsConfiguration<MyServiceContext>
{
  public Configuration()
  {
    ContextKey = "MyProject.Models.MyServiceContext";

    // apply pending model changes without requiring explicit migration scripts
    AutomaticMigrationsEnabled = true;

    // allow migrations that drop columns, even if those columns contain data
    AutomaticMigrationDataLossAllowed = true;
  }
}

Next, you’ll want to override the Seed method and populate your database with starter data. I highly recommend moving all your seeding logic into a static method so that you can reuse it in your test scripts, etc. Also note that the System.Data.Entity.Migrations namespace adds an AddOrUpdate extension method for DbSet<T>, which greatly simplifies your database seed. Following this approach will allow you to run the seed as many times as you want without introducing duplicates.

protected override void Seed(MyServiceContext context)
{
    //  This method will always be called.

    //  You can use the DbSet<T>.AddOrUpdate() helper extension method 
    //  to avoid creating duplicate seed data. E.g.
    //
    //    context.People.AddOrUpdate(
    //      p => p.FullName,
    //      new Person { FullName = "Nigel Tufnel" },
    //      new Person { FullName = "David St. Hubbins" },
    //      new Person { FullName = "Derek Smalls" }
    //    );
    //
    MyDatabaseSeeder.Seed(context);
}
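
To keep the seeding logic reusable, here’s a sketch of what that static seeder might look like, using the hypothetical Person entity from the comment above:

public static class MyDatabaseSeeder
{
    public static void Seed(MyServiceContext context)
    {
        // AddOrUpdate matches on FullName, so re-running the seed won't create duplicates
        context.People.AddOrUpdate(
            p => p.FullName,
            new Person { FullName = "Nigel Tufnel" },
            new Person { FullName = "David St. Hubbins" },
            new Person { FullName = "Derek Smalls" }
        );

        context.SaveChanges();
    }
}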

Finally, to apply the database migrations on start-up you’ll need to add the following to your WebApiConfig.Register() method:

public static void Register()
{
   // disable the default database initializer; we apply migrations ourselves below
   Database.SetInitializer<MyServiceContext>(null);

   // apply any pending migrations and run the Seed method
   var migrator = new DbMigrator(new Configuration());
   migrator.Update();
}
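
If you want visibility into what’s about to run, the DbMigrator can also report pending migrations before applying them. Here’s an optional sketch (my addition, not required for the approach above):

// requires System.Data.Entity.Migrations and System.Diagnostics
var migrator = new DbMigrator(new Configuration());

// log each migration that hasn't yet been applied to this database
foreach (string migration in migrator.GetPendingMigrations())
{
    Trace.TraceInformation("Applying migration: {0}", migration);
}

migrator.Update();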

With this in place, the DbMigrator will apply any database changes whenever they are available and reliably call the Seed method with every deployment.

Happy Coding.


Monday, September 15, 2014

Mobile Services: Sharing a database schema between environments

I recently completed a project where we used Azure Mobile Services as the backend for an Android application. I was pleased at how easy it was to set up and deploy to an environment, but there are a few details for production deployments that make things a bit tricky.

As your development team sprints full steam toward the finish line, one major hurdle you’ll have to cross is database versioning. The default Entity Framework strategy uses a database initializer that drops your database on start-up whenever the model changes, which is obviously not something you can take to production. The solution is to disable automatic upgrades and use code-first database migrations, which are also a bit tricky (and which I might blog about later). Here’s an initial primer on database migrations.

Before you run off to enable database migrations and start scripting out your objects into code, there’s something you should know. If you’re like me and you have separate Azure environments for testing and production, the scripted objects will contain the schema name of your development environment, and you’ll most likely get an error about the “__MigrationHistory” table on start-up.

You can avoid this hassle by making a small change to your DbContext to ensure both environments use the same schema. The default boilerplate code uses the name of the Azure Mobile Service from your application settings:

// part of MyMobileServiceContext : DbContext
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    string schema = ServiceSettingsDictionary.GetSchemaName();
    if (!string.IsNullOrEmpty(schema))
    {
        modelBuilder.HasDefaultSchema(schema);
    }

    modelBuilder.Conventions.Add(
        new AttributeToColumnAnnotationConvention<TableColumnAttribute, string>(
            "ServiceTableColumn", (property, attributes) => attributes.Single().ColumnType.ToString()));
}

This simple change to use the same schema name in every environment ensures that your scripted objects and runtime model stay in sync:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    //string schema = ServiceSettingsDictionary.GetSchemaName();
    string schema = "MySchemaName";
    if (!string.IsNullOrEmpty(schema))
    {
        modelBuilder.HasDefaultSchema(schema);
    }

    modelBuilder.Conventions.Add(
        new AttributeToColumnAnnotationConvention<TableColumnAttribute, string>(
            "ServiceTableColumn", (property, attributes) => attributes.Single().ColumnType.ToString()));
}
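
If hard-coding the schema name feels wrong, one possible compromise (my suggestion, not part of the boilerplate) is to fall back to an app setting so each environment can still override it:

// "DatabaseSchemaName" is a hypothetical appSetting; requires System.Configuration
string schema = ConfigurationManager.AppSettings["DatabaseSchemaName"] ?? "MySchemaName";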

Happy coding.

Wednesday, March 26, 2014

Head Tracking with Kinect v2

This is yet another post in my series about the new Kinect using the November 2013 developer preview SDK. Today we’re going to have some fun by combining the color, depth and body data streams (mentioned in my last few posts, here, here and here) and some interesting math to create an image that magically tracks the user’s head.

This is an early preview of the new Kinect for Windows, so the device, software and documentation are all preliminary and subject to change.

[Image: ColorExample-09-29-58]

If you recall from my last post, I used the CoordinateMapper to translate the coordinates of the user’s joint information on top of the HD color image. The magic ingredient is converting the Joint’s Position to a ColorSpacePoint.

Joint headJoint = body.Joints[JointType.Head];

ColorSpacePoint colorSpacePoint = 
    _sensor.CoordinateMapper.MapCameraPointToColorSpace(headJoint.Position);

If we take the X & Y coordinates from this ColorSpacePoint and use the wonderful extension methods of the WriteableBitmapEx project, we can quickly create a cropped image of that joint.

int x = (int)Math.Floor(colorSpacePoint.X + 0.5);
int y = (int)Math.Floor(colorSpacePoint.Y + 0.5);

int size = 200;

WriteableBitmap faceImage = _bmp.Crop(new Rect(x,y,size,size));

[Image: ColorExample-08-04-32]

Wow, that was easy! Although this produces an image that accurately tracks my head, the approach is somewhat flawed because it doesn’t scale based on the user’s distance from the camera: stand too close and you’ll only see a portion of your face; stand too far away and you’ll capture your head and torso. We can fix this by calculating the desired size of the image based on the depth of the joint. To do this, we’ll need to obtain a DepthSpacePoint for the Joint and a simple trigonometric formula…

The DepthSpacePoint by itself doesn’t contain the depth data. Instead, it contains the X & Y coordinates of the depth image, which we can use to calculate the index into the array of depth data. I’ve outlined this in a previous post, but for convenience here’s that formula again:

// get the depth image coordinates for the head
DepthSpacePoint depthPoint =
    _sensor.CoordinateMapper.MapCameraPointToDepthSpace(headJoint.Position);

// use the x & y coordinates to locate the depth data
FrameDescription depthDesc = _sensor.DepthFrameSource.FrameDescription;
int depthX = (int)Math.Floor(depthPoint.X + 0.5);
int depthY = (int)Math.Floor(depthPoint.Y + 0.5);
int depthIndex = (depthY * depthDesc.Width) + depthX;

ushort depth = _depthData[depthIndex];

To calculate the desired size of the image, we need to determine the width of the joint's pixel in millimeters. We do this using a blast from the past, our best friend from high-school trigonometry, Soh-Cah-Toa.

[Diagram: Kinect_Depth]

Given that the Kinect’s horizontal field of view is 70.6°, we bisect it to form a right-angle triangle. We then take the depth value as the length of the adjacent side in millimeters. Our goal is to calculate the opposite side in millimeters, which we can accomplish using the TOA portion of the mnemonic:

tan(θ) = opposite / adjacent
opposite = tan(θ) * adjacent

Once we have the length of the opposite side, we divide it by the number of pixels in the frame, which gives us the length in millimeters for each pixel. For example, at a depth of 2,000 mm, tan(35.3°) ≈ 0.708, so the opposite side is roughly 1,417 mm; dividing by half the frame width (256 pixels) gives about 5.5 mm per pixel. The algorithm for calculating pixel width is shown here:

private double CalculatePixelWidth(FrameDescription description, ushort depth)
{
    // half the field of view (degrees) and half the frame width (pixels)
    float hFov = description.HorizontalFieldOfView / 2;
    float numPixels = description.Width / 2;

    /* soh-cah-TOA
     * 
     * TOA = tan(θ) = O / A
     *   T = tan( (horizontal FOV / 2) in radians )
     *   O = (frame width / 2) in mm
     *   A = depth in mm
     *   
     *   O = A * T
     */

    // convert degrees to radians before taking the tangent
    double T = Math.Tan((Math.PI / 180) * hFov);
    double opposite = T * depth;

    return opposite / numPixels;
}

Now that we know the length of each pixel, we can adjust the size of our head-tracking image to be a consistent “length”. The dimensions of the image will change as I move but the amount of space around my head remains consistent. The following calculates a 50 cm (~19”) image around the tracked position of my head:

double imageSize = 500 / CalculatePixelWidth(depthDesc, depth);

int x = (int)(Math.Floor(colorSpacePoint.X + 0.5) - (imageSize / 2));
int y = (int)(Math.Floor(colorSpacePoint.Y + 0.5) - (imageSize / 2));

WriteableBitmap faceImage = _bmp.Crop(new Rect(x, y, imageSize, imageSize));

Happy Coding.

Monday, March 24, 2014

Drawing Kinect V2 Body Joints

So far the posts in my Kinect for Windows v2 series have concentrated on the Depth, Color and BodyIndex data sources. Today I want to highlight how to access the Body data stream.

[Image: ColorExample-02-18-51]

This is an early preview of the new Kinect for Windows, so the device, software and documentation are all preliminary and subject to change.

The new Kinect sensor and SDK have improved skeleton tracking so significantly that the Microsoft team has changed the name of the Skeleton class to Body. In addition to being able to track six skeletons (instead of two), the API is shaping up to include features for Activities (facial features such as left eye open), Expressions (happy or neutral), Appearance (wearing glasses), Leaning (left or right) and the ability to track whether the user is actively looking at the sensor. I’ll dive into those in upcoming posts, but for now I want to focus on Joint information.

Generally speaking, body tracking and joint positions are improved over the previous version of the SDK. Specifically, the positions for hips and shoulders are more accurate. Plus, version 2 introduces five new joints (Neck, Hand-Tip-Left, Thumb-Left, Hand-Tip-Right, Thumb-Right), bringing the total number of joints to 25.

[Diagram: Body-Joints]

Getting the Body data from the sensor is slightly different from the other data streams. First you must initialize an array of Body with a specific size and then pass it into the GetAndRefreshBodyData method on the BodyFrame. The SDK uses a memory-saving optimization that updates the items in this array rather than creating a new set each time, so if you want to hang onto an instance of Body between frames, you need to copy it to another variable and replace the item in the array with null.

The following shows how to set up and populate an array of Body objects per frame:

private void SetupCamera()
{
    _sensor = KinectSensor.Default;
    _sensor.Open();

    _bodies = new Body[_sensor.BodyFrameSource.BodyCount];

    _reader = _sensor.BodyFrameSource.OpenReader();
    _reader.FrameArrived += FrameArrived;
}

private void FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null)
            return;

        frame.GetAndRefreshBodyData(_bodies);
    }
}
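
To illustrate the earlier point about hanging onto a Body between frames, here’s a small sketch (my illustration of the behaviour, not from the SDK documentation):

// keep a reference to a body, then null out its slot so the next
// GetAndRefreshBodyData call allocates a fresh Body in that position
// instead of overwriting the one we're holding onto
Body keeper = _bodies[0];
_bodies[0] = null;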

Once you have a Body to work with, the joints and other features are provided as dictionaries, which allows us to access each joint by name. For example:

Joint head = body.Joints[JointType.Head];

Each Body and Joint also carries a tracking confidence. Given this information, we can loop through the joints and only display the items that we know are actively being tracked. The following example iterates over the Body and Joint collections and uses the CoordinateMapper to translate each joint position to coordinates on our color image. For simplicity’s sake, I’m simply coloring the surrounding pixels to illustrate their position.

private void DrawBodies(BodyFrame bodyFrame, int colorWidth, int colorHeight)
{
    bodyFrame.GetAndRefreshBodyData(_bodies);

    foreach(Body body in _bodies)
    {
        if (!body.IsTracked)
            continue;

        IReadOnlyDictionary<JointType, Joint> joints = body.Joints;

        var jointPoints = new Dictionary<JointType, Point>();

        foreach(JointType jointType in joints.Keys)
        {
            Joint joint = joints[jointType];
            if (joint.TrackingState == TrackingState.Tracked)
            {
                ColorSpacePoint csp = _coordinateMapper.MapCameraPointToColorSpace(joint.Position);
                jointPoints[jointType] = new Point(csp.X, csp.Y);                                                                        
            }
        }

        foreach(Point point in jointPoints.Values)
        {
            DrawJoint(ref _colorData, point, colorWidth, colorHeight, 10);
        }
    }
}

private void DrawJoint(ref byte[] colorData, Point point, int colorWidth, int colorHeight, int size = 10)
{
    int colorX = (int)Math.Floor(point.X + 0.5);
    int colorY = (int)Math.Floor(point.Y + 0.5);

    if (!IsWithinColorFrame(colorX, colorY, colorWidth, colorHeight))
        return;

    int halfSize = size/2;

    // loop through pixels around the point and make them red
    for (int x = colorX - halfSize; x < colorX + halfSize; x++)
    {
        for(int y = colorY - halfSize; y < colorY + halfSize; y++)
        {
            if (IsWithinColorFrame(x,y, colorWidth, colorHeight))
            {
                int index = ((colorWidth * y) + x) * bytesPerPixel;
                colorData[index + 0] = 0;
                colorData[index + 1] = 0;
                colorData[index + 2] = 255;
            }
        }
    }
}

private bool IsWithinColorFrame(int x, int y, int width, int height)
{
    return (x >=0 && x < width && y >=0 && y < height);
}

Happy Coding.

Wednesday, March 19, 2014

Mapping between Kinect Color and Depth

Continuing my series of blog posts about the new Kinect v2, I’d like to build upon my last post about the HD color stream with some of the depth-frame concepts I’ve used in the last few posts. Unlike the previous version of the Kinect, the Kinect v2 depth stream does not have the same dimensions as the new color stream. This post will illustrate how the two are related and how you can map depth data to the color stream using the CoordinateMapper.

This is an early preview of the new Kinect for Windows, so the device, software and documentation are all preliminary and subject to change.

While the Kinect’s color and depth streams are both represented as arrays, you simply cannot compare x & y coordinates between the two sets of data: the bytes of the color frame represent color pixels from the color camera, while the values of the depth frame represent distances from the depth camera. Fortunately, the Kinect SDK ships with a mapping utility that can convert data between the different “spaces”.

For this post I’m going to use the MultiFrameSourceReader to access both the depth and color frames at the same time and we’ll use the CoordinateMapper.MapDepthFrameToColorSpace method to project the depth data into an array of ColorSpacePoint. Here’s the skeleton for processing frames as they arrive:

private void FrameArrived(object sender, MultiSourceFrameArrivedEventArgs e) 
{ 
    var reference = e.FrameReference; 

    MultiSourceFrame multiSourceFrame = null;
    ColorFrame colorFrame = null; 
    DepthFrame depthFrame = null; 

    try 
    { 
        using (_frameCounter.Increment()) 
        { 
            multiSourceFrame = reference.AcquireFrame(); 
            if (multiSourceFrame == null) 
                return; 

            using (multiSourceFrame) 
            { 
                colorFrame = multiSourceFrame.ColorFrameReference.AcquireFrame();
                depthFrame = multiSourceFrame.DepthFrameReference.AcquireFrame(); 

                if (colorFrame == null || depthFrame == null) 
                    return; 

                // initialize color frame data 
                var colorDesc = colorFrame.FrameDescription; 
                int colorWidth = colorDesc.Width; 
                int colorHeight = colorDesc.Height; 

                if (_colorFrameData == null) 
                { 
                    int size = colorDesc.Width * colorDesc.Height; 
                    _colorFrameData = new byte[size * bytesPerPixel]; 
                } 

                // initialize depth frame data 
                var depthDesc = depthFrame.FrameDescription; 

                if (_depthData == null) 
                { 
                    uint depthSize = depthDesc.LengthInPixels; 
                    _depthData = new ushort[depthSize]; 
                    _colorSpacePoints = new ColorSpacePoint[depthSize]; 
                } 

                // load color frame into byte[] 
                colorFrame.CopyConvertedFrameDataToArray(_colorFrameData, ColorImageFormat.Bgra); 

                // load depth frame into ushort[] 
                depthFrame.CopyFrameDataToArray(_depthData); 

                // map ushort[] to ColorSpacePoint[] 
                _sensor.CoordinateMapper.MapDepthFrameToColorSpace(_depthData, _colorSpacePoints); 

                // TODO: do something interesting with depth frame 

                // render color frame 
                _bmp.WritePixels( 
                    new Int32Rect(0, 0, colorDesc.Width, colorDesc.Height), 
                    _colorFrameData, 
                    colorDesc.Width * bytesPerPixel, 
                    0); 
            } 
        } 
    } 
    catch { } 
    finally 
    { 
        if (colorFrame != null) 
            colorFrame.Dispose(); 

        if (depthFrame != null) 
            depthFrame.Dispose(); 

    }
}

The MapDepthFrameToColorSpace method copies the depth data into an array of ColorSpacePoint where each item in the array corresponds to an item in the depth data. We can use the X & Y coordinates of the ColorSpacePoint to find the color data, as demonstrated below. There’s one caveat: not all points in the depth array can be mapped to color pixels. Some points might be too close or too far, or there may be no depth data at all because of a shadow or a reflective material; the bounds check in the snippet below filters those out.

The following snippet shows us how to locate the color bytes from the ColorSpacePoint:

// we need a starting point, let's pick 0 for now
int index = 0;

ushort depth = _depthData[index];
ColorSpacePoint point = _colorSpacePoints[index];

// round down to the nearest pixel
int colorX = (int)Math.Floor(point.X + 0.5);
int colorY = (int)Math.Floor(point.Y + 0.5);

// make sure the pixel is part of the image
if ((colorX >= 0) && (colorX < colorWidth) && (colorY >= 0) && (colorY < colorHeight))
{

    int colorImageIndex = ((colorWidth * colorY) + colorX) * bytesPerPixel;

    byte b = _colorFrameData[colorImageIndex];
    byte g = _colorFrameData[colorImageIndex + 1];
    byte r = _colorFrameData[colorImageIndex + 2];
    byte a = _colorFrameData[colorImageIndex + 3];

}

If we loop through the depth data and use the above technique, we can draw our depth data on top of the color frame. For this image, I’m drawing the depth data using an intensity technique described in an earlier post. It looks like this:

[Image: ColorExample-10-16-26]

You may notice that the pixels are far apart and don’t extend to the edges of the image. This makes sense because our depth data has a smaller resolution (512 x 424 compared to 1920 x 1080) and a tighter viewing angle (70.6° compared to 84°). The mapping also isn’t perfect in this release (remember, this is a developer preview!).

We can use the same technique to draw each pixel of the depth frame using values from the color frame, like so:

// clear the pixels before we color them
Array.Clear(_pixels, 0, _pixels.Length);

for (int depthIndex = 0; depthIndex < _depthData.Length; ++depthIndex)
{
    ColorSpacePoint point = _colorSpacePoints[depthIndex];

    int colorX = (int)Math.Floor(point.X + 0.5);
    int colorY = (int)Math.Floor(point.Y + 0.5);
    if ((colorX >= 0) && (colorX < colorWidth) && (colorY >= 0) && (colorY < colorHeight))
    {
        int colorImageIndex = ((colorWidth * colorY) + colorX) * bytesPerPixel;
        int depthPixel = depthIndex * bytesPerPixel;

        _pixels[depthPixel] = _colorFrameData[colorImageIndex];
        _pixels[depthPixel + 1] = _colorFrameData[colorImageIndex + 1];
        _pixels[depthPixel + 2] = _colorFrameData[colorImageIndex + 2];
        _pixels[depthPixel + 3] = 255;
    }
}

…which results in the following image:

[Image: ColorExample-10-09-28]

So as you can see, we can easily map between the two coordinate spaces. I intend to build upon this further in upcoming posts, so if you haven’t already, add me to your favorite RSS reader.

Happy coding.

Tuesday, March 18, 2014

Accessing Kinect Color Data

One of the new features of the Kinect v2 is the upgraded color camera which now provides images with 1920 x 1080 resolution. This post will illustrate how you can get access to this data stream.

This is an early preview of the new Kinect for Windows, so the device, software and documentation are all preliminary and subject to change.

Much like the other data streams, accessing the color image is done using a reader:

private void SetupCamera()
{
    var sensor = KinectSensor.Default;
    sensor.Open();

    ColorFrameReader reader = sensor.ColorFrameSource.OpenReader();
    reader.FrameArrived += ColorFrameArrived;
}

In this example, I'm simply copying the frames to a WriteableBitmap, which is bound to the view.

private readonly int bytesPerPixel = (PixelFormats.Bgra32.BitsPerPixel + 7) / 8;
private WriteableBitmap _bmp = new WriteableBitmap(1920, 1080, 96, 96, PixelFormats.Bgra32, null);
private byte[] _frameData = null;

private void ColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    var reference = e.FrameReference;

    try
    {
        var frame = reference.AcquireFrame();
        if (frame == null) return;

        using(frame)
        {
            FrameDescription desc = frame.FrameDescription;
            var size = desc.Width * desc.Height;
            
            if (_frameData == null)
            {
                _bmp = new WriteableBitmap(desc.Width, desc.Height, 96, 96, PixelFormats.Bgra32, null);
                _frameData = new byte[size * bytesPerPixel];
            }

            frame.CopyConvertedFrameDataToArray(_frameData, ColorImageFormat.Bgra);

            _bmp.WritePixels(
                new Int32Rect(0, 0, desc.Width, desc.Height),
                _frameData,
                desc.Width * bytesPerPixel,
                0);
        }
    }
    catch
    {}

}

As you’d expect, this updates the view at 30 frames per second with an HD image. The wide-angle lens of the camera (84°) captures a fair amount of my office.

[Image: ColorExample-09-41-17]

Happy Coding.