Friday, March 18, 2011

Parsing Nullable Enumerations

I had an interesting challenge yesterday that involved converting a string to an enumerated value. This seems like it would be really trivial, but I had a few extra considerations that made it harder to deal with.

We're all familiar with the Enum.Parse method that has existed in the .NET Framework since version 1.1.  It's a somewhat clunky and verbose method:

MyEnum result = (MyEnum)Enum.Parse(typeof(MyEnum), "First");

More recently, the .NET Framework 4.0 introduced a new generic TryParse method on the Enum type.  It's much cleaner, if you don't mind the out parameter.

MyEnum result;

if (Enum.TryParse("First", out result))
    // do something with 'result'

Unfortunately, the above methods are constrained to work with enum value types only and won’t accept Nullable&lt;T&gt;. After some experimenting and fussing with casting between generic types I managed to get a solution that works with both standard and nullable enumerations.

I’m sure there’s a bit of extra boxing in here, so as always your feedback is welcome.  Otherwise, this free-as-in-beer extension method is going on my tool belt.

using System;
using System.Linq;
using System.Reflection;
using System.Xml.Serialization;

public static class EnumConversionExtensions
{
    public static TResult ConvertToEnum<TResult>(this string source)
    {
        // we can't get our values out of a Nullable<T> so we need
        // to get the underlying type
        Type enumType = GetUnderlyingTypeIfNullable(typeof(TResult));

        // unfortunately, .NET 4.0 doesn't have a generic constraint for Enum
        if (!enumType.IsEnum)
            throw new NotSupportedException("Only enums can be converted here, chum.");

        if (!String.IsNullOrEmpty(source))
        {
            object rawValue = GetRawEnumValueFromString(enumType, source);

            // if there was a match
            if (rawValue != null)
            {
                // having the value isn't enough, we need to
                // convert this back to an enum so that we can
                // perform an implicit cast back to the generic type
                object enumValue = Enum.Parse(enumType, rawValue.ToString());

                // implicit cast back to generic type
                // if the generic type was nullable, the cast
                // from non-nullable to nullable is also implicit
                return (TResult)enumValue;
            }
        }

        // if no original value, or no match, use the default value.
        // returns 0 for enum, null for Nullable<enum>
        return default(TResult);
    }

    private static Type GetUnderlyingTypeIfNullable(Type type)
    {
        if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
            // Nullable<T> only has one type argument
            return type.GetGenericArguments().First();

        return type;
    }

    private static object GetRawEnumValueFromString(Type enumType, string source)
    {
        FieldInfo[] fields = enumType.GetFields(BindingFlags.Public | BindingFlags.Static);

        // attempt to find our string value
        foreach (FieldInfo field in fields)
        {
            object value = field.GetRawConstantValue();

            // exact match
            if (field.Name == source)
                return value;

            // attempt to locate in attributes
            var attribs = field.GetCustomAttributes(typeof(XmlEnumAttribute), false);
            if (attribs.Cast<XmlEnumAttribute>().Any(attrib => source == attrib.Name))
                return value;
        }

        return null;
    }
}
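To see it in action, here’s a quick usage sketch.  The MyEnum type and its XmlEnum alias are made up for illustration, and the ConvertToEnum extension method above is assumed to be in scope:

```csharp
using System;
using System.Xml.Serialization;

// a hypothetical enum for illustration
public enum MyEnum
{
    [XmlEnum("first-item")]
    First = 1,
    Second = 2
}

public static class Examples
{
    public static void Run()
    {
        // exact name match
        MyEnum a = "First".ConvertToEnum<MyEnum>();       // MyEnum.First

        // match via the XmlEnum attribute's alias
        MyEnum b = "first-item".ConvertToEnum<MyEnum>();  // MyEnum.First

        // no match: default(MyEnum), i.e. (MyEnum)0...
        MyEnum c = "Bogus".ConvertToEnum<MyEnum>();

        // ...but null when the target type is nullable
        MyEnum? d = "Bogus".ConvertToEnum<MyEnum?>();     // null
    }
}
```

Note the last two calls: the same unmatched string falls back to zero for the plain enum, but to null for the nullable one — which is the whole point of the exercise.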

Heh: “Free as in Beer”. Happy St. Patty’s, everybody.


Friday, March 04, 2011

Add a Custom Toolbar for Source Control

My current project uses TFS and I spend a lot of time in and out of source control, switching between workspaces to manage defects. I found myself needing access to my workspaces and getting really frustrated with how clunky this operation is within Visual Studio.  There are two ways you can open source control:

  1. The Source Control item in the Team System tool window.  I don’t always have the Team Explorer tool window open and when I open it, it takes a few seconds to get details from the server.
  2. View –> Other Windows –> Source Control Explorer.  Useful, but there’s too much mouse movement and clicking to be accessible.

So, rather than creating a custom keyboard shortcut that I would forget, I added a toolbar that is always in plain sight. It’s so convenient that I take it for granted, and when I pair with others they comment on it. So for their convenience (and yours), here’s how it’s done.

Add a new Toolbar

From the Menubar, select “Tools –> Customize”.  It’ll pop up this dialog. 

Click on “New” and give your toolbar a name.


Add the Commands

Switch to the Commands tab and select the name of your toolbar in the Toolbar dropdown.


Click on the “Add Command” button and select the following commands:

  • View : TfsSourceControlExplorer
  • View : TfsPendingChanges


Now style the buttons accordingly using the “Modify Selection” button.  I’ve set mine to use the Default styling, which is just the button icon.



Thursday, March 03, 2011

Building Custom Test Frameworks

Author’s note: this post focuses on writing test tools to simplify repetitive tasks such as complex assertions and is inspired by colleagues that wrote a tool that nearly evolved beyond their control. I am not encouraging developers to write frameworks that mask symptoms of poorly isolated components or other design problems.

For some, evolving code through writing tests is seen as a labour of love. For others, it's just labour. The latter is especially true when you realize that you have to write a lot of repetitious tests that will likely introduce test friction later on. When faced with this dilemma, developers rise to the challenge and write code.

Fortunately, there are a lot of great test-framework tools popping up in the .NET community that are designed to plug in to your test code. (I’m running out of battery power on my laptop as I write this, so my list of tools is lacking.  Send me a note and I’ll list your tool here.) These tools can certainly make things easier by removing some obstacles or laborious activities, but if you're planning on writing your own tool or framework there are a few pitfalls.

Common Pitfalls

Obscured Intent

One of the goals of well written unit tests is to provide live examples of proper usage so that developers can learn more about the code's intended behaviour. This benefit can be significantly hindered when the usage and clear intent of your code has been abstracted into your framework.

Eventual Complexity

Applying automation to a problem follows the 80/20 rule where the majority of problems fit nicely into your abstraction. The edge cases however have a tendency to add bloat and it doesn't take much to quickly trash the idea of a simple tool. This is rarely a consequence of poor planning or bad design; additional complexity tends to creep in over time as your code evolves or as more consumers of the tool come on board.

There's a consequence to this type of complexity: if only a few developers understand the tool's implementation, you risk limiting those developers to the role of tool maintainers. Even worse, if those developers leave your team, there's a risk that the entire suite of tests will be abandoned if they start failing.

Dependence on Tool Quality / False Positives

In TDD, application defects hide in the tests you haven't written, so quality is a reflection of the accuracy and completeness of the tests. Likewise, tests that leverage a custom test framework are only as reliable as the completeness and accuracy of the tool. If the framework takes on the lion's share of the work, then the outcome of the test is abdicated to the tool. This is dangerous because any change to the tool's logic could unintentionally allow subtle defects in the tested code to go unnoticed. False positives = tests that lie!


Tests for Tools

Oddly enough, if your goal is to write a tool so that you don't need to write tests, you are going to need to write tests for the tool. Having tests for your custom tool ensures that false positives aren’t introduced as the tool is enhanced over time. From a knowledge transfer perspective, the tests serve as a baseline to describe the tool’s responsibilities and enable others to add new functionality (and new tests) with minimal oversight from the original tool author.
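As a minimal sketch of what that looks like — the AssertEx helper and its comparison logic are invented for illustration, not taken from any real tool — here’s a tiny custom assertion tool alongside a test that proves the tool itself catches a mismatch:

```csharp
using System;
using System.Reflection;

// a tiny hypothetical test tool: shallowly compares the
// public property values of two objects
public static class AssertEx
{
    public static bool PropertiesMatch(object expected, object actual)
    {
        foreach (PropertyInfo prop in expected.GetType().GetProperties())
        {
            object left = prop.GetValue(expected, null);
            object right = prop.GetValue(actual, null);

            if (!Object.Equals(left, right))
                return false;
        }
        return true;
    }
}

// ...and a test for the tool itself, guarding against false positives
public class AssertExTests
{
    public void PropertiesMatch_DetectsADifference()
    {
        var expected = new { Name = "Foo", Size = 1 };
        var actual   = new { Name = "Foo", Size = 2 };

        // would be Assert.IsFalse(...) in your test framework of choice
        if (AssertEx.PropertiesMatch(expected, actual))
            throw new Exception("the tool failed to spot the mismatch!");
    }
}
```

If a later “enhancement” to PropertiesMatch quietly stops comparing one of the properties, this test fails — which is exactly the kind of false positive you want surfaced.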

Design for Readability

Great frameworks fit in and enable you to express intent; so be careful of over-reaching and trying to do too much. Make sure your tool doesn't abstract away clues as to what the test is intended for, and if possible use descriptive method names to improve readability. Documenting your tool with XML documentation comments is also very helpful for conveying intent.

Evolve Naturally

Unless you spend weeks planning out your unit tests, custom test tools are products of necessity, realized when you start to write the second or third duplicated test.  I tend to discover my test tools as “found treasures” of my normal TDD development cycle.

Rather than diving in and writing a framework or tool first, focus on satisfying the requirement of working software by writing the test using normal TDD best practices (red, green, refactor). During the clean up and refactor step, you will find duplication between tests or even different fixtures. When that happens, promote the duplicated code to a helper method, then a helper class. I like to think that great TDD is about refactoring mercilessly while keeping the original intent of the code clear – it’s about balance; know when you’ve gone too far.
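Here’s a small sketch of that promotion step.  The Order type and all the names are hypothetical, and Debug.Assert stands in for your test framework’s assertions:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// hypothetical domain type for illustration
public enum OrderStatus { Draft, Submitted }

public class Order
{
    public OrderStatus Status { get; set; }
    public List<string> Lines { get; private set; }
    public DateTime? SubmittedOn { get; set; }

    public Order() { Lines = new List<string>(); }
}

public class OrderFixture
{
    // before: these three assertions were repeated inline across
    // several tests; after: the duplication is promoted to a
    // single, descriptively named helper
    public void NewOrder_HasExpectedDefaults()
    {
        var order = new Order();
        AssertOrderIsPristine(order);
    }

    private static void AssertOrderIsPristine(Order order)
    {
        Debug.Assert(order.Status == OrderStatus.Draft);
        Debug.Assert(order.Lines.Count == 0);
        Debug.Assert(order.SubmittedOn == null);
    }
}
```

The helper’s name carries the intent the inlined assertions used to spell out, so the test body stays readable even after the duplication is gone.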

If you reach a point where you can extract commonalities between a few helper classes into a generic solution, you’ll find yourself standing in the doorway between where your fixtures were self-contained and where they’ve become dependent on shared test logic.  Learn to recognize this moment, because this is when you should stop writing tests for production code for a moment and write some tests for the tool. It’s also a good idea to keep a few of the manual tests around so that you can go back into the production code and deliberately break it to prove that both sets of tests are equivalent.

…Until next time. Happy coding.  (Now where’s my dang power supply?)