Monday, June 27, 2011

Visual Studio Code Analysis Settings

I've just spent the last week or so working with Code Analysis for my Visual Studio 2010 projects.  I still haven't figured out whether I'm using Visual Studio's built-in Code Analysis tool or FxCop 10.0, because the two tools share a lot of the same engine and rulesets.  At any rate, one of the pain points I had to work through was ensuring that the settings were consistent between my Visual Studio build configurations.  Strangely enough, I had more confidence in my changes when manually editing the csproj files in Notepad++ than from within Visual Studio -- each configuration seemed to have slightly different settings, and seeing the raw MSBuild properties made things much easier to follow.

During this process, I found it quite frustrating that none of the Code Analysis settings that are configurable from the csproj schema were documented online.  However, the IntelliSense for the schema is quite good.  The schema is located at: <Program Files>\Microsoft Visual Studio 10.0\Xml\Schemas\1033\MSBuild\Microsoft.Build.CommonTypes.xsd

Most of the settings that Visual Studio generates are simply the default values and can be discarded.
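
As a rough sketch, a trimmed-down configuration in a csproj might look something like this (the property names are real, but the values here are illustrative, not a recommendation):

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
   <!-- run Code Analysis for this configuration only -->
   <RunCodeAnalysis>true</RunCodeAnalysis>
   <!-- a ruleset name, resolved against CodeAnalysisRuleSetDirectories -->
   <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
   <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>
</PropertyGroup>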

For my reference, here's the documentation, lifted from the schema:

Project Element Description
CodeAnalysisAdditionalOptions Additional options to pass to the Code Analysis command line tool.
CodeAnalysisApplyLogFileXsl Indicates whether to apply the XSL style sheet specified in $(CodeAnalysisLogFileXsl) to the Code Analysis report. This report is specified in $(CodeAnalysisLogFile). The default is false.
CodeAnalysisConsoleXsl Path to the XSL style sheet that will be applied to the Code Analysis console output. The default is an empty string (''), which causes Code Analysis to use its default console output.
CodeAnalysisCulture Culture to use for Code Analysis spelling rules, for example, 'en-US' or 'en-AU'. The default is the current user interface language for Windows.
CodeAnalysisDependentAssemblyPaths Additional reference assembly paths to pass to the Code Analysis command line tool. A fully qualified path to a directory containing reference assemblies to be used by Code Analysis.
CodeAnalysisDictionary Semicolon-separated list of Code Analysis custom dictionaries. Wildcards are allowed.
CodeAnalysisFailOnMissingRules Indicates whether Code Analysis should fail if a rule or rule set is missing. The default is false.
CodeAnalysisForceOutput Indicates whether Code Analysis generates a report file, even when there are no active warnings or errors. The default is true.
CodeAnalysisGenerateSuccessFile Indicates whether Code Analysis generates a '$(CodeAnalysisInputAssembly).lastcodeanalysissucceeded' file in the output folder when no build-breaking errors occur. The default is true.
CodeAnalysisIgnoreBuiltInRules Indicates whether Code Analysis will ignore the default rule directories when searching for rules. The default is false.
CodeAnalysisIgnoreBuiltInRuleSets Indicates whether Code Analysis will ignore the default rule set directories when searching for rule sets. The default is false.
CodeAnalysisIgnoreInvalidTargets Indicates whether Code Analysis should silently fail when analyzing invalid assemblies, such as those without managed code. The default is true.
CodeAnalysisIgnoreGeneratedCode Indicates whether Code Analysis should suppress results against generated code, such as code marked with the GeneratedCodeAttribute. The default is true.
CodeAnalysisImport Semicolon-separated list of Code Analysis projects (*.fxcop) or reports to import. Wildcards are allowed.
CodeAnalysisInputAssembly Path to the assembly to be analyzed by Code Analysis. The default is '$(OutDir)$(TargetName)$(TargetExt)'.
CodeAnalysisLogFile Path to the output file for the Code Analysis report. The default is '$(CodeAnalysisInputAssembly).CodeAnalysisLog.xml'.
CodeAnalysisLogFileXsl Path to the XSL style sheet to reference in the Code Analysis output report. This report is specified in $(CodeAnalysisLogFile). The default is an empty string ('').
CodeAnalysisModuleSuppressionsFile Name of the file, without the path, where Code Analysis project-level suppressions are stored. The default is 'GlobalSuppressions$(DefaultLanguageSourceExtension)'.
CodeAnalysisOverrideRuleVisibilities Indicates whether to run all overridable Code Analysis rules against all targets. This will cause specific rules, such as those within the Design and Naming categories, to run against both public and internal APIs, instead of only public APIs. The default is false.
CodeAnalysisOutputToConsole Indicates whether to output Code Analysis warnings and errors to the console. The default is false.
CodeAnalysisVerbose Indicates whether to output verbose Code Analysis diagnostic info to the console. The default is false.
CodeAnalysisPath Path to the Code Analysis installation folder. The default is '$(VSINSTALLDIR)\Team Tools\Static Analysis Tools\FxCop'.
CodeAnalysisPlatformPath Path to the .NET Framework folder that contains platform assemblies, such as mscorlib.dll and System.dll. The default is an empty string ('').
CodeAnalysisProject Path to the Code Analysis project (*.fxcop) to load. The default is an empty string ('').
CodeAnalysisQuiet Indicates whether to suppress all Code Analysis console output other than errors and warnings. This applies when $(CodeAnalysisOutputToConsole) is true. The default is false.
CodeAnalysisRuleAssemblies Semicolon-separated list of paths either to Code Analysis rule assemblies or to folders that contain Code Analysis rule assemblies. The paths are in the form '[+|-][!][file|folder]', where '+' enables all rules in rule assembly, '-' disables all rules in rule assembly, and '!' causes all rules in rule assembly to be treated as errors. For example '+D:\Projects\Rules\NamingRules.dll;+!D:\Projects\Rules\SecurityRules.dll'. The default is '$(CodeAnalysisPath)\Rules'.
CodeAnalysisRuleDirectories Semicolon-separated list of directories in which to search for rules when resolving a rule set. The default is '$(CodeAnalysisPath)\Rules' unless the CodeAnalysisIgnoreBuiltInRules property is set to true.
CodeAnalysisRules Semicolon-separated list of Code Analysis rules. The rules are in the form '[+|-][!]Category#CheckId', where '+' enables the rule, '-' disables the rule, and '!' causes the rule to be treated as an error. For example, '-Microsoft.Naming#CA1700;+!Microsoft.Naming#CA1701'. The default is an empty string ('') which enables all rules.
CodeAnalysisRuleSet A .ruleset file which contains a list of rules to run during analysis. The string can be a full path, a path relative to the project file, or a file name. If a file name is specified, the CodeAnalysisRuleSetDirectories property will be searched to find the file. The default is an empty string ('').
CodeAnalysisRuleSetDirectories Semicolon-separated list of directories in which to search for rule sets. The default is '$(VSINSTALLDIR)\Team Tools\Static Analysis Tools\Rule Sets' unless the CodeAnalysisIgnoreBuiltInRuleSets property is set to true.
CodeAnalysisSaveMessagesToReport Comma-separated list of the type ('Active', 'Excluded', or 'Absent') of warnings and errors to save to the output report file. The default is 'Active'.
CodeAnalysisSearchGlobalAssemblyCache Indicates whether Code Analysis should search the Global Assembly Cache (GAC) for missing references that are encountered during analysis. The default is true.
CodeAnalysisSummary Indicates whether to output a Code Analysis summary to the console after analysis. The default is false.
CodeAnalysisTimeout The time, in seconds, that Code Analysis should wait for analysis of a single item to complete before it aborts analysis. Specify 0 to cause Code Analysis to wait indefinitely. The default is 120.
CodeAnalysisTreatWarningsAsErrors Indicates whether to treat all Code Analysis warnings as errors. The default is false.
CodeAnalysisUpdateProject Indicates whether to update the Code Analysis project (*.fxcop) specified in $(CodeAnalysisProject). This applies when there are changes during analysis. The default is false.
CodeAnalysisUseTypeNameInSuppression Indicates whether to include the name of the rule when Code Analysis emits a suppression. The default is true.
RunCodeAnalysis Indicates whether to run Code Analysis during the build.

As this has been a fun week dealing with FxCop, expect to see a few more posts related to the above settings.  Stay right here and click refresh endlessly, or subscribe to the RSS feed.

Happy coding.


Tuesday, May 31, 2011

The Tests are Broken, Now What?

Despite our best intentions to write durable tests, it seems inevitable that tests will break at some point. In some cases, breaking a test can be seen as a very positive thing: we've introduced a side-effect that has broken existing functionality, and our tests have helped prevent a bug. However, in some cases, tests break because of external factors -- maybe the tests weren't isolated enough, maybe a developer went rogue and forgot to run the tests, or some other poorly justified or lousy excuse. While identifying what went wrong in your development process is important, the immediate concern is that these broken tests represent a failed build and developer productivity is at risk -- what should you do with the broken tests?

If the developer looking at the tests knows a bit about the code, they should have enough information in the tests to help sort things out.  The developer should drop what they're doing, slow down, research the code and the tests, and fix what's broken. If caught early, it's very manageable. Unfortunately, this isn't the common case -- more often the developer introducing the breaking changes doesn't understand the tests and is ultimately left with one of two approaches:

  1. Comment out or remove the test
  2. Mark the test as Ignored

Neither of these approaches "fix" anything.  The first approach, commenting the test out, is equivalent to pissing on the requirements. The test supposedly represents a piece of functional behaviour of the application; the fact that it's broken suggests a feature is broken too. The only reason to comment out a test is if that functionality has ceased to exist or the test was invalid in the first place. (Take a look at my checklist for unit test code reviews to help qualify invalid tests.)

The second alternative, marking the tests as Ignored, can be dangerous, depending on whether you're able to track why a test has been disabled and monitor when it enters this state.  This is one part developer process, one part testing framework, one part reporting.

Within NUnit, the Ignore attribute fits the bill nicely.  The attribute accepts a string message which is used to report why the test has been disabled. It should be standard developer process to only allow use of Ignore when a message is provided. (Come to think of it, I should put in a feature request to make it required.)  When NUnit runs, the ignored tests are not executed, but the number of ignored tests is included in the xml output, meaning that your build server can track ignored tests over time. You can also bypass the Ignore attribute within the NUnit user interface by simply selecting the ignored test and running it manually.

[Ignore("The xml for this test isn't serializing correctly. Please fix!")]
[Test]
public void SuperImportantTest()
{
    // snip
}

Within MSTest, however, Ignore is essentially the same as commenting out the tests.  There's no ability to record why a test has been excluded (developers must leave comments), and when MSTest runs the test suite, the ignored tests are simply excluded from the run.  Looking at the MSTest output, the TRX file does not even define a status for ignored tests, meaning they can't be tracked in your build reports.

<ResultSummary outcome="Completed">
  <Counters 
    total="1" 
    executed="1" 
    passed="1" 
    error="0" 
    failed="0" 
    timeout="0" 
    aborted="0" 
    inconclusive="0" 
    passedButRunAborted="0" 
    notRunnable="0" 
    notExecuted="0" 
    disconnected="0" 
    warning="0" 
    completed="0" 
    inProgress="0" 
    pending="0" />
</ResultSummary>

Though I'm not a big fan of this, I'll spare you the complaining and move on to working solutions.

The best approach within MSTest isn't to disable the test but to indicate that it isn't reliable and needs attention. In addition, we need these tests to stand out without breaking the build. This is exactly what Inconclusive is for.

[TestMethod] 
public void SuperImportantTest() 
{ 
    Assert.Inconclusive("The xml for this test isn't serializing correctly. Please fix!"); 
    
    // snip 
}

The key to using Assert.Inconclusive is to put the statement as the first expression in your test. When MSTest encounters the Assert.Inconclusive statement, execution of the test is immediately halted and the test is recorded as "Inconclusive". The TRX reports this status separately, and the message appears in the xml output.

Next Steps

With the tests marked as problematic, it's possible to check in the tests and unblock developers from a failed build while you and the original test author figure out how to fix the problem. It's a temporary patch and is not intended for all breaking tests, just the small fires that happen during critical developer crunches.  Once the fire has been put out and the tests corrected, you really should go back and investigate why the test broke in the first place:

  • Should the developer’s changes have broken this test?  If not, could the code/test have been written differently to be more durable?
  • What could the developer have done differently? Is this a one time occurrence or a disconnect in process?

That’s all for now.


Monday, April 18, 2011

Testing Asynchronous Logic

I've been doing a fair bit of asynchronous logic recently where I'm designing objects to raise events after long-running operations have completed, and naturally this creates a few challenges during testing. When a method uses the asynchronous pattern, the work is executed on a separate thread and execution returns immediately to the caller. This is a problem for unit tests, which are by design a series of standalone sequential steps -- the test may finish or execute its assertions before the event is raised.

The immediate workaround is to add delays into the tests using a Thread.Sleep statement.

[Test]
public void AsynchronousTestsDontAlwaysWork()
{
    bool eventCalled = false;
    
    var subject = new MyClass();
    subject.OnComplete += (o,e) => eventCalled = true;
    
    subject.DoWorkAsync();
    
    Thread.Sleep(100);
    
    Assert.AreTrue( eventCalled ); // can fail because the event might be raised after this assertion
}

This strategy will work, but it's dangerous in the long term because the duration of the asynchronous operation depends on the complexity of the implementation and the environment the test is executing in.  In other words, you may need to increase the sleep interval if the implementation needs to do more work or if the tests fail on slower machines. In the end, the sleep interval reflects the lowest common denominator in your environment (that slow laptop the intern uses) and the entire test run becomes painfully (and unnecessarily) slow.

The solution is to add a little extra plumbing to block execution until the event is fired.

[Test]
public void AsynchronousTestWithBlockingAlwaysWorks()
{
    var reset = new System.Threading.ManualResetEvent(false);
    
    var subject = new MyClass();
    subject.OnComplete += (o,e) => reset.Set();
    
    subject.DoWorkAsync();
    
    if (!reset.WaitOne(1000)) // block for at most 1 second
    {
        Assert.Fail("The test timed out waiting for the OnComplete event.");
    }
}

The above example uses the ManualResetEvent, which in short is a signalling mechanism. The WaitOne method blocks execution until another thread calls the Set method or the timeout value is exceeded. In the above example, I've set up an anonymous delegate for my event so that the test finishes as soon as the asynchronous logic completes.

Cleaning it up...

As I mentioned earlier, I've been doing a lot of this type of testing, so the above pattern has appeared in more than one test. It would be nice to encapsulate this a bit and streamline it into a reusable utility. This post on StackOverflow had an interesting concept: wrap the blocking logic into an object that supports IDisposable. The original poster was attempting to use Expressions to access events, which sadly isn't possible from outside the class, but here's an updated example that uses reflection and a string parameter for the event name:

public sealed class EventWatcher : IDisposable
{
    private readonly object _target;
    private readonly EventInfo _eventInfo;
    private readonly Delegate _listener;
    private readonly ManualResetEvent _reset;
    private readonly int _timeout;

    public static EventWatcher Create<T>(T target, string eventName, int timeout = 1000)
    {
        EventInfo eventInfo = typeof (T).GetEvent(eventName);
        if (eventInfo == null)
            throw new ArgumentException("Event not found.", "eventName");

        return new EventWatcher(target, eventInfo, timeout);
    }

    private EventWatcher(object target, EventInfo eventInfo, int timeout)
    {
        _target = target;
        _eventInfo = eventInfo;
        _timeout = timeout;

        _reset = new ManualResetEvent(false);

        // Create our event listener
        _listener = CreateEventListenerDelegate(eventInfo.EventHandlerType);

        _eventInfo.AddEventHandler(target, _listener);
    }

    public void SetEventWasRaised()
    {
        _reset.Set();
    }

    private Delegate CreateEventListenerDelegate(Type eventType)
    {
        // Create the event listener's body: a call to SetEventWasRaised().
        var setMethod = typeof(EventWatcher).GetMethod("SetEventWasRaised");
        var body = Expression.Call(Expression.Constant(this), setMethod);

        // Get the event delegate's parameters from its 'Invoke' method.
        MethodInfo invokeMethod = eventType.GetMethod("Invoke");
        var parameters = invokeMethod.GetParameters()
            .Select((p) => Expression.Parameter(p.ParameterType, p.Name));

        // Create the listener.
        var listener = Expression.Lambda(eventType, body, parameters);
        return listener.Compile();
    }

    public void Dispose()
    {
        bool eventWasFired = _reset.WaitOne(_timeout);
        // Remove the event listener.
        _eventInfo.RemoveEventHandler(_target,_listener);

        if (!eventWasFired)
        {
            string message = String.Format("Test timed-out waiting for event \"{0}\" to be raised.", _eventInfo.Name);
            Assert.Fail(message);
        }
    }
}

[Test]
public void DemonstrateEventWatcherWithUsingBlockAndDisposePattern()
{
    var subject = new MyClass();
    
    using(EventWatcher.Create(subject, "OnComplete"))
    {
        subject.DoWorkAsync();
        
    } // throws Assert.Fail() if event wasn't called.
}

Some Refinements...

This solution works, but there are a few drawbacks:

  1. The using block is a very succinct mechanism for controlling scope, but unfortunately if an exception occurs in the body of the using statement, it will be swallowed by the Assert.Fail raised from our EventWatcher's Dispose method.
  2. This only validates that the event was called; it doesn't give us access to the event argument, which might be of interest to the test.
  3. The event name is a literal string value, which isn't very refactor-friendly.

Preventing Hidden Exceptions from the Finally block

To address the first problem, where exceptions aren't properly handled inside the using statement, we change the signature slightly to perform the test logic in an Action.

// snip...
public static void WaitForEvent<T>(T target, string eventName, Action action) 
{ 
    var watcher = Create(target, eventName);

    try 
    { 
        action(); 
    } 
    catch (Exception) 
    { 
        watcher.Clear(); 
        throw; 
    } 
    finally 
    { 
        watcher.Dispose(); 
    } 
}
// ...end snip

[Test] 
public void DemonstrateEventWatcherWithAnActionInsteadOfUsingBlock() 
{ 
    EventWatcher.WaitForEvent(subject, "OnComplete", () => subject.DoWorkAsync()); 
}

Gaining Access to the EventArgs

Addressing the second problem requires some changes to the CreateEventListenerDelegate so that the arguments of the event are recorded.

//snip
public void SetEventWasRaised(EventArgs e) 
{ 
    EventArg = e; 
    _reset.Set(); 
}

public EventArgs EventArg { get; private set; }

private Delegate CreateEventListenerDelegate(Type eventType) 
{ 
    // Get the event delegate's parameters from its 'Invoke' method. 
    MethodInfo invokeMethod = eventType.GetMethod("Invoke"); 
    var parameters = invokeMethod.GetParameters() 
        .Select((p) => Expression.Parameter(p.ParameterType, p.Name)).ToList();

    // Setup the callback to pass in the EventArgs 
    var setMethod = typeof(EventWatcher).GetMethod("SetEventWasRaised"); 
    var body = Expression.Call(Expression.Constant(this), setMethod, parameters[1]);

    // Create the listener. 
    var listener = Expression.Lambda(eventType, body, parameters); 
    return listener.Compile(); 
} // end snip

[Test] 
public void DemonstrateEventWatcherCanCaptureEventArgs() 
{ 
    EventArgs e; 
    EventWatcher.WaitForEvent(subject, "OnComplete", out e, () => subject.DoWorkAsync()); 
    
    Assert.IsNotNull(e); 
} 

Making it Refactor-Friendly...

Tackling the last problem, where we don't have type-safety for our event, requires a special workaround because we're unable to use an Expression to capture the event. From a language perspective, C# will only allow external callers to add or remove event handler assignments, so we can't write a lambda expression that points to an event reference. Hopefully this is something that will be added to the language in a future version, but until then, accessing the event requires some special hackery.

Taking a play from the Moq source code, we can infer the event name by using a dynamic proxy and intercepting the method invocation when we perform an event assignment. It's a long and awkward way around, but it works, though it comes with all the caveats of using a dynamic proxy: you need non-sealed classes with virtual events, a MarshalByRefObject, or an interface.

A note about virtual events: I’m personally not a big fan of virtual events (Resharper warnings, etc), but if you’re comfortable putting the event declarations on an interface, you can explicitly specify the interface in the generic type argument.  EventWatcher.Create<IMyInterface>(…)

public class EventAssignmentInterceptor : Castle.DynamicProxy.IInterceptor
{
    private Castle.DynamicProxy.IInvocation _lastInvocation;

    #region IInterceptor implementation

    public void Intercept(IInvocation invocation)
    {
        _lastInvocation = invocation;
    }

    #endregion

    public string GetEventName()
    {
        string methodName = _lastInvocation.Method.Name;
        return methodName.Replace("add_", "").Replace("remove_", "");
    }
}

public class EventHelper
{
    public static string GetEventName<T>(T target, Action<T> action) where T : class
    {
        var generator = new Castle.DynamicProxy.ProxyGenerator();
        var interceptor = new EventAssignmentInterceptor();

        T proxy;

        if (typeof(T).IsInterface)
        {
            proxy = generator.CreateInterfaceProxyWithoutTarget<T>(interceptor);
        }
        else
        {
            proxy = generator.CreateClassProxy<T>(interceptor);
        }

        action(proxy);

        return interceptor.GetEventName();
    }
}

Examples

Here are a few examples that illustrate the various syntaxes.  Note, for the sake of brevity (this post is crazy large), the overloads that take the event name as a literal string are not shown.

Basic usage in a using block. Note that exceptions inside the using may be swallowed by the dispose, and there's no access to the event args.

using(EventWatcher.Create(subject, t => t.OnComplete += null)) 
{ 
    subject.PerformWork(); 
} 

using(EventWatcher.Create(subject, t => t.OnComplete += null, timeout:1000)) 
{ 
    subject.PerformWork(); 
}

Condensed usage. Provides access to the EventArgs and avoids the dispose problem.

EventArgs e = EventWatcher.WaitForEvent(subject, t => t.OnComplete += null, () => subject.PerformWork());

EventArgs e; 
EventWatcher.WaitForEvent(subject, t => t.OnComplete += null, out e, () => subject.PerformWork()); 

Fluent usage. A cleaner syntax that declares the event and the action separately.

EventArgs e = EventWatcher 
            .For(subject, t => t.OnComplete += null) 
            .WaitOne(() => subject.PerformWork());

EventArgs e; 
EventWatcher.For(subject, t=> t.OnComplete += null) 
            .WaitOne( () => subject.PerformWork(), out e);

Full Source

The full source is below.

public sealed class EventWatcher : IDisposable
{
    private readonly object _target;
    private readonly EventInfo _eventInfo;
    private readonly Delegate _listener;
    private readonly ManualResetEvent _reset;
    private readonly int _timeout;

    #region Create
    public static EventWatcher Create<T>(T target, string eventName, int timeout = 1000)
    {
        EventInfo eventInfo = typeof(T).GetEvent(eventName);
        if (eventInfo == null)
            throw new ArgumentException("Event not found.", "eventName");

        return new EventWatcher(target, eventInfo, timeout);
    }

    public static EventWatcher Create<T>(T subject, Action<T> eventAssignment, int timeout = 1000) where T : class
    {
        string eventName = EventHelper.GetEventName(subject, eventAssignment);

        return Create(subject, eventName, timeout);
    } 
    #endregion

    #region For
    public static EventWatcherSetup<T> For<T>(T subject, Action<T> eventAssignment) where T : class
    {
        string eventName = EventHelper.GetEventName(subject, eventAssignment);
        return For(subject, eventName);
    }

    public static EventWatcherSetup<T> For<T>(T subject, string eventName)
    {
        return new EventWatcherSetup<T>(subject, eventName);
    } 
    #endregion

    #region WaitForEvent
    public static EventArgs WaitForEvent<T>(T target, Action<T> eventAssignment, Action action) where T : class
    {
        EventArgs e;
        WaitForEvent(target, eventAssignment, out e, action);
        return e;
    }

    public static void WaitForEvent<T>(T target, Action<T> eventAssignment, out EventArgs e, Action action) where T : class
    {
        string eventName = EventHelper.GetEventName(target, eventAssignment);
        WaitForEvent(target, eventName, out e, action);
    }

    public static EventArgs WaitForEvent<T>(T target, string eventName, Action action)
    {
        EventArgs e;
        WaitForEvent(target, eventName, out e, action);
        return e;
    }

    public static void WaitForEvent<T>(T target, string eventName, out EventArgs e, Action action)
    {
        var watcher = Create(target, eventName);

        try
        {
            action();
        }
        catch (Exception)
        {
            watcher.Clear();
            throw;
        }
        finally
        {
            e = watcher.EventArg;
            watcher.Dispose();
        }
    } 
    #endregion

    private EventWatcher(object target, EventInfo eventInfo, int timeout)
    {
        _target = target;
        _eventInfo = eventInfo;
        _timeout = timeout;

        _reset = new ManualResetEvent(false);

        // Create our event listener
        _listener = CreateEventListenerDelegate(eventInfo.EventHandlerType);

        _eventInfo.AddEventHandler(target, _listener);
    }

    public void SetEventWasRaised(EventArgs e)
    {
        EventArg = e;
        _reset.Set();
    }

    public void Clear()
    {
        _reset.Set();
    }

    public EventArgs EventArg { get; private set; }

    private Delegate CreateEventListenerDelegate(Type eventType)
    {
        // Get the event delegate's parameters from its 'Invoke' method.
        MethodInfo invokeMethod = eventType.GetMethod("Invoke");
        var parameters = invokeMethod.GetParameters()
            .Select((p) => Expression.Parameter(p.ParameterType, p.Name)).ToList();

        var setMethod = typeof(EventWatcher).GetMethod("SetEventWasRaised");
        var body = Expression.Call(Expression.Constant(this), setMethod, parameters[1]);

        // Create the listener.
        var listener = Expression.Lambda(eventType, body, parameters);
        return listener.Compile();
    }

    public void Dispose()
    {
        bool eventWasFired = _reset.WaitOne(_timeout);
        // Remove the event listener.
        _eventInfo.RemoveEventHandler(_target,_listener);

        if (!eventWasFired)
        {
            string message = String.Format("Test timed-out waiting for event \"{0}\" to be raised.", _eventInfo.Name);
            Assert.Fail(message);
        }
    }
}

public class EventWatcherSetup<T>
{
    private readonly T _target;
    private readonly string _eventName;

    public EventWatcherSetup(T target, string eventName)
    {
        _target = target;
        _eventName = eventName;
    }

    public EventArgs WaitOne(Action action)
    {
        EventArgs e;

        WaitOne(action, out e);
        
        return e;
    }
    public void WaitOne(Action action, out EventArgs e)
    {
        EventWatcher.WaitForEvent(_target,_eventName, out e, action);
    }

}

public class EventAssignmentInterceptor : Castle.DynamicProxy.IInterceptor
{
    public Castle.DynamicProxy.IInvocation LastInvocation;

    #region Implementation of IInterceptor

    public void Intercept(IInvocation invocation)
    {
        LastInvocation = invocation;
    }

    #endregion

    public string GetEventName()
    {
        string methodName = LastInvocation.Method.Name;
        return methodName.Replace("add_", "").Replace("remove_", "");
    }
}

public class EventHelper
{
    public static string GetEventName<T>(T target, Action<T> action) where T : class
    {
        var generator = new Castle.DynamicProxy.ProxyGenerator();
        var interceptor = new EventAssignmentInterceptor();

        T proxy;

        if (typeof(T).IsInterface)
        {
            proxy = generator.CreateInterfaceProxyWithoutTarget<T>(interceptor);
        }
        else
        {
            proxy = generator.CreateClassProxy<T>(interceptor);
        }

        action(proxy);

        return interceptor.GetEventName();
    }
}


Friday, March 18, 2011

Parsing Nullable Enumerations

I had an interesting challenge yesterday that involved converting a string to an enumerated value. This seems like it would be really trivial, but I had a few extra considerations that made it harder to deal with.

We're all familiar with the Enum.Parse method that has existed in the .NET Framework since version 1.1.  It's a somewhat clunky and verbose method:

MyEnum result = (MyEnum)Enum.Parse(typeof(MyEnum), "First");

Recently, the .NET Framework 4.0 introduced a new static TryParse method on the Enum type which uses generics.  It's much cleaner, if you don't mind the out parameter.

MyEnum result;

if (Enum.TryParse("First", out result))
{
    // do something with 'result'
}

Unfortunately, the above methods won't work with Nullable enumerations: TryParse constrains its generic argument to a struct, but still requires an actual enum at runtime, and Parse throws when handed a Nullable type. After some experimenting and fussing with casting between generic types, I managed to get a solution that works with both standard and nullable enumerations.

I’m sure there’s a bit of extra boxing in here, so as always your feedback is welcome.  Otherwise, this free-as-in-beer extension method is going on my tool belt.

public static TResult ConvertToEnum<TResult>(this string source)
{
    // we can't get our values out of a Nullable so we need
    // to get the underlying type
    Type enumType = GetUnderlyingTypeIfNullable(typeof(TResult));
        
    // unfortunately, .net 4.0 doesn't support an Enum generic constraint
    if (!enumType.IsEnum)
        throw new NotSupportedException("Only enums can be converted here, chum.");

    if (!String.IsNullOrEmpty(source))
    {        
        object rawValue = GetRawEnumValueFromString(enumType, source);
        
        // if there was a match
        if (rawValue != null)
        {
            // having the value isn't enough, we need to
            // convert this back to an enum so that we can 
            // perform an implicit cast back to the generic type
            object enumValue = Enum.Parse(enumType, rawValue.ToString());
        
            // implicit cast back to generic type
            // if the generic type was nullable, the cast
            // from non-nullable to nullable is also implicit
            return (TResult)enumValue;
        }
    }
    
    // if no original value, or no match use the default value.
    // returns 0 for enum, null for nullable<enum>
    return default(TResult);
}

private static Type GetUnderlyingTypeIfNullable(Type type)
{
    if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(Nullable<>))
    {
        // Nullable<T> only has one type argument
        return type.GetGenericArguments().First();
    }
    
    return type;
}

private static object GetRawEnumValueFromString(Type enumType, string source)
{
    FieldInfo[] fields = enumType.GetFields(BindingFlags.Public | BindingFlags.Static);

    // attempt to find our string value
    foreach (FieldInfo field in fields)
    {
        object value =  field.GetRawConstantValue();

        // exact match
        if (field.Name == source)
        {
            return value;
        }

        // attempt to locate in attributes
        var attribs = field.GetCustomAttributes(typeof(XmlEnumAttribute), false);
        if (attribs.Cast<XmlEnumAttribute>().Any(attrib => source == attrib.Name))
        {
            return value;
        }
    }

    return null;
}
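
Usage ends up looking like this (MyEnum standing in for your own enum type):

MyEnum first = "First".ConvertToEnum<MyEnum>();      // match: returns MyEnum.First
MyEnum? missing = "Nope".ConvertToEnum<MyEnum?>();   // no match: returns null
MyEnum fallback = "Nope".ConvertToEnum<MyEnum>();    // no match: returns default(MyEnum), i.e. 0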

Heh: “Free as in Beer”. Happy St. Patty’s, everybody.

submit to reddit

Friday, March 04, 2011

Add a Custom Toolbar for Source Control

My current project uses TFS and I spend a lot of time in and out of source control, switching between workspaces to manage defects. I found myself needing quick access to my workspaces and getting really frustrated with how clunky this operation is within Visual Studio.  There are two ways you can open source control:

  1. The Source Control item in the Team Explorer tool window.  I don’t always have the Team Explorer tool window open, and when I open it, it takes a few seconds to get details from the server.
  2. View –> Other Windows –> Source Control Explorer.  Useful, but there’s too much mouse movement and clicking to be accessible.

So, rather than creating a custom keyboard shortcut that I would forget I added a toolbar that is always in plain-sight. It’s so convenient that I take it for granted, and when I pair with others they comment on it. So for their convenience (and yours), here’s how it’s done.

Add a new Toolbar

From the Menubar, select “Tools –> Customize”.  It’ll pop up this dialog. 

Click on “New” and give your toolbar a name.

[Screenshot: the Customize dialog, adding a new toolbar]

Add the Commands

Switch to the Commands tab and select the name of your toolbar in the Toolbar dropdown.

[Screenshot: the Commands tab with the new, empty toolbar selected]

Click on the “Add Command” button and select the following commands:

  • View : TfsSourceControlExplorer
  • View : TfsPendingChanges

[Screenshot: the Add Command dialog]

Now style the buttons accordingly using the “Modify Selection” button.  I’ve set mine to use the Default styling, which is just the button icon.

Enjoy

[Screenshot: the finished source control toolbar]

Thursday, March 03, 2011

Building Custom Test Frameworks

Author’s note: this post focuses on writing test tools to simplify repetitive tasks such as complex assertions and is inspired by colleagues that wrote a tool that nearly evolved beyond their control. I am not encouraging developers to write frameworks that mask symptoms of poorly isolated components or other design problems.

For some, evolving code through writing tests is seen as a labour of love. For others, it's just labour. The latter is especially true when you realize that you have to write a lot of repetitious tests that will likely introduce test friction later on. When faced with this dilemma, developers rise to the challenge and write code.

Fortunately, there are a lot of great test framework tools popping up in the .net community that are designed to plug into your test code. (I’m running out of battery power on my laptop as I write this, so my list of tools is lacking.  Send me a note and I’ll list your tool here.) These tools can certainly make things easier by removing some obstacles or laborious activities, but if you're planning on writing your own tool or framework there are a few pitfalls.

Common Pitfalls

Obscured Intent

One of the goals of well written unit tests is to provide live examples of proper usage so that developers can learn more about the code's intended behaviour. This benefit can be significantly hindered when the usage and clear intent of your code has been abstracted into your framework.

Eventual Complexity

Applying automation to a problem follows the 80/20 rule where the majority of problems fit nicely into your abstraction. The edge cases however have a tendency to add bloat and it doesn't take much to quickly trash the idea of a simple tool. This is rarely a consequence of poor planning or bad design; additional complexity tends to creep in over time as your code evolves or as more consumers of the tool come on board.

There's a consequence to this type of complexity: if only a few developers understand the tool's implementation, you risk turning those developers into dedicated tool maintainers. Even worse, if those developers leave your team, there's a risk that the entire suite of tests will be abandoned if they start failing.

Dependence on Tool Quality / False Positives

In TDD, application defects hide in the tests you haven't written, so quality is a reflection of the accuracy and completeness of the tests. Likewise, tests that leverage a custom test framework are only as reliable as the completeness and accuracy of the tool. If the framework takes on the lion's share of the work, then the outcome of the test is abdicated to the tool. This is dangerous because any change to the tool's logic could unintentionally allow subtle defects in the tested code to go unnoticed. False positives = tests that lie!

Recommendations

Tests for Tools

Oddly enough, if your goal is to write a tool so that you don't need to write tests, you are going to need to write tests for the tool. Having tests for your custom tool ensures that false positives aren’t introduced as the tool is enhanced over time. From a knowledge transfer perspective, the tests serve as a baseline to describe the tool’s responsibilities and enable others to add new functionality (and new tests) with minimal oversight from the original tool author.

Design for Readability

Great frameworks fit in and enable you to express intent; so be careful of over-reaching and trying to do too much. Make sure your tool doesn't abstract away clues as to what the test is intended for, and if possible use descriptive method names to improve readability. Documenting your tool with Xml documentation syntax and comments is also very helpful for intent.

Evolve Naturally

Unless you spend weeks planning out your unit tests, custom test tools are the product of necessity that are realized when you start to write the second or third duplicated test.  I tend to realize my test tools as “found treasures” of my normal TDD development cycle.

Rather than diving in and writing a framework or tool first, focus on satisfying the requirement of working software by writing the test using normal TDD best practices (red, green, refactor). During the clean up and refactor step, you will find duplication between tests or even different fixtures. When that happens, promote the duplicated code to a helper method, then a helper class. I like to think that great TDD is about refactoring mercilessly while keeping the original intent of the code clear – it’s about balance, know when you’ve gone too far.
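
For instance, a duplicated comparison might graduate into a small helper like this (a contrived sketch -- XmlAssert is a made-up name, not part of any framework):

public static class XmlAssert
{
    // compares two xml snippets while ignoring formatting differences
    public static void AreEqual(string expectedXml, string actualXml)
    {
        var expected = XDocument.Parse(expectedXml).ToString(SaveOptions.DisableFormatting);
        var actual = XDocument.Parse(actualXml).ToString(SaveOptions.DisableFormatting);

        Assert.AreEqual(expected, actual);
    }
}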

If you reach a point where you can extract commonalities between a few helper classes into a generic solution, you’ll find yourself standing in the doorway between fixtures that were self-contained and fixtures that depend on shared test logic.  Learn to recognize this moment, because this is when you should stop writing tests for production code for a moment and write some tests for the tool. It’s also a good idea to keep a few of the manual tests around so that you can go back into the production code and deliberately break it to prove that both sets of tests are equivalent.

…Until next time. Happy coding.  (Now where’s my dang power supply?)

Tuesday, February 08, 2011

Plaintext + Dropbox = Ubiquitous Text

I am loving this. If you blog or like to jot down notes while on the go then I highly recommend the following setup.

Dropbox

Dropbox is a small cloud-based utility that synchronizes files between all your devices. It works on PC, Mac, iOS, Android and Blackberry.  It even has a web interface that you can browse from any computer, and it can track previous versions of files. Best of all, it's free (for 2 GB of storage).

There are a number of applications that use Dropbox as their storage medium. My favorite so far is PlainText.

PlainText

PlainText is a bare-bones text editor with a minimal UI that uses Dropbox as its file system. Simply download it to your iPhone, iPod Touch or iPad, log in with your Dropbox account, and your data magically syncs each time you touch the contents. The only limitation is that it doesn't have the ability to move files to different folders, but I can do that on my PC.

Now it's possible for me to sketch down an idea on my iPhone while riding the subway or streetcar, then pick up where I left off and flesh out the details on my iPad while watching TV. I can even switch between devices with almost no wait, so I can proofread or work through an idea whenever I have a free moment.

Getting started

Setup is easy.  On your PC or Mac, go to Dropbox and set up an account; the software will download on the next page.  Install the application and associate it with your user account. The free version gives you 2 GB of storage (more if you enlist your friends) and they offer more storage through paid upgrades.

On your iOS device, download PlainText.  In the settings, link it to your Dropbox account.

[Screenshot: linking PlainText to your Dropbox account]

PlainText by default will create a folder in your Dropbox called “PlainText”.  This is helpful so that you’re only syncing small files.  Here’s a quick screen-capture of the available settings.

[Screenshot: PlainText's available settings]

You can create files and folders on your iOS device.  Any changes will automatically sync to your PC/Mac.

[Screenshot: files syncing to the Dropbox folder on the PC]

Disclosure: This is not a paid advertisement, I just really like this configuration.  I should point out that the links to sign-up are to my referral page.  If you are vehemently opposed to this you can visit the site referral free here: https://www.dropbox.com/

Thursday, January 27, 2011

Using Xslt in MSBuild

Recently, I had the opportunity to integrate some of my Xslt skills into our MSBuild script.  As MSBuild is a relatively new technology for me (compared to NAnt), venturing into this task reminded me that there’s always more than one way to get the job done.  Prior to .NET 4.0, MSBuild didn’t natively support Xslt capabilities, so the developer community independently produced their own solutions.  So far, I’ve identified the following three implementations:

  • MSBuild Community Tasks: Xslt
  • MSBuild Extension Pack: XmlTask
  • MSBuild 4.0: XslTransformation

I thought it would be fun to contrast the different implementations.

MSBuild Community Tasks: Xslt

Hosted on tigris.org, the msbuildtasks implementation was the task I ultimately ended up using. Of the three listed, this one seems to be the most feature-complete – but working with it is extremely frustrating because there is no online help.  Fortunately, there is a help file that ships with the installer that provides decent coverage for the Task (albeit a bit buried).

The interesting feature that sets this Task apart is that it aggregates your xml files into a single in-memory document and performs the transformation once.  Keep in mind that this has an impact on how you structure your XPath queries.  For example, an expression like count(//element-name) will count all instances in the aggregated contents.

Because it is an aggregator, your xml documents will be loaded as children elements of a root element.  As such, you must supply the name for the Root element. If you are only transforming a single file, the root element can be left blank (“”).
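
For example, if each input file's document element is <project>, the aggregated document the stylesheet actually sees looks conceptually like this (Root being whatever name you supplied in the RootTag attribute):

<Root>
  <project><!-- contents of first.xml --></project>
  <project><!-- contents of second.xml --></project>
</Root>

An XPath expression that matched /project against a single file needs to become /Root/project here.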

My only complaint about this implementation is that it only supports writing the transformation to a file. For my purposes, I would have really liked the ability to pipe the output into a Property or ItemGroup. To achieve this, you must read the contents of the file back into your script. Not a big deal, just an extra step.

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
   <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

   <Target Name="Example">

      <ItemGroup>
        <XmlFiles Include="*.xml" />
      </ItemGroup>

      <PropertyGroup>
        <XslFile>example.xslt</XslFile>
      </PropertyGroup>

      <!-- perform our xslt transformation for all xml files -->
      <Xslt
         Inputs="@(XmlFiles)"
         Xsl="$(XslFile)"
         RootTag="Root"
         Output="xslt-result.html"/>

      <!-- read the file into a property -->
      <ReadLinesFromFile File="xslt-result.html">
         <Output TaskParameter="Lines" ItemName="MyOutput"/>
      </ReadLinesFromFile>  

      <!-- display the output (with no delimiter) -->
      <Message Text="@(MyOutput,'')" />

    </Target>

</Project>

In reading the help documentation, I noticed that this task will pass the extended meta-data of the Inputs into the document as parameters to the Xslt.  This looks interesting and I may want to revisit this in a future post.

MSBuild Extension Pack: XmlTask

I really liked this implementation, but unfortunately our build server wasn’t using this set of extensions, so I had to retrofit my script to use the community tasks version.

This task boasts a few interesting features:

  1. The output of the transformation can write to a file or a property/item.
  2. Ability to validate the Xml.
  3. Ability to suppress the Xml declaration (though you can do this in your xslt, see below)

This example is comparable to above but uses the Output element of the Task to route the transformation into an ItemGroup (MyOutput).

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
   <Import Project="$(MSBuildExtensionsPath)\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks" />
            
   <Target Name="Example">

      <ItemGroup>
        <XmlFiles Include="*.xml" />
      </ItemGroup>

      <PropertyGroup>
        <XslFile>example.xslt</XslFile>
      </PropertyGroup>

      <MSBuild.ExtensionPack.Xml.XmlTask 
          TaskAction="Transform" 
          XmlFile="%(XmlFiles.Identity)" 
          XslTransformFile="$(XslFile)"
          >
          <Output 
             ItemName="MyOutput" 
             TaskParameter="Output"
             />
      </MSBuild.ExtensionPack.Xml.XmlTask>        
        
      <Message Text="@(MyOutput, '')" />
        
   </Target>

</Project>

Note that I can switch between multiple input files and a single input file simply by changing the XmlFile attribute from an ItemGroup to a Property.  Unlike the Community Tasks implementation, my Xslt stylesheet doesn’t need special consideration for an artificial root element, which makes the stylesheet shorter to develop and easier to maintain.

MSBuild 4.0: XslTransformation

Last but not least is the newcomer on the block, MSBuild 4.0’s XslTransformation Task.

This task, to my knowledge, can only write the transformation to a file.  If you’re only transforming a single file this won’t be a problem for you, but if you have multiple files that you want to concatenate into a single output, it’ll take some finesse.

I solved this problem by creating a unique output file for each transformation.  Aside from some extra files to clean-up, the task worked great.

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
            
   <Target Name="Example">

      <ItemGroup>
        <XmlFiles Include="*.xml" />
      </ItemGroup>

      <PropertyGroup>
        <XslFile>example.xslt</XslFile>
      </PropertyGroup>

      <!-- Perform the transform for each file, but
              write the output of each transformation to its
              own file. -->
      <XslTransformation
         OutputPaths="output.%(XmlFiles.FileName).html"
         XmlInputPaths="%(XmlFiles.Identity)"
         XslInputPath="$(XslFile)"
         />
 
      <!-- Read all our transformation outputs back into an ItemGroup -->
      <ReadLinesFromFile File="output.%(XmlFiles.FileName).html">
         <Output TaskParameter="Lines" ItemName="MyOutput" />
      </ReadLinesFromFile>
        
      <Message Text="@(MyOutput, '')" />
        
   </Target>

</Project>

This implementation also supports passing parameters, which may come in handy.

Some Xslt Love

And for completeness’ sake, here’s a skeleton Xslt file that suppresses the xml declaration.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    
    <xsl:output method="html" omit-xml-declaration="yes" />
    
    <xsl:template match="/">
        <p>your output here</p>
    </xsl:template>

</xsl:stylesheet>

So go out there and unleash some Xslt-Fu.


Tuesday, January 25, 2011

MSBuild for NAnt Junkies

Over the last seven years, any time I needed to write a build script, I've turned to NAnt as the tool to get the job done. I’ve relied on NAnt so much that the NAnt task reference page is usually on my Most Visited page in Chrome. The concepts behind NAnt are straightforward, and once you understand the basics it’s a great skill to have.

My current project is using MSBuild as the principle build script and this week I had the pleasure of having to tweak it. Although I’m a little late to adopting MSBuild as a scripting tool, this was a good opportunity to get to know its idiosyncrasies and learn a new technology. My verdict? They share a lot of similar characteristics, but if you’re familiar with NAnt you’ll find that MSBuild has a few foreign concepts that take some time to wrap your head around.

While this is still new territory for me, I thought it would be fun to compare the two and share my findings here for my reference (and for yours!).

The Basics

Structure

On the surface, MSBuild and NAnt appear to be related, as they share a common lexicon and overall structure. The scripts are composed of Projects and Targets.

NAnt:
<?xml version="1.0"?>
<project default="MyTarget" >
    
    <target name="MyTarget">
        <echo message="Hello World" />
    </target>

</project>
MSBuild:
<Project DefaultTargets="MyTarget"
      xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <Target Name="MyTarget">
        <Message Text="Hello World" />
    </Target>

</Project>

Execution of the scripts is nearly identical, with only minor differences in the input arguments:

nant -buildfile:example.build MyTarget
msbuild example.csproj /target:MyTarget

Tasks

Both NAnt and MSBuild ship with a rich set of Tasks, and both support writing your own Tasks if the defaults aren’t enough.  They also support writing custom tasks inline using your favourite .NET language.
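
As a taste of the inline flavour, MSBuild 4.0 can host task code directly in the script through the CodeTaskFactory; NAnt's rough equivalent is its <script> task. A minimal sketch:

<UsingTask TaskName="SayHello" TaskFactory="CodeTaskFactory"
           AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
    <ParameterGroup>
        <Name ParameterType="System.String" Required="true" />
    </ParameterGroup>
    <Task>
        <Code Type="Fragment" Language="cs">
            Log.LogMessage("Hello, " + Name);
        </Code>
    </Task>
</UsingTask>

<Target Name="Greet">
    <SayHello Name="World" />
</Target>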

Properties

The most notable difference between the two technologies starts to appear in how they handle Properties.

NAnt has a fixed schema such that all properties are defined exactly the same way.  You reference properties in your scripts using a ${property-name} syntax.

Using properties with NAnt
<property name="myproperty" value="hello world" />

<echo message="${myproperty}" />

MSBuild on the other hand leverages the self-describing nature of XML to describe properties. This can really throw you off, since your properties won’t adhere to a known schema, which means there’s no IntelliSense support. This seems like an odd move given that MSBuild is the underlying file format for Visual Studio project files, but it creates an opportunity to attach additional meta-data to the properties, which I’ll explain later in this post.

Rather than using an element named “property”, any element that appears inside a PropertyGroup element is a property.  Similarly to NAnt, you reference properties using a $(property-name) syntax -- note the round brackets instead of curly braces.

Using Properties with MSBuild:
<PropertyGroup>
    <MyProperty>Hello World</MyProperty>
</PropertyGroup>

<Message Text="$(MyProperty)" />

MSBuild also supports the concept of ItemGroups, which equate roughly to multi-valued properties. The closest equivalent in NAnt would be filesets, though those are limited to certain tasks, like the list of files and folders used with the Copy task.

Much like Properties, the element name represents the name of the ItemGroup. You reference an ItemGroup using a @(ItemGroup-Name) syntax.

Using ItemGroups with MSBuild:
<ItemGroup>
    <MyList Include="one" />
    <MyList Include="two" />
    <MyList Include="three" />
</ItemGroup>

<Message Text="@(MyList)" />

Produces the output:

one;two;three

By default, when accessing items in an ItemGroup using the @(item-group-name) syntax, the items are concatenated using a semi-colon delimiter. You can specify a different delimiter as well; for example, the following produces a comma-delimited list.

<Message Text="@(MyList, ',')" />

Advanced Stuff

Conditions

There are times where you want to conditionally invoke a target or task, or conditionally configure a property, and both tools accommodate this.  NAnt describes conditions in the form of two attributes, “if” and “unless” – unless is the opposite of if. NAnt also supports a very impressive list of built-in functions, including string, file, date and environment routines.  MSBuild doesn’t have extensive support for inline expressions but does offer a set of Property Functions and standard boolean logic.  However, MSBuild has a very long list of Tasks and community extensions that provide coverage in this area.

To assist with the conditional expressions you may write, NAnt also ships with out of the box properties.  MSBuild has considerably more reserved properties, though most of them seem centered around the Visual Studio compilation process.
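
Here is roughly the same guard expressed in both dialects (the property names are made up for the example):

<!-- NAnt: if/unless attributes hold a boolean expression -->
<echo message="Release build" if="${build.config == 'Release'}" />

<!-- MSBuild: a Condition attribute, available on nearly every element -->
<Message Text="Release build" Condition=" '$(Configuration)' == 'Release' " />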

Output

Sometimes your build script needs to control flow of execution based on the outcome of the preceding tasks.  In the NAnt world, a lot of the default Tasks expose an attribute that contains the output.

This NAnt example directs the exit code of the Exec task to the property MyOutput.

<target name="RunProgram">

    <exec
        program="batch.bat"
        resultproperty="MyOutput"
        />

    <echo message="${MyOutput}" />
</target>

MSBuild has a similar strategy, except there’s a finer level of control: all Tasks can contain an Output element that dictates where to direct output of the Task.  This is advantageous if there are multiple output values that you want to collect from the Task.

This MSBuild example uses the Output element to put the value of the Exec Task’s ExitCode into the Property MyOutput:

<Target Name="RunProgram">

    <Exec Command="batch.bat">
        <Output
            TaskParameter="ExitCode"
            PropertyName="MyOutput" />
    </Exec>
    
    <Message Text="$(MyOutput)" />

</Target>

MetaData

Earlier I mentioned that MSBuild leverages the self-describing nature of XML to provide additional meta-data about properties (more specifically Item Groups).  This loose format to describe our properties means that we can embed a lot of extra data that can be leveraged by custom Tasks.  After this example, it’s clear to see how MSBuild and Visual Studio use meta-data to assist in the compilation of our projects.

We can embed meta-data by adding custom child elements below each Item:

<ItemGroup>
    <StarWarsMovie Include="episode3.xml">
        <Number>III</Number>
        <Name>Revenge of the Sith</Name>
        <Awesome>no</Awesome>
    </StarWarsMovie>
    <StarWarsMovie Include="episode4.xml">
        <Number>IV</Number>
        <Name>A New Hope</Name>
        <Awesome>yes</Awesome>
    </StarWarsMovie>
    <StarWarsMovie Include="episode5.xml">
        <Number>V</Number>
        <Name>The Empire Strikes Back</Name>
        <Awesome>yes</Awesome>
    </StarWarsMovie>
</ItemGroup>

And we can access the meta-data of each individual Item using the %(meta-name) syntax.  For example, we can spit out only the awesome Star Wars films by filtering them by their meta-data.

<Message Text="@(StarWarsMovie)"
         Condition=" '%(Awesome)' == 'yes'"/>

The above displays the awesome episodes (“episode4.xml;episode5.xml”), but there’s a really interesting and powerful aspect to the above statement.  The “%” character accesses the individual items before they are appended to the output, which is very similar to a for loop.  So where “@(StarWarsMovie)” sends the entire ItemGroup to the Message task as a single list, we can execute the Message Task in a loop using this syntax:

<Message Text="%(StarWarsMovie.Name)" 
         Condition=" '%(Awesome)' == 'yes'"/>

Which produces the output:

A New Hope
The Empire Strikes Back

There’s also a set of baked-in meta-data properties available to all ItemGroups.  The most notable is the Identity meta-property which gives us the name of the individual Item.
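
For example, the following invokes the Message task once per item, printing each Include value on its own line:

<Message Text="%(StarWarsMovie.Identity)" />

Other built-in meta-data includes %(Filename), %(Extension) and %(FullPath).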

Transforms

Another interesting feature that MSBuild provides is the ability to convert the contents of a list into another format using the transform syntax @(ItemGroup -> expected-format).  When combined with meta-data, we’re able to create rich representations of our data. Sticking with my Star Wars theme (hey, at least I'm consistent), the following transform:

@(StarWarsMovie -> 'Star Wars Episode %(Number): %(Name)', ', ')

Creates a comma-delimited list:

Star Wars Episode III: Revenge of the Sith, Star Wars Episode IV: A New Hope, Star Wars Episode V: The Empire Strikes Back

Wrap up

While MSBuild shares a lot of common ground with NAnt, there are certainly a lot of interesting features that warrant a closer look.

Happy Coding.


Wednesday, January 05, 2011

Selenium Toolkit for .NET 0.84 Released

Welcome, 2011: things are shaping up to be a great year.  Things have been quiet around here; aside from the limited notices I sent out on Twitter, you may not have noticed that I published two releases of the Selenium Toolkit for .NET since November.

So what's new in the Toolkit?

While there have been a few minor fixes and code changes, the last two releases for the toolkit have been focused on keeping the toolkit current with its external dependencies (NUnit / Selenium / Selenium IDE).  As such, expect to see more frequent releases this year.

By far, the biggest change in the toolkit is the new WiX-based installer.  I’m personally excited about the ability to build the installer as part of an automated build, and WiX’s declarative syntax lets you do a lot of great things that would normally require custom code.  For example, the installer is able to conditionally deploy files based on the presence of a registry key and folder value.
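
Here's a rough sketch of what that kind of conditional deployment looks like in WiX; the registry key, property, directory and file names below are hypothetical, not the toolkit's actual authoring:

<Property Id="NUNITDIR">
    <RegistrySearch Id="FindNUnit" Root="HKLM"
                    Key="Software\nunit.org\NUnit\2.5.9"
                    Name="InstallDir" Type="directory" />
</Property>

<DirectoryRef Id="NUnitAddinsFolder">
    <Component Id="NUnitAddin" Guid="*">
        <!-- deploy this component only if the registry search found NUnit -->
        <Condition><![CDATA[NUNITDIR]]></Condition>
        <File Source="Selenium.NUnit.Addin.dll" />
    </Component>
</DirectoryRef>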

The second biggest change (enabled through the WiX installer) is a resolution to a common problem related to the Toolkit’s NUnit addin.  Without going into the specific details, the short story is that an NUnit addin only works with the version of NUnit it was compiled against.  This led to some confusion where users would copy the addin into their recently installed, latest version of NUnit, only to find out the hard way that nothing works.  Since version 0.83, the installer includes separate version-specific addins and deploys them to the appropriate NUnit addin folder during installation.  NUnit versions 2.5.3, 2.5.5, 2.5.7, 2.5.8 and 2.5.9 are now supported as separate addins.

In case you’re wondering, 2.5.4 and 2.5.6 were published for only a few weeks before they were superseded by 2.5.5 and 2.5.7 respectively.

Also, you asked, I listened: the Toolkit is now published as both an MSI and ZIP.

What about Selenium 2.0 / WebDriver?

As my current project at work is a monster WPF application, I haven’t had a lot of cycles to play with the new WebDriver feature-set.  This may change, as I’m planning on reaching out to some web-based projects within my organization and offering some test guidance.

As far as Selenium 2.0 betas are concerned, I am keeping a close eye on the project and as soon as the .NET bindings and overall functionality stabilize I may upgrade the internal bits to leverage the 1.0 backward compatible features in the 2.0 beta (WebDriverBackedSeleniumBrowser).  I may even fork the code and offer two releases.  Time will tell, as will your feedback.

So with that, short-n-sweet – Happy New Year.  You can find the latest release here:  http://seleniumtoolkit.codeplex.com/releases/view/58383


Monday, December 13, 2010

Unit Test Review Checklist

Whenever someone mentions that they want to bring a code-review process into a project, my first inclination is to have teams review the tests.  Tests should be considered production assets that describe the purpose and responsibilities of the code, and developers should be making sure that their tests satisfy this goal.  If the tests look good and they cover the functionality that is required, I don’t see much point in dwelling on the implementation: if the implementation sucks, you can refactor freely assuming you have proper coverage from your tests.

A unit test review process is a great way to ensure that developers are writing useful tests that will help deliver high quality code that is flexible to change.  As code-review processes have the tendency to create stylistic debates within a development group, having a checklist of the 5-10 points to look for is a great way to keep things moving in the right direction.

Here are the criteria I’d like to see a unit test meet.  Hopefully you find this useful, too.

Is the test well named?

  • Does the test name clearly represent the functionality being tested?
  • Will someone with little or no knowledge of the component be able to decipher why the test exists?

Is the test independent?

  • Does the test represent a single unit of work?
  • Can the test be executed on its own, or is it dependent on the outcome of tests that preceded it?  For example, a fixture that shares state between tests may inadvertently introduce side-effects.

Is the test reliable?

  • Does the test produce the same outcome consistently?
  • Does the test run without manual intervention to determine a successful outcome?

Is the test durable?

  • Is the test designed so that it isn’t susceptible to changes in other parts of the application?  For example:
    • Does the test have complex or lengthy setup?
    • Will changes in the subject's dependencies lead to breakages?

Are the assertions valid?

  • Does the test contain assertions that describe the functionality being tested?
  • Do the assertions include helpful error messages that describe what should have happened if the test fails?
  • Are any assertions redundant, such as testing features of the CLR (constructors or get/set properties)?
  • Are the assertions specific to this test, or are they duplicated in other tests?

Is the test well structured and documented?

  • Does the test contain sufficient comments?
  • Does the test highlight the subject under test and the corresponding functionality being exercised?
  • Is there a clear indication of the arrange/act/assert pattern, such that the purpose of the arrange makes sense, the action is easily identifiable, and the assertions are clear?

Is the test isolating the responsibilities of the subject?

  • Does the test make any assumptions about the internals of dependencies that aren't immediately related to the subject?
  • Is the test inaccurately or indirectly testing a responsibility or implementation of another object?  If so, move the test to the appropriate fixture.

Are all scenarios covered?

  • Does the fixture test all paths of execution?  Not just for the code that has been written, but also for scenarios that may have been missed:
    • invalid input?
    • failures in dependencies?

Does the test handle resources correctly?

  • If the test utilizes external dependencies (databases, static resources), is the state of those dependencies returned to a neutral state?
  • Are resources allocated efficiently to ensure that the test runs as fast as possible:
    • effective setup/teardown?
    • no usages of Thread.Sleep()?

Is the complexity of the test balanced?

Tests need to strike a balance between Don't Repeat Yourself and clear examples of how to use the subject.

  • Is the test too complicated or difficult to follow?
  • Is there a lot of duplicated code that would lead to time-consuming effort if the subject changed?

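To make these criteria concrete, here's a minimal sketch of a test that aims to satisfy several of them.  The Account class, its Withdraw method, and its error message are illustrative assumptions (NUnit syntax), not code from a real project:

[Test]
public void Withdraw_WithInsufficientFunds_ThrowsInvalidOperationException()
{
    // Arrange: a hypothetical account with a known balance
    var account = new Account(balance: 50m);

    // Act / Assert: withdrawing more than the balance should fail
    var ex = Assert.Throws<InvalidOperationException>(() => account.Withdraw(100m));

    // assumes the hypothetical Withdraw mentions "insufficient" in its message
    Assert.That(ex.Message.Contains("insufficient"),
        "Withdraw should explain why the operation was rejected");
}
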
What do you think?  Are there criteria that you’d add?  Things you disagree with?

Happy coding.

Monday, October 18, 2010

Callbacks with Moq

My current project is using Moq as our test mocking framework. Although I've been a fan of RhinoMocks for several years, I've found that, in general, they both support the same features, but I'm starting to think I like Moq's syntax better. I'm not going to go into an in-depth comparison of the mocking frameworks, as that's well documented elsewhere. Instead, I want to zero in on a feature that I've had my eye on for some time now.

Both RhinoMocks and Moq support a peculiar feature that lets you invoke a callback method in your test code when a method on your mock is called. This feature seems odd to me because in the majority of cases a mock is either going to return a value or throw an exception - surely invoking some arbitrary method in your test must represent a very small niche requirement.

According to the RhinoMocks documentation, the callback takes a delegate that returns true or false. From this it's clear that you would use the delegate to inspect the method's incoming parameters and return false if they didn't meet your expectations. However, as Ayende alludes, this is a powerful feature that can easily be abused.

Moq provides this feature too, but the syntax is different. Rather than requiring a Func<bool> delegate, Moq's Callback mimics the signature of the mocked method. From this syntax it's not obvious that the callback should be used for validating inbound parameters, which suggests that it could be abused, but it also implies freedom for the test author to do other things. Granted, this can get out of control and become abusive, but perhaps a level of discretion about its usage is also implied?

Here are a few examples where Callbacks have been helpful:

Validating inbound method parameters

The best example I can imagine for inspecting an inbound parameter is a method that has no return value - for example, sending an object to a logger. If we were to assume that validation of the inbound parameter is the responsibility of the logger, we would only identify invalid arguments at runtime. Using the callback technique, we can write a test to enforce that the object being logged meets minimum validation criteria.

[TestMethod]
public void Ensure_Exception_Is_Logged_Properly()
{
  Exception exception = null;

  Mock.Get(Logger)
      .Setup( l => l.Error(It.IsAny<Exception>()) )
      .Callback<Exception>( ex => exception = ex );

  Subject.DoSomethingThatLogsException();

  Assert.AreEqual("Fatal error doing something", exception.Message);
}

Changing state of inbound parameters

Imagine a WPF application that uses the MVVM pattern, where we need to launch a view model as a modal dialog. The user can make changes to the view model in the dialog and click OK, or they can click Cancel. If the user clicks OK, the view model state needs to reflect their changes; if they click Cancel, any changes made need to be discarded.

Here's the code:

public class MyViewModel : ViewModel
{
  /* snip */

  public virtual bool Show()
  {
      var clone = this.Clone();
      var popup = new PopupWindow();
      popup.DataContext = clone;

      if (_popupService.ShowModal(popup))
      {
          CopyStateFrom(clone);
          return true;
      }
      return false;
  }

}

Assuming that the popup service is a mock object that returns true when the OK button is clicked, how do I test that the contents of the popup dialog are copied back into the subject? How do I guarantee that changes aren't applied if the user clicks Cancel?

The challenge with the above code is that the clone is a copy of my subject. I have no means of interception to mock this object unless I introduce a mock ObjectCloner into the subject (which is ridiculous, btw). In addition, the changes to the view model happen while the dialog is shown.

While the test looks unnatural, Callbacks fit this scenario really well.

[TestMethod]
public void When_User_Clicks_Ok_Ensure_Changes_Are_Applied()
{
  Mock.Get(PopupService)
      .Setup(p => p.ShowModal(It.IsAny<PopupWindow>()))
      .Callback<PopupWindow>( ChangeViewModel )
      .Returns(true);

  var vm = new MyViewModel(PopupService)
              {
                  MyProperty = "Unchanged"
              };

  vm.Show();

  Assert.AreEqual("Changed", vm.MyProperty);
}

private void ChangeViewModel(PopupWindow window)
{
  var viewModel = window.DataContext as MyViewModel;
  viewModel.MyProperty = "Changed";
}

The key distinction here is that changes that occur to the popup are in no way related to the implementation of the popup service. The changes in state are a side-effect of the object passing through the mock. We could have rolled our own mock to simulate this behavior, but Callbacks make this unnecessary.

Conclusion

All in all, Callbacks are an interesting feature that allows us to write sophisticated functionality into our mock implementations. They provide a convenient interception point for parameters that would normally be difficult to get under the test microscope.

How are you using callbacks? What scenarios have you found where callbacks were necessary?


Tuesday, October 12, 2010

Working with Existing Tests

You know, it’s easy to forget the basics after you’ve been doing something for a while.  Such is the case with TDD – I don’t have to remind myself of the fundamental “red, green, refactor” mantra every time I write a new test, it’s just baked in.  When it’s time to write something new, the good habits kick in and I write a test.  After all, this is what the Driven part of Test Driven Development is about: we drive our development through the creation of tests.

The funny thing is, the goal of TDD isn’t to produce tests.  Tests are merely a by-product of the development of the code, and having tests that demonstrate that the code works is one of the benefits.  Once they’re written, we forget about them and move on – we only return to them if something unexpected broke.

Wait.  Why are they breaking?  Maybe we forgot something, somewhere.

The Safety Net Myth

One of the reasons that tests break is a common perception that once the code is written, we no longer need the tests to drive development.  “We’ve got tests, so let’s just see what breaks after I make these changes…”

This strategy works when you want to try “what-if” scenarios or simple, proper refactorings, but it falls flat for long-term coding sessions.  The value of the tests diminishes quickly the longer the coding session lasts.  Simply put, tests are not safety nets – if you go off making changes for a few days, you’re only going to find that the tests get in the way, as they don’t represent your changes and your code won’t compile.

This may seem rudimentary, but let’s go back and review the absolute basics of TDD methodology:

  1. Start by writing a failing test. (RED)
  2. Implement the code necessary to make that test pass. (GREEN)
  3. Remove any duplication and clean it up.  (REFACTOR)

It’s easy to forget the basics.  The very first step is to make sure we have a test that doesn’t pass before we do any work, and this is easily overlooked when we already have tests for that functionality.

Writing tests for new functionality

If you want to introduce new functionality to your code base, challenge your team to introduce those changes to the tests first.  This may seem idealistic to some, especially if it’s been a long time since the tests were written, or if no one on the team is familiar with the tests or their value.

Here’s a ridiculously simple tip:

  1. Locate the code you think may need to change for this feature.
  2. Introduce a fatal error into the code.  Maybe comment out the return value and return null, or throw an exception (see the sketch after this list).
  3. Run the tests.
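
For example, here's a sketch of a deliberate break for step 2; the method and its types are hypothetical:

public Invoice CalculateInvoice(Order order)
{
    // Deliberate break: every test that exercises this behavior should now fail.
    throw new NotImplementedException("Which tests care about this code?");

    // ... original implementation below, temporarily unreachable
}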

With luck, all the areas of your tests that are impacted by this code are broken.  Review these tests and ask yourself:

  • Does this test represent a valid requirement after I introduce my change?  If not, it’s safe to remove it.
  • How does this test relate to the change that I’m introducing?  Would my change alter the expected results of this test?  If yes, change the expected results. These tests should fail after you remove the fatal flaw you introduced moments ago.
  • Do any of these tests represent the new functionality I want to introduce?  If not, write that test now.

(If nothing breaks, you’ve got a different problem.  Do some research on what it would take to get this code under a test, and write tests for new functionality.)

Conclusion

The duct tape programmer will argue that you can’t make an omelette without breaking some eggs, which is true – we should have the courage to stand up and fix things that are wrong.  But I’d argue that you must do your homework first - if you don’t check for other ingredients, you’re just making scrambled eggs. 

In my experience, long term refactorings that don’t leverage the tests are a recipe for test-abandonment; your tests and code should always be moments from being able to compile.  The best way to keep the tests valid is to remember the basics – they should be broken before you start introducing changes.


Tuesday, October 05, 2010

Writing Easier to Understand Tests

For certain, the long-term success of any project that leverages tests hinges on tests that are easy to understand and provide value.

For me, readability is the gateway to value. I want to be able to open the test and, BOOM! Here’s the subject, here are the dependencies, this is what I’m doing, and here’s what I expect it to do. If I can’t figure that out within a few seconds, I start to question the value of the test. And that’s exactly what happened to me earlier this week.

The test I was looking at had some issues, but the developer who wrote it had their heart in the right place and made attempts to keep it relatively straight-forward. It was using a Context-Specification style of test and Moq to mock out the physical dependencies, but I got tied up in the mechanics of the test. The trouble I was having was determining which Mocks were part of the test versus Mocks that were there to support related dependencies.

Below is an example of a similar test and the steps I took to clean it up. Along the way I found something interesting, and I hope you do, too.

Original (Cruft Flavor)

Here’s a sample of the test. For clarity’s sake, I only want to illustrate the initial setup of the test, so I’ve omitted the actual test part. Note: I’m using a flavor of context specification that I’ve blogged about before; if the syntax seems strange, you may want to read up.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected Mock<IDataProvider> _mockDataProvider;
   protected Mock<IUserContext> _mockUserContext;
   protected IDataProvider _dataProvider;
   protected IUserContext _userContext;

   public override void Context()
   {
       _mockDataProvider = new Mock<IDataProvider>();
       _mockUserContext = new Mock<IUserContext>();

       _userContext = _mockUserContext.Object;
       _dataProvider = _mockDataProvider.Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(_dataProvider,_userContext);
   }

   // public class SubContext : TemplateResolverSpecs
   // etc
}

This is a fairly simple example, and certainly those familiar with Moq’s syntax and general dependency injection patterns won’t have too much difficulty understanding what’s going on here. But you have to admit that, while this is a trivial example, there’s a lot of code here for what’s needed – and you had to read all of it.

The Rewrite

When I started to re-write this test, my motivation was to sub-class the test fixture to create different contexts -- maybe I would want to create a context that used Mocks, and another that used real dependencies. I started to debate whether it would be wise to put the Mocks in a subclass or in the base when it occurred to me why the test was confusing in the first place: the Mocks are an implementation detail that gets in the way of understanding the dependencies of the subject. The Mocks aren’t important at all -- it’s the dependencies that matter!

So, here’s the same setup with the Mocks moved out of the way, only referenced in the initialization of the test’s Context.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected IDataProvider DataProvider;
   protected IUserContext UserContext;

   public override void Context()
   {
       DataProvider = new Mock<IDataProvider>().Object;
       UserContext = new Mock<IUserContext>().Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(DataProvider, UserContext);
   }
}

Much better, don’t you think? Note, I’ve also removed the underscore and changed the case on my fields because they’re protected and that goes a long way to improve readability, too.

Where’d my Mock go?

So you’re probably thinking “that’s stupid great Bryan, but I was actually using those mocks” – and that’s a valid observation, but the best part is you don’t really need them anymore. Moq has a cool feature that lets you obtain the Mock wrapper from the mocked object anytime you want, so you only need the mock when it’s time to use it.

Simply use the static Get method on the Mock class to obtain a reference to your mock:

Mock.Get(DataProvider)
   .Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
   .Returns( new TemplateRecord() );

For contrast sake, here’s what the original would have looked like:

_mockDataProvider.Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
                .Returns( new TemplateRecord() );

They’re basically the same, but the difference is I don’t have to remember the name of the variable for the mock anymore. And as an added bonus, our Mock.Setup calls will all line up with the same indentation, regardless of the length of the dependency’s variable name.

Conclusion

While the above is just an example of how tests can be trimmed down for added readability, my hope is that this readability influences developers to declare their dependencies up front rather than weighing themselves and their tests down with the mechanics of the tests. If you find yourself suddenly requiring a Mock in the middle of a test, or creating Mocks for items that aren’t immediate dependencies, it should serve as a red flag: you might not be testing the subject in isolation, and you may want to step back and re-think things a bit.
