Monday, November 28, 2011

Fixing Parallel Test Execution in Visual Studio 2010

As the number of tests in my project grows, so does the length of my continuous integration build. Fortunately, the new parallel test execution feature in Visual Studio 2010 allows us to trim down the amount of time consumed by our unit tests. If your unit tests meet the criteria for thread safety, you can configure them to run in parallel simply by adding the following to your test run configuration:

<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="Local" id="5082845d-c149-4ade-a9f5-5ff568d7ae62" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Description>These are default test settings for a local test run.</Description>
  <Deployment enabled="false" />
  <Execution parallelTestCount="0">
    <TestTypeSpecific />
    <AgentRule name="Execution Agents">
    </AgentRule>
  </Execution>
</TestSettings>

The ideal setting of “0” implies that the test runner will automatically figure out the number of concurrent tests to execute based on the number of processors on the local machine. Based on this, a single-core CPU will run 1 test at a time, a dual-core CPU can run 2 and a quad-core CPU can run 4. Technically, a quad-core hyper-threaded machine has 8 logical processors, but when parallelTestCount is set to zero the test run on that machine fails instantly:

Test run is aborting on '<machine-name>', number of hung tests exceeds maximum allowable '5'.

So what gives?

Well, rooting through the disassembled source code for the test runner, we learn that the number of tests that can execute simultaneously interacts with the maximum number of tests that can hang before the entire test run is considered to be in a failed state. Unfortunately, the maximum number of tests that can hang has been hardcoded to 5. Effectively, when the 6th test begins to execute, the test runner believes that the other 5 executing tests are in a failed state, so it aborts everything. Maybe the team writing this feature picked “5” as an arbitrary number, or legitimately believed there wouldn’t be more than 4 CPUs before the product shipped, or simply didn’t make the connection between the setting and the possible hardware. I do sympathize with the mistake: the developers wanted the number to be low because a higher number could add several minutes to a build if the tests really were in a non-responsive state.

The Connect issue lists this feature as fixed, although there are no posted workarounds and there’s a lot of feedback that the feature doesn’t work on high-end machines even with the latest service pack. But it is fixed; it’s just that no one knows about it.

Simply add the following to your registry (you will likely have to create the key) and configure the maximum based on your CPU. I’m showing the default value of 5, but I figure the number of CPUs + 1 is probably right.

Windows 32 bit:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\10.0\EnterpriseTools\QualityTools\Agent]
"MaximumAllowedHangs"="5"
Windows 64 bit:
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0\EnterpriseTools\QualityTools\Agent]
"MaximumAllowedHangs"="5" 

Note: although you can set the parallelTestCount setting to anything you want, overall performance is constrained by the raw computing power of the CPU, so running more than 1 test per CPU creates contention between threads, which degrades performance. That said, I sometimes set parallelTestCount to 4 on my dual-core CPU to flush out possible concurrency issues in the code or tests.
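
If you’re not sure how many logical processors a given build machine reports, a quick console check is an easy way to pick sensible values for parallelTestCount and MaximumAllowedHangs. This is just a sketch: Environment.ProcessorCount returns the logical processor count, which should match what the test runner sees when parallelTestCount is “0”.

using System;

class ProcessorInfo
{
    static void Main()
    {
        // Logical processor count as seen by .NET.
        int logicalProcessors = Environment.ProcessorCount;

        Console.WriteLine("Logical processors: {0}", logicalProcessors);

        // The "number of CPUs + 1" rule of thumb from above.
        Console.WriteLine("Suggested MaximumAllowedHangs: {0}", logicalProcessors + 1);
    }
}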

Epilogue

So what’s with the Connect issue? Having worked on enterprise software, my guess is this: the defect was logged and subsequently fixed, the instructions were given to the tester and verified, but those instructions were never carried forward into the release notes or correlated back to the Connect issue. Ultimately, there’s probably a small handful of people at Microsoft who actually know this registry setting exists, fewer who understand why, and those that do either work on a different team or no longer work for the company. Software is hard: one small fissure and the whole thing seems to fall apart.

Something within the process is clearly missing. However, as a software craftsman and TDD advocate, I’m less concerned that the process didn’t capture the workaround than I am that the code quietly pulls settings from the registry – this is a magic-string hack that’s destined to get lost in the weeds. Why isn’t this number calculated from the number of processors? Or better, why not make MaximumAllowedHangs configurable from the test settings file so that it can be unit tested without tampering with the environment? How much more effort would that really take, assuming both solutions would need proper documentation and tests?

Hope this helps.

Thursday, November 24, 2011

iPhone to PC Adapter

Merry Happy Thanksgiving! I had some time on my hands so I decided to try something new.

Here’s a quick review of my iPhone headset to PC adapter that I bought a few weeks ago. Hopefully this video comes just in time for Christmas ideas and Black Friday shopping.

By the way, Thanksgiving was 5 weeks ago.

Tuesday, November 22, 2011

Static is Dead to Me

The more software I write with a test-first methodology, the more I struggle with the use of singletons and static classes. They’ve become a design smell, and I’ve grown to realize that if given enough care and thought towards a design most static dependencies aren’t needed. My current position is that most singletons are misplaced artefacts without a proper home, and static methods seem like an amateurish gateway to procedural programming.

Despite my obvious knee-jerk loathing for static, in truth there’s nothing really wrong with it -- it’s hard to build any real-world application without the use of some static methods. I continue to use static in my applications but its use is reserved for fundamental top-level application services. All told, there should only be a handful of classes that are accessible as static members.

From a test-driven development perspective, there are several strong arguments against the use of static:

  • Lack of Isolation. When a class uses a static method in another type, it becomes directly coupled to that implementation. From a testing perspective it becomes impossible to test the consuming class without satisfying the requirements of the static dependency. This increases the complexity and fragility of the tests as the implementation details of the static dependency leak into many tests.
  • Side Effects. Static methods allow us to define objects that maintain state that is global in nature. This global state represents a problem from a testing perspective because any state set up for a test fixture must be reset after the test completes. Failure to clean up the global state can corrupt the environment and lead to side effects in other tests that depend on this shared state.
  • Inability to run tests in parallel. A fundamental requirement for reliable tests is a predictable, well-known state before and after each test. If tests depend on global state that can be mutated by multiple tests simultaneously, then it is impossible to run more than one test at a time. Given the raw computing power of a hyper-threaded, multi-core machine, it seems a crime to design our code in a way that limits testing to a single core.
  • Hidden dependencies. Classes that pull dependencies from a static singleton or service locator create an API that lies to you. Tests for these classes become unnecessarily complex and increasingly brittle. (A short sketch of this problem follows the list.)
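
Here’s a small illustrative sketch of the first and last points (the ExchangeRates and OrderService classes are made up for this example): the coupling to the static class is invisible in OrderService’s API, and any test of OrderService also exercises ExchangeRates and whatever shared state it holds.

// Illustrative only: a static dependency with global state.
public static class ExchangeRates
{
    // Shared state that every consumer (and every test) touches.
    public static decimal UsdRate = 1.0m;

    public static decimal ToUsd(decimal amount)
    {
        return amount * UsdRate;
    }
}

public class OrderService
{
    public decimal TotalInUsd(decimal total)
    {
        // The coupling to ExchangeRates is invisible to callers of this method.
        return ExchangeRates.ToUsd(total);
    }
}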

An Alternative to Static

Rather than designing objects to be global services that are accessed through static methods, I design them to be regular objects first. There are no static methods or fields. Classes are designed to be thread-safe, but make no assumptions about their lifetime.

This small change means that I expect all interaction to be with an instance of my object rather than through a type-specific static member. This suggests two problems: how will consumers of my class obtain a reference to my object, and how do I ensure that all consumers use the same object?

Obtaining a Reference

Not surprisingly, the problem related to obtaining a reference to my object is easily solved using my favourite Inversion of Control technique, Constructor Injection. While there are many IoC patterns to choose from, I prefer constructor injection for the following reasons:

  • Consuming classes do not have to depend on a specific framework to obtain a reference.
  • Consuming classes are explicit about their required dependencies. This fosters a consistent and meaningful API where objects are assembled in a predictable and organized manner rather than randomly consumed.
  • Consuming classes don’t have to worry about which instance they’ve received or how long it lives. This solves many testing problems, as concurrent tests can work with different objects.

The difference between accessing my object through a static member versus an object instance is very subtle, but the main distinction is that using the object reference requires some forethought as the dependency must be explicitly defined as a member of the consuming class.
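
Continuing the illustrative example from earlier, a minimal sketch of the instance-based version looks like this: the dependency is declared up front, and each test can supply its own fake IExchangeRates, so there is no shared state to reset and tests can run in parallel.

public interface IExchangeRates
{
    decimal ToUsd(decimal amount);
}

public class OrderService
{
    private readonly IExchangeRates _rates;

    // The dependency is explicit: you cannot construct an OrderService
    // without deciding which IExchangeRates instance it will use.
    public OrderService(IExchangeRates rates)
    {
        _rates = rates;
    }

    public decimal TotalInUsd(decimal total)
    {
        return _rates.ToUsd(total);
    }
}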

Obtaining the Same Reference

By forgoing the use of static, we’ve removed the language feature that would simplify ensuring that only a single instance of our object is consumed. Without static we need to solve this problem through the structure of our code or through features of our application architecture. Without a doubt it’s harder, but I consider the well-structured and tested code worth it (so don’t give up).

Eliminating Singletons through Code Structure:

My original position is that most singletons are simply misplaced artefacts. By this I mean static is used for its convenience rather than to expose the class as a global application service. In these situations it’s far more likely that the static class provides a service that is used in only one area of the application graph. I’d argue that with some analysis the abstraction or related classes could be rearranged to create and host the service as an instance, thereby negating the need for a singleton.

This approach typically isn’t easy because the analysis requires you to understand the lifetime and relationship of all objects in the graph. For small applications with emergent design, the effort is attainable and extremely rewarding when all the pieces fit together nicely. Larger applications may require a few attempts to get it right. Regardless of application size, sticking with an inverted-dependencies approach will make the problem obvious when it occurs.

An inversion of control container can lessen the pain.

Eliminating Singletons through Architecture:

Perhaps my favourite mechanism for eliminating singletons is to use an Inversion of Control container and configure it with the responsibility of maintaining the object lifetime.

This example shows programmatic registration of a concrete type as a singleton.

private void InitializeGlobalServices(IUnityContainer container)
{
   // configure the container to cache a single instance of the service
   // the first time it is used.
   container.RegisterType<MyServiceProvider>(new ContainerControlledLifetimeManager());
}

The real advantage here is that any object can be given singleton semantics without rewriting its code. As a further refinement, we can also register the type against an interface:

private void InitializeGlobalService(IUnityContainer container)
{
   container.RegisterType<IServiceProvider, MyServiceProvider>(
       new ContainerControlledLifetimeManager());
}
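
Consumers then resolve the interface (directly, or more commonly by declaring it as a constructor parameter) and the container hands back the same cached instance every time. A quick sketch of the behaviour:

// Both calls return the same instance because of the
// ContainerControlledLifetimeManager registration above.
var first = container.Resolve<IServiceProvider>();
var second = container.Resolve<IServiceProvider>();

System.Diagnostics.Debug.Assert(ReferenceEquals(first, second));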

Conclusion

Somewhere in my career I picked up the design philosophy that “objects should not have a top”, meaning that they should be open-ended in order to remix them into different applications. Ultimately, the “top” is the main entry point of the application, which is responsible for assembling the objects into the running application.

Dependency Injection fits nicely into this philosophy and in my view is the principal delivery mechanism for loosely coupled and testable implementations. Static, however, works against this in every regard: it couples us to implementations and limits our ability to test.

The clear winner is dependency injection backed by the power of an inversion of control container that can do the heavy lifting for you. As per my previous post, if you limit usage of the container to the top-level components, life is simple.

Happy coding.


Monday, October 24, 2011

Guided by Tests–Wrap Up

This post is the ninth and last in a series about a group TDD experiment to build an application in 7 days using only tests. Read the beginning here.

This last post wraps up the series by looking back at some of the metrics we can collect from our code and tests. There’s some interesting data about the experiment, as well as feedback on the design.

We’ll use three different data sources for our application metrics:

  • Visual Studio Code Analysis
  • MSTest Code Coverage
  • NDepend Code Analysis

Visual Studio Code Analysis

The code analysis features of Visual Studio 2010 provide a convenient view of some common static analysis metrics. Note that this feature only ships with the Premium and Ultimate versions. Here’s a quick screen capture that shows a high level view of the metrics for our project.

Tip: When reading the graph, keep an eye on the individual class values, not the roll-up namespace values.

[Image: Visual Studio code metrics results for the project]

Here’s a breakdown of what some of these metrics mean and what we can learn from our experiment.

Maintainability Index

I like to think of the Maintainability Index as a high level metric that summarizes how much trouble a class is going to give you over time. Higher numbers are better, and problems usually start below the 20-30 range. The formula for the maintainability index is actually quite complex, but looking at the above data you can see how the other metrics drive the index down.

Our GraphBuilder is the lowest in our project, coming in at an index of 69. This is largely influenced by the density of operations and complexity relative to lines of code – our GraphBuilder is responsible for constructing the graph from the conditional logic of the model. The maintainability index is interesting, but I don’t put much stock in it alone.

Lines of Code

Lines of Code counts logical lines of code, which means code lines excluding whitespace and stylistic formatting. Some tools, like NDepend, record other metrics for lines of code, such as the number of IL instructions per line. Visual Studio’s Lines of Code metric is simple and straightforward.

There are a few interesting observations for our code base.

First, the number of lines of code per class is quite low. Even the NDependStreamParser, which tops the chart at 22 lines, is remarkably small considering that it reads data from several different XML elements. The presence of many small classes suggests that classes are designed to do one thing well.

Secondly, there are more lines of code in our test project than production code. Some may point to this as evidence that unit testing introduces more effort than writing the actual code – I attribute the additional code to the following:

  • We created hand-rolled mocks and utility classes to generate input. These are not present in the production code.
  • Testing code is more involved than writing it, as there are many different paths the code might take. There should be more code here.
  • We didn’t write the code and then the tests; we wrote them at the same time.
  • Our tests ensured that we only wrote code needed to make the tests pass. This allowed us to aggressively remove all duplication in the production code. Did I mention the largest and most complicated class is 22 lines long?

Cyclomatic Complexity

Cyclomatic Complexity, also known as Conditional Complexity, represents the number of paths of execution a method can have – so classes that have a switch or multiple if statements will have higher complexity than those that do not. The metric is normally applied at the method level, not the class, but if you look at the graph above, more than half of the classes average below 5 and the other half are less than 12. Best practices suggest that Cyclomatic Complexity should be less than 15-20 per method. So we’re good.

Although the graph above doesn’t display our 45 methods, the cyclomatic complexity for most methods is 1-2. The only exceptions are our NDependStreamParser and GraphBuilder, which have methods with complexity values of 6 and 5 respectively.

I see cyclomatic complexity as a rough measure of how many tests a class needs.
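
For example (illustrative code, not from the project), this method has a cyclomatic complexity of 3 (two if statements plus the fall-through path), which suggests at least three tests, one per path:

public string ClassifyMethodSize(int linesOfCode)
{
    if (linesOfCode > 50)
        return "Needs refactoring";

    if (linesOfCode > 20)
        return "Keep an eye on it";

    return "Fine";
}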

Depth of Inheritance

The “depth of inheritance” metric refers to the number of base classes involved in the inheritance of a class. Best practices aim to keep this number as low as possible since each level in the inheritance hierarchy represents a dependency that can influence or break implementers.

Our graph shows very low inheritance depth, which supports our design philosophy of using composition and dependency inversion instead of inheritance. There is one red flag in the graph though: our AssemblyGraphLayout has an inheritance depth of 9, a consequence of extending a class from the GraphSharp library, and it highlights possible brittleness surrounding that library.

Class Coupling

The class coupling metric is a very interesting metric because it shows us how many classes our object consumes or creates. Although we don’t get much visibility into the coupling (NDepend can help us here) it suggests that classes with a higher coupling are much more sensitive to changes. Our GraphBuilder has a Class Coupling of 11, including several types from the System namespace (IEnumerable<T>, Dictionary, KeyValuePair) but also has knowledge of our Graph and model data.

Classes with high coupling, many lines of code and high cyclomatic complexity are highly sensitive to change, which explains why the GraphBuilder has the lowest Maintainability Index of the bunch.

Code Coverage

Code coverage is a metric that shows which execution paths within our code base are covered by unit tests. While code coverage can’t vouch for the quality of the tests or production code, it can indicate the strength of the testing strategy.

Under the rules of our experiment, there should be 100% code coverage because we’ve mandated that no code is written without a test. We have 93.75% coverage, which has the following breakdown:

[Image: code coverage breakdown for DependencyViewer]

Interestingly enough, the three areas with no code coverage are the key obstacles we identified early in the experiment. Here are the snippets of code that have no coverage:

Application Start-up Routine

protected override void OnStartup(StartupEventArgs e)
{
    var shell = new Shell();
    shell.DataContext = new MainViewModelFactory().Create();

    shell.Show();
}

Launching the File Dialog

internal virtual void ShowDialog(OpenFileDialog dialog)
{
    dialog.ShowDialog();
}

Code behind for Shell

public Shell()
{
    InitializeComponent();
}

We’ve designed our solution to limit the amount of “untestable” code, so these lines of code are expected. From this we can establish that our testing strategy has three weaknesses, two of which are covered by launching the application. If we wanted to write automation for testing the user-interface, these would be the areas we’d want early feedback from.

NDepend Analysis

NDepend is a static code analysis tool that provides more detailed information than the standard Visual Studio tools. The product has several different pricing levels including open-source and evaluation licenses and has many great visualizations and features that can help you learn more about your code. While there are many visuals that I could present here, I’m listing two interesting diagrams.

Dependency Graph:

This graph shows dependencies between namespaces. I’ve configured this graph to show two things:

  • Box size represents Afferent Coupling, where larger boxes are used by more classes.
  • Line size represents the number of methods between the dependent components.

[Image: dependency graph between namespaces]

Dependency Matrix:

A slightly different view of the same information is the Dependency Matrix. This is the graph represented in a cross-tabular format.

[Image: dependency matrix between namespaces]

Both of these views help us better visualize the Class Coupling metric that Visual Studio pointed out earlier, and the information they provide is quite revealing. Both diagrams show that the Model namespace is used by the ViewModels and Controls namespaces. This represents a refactoring or restructuring problem, as these layers really should not be aware of the model: Views shouldn’t have details about the ViewModel; the Service layer should only know about the Model and ViewModels; Controls should know about ViewModel data if needed; and ViewModels should represent view abstractions and thus should not have Model knowledge.

The violations have occurred because we established our AssemblyGraph and related items as part of the model.  This was a concept we wrestled with at the beginning of the exercise, and now the NDepend graph helps visualize the problem more concretely. As a result of this choice, we’re left with the following violations:

  • The control required to layout our graph is a GraphLayout that uses our Model objects.
  • The MainViewModel references the Graph as a property which is bound to the View.

The diagram shows a very thin line, but this model-to-view-model state problem has become a common theme in recent projects. Maybe we need a state container to hold onto the Model objects, or maybe the Graph should be composed of ViewModel objects instead of Model data. It’s worth consideration, and maybe I’ll revisit this problem in an upcoming post.

Conclusion

To wrap up the series, let’s look back on how we got here:

I hope you have enjoyed the series and found something to take away. I challenge you to find a similar problem and champion its development within your development group.

Happy Coding.


Tuesday, October 11, 2011

Guided by Tests–Day Seven

This post is the eighth in a series about a group TDD experiment to build an application in 7 days using only tests. Read the beginning here.

Today is the day we test the untestable. Early in the experiment we hit a small roadblock when our code needed to interact with the user-interface. Since unit-testing the user-interface wasn’t something we wanted to pursue, we hid the OpenFileDialog behind a wrapper with the expectation that we would return to build out that component with tests later. The session for this day would prove to be an interesting challenge.

The Challenges of Physical Dependencies

Although using a wrapper to shield our code from difficult-to-test dependencies is a common and well-accepted technique, it would be irresponsible not to test the internal implementation details of that wrapper. Testing against physical dependencies is hard because they introduce a massive amount of overhead, but if we can isolate the logic from the physical dependency we can use unit tests to get 80-90% of the way there. To get the remaining part, you either test manually or write a set of integration or functional tests.

The technique outlined below can be used for testing user-interface components like this one, email components and, in a pinch, network-related services.

Testing our Wrapper

Time to write some tests for our IFileDialog. I have some good news and bad news.

The good news is that Microsoft provides a common OpenFileDialog as part of WPF, meaning I don’t have to roll my own and I can achieve a common look and feel with other applications with little effort. This also means we can assume that the OpenFileDialog is defect free, so we don’t have to write unit tests for it.

The bad news is that I use this component so infrequently that I forget how to use it.

So instead of writing a small utility application to play with the component, I write a test that shows me exactly how the component works:

[TestMethod]
public void WhenSelectAFileFromTheDialog_AndUserSelectsAFile_ShouldReturnFileName()
{
    var dialog = new OpenFileDialog();
    dialog.ShowDialog(); // this will show the dialog

    Assert.IsNotNull(dialog.FileName); 
}

When I run this test, the file dialog is displayed. If I don’t select a file, the test fails. Now that we know how it works, we can rewrite our test and move this code into a concrete implementation.

Unit Test:

[TestMethod]
public void WhenSelectingAFile_AndUserMakesAValidSelection_ShouldReturnFileName()
{
    var subject = new FileDialog();
    string fileName = subject.SelectFile();
    Assert.IsNotNull(fileName);
}

Production Code:

public class FileDialog : IFileDialog
{
    public string SelectFile()
    {
        var dialog = new OpenFileDialog();
        dialog.ShowDialog();

        return dialog.FileName;
    }
}

The implementation is functionally correct, but when I run the test I have to select a file in order to have the test pass. This is not ideal. We need a means to intercept the dialog and simulate the user selecting a file. Otherwise, someone will have to babysit the build and manually click file dialog prompts until the early morning hours.

Partial Mocks To the Rescue

Instead of isolating the instance of our OpenFileDialog with a mock implementation, we intercept the activity and allow ourselves to supply a different implementation for our test. The following shows a simple change to the code to make this possible.

public class FileDialog : IFileDialog
{
    public string SelectFile()
    {
        var dialog = new OpenFileDialog();

        ShowDialog(dialog);

        return dialog.FileName;
    }

    internal virtual void ShowDialog(OpenFileDialog dialog)
    {
        dialog.ShowDialog();
    }
}

This next part is a bit weird. In the last several posts, we’ve used Moq to replace our dependencies with fake stand-in implementations. For this post, we’re going to mock the subject of the test, and fake out specific methods on the subject. Go back and re-read that. You can stop re-reading that now.

As an aside: I often don’t like showing this technique because I’ve seen it get abused. I’ve seen developers use it to avoid breaking classes down into smaller responsibilities; they fill their classes with virtual methods and then stub out huge chunks of the subject. This feels like shoddy craftsmanship and doesn’t sit well with me – granted, it works, but it leads to problems. First, the areas being subverted never get tested. Secondly, it’s too easy for developers to forget what they’re doing and start writing tests for the mocking framework instead of the subject’s functionality. So use it with care. In this example, I’m subverting one line of a well-tested third-party component in order to avoid human involvement in the test.

In order to intercept the ShowDialog method and replace it with our own implementation, we can use Moq’s Callback feature. I’ve written about Moq’s support for callbacks before, but in a nutshell Moq can intercept the original method and its inbound arguments for use within your test.

Our test now looks like this:

[TestMethod]
public void WhenSelectingAFile_AndUserMakesAValidSelection_ShouldReturnFileName()
{
    // setup a partial mock for our subject
    var mock = new Mock<FileDialog>();
    FileDialog subject = mock.Object;

    // The ShowDialog method in our FileDialog is virtual, so we can set up
    //    an alternate behavior when it's called.
    // We configure the ShowDialog method to call the SelectAFile method
    //    with the original inbound argument
    mock.Setup( partialMock => partialMock.ShowDialog(It.IsAny<OpenFileDialog>()) )
        .Callback<OpenFileDialog>( SelectAFile );

    string fileName = subject.SelectFile();
    Assert.IsNotNull(fileName);
}

// we alter the original inbound argument to simulate
//    the user selecting a file
private void SelectAFile(OpenFileDialog dialog)
{
    dialog.FileName = "Foo";
}

Now when our test runs, the FileDialog returns “Foo” without launching a popup, and we can write tests for a few extra scenarios:

[TestMethod]
public void WhenSelectingAFile_AndTheUserCancelsTheFileDialog_NoFileNameShouldBeReturned()
{
    // we're mocking out the call to show the dialog,
    // so without any setup on the mock, the dialog will not return a value.

    string fileName = Subject.SelectFile();

    Assert.IsNull(fileName);
}

[TestMethod]
public void WhenSelectingAFile_BeforeUserSelectsAFile_EnsureDefaultDirectoryIsApplicationRootFolder()
{
    // ARRANGE:
    string expectedDirectory = Environment.CurrentDirectory;

    Mock.Get(Subject)
        .Setup(d => d.ShowDialog(It.IsAny<OpenFileDialog>()))
        .Callback<OpenFileDialog>(
            win32Dialog =>
            {
                // ASSERT: validate that the directory is set when the dialog is shown
                Assert.AreEqual(expectedDirectory, win32Dialog.InitialDirectory);
            });

    // ACT: invoke showing the dialog
    Subject.SelectFile();
}

Next: Review some of the application metrics for our experiment in the Guided by Tests – Wrap Up.


Tuesday, October 04, 2011

Guided by Tests–Day Six

This post is the seventh in a series about a group TDD experiment to build an application in 7 days using only tests. Read the beginning here.

Today is the day where we put it all together, wire up a user-interface and cross our fingers when we run the application. I joked with the team that “today is the day we find out what tests we missed.” Bugs are simply tests we didn’t think about.

Outstanding Design Concerns

At this point we have all the parts for our primary use case. And although we’ve designed the parts to be small with simple responsibilities, we’ve been vague about the structure of the overall application. Before we can show a user-interface, we’ve got a few remaining design choices.

Associating the Command to the View

In the last post we looked at the NewProjectCommand. After reviewing our choices about the relationship between the Command and the ViewModel, we decided that the most pragmatic choice was to have the Command hold a reference to the ViewModel, though we didn’t define where either object fit into the overall application. The next question presented to the team was whether the command should remain independent of the ViewModel and reside in a menu control or other top-level ViewModel, or whether we should add a Command property to the ViewModel.

The team decided to take the convenience route and expose our NewProjectCommand as a property on our MainViewModel. Ultimately, this decision means that our MainViewModel becomes the top-level ViewModel that the View will bind to. This decision is largely view-centric and should be easy to change in the future if we need to reorganize the structure of the application.

With this decision resolved, we’ve got a better picture of how the user-interface and logical model will work. Before we start working on the xaml, we need to figure out how all these pieces come together.

Assembling the MainViewModel

The parts of our application have been designed with the Dependency Inversion Principle in mind – our objects don’t have a “top” and expect that the caller will pass in the proper configuration. Eventually, something must take the responsibility to configure the object graph. The next question is: where should this happen?

Someone suggested stitching the components together in the application start-up routine. While this is logically where this activity should occur, it doesn’t make sense to burden the start-up routine with these implementation details. If we did, we’d likely have a monster start-up routine that would change constantly. Plus, we’d have little means to test it without invoking the user-interface.

Achieving 100% code coverage, though a noble effort, is not always realistic. In most applications it’s likely that 10% of the codebase won’t have tests because those areas border on the extreme edges of the application. Writing unit tests for extreme edges is hard (“unit” tests for user-interface components are especially difficult) and it helps to have a balanced testing strategy that includes unit, system integration and functional UI automation to cover these areas.

Rather than including these details in the application start-up routine, we’ll move as much code as possible into testable components. The goal is to make the start-up routine so simple that it won’t require changes. This will minimize the untestable areas of the application.

To accomplish this, we decided to create a Factory for a MainViewModel.

Realizations for the MainViewModel Factory

The primary responsibility of our factory is to assemble our ViewModel with a fully configured NewProjectCommand. The test looks something like this:

[TestMethod]
public void CreatingAViewModel_WithDefaults_ShouldSetupViewModel()
{
    var factory = new MainViewModelFactory();
    var viewModel = factory.Create();
    
    Assert.IsNotNull(viewModel, "ViewModel was not created.");
    Assert.IsNotNull(viewModel.NewProject, "Command was not wired up.");
}

In order to make the test pass, we’ll need to wire up all the components of our solution within the factory. The argument checking we’ve introduced in the constructors of our command and loader guarantees that. Ultimately, the factory looks like this:

public class MainViewModelFactory
{
    public MainViewModel Create()
    {
        var vm = new MainViewModel();

        IFileDialog dialog = new MockFileDialog();
        IGraphDataParser graphParser = new NDependStreamParser();
        var graphBuilder = new GraphBuilder();

        var loader = new ProjectLoader(dialog, graphParser);

        vm.NewProject = new NewProjectCommand(vm, loader, graphBuilder);

        return vm;
    }
}

With that all in place, the application start-up routine is greatly simplified. We could probably optimize it further, but we’re anxious to see our application running today, so this will do:

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        var shell = new Shell();
        shell.DataContext = new MainViewModelFactory().Create();
        shell.Show();
    }
}

Running the App for the first time

Ready for our first bug? We borrowed some XAML from Sacha’s post, and after we discovered that we needed a one-line user control, we wired up a crude Button to our NewProject command. Then we launched the app, crossed our fingers, and upon clicking the “New Project” button… nothing happened!

We set a breakpoint and then walked through the command’s execution. Everything was perfect, except we had forgotten one small detail: we forgot to notify the user-interface that we had produced a graph.

As a teaching opportunity, we wrote a unit test for our first bug:

[TestClass]
public class MainViewModelTests
{
    [TestMethod]
    public void WhenGraphPropertyChanges_ShouldNotifyTheUI()
    {
        string propertyName = null;

        var viewModel = new MainViewModel();
        viewModel.PropertyChanged += (sender,e) => 
            propertyName = e.PropertyName;

        viewModel.Graph = new AssemblyGraph();

        Assert.AreEqual("Graph", propertyName,
            "The user interface wasn't notified.");
    }
}
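
The fix is a one-line change: raise PropertyChanged from the Graph setter. For reference, a minimal sketch of a MainViewModel that satisfies this test might look like the following (the Graph and NewProject property names come from the series; the rest is assumed):

// requires System.ComponentModel and System.Windows.Input
public class MainViewModel : INotifyPropertyChanged
{
    private AssemblyGraph _graph;

    public ICommand NewProject { get; set; }

    public AssemblyGraph Graph
    {
        get { return _graph; }
        set
        {
            _graph = value;
            OnPropertyChanged("Graph");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}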

And on the second attempt (read: messing with XAML styles for a while) the app produces the following graph:

[Image: WPF component dependencies diagram produced by the application]

Next: Read about polishing off the File Dialog in Day Seven


Tuesday, September 20, 2011

Guided by Tests–Day Five

This post is the sixth in a series about a group TDD experiment to build an application in 5 days using only tests. Read the beginning here.

Today is the fifth day of our five-day experiment to write an application over lunch hours using TDD. Five days was a bit ambitious, as a lot of time was spent teaching concepts, so the team agreed to tack on a few extra days just to see it all come together. So, my bad.

By the fifth day, we had all the necessary pieces to construct a graph from our NDepend XML file. Today we focused our attention on how these pieces would interact.

Next Steps

If we look back at our original flow diagram (shown below), upon receiving input from the user we need to orchestrate between loading model data from a file and converting it into a Graph object so that it can be put into the UI. As we have the loading and conversion parts in place, our focus is how to receive input, perform the work and update the UI.

[Image: logical flow diagram for the use case]

Within WPF and the MVVM architecture, the primary mechanism to handle input from the user is a concept known as Commanding, and commands are implemented by the ICommand interface:

namespace System.Windows.Input
{
    public interface ICommand
    {
        bool CanExecute( object parameter );
        void Execute( object parameter );

        event EventHandler CanExecuteChanged;
    }
}

While it’s clear that we’ll use a custom command to perform our orchestration logic, the question that remains is how we should implement it. There are several options available; two popular MVVM choices are the DelegateCommand (often called RelayCommand) and dedicated command classes that implement ICommand directly.

The elegance of the DelegateCommand is that the user supplies delegates for the Execute and CanExecute methods, and typically these delegates live within the body of the ViewModel class. When given the choice I prefer command classes for application-level operations, as they align well with the Single Responsibility Principle and our separation-of-concerns approach. (I tend to use the DelegateCommand option for view-centric operations such as toggling visibility of view elements, etc.)
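
For contrast, here’s a rough sketch of the DelegateCommand style. The DelegateCommand below is a stripped-down, illustrative implementation (not the one from Prism or any other library), but the shape is the same: the Execute and CanExecute bodies live inside the ViewModel as delegates.

// A stripped-down DelegateCommand, for illustration only.
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public DelegateCommand(Action execute, Func<bool> canExecute)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter) { return _canExecute(); }
    public void Execute(object parameter) { _execute(); }
    public event EventHandler CanExecuteChanged;
}

public class ExampleViewModel
{
    public ICommand NewProject { get; private set; }

    public ExampleViewModel()
    {
        // The orchestration logic is supplied as delegates rather than
        // living in a dedicated command class.
        NewProject = new DelegateCommand(LoadNewProject, () => true);
    }

    private void LoadNewProject()
    {
        // application-level work would go here
    }
}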

Writing a Test for an ICommand

As a team, we dug into writing the test for our NewProjectCommand. Assuming we’d use the ICommand interface, we stubbed in the parts we knew before we hit our first roadblock:

[TestClass]
public class NewProjectCommandTests
{
    [TestMethod]
    public void WhenExecutingTheCommand_DefaultScenario_ShouldShowGraphInUI()
    {
        var command = new NewProjectCommand();
        
        command.Execute( null );

        Assert.Fail(); // what should we assert?
    }
}

Two immediate concerns arise:

First, we have a testing concern. How can we assert that the command accomplished what it needed to do? The argument supplied to the Execute method typically originates from XAML binding syntax, which we won’t need, so it’s not likely that the command will take any parameters. Moreover, the Execute method doesn’t have a return value, so we won’t have any insight into the outcome of the method.

Our second concern is a design problem. It's clear that our command will need to associate graph-data to a ViewModel representing our user-interface, but how much information should the Command have about the ViewModel and how will the two communicate?

This is one of the parts I love about Test Driven Development. There is no separation between testing concerns and design problems because every test you write is a design choice.

Reviewing our Coupling Options

We have several options to establish the relationship between our Command and our ViewModel:

  • Accessing the ViewModel through WPF’s Application.Current;
  • Making our ViewModel a Singleton;
  • Create a globally available ServiceLocator that can locate the ViewModel for us;
  • Pass the ViewModel to the command through Constructor Injection
  • Have the Command and ViewModel communicate through an independent publisher/subscriber model

Here’s a further breakdown of those options…

Application.Current.MainWindow

The most readily available solution within WPF is to leverage the framework’s application model. You can access the top-level ViewModel with some awkward casting:

var viewModel = Application.Current.MainWindow.DataContext as MainViewModel;

I’m not a huge fan of this for many reasons:

  • Overhead: the test suddenly requires additional plumbing code to initialize the WPF Application with a MainWindow bound to the proper ViewModel. This isn’t difficult to do, but it adds unwarranted complexity.
  • Coupling: Any time we bind to a top-level property or object, we’re effectively limiting our ability to change that object. In this case, we’re assuming that the MainWindow will always be bound to this ViewModel; if this were to change, all downstream consumers would also need to change.
  • Shared State: By consuming a static property we are effectively making the state of the ViewModel shared by all tests. This adds some additional complexity to the tests to ensure that the shared state is properly reset to a neutral form. As a consequence, it’s impossible to run the tests in parallel.

ViewModel as Singleton / ServiceLocator

This approach is a slight improvement over accessing the DataContext of the current application’s MainWindow. It eliminates the concerns surrounding configuring the WPF Application, and we gain some type-safety as we shouldn’t have to do any casting to get our ViewModel.

Despite this, Singletons like the Application.Current variety are hidden dependencies that make it difficult to understand the scope and responsibilities of an object. I tend to avoid this approach for the same reasons listed above.

Constructor Injection

Rather than having the Command reach out to a static resource to obtain a reference to the ViewModel, we can use Constructor Injection to pass a reference into the Command so that it has all the resources it needs. (This is the approach we’ve been using thus far for our application, too.) This approach makes sense from an API perspective, as consumers of your code will be able to understand the Command’s dependencies when they try to instantiate it. The downside to this approach is the additional complexity needed to construct the command. (Hint: we’ll see this in the next post.)

This approach also eliminates the shared-state and parallelism problems, but it still couples the Command to the ViewModel. This might not be a problem if the relationship between the two objects remains fixed – for example, if the application were to adopt a multi-tabbed interface, the relationship between the command and the ViewModel would need to change.

Message-based Communication

The best form of loose coupling comes from message-based communication, where the ViewModel and the Command know absolutely nothing about each other and only communicate indirectly through an intermediary broker. In this implementation, the Command would orchestrate the construction of the Graph and then publish a message through the broker. The broker, in turn, would deliver the message to a receiver that would associate the graph with the ViewModel.

Such an implementation would allow both the ViewModel and the Command implementations to change independently as long as they both adhere to the message publish/subscribe contracts. I prefer this approach for large-scale applications, though it introduces a level of indirection that can be frustrating at times.

This approach would likely work well if we needed to support a multi-tabbed user interface.
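
To make the idea a little more concrete, here’s a rough sketch of what the command side might look like with a hypothetical message broker. The IMessageBroker and GraphCreated types below are illustrative only; they are not part of the project.

// Hypothetical broker contract and message; names are illustrative only.
public interface IMessageBroker
{
    void Publish<TMessage>(TMessage message);
    void Subscribe<TMessage>(Action<TMessage> handler);
}

public class GraphCreated
{
    public GraphCreated(AssemblyGraph graph) { Graph = graph; }
    public AssemblyGraph Graph { get; private set; }
}

// The command publishes a message; it holds no reference to the ViewModel.
public class NewProjectCommandWithMessaging
{
    private readonly IMessageBroker _broker;
    private readonly IProjectLoader _projectLoader;
    private readonly GraphBuilder _graphBuilder;

    public NewProjectCommandWithMessaging(
        IMessageBroker broker,
        IProjectLoader projectLoader,
        GraphBuilder graphBuilder)
    {
        _broker = broker;
        _projectLoader = projectLoader;
        _graphBuilder = graphBuilder;
    }

    public void Execute(object unused)
    {
        var model = _projectLoader.Load();
        if (model != null)
        {
            _broker.Publish(new GraphCreated(_graphBuilder.BuildGraph(model)));
        }
    }
}

// Elsewhere, the ViewModel (or a mediator that owns it) subscribes:
//     broker.Subscribe<GraphCreated>(message => viewModel.Graph = message.Graph);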

Back to the Tests

Given our time constraints and our immediate needs, the team decided constructor injection was the way to go for our test.

[TestClass]
public class NewProjectCommandTests
{
    [TestMethod]
    public void WhenExecutingTheCommand_DefaultScenario_ShouldShowGraphInUI()
    {
        var viewModel = new MainViewModel();
        var command = new NewProjectCommand(viewModel);
        
        command.Execute( null );

        Assert.IsNotNull( viewModel.Graph,
            "Graph was not displayed to the user.");
    }
}

To complete the test the team made the following realizations:

Tests are failing because Execute throws a NotImplementedException. Let’s commit a sin and get the test to pass.

Implementation:
public void Execute(object unused)
{
    // hack to get the test to pass
    _viewModel.Graph = new AssemblyGraph();
}
Our command will need a ProjectLoader. Following guidance from the previous day, we extracted an interface and quickly added it to the constructor.

Test Code:
var viewModel = new MainViewModel();
var projectLoader = new Mock<IProjectLoader>().Object;

var command = new NewProjectCommand(
                    viewModel,
                    projectLoader);

Implementation:
///<summary>Default Constructor</summary>
public NewProjectCommand(
        MainViewModel viewModel,
        IProjectLoader projectLoader)
{
    _viewModel = viewModel;
    _projectLoader = projectLoader;
}
We should only construct the graph if we get data from the loader.

(Fortunately, Moq will automatically return non-null for IEnumerable types, so our tests pass accidentally)
Implementation:
public void Execute(object unused)
{
    IEnumerable<ProjectAssembly> model
        = _projectLoader.Load();

    if (model != null)
    {
        _viewModel.Graph = new AssemblyGraph();
    }
}
We’ll need a GraphBuilder to convert our model data into a graph for our ViewModel. Following some obvious implementation, we’ll add the GraphBuilder to the constructor.

A few minor changes and our test passes.
Test Code:
var viewModel = new MainViewModel();
var projectLoader = new Mock<IProjectLoader>().Object;
var graphBuilder = new GraphBuilder();

var command = new NewProjectCommand(
                    viewModel,
                    projectLoader,
                    graphBuilder);

Implementation:
public void Execute(object unused)
{
    IEnumerable<ProjectAssembly> model
        = _projectLoader.Load();

    if (model != null)
    {
        _viewModel.Graph = 
        	_graphBuilder.BuildGraph( model );
    }
}
Building upon our findings from the previous days, we recognize our coupling to the GraphBuilder, and that we should probably write some tests demonstrating that the NewProjectCommand can handle failures from both the IProjectLoader and GraphBuilder dependencies. But rather than extract an interface for the GraphBuilder, we decide that we’d simply make the BuildGraph method virtual instead – just to show we can.

Only now, when we run the test, it fails. It seems our graph was not created.
Test Code:
var viewModel = new MainViewModel();
var projectLoader = new Mock<IProjectLoader>().Object;
var graphBuilder = new Mock<GraphBuilder>().Object;

var command = new NewProjectCommand(
                    viewModel,
                    projectLoader,
                    graphBuilder);

// ...

Finally, in order to get our test to pass, we need to configure the mock for our GraphBuilder to construct a Graph. The final test looks like this. Note that IsAnyModel is a handy shortcut for simplifying Moq’s matcher syntax.

[TestClass]
public class NewProjectCommandTests
{
    [TestMethod]
    public void WhenExecutingTheCommand_DefaultScenario_ShouldShowGraphInUI()
    {
        // ARRANGE: setup dependencies
        var viewModel = new MainViewModel();
        var projectLoader = new Mock<IProjectLoader>().Object;
        var graphBuilder = new Mock<GraphBuilder>().Object;

        // Initialize subject under test
        var command = new NewProjectCommand(
                            viewModel,
                            projectLoader,
                            graphBuilder);
        
        Mock.Get(graphBuilder)
            .Setup( g => g.BuildGraph( IsAnyModel() ))
            .Returns( new AssemblyGraph() );

        // ACT: Execute our Command
        command.Execute( null );

        // ASSERT: Verify that the command executed correctly
        Assert.IsNotNull( viewModel.Graph,
            "Graph was not displayed to the user.");
    }

    private IEnumerable<ProjectAssembly> IsAnyModel()
    {
        return It.IsAny<IEnumerable<ProjectAssembly>>();
    }
}

Of course, we’d need a few additional tests:

  • When the project loader or graph builder throw an exception.
  • When the project loader doesn’t load a project, the graph should not be changed
  • When the Command is created incorrectly, such as null arguments

Next: Day Six


Tuesday, September 13, 2011

Guided by Tests–Day Four

This post is the fifth in a series about a group TDD experiment to build an application in 5 days using only tests. Read the beginning here.

As previously mentioned during Day Three, we split the group into two teams so that one focused on the process to load a new project while the other focused on constructing a graph. This post will focus on the efforts of the team working through the tests and implementation of the GraphBuilder.

This team had a unique advantage over the other team in that they had a blog post that outlined how to use the Graph# framework. I advised the team that they could refer to the post, even download the example if needed, but all code introduced into the project had to follow the rules of the experiment: all code must be written to satisfy the requirements of a test.

A Change in Approach

The goals for this team were different too. We already had our data well defined and had a reasonable expectation of what the results should be. As such, we took a different approach to writing the tests. Up to this point, our process had involved writing one test at a time and only the code needed to satisfy that test. We wouldn’t identify other tests that we’d need to write until we felt the current test was complete.

For this team, we had a small brainstorming session and defined all the possible scenarios we would need to test up front.

I love this approach and tend to use it when working with my teams. I usually sit down with the developer and envision how the code would be used. From this discussion we stub out a series of failing tests (Assert.Fail) and after some high level guidance about what we need to build I leave them to implement the tests and code. The clear advantage to this approach is that I can step in for an over-the-shoulder code-review and can quickly get feedback on how things are going. When the developer says things are moving along I can simply challenge them to “prove it”. The developer is more than happy to show their progress with working tests, and the failing tests represent a great opportunity to determine if the developer has thought about how to finish them. Win/Win.

The test cases we identified for our graph builder:

  • When building a graph from an empty list, it should produce an empty graph
  • When building a graph from a single assembly, the graph should contain one vertex.
  • When building a graph with two independent assemblies, the graph should contain two vertices and there shouldn’t be any edges between them.
  • When building a graph with one assembly referencing another, the graph should contain two vertices and one edge
  • When building a graph where two assemblies have forward and backward relationships (the first item lists the second vertex as a dependency, the second item lists the first as a “referenced by”), the graph should contain unique edges between items.

By the time the team had begun to develop the third test, most of the dependent object model had been defined. The remaining tests represented implementation details. For example, to establish a relationship between assemblies we would need to store them in a lookup table. Whether this lookup table should reside within the GraphBuilder or be pushed lower into the Graph itself is an optimization that can be determined later if needed. The tests would not need to change to support this refactoring effort.
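
As an illustration, the first scenario in that list might read something like this (a sketch in the same style as the rest of the suite; it assumes the resulting graph type exposes a vertex count, as Graph#'s graph classes do):

[TestMethod]
public void WhenBuildingAGraph_FromAnEmptyList_ShouldProduceAnEmptyGraph()
{
    var builder = new GraphBuilder();

    var graph = builder.BuildGraph(new List<ProjectAssembly>());

    Assert.IsNotNull(graph, "A graph should always be returned.");
    Assert.AreEqual(0, graph.VertexCount, "An empty list should produce no vertices.");
}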

Interesting Finds

The session on the fourth day involved a review of the implementation and an opportunity to refactor both the tests and the code. One of the great realizations was the ability to reduce the verbosity of initializing the test data.

We started with a lot of duplication and overhead in the tests:

[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = new ProjectAssembly
                        {
                            FullName = "Assembly1"
                        };
    var assembly2 = new ProjectAssembly
                        {
                            FullName = "Assembly2"
                        };

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

We moved some of the initialization logic into a helper method, which improved readability:

[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = CreateProjectAssembly("Assembly1");
    var assembly2 = CreateProjectAssembly("Assembly2");

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

private ProjectAssembly CreateProjectAssembly(string name)
{
    return new ProjectAssembly()
            {
                FullName = name
            };
}

However, once we discovered that the assembly names weren’t important and just had to be unique, we optimized this further:

[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = CreateProjectAssembly();
    var assembly2 = CreateProjectAssembly();

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

private ProjectAssembly CreateProjectAssembly(string name = null)
{
    if (name == null)
        name = Guid.NewGuid().ToString();

    return new ProjectAssembly()
            {
                FullName = name
            };
}

If we really wanted to, we could optimize this further by pushing this initialization logic into the production code directly.

[TestMethod]
public void WhenConstructingAProjectAssembly_WithNoArguments_ShouldAutogenerateAFullName()
{
    var assembly = new ProjectAssembly();

    bool nameIsPresent = !String.IsNullOrEmpty(assembly.FullName);

    Assert.IsTrue( nameIsPresent,
        "Name was not automatically generated.");
}
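
The production change itself is small. Here’s a sketch of a ProjectAssembly that satisfies the test above, showing only the FullName property (the real class in the series has more members):

public class ProjectAssembly
{
    public ProjectAssembly()
    {
        // Auto-generate a unique name so callers (and tests) don't have to supply one.
        FullName = Guid.NewGuid().ToString();
    }

    public string FullName { get; set; }
}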

Continue Reading: Day Five


Friday, September 09, 2011

Guided by Tests–Day Three

This post is the fourth in a series about a group TDD experiment to build an application in 5 days using only tests. Read the beginning here.

By day three, our collective knowledge about what we were building was beginning to take shape. After reviewing the tests from the previous day, it was time for an interesting discussion: what’s next?

Determining Next Steps

In order to determine our next steps, the team turned to the logical flow diagram of our use case. We decomposed the logical flow into the following steps:

  1. Receive input from the UI, maybe a menu control or some other UI action.
  2. Prompt the user for a file, likely using a standard open file dialog.
  3. Take the response from the dialog and feed it into our stream parser.
  4. Take the output of the stream parser and build a graph
  5. Take the graph and update the UI, likely a ViewModel.

Following a Separation of Concerns approach, we want to design our solution so that each part has very little knowledge about the surrounding parts. It was decided that we could clearly separate building the graph from prompting the user. In my view, we know very little about the UI at this point, so we shouldn’t concern ourselves with how the UI initiates this activity. Instead, we can treat prompting the user for a file and orchestrating the interaction with our parser as a single concern.

It was time to split the group in two and start on different parts. Team one would focus on the code that would call the NDependStreamParser; Team two would focus on the code that consumed the list of ProjectAssembly items to produce a graph.

Note: Day Four was spent reviewing and finishing the code for team two. For the purposes of this post, I’m going to focus on the efforts of Team one.

The Next Test

The team decided that we should name this concern “NewProjectLoader”, as it would orchestrate the loading of our model. We knew that we’d be prompting for a file, so we named the test accordingly:

[TestClass]
public class NewProjectLoaderTests
{
    [TestMethod]
    public void WhenLoadingANewProject_WithAValidFile_ShouldLoadModel()
    {
        Assert.Fail();
    }
}

Within a few minutes the team quickly filled in the immediately visible details of the test.

Following the first example from the day before, the team filled in their assertions and auto-generated the parts they needed.

To make the tests pass, they hard-coded a response.
Test Code:
var loader = new NewProjectLoader();

IEnumerable<ProjectAssembly> model 
	= loader.Load();

Assert.IsNotNull(model,
    "Model was not loaded.");
Implementation:
public IEnumerable<ProjectAssembly> Load()
{
    return new List<ProjectAssembly>();
}
How should we prompt the user for a file? Hmmm.
 

Our Next Constraint

The team now needed to prompt the user to select a file. Fortunately, WPF provides the OpenFileDialog so we won’t have to roll our own dialog. Unfortunately, if we introduce it into our code we’ll be tightly coupled to the user-interface.

To isolate ourselves from this dependency, we need to introduce a small interface:

namespace DependencyViewer
{
    public interface IFileDialog
    {
        string SelectFile();
    }
}
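
Somewhere in the production code this interface eventually needs an implementation that delegates to the real dialog. We didn’t write it as part of this test, but a minimal sketch might look like this (the class name is my own):

using Microsoft.Win32;

namespace DependencyViewer
{
    public class OpenFileDialogAdapter : IFileDialog
    {
        public string SelectFile()
        {
            // Thin wrapper around WPF's OpenFileDialog; ShowDialog returns
            // bool?, and true means the user selected a file.
            var dialog = new OpenFileDialog();
            return dialog.ShowDialog() == true ? dialog.FileName : null;
        }
    }
}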

Through tests, we introduced these changes:

As before, each realization is shown with the code written for it:
We need to introduce our File Dialog to our Loader.

We decide our best option is to introduce the dependency through the constructor of the loader.

This creates a small compilation error that is quickly resolved.

Our test still passes.

Test Code:
IFileDialog dialog = null;
var loader = new NewProjectLoader(dialog);
IEnumerable<ProjectAssembly> model 
	= loader.Load();

Assert.IsNotNull(model,
    "Model was not loaded.");
Implementation:
public class NewProjectLoader
{
    private IFileDialog _dialog;

    public NewProjectLoader(
	IFileDialog dialog)
    {
        _dialog = dialog;
    }
    // ...
Now, how should we prompt for a file?

The code should delegate to our IFileDialog, and we can assume that if the user selects a file, the return value will not be null.

The test compiles, but it fails because the dialog is null.
Implementation:
public IEnumerable<ProjectAssembly> Load()
{
    string fileName = _dialog.SelectFile();
    if (!String.IsNullOrEmpty(fileName))
    {
        return new List<ProjectAssembly>();
    }

    throw new NotImplementedException();
}
We don’t have an implementation for IFileDialog. So we’ll define a dummy implementation and use Visual Studio to auto-generate the defaults.

Our test fails because the auto-generated code throws an error (NotImplementedException).
Test Code:
IFileDialog dialog = new MockFileDialog();
var loader = new NewProjectLoader(dialog);
IEnumerable<ProjectAssembly> model 
	= loader.Load();

Assert.IsNotNull(model,
    "Model was not loaded.");
We can easily fix the test by replacing the exception with a non-null file name. Implementation:
public class MockFileDialog 
    : IFileDialog
{
    public string SelectFile()
    {
        return "Foo";
    }
}
The test passes, but we’re not done. We need to construct a valid model.

We use a technique known as “Obvious Implementation” and we introduce our NDependStreamParser directly into our Loader.

The test breaks again, this time because “Foo” is not a valid filename.
Implementation:
string fileName = _dialog.SelectFile();
if (!String.IsNullOrEmpty(fileName))
{
    using(var stream = XmlReader.Create(fileName))
    {
        var parser = new NDependStreamParser();
        return parser.Parse(stream);
    }
}
//...
Because our solution is tied to a FileStream, we need to specify a proper file name. To do this we need to modify our MockFileDialog so that we can assign a FileName from within the test.

In order to get a valid file, we need to include one as part of the project and then enable deployment in the MSTest test settings.

(Note: We could have changed the signature of the loader to take a file name, but we chose to keep the dependency on the file here mainly due to time constraints.)
Implementation:

public class MockFileDialog
    : IFileDialog
{
    public string FileName;

    public string SelectFile()
    {
        return FileName;
    }
}

Test Code:
[DeploymentItem("AssembliesDependencies.xml")]
[TestMethod]
public void WhenLoadingANewProject...()
{
    var dialog = new MockFileDialog();
    dialog.FileName = "AssembliesDependencies.xml";
    var loader = new NewProjectLoader(dialog);

    //...

Isolating Further

While our test passes and represents the functionality we want, we’ve introduced a design problem: we’re coupled to the implementation details of the NDependStreamParser. Some may make the case that this is simply the nature of our application: we only need this one parser, and if the parser is broken, so is our loader. I don’t necessarily agree.

The problem with this type of coupling is that when the parser breaks, the unit tests for the loader will also break. If we tolerate it here, it’s reasonable to conclude that other classes will end up just as tightly coupled, and a broken parser will cascade through the majority of our tests. That defeats the purpose of our early feedback mechanism. Besides, why design our classes as black boxes that have to change whenever we introduce a different type of parser?

The solution is to introduce an interface for our parser. Resharper makes this really easy: simply select the class and choose “Extract Interface”.

public interface IGraphDataParser
{
    IEnumerable<ProjectAssembly> Parse(XmlReader reader);
}
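
After the extraction, only the class declaration changes; the body of the parser stays exactly as it was:

public class NDependStreamParser : IGraphDataParser
{
    // ... existing Parse(XmlReader) implementation, unchanged ...
}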

Adding a Mocking Framework

While we created a hand-rolled mock (aka test double) for our IFileDialog, it’s time to introduce a mocking framework that can create mock objects in memory. Using NuGet to simplify our assembly management, we add a reference to Moq to our test project.
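
Moq builds an implementation of an interface at runtime. The basic usage pattern looks like this (the setup shown is just an illustration, not a test from the experiment):

// Create a mock implementation of IFileDialog in memory.
var dialogMock = new Mock<IFileDialog>();

// Teach it how to respond when SelectFile() is called.
dialogMock.Setup(d => d.SelectFile())
          .Returns("AssembliesDependencies.xml");

// .Object is the generated implementation we hand to the code under test.
IFileDialog dialog = dialogMock.Object;

In the refactoring below we only rely on Moq’s default behaviour, so no Setup calls are required.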

Refactoring Steps

We made the following small refactoring changes to decouple ourselves from the NDependStreamParser.

Step by step, each realization and the code that came out of it:
Stream Parser should be a field.
 

Implementation:
// NewProjectLoader.cs

IFileDialog _dialog;
NDependStreamParser _parser;

public IEnumerable<ProjectAssembly> Load()
{
    string fileName = _dialog.SelectFile();
    if (!String.IsNullOrEmpty(fileName))
    {
        using (var stream = 
		XmlReader.Create(fileName))
        {
            _parser = new NDependStreamParser();
            return _parser.Parse(stream);
        }
    }

    throw new NotImplementedException();
}
We need to use the interface rather than the concrete type. Implementation:
public class NewProjectLoader
{
    IFileDialog _dialog;
    IGraphDataParser _parser;

    // ...
We should initialize the parser in the constructor instead of the Load method. Implementation:
public class NewProjectLoader
{
    IFileDialog _dialog;
    IGraphDataParser _parser;

    public NewProjectLoader(IFileDialog dialog)
    {
        _dialog = dialog;
        _parser = new NDependStreamParser();
    }

    // ...
We should supply the parser from outside the class, passing it in through the constructor.

This introduces a minor compilation problem that requires us to change the test slightly.
Test Code:
var dialog = new MockFileDialog();
var parser = new NDependStreamParser();

var loader = new NewProjectLoader(dialog, parser);

Implementation:
public class NewProjectLoader
{
    IFileDialog _dialog;
    IGraphDataParser _parser;

    public NewProjectLoader(
            IFileDialog dialog,
            IGraphDataParser parser)
    {
        _dialog = dialog;
        _parser = parser;
    }

    // ...
We need to replace our NDependStreamParser with a mock implementation.

Test Code:

var dialog = new MockFileDialog();
var parser = new Mock<IGraphDataParser>().Object;

var loader = new NewProjectLoader(dialog, parser);

Strangely enough, there’s a little-known feature of Moq that ensures mocked methods returning IEnumerable collections never return null, so our test passes!
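
In other words, even with no setup at all, the generated parser hands back an empty sequence rather than null, which is exactly what our assertion checks:

var parser = new Mock<IGraphDataParser>().Object;

// No Setup calls: Moq falls back to its defaults, and for IEnumerable<T>
// return types the default is an empty sequence, not null.
IEnumerable<ProjectAssembly> results =
    parser.Parse(XmlReader.Create(new StringReader("")));

Assert.IsNotNull(results, "Model was not loaded.");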

Additional Tests

We wrote the following additional tests:

  • WhenLoadingANewProject_WithNoFileSpecfied_ShouldNotReturnAModel
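
The body of that test isn’t shown here, but a sketch of its likely shape, assuming Load ends up returning null when the dialog yields no file (the final handling isn’t shown above):

[TestMethod]
public void WhenLoadingANewProject_WithNoFileSpecfied_ShouldNotReturnAModel()
{
    var dialog = new MockFileDialog();   // FileName deliberately left null
    var parser = new Mock<IGraphDataParser>().Object;
    var loader = new NewProjectLoader(dialog, parser);

    IEnumerable<ProjectAssembly> model = loader.Load();

    Assert.IsNull(model,
        "No model should be produced when no file is selected.");
}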

Next: Day Four


Thursday, September 08, 2011

Guided by Tests–Day Two

This post is the third in a series about a group TDD experiment to build an application in 5 days using only tests. Read the beginning here.

Today we break new ground on our application, starting with writing our first test. Today is still a teaching session, where I’ll write the first set of tests to demonstrate naming conventions and how to apply TDD with the rules we defined the day before. But first, we need to figure out where we should start.

Logical Flow

In order to determine where we should start it helps to draw out the logical flow of our primary use case: create a new dependency viewer from an NDepend AssembliesDependencies.xml file.  The logical flow looks something like this:

[Figure: logical flow diagram for the primary use case]

  1. User clicks “New”
  2. The user is prompted to select a file
  3. Some logical processing occurs where the file is read,
  4. …a graph is produced,
  5. …and the UI is updated.

The question of where to start is an interesting one. Given limited knowledge of what we need to build or how these components will interact, what area of the logical flow do we know the most about? Which part’s outcome can we reliably predict?

Starting from scratch, it seemed the most reasonable choice was to start with the part that reads our NDepend file. We know the structure of the file and we know that the contents of the file will represent our model.

Testing Constraints

When developing with a focus on testability, there are certain common problems that arise when trying to get a class under the test microscope. You learn to recognize them instantly, and I’ve jokingly referred to this as spidey-sense – you just know these are going to be problematic before you start.

While this is not a definitive list, the obvious ones are:

  • User Interface: Areas that involve the user-interface can be problematic for several reasons:
    • Some test-runners have a technical limitation and cannot launch a user-interface based on the threading model.
    • The UI may require complex configuration or additional prerequisites (style libraries, etc.) and is subject to change frequently.
    • The UI may unintentionally require human interaction during the tests, thereby limiting our ability to reliably automate.
  • File System: Any time we need files or a folder structure, we are dependent on the environment being set up a certain way with dummy data.
  • Database / Network: Being dependent on external services is additional overhead that we want to avoid. Not only will tests run considerably slower, but the outcome of the test is dependent on many factors that may not be under our control (service availability, database schema, user permissions, existing data).

Some of the less obvious ones are design considerations which may make it difficult to test, such as tight coupling to implementation details of other classes (static methods, use of “new”, etc).

In our case, our first test would be dependent on the file system. We will likely need to test several different scenarios, which will require many different files.  While we could go this route, working with the file system directly would only slow us down. We needed to find a way to isolate ourselves.

The team tossed around several different suggestions, including passing the raw XML as a string. Ultimately, since this class must read the contents of the file, we decided that the best way to work with the XML was through an XmlReader. We could simulate many different scenarios by setting up a stream containing our test data.

Our First Test

So after deciding that our class would be named NDependStreamParser, our first test looked something like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace DependencyViewer
{
    [TestClass]
    public class NDependStreamParserTests
    {
        [TestMethod]
        public void TestMethod1()
        {
            Assert.Fail();
        }
    }
}

We know very little about what we need. But at the very least, the golden rule is to ensure that all tests must fail from the very beginning. Writing “Assert.Fail();” is a good habit to establish.

In order to help identify what we need, it helps to work backward. So we start by writing our assertions first and then, working from the bottom up, fill in the missing pieces.  Our discovery followed this progression:

Each realization below is followed by the code we wrote in response:
At the end of the tests, we’ll have some sort of results. The results should not be null.

At this point, the test compiles, but it’s red.
Test Code:
object results = null;
Assert.IsNotNull( results,
           "Results were not produced.");
Where will the results come from? We’ll need a parser. The results will come after we call Parse.

The code won’t compile because this doesn’t exist. If we use the auto-generate features of Visual Studio / Resharper the test compiles, but because of the default NotImplementedException, the test fails.
Test Code:
var parser = new NDependStreamParser();
object results = parser.Parse();
Assert.IsNotNull( results,
           "Results were not produced.");
We need to make the test pass.

Do whatever we need to make it green.
Implementation:

public object Parse()
{
    // yes, it's a dirty hack. 
    // but now the test passes.
    return new object();
}
Our test passes, but we’re clearly not done. How will we parse? The data needs to come from somewhere. We need to read from a stream.

Introducing the stream argument into the Parse method won’t compile (so the tests are red), but this is a quick fix in the implementation.
Test Code:

var parser = new NDependStreamParser();
var sr = new StringReader("");
var reader = XmlReader.Create(sr);
object results = parser.Parse(reader);
// ...

Implementation:

public object Parse(XmlReader reader) { //...
Our return type shouldn’t be “Object”. What should it be?

After a short review of the NDepend AssembliesDependencies.xml file, we decide that we should read the list of assemblies from the file into a model object which we arbitrarily decide should be called ProjectAssembly. At a minimum, Parse should return an IEnumerable<ProjectAssembly>.

There are a few minor compilation problems to address here, including the auto-generation of the ProjectAssembly class. These are all simple changes that can be made in under 60 seconds.
Test Code:

var parser = new NDependStreamParser();
var sr = new StringReader("");
var reader = XmlReader.Create(sr);
IEnumerable<ProjectAssembly> results 
    = parser.Parse(reader);
// ...

Implementation:

public IEnumerable<ProjectAssembly> Parse(
	XmlReader reader)
{
    return new List<ProjectAssembly>();
}

At this point, we’re much more informed about how we’re going to read the contents of the file. We’re also ready to make some design decisions and rename our test to reflect what we’ve learned. We decide that (for simplicity’s sake) the parser should always return a list of items even if the file is empty. While the implementation may be crude, the test is complete for this scenario, so we rename it to match this decision and add an assertion to better express its intent.

Sidenote: These tests follow Roy Osherove’s naming convention, which has three parts:

  • Feature being tested
  • Scenario
  • Expected Behaviour
[TestMethod]
public void WhenParsingAStream_WithNoData_ShouldProduceEmptyContent()
{
    var parser = new NDependStreamParser();
    var sr = new StringReader("");
    var reader = XmlReader.Create(sr);

    IEnumerable<ProjectAssembly> results =
        parser.Parse(reader);

    Assert.IsNotNull( results,
        "The results were not produced." );

    Assert.AreEqual( 0, results.Count(),
        "The results should be empty." );
}

Adding Tests

We’re now ready to start adding additional tests. Based on what we know now, we can start each test with a proper name and then fill in the details.

With each test, we learn a little bit more about our model and the expected behaviour of the parser. The NDepend file contains a list of assemblies, where each assembly contains a list of assemblies that it references and a list of assemblies that it depends on. The subsequent tests we wrote:

  • WhenParsingAStream_ThatContainsAssemblies_ShouldProduceContent
  • WhenParsingAStream_ThatContainsAssembliesWithReferences_EnsureReferenceInformationIsAvailable
  • WhenParsingAStream_ThatContainsAssembliesWithDependencies_EnsureDependencyInformationIsAvailable

It’s important to note that these tests aren’t just driving out the implementation details of the parser; we’re building our model object as well. Properties are added to the model as needed.
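
A rough sketch of where the model ended up: FullName comes straight from the tests above, while the collection property names are my own guesses, since the post doesn’t show the final class.

public class ProjectAssembly
{
    public string FullName { get; set; }

    // Hypothetical names: assemblies this assembly references,
    // and assemblies that depend on it.
    public IList<string> References { get; private set; }
    public IList<string> Dependencies { get; private set; }

    public ProjectAssembly()
    {
        References = new List<string>();
        Dependencies = new List<string>();
    }
}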

Refactoring

Under the TDD mantra “Red, Green, Refactor”, “Refactor” implies that you should refactor the implementation after you’ve written the tests. However, the refactoring should extend to both the tests and the implementation.

Within the implementation, you should be able to optimize the code freely, provided you aren’t adding functionality. (My original implementation using the XmlReader was embarrassing, and I ended up experimenting with the reader syntax later that night until I found a clean, elegant solution. The tests were invaluable for discovering what was possible.)
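
I won’t reproduce the final implementation, but a reader-based version along these lines gives a feel for the shape it took; the element and attribute names below are placeholders, not NDepend’s actual schema:

public IEnumerable<ProjectAssembly> Parse(XmlReader reader)
{
    var results = new List<ProjectAssembly>();
    ProjectAssembly current = null;

    while (reader.Read())
    {
        if (reader.NodeType != XmlNodeType.Element)
            continue;

        if (reader.Name == "Assembly")          // placeholder element name
        {
            current = new ProjectAssembly { FullName = reader.GetAttribute("Name") };
            results.Add(current);
        }
        else if (current != null && reader.Name == "AssemblyRef")   // placeholder
        {
            current.References.Add(reader.GetAttribute("Name"));
        }
    }

    return results;
}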

Within the tests, refactoring means removing as much duplication as possible without obscuring the intent of the test. By the time we started the third test, the string concatenation to assemble our XML and the plumbing code to create our XmlReader had been copied and pasted several times. This plumbing logic slowly evolved into a utility class that used an XmlWriter to construct our test data.
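
That utility class isn’t shown in the post either, but a rough sketch of the idea, with illustrative names only:

public static class TestDataBuilder
{
    // Builds an XmlReader over XML produced by the supplied callback,
    // so each test only has to describe the content it cares about.
    public static XmlReader CreateReader(Action<XmlWriter> writeContent)
    {
        var xml = new StringBuilder();
        using (var writer = XmlWriter.Create(xml))
        {
            writer.WriteStartElement("AssembliesDependencies");   // assumed root element
            writeContent(writer);
            writer.WriteEndElement();
        }

        return XmlReader.Create(new StringReader(xml.ToString()));
    }
}

A test would then call something like TestDataBuilder.CreateReader(w => w.WriteElementString("Assembly", "MyAssembly")) and hand the resulting reader to the parser.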

Next: Day Three
