Tuesday, September 20, 2011

Guided by Tests–Day Five

This post is sixth in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

Today is the fifth day of our five day experiment to write an application over lunch hours using TDD. Five days was a bit ambitious, as a lot of time was spent teaching concepts, so the team agreed to tack on a few extra days just to see it all come together. So, my bad.

By the fifth day, we had all the necessary pieces to construct a graph from our NDepend xml file. Today we focused our attention on how these pieces would interact.

Next Steps

If we look back to our original flow diagram (shown below), upon receiving input from the user we need to orchestrate loading the model data from a file and converting it into a Graph object so that it can be put into the UI. As we have the loading and conversion parts in place, our focus is on how to receive input, perform the work, and update the UI.

[Figure: logical flow diagram]

Within WPF and the MVVM architecture, the primary mechanism to handle input from the user is a concept known as Commanding, and commands are implemented by the ICommand interface:

namespace System.Windows.Input
{
    public interface ICommand
    {
        bool CanExecute( object parameter );
        void Execute( object parameter );

        event EventHandler CanExecuteChanged;
    }
}

While it’s clear that we’ll use a custom command to perform our orchestration logic, the question that remains is how we should implement it. There are several options available, and two popular MVVM choices are:

  • A DelegateCommand (sometimes called RelayCommand), where the ViewModel supplies delegates for the command's logic
  • A dedicated command class that implements ICommand directly

The elegance of the DelegateCommand is that it allows the user to supply delegates for the Execute and CanExecute methods, and typically these delegates live within the body of the ViewModel class. When given the choice, I prefer command classes for application-level operations, as this aligns well with the Single Responsibility Principle and our separation-of-concerns approach. (I tend to use the DelegateCommand option for view-centric operations such as toggling the visibility of view elements, etc.)
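
For reference, here is a minimal sketch of the DelegateCommand idea. This is illustrative only, not the exact implementation from Prism or MVVM Light:

using System;
using System.Windows.Input;

public class DelegateCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public DelegateCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        if (execute == null)
            throw new ArgumentNullException("execute");

        _execute = execute;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }

    // Let WPF re-query CanExecute whenever it re-evaluates commands.
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

A ViewModel would then expose something like new DelegateCommand(o => ToggleLegend()) for small view-centric operations (ToggleLegend is a made-up example), while application-level operations like our NewProjectCommand get their own class.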

Writing a Test for an ICommand

As a team, we dug into writing the test for our NewProjectCommand. Assuming we’d use the ICommand interface, we stubbed in the parts we knew before we hit our first roadblock:

[TestClass]
public class NewProjectCommandTests
{
    [TestMethod]
    public void WhenExecutingTheCommand_DefaultScenario_ShouldShowGraphInUI()
    {
        var command = new NewProjectCommand();
        
        command.Execute( null );

        Assert.Fail(); // what should we assert?
    }
}

Two immediate concerns arise:

First, we have a testing concern. How can we assert that the command accomplished what it needed to do? The argument supplied to the Execute method typically originates from XAML binding syntax, which we won't need, so it's not likely that the command will take any parameters. Moreover, the Execute method doesn't have a return value, so we won't have any insight into the outcome of the method.

Our second concern is a design problem. It's clear that our command will need to associate graph-data to a ViewModel representing our user-interface, but how much information should the Command have about the ViewModel and how will the two communicate?

This is one of the parts I love about Test Driven Development. There is no separation between testing concerns and design problems because every test you write is a design choice.

Reviewing our Coupling Options

We have several options to establish the relationship between our Command and our ViewModel:

  • Accessing the ViewModel through WPF's Application.Current
  • Making our ViewModel a Singleton
  • Creating a globally available ServiceLocator that can locate the ViewModel for us
  • Passing the ViewModel to the command through Constructor Injection
  • Having the Command and ViewModel communicate through an independent publisher/subscriber model

Here’s a further breakdown of those options…

Application.Current.MainWindow

The most readily available solution within WPF is to leverage the framework’s application model. You can access the top-level ViewModel with some awkward casting:

var viewModel = Application.Current.MainWindow.DataContext as MainViewModel;

I’m not a huge fan of this for many reasons:

  • Overhead: the test suddenly requires additional plumbing code to initialize the WPF Application with a MainWindow bound to the proper ViewModel. This isn’t difficult to do, but it adds unwarranted complexity.
  • Coupling: Any time we bind to a top-level property or object, we’re effectively limiting our ability to change that object. In this case, we’re assuming that the MainWindow will always be bound to this ViewModel; if this were to change, all downstream consumers would also need to change.
  • Shared State: By consuming a static property we are effectively making the state of the ViewModel shared by all tests. This adds additional complexity to the tests to ensure that the shared state is properly reset to a neutral form. As a consequence, it's impossible to run the tests in parallel.

ViewModel as Singleton / ServiceLocator

This approach is a slight improvement over accessing the DataContext of the current application’s MainWindow. It eliminates the concerns surrounding configuring the WPF Application, and we gain some type-safety as we shouldn’t have to do any casting to get our ViewModel.

Despite this, Singletons like the Application.Current variety are hidden dependencies that make it difficult to understand the scope and responsibilities of an object. I tend to avoid this approach for the same reasons listed above.

Constructor Injection

Rather than having the Command reach out to a static resource to obtain a reference to the ViewModel, we can use Constructor Injection to pass a reference into the Command so that it has all the resources it needs. (This is the approach we've been using thus far for our application, too.) This approach makes sense from an API perspective, as consumers of your code will be able to understand the Command's dependencies when they try to instantiate it. The downside is that additional complexity is needed to construct the command. (Hint: We'll see this in the next post.)

This approach also eliminates the shared-state and parallelism problems, but it still couples the Command to the ViewModel. This might not be a problem if the relationship between the two objects remains fixed; however, if the application were to adopt a multi-tabbed interface, the relationship between the command and ViewModel would need to change.

Message-based Communication

The best form of loose-coupling comes from message-based communication, where the ViewModel and the Command know absolutely nothing about each other and only communicate indirectly through an intermediary broker. In this implementation, the Command would orchestrate the construction of the Graph and then publish a message through the broker. The broker, in turn, would deliver the message to a receiver that would associate the graph to the ViewModel.

Such an implementation would allow both the ViewModel and the Command implementations to change independently as long as they both adhere to the message publish/subscribe contracts. I prefer this approach for large-scale applications, though it introduces a level of indirection that can be frustrating at times.

This approach would likely work well if we needed to support a multi-tabbed user interface.
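
We didn't build one during the experiment, but a bare-bones broker (often called an event aggregator) could look something like the sketch below. The names here are mine, not from any particular framework:

using System;
using System.Collections.Generic;

// Minimal publish/subscribe broker: subscribers register a handler per message
// type; publishers hand the broker a message and never learn who receives it.
public class MessageBroker
{
    private readonly Dictionary<Type, List<Action<object>>> _handlers
        = new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<TMessage>(Action<TMessage> handler)
    {
        List<Action<object>> handlers;
        if (!_handlers.TryGetValue(typeof(TMessage), out handlers))
        {
            handlers = new List<Action<object>>();
            _handlers[typeof(TMessage)] = handlers;
        }

        handlers.Add(message => handler((TMessage)message));
    }

    public void Publish<TMessage>(TMessage message)
    {
        List<Action<object>> handlers;
        if (_handlers.TryGetValue(typeof(TMessage), out handlers))
        {
            foreach (var handler in handlers)
                handler(message);
        }
    }
}

With something like this in place, the NewProjectCommand would publish a "graph created" message after calling the builder, and whichever object owns the ViewModel would subscribe and assign the graph, so neither side references the other directly.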

Back to the Tests

Given our time constraints and our immediate needs, the team decided constructor injection was the way to go for our test.

[TestClass]
public class NewProjectCommandTests
{
    [TestMethod]
    public void WhenExecutingTheCommand_DefaultScenario_ShouldShowGraphInUI()
    {
        var viewModel = new MainViewModel();
        var command = new NewProjectCommand(viewModel);
        
        command.Execute( null );

        Assert.IsNotNull( viewModel.Graph,
            "Graph was not displayed to the user.");
    }
}

To complete the test the team made the following realizations:

Tests are failing because Execute throws a NotImplementedException. Let's commit a sin and get the test to pass.

Implementation:
public void Execute(object unused)
{
    // hack to get the test to pass
    _viewModel.Graph = new AssemblyGraph();
}
Our command will need a ProjectLoader. Following guidance from the previous day, we extracted an interface and quickly added it to the constructor.

Test Code:
var viewModel = new MainViewModel();
var projectLoader = new Mock<IProjectLoader>().Object;

var command = new NewProjectCommand(
                    viewModel,
                    projectLoader);

Implementation:
///<summary>Default Constructor</summary>
public NewProjectCommand(
        MainViewModel viewModel,
        IProjectLoader projectLoader)
{
    _viewModel = viewModel;
    _projectLoader = projectLoader;
}
We should only construct the graph if we get data from the loader.

(Fortunately, Moq will automatically return non-null values for IEnumerable types, so our test passes accidentally.)
Implementation:
public void Execute(object unused)
{
    IEnumerable<ProjectAssembly> model
        = _projectLoader.Load();

    if (model != null)
    {
        _viewModel.Graph = new AssemblyGraph();
    }
}
We’ll need a GraphBuilder to convert our model data into a graph for our viewmodel. Following some obvious implementation, we’ll add the GraphBuilder to the constructor.

A few minor changes and our test passes.
Test Code:
var viewModel = new MainViewModel();
var projectLoader = new Mock<IProjectLoader>();
var graphBuilder = new GraphBuilder();

var command = new NewProjectCommand(
                    viewModel,
                    projectLoader,
                    graphBuilder);

Implementation:
public void Execute(object unused)
{
    IEnumerable<ProjectAssembly> model
        = _projectLoader.Load();

    if (model != null)
    {
        _viewModel.Graph = 
        	_graphBuilder.BuildGraph( model );
    }
}
Building upon our findings from the previous days, we recognize our coupling to the GraphBuilder, and that we should probably write some tests demonstrating that the NewProjectCommand can handle failures from both the IProjectLoader and GraphBuilder dependencies. But rather than extract an interface for the GraphBuilder, we decide to simply make the BuildGraph method virtual instead – just to show we can.

Only now, when we run the test, it fails. It seems our graph was not created?
Test Code:
var viewModel = new MainViewModel();
var projectLoader = new Mock<IProjectLoader>().Object;
var graphBuilder = new Mock<GraphBuilder>().Object;

var command = new NewProjectCommand(
                    viewModel,
                    projectLoader,
                    graphBuilder);

// ...

Finally, in order to get our test to pass, we need to configure the mock for our GraphBuilder to construct a Graph. The final test looks like this. Note that IsAnyModel is a handy shortcut for simplifying Moq's matcher syntax.

[TestClass]
public class NewProjectCommandTests
{
    [TestMethod]
    public void WhenExecutingTheCommand_DefaultScenario_ShouldShowGraphInUI()
    {
        // ARRANGE: setup dependencies
        var viewModel = new MainViewModel();
        var projectLoader = new Mock<IProjectLoader>().Object;
        var graphBuilder = new Mock<GraphBuilder>().Object;

        // Initialize subject under test
        var command = new NewProjectCommand(
                            viewModel,
                            projectLoader,
                            graphBuilder);
        
        Mock.Get(graphBuilder)
            .Setup( g => g.BuildGraph( IsAnyModel() ))
            .Returns( new AssemblyGraph() );

        // ACT: Execute our Command
        command.Execute( null );

        // ASSERT: Verify that the command executed correctly
        Assert.IsNotNull( viewModel.Graph,
            "Graph was not displayed to the user.");
    }

    private IEnumerable<ProjectAssembly> IsAnyModel()
    {
        return It.IsAny<IEnumerable<ProjectAssembly>>();
    }
}

Of course, we’d need a few additional tests:

  • When the project loader or graph builder throws an exception
  • When the project loader doesn’t load a project, the graph should not be changed (a sketch of this one follows below)
  • When the Command is created incorrectly, such as with null arguments
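
As a rough sketch of the second case, assuming the command should simply leave the ViewModel untouched when the loader returns nothing:

[TestMethod]
public void WhenExecutingTheCommand_AndNoProjectIsLoaded_ShouldNotChangeTheGraph()
{
    var viewModel = new MainViewModel();
    var projectLoader = new Mock<IProjectLoader>().Object;
    var graphBuilder = new Mock<GraphBuilder>().Object;

    // Simulate the user cancelling: the loader produces no model data.
    Mock.Get(projectLoader)
        .Setup(loader => loader.Load())
        .Returns((IEnumerable<ProjectAssembly>)null);

    var command = new NewProjectCommand(
                        viewModel,
                        projectLoader,
                        graphBuilder);

    command.Execute(null);

    Assert.IsNull(viewModel.Graph,
        "The graph should not have changed.");
}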

Next: Day Six


Tuesday, September 13, 2011

Guided by Tests–Day Four

This post is fifth in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

As previously mentioned during Day Three, we split the group into two teams so that one focused on the process to load a new project while the other focused on constructing a graph. This post will focus on the efforts of the team working through the tests and implementation of the GraphBuilder.

This team had a unique advantage over the other team in that they had a blog post that outlined how to use the Graph# framework. I advised the team that they could refer to the post, even download the example if needed, but all code introduced into the project had to follow the rules of the experiment: all code must be written to satisfy the requirements of a test.

A Change in Approach

The goals for this team were different too. We already had our data well defined and had a reasonable expectation of what the results should be. As such, we took a different approach to writing the tests. Up to this point, our process had involved writing one test at a time and only the code needed to satisfy that test. We wouldn't identify other tests that we'd need to write until we felt the current test was complete.

For this team, we had a small brainstorming session and defined all the possible scenarios we would need to test up front.

I love this approach and tend to use it when working with my teams. I usually sit down with the developer and envision how the code would be used. From this discussion we stub out a series of failing tests (Assert.Fail) and after some high level guidance about what we need to build I leave them to implement the tests and code. The clear advantage to this approach is that I can step in for an over-the-shoulder code-review and can quickly get feedback on how things are going. When the developer says things are moving along I can simply challenge them to “prove it”. The developer is more than happy to show their progress with working tests, and the failing tests represent a great opportunity to determine if the developer has thought about how to finish them. Win/Win.

The test cases we identified for our graph builder:

  • When building a graph from an empty list, it should produce an empty graph
  • When building a graph from a single assembly, the graph should contain one vertex.
  • When building a graph with two independent assemblies, the graph should contain two vertices and there shouldn’t be any edges between them.
  • When building a graph with one assembly referencing another, the graph should contain two vertices and one edge
  • When building a graph where two assemblies have forward and backward relationships (the first item lists the second vertex as a dependency, the second item lists the first as a “referenced by”), the graph should contain unique edges between items.

By the time the team had begun to develop the third test, most of the dependent object model had been defined. The remaining tests represented implementation details. For example, to establish a relationship between assemblies we would need to store them into a lookup table. Whether this lookup table should reside within the GraphBuilder or pushed lower into the Graph itself is an optimization that can be determined later if needed. The tests would not need to change to support this refactoring effort.
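
As a rough sketch of that idea – assuming AssemblyGraph exposes AddVertex/AddEdge (as the QuickGraph-based Graph# types do), and that AssemblyVertex, AssemblyEdge, and a References collection of assembly names exist roughly as shown – the builder might look like this:

// Sketch only: the vertex/edge types and the References property are
// illustrative assumptions, not the project's actual members.
public virtual AssemblyGraph BuildGraph(IEnumerable<ProjectAssembly> assemblies)
{
    var graph = new AssemblyGraph();

    // Lookup table: one vertex per assembly name, so that "references" and
    // "referenced by" entries resolve to the same vertex instance.
    var lookup = new Dictionary<string, AssemblyVertex>();

    foreach (var assembly in assemblies)
    {
        var vertex = new AssemblyVertex { Name = assembly.FullName };
        lookup[assembly.FullName] = vertex;
        graph.AddVertex(vertex);
    }

    foreach (var assembly in assemblies)
    {
        foreach (var referenceName in assembly.References)
        {
            AssemblyVertex target;
            if (lookup.TryGetValue(referenceName, out target))
                graph.AddEdge(new AssemblyEdge(lookup[assembly.FullName], target));
        }
    }

    return graph;
}

Whether the duplicate-edge check for the forward/backward case lives here or inside the graph itself is exactly the kind of detail the tests leave open.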

Interesting Finds

The session on the fourth day involved a review of the implementation and an opportunity to refactor both the tests and the code. One of the great realizations was the ability to reduce the verbosity of initializing the test data.

We started with a lot of duplication and overhead in the tests:

[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = new ProjectAssembly
                        {
                            FullName = "Assembly1"
                        };
    var assembly2 = new ProjectAssembly
                        {
                            FullName = "Assembly2"
                        };

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

We moved some of the initialization logic into a helper method, which improved readability:

[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = CreateProjectAssembly("Assembly1");
    var assembly2 = CreateProjectAssembly("Assembly2");

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

private ProjectAssembly CreateProjectAssembly(string name)
{
    return new ProjectAssembly()
            {
                FullName = name
            };
}

However, once we discovered the name of the assembly wasn’t important and that they just had to be unique, we optimized this further:

[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = CreateProjectAssembly();
    var assembly2 = CreateProjectAssembly();

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

private ProjectAssembly CreateProjectAssembly(string name = null)
{
    if (name == null)
        name = Guid.NewGuid().ToString();

    return new ProjectAssembly()
            {
                FullName = name
            };
}

If we really wanted to, we could optimize this further by pushing this initialization logic into the production code directly.

[TestMethod]
public void WhenConstructingAProjectAssembly_WithNoArguments_ShouldAutogenerateAFullName()
{
    var assembly = new ProjectAssembly();

    bool nameIsPresent = !String.IsNullOrEmpty(assembly.FullName);

    Assert.IsTrue( nameIsPresent,
        "Name was not automatically generated.");
}
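
If we went that route, the production side of that test would be tiny; a minimal sketch:

public class ProjectAssembly
{
    public ProjectAssembly()
    {
        // Auto-generate a unique name; callers (and tests) can overwrite it.
        FullName = Guid.NewGuid().ToString();
    }

    public string FullName { get; set; }

    // ... other properties (references, dependencies) omitted
}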

Continue Reading: Day Five


Friday, September 09, 2011

Guided by Tests–Day Three

This post is fourth in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

By day three, our collective knowledge about what we were building was beginning to take shape. After reviewing the tests from the previous day, it was time for an interesting discussion: what’s next?

Determining Next Steps

In order to determine our next steps, the team turned to the logical flow diagram of our use-case. We decomposed the logical flow into the following steps:

  1. Receive input from the UI, maybe a menu control or some other UI action.
  2. Prompt the user for a file, likely using a standard open file dialog.
  3. Take the response from the dialog and feed it into our stream parser.
  4. Take the output of the stream parser and build a graph.
  5. Take the graph and update the UI, likely a ViewModel.

Following a Separation of Concerns approach, we want to design our solution so that each part has very little knowledge about the surrounding parts. We decided that we could clearly separate building the graph from prompting the user for a file. In my view, we know very little about the UI at this point, so we shouldn’t concern ourselves with how the UI initiates this activity. Instead, we can treat prompting the user for a file and orchestrating the interaction with our parser as a single concern.

It was time to split the group in two and start on different parts. Team one would focus on the code that would call the NDependStreamParser; Team two would focus on the code that consumed the list of ProjectAssembly items to produce a graph.

Note: Day Four was spent reviewing and finishing the code for team two. For the purposes of this post, I’m going to focus on the efforts of Team one.

The Next Test

The team decided that we should name this concern, “NewProjectLoader” as it would orchestrate the loading of our model. We knew that we’d be prompting for a file, so we named the test accordingly:

[TestClass]
public class NewProjectLoaderTests
{
    [TestMethod]
    public void WhenLoadingANewProject_WithAValidFile_ShouldLoadModel()
    {
        Assert.Fail();
    }
}

Within a few minutes the team quickly filled in the immediately visible details of the test.

Following the first example from the day before, the team filled in their assertions and auto-generated the parts they needed.

To make the tests pass, they hard-coded a response.
Test Code:
var loader = new NewProjectLoader();

IEnumerable<ProjectAssembly> model 
	= loader.Load();

Assert.IsNotNull(model,
    "Model was not loaded.");
Implementation:
public IEnumerable<ProjectAssembly> Load()
{
    return new List<ProjectAssembly>();
}
How should we prompt the user for a file? Hmmm.
 

Our Next Constraint

The team now needed to prompt the user to select a file. Fortunately, WPF provides the OpenFileDialog so we won’t have to roll our own dialog. Unfortunately, if we introduce it into our code we’ll be tightly coupled to the user-interface.

To isolate ourselves from this dependency, we need to introduce a small interface:

namespace DependencyViewer
{
    public interface IFileDialog
    {
        string SelectFile();
    }
}
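
The production implementation of this interface doesn't appear in the session notes, but it would presumably be a thin wrapper over WPF's OpenFileDialog. A sketch (the class name is mine):

using Microsoft.Win32;

namespace DependencyViewer
{
    public class OpenFileDialogAdapter : IFileDialog
    {
        public string SelectFile()
        {
            var dialog = new OpenFileDialog
            {
                Filter = "NDepend dependency files (*.xml)|*.xml|All files (*.*)|*.*"
            };

            // ShowDialog returns true only when the user picks a file.
            bool? result = dialog.ShowDialog();
            return result == true ? dialog.FileName : null;
        }
    }
}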

Through tests, we introduced these changes:

We need to introduce our File Dialog to our Loader.

We decide our best option is to introduce the dependency through the constructor of the loader.

This creates a small compilation error that is quickly resolved.

Our test still passes.

Test Code:
IFileDialog dialog = null;
var loader = new NewProjectLoader(dialog);
IEnumerable<ProjectAssembly> model 
	= loader.Load();

Assert.IsNotNull(model,
    "Model was not loaded.");
Implementation:
public class NewProjectLoader
{
    private IFileDialog _dialog;

    public NewProjectLoader(
	IFileDialog dialog)
    {
        _dialog = dialog;
    }
    // ...
Now, how should we prompt for a file?

The code should delegate to our IFileDialog, and we can assume that if the user selects a file, the return value will not be null.

The test compiles, but it fails because the dialog is null.
Implementation:
public IEnumerable<ProjectAssembly> Load()
{
    string fileName = _dialog.SelectFile();
    if (!String.IsNullOrEmpty(fileName))
    {
        return new List<ProjectAssembly>();
    }

    throw new NotImplementedException();
}
We don’t have an implementation for IFileDialog. So we’ll define a dummy implementation and use Visual Studio to auto-generate the defaults.

Our test fails because the auto-generated code throws an error (NotImplementedException).
Test Code:
IFileDialog dialog = new MockFileDialog();
var loader = new NewProjectLoader(dialog);
IEnumerable<ProjectAssembly> model 
	= loader.Load();

Assert.IsNotNull(model,
    "Model was not loaded.");
We can easily fix the test by replacing the exception with a non-null file name.

Implementation:
public class MockFileDialog 
    : IFileDialog
{
    public string SelectFile()
    {
        return "Foo";
    }
}
The test passes, but we’re not done. We need to construct a valid model.

We use a technique known as “Obvious Implementation” and we introduce our NDependStreamParser directly into our Loader.

The test breaks again, this time because “Foo” is not a valid filename.
Implementation:
string fileName = _dialog.SelectFile();
if (!String.IsNullOrEmpty(fileName))
{
    using(var stream = XmlReader.Create(fileName))
    {
        var parser = new NDependStreamParser();
        return parser.Parse(stream);
    }
}
//...
Because our solution is tied to a FileStream, we need to specify a proper file name. To do this we need to modify our MockFileDialog so that we can assign a FileName from within the test.

In order to get a valid file, we need to include a file as part of the project and then enable deployment in the MSTest test settings.

(Note: We could have changed the signature of the loader to take a filename, but we chose to keep the dependency on the file here mainly for time concerns.)
Implementation:

public class MockFileDialog
    : IFileDialog
{
    public string FileName;

    public string SelectFile()
    {
        return FileName;
    }
}

Test Code:
[DeploymentItem("AssembliesDependencies.xml")]
[TestMethod]
public void WhenLoadingANewProject...()
{
    var dialog = new MockFileDialog();
    dialog.FileName = "AssembliesDependencies.xml";
    var loader = new NewProjectLoader(dialog);

    //...

Isolating Further

While our test passes and it represents the functionality we want, we’ve introduced a design problem: we’re coupled to the implementation details of the NDependStreamParser. Some may make the case that this is the nature of our application; we only need this class, and if the parser’s broken, so is our loader. I don’t necessarily agree.

The problem with this type of coupling is that when the parser breaks, the unit tests for the loader will also break. If we allow this type of coupling, you can draw the logical conclusion that other classes will be tightly coupled too, and thus a broken parser will have a cascading effect on the majority of our tests. This defeats the purpose of our early feedback mechanism. Besides, why design our classes to be black-boxes that will have to change if we introduce different types of parsers?

The solution is to introduce an interface for our parser. ReSharper makes this really easy: simply click on the class and choose “Extract Interface”.

public interface IGraphDataParser
{
    IEnumerable<ProjectAssembly> Parse(XmlReader reader);
}

Adding a Mocking Framework

Whereas we created a hand-rolled mock (aka Test Double) for our IFileDialog, it’s time to introduce a mocking framework that can create mock objects in memory.  Using NuGet to simplify our assembly management, we add a reference to Moq to our test project.

Refactoring Steps

We made the following small refactoring changes to decouple ourselves from the NDependStreamParser.

Stream Parser should be a field.
 

Implementation:
// NewProjectLoader.cs

IFileDialog _dialog;
NDependStreamParser _parser;

public IEnumerable<ProjectAssembly> Load()
{
    string fileName = _dialog.SelectFile();
    if (!String.IsNullOrEmpty(fileName))
    {
        using (var stream = 
		XmlReader.Create(fileName))
        {
            _parser = new NDependStreamParser();
            return _parser.Parse(stream);
        }
    }

    throw new NotImplementedException();
}
We need to use the interface rather than the concrete type.

Implementation:
public class NewProjectLoader
{
    IFileDialog _dialog;
    IGraphDataParser _parser;

    // ...
We should initialize the parser in the constructor instead of the Load method.

Implementation:
public class NewProjectLoader
{
    IFileDialog _dialog;
    IGraphDataParser _parser;

    public NewProjectLoader(IFileDialog dialog)
    {
        _dialog = dialog;
        _parser = new NDependStreamParser();
    }

    // ...
We should initialize the parser from outside the constructor.

This introduces a minor compilation problem that requires us to change the test slightly.
Test Code:
var dialog = new MockFileDialog();
var parser = new NDependStreamParser();

var loader = new NewProjectLoader(dialog, parser);

Implementation:
public class NewProjectLoader
{
    IFileDialog _dialog;
    IGraphDataParser _parser;

    public NewProjectLoader(
            IFileDialog dialog,
            IGraphDataParser parser)
    {
        _dialog = dialog;
        _parser = parser;
    }

    // ...
We need to replace our NDependStreamParser with a mock implementation.

Test Code:

var dialog = new MockFileDialog();
var parser = new Mock<IGraphDataParser>().Object;

var loader = new NewProjectLoader(dialog, parser);

Strangely enough, there’s a little-known feature of Moq that ensures mocks returning IEnumerable collections will never return null, so our test passes!
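
In other words, something like this holds (an illustration only, not code from the project):

var parser = new Mock<IGraphDataParser>().Object;
var reader = XmlReader.Create(new StringReader(""));

// No Setup() was made, yet Moq's default value behaviour returns an
// empty enumerable rather than null for IEnumerable return types.
IEnumerable<ProjectAssembly> results = parser.Parse(reader);

Assert.IsNotNull(results);              // passes
Assert.AreEqual(0, results.Count());    // also passes - the collection is empty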

Additional Tests

We wrote the following additional tests:

  • WhenLoadingANewProject_WithNoFileSpecfied_ShouldNotReturnAModel

Next: Day Four


Thursday, September 08, 2011

Guided by Tests–Day Two

This post is third in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

Today we break new ground on our application, starting with writing our first test. Today is still a teaching session, where I’ll write the first set of tests to demonstrate naming conventions and how to use TDD with the rules we defined the day before. But first, we need to figure out where we should start.

Logical Flow

In order to determine where we should start it helps to draw out the logical flow of our primary use case: create a new dependency viewer from an NDepend AssembliesDependencies.xml file.  The logical flow looks something like this:

[Figure: logical flow diagram]

  1. User clicks “New”
  2. The user is prompted to select a file
  3. Some logical processing occurs where the file is read,
  4. …a graph is produced,
  5. …and the UI is updated.

The question on where to start is an interesting one. Given limited knowledge of what we need to build or how these components will interact, what area of the logical flow do we know the most about? What part can we reliably predict the outcome?

Starting from scratch, it seemed the most reasonable choice was to start with the part that reads our NDepend file. We know the structure of the file and we know that the contents of the file will represent our model.

Testing Constraints

When developing with a focus on testability, there are certain common problems that arise when trying to get a class under the test microscope. You learn to recognize them instantly, and I’ve jokingly referred to this as spidey-sense – you just know these are going to be problematic before you start.

While this is not a definitive list, the obvious ones are:

  • User Interface: Areas that involve the user-interface can be problematic for several reasons:
    • Some test-runners have a technical limitation and cannot launch a user-interface based on the threading model.
    • The UI may require complex configuration or additional prerequisites (style libraries, etc) and is subject to change frequently
    • The UI may unintentionally require human interaction during the tests, thereby limiting our ability to reliably automate.
  • File System: Any time we need files or folder structure, we are dependent on the environment being set up a certain way with dummy data.
  • Database / Network: Being dependent on external services is additional overhead that we want to avoid. Not only will tests run considerably slower, but the outcome of the test is dependent on many factors that may not be under our control (service availability, database schema, user permissions, existing data).

Some of the less obvious ones are design considerations which may make it difficult to test, such as tight coupling to implementation details of other classes (static methods, use of “new”, etc).

In our case, our first test would be dependent on the file system. We will likely need to test several different scenarios, which will require many different files.  While we could go this route, working with the file system directly would only slow us down. We needed to find a way to isolate ourselves.

The team tossed around several different suggestions, including passing the xml in as a string. Ultimately, as this class must read the contents of the file, we decided that the best way to work with the XML was through an XmlReader. We could simulate many different scenarios by setting up a stream containing our test data.

Our First Test

So after deciding that the name of our class would be named NDependStreamParser, our first test looked something like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace DependencyViewer
{
    [TestClass]
    public class NDependStreamParserTests
    {
        [TestMethod]
        public void TestMethod1()
        {
            Assert.Fail();
        }
    }
}

We know very little about what we need. But at the very least, the golden rule is to ensure that all tests must fail from the very beginning. Writing “Assert.Fail();” is a good habit to establish.

In order to help identify what we need, it helps to work backward. So we start by writing our assertions first and then, working from the bottom up, fill in the missing pieces.  Our discovery followed this progression:

At the end of the tests, we’ll have some sort of results. The results should not be null.

At this point, the test compiles, but it’s red.
Test Code:
object results = null;
Assert.IsNotNull( results,
           "Results were not produced.");
Where will the results come from? We’ll need a parser. The results will come after we call Parse.

The code won’t compile because this doesn’t exist. If we use the auto-generate features of Visual Studio / ReSharper the test compiles, but because of the default NotImplementedException, the test fails.
Test Code:
var parser = new NDependStreamParser(); 
object results = parser.Parse();
Assert.IsNotNull( results,
           "Results were not produced.");
We need to make the test pass.

Do whatever we need to make it green.
Implementation:

public object Parse()
{
    // yes, it's a dirty hack. 
    // but now the test passes.
    return new object();
}
Our test passes, but we’re clearly not done. How will we parse? The data needs to come from somewhere. We need to read from a stream.

Introducing the stream argument into the Parse method won’t compile (so the test is red), but this is a quick fix in the implementation.
Test Code:

var parser = new NDependStreamParser();
var sr = new StringReader("");
var reader = XmlReader.Create(sr);
object results = parser.Parse(reader);
// ...

Implementation:

public object Parse(XmlReader reader) { //...
Our return type shouldn’t be “Object”. What should it be?

After a short review of the NDepend AssembliesDependencies.xml file, we decide that we should read the list of assemblies from the file into a model object which we arbitrarily decide should be called ProjectAssembly. At a minimum, Parse should return an IEnumerable<ProjectAssembly>.

There are a few minor compilation problems to address here, including the auto-generation of the ProjectAssembly class. These are all simple changes that can be made in under 60 seconds.
Test Code:

var parser = new NDependStreamParser();
var sr = new StringReader("");
var reader = XmlReader.Create(sr);
IEnumerable<ProjectAssembly> results 
    = parser.Parse(reader);
// ...

Implementation:

public IEnumerable<ProjectAssembly> Parse(
	XmlReader reader)
{
    return new List<ProjectAssembly>();
}

At this point, we’re much more informed about how we’re going to read the contents from the file. We’re also ready to make some design decisions and rename our test accordingly to reflect what we’ve learned. We decide that (for simplicity’s sake) the parser should always return a list of items even if the file is empty. While the implementation may be crude, the test is complete for this scenario, so we rename our test to match this decision and add an additional assertion to improve the intent of the test.

Sidenote: The naming convention for these tests is based on Roy Osherove’s naming convention, which has three parts:

  • Feature being tested
  • Scenario
  • Expected Behaviour

[TestMethod]
public void WhenParsingAStream_WithNoData_ShouldProduceEmptyContent()
{
    var parser = new NDependStreamParser();
    var sr = new StringReader("");
    var reader = XmlReader.Create(sr);

    IEnumerable<ProjectAssembly> results =
        parser.Parse(reader);

    Assert.IsNotNull( results,
        "The results were not produced." );

    Assert.AreEqual( 0, results.Count(),
        "The results should be empty." );
}

Adding Tests

We’re now ready to start adding additional tests. Based on what we know now, we can start each test with a proper name and then fill in the details.

With each test, we learn a little bit more about our model and the expected behaviour of the parser. The NDepend file contains a list of assemblies, where each assembly contains a list of assemblies that it references and a list of assemblies that it depends on. The subsequent tests we wrote:

  • WhenParsingAStream_ThatContainsAssemblies_ShouldProduceContent
  • WhenParsingAStream_ThatContainsAssembliesWithReferences_EnsureReferenceInformationIsAvailable
  • WhenParsingAStream_ThatContainsAssembliesWithDependencies_EnsureDependencyInformationIsAvailable

It’s important to note that these tests aren’t just building the implementation details of the parser, we’re building our model object as well. Properties are added to the model as needed.

Refactoring

Under the TDD mantra “Red, Green, Refactor”, “Refactor” implies that you should refactor the implementation after you’ve written the tests. However, the scope of the refactor should apply to both tests and implementation.

Within the implementation, you should be able to optimize the code freely, assuming that you aren’t adding additional functionality. (My original implementation using the XmlReader was embarrassing, and I ended up experimenting with the reader syntax later that night until I found a clean, elegant solution. The tests were invaluable for discovering what was possible.)

Within the tests, refactoring means removing as much duplication as possible without obscuring the intent of the test. By the time we started the third test, the string concatenation to assemble our xml and the plumbing code to create our XmlReader had been copied and pasted several times. This plumbing logic slowly evolved into a utility class that used an XmlWriter to construct our test data.
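
That utility class itself isn't shown in these posts, but its shape was roughly the following sketch. The element and attribute names below are placeholders; the real ones mirror the AssembliesDependencies.xml schema:

using System.IO;
using System.Text;
using System.Xml;

public static class TestData
{
    public static XmlReader CreateStream(params string[] assemblyNames)
    {
        var xml = new StringBuilder();
        var settings = new XmlWriterSettings { OmitXmlDeclaration = true };

        using (var writer = XmlWriter.Create(xml, settings))
        {
            writer.WriteStartElement("Assemblies");        // placeholder element name

            foreach (var name in assemblyNames)
            {
                writer.WriteStartElement("Assembly");      // placeholder element name
                writer.WriteAttributeString("Name", name);
                writer.WriteEndElement();
            }

            writer.WriteEndElement();
        }

        return XmlReader.Create(new StringReader(xml.ToString()));
    }
}

A test can then ask for TestData.CreateStream("Assembly1", "Assembly2") instead of concatenating strings by hand.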

Next: Day Three


Wednesday, September 07, 2011

Guided by Tests–Day One

This post is second in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

Today is the pitch. We'll talk about what we're going to build and how we're going to do it, but first we need to understand the motivation behind our methodology. We need to understand why we test.

Methodology

Kent Beck's TDD by Example was one of the first books I read about TDD and to this day it's still one of my favourites. I know many prefer Roy Osherove's Art of Unit Testing (which I also highly recommend) because it's newer and has .NET examples, but in comparison they are very different books. Roy's book represents the evolved practice of unit testing, and it provides a solid path to understanding testing and the modern testing frameworks that have taken shape since Kent's book was written in 2002. Kent's book is raw: the primordial ooze that walked upon land and declared itself sentient. It defines the methodology as an offshoot of extreme programming and is the basis for the frameworks Roy explains how to use. If you're looking for a book on how to start and the mechanics of TDD, this is it.

As an offshoot of extreme programming, the core philosophy of TDD is:

  • Demonstrate working software at all times
  • Improve confidence while making changes
  • Build only the software you need

My experiment is based on the strategies defined in Kent's book – the premise is to demonstrate working software and get early feedback – but in a nutshell, you make incredibly small incremental changes and compile and run tests a lot (30-50 times per hour). I sometimes refer to this rapid code/test cycle as the Kent Beck Method or Proper TDD. I don't necessarily follow this technique when I code, but it's a great way to start until you learn how to find your balance. Eventually, running tests becomes instinct or muscle memory – you just know when you should.

The Rules

For our experiment, we followed these rules:

1. Tests must be written first.

This means that before we do anything, we start by writing a test. If we need to create classes in order to get the test to compile, that is a secondary concern: create the test first, even if it doesn’t compile, then introduce what you need.

2. Code only gets written to satisfy a test.

This means that we write only the code that is needed to make a test pass. If while writing the code you're tempted to add additional functionality that you might need, don't succumb to writing that code now. Instead, focus on the current test and make a note for additional tests. This ensures that we only write the code that is needed.

3. Code must always compile (at least within 30 seconds of making a change)

This may seem like a difficult practice to adopt, but it is extremely valuable. To follow this rule, make small changes that can be corrected easily and compile frequently to provide early feedback. If you find yourself breaking a contract that takes twenty minutes to fix, you're doing it wrong.

4. Time between red/green must be very short (<1 minute, meaning rollback if you're not sure.)

This is also a very difficult rule to keep, but it forces you to recognize the consequences of your changes. You should know before you run the test whether it's going to pass or fail. Your brain will explode when you're wrong (and that’s a good thing).

The Problem

As mentioned in the first post, my current project uses NDepend to perform static analysis as part of our build process. The build generates a static image. As the analysis covers over 80 assemblies, the text becomes illegible and it's somewhat difficult to read (image text has been blurred to hide assembly names).

Geez!

The two primary use cases I wanted the application to have:

  • Allow the user to select an NDepend AssembliesDependencies.xml file and display it as a graph
  • Once the graph has been rendered, provide an option to choose which assemblies should be displayed

The graph would be implemented using Graph#, and we could borrow much of the presentation details from Sacha Barber’s post. Using Graph# would provide a rich user-experience, all we’d need to do is implement the surrounding project framework.  Piece of cake!

I asked the team if there were any additional considerations that they would like to see. We would take some of these considerations into account when designing the solution. Some suggestions were made:

  • Multi-tab support so that more than one graph can be open at a time
  • Ability to color code the nodes
  • Ability to save a project (and load it presumably)

Getting Started

With the few remaining minutes in the lunch hour, we set up the project:

  1. Create a new WPF Project.  Solution Name: DependencyViewer, Project Name: DependencyViewer.
  2. Create a new Test Project, DependencyViewer.Tests
  3. Added a reference from the Test Project to the WPF Project.
  4. Deleted all automatically generated files:
    1. App.xaml
    2. MainWindow.xaml
    3. TestClass1.
  5. Renamed the default namespace of the Test Project to match the namespace of the WPF Application. For more info, read this detailed post which explains why you should.

While I have some preferences on third-party tools, etc – we’ll get to those as they’re needed.

Next: Day Two


Tuesday, September 06, 2011

Guided By Tests

Last week I started a TDD experiment at work. I gathered some interest from management and then sought out participants with a very simple concept: build an application in 5 days, over lunch hour, using nothing but unit tests to drive development.

My next few posts will be part of a series that shares the details of that experiment.

Motivation

Over the years, I've had many conversations with colleagues about TDD. One of the most common comments made is that they don't know what to test or where to start. Another common theme is the desire to get into a project at the very beginning, where all the project members have bought into the concept. I've always found this comment to be strange. It's a great comment and it makes perfect sense, but why the beginning? Why not the middle or end of the project, after the tests are put into place? I sense they're expressing two things. First, they're expressing the obvious: that life for the development team would be much different if they had tests from the beginning. And secondly, they're interested in understanding how the team managed to establish and sustain the process.

The goal of my experiment was to show how to start a project with TDD and to walk through the process of deconstructing requirements into testable components. The process would use tests as the delivery mechanism but would also showcase how to design software for testability using separation of concerns and SOLID OO design principles. Requirements and documentation would be deliberately vague, and the application would need to showcase common real-world problems. The tricky part would be finding an application to build.

Fortunately, I was recently inspired by a post by Sacha Barber that provides a great example of how to use the open source graphing framework Graph# to build a graph application in WPF. I immediately saw how I could use this. My current project uses NDepend for static analysis and as part of the build process it produces an image of our dependency graph. With over 80 nodes and illegible text, the static image is mostly useless. However, the build process spits out the details for the graph as an XML file. With some simple xml parsing, I could use Sacha's example to build my own graph.

So rather than spending my weekend working on this application, why not turn this into a teaching exercise? (Ironically, I'm spending my weekends blogging about it)

Follow up posts:
