Tuesday, October 11, 2011

Guided by Tests–Day Seven

This post is eighth in a series about a group TDD experiment to build an application in 7 days (originally planned as 5) using only tests.  Read the beginning here.

Today is the day we test the untestable. Early in the experiment we hit a small roadblock when our code needed to interact with the user-interface. Since unit-testing the user-interface wasn’t something we wanted to pursue, we hid the OpenFileDialog behind a wrapper, with the expectation that we would return to build out that component with tests later. The session for this day would prove to be an interesting challenge.

The Challenges of Physical Dependencies

Although using a wrapper to shield our code from difficult-to-test dependencies is a common and well-accepted technique, it would be irresponsible not to test the internal implementation details of that wrapper. Testing against physical dependencies is hard because they introduce a massive amount of overhead, but if we can isolate the logic from the physical dependency we can use unit tests to get 80-90% of the way there. To cover the remainder, you either test manually or write a set of integration or functional tests.

The technique outlined below can be used for testing user-interface components like this one, email components, and in a pinch it can even work for network-related services.

Testing our Wrapper

Time to write some tests for our IFileDialog. I have some good news and bad news.
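For reference, the wrapper’s contract is tiny; this sketch is inferred from the implementation shown later in the post:

```csharp
// The seam that shields the rest of the application from the
// Win32 open-file dialog. Implementations return the selected
// file path (or null when nothing was selected).
public interface IFileDialog
{
    string SelectFile();
}
```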

The good news is Microsoft provides a common OpenFileDialog as part of WPF, meaning that I don’t have to roll my own and I can achieve a common look and feel with other applications with little effort. It also means we can assume that the OpenFileDialog is defect-free, so we don’t have to write unit tests for it.

The bad news is I use this common dialog so infrequently that I forget how to use it.

So instead of writing a small utility application to play with the component, I write a test that shows me exactly how the component works:

[TestMethod]
public void WhenSelectAFileFromTheDialog_AndUserSelectsAFile_ShouldReturnFileName()
{
    var dialog = new OpenFileDialog();
    dialog.ShowDialog(); // this will show the dialog

    Assert.IsNotNull(dialog.FileName); 
}

When I run this test, the file dialog is displayed. If I don’t select a file, the test fails. Now that we know how it works, we can rewrite our test and move this code into a concrete implementation.

Unit Test:

[TestMethod]
public void WhenSelectingAFile_AndUserMakesAValidSelection_ShouldReturnFileName()
{
    var subject = new FileDialog();
    string fileName = subject.SelectFile();
    Assert.IsNotNull(fileName);
}

Production Code:

public class FileDialog : IFileDialog
{
    public string SelectFile()
    {
        var dialog = new OpenFileDialog();
        dialog.ShowDialog();

        return dialog.FileName;
    }
}

The implementation is functionally correct, but when I run the test I have to select a file in order to have the test pass. This is not ideal. We need a means to intercept the dialog and simulate the user selecting a file. Otherwise, someone will have to babysit the build and manually click file dialog prompts until the early morning hours.

Partial Mocks To the Rescue

Instead of isolating the instance of our OpenFileDialog with a mock implementation, we intercept the activity and allow ourselves to supply a different implementation for our test. The following shows a simple change to the code to make this possible.

public class FileDialog : IFileDialog
{
    public string SelectFile()
    {
        var dialog = new OpenFileDialog();

        Show(dialog);

        return dialog.FileName;
    }

    internal virtual void Show(OpenFileDialog dialog)
    {
        dialog.ShowDialog();
    }
}

This next part is a bit weird. In the last several posts, we’ve used Moq to replace our dependencies with fake stand-in implementations. For this post, we’re going to mock the subject of the test, and fake out specific methods on the subject. Go back and re-read that. You can stop re-reading that now.

As an aside: I often don’t like showing this technique because I’ve seen it get abused. I’ve seen developers use it to avoid breaking classes down into smaller responsibilities; they fill their classes with virtual methods and then stub out huge chunks of the subject. This feels like shoddy craftsmanship and doesn’t sit well with me – granted, it works, but it leads to problems. First, the areas they’re subverting never get tested. Second, it’s too easy for developers to forget what they’re doing and start writing tests for the mocking framework instead of the subject’s functionality. So use it with care. In this example, I’m subverting one line of a well-tested third-party component in order to avoid human involvement in the test.

In order to intercept the Show method and replace it with our own implementation we can use Moq’s Callback feature. I’ve written about Moq’s support for Callbacks before, but in a nutshell Moq can intercept the original method and its inbound arguments for use within your test.

Our test now looks like this:

[TestMethod]
public void WhenSelectingAFile_AndUserMakesAValidSelection_ShouldReturnFileName()
{
    // setup a partial mock for our subject
    var mock = new Mock<FileDialog>();
    FileDialog subject = mock.Object;

    // The Show method in our FileDialog is virtual, so we can setup
    //    an alternate behavior when it's called.
    // We configure the Show method to call the SelectAFile method
    //    with the original arguments
    mock.Setup( partialMock => partialMock.Show( It.IsAny<OpenFileDialog>() ))
        .Callback<OpenFileDialog>( SelectAFile );

    string fileName = subject.SelectFile();
    Assert.IsNotNull(fileName);
}

// we alter the original inbound argument to simulate
//    the user selecting a file
private void SelectAFile(OpenFileDialog dialog)
{
    dialog.FileName = "Foo";
}

Now when our test runs, the FileDialog returns “Foo” without launching a popup. We can then write tests for a few extra scenarios:

[TestMethod]
public void WhenSelectingAFile_AndTheUserCancelsTheFileDialog_NoFileNameShouldBeReturned()
{
    // we're mocking out the call to show the dialog,
    // so without any setup on the mock, the dialog will not return a value.

    string fileName = Subject.SelectFile();

    Assert.IsNull(fileName);
}
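Note that for the cancellation test to pass, the wrapper has to normalize the dialog’s default empty FileName to null. A sketch of that adjustment (my reconstruction, not necessarily the exact code from the session):

```csharp
public class FileDialog : IFileDialog
{
    public string SelectFile()
    {
        var dialog = new OpenFileDialog();

        Show(dialog);

        // If the user cancelled, FileName is left empty;
        // report that as null rather than an empty string.
        return String.IsNullOrEmpty(dialog.FileName)
                ? null
                : dialog.FileName;
    }

    internal virtual void Show(OpenFileDialog dialog)
    {
        dialog.ShowDialog();
    }
}
```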

[TestMethod]
public void WhenSelectingAFile_BeforeUserSelectsAFile_EnsureDefaultDirectoryIsApplicationRootFolder()
{
    // ARRANGE:
    string expectedDirectory = Environment.CurrentDirectory;

    Mock.Get(Subject)
        .Setup(d => d.Show(It.IsAny<OpenFileDialog>()))
        .Callback<OpenFileDialog>(
            win32Dialog =>
            {
                // ASSERT: validate that the directory is set when the dialog is shown
                Assert.AreEqual(expectedDirectory, win32Dialog.InitialDirectory);
            });

    // ACT: invoke showing the dialog
    Subject.SelectFile();
}

Next: Review some of the application metrics for our experiment in the Guided by Tests – Wrap Up.


Tuesday, October 04, 2011

Guided by Tests–Day Six

This post is seventh in a series about a group TDD experiment to build an application in 7 days (originally planned as 5) using only tests.  Read the beginning here.

Today is the day where we put it all together, wire up a user-interface and cross our fingers when we run the application. I joked with the team that “today is the day we find out what tests we missed.” Bugs are simply tests we didn’t think about.

Outstanding Design Concerns

At this point we have all the parts for our primary use case. And although we’ve designed the parts to be small with simple responsibilities, we’ve been vague about the overall structure of the application. Before we can show a user-interface, we’ve got a few remaining design choices.

Associating the Command to the View

In the last post we looked at the NewProjectCommand. After reviewing our choices about the relationship between the Command and the ViewModel, we decided that the most pragmatic choice was to have the Command hold a reference to the ViewModel, though we didn’t define where either object fit into the overall application. The next question presented to the team was whether the command should be independent of the ViewModel and live in a menu-control or other top-level ViewModel, or whether we should add a Command property to the ViewModel.

The team decided to take the convenience route and put our NewProjectCommand as a property on our MainViewModel. Ultimately, this decision means that our MainViewModel becomes the top-level ViewModel that the View will bind to. This decision is largely view-centric and should be easy to change in the future if we need to reorganize the structure of the application.

With this decision resolved, we’ve got a better picture of how the user-interface and logical model will work. Before we start working on the xaml, we need to figure out how all these pieces come together.

Assembling the MainViewModel

The parts of our application have been designed with the Dependency Inversion Principle in mind – our objects don’t have a “top” and expect that the caller will pass in the proper configuration. Eventually, something must take the responsibility to configure the object graph. The next question is: where should this happen?

Someone suggested stitching the components together in the application start-up routine. While this is logically where this activity should occur, it doesn’t make sense to burden the start-up routine with these implementation details. If we did, we’d likely have a monster start-up routine that would change constantly. Plus, we’d have little means to test it without invoking the user-interface.

Achieving 100% code coverage, though a noble effort, is not always realistic. In most applications it’s likely that 10% of the codebase won’t have tests because those areas border on the extreme edges of the application. Writing unit tests for extreme edges is hard (“unit” tests for user-interface components is especially difficult) and it helps to have a balanced testing strategy that includes unit, system integration and functional UI automation to cover these areas.

Rather than including these details in the application start-up routine, we’ll move as much code as possible into testable components. The goal is to make the start-up routine so simple that it won’t require changes. This will minimize the untestable areas of the application.

To accomplish this, we decided to create a Factory for a MainViewModel.

Realizations for the MainViewModel Factory

The primary responsibility of our factory is to assemble our ViewModel with a fully configured NewProjectCommand. The test looks something like this:

[TestMethod]
public void CreatingAViewModel_WithDefaults_ShouldSetupViewModel()
{
    var factory = new MainViewModelFactory();
    var viewModel = factory.Create();
    
    Assert.IsNotNull(viewModel, "ViewModel was not created.");
    Assert.IsNotNull(viewModel.NewProject, "Command was not wired up.");
}

In order to make the test pass, we’ll need to wire up all the components of our solution within the factory. The argument checking we’ve introduced in the constructors of our command and loader guarantees that. Ultimately, the factory looks like this:

public class MainViewModelFactory
{
    public MainViewModel Create()
    {
        var vm = new MainViewModel();

        IFileDialog dialog = new MockFileDialog(); // stand-in until the real FileDialog wrapper is built with tests (Day Seven)
        IGraphDataParser graphParser = new NDependStreamParser();
        var graphBuilder = new GraphBuilder();

        var loader = new ProjectLoader(dialog, graphParser);

        vm.NewProject = new NewProjectCommand(vm, loader, graphBuilder);

        return vm;
    }
}

With that all in place, the application start-up routine is greatly simplified. We could probably optimize it further, but we’re anxious to see our application running today, so this will do:

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        base.OnStartup(e);

        var shell = new Shell();
        shell.DataContext = new MainViewModelFactory().Create();
        shell.Show();
    }
}

Running the App for the first time

Ready for our first bug? We borrowed some XAML from Sacha’s post, and after we discovered that we needed a one-line user control, we wired up a crude Button to our NewProject command. Then we launched the app, crossed our fingers, and upon clicking the “New Project” button… nothing happened!

We set a breakpoint and then walked through the command’s execution. Everything was perfect, except we had forgotten one small detail: we forgot to notify the user-interface that we had produced a graph.

As a teaching opportunity, we wrote a unit test for our first bug:

[TestClass]
public class MainViewModelTests
{
    [TestMethod]
    public void WhenGraphPropertyChanges_ShouldNotifyTheUI()
    {
        string propertyName = null;

        var viewModel = new MainViewModel();
        viewModel.PropertyChanged += (sender,e) => 
            propertyName = e.PropertyName;

        viewModel.Graph = new AssemblyGraph();

        Assert.AreEqual("Graph", propertyName,
            "The user interface wasn't notified.");
    }
}
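The production-side fix is the standard INotifyPropertyChanged pattern on the Graph property. A minimal sketch (the property is typed as object here to keep the sketch self-contained; the real class uses AssemblyGraph):

```csharp
using System.ComponentModel;

public class MainViewModel : INotifyPropertyChanged
{
    private object _graph; // AssemblyGraph in the real application

    public event PropertyChangedEventHandler PropertyChanged;

    public object Graph
    {
        get { return _graph; }
        set
        {
            _graph = value;
            OnPropertyChanged("Graph"); // the line we forgot
        }
    }

    protected virtual void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```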

And on the second attempt (read: after messing with xaml styles for a while) the app produces the following graph:

WPF-ComponentDependenciesDiagram

Next: Read about polishing off the File Dialog in Day Seven


Tuesday, September 20, 2011

Guided by Tests–Day Five

This post is sixth in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

Today is the fifth day of our five day experiment to write an application over lunch hours using TDD. Five days was a bit ambitious, as a lot of time was spent teaching concepts, so the team agreed to tack on a few extra days just to see it all come together. So, my bad.

By the fifth day, we had all the necessary pieces to construct a graph from our NDepend xml file. Today we focused our attention on how these pieces would interact.

Next Steps

If we look back to our original flow diagram (shown below), upon receiving input from the user we need to orchestrate between loading model data from a file and converting it into a Graph object so that it can be shown in the UI. As we have the loading and conversion parts in place, our focus is how to receive input, perform the work and update the UI.

LogicalFlowDiagram

Within WPF and the MVVM architecture, the primary mechanism to handle input from the user is a concept known as Commanding, and commands are implemented by the ICommand interface:

namespace System.Windows.Input
{
    public interface ICommand
    {
        bool CanExecute( object parameter );
        void Execute( object parameter );

        event EventHandler CanExecuteChanged;
    }
}

While it’s clear that we’ll use a custom command to perform our orchestration logic, the question that remains is how we should implement it. There are several options available; two popular MVVM choices are:

  • a DelegateCommand (sometimes called RelayCommand), where the ViewModel supplies delegates for the command’s behavior; or
  • dedicated command classes that implement ICommand directly.

The elegance of the DelegateCommand is that the user supplies delegates for the Execute and CanExecute methods, and typically these delegates live within the body of the ViewModel class. When given the choice I prefer command classes for application-level operations as it aligns well with the Single Responsibility Principle and our separation-of-concerns approach. (I tend to use the DelegateCommand option for view-centric operations such as toggling visibility of view-elements, etc.)
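For contrast, a bare-bones DelegateCommand looks something like the following sketch (MVVM libraries such as Prism ship hardened versions; this is just to illustrate the shape):

```csharp
using System;
using System.Windows.Input;

// A command whose behavior is supplied as delegates, typically
// defined inside the ViewModel that exposes the command.
public class DelegateCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Predicate<object> _canExecute;

    public DelegateCommand(Action<object> execute, Predicate<object> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        // No predicate supplied means the command is always available.
        return _canExecute == null || _canExecute(parameter);
    }

    public void Execute(object parameter)
    {
        _execute(parameter);
    }
}
```

With this style, the ViewModel would expose something like `new DelegateCommand(p => LoadProject())` as a property, which is exactly the coupling trade-off discussed below.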

Writing a Test for an ICommand

As a team, we dug into writing the test for our NewProjectCommand. Assuming we’d use the ICommand interface, we stubbed in the parts we knew before we hit our first roadblock:

[TestClass]
public class NewProjectCommandTests
{
    [TestMethod]
    public void WhenExecutingTheCommand_DefaultScenario_ShouldShowGraphInUI()
    {
        var command = new NewProjectCommand();
        
        command.Execute( null );

        Assert.Fail(); // what should we assert?
    }
}

Two immediate concerns arise:

First, we have a testing concern. How can we assert that the command accomplished what it needed to do? The argument supplied to the Execute command typically originates from xaml binding syntax, which we won’t need, so it's not likely that the command will take any parameters. Moreover, the Execute method doesn't have a return value, so we won't have any insight into the outcome of the method.

Our second concern is a design problem. It's clear that our command will need to associate graph-data to a ViewModel representing our user-interface, but how much information should the Command have about the ViewModel and how will the two communicate?

This is one of the parts I love about Test Driven Development. There is no separation between testing concerns and design problems because every test you write is a design choice.

Reviewing our Coupling Options

We have several options to establish the relationship between our Command and our ViewModel:

  • Accessing the ViewModel through the WPF’s Application.Current;
  • Making our ViewModel a Singleton;
  • Create a globally available ServiceLocator that can locate the ViewModel for us;
  • Pass the ViewModel to the command through Constructor Injection
  • Have the Command and ViewModel communicate through an independent publisher/subscriber model

Here’s a further breakdown of those options…

Application.Current.MainWindow

The most readily available solution within WPF is to leverage the framework’s application model. You can access the top-level ViewModel with some awkward casting:

var viewModel = Application.Current.MainWindow.DataContext as MainViewModel;

I’m not a huge fan of this for many reasons:

  • Overhead: the test suddenly requires additional plumbing code to initialize the WPF Application with a MainWindow bound to the proper ViewModel. This isn’t difficult to do, but it adds unwarranted complexity.
  • Coupling: Any time we bind to a top-level property or object, we’re effectively limiting our ability to change that object. In this case, we’re assuming that the MainWindow will always be bound to this ViewModel; if this were to change, all downstream consumers would also need to change.
  • Shared State: By consuming a static property we are effectively making the state of ViewModel shared by all tests. This adds some additional complexity to the tests to ensure that the shared-state is properly reset to a neutral form. As a consequence, it’s impossible to run the tests in parallel.

ViewModel as Singleton / ServiceLocator

This approach is a slight improvement over accessing the DataContext of the current application’s MainWindow. It eliminates the concerns surrounding configuring the WPF Application, and we gain some type-safety as we shouldn’t have to do any casting to get our ViewModel.

Despite this, Singletons like the Application.Current variety are hidden dependencies that make it difficult to understand the scope and responsibilities of an object. I tend to avoid this approach for the same reasons listed above.

Constructor Injection

Rather than having the Command reach out to a static resource to obtain a reference to the ViewModel, we can use Constructor Injection to pass a reference into the Command so that it has all the resources it needs. (This is the approach we’ve been using thus far for our application, too.) This approach makes sense from an API perspective, as consumers of your code will be able to understand the Command’s dependencies when they try to instantiate it. The downside is that additional complexity is needed to construct the command. (Hint: we’ll see this in the next post.)

This approach also eliminates the shared-state and parallelism problems, but it still couples the Command to the ViewModel. This might not be a problem as long as the relationship between the two objects remains fixed; if the application were to adopt a multi-tabbed interface, for example, the relationship between the Command and ViewModel would need to change.

Message-based Communication

The best form of loose-coupling comes from message-based communication, where the ViewModel and the Command know absolutely nothing about each other and only communicate indirectly through an intermediary broker. In this implementation, the Command would orchestrate the construction of the Graph and then publish a message through the broker. The broker, in turn, would deliver the message to a receiver that would associate the graph to the ViewModel.

Such an implementation would allow both the ViewModel and the Command implementations to change independently as long as they both adhere to the message publish/subscription contracts. I prefer this approach for large-scale applications, though it introduces a level of indirection that can be frustrating at times.

This approach would likely work well if we needed to support a multi-tabbed user interface.
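To make the idea concrete, here is a minimal sketch of such a broker (the type and member names are hypothetical, not code from the session):

```csharp
using System;
using System.Collections.Generic;

// A tiny publish/subscribe broker: the Command publishes a message
// when the graph is built, and a subscriber owned by the ViewModel
// applies it. Neither side references the other directly.
public class MessageBroker
{
    private readonly Dictionary<Type, List<Action<object>>> _subscribers =
        new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<TMessage>(Action<TMessage> handler)
    {
        List<Action<object>> handlers;
        if (!_subscribers.TryGetValue(typeof(TMessage), out handlers))
        {
            handlers = new List<Action<object>>();
            _subscribers[typeof(TMessage)] = handlers;
        }
        handlers.Add(m => handler((TMessage)m));
    }

    public void Publish<TMessage>(TMessage message)
    {
        List<Action<object>> handlers;
        if (_subscribers.TryGetValue(typeof(TMessage), out handlers))
        {
            foreach (var handler in handlers)
                handler(message);
        }
    }
}
```

The Command would publish a “graph built” message through the broker, and the ViewModel’s subscription would assign the graph to its property.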

Back to the Tests

Given our time constraints and our immediate needs, the team decided constructor injection was the way to go for our test.

[TestClass]
public class NewProjectCommandTests
{
    [TestMethod]
    public void WhenExecutingTheCommand_DefaultScenario_ShouldShowGraphInUI()
    {
        var viewModel = new MainViewModel();
        var command = new NewProjectCommand(viewModel);
        
        command.Execute( null );

        Assert.IsNotNull( viewModel.Graph,
            "Graph was not displayed to the user.");
    }
}

To complete the test the team made the following realizations:

To complete the test the team made the following realizations:

Realization: the test fails because Execute throws a NotImplementedException. Let’s commit a sin and get the test to pass.

Implementation:

public void Execute(object unused)
{
    // hack to get the test to pass
    _viewModel.Graph = new AssemblyGraph();
}

Realization: our command will need a ProjectLoader. Following guidance from the previous day, we extracted an interface and quickly added it to the constructor.

Test Code:

var viewModel = new MainViewModel();
var projectLoader = new Mock<IProjectLoader>().Object;

var command = new NewProjectCommand(
                    viewModel,
                    projectLoader);

Implementation:

///<summary>Default Constructor</summary>
public NewProjectCommand(
        MainViewModel viewModel,
        IProjectLoader projectLoader)
{
    _viewModel = viewModel;
    _projectLoader = projectLoader;
}

Realization: we should only construct the graph if we get data from the loader. (Fortunately, Moq automatically returns a non-null, empty enumerable for IEnumerable return types, so our test passes accidentally.)

Implementation:

public void Execute(object unused)
{
    IEnumerable<ProjectAssembly> model
        = _projectLoader.Load();

    if (model != null)
    {
        _viewModel.Graph = new AssemblyGraph();
    }
}

Realization: we’ll need a GraphBuilder to convert our model data into a graph for our ViewModel. Following some obvious implementation, we add the GraphBuilder to the constructor. A few minor changes and our test passes.

Test Code:

var viewModel = new MainViewModel();
var projectLoader = new Mock<IProjectLoader>().Object;
var graphBuilder = new GraphBuilder();

var command = new NewProjectCommand(
                    viewModel,
                    projectLoader,
                    graphBuilder);

Implementation:

public void Execute(object unused)
{
    IEnumerable<ProjectAssembly> model
        = _projectLoader.Load();

    if (model != null)
    {
        _viewModel.Graph =
            _graphBuilder.BuildGraph( model );
    }
}

Realization: building upon our findings from the previous days, we recognize our coupling to the GraphBuilder, and that we should write tests demonstrating that the NewProjectCommand can handle failures from both the IProjectLoader and GraphBuilder dependencies. Rather than extract an interface for the GraphBuilder, we decide to simply make the BuildGraph method virtual instead – just to show we can. Only now, when we run the test, it fails. It seems our graph was not created?

Test Code:

var viewModel = new MainViewModel();
var projectLoader = new Mock<IProjectLoader>().Object;
var graphBuilder = new Mock<GraphBuilder>().Object;

var command = new NewProjectCommand(
                    viewModel,
                    projectLoader,
                    graphBuilder);

// ...

Finally, in order to get our test to pass, we need to configure the mock for our GraphBuilder to construct a Graph. The final test looks like this. Note that IsAnyModel is a handy shortcut for simplifying Moq’s matcher syntax.

[TestClass]
public class NewProjectCommandTests
{
    [TestMethod]
    public void WhenExecutingTheCommand_DefaultScenario_ShouldShowGraphInUI()
    {
        // ARRANGE: setup dependencies
        var viewModel = new MainViewModel();
        var projectLoader = new Mock<IProjectLoader>().Object;
        var graphBuilder = new Mock<GraphBuilder>().Object;

        // Initialize subject under test
        var command = new NewProjectCommand(
                            viewModel,
                            projectLoader,
                            graphBuilder);
        
        Mock.Get(graphBuilder)
            .Setup( g => g.BuildGraph( IsAnyModel() ))
            .Returns( new AssemblyGraph() );

        // ACT: Execute our Command
        command.Execute( null );

        // ASSERT: Verify that the command executed correctly
        Assert.IsNotNull( viewModel.Graph,
            "Graph was not displayed to the user.");
    }

    private IEnumerable<ProjectAssembly> IsAnyModel()
    {
        return It.IsAny<IEnumerable<ProjectAssembly>>();
    }
}

Of course, we’d need a few additional tests:

  • When the project loader or graph builder throw an exception.
  • When the project loader doesn’t load a project, the graph should not be changed
  • When the Command is created incorrectly, such as null arguments
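The null-argument case might look something like this (a sketch following the argument-checking convention mentioned on Day Six; the guard clauses themselves aren’t shown in this series):

```csharp
[TestMethod]
[ExpectedException(typeof(ArgumentNullException))]
public void WhenConstructingTheCommand_WithANullViewModel_ShouldThrow()
{
    var projectLoader = new Mock<IProjectLoader>().Object;
    var graphBuilder = new GraphBuilder();

    // the constructor's guard clause should throw immediately
    new NewProjectCommand(null, projectLoader, graphBuilder);
}
```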

Next: Day Six


Tuesday, September 13, 2011

Guided by Tests–Day Four

This post is fifth in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

As previously mentioned during Day Three, we split the group into two teams so that one focused on the process to load a new project while the other focused on constructing a graph. This post will focus on the efforts of the team working through the tests and implementation of the GraphBuilder.

This team had a unique advantage over the other team in that they had a blog post that outlined how to use the Graph# framework. I advised the team that they could refer to the post, even download the example if needed, but all code introduced into the project had to follow the rules of the experiment: all code must be written to satisfy the requirements of a test.

A Change in Approach

The goals for this team we’re different too. We already had our data well defined and had a reasonable expectation of what the results should be. As such, we took a different approach to writing the tests. Up to this point, our process has involved writing one test at a time and only the code needed to satisfy that test. We wouldn’t identify other tests that we’d need to write until we felt the current test was complete.

For this team, we had a small brain-storming session and defined all the possible scenarios we would need to test upfront.

I love this approach and tend to use it when working with my teams. I usually sit down with the developer and envision how the code would be used. From this discussion we stub out a series of failing tests (Assert.Fail) and after some high level guidance about what we need to build I leave them to implement the tests and code. The clear advantage to this approach is that I can step in for an over-the-shoulder code-review and can quickly get feedback on how things are going. When the developer says things are moving along I can simply challenge them to “prove it”. The developer is more than happy to show their progress with working tests, and the failing tests represent a great opportunity to determine if the developer has thought about how to finish them. Win/Win.

The test cases we identified for our graph builder:

  • When building a graph from an empty list, it should produce an empty graph
  • When building a graph from a single assembly, the graph should contain one vertex.
  • When building a graph with two independent assemblies, the graph should contain two vertices and there shouldn’t be any edges between them.
  • When building a graph with one assembly referencing another, the graph should contain two vertices and one edge
  • When building a graph where two assemblies have forward and backward relationships (the first item lists the second vertex as a dependency, the second item lists the first as a “referenced by”), the graph should contain unique edges between items.
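The first case in that list might be sketched as follows (VertexCount comes from the QuickGraph base classes that Graph#’s graph types extend; treat the member name as an assumption):

```csharp
[TestMethod]
public void WhenBuildingAGraph_FromAnEmptyList_ShouldProduceAnEmptyGraph()
{
    var subject = new GraphBuilder();
    var emptyList = new List<ProjectAssembly>();

    AssemblyGraph graph = subject.BuildGraph(emptyList);

    // even an empty input should yield a usable, empty graph
    Assert.IsNotNull(graph, "A graph should always be produced.");
    Assert.AreEqual(0, graph.VertexCount, "An empty list should produce no vertices.");
}
```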

By the time the team had begun to develop the third test, most of the dependent object model had been defined. The remaining tests represented implementation details. For example, to establish a relationship between assemblies we would need to store them into a lookup table. Whether this lookup table should reside within the GraphBuilder or pushed lower into the Graph itself is an optimization that can be determined later if needed. The tests would not need to change to support this refactoring effort.

Interesting Finds

The session on the fourth day involved a review of the implementation and an opportunity to refactor both the tests and the code. One of the great realizations was the ability to reduce the verbosity of initializing the test data.

We started with a lot of duplication and overhead in the tests:

[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = new ProjectAssembly
                        {
                            FullName = "Assembly1"
                        };
    var assembly2 = new ProjectAssembly
                        {
                            FullName = "Assembly2"
                        };

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

We moved some of the initialization logic into a helper method, which improved readability:

[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = CreateProjectAssembly("Assembly1");
    var assembly2 = CreateProjectAssembly("Assembly2");

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

private ProjectAssembly CreateProjectAssembly(string name)
{
    return new ProjectAssembly()
            {
                FullName = name
            };
}

However, once we discovered the names of the assemblies weren’t important and just had to be unique, we optimized this further:

[TestMethod]
public void AnExample_ItDoesntMatter_JustKeepReading()
{
    var assembly1 = CreateProjectAssembly();
    var assembly2 = CreateProjectAssembly();

    _projectList.Add(assembly1);
    _projectList.Add(assembly2);

    Graph graph = Subject.BuildGraph(_projectList);

    // Assertions...
}

private ProjectAssembly CreateProjectAssembly(string name = null)
{
    if (name == null)
        name = Guid.NewGuid().ToString();

    return new ProjectAssembly()
            {
                FullName = name
            };
}

If we really wanted to, we could optimize this further by pushing this initialization logic into the production code directly.

[TestMethod]
public void WhenConstructingAProjectAssembly_WithNoArguments_ShouldAutogenerateAFullName()
{
    var assembly = new ProjectAssembly();

    bool nameIsPresent = !String.IsNullOrEmpty(assembly.FullName);

    Assert.IsTrue( nameIsPresent,
        "Name was not automatically generated.");
}

Continue Reading: Day Five


Friday, September 09, 2011

Guided by Tests–Day Three

This post is fourth in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

By day three, our collective knowledge about what we were building was beginning to take shape. After reviewing the tests from the previous day, it was time for an interesting discussion: what’s next?

Determining Next Steps

In order to determine our next steps, the team turned to the logical flow diagram of our use case. We decomposed the logical flow into the following steps:

  1. Receive input from the UI, maybe a menu control or some other UI action.
  2. Prompt the user for a file, likely using a standard open file dialog.
  3. Take the response from the dialog and feed it into our stream parser.
  4. Take the output of the stream parser and build a graph.
  5. Take the graph and update the UI, likely a ViewModel.

Following a Separation of Concerns approach, we want to design our solution so that each part has very little knowledge of the surrounding parts. We decided that building the graph could be cleanly separated from prompting the user. In my view, we knew very little about the UI at this point, so we shouldn’t concern ourselves with how the UI initiates this activity. Instead, we can treat prompting the user for a file and orchestrating the interaction with our parser as a single concern.

It was time to split the group in two and start on different parts. Team one would focus on the code that would call the NDependStreamParser; Team two would focus on the code that consumed the list of ProjectAssembly items to produce a graph.

Note: Day Four was spent reviewing and finishing the code for team two. For the purposes of this post, I’m going to focus on the efforts of Team one.

The Next Test

The team decided that we should name this concern, “NewProjectLoader” as it would orchestrate the loading of our model. We knew that we’d be prompting for a file, so we named the test accordingly:

[TestClass]
public class NewProjectLoaderTests
{
    [TestMethod]
    public void WhenLoadingANewProject_WithAValidFile_ShouldLoadModel()
    {
        Assert.Fail();
    }
}

Within a few minutes, the team filled in the immediately visible details of the test.

Realization Code Written
Following the first example from the day before, the team filled in their assertions and auto-generated the parts they needed.

To make the tests pass, they hard-coded a response.
Test Code:
var loader = new NewProjectLoader();

IEnumerable<ProjectAssembly> model 
	= loader.Load();

Assert.IsNotNull(model,
    "Model was not loaded.");
Implementation:
public IEnumerable<ProjectAssembly> Load()
{
    return new List<ProjectAssembly>();
}
How should we prompt the user for a file? Hmmm.
 

Our Next Constraint

The team now needed to prompt the user to select a file. Fortunately, WPF provides the OpenFileDialog so we won’t have to roll our own dialog. Unfortunately, if we introduce it into our code we’ll be tightly coupled to the user-interface.

To isolate ourselves from this dependency, we need to introduce a small interface:

namespace DependencyViewer
{
    public interface IFileDialog
    {
        string SelectFile();
    }
}
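As an aside, the production side of this interface is just a thin pass-through to WPF’s Microsoft.Win32.OpenFileDialog. The sketch below is an assumption of what that wrapper might look like (the class name is a placeholder; building it out with tests is deferred until later):

```csharp
using Microsoft.Win32;

namespace DependencyViewer
{
    // Thin wrapper around the framework dialog; it holds no logic
    // of its own, which is what lets us keep it out of the unit tests.
    public class OpenFileDialogWrapper : IFileDialog
    {
        public string SelectFile()
        {
            var dialog = new OpenFileDialog();

            // ShowDialog returns a bool?; true means the user picked a file.
            return dialog.ShowDialog() == true
                ? dialog.FileName
                : null;
        }
    }
}
```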

Through tests, we introduced these changes:

Realization Code Written
We need to introduce our File Dialog to our Loader.

We decide our best option is to introduce the dependency through the constructor of the loader.

This creates a small compilation error that is quickly resolved.

Our test still passes.

Test Code:
IFileDialog dialog = null;
var loader = new NewProjectLoader(dialog);
IEnumerable<ProjectAssembly> model 
	= loader.Load();

Assert.IsNotNull(model,
    "Model was not loaded.");
Implementation:
public class NewProjectLoader
{
    private IFileDialog _dialog;

    public NewProjectLoader(
	IFileDialog dialog)
    {
        _dialog = dialog;
    }
    // ...
Now, how should we prompt for a file?

The code should delegate to our IFileDialog and we can assume that if they select a file, the return value will not be null.

The test compiles, but the test fails because the dialog is null.
Implementation:
public IEnumerable<ProjectAssembly> Load()
{
    string fileName = _dialog.SelectFile();
    if (!String.IsNullOrEmpty(fileName))
    {
        return new List<ProjectAssembly>();
    }

    throw new NotImplementedException();
}
We don’t have an implementation for IFileDialog. So we’ll define a dummy implementation and use Visual Studio to auto-generate the defaults.

Our test fails because the auto-generated code throws an error (NotImplementedException).
Test Code:
IFileDialog dialog = new MockFileDialog();
var loader = new NewProjectLoader(dialog);
IEnumerable<ProjectAssembly> model 
	= loader.Load();

Assert.IsNotNull(model,
    "Model was not loaded.");
We can easily fix the test by replacing the exception with a non-null file name. Implementation:
public class MockFileDialog 
    : IFileDialog
{
    public string SelectFile()
    {
        return "Foo";
    }
}
The test passes, but we’re not done. We need to construct a valid model.

We use a technique known as “Obvious Implementation” and we introduce our NDependStreamParser directly into our Loader.

The test breaks again, this time because “Foo” is not a valid filename.
Implementation:
string fileName = _dialog.SelectFile();
if (!String.IsNullOrEmpty(fileName))
{
    using(var stream = XmlReader.Create(fileName))
    {
        var parser = new NDependStreamParser();
        return parser.Parse(stream);
    }
}
//...
Because our solution is tied to a FileStream, we need to specify a proper file name. To do this, we need to modify our MockFileDialog so that we can assign a FileName from within the test.

In order to get a valid file, we need to include a file as part of the project and then enable deployment in the MSTest test settings.

(Note: We could have changed the signature of the loader to take a filename, but we chose to keep the dependency to the file here mainly for time concerns.)
Implementation:

public class MockFileDialog
    : IFileDialog
{
    public string FileName;

    public string SelectFile()
    {
        return FileName;
    }
}

Test Code:
[DeploymentItem("AssembliesDependencies.xml")]
[TestMethod]
public void WhenLoadingANewProject...()
{
    var dialog = new MockFileDialog();
    dialog.FileName = "AssembliesDependencies.xml";
    var loader = new NewProjectLoader(dialog);

    //...

Isolating Further

While our test passes and it represents the functionality we want, we’ve introduced a design problem: we’re coupled to the implementation details of the NDependStreamParser. Some may make the case that this is the nature of our application: we only need this one parser, and if the parser is broken, so is our loader. I don’t necessarily agree.

The problem with this type of coupling is that when the parser breaks, the unit tests for the loader will also break. If we allow this type of coupling here, it’s logical to conclude that other classes will be tightly coupled too, so a break in the parser will cascade through the majority of our tests. This defeats the purpose of our early feedback mechanism. Besides, why design our classes as black boxes that will have to change if we introduce different types of parsers?

The solution is to introduce an interface for our parser. Resharper makes this really easy, simply click our class and choose “Extract Interface”.

public interface IGraphDataParser
{
    IEnumerable<ProjectAssembly> Parse(XmlReader reader);
}

Adding a Mocking Framework

While we created a hand-rolled mock (aka a test double) for our IFileDialog, it’s time to introduce a mocking framework that can create mock objects in memory. Using NuGet to simplify our assembly management, we add a reference to Moq to our test project.

Refactoring Steps

We made the following small refactoring changes to decouple ourselves from the NDependStreamParser.

Realization Code Written
Stream Parser should be a field.
 

Implementation:
// NewProjectLoader.cs

IFileDialog _dialog;
NDependStreamParser _parser;

public IEnumerable<ProjectAssembly> Load()
{
    string fileName = _dialog.SelectFile();
    if (!String.IsNullOrEmpty(fileName))
    {
        using (var stream = 
		XmlReader.Create(fileName))
        {
            _parser = new NDependStreamParser();
            return _parser.Parse(stream);
        }
    }

    throw new NotImplementedException();
}
We need to use the interface rather than the concrete type. Implementation:
public class NewProjectLoader
{
    IFileDialog _dialog;
    IGraphDataParser _parser;

    // ...
We should initialize the parser in the constructor instead of the Load method. Implementation:
public class NewProjectLoader
{
    IFileDialog _dialog;
    IGraphDataParser _parser;

    public NewProjectLoader(IFileDialog dialog)
    {
        _dialog = dialog;
        _parser = new NDependStreamParser();
    }

    // ...
We should initialize the parser from outside the constructor.

This introduces a minor compilation problem that requires us to change the test slightly.
Test Code:
var dialog = new MockFileDialog();
var parser = new NDependStreamParser();

var loader = new NewProjectLoader(dialog, parser);

Implementation:
public class NewProjectLoader
{
    IFileDialog _dialog;
    IGraphDataParser _parser;

    public NewProjectLoader(
            IFileDialog dialog,
            IGraphDataParser parser)
    {
        _dialog = dialog;
        _parser = parser;
    }

    // ...
We need to replace our NDependStreamParser with a mock implementation.

Test Code:

var dialog = new MockFileDialog();
var parser = new Mock<IGraphDataParser>().Object;

var loader = new NewProjectLoader(dialog, parser);

Strangely enough, there’s a little-known feature of Moq that ensures mocks returning IEnumerable collections never return null, so our test passes!
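A minimal sketch of the behaviour we relied on, assuming Moq’s defaults for loose mocks (DefaultValue.Empty), which substitute empty collections rather than null for enumerable return types:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml;
using Moq;

// With no Setup(...) at all, a loose mock supplies default return
// values; for IEnumerable<T> that default is an empty sequence.
var parserMock = new Mock<IGraphDataParser>();

IEnumerable<ProjectAssembly> results =
    parserMock.Object.Parse(XmlReader.Create(new StringReader("<x/>")));

// results is empty, but never null, so our IsNotNull assertion passes.
```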

Additional Tests

We wrote the following additional tests:

  • WhenLoadingANewProject_WithNoFileSpecified_ShouldNotReturnAModel

Next: Day Four


Thursday, September 08, 2011

Guided by Tests–Day Two

This post is third in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

Today we break new ground on our application, starting with writing our first test. Today is still a teaching session, where I’ll write the first set of tests to demonstrate naming conventions and how to apply TDD using the rules we defined the day before. But first, we need to figure out where we should start.

Logical Flow

In order to determine where we should start it helps to draw out the logical flow of our primary use case: create a new dependency viewer from an NDepend AssembliesDependencies.xml file.  The logical flow looks something like this:

[Logical flow diagram]

  1. User clicks “New”
  2. The user is prompted to select a file
  3. Some logical processing occurs where the file is read,
  4. …a graph is produced,
  5. …and the UI is updated.

The question of where to start is an interesting one. Given limited knowledge of what we need to build or how these components will interact, what area of the logical flow do we know the most about? For which part can we reliably predict the outcome?

Starting from scratch, it seemed the most reasonable choice was to start with the part that reads our NDepend file. We know the structure of the file and we know that the contents of the file will represent our model.

Testing Constraints

When developing with a focus on testability, there are certain common problems that arise when trying to get a class under the test microscope. You learn to recognize them instantly, and I’ve jokingly referred to this as spidey-sense – you just know these are going to be problematic before you start.

While this is not a definitive list, the obvious ones are:

  • User Interface: Areas that involve the user-interface can be problematic for several reasons:
    • Some test-runners have a technical limitation and cannot launch a user-interface based on the threading model.
    • The UI may require complex configuration or additional prerequisites (style libraries, etc) and is subject to change frequently
    • The UI may unintentionally require human interaction during the tests, thereby limiting our ability to reliably automate.
  • File System: Any time we need files or folder structure, we are dependent on the environment being set up a certain way with dummy data.
  • Database / Network: Being dependent on external services is additional overhead that we want to avoid. Not only will tests run considerably slower, but the outcome of the test is dependent on many factors that may not be under our control (service availability, database schema, user permissions, existing data).

Some of the less obvious ones are design considerations which may make it difficult to test, such as tight coupling to implementation details of other classes (static methods, use of “new”, etc).

In our case, our first test would be dependent on the file system. We will likely need to test several different scenarios, which will require many different files.  While we could go this route, working with the file system directly would only slow us down. We needed to find a way to isolate ourselves.

The team tossed around several suggestions, including passing the raw XML as a string. Ultimately, since this class must read the contents of a file, we decided that the best way to work with the XML was an XmlReader. We could simulate many different scenarios by setting up a stream containing our test data.

Our First Test

So after deciding that our class would be named NDependStreamParser, our first test looked something like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace DependencyViewer
{
    [TestClass]
    public class NDependStreamParserTests
    {
        [TestMethod]
        public void TestMethod1()
        {
            Assert.Fail();
        }
    }
}

We know very little about what we need. But at the very least, the golden rule is to ensure that all tests must fail from the very beginning. Writing “Assert.Fail();” is a good habit to establish.

In order to help identify what we need, it helps to work backward. So we start by writing our assertions first and then, working from the bottom up, fill in the missing pieces.  Our discovery followed this progression:

Realization Code Written
At the end of the tests, we’ll have some sort of results. The results should not be null.

At this point, test compiles, but it’s red.
Test Code:
object results = null;
Assert.IsNotNull( results,
           "Results were not produced.");
Where will the results come from? We’ll need a parser. The results will come after we call Parse.

The code won’t compile because this doesn’t exist. If we use the auto-generate features of Visual Studio / Resharper the test compiles, but because of the default NotImplementedException, the test fails.
Test Code:
var parser = new NDependStreamParser(); 
object results = parser.Parse();
Assert.IsNotNull( results,
           "Results were not produced.");
We need to make the test pass.

Do whatever we need to make it green.
Implementation:

public object Parse()
{
    // yes, it's a dirty hack. 
    // but now the test passes.
    return new object();
}
Our tests passes, but we’re clearly not done. How will we parse? The data needs to come from somewhere. We need to read from a stream.

Introducing the stream argument into the Parse method won’t compile (so the tests are red), but this is a quick fix in the implementation.
Test Code:

var parser = new NDependStreamParser();
var sr = new StringReader("");
var reader = XmlReader.Create(sr);
object results = parser.Parse(reader);
// ...

Implementation:

public object Parse(XmlReader reader) { //...
Our return type shouldn’t be “Object”. What should it be?

After a short review of the NDepend AssembliesDependencies.xml file, we decide that we should read the list of assemblies from the file into a model object which we arbitrarily decide should be called ProjectAssembly. At a minimum, Parse should return an IEnumerable<ProjectAssembly>.

There are a few minor compilation problems to address here, including the auto-generation of the ProjectAssembly class. These are all simple changes that can be made in under 60 seconds.
Test Code:

var parser = new NDependStreamParser();
var sr = new StringReader("");
var reader = XmlReader.Create(sr);
IEnumerable<ProjectAssembly> results 
    = parser.Parse(reader);
// ...

Implementation:

public IEnumerable<ProjectAssembly> Parse(
	XmlReader reader)
{
    return new List<ProjectAssembly>();
}

At this point, we’re much more informed about how we’re going to read the contents from the file. We’re also ready to make some design decisions and rename our test accordingly to reflect what we’ve learned. We decide that (for simplicity’s sake) the parser should always return a list of items even if the file is empty. While the implementation may be crude, the test is complete for this scenario, so we rename our test to match this decision and add an additional assertion to improve the intent of the test.

Sidenote: The naming convention for these tests is based on Roy Osherove’s naming convention, which has three parts:

  • Feature being tested
  • Scenario
  • Expected Behaviour
[TestMethod]
public void WhenParsingAStream_WithNoData_ShouldProduceEmptyContent()
{
    var parser = new NDependStreamParser();
    var sr = new StringReader("");
    var reader = XmlReader.Create(sr);

    IEnumerable<ProjectAssembly> results =
        parser.Parse(reader);

    Assert.IsNotNull( results,
        "The results were not produced." );

    Assert.AreEqual( 0, results.Count(),
        "The results should be empty." );
}

Adding Tests

We’re now ready to start adding additional tests. Based on what we know now, we can start each test with a proper name and then fill in the details.

With each test, we learn a little bit more about our model and the expected behaviour of the parser. The NDepend file contains a list of assemblies, where each assembly contains a list of assemblies that it references and a list of assemblies that it depends on. The subsequent tests we wrote:

  • WhenParsingAStream_ThatContainsAssemblies_ShouldProduceContent
  • WhenParsingAStream_ThatContainsAssembliesWithReferences_EnsureReferenceInformationIsAvailable
  • WhenParsingAStream_ThatContainsAssembliesWithDependencies_EnsureDependencyInformationIsAvailable

It’s important to note that these tests aren’t just building the implementation details of the parser; we’re building our model object as well. Properties are added to the model as needed.

Refactoring

Under the TDD mantra “Red, Green, Refactor”, the “Refactor” step implies that you refactor the implementation after you’ve made the tests pass. However, the scope of the refactoring should cover both the tests and the implementation.

Within the implementation, you should be able to optimize the code freely, provided you aren’t adding additional functionality. (My original implementation using the XmlReader was embarrassing, and I ended up experimenting with the reader syntax later that night until I found a clean, elegant solution. The tests were invaluable for discovering what was possible.)
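To give a flavour of that reader syntax (note: the element and attribute names below are illustrative assumptions, not the actual NDepend schema), the cleaned-up loop might look something like this:

```csharp
public IEnumerable<ProjectAssembly> Parse(XmlReader reader)
{
    var results = new List<ProjectAssembly>();

    while (reader.Read())
    {
        // "Assembly" and "Name" are placeholders for the real schema.
        if (reader.NodeType == XmlNodeType.Element
            && reader.Name == "Assembly")
        {
            results.Add(new ProjectAssembly
                        {
                            FullName = reader.GetAttribute("Name")
                        });
        }
    }

    return results;
}
```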

Within the tests, refactoring means removing as much duplication as possible without obscuring the intent of the test. By the time we started the third test, the string-concatenation to assemble our xml and plumbing code to create our XmlReader was copied and pasted several times. This plumbing logic slowly evolved into a utility class that used an XmlWriter to construct our test data.
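A rough sketch of what such a utility might look like (the class and element names here are my own placeholders, not the actual helper from the session):

```csharp
using System.IO;
using System.Text;
using System.Xml;

public static class TestDataBuilder
{
    // Builds an in-memory XmlReader containing one <Assembly Name="..."/>
    // element per supplied name, replacing the copy-pasted string
    // concatenation that was duplicated across the tests.
    public static XmlReader CreateAssemblyXml(params string[] assemblyNames)
    {
        var sb = new StringBuilder();
        using (var writer = XmlWriter.Create(sb))
        {
            writer.WriteStartElement("Assemblies");
            foreach (string name in assemblyNames)
            {
                writer.WriteStartElement("Assembly");
                writer.WriteAttributeString("Name", name);
                writer.WriteEndElement();
            }
            writer.WriteEndElement();
        }
        return XmlReader.Create(new StringReader(sb.ToString()));
    }
}
```

Each test can then build its scenario in one line, e.g. `TestDataBuilder.CreateAssemblyXml("Assembly1", "Assembly2")`, without obscuring the intent of the test.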

Next: Day Three


Wednesday, September 07, 2011

Guided by Tests–Day One

This post is second in a series about a group TDD experiment to build an application in 5 days using only tests.  Read the beginning here.

Today is the pitch. We'll talk about what we're going to build and how we're going to do it, but first we need to understand the motivation behind our methodology. We need to understand why we test.

Methodology

Kent Beck's TDD by Example was one of the first books I read about TDD and to this day it's still one of my favourites. I know many prefer Roy Osherove's Art of Unit Testing (which I also highly recommend) because it's newer and has .net examples, but in comparison they are very different books.  Roy's book represents the evolved practice of Unit Testing and it provides a solid path to understand testing and the modern testing frameworks that have taken shape since Kent's book was written in 2002. Kent's book is raw, the primordial ooze that walked upon land and declared itself sentient, it defines the methodology as an offshoot of extreme programming and is the basis for the frameworks Roy explains how to use. If you're looking for a book on how to start and the mechanics of TDD, this is it.

As an offshoot of extreme programming, the core philosophy of TDD is:

  • Demonstrate working software at all times
  • Improve confidence while making changes
  • Build only the software you need

My experiment is based on the strategies defined in Kent's book – the premise is to demonstrate working software and get early feedback – but in a nutshell, you make incredibly small incremental changes and compile and run tests a lot (30-50 times per hour). I sometimes refer to this rapid code/test cycle as the Kent Beck Method or Proper TDD. I don't necessarily follow this technique when I code, but it's a great way to start until you learn how to find your balance. Eventually, running tests becomes instinct or muscle memory – you just know when you should.

The Rules

For our experiment, we followed these rules:

1. Tests must be written first.

This means that before we do anything, we start by writing a test. If we need to create classes in order to get the test to compile, that is a secondary concern: create the test first, even if it doesn’t compile, then introduce what you need.

2. Code only gets written to satisfy a test.

This means that we write only the code that is needed to make a test pass. If while writing the code you're tempted to add additional functionality that you might need, don't succumb to writing that code now. Instead, focus on the current test and make a note for additional tests. This ensures that we only write the code that is needed.

3. Code must always compile (at least within 30 seconds of making a change)

This may seem like a difficult practice to adopt, but it is extremely valuable. To follow this rule, make small changes that can be corrected easily and compile frequently to provide early feedback. If you find yourself breaking a contract that takes twenty minutes to fix, you're doing it wrong.

4. Time between red/green must be very short (<1 minute, meaning rollback if you're not sure.)

This is also a very difficult rule to keep, but it forces you to recognize the consequences of your changes. You should know before you run the test whether it's going to pass or fail. Your brain will explode when you're wrong (and that’s a good thing).

The Problem

As mentioned in the first post, my current project uses NDepend to perform static analysis as part of our build process. The build generates a static image. As our solution analyzes over 80 assemblies, the text becomes illegible and the image is difficult to read (image text has been blurred to hide assembly names).

Geez!

The two primary use cases I wanted the application to have:

  • Allow the user to select an NDepend AssembliesDependencies.xml file and display it as a graph
  • Once the graph has been rendered, provide an option to choose which assemblies should be displayed

The graph would be implemented using Graph#, and we could borrow much of the presentation details from Sacha Barber’s post. Using Graph# would provide a rich user-experience, all we’d need to do is implement the surrounding project framework.  Piece of cake!

I asked the team if there were any additional considerations that they would like to see. We would take some of these considerations into account when designing the solution. Some suggestions were made:

  • Multi-tab support so that more than one graph can be open at a time
  • Ability to color code the nodes
  • Ability to save a project (and load it presumably)

Getting Started

With the few remaining minutes in the lunch hour, we set up the project:

  1. Create a new WPF Project.  Solution Name: DependencyViewer, Project Name: DependencyViewer.
  2. Create a new Test Project, DependencyViewer.Tests
  3. Added a reference from the Test Project to the WPF Project.
  4. Deleted all automatically generated files:
    1. App.xaml
    2. MainWindow.xaml
    3. TestClass1.
  5. Renamed the default namespace of the Test Project to match the namespace of the WPF Application. For more info, read this detailed post which explains why you should.

While I have some preferences on third-party tools, etc – we’ll get to those as they’re needed.

Next: Day Two


Tuesday, September 06, 2011

Guided By Tests

Last week I started a TDD experiment at work. I gathered some interest from management and then sought out participants with a very simple concept: build an application in 5 days, over lunch hour, using nothing but unit tests to drive development.

My next few posts will be part of a series that shares the details of that experiment.

Motivation

Over the years, I've had many conversations with colleagues about TDD. One of the most common comments made is that they don't know what to test or where to start. Another common theme is the desire to get into a project at the very beginning where all the project members have bought into the concept. I've always found this comment to be strange. It's a great comment and it makes perfect sense but -- why the beginning? Why not the middle or end of the project after the tests are put into place? I sense they're expressing two things. First they're expressing the obvious, that life for the development team would be much different if they had tests from the beginning. And secondly, they're interested in understanding how the team managed to establish and sustain the process.

The goal of my experiment was to show how to start a project with TDD and to walk through the process of deconstructing requirements into testable components. The process would use tests as the delivery mechanism but would also showcase how to design software for testability using separation of concerns and SOLID OO design principles. Requirements and documentation would be deliberately vague, and the application would need to showcase common real-world problems. The tricky part would be finding an application to build.

Fortunately, I was recently inspired by a post by Sacha Barber that provides a great example of how to use the open source graphing framework Graph# to build a graph application in WPF. I immediately saw how I could use this. My current project uses NDepend for static analysis and as part of the build process it produces an image of our dependency graph. With over 80 nodes and illegible text, the static image is mostly useless. However, the build process spits out the details for the graph as an XML file. With some simple xml parsing, I could use Sacha's example to build my own graph.

So rather than spending my weekend working on this application, why not turn this into a teaching exercise? (Ironically, I'm spending my weekends blogging about it)

Follow up posts:


Friday, August 12, 2011

On Dependency Injection and Violating Encapsulation Concerns

For me, life is greatly simplified when dependencies are inverted and constructor injection is used to provide a clean mechanism to introduce dependencies to a class. It's explicit and easy to test.

Others however argue that some dependencies are private implementation details to the class and external callers shouldn't know about them. The argument is that exposing these dependencies through the constructor violates encapsulation and introduces coupling. This argument is usually coupled with resistance to using interfaces or classes with virtual methods.

From my experience, there are times when composition in the constructor makes sense but there's a fine line when constructor injection should be used. I want to use this post to elaborate on these arguments and provide my perspective on this debate.

Does Constructor Injection violate Encapsulation?

Does exposing the internal dependencies of a class in the constructor expose the implementation details of that class? If you are allowing callers to construct the class directly, then you are most certainly breaking encapsulation, as the callers must possess the knowledge of how to construct your class. However, regardless of the constructor arguments, if callers know how to construct your class you are coupling them to a concrete implementation and will undoubtedly create a test impediment elsewhere. There are some simple fixes for this (including more constructor injection or another inversion of control technique) which I'll outline later.

So if constructor injection violates encapsulation, when is it safe to use composition in the constructor?

It really depends on the complexity and relationship of the subject to its dependencies. If the internal dependencies are simple and contain no external dependencies there is likely no harm in using composition to instantiate them in the constructor (the same argument can be made for small static utility methods). For instance, a utility class that performs some translation or other processing activity with limited outcomes may work as a private implementation detail. Problems arise as soon as these dependencies take on further dependencies or produce output that influences the conditional logic of the subject under test. When this happens the argument to keep these as internally controlled dependencies becomes flawed and it may be necessary to upgrade the "private implementation detail" to an inverted dependency.

Before:

This example shows a class with a private implementation detail that influences the logic of the subject under test. The consequence of this design is that the internal details of the validator leak into the test specifications. This leads to very brittle tests, as tests must concern themselves with object construction and validation rules -- any change to the object or validation logic will break the tests.

public class MyModelTranslator
{
    private MyModelValidator _validator;

    public MyModelTranslator()
    {
       _validator = new MyModelValidator();
    }

    public MyViewModel CreateResult(MyModelObject model)
    {
        var result = new MyViewModel();

        if (_validator.Validate( model ))
        {
             result.Id = model.Id;
        }
        else
        {
            result.State = MyResultState.Invalid;
        }
        return result;
    }
}

[Test]
public void WhenCreatingAResult_FromAValidModelObject_ShouldHaveSameId()
{
    var subject = new MyModelTranslator();

    // create a model object with a deep understanding of how Ids
    // and expiry dates are related
    var model = new MyModelObject()
    {
        Id = "S1402011",
        Expiry = DateTime.Parse("12/31/2011")
    };
    var result = subject.CreateResult( model );
    Assert.AreEqual("S1402011", result.Id);
}

After:

This example shows the above example corrected to use a constructor injected dependency that can be mocked. In doing so, the test is not encumbered with object creation or validation details and can easily simulate validate states without introducing brittleness.

public class MyModelTranslator
{
    private MyModelValidator _validator;

    public MyModelTranslator(MyModelValidator validator)
    {
       _validator = validator;
    }

    // ...
}

[Test]
public void WhenCreatingAResult_FromAValidModelObject_ShouldHaveSameId()
{
    var validatorMock = new Mock<MyModelValidator>();

    var subject = new MyModelTranslator(validatorMock.Object);
    var model = new MyModelObject()
    {
      Id = "Dummy"
    };

    // we no longer care how validation is done, 
    // but we expect validation to occur and can easily control
    // and test outcomes
    validatorMock.Setup( x => x.Validate( model )).Returns( true );

    var result = subject.CreateResult( model );
    Assert.AreEqual("Dummy", result.Id);
}

Combatting Coupling with Inversion of Control

As the above points out, if we expose the dependencies of a class in the constructor, we move the coupling problems out of the class and into its callers. This is not ideal, but it can easily be resolved.

Here's an example showing that our controller class now knows about the MyModelValidator:

public class MyController
{
    private readonly MyModelTranslator _translator;

    public MyController()
    {
        _translator = new MyModelTranslator( new MyModelValidator() );
    }
}

More Constructor Injection

Ironically, the solution to these coupling and construction problems is more constructor injection. This creates a Russian-doll effect where construction logic is deferred and pushed higher and higher up the stack, resulting in a top-level component that is responsible for constructing the entire object graph. While this provides an intuitive API that clearly outlines dependencies, it's tedious to instantiate by hand. This is where dependency injection frameworks like Unity, StructureMap and others come in: they can do the heavy lifting for you. If done right, your application should have only a few well-defined points where the dependency container is used.

public class MyController : IController
{
    private readonly MyModelTranslator _translator;

    public MyController(MyModelTranslator translator)
    {
        _translator = translator;
    }
}

Note that this assumes the constructor arguments are abstractions -- interfaces, abstract classes or classes with virtual methods. In effect, by exposing the constructor arguments we trade the tightly sealed, encapsulated black box for a model that is open for extensibility.
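To make the wiring concrete, here's a minimal sketch of a composition root (the class and method names are hypothetical) that assembles the graph by hand at the application's entry point; a container would do the same job declaratively:

```csharp
// Hypothetical composition root: the single place in the application
// where the object graph is assembled.
public static class CompositionRoot
{
    public static IController CreateController()
    {
        // wired by hand...
        return new MyController(
            new MyModelTranslator(
                new MyModelValidator()));

        // ...or, equivalently, with a container:
        // var container = new UnityContainer();
        // container.RegisterType<IController, MyController>();
        // return container.Resolve<IController>();
    }
}
```

Either way, the rest of the application never calls "new" on these types directly; it only asks the composition root for a fully assembled controller.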

Factories / Builders

In some cases, using Constructor Injection everywhere might not be a good fit. For example, if you had a very large and complex object graph, creating everything upfront using constructor injection might represent a performance problem. Likewise, not all parts of the object graph will be used immediately and you may need to defer construction until needed.  In these cases, the good ol' Gang of Four Factory pattern is a handy mechanism to lazy load components.

So rather than construct the entire object graph upfront and pass resolved dependencies as constructor arguments, pass a factory instead and create the lower-level sub-dependencies when needed.

public class MyControllerFactory : IControllerFactory
{
    private IUnityContainer _container;

    public MyControllerFactory(IUnityContainer container)
    {
        _container = container;
    }

    public IController Create()
    {
        return _container.Resolve<MyController>();
    }
}
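A hypothetical consumer might hold on to the factory and only resolve the controller when it's first needed, deferring construction of that part of the graph:

```csharp
// Hypothetical consumer: the controller (and its sub-graph) is not
// constructed until the first time it is actually accessed.
public class MainWindow
{
    private readonly IControllerFactory _factory;
    private IController _controller;

    public MainWindow(IControllerFactory factory)
    {
        _factory = factory;
    }

    public IController Controller
    {
        // lazily resolve on first access, then reuse the instance
        get { return _controller ?? (_controller = _factory.Create()); }
    }
}
```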

Service Location

While Service Location is an effective mechanism for introducing inversion of control, I'm listing it last for a reason. Unlike Constructor Injection, Service Location couples all your code to an intermediate provider. Depending on your application and relative complexity this might be acceptable, but from personal experience it's a very slippery slope -- actually, it would be more appropriate to call it a cliff because removing or replacing the container from an existing codebase can be a massive undertaking.

I see service location as a useful tool when refactoring legacy code towards Constructor Injection. The lower "leaf-nodes" of the object graph can take advantage of constructor injection and the higher nodes can temporarily use service location to create the "leaves". As you refactor, the service locator is removed and replaced with constructor injection, which results in an easier to test and discoverable API. This also provides a pragmatic bottom up refactor approach.
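As a sketch of that interim state (names are hypothetical; the locator call assumes the Common Service Locator API), the "leaf" takes its dependencies through the constructor while the legacy caller temporarily resolves them through the locator:

```csharp
// Hypothetical interim state during a bottom-up refactor: the "leaf"
// (MyModelTranslator) already uses constructor injection, while the
// legacy caller temporarily resolves it through the service locator.
public class LegacyController
{
    private readonly MyModelTranslator _translator;

    public LegacyController()
    {
        // temporary: becomes a constructor argument once the callers
        // of LegacyController are themselves under test
        _translator = ServiceLocator.Current.GetInstance<MyModelTranslator>();
    }
}
```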

Summary

  • Use "new" when dependencies have no external dependencies or don't influence conditional flow of the subject.  Conversely, use constructor injection when dependencies have additional dependencies.
  • Use an IoC container to construct objects that use constructor injection, but only use the IoC container in high level components that are responsible for construction of the object graph. If constructing the entire object graph is a performance concern, consider encapsulating the container in a Factory that can be used to resolve remaining parts of the object graph when needed.
  • To avoid breaking encapsulation, use inversion of control to promote abstractions over concrete dependencies.


Thursday, July 14, 2011

So You've Decided To Go "Full Retard"!

Congratulations! You've done something very clever! Your out-of-the-box thinking has now retarded the development process!


Oh? You're not sure what I'm talking about? Well remember that thing you did? I'm pretty sure you knew that it felt like a hack when you wrote it but then you quickly convinced yourself that you'd invented something insanely brilliant. Well, it wasn't. It was a hack. And now, somewhere, kittens are dying.

Yes, I know the compiler didn't complain but that doesn't technically make it valid code. Just because you can use operator overloading to concatenate files doesn't mean you should.  Oh yeah and remember that non-standard event signature you implemented to save time? Well it turns out that we really did need event arguments and proper error handling, so now Jimmy, who is replacing you btw, is going to have to rewrite it. kthxbai.

It's not clever, or agile or lean or whatever it is you think it is. It's sloppy. I can't think of a profession where cutting corners is okay.

So before you start rallying the protestors to your aid that "retarded" is an offensive word, I mean it in the true sense: slowing down the development or progress of an action, process, etc.  Anytime a developer squints and wonders what was going through your mind -- you've failed to help others understand your code.  And you know what the funny thing is? It doesn't take much more to do it right. It might take a few extra seconds to put XML comments at the top of the method, or a few extra minutes to write a unit test.  If you consider that we spend more time trying to understand code than writing it, it really makes sense to ensure that code remains consistent.

For me, clever is often synonymous with stupid. Everyone's entitled to a few embarrassing "clever" moments a year as long as they own up to it. Going all in and swearing by it, well… sometimes you come back empty-handed.


Tuesday, July 12, 2011

Visual Studio Regular Expressions for Find & Replace

Visual Studio has had support for regular expressions in Find & Replace for several versions, but I've only really used it for simple searches. I recently had a problem where I needed to introduce a set of changes to a very large object model. It occurred to me that this could be greatly simplified with some pattern matching, but I was genuinely surprised to learn that Visual Studio has its own brand of regular expressions.

After spending some time learning the new syntax I had a really simple expression to modify all of my property setters:

Original:

public string PropertyName
{
    get { return _propertyName; }
    set
    {
        _propertyName = value;
        RaisePropertyChanged("PropertyName");
    }
}

Goal:

public string PropertyName
{
    get { return _propertyName; }
    set
    {
        if ( value == _propertyName )
             return;            
        _propertyName = value;
        RaisePropertyChanged("PropertyName");
    }
}

Here’s a breakdown of the pattern I used.

Find:

^{:Wh*}<{_:a+} = value;
  • ^ = beginning of line
  • { = start of capture group #1
  • :Wh = Any whitespace character
  • * = zero or more occurrences
  • } = end of capture group #1
  • < = beginning of word
  • { = start of capture group #2
  • _ = the text must start with an underscore
  • :a = any alpha numerical character
  • + = 1 or more alpha numerical characters
  • } end of capture group #2
  • “ = value;” = exact text match

Replace:

\1if (\2 == value)\n\1\treturn;\n\1\2 = value;

The Replace pattern is fairly straightforward: “\1” and “\2” represent capture groups 1 and 2.  Since capture group #1 represents the leading whitespace, I’m using it in the replace pattern to keep the original padding and to base new lines from that point.  For example, “\n\1\t” introduces a newline, the original whitespace and then a tab.
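For comparison, here's a rough equivalent of the same find-and-replace using the standard .NET regex engine (a sketch; the escape syntax differs slightly from Visual Studio's):

```csharp
using System;
using System.Text.RegularExpressions;

class SetterRewriter
{
    static void Main()
    {
        var input = "        _propertyName = value;";

        // (\s*) captures the leading whitespace, (_\w+) the backing field
        var pattern = @"^(\s*)(_\w+) = value;";
        var replacement = "$1if ($2 == value)\n$1\treturn;\n$1$2 = value;";

        Console.WriteLine(
            Regex.Replace(input, pattern, replacement, RegexOptions.Multiline));
    }
}
```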

It seems insane that Microsoft implemented their own regular expression syntax, but there are some interesting things in there, such as being able to match on quoted text, etc.

I know this ain’t much, but hopefully it will inspire you to write some nifty expressions.  Cheers.


Wednesday, June 29, 2011

Build Server Code Analysis Settings

I've been looking at ways to improve the reporting of Code Analysis as part of our Team Build. During my research I found that the RunCodeAnalysis setting as defined in the TFSBuild.proj differs significantly from the local MSBuild project schema options (see this post for a detailed break down of project level analysis settings).

Specifically, TFSBuild defines RunCodeAnalysis as "Always", "Default" and "Never", while MSBuild defines it as a simple true/false Boolean.  To make sense of this, I tracked it down to this section of the Microsoft.TeamFoundation.Build.targets file:

<Target Name="CoreCompileSolution">

  <PropertyGroup>
    <CodeAnalysisOption Condition=" '$(RunCodeAnalysis)'=='Always'">RunCodeAnalysis=true</CodeAnalysisOption>
    <CodeAnalysisOption Condition=" '$(RunCodeAnalysis)'=='Never'">RunCodeAnalysis=false</CodeAnalysisOption>
    <!-- ... -->
  </PropertyGroup>
  <!-- ... -->
</Target>

From this we can infer that the "Default" setting does not provide a value to the runtime, while "Always" and "Never" map to true and false respectively.  But this raises the question: what's the difference between "Default" and "Always", and what should I specify to effectively run Code Analysis?
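For reference, the TFSBuild.proj side of this is just a property (a sketch from memory; check your build file for the exact property group it lives in):

```xml
<!-- TFSBuild.proj (sketch): the TFS-specific three-valued setting -->
<PropertyGroup>
  <!-- valid values: Always | Default | Never -->
  <RunCodeAnalysis>Default</RunCodeAnalysis>
</PropertyGroup>
```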

To answer this question, let's take an example of a small solution with two projects.  Both projects are configured in the csproj with default settings.  For the purposes of the demo, I've altered the output path to a folder called Output at the root of the solution:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <DebugSymbols>true</DebugSymbols>
  <DebugType>full</DebugType>
  <Optimize>false</Optimize>
  <OutputPath>..\Output</OutputPath>
  <DefineConstants>DEBUG;TRACE</DefineConstants>
  <ErrorReport>prompt</ErrorReport>
  <WarningLevel>4</WarningLevel>
</PropertyGroup>

When we specify code analysis to run with the "Always" setting:

msbuild.exe Example.sln /p:RunCodeAnalysis=true

The Output folder contains the following files:

CodeAnalysisExperiment1.dll 
CodeAnalysisExperiment1.dll.CodeAnalysisLog.xml 
CodeAnalysisExperiment1.dll.lastcodeanalysissucceeded 
CodeAnalysisExperiment1.pdb 
CodeAnalysisExperiment2.dll 
CodeAnalysisExperiment2.dll.CodeAnalysisLog.xml 
CodeAnalysisExperiment2.dll.lastcodeanalysissucceeded 
CodeAnalysisExperiment2.pdb

Close inspection of the console output shows that the Minimal ruleset was applied to my empty projects, despite having not configured either project for code analysis.


If I tweak the project settings slightly such that one project explicitly declares RunCodeAnalysis as True and the other as False

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' "> 
  <DebugSymbols>true</DebugSymbols> 
  <DebugType>full</DebugType> 
  <Optimize>false</Optimize> 
  <OutputPath>..\Output</OutputPath> 
  <DefineConstants>DEBUG;TRACE</DefineConstants> 
  <ErrorReport>prompt</ErrorReport> 
  <WarningLevel>4</WarningLevel> 
  <RunCodeAnalysis>false</RunCodeAnalysis> 
</PropertyGroup>

and…

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' "> 
  <DebugSymbols>true</DebugSymbols> 
  <DebugType>full</DebugType> 
  <Optimize>false</Optimize> 
  <OutputPath>..\Output</OutputPath> 
  <DefineConstants>DEBUG;TRACE</DefineConstants> 
  <ErrorReport>prompt</ErrorReport> 
  <WarningLevel>4</WarningLevel> 
  <RunCodeAnalysis>true</RunCodeAnalysis> 
</PropertyGroup>

...and then specify the "Default" setting:

msbuild Example.sln

The Output folder now contains the following files:

CodeAnalysisExperiment1.dll 
CodeAnalysisExperiment1.dll.CodeAnalysisLog.xml 
CodeAnalysisExperiment1.dll.lastcodeanalysissucceeded 
CodeAnalysisExperiment1.pdb 
CodeAnalysisExperiment2.dll 
CodeAnalysisExperiment2.pdb

From this we can infer that the "Default" setting looks to the project settings to determine if analysis should be done, whereas the "Always" setting will override the analysis and "Never" will disable analysis altogether.

For my purposes, I have some projects (unit tests, etc.) that I don't necessarily want to run code analysis on -- for me, "Always" is a bit of a deal breaker.  However, there's an interesting lesson to take away here: command-line arguments passed to msbuild are carried to all targets, which means we can override most project settings this way:

// use 'all rules' for projects that have analysis turned on 
msbuild.exe Example.sln /p:CodeAnalysisRuleSet="AllRules.ruleset"

// use a custom dictionary during code analysis 
msbuild.exe Example.sln /p:CodeAnalysisDictionary="..\AnalysisDictionary.xml"

// redirect output to a different folder 
msbuild.exe Example.sln /p:OutputPath="..\Output2"

Cheers!

Happy Coding.
