Monday, October 24, 2011

Guided by Tests–Wrap Up

This post is the ninth and last in a series about a group TDD experiment to build an application in 5-7 days using only tests. Read the beginning here.

This last post wraps up the series by looking back at some of the metrics we can collect from our code and tests. The numbers provide some interesting data about the experiment as well as feedback on the design.

We’ll use three different data sources for our application metrics:

  • Visual Studio Code Analysis
  • MSTest Code Coverage
  • NDepend Code Analysis

Visual Studio Code Analysis

The code analysis features of Visual Studio 2010 provide a convenient view of some common static analysis metrics. Note that this feature only ships with the Premium and Ultimate editions. Here’s a quick screen capture that shows a high-level view of the metrics for our project.

Tip: When reading the above graph, keep an eye on the individual class values, not the roll-up namespace values.

[Image: Visual Studio code metrics summary for the project]

Here’s a breakdown of what some of these metrics mean and what we can learn from our experiment.

Maintainability Index

I like to think of the Maintainability Index as a high level metric that summarizes how much trouble a class is going to give you over time. Higher numbers are better, and problems usually start below the 20-30 range. The formula for the maintainability index is actually quite complex, but looking at the above data you can see how the other metrics drive the index down.
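For reference, the formula Visual Studio uses works out to roughly the following (this is the documented 2010 code-metrics formula; treat the exact constants as an approximation):

Maintainability Index = MAX(0, (171 - 5.2 * ln(Halstead Volume)
                                     - 0.23 * (Cyclomatic Complexity)
                                     - 16.2 * ln(Lines of Code)) * 100 / 171)

In other words, long methods, heavy branching, and dense expressions all drag the index down on a logarithmic scale.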

Our GraphBuilder is the lowest in our project, coming in at an index of 69. This is largely influenced by the density of operations and the complexity relative to its lines of code – our GraphBuilder is responsible for constructing the graph from the conditional logic of the model. The maintainability index is interesting, but I don’t hold much stock in it alone.

Lines of Code

Lines of Code counts logical lines of code, meaning code lines without whitespace and stylistic formatting. Some tools, like NDepend, record other measures for lines of code, such as the number of IL instructions per line. Visual Studio’s Lines of Code metric is simple and straightforward.

There are a few interesting observations for our code base.

First, the number of lines of code per class is quite low. Even the NDependStreamParser, which tops the chart at 22 lines, is extremely small considering that it reads data from several different XML elements. The presence of many small classes suggests that each class is designed to do one thing well.

Secondly, there are more lines of code in our test project than in the production code. Some may point to this as evidence that unit testing introduces more effort than writing the actual code – I attribute the additional code to the following:

  • We created hand-rolled mocks and utility classes to generate input. These are not present in the production code. (A minimal sketch of a hand-rolled mock follows this list.)
  • Testing code is more involved than writing it, because tests must exercise the many different paths the code can take. There should be more code here.
  • We didn’t write the code and then the tests; we wrote them at the same time.
  • Our tests ensured that we wrote only the code needed to make the tests pass, which allowed us to aggressively remove duplication from the production code. Did I mention the largest and most complicated class is 22 lines long?
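For readers who haven’t seen one, here’s a minimal sketch of what a hand-rolled mock looks like; the interface and class names are invented for illustration and aren’t types from the project:

using System.Collections.Generic;

// Hypothetical production-side abstraction.
public interface IGraphView
{
    void ShowNodes(IEnumerable<string> nodeNames);
}

// Hand-rolled mock living in the test project: no mocking framework,
// it simply records what it was given so a test can assert on it later.
internal class MockGraphView : IGraphView
{
    public List<string> ReceivedNodes { get; private set; }

    public void ShowNodes(IEnumerable<string> nodeNames)
    {
        ReceivedNodes = new List<string>(nodeNames);
    }
}

Classes like these are trivial to write and read, but they add up, which is part of why the test project outweighs the production code.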

Cyclomatic Complexity

Cyclomatic Complexity, also known as Conditional Complexity, represents the number of independent execution paths through a method – so methods that contain a switch or multiple if statements have higher complexity than those that do not. The metric is normally applied at the method level, not the class level, but if you look at the graph above, more than half of the classes average below 5 and the rest are below 12. Best practices suggest that Cyclomatic Complexity should stay under 15-20 per method, so we’re good.

Although the graph above doesn’t display our 45 methods, the cyclomatic complexity for most methods is 1-2. The only exceptions are our NDependStreamParser and GraphBuilder, which contain methods with complexity values of 6 and 5, respectively.

I see cyclomatic complexity primarily as an indicator of how many tests a method or class needs.
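To make that concrete with an example that isn’t from the project: the method below has a cyclomatic complexity of 3 (one path for each if plus the fall-through), so it takes at least three tests to cover every path.

// Hypothetical example: complexity of 3, so three tests are needed
// to exercise every path (> 50, 21-50, and <= 20).
public static string ClassifyMethodLength(int linesOfCode)
{
    if (linesOfCode > 50)
        return "Refactor";

    if (linesOfCode > 20)
        return "Review";

    return "OK";
}

With most of our methods sitting at a complexity of 1 or 2, it only takes a test or two to cover each of them.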

Depth of Inheritance

The “depth of inheritance” metric refers to the number of base classes in a class’s inheritance hierarchy. Best practices aim to keep this number as low as possible, since each level in the hierarchy represents a dependency that can influence or break implementers.

Our graph shows very low inheritance depth, which supports our design philosophy of favouring composition and dependency inversion over inheritance. There is at least one red flag in the graph though: our AssemblyGraphLayout has an inheritance depth of 9, a consequence of extending a class from the GraphSharp library, and it highlights possible brittleness surrounding that library.
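For contrast, here’s a rough sketch of the composition-and-dependency-inversion style the rest of the code base follows; the types are illustrative rather than the project’s actual classes:

using System.IO;

// Illustrative placeholder for the project's graph type.
public class Graph { }

// The consumer depends on an abstraction...
public interface IStreamParser
{
    Graph Parse(Stream input);
}

// ...and composes it instead of inheriting behaviour from a base class,
// keeping the depth of inheritance at 1.
public class GraphService
{
    private readonly IStreamParser _parser;

    public GraphService(IStreamParser parser)
    {
        _parser = parser; // dependency inversion: the collaborator is supplied from outside
    }

    public Graph Load(Stream input)
    {
        return _parser.Parse(input);
    }
}

Swapping the parser only requires a new IStreamParser implementation, whereas a deep inheritance chain like the GraphSharp one means every base-class change can ripple down to us.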

Class Coupling

The class coupling metric is particularly interesting because it shows how many other types a class consumes or creates. Although Visual Studio doesn’t give us much visibility into what makes up that coupling (NDepend can help us here), classes with higher coupling are much more sensitive to change. Our GraphBuilder has a Class Coupling of 11, which includes several types from the System namespaces (IEnumerable<T>, Dictionary, KeyValuePair) as well as knowledge of our Graph and model data.

Classes that combine high coupling with many lines of code and high cyclomatic complexity are the most sensitive to change, which explains why GraphBuilder has the lowest Maintainability Index of the bunch.

Code Coverage

Code coverage is a metric that shows which execution paths within our code base are covered by unit tests. While code coverage can’t vouch for the quality of the tests or production code, it can indicate the strength of the testing strategy.

Under the rules of our experiment, there should be 100% code coverage because we’ve mandated that no code is written without a test. We have 93.75% coverage, which has the following breakdown:

[Image: DependencyViewer code coverage breakdown]

Interestingly enough, the three areas with no code coverage are the key obstacles we identified early in the experiment. Here are the snippets of code that have no coverage:

Application Start-up Routine

protected override void OnStartup(StartupEventArgs e)
{
    // Composition root: create the main window and wire up its view model.
    var shell = new Shell();
    shell.DataContext = new MainViewModelFactory().Create();

    shell.Show();
}

Launching the File Dialog

// Virtual seam: tests can override this method so the real dialog never opens.
internal virtual void ShowDialog(OpenFileDialog dialog)
{
    dialog.ShowDialog();
}

Code-behind for Shell

public Shell()
{
    InitializeComponent();
}

We’ve designed our solution to limit the amount of “untestable” code, so these lines are expected. From this we can establish that our testing strategy has three weaknesses, two of which are exercised simply by launching the application. If we wanted to write automated user-interface tests, these are the areas we’d want early feedback on.
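To show why the untested dialog wrapper carries little risk, here’s a rough sketch of how the virtual seam would be exercised from the test project. The FileDialogService and stub names are hypothetical, and it assumes the usual InternalsVisibleTo attribute gives the test assembly access to the internal member:

using Microsoft.Win32;

// Hypothetical wrapper around the WPF OpenFileDialog.
public class FileDialogService
{
    public string GetFileName()
    {
        var dialog = new OpenFileDialog();
        ShowDialog(dialog);
        return dialog.FileName;
    }

    // The only line we can't unit test hides behind this virtual seam.
    internal virtual void ShowDialog(OpenFileDialog dialog)
    {
        dialog.ShowDialog();
    }
}

// Hand-rolled stub in the test project: overrides the seam so no dialog appears.
internal class StubFileDialogService : FileDialogService
{
    internal override void ShowDialog(OpenFileDialog dialog)
    {
        dialog.FileName = "DependencyViewer.xml"; // simulate the user's selection
    }
}

A test can then call GetFileName() on the stub and assert on the result without any UI appearing, leaving only the single dialog.ShowDialog() call uncovered.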

NDepend Analysis

NDepend is a static code analysis tool that provides more detailed information than the standard Visual Studio tools. The product has several pricing levels, including open-source and evaluation licenses, and it offers many great visualizations and features that can help you learn more about your code. While there are many visuals I could present here, I’ll highlight two interesting diagrams.

Dependency Graph:

This graph shows dependencies between namespaces. I’ve configured this graph to show two things:

  • Box size represents afferent coupling: larger boxes are used by more classes.
  • Line thickness represents the number of methods involved between the dependent components.

[Image: NDepend dependency graph between namespaces]

Dependency Matrix:

A slightly different view of the same information is the Dependency Matrix, which presents the graph in a cross-tabular format.

[Image: NDepend dependency matrix by namespace]

Both of these views help us better visualize the Class Coupling metric that Visual Studio pointed out earlier, and the information they provide is quite revealing. Both diagrams show that the Model namespace is used by the ViewModels and Controls namespaces. This represents a refactoring or restructuring problem, as these layers really should not be aware of the Model:

  • Views shouldn’t have details about the ViewModel;
  • the Service layer should only know about the Model and ViewModels;
  • Controls should know about ViewModel data if needed;
  • ViewModels should represent view-abstractions and thus should not have Model knowledge.

The violations have occurred because we established our AssemblyGraph and related items as part of the model.  This was a concept we wrestled with at the beginning of the exercise, and now the NDepend graph helps visualize the problem more concretely. As a result of this choice, we’re left with the following violations:

  • The control required to lay out our graph is a GraphLayout that uses our Model objects.
  • The MainViewModel exposes the Graph as a property, which is bound to the View.

The diagram shows a very thin line, but this Model-to-ViewModel state problem has become a common theme in recent projects. Maybe we need a state container to hold onto the Model objects, or maybe the Graph should be composed of ViewModel objects instead of Model data. It’s worth considering, and maybe I’ll revisit this problem in more detail in an upcoming post.

Conclusion

To wrap up the series, let’s look back on how we got here; if you’d like to revisit the experiment from the beginning, the first post is the place to start.

I hope you have enjoyed the series and found something to take away. I challenge you to find a similar problem and champion its development within your development group.

Happy Coding.

