Thursday, March 03, 2011

Building Custom Test Frameworks

Author’s note: this post focuses on writing test tools that simplify repetitive tasks such as complex assertions. It was inspired by colleagues who wrote a tool that nearly evolved beyond their control. I am not encouraging developers to write frameworks that mask symptoms of poorly isolated components or other design problems.

For some, evolving code through writing tests is seen as a labour of love. For others, it's just labour. The latter is especially true when you realize that you have to write a lot of repetitious tests that will likely introduce test friction later on. When faced with this dilemma, developers rise to the challenge and write code.

Fortunately, there are a lot of great test framework tools popping up in the .NET community that are designed to plug in to your test code. (I’m running out of battery power on my laptop as I write this, so my list of tools is lacking. Send me a note and I’ll list your tool here.) These tools can certainly make things easier by removing obstacles and laborious activities, but if you're planning on writing your own tool or framework, there are a few pitfalls.

Common Pitfalls

Obscured Intent

One of the goals of well-written unit tests is to provide live examples of proper usage so that developers can learn more about the code's intended behaviour. This benefit is significantly hindered when the usage and clear intent of your code have been abstracted away into your framework.
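
As a quick sketch (OrderTestKit and InvoiceCalculator are invented names, and I'm assuming NUnit-style attributes), compare a test that hands everything to a custom helper with one that states its expectations inline:

using NUnit.Framework;

[TestFixture]
public class InvoiceTotalTests
{
    // Tool-centric: short, but the reader has to open OrderTestKit to learn
    // which rules and values are actually being checked.
    [Test]
    public void Bulk_order_follows_standard_pricing_rules()
    {
        OrderTestKit.AssertStandardPricing(quantity: 12, unitPrice: 5.00m);
    }

    // Inline: the intended behaviour is documented by the test itself.
    [Test]
    public void Orders_of_ten_or_more_get_a_ten_percent_discount()
    {
        decimal total = InvoiceCalculator.Total(quantity: 12, unitPrice: 5.00m);
        Assert.AreEqual(54.00m, total); // 12 x 5.00 = 60.00, less 10%
    }
}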

Eventual Complexity

Applying automation to a problem follows the 80/20 rule: the majority of problems fit nicely into your abstraction. The edge cases, however, have a tendency to add bloat, and it doesn't take much for them to trash the idea of a simple tool. This is rarely a consequence of poor planning or bad design; additional complexity tends to creep in over time as your code evolves or as more consumers of the tool come on board.

There's a consequence to this type of complexity: if only a few developers understand the tool's implementation, you risk turning those developers into full-time tool maintainers. Even worse, if those developers leave your team, the entire suite of tests may be abandoned the first time it starts failing.

Dependence on Tool Quality / False Positives

In TDD, application defects hide in the tests you haven't written, so quality is a reflection of the accuracy and completeness of the tests. Likewise, tests that leverage a custom test framework are only as reliable as the completeness and accuracy of the tool. If the framework takes on the lion's share of the work, then the outcome of the test is abdicated to the tool. This is dangerous because any change to the tool's logic could unintentionally allow subtle defects in the tested code to go unnoticed. False positives = tests that lie!
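
Here's a sketch of how that plays out (the reflection-based comparison helper is hypothetical): a well-meaning change to skip properties the tool can't handle quietly stops comparing them, and every test that relied on those values keeps passing.

using System.Collections;
using NUnit.Framework;

public static class AssertEx
{
    public static void PropertiesAreEqual(object expected, object actual)
    {
        foreach (var property in expected.GetType().GetProperties())
        {
            // A later "enhancement" that skips collection properties (to dodge a
            // null reference) silently stops comparing order lines, addresses, etc.
            // Tests that depend on those values now pass no matter what.
            if (typeof(IEnumerable).IsAssignableFrom(property.PropertyType)
                && property.PropertyType != typeof(string))
            {
                continue;
            }

            Assert.AreEqual(property.GetValue(expected, null),
                            property.GetValue(actual, null),
                            "Property '" + property.Name + "' differs.");
        }
    }
}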

Recommendations

Tests for Tools

Oddly enough, if your goal is to write a tool so that you don't need to write tests, you are going to need to write tests for the tool. Having tests for your custom tool ensures that false positives aren’t introduced as the tool is enhanced over time. From a knowledge transfer perspective, the tests serve as a baseline to describe the tool’s responsibilities and enable others to add new functionality (and new tests) with minimal oversight from the original tool author.
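
Sticking with the hypothetical AssertEx helper from earlier, its tests are just an ordinary fixture that feeds it matching and mismatching input and checks that it passes and fails where it should:

using NUnit.Framework;

[TestFixture]
public class AssertExTests
{
    private class Sample { public string Name { get; set; } }

    [Test]
    public void Passes_when_all_properties_match()
    {
        AssertEx.PropertiesAreEqual(new Sample { Name = "a" },
                                    new Sample { Name = "a" });
    }

    [Test]
    public void Fails_when_a_property_differs()
    {
        Assert.Throws<AssertionException>(() =>
            AssertEx.PropertiesAreEqual(new Sample { Name = "a" },
                                        new Sample { Name = "b" }));
    }
}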

Design for Readability

Great frameworks fit in and enable you to express intent, so be careful of over-reaching and trying to do too much. Make sure your tool doesn't abstract away the clues about what the test is meant to verify, and where possible use descriptive method names to improve readability. Documenting your tool with XML documentation comments also helps convey intent.
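
For example (again a sketch, not a prescription), a descriptive name plus an XML doc comment keeps the intent visible right at the call site:

/// <summary>
/// Asserts that every public property on <paramref name="expected"/> has the same
/// value on <paramref name="actual"/>, reporting the offending property name when
/// a value differs.
/// </summary>
public static void AssertAllPublicPropertiesMatch(object expected, object actual)
{
    // ...same comparison logic as before...
}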

Evolve Naturally

Unless you spend weeks planning out your unit tests, custom test tools are products of necessity, realized when you start to write the second or third duplicated test. I tend to discover my test tools as “found treasures” of my normal TDD development cycle.

Rather than diving in and writing a framework or tool first, focus on satisfying the requirement of working software by writing the test using normal TDD practice (red, green, refactor). During the clean-up and refactor step, you will find duplication between tests or even between fixtures. When that happens, promote the duplicated code to a helper method, then to a helper class. I like to think that great TDD is about refactoring mercilessly while keeping the original intent of the code clear; it’s about balance, and knowing when you’ve gone too far.
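
A small illustration of that progression (ShippingCalculator and the amounts are made up): the duplicated arrange/assert lines first become a private helper method inside the fixture, and only later, if other fixtures need it, get promoted to a shared helper class.

[Test]
public void Shipping_is_free_over_fifty_dollars()
{
    AssertShippingCost(orderTotal: 60.00m, expectedShipping: 0.00m);
}

[Test]
public void Shipping_is_flat_rate_under_fifty_dollars()
{
    AssertShippingCost(orderTotal: 20.00m, expectedShipping: 7.95m);
}

// Step one: a private helper method removes the duplication within this fixture.
// Step two (only if other fixtures need it): promote it to a shared helper class.
private static void AssertShippingCost(decimal orderTotal, decimal expectedShipping)
{
    Assert.AreEqual(expectedShipping, ShippingCalculator.CostFor(orderTotal));
}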

If you reach a point where you can extract the commonalities between a few helper classes into a generic solution, you’ll find yourself standing in the doorway between fixtures that were self-contained and fixtures that depend on shared test logic. Learn to recognize this moment: it’s when you should pause writing tests for production code and write some tests for the tool. It’s also a good idea to keep a few of the manual tests around so that you can deliberately break the production code and prove that both sets of tests are equivalent.
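
In practice that can be as simple as leaving one hand-written test beside the tool-driven version (the names below are illustrative); deliberately break the production code and make sure both fail together:

// Tool-driven test that now depends on the shared helper.
[Test]
public void Customer_snapshot_matches_expected_values()
{
    AssertEx.PropertiesAreEqual(ExpectedCustomer(), CustomerService.Load(42));
}

// Hand-written sentinel kept around on purpose: if a deliberate break in the
// production code fails this test but not the one above, the tool has a blind spot.
[Test]
public void Customer_snapshot_matches_expected_values_by_hand()
{
    var customer = CustomerService.Load(42);
    Assert.AreEqual(42, customer.Id);
    Assert.AreEqual("Acme Ltd.", customer.Name);
}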

…Until next time. Happy coding.  (Now where’s my dang power supply?)
