You know, it’s easy to forget the basics after you’ve been doing something for a while. Such is the case with TDD: I don’t have to remind myself of the fundamental “red, green, refactor” mantra every time I write a new test; it’s just baked in. When it’s time to write something new, the good habits kick in and I write a test. After all, this is what the Driven part of Test Driven Development is about: we drive our development through the creation of tests.
The funny thing is, the goal of TDD isn’t to produce tests. Tests are merely a by-product of developing the code, and having tests that demonstrate the code works is one of the benefits. Once they’re written, we forget about them and move on; we only return to them when something unexpectedly breaks.
Wait. Why are they breaking? Maybe we forgot something, somewhere.
The Safety Net Myth
One of the reasons that tests break is because there’s a common perception that once the code is written, we no longer need the tests to drive development. “We’ve got tests, so let’s just see what breaks after I make these changes…”
This strategy works when you want to try “what-if” scenarios or make small, well-understood refactorings, but it falls flat for long-term coding sessions. The value of the tests diminishes quickly the longer the coding session lasts. Simply put, tests are not safety nets: if you go off making changes for a few days, you’ll only find that the tests get in the way; they no longer represent your changes, and your code won’t compile.
This may seem rudimentary, but let’s go back and review the absolute basics of the TDD methodology (a minimal code sketch follows the list):
- Start by writing a failing test. (RED)
- Implement the code necessary to make that test pass. (GREEN)
- Remove any duplication and clean it up. (REFACTOR)
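To make the cycle concrete, here’s a minimal sketch in Java with JUnit 4. The PriceCalculator class and its discount rule are hypothetical, invented purely to illustrate the three steps:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // RED: write this test first. It fails because applyDiscount()
    // doesn't exist yet (or doesn't do what the test demands).
    @Test
    public void tenPercentDiscountAppliesToOrdersOfAtLeast100() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.applyDiscount(100.0), 0.001);
    }
}

// GREEN: the simplest implementation that makes the test pass.
class PriceCalculator {
    double applyDiscount(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}

// REFACTOR: with the bar green, extract the magic numbers, improve
// names, and remove duplication, re-running the tests after each change.
```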
It’s easy to forget the basics. The very first step is to make sure we have a test that doesn’t pass before we do any work, and this is easily overlooked when we already have tests for that functionality.
Writing tests for new functionality
If you want to introduce new functionality to your code base, challenge your team to introduce those changes to the tests first. This may seem idealistic to some, especially if it’s been a long time since the tests were written or if no one on the team is familiar with the tests or their value.
Here’s a ridiculously simple tip, sketched in code after the list:
- Locate the code you think may need to change for this feature.
- Introduce a fatal error into the code. Maybe comment out the return value and return null, or throw an exception.
- Run the tests.
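In code, step 2 might look like the sketch below. OrderService and its method are hypothetical stand-ins for whatever code you suspect needs to change:

```java
class OrderService {

    double orderTotal(double subtotal, double taxRate) {
        // Original body, commented out while probing test coverage:
        // return subtotal * (1 + taxRate);

        // Deliberate fatal error: any test that exercises this method
        // will now fail. Revert this once you've reviewed the fallout.
        throw new UnsupportedOperationException("deliberately broken to find dependent tests");
    }
}
```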
With luck, every test that exercises this code is now broken. Review these tests and ask yourself (see the sketch after this list):
- Does this test represent a valid requirement after I introduce my change? If not, it’s safe to remove it.
- How does this test relate to the change that I’m introducing? Would my change alter the expected results of this test? If yes, change the expected results. These tests should fail after you remove the fatal flaw you introduced moments ago.
- Do any of these tests represent the new functionality I want to introduce? If not, write that test now.
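Here’s what that review might look like in code, reusing the hypothetical PriceCalculator from the earlier sketch. Suppose the new feature raises the discount to 15% for orders of 200 or more:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorChangeTest {

    // Still a valid requirement, untouched by the change: keep it as-is.
    @Test
    public void ordersUnder100GetNoDiscount() {
        assertEquals(50.0, new PriceCalculator().applyDiscount(50.0), 0.001);
    }

    // The change alters this expectation, so update it now. It fails
    // against the current 10% rule and goes green only once the new
    // behaviour is implemented: exactly the RED we want to start from.
    @Test
    public void ordersOf200OrMoreNowGetFifteenPercent() {
        assertEquals(170.0, new PriceCalculator().applyDiscount(200.0), 0.001);
    }
}
```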
(If nothing breaks, you’ve got a different problem. Do some research on what it would take to get this code under test, and write tests for the new functionality.)
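If that happens, a characterization test (Michael Feathers’ term for a test that pins down whatever the code currently does) is one way to get started. LegacyFormatter below is a hypothetical stand-in:

```java
import static org.junit.Assert.assertEquals;
import java.util.Locale;
import org.junit.Test;

public class LegacyFormatterCharacterizationTest {

    // Run the code once, observe the actual output, then pin that
    // output down as the expected value. Only after the current
    // behaviour is locked in do you start driving changes through tests.
    @Test
    public void pinsDownCurrentFormattingBehaviour() {
        assertEquals("TOTAL: $12.50", new LegacyFormatter().format(12.5));
    }
}

class LegacyFormatter {
    String format(double amount) {
        return String.format(Locale.US, "TOTAL: $%.2f", amount);
    }
}
```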
Conclusion
The duct tape programmer will argue that you can’t make an omelette without breaking some eggs, which is true: we should have the courage to stand up and fix things that are wrong. But I’d argue that you must do your homework first; if you don’t check for the other ingredients, you’re just making scrambled eggs.
In my experience, long-term refactorings that don’t leverage the tests are a recipe for test abandonment; your tests and code should always be moments away from compiling. The best way to keep the tests valid is to remember the basics: they should be failing before you start introducing changes.