Monday, October 18, 2010

Callbacks with Moq

My current project is using Moq as our test mocking framework. Although I've been a fan of RhinoMocks for several years, I've found that they both support largely the same features, and I'm starting to think I like Moq's syntax better. I'm not going to go into an in-depth comparison of the mocking frameworks, as that's well documented elsewhere. Instead, I want to zero in on a feature I've had my eye on for some time now.

Both RhinoMocks and Moq have support for a peculiar feature that lets you invoke a callback method in your test code when a method on your mock is called. This feature seems odd to me because in the majority of cases a mock is either going to return a value or throw an exception - surely invoking some arbitrary method in your test must represent a very small niche requirement.

According to the RhinoMocks documentation, the callback takes a delegate that returns true or false. From this it's clear that you would use this delegate to inspect the method's incoming parameters and return false if they didn't meet your expectations. However, as Ayende alludes to, this is a powerful feature that can easily be abused.

Moq provides this feature too, but the syntax is different. Rather than requiring a Func<bool> delegate, Moq's Callback mimics the signature of the mocked method. From this syntax it's not obvious that the callback should be used for validating inbound parameters, which suggests that it could be abused, but it also implies freedom for the test author to do other things. Granted, this can get out of control and be abused, but perhaps a level of discretion about its usage is also implied?
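
To make the difference concrete, here is a minimal sketch (the IAuditService interface and its Record method are made up for illustration): with Moq you can constrain an inbound argument inline with It.Is<T>, or supply a Callback that mirrors the mocked method's signature and do whatever you like with the arguments.

// Hypothetical dependency, used only to illustrate the syntax.
public interface IAuditService
{
  void Record(string action, int userId);
}

var audit = new Mock<IAuditService>();

// Option 1: validate the inbound argument inline with a predicate.
audit.Setup( a => a.Record(It.Is<string>(s => s.Length > 0), It.IsAny<int>()) );

// Option 2: the Callback mirrors the method's signature (string, int),
// so the test can capture or inspect the arguments however it likes.
string recordedAction = null;
audit.Setup( a => a.Record(It.IsAny<string>(), It.IsAny<int>()) )
     .Callback<string, int>( (action, userId) => recordedAction = action );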

Here are a few examples where Callbacks have been helpful:

Validating inbound method parameters

The best example I can imagine for inspecting an inbound parameter is a method that has no return value, such as sending an object to a logger. If we assume that validating the inbound parameter is the logger's responsibility, we would only catch invalid arguments at runtime. Using the callback technique, we can write a test to enforce that the object being logged meets minimum validation criteria.

[TestMethod]
public void Ensure_Exception_Is_Logged_Properly()
{
  Exception exception = null;

  // Capture the exception passed to the mocked logger so we can inspect it below.
  Mock.Get(Logger)
      .Setup( l => l.Error(It.IsAny<Exception>()) )
      .Callback<Exception>( ex => exception = ex );

  Subject.DoSomethingThatLogsException();

  Assert.AreEqual("Fatal error doing something", exception.Message);
}

Changing state of inbound parameters

Imagine a WPF application that uses the MVVM pattern, where we need to launch a view model as a modal dialog. The user can make changes to the view model in the dialog and click Ok, or they can click Cancel. If the user clicks Ok, the view model state needs to reflect their changes. However, if they click Cancel, any changes made need to be discarded.

Here's the code:

public class MyViewModel : ViewModel
{
  /* snip */

  public virtual bool Show()
  {
      // Work against a copy so the user's changes can be discarded on cancel.
      var clone = this.Clone();
      var popup = new PopupWindow();
      popup.DataContext = clone;

      // ShowModal returns true when the user clicks Ok.
      if (_popupService.ShowModal(popup))
      {
          CopyStateFrom(clone);
          return true;
      }
      return false;
  }

}

Assuming that the popup service is a mock object that returns true when the Ok button is clicked, how do I test that the contents of the popup dialog are copied back into the subject? How do I guarantee that changes aren't applied if the user clicks cancel?

The challenge with the above code is that the clone is a copy of my subject. I have no means of intercepting or mocking this object unless I introduce a mock ObjectCloner into the subject (which is ridiculous, btw). In addition to this, the changes to the view model happen while the dialog is shown.

While the test looks unnatural, Callbacks fit this scenario really well.

[TestMethod]
public void When_User_Clicks_Ok_Ensure_Changes_Are_Applied()
{
  Mock.Get(PopupService)
      .Setup(p => p.ShowModal(It.IsAny<PopupWindow>()))
      .Callback<PopupWindow>( ChangeViewModel )
      .Returns(true);

  var vm = new MyViewModel(PopupService)
              {
                  MyProperty = "Unchanged"
              };

  vm.Show();

  Assert.AreEqual("Changed", vm.MyProperty);
}

private void ChangeViewModel(PopupWindow window)
{
  var viewModel = window.DataContext as MyViewModel;
  viewModel.MyProperty = "Changed";
}
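
As for the second question, guaranteeing that changes are discarded when the user clicks cancel, the same setup works with the mock returning false. A sketch, mirroring the test above:

[TestMethod]
public void When_User_Clicks_Cancel_Ensure_Changes_Are_Discarded()
{
  Mock.Get(PopupService)
      .Setup(p => p.ShowModal(It.IsAny<PopupWindow>()))
      .Callback<PopupWindow>( ChangeViewModel )
      .Returns(false);

  var vm = new MyViewModel(PopupService)
              {
                  MyProperty = "Unchanged"
              };

  vm.Show();

  // The clone was changed inside the dialog, but because ShowModal
  // returned false, the changes were never copied back to the subject.
  Assert.AreEqual("Unchanged", vm.MyProperty);
}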

The key distinction here is that changes that occur to the popup are in no way related to the implementation of the popup service. The changes in state are a side-effect of the object passing through the mock. We could have rolled our own mock to simulate this behavior, but Callbacks make this unnecessary.
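
For contrast, rolling our own mock might look something like this (a sketch; it assumes the popup service implements a simple IPopupService interface with a bool ShowModal(PopupWindow) method, which isn't shown in the original code):

// A hand-rolled fake that simulates the user editing the view model and clicking Ok.
// The Callback approach above makes a class like this unnecessary.
public class FakePopupService : IPopupService
{
  public bool ShowModal(PopupWindow popup)
  {
      var viewModel = (MyViewModel)popup.DataContext;
      viewModel.MyProperty = "Changed";
      return true; // simulate the user clicking Ok
  }
}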

Conclusion

All in all, Callbacks are an interesting feature that allow us to write sophisticated functionality for our mock implementations. They provide a convenient interception point for parameters that would normally be difficult to get under the test microscope.

How are you using callbacks? What scenarios have you found where callbacks were necessary?


Tuesday, October 12, 2010

Working with Existing Tests

You know, it’s easy to forget the basics after you’ve been doing something for a while.  Such is the case with TDD – I don’t have to remind myself of the fundamental “red, green, refactor” mantra every time I write a new test, it’s just baked in.  When it’s time to write something new, the good habits kick in and I write a test.  After all, this is what the Driven part of Test Driven Development is about: we drive our development through the creation of tests.

The funny thing is, the goal of TDD isn’t to produce tests.  Tests are merely a by-product of the development of the code, and having tests that demonstrate that the code works is one of the benefits.  Once they’re written, we forget about them and move on – we only return to them if something unexpected broke.

Wait.  Why are they breaking?  Maybe we forgot something, somewhere.

The Safety Net Myth

One of the reasons tests break is a common perception that once the code is written, we no longer need the tests to drive development.  “We’ve got tests, so let’s just see what breaks after I make these changes…”

This strategy works when you want to try “what-if” scenarios or simple, proper refactorings, but it falls flat for long-term coding sessions.  The value of the tests diminishes quickly the longer the coding session lasts.  Simply put, tests are not safety nets – if you go off making changes for a few days, you’re only going to find that the tests get in the way: they no longer represent your changes and your code won’t compile.

This may seem rudimentary, but let’s go back and review the absolute basics of TDD methodology:

  1. Start by writing a failing test. (RED)
  2. Implement the code necessary to make that test pass. (GREEN)
  3. Remove any duplication and clean it up.  (REFACTOR)

It’s easy to forget the basics.  The very first step is to make sure we have a test that doesn’t pass before we do any work, and this is easily overlooked when we already have tests for that functionality.

Writing tests for new functionality

If you want to introduce new functionality to your code base, challenge your team to introduce those changes to the tests first.  This may seem idealistic to some, especially if it’s been a long time since the tests were written or if no one on the team is familiar with the tests or their value.

Here’s a ridiculously simple tip:

  1. Locate the code you think may need to change for this feature.
  2. Introduce a fatal error into the code.  Maybe comment out the return value and return null, or throw an exception (see the sketch after this list).
  3. Run the tests.
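
For example, a quick sketch of step 2 (the method, the Order type, and the _discountPolicy field are made up, just to show the idea):

public decimal CalculateDiscount(Order order)
{
    // Temporarily sabotage the method, then run the tests to see which ones notice.
    throw new NotImplementedException("fatal flaw - remove me");

    // return _discountPolicy.Apply(order);   // the original implementation, commented out
}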

With luck, all the areas of your tests that are impacted by this code are broken.  Review these tests and ask yourself:

  • Does this test represent a valid requirement after I introduce my change?  If not, it’s safe to remove it.
  • How does this test relate to the change that I’m introducing?  Would my change alter the expected results of this test?  If yes, change the expected results. These tests should fail after you remove the fatal flaw you introduced moments ago.
  • Do any of these tests represent the new functionality I want to introduce?  If not, write that test now.

(If nothing breaks, you’ve got a different problem.  Do some research on what it would take to get this code under a test, and write tests for new functionality.)

Conclusion

The duct tape programmer will argue that you can’t make an omelette without breaking some eggs, which is true – we should have the courage to stand up and fix things that are wrong.  But I’d argue that you must do your homework first - if you don’t check for other ingredients, you’re just making scrambled eggs. 

In my experience, long-term refactorings that don’t leverage the tests are a recipe for test abandonment; your tests and code should always be moments away from compiling.  The best way to keep the tests valid is to remember the basics – they should be broken before you start introducing changes.


Tuesday, October 05, 2010

Writing Easier to Understand Tests

For certain, the long-term success of any project that leverages tests hinges on tests that are easy to understand and provide value.

For me, readability is the gateway to value. I want to be able to open the test, and BOOM! Here’s the subject, here are the dependencies, this is what I’m doing, and here’s what I expect it to do. If I can’t figure it out within a few seconds, I start to question the value of the tests. And that’s exactly what happened to me earlier this week.

The test I was looking at had some issues, but the developer who wrote it had their heart in the right place and made an attempt to keep it relatively straightforward. It used a Context-Specification style of test and used Moq to mock out the physical dependencies, but I got tied up in the mechanics of the test. The trouble I was having was determining which Mocks were part of the test versus Mocks that were only there to support related dependencies.

Below is an example of a similar test and the steps I took to clean it up. Along the way I found something interesting, and I hope you do, too.

Original (Cruft Flavor)

Here's a sample of the test. For clarity's sake, I only want to illustrate the initial setup of the test, so I've omitted the actual test part. Note that I'm using a flavor of context specification that I've blogged about before; if the syntax seems strange, you may want to read up.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected Mock<IDataProvider> _mockDataProvider;
   protected Mock<IUserContext> _mockUserContext;
   protected IDataProvider _dataProvider;
   protected IUserContext _userContext;

   public override void Context()
   {
       _mockDataProvider = new Mock<IDataProvider>();
       _mockUserContext = new Mock<IUserContext>();

       _userContext = _mockUserContext.Object;
       _dataProvider = _mockDataProvider.Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(_dataProvider,_userContext);
   }

   // public class SubContext : TemplateResolverSpecs
   // etc
}

This is a fairly simple example, and certainly those familiar with Moq’s syntax and general dependency injection patterns won’t have too much difficulty understanding what’s going on here. But you have to admit that, while this is a trivial example, there’s a lot of code here for what’s needed – and you had to read all of it.

The Rewrite

When I started to re-write this test, my motivation was to sub-class the test fixture to create different contexts -- maybe I would want to create a context where I used Mocks, and another for real dependencies. I started to debate whether it would be wise to put the Mocks in a subclass or in the base when it occurred to me why the test was confusing in the first place: the Mocks are an implementation detail that gets in the way of understanding the subject’s dependencies. The Mocks aren’t important at all – it’s the dependencies that matter!

So, here’s the same setup with the Mocks moved out of the way, only referenced in the initialization of the test’s Context.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected IDataProvider DataProvider;
   protected IUserContext UserContext;

   public override void Context()
   {
       DataProvider = new Mock<IDataProvider>().Object;
       UserContext = new Mock<IUserContext>().Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(DataProvider, UserContext);
   }
}

Much better, don’t you think? Note that I’ve also removed the underscores and changed the casing on my fields because they’re protected, and that goes a long way to improve readability, too.

Where’d my Mock go?

So you’re probably thinking “that’s great Bryan, but I was actually using those mocks” – and that’s a valid observation, but the best part is you don’t really need to hold onto them anymore. Moq has a cool feature that lets you obtain the Mock wrapper from the mocked object anytime you want, so you only need the mock when it’s time to use it.

Simply use the static Get method on the Mock class to obtain a reference to your mock:

Mock.Get(DataProvider)
   .Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
   .Returns( new TemplateRecord() );

For contrast sake, here’s what the original would have looked like:

_mockDataProvider.Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
                .Returns( new TemplateRecord() );

They’re basically the same, but the difference is I don’t have to remember the name of the variable for the mock anymore. And as an added bonus, our Mock.Setup calls will all line up with the same indentation, regardless of the length of the dependencies’ variable name.

Conclusion

While the above is just an example of how tests can be trimmed down for added readability, my hope is that this readability influences developers to declare their dependencies up front rather than weighing themselves and their tests down with the mechanics of the tests. If you find yourself suddenly requiring a Mock in the middle of the test, or creating Mocks for items that aren't immediate dependencies, it should serve as a red flag that you might not be testing the subject in isolation and you may want to step back and rethink things a bit.


Monday, October 04, 2010

Manually adding Resource Files to Visual Studio Projects

Suppose you're moving files between projects and you have to move a Settings or Resource File.  If you drag these files between projects, you’ll notice that Visual Studio doesn’t preserve the relationships between the designer files:

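If you open the project file at this point, you'll see roughly why (a sketch; the exact entries will vary, but the Generator and DependentUpon metadata are missing, so the designer files no longer nest under their parents or regenerate):

<ItemGroup>
  <None Include="Resources\Settings.settings" />
  <Compile Include="Resources\Settings.Designer.cs" />
  <EmbeddedResource Include="Resources\Resources.resx" />
  <Compile Include="Resources\Resources.Designer.cs" />
</ItemGroup>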

Since these are embedded resources, you’ll need a few extra steps to get things sorted out:

  1. Right-click on the Project and choose Unload Project.  Visual Studio may churn for a few seconds.
  2. Once unloaded, Right-click on the project file and choose Edit <Project-Name>.
  3. Locate your resource files and set them up to be embedded resources with auto-generated designer files:
<ItemGroup>
  <None Include="Resources\Settings.settings">
    <Generator>SettingsSingleFileGenerator</Generator>
    <LastGenOutput>Settings.Designer.cs</LastGenOutput>
  </None>
  <Compile Include="Resources\Settings.Designer.cs">
    <AutoGen>True</AutoGen>
    <DependentUpon>Settings.settings</DependentUpon>
    <DesignTimeSharedInput>True</DesignTimeSharedInput>
  </Compile>
  <EmbeddedResource Include="Resources\Resources.resx">
    <Generator>ResXFileCodeGenerator</Generator>
    <LastGenOutput>Resources.Designer.cs</LastGenOutput>
  </EmbeddedResource>
  <Compile Include="Resources\Resources.Designer.cs">
    <AutoGen>True</AutoGen>
    <DependentUpon>Resources.resx</DependentUpon>
    <DesignTime>True</DesignTime>
  </Compile>
</ItemGroup>

Once you've made these changes, save the project file, then right-click the project and choose Reload Project.  The designer files should nest under their parent files again.

Cheers.
