Monday, December 13, 2010

Unit Test Review Checklist

Whenever someone mentions that they want to bring a code-review process into a project, my first inclination is to have teams review the tests.  Tests should be considered production assets that describe the purpose and responsibilities of the code, and developers should be making sure that their tests satisfy this goal.  If the tests look good and they cover the functionality that is required, I don’t see much point in dwelling on the implementation: if the implementation sucks, you can refactor freely assuming you have proper coverage from your tests.

A unit test review process is a great way to ensure that developers are writing useful tests that will help deliver high-quality code that is flexible to change.  As code-review processes have a tendency to create stylistic debates within a development group, having a checklist of 5-10 points to look for is a great way to keep things moving in the right direction.

Here are the criteria I’d like to see in a unit test.  Hopefully you find them useful, too.


Is the test well named?

  • Does the test name clearly represent the functionality being tested?
  • Will someone with little or no knowledge of the component be able to decipher why the test exists?
 

Is the test independent?

  • Does the test represent a single unit of work?
  • Can the test be executed on its own, or is it dependent on the outcome of tests that preceded it?  For example, a fixture that shares state between tests may inadvertently include side-effects (see the sketch below).
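
To illustrate, here is a minimal sketch (all names are hypothetical) of a fixture whose shared state makes the tests order-dependent:

[TestClass]
public class OrderProcessorTests
{
    // static field shared by every test in the fixture – a classic source of side-effects
    private static readonly List<Order> Processed = new List<Order>();

    [TestMethod]
    public void Processing_An_Order_Records_It()
    {
        Processed.Add(new Order());
        Assert.AreEqual(1, Processed.Count); // only passes if this test runs first
    }
}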
 

Is the test reliable?

  • Does the test produce the same outcome consistently?
  • Does the test run without manual intervention to determine a successful outcome?
 

Is the test durable?

  • Is the test designed so that it isn’t susceptible to changes in other parts of the application?  For example:
    • Does the test have complex or lengthy setup?
    • Will changes in the subject's dependencies lead to breakages?
 

Are the assertions valid?

  • Does the test contain assertions that describe the functionality being tested?
  • Do the assertions include helpful failure messages that describe what should have happened if the test fails? (See the sketch below.)
  • Are any assertions redundant, such as tests for features of the CLR (constructors or get/set properties)?
  • Are the assertions specific to this test or are they duplicated in other tests?
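
As a minimal sketch (the names and message are hypothetical), a helpful assertion explains the expected behavior rather than restating the values:

Assert.AreEqual(2, cart.LineItems.Count,
    "Adding a duplicate SKU should update the existing line item, not append a new one.");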
 

Is the test well structured and documented?

  • Does the test contain sufficient comments?
  • Does the test highlight the subject under test and corresponding functionality being exercised?
  • Is there a clear indication of the arrange/act/assert pattern, such that the purpose of the arrangement is evident, the action is easy to identify, and the assertions are clear? (See the example below.)
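
As an illustration, here is a minimal sketch of a test with the arrange/act/assert sections made explicit (the subject and its dependency are hypothetical):

[TestMethod]
public void Completing_An_Order_Submits_It_To_The_Payment_Gateway()
{
    // arrange: the subject and its mocked dependency
    var gateway = new Mock<IPaymentGateway>();
    var subject = new OrderProcessor(gateway.Object);

    // act: a single unit of work
    subject.Complete(new Order());

    // assert: the observable outcome, with a helpful failure message
    gateway.Verify(g => g.Submit(It.IsAny<Order>()),
        "Completing an order should submit it to the payment gateway.");
}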
 

Is the test isolating responsibilities of the subject?

  • Does the test make any assumptions about the internals of dependencies that aren't immediately related to the subject?
  • Is the test inaccurately or indirectly testing a responsibility or implementation of another object?  If so, move the test to the appropriate fixture.
 

Are all scenarios covered?

  • Does the fixture test all paths of execution? Not just the code that has been written, but also scenarios that may have been missed:
    • invalid input;
    • failures in dependencies.
 

Does the test handle resources correctly?

  • If the test utilizes external dependencies (databases, static resources), is the state of these dependencies returned to a neutral state?
  • Are resources allocated efficiently to ensure that the test runs as fast as possible?
    • Are setup and teardown used effectively?
    • Are there any usages of Thread.Sleep()? (See the sketch below for an alternative.)
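
Where a test is tempted to call Thread.Sleep() while it waits for background work, one alternative is to block on a signal with a timeout.  Here is a rough sketch, assuming a mocked completion callback (the names are hypothetical):

var workCompleted = new ManualResetEvent(false);

Mock.Get(Worker)
    .Setup(w => w.OnCompleted())
    .Callback(() => workCompleted.Set());

Subject.StartWork();

Assert.IsTrue(workCompleted.WaitOne(TimeSpan.FromSeconds(5)),
    "Work should have completed within five seconds.");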
 

Is the complexity of the test balanced?

Tests need to strike a balance between Don't Repeat Yourself and clear examples of how to use the subject.

  • Is the test too complicated or difficult to follow?
  • Is there a lot of duplicated code that would lead to time-consuming effort if the subject changed?

 

What do you think?  Are there criteria that you’d add?  Things you disagree with?

Happy coding.

Monday, October 18, 2010

Callbacks with Moq

My current project is using Moq as our test mocking framework. Although I've been a fan of RhinoMocks for several years, I've found that in general they both support the same features, but I'm starting to think I like Moq's syntax better. I'm not going to go into an in-depth comparison of the mocking frameworks, as that's well documented elsewhere. Instead, I want to zero in on a feature that I've had my eye on for some time now.

Both RhinoMocks and Moq have support for a peculiar feature that lets you invoke a callback method in your test code when a method on your mock is called. This feature seems odd to me because in the majority of cases a mock is either going to return a value or throw an exception - surely invoking some arbitrary method in your test must represent a very small niche requirement.

According to the RhinoMocks documentation, the callback takes a delegate that returns true or false. From this it's clear that you would use this delegate to inspect the method's incoming parameters and return false if they didn't meet your expectations. However, as Ayende alludes, this is a powerful feature that can easily be abused.

Moq provides this feature too, but the syntax is different. Rather than requiring a Func<bool> delegate, Moq's Callback mimics the signature of the mocked method. From this syntax it's not obvious that the callback should be used for validating inbound parameters, which suggests that it could be abused, but it also implies freedom for the test author to do other things. Granted, this can get out of control, but perhaps a level of discretion about its usage is also implied?

Here are a few examples where Callbacks have been helpful:

Validating inbound method parameters

The best example I can imagine for inspecting an inbound parameter is for a method that has no return value. For example, sending an object to a logger. If we were to assume that validation logic of the inbound parameter is the responsibility of the logger, we would only identify invalid arguments at runtime. Using the callback technique, we can write a test to enforce that the object being logged meets minimum validation criteria.

[TestMethod]
public void Ensure_Exception_Is_Logged_Properly()
{
  Exception exception = null;

  Mock.Get(Logger)
      .Setup( l => l.Error(It.IsAny<Exception>()) )
      .Callback<Exception>( ex => exception = ex );

  Subject.DoSomethingThatLogsException();

  Assert.AreEqual("Fatal error doing something", exception.Message);
}

Changing state of inbound parameters

Imagine a WPF application that uses the MVVM pattern, where we need to launch a view model as a modal dialog. The user can make changes to the view model in the dialog and click Ok, or they can click Cancel. If the user clicks Ok, the view model state needs to reflect their changes. However, if they click Cancel, any changes made need to be discarded.

Here's the code:

public class MyViewModel : ViewModel
{
  /* snip */

  public virtual bool Show()
  {
      var clone = this.Clone();
      var popup = new PopupWindow();
      popup.DataContext = clone;

      if (_popupService.ShowModal(popup))
      {
          CopyStateFrom(clone);
          return true;
      }
      return false;
  }

}

Assuming that the popup service is a mock object that returns true when the Ok button is clicked, how do I test that the contents of the popup dialog are copied back into the subject? How do I guarantee that changes aren't applied if the user clicks Cancel?

The challenge with the above code is that the clone is a copy of my subject. I have no means of interception to mock this object unless I introduce a mock ObjectCloner into the subject (which is ridiculous, btw). In addition to this, the changes to the view model happen while the dialog is shown.

While the test looks unnatural, Callbacks fit this scenario really well.

[TestMethod]
public void When_User_Clicks_Ok_Ensure_Changes_Are_Applied()
{
  Mock.Get(PopupService)
      .Setup(p => p.ShowModal(It.IsAny<PopupWindow>()))
      .Callback<PopupWindow>( ChangeViewModel )
      .Returns(true);

  var vm = new MyViewModel(PopupService)
              {
                  MyProperty = "Unchanged"
              };

  vm.Show();

  Assert.AreEqual("Changed", vm.MyProperty);
}

private void ChangeViewModel(PopupWindow window)
{
  var viewModel = window.DataContext as MyViewModel;
  viewModel.MyProperty = "Changed";
}

The key distinction here is that changes that occur to the popup are in no way related to the implementation of the popup service. The changes in state are a side-effect of the object passing through the mock. We could have rolled our own mock to simulate this behavior, but Callbacks make this unnecessary.

Conclusion

All in all, Callbacks are an interesting feature that allow us to write sophisticated functionality for our mock implementations. They provide a convenient interception point for parameters that would normally be difficult to get under the test microscope.

How are you using callbacks? What scenarios have you found where callbacks were necessary?


Tuesday, October 12, 2010

Working with Existing Tests

You know, it’s easy to forget the basics after you’ve been doing something for a while.  Such is the case with TDD – I don’t have to remind myself of the fundamental “red, green, refactor” mantra every time I write a new test, it’s just baked in.  When it’s time to write something new, the good habits kick in and I write a test.  After all, this is what the Driven part of Test Driven Development is about: we drive our development through the creation of tests.

The funny thing is, the goal of TDD isn’t to produce tests.  Tests are merely a by-product of the development of the code, and having tests that demonstrate that the code works is one of the benefits.  Once they’re written, we forget about them and move on – we only return to them if something unexpected broke.

Wait.  Why are they breaking?  Maybe we forgot something, somewhere.

The Safety Net Myth

One of the reasons that tests break is because there’s a common perception that once the code is written, we no longer need the tests to drive development.  “We’ve got tests, so let’s just see what breaks after I make these changes…”

This strategy works when you want to try “what-if” scenarios or simple, focused refactorings, but it falls flat for long-term coding sessions.  The value of the tests diminishes quickly the longer the coding session lasts.  Simply put, tests are not safety nets – if you go off making changes for a few days, you’re only going to find that the tests get in the way, as they don’t represent your changes and your code won’t compile.

This may seem rudimentary, but let’s go back and review the absolute basics of TDD methodology:

  1. Start by writing a failing test. (RED)
  2. Implement the code necessary to make that test pass. (GREEN)
  3. Remove any duplication and clean it up.  (REFACTOR)

It’s easy to forget the basics.  The very first step is to make sure we have a test that doesn’t pass before we do any work, and this is easily overlooked when we already have tests for that functionality.

Writing tests for new functionality

If you want to introduce new functionality to your code base, challenge your team to introduce those changes to the tests first.  This may seem idealistic to some, especially if it’s been a long time since the tests were written or if no-one on the team is familiar with the tests or their value. 

Here’s a ridiculously simple tip:

  1. Locate the code you think may need to change for this feature.
  2. Introduce a fatal error into the code.  Maybe comment out the return value and return null, or throw an exception (see the sketch below).
  3. Run the tests.
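
As a minimal sketch (the method and types are hypothetical), the break can be as blunt as this:

// deliberately broken: run the tests and see which ones notice
public decimal CalculateTotal(Order order)
{
    throw new InvalidOperationException("Intentionally broken to find covering tests.");
    // return order.Items.Sum(item => item.Price * item.Quantity);
}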

With luck, all the areas of your tests that are impacted by this code are broken.  Review these tests and ask yourself:

  • Does this test represent a valid requirement after I introduce my change?  If not, it’s safe to remove it.
  • How does this test relate to the change that I’m introducing?  Would my change alter the expected results of this test?  If yes, change the expected results. These tests should fail after you remove the fatal flaw you introduced moments ago.
  • Do any of these tests represent the new functionality I want to introduce?  If not, write that test now.

(If nothing breaks, you’ve got a different problem.  Do some research on what it would take to get this code under a test, and write tests for new functionality.)

Conclusion

The duct tape programmer will argue that you can’t make an omelette without breaking some eggs, which is true – we should have the courage to stand up and fix things that are wrong.  But I’d argue that you must do your homework first - if you don’t check for other ingredients, you’re just making scrambled eggs. 

In my experience, long-term refactorings that don’t leverage the tests are a recipe for test-abandonment; your tests and code should always be moments from being able to compile.  The best way to keep the tests valid is to remember the basics – they should be broken before you start introducing changes.


Tuesday, October 05, 2010

Writing Easier to Understand Tests

For certain, the long-term success of any project that leverages tests hinges on tests that are easy to understand and that provide value.

For me, readability is the gateway to value. I want to be able to open the test, and BOOM! Here’s the subject, here are the dependencies, this is what I’m doing, and here’s what I expect it to do. If I can’t figure that out within a few seconds, I start to question the value of the tests. And that’s exactly what happened to me earlier this week.

The test I was looking at had some issues, but the developer who wrote it had their heart in the right place and made attempts to keep it relatively straight-forward. It was using a Context-Specification style of test and Moq to mock out the physical dependencies, but I got tied up in the mechanics of the test. The trouble I was having was determining which Mocks were part of the test versus which Mocks were there only to support related dependencies.

Below is an example of a similar test and the steps I took to clean it up. Along the way I found something interesting, and I hope you do, too.

Original (Cruft Flavor)

Here’s a sample of the test. For clarity’s sake, I only want to illustrate the initial setup of the test, so I’ve omitted the actual test part. Note: I’m using a flavor of context specification that I’ve blogged about before; if the syntax seems strange, you may want to read up.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected Mock<IDataProvider> _mockDataProvider;
   protected Mock<IUserContext> _mockUserContext;
   protected IDataProvider _dataProvider;
   protected IUserContext _userContext;

   public override void Context()
   {
       _mockDataProvider = new Mock<IDataProvider>();
       _mockUserContext = new Mock<IUserContext>();

       _userContext = _mockUserContext.Object;
       _dataProvider = _mockDataProvider.Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(_dataProvider,_userContext);
   }

   // public class SubContext : TemplateResolverSpecs
   // etc
}

This is a fairly simple example, and certainly those familiar with Moq’s syntax and general dependency injection patterns won’t have too much difficulty understanding what’s going on here. But you have to admit that while this is a trivial example, there’s a lot of code here for what’s needed – and you had to read all of it.

The Rewrite

When I started to re-write this test, my motivation was for sub-classing the test fixture to create different contexts -- maybe I would want to create a context where I used Mocks, and another with real dependencies. I started to debate whether it would be wise to put the Mocks in a subclass or in the base when it occurred to me why the test was confusing in the first place: the Mocks are an implementation detail that gets in the way of understanding the dependencies of the subject. The Mocks aren’t important at all – it’s the dependencies that matter!

So, here’s the same setup with the Mocks moved out of the way, only referenced in the initialization of the test’s Context.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected IDataProvider DataProvider;
   protected IUserContext UserContext;

   public override void Context()
   {
       DataProvider = new Mock<IDataProvider>().Object;
       UserContext = new Mock<IUserContext>().Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(DataProvider, UserContext);
   }
}

Much better, don’t you think? Note: I’ve also removed the underscore prefix and changed the casing of my fields because they’re protected, and that goes a long way to improving readability, too.

Where’d my Mock go?

So you’re probably thinking “that’s great Bryan, but I was actually using those mocks” – and that’s a valid observation, but the best part is you don’t really need them anymore. Moq has a cool feature that lets you obtain the Mock wrapper from the mocked object anytime you want, so you only need the mock when it’s time to use it.

Simply use the static Get method on the Mock class to obtain a reference to your mock:

Mock.Get(DataProvider)
   .Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
   .Returns( new TemplateRecord() );

For contrast sake, here’s what the original would have looked like:

_mockDataProvider.Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
                 .Returns( new TemplateRecord() );

They’re basically the same, but the difference is I don’t have to remember the name of the variable for the mock anymore. And as an added bonus, our Mock.Setup calls will all line up with the same indentation, regardless of the length of the dependency’s variable name.

Conclusion

While the above is just an example of how tests can be trimmed down for added readability, my hope is that this readability influences developers to declare their dependencies up front rather than weighing themselves and their tests down with the mechanics of the tests. If you find yourself suddenly requiring a Mock in the middle of the test, or creating Mocks for items that aren’t immediate dependencies, it should serve as a red flag that you might not be testing the subject in isolation, and you may want to step back and re-think things a bit.


Monday, October 04, 2010

Manually adding Resource Files to Visual Studio Projects

Suppose you're moving files between projects and you have to move a Settings or Resource File.  If you drag these files between projects, you’ll notice that Visual Studio doesn’t preserve the relationships between the designer files:

 settings-resource

Since these are embedded resources, you’ll need a few extra steps to get things sorted out:

  1. Right-click on the Project and choose Unload Project.  Visual Studio may churn for a few seconds.
  2. Once unloaded, Right-click on the project file and choose Edit <Project-Name>.
  3. Locate your resource files and set them up to be embedded resources with auto-generated designer files:
<ItemGroup>
  <None Include="Resources\Settings.settings">
    <Generator>SettingsSingleFileGenerator</Generator>
    <LastGenOutput>Settings.Designer.cs</LastGenOutput>
  </None>
  <Compile Include="Resources\Settings.Designer.cs">
    <AutoGen>True</AutoGen>
    <DependentUpon>Settings.settings</DependentUpon>
    <DesignTimeSharedInput>True</DesignTimeSharedInput>
  </Compile>
  <EmbeddedResource Include="Resources\Resources.resx">
    <Generator>ResXFileCodeGenerator</Generator>
    <LastGenOutput>Resources.Designer.cs</LastGenOutput>
  </EmbeddedResource>
  <Compile Include="Resources\Resources.Designer.cs">
    <AutoGen>True</AutoGen>
    <DependentUpon>Resources.resx</DependentUpon>
    <DesignTime>True</DesignTime>
  </Compile>
</ItemGroup>

Cheers.


Wednesday, August 18, 2010

An update

“The rumors of my death have been greatly exaggerated.” – Mark Twain.

It’s been forever and a day since my last post, largely due to the size and complexity of my current project, but for those wondering (and to clear the writer’s block in my head), here’s a quick update:

Selenium Toolkit for .NET

Although there hasn’t been a formal release of my open source project since last November, I have been covertly making minor commits here and there to keep the NUnit dependencies in sync with NUnit’s recent updates.  I may update the NUnit libraries to the latest release (2.5.7) this weekend.  I realize the irony here: the purpose of my project is to provide a simple installer for Selenium and my NUnit addin, so forcing you to download the source and compile it is beyond excuse.  Here are some of the things that I have been tinkering with to include in a major release but haven’t fully sat down to finish:

  • WiX-based Installation:  I’ve been meaning to drop the out-of-the-box Visual Studio installer project in favor of a WiX-based installer.  The real intent is to install side-by-side versions of my NUnit addin for each version of NUnit installed locally.
  • Resharper 5.0 integration:  The good folks at JetBrains, through their commitment to open source projects, have recognized my open source project and have donated a license of Resharper (read: YOU GUYS ROCK!).  To return the favor, I am looking to produce a custom runner for tests with my WebFixture attribute so that you can right click any fixture or test to launch the selenium process just like you would with NUnit or MSTest.
  • Visual Studio 2008/2010 integration: Not to leave the rest of the development world (those without Resharper licenses) in the cold, I have been playing with a custom MSTest extension.  Unfortunately for me, the out-of-the-box functionality for MSTest isn’t enough for my needs – I would really like to control execution of the environment before and after each test, as well as when all tests have finished execution – and there doesn’t seem to be a good hook for this.  It appears that my best option is to implement my own test adapter and plumbing.  I’ll leave the details of my suffering and choking on this to your imagination, or maybe another post.

What about Selenium 2.0, aka WebDriver?  I’m still sitting on the fence on how to adapt the toolkit to the new API but haven’t had a lot of time to play with it since my current project is a very thick client (no web).  I am interested to hear your thoughts, but my immediate reaction is to use a parameterized approach:

[WebTest]
public void UsingDefaultBrowser(WebDriver browser)
{
}

Other Ramblings

I won’t bore you to death with the details of the last few months – my twitter stream can do that – but expect to hear me spout more on WPF / Prism / Unity, TDD, Context/Specification soon.

In the meantime, happy coding.

Tuesday, February 09, 2010

Running code in a separate AppDomain

Suppose you’ve got a chunk of code that you need to run as part of your application but you’re concerned that it might bring down your app or introduce a memory leak.  Fortunately, the .NET runtime provides an easy mechanism to run arbitrary code in a separate AppDomain.  Not only can you isolate all exceptions to that AppDomain, but when the AppDomain unloads you can reclaim all the memory that was consumed.
Here’s a quick walkthrough that demonstrates creating an AppDomain and running some isolated code.

Create a new AppDomain

First we’ll create a new AppDomain based off the information of the currently running AppDomain.

AppDomainSetup currentSetup = AppDomain.CurrentDomain.SetupInformation;

var info = new AppDomainSetup()
              {
                  ApplicationBase = currentSetup.ApplicationBase,
                  LoaderOptimization = currentSetup.LoaderOptimization
              };

var domain = AppDomain.CreateDomain("Widget Domain", null, info);

Unwrap your MarshalByRefObject

Next we’ll create an object in that AppDomain and unwrap a handle to it so that we can control the code in the remote AppDomain.  It’s important to make sure the object you’re creating inherits from MarshalByRefObject and is marked as serializable.  If you forget this step, the entire object will serialize over to the original AppDomain and you lose all isolation.

string assemblyName = "AppDomainExperiment";
string typeName = "AppDomainExperiment.MemoryEatingWidget";

IWidget widget = (IWidget)domain.CreateInstanceAndUnwrap(assemblyName, typeName);

Unload the domain

Once we’ve finished with the object, we can broom the entire AppDomain which frees up all resources attached to it.  In the example below, I’ve deliberately created a static reference to an object to prevent it from going out of scope.

AppDomain.Unload(domain);

Putting it all together

Here’s a sample that shows all the moving parts.

namespace AppDomainExperiment
{
    using System;
    using System.Collections.Generic;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class AppDomainLoadTests
    {
        [TestMethod]
        public void RunMarshalByRefObjectInSeparateAppDomain()
        {
            Console.WriteLine("Executing in AppDomain: {0}", AppDomain.CurrentDomain.Id);
            WriteMemory("Before creating the runner");

            using(var runner = new WidgetRunner("AppDomainExperiment",
                                                "AppDomainExperiment.MemoryEatingWidget"))
            {

                WriteMemory("After creating the runner");

                runner.Run(Console.Out);

                WriteMemory("After executing the runner");
            }

            WriteMemory("After disposing the runner");
        }

        private static void WriteMemory(string where)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            long memory = GC.GetTotalMemory(false);

            Console.WriteLine("Memory used '{0}': {1}", where, memory.ToString());
        }
    }

    public interface IWidget
    {
        void Run(TextWriter writer);
    }

    public class WidgetRunner : IWidget, IDisposable
    {
        private readonly string _assemblyName;
        private readonly string _typeName;
        private AppDomain _domain;

        public WidgetRunner(string assemblyName, string typeName)
        {
            _assemblyName = assemblyName;
            _typeName = typeName;
        }

        #region IWidget Members

        public void Run(TextWriter writer)
        {
            AppDomainSetup currentSetup = AppDomain.CurrentDomain.SetupInformation;

            var info = new AppDomainSetup()
                          {
                              ApplicationBase = currentSetup.ApplicationBase,
                              LoaderOptimization = currentSetup.LoaderOptimization
                          };

            _domain = AppDomain.CreateDomain("Widget Domain", null, info);

            var widget = (IWidget)_domain.CreateInstanceAndUnwrap(_assemblyName, _typeName);

            if (!(widget is MarshalByRefObject))
            {
                throw new NotSupportedException("Widget must be MarshalByRefObject");
            }
            widget.Run(writer);
        }

        #endregion

        #region IDisposable Members

        public void Dispose()
        {
            GC.SuppressFinalize(this);
            AppDomain.Unload(_domain);
        }

        #endregion
    }

    [Serializable]
    public class MemoryEatingWidget : MarshalByRefObject, IWidget, IDisposable
    {
        private IList<string> _memoryEater;

        private static IWidget Instance;

        #region IWidget Members

        public void Run(TextWriter writer)
        {
            writer.WriteLine("Executing in AppDomain: {0}", AppDomain.CurrentDomain.Id);

            _memoryEater = new List<string>();

            // create some really big strings
            for(int i = 0; i < 100; i++)
            {
                var s = new String('c', i*100000);
                _memoryEater.Add(s);
            }

            // THIS SHOULD PREVENT THE MEMORY FROM BEING GC'd
            Instance = this;
        }

        #endregion

        #region IDisposable Members

        public void Dispose()
        {
            
        }

        #endregion
    }
}

Running the test shows the following output:

Executing in AppDomain: 2
Memory used 'Before creating the runner': 569060
Memory used 'After creating the runner': 487508
Executing in AppDomain: 3
Memory used 'After executing the runner': 990525340
Memory used 'After disposing the runner': 500340

Based on this output, the main takeaway is that the memory is reclaimed when the AppDomain is unloaded.  Why don’t the numbers match up at the beginning and end?  It’s one of those mysteries of the managed garbage collector; it reminds me of my favorite Norm Macdonald joke from SNL:
“Who are safer drivers? Men, or women?? Well, according to a new survey, 55% of adults feel that women are most responsible for minor fender-benders, while 78% blame men for most fatal crashes. Please note that the percentages in these pie graphs do not add up to 100% because the math was done by a woman. [Crowd groans.] For those of you hissing at that joke, it should be noted that that joke was written by a woman. So, now you don't know what the hell to do, do you? [Laughter] Nah, I'm just kidding, we don't hire women”
Happy Coding.


Monday, February 08, 2010

Twelve Days of Code – Wrap up

Well, it’s been a very long twelve days indeed, and I accomplished more than I thought I would.  But alas, all good things must come to an end, so after a short hiatus on the blog I’m back to close out the Twelve Days of Code series for 2009.

For your convenience, here’s a list of the posts:

I want to thank all those who showed interest in the concept and if there are folks out there who were following along at home, please drop me a line or a comment.

For those interested in seeing some of the .NET 4.0 code and extending my work, the code is available for download.

I may pick up the experiment again once the next release candidate for Visual Studio is released.


Tuesday, January 12, 2010

The Three Step Developer Flow

A long time ago, a mentor of mine passed on some good advice for developers that has stuck well with me: “Make it work.  Make it right.  Make it fast.”  While this simple mantra is likely influenced by Donald Knuth’s famous and misquoted statement that “premature optimization is the root of all evil,” it’s more about how a developer should approach development altogether.

Breaking it down…

What I’ve always loved about this simple advice is that if a developer takes the steps out of order, such as putting emphasis on design or performance, there’s a very strong possibility that the code will never work.

Make it work…

Developers should take the most pragmatic solution possible to get the solution to work.  In some cases this should be considered prototype code that should be thrown away before going into production.  Sadly, I’m sure that 80% of all production code is prototype code with unrealized design.

Make it right…

Now that you know how to get the code to work, take some time to get it into a shape that you can live with.  Interestingly enough, emphasis should not be placed on designing for performance at this point.  If you can’t get to this stage, it should be considered technical debt to be resolved later.

Make it fast….

At this point you should have working code that looks clean and elegant, but how does it stack up when it’s integrated with production components or put under load?  If you spent any time in the last two steps optimizing the code to handle the load of a thousand users when it’s only called once in the application, you may have wasted your time and optimized prematurely.  To truly know, code should be examined under a profiler to determine if it meets the performance goals of the application.  This is all a part of embedding a “culture of performance” into your organization.

Aligned with Test Driven Development

It’s interesting that this concept overlaps with Test Driven Development’s mantra “Red, Green, Refactor” quite well.  Rather than developing prototype code as a console app, I write tests to prove that it works.  When it’s time to clean up the code and make it right, I’m refactoring both the tests and the code in small increments – after each change, I can verify that it still works. 

Later, if we identify performance issues with the code, I can use the tests as production assets to help me understand what the code needs to do.  This provides guidance when ripping out chunks of poorly performing code.

Does following either the “red / green / refactor” or “make it work / right / fast” mantra mean that I don’t incorporate best practices or obvious implementations when writing the code? Hardly. I’ll write what I think needs to be written, but it’s important not to get carried away.

Write tests.  Test often.


Thursday, January 07, 2010

Twelve Days of Code – Unity Framework

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the seventh in this series. This post focuses on using the Microsoft Unity Framework in our .NET 4.0 application.

This post assumes that you are familiar with the last few posts and have a working understanding of Inversion of Control.  If you’re new to this series, please check out some of the previous posts and then come back.  If you’ve never worked with Inversion of Control, it’s primarily about decoupling objects from their implementation (Factory Pattern, Dependency Injection, Service Locator) – but the best place to start is probably here.

Overview

Our goal for this post is to remove as many hard-coded references to types as possible.  Currently, the constructor of our MainWindow initializes all the dependencies of the Model and then manually sets the DataContext.  We need to pull this out of the constructor and decouple the binding of TaskApplicationViewModel to ITaskApplication.

Initializing the Container

We’ll introduce our inversion of control container in the Application’s OnStartup event.  The container’s job is to provide type location and object lifetime management for our objects, so all the hard-coded initialization logic in the MainWindow constructor is registered here.

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        IUnityContainer container = new UnityContainer();
        
        container.RegisterType<ITaskApplication, TaskApplicationViewModel>();
        container.RegisterType<ITaskSessionController, TaskSessionController>();
        container.RegisterType<IAlarmController, TaskAlarmController>();
        container.RegisterType<ISessionRepository, TaskSessionRepository>();

        Window window = new MainWindow();
        window.DataContext = container.Resolve<ITaskApplication>();
        window.Show();

        
        base.OnStartup(e);
    }
}

Also note, we’re initializing the MainWindow with our data context and then displaying the window.  This small change means that we must remove the MainWindow.xaml reference in the App.xaml (otherwise we’ll launch two windows).

Next Steps

Aside from the simplified object construction, it would appear that the above code doesn’t buy us much: we have roughly the same number of lines of code and we are simply delegating object construction to Unity.  Alternatively, we could move the hard-coded registrations to a configuration file (though that’s not my preference here) – a rough sketch of that alternative follows.
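
For what it’s worth, here is a rough sketch of what that alternative might look like – assuming Unity 2.0’s configuration section, its LoadConfiguration() extension from the Microsoft.Practices.Unity.Configuration namespace, and hypothetical assembly-qualified type names:

<configSections>
  <section name="unity"
           type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
</configSections>
<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
  <container>
    <!-- hypothetical type and assembly names -->
    <register type="Pomodoro.Core.ITaskApplication, Pomodoro.Core"
              mapTo="Pomodoro.Shell.TaskApplicationViewModel, Pomodoro.Shell" />
    <!-- remaining registrations follow the same pattern -->
  </container>
</unity>

The container would then be populated with container.LoadConfiguration() in place of the RegisterType calls above.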

In the next post, we’ll see how using Prism’s Bootstrapper and Module configuration allows us to move this logic into their own modules.



Saturday, January 02, 2010

Twelve Days of Code – Entity Framework 4.0

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the sixth in this series. Today I’ll be adding some persistence logic to the Pomodoro application.

I should point out that I’ve never been a huge fan of object-relational mapping tools, and while I’ve done a few tours with SQL Server, I haven’t played much with SQLite.  I’ve heard good things about the upcoming version of the Entity Framework in .NET 4.0, so this post gives me a chance to play with both.

Getting Ready

As SQLite isn’t one of the default providers supported by Visual Studio 2010 (Beta 2), I downloaded and installed SQLite.  The installation adds the SQLite libraries to the Global Assembly Cache, and adds Designer Support for connecting to a SQLite database through the Server Explorer.  The installation and GAC’d assemblies may prove to be an issue later when we want to deploy the application, but we’ll worry about that later.

Creating a Session Repository

So far, the project structure has a “Core” and “Shell” project, where the “Core” project contains the central interfaces for the application.  Since the ITaskSessionController already has the responsibility of handling the starting and stopping of sessions, it is the ideal candidate to interact with an ISessionRepository for recording these activities against a persistent store.

To handle auditing of sessions, I created an ISessionRepository interface which lives in the Core library:

public interface ISessionRepository
{
    void RecordCompletion(ITaskSession session);
    void RecordCancellation(ITaskSession session);
}

Although we don't have an implementation for this interface, we do know how an object with this signature will be used by the TaskSessionController.  In anticipation of these changes, we add tests to the TaskSessionController that verify it communicates with its dependency:

public class TaskSessionControllerSpecs : SpecFor<TaskSessionController>
{
  // ... initial context setup with Mock ISessionRepository
  // omitted for clarity

  public class when_a_session_completes : TaskSessionControllerSpecs
  {
     // omitted for clarity

     [TestMethod]
     public void ensure_activity_is_recorded_in_repository()
     {
         repositoryMock.Verify(
            r => r.RecordCompletion(
                    It.Is<ITaskSession>( s => s == session ) ) );
     }
  }
}

To ensure the tests pass, we extend the TaskSessionController to take an ISessionRepository in its constructor and add the appropriate implementation.  Naturally, because the constructor of the TaskSessionController has changed, we adjust the fixture setup so that the code will compile.  Below is a snippet of the modified TaskSessionController:

public class TaskSessionController : ITaskSessionController
{
    public TaskSessionController(ISessionRepository repository)
    {
        SessionRepository = repository;
    }

    protected ISessionRepository SessionRepository
    {
        get; set;
    }

    public void Finish(ITaskSession session)
    {
        session.Stop();

        SessionRepository.RecordCompletion(session);
    }

    // ...omitted for clarity
}

Adding ADO.NET Entity Framework to the Project

While we could add the implementation of the ISessionRepository into the Core library, I’m going to add a new library Pomodoro.Data where we’ll add the Entity Framework model.  This strategy allows us to extend the Core model and provides us with some freedom to create alternate persistence strategies simply by swapping out the Pomodoro.Data assembly.

Once the project is created, we add the Entity Framework to the project using the Add New Item “ADO.NET Entity Data Model” template and follow the wizard:

Pomodoro.Data

Note that the wizard adds the appropriate references to the project automatically. 

Since we don’t have a working database, we’ll choose to create an Empty Model.  Later on, we’ll generate the database from our defined model.

empty-model 

Creating a Data Model

One of the new features of the Entity Framework 4.0 is that it allows you to bind to an existing data model.  Although the TaskSession could be considered as a candidate for an existing model, it doesn’t fit the bill cleanly – Sessions represent countdown timers and they don’t track the final outcome.  Instead, we’ll use the default behavior of the framework and manually generate a model class, TaskSessionRecord:

add-new-entity

For our auditing purposes, we only need to record an ID, Start and End Times and whether the session was completed or cancelled.

task-session-record

Creating the Database from the Model

After the model is complete, we generate the database from the model:

  1. Right click the designer and choose “Model Browser”
  2. In the Data Store, choose “Generate Database from Model”
  3. Create a new database connection.  In our case, we specify the SQLite provider.
  4. Finish out the wizard by clicking Next and Finish.

new-database-connection

The wizard produces the Database and the matching DDL file to generate the tables.  Note that SQLite must be installed in order to have it appear as an option in the wizard.

Unfortunately, I wasn’t able to create the SQLite database using any of the tools within Visual Studio.  Instead, I cheated and manually created the TaskSessionRecord table.  We’ll hang onto the generated DDL file because we may want to programmatically generate the database at some point (a rough sketch of that idea follows).  For the time being, I’ll also cheat and copy the database to the bin\Debug folder.
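
If we ever do generate the database programmatically, a minimal sketch might look like the following – assuming System.Data.SQLite and a DDL script named Pomodoro.edmx.sql next to the executable (both names are hypothetical):

using System.Data.SQLite;
using System.IO;

public static class DatabaseBootstrapper
{
    public static void EnsureDatabase()
    {
        // System.Data.SQLite creates the database file on open if it doesn't exist
        using (var connection = new SQLiteConnection("data source=Pomodoro.sqlite"))
        {
            connection.Open();

            // replay the DDL that the Entity Framework wizard generated
            string ddl = File.ReadAllText("Pomodoro.edmx.sql");
            using (var command = new SQLiteCommand(ddl, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}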

Implementing the Repository

The repository implementation is fairly straightforward.  We simply instantiate the object context (we specified part of the name when we added the ADO.NET Entity Data Model to the project), add a new TaskSessionRecord to the EntitySet and then save the changes to commit the transaction:

namespace Pomodoro.Data
{
    public class TaskSessionRepository : ISessionRepository
    {
        public void RecordCompletion(ITaskSession session)
        {
            using (var context = new PomodoroDataContainer())
            {
                context.TaskSessionRecords.AddObject(new TaskSessionRecord(session, true));
                context.SaveChanges();
            }
        }

        public void RecordCancellation(ITaskSession session)
        {
            using (var context = new PomodoroDataContainer())
            {
                context.TaskSessionRecords.AddObject(new TaskSessionRecord(session, false));
                context.SaveChanges();
            }
        }
    }
}

Note that to simplify the code, I extended the auto-generated TaskSessionRecord class to provide a convenience constructor.  Since the auto-generated class is marked as partial, the convenience constructor is placed in its own file.  Because defining a constructor stops the compiler from generating the implicit parameterless one, and some of the existing generated code requires an empty constructor, we must also include a default constructor.

public partial class TaskSessionRecord
{
    // needed to satisfy some of the existing generated code
    public TaskSessionRecord()
    {
    }

    public TaskSessionRecord(ITaskSession session, bool complete)
    {
        Id = session.Id;
        StartTime = session.StartTime;
        EndTime = session.EndTime;
        Complete = complete;
    }
}

Integrating into the Shell

To integrate the new SessionRepository into the Pomodoro application we need to add the database, the Pomodoro.Data assembly, and the appropriate configuration settings.  For the time being, we’ll add a reference to the Pomodoro.Data library in the Shell application – this strategy may change if we introduce a composite application pattern such as Prism or MEF.  For brevity’s sake, I’ll manually copy the database into the bin\Debug folder.

The connection string settings appear in the app.config like so:

<connectionStrings>
  <!-- formatted for readability -->
  <add name="PomodoroDataContainer" 
       connectionString="metadata=res://*/PomodoroData.csdl|res://*/PomodoroData.ssdl|res://*/PomodoroData.msl;
                    provider=System.Data.SQLite;
                    provider connection string=&quot;data source=Pomodoro.sqlite&quot;" 
       providerName="System.Data.EntityClient" />
</connectionStrings>

One Last Gotcha

As the solution is compiled against .NET Framework 4.0 and our SQLite assemblies are compiled against .NET 2.0, we receive a really nasty error when the System.Data.SQLite assembly loads into the AppDomain:

Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information

We solve this problem by adding useLegacyV2RuntimeActivationPolicy="true" to the app.config:

<startup useLegacyV2RuntimeActivationPolicy="true">
  <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
</startup>

Next Steps

In the next post, we’ll look at adding Unity as a dependency injection container to the application.
