Monday, December 13, 2010

Unit Test Review Checklist

Whenever someone mentions that they want to bring a code-review process into a project, my first inclination is to have teams review the tests.  Tests should be considered production assets that describe the purpose and responsibilities of the code, and developers should be making sure that their tests satisfy this goal.  If the tests look good and they cover the functionality that is required, I don’t see much point in dwelling on the implementation: if the implementation sucks, you can refactor freely assuming you have proper coverage from your tests.

A unit test review process is a great way to ensure that developers are writing useful tests that will help deliver high quality code that is flexible to change.  As code-review processes have the tendency to create stylistic debates within a development group, having a checklist of the 5-10 points to look for is a great way to keep things moving in the right direction.

Here are the criteria I’d like to see in a unit test.  Hopefully you’ll find them useful, too.


Is the test well named?

  • Does the test name clearly represent the functionality being tested?
  • Will someone with little or no knowledge of the component be able to decipher why the test exists?

Is the test independent?

  • Does the test represent a single unit of work?
  • Can the test be executed on its own or is it dependent on the outcome of tests that preceded it?  For example, a fixture that shares state between tests may inadvertently include side-effects.
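To make the shared-state smell concrete, here’s a small sketch (the fixture and names are made up, not from any real project): the first test only passes if nothing else has touched the shared list, while the second arranges its own state and can run in any order.

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderProcessorSpecs // hypothetical fixture
{
    // Shared, mutable state between tests: a smell. Any test that
    // reads this list silently depends on what ran before it.
    private static readonly List<string> SharedItems = new List<string>();

    [TestMethod]
    public void Adding_An_Item_Increases_The_Count()
    {
        SharedItems.Add("widget");
        Assert.AreEqual(1, SharedItems.Count); // fails if another test added items first
    }

    [TestMethod]
    public void Adding_An_Item_To_A_Fresh_List_Increases_The_Count()
    {
        var items = new List<string>(); // arranged by this test alone
        items.Add("widget");
        Assert.AreEqual(1, items.Count); // passes regardless of ordering
    }
}
```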
 

Is the test reliable?

  • Does the test produce the same outcome consistently?
  • Does the test run without manual intervention to determine a successful outcome?

Is the test durable?

  • Is the test designed so that it isn’t susceptible to changes in other parts of the application?  For example:
    • Does the test have complex or lengthy setup?
    • Will changes in the subject's dependencies lead to breakages?

Are the assertions valid?

  • Does the test contain assertions that describe the functionality being tested?
  • Do the assertions include helpful error messaging that describes what should have happened if the test fails?
  • Are any assertions redundant, such as testing features of the CLR (constructors or get/set properties)?
  • Are the assertions specific to this test or are they duplicated in other tests?
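As a sketch of these points (InvoiceCalculator is a made-up subject, included so the example stands alone): the assertion carries a message that explains the expectation, while the commented-out assertion merely re-tests a get/set property the CLR already provides.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical subject, included so the example stands alone.
public class InvoiceCalculator
{
    public InvoiceCalculator(decimal taxRate) { TaxRate = taxRate; }
    public decimal TaxRate { get; private set; }
    public decimal Total(decimal subtotal) { return subtotal * (1 + TaxRate); }
}

[TestClass]
public class InvoiceCalculatorSpecs
{
    [TestMethod]
    public void Total_Includes_Tax()
    {
        var calculator = new InvoiceCalculator(0.10m);

        decimal total = calculator.Total(100m);

        // The message describes what should have happened if this fails.
        Assert.AreEqual(110m, total,
            "Total should include 10% tax on the subtotal.");

        // Redundant: this only re-tests a CLR get/set property.
        // Assert.AreEqual(0.10m, calculator.TaxRate);
    }
}
```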
 

Is the test well structured and documented?

  • Does the test contain sufficient comments?
  • Does the test highlight the subject under test and corresponding functionality being exercised?
  • Is there a clear indication of the arrange/act/assert pattern, such that the purpose of the arrangement makes sense, the action is easily identifiable, and the assertions are clear?
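A minimal sketch of that arrange/act/assert layout (ShoppingCart is a hypothetical subject, defined inline so the example stands alone):

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical subject, included so the example stands alone.
public class ShoppingCart
{
    private readonly List<string> _items = new List<string>();
    public int Count { get { return _items.Count; } }
    public void Add(string item) { _items.Add(item); }
    public void Remove(string item) { _items.Remove(item); }
}

[TestClass]
public class ShoppingCartSpecs
{
    [TestMethod]
    public void Removing_The_Last_Item_Empties_The_Cart()
    {
        // arrange: only the state that matters to this test
        var cart = new ShoppingCart();
        cart.Add("book");

        // act: a single, easily identified action on the subject
        cart.Remove("book");

        // assert: a clear expectation about the outcome
        Assert.AreEqual(0, cart.Count,
            "Cart should be empty after its only item is removed.");
    }
}
```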
 

Is the test isolating responsibilities of the subject?

  • Does the test make any assumptions about the internals of dependencies that aren't immediately related to the subject?
  • Is the test inaccurately or indirectly testing a responsibility or implementation of another object?  If so, move the test to the appropriate fixture.

Are all scenarios covered?

  • Does the fixture test all paths of execution? Not just for the code that has been written, but also for scenarios that may have been missed:
    • invalid input?
    • failures in dependencies?

Does the test handle resources correctly?

  • If the test utilizes external dependencies (databases, static resources), is the state of these dependencies returned to a neutral state?
  • Are resources allocated efficiently to ensure that the test runs as fast as possible?
    • Is setup/teardown used effectively?
    • Are there any usages of Thread.Sleep()?
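One possible alternative to Thread.Sleep() is to block on a synchronization primitive that the subject signals, waiting only as long as the work actually takes (Worker here is a made-up subject for illustration):

```csharp
using System;
using System.Threading;

// Hypothetical subject that completes its work on a background thread.
public class Worker
{
    public readonly ManualResetEvent Completed = new ManualResetEvent(false);

    public void Start()
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // ... do the real work here ...
            Completed.Set(); // signal the instant the work is done
        });
    }
}

// In the test, block only as long as the work actually takes, with a
// timeout, rather than guessing a duration with Thread.Sleep():
//
//   var worker = new Worker();
//   worker.Start();
//   bool finished = worker.Completed.WaitOne(TimeSpan.FromSeconds(5));
//   Assert.IsTrue(finished, "Worker should complete within five seconds.");
```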
 

Is the complexity of the test balanced?

Tests need to strike a balance between Don't Repeat Yourself and clear examples of how to use the subject.

  • Is the test too complicated or difficult to follow?
  • Is there a lot of duplicated code that would lead to time-consuming effort if the subject changed?

What do you think?  Are there criteria that you’d add?  Things you disagree with?

Happy coding.

Monday, October 18, 2010

Callbacks with Moq

My current project is using Moq as our test mocking framework. Although I've been a fan of RhinoMocks for several years, I've found that, in general, they both support the same features, but I'm starting to think I like Moq's syntax better. I'm not going to go into an in-depth comparison of the mocking frameworks, as that's well documented elsewhere. Instead, I want to zero in on a feature that I've had my eye on for some time now.

Both RhinoMocks and Moq have support for a peculiar feature that lets you invoke a callback method in your test code when a method on your mock is called. This feature seems odd to me because in the majority of cases a mock is either going to return a value or throw an exception - surely invoking some arbitrary method in your test must represent a very small niche requirement.

According to the RhinoMocks documentation, the callback takes a delegate that returns true or false. From this it's clear that you would use this delegate to inspect the method's incoming parameters and return false if they didn't meet your expectations. However, as Ayende alludes, this is a powerful feature that can easily be abused.

Moq provides this feature too, but the syntax is different. Rather than requiring a delegate Func<bool>, Moq's Callback mimics the signature of the mocked method. From this syntax it's not obvious that the callback should be used for validating inbound parameters, which suggests that it could be abused, but it also implies freedom for the test author to do other things. Granted, this can get out of control and be abused, but perhaps a level of discretion about its usage is also implied?

Here are a few examples where Callbacks have been helpful:

Validating inbound method parameters

The best example I can imagine for inspecting an inbound parameter is for a method that has no return value. For example, sending an object to a logger. If we were to assume that validation logic of the inbound parameter is the responsibility of the logger, we would only identify invalid arguments at runtime. Using the callback technique, we can write a test to enforce that the object being logged meets minimum validation criteria.

[TestMethod]
public void Ensure_Exception_Is_Logged_Properly()
{
  Exception exception = null;

  Mock.Get(Logger)
      .Setup( l => l.Error(It.IsAny<Exception>()) )
      .Callback<Exception>( ex => exception = ex );

  Subject.DoSomethingThatLogsException();

  Assert.AreEqual("Fatal error doing something", exception.Message);
}

Changing state of inbound parameters

Imagine a WPF application that uses the MVVM pattern, where we need to launch a view model as a modal dialog. The user can make changes to the view model in the dialog and click Ok, or they can click Cancel. If the user clicks Ok, the view model state needs to reflect their changes. However, if they click Cancel, any changes made need to be discarded.

Here's the code:

public class MyViewModel : ViewModel
{
  /* snip */

  public virtual bool Show()
  {
      var clone = this.Clone();
      var popup = new PopupWindow();
      popup.DataContext = clone;

      if (_popupService.ShowModal(popup))
      {
          CopyStateFrom(clone);
          return true;
      }
      return false;
  }

}

Assuming that the popup service is a mock object that returns true when the Ok button is clicked, how do I test that the contents of the popup dialog are copied back into the subject? How do I guarantee that changes aren't applied if the user clicks cancel?

The challenge with the above code is that the clone is a copy of my subject. I have no means of intercepting or mocking this object unless I introduce a mock ObjectCloner into the subject (which would be ridiculous, btw). In addition to this, the changes to the view model happen while the dialog is shown.

While the test looks unnatural, Callbacks fit this scenario really well.

[TestMethod]
public void When_User_Clicks_Ok_Ensure_Changes_Are_Applied()
{
  Mock.Get(PopupService)
      .Setup( p => p.ShowModal(It.IsAny<PopupWindow>()) )
      .Callback<PopupWindow>( ChangeViewModel )
      .Returns(true);

  var vm = new MyViewModel(PopupService)
              {
                  MyProperty = "Unchanged"
              };

  vm.Show();

  Assert.AreEqual("Changed", vm.MyProperty);
}

private void ChangeViewModel(PopupWindow window)
{
  var viewModel = window.DataContext as MyViewModel;
  viewModel.MyProperty = "Changed";
}

The key distinction here is that changes that occur to the popup are in no way related to the implementation of the popup service. The changes in state are a side-effect of the object passing through the mock. We could have rolled our own mock to simulate this behavior, but Callbacks make this unnecessary.

Conclusion

All in all, Callbacks are an interesting feature that allow us to write sophisticated functionality for our mock implementations. They provide a convenient interception point for parameters that would normally be difficult to get under the test microscope.

How are you using callbacks? What scenarios have you found where callbacks were necessary?


Tuesday, October 12, 2010

Working with Existing Tests

You know, it’s easy to forget the basics after you’ve been doing something for a while.  Such is the case with TDD – I don’t have to remind myself of the fundamental “red, green, refactor” mantra every time I write a new test, it’s just baked in.  When it’s time to write something new, the good habits kick in and I write a test.  After all, this is what the Driven part of Test Driven Development is about: we drive our development through the creation of tests.

The funny thing is, the goal of TDD isn’t to produce tests.  Tests are merely a by-product of the development of the code, and having tests that demonstrate that the code works is one of the benefits.  Once they’re written, we forget about them and move on – we only return to them if something unexpected broke.

Wait.  Why are they breaking?  Maybe we forgot something, somewhere.

The Safety Net Myth

One of the reasons that tests break is because there’s a common perception that once the code is written, we no longer need the tests to drive development.  “We’ve got tests, so let’s just see what breaks after I make these changes…”

This strategy works when you want to try “what-if” scenarios or simple, safe refactorings, but it falls flat for long-term coding sessions.  The value of the tests diminishes quickly the longer the coding session lasts.  Simply put, tests are not safety nets – if you go off making changes for a few days you’re only going to find that the tests get in the way, as they don’t represent your changes and your code won’t compile.

This may seem rudimentary, but let’s go back and review the absolute basics of TDD methodology:

  1. Start by writing a failing test. (RED)
  2. Implement the code necessary to make that test pass. (GREEN)
  3. Remove any duplication and clean it up.  (REFACTOR)

It’s easy to forget the basics.  The very first step is to make sure we have a test that doesn’t pass before we do any work, and this is easily overlooked when we already have tests for that functionality.

Writing tests for new functionality

If you want to introduce new functionality to your code base, challenge your team to introduce those changes to the tests first.  This may seem idealistic to some, especially if it’s been a long time since the tests were written or if no one on the team is familiar with the tests or their value.

Here’s a ridiculously simple tip:

  1. Locate the code you think may need to change for this feature.
  2. Introduce a fatal error into the code.  Maybe comment out the return value and return null, or throw an exception.
  3. Run the tests.

With luck, all the areas of your tests that are impacted by this code are broken.  Review these tests and ask yourself:

  • Does this test represent a valid requirement after I introduce my change?  If not, it’s safe to remove it.
  • How does this test relate to the change that I’m introducing?  Would my change alter the expected results of this test?  If yes, change the expected results. These tests should fail after you remove the fatal flaw you introduced moments ago.
  • Do any of these tests represent the new functionality I want to introduce?  If not, write that test now.

(If nothing breaks, you’ve got a different problem.  Do some research on what it would take to get this code under a test, and write tests for new functionality.)

Conclusion

The duct tape programmer will argue that you can’t make an omelette without breaking some eggs, which is true – we should have the courage to stand up and fix things that are wrong.  But I’d argue that you must do your homework first - if you don’t check for other ingredients, you’re just making scrambled eggs. 

In my experience, long term refactorings that don’t leverage the tests are a recipe for test-abandonment; your tests and code should always be moments from being able to compile.  The best way to keep the tests valid is to remember the basics – they should be broken before you start introducing changes.


Tuesday, October 05, 2010

Writing Easier to Understand Tests

For certain, the long-term success of any project that leverages tests hinges on tests that are easy to understand and provide value.

For me, readability is the gateway to value. I want to be able to open the test, and BOOM! Here’s the subject, here’s the dependencies, this is what I’m doing, and here’s what I expect it to do. If I can’t figure it out within a few seconds, I start to question the value of the tests. And that’s exactly what happened to me earlier this week.

The test I was looking at had some issues, but the developer who wrote it had their heart in the right place and made attempts to keep it relatively straight-forward. It was using a Context-Specification style of test and was using Moq to Mock out the physical dependencies, but I got tied up in the mechanics of the test. I found that the trouble I was having was determining which Mocks were part of the test versus Mocks that were in the test to support related dependencies.

Below is an example of a similar test and the steps I took to clean it up. Along the way I found something interesting, and I hope you do, too.

Original (Cruft Flavor)

Here’s a sample of the test. For clarity’s sake, I only want to illustrate the initial setup of the test, so I’ve omitted the actual test part. Note, I’m using a flavor of context specification that I’ve blogged about before; if it seems like a strange syntax, you may want to read up.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected Mock<IDataProvider> _mockDataProvider;
   protected Mock<IUserContext> _mockUserContext;
   protected IDataProvider _dataProvider;
   protected IUserContext _userContext;

   public override void Context()
   {
       _mockDataProvider = new Mock<IDataProvider>();
       _mockUserContext = new Mock<IUserContext>();

       _userContext = _mockUserContext.Object;
       _dataProvider = _mockDataProvider.Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(_dataProvider,_userContext);
   }

   // public class SubContext : TemplateResolverSpecs
   // etc
}

This is a fairly simple example and certainly those familiar with Moq’s syntax and general dependency injection patterns won’t have too much difficulty understanding what’s going on here. But you have to admit that while this is a trivial example there’s a lot of code here for what’s needed – and you had to read all of it.

The Rewrite

When I started to re-write this test, my motivation was to sub-class the test fixture to create different contexts – maybe I would want to create a context where I used Mocks, and another with real dependencies. I started to debate whether it would be wise to put the Mocks in a subclass or in the base when it occurred to me why the test was confusing in the first place: the Mocks are an implementation detail that get in the way of understanding the dependencies of the subject. The Mocks aren’t important at all – it’s the dependencies that matter!

So, here’s the same setup with the Mocks moved out of the way, only referenced in the initialization of the test’s Context.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected IDataProvider DataProvider;
   protected IUserContext UserContext;

   public override void Context()
   {
       DataProvider = new Mock<IDataProvider>().Object;
       UserContext = new Mock<IUserContext>().Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(DataProvider, UserContext);
   }
}

Much better, don’t you think? Note, I’ve also removed the underscore and changed the case on my fields because they’re protected and that goes a long way to improve readability, too.

Where’d my Mock go?

So you’re probably thinking “that’s great, Bryan, but I was actually using those mocks” – and that’s a valid observation, but the best part is you don’t really need them anymore. Moq has a cool feature that lets you obtain the Mock wrapper from the mocked object any time you want, so you only need the mock when it’s time to use it.

Simply use the static Get method on the Mock class to obtain a reference to your mock:

Mock.Get(DataProvider)
   .Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
   .Returns( new TemplateRecord() );

For contrast’s sake, here’s what the original would have looked like:

_mockDataProvider.Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
                 .Returns( new TemplateRecord() );

They’re basically the same, but the difference is I don’t have to remember the name of the variable for the mock anymore. And as an added bonus, our Mock.Setup calls will all line up with the same indentation, regardless of the length of the dependencies’ variable name.

Conclusion

While the above is just an example of how tests can be trimmed down for added readability, my hope is that this readability influences developers to declare their dependencies up front rather than weighing themselves and their tests down with the mechanics of the tests. If you find yourself suddenly requiring a Mock in the middle of the test, or creating Mocks for items that aren’t immediate dependencies, it should serve as a red flag that you might not be testing the subject in isolation and you may want to step-back and re-think things a bit.


Monday, October 04, 2010

Manually adding Resources Files to Visual Studio Projects

Suppose you're moving files between projects and you have to move a Settings or Resource File.  If you drag these files between projects, you’ll notice that Visual Studio doesn’t preserve the relationships between the designer files:

[Image: settings-resource]

Since these are embedded resources, you’ll need a few extra steps to get things sorted out:

  1. Right-click on the Project and choose Unload Project.  Visual Studio may churn for a few seconds.
  2. Once unloaded, Right-click on the project file and choose Edit <Project-Name>.
  3. Locate your resource files and set them up to be embedded resources with auto-generated designer files:
<ItemGroup>
  <None Include="Resources\Settings.settings">
    <Generator>SettingsSingleFileGenerator</Generator>
    <LastGenOutput>Settings.Designer.cs</LastGenOutput>
  </None>
  <Compile Include="Resources\Settings.Designer.cs">
    <AutoGen>True</AutoGen>
    <DependentUpon>Settings.settings</DependentUpon>
    <DesignTimeSharedInput>True</DesignTimeSharedInput>
  </Compile>
  <EmbeddedResource Include="Resources\Resources.resx">
    <Generator>ResXFileCodeGenerator</Generator>
    <LastGenOutput>Resources.Designer.cs</LastGenOutput>
  </EmbeddedResource>
  <Compile Include="Resources\Resources.Designer.cs">
    <AutoGen>True</AutoGen>
    <DependentUpon>Resources.resx</DependentUpon>
    <DesignTime>True</DesignTime>
  </Compile>
</ItemGroup>

Cheers.


Wednesday, August 18, 2010

An update

“The rumors of my death have been largely exaggerated." – Mark Twain.

It’s been forever and a day since my last post, largely due to the size and complexity of my current project, but for those wondering (and to clear the writers block in my head), here’s a quick update:

Selenium Toolkit for .NET

Although there hasn’t been a formal release on my open source project since last November, I have been covertly making minor commits here and there to keep the NUnit dependencies in sync with NUnit’s recent updates.  I may update the NUnit libraries to the latest release (2.5.7) this weekend.  I realize the irony here: the purpose of my project is to provide a simple installer for Selenium and my NUnit addin, forcing you to download the source and compile it is beyond excuse.  Here are some of the things that I have been tinkering with to include in a major release, but I haven’t fully sat down to finish:

  • WiX-based Installation:  I’ve been meaning to drop the out of the box Visual Studio installer project in favor of a WiX-based installer.  The real intent is to install side-by-side versions of my NUnit addin for each version of NUnit installed locally.
  • Resharper 5.0 integration:  The good folks at JetBrains, through their commitment to open source projects, have recognized my open source project and have donated a license of Resharper (read: YOU GUYS ROCK!).  To return the favor, I am looking to produce a custom runner for tests with my WebFixture attribute so that you can right click any fixture or test to launch the selenium process just like you would with NUnit or MSTest.
  • Visual Studio 2008/2010 integration: Not to leave the rest of the development world without Resharper licenses in the cold, I have been playing with a custom MSTest extension.  Unfortunately for me, the out-of-the-box functionality for MSTest isn’t enough for my needs – I would really like to control execution of the environment before and after the test as well as when all tests have finished execution – there doesn’t seem to be a good hook for this.  It appears that my best option is to implement my own test adapter and plumbing.  I’ll leave the details of my suffering and choking on this to your imagination, or maybe another post.

What about Selenium 2.0, aka WebDriver?  I’m still sitting on the fence on how to adapt the toolkit to the new API but haven’t had a lot of time to play with it since my current project is a very thick client (no web).  I am interested to hear your thoughts, but my immediate reaction is to use a parameterized approach:

[WebTest]
public void UsingDefaultBrowser(WebDriver browser)
{
}

Other Ramblings

I won’t bore you to death with the details of the last few months – my twitter stream can do that – but expect to hear me spout more on WPF / Prism / Unity, TDD, Context/Specification soon.

In the meantime, happy coding.

Tuesday, February 09, 2010

Running code in a separate AppDomain

Suppose you’ve got a chunk of code that you need to run as part of your application but you’re concerned that it might bring down your app or introduce a memory leak.  Fortunately, the .NET runtime provides an easy mechanism to run arbitrary code in a separate AppDomain.  Not only can you isolate all exceptions to that AppDomain, but when the AppDomain unloads you can reclaim all the memory that was consumed.
Here’s a quick walkthrough that demonstrates creating an AppDomain and running some isolated code.

Create a new AppDomain

First we’ll create a new AppDomain based off the information of the currently running AppDomain.
AppDomainSetup currentSetup = AppDomain.CurrentDomain.SetupInformation;

var info = new AppDomainSetup()
              {
                  ApplicationBase = currentSetup.ApplicationBase,
                  LoaderOptimization = currentSetup.LoaderOptimization
              };

var domain = AppDomain.CreateDomain("Widget Domain", null, info);

Unwrap your MarshalByRefObject

Next we’ll create an object in that AppDomain and unwrap a remote handle to it so that we can control the code in the remote AppDomain.  It’s important to make sure the object you’re creating inherits from MarshalByRefObject.  If the object is instead only marked as serializable, the entire object will be copied by value over to the original AppDomain and you lose all isolation.
string assemblyName = "AppDomainExperiment";
string typeName = "AppDomainExperiment.MemoryEatingWidget";

IWidget widget = (IWidget)domain.CreateInstanceAndUnwrap(assemblyName, typeName);

Unload the domain

Once we’ve finished with the object, we can broom the entire AppDomain which frees up all resources attached to it.  In the example below, I’ve deliberately created a static reference to an object to prevent it from going out of scope.
AppDomain.Unload(domain);

Putting it all together

Here’s a sample that shows all the moving parts.
namespace AppDomainExperiment
{
    using System;
    using System.Collections.Generic;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class AppDomainLoadTests
    {
        [TestMethod]
        public void RunMarshalByRefObjectInSeparateAppDomain()
        {
            Console.WriteLine("Executing in AppDomain: {0}", AppDomain.CurrentDomain.Id);
            WriteMemory("Before creating the runner");

            using(var runner = new WidgetRunner("AppDomainExperiment",
                                                "AppDomainExperiment.MemoryEatingWidget"))
            {

                WriteMemory("After creating the runner");

                runner.Run(Console.Out);

                WriteMemory("After executing the runner");
            }

            WriteMemory("After disposing the runner");
        }

        private static void WriteMemory(string where)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            long memory = GC.GetTotalMemory(false);

            Console.WriteLine("Memory used '{0}': {1}", where, memory.ToString());
        }
    }

    public interface IWidget
    {
        void Run(TextWriter writer);
    }

    public class WidgetRunner : IDisposable
    {
        private readonly string _assemblyName;
        private readonly string _typeName;
        private AppDomain _domain;

        public WidgetRunner(string assemblyName, string typeName)
        {
            _assemblyName = assemblyName;
            _typeName = typeName;
        }

        #region IWidget Members

        public void Run(TextWriter writer)
        {
            AppDomainSetup currentSetup = AppDomain.CurrentDomain.SetupInformation;

            var info = new AppDomainSetup()
                          {
                              ApplicationBase = currentSetup.ApplicationBase,
                              LoaderOptimization = currentSetup.LoaderOptimization
                          };

            _domain = AppDomain.CreateDomain("Widget Domain", null, info);

            var widget = (IWidget)_domain.CreateInstanceAndUnwrap(_assemblyName, _typeName);

            if (!(widget is MarshalByRefObject))
            {
                throw new NotSupportedException("Widget must be MarshalByRefObject");
            }
            widget.Run(writer);
        }

        #endregion

        #region IDisposable Members

        public void Dispose()
        {
            GC.SuppressFinalize(this);
            AppDomain.Unload(_domain);
        }

        #endregion
    }

    [Serializable]
    public class MemoryEatingWidget : MarshalByRefObject, IWidget
    {
        private IList<string> _memoryEater;

        private static IWidget Instance;

        #region IWidget Members

        public void Run(TextWriter writer)
        {
            writer.WriteLine("Executing in AppDomain: {0}", AppDomain.CurrentDomain.Id);

            _memoryEater = new List<string>();

            // create some really big strings
            for(int i = 0; i < 100; i++)
            {
                var s = new String('c', i*100000);
                _memoryEater.Add(s);
            }

            // THIS SHOULD PREVENT THE MEMORY FROM BEING GC'd
            Instance = this;
        }

        #endregion

        #region IDisposable Members

        public void Dispose()
        {
            
        }

        #endregion
    }
}
Running the test shows the following output:
Executing in AppDomain: 2
Memory used 'Before creating the runner': 569060
Memory used 'After creating the runner': 487508
Executing in AppDomain: 3
Memory used 'After executing the runner': 990525340
Memory used 'After disposing the runner': 500340
Based on this output, the main takeaway is that the memory is reclaimed when the AppDomain is unloaded.  Why don’t the numbers match up at the beginning and end?  It’s one of those mysteries of the managed garbage collector; it reminds me of my favorite Norm Macdonald joke from SNL:
“Who are safer drivers? Men, or women?? Well, according to a new survey, 55% of adults feel that women are most responsible for minor fender-benders, while 78% blame men for most fatal crashes. Please note that the percentages in these pie graphs do not add up to 100% because the math was done by a woman. [Crowd groans.] For those of you hissing at that joke, it should be noted that that joke was written by a woman. So, now you don't know what the hell to do, do you? [Laughter] Nah, I'm just kidding, we don't hire women”
Happy Coding.


Monday, February 08, 2010

Twelve Days of Code – Wrap up

Well it’s been a very long twelve days indeed, and I accomplished more than I thought I would.  But alas, all good things must come to an end, so after a short hiatus on the blog I’m back to close out the Twelve Days of Code series for 2009.

For your convenience, here’s a list of the posts:

I want to thank all those who showed interest in the concept and if there are folks out there who were following along at home, please drop me a line or a comment.

For those interested in seeing some of the .NET 4.0 code and extending my work, the code is available for download.

I may pick up the experiment again once the next release candidate for Visual Studio is released.


Tuesday, January 12, 2010

The Three Step Developer Flow

A long time ago, a mentor of mine passed on some good advice for developers that has stuck well with me: “Make it work.  Make it right.  Make it fast.”  While this simple mantra is likely influenced by Donald Knuth’s famous and often misquoted statement that “premature optimization is the root of all evil,” it’s more about how a developer should approach development altogether.

Breaking it down…

What I’ve always loved about this simple advice is that if a developer takes the steps out of order, such as putting emphasis on design or performance, there’s a very strong possibility that the code will never work.

Make it work…

Developers should take the most pragmatic solution possible to get the solution to work.  In some cases this should be considered prototype code that should be thrown away before going into production.  Sadly, I’m sure that 80% of all production code is prototype code with unrealized design.

Make it right…

Now that you know how to get the code to work, take some time to get it into a shape that you can live with.  Interestingly enough, emphasis should not be placed on designing for performance at this point.  If you can’t get to this stage, it should be considered technical debt to be resolved later.

Make it fast….

At this point you should have working code that looks clean and elegant, but how does it stack up when it’s integrated with production components or put under load?  If you spent any time in the previous two steps optimizing the code to handle the load of a thousand users but it’s only called once in the application, you may have wasted your time and optimized prematurely.  To truly know, code should be examined under a profiler to determine whether it meets the performance goals of the application.  This is all a part of embedding a “culture of performance” into your organization.
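Before reaching for a full profiler, a quick Stopwatch measurement can tell you whether a code path is even worth investigating.  Here’s a minimal sketch; the workload inside is a stand-in, not code from this series:

```csharp
using System;
using System.Diagnostics;

static class TimingSketch
{
    // Times a stand-in workload and returns the elapsed milliseconds.
    public static long Measure()
    {
        var sw = Stopwatch.StartNew();

        // stand-in for the code path under investigation
        long sum = 0;
        for (int i = 0; i < 1000000; i++)
        {
            sum += i;
        }

        sw.Stop();
        Console.WriteLine("Elapsed: {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        Measure();
    }
}
```

If the hot path turns out to be called once per application run, the numbers will make the case against optimizing it far faster than any debate will.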

Aligned with Test Driven Development

It’s interesting that this concept overlaps with Test Driven Development’s mantra “Red, Green, Refactor” quite well.  Rather than developing prototype code as a console app, I write tests to prove that it works.  When it’s time to clean up the code and make it right, I’m refactoring both the tests and the code in small increments – after each change, I can verify that it still works. 

Later, if we identify performance issues with the code, I can use the tests as production assets to help me understand what the code needs to do.  This provides guidance when ripping out chunks of poorly performing code.

Does following either the “red / green / refactor” or “make it work / right / fast” mantra mean that I don’t incorporate best practices or obvious implementation when writing the code? Hardly. I’ll write what I think needs to be written, but it’s important not to get carried away.

Write tests.  Test often.


Thursday, January 07, 2010

Twelve Days of Code – Unity Framework

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the seventh in this series. This post focuses on using the Microsoft Unity Framework in our .NET 4.0 application.

This post assumes that you are familiar with the last few posts and have a working understanding of Inversion of Control.  If you’re new to this series, please check out some of the previous posts and then come back.  If you’ve never worked with Inversion of Control, it’s primarily about decoupling objects from their implementation (Factory Pattern, Dependency Injection, Service Locator) – but the best place to start is probably here.

Overview

Our goal for this post is to remove as many hard-coded references to types as possible.  Currently, the constructor of our MainWindow initializes all the dependencies of the Model and then manually sets the DataContext.  We need to pull this out of the constructor and decouple the binding from TaskApplicationViewModel to ITaskApplication.

Initializing the Container

We’ll introduce our inversion of control container in the Application’s OnStartup event.  The container’s job is to provide type location and object lifetime management for our objects, so all the hard-coded initialization logic in the MainWindow constructor is registered here.

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        IUnityContainer container = new UnityContainer();
        
        container.RegisterType<ITaskApplication, TaskApplicationViewModel>();
        container.RegisterType<ITaskSessionController, TaskSessionController>();
        container.RegisterType<IAlarmController, TaskAlarmController>();
        container.RegisterType<ISessionRepository, TaskSessionRepository>();

        Window window = new MainWindow();
        window.DataContext = container.Resolve<ITaskApplication>();
        window.Show();

        
        base.OnStartup(e);
    }
}

Also note, we’re initializing the MainWindow with our data context and then displaying the window.  This small change means that we must remove the MainWindow.xaml reference in the App.xaml (otherwise we’ll launch two windows).

Next Steps

Aside from the simplified object construction, it would appear that the above code doesn’t buy us much: we have roughly the same number of lines of code and we are simply delegating object construction to Unity.  Alternatively, we could move the hard-coded registrations to a configuration file (though that’s not my preference here).

In the next post, we’ll see how Prism’s Bootstrapper and Module configuration allow us to move this logic into separate modules.



Saturday, January 02, 2010

Twelve Days of Code – Entity Framework 4.0

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the sixth in this series. Today I’ll be adding some persistence logic to the Pomodoro application.

I should point out that I’ve never been a huge fan of object-relational mapping tools, and while I’ve done a few tours with SQL Server I haven’t played much with SQLite.  I’ve heard good things about the upcoming version of the Entity Framework in .NET 4.0, so this post gives me a chance to play with both.

Getting Ready

As SQLite isn’t one of the default providers supported by Visual Studio 2010 (Beta 2), I downloaded and installed SQLite.  The installation adds the SQLite libraries to the Global Assembly Cache, and adds Designer Support for connecting to a SQLite database through the Server Explorer.  The installation and GAC’d assemblies may prove to be an issue later when we want to deploy the application, but we’ll worry about that later.

Creating a Session Repository

So far, the project structure has a “Core” and “Shell” project where the “Core” project contains the central interfaces for the application.  Since the ITaskSessionController already has the responsibility of handling the starting and stopping of sessions, it is the ideal candidate to interact with an ISessionRepository for recording these activities against a persistent store.

To handle auditing of sessions, I created an ISessionRepository interface which lives in the Core library:

public interface ISessionRepository
{
    void RecordCompletion(ITaskSession session);
    void RecordCancellation(ITaskSession session);
}

Although we don't have an implementation for this interface, we do know how an object with this signature will be used by the TaskSessionController.  In anticipation of these changes, we add tests to the TaskSessionController that verify it communicates with its dependency:

public class TaskSessionControllerSpecs : SpecFor<TaskSessionController>
{
  // ... initial context setup with Mock ISessionRepository
  // omitted for clarity

  public class when_a_session_completes : TaskSessionControllerSpecs
  {
     // omitted for clarity

     [TestMethod]
     public void ensure_activity_is_recorded_in_repository()
     {
         repositoryMock.Verify( 
            r => r.RecordCompletion(
                    It.Is<ITaskSession>( (s) => s == session )));
     }
  }
}
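The omitted context setup might look roughly like the following sketch, assuming Moq and that the SpecFor<T> base class exposes hooks for building the context and the subject under test (the hook names EstablishContext, Because and Subject are guesses, not the series’ actual base class):

```csharp
// Sketch of the omitted fixture context, assuming Moq.
// The SpecFor<T> hook names (EstablishContext, Because, Subject) are hypothetical.
public class TaskSessionControllerSpecs : SpecFor<TaskSessionController>
{
    protected Mock<ISessionRepository> repositoryMock;
    protected ITaskSession session;

    protected override TaskSessionController EstablishContext()
    {
        // dynamic mocks stand in for the controller's dependencies
        repositoryMock = new Mock<ISessionRepository>();
        session = new Mock<ITaskSession>().Object;

        return new TaskSessionController(repositoryMock.Object);
    }

    public class when_a_session_completes : TaskSessionControllerSpecs
    {
        protected override void Because()
        {
            // the action whose side-effects the specs verify
            Subject.Finish(session);
        }
    }
}
```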

To ensure the tests pass, we extend the TaskSessionController to take a ISessionRepository in its constructor and add the appropriate implementation.  Naturally, because the constructor of the TaskSessionController has changed, we adjust the fixture setup so that the code will compile.  Below is a snippet of modified TaskSessionController:

public class TaskSessionController : ITaskSessionController
{
    public TaskSessionController(ISessionRepository repository)
    {
        SessionRepository = repository;
    }

    protected ISessionRepository SessionRepository
    {
        get; set;
    }

    public void Finish(ITaskSession session)
    {
        session.Stop();

        SessionRepository.RecordCompletion(session);
    }

    // ...omitted for clarity
}
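The cancellation path isn’t shown above, but by symmetry it would presumably mirror Finish (a sketch; the Cancel method name is an assumption based on the CancelCommand used elsewhere in this series):

```csharp
// Hypothetical counterpart to Finish, recording a cancelled session.
public void Cancel(ITaskSession session)
{
    session.Stop();

    SessionRepository.RecordCancellation(session);
}
```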

Adding ADO.NET Entity Framework to the Project

While we could add the implementation of the ISessionRepository into the Core library, I’m going to add a new library Pomodoro.Data where we’ll add the Entity Framework model.  This strategy allows us to extend the Core model and provides us with some freedom to create alternate persistence strategies simply by swapping out the Pomodoro.Data assembly.

Once the project is created, we add the Entity Framework to the project using the Add New “ADO.NET Entity Data Model” and follow the wizard:

Pomodoro.Data

Note that the wizard adds the appropriate references to the project automatically. 

Since we don’t have a working database, we’ll choose to create an Empty Model.  Later on, we’ll generate the database from our defined model.

empty-model 

Creating a Data Model

One of the new features of the Entity Framework 4.0 is that it allows you to bind to an existing data model.  Although the TaskSession could be considered a candidate for an existing model, it doesn’t fit the bill cleanly – Sessions represent countdown timers and they don’t track the final outcome.  Instead, we’ll use the default behavior of the framework and manually generate a model class, TaskSessionRecord:

add-new-entity

For our auditing purposes, we only need to record an ID, Start and End Times and whether the session was completed or cancelled.

task-session-record

Creating the Database from the Model

After the model is complete, we generate the database from the model:

  1. Right click the designer and choose “Model Browser”
  2. In the Data Store, choose “Generate Database from Model”
  3. Create a new database connection.  In our case, we specify SQLite provider
  4. Finish out the wizard by clicking Next and Finish.

new-database-connection

The wizard produces the Database and the matching DDL file to generate the tables.  Note that SQLite must be installed in order to have it appear as an option in the wizard.

Unfortunately, I wasn’t able to create the SQLite database using any of the tools within Visual Studio.  Instead, I cheated and manually created the TaskSessionRecord table.  We’ll hang onto the generated DDL file because we may want to programmatically generate the database at some point.  For the time being, I’ll cheat and copy the database to the bin\Debug folder.
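If we do eventually generate the database programmatically, the wizard-produced DDL could be executed through the SQLite provider along these lines.  This is a sketch only, assuming the System.Data.SQLite ADO.NET provider; the file names are placeholders for illustration:

```csharp
using System.Data.SQLite;
using System.IO;

// Sketch: create the SQLite database from the wizard-generated DDL file.
// The dbPath/ddlPath file names are assumptions for illustration.
public static class DatabaseInstaller
{
    public static void CreateDatabase(string dbPath, string ddlPath)
    {
        // creates an empty database file on disk
        SQLiteConnection.CreateFile(dbPath);

        using (var connection = new SQLiteConnection("data source=" + dbPath))
        {
            connection.Open();

            using (var command = connection.CreateCommand())
            {
                // run the generated table-creation script
                command.CommandText = File.ReadAllText(ddlPath);
                command.ExecuteNonQuery();
            }
        }
    }
}
```

A call such as `DatabaseInstaller.CreateDatabase("Pomodoro.sqlite", "PomodoroData.edmx.sql")` at first run would remove the need to hand-copy the database into bin\Debug.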

Implementing the Repository

The repository implementation is fairly straightforward.  We simply instantiate the object context (we specified part of the name when we added the ADO.NET Entity Data Model to the project), add a new TaskSessionRecord to the EntitySet and then save the changes to commit the transaction:

namespace Pomodoro.Data
{
    public class TaskSessionRepository : ISessionRepository
    {
        public void RecordCompletion(ITaskSession session)
        {
            using (var context = new PomodoroDataContainer())
            {
                context.TaskSessionRecords.AddObject(new TaskSessionRecord(session, true));
                context.SaveChanges();
            }
        }

        public void RecordCancellation(ITaskSession session)
        {
            using (var context = new PomodoroDataContainer())
            {
                context.TaskSessionRecords.AddObject(new TaskSessionRecord(session, false));
                context.SaveChanges();
            }
        }
    }
}

Note that to simplify the code, I extended the auto-generated TaskSessionRecord class to provide a convenience constructor.  Since the auto-generated class is marked as partial, the convenience constructor is placed in its own file.  Because some of the existing generated code requires a parameterless constructor (which the compiler no longer supplies implicitly once another constructor is defined), we must also include a default constructor.

public partial class TaskSessionRecord
{
    // needed to satisfy some of the existing generated code
    public TaskSessionRecord()
    {
    }

    public TaskSessionRecord(ITaskSession session, bool complete)
    {
        Id = session.Id;
        StartTime = session.StartTime;
        EndTime = session.EndTime;
        Complete = complete;
    }
}

Integrating into the Shell

To integrate the new SessionRepository into the Pomodoro application we need to add the database, the Pomodoro.Data assembly and the appropriate configuration settings.  For the time being, we’ll add a reference to the Pomodoro.Data library to the Shell application – this strategy may change if we introduce a composite application pattern such as Prism or MEF.  For brevity’s sake, I’ll manually copy the database into the bin\Debug folder.

The connection string settings appear in the app.config like so:

<connectionStrings>
  <!-- formatted for readability -->
  <add name="PomodoroDataContainer" 
       connectionString="metadata=res://*/PomodoroData.csdl|res://*/PomodoroData.ssdl|res://*/PomodoroData.msl;
                    provider=System.Data.SQLite;
                    provider connection string=&quot;data source=Pomodoro.sqlite&quot;" 
       providerName="System.Data.EntityClient" />
</connectionStrings>

One Last Gotcha

As the solution is compiled against .NET Framework 4.0 and our SQLite assemblies are compiled against .NET 2.0, we receive a really nasty error when the System.Data.SQLite assembly loads into the AppDomain:

Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information

We solve this problem by adding useLegacyV2RuntimeActivationPolicy="true" to the app.config:

<startup useLegacyV2RuntimeActivationPolicy="true">
  <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
</startup>

Next Steps

In the next post, we’ll look at adding Unity as a dependency injection container to the application.


Thursday, December 31, 2009

Twelve Days of Code – Windows 7 Shell Integration

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the fifth in this series.  Today I’ll cover adding some of the cool new Windows 7 Shell integration features in WPF 4.0.

I’ll be revisiting some of the xaml and object model for this post, so it wouldn’t hurt to read up:

WPF 4.0 offers some nice Windows 7 Shell integration features that are easy to add to your application.

Progress State

Since the Pomodoro application’s primary function is to provide a countdown timer, it seems like a natural fit to use the built-in Windows 7 taskbar progress indicator to show the time remaining.  Hooking it up was a snap.  The progress indicator uses two values, ProgressState and ProgressValue, where ProgressState indicates the progress mode (Error, Indeterminate, None, Normal, Paused) and ProgressValue is a numeric value between 0 and 1.  Two simple converters provide the translation between ViewModel and View: one to control ProgressState and the other to compute ProgressValue.

<Window.TaskbarItemInfo>
    <TaskbarItemInfo
        ProgressState="{Binding ActiveItem, Converter={StaticResource ProgressStateConverter}}"
        ProgressValue="{Binding ActiveItem, Converter={StaticResource ProgressValueConverter}}"
        >
    </TaskbarItemInfo>
</Window.TaskbarItemInfo>

Which looks something like this:

taskbar-progress

Note that for ProgressValue I’m binding to ActiveItem instead of TimeRemaining.  This is because the progress value is obtained through a percent-complete calculation – time remaining against the original session length – which requires that both values are available to the converter.  I suppose this could have been calculated through a multi-binding, but the single converter makes things much easier.

namespace Pomodoro.Shell.Converters
{
    [ValueConversion(typeof(ITaskSession), typeof(System.Windows.Shell.TaskbarItemProgressState))]
    public class ProgressStateConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value != null && value is ITaskSession)
            {
                ITaskSession session = (ITaskSession)value;

                if (session.IsActive)
                {
                    return TaskbarItemProgressState.Normal;
                }
            }
            return TaskbarItemProgressState.None;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }


    [ValueConversion(typeof(ITaskSession), typeof(double))]
    public class ProgressValueConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value != null && value is ITaskSession)
            {
                ITaskSession session = (ITaskSession)value;

                if (session.IsActive)
                {
                    int delta = session.SessionLength - session.TimeRemaining;
                    
                    return (double)(delta / (double)session.SessionLength);
                }
            }
            return 1;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

This is all well and good, but there’s a minor setback: progress will not update unless the PropertyChanged event is raised for the ActiveItem property.  This is an easy fix: the ITaskSession needs to expose a NotifyProgress event and the ITaskApplication needs to notify the View when the event fires.  Since the progress indicator in the taskbar is only a few dozen pixels wide, spamming the View with each millisecond update is a bit much.  We solve this problem by throttling how often the event is raised using a NotifyInterval property.

// from TaskSessionViewModel
void OnTimer(object sender, ElapsedEventArgs e)
{
    NotifyProperty("TimeRemaining");

    if (IsActive)
    {
        // if notify interval is set
        if (NotifyInterval > 0)
        {
            notifyIntervalCounter += TimerInterval;

            if (notifyIntervalCounter >= NotifyInterval)
            {
                if (NotifyProgress != null)
                {
                    NotifyProgress(this, EventArgs.Empty);
                }
                notifyIntervalCounter = 0;
            }
        }
    }

    if (TimeRemaining == 0)
    {
        // end timer
        IsActive = false;

        if (SessionFinished != null)
        {
            SessionFinished(this, EventArgs.Empty);
        }
    }
}

TaskbarInfo Buttons

Using the exact same command bindings mentioned in my last post, adding buttons to control the countdown timer from the Aero Peek window is dirt simple. The only gotcha is that the taskbar buttons do not support custom content; instead, you must specify an image to display anything meaningful to the user.

<Window ....>

    <Window.Resources>
        <!-- play icon for taskbar button -->
        <DrawingImage x:Key="PlayImage">
            <DrawingImage.Drawing>
                <DrawingGroup>
                    <DrawingGroup.Children>
                        <GeometryDrawing Brush="Black" Geometry="F1 M 50,25L 0,0L 0,50L 50,25 Z "/>
                    </DrawingGroup.Children>
                </DrawingGroup>
            </DrawingImage.Drawing>
        </DrawingImage>

        <!-- stop icon for taskbar button -->
        <DrawingImage x:Key="StopImage">
            <DrawingImage.Drawing>
                <DrawingGroup>
                    <DrawingGroup.Children>
                        <GeometryDrawing Brush="Black" Geometry="F1 M 0,0L 50,0L 50,50L 0,50L 0,0 Z "/>
                    </DrawingGroup.Children>
                </DrawingGroup>
            </DrawingImage.Drawing>
        </DrawingImage>

        <!-- ... converters ... -->
        
    </Window.Resources>

    <Window.TaskbarItemInfo>
        <TaskbarItemInfo ... >
            
            <TaskbarItemInfo.ThumbButtonInfos>
                <ThumbButtonInfoCollection>

                    <!-- start button -->
                    <ThumbButtonInfo
                        ImageSource="{StaticResource ResourceKey=PlayImage}"
                        Command="{Binding StartCommand}"
                        CommandParameter="{Binding ActiveItem}"
                        Visibility="{Binding ActiveItem.IsActive, 
                                     Converter={StaticResource BoolToHiddenConverter}, 
                                     FallbackValue={x:Static Member=pc:Visibility.Visible}}"
                        />

                    <!-- stop button -->
                    <ThumbButtonInfo
                        ImageSource="{StaticResource ResourceKey=StopImage}"
                        Command="{Binding CancelCommand}"
                        CommandParameter="{Binding ActiveItem}"
                        Visibility="{Binding ActiveItem.IsActive, 
                                     Converter={StaticResource BoolToVisibleConverter}, 
                                     FallbackValue={x:Static Member=pc:Visibility.Collapsed}}"
                        />
                </ThumbButtonInfoCollection>
            </TaskbarItemInfo.ThumbButtonInfos>
        </TaskbarItemInfo>
    </Window.TaskbarItemInfo>

    <!-- ... -->

</Window>

The applied XAML looks like this:

taskbar-buttons

Next Steps

The next post we’ll add some auditing capability to the pomodoro application using SQLite and the Entity Framework version in .NET 4.0.


Tuesday, December 29, 2009

Twelve Days of Code – Views

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the fourth in this series -- we’ll look at some basic XAML for the View, data binding and some simple styling.

Some basic Layout

Our Pomodoro application doesn’t require a sophisticated layout with complex XAML.  All we really need is an area for our countdown timer, and two buttons to start and cancel the Pomodoro.  That being said, we’ll drop a TextBlock and two buttons into a grid, like so:

<Window x:Class="Pomodoro.Shell.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Pomodoro" Width="250" Height="120">
    
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition />
            <ColumnDefinition />
        </Grid.ColumnDefinitions>
        
        <!-- active item -->
        <TextBlock Grid.Column="0"/>
        
        <!-- command buttons -->
        <StackPanel Orientation="Horizontal" Grid.Column="1">
            <Button Content="Start" />
            <Button Content="Stop" />
        </StackPanel>
        
    </Grid>
</Window>

Which looks something like this:

Basic Pomodo - Yuck!

Binding to the ViewModel

The great thing behind the Model-View-ViewModel pattern is that we don’t need goofy “code-behind” logic to control the View.  Instead, we use WPF’s powerful data binding to wire the View to the ViewModel. 

There are many different ways to bind the ViewModel to the View, but here’s a quick and dirty mechanism until there’s an inversion of control container to resolve the ViewModel:

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        var sessionController = new TaskSessionController();
        var alarmController = new TaskAlarmController();

        var ctx = new TaskApplicationViewModel(sessionController, alarmController);
        this.DataContext = ctx;
    }
}

The XAML binds to the TaskApplicationViewModel’s ActiveItem, StartCommand and CancelCommands:

<!-- active item -->
<TextBlock Text="{Binding ActiveItem.TimeRemaining}" />

<!-- command buttons -->
<StackPanel Orientation="Horizontal" Grid.Column="1">
    <Button Content="Start" 
            Command="{Binding StartCommand}" 
            CommandParameter="{Binding ActiveItem}"                 
            />
    <Button Content="Stop" 
            Command="{Binding CancelCommand}" 
            CommandParameter="{Binding ActiveItem}"
            />
</StackPanel>

The Count Down Timer

At this point, clicking on the Start button shows the pomodoro counting down in milliseconds.  I had considered breaking the countdown timer into its own user control, but chose to be pragmatic and use a simple TextBlock.  The display can be dressed up using a simple converter:

namespace Pomodoro.Shell.Converters
{
    [ValueConversion(typeof(int), typeof(string))]
    public class TimeRemainingConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value is int)
            {
                int remaining = (int)value;

                TimeSpan span = TimeSpan.FromMilliseconds(remaining);

                return String.Format("{0:00}:{1:00}", span.Minutes, span.Seconds);
            }
            return String.Empty;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

Control Button Visibility

With the data binding so far, the Pomodoro application comes to life with a crude but functional user-interface.  To improve the experience, I’d like to make the buttons contextual so that only the appropriate button is shown based on the state of the session.  To achieve this effect, we bind to the IsActive property and control visibility with two mutually exclusive converters: BooleanToVisibleConverter and BooleanToHiddenConverter.  I honestly don’t know why these converters aren’t part of the framework, as I use this concept frequently. 

namespace Pomodoro.Shell.Converters
{
    [ValueConversion(typeof(bool), typeof(Visibility))]
    public class BooleanToHiddenConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value is bool)
            {
                bool hidden = (bool)value;
                if (hidden)
                {
                    return Visibility.Collapsed;
                }
            }
            return Visibility.Visible;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }

    [ValueConversion(typeof(bool),typeof(Visibility))]
    public class BooleanToVisibleConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value is bool)
            {
                bool active = (bool)value;

                if (active)
                {
                    return Visibility.Visible;
                }
            }
            return Visibility.Collapsed;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

However, there’s a Binding problem with all of these converters: by default, the ActiveItem of the TaskApplication is null and none of the bindings will take effect until after the object has been set.  This is easily fixed with a FallbackValue in the binding syntax:

<Window ...
        xmlns:local="clr-namespace:Pomodoro.Shell.Converters"
        xmlns:pc="clr-namespace:System.Windows;assembly=PresentationCore">

    <Window.Resources>
        <local:BooleanToVisibleConverter x:Key="BoolToVisibleConverter" />
        <local:BooleanToHiddenConverter x:Key="BoolToHiddenConverter" />
        <local:TimeRemainingConverter x:Key="TimeSpanConverter" />
    </Window.Resources>

    <Grid>

        <!-- active item -->
        <TextBlock Text="{Binding ActiveItem.TimeRemaining, Converter={StaticResource TimeSpanConverter}, 
                                  FallbackValue='00:00'}" />

        <!-- command buttons -->
        <StackPanel Orientation="Horizontal" Grid.Column="1">
            <Button Content="Start" 
                    Command="{Binding StartCommand}" 
                    CommandParameter="{Binding ActiveItem}"
                    Visibility="{Binding ActiveItem.IsActive, Converter={StaticResource BoolToHiddenConverter}, 
                    FallbackValue={x:Static Member=pc:Visibility.Visible}}"                   
                    />
            <Button Content="Stop" 
                    Command="{Binding CancelCommand}" 
                    CommandParameter="{Binding ActiveItem}"
                    Visibility="{Binding ActiveItem.IsActive, Converter={StaticResource BoolToVisibleConverter}, 
                    FallbackValue={x:Static Member=pc:Visibility.Collapsed}}"
                    />
        </StackPanel>

    </Grid>
</Window>

A few extra applied stylings, and the Pomodoro app is shaping up nicely:

Not so bad Pomodoro

Next Steps

In the next post, I’ll look at some of the new Windows 7 integration features available in WPF 4.0.


Thursday, December 24, 2009

Twelve Days of Code – Pomodoro Object Model

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. You can play along at home too. This post talks about the process of defining the object model and parts of the presentation model.

Most of the Model-View-ViewModel examples I’ve seen have not put emphasis on the design of the Model, so I’m not sure that I can call what I’m doing proper MVVM, but I like to have a fully functional application independent of its presentation and then extend parts of the model into presentation model objects. As I was designing the object model, I chose to use interfaces rather than concrete classes. There are several reasons for this, but the main factor was to apply a top-down Test-Driven-Development methodology.

On Top-Down Test-Driven-Development…

The theory behind “top-down TDD” is that by using the Application or other top-level object as the starting point, I only have to work with that object and its immediate dependencies rather than the entire implementation. As dependencies for that object are realized, I create a simple interface rather than a fully fleshed out implementation -- I find using interfaces really helps to visualize the code’s responsibilities. Using interfaces also makes the design more pliable, in that I can move responsibilities around without having to overhaul the implementation.

This approach plays well into test-driven-development since tools like RhinoMocks and Moq can quickly generate mock implementations for interfaces at runtime. These dynamic implementations make it possible to define how dependencies will operate under normal (and irregular) circumstances which allows me to concentrate on the object I’m currently constructing rather than several objects at once. After writing a few small tests, I make a conscious effort to validate or rework the object model.
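As a concrete illustration of what the mocking tools bring to this workflow, with Moq a dependency’s behavior under both normal and irregular circumstances can be defined in a couple of lines, without hand-writing a fake class (a sketch using the ISessionRepository interface from this series):

```csharp
// Moq generates an ISessionRepository implementation at runtime.
var repository = new Mock<ISessionRepository>();

// normal circumstances: the call simply succeeds
// (the default behavior for a loose mock)
repository.Setup(r => r.RecordCompletion(It.IsAny<ITaskSession>()));

// irregular circumstances: simulate a failing persistence store
repository.Setup(r => r.RecordCancellation(It.IsAny<ITaskSession>()))
          .Throws(new InvalidOperationException("store unavailable"));

// the dynamic implementation is handed to the subject under test
var controller = new TaskSessionController(repository.Object);
```

This keeps the focus on the object currently under construction; the dependency’s contract is pinned down by the interface, and its behavior by the mock.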

Using the Context/Specification strategy I outlined a few posts ago, the process I followed resembled something like this:

  • Design a simple interface (one or two simple methods)
  • Create the concrete class and the test fixture
  • After spending a few minutes thinking about the different scenarios or states this object could represent, I create a test class for that context
  • Write a specification (test) for that context
  • Write the assertion for that specification
  • Refine the context / subject-initialization / because
  • Write the implementation until the test passes
  • Refactor.
  • Add more tests.

(Confession: I started writing the ITaskApplication first but then switched to ITaskSessionController because I had a clearer vision of what it needed to do.) More on tests later, first let’s look more at the object model.

The Pomodoro Object Model

Here’s a quick diagram of the object model as seen by its interfaces. Note that instead of referring to “Pomodoro” or “Tomato”, I’m using a more generic term “Task”:

Pomodoro.Core

In the above diagram, the Task Application represents the top of the application, where the user will interact to start and stop (cancel) a task session. By design, the application seats one session at a time, and it delegates the responsibility for constructing and initializing a task session to the Session Controller. The Task Application is also responsible for monitoring the session’s SessionFinished event; when it fires, the Application delegates to the AlarmController and SessionController to notify the user and record the completion of the session.

Presently the TaskSessionController acts primarily as a factory for new sessions and has no immediate dependencies, though it would logically communicate with a persistence store to track usage information (when I get around to that).

The TaskSession object encapsulates the internal state of the session as well as the timer logic. When the timer completes, the SessionFinished event is fired.
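My reading of the ITaskSession contract described above could be sketched as follows; aside from the SessionFinished event, the member names are guesses based on the test names later in this post:

```csharp
using System;

// Hypothetical sketch of the session interface; names beyond
// SessionFinished are illustrative, not the author's exact signatures.
public interface ITaskSession
{
    Guid Id { get; }
    DateTime? StartTime { get; }
    DateTime? EndTime { get; }
    TimeSpan SessionLength { get; }

    // Raised once when the internal timer completes.
    event EventHandler SessionFinished;

    void Start();   // begin the countdown
    void Stop();    // cancel / stop the timer
}
```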

The Realized Implementation

Using a TDD methodology, I created concrete implementations for each interface as I went. At this point, the division between Presentation Model (aka ViewModel) and Controller Logic (aka Model) is clearly visible:

Pomodoro.Core.Implementation

To help realize the implementation, I borrowed a few key classes from the internets:

BaseViewModel

When starting out with the Presentation Model, I found this article to be good background and a starting point for the Model-View-ViewModel pattern. The article provides a ViewModelBase implementation which contains the common base ViewModel plumbing, including the IDisposable pattern. The most notable part is the implementation of INotifyPropertyChanged, which contains debug-only validation logic that throws an error if the ViewModel attempts to raise a PropertyChanged event for a property that does not exist. This simple trick does away with stupid bugs related to ViewModel binding errors caused by typos.

I’ve also added BeginInitialization and EndInitialization members which prevent the PropertyChanged event from firing during initialization. This trick comes in handy when the ViewModel is sufficiently complicated that raising the event needlessly impacts performance.
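A minimal sketch combining the two ideas above might look like this. BeginInitialization and EndInitialization come from the post; the rest of the implementation, including the debug-only property-name check, is my own guess at the shape of the pattern:

```csharp
using System;
using System.ComponentModel;
using System.Diagnostics;

public abstract class BaseViewModel : INotifyPropertyChanged
{
    private bool _initializing;

    public event PropertyChangedEventHandler PropertyChanged;

    // Suppress change notifications while bulk-setting properties.
    public void BeginInitialization() { _initializing = true; }
    public void EndInitialization()  { _initializing = false; }

    // Compiled out of release builds entirely.
    [Conditional("DEBUG")]
    private void VerifyPropertyName(string propertyName)
    {
        // Fail fast in debug builds when a typo'd property name is raised,
        // instead of letting the binding fail silently.
        if (TypeDescriptor.GetProperties(this)[propertyName] == null)
            throw new ArgumentException("Invalid property name: " + propertyName);
    }

    protected virtual void OnPropertyChanged(string propertyName)
    {
        VerifyPropertyName(propertyName);

        if (_initializing)
            return;   // no notifications during initialization

        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```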

For some reason, I prefer the name BaseViewModel over ViewModelBase. How about you?

DelegateCommand

When it came time to add Commands to my ViewModels, I considered bringing Prism into my application primarily to get the DelegateCommand implementation. Perhaps I missed something, but I was only able to find the source code for download. Ultimately, I chose to take the DelegateCommand code-file and include it as part of the Core library instead of compiling the libraries for Prism. The decision not to compile the libraries was based on some tweaking and missing references for .NET 4.0, and the projects were not configured to compile against a strong name, which my strongly-named application would need.

The DelegateCommand provides an implementation of the ICommand interface that accepts delegates for the Execute and CanExecute methods, as well as a hook to raise the CanExecuteChanged event. At some point, I may choose to pull the implementation for the Start and Cancel commands out of the TaskApplicationViewModel and into separate classes.
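For illustration, a simplified, non-generic DelegateCommand along the lines of the Prism implementation described above might look like this (this is a sketch, not the Prism source):

```csharp
using System;
using System.Windows.Input;

public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    private readonly Func<bool> _canExecute;

    public DelegateCommand(Action execute, Func<bool> canExecute = null)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        _execute = execute;
        _canExecute = canExecute;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        // If no predicate was supplied, the command is always enabled.
        return _canExecute == null || _canExecute();
    }

    public void Execute(object parameter)
    {
        _execute();
    }

    // Hook for the ViewModel to force WPF to re-query CanExecute.
    public void RaiseCanExecuteChanged()
    {
        var handler = CanExecuteChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}
```

In the ViewModel, wiring up a command then becomes a one-liner, e.g. `StartCommand = new DelegateCommand(StartTask, () => !IsRunning);` (names hypothetical).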

More About the Tests

As promised, here is some more information about the tests and methodology. Rather than list the code for the tests, I thought it would be fun to list the names of the tests that were created during this stage of the development process. Because I’m using a context/specification test-style, my tests should read like specifications:

TaskApplicationSpecs:

  • when_a_task_completes
    • should_display_an_alarm
    • should_record_completed_task
  • when_a_task_is_cancelled
    • should_record_cancellation
  • when_a_task_is_started
    • should_create_a_new_session
    • ensure_new_tasks_cannot_be_started
    • ensure_running_task_can_be_cancelled
  • when_no_task_is_running
    • ensure_new_task_can_be_started
    • ensure_cancel_task_command_is_disabled

TaskSessionControllerSpecs:
  • when_cancelling_a_tasksession
    • ensure_session_is_stopped
  • when_creating_a_tasksession
    • should_create_a_task_with_a_valid_identifier
    • ensure_start_date_is_set
    • ensure_session_is_started
    • ensure_session_length_is_set
  • when_finishing_a_tasksession
    • ensure_session_is_stopped

TaskSessionViewModelSpecs:
  • given_a_default_session
    • should_be_disabled
    • should_not_have_start_date_set
    • should_not_have_end_date_set
    • should_have_valid_id
  • when_session_is_ended
    • ensure_endtime_is_recorded
  • when_session_is_started
    • should_be_active
    • should_have_initial_time_interval_set
    • should_have_less_time_remaining_than_original_session_length
    • should_notify_the_ui_to_update_the_counter_periodically
    • should_record_start_time_of_task
    • ensure_end_time_of_task_has_not_been_recorded
  • when_timer_completes
    • should_raise_finish_event
    • ensure_finish_event_is_only_raised_once
    • should_run_the_counter_down_to_zero
    • should_disable_task
    • ensure_endtime_has_not_been_recorded

Next Steps

The next logical step in the twelve days of code is creating the View and wiring it up to the ViewModel.


Twelve Days of Code - Solution Setup

As part of the Twelve Days of Code Challenge, I’m developing a Pomodoro style application and sharing the progress here on my blog.  This post tackles day one: setting up your project.

The initial stage of a project where you are figuring out layers and packaging is a critical part of the process and one that I’ve always found interesting.  From past experience, the small details at this stage can become massive technical debt later if the wrong approach is used, so it’s best to take your time and make sure you’ve crossed all the T’s and dotted the I’s.

Creating the Solution

For this project I’ve chosen to use Visual Studio 2010 Beta 2, and so far the experience has been great.  Visual Studio 2010 is going to reset the standard and bring new levels of developer productivity (assuming they solve some of the stability issues): it’s faster and much more responsive, eats less memory, and adds subtle UX refinements that improve developer flow.  To put this in perspective, I urge you to load up Visual Studio 2003 and look at the Start page – we’ve come a long way.

The New Project window has had a nice overhaul, and we can now specify the target framework directly in the dialog.  Here I’m creating a WPF Application named Pomodoro.Shell.  Note that I’m specifying to create a directory for the solution and that the Solution Name and Project Name are different.

AddSolution 

Normally at this point I would consider renaming the output of the application from “Pomodoro.Shell.exe” to some other simpler name like “pomodoro.exe”.  This is an optional step which I won’t bother with for this application.

Adding Projects

When laying out the solution, the first challenge is determining how many Visual Studio Projects we’ll need, and there are many factors to consider including dependencies, security, versioning, deployment, reuse, etc.

There appears to be a school of thought that believes every component or module should be its own assembly, and I strongly disagree.  Assemblies should be thought of as deployment units – if the components version and deploy together, it’s very likely that they should be a single assembly.  As Visual Studio does not handle large numbers of projects well, it’s always better to start with larger assemblies and separate them later if needed.

For my pomodoro app, I’ve decided to structure the project into two primary pieces, “core” and “shell”, where “core” provides the model of the application and “shell” provides the user-interface specific plumbing.

Add Test Projects

Right from the start of the project, I’m gearing towards how it will be tested.  As such, I’ve created two test projects, one for each assembly.  This allows me to keep the logical division between assemblies.

AddProject

As soon as the projects are created, the first thing I’ll do is adjust the namespaces of the test libraries to match their counterparts.  By extension, the tests are features of the same namespace but they are packaged in a separate assembly because I do not want to deploy them with the application.  I’ve written about this before.

FixNamespaceforTests

Configure common Assembly Properties

Once we’ve settled into a project structure, the next easy win is to configure the projects to share the same assembly details, such as version number, manufacturer, copyright, etc.  This is easily accomplished by creating a single file to represent this data and then linking each project to this file.  At a later step, this file can be auto-generated as part of the build process.

using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: AssemblyCompany("Bryan Cook")]
[assembly: AssemblyCopyright("Copyright © Bryan Cook 2009")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

Tip: Link the AssemblyVersion.cs file to the root of each project, then drag it into the Properties folder.
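For reference, a linked file ends up in the csproj looking something like the fragment below (the relative path assumes AssemblyVersion.cs sits at the solution root):

```xml
<!-- Shared version file, linked rather than copied; the Link element
     places it under the Properties folder in Solution Explorer -->
<Compile Include="..\AssemblyVersion.cs">
  <Link>Properties\AssemblyVersion.cs</Link>
</Compile>
```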

Give the Assemblies a Strong-Name

If your code will ultimately end up on an end-user’s desktop, it is imperative to give the assembly a strong name.  We can take advantage of Visual Studio’s built-in features to create our strong-name key (snk), but we’ll also take a few extra steps to ensure that each project has the same key.

  1. Open the project properties.
  2. Click on the Signing tab
  3. Check the “Sign the assembly” checkbox
  4. Choose “<New…>”
  5. Create a key with no password.
  6. Open Windows Explorer and copy the snk file to the root of the solution.
  7. Then for each project:
    1. Check the “Sign the assembly” checkbox
”">
    2. Choose “<Browse…>”
    3. Navigate to the root of the solution and select the snk key.

Note that Visual Studio will copy the snk file to each project folder, though each project will have the same public key.

Designate Friend Assemblies

In order to aid testing, we can configure our Shell and Core assemblies to implicitly trust our test assemblies.  I’ve written about the benefits before, but the main advantage is that I don’t have to alter type visibility for testing purposes.  Since the assemblies have a strong name, the InternalsVisibleTo attribute requires the full public key.

strong-name-internalsvisibleto

Since all the projects share the same key file, this public key will work for all the projects.  The following shows the InternalsVisibleTo attribute for the Pomodoro.Core project:

[assembly: InternalsVisibleTo("Pomodoro.Core.Tests, PublicKey=" +
"0024000004800000940000000602000000240000525341310004000001000" +
"1003be2b1a7e08d5e14167209fc318c9c16fa5d448fb48fe1f3e22a075787" +
"55b4b1cf4059185d2bd80cc5735142927fbbd3ab6eeebe6ac6af774d5fe65" +
"0a226b87ee9778cb2f6517382102894dc6d62d5a0aaa84e4403828112167a" +
"1012d5b905a37352290e4aa23f987ff2be3ccda3e27a7f7105cf5b05c0baf" +
"3ecbfd2371c0fa0")]

Setup External References

I like to put all the third-party assemblies that are referenced into the project into a “lib” folder at the root of the solution.  At the moment, I’m only referencing Moq for testing purposes.

A note on external references and source control: Team Foundation Server typically only pulls dependencies that are listed directly in the solution file.  While there are a few hacks for this (add each assembly as an existing item in a Solution Folder; or create a class library that contains the assemblies as content), I like to have all my dependencies in a separate folder with no direct association to the Visual Studio solution.  As a result, these references must be manually updated by performing a “Get Latest” from the TFS Source Control Explorer.  If you’ve got a solution for this – spill it, let’s hear your thoughts.

Setup Third-Party Tools

For all third-party tools that are used as part of the build, I like to include these in a “tools” or “etc” folder at the root of the solution.  This approach allows me to bundle all the necessary tools for other developers to allow faster ramp-up.  It adds a bit of overhead when checking things out, but certainly simplifies the build script.

Setup Build Script

There are a few extra steps I had to take to get my .NET 4.0 project to compile using NAnt.

  1. Download the nightly build of the nant 0.86 beta1.  The nightly build solves the missing SdkInstallRoot build error.
  2. Paige Cook (no relation) has a comprehensive configuration change that needs to be applied to nant.exe.config
  3. Modify Paige’s version numbers from .NET 4.0 beta 1 to beta 2.  (Replace all references of “v4.0.20506” to “v4.0.21006”)

Here's a few points of interest for the build file listed below:

  • I’ve defined a default target “main”.  This allows me to simply execute “nant” in the root folder of the solution and it’ll take care of the rest.
  • The “main” target is intentionally empty because the real work happens in the chain of dependent targets.  Currently, I’m only specifying “build”, but normally I would specify “clean, build, test”.
<project default="main">

  <!-- VERSION NUMBER (increment before release) -->
  <property name="version" value="1.0.0.0" />
  
  <!-- SOLUTION SETTINGS -->
  <property name="framework.dir" value="${framework::get-framework-directory(framework::get-target-framework())}" />
  <property name="msbuild" value="${framework.dir}\msbuild.exe" />
  <property name="vs.sln" value="TwelveDays.sln" />
  <property name="vs.config" value="Debug" />

  <!-- FOLDERS AND TOOLS -->
  <!-- Add aliases for tools here -->
  
  
  <!-- main -->
  <target name="main" depends="build">
  </target>

  <!-- build solution -->
  <target name="build" depends="version">

    <!-- compile using msbuild -->
    <exec program="${msbuild}"
      commandline="${vs.sln} /m /t:Clean;Rebuild /p:Configuration=${vs.config}"
      workingdir="."
          />

  </target>
    
  <!-- generate version number -->
  <target name="version">
    <attrib file="AssemblyVersion.cs" readonly="false" if="${file::exists('AssemblyVersion.cs')}" />
    <asminfo output="AssemblyVersion.cs" language="CSharp">
      <imports>
        <import namespace="System" />
        <import namespace="System.Reflection" />
      </imports>
      <attributes>
        <attribute type="AssemblyVersionAttribute" value="${version}" />
        <attribute type="AssemblyFileVersionAttribute" value="${version}" />
      </attributes>
    </asminfo>
  </target>

</project>

Next Steps…

In the next post, we’ll look at the object model for our Pomodoro application.
