Tuesday, February 08, 2011

Plaintext + Dropbox = Ubiquitous Text

I am loving this. If you blog or like to jot down notes while on the go then I highly recommend the following setup.

Dropbox

Dropbox is a small cloud-based utility that synchronizes files between all your devices. It works on PC, Mac, iOS, Android and BlackBerry.  It also has a web interface that you can browse from any computer and use to track previous versions of files. Best of all, it's free (for 2 GB of storage).

There are a number of applications that use Dropbox as their storage medium. My favorite so far is PlainText.

PlainText

PlainText is a bare bones text editor with a minimal UI that uses Dropbox as its file system. Simply download it to your iPhone, iPod Touch or iPad, log in with your Dropbox account, and your data magically syncs each time you touch the contents. The only limitation is that it can't move files to different folders, but I can do that on my PC.

Now it's possible for me to sketch out an idea on my iPhone while riding the subway or streetcar, then pick up where I left off and flesh out the details on my iPad while watching TV. I can switch between devices with almost no wait, so I can proofread or work through an idea whenever I have a free moment.

Getting started

Setup is easy.  On your PC or Mac, go to Dropbox and set up an account; the software will download on the next page.  Install the application and associate it with your user account. The free version gives you 2 GB of storage (more if you enlist your friends), and more storage is available through paid upgrades.

On your iOS device, download PlainText.  In the settings, link it to your Dropbox account.

[Screenshot: PlainText Dropbox sign-up]

PlainText by default will create a folder in your Dropbox called “PlainText”.  This is helpful so that you’re only syncing small files.  Here’s a quick screen-capture of the available settings.

[Screenshot: PlainText settings]

You can create files and folders on your iOS device.  Any changes will automatically sync to your PC/Mac.

[Screenshot: Dropbox sync]

Disclosure: This is not a paid advertisement; I just really like this configuration.  I should point out that the sign-up links above are to my referral page.  If you are vehemently opposed to that, you can visit the site referral-free here: https://www.dropbox.com/

Thursday, January 27, 2011

Using Xslt in MSBuild

Recently, I had the opportunity to integrate some of my Xslt skills into our MSBuild script.  As MSBuild is a relatively new technology for me (compared to NAnt), venturing into this task reminded me that there's always more than one way to get the job done.  Prior to .NET 4.0, MSBuild didn't natively support Xslt capabilities, so the developer community independently produced its own solutions.  So far, I've identified the following three implementations:

  • MSBuild Community Tasks: Xslt
  • MSBuild Extension Pack: XmlTask
  • MSBuild 4.0: XslTransformation

I thought it would be fun to contrast the different implementations.

MSBuild Community Tasks: Xslt

Hosted on tigris.org, the msbuildtasks implementation is the one I ultimately ended up using. Of the three listed, it seems to be the most feature-complete – but working with it can be frustrating because there is no online help.  Fortunately, a help file ships with the installer that provides decent coverage for the Task (albeit a bit buried).

The interesting feature that sets this Task apart is that it aggregates your xml files into a single in-memory document and performs the transformation once.  Keep in mind that this has an impact on how you structure your XPath queries.  For example, an expression like count(//element-name) will count all instances in the aggregated contents.

Because it is an aggregator, your xml documents will be loaded as child elements of a root element.  As such, you must supply a name for the root element via the RootTag attribute. If you are only transforming a single file, the root element can be left blank (“”).
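
To picture what your stylesheet sees, here's a rough sketch of the aggregated in-memory document, assuming two input files whose root elements are named log (the file contents and element names here are hypothetical):

<Root>
  <!-- contents of first.xml -->
  <log>...</log>
  <!-- contents of second.xml -->
  <log>...</log>
</Root>

An expression like count(//log) would therefore return 2, counting across both files.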

My only complaint about this implementation is that it only supports writing the transformation to a file. For my purposes, I would have really liked the ability to pipe the output into a Property or ItemGroup. To achieve this, you must read the contents of the file into your script. Not a big deal, just an extra step.
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
   <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

   <Target Name="Example">

      <ItemGroup>
        <XmlFiles Include="*.xml" />
      </ItemGroup>

      <PropertyGroup>
        <XslFile>example.xslt</XslFile>
      </PropertyGroup>

      <!-- perform our xslt transformation for all xml files -->
      <Xslt
         Inputs="@(XmlFiles)"
         Xsl="$(XslFile)"
         RootTag="Root"
         Output="xslt-result.html"/>

      <!-- read the file into a property -->
      <ReadLinesFromFile File="xslt-result.html">
         <Output TaskParameter="Lines" ItemName="MyOutput"/>
      </ReadLinesFromFile>  

      <!-- display the output (with no delimiter) -->
      <Message Text="@(MyOutput,'')" />

    </Target>

</Project>

In reading the help documentation, I noticed that this task will pass the extended meta-data of the Inputs into the document as parameters to the Xslt.  This looks interesting and I may want to revisit this in a future post.

MSBuild Extension Pack: XmlTask

I really liked this implementation, but unfortunately our build server wasn't using this set of extensions, so I had to retrofit my script to use the Community Tasks version.

This task boasts a few interesting features:

  1. The output of the transformation can write to a file or a property/item.
  2. Ability to validate the Xml.
  3. Ability to suppress the Xml declaration (though you can do this in your xslt, see below)

This example is comparable to the one above, but uses the Output element of the Task to route the transformation into an ItemGroup (MyOutput).

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
   <Import Project="$(MSBuildExtensions)\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks" />
            
   <Target Name="Example">

      <ItemGroup>
        <XmlFiles Include="*.xml" />
      </ItemGroup>

      <PropertyGroup>
        <XslFile>example.xslt</XslFile>
      </PropertyGroup>

      <MSBuild.ExtensionPack.Xml.XmlTask 
          TaskAction="Transform" 
          XmlFile="%(XmlFiles.Identity)" 
          XslTransformFile="$(XslFile)"
          >
          <Output 
             ItemName="MyOutput" 
             TaskParameter="Output"
             />
      </MSBuild.ExtensionPack.Xml.XmlTask>        
        
      <Message Text="@(MyOutput, '')" />
        
   </Target>

</Project>

Note that I can switch between multiple input files and a single file simply by changing the XmlFile attribute from an ItemGroup to a Property.  Unlike the Community Tasks implementation, my Xslt stylesheet doesn't need special consideration for an artificial root element, which makes the stylesheet quicker to develop and easier to maintain.

MSBuild 4.0: XslTransformation

Last but not least is the newcomer on the block, MSBuild 4.0’s XslTransformation Task.

This task, to my knowledge, can only write the transformation to a file.  If you’re only transforming a single file this won’t be a problem for you, but if you have multiple files that you want to concatenate into a single output, it’ll take some finesse.

I solved this problem by creating a unique output file for each transformation.  Aside from some extra files to clean-up, the task worked great.

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
            
   <Target Name="Example">

      <ItemGroup>
        <XmlFiles Include="*.xml" />
      </ItemGroup>

      <PropertyGroup>
        <XslFile>example.xslt</XslFile>
      </PropertyGroup>

      <!-- Perform the transform for each file, but
              write the output of each transformation to its
              own file. -->
      <XslTransformation
         OutputPaths="output.%(XmlFiles.FileName).html"
         XmlInputPaths="%(XmlFiles.Identity)"
         XslInputPath="$(XslFile)"
         />
 
      <!-- Read all our transformation outputs back into an ItemGroup -->
      <ReadLinesFromFile File="output.%(XmlFiles.FileName).html">
         <Output TaskParameter="Lines" ItemName="MyOutput" />
      </ReadLinesFromFile>
        
      <Message Text="@(MyOutput, '')" />
        
   </Target>

</Project>

This implementation also supports passing parameters, which may come in handy.
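
For example, here's a minimal sketch of passing a parameter into the stylesheet (the parameter name is made up for illustration).  The Parameters attribute takes a snippet of escaped XML describing one or more Parameter elements:

<XslTransformation
   OutputPaths="output.html"
   XmlInputPaths="input.xml"
   XslInputPath="$(XslFile)"
   Parameters="&lt;Parameter Name='BuildNumber' Value='$(BuildNumber)'/&gt;"
   />

The stylesheet would then declare a matching <xsl:param name="BuildNumber" /> at the top level.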

Some Xslt Love

And for completeness' sake, here's a skeleton Xslt file that suppresses the xml-declaration.

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    
    <xsl:output method="html" omit-xml-declaration="yes" />
    
    <xsl:template match="/">
        <p>your output here</p>
    </xsl:template>

</xsl:stylesheet>

So go out there and unleash some Xslt-Fu.


Tuesday, January 25, 2011

MSBuild for NAnt Junkies

Over the last seven years, any time I needed to write a build script I've turned to NAnt as the tool to get the job done. I’ve relied on NAnt so much that the NAnt task reference page is usually on my Most Visited page in Chrome. The concepts behind NAnt are straightforward, and once you understand the basics it’s a great skill to have.

My current project is using MSBuild as the principal build script, and this week I had the pleasure of having to tweak it. Although I’m a little late to adopting MSBuild as a scripting tool, this was a good opportunity to get to know its idiosyncrasies and learn a new technology. My verdict? They share a lot of similar characteristics, but if you’re familiar with NAnt you’ll find that MSBuild has a few foreign concepts that take some time to wrap your head around.

While this is still new territory for me, I thought it would be fun to compare the two and share my findings here for my reference (and for yours!).

The Basics

Structure

On the surface, MSBuild and NAnt appear to be related, as they share a common lexicon and overall structure. Scripts in both are composed of Projects and Targets.

NAnt:
<?xml version="1.0"?>
<project default="MyTarget" >
    
    <target name="MyTarget">
        <echo message="Hello World" />
    </target>

</project>
MSBuild:
<Project DefaultTargets="MyTarget"
      xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    <Target Name="MyTarget">
        <Message Text="Hello World" />
    </Target>

</Project>

Execution of the scripts is nearly identical, with only minor differences in the input arguments:

nant -buildfile:example.build MyTarget
msbuild example.csproj /target:MyTarget

Tasks

Both NAnt and MSBuild ship with a rich set of Tasks, and both support writing your own Tasks if the defaults aren’t enough.  They also support writing custom tasks inline using your favourite .NET language, as the sketch below shows.
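
For example, here's a minimal sketch of an MSBuild 4.0 inline task built with the CodeTaskFactory (the task name and message are invented for illustration):

<UsingTask TaskName="HelloWorld"
           TaskFactory="CodeTaskFactory"
           AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
  <Task>
    <Code Type="Fragment" Language="cs">
      Log.LogMessage("Hello from an inline task!");
    </Code>
  </Task>
</UsingTask>

<Target Name="SayHello">
  <HelloWorld />
</Target>

NAnt achieves the same effect with its script task, which lets you embed C# directly in the build file.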

Properties

The most notable difference between the two technologies starts to appear in how they handle Properties.

NAnt has a fixed schema such that all properties are defined exactly the same way.  You reference properties in your scripts using the ${property-name} syntax.

Using properties with NAnt
<property name="myproperty" value="hello world" />

<echo message="${myproperty}" />

MSBuild on the other hand leverages the self-describing nature of XML to describe properties. This can really throw you off, since your properties won’t adhere to a known schema which means there’s no IntelliSense support. This seems like an odd move as MSBuild is the underlying file structure for Visual Studio project files, but it provides an opportunity to provide additional meta-data about the properties, which I’ll explain later in this post.

Rather than using an element named “property”, any element that appears inside a PropertyGroup element is a property.  Similarly to NAnt, you reference properties using a $(property-name) syntax – note the round brackets instead of curly braces.

Using Properties with MSBuild:
<PropertyGroup>
    <MyProperty>Hello World</MyProperty>
</PropertyGroup>

<Message Text="$(MyProperty)" />

MSBuild also supports the concept of ItemGroups, which equate roughly to multi-valued properties. The closest equivalent in NAnt would be filesets, though those are limited to certain tasks – for example, a list of files and folders to be used with the Copy task.

Much like Properties, the element name represents the name of the ItemGroup. You reference an ItemGroup using the @(ItemGroup-Name) syntax. By default, ItemGroups are evaluated as semi-colon delimited lists.

Using ItemGroups with MSBuild:
<ItemGroup>
    <MyList Include="one" />
    <MyList Include="two" />
    <MyList Include="three" />
</ItemGroup>

<Message Text="@(MyList)" />

Produces the output:

one;two;three

By default, when accessing items in an ItemGroup using the @(item-group-name) syntax, the items are concatenated using a semi-colon delimiter. You can specify different delimiters as well, for example the following produces a comma-delimited list.

<Message Text="@(MyList, ',')" />

Advanced Stuff

Conditions

There are times where you want to conditionally invoke a target, task or configure a property and both tools accommodate this.  NAnt describes conditions in the form of two attributes: “if” or “unless” – unless is the opposite of if. NAnt also supports a very impressive list of built-in functions including string, file, date and environment routines.  MSBuild doesn’t have extensive support for inline expressions but does offer a set of Property Functions and standard boolean logic.  However, MSBuild has a very long list of Tasks and community extensions that provide coverage in this area. 

To assist with the conditional expressions you may write, NAnt also ships with out-of-the-box properties.  MSBuild has considerably more reserved properties, though most of them seem centered around the Visual Studio compilation process.

Output

Sometimes your build script needs to control the flow of execution based on the outcome of preceding tasks.  In the NAnt world, a lot of the default Tasks expose an attribute that contains the output.

This NAnt example directs the exit code of the exec task to the property MyOutput.

<target name="RunProgram">

    <exec
        program="batch.bat"
        resultproperty="MyOutput"
        />

    <echo message="${MyOutput}" />
</target>

MSBuild has a similar strategy, except there’s a finer level of control: all Tasks can contain an Output element that dictates where to direct output of the Task.  This is advantageous if there are multiple output values that you want to collect from the Task.

This MSBuild example uses the Output element to put the value of the Exec Task’s ExitCode into the Property MyOutput:

<Target Name="RunProgram">

    <Exec Command="batch.bat">
        <Output
            TaskParameter="ExitCode"
            PropertyName="MyOutput" />
    </Exec>
    
    <Message Text="$(MyOutput)" />

</Target>

MetaData

Earlier I mentioned that MSBuild leverages the self-describing nature of XML to provide additional meta-data about properties (more specifically, Item Groups).  This loose format means that we can embed a lot of extra data for custom Tasks to leverage.  After the following example, it's easy to see how MSBuild and Visual Studio use meta-data to assist in the compilation of our projects.

We can embed meta-data by adding custom child elements below each Item:

<ItemGroup>
    <StarWarsMovie Include="episode3.xml">
        <Number>III</Number>
        <Name>Revenge of the Sith</Name>
        <Awesome>no</Awesome>
    </StarWarsMovie>
    <StarWarsMovie Include="episode4.xml">
        <Number>IV</Number>
        <Name>A New Hope</Name>
        <Awesome>yes</Awesome>
    </StarWarsMovie>
    <StarWarsMovie Include="episode5.xml">
        <Number>V</Number>
        <Name>The Empire Strikes Back</Name>
        <Awesome>yes</Awesome>
    </StarWarsMovie>
</ItemGroup>

And we can access the meta-data of each individual Item using the %(meta-name) syntax.  For example, we can spit out only the awesome Star Wars films by filtering them by their meta-data.

<Message Text="@(StarWarsMovie)"
         Condition=" '%(Awesome)' == 'yes'"/>

The above displays the items that are awesome (“episode4.xml;episode5.xml”), but there’s a really interesting and powerful aspect to this statement.  The “%” character accesses the individual items before they are appended to the output, which behaves much like a for loop.  So where “@(StarWarsMovie)” sends the entire ItemGroup to the Message task at once, the following syntax executes the Message Task once per matching item:

<Message Text="%(StarWarsMovie.Name)" 
         Condition=" '%(Awesome)' == 'yes'"/>

Which produces the output:

A New Hope
The Empire Strikes Back

There’s also a set of baked-in meta-data properties available to all ItemGroups.  The most notable is the Identity meta-property, which gives us the name of the individual Item.
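
For example, a quick sketch that prints a few of the built-in meta-data properties for each item (Filename and Extension split the Identity apart):

<Message Text="%(StarWarsMovie.Identity) = %(StarWarsMovie.Filename)%(StarWarsMovie.Extension)" />

For episode4.xml this prints “episode4.xml = episode4.xml” – not terribly exciting here, but invaluable when you need just the file name or full path of each item.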

Transforms

Another interesting MSBuild feature is the ability to convert the contents of a list into another format using a transform syntax: @(ItemGroup -> expected-format).  When combined with meta-data, we’re able to create rich representations of our data. Sticking with my Star Wars theme (hey, at least I'm consistent), the following transform:

@(StarWarsMovie -> 'Star Wars Episode %(Number): %(Name)', ', ')

Creates a comma-delimited list:

Star Wars Episode III: Revenge of the Sith, Star Wars Episode IV: A New Hope, Star Wars Episode V: The Empire Strikes Back

Wrap up

While MSBuild shares a lot of common ground with NAnt, there are certainly a lot of interesting features that warrant a closer look.

Happy Coding.


Wednesday, January 05, 2011

Selenium Toolkit for .NET 0.84 Released

Welcome, 2011 – things are shaping up to be a great year.  It's been quiet around here; aside from the limited notices I sent out on Twitter, you may not have noticed that I published two releases of the Selenium Toolkit for .NET since November.

So what's new in the Toolkit?

While there have been a few minor fixes and code changes, the last two releases for the toolkit have been focused on keeping the toolkit current with its external dependencies (NUnit / Selenium / Selenium IDE).  As such, expect to see more frequent releases this year.

By far, the biggest change in the toolkit is the new WiX based installer.  While I’m personally excited about the ability to include the installer as part of an automated build, you can do a lot of great things with WiX’s declarative syntax that would normally require custom code.  For example, the installer is able to conditionally deploy files based on the presence of a registry key and folder value.

The second biggest change (enabled through the WiX installer) is a resolution to a common problem related to the Toolkit’s NUnit addin.  Without going into the specific details, the short story is that NUnit addins only work with the version of NUnit they were compiled against.  This led to some confusion where users would copy the addin to their recently installed latest version of NUnit, only to find out the hard way that nothing works.  Since version 0.83 the installer includes separate NUnit version-specific addins and deploys them to the appropriate NUnit addin folder during installation.  NUnit versions 2.5.3, 2.5.5, 2.5.7, 2.5.8 and 2.5.9 are now supported as separate addins.

In case you’re wondering, 2.5.4 and 2.5.6 were published for only a few weeks before they were superseded by 2.5.5 and 2.5.7 respectively.

Also, you asked, I listened: the Toolkit is now published as both an MSI and a ZIP.

What about Selenium 2.0 / WebDriver?

As my current project at work is a monster WPF application, I haven’t had a lot of cycles to play with the new WebDriver feature-set.  This may change, as I’m planning on reaching out to some web-based projects within my organization and offer some test guidance.

As far as Selenium 2.0 betas are concerned, I am keeping a close eye on the project and as soon as the .NET bindings and overall functionality stabilize I may upgrade the internal bits to leverage the 1.0 backward compatible features in the 2.0 beta (WebDriverBackedSeleniumBrowser).  I may even fork the code and offer two releases.  Time will tell, as will your feedback.

So with that, short-n-sweet – Happy New Year.  You can find the latest release here:  http://seleniumtoolkit.codeplex.com/releases/view/58383


Monday, December 13, 2010

Unit Test Review Checklist

Whenever someone mentions that they want to bring a code-review process into a project, my first inclination is to have teams review the tests.  Tests should be considered production assets that describe the purpose and responsibilities of the code, and developers should be making sure that their tests satisfy this goal.  If the tests look good and they cover the functionality that is required, I don’t see much point in dwelling on the implementation: if the implementation sucks, you can refactor freely assuming you have proper coverage from your tests.

A unit test review process is a great way to ensure that developers are writing useful tests that will help deliver high quality code that is flexible to change.  As code-review processes have the tendency to create stylistic debates within a development group, having a checklist of the 5-10 points to look for is a great way to keep things moving in the right direction.

Here are the criteria I’d like to see in a unit test.  Hopefully you find this useful, too.


Is the test well named?

  • Does the test name clearly represent the functionality being tested?
  • Will someone with little or no knowledge of the component be able to decipher why the test exists?
 

Is the test independent?

  • Does the test represent a single unit of work?
  • Can the test be executed on its own or is it dependent on the outcome of tests that preceded it?  For example, a fixture that shares state between tests may inadvertently include side-effects.
 

Is the test reliable?

  • Does the test produce the same outcome consistently?
  • Does the test run without manual intervention to determine a successful outcome?
 

Is the test durable?

  • Is the test designed so that it isn’t susceptible to changes in other parts of the application?  For example:
    • Does the test have complex or lengthy setup?
    • Will changes in the subject's dependencies lead to breakages?
 

Are the assertions valid?

  • Does the test contain assertions that describe the functionality being tested?
  • Do the assertions include helpful error messaging that describe what should have happened if the test fails?
  • Are any assertions redundant such as testing features of the CLR (constructors or get/set properties)?
  • Are the assertions specific to this test or are they duplicated in other tests?
 

Is the test well structured and documented?

  • Does the test contain sufficient comments?
  • Does the test highlight the subject under test and the corresponding functionality being exercised?
  • Is there a clear indication of the arrange/act/assert pattern, such that the purpose of the arrange makes sense, the action is easily identifiable, and the assertions are clear? (For a concrete example, see the sketch at the end of this checklist.)
 

Is the test isolating responsibilities of the subject?

  • Does the test make any assumptions about the internals of dependencies that aren't immediately related to the subject?
  • Is the test inaccurately or indirectly testing a responsibility or implementation of another object?  If so, move the test to the appropriate fixture.
 

Are all scenarios covered?

  • Does the fixture test all paths of execution? Not just for the code that has been written, but also for scenarios that may have been missed:
    • invalid input;
    • failures in dependencies?
 

Does the test handle resources correctly?

  • If the test utilizes external dependencies (databases, static resources), is the state of these dependencies returned to a neutral state?
  • Are resources allocated efficiently to ensure that the test runs as fast as possible:
    • effective setup/teardown;
    • no usages of Thread.Sleep()?
 

Is the complexity of the test balanced?

Tests need to strike a balance between Don't Repeat Yourself and clear examples of how to use the subject.

  • Is the test too complicated or difficult to follow?
  • Is there a lot of duplicated code that would lead to time-consuming effort if the subject changed?
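
To make these criteria concrete, here's a small sketch of a test that aims to tick most of these boxes – the subject (Account) and its API are invented for illustration:

[TestMethod]
public void Withdraw_With_Insufficient_Funds_Throws_And_Leaves_Balance_Unchanged()
{
    // Arrange: an account holding less than we will try to withdraw
    var account = new Account(50m);

    // Act: the withdrawal should be rejected
    try
    {
        account.Withdraw(100m);
        Assert.Fail("Expected an InsufficientFundsException to be thrown");
    }
    catch (InsufficientFundsException)
    {
        // expected
    }

    // Assert: a failed withdrawal must not change the balance
    Assert.AreEqual(50m, account.Balance, "balance should be untouched after a rejected withdrawal");
}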

 

What do you think?  Are there criteria that you’d add?  Things you disagree with?

Happy coding.

Monday, October 18, 2010

Callbacks with Moq

My current project is using Moq as our test mocking framework. Although I've been a fan of RhinoMocks for several years, I've found that in general they both support the same features, but I'm starting to think I like Moq's syntax better. I'm not going to go into an in-depth comparison of the mocking frameworks, as that's well documented elsewhere. Instead, I want to zero in on a feature that I've had my eye on for some time now.

Both RhinoMocks and Moq support a peculiar feature that lets you invoke a callback method in your test code when a method on your mock is called. This feature seems odd to me because in the majority of cases a mock is either going to return a value or throw an exception - surely invoking some arbitrary method in your test must represent a very small niche requirement.

According to the RhinoMocks documentation, the callback takes a delegate that returns true or false. From this it's clear that you would use the delegate to inspect the method's incoming parameters and return false if they didn't meet your expectations. However, as Ayende alludes, this is a powerful feature that can easily be abused.

Moq provides this feature too, but the syntax is different. Rather than requiring a delegate that returns a bool, Moq's Callback mimics the signature of the mocked method. From this syntax it's not obvious that the callback should be used for validating inbound parameters, which suggests that it could be abused, but it also implies freedom for the test author to do other things. Granted, this can get out of control, but perhaps a level of discretion about its usage is also implied?

Here are a few examples where Callbacks have been helpful:

Validating inbound method parameters

The best example I can imagine for inspecting an inbound parameter is a method that has no return value - for example, sending an object to a logger. If we assume that validation logic for the inbound parameter is the responsibility of the logger, we would only identify invalid arguments at runtime. Using the callback technique, we can write a test to enforce that the object being logged meets minimum validation criteria.

[TestMethod]
public void Ensure_Exception_Is_Logged_Properly()
{
  Exception exception = null;

  Mock.Get(Logger)
      .Setup( l => l.Error(It.IsAny<Exception>()) )
      .Callback<Exception>( ex => exception = ex );

  Subject.DoSomethingThatLogsException();

  Assert.AreEqual("Fatal error doing something", exception.Message);
}

Changing state of inbound parameters

Imagine a WPF application that uses the MVVM pattern, where we need to launch a view model as a modal dialog. The user can make changes to the view model in the dialog and click Ok, or they can click Cancel. If the user clicks Ok, the view model state needs to reflect their changes; if they click Cancel, any changes made must be discarded.

Here's the code:

public class MyViewModel : ViewModel
{
  /* snip */

  public virtual bool Show()
  {
      var clone = this.Clone();
      var popup = new PopupWindow();
      popup.DataContext = clone;

      if (_popupService.ShowModal(popup))
      {
          CopyStateFrom(clone);
          return true;
      }
      return false;
  }

}

Assuming that the popup service is a mock object that returns true when the Ok button is clicked, how do I test that the contents of the popup dialog are copied back into the subject? How do I guarantee that changes aren't applied if the user clicks cancel?

The challenge with the above code is that the clone is a copy of my subject. I have no means to intercept and mock this object unless I introduce a mock ObjectCloner into the subject (which is ridiculous, btw). In addition, the changes to the view model happen while the dialog is shown.

While the test looks unnatural, Callbacks fit this scenario really well.

[TestMethod]
public void When_User_Clicks_Ok_Ensure_Changes_Are_Applied()
{
  Mock.Get(PopupService)
      .Setup(p => p.ShowModal(It.IsAny<PopupWindow>()))
      .Callback<PopupWindow>( ChangeViewModel )
      .Returns(true);

  var vm = new MyViewModel(PopupService)
              {
                  MyProperty = "Unchanged"
              };

  vm.Show();

  Assert.AreEqual("Changed", vm.MyProperty);
}

private void ChangeViewModel(PopupWindow window)
{
  var viewModel = window.DataContext as MyViewModel;
  viewModel.MyProperty = "Changed";
}

The key distinction here is that changes that occur to the popup are in no way related to the implementation of the popup service. The changes in state are a side-effect of the object passing through the mock. We could have rolled our own mock to simulate this behavior, but Callbacks make this unnecessary.

Conclusion

All in all, Callbacks are an interesting feature that allow us to write sophisticated functionality for our mock implementations. They provide a convenient interception point for parameters that would normally be difficult to get under the test microscope.

How are you using callbacks? What scenarios have you found where callbacks were necessary?


Tuesday, October 12, 2010

Working with Existing Tests

You know, it’s easy to forget the basics after you’ve been doing something for a while.  Such is the case with TDD – I don’t have to remind myself of the fundamental “red, green, refactor” mantra every time I write a new test; it’s just baked in.  When it’s time to write something new, the good habits kick in and I write a test.  After all, this is what the Driven part of Test Driven Development is about: we drive our development through the creation of tests.

The funny thing is, the goal of TDD isn’t to produce tests.  Tests are merely a by-product of the development of the code, and having tests that demonstrate that the code works is one of the benefits.  Once they’re written, we forget about them and move on – we only return to them if something unexpected broke.

Wait.  Why are they breaking?  Maybe we forgot something, somewhere.

The Safety Net Myth

One of the reasons that tests break is because there’s a common perception that once the code is written, we no longer need the tests to drive development.  “We’ve got tests, so let’s just see what breaks after I make these changes…”

This strategy works when you want to try “what-if” scenarios or simple, proper refactorings, but it falls flat for long-term coding sessions.  The value of the tests diminishes quickly the longer the coding session lasts.  Simply put, tests are not safety nets – if you go off making changes for a few days, you'll only find that the tests get in the way because they no longer represent your changes and your code won't compile.

This may seem rudimentary, but let’s go back and review the absolute basics of TDD methodology:

  1. Start by writing a failing test. (RED)
  2. Implement the code necessary to make that test pass. (GREEN)
  3. Remove any duplication and clean it up.  (REFACTOR)

It’s easy to forget the basics.  The very first step is to make sure we have a test that doesn’t pass before we do any work, and this is easily overlooked when we already have tests for that functionality.

Writing tests for new functionality

If you want to introduce new functionality to your code base, challenge your team to introduce those changes to the tests first.  This may seem idealistic to some, especially if it’s been a long time since the tests were written or if no one on the team is familiar with the tests or their value.

Here’s a ridiculously simple tip:

  1. Locate the code you think may need to change for this feature.
  2. Introduce a fatal error into the code.  Maybe comment out the return value and return null, or throw an exception.
  3. Run the tests.
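
The “fatal error” can be as blunt as a one-line change – something like this sketch, where the method is hypothetical:

public decimal CalculateDiscount(Order order)
{
    // deliberate, temporary break: which tests notice?
    throw new NotImplementedException();
}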

With luck, all the areas of your tests that are impacted by this code are broken.  Review these tests and ask yourself:

  • Does this test represent a valid requirement after I introduce my change?  If not, it’s safe to remove it.
  • How does this test relate to the change that I’m introducing?  Would my change alter the expected results of this test?  If yes, change the expected results. These tests should fail after you remove the fatal flaw you introduced moments ago.
  • Do any of these tests represent the new functionality I want to introduce?  If not, write that test now.

(If nothing breaks, you’ve got a different problem.  Do some research on what it would take to get this code under a test, and write tests for new functionality.)

Conclusion

The duct tape programmer will argue that you can’t make an omelette without breaking some eggs, which is true – we should have the courage to stand up and fix things that are wrong.  But I’d argue that you must do your homework first - if you don’t check for other ingredients, you’re just making scrambled eggs. 

In my experience, long term refactorings that don’t leverage the tests are a recipe for test-abandonment; your tests and code should always be moments from being able to compile.  The best way to keep the tests valid is to remember the basics – they should be broken before you start introducing changes.


Tuesday, October 05, 2010

Writing Easier to Understand Tests

For certain, the long-term success of any project that leverages tests depends on tests that are easy to understand and provide value.

For me, readability is the gateway to value. I want to be able to open the test, and BOOM! Here’s the subject, here’s the dependencies, this is what I’m doing, and here’s what I expect it to do. If I can’t figure it out within a few seconds, I start to question the value of the tests. And that’s exactly what happened to me earlier this week.

The test I was looking at had some issues, but the developer who wrote it had their heart in the right place and made attempts to keep it relatively straightforward. It used a Context-Specification style and Moq to mock out the physical dependencies, but I got tied up in the mechanics of the test. The trouble I was having was determining which Mocks were part of the test versus Mocks that were there to support related dependencies.

Below is an example of a similar test and the steps I took to clean it up. Along the way I found something interesting, and I hope you do, too.

Original (Cruft Flavor)

Here’s a sample of the test. For clarity's sake, I only want to illustrate the initial setup of the test, so I’ve omitted the actual test part. Note: I’m using a flavor of context specification that I’ve blogged about before; if the syntax seems strange, you may want to read up.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected Mock<IDataProvider> _mockDataProvider;
   protected Mock<IUserContext> _mockUserContext;
   protected IDataProvider _dataProvider;
   protected IUserContext _userContext;

   public override void Context()
   {
       _mockDataProvider = new Mock<IDataProvider>();
       _mockUserContext = new Mock<IUserContext>();

       _userContext = _mockUserContext.Object;
       _dataProvider = _mockDataProvider.Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(_dataProvider,_userContext);
   }

   // public class SubContext : TemplateResolverSpecs
   // etc
}

This is a fairly simple example, and certainly those familiar with Moq’s syntax and general dependency injection patterns won’t have too much difficulty understanding what’s going on here. But you have to admit that while this is a trivial example, there’s a lot of code here for what’s needed – and you had to read all of it.

The Rewrite

When I started to rewrite this test, my motivation was to subclass the test fixture to create different contexts -- maybe I would want a context that used Mocks, and another with real dependencies. I started to debate whether it would be wise to put the Mocks in a subclass or in the base when it occurred to me why the test was confusing in the first place: the Mocks are an implementation detail that gets in the way of understanding the subject's dependencies. The Mocks aren’t important at all – it’s the dependencies that matter!

So, here’s the same setup with the Mocks moved out of the way, only referenced in the initialization of the test’s Context.

using Moq;
using Model.Services;
using MyContextSpecFramework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TemplateResolverSpecs : ContextSpecFor<TemplateResolver>
{
   protected IDataProvider DataProvider;
   protected IUserContext UserContext;

   public override void Context()
   {
       DataProvider = new Mock<IDataProvider>().Object;
       UserContext = new Mock<IUserContext>().Object;
   }

   public override TemplateResolver InitializeSubject()
   {
       return new TemplateResolver(DataProvider, UserContext);
   }
}

Much better, don’t you think? Note: I’ve also removed the underscore prefix and changed the case on my fields because they’re protected, and that goes a long way to improve readability, too.

Where’d my Mock go?

So you’re probably thinking “that’s great Bryan, but I was actually using those mocks” – and that’s a valid observation, but the best part is you don’t really need them anymore. Moq has a cool feature that lets you obtain the Mock wrapper from the mocked object anytime you want, so you only need the mock when it’s time to use it.

Simply use the static Get method on the Mock class to obtain a reference to your mock:

Mock.Get(DataProvider)
   .Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
   .Returns( new TemplateRecord() );

For contrast sake, here’s what the original would have looked like:

_mockDataProvider.Setup( dataProvider => dataProvider.GetTemplate(It.IsAny<string>()) )
                .Returns( new TemplateRecord() );

They’re basically the same, but the difference is I don’t have to remember the name of the variable for the mock anymore. And as an added bonus, our Mock.Setup calls will all line up with the same indentation, regardless of the length of the dependency's variable name.

Conclusion

While the above is just an example of how tests can be trimmed down for added readability, my hope is that this readability influences developers to declare their dependencies up front rather than weighing themselves and their tests down with the mechanics of the tests. If you find yourself suddenly requiring a Mock in the middle of a test, or creating Mocks for items that aren’t immediate dependencies, it should serve as a red flag that you might not be testing the subject in isolation and you may want to step back and rethink things a bit.


Monday, October 04, 2010

Manually adding Resource Files to Visual Studio Projects

Suppose you're moving files between projects and you have to move a Settings or Resource File.  If you drag these files between projects, you’ll notice that Visual Studio doesn’t preserve the relationships between the designer files:

[Screenshot: designer files no longer nested beneath their parent files]

Since these are embedded resources, you’ll need a few extra steps to get things sorted out:

  1. Right-click on the Project and choose Unload Project.  Visual Studio may churn for a few seconds.
  2. Once unloaded, Right-click on the project file and choose Edit <Project-Name>.
  3. Locate your resource files and set them up as embedded resources with auto-generated designer files:
<ItemGroup>
  <None Include="Resources\Settings.settings">
    <Generator>SettingsSingleFileGenerator</Generator>
    <LastGenOutput>Settings.Designer.cs</LastGenOutput>
  </None>
  <Compile Include="Resources\Settings.Designer.cs">
    <AutoGen>True</AutoGen>
    <DependentUpon>Settings.settings</DependentUpon>
    <DesignTimeSharedInput>True</DesignTimeSharedInput>
  </Compile>
  <EmbeddedResource Include="Resources\Resources.resx">
    <Generator>ResXFileCodeGenerator</Generator>
    <LastGenOutput>Resources.Designer.cs</LastGenOutput>
  </EmbeddedResource>
  <Compile Include="Resources\Resources.Designer.cs">
    <AutoGen>True</AutoGen>
    <DependentUpon>Resources.resx</DependentUpon>
    <DesignTime>True</DesignTime>
  </Compile>
</ItemGroup>

Cheers.


Wednesday, August 18, 2010

An update

“The rumors of my death have been greatly exaggerated.” – Mark Twain.

It’s been forever and a day since my last post, largely due to the size and complexity of my current project, but for those wondering (and to clear the writer's block in my head), here’s a quick update:

Selenium Toolkit for .NET

Although there hasn’t been a formal release of my open source project since last November, I have been covertly making minor commits here and there to keep the NUnit dependencies in sync with NUnit’s recent updates.  I may update the NUnit libraries to the latest release (2.5.7) this weekend.  I realize the irony here: the purpose of my project is to provide a simple installer for Selenium and my NUnit addin, so forcing you to download the source and compile it is beyond excuse.  Here are some of the things that I have been tinkering with for a major release, but haven’t fully sat down to finish:

  • WiX-based Installation:  I’ve been meaning to drop the out-of-the-box Visual Studio installer project in favor of a WiX-based installer.  The real intent is to install side-by-side versions of my NUnit addin for each version of NUnit installed locally.
  • Resharper 5.0 integration:  The good folks at JetBrains, through their commitment to open source projects, have recognized my project and donated a license of Resharper (read: YOU GUYS ROCK!).  To return the favor, I am looking to produce a custom runner for tests with my WebFixture attribute so that you can right-click any fixture or test to launch the Selenium process, just like you would with NUnit or MSTest.
  • Visual Studio 2008/2010 integration: Not to leave the rest of the development world (those without Resharper licenses) out in the cold, I have been playing with a custom MSTest extension.  Unfortunately for me, the out-of-the-box functionality for MSTest isn’t enough for my needs – I would really like to control execution of the environment before and after each test, as well as when all tests have finished execution – and there doesn’t seem to be a good hook for this.  It appears that my best option is to implement my own test adapter and plumbing.  I’ll leave the details of my suffering and choking on this to your imagination, or maybe another post.

What about Selenium 2.0, aka WebDriver?  I’m still sitting on the fence about how to adapt the toolkit to the new API, and I haven’t had a lot of time to play with it since my current project is a very thick client (no web).  I am interested to hear your thoughts, but my immediate reaction is to use a parameterized approach:

[WebTest]
public void UsingDefaultBrowser(WebDriver browser)
{
}

Other Ramblings

I won’t bore you to death with the details of the last few months – my twitter stream can do that – but expect to hear me spout more on WPF / Prism / Unity, TDD, Context/Specification soon.

In the meantime, happy coding.

Tuesday, February 09, 2010

Running code in a separate AppDomain

Suppose you’ve got a chunk of code that you need to run as part of your application but you’re concerned that it might bring down your app or introduce a memory leak.  Fortunately, the .NET runtime provides an easy mechanism to run arbitrary code in a separate AppDomain.  Not only can you isolate all exceptions to that AppDomain, but when the AppDomain unloads you can reclaim all the memory that was consumed.
Here’s a quick walkthrough that demonstrates creating an AppDomain and running some isolated code.

Create a new AppDomain

First we’ll create a new AppDomain based on the information of the currently running AppDomain.
AppDomainSetup currentSetup = AppDomain.CurrentDomain.SetupInformation;

var info = new AppDomainSetup()
              {
                  ApplicationBase = currentSetup.ApplicationBase,
                  LoaderOptimization = currentSetup.LoaderOptimization
              };

var domain = AppDomain.CreateDomain("Widget Domain", null, info);

Unwrap your MarshalByRefObject

Next we’ll create an object in that AppDomain and unwrap a proxy to it so that we can control the code in the remote AppDomain.  It’s important to make sure the object you’re creating inherits from MarshalByRefObject.  If you forget this step and the type is merely marked as serializable, the entire object will be copied over to the original AppDomain and you lose all isolation.
string assemblyName = "AppDomainExperiment";
string typeName = "AppDomainExperiment.MemoryEatingWidget";

IWidget widget = (IWidget)domain.CreateInstanceAndUnwrap(assemblyName, typeName);

Unload the domain

Once we’ve finished with the object, we can broom the entire AppDomain which frees up all resources attached to it.  In the example below, I’ve deliberately created a static reference to an object to prevent it from going out of scope.
AppDomain.Unload(domain);

Putting it all together

Here’s a sample that shows all the moving parts.
namespace AppDomainExperiment
{
    using System;
    using System.Collections.Generic;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class AppDomainLoadTests
    {
        [TestMethod]
        public void RunMarshalByRefObjectInSeparateAppDomain()
        {
            Console.WriteLine("Executing in AppDomain: {0}", AppDomain.CurrentDomain.Id);
            WriteMemory("Before creating the runner");

            using(var runner = new WidgetRunner("AppDomainExperiment",
                                                "AppDomainExperiment.MemoryEatingWidget"))
            {

                WriteMemory("After creating the runner");

                runner.Run(Console.Out);

                WriteMemory("After executing the runner");
            }

            WriteMemory("After disposing the runner");
        }

        private static void WriteMemory(string where)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            long memory = GC.GetTotalMemory(false);

            Console.WriteLine("Memory used '{0}': {1}", where, memory.ToString());
        }
    }

    public interface IWidget
    {
        void Run(TextWriter writer);
    }

    public class WidgetRunner : IDisposable
    {
        private readonly string _assemblyName;
        private readonly string _typeName;
        private AppDomain _domain;

        public WidgetRunner(string assemblyName, string typeName)
        {
            _assemblyName = assemblyName;
            _typeName = typeName;
        }

        #region IWidget Members

        public void Run(TextWriter writer)
        {
            AppDomainSetup currentSetup = AppDomain.CurrentDomain.SetupInformation;

            var info = new AppDomainSetup()
                          {
                              ApplicationBase = currentSetup.ApplicationBase,
                              LoaderOptimization = currentSetup.LoaderOptimization
                          };

            _domain = AppDomain.CreateDomain("Widget Domain", null, info);

            var widget = (IWidget)_domain.CreateInstanceAndUnwrap(_assemblyName, _typeName);

            if (!(widget is MarshalByRefObject))
            {
                throw new NotSupportedException("Widget must be a MarshalByRefObject");
            }
            widget.Run(writer);
        }

        #endregion

        #region IDisposable Members

        public void Dispose()
        {
            GC.SuppressFinalize(this);
            AppDomain.Unload(_domain);
        }

        #endregion
    }

    [Serializable]
    public class MemoryEatingWidget : MarshalByRefObject, IWidget
    {
        private IList<string> _memoryEater;

        private static IWidget Instance;

        #region IWidget Members

        public void Run(TextWriter writer)
        {
            writer.WriteLine("Executing in AppDomain: {0}", AppDomain.CurrentDomain.Id);

            _memoryEater = new List<string>();

            // create some really big strings
            for(int i = 0; i < 100; i++)
            {
                var s = new String('c', i*100000);
                _memoryEater.Add(s);
            }

            // THIS SHOULD PREVENT THE MEMORY FROM BEING GC'd
            Instance = this;
        }

        #endregion

        #region IDisposable Members

        public void Dispose()
        {
            
        }

        #endregion
    }
}
Running the test shows the following output:
Executing in AppDomain: 2
Memory used 'Before creating the runner': 569060
Memory used 'After creating the runner': 487508
Executing in AppDomain: 3
Memory used 'After executing the runner': 990525340
Memory used 'After disposing the runner': 500340
Based on this output, the main takeaway is that the memory is reclaimed when the AppDomain is unloaded.  Why don't the numbers match up at the beginning and end?  It’s one of those mysteries of the managed garbage collector – which reminds me of my favorite Norm Macdonald joke from SNL:
“Who are safer drivers? Men, or women?? Well, according to a new survey, 55% of adults feel that women are most responsible for minor fender-benders, while 78% blame men for most fatal crashes. Please note that the percentages in these pie graphs do not add up to 100% because the math was done by a woman. [Crowd groans.] For those of you hissing at that joke, it should be noted that that joke was written by a woman. So, now you don't know what the hell to do, do you? [Laughter] Nah, I'm just kidding, we don't hire women”
Happy Coding.

submit to reddit

Monday, February 08, 2010

Twelve Days of Code – Wrap up

Well, it’s been a very long twelve days indeed, and I accomplished more than I thought I would.  But alas, all good things must come to an end, so after a short hiatus on the blog I’m back to close out the Twelve Days of Code series for 2009.

For your convenience, here’s a list of the posts:

I want to thank all those who showed interest in the concept and if there are folks out there who were following along at home, please drop me a line or a comment.

For those interested in seeing some of the .NET 4.0 code and extending my work, the code is available for download.

I may pick up the experiment again once the next release candidate for Visual Studio is released.


Tuesday, January 12, 2010

The Three Step Developer Flow

A long time ago, a mentor of mine passed on some good advice that has stuck with me: “Make it work.  Make it right.  Make it fast.”  While this simple mantra is likely influenced by Donald Knuth’s famous and often misquoted statement that “premature optimization is the root of all evil,” it’s more about how a developer should approach development altogether.

Breaking it down…

What I’ve always loved about this simple advice is that if a developer takes the steps out of order, such as putting emphasis on design or performance, there’s a very strong possibility that the code will never work.

Make it work…

Developers should take the most pragmatic approach possible to get the solution working.  In some cases this should be considered prototype code to be thrown away before going into production.  Sadly, I’m sure that 80% of all production code is prototype code with unrealized design.

Make it right…

Now that you know how to get the code to work, take some time to get it into a shape that you can live with.  Interestingly enough, emphasis should still not be placed on designing for performance at this point.  If you can’t get to this stage, it should be considered technical debt to be resolved later.

Make it fast….

At this point you should have working code that looks clean and elegant, but how does it stack up when it’s integrated with production components or put under load?  If you spent time in the last two steps optimizing the code to handle the load of a thousand users but it’s only called once in the application, you may have wasted your time and optimized prematurely.  To truly know, code should be examined under a profiler to determine whether it meets the performance goals of the application.  This is all part of embedding a “culture of performance” into your organization.

Aligned with Test Driven Development

It’s interesting that this concept overlaps with Test Driven Development’s mantra “Red, Green, Refactor” quite well.  Rather than developing prototype code as a console app, I write tests to prove that it works.  When it’s time to clean up the code and make it right, I’m refactoring both the tests and the code in small increments – after each change, I can verify that it still works. 

Later, if we identify performance issues with the code, I can use the tests as production assets to help me understand what the code needs to do.  This provides guidance when ripping out chunks of poorly performing code.

Does following either the “red / green / refactor” or “make it work / right / fast” mantra mean that I don’t incorporate best practices or obvious implementations when writing the code? Hardly. I’ll write what I think needs to be written, but it’s important not to get carried away.

Write tests.  Test often.

submit to reddit

Thursday, January 07, 2010

Twelve Days of Code – Unity Framework

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the seventh in this series. This post focuses on using the Microsoft Unity Framework in our .NET 4.0 application.

This post assumes that you are familiar with the last few posts and have a working understanding of Inversion of Control.  If you’re new to this series, please check out some of the previous posts and then come back.  If you’ve never worked with Inversion of Control, it’s primarily about decoupling objects from their implementation (Factory Pattern, Dependency Injection, Service Locator) – but the best place to start is probably here.

Overview

Our goal for this post is to remove as many hard-coded references to types as possible.  Currently, the constructor of our MainWindow initializes all the dependencies of the Model and then manually sets the DataContext.  We need to pull this out of the constructor and decouple the binding of TaskApplicationViewModel to ITaskApplication.

Initializing the Container

We’ll introduce our inversion of control container in the Application’s OnStartup event.  The container’s job is to provide type location and object lifetime management for our objects, so all the hard-coded initialization logic in the MainWindow constructor is registered here.

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        IUnityContainer container = new UnityContainer();
        
        container.RegisterType<ITaskApplication, TaskApplicationViewModel>();
        container.RegisterType<ITaskSessionController, TaskSessionController>();
        container.RegisterType<IAlarmController, TaskAlarmController>();
        container.RegisterType<ISessionRepository, TaskSessionRepository>();

        Window window = new MainWindow();
        window.DataContext = container.Resolve<ITaskApplication>();
        window.Show();

        
        base.OnStartup(e);
    }
}

Also note that we construct the MainWindow, assign its DataContext, and then show the window ourselves.  This small change means that we must remove the StartupUri reference to MainWindow.xaml in App.xaml (otherwise we’d launch two windows).

Next Steps

Aside from the simplified object construction, it would appear that the above code doesn’t buy us much: we have roughly the same number of lines of code and we are simply delegating object construction to Unity.  Alternatively, we could move the hard-coded registrations to a configuration file (though that’s not my preference here); a sketch of that approach follows.
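
For reference, here's a rough sketch of what file-based registration could look like using Unity's configuration support.  The schema shown is the Unity 2.0 flavor, and the type names are abbreviated for readability – a real config file needs assembly-qualified names or namespace/assembly aliases:

<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
  <container>
    <register type="ITaskApplication" mapTo="TaskApplicationViewModel" />
    <register type="ITaskSessionController" mapTo="TaskSessionController" />
    <register type="IAlarmController" mapTo="TaskAlarmController" />
    <register type="ISessionRepository" mapTo="TaskSessionRepository" />
  </container>
</unity>

With that in place, the code registrations collapse to a single call to container.LoadConfiguration() (an extension method from Microsoft.Practices.Unity.Configuration).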

In the next post, we’ll see how using Prism’s Bootstrapper and Module configuration allows us to move this logic into their own modules.



Saturday, January 02, 2010

Twelve Days of Code – Entity Framework 4.0

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the sixth in this series. Today I’ll be adding some persistence logic to the Pomodoro application.

I should point out that I’ve never been a huge fan of object-relational mapping tools, and while I’ve done a few tours with SQL Server, I haven’t played much with SQLite.  I’ve heard good things about the upcoming version of the Entity Framework in .NET 4.0, so this post gives me a chance to play with both.

Getting Ready

As SQLite isn’t one of the default providers supported by Visual Studio 2010 (Beta 2), I downloaded and installed SQLite.  The installation adds the SQLite libraries to the Global Assembly Cache, and adds designer support for connecting to a SQLite database through the Server Explorer.  The installation and GAC’d assemblies may prove to be an issue when we want to deploy the application, but we’ll worry about that later.

Creating a Session Repository

So far, the project structure has a “Core” and a “Shell” project, where the “Core” project contains the central interfaces for the application.  Since the ITaskSessionController already has the responsibility of starting and stopping sessions, it is the ideal candidate to interact with an ISessionRepository for recording these activities against a persistent store.

To handle auditing of sessions, I created an ISessionRepository interface, which lives in the Core library:

public interface ISessionRepository
{
    void RecordCompletion(ITaskSession session);
    void RecordCancellation(ITaskSession session);
}

Although we don’t have an implementation for this interface, we do know how an object with this signature will be used by the TaskSessionController.  In anticipation of these changes, we add tests to the TaskSessionController that verify it communicates with its dependency:

public class TaskSessionControllerSpecs : SpecFor<TaskSessionController>
{
  // ... initial context setup with Mock ISessionRepository
  // omitted for clarity

  public class when_a_session_completes : TaskSessionControllerSpecs
  {
     // omitted for clarity

     [TestMethod]
     public void ensure_activity_is_recorded_in_repository()
     {
         repositoryMock.Verify(
            r => r.RecordCompletion(
                It.Is<ITaskSession>(s => s == session)));
     }
  }
}

To ensure the tests pass, we extend the TaskSessionController to take an ISessionRepository in its constructor and add the appropriate implementation.  Naturally, because the constructor of the TaskSessionController has changed, we adjust the fixture setup so that the code will compile.  Below is a snippet of the modified TaskSessionController:

public class TaskSessionController : ITaskSessionController
{
    public TaskSessionController(ISessionRepository repository)
    {
        SessionRepository = repository;
    }

    protected ISessionRepository SessionRepository
    {
        get; set;
    }

    public void Finish(ITaskSession session)
    {
        session.Stop();

        SessionRepository.RecordCompletion(session);
    }

    // ...omitted for clarity
}

Adding ADO.NET Entity Framework to the Project

While we could add the implementation of the ISessionRepository into the Core library, I’m going to add a new library Pomodoro.Data where we’ll add the Entity Framework model.  This strategy allows us to extend the Core model and provides us with some freedom to create alternate persistence strategies simply by swapping out the Pomodoro.Data assembly.

Once the project is created, we add the Entity Framework to the project using the Add New Item “ADO.NET Entity Data Model” template and follow the wizard:

Pomodoro.Data

Note that the wizard adds the appropriate references to the project automatically.

Since we don’t have a working database, we’ll choose to create an Empty Model.  Later on, we’ll generate the database from our defined model.

empty-model 

Creating a Data Model

One of the new features of the Entity Framework 4.0 is that it allows you to bind to an existing object model.  Although the TaskSession could be considered a candidate for an existing model, it doesn’t fit the bill cleanly – Sessions represent countdown timers, and they don’t track the final outcome.  Instead, we’ll use the default behavior of the framework and manually generate a model class, TaskSessionRecord:

add-new-entity

For our auditing purposes, we only need to record an ID, Start and End Times and whether the session was completed or cancelled.

task-session-record
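
Conceptually, the entity boils down to the shape below.  This is a hand-written sketch – the designer’s actual output derives from EntityObject and carries change-tracking plumbing, and the property types shown are illustrative:

// Sketch only: the real class is generated by the EF designer.
public partial class TaskSessionRecord
{
    public long Id { get; set; }            // key; actual type depends on the model
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public bool Complete { get; set; }      // true = completed, false = cancelled
}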

Creating the Database from the Model

After the model is complete, we generate the database from the model:

  1. Right click the designer and choose “Model Browser”
  2. In the Data Store, choose “Generate Database from Model”
  3. Create a new database connection.  In our case, we specify the SQLite provider
  4. Finish out the wizard by clicking Next and Finish.

new-database-connection

The wizard produces the Database and the matching DDL file to generate the tables.  Note that SQLite must be installed in order to have it appear as an option in the wizard.

Unfortunately, I wasn’t able to create the SQLite database using any of the tools within Visual Studio.  Instead, I cheated and manually created the TaskSessionRecord table.  We’ll hang onto the generated DDL file because we may want to programmatically generate the database at some point, as sketched below.  For the time being, I’ll also copy the database to the bin\Debug folder by hand.
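
If we do go down that road, a rough sketch with System.Data.SQLite might look like the following.  Keep in mind the wizard’s DDL targets SQL Server syntax and would likely need SQLite-specific touch-ups before it executes cleanly, and the file names passed in are illustrative:

using System.Data.SQLite;
using System.IO;

public static class DatabaseBootstrapper
{
    // Creates the SQLite database from the generated DDL script if it doesn't already exist.
    public static void EnsureDatabase(string dbPath, string ddlPath)
    {
        if (File.Exists(dbPath))
            return;

        SQLiteConnection.CreateFile(dbPath);

        using (var connection = new SQLiteConnection("data source=" + dbPath))
        {
            connection.Open();
            using (var command = connection.CreateCommand())
            {
                command.CommandText = File.ReadAllText(ddlPath);
                command.ExecuteNonQuery();
            }
        }
    }
}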

Implementing the Repository

The repository implementation is fairly straightforward.  We simply instantiate the object context (we specified part of the name when we added the ADO.NET Entity Data Model to the project), add a new TaskSessionRecord to the EntitySet and then save the changes to commit the transaction:

namespace Pomodoro.Data
{
    public class TaskSessionRepository : ISessionRepository
    {
        public void RecordCompletion(ITaskSession session)
        {
            using (var context = new PomodoroDataContainer())
            {
                context.TaskSessionRecords.AddObject(new TaskSessionRecord(session, true));
                context.SaveChanges();
            }
        }

        public void RecordCancellation(ITaskSession session)
        {
            using (var context = new PomodoroDataContainer())
            {
                context.TaskSessionRecords.AddObject(new TaskSessionRecord(session, false));
                context.SaveChanges();
            }
        }
    }
}

Note that to simplify the code, I extended the auto-generated TaskSessionRecord class to provide a convenience constructor.  Since the auto-generated class is marked as partial, the convenience constructor is placed in its own file.  As some of the existing generated code requires the presence of an empty constructor (which is implicitly defined by the compiler if no other constructor is present), we must also include a default constructor.

public partial class TaskSessionRecord
{
    // needed to satisfy some of the existing generated code
    public TaskSessionRecord()
    {
    }

    public TaskSessionRecord(ITaskSession session, bool complete)
    {
        Id = session.Id;
        StartTime = session.StartTime;
        EndTime = session.EndTime;
        Complete = complete;
    }
}

Integrating into the Shell

To integrate the new SessionRepository into the Pomodoro application, we need to add the database, the Pomodoro.Data assembly and the appropriate configuration settings.  For the time being, we’ll add a reference to the Pomodoro.Data library to the Shell application – this strategy may change if we introduce a composite application pattern such as Prism or MEF.  For brevity’s sake, I’ll manually copy the database into the bin\Debug folder.

The connection string settings appear in the app.config like so:

<connectionStrings>
  <!-- formatted for readability -->
  <add name="PomodoroDataContainer" 
       connectionString="metadata=res://*/PomodoroData.csdl|res://*/PomodoroData.ssdl|res://*/PomodoroData.msl;
                    provider=System.Data.SQLite;
                    provider connection string=&quot;data source=Pomodoro.sqlite&quot;" 
       providerName="System.Data.EntityClient" />
</connectionStrings>

One Last Gotcha

As the solution is compiled against .NET Framework 4.0 and our SQLite assemblies are compiled against .NET 2.0, we receive a really nasty error when the System.Data.SQLite assembly loads into the AppDomain:

Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information

We solve this problem by adding useLegacyV2RuntimeActivationPolicy="true" to the startup element in the app.config:

<startup useLegacyV2RuntimeActivationPolicy="true">
  <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
</startup>

Next Steps

In the next post, we’ll look at adding Unity as a dependency injection container to the application.


Thursday, December 31, 2009

Twelve Days of Code – Windows 7 Shell Integration

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the fifth in this series.  Today I’ll cover adding some of the cool new Windows 7 Shell integration features in WPF 4.0.

I’ll be revisiting some of the XAML and object model for this post, so it wouldn’t hurt to read up on the earlier posts in this series.

WPF 4.0 offers some nice Windows 7 Shell integration features that are easy to add to your application.

Progress State

Since the Pomodoro application’s primary function is to provide a countdown timer, it seems like a natural fit to use the built-in Windows 7 Taskbar progress indicator to show the time remaining.  Hooking it up was a snap.  The progress indicator uses two values, ProgressState and ProgressValue, where ProgressState indicates the progress mode (Error, Indeterminate, None, Normal, Paused) and ProgressValue is a numeric value between 0 and 1.  Two simple converters provide the translation between ViewModel and View: one to control ProgressState and the other to compute ProgressValue.

<Window.TaskbarItemInfo>
    <TaskbarItemInfo
        ProgressState="{Binding ActiveItem, Converter={StaticResource ProgressStateConverter}}"
        ProgressValue="{Binding ActiveItem, Converter={StaticResource ProgressValueConverter}}"
        >
    </TaskbarItemInfo>
</Window.TaskbarItemInfo>

Which looks something like this:

taskbar-progress

Note that for ProgressValue I’m binding to ActiveItem instead of TimeRemaining.  This is because the progress value is obtained through a percent-complete calculation – time remaining relative to the original session length – which requires that both values are available to the converter.  I suppose this could have been calculated through a multi-binding, but the single converter makes things much easier.

namespace Pomodoro.Shell.Converters
{
    [ValueConversion(typeof(ITaskSession), typeof(System.Windows.Shell.TaskbarItemProgressState))]
    public class ProgressStateConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value != null && value is ITaskSession)
            {
                ITaskSession session = (ITaskSession)value;

                if (session.IsActive)
                {
                    return TaskbarItemProgressState.Normal;
                }
            }
            return TaskbarItemProgressState.None;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }


    [ValueConversion(typeof(ITaskSession), typeof(double))]
    public class ProgressValueConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value != null && value is ITaskSession)
            {
                ITaskSession session = (ITaskSession)value;

                if (session.IsActive)
                {
                    int delta = session.SessionLength - session.TimeRemaining;
                    
                    return (double)(delta / (double)session.SessionLength);
                }
            }
            return 1;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

This is all well and good, but there’s a minor setback: progress will not increment unless the PropertyChanged event is raised for the ActiveItem property.  This is an easy fix: the ITaskSession needs to expose a NotifyProgress event, and the ITaskApplication needs to notify the View when the event fires.  Since the progress indicator in the taskbar is only a few dozen pixels wide, spamming the View with each millisecond update is a bit much.  We solve this problem by throttling how often the event is raised using a NotifyInterval property.

// from TaskSessionViewModel
void OnTimer(object sender, ElapsedEventArgs e)
{
    NotifyProperty("TimeRemaining");

    if (IsActive)
    {
        // if notify interval is set
        if (NotifyInterval > 0)
        {
            notifyIntervalCounter += TimerInterval;

            if (notifyIntervalCounter >= NotifyInterval)
            {
                if (NotifyProgress != null)
                {
                    NotifyProgress(this, EventArgs.Empty);
                }
                notifyIntervalCounter = 0;
            }
        }
    }

    if (TimeRemaining == 0)
    {
        // end timer
        IsActive = false;

        if (SessionFinished != null)
        {
            SessionFinished(this, EventArgs.Empty);
        }
    }
}

TaskbarItemInfo Buttons

Using the exact same command bindings mentioned in my last post, adding buttons to control the countdown timer from the Aero Peek window is dirt simple.  The only gotcha is that the taskbar buttons do not support custom content; instead, you must specify an image to display anything meaningful to the user.

<Window ....>

    <Window.Resources>
        <!-- play icon for taskbar button -->
        <DrawingImage x:Key="PlayImage">
            <DrawingImage.Drawing>
                <DrawingGroup>
                    <DrawingGroup.Children>
                        <GeometryDrawing Brush="Black" Geometry="F1 M 50,25L 0,0L 0,50L 50,25 Z "/>
                    </DrawingGroup.Children>
                </DrawingGroup>
            </DrawingImage.Drawing>
        </DrawingImage>

        <!-- stop icon for taskbar button -->
        <DrawingImage x:Key="StopImage">
            <DrawingImage.Drawing>
                <DrawingGroup>
                    <DrawingGroup.Children>
                        <GeometryDrawing Brush="Black" Geometry="F1 M 0,0L 50,0L 50,50L 0,50L 0,0 Z "/>
                    </DrawingGroup.Children>
                </DrawingGroup>
            </DrawingImage.Drawing>
        </DrawingImage>

        <!-- ... converters -->
        
    </Window.Resources>

    <Window.TaskbarItemInfo>
        <TaskbarItemInfo ... >
            
            <TaskbarItemInfo.ThumbButtonInfos>
                <ThumbButtonInfoCollection>

                    <!-- start button -->
                    <ThumbButtonInfo
                        ImageSource="{StaticResource ResourceKey=PlayImage}"
                        Command="{Binding StartCommand}"
                        CommandParameter="{Binding ActiveItem}"
                        Visibility="{Binding ActiveItem.IsActive, 
                                     Converter={StaticResource BoolToHiddenConverter}, 
                                     FallbackValue={x:Static Member=pc:Visibility.Visible}}"
                        />

                    <!-- stop button -->
                    <ThumbButtonInfo
                        ImageSource="{StaticResource ResourceKey=StopImage}"
                        Command="{Binding CancelCommand}"
                        CommandParameter="{Binding ActiveItem}"
                        Visibility="{Binding ActiveItem.IsActive, 
                                     Converter={StaticResource BoolToVisibleConverter}, 
                                     FallbackValue={x:Static Member=pc:Visibility.Collapsed}}"
                        />
                </ThumbButtonInfoCollection>
            </TaskbarItemInfo.ThumbButtonInfos>
        </TaskbarItemInfo>
    </Window.TaskbarItemInfo>

    <!-- ... -->

</Window>

The applied XAML looks like this:

taskbar-buttons

Next Steps

In the next post, we’ll add some auditing capability to the Pomodoro application using SQLite and the version of the Entity Framework that ships with .NET 4.0.


Tuesday, December 29, 2009

Twelve Days of Code – Views

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the fourth in this series -- we’ll look at some basic XAML for the View, data binding and some simple styling.

Some basic Layout

Our Pomodoro application doesn’t require a sophisticated layout with complex XAML.  All we really need is an area for our countdown timer and two buttons to start and cancel the Pomodoro.  With that in mind, we’ll drop a TextBlock and two buttons into a grid, like so:

<Window x:Class="Pomodoro.Shell.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Pomodoro" Width="250" Height="120">
    
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition />
            <ColumnDefinition />
        </Grid.ColumnDefinitions>
        
        <!-- active item -->
        <TextBlock Grid.Column="0"/>
        
        <!-- command buttons -->
        <StackPanel Orientation="Horizontal" Grid.Column="1">
            <Button Content="Start" />
            <Button Content="Stop" />
        </StackPanel>
        
    </Grid>
</Window>

Which looks something like this:

Basic Pomodo - Yuck!

Binding to the ViewModel

The great thing behind the Model-View-ViewModel pattern is that we don’t need goofy “code-behind” logic to control the View.  Instead, we use WPF’s powerful data binding to wire the View to the ViewModel.

There are many different ways to bind the ViewModel to the View, but here’s a quick and dirty mechanism until there’s an inversion of control container to resolve the ViewModel:

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        var sessionController = new TaskSessionController();
        var alarmController = new TaskAlarmController();

        var ctx = new TaskApplicationViewModel(sessionController, alarmController);
        this.DataContext = ctx;
    }
}

The XAML binds to the TaskApplicationViewModel’s ActiveItem, StartCommand and CancelCommand:

<!-- active item -->
<TextBlock Text="{Binding ActiveItem.TimeRemaining}" />

<!-- command buttons -->
<StackPanel Orientation="Horizontal" Grid.Column="1">
    <Button Content="Start" 
            Command="{Binding StartCommand}" 
            CommandParameter="{Binding ActiveItem}"                 
            />
    <Button Content="Stop" 
            Command="{Binding CancelCommand}" 
            CommandParameter="{Binding ActiveItem}"
            />
</StackPanel>

The Countdown Timer

At this point, clicking the Start button shows the Pomodoro counting down in milliseconds.  I had considered breaking the countdown timer into its own user control, but chose to be pragmatic and use a simple TextBlock.  The display can be dressed up using a simple converter:

namespace Pomodoro.Shell.Converters
{
    [ValueConversion(typeof(int), typeof(string))]
    public class TimeRemainingConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value is int)
            {
                int remaining = (int)value;

                TimeSpan span = TimeSpan.FromMilliseconds(remaining);

                return String.Format("{0:00}:{1:00}", span.Minutes, span.Seconds);
            }
            return String.Empty;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

Control Button Visibility

With the data binding so far, the Pomodoro application comes to life with a crude but functional user interface.  To improve the experience, I’d like to make the buttons contextual so that only the appropriate button is shown based on the state of the session.  To achieve this effect, we bind to the IsActive property and control the visibility state with mutually exclusive converters: BooleanToVisibleConverter and BooleanToHiddenConverter.  I honestly don’t know why these converters aren’t part of the framework, as I use this concept frequently.

namespace Pomodoro.Shell.Converters
{
    [ValueConversion(typeof(bool), typeof(Visibility))]
    public class BooleanToHiddenConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value is bool)
            {
                bool hidden = (bool)value;
                if (hidden)
                {
                    return Visibility.Collapsed;
                }
            }
            return Visibility.Visible;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }

    [ValueConversion(typeof(bool),typeof(Visibility))]
    public class BooleanToVisibleConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value is bool)
            {
                bool active = (bool)value;

                if (active)
                {
                    return Visibility.Visible;
                }
            }
            return Visibility.Collapsed;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

However, there’s a binding problem with all of these converters: by default, the ActiveItem of the TaskApplication is null, and none of the bindings will take effect until after the object has been set.  This is easily fixed with a FallbackValue in the binding syntax:

<Window ...
        xmlns:local="clr-namespace:Pomodoro.Shell.Converters"
        xmlns:pc="clr-namespace:System.Windows;assembly=PresentationCore">

    <Window.Resources>
        <local:BooleanToVisibleConverter x:Key="BoolToVisibleConverter" />
        <local:BooleanToHiddenConverter x:Key="BoolToHiddenConverter" />
        <local:TimeRemainingConverter x:Key="TimeSpanConverter" />
    </Window.Resources>

    <Grid>

        <!-- active item -->
        <TextBlock Text="{Binding ActiveItem.TimeRemaining, Converter={StaticResource TimeSpanConverter}, 
                                  FallbackValue='00:00'}" />

        <!-- command buttons -->
        <StackPanel Orientation="Horizontal" Grid.Column="1">
            <Button Content="Start" 
                    Command="{Binding StartCommand}" 
                    CommandParameter="{Binding ActiveItem}"
                    Visibility="{Binding ActiveItem.IsActive, Converter={StaticResource BoolToHiddenConverter}, 
                    FallbackValue={x:Static Member=pc:Visibility.Visible}}"                   
                    />
            <Button Content="Stop" 
                    Command="{Binding CancelCommand}" 
                    CommandParameter="{Binding ActiveItem}"
                    Visibility="{Binding ActiveItem.IsActive, Converter={StaticResource BoolToVisibleConverter}, 
                    FallbackValue={x:Static Member=pc:Visibility.Collapsed}}"
                    />
        </StackPanel>

    </Grid>
</Window>

With a few extra styles applied, the Pomodoro app is shaping up nicely:

Not so bad Pomodoro

Next Steps

In the next post, I’ll look at some of the new Windows 7 integration features available in WPF 4.0.
