Thursday, December 31, 2009

Twelve Days of Code – Windows 7 Shell Integration

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the fifth in this series.  Today I’ll cover adding some of the cool new Windows 7 Shell integration features in WPF 4.0.

I’ll be revisiting some of the XAML and object model from the earlier posts in this series, so it wouldn’t hurt to read up on those first.

WPF 4.0 offers some nice Windows 7 Shell integration features that are easy to add to your application.

Progress State

Since the Pomodoro application’s primary function is to provide a countdown timer, it seems like a natural fit to use the built-in Windows 7 taskbar progress indicator to show the time remaining.  Hooking it up was a snap.  The progress indicator uses two values, ProgressState and ProgressValue, where ProgressState indicates the progress mode (Error, Indeterminate, None, Normal, Paused) and ProgressValue is a numeric value between 0 and 1.  Two simple converters provide the translation between ViewModel and View: one to control ProgressState and the other to compute ProgressValue.

<Window.TaskbarItemInfo>
    <TaskbarItemInfo
        ProgressState="{Binding ActiveItem, Converter={StaticResource ProgressStateConverter}}"
        ProgressValue="{Binding ActiveItem, Converter={StaticResource ProgressValueConverter}}"
        >
    </TaskbarItemInfo>
</Window.TaskbarItemInfo>

Which looks something like this:

[Image: taskbar progress indicator]

Note that for ProgressValue I’m binding to ActiveItem instead of TimeRemaining.  This is because the progress value is obtained through a percent-complete calculation -- time remaining over the original session length -- which requires that both values are available to the converter.  I suppose this could have been calculated through a multi-binding, but the single converter makes things much easier.

namespace Pomodoro.Shell.Converters
{
    using System;
    using System.Windows.Data;
    using System.Windows.Shell;

    [ValueConversion(typeof(ITaskSession), typeof(TaskbarItemProgressState))]
    public class ProgressStateConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            ITaskSession session = value as ITaskSession;

            // show progress only while a session is running
            if (session != null && session.IsActive)
            {
                return TaskbarItemProgressState.Normal;
            }
            return TaskbarItemProgressState.None;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }


    [ValueConversion(typeof(ITaskSession), typeof(double))]
    public class ProgressValueConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            ITaskSession session = value as ITaskSession;

            if (session != null && session.IsActive)
            {
                // percent complete: time elapsed over the original session length
                int delta = session.SessionLength - session.TimeRemaining;

                return delta / (double)session.SessionLength;
            }

            // ProgressValue is a double, so return 1.0 rather than a boxed int
            return 1.0;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}
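
For the curious, the multi-binding route would have meant binding both TimeRemaining and SessionLength and combining them in an IMultiValueConverter along these lines.  This is a sketch of the alternative, not code from the project:

using System;
using System.Globalization;
using System.Windows.Data;

public class ProgressMultiConverter : IMultiValueConverter
{
    public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
    {
        // expects [0] = TimeRemaining, [1] = SessionLength (both in milliseconds)
        if (values != null && values.Length == 2 && values[0] is int && values[1] is int)
        {
            int timeRemaining = (int)values[0];
            int sessionLength = (int)values[1];

            if (sessionLength > 0)
            {
                return (sessionLength - timeRemaining) / (double)sessionLength;
            }
        }
        return 1.0;
    }

    public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}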

This is all well and good, but there’s a minor setback: the progress will not update unless the PropertyChanged event is raised for the ActiveItem property.  This is an easy fix: the ITaskSession needs to expose a NotifyProgress event, and the ITaskApplication needs to notify the View when the event fires.  Since the progress indicator in the taskbar is only a few dozen pixels wide, spamming the View with every millisecond update is a bit much.  We solve this problem by throttling how often the event is raised, using a NotifyInterval property.

// from TaskSessionViewModel
void OnTimer(object sender, ElapsedEventArgs e)
{
    NotifyProperty("TimeRemaining");

    if (IsActive)
    {
        // if notify interval is set
        if (NotifyInterval > 0)
        {
            notifyIntervalCounter += TimerInterval;

            if (notifyIntervalCounter >= NotifyInterval)
            {
                if (NotifyProgress != null)
                {
                    NotifyProgress(this, EventArgs.Empty);
                }
                notifyIntervalCounter = 0;
            }
        }
    }

    if (TimeRemaining == 0)
    {
        // end timer
        IsActive = false;

        if (SessionFinished != null)
        {
            SessionFinished(this, EventArgs.Empty);
        }
    }
}
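
The other half of the fix is in the application ViewModel: when the session raises NotifyProgress, re-raise PropertyChanged for ActiveItem so the taskbar bindings re-evaluate.  Something like this sketch, where the subscription point is assumed:

// from TaskApplicationViewModel (sketch)
void AttachSession(ITaskSession session)
{
    // each throttled NotifyProgress tick re-runs the taskbar converters
    session.NotifyProgress += (sender, e) => NotifyProperty("ActiveItem");
}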

TaskbarItemInfo Buttons

Using the exact same command bindings mentioned in my last post, adding buttons to control the countdown timer from the Aero Peek window is dirt simple. The only gotcha is that the taskbar buttons do not support custom content; instead, you must specify an image to display anything meaningful to the user.

<Window ....>

    <Window.Resources>
        <!-- play icon for taskbar button -->
        <DrawingImage x:Key="PlayImage">
            <DrawingImage.Drawing>
                <DrawingGroup>
                    <DrawingGroup.Children>
                        <GeometryDrawing Brush="Black" Geometry="F1 M 50,25L 0,0L 0,50L 50,25 Z "/>
                    </DrawingGroup.Children>
                </DrawingGroup>
            </DrawingImage.Drawing>
        </DrawingImage>

        <!-- stop icon for taskbar button -->
        <DrawingImage x:Key="StopImage">
            <DrawingImage.Drawing>
                <DrawingGroup>
                    <DrawingGroup.Children>
                        <GeometryDrawing Brush="Black" Geometry="F1 M 0,0L 50,0L 50,50L 0,50L 0,0 Z "/>
                    </DrawingGroup.Children>
                </DrawingGroup>
            </DrawingImage.Drawing>
        </DrawingImage>

<!-- ... converters ... -->
        
    </Window.Resources>

    <Window.TaskbarItemInfo>
        <TaskbarItemInfo ... >
            
            <TaskbarItemInfo.ThumbButtonInfos>
                <ThumbButtonInfoCollection>

                    <!-- start button -->
                    <ThumbButtonInfo
                        ImageSource="{StaticResource ResourceKey=PlayImage}"
                        Command="{Binding StartCommand}"
                        CommandParameter="{Binding ActiveItem}"
                        Visibility="{Binding ActiveItem.IsActive, 
                                     Converter={StaticResource BoolToHiddenConverter}, 
                                     FallbackValue={x:Static Member=pc:Visibility.Visible}}"
                        />

                    <!-- stop button -->
                    <ThumbButtonInfo
                        ImageSource="{StaticResource ResourceKey=StopImage}"
                        Command="{Binding CancelCommand}"
                        CommandParameter="{Binding ActiveItem}"
                        Visibility="{Binding ActiveItem.IsActive, 
                                     Converter={StaticResource BoolToVisibleConverter}, 
                                     FallbackValue={x:Static Member=pc:Visibility.Collapsed}}"
                        />
                </ThumbButtonInfoCollection>
            </TaskbarItemInfo.ThumbButtonInfos>
        </TaskbarItemInfo>
    </Window.TaskbarItemInfo>

<!-- ... main content ... -->

</Window>

The applied XAML looks like this:

[Image: taskbar thumbnail buttons in the Aero Peek window]

Next Steps

In the next post, we’ll add some auditing capability to the Pomodoro application using SQLite and the version of the Entity Framework included in .NET 4.0.


Tuesday, December 29, 2009

Twelve Days of Code – Views

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. This post is the fourth in this series -- we’ll look at some basic XAML for the View, data binding and some simple styling.

Some basic Layout

Our Pomodoro application doesn’t require a sophisticated layout with complex XAML.  All we really need is an area for our countdown timer and two buttons to start and cancel the Pomodoro.  That being said, we’ll drop a TextBlock and two buttons into a grid, like so:

<Window x:Class="Pomodoro.Shell.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Pomodoro" Width="250" Height="120">
    
    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition />
            <ColumnDefinition />
        </Grid.ColumnDefinitions>
        
        <!-- active item -->
        <TextBlock Grid.Column="0"/>
        
        <!-- command buttons -->
        <StackPanel Orientation="Horizontal" Grid.Column="1">
            <Button Content="Start" />
            <Button Content="Stop" />
        </StackPanel>
        
    </Grid>
</Window>

Which looks something like this:

[Image: the basic Pomodoro window -- yuck!]

Binding to the ViewModel

The great thing behind the Model-View-ViewModel pattern is that we don’t need goofy “code-behind” logic to control the View.  Instead, we use WPF’s powerful data binding to graft the View onto the ViewModel.

There are many different ways to bind the ViewModel to the View, but here’s a quick and dirty mechanism until there’s an inversion of control container to resolve the ViewModel:

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();

        var sessionController = new TaskSessionController();
        var alarmController = new TaskAlarmController();

        var ctx = new TaskApplicationViewModel(sessionController, alarmController);
        this.DataContext = ctx;
    }
}

The XAML binds to the TaskApplicationViewModel’s ActiveItem, StartCommand and CancelCommand:

<!-- active item -->
<TextBlock Text="{Binding ActiveItem.TimeRemaining}" />

<!-- command buttons -->
<StackPanel Orientation="Horizontal" Grid.Column="1">
    <Button Content="Start" 
            Command="{Binding StartCommand}" 
            CommandParameter="{Binding ActiveItem}"                 
            />
    <Button Content="Stop" 
            Command="{Binding CancelCommand}" 
            CommandParameter="{Binding ActiveItem}"
            />
</StackPanel>

The Count Down Timer

At this point, clicking the Start button shows the Pomodoro counting down in milliseconds.  I had considered breaking the countdown timer into its own user control, but chose to be pragmatic and use a simple TextBlock.  The display can be dressed up using a simple converter:

namespace Pomodoro.Shell.Converters
{
    [ValueConversion(typeof(int), typeof(string))]
    public class TimeRemainingConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value is int)
            {
                int remaining = (int)value;

                TimeSpan span = TimeSpan.FromMilliseconds(remaining);

                return String.Format("{0:00}:{1:00}", span.Minutes, span.Seconds);
            }
            return String.Empty;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

Control Button Visibility

With the data binding so far, the Pomodoro application comes to life with a crude but functional user interface.  To improve the experience, I’d like to make the buttons contextual so that only the appropriate button is shown based on the state of the session.  To achieve this effect, we bind to the IsActive property and control the visibility state with mutually exclusive converters: BooleanToVisibleConverter and BooleanToHiddenConverter.  I honestly don’t know why these converters aren’t part of the framework, as I use this concept frequently.

namespace Pomodoro.Shell.Converters
{
    [ValueConversion(typeof(bool), typeof(Visibility))]
    public class BooleanToHiddenConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value is bool)
            {
                bool hidden = (bool)value;
                if (hidden)
                {
                    return Visibility.Collapsed;
                }
            }
            return Visibility.Visible;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }

    [ValueConversion(typeof(bool),typeof(Visibility))]
    public class BooleanToVisibleConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            if (value is bool)
            {
                bool active = (bool)value;

                if (active)
                {
                    return Visibility.Visible;
                }
            }
            return Visibility.Collapsed;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}

However, there’s a Binding problem with all of these converters: by default, the ActiveItem of the TaskApplication is null and none of the bindings will take effect until after the object has been set.  This is easily fixed with a FallbackValue in the binding syntax:

<Window ...
        xmlns:local="clr-namespace:Pomodoro.Shell.Converters"
        xmlns:pc="clr-namespace:System.Windows;assembly=PresentationCore">

    <Window.Resources>
        <local:BooleanToVisibleConverter x:Key="BoolToVisibleConverter" />
        <local:BooleanToHiddenConverter x:Key="BoolToHiddenConverter" />
        <local:TimeRemainingConverter x:Key="TimeSpanConverter" />
    </Window.Resources>

    <Grid>

        <!-- active item -->
        <TextBlock Text="{Binding ActiveItem.TimeRemaining, Converter={StaticResource TimeSpanConverter}, 
                                  FallbackValue='00:00'}" />

        <!-- command buttons -->
        <StackPanel Orientation="Horizontal" Grid.Column="1">
            <Button Content="Start" 
                    Command="{Binding StartCommand}" 
                    CommandParameter="{Binding ActiveItem}"
                    Visibility="{Binding ActiveItem.IsActive, Converter={StaticResource BoolToHiddenConverter}, 
                    FallbackValue={x:Static Member=pc:Visibility.Visible}}"                   
                    />
            <Button Content="Stop" 
                    Command="{Binding CancelCommand}" 
                    CommandParameter="{Binding ActiveItem}"
                    Visibility="{Binding ActiveItem.IsActive, Converter={StaticResource BoolToVisibleConverter}, 
                    FallbackValue={x:Static Member=pc:Visibility.Collapsed}}"
                    />
        </StackPanel>

    </Grid>
</Window>

A few extra styles applied, and the Pomodoro app is shaping up nicely:

[Image: the styled Pomodoro window -- not so bad]

Next Steps

In the next post, I’ll look at some of the new Windows 7 integration features available in WPF 4.0.


Thursday, December 24, 2009

Twelve Days of Code – Pomodoro Object Model

As part of the twelve days of code, I’m building a Pomodoro style task tracking application and blogging about it. You can play along at home too. This post talks about the process of defining the object model and parts of the presentation model.

Most of the Model-View-ViewModel examples I’ve seen have not put emphasis on the design of the Model, so I’m not sure that I can call what I’m doing proper MVVM, but I like to have a fully functional application independent of its presentation and then extend parts of the model into presentation model objects. As I was designing the object model, I chose to use interfaces rather than concrete classes. There are several reasons for this, but the main factor was to apply a top-down Test-Driven-Development methodology.

On Top-Down Test-Driven-Development…

The theory behind “top-down TDD” is that by using the Application or other top-level object as the starting point, I only have to work with that object and its immediate dependencies rather than the entire implementation. As dependencies for that object are realized, I create a simple interface rather than a fully fleshed-out implementation -- I find using interfaces really helps to visualize the code’s responsibilities. Using interfaces also makes the design more pliable, in that I can move responsibilities around without having to overhaul the implementation.

This approach plays well into test-driven development since tools like RhinoMocks and Moq can quickly generate mock implementations of interfaces at runtime. These dynamic implementations make it possible to define how dependencies will behave under normal (and irregular) circumstances, which allows me to concentrate on the object I’m currently constructing rather than several objects at once. After writing a few small tests, I make a conscious effort to validate or rework the object model.

Using the Context/Specification strategy I outlined a few posts ago, the process I followed resembled something like this:

  • Design a simple interface (one or two simple methods)
  • Create the concrete class and the test fixture
  • After spending a few minutes thinking about the different scenarios or states this object could represent, I create a test class for that context
  • Write a specification (test) for that context
  • Write the assertion for that specification
  • Refine the context / subject-initialization / because
  • Write the implementation until the test passes
  • Refactor.
  • Add more tests.

(Confession: I started writing the ITaskApplication first but then switched to ITaskSessionController because I had a clearer vision of what it needed to do.) More on the tests later; first, let’s look at the object model.
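
To give a flavour of how this plays out, here’s a minimal context/specification-style fixture using Moq.  Take it as a sketch: the fixture and spec names come from the list later in this post, but ITaskAlarmController and the controller’s CreateSession method are guesses at the real member names.

namespace Pomodoro.Core.Tests
{
    using Moq;
    using NUnit.Framework;

    [TestFixture]
    public class when_a_task_is_started
    {
        private Mock<ITaskSessionController> sessionController;
        private TaskApplicationViewModel subject;

        [SetUp]
        public void Context()
        {
            // mock the dependencies so only the subject's behavior is under test
            sessionController = new Mock<ITaskSessionController>();
            var alarmController = new Mock<ITaskAlarmController>();

            subject = new TaskApplicationViewModel(sessionController.Object, alarmController.Object);

            // because: the user starts a task
            subject.StartCommand.Execute(subject.ActiveItem);
        }

        [Test]
        public void should_create_a_new_session()
        {
            sessionController.Verify(c => c.CreateSession());
        }
    }
}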

The Pomodoro Object Model

Here’s a quick diagram of the object model as seen by its interfaces. Note that instead of referring to “Pomodoro” or “Tomato”, I’m using a more generic term “Task”:

[Image: Pomodoro.Core interface diagram]

In the above diagram, the Task Application represents the top of the application, where the user will interact to start and stop (cancel) a task session. By design, the application hosts one session at a time, and it delegates the responsibility for constructing and initializing a task session to the Session Controller. The Task Application also monitors the session’s SessionFinished event; when it fires, the Application delegates to the AlarmController and SessionController to notify the user and track completion of the session.

Presently the TaskSessionController acts primarily as a factory for new sessions and has no immediate dependencies, though it would logically communicate with a persistence store to track usage information (when I get around to that).

The TaskSession object encapsulates the internal state of the session as well as the timer logic. When the timer completes, the SessionFinished event is fired.
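
Since the diagram image didn’t make it into this page, here is a rough reconstruction of the two central contracts, pieced together from how they are used throughout this series.  The member names are inferred, not copied from the actual source:

using System;

public interface ITaskSession
{
    Guid Id { get; }
    bool IsActive { get; }

    // both lengths are in milliseconds, matching the converters in the other posts
    int SessionLength { get; }
    int TimeRemaining { get; }

    // throttles how often NotifyProgress fires (see the Shell integration post)
    int NotifyInterval { get; set; }

    event EventHandler SessionFinished;
    event EventHandler NotifyProgress;
}

public interface ITaskSessionController
{
    // acts as a factory for new sessions
    ITaskSession CreateSession();

    void CancelSession(ITaskSession session);
    void FinishSession(ITaskSession session);
}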

The Realized Implementation

Using a TDD methodology, I created concrete implementations for each interface as I went. At this point, the division between Presentation Model (aka ViewModel) and Controller Logic (aka Model) is clearly visible:

[Image: Pomodoro.Core implementation class diagram]

To help realize the implementation, I borrowed a few key classes from the internets:

BaseViewModel

When starting out with the Presentation Model, I found this article to be good background and a starting point for the Model-View-ViewModel pattern. The article provides a ViewModelBase implementation which contains the common base ViewModel plumbing, including the IDisposable pattern. The most notable part is the implementation of INotifyPropertyChanged, which contains debug-only validation logic that throws an error if the ViewModel attempts to raise a PropertyChanged event for a property that does not exist. This simple trick does away with stupid bugs caused by typos in ViewModel bindings.

I’ve also added BeginInitialization and EndInitialization members, which prevent the PropertyChanged event from firing during initialization. This trick comes in handy when the ViewModel is complicated enough that raising the event needlessly impacts performance.
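
Condensed into a sketch, the relevant plumbing looks something like this (the debug-only check follows the article’s approach; the initialization guard is the addition described above):

using System;
using System.ComponentModel;
using System.Diagnostics;

public abstract class BaseViewModel : INotifyPropertyChanged
{
    private bool initializing;

    public event PropertyChangedEventHandler PropertyChanged;

    // suppress change notifications while the ViewModel is being populated
    public void BeginInitialization() { initializing = true; }
    public void EndInitialization() { initializing = false; }

    protected void NotifyProperty(string propertyName)
    {
        VerifyPropertyName(propertyName);

        if (initializing)
            return;

        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    // debug-only: throw if the property doesn't exist, which catches
    // binding bugs caused by typos before they fail silently
    [Conditional("DEBUG")]
    private void VerifyPropertyName(string propertyName)
    {
        if (TypeDescriptor.GetProperties(this)[propertyName] == null)
        {
            throw new ArgumentException("Invalid property name: " + propertyName);
        }
    }
}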

For some reason, I prefer the name BaseViewModel over ViewModelBase. How about you?

DelegateCommand

When it came time to add Commands to my ViewModels, I considered bringing Prism into my application, primarily to get the DelegateCommand implementation. Perhaps I missed something, but I was only able to find the source code for download. Ultimately, I chose to take the DelegateCommand code file and include it as part of the Core library instead of compiling the Prism libraries. The decision not to compile the libraries came down to some tweaking and missing references for .NET 4.0, and the fact that the projects were not configured to compile with a strong name, which I would need for my strongly-named application.

The DelegateCommand provides an implementation of the ICommand interface that accepts delegates for the Execute and CanExecute methods, as well as a hook to raise the CanExecuteChanged event. At some point, I may choose to pull the implementation for the Start and Cancel commands out of the TaskApplicationViewModel and into separate classes.
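
For anyone who hasn’t run across it, the shape of DelegateCommand is roughly as follows -- a heavily simplified sketch of what the Prism source provides, not a drop-in copy:

using System;
using System.Windows.Input;

public class DelegateCommand<T> : ICommand
{
    private readonly Action<T> executeMethod;
    private readonly Func<T, bool> canExecuteMethod;

    public DelegateCommand(Action<T> executeMethod, Func<T, bool> canExecuteMethod)
    {
        this.executeMethod = executeMethod;
        this.canExecuteMethod = canExecuteMethod;
    }

    public event EventHandler CanExecuteChanged;

    public bool CanExecute(object parameter)
    {
        return canExecuteMethod == null || canExecuteMethod((T)parameter);
    }

    public void Execute(object parameter)
    {
        executeMethod((T)parameter);
    }

    // the hook mentioned above: tells WPF to re-query CanExecute
    public void RaiseCanExecuteChanged()
    {
        var handler = CanExecuteChanged;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}

Wiring one up in the ViewModel is then a one-liner, something like StartCommand = new DelegateCommand<ITaskSession>(StartSession, CanStartSession) (method names invented).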

More About the Tests

As promised, here is some more information about the tests and methodology. Rather than list the code for the tests, I thought it would be fun to list the names of the tests that were created during this stage of the development process. Because I’m using a context/specification test-style, my tests should read like specifications:

TaskApplicationSpecs:

  • when_a_task_completes
    • should_display_an_alarm
    • should_record_completed_task
  • when_a_task_is_cancelled
    • should_record_cancellation
  • when_a_task_is_started
    • should_create_a_new_session
    • ensure_new_tasks_cannot_be_started
    • ensure_running_task_can_be_cancelled
  • when_no_task_is_running
    • ensure_new_task_can_be_started
    • ensure_cancel_task_command_is_disabled

TaskSessionControllerSpecs:
  • when_cancelling_a_tasksession
    • ensure_session_is_stopped
  • when_creating_a_tasksession
    • should_create_a_task_with_a_valid_identifier
    • ensure_start_date_is_set
    • ensure_session_is_started
    • ensure_session_length_is_set
  • when_finishing_a_tasksession
    • ensure_session_is_stopped

TaskSessionViewModelSpecs:
  • given_a_default_session
    • should_be_disabled
    • should_not_have_start_date_set
    • should_not_have_end_date_set
    • should_have_valid_id
  • when_session_is_ended
    • ensure_endtime_is_recorded
  • when_session_is_started
    • should_be_active
    • should_have_initial_time_interval_set
    • should_have_less_time_remaining_than_original_session_length
    • should_notify_the_ui_to_update_the_counter_periodically
    • should_record_start_time_of_task
    • ensure_end_time_of_task_has_not_been_recorded
  • when_timer_completes
    • should_raise_finish_event
    • ensure_finish_event_is_only_raised_once
    • should_run_the_counter_down_to_zero
    • should_disable_task
    • ensure_endtime_has_not_been_recorded

Next Steps

The next logical step in the twelve days of code is creating the View and wiring it up to the ViewModel.


Twelve Days of Code - Solution Setup

As part of the Twelve Days of Code Challenge, I’m developing a Pomodoro style application and sharing the progress here on my blog.  This post tackles day one: setting up your project.

The initial stage of a project where you are figuring out layers and packaging is a critical part of the process and one that I’ve always found interesting.  From past experience, the small details at this stage can become massive technical debt later if the wrong approach is used, so it’s best to take your time and make sure you’ve crossed all the T’s and dotted the I’s.

Creating the Solution

For this project I’ve chosen to use Visual Studio 2010 Beta 2, and so far the experience has been great.  Visual Studio 2010 is going to set a new standard and bring new levels of developer productivity (assuming they solve some of the stability issues): it’s faster and much more responsive, eats less memory, and adds subtle UX refinements that improve developer flow.  To get a better sense of this feeling, I urge you to load up Visual Studio 2003 and look at the Start page – we’ve come a long way.

The New Project window has had a nice overhaul, and we can now specify the target framework in the dialog.  Here I’m creating a WPF Application named Pomodoro.Shell.  Note that I’m specifying to create a directory for the solution and that the Solution Name and Project Name are different.

[Image: New Project dialog]

Normally at this point I would consider renaming the output of the application from “Pomodoro.Shell.exe” to some other simpler name like “pomodoro.exe”.  This is an optional step which I won’t bother with for this application.

Adding Projects

When laying out the solution, the first challenge is determining how many Visual Studio Projects we’ll need, and there are many factors to consider including dependencies, security, versioning, deployment, reuse, etc.

There appears to be a school of thought that believes every component or module should be its own assembly, and I strongly disagree.  Assemblies should be thought of as deployment units – if the components version and deploy together, it’s very likely that they should be a single assembly.  As Visual Studio does not handle large numbers of projects well, it’s always better to start with larger assemblies and separate them later if needed.

For my pomodoro app, I’ve decided to structure the project into two primary pieces, “core” and “shell”, where “core” provides the model of the application and “shell” provides the user-interface specific plumbing.

Add Test Projects

Right from the start of the project, I’m gearing towards how it will be tested.  As such, I’ve created two test projects, one for each assembly.  This allows me to keep the logical division between assemblies.

[Image: Add New Project dialog with test projects]

As soon as the projects are created, the first thing I’ll do is adjust the namespaces of the test libraries to match their counterparts.  By extension, the tests are features of the same namespace, but they are packaged in a separate assembly because I do not want to deploy them with the application.  I’ve written about this before.

[Image: adjusting the default namespace for the test projects]
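
As a concrete (hypothetical) example, a spec fixture compiled into Pomodoro.Core.Tests.dll declares the namespace of the assembly under test (assuming NUnit as the test framework):

// lives in the Pomodoro.Core.Tests project, but shares the
// namespace of the assembly under test
namespace Pomodoro.Core
{
    using NUnit.Framework;

    [TestFixture]
    public class TaskSessionControllerSpecs
    {
        // specs go here
    }
}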

Configure common Assembly Properties

Once we’ve settled into a project structure, the next easy win is to configure the projects to share the same assembly details such as version number, manufacturer, copyright, etc.  This is easily accomplished by creating a single file to represent this data and then linking each project to this file.  In a later step, this file can be auto-generated as part of the build process.

using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: AssemblyCompany("Bryan Cook")]
[assembly: AssemblyCopyright("Copyright © Bryan Cook 2009")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

Tip: Link the AssemblyVersion.cs file to the root of each project, then drag it into the Properties folder.

Give the Assemblies a Strong-Name

If your code will ultimately end up on an end-user desktop, it is imperative to give the assembly a strong name.  We can take advantage of Visual Studio’s built-in features to create our strong-name key (snk), but we’ll also take a few extra steps to ensure that each project has the same key.

  1. Open the project properties.
  2. Click on the Signing tab.
  3. Check the “Sign the assembly” checkbox.
  4. Choose “<New…>”.
  5. Create a key with no password.
  6. Open Windows Explorer and copy the snk file to the root of the solution.
  7. Then for each project:
    1. Check the “Sign the assembly” checkbox.
    2. Choose “<Browse…>”.
    3. Navigate to the root of the solution and select the snk key.

Note that Visual Studio will copy the snk file to each project folder, though each project will have the same public key.

Designate Friend Assemblies

In order to aid testing, we can configure our Shell and Core assemblies to implicitly trust our test assemblies.  I’ve written about the benefits before, but the main advantage is that I don’t have to alter type visibility for testing purposes.  Since the assemblies have a strong name, the InternalsVisibleTo attribute requires the full public key.

[Image: InternalsVisibleTo attribute with the strong-name public key]

Since all the projects share the same key file, this public token will work for all the projects.  The following shows the InternalsVisibleTo attribute for the Pomodoro.Core project:

[assembly: InternalsVisibleTo("Pomodoro.Core.Tests, PublicKey=" +
"0024000004800000940000000602000000240000525341310004000001000" +
"1003be2b1a7e08d5e14167209fc318c9c16fa5d448fb48fe1f3e22a075787" +
"55b4b1cf4059185d2bd80cc5735142927fbbd3ab6eeebe6ac6af774d5fe65" +
"0a226b87ee9778cb2f6517382102894dc6d62d5a0aaa84e4403828112167a" +
"1012d5b905a37352290e4aa23f987ff2be3ccda3e27a7f7105cf5b05c0baf" +
"3ecbfd2371c0fa0")]

Setup External References

I like to put all the third-party assemblies referenced by the project into a “lib” folder at the root of the solution.  At the moment, I’m only referencing Moq for testing purposes.

A note on external references and source control: Team Foundation Server typically only pulls dependencies that are listed directly in the solution file.  While there are a few hacks for this (add each assembly as an existing item in a Solution Folder, or create a class library that contains the assemblies as content), I like to have all my dependencies in a separate folder with no direct association to the Visual Studio solution.  As a result, these references must be manually updated by performing a “Get Latest” from the TFS Source Control Explorer.  If you’ve got a solution for this – spill it, let’s hear your thoughts.

Setup Third-Party Tools

For all third-party tools that are used as part of the build, I like to include them in a “tools” or “etc” folder at the root of the solution.  This approach allows me to bundle all the necessary tools for other developers to allow faster ramp-up.  It adds a bit of overhead when checking things out, but certainly simplifies the build script.

Setup Build Script

There are a few extra steps I had to take to get my .NET 4.0 project to compile using NAnt.

  1. Download the nightly build of NAnt 0.86 Beta 1.  The nightly build solves the missing SdkInstallRoot build error.
  2. Paige Cook (no relation) has a comprehensive configuration change that needs to be applied to nant.exe.config.
  3. Modify Paige’s version numbers from .NET 4.0 Beta 1 to Beta 2 (replace all references to “v4.0.20506” with “v4.0.21006”).

Here are a few points of interest for the build file listed below:

  • I’ve defined a default target, “main”.  This allows me to simply execute “nant” in the root folder of the solution and it’ll take care of the rest.
  • The “main” target is deliberately empty because the real work is in the order of its dependencies.  Currently, I’m only specifying “build”, but normally I would specify “clean, build, test”.
<project default="main">

  <!-- VERSION NUMBER (increment before release) -->
  <property name="version" value="1.0.0.0" />
  
  <!-- SOLUTION SETTINGS -->
  <property name="framework.dir" value="${framework::get-framework-directory(framework::get-target-framework())}" />
  <property name="msbuild" value="${framework.dir}\msbuild.exe" />
  <property name="vs.sln" value="TwelveDays.sln" />
  <property name="vs.config" value="Debug" />

  <!-- FOLDERS AND TOOLS -->
  <!-- Add aliases for tools here -->
  
  
  <!-- main -->
  <target name="main" depends="build">
  </target>

  <!-- build solution -->
  <target name="build" depends="version">

    <!-- compile using msbuild -->
    <exec program="${msbuild}"
      commandline="${vs.sln} /m /t:Clean;Rebuild /p:Configuration=${vs.config}"
      workingdir="."
          />

  </target>
    
  <!-- generate version number -->
  <target name="version">
    <attrib file="AssemblyVersion.cs" readonly="false" if="${file::exists('AssemblyVersion.cs')}" />
    <asminfo output="AssemblyVersion.cs" language="CSharp">
      <imports>
        <import namespace="System" />
        <import namespace="System.Reflection" />
      </imports>
      <attributes>
        <attribute type="AssemblyVersionAttribute" value="${version}" />
        <attribute type="AssemblyFileVersionAttribute" value="${version}" />
      </attributes>
    </asminfo>
  </target>

</project>

Next Steps…

In the next post, we’ll look at the object model for our Pomodoro application.


Tuesday, December 15, 2009

Unity, Dependency Injection and Service Locators

For a while now, I’ve been chewing on some thoughts about service locators versus dependency injection, but struggled to find the right way to say it.  I sent an email to a colleague today that tried to describe some of the lessons learned from my last project.  It comes close to capturing the visceral feelings I have around this subject, and describes it well.  Here it is, unaltered:

When you pull dependencies in from the Container, you’re using a Service Locator feature of Unity; When you push dependencies in via the constructor, you’re using Dependency Injection.  There’s much debate over which pattern to use.  When using dependency injection, the biggest advantage is that the relationship between objects is well known.  This helps us understand the responsibilities of each object better, may have better performance (if we’re resolving dependencies a lot) and it makes it easier to write tests that isolate the behavior of the subject under test.

In a project where we’re resolving dependencies at runtime, the true nature of the dependencies between objects is obscured -- developers must read through the source to understand responsibilities.  Each time a change is made to resolve new dependencies from the container, the tests will fail.  Since locating and understanding the responsibility of the dependencies in the source is a complex task, the easy solution is to simply register the missing dependencies in the container and move on.  In most cases, the object that is registered is the concrete implementation which is also susceptible to the same dependency problem.  Ultimately, this compounds the problem until the tests themselves become obscured, vague, incomplete or overly complex.

In contrast, if the dependencies were registered in the constructor, all dependencies are known at compile time.  Any changes made to the subject won’t compile until the tests are updated to reflect the new functionality.  Note that because the object doesn’t use the container, the tests become just like any other simple POCO test.

This is not a silver bullet solution however.  There are times when resolving from the container makes sense – loaders and savers, for example – but even then, the container can be hidden inside a POCO factory. In contrast to service location, the impact of constructor injection is that it may require more work upfront to realize the dependencies, though this can be mitigated in some cases using a TDD methodology where the tests satisfy the responsibilities of the subject under test as it is written.
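
To illustrate the difference the email describes, here’s a contrived sketch (IExporter and both classes are invented for the example):

using Microsoft.Practices.Unity;

public interface IExporter
{
    void Export();
}

// Service Locator: the dependency is pulled from the container at
// runtime, so the class's real requirements are invisible to callers
public class LocatorStyleReport
{
    private readonly IUnityContainer container;

    public LocatorStyleReport(IUnityContainer container)
    {
        this.container = container;
    }

    public void Publish()
    {
        var exporter = container.Resolve<IExporter>();
        exporter.Export();
    }
}

// Dependency Injection: the dependency is pushed in through the
// constructor, so it's explicit at compile time and trivially
// replaced with a fake in a test
public class InjectedStyleReport
{
    private readonly IExporter exporter;

    public InjectedStyleReport(IExporter exporter)
    {
        this.exporter = exporter;
    }

    public void Publish()
    {
        exporter.Export();
    }
}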

Comments welcome.


Monday, December 14, 2009

Twelve Days of Code Challenge – 2009

Two years ago I tried an experiment in blogging for the festive season called the "Twelve Days of Code". The concept was to challenge myself to experiment in new technologies for at least an hour a day and blog about it. While I learned a lot about a focused topic (I chose Visual Studio Automation), the experiment didn't live up to my expectations.

This year, I want to do something different. I want to create a social experiment and challenge you!

The challenge

This year, the challenge is to write a simple Pomodoro style task tracking application. The application can be as simple or as far-fetched as you want it to be, but at a minimum it needs to:

  • Start, Stop and Cancel a Pomodoro
  • Notify the user when the pomodoro is complete

Of course, you don’t have to limit yourself to this functionality – sky’s the limit. If you want to collect usage statistics and provide reporting capabilities, persist a custom task list using SQLite, or work in the other characteristics of the pomodoro “flow” such as the 3-5 minute break between tasks – that’s up to you. Surprise me – and yourself!

In the spirit of Twelve Days of Code, pick a technology that’s new to you and do as much as you can. I'm planning on doing mine in .NET 4.0.

The rules

  • You can tackle any task in any order
  • Spend as much time as you want researching, but coding must be limited to one hour
  • Blog about it and leave a comment here
  • Have fun

What do I win?

There are no prizes, unfortunately. But if you are ever in downtown Toronto, there will be beers involved.

Here’s an outline of my twelve days:

  1. Solution Setup (build script, packaging, strong-names, etc)
  2. Object Model (application structure, commands, controllers)
  3. Presentation Model (View Models)
  4. Presentation templates (XAML, Converters, Data Templates)
  5. Composite Application setup (Prism)
  6. Dependency Injection Setup (Unity)
  7. Persistence Layer (record some stats using SQLite)
  8. Persistence Layer (Entity Framework 4.0)
  9. Animations (.NET 4.0 Visual State Manager)
  10. Windows 7 features (task bar icon overlays, action buttons, jump lists?)
  11. Installer (Wix)
  12. Functional Automation Testing

Good luck, Happy Coding and Happy Holidays.


Friday, December 11, 2009

Visual Studio Keyboard Katas - II

Hopefully, if you read the last kata and have been trying it out, you may have found yourself needing to use your mouse less for common activities such as opening files and reviewing build status.  This kata builds upon those previous techniques, adding seven more handy shortcuts and a pattern to practice them.

Granted, the last kata was a bit of a white belt pattern: obvious and almost comical, but essential.  In Tae Kwon Do, the yellow belt patterns introduce forward and backward motion, so it seems logical that the next kata introduces rapidly navigating forward and backward through code.

Today’s Shortcut Lesson

Our mnemonic for this set of shortcuts is centered around two keys in the upper-right area of the keyboard: F12 and Minus (-).  The basic combinations for these keys can be modified by using the SHIFT key.

Also note that I’ve introduced another Tool window (CTRL + Window).  The Find Symbol Results window is also displayed when you do a Quick Symbol search, which may help explain the “Q”.

F12 | Go to Definition
SHIFT + F12 | Find All References
CTRL + MINUS | Navigate Backward
SHIFT + CTRL + MINUS | Navigate Forward
CTRL + W, Q | Find Symbol Results Window
CTRL + LEFT ARROW | Move to previous word
CTRL + RIGHT ARROW | Move to next word

And as an extra keener bonus, an 8th shortcut:

CTRL + ALT + DOWN ARROW | Show MDI File List

Keyboard Kata

Practice this kata any time you need to identify how a class is used.

  1. Open the Solution Explorer. (CTRL+W, S)
  2. Navigate to a file. (Arrow Keys / Enter)
  3. Select a property or variable (Arrow keys)
  4. Navigate to the Definition for this item (F12)
  5. Find all References of this Type (CTRL+LEFT to move the cursor from the definition to the type, then SHIFT+F12 for references)
  6. Open one of the references (Arrow Keys / Enter)
  7. Open the next reference (CTRL+W,Q / Arrow Keys / Enter)
  8. Open the nth reference (CTRL+W,Q / Arrow Keys / Enter)
  9. Navigate to the original starting point (CTRL + MINUS)
  10. Navigate to the 2nd reference (SHIFT + CTRL + MINUS)
  11. Navigate to any window (CTRL + ALT + DOWN / Arrow Keys / Enter)


Monday, December 07, 2009

Visual Studio Keyboard Katas

I’ve never spent much time learning keyboard shortcuts for Visual Studio – they’ve always seemed hard to remember with multiple key combinations, some commands have multiple shortcut bindings, and some keystrokes simply aren’t intuitive.  Recently, however, I’ve met a few IDE ninjas who have opened my eyes to the productivity gains to be had.

The problem with learning keyboard shortcuts is that they can be a negative self-reinforcing loop.  If the secret to learning keyboard shortcuts is using them during your day-to-day activities, the act of stopping work to look up an awkward keystroke interrupts your flow, lowers your productivity, and ultimately results in lost time.  Lost time and distractions put pressure on us to stay focused and complete our work, which further discourages us from stopping to learn new techniques, including those that would ultimately speed us up.  Oh, the irony.

To break out that loop, we need to:

  • learn a few shortcuts by associating them with some mnemonics; and then
  • learn a few exercises that we can inject into daily coding flow

As an homage to the Code Katas cropping up on the internets, this is my first attempt at a Keyboard Kata. 

The concept of the “kata” is taken from martial arts, where a series of movements are combined into a pattern.  Patterns are ancient, handed down from master to student over generations, and are a big part of martial art exams.  They often represent a visualization of defending yourself from multiple attackers, with a focus on technique, form, and strength.  The point is that you repeat them over and over until you master them and they become instinctive muscle memory.  Having done many years of Tae Kwon Do, many years ago, I still remember most of my patterns to this date.  Repetition is a powerful thing.

A note about my Visual Studio environment:  I’m using the default Visual Studio C# keyboard scheme in Visual Studio 2008.  I’ve unpinned all of my commonly docked windows so that they auto-hide when not in use.  Unpinning your tool windows not only gives you more screen real estate, but it encourages you to use keyboard sequences to open them.

Today’s Shortcut Lesson

In order to help your retention for each lesson, I’m going to limit what you need to remember to seven simple shortcuts.  Read through the shortcuts, try out the kata, and include it in your daily routine -- memorize them and let them become muscle memory.  I hope to post a bunch of Katas over the next few weeks.

Tip: You’ll get even better retention if you say the shortcuts out loud as you do them.  You’ll feel (and sound) like a complete dork, but it works.

Tool Windows (CTRL+W, …)

Visual Studio’s keyboard scheme does have some reason behind its madness: related functionality is grouped under similar shortcuts.  The majority of the tool windows are grouped under CTRL+W.  If it helps, think CTRL+WINDOW.

Here are a few of the shortcuts for Tool Windows:

CTRL+W, S | Solution Explorer
CTRL+W, P | Properties
CTRL+W, O | Output Window
CTRL+W, E | Errors
CTRL+W, C | Class View

Note that the ESC key will put focus in the currently opened document and auto-hide the current tool window.

Build Shortcuts

F6 or CTRL+SHIFT+B | Build Solution
SHIFT+F6 | Build Project

Opening a Solution Kata

So here is the kata.  Try this pattern every morning after you open a solution file.

  1. Open the Solution Explorer.
  2. Navigate to a file
  3. View its properties
  4. Build the current Project
  5. Build the Solution
  6. Review the Output
  7. Check for build Errors

Extra credit:

  1. Open a file by navigating to it in the solution explorer
  2. Open a file to a specific method in the Class View
  3. View properties of a currently opened file.


Wednesday, December 02, 2009

NUnit for Visual Studio Addin

I recently stumbled upon this great addin for Visual Studio that uses the Visual Studio Test Adapter pattern to run NUnit tests within Visual Studio as MS Tests.  They appear in the Test List Editor and execute just like MS Tests, including those handy Run and Debug keyboard shortcuts I described in my last post.

Since they operate as MS Tests, the project requires some additional meta-data in the csproj file in order for Visual Studio to recognize the project as a test library.  My last post has the details.

Curious to see how far the addin could act as a stand-in for NUnit, I fired up Visual Studio, a few beers, and the NUnit attribute documentation to put it through its paces.  I’ve compiled my findings in the table below.

In all fairness, there are a lot of attributes in NUnit, some of which you probably didn’t know existed.

NUnit Attribute | Supported | Comments
Category | No | Sadly, the addin does not register a new column definition for Category.  Though this feature is not tied to any functional behavior, it would be greatly welcomed to improve upon Visual Studio’s Test Lists.
Combinatorial | Yes |
Culture / SetCulture | No | Tests that would normally be excluded by NUnit fail.
Datapoint / Theory | No | Test names do not match the NUnit runtime.  All Datapoints produce the result Not Runnable in the Test Results.
Description | No | Value does not appear in the Test List Editor.
Explicit | No | Explicit tests are executed and appear as Enabled in the Test List Editor.
ExpectedException | Yes |
Ignore | Partial | Ignored tests are excluded from the Test List Editor, so they are ignored, but they do not appear as Enabled = False.
MaxTime / Timeout | Partial | Functions properly, though the supplied setting does not appear in the Timeout column of the Test List Editor.
Platform | No | Tests are executed regardless of the specified platform.
Property | - | Custom properties do not appear in the output of the TRX file, which is where I’m assuming they would appear.  Not entirely sure the schema would support custom properties, however.
Random | No | Tests are generated, though the names contain only numbers.  Executing these tests produces the result Not Runnable in the Test Results.
Range | No | Tests are generated, though the names contain only numbers.  Executing these tests produces the result Not Runnable in the Test Results.
Repeat | Yes |
RequiredAddin | - | Not tested.
RequiresMTA / RequiresSTA / RequiresThread | Yes |
Sequential | Yes |
Setup / Teardown | Yes |
SetupFixture | No | Setup methods for a given namespace do not execute.
Suite | - | Not tested (requires a command-line switch).
TestFixtureSetup / TestFixtureTeardown | Yes |
Test | Yes | Of course!
TestCase | Yes | Tested different argument types (int, string), TestName, ExpectedException.
TestCaseSource | Yes |

There’s quite a few No’s in this list, but the major players (Test, Setup/Teardown, TestFixtureSetup/Teardown) are functional.  I’m actually pleased that NUnit 2.5.2 features such as parameterized tests (TestCase, TestCaseSource) and Combinatorial / Sequential / Values are in place, as well as former addin features that were bundled into the framework (MaxTime / Repeat).

With respect to the malformed test names and non-runnable tests for the Theory / Range / Random attributes, hopefully this is a small issue that can be resolved.  The cosmetic issues with Ignore / Description / Category don’t pose any major concerns, though fixing them would be a large win in terms of full compatibility with the MS Test user interface and features.

I’ve never used the SetupFixture nor the culture attributes, so I’m not losing much sleep over these.

However, the main issue for me is that Explicit tests are always executed.  I’ve worked on many projects where a handful of tests either brought down the build server or couldn’t be run with other tests.  Rather than solve the problem, developers tagged the tests as Explicit – they work, but you’d better have a good reason to be running them.
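
For reference, an explicit test is declared like this (a hypothetical example); NUnit’s own runners skip it unless it is explicitly selected, which is exactly the safety net the addin currently bypasses:

[Test, Explicit("Brings down the build server -- run manually only")]
public void rebuild_search_index()
{
    // ...
}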

Hats off to the NUnitForVS team.


Tuesday, December 01, 2009

Manually creating a MS Test Project

Although I’ve always been a huge proponent of NUnit, I’m finding I’m using MS Test more frequently for the following reasons:

  • My organization is a large Microsoft partner, so there’s often some preference for Microsoft tools in our projects.
  • Support for open source tools is a concern for some organizations I work with.  Although tests are not part of the production deliverables, some organizations are very risk averse and reasonably do not want to tie themselves to products without support or a guarantee of backward compatibility.
  • Severe Resharper withdrawal.  After spending several years with Resharper tools, I’ve spent the last year with a barebones Visual Studio 2008 installation.  Without the tight integration between Visual Studio and NUnit, attaching and debugging a test process is no longer effortless. If the JetBrains guys are listening, hook me up.

Reluctantly, I’ve started to use Visual Studio Test, warts and all.  Out of the box, MS Test has two very handy keyboard shortcuts where you can either Run (CTRL+R, T) or Debug (CTRL+R, CTRL+T) the current test, fixture or solution, depending on where your cursor is currently focused.

Oddly enough, I’ve found myself in a position where I’ve manually created a Test project by adding the appropriate references, but none of the Visual Studio Test features work, including these handy shortcuts.  Any attempt to run tests using these shortcuts produces an error:

No tests were run because no tests were loaded or the selected tests are disabled.

This error is produced because the Test Adapter looks for a few meta attributes in the project file that are added when you use the New Test Project template.

To manually create a MS Test project, in Visual Studio:

  1. Create a new Class Library project
  2. Add a reference to: Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll
  3. Right-click your project and choose “Unload Project”.

    [Image: Unload Project context menu]
  4. Right-click on your project and choose “Edit <ProjectName>”

    [Image: Edit Project context menu]
  5. Add the following ProjectTypeGuids element to the first PropertyGroup element:

      <PropertyGroup>
        <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
        <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
        <ProductVersion>9.0.30729</ProductVersion>
        <SchemaVersion>2.0</SchemaVersion>
        <ProjectGuid>{4D38A077-23EE-4E9F-876A-43C33433FFEB}</ProjectGuid>
        <OutputType>Library</OutputType>
        <AppDesignerFolder>Properties</AppDesignerFolder>
        <RootNamespace>Example.ManualTestProject</RootNamespace>
        <AssemblyName>Example.ManualTestProject</AssemblyName>
        <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
        <FileAlignment>512</FileAlignment>
        <ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
      </PropertyGroup>
  6. Right-click on the Project and choose "Reload <Project Name>"

Once reloaded, the handy shortcuts work as expected.

Note for the curious:

  • Guid {FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}: identifies a C# project
  • Guid {3AC096D0-A1C2-E12C-1390-A8335801FDAB}: identifies the “Test Project Flavor”

From my limited understanding of how the Visual Studio Test package works, it scans the solution looking for projects that can contain tests.  Without the magic ProjectTypeGuid, the class library is excluded from this process.

Happy coding.


Monday, November 23, 2009

User Extensions with the Selenium Toolkit for .NET

As I mentioned in my last post, new in version 0.81, the Selenium Toolkit for .NET now provides a simple mechanism to add user-defined extensions to Selenium.

Some background

A user-extension for Selenium is a JavaScript file that becomes embedded into Selenium’s Test Runner.  Typically, the script extends Selenium’s core JavaScript to add new commands, but it can also be used to extend existing functionality or add custom locator strategies.  Both Selenium IDE and RC allow extensions to be added.

For the purposes of this discussion, here’s a crude example of a Selenium extension:

Selenium.prototype.getMyValue = function(locator, text) {
    return locator;
};

The toolkit takes a straightforward approach to including extensions: it combines the files listed in the configuration file (in order of appearance) into a single file and then configures Selenium RC to use it.

Incidentally, if you ever need to verify that your extensions were loaded, you can simply view the JavaScript file at this URL: http://localhost:4444/selenium-server/core/scripts/user-extensions.js

The ISelenium interface of the Selenium API does not directly expose a mechanism to execute custom commands.  Instead, we need to send our custom command through a command processor (ICommandProcessor), which isn’t exposed by default, so a custom implementation of the ISelenium interface is required.  The Selenium documentation provides a pretty good overview of how to do this, and the remainder of this post demonstrates how to integrate the approach with the toolkit.

Create a customized ISelenium

One of the new features added to the toolkit in this release is the ability to supply your own mechanism for instantiating the ISelenium instance. The factory is a basic class with a Create method.  Simply implement the ISeleniumFactoryProvider interface and then wire it up in the configuration settings (see below).  If no setting is provided in the config, a default factory is used.

The example below shows a custom ISeleniumFactoryProvider that creates a customized ISelenium instance exposing the ICommandProcessor:

namespace Custom
{
    using Selenium;
    using SeleniumToolkit.Core;

    public class SeleniumFactory : ISeleniumFactoryProvider
    {
        public ISelenium Create(string host,
                                int port,
                                string browserProfile,
                                string baseUrl)
         {
             ICommandProcessor processor = new HttpCommandProcessor(host, port, browserProfile, baseUrl);
             return new CustomSelenium(processor);       
         }
    }

    public class CustomSelenium : DefaultSelenium
    {
        public CustomSelenium(ICommandProcessor processor) 
            : base(processor)
        {
            CommandProcessor = processor;
        }

        public ICommandProcessor CommandProcessor
        {
            get;
            protected set;
        }
    }
}

Tweak your Settings

After you’ve compiled your custom factory, you’ll need to reference your JavaScript file in the userExtensions element and specify your custom factory using the factoryType attribute of the Selenium node.

<Selenium
    factoryType="Custom.SeleniumFactory, CustomProject"
    >
   <userExtensions>
       <add name="example"
            path="myextension.js" />
   </userExtensions>
</Selenium>

Putting it all together

With the configuration settings in place, all we need to do now is cast the Browser.Current instance to our custom type.

namespace Custom.Test
{
    using NUnit.Framework;
    using Selenium;
    using SeleniumToolkit;

    [WebFixture]
    public class Example
    {
        [WebTest]
        public void ShowUsage()
        {
            CustomSelenium browser = Browser.Current as CustomSelenium;

            string[] inputArgs = { "Hello World" };
            string result = browser.CommandProcessor.DoCommand("getMyValue", inputArgs);
            Assert.AreEqual("Hello World", result);
        }
    }
}

Happy coding.


Thursday, November 19, 2009

Selenium Toolkit for .NET 0.81 Now Available

I’m happy to announce that the next release of the Selenium Toolkit for .NET is now available for download.  This release adds minor fixes to the runtime and enhanced configuration support.

New for this release

This release includes several new enhancements:

  • NUnit 2.5.2 Support
  • Custom Browser Profiles and Aliases
  • User Extensions support
  • Proxy Server support

NUnit 2.5.2 Support

This release upgrades the NUnit addin to the latest version of NUnit (2.5.2).  NUnit 2.5.2 includes several performance and user interface enhancements which improve the development experience, including data-driven tests and a source-code browser for exceptions.

Due to a dependency constraint in the NUnit framework, addins are specific to the runtime they’re compiled against, so you must have NUnit 2.5.2 installed in order to use the addin.  If you have NUnit 2.4.8, you’ll have to upgrade.  And if you haven’t been able to use the toolkit because you weren’t on NUnit 2.4.8, please download it and send in some feedback.

Custom Browser Profiles and Aliases

One of the key goals of the toolkit is to keep duplication and environment-specific settings out of your tests so that moving between environments has minimal impact.  To that end, the toolkit now supports browser aliases, where custom profiles can be defined as a lookup table in the configuration settings.

Before:

[WebFixture]
public class ExampleTest
{
    [WebTest(DefaultBrowser=@"*custom ""C:\PathToFireFox\FireFox.exe"" -no-remote -profile ""C:\PathToProfile""")]
    public void OpenCustomProfile()
    {
        // ....
    }
}

After:

The dependency on the physical folder path can now be externalized to the configuration file, which can be modified to suit each environment without having to alter the tests.

[WebFixture]
public class ExampleWithBrowserAlias
{
    [WebTest(DefaultBrowser="ff-profile-1")]
    public void OpenCustomProfile()
    {
        // ....
    }
}

And the configuration settings....

<configuration>
    <!-- snip -->

    <Selenium
        BrowserUrl="http://www.bryancook.net"
        >
        <browsers>
            <add key="ff-profile-1"
                 value="*custom &quot;C:\FireFoxPath&quot; -no-remote -profile &quot;PathToProfile&quot;"
                />
        </browsers>
    </Selenium>

</configuration>

User Extensions support

Selenium provides an extensibility model that allows you to extend its functionality through custom JavaScript files.  This release provides two key enhancements to enable this functionality:

  • Configuration element that allows you to specify JavaScript files and directories to be concatenated into the user-extensions.js file; and
  • Factory model that lets you supply your own implementation of the ISelenium interface.  A custom implementation allows you to execute your own custom commands, and will likely play an integral role in the WebDriver-backed Selenium implementation planned for Selenium 2.0.

Expect a blog post soon that demonstrates this functionality, but in the meantime you can find details on the configuration settings here.

Proxy Server Support

While not as exciting as custom user extensions, this release includes additional configuration settings for users running Selenium-RC against a web site that is only accessible through a proxy server.  See the configuration settings for more details.

Download now

Enough already!  Go download the latest release and try it out.  As always, feedback is welcome.

Happy coding.


Wednesday, November 11, 2009

Add Syntax Highlighter to Live Writer Templates

I’ve been slowly tweaking my blog template over the past year, and while I’ve got plans to switch to the 960.gs format, my most recent change is how I highlight my code snippets: like everyone else on the planet, I’m now using Alex Gorbatchev’s SyntaxHighlighter in combination with Anthony Bouch’s PreCode.

Up to this point, I’ve been using CSharpFormat with Omar Shahine’s Insert Code plugin.  That solution has worked well for me, but the features of SyntaxHighlighter (line numbers, collapsible blocks, etc.) and the quality of PreCode’s editing experience have made me switch.

The key difference between the two solutions is that CSharpFormat adorns your code with HTML blocks that are immediately visible in Live Writer (and my RSS feed), while SyntaxHighlighter highlights your code at runtime using JavaScript.

While I can live without the RSS feed highlighting, I really miss seeing my code highlighted in Live Writer.  After updating my Blogger template, I noticed that Live Writer ignores your JavaScript when it refreshes your editing template.

Fortunately, Live Writer’s editing template is stored locally as HTML, here:

C:\Users\bcook\AppData\Roaming\Windows Live Writer\blogtemplates\<guid>\index.html

Simply open the file in your editor, and paste in your JavaScript:

<!-- syntax highlighter --> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shCore.js' type='text/javascript'></script> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shBrushCSharp.js' type='text/javascript'></script> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shBrushCss.js' type='text/javascript'></script> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shBrushJScript.js' type='text/javascript'></script> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shBrushXml.js' type='text/javascript'></script> 
<script type='text/javascript'> 
  SyntaxHighlighter.config.bloggerMode = true;
  SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/2.0.320/scripts/clipboard.swf';
  SyntaxHighlighter.all();
</script>

Tuesday, November 10, 2009

Implementing a Strategy Pattern using the Unity Framework

My current project is using Unity and has had some interesting challenges to work through.  One of the design challenges involved several components that were nearly identical in appearance except for a few minor functional details.  We decided to implement a strategy pattern, where the variances became different strategies applied to a common context.

For the sake of argument, let’s say we’re talking about a financial calculator that supports different types of financial calculations (future value, present value, payment, etc.).

public class Calculator : ICalculator
{
    public Calculator(IFormula formula)
    {
        _formula = formula;
    }

    public CalcResult Calculate(CalcArgs input)
    {
        return _formula.Process(input);
    }

    private readonly IFormula _formula;
}

public interface IFormula
{
    CalcResult Process(CalcArgs input);
}

public class FutureValue : IFormula
{
    public CalcResult Process(CalcArgs input)
    {
        // ...
    }
}

public class PresentValue: IFormula
{
    public CalcResult Process(CalcArgs input)
    {
        // ...
    }
}

In my last post, I showed how Unity needs some additional configuration in order to resolve ambiguous types – Unity won’t be able to resolve our Calculator unless we specify which IFormula we want to use.  This scenario is a bit different because we want to be able to use both the FutureValue and PresentValue formulas at runtime.

We have a couple of options to choose from, each with their own pitfalls:

  • Container overloading
  • Named Instances

Container Overloading

At a basic level, we’re limited to registering a type only once per container.  Fortunately, Unity supports the ability to spawn a child container that can be used to replace registered types with alternate registrations.  By swapping out our types, we can create alternate runtime configurations scoped to specific areas of our application.

[TestMethod]
public void CanOverloadContainer()
{
    // create the container and register the
    //    PresentValue formula
    var container = new UnityContainer();
    container.RegisterType<IFormula, PresentValue>();

    // verify we can resolve PresentValue
    var formula1 = container.Resolve<IFormula>();
    Assert.IsInstanceOfType(formula1, typeof(PresentValue));

    // create a child container, and replace
    //    the formula with FutureValue formula
    var childContainer = container.CreateChildContainer();
    childContainer.RegisterType<IFormula, FutureValue>();

    // verify that we resolve the new formula
    var formula2 = childContainer.Resolve<IFormula>();
    Assert.IsInstanceOfType(formula2, typeof(FutureValue));
}

This approach provides a clean separation between the container and the various combinations of our strategy pattern.  If you can compose your application from controllers that extend and manipulate child containers, this is your best bet, as each container provides a small, targeted service to the application as a whole.

However, if the variations must live in close contact with one another and you can’t separate them into different containers, you’ll need a different strategy.

Named Instances

Unity provides the ability to register a type more than once, with the caveat that each registration is given a unique name within the container.  This feels more like a hack than a best practice, but it provides a simple solution for registering the various combinations of ICalculator types side-by-side.  Since our variations are based on the types passed in through the constructor, we can provide Unity the construction information it needs using InjectionConstructor and ResolvedParameter objects.  To resolve our objects, we must then specify that unique name.

[TestMethod]
public void CanConstructUsingConstructorInfo()
{
    var container = new UnityContainer();

    container.RegisterType<ICalculator, Calculator>("FutureValue",
                            new InjectionConstructor(new ResolvedParameter<FutureValue>())
                            );

    container.RegisterType<ICalculator, Calculator>("PresentValue",
                            new InjectionConstructor(new ResolvedParameter<PresentValue>())
                            );

    var calc1 = container.Resolve<ICalculator>("FutureValue");
    var calc2 = container.Resolve<ICalculator>("PresentValue");
}

At this point, I need to step aside and point out a potential problem with this approach.  We’ve created a mechanism that allows our type to be resolved directly from the container using its unique name, and there is a temptation to pass the container into our code and resolve these objects by name.  This is a slippery slope that will quickly tie your application code and tests directly to Unity.  To make the trap concrete, here’s roughly what it looks like (ReportGenerator is a hypothetical consumer, not part of the calculator example):
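
public class ReportGenerator
{
    // taking a dependency on the container itself is service-locator
    // style code that couples this class (and its tests) to Unity
    public ReportGenerator(IUnityContainer container)
    {
        _calculator = container.Resolve<ICalculator>("FutureValue");
    }

    private readonly ICalculator _calculator;
}

I will blog more about this in an upcoming post, but here are some better alternatives: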

Resolved Parameters

Rather than manually resolving the named instance from the container, configure the container to use your named instance:

public class Example
{
    public Example(ICalculator calculator)
    {
        Calculator = calculator;
    }

    public ICalculator Calculator
    {
        get;
        set;
    }
}

[TestClass]
public class ExampleTest
{
    [TestMethod]
    public void ConfigureExampleForNamedInstance()
    {
        var container = new UnityContainer();
        container.RegisterType<ICalculator, Calculator>("FutureValue",
                                    new InjectionConstructor(
                                            new ResolvedParameter<FutureValue>())
                                    );

        container.RegisterType<Example>(
                                    new InjectionConstructor(
                                            new ResolvedParameter<ICalculator>("FutureValue"))
                                    );

        Example ex = container.Resolve<Example>();
    }
}

Dependency Attributes

Unity supports special attributes that can be applied to your code to provide configuration guidance.  This can make instrumenting your code much easier, but it has the downside of bringing the dependency injection framework into your code.  This isn’t necessarily the end of the world, but it can limit us if we want to reuse this class with a different dependency or overload it in another container.

public class Example
{
    public Example(
                [Dependency("FutureValue")]
                ICalculator calculator
                  )
    {
        Calculator = calculator;
    }

    public ICalculator Calculator
    {
        get;
        set;
    }
}

Alternatively, the DependencyAttribute can also be applied to properties for setter injection.  Personally, I find this reads better than using attributes in the constructor; I’ll save my points about setter injection for another post.

public class Example
{
    [Dependency("FutureValue")]
    public ICalculator Calculator
    {
        get; set;
    }
}

Conclusion

Although inversion of control promotes the use of interfaces to decouple our implementations, we often use abstract classes and composition to construct our objects.  In the case of composition, we need to provide Unity with the appropriate information to construct our objects.  Wherever possible, we should defer construction logic to configuration settings rather than instrumenting the code with container information.


Monday, November 09, 2009

Simple Dependency Injection using the Unity Framework

My current project is using Unity as an Inversion of Control / Dependency Injection framework.  We’ve had a few challenges and lessons learned while using it.  This is my first post on the topic, and it provides a quick look at the basics.

Unity is slightly different from most dependency injection frameworks I’ve seen.  Frameworks such as Spring.NET (when I last used it) require you to explicitly define the dependencies for your objects in configuration files.  Unity supports configuration files too, but it can also construct any object at runtime by examining its constructor.

Let’s take a simple example where I have a product catalog service that provides searching for products by category.  The service is backed by a crude ProductRepository object.

public class ProductCatalog
{
    public ProductCatalog(ProductRepository repository)
    {
        _repository = repository;
    }

    public List<Product> GetProductsByCategory(string category)
    {
        return _repository.GetAll()
                          .Where(p => p.Category == category)
                          .ToList();
    }

    private ProductRepository _repository;
}

public class ProductRepository
{
    public ProductRepository()
    {
        _products = new List<Product>()
            {
            new Product() { Id = "1", Name = "Snuggie", Category = "Seen on TV" },
            new Product() { Id = "2", Name = "Slap Chop", Category = "Seen on TV" }
            };
    }

    public virtual List<Product> GetAll()
    {
        return _products;
    }

    private List<Product> _products;
}

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Category { get; set; }
}

With no configuration whatsoever, I can query Unity to resolve my ProductCatalog in a single line of code.  Unity will inspect the constructor of the ProductCatalog and discover that it is dependent on the ProductRepository.  Since the ProductRepository does not require any additional dependencies in its constructor, Unity will construct the repository and assign it to the catalog for me.

[Test]
public void CanResolveWithoutConfiguration()
{
    var container = new Microsoft.Practices.Unity.UnityContainer();

    var catalog = container.Resolve<ProductCatalog>();

    var products = catalog.GetProductsByCategory("Seen on TV");

    Assert.AreEqual(2, products.Count);
}

Crudeness of the example aside, a production system built this way would be extremely limited because the product catalog is tightly coupled to our repository implementation.  Although we could subclass the product repository, a better solution is to decouple the two classes by introducing an interface for the repository.

public interface IProductRepository
{
    List<Product> GetAll();
}

public class ProductRepository : IProductRepository
{
    // ...
}

public class ProductCatalog
{
    public ProductCatalog(IProductRepository repository)
    {
        // ...
    }

    // ...
}

This is a much better design, as our components are now loosely coupled, but Unity no longer has enough information to automatically resolve the ProductCatalog’s dependencies.  To fix this, we need to provide that information to the Unity container, either through configuration files (a rough sketch follows the test below) or by manipulating the container directly:

[Test]
public void CanResolveWithRegisteredType()
{
    var container = new Microsoft.Practices.Unity.UnityContainer();
    
    container.RegisterType<IProductRepository, ProductRepository>();

    var catalog = container.Resolve<ProductCatalog>();

    var products = catalog.GetProductsByCategory("Seen on TV");

    Assert.AreEqual(2, products.Count);
}
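
For completeness, here’s a rough sketch of the configuration-file route, assuming the Unity 1.2 configuration schema; the MyApp namespace and assembly names are placeholders:

<configuration>
    <configSections>
        <section name="unity"
                 type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
    </configSections>

    <unity>
        <containers>
            <container>
                <types>
                    <!-- map the interface to its concrete implementation -->
                    <type type="MyApp.IProductRepository, MyApp"
                          mapTo="MyApp.ProductRepository, MyApp" />
                </types>
            </container>
        </containers>
    </unity>
</configuration>

The container can then be configured from that section at startup:

// requires Microsoft.Practices.Unity.Configuration and System.Configuration
var container = new UnityContainer();
var section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
section.Containers.Default.Configure(container);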

Note that while I’m using the Unity container in my tests, it's completely unnecessary. I can simply mock out my dependencies (IProductRepository), pass them in, and test normally.
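
For example, a container-free test might look like this; StubProductRepository is a hypothetical hand-rolled test double, not part of the code above:

public class StubProductRepository : IProductRepository
{
    public List<Product> GetAll()
    {
        return new List<Product>()
            {
            new Product() { Id = "1", Name = "Snuggie", Category = "Seen on TV" }
            };
    }
}

[Test]
public void CanTestWithoutContainer()
{
    // no container involved: the dependency is passed in directly
    var catalog = new ProductCatalog(new StubProductRepository());

    var products = catalog.GetProductsByCategory("Seen on TV");

    Assert.AreEqual(1, products.Count);
}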

Conclusion

This demonstrates how Unity is able to resolve simple types without extensive configuration.  By designing our objects to have their dependencies passed in, they possess no knowledge of the dependency injection framework, making them lean, portable, and easy to test.
