Thursday, December 24, 2009

Twelve Days of Code - Solution Setup

As part of the Twelve Days of Code Challenge, I’m developing a Pomodoro-style application and sharing the progress here on my blog.  This post tackles day one: setting up your project.

The initial stage of a project, where you are figuring out layers and packaging, is a critical part of the process and one that I’ve always found interesting.  From past experience, small details at this stage can become massive technical debt later if you choose the wrong approach, so it’s best to take your time and make sure you’ve dotted the i’s and crossed the t’s.

Creating the Solution

For this project I’ve chosen to use Visual Studio 2010 Beta 2, and so far the experience has been great.  Visual Studio 2010 is going to set a new standard and bring new levels of developer productivity (assuming they solve some of the stability issues): it’s faster and much more responsive, eats less memory, and adds subtle UX refinements that improve developer flow.  To put it in perspective, load up Visual Studio 2003 and look at the Start page – we’ve come a long way.

The New Project window has had a nice overhaul, and we can now specify the target framework in the dialog.  Here I’m creating a WPF Application named Pomodoro.Shell.  Note that I’m specifying to create a directory for the solution and that the Solution Name and Project Name are different.

[Screenshot: the New Project dialog]

Normally at this point I would consider renaming the output of the application from “Pomodoro.Shell.exe” to a simpler name like “pomodoro.exe”.  This is an optional step which I won’t bother with for this application.
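For the curious, that rename is just the “Assembly name” field on the project’s Application tab; under the hood it maps to a single MSBuild property in the csproj.  A sketch, assuming the simpler name:

<PropertyGroup>
  <AssemblyName>pomodoro</AssemblyName>
</PropertyGroup>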

Adding Projects

When laying out the solution, the first challenge is determining how many Visual Studio Projects we’ll need, and there are many factors to consider including dependencies, security, versioning, deployment, reuse, etc.

There appears to be a school of thought that believes every component or module should be its own assembly, and I strongly disagree.  Assemblies should be thought of as deployment units: if the components version and deploy together, it’s very likely that they should be a single assembly.  As Visual Studio does not handle large numbers of projects well, it’s always better to start with larger assemblies and separate them later if needed.

For my pomodoro app, I’ve decided to structure the project into two primary pieces, “core” and “shell”, where “core” provides the model of the application and “shell” provides the user-interface specific plumbing.

Add Test Projects

Right from the start of the project, I’m gearing towards how it will be tested.  As such, I’ve created two test projects, one for each assembly.  This allows me to keep the logical division between assemblies.

[Screenshot: the Add New Project dialog for the test projects]

As soon as the projects are created, the first thing I’ll do is adjust the namespaces of the test libraries to match their counterparts.  In effect, the tests become features of the same namespace, but they are packaged in a separate assembly because I do not want to deploy them with the application.  I’ve written about this before.

[Screenshot: adjusting the namespaces of the test projects]

Configure common Assembly Properties

Once we’ve settled into a project structure, the next easy win is to configure the projects to share the same assembly details, such as version number, manufacturer, copyright, etc.  This is easily accomplished by creating a single file to represent this data and then linking each project to it.  At a later step, this file can be auto-generated as part of the build process.

using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]
[assembly: AssemblyCompany("Bryan Cook")]
[assembly: AssemblyCopyright("Copyright © Bryan Cook 2009")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]

Tip: Link the AssemblyVersion.cs file to the root of each project, then drag it into the Properties folder.
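Under the hood, the linked file shows up in each csproj as a Compile item with Link metadata; something along these lines (path assumed relative to the project):

<Compile Include="..\AssemblyVersion.cs">
  <Link>Properties\AssemblyVersion.cs</Link>
</Compile>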

Give the Assemblies a Strong-Name

If your code will ultimately end up on an end-user desktop, it is imperative to give the assembly a strong name.  We can take advantage of Visual Studio’s built-in features to create our strong-name key (.snk), but we’ll also take a few extra steps to ensure that each project uses the same key.

  1. Open the project properties.
  2. Click on the Signing tab
  3. Check the “Sign the assembly” checkbox
  4. Choose “<New…>”
  5. Create a key with no password.
  6. Open Windows Explorer and copy the snk file to the root of the solution.
  7. Then for each project:
    1. Check the “Sign the assembly” checkbox
    2. Choose "<Browse…>"
    3. Navigate to the root of the solution and select the snk key.

Note that Visual Studio will copy the snk file to each project folder, though each project will have the same public key.

Designate Friend Assemblies

In order to aid testing, we can configure our Shell and Core assemblies to implicitly trust our test assemblies.  I’ve written about the benefits before, but the main advantage is that I don’t have to alter type visibility for testing purposes.  Since the assemblies have a strong name, the InternalsVisibleTo attribute requires the full public key.

[Screenshot: retrieving the strong-name public key]
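If you need to grab the full public key yourself, the Strong Name tool (sn.exe) that ships with the SDK can extract it; a quick sketch, assuming the key file is named pomodoro.snk:

sn -p pomodoro.snk pomodoro.pub
sn -tp pomodoro.pub

The first command extracts the public key from the key pair; the second prints the full public key and its token to the console.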

Since all the projects share the same key file, this public token will work for all the projects.  The following shows the InternalsVisibleTo attribute for the Pomodoro.Core project:

[assembly: InternalsVisibleTo("Pomodoro.Core.Tests, PublicKey=" +
"0024000004800000940000000602000000240000525341310004000001000" +
"1003be2b1a7e08d5e14167209fc318c9c16fa5d448fb48fe1f3e22a075787" +
"55b4b1cf4059185d2bd80cc5735142927fbbd3ab6eeebe6ac6af774d5fe65" +
"0a226b87ee9778cb2f6517382102894dc6d62d5a0aaa84e4403828112167a" +
"1012d5b905a37352290e4aa23f987ff2be3ccda3e27a7f7105cf5b05c0baf" +
"3ecbfd2371c0fa0")]

Setup External References

I like to put all the third-party assemblies referenced by the projects into a “lib” folder at the root of the solution.  At the moment, I’m only referencing Moq for testing purposes.

A note on external references and source control: Team Foundation Server typically only pulls dependencies that are listed directly in the solution file.  While there are a few hacks for this (add each assembly as an existing item in a Solution Folder, or create a class library that contains the assemblies as content), I prefer to keep all my dependencies in a separate folder with no direct association to the Visual Studio solution.  As a result, these references must be manually updated by performing a “Get Latest” from the TFS Source Control Explorer.  If you’ve got a better solution, spill it; let’s hear your thoughts.

Setup Third-Party Tools

For all third-party tools that are used as part of the build, I like to include these in a “tools” or “etc” folder at the root of the solution.  This approach lets me bundle all the necessary tools so other developers can ramp up faster.  It adds a bit of overhead when checking things out, but certainly simplifies the build script.

Setup Build Script

There are a few extra steps I had to take to get my .NET 4.0 project to compile using NAnt.

  1. Download the nightly build of NAnt 0.86 Beta 1.  The nightly build solves the missing SdkInstallRoot build error.
  2. Paige Cook (no relation) has a comprehensive configuration change that needs to be applied to nant.exe.config
  3. Modify Paige’s version numbers from .NET 4.0 Beta 1 to Beta 2 (replace all references to “v4.0.20506” with “v4.0.21006”).

Here are a few points of interest for the build file listed below:

  • I’ve defined a default target, “main”.  This allows me to simply execute “nant” in the root folder of the solution and it’ll take care of the rest.
  • The “main” target is empty because the real work is done by the order of its dependencies.  Currently I’m only specifying “build”, but normally I would specify “clean, build, test”.
<project default="main">

  <!-- VERSION NUMBER (increment before release) -->
  <property name="version" value="1.0.0.0" />
  
  <!-- SOLUTION SETTINGS -->
  <property name="framework.dir" value="${framework::get-framework-directory(framework::get-target-framework())}" />
  <property name="msbuild" value="${framework.dir}\msbuild.exe" />
  <property name="vs.sln" value="TwelveDays.sln" />
  <property name="vs.config" value="Debug" />

  <!-- FOLDERS AND TOOLS -->
  <!-- Add aliases for tools here -->
  
  
  <!-- main -->
  <target name="main" depends="build">
  </target>

  <!-- build solution -->
  <target name="build" depends="version">

    <!-- compile using msbuild -->
    <exec program="${msbuild}"
      commandline="${vs.sln} /m /t:Clean;Rebuild /p:Configuration=${vs.config}"
      workingdir="."
          />

  </target>
    
  <!-- generate version number -->
  <target name="version">
    <attrib file="AssemblyVersion.cs" readonly="false" if="${file::exists('AssemblyVersion.cs')}" />
    <asminfo output="AssemblyVersion.cs" language="CSharp">
      <imports>
        <import namespace="System" />
        <import namespace="System.Reflection" />
      </imports>
      <attributes>
        <attribute type="AssemblyVersionAttribute" value="${version}" />
        <attribute type="AssemblyFileVersionAttribute" value="${version}" />
      </attributes>
    </asminfo>
  </target>

</project>

Next Steps…

In the next post, we’ll look at the object model for our Pomodoro application.


Tuesday, December 15, 2009

Unity, Dependency Injection and Service Locators

For a while now, I’ve been chewing on some thoughts about service locators versus dependency injection, but struggled to find the right way to say it.  I sent an email to a colleague today that tried to describe some of the lessons learned from my last project.  It comes close to the visceral feelings I have around this subject, and describes it well.  Here it is, unaltered:

When you pull dependencies in from the Container, you’re using a Service Locator feature of Unity; When you push dependencies in via the constructor, you’re using Dependency Injection.  There’s much debate over which pattern to use.  When using dependency injection, the biggest advantage is that the relationship between objects is well known.  This helps us understand the responsibilities of each object better, may have better performance (if we’re resolving dependencies a lot) and it makes it easier to write tests that isolate the behavior of the subject under test.

In a project where we’re resolving dependencies at runtime, the true nature of the dependencies between objects is obscured -- developers must read through the source to understand responsibilities.  Each time a change is made to resolve new dependencies from the container, the tests will fail.  Since locating and understanding the responsibility of the dependencies in the source is a complex task, the easy solution is to simply register the missing dependencies in the container and move on.  In most cases, the object that is registered is the concrete implementation which is also susceptible to the same dependency problem.  Ultimately, this compounds the problem until the tests themselves become obscured, vague, incomplete or overly complex.

In contrast, if the dependencies are declared in the constructor, all dependencies are known at compile time.  Any changes made to the subject won’t compile until the tests are updated to reflect the new functionality.  Note that because the object doesn’t use the container, the tests become just like any other simple POCO test.

This is not a silver bullet, however.  There are times when resolving from the container makes sense (loaders and savers, for example), but even then the container can be hidden inside a POCO factory.  In contrast to service location, constructor injection may require more work upfront to realize the dependencies, though this can be mitigated in some cases by a TDD methodology where the tests flesh out the responsibilities of the subject as it is written.
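To make the push/pull distinction concrete, here’s a minimal sketch; the report types are hypothetical:

// Pull: the class locates its own dependency, so the relationship
// is only discoverable by reading the method body.
public class LocatorReportBuilder
{
    private readonly IUnityContainer _container;

    public LocatorReportBuilder(IUnityContainer container)
    {
        _container = container;
    }

    public Report Build()
    {
        var repository = _container.Resolve<IReportRepository>();
        return new Report(repository.GetData());
    }
}

// Push: the dependency is declared in the constructor, so the
// relationship is visible at compile time and trivial to fake in tests.
public class InjectedReportBuilder
{
    private readonly IReportRepository _repository;

    public InjectedReportBuilder(IReportRepository repository)
    {
        _repository = repository;
    }

    public Report Build()
    {
        return new Report(_repository.GetData());
    }
}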

Comments welcome.


Monday, December 14, 2009

Twelve Days of Code Challenge – 2009

Two years ago I tried an experiment in blogging for the festive season called the "Twelve Days of Code". The concept was to challenge myself to experiment in new technologies for at least an hour a day and blog about it. While I learned a lot about a focused topic (I chose Visual Studio Automation), the experiment didn't live up to my expectations.

This year, I want to do something different. I want to create a social experiment and challenge you!

The challenge

This year, the challenge is to write a simple Pomodoro-style task tracking application. The application can be as simple or as far-fetched as you want it to be, but at a minimum the application needs to:

  • Start, Stop and Cancel a Pomodoro
  • Notify the user when the pomodoro is complete

Of course, you don’t have to limit yourself to this functionality – sky’s the limit. If you want to collect usage statistics and provide reporting capabilities, persist a custom task list using SQLite, or work in the other characteristics of the pomodoro “flow” such as the 3-5 minute break between tasks – that’s up to you. Surprise me – and yourself!
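If it helps to frame the minimum feature set, the core contract could be as small as this (a sketch only; the names are entirely up to you):

public interface IPomodoroTimer
{
    void Start();    // begin the countdown (classically 25 minutes)
    void Stop();
    void Cancel();

    // raised when the pomodoro completes, so the UI can notify the user
    event System.EventHandler Completed;
}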

In the spirit of Twelve Days of Code, pick a technology that’s new to you and do as much as you can. I'm planning on doing mine in .NET 4.0.

The rules

  • You can tackle any task in any order
  • Spend as much time as you want researching, but coding must be limited to one hour
  • Blog about it and leave a comment here
  • Have fun

What do I win?

There are no prizes, unfortunately. But if you are ever in downtown Toronto, there will be beers involved.

Here’s an outline of my twelve days:

  1. Solution Setup (build script, packaging, strong-names, etc)
  2. Object Model (application structure, commands, controllers)
  3. Presentation Model (View Models)
  4. Presentation templates (XAML, Converters, Data Templates)
  5. Composite Application setup (Prism)
  6. Dependency Injection Setup (Unity)
  7. Persistence Layer (record some stats using SQLite)
  8. Persistence Layer (Entity Framework 4.0)
  9. Animations (.NET 4.0 Visual State Manager)
  10. Windows 7 features (task bar icon overlays, action buttons, jump lists?)
  11. Installer (Wix)
  12. Functional Automation Testing

Good luck, Happy Coding and Happy Holidays.


Friday, December 11, 2009

Visual Studio Keyboard Katas - II

Hopefully, if you read the last kata and have been trying it out, you’ve found yourself needing the mouse less for common activities such as opening files and reviewing build status.  This kata builds upon those techniques, adding seven more handy shortcuts and a pattern to practice them.

Granted, the last kata was a bit of a white-belt pattern: obvious and almost comical, but essential.  In Tae Kwon Do, the yellow-belt patterns introduce forward and backward motion, so it seems logical that the next kata introduces rapidly navigating forward and backward through code.

Today’s Shortcut Lesson

Our mnemonic for this set of shortcuts is centered around two keys in the upper-right area of the keyboard: F12 and Minus (-).  The basic combinations for these keys can be modified by using the SHIFT key.

Also note that I’ve introduced another tool window shortcut (CTRL + Window).  The Find Symbol Results window is also displayed when you do a Quick Symbol search, which may help explain the “Q”.

F12: Go to Definition
SHIFT + F12: Find all References
CTRL + MINUS: Navigate Backward
SHIFT + CTRL + MINUS: Navigate Forward
CTRL + W, Q: Find Symbol Results Window
CTRL + LEFT ARROW: Move to previous word
CTRL + RIGHT ARROW: Move to next word

And as an extra Keeno bonus, an 8th shortcut:

CTRL + ALT + DOWN ARROW: Show MDI File List

Keyboard Kata

Practice this kata any time you need to identify how a class is used.

  1. Open the Solution Explorer. (CTRL+W, S)
  2. Navigate to a file. (Arrow Keys / Enter)
  3. Select a property or variable (Arrow keys)
  4. Navigate to the Definition for this item (F12)
  5. Find all References of this Type (CTRL+LEFT to move the cursor from the definition to the type, then SHIFT+F12 for references)
  6. Open one of the references (Arrow Keys / Enter)
  7. Open the next reference (CTRL+W,Q / Arrow Keys / Enter)
  8. Open the nth reference (CTRL+W,Q / Arrow Keys / Enter)
  9. Navigate to the original starting point (CTRL + MINUS)
  10. Navigate to the 2nd reference (SHIFT + CTRL + MINUS)
  11. Navigate to any window (CTRL + ALT + DOWN / Arrow Keys / Enter)


Monday, December 07, 2009

Visual Studio Keyboard Katas

I’ve never spent much time learning keyboard shortcuts for Visual Studio.  They’ve always seemed hard to remember: many commands require multiple key combinations, some have multiple shortcut bindings, and some keystrokes simply aren’t intuitive.  Recently, however, I’ve met a few IDE ninjas who have opened my eyes to the productivity gains to be had.

The problem with learning keyboard shortcuts is that they can become a negative self-reinforcing loop.  If the secret to learning keyboard shortcuts is using them during your day-to-day activities, the act of stopping work to look up an awkward keystroke interrupts your flow, lowers your productivity, and ultimately results in lost time.  Lost time and distractions put pressure on us to stay focused and complete our work, which further discourages us from stopping to learn new techniques, including those that would ultimately speed us up.  Oh, the irony.

To break out that loop, we need to:

  • learn a few shortcuts by associating them with some mnemonics; and then
  • learn a few exercises that we can inject into daily coding flow

As an homage to the Code Katas cropping up on the internets, this is my first attempt at a Keyboard Kata. 

The concept of the “kata” is taken from martial arts, where a series of movements are combined into a pattern.  Patterns are ancient, handed down from master to student over generations, and are a big part of martial art exams.  They often represent a visualization of defending yourself from multiple attackers, with a focus on technique, form, and strength.  The point is that you repeat them over and over until you master them and they become instinctive muscle memory.  Having done many years of Tae Kwon Do, many years ago, I still remember most of my patterns to this date.  Repetition is a powerful thing.

A note about my Visual Studio environment:  I’m using the default Visual Studio C# keyboard scheme in Visual Studio 2008.  I’ve unpinned all of my commonly docked windows so that they auto-hide when not in use.  Unpinning your tool windows not only gives you more screen real estate, but it encourages you to use keyboard sequences to open them.

Today’s Shortcut Lesson

In order to help your retention for each lesson, I’m going to limit what you need to remember to seven simple shortcuts.  Read through the shortcuts, try out the kata, and include it in your daily routine -- memorize them and let them become muscle memory.  I hope to post a bunch of Katas over the next few weeks.

Tip: You’ll get even better retention if you say the shortcuts out loud as you do them.  You’ll feel (and sound) like a complete dork, but it works.

Tool Windows (CTRL+W, …)

Visual Studio’s keyboard scheme does have some reason behind its madness: related functionality is grouped under similar shortcuts.  The majority of the tool windows are grouped under CTRL+W.  If it helps, think CTRL+WINDOW.

Here are a few of the shortcuts for Tool Windows:

CTRL+W, S: Solution Explorer
CTRL+W, P: Properties
CTRL+W, O: Output Window
CTRL+W, E: Errors
CTRL+W, C: Class View

Note that the ESC key will put focus in the currently opened document and auto-hide the current tool window.

Build Shortcuts

F6 or CTRL+SHIFT+B: Build Solution
SHIFT+F6: Build Project

Opening a Solution Kata

So here is the kata.  Try this pattern every morning after you open a solution file.

  1. Open the Solution Explorer.
  2. Navigate to a file
  3. View its properties
  4. Build the current Project
  5. Build the Solution
  6. Review the Output
  7. Check for build Errors

Extra credit:

  1. Open a file by navigating to it in the solution explorer
  2. Open a file to a specific method in the Class View
  3. View properties of a currently opened file.


Wednesday, December 02, 2009

NUnit for Visual Studio Addin

I recently stumbled upon a great addin for Visual Studio that uses the Visual Studio Test Adapter pattern to run NUnit tests within Visual Studio as MS Tests.  They appear in the Test List Editor and execute just like MS Tests, including those handy Run and Debug keyboard shortcuts I described in my last post.

Since they operate as MS Tests, the project requires some additional meta-data in the csproj file in order for Visual Studio to recognize the project as a Test library.  My last post has the details.
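For reference, the key piece is the ProjectTypeGuids element in the csproj, using the same GUIDs covered in that post:

<ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>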

Curious to see how far the addin could go as a stand-in for NUnit, I fired up Visual Studio, a few beers, and the NUnit attribute documentation to put it through its paces.  I’ve compiled my findings in the table below.

In all fairness, there are a lot of attributes in NUnit, some of these you probably didn’t know existed.

NUnit Attribute | Supported | Comments
Category | No | Sadly, the addin does not register a new column definition for Category. Though this feature is not tied to any functional behavior, it would be greatly welcomed to improve upon Visual Studio’s Test Lists.
Combinatorial | Yes |
Culture / SetCulture | No | Tests that would normally be excluded by NUnit fail.
Datapoint / Theory | No | Test names do not match the NUnit runtime. All Datapoints produce the result Not Runnable in the Test Results.
Description | No | Value does not appear in the Test List Editor.
Explicit | No | Explicit tests are executed and appear as Enabled in the Test List Editor.
ExpectedException | Yes |
Ignore | Partial | Ignored tests are excluded from the Test List Editor, so they are ignored, but they do not appear as Enabled = False.
MaxTime / Timeout | Partial | Functions properly, though the supplied setting does not appear in the Timeout column in the Test List Editor.
Platform | No | Tests are executed regardless of the specified platform.
Property | - | Custom properties do not appear in the output of the TRX file, which is where I’m assuming they would appear. Not entirely sure if the schema would support custom properties, however.
Random | No | Tests are generated, though the names contain only numbers. Executing these tests produces the result Not Runnable in the Test Results.
Range | No | Tests are generated, though the names contain only numbers. Executing these tests produces the result Not Runnable in the Test Results.
Repeat | Yes |
RequiredAddin | - | Not tested.
RequiresMTA / RequiresSTA / RequiresThread | Yes |
Sequential | Yes |
Setup / Teardown | Yes |
SetupFixture | No | Setup methods for a given namespace do not execute.
Suite | - | Not tested (requires a command-line switch).
TestFixtureSetup / TestFixtureTeardown | Yes |
Test | Yes | Of course!
TestCase | Yes | Tested different argument types (int, string), TestName, ExpectedException.
TestCaseSource | Yes |

There are quite a few No’s in this list, but the major players (Test, Setup/Teardown, TestFixtureSetup/Teardown) are functional.  I’m actually pleased that NUnit 2.5.2 features such as parameterized tests (TestCase, TestCaseSource) and Combinatorial / Sequential / Values are in place, as well as former addin features that were bundled into the framework (MaxTime / Repeat).

With respect to the malformed test names and non-runnable tests for the Theory / Range / Random attributes, hopefully this is a small issue that can be resolved.  The cosmetic issues with Ignore / Description / Category don’t pose any major concerns, though fixing them would be a large win in terms of full compatibility with the MS Test user interface and features.

I’ve never used the SetupFixture nor the culture attributes, so I’m not losing much sleep over these.

However, the main issue for me is that Explicit tests are always executed.  I’ve worked on many projects where a handful of tests either brought down the build server or couldn’t be run with other tests.  Rather than solve the problem, developers tagged the tests as Explicit: they work, but you’d better have a good reason to be running them.

Hats off to the NUnitForVS team.


Tuesday, December 01, 2009

Manually creating a MS Test Project

Although I’ve always been a huge proponent of NUnit, I find I’m using MS Test more frequently for the following reasons:

  • My organization is a large Microsoft partner, so there’s often some preference for Microsoft tools in our projects.
  • Support for open source tools is a concern for some organizations I work with.  Although tests are not part of the production deliverables, some organizations are very risk averse and reasonably do not want to tie themselves to products without support or a guarantee of backward compatibility.
  • Severe Resharper withdrawal.  After spending several years with Resharper tools, I’ve spent the last year with a barebones Visual Studio 2008 installation.  Without the tight integration between Visual Studio and NUnit, attaching and debugging a test process is no longer second nature. If the JetBrains guys are listening, hook me up.

Reluctantly, I’ve started to use Visual Studio Test, warts and all.  Out of the box, MS Test has two very handy keyboard shortcuts where you can either Run (CTRL+R, T) or Debug (CTRL+R, CTRL+T) the current test, fixture or solution, depending on where your cursor currently sits.

Oddly enough, I found myself in a position where I had manually created a Test project by adding the appropriate references, but none of the Visual Studio Test features worked, including these handy shortcuts.  Any attempt to run tests using these shortcuts produces an error:

No tests were run because no tests were loaded or the selected tests are disabled.

This error is produced because the Test Adapter looks for a few meta attributes in the project file that are added when you use the New Test Project template.

To manually create an MS Test project in Visual Studio:

  1. Create a new Class Library project
  2. Add a reference to: Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll
  3. Right-click your project and choose “Unload Project”.

    [Screenshot: the Unload Project context menu]
  4. Right-click on your project and choose “Edit <ProjectName>”

    [Screenshot: editing the project file]
  5. Add the following ProjectTypeGuids element to the first PropertyGroup element:

      <PropertyGroup>
        <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
        <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
        <ProductVersion>9.0.30729</ProductVersion>
        <SchemaVersion>2.0</SchemaVersion>
        <ProjectGuid>{4D38A077-23EE-4E9F-876A-43C33433FFEB}</ProjectGuid>
        <OutputType>Library</OutputType>
        <AppDesignerFolder>Properties</AppDesignerFolder>
        <RootNamespace>Example.ManualTestProject</RootNamespace>
        <AssemblyName>Example.ManualTestProject</AssemblyName>
        <TargetFrameworkVersion>v3.5</TargetFrameworkVersion>
        <FileAlignment>512</FileAlignment>
        <ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
      </PropertyGroup>
  6. Right-click on the Project and choose "Reload <Project Name>"

Once reloaded, the handy shortcuts work as expected.

Note for the curious:

  • Guid {FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}: refers to C# project
  • Guid {3AC096D0-A1C2-E12C-1390-A8335801FDAB}: refers to the “Test Project Flavor”

From my limited understanding of how the Visual Studio Test package works, it scans the solution looking for projects that can contain tests.  Without the magic ProjectTypeGuids, the class library is excluded from this process.

Happy coding.


Monday, November 23, 2009

User Extensions with the Selenium Toolkit for .NET

As I mentioned in my last post, version 0.81 of the Selenium Toolkit for .NET provides a simple mechanism to add user-defined extensions to Selenium.

Some background

A user-extension for Selenium is a JavaScript file that becomes embedded into Selenium’s Test Runner.  Typically, the script extends Selenium’s core JavaScript to add new commands, but it also can be used to extend existing functionality, or add custom locator strategies.  Both Selenium IDE and RC allow extensions to be added.

For the purposes of this discussion, here’s a crude example of a selenium extension: 
Selenium.prototype.getMyValue = function(locator, text) {
    return locator;
};

The toolkit takes a straightforward approach to including extensions: it combines the files listed in the configuration file (in order of appearance) into a single file and then configures the Selenium RC to use it.

Incidentally, if you ever need to verify that your extensions were loaded, you can simply view the JavaScript file at this url: http://localhost:4444/selenium-server/core/scripts/user-extensions.js

The ISelenium interface of the Selenium API does not directly expose a mechanism to execute custom commands.  Instead, we need to send our custom command using a command processor (ICommandProcessor), which isn’t exposed by default so a custom implementation of the ISelenium interface is required.  The Selenium documentation provides a pretty good overview of how to do this, and the remainder of this post demonstrates how to integrate this approach with the toolkit.

Create a customized ISelenium

One of the new features added to the toolkit in this release is the ability to supply your own mechanism for instantiating the ISelenium instance. The factory is a basic class with a Create method: simply implement the ISeleniumFactoryProvider interface and then wire it up in the configuration settings (see below).  If no setting is provided in the config, a default factory is used.

This example below shows a custom ISeleniumFactoryProvider that creates a customized ISelenium instance that exposes the ICommandProcessor:

namespace Custom
{
    using Selenium;
    using SeleniumToolkit.Core;

    public class SeleniumFactory : ISeleniumFactoryProvider
    {
        public ISelenium Create(string host,
                                int port,
                                string browserProfile,
                                string baseUrl)
         {
             ICommandProcessor processor = new HttpCommandProcessor(host, port, browserProfile, baseUrl);
             return new CustomSelenium(processor);       
         }
    }

    public class CustomSelenium : DefaultSelenium
    {
        public CustomSelenium(ICommandProcessor processor)
            : base(processor)
        {
            CommandProcessor = processor;
        }

        public ICommandProcessor CommandProcessor
        {
            get;
            protected set;
        }
    }
}

Tweak your Settings

After you’ve compiled your custom factory, you’ll need to reference your JavaScript file in the userExtensions element and specify your custom factory using the factoryType attribute of the Selenium Node.

<Selenium
    factoryType="Custom.SeleniumFactory, CustomProject"
    >
   <userExtensions>
       <add name="example"
            path="myextension.js" />
   </userExtensions>
</Selenium>

Putting it all together

With the configuration settings in place, now all we need to do is cast the Browser.Current instance to our custom type.

namespace Custom.Test
{
    using NUnit.Framework;
    using Selenium;
    using SeleniumToolkit;

    [WebFixture]
    public class Example
    {
        [WebTest]
        public void ShowUsage()
        {
            CustomSelenium browser = Browser.Current as CustomSelenium;

            string[] inputArgs = { "Hello World" };
            string result = browser.CommandProcessor.DoCommand("getMyValue", inputArgs);
            Assert.AreEqual("Hello World", result);
        }
    }
}

Happy coding.


Thursday, November 19, 2009

Selenium Toolkit for .NET 0.81 Now Available

I’m happy to announce that the next release of the Selenium Toolkit for .NET is now available for download.  This release adds minor fixes to the runtime and enhanced configuration support.

New for this release

This release includes several new enhancements:

  • NUnit 2.5.2 Support
  • Custom Browser Profiles and Aliases
  • User Extensions support
  • Proxy Server support

NUnit 2.5.2 Support

This release upgrades the NUnit addin to the latest version of NUnit (2.5.2).  NUnit 2.5.2 includes several performance and user interface enhancements which improve the development experience, including data-driven tests and a source-code browser for exceptions.

Because of a dependency issue with the NUnit framework, addins are specific to the runtime they’re compiled against, so you must have NUnit 2.5.2 installed in order to use the addin.  If you have NUnit 2.4.8, you’ll have to upgrade.  If you haven’t been able to use the toolkit because you weren’t using NUnit 2.4.8, please download it and send in some feedback.

Custom Browser Profiles and Aliases

One of the key goals of the toolkit is to minimize duplication and environment-specific settings in your tests, so that moving between environments has as little impact as possible.  The toolkit now supports the concept of browser aliases, where custom profiles can be defined as a lookup table in the configuration settings.

Before:

[WebFixture]
public class ExampleTest
{
    [WebTest(DefaultBrowser=@"*custom ""C:\PathToFireFox\FireFox.exe"" -no-remote -profile ""C:\PathToProfile""")]
    public void OpenCustomProfile()
    {
        // ....
    }
}

After:

The dependency on the physical folder path can now be externalized to the configuration file, which can be modified to suit each environment without having to alter the tests.

[WebFixture]
public class ExampleWithBrowserAlias
{
    [WebTest(DefaultBrowser="ff-profile-1")]
    public void OpenCustomProfile()
    {
        // ....
    }
}

And the configuration settings....

<configuration>
    <!-- snip -->

    <Selenium
        BrowserUrl="http://www.bryancook.net"
        >
        <browsers>
            <add key="ff-profile-1"
                 value="*custom &quot;C:\FireFoxPath&quot; -no-remote -profile &quot;PathToProfile&quot;"
                />
        </browsers>
    </Selenium>

</configuration>

User Extensions support

Selenium provides an extensibility model that allows you to extend its functionality through custom JavaScript files.  This release provides two key enhancements to enable this functionality:

  • Configuration element that allows you to specify JavaScript files and directories to be concatenated into the user-extensions.js file; and
  • Factory model that allows you to create your own implementations of the ISelenium interface.  A custom implementation allows you to execute your own custom commands, and will likely play an integral role in the WebDriver backed Selenium implementation planned for Selenium 2.0.

Expect a blog post soon that demonstrates this functionality, but in the meantime you can find details on the configuration settings here.

Proxy Server Support

While not as exciting or as interesting as custom user extensions, this release includes some additional configuration settings for users running Selenium-RC against a web site that is only accessible through a proxy server.  See the configuration settings for more details.

Download now

Enough already!  Go download the latest release and try it out.  As always, feedback is welcome.

Happy coding.


Wednesday, November 11, 2009

Add Syntax Highlighter to Live Writer Templates

I’ve been tweaking my blog template slowly over the past year, and while I’ve got plans to switch to the 960.gs format, my most recent change is how I highlight my code snippets: like everyone else on the planet, I’m now using Alex Gorbatchev’s SyntaxHighlighter in combination with Anthony Bouch’s PreCode.

Up to this point, I’d been using CSharpFormat with Omar Shahine’s Insert Code plugin.  That solution has worked well for me, but the features of SyntaxHighlighter (line numbers, collapsible blocks, etc.) and the quality of PreCode’s editing experience made me switch.

The key difference between the two solutions is that CSharpFormat adorns your code with HTML blocks that are immediately visible in Live Writer (and my RSS feed), while SyntaxHighlighter highlights your code at runtime using JavaScript.
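For example, a SyntaxHighlighter snippet is published as a plain pre element with a brush hint that the script picks up when the page loads:

<pre class="brush: csharp">
public void HelloWorld() { }
</pre>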

While I can live without the RSS feed highlighting, I really miss seeing my code highlighted in Live Writer.  After updating my Blogger template, I noticed that Live Writer ignores your JavaScript when it updates your editing template.

Fortunately, Live Writer’s editing template is stored locally as HTML, here:

C:\Users\bcook\AppData\Roaming\Windows Live Writer\blogtemplates\<guid>\index.html

Simply open the file in your editor, and paste in your JavaScript:

<!-- syntax highlighter --> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shCore.js' type='text/javascript'></script> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shBrushCSharp.js' type='text/javascript'></script> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shBrushCss.js' type='text/javascript'></script> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shBrushJScript.js' type='text/javascript'></script> 
<script src='http://alexgorbatchev.com/pub/sh/2.0.320/scripts/shBrushXml.js' type='text/javascript'></script> 
<script type='text/javascript'> 
  SyntaxHighlighter.config.bloggerMode = true;
  SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/2.0.320/scripts/clipboard.swf';
  SyntaxHighlighter.all();
</script>

Tuesday, November 10, 2009

Implementing a Strategy Pattern using the Unity Framework

My current project is using Unity and has had some interesting challenges to work through.  One of the design challenges we encountered involved several components that were nearly identical in appearance except for a few minor functional details.  We decided to implement a strategy pattern where the variances were different strategies applied to a common context.

For the sake of argument, let’s say we’re talking about a financial calculator that supports different types of financial calculations (future value, present value, payment, etc.).

public class Calculator : ICalculator
{
    public Calculator(IFormula formula)
    {
        _formula = formula;
    }

    public CalcResult Calculate(CalcArgs input)
    {
        return _formula.Process(input);
    }

    private IFormula _formula;
}

public interface IFormula
{
    CalcResult Process(CalcArgs input);
}

public class FutureValue : IFormula
{
    public CalcResult Process(CalcArgs input)
    {
        // ...
    }
}

public class PresentValue: IFormula
{
    public CalcResult Process(CalcArgs input)
    {
        // ...
    }
}

In my last post, I showed how Unity needs some additional configuration in order to resolve ambiguous types: Unity won’t be able to resolve our Calculator unless we specify which IFormula we want to use.  This scenario is a bit different because we want to be able to use both the FutureValue and PresentValue formulas at runtime.

We have a couple of options to choose from, each with their own pitfalls:

  • Container overloading
  • Named Instances

Container Overloading

At a basic level, we’re limited to registering a type only once per container.  Fortunately, Unity supports the ability to spawn a child container that can be used to replace registered types with alternate registrations.  By swapping out our types, we can create alternate runtimes that can be scoped to specific areas of our application.

[TestMethod]
public void CanOverloadContainer()
{
    // create the container and register the
    //    PresentValue formula
    var container = new UnityContainer();
    container.RegisterType<IFormula, PresentValue>();

    // verify we can resolve PresentValue
    var formula1 = container.Resolve<IFormula>();
    Assert.IsInstanceOfType(formula1, typeof(PresentValue));

    // create a child container, and replace
    //    the formula with FutureValue formula
    var childContainer = container.CreateChildContainer();
    childContainer.RegisterType<IFormula, FutureValue>();

    // verify that we resolve the new formula
    var formula2 = childContainer.Resolve<IFormula>();
    Assert.IsInstanceOfType(formula2, typeof(FutureValue));
}

This approach provides a clean separation between the container and the various combinations of our strategy pattern.  If you can compose your application of controllers that can extend and manipulate child containers, this is your best bet as each container provides a small targeted service to the application as a whole.

However, if the variations must live in close contact with one another and you can’t separate them into different containers, you’ll need a different strategy.

Named Instances

Unity provides the ability to register a type more than once, with the caveat that each registration is given a unique name within the container.  This feels more like a hack than a best practice, but it provides a simple solution for registering the various combinations of ICalculator types side by side.  Since our variations are based on the types passed in through the constructor, we can provide Unity with the construction information it needs using InjectionConstructor and ResolvedParameter objects.  To resolve our objects, we must specify the unique name when resolving.

[TestMethod]
public void CanConstructUsingConstructorInfo()
{
    var container = new UnityContainer();

    container.RegisterType<ICalculator, Calculator>("FutureValue",
                            new InjectionConstructor(new ResolvedParameter<FutureValue>())
                            );

    container.RegisterType<ICalculator, Calculator>("PresentValue",
                            new InjectionConstructor(new ResolvedParameter<PresentValue>())
                            );

    var calc1 = container.Resolve<ICalculator>("FutureValue");
    var calc2 = container.Resolve<ICalculator>("PresentValue");
}

At this point, I need to step aside and point out that there is a potential problem with this approach.  We’ve created a mechanism that allows our type to be easily resolved directly from the container using its unique name.  There is a temptation to pass the container into our code and resolve these objects by name.  This is a slippery slope that will quickly tie your application code and tests directly to Unity.  I will blog more about this in an upcoming post, but here are some better alternatives:

Resolved Parameters

Rather than manually resolving the named instance from the container, configure the container to use your named instance:

public class Example
{
    public Example(ICalculator calculator)
    {
        Calculator = calculator;
    }

    public ICalculator Calculator
    {
        get;
        set;
    }
}

[TestClass]
public class ExampleTest
{
    [TestMethod]
    public void ConfigureExampleForNamedInstance()
    {
        var container = new UnityContainer();
        container.RegisterType<ICalculator, Calculator>("FutureValue",
                                    new InjectionConstructor(
                                            new ResolvedParameter<FutureValue>())
                                    );

        container.RegisterType<Example>(
                                    new InjectionConstructor(
                                            new ResolvedParameter<ICalculator>("FutureValue"))
                                    );

        Example ex = container.Resolve<Example>();
    }
}

Dependency Attributes

Unity supports special attributes that can be applied to your code to provide configuration guidance.  This can make instrumenting your code much easier, but it has the downside of bringing the dependency injection framework into your code.  This isn’t necessarily the end of the world, but it can limit us if we want to reuse this class with a different dependency or overload it in another container.

public class Example
{
    public Example(
                [Dependency("FutureValue")]
                ICalculator calculator
                  )
    {
        Calculator = calculator;
    }

    public ICalculator Calculator
    {
        get;
        set;
    }
}

Alternatively, the DependencyAttribute can also be applied to properties for setter injection.  Personally, I find this reads better than using attributes in the constructor; I’ll save my points about setter injection for another post.

public class Example
{
    [Dependency("FutureValue")]
    public ICalculator Calculator
    {
        get; set;
    }
}

Conclusion

Although inversion of control promotes use of interfaces to decouple our implementations, we often use abstract classes and composition to construct our objects.  In the case of composition, we need to provide Unity with the appropriate information to construct our objects.  Wherever possible, we should defer construction logic to configuration settings rather than instrumenting the code with container information.


Monday, November 09, 2009

Simple Dependency Injection using the Unity Framework

My current project is using Unity as an Inversion of Control / Dependency Injection framework.  We’ve had a few challenges and lessons learned while using it.  This is my first post on the topic, and I want to provide a quick look:

Unity is slightly different from most dependency injection frameworks that I’ve seen.  Most frameworks, such as Spring.NET (when I last used it), require you to explicitly define the dependencies for your objects in configuration files.  Unity supports configuration files too, but it can also construct any object at runtime by examining its constructor.

Let’s take a simple example: a product catalog service that provides searching for products by category.  The service is backed by a simple and crude ProductRepository object.

public class ProductCatalog
{
    public ProductCatalog(ProductRepository repository)
    {
        _repository = repository;
    }

    public List<Product> GetProductsByCategory(string category)
    {
        return _repository.GetAll()
                          .Where(p => p.Category == category)
                          .ToList();
    }

    private ProductRepository _repository;
}

public class ProductRepository
{
    public ProductRepository()
    {
        _products = new List<Product>()
            {
            new Product() { Id = "1", Name = "Snuggie", Category = "Seen on TV" },
            new Product() { Id = "2", Name = "Slap Chop", Category = "Seen on TV" }
            };
    }

    public virtual List<Product> GetAll()
    {
        return _products;
    }

    private List<Product> _products;
}

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Category { get; set; }
}

With no configuration whatsoever, I can query Unity to resolve my ProductCatalog in a single line of code.  Unity will inspect the constructor of the ProductCatalog and discover that it is dependent on the ProductRepository.  Since the ProductRepository does not require any additional dependencies in its constructor, Unity will construct the repository and assign it to the catalog for me.

[Test]
public void CanResolveWithoutConfiguration()
{
    var container = new Microsoft.Practices.Unity.UnityContainer();

    var catalog = container.Resolve<ProductCatalog>();

    var products = catalog.GetProductsByCategory("Seen on TV");

    Assert.AreEqual(2, products.Count);
}

Aside from the fact that this is a crude example, a production system would be extremely limited because the product catalog is tightly coupled to our repository implementation.  Although we could subclass the product repository, a better solution is to decouple the two classes by introducing an interface for the product repository.

public interface IProductRepository
{
    List<Product> GetAll();
}

public class ProductRepository : IProductRepository
{
    // ...
}

public class ProductCatalog
{
    public ProductCatalog(IProductRepository repository)
    {
        // ...
    }

    // ...
}

This is a much better solution, as our components are now loosely coupled, but Unity no longer has enough information to automatically resolve the ProductCatalog’s dependencies.  To fix this, we need to provide the mapping to the Unity container, either through configuration files or by manipulating the container directly:

[Test]
public void CanResolveWithRegisteredType()
{
    var container = new Microsoft.Practices.Unity.UnityContainer();
    
    container.RegisterType<IProductRepository, ProductRepository>();

    var catalog = container.Resolve<ProductCatalog>();

    var products = catalog.GetProductsByCategory("Seen on TV");

    Assert.AreEqual(2, products.Count);
}

Note that while I’m using the Unity container in my tests here, it's completely unnecessary: I can simply mock out my dependencies (IProductRepository), pass them in, and test normally.
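For example, a hand-rolled stub keeps the container out of the test entirely; a quick sketch:

public class StubProductRepository : IProductRepository
{
    public List<Product> GetAll()
    {
        return new List<Product>()
        {
            new Product() { Id = "1", Name = "Snuggie", Category = "Seen on TV" }
        };
    }
}

[Test]
public void CanTestCatalogWithoutContainer()
{
    var catalog = new ProductCatalog(new StubProductRepository());

    var products = catalog.GetProductsByCategory("Seen on TV");

    Assert.AreEqual(1, products.Count);
}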

Conclusion

This demonstrates how Unity is able to resolve simple types without extensive configuration.  By designing our objects to have their dependencies passed in, they possess no knowledge of the dependency injection framework, making them lean, more portable and easy to test.


Friday, October 23, 2009

Vista: Will not be missed

Here be the final resting place of Windows Vista

Vista, born into the Windows family in early 2007, was finally laid to rest earlier this week.

Wednesday, October 21, 2009

Running MS Team System tests in NUnit

Earlier this week I saw this question on Stack Overflow asking if it was possible to run MS Team System Tests under NUnit.  At first, this sounds like a really odd request.  After all, why not just convert your tests to NUnit and be done with it? 

I can think of a few examples where you may need interoperability – for example, two teams collaborating where team #1 has a corporate policy for MS Test, while team #2 doesn’t have access to a version of Visual Studio with Team System.  Being able to target one platform but have the tests run in either environment is desirable.  Whatever the reason, it’s an interesting code challenge.

Fortunately, the good folks at Exact Magic Software published an NUnit addin that adapts Team System test fixtures to NUnit.  Since NUnit identifies candidate fixtures simply by examining attribute names, the addin’s job is fairly straightforward, but it also recognizes MS-specific features such as TestContext and DataSource-driven tests.
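For example, a vanilla Team System fixture like this one (a minimal sketch) can be discovered and executed as an NUnit test through the addin:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    // MS-specific features like TestContext are recognized by the addin
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void TwoPlusTwo_ReturnsFour()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}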

Unfortunately, the NUnit adapter uses features that reside within the nunit.core assembly which ties it to a specific version of NUnit.  In this case, the addin will only work with NUnit 2.4.6.  This is a frustrating design problem that plagues many NUnit addins, including my own open source project.

Note: NUnit 3.0 plans to solve this problem by moving the boundary between the framework and the test runner so that each version of the framework knows how to run its own tests.  The GUI or host application will be able to load and execute the tests in a version-independent manner.  Hopefully this means that addins will target and execute within a specific framework version, but will work in different versions of the user interface.

The current version of NUnit (2.5.2) is a stepping stone between the 2.4.x framework and the upcoming overhauled 3.0 version.  As part of this transition, there are a lot of breaking changes between older versions.  For me, this translated into a problem where Exact Magic’s project simply won’t compile if you update the dependencies.  Since I need to do similar work for my own project, this was a good exercise.

I’ve reworked their code and put it here.

A few known caveats:

  • The unit tests for the adapter attempt to perform reflection calls against internal fields or methods within NUnit.Core that have been renamed or no longer exist.  I haven’t bothered fixing these tests.
  • Does not appear to have support for AssemblyInitialize or AssemblyCleanup, though these could be adapted by adding an EventListener when the test run starts and finishes.  I may add this feature.
  • The adapter doesn’t wrap functionality contained within MS Test; it simulates its behavior.  While it has the same result, it won’t capture the exact nuances of Microsoft’s implementation.

If you have any concerns or questions, let me know.

Cheers.


Tuesday, October 20, 2009

My current approach to Context/Specification

One of my most popular posts from last year is an article on test naming guidelines, which was written to resemble the format used by the Framework Design Guidelines.  Despite the popularity of the article, I started to stray from those guidelines over the last year.  While I haven’t abandoned the philosophy of those guidelines (most of the general advice still applies), I’ve begun to adopt a much different approach for structuring and organizing my tests, and as a result, the naming has changed slightly too.

The syntax I’ve promoted and followed for years (let’s call it original-flavor TDD) has a few known side-effects:

  • Unnecessary or complex Setup – You declare common setup logic in the Setup of your tests, but not all tests require this initialization logic.  In some cases, a test requires entirely different setup logic, so the initial “arrange” portion of the test must undo some of the work done in the setup.
  • Grouping related tests – When a complex component has a lot of tests that handle different scenarios, keeping dozens of tests organized can be difficult.  Naming conventions can help here, but are masking the underlying problem.

Over the last year, I’ve been experimenting with a Behavior-Driven Development flavor of tests, often referred to as the Context/Specification pattern.  While it addresses the side-effects outlined above, the goal of “true” BDD is to describe requirements in a common language for technical and non-technical members of an Agile project, often with a Given / When / Should syntax.  For example:

Given a bank account in good standing, when the customer requests cash, the bank account should be debited

The underlying concept is that when tests are written using this syntax, they become executable specifications – and that’s really cool.  Unfortunately, I’ve always considered the syntax to be somewhat awkward, and how I’ve adopted this approach is a rather loose interpretation.  I’m also still experimenting, so your feedback is definitely welcome.

An Example

Rather than try to explain the concepts and the coding style in abstract terms, I think it’s best to let the code speak for itself first and then reason through the differences afterwards.

Note: I’ve borrowed and bent concepts from many different sources, some of which I can’t recall.  This example borrows many concepts from Scott Bellware’s specunit-net.

public abstract class ContextSpecification
{
    [TestFixtureSetUp]
    public void SetupFixture()
    {
        BeforeAllSpecs();
    }

    [TestFixtureTearDown]
    public void TearDownFixture()
    {
        AfterAllSpecs();
    }

    [SetUp]
    public void Setup()
    {
        Context();
        Because();
    }

    [TearDown]
    public void TearDown()
    {
        CleanUp();
    }

    protected virtual void BeforeAllSpecs() { }
    protected virtual void Context() { }
    protected virtual void Because() { }
    protected virtual void CleanUp() { }
    protected virtual void AfterAllSpecs() { }
}

public class ArgumentBuilderSpecs : ContextSpecification
{
    protected ArgumentBuilder builder;
    protected Dictionary<string, string> settings;
    protected string results;

    protected override void Context()
    {
        builder = new ArgumentBuilder();
        settings = new Dictionary<string, string>();
    }

    protected override void Because()
    {
        results = builder.Parse(settings);
    }

    [TestFixture]
    public class WhenNoArgumentsAreSupplied : ArgumentBuilderSpecs
    {
        [Test]
        public void ResultsShouldBeEmpty()
        {
            results.ShouldBeEmpty();
        }
    }

    [TestFixture]
    public class WhenProxyServerSettingsAreSupplied : ArgumentBuilderSpecs
    {
        protected override void Because()
        {
            settings.Add("server", "proxyServer");
            settings.Add("port", "8080");

            base.Because();
        }

        [Test]
        public void ShouldContainProxyServerArgument()
        {
            results.ShouldContain("-DhttpProxy:proxyServer");
        }

        [Test]
        public void ShouldContainProxyPortArgument()
        {
            results.ShouldContain("-DhttpPort:8080");
        }
    }
}

Compared to my original flavor, this new bouquet has some considerable differences which may seem odd to an adjusted palate.  Let’s walk through those differences:

  • No longer using Fixture-per-Class structure, where all tests for a class reside within a single class.
  • Top level “specification” ArgumentBuilderSpecs is not decorated with a [TestFixture] attribute, nor does it contain any tests.
  • ArgumentBuilderSpecs derives from a base class ContextSpecification which controls the setup/teardown logic and the semantic structure of the BDD syntax.
  • ArgumentBuilderSpecs contains the variables that are common to all tests, but setup logic is kept to a minimum.
  • ArgumentBuilderSpecs contains two nested classes that derive from ArgumentBuilderSpecs.  Each nested class is a test-fixture for a scenario or context.
  • Each Test-Fixture focuses on a single action only and is responsible for its own setup.
  • Each Test represents a single specification, often only as a single Assert.
  • Asserts are made using extension methods (not depicted in the code example; a minimal sketch follows this list)
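
For completeness, here’s a minimal sketch of what those assertion extensions might look like; the names match the example above, but the implementations are my assumption, written as thin wrappers over NUnit’s asserts:

using NUnit.Framework;

public static class SpecificationExtensions
{
    // asserts that the actual string is empty
    public static void ShouldBeEmpty(this string actual)
    {
        Assert.IsEmpty(actual);
    }

    // asserts that the actual string contains the expected fragment
    public static void ShouldContain(this string actual, string expected)
    {
        StringAssert.Contains(expected, actual);
    }
}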

Observations

Inheritance

I’ve never been a big fan of using inheritance, especially in test scenarios, as it requires more effort on the part of the future developer to understand the test structure.  In this example, inheritance plays a part in both the base class and nested classes, though you could argue the impact of inheritance is negated since the base class only provides structure, and the derived classes are clearly visible within the parent class.  It’s a bit unwieldy, but the payoff for using inheritance is found when viewing the tests in their hierarchy:

Context-Spec-Example

While technically you could achieve a similar effect by using namespaces to group tests, you would lose some of the benefits of encapsulating test-helper methods and common variables.

Although we can extend our contexts by deriving from the parent class, this approach is limited to inheriting from the root specification container (ArgumentBuilderSpecs).  If you were to derive from WhenProxyServerSettingsAreSupplied, for example, you would inherit the test cases from that class as well, as the sketch below illustrates.  I have yet to find a scenario where I needed to do this.  While the concepts of DRY make sense, there’s a lot to be said for the clear intent of test cases, where duplication aids readability.
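
Here’s a hypothetical derived context to illustrate the pitfall; NUnit discovers inherited [Test] methods, so both proxy-server specifications would run again inside this new fixture:

[TestFixture]
public class WhenProxyCredentialsAreAlsoSupplied : WhenProxyServerSettingsAreSupplied
{
    protected override void Because()
    {
        settings.Add("user", "proxyUser");  // hypothetical setting

        base.Because();
    }

    // ShouldContainProxyServerArgument and ShouldContainProxyPortArgument
    // are inherited from the parent context and execute here as well.
}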

Extra Plumbing

There’s quite a bit of extra plumbing needed to create our nested contexts, and it seems to take a bit longer to write tests.  This delay comes from some combination of grappling with new context/specification concepts, writing additional code for subclasses, and spending more thought on which contexts are required.  I’m anticipating that it gets easier with more practice, and some Visual Studio code snippets might simplify the authoring process.

One area where I can sense I’m slowing down is trying to determine whether I should be overriding or extending Context versus Because.  For instance, the proxy-server context above could have gone either way, as the sketch below shows.
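
A hypothetical alternative: the settings could be added in a Context override instead, leaving Because untouched:

[TestFixture]
public class WhenProxyServerSettingsAreSupplied : ArgumentBuilderSpecs
{
    protected override void Context()
    {
        // establish the base context first, then layer on the proxy settings
        base.Context();

        settings.Add("server", "proxyServer");
        settings.Add("port", "8080");
    }

    // Because() is inherited unchanged; the specifications stay the same.
}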

Granular Tests

In this style of tests, where a class is created to represent a context, each context performs only one small piece of work and the tests serve as assertions against the outcome.  I found that the tests I wrote were concise and elegant, and the use of classes to structure the tests around the context helped to organize both the tests and the responsibility of the subject under test.  I also found that I wrote fewer tests with this approach, writing only a few principal contexts.  If a class starts to have too many contexts, it could mean that the class has too much responsibility and should be split into smaller parts.

Regarding the structure of the tests, if you’re used to the fixture-per-class style of tests, it may take some time to get accustomed to the context performing only a single action.  Though as Steven Harman points out, visualizing the context setup as part of the fixture setup may help guide your transition to this style of testing; the comparison below may help as well.
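
For comparison, here’s roughly what the proxy-server check looks like in my original flavor, where each test arranges its own state (a hypothetical reconstruction):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ArgumentBuilderTests
{
    [Test]
    public void Parse_WithProxyServerSettings_ResultContainsProxyServerArgument()
    {
        // arrange: each test builds its own state
        var builder = new ArgumentBuilder();
        var settings = new Dictionary<string, string>
        {
            { "server", "proxyServer" },
            { "port", "8080" }
        };

        // act
        string results = builder.Parse(settings);

        // assert
        StringAssert.Contains("-DhttpProxy:proxyServer", results);
    }
}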

Conclusion

I’m enjoying writing Context/Specification style tests as they read like a specification document and provide clear feedback of what the system does.  In addition, when a test fails, the context it belongs to provides additional meaning around the failure.

There appear to be many different formats for writing tests in this style, and my current format is a work in progress.  Let me know what you think.

Presently, I’m flip-flopping between this format and my old habits.  I’ve caught myself a few times writing a flat structure of tests without taking context into consideration; after a point, the number of tests becomes unmanageable, and it becomes difficult to identify whether I should be tweaking existing tests or writing new ones.  When the tests are organized by context, the number of tests becomes irrelevant, and the focus is placed on the scenarios that are being tested.


Monday, October 19, 2009

Configuring the Selenium Toolkit for Different Environments

Suppose you’ve written some selenium tests using Selenium IDE that target your local machine (http://localhost), but you’d like to repurpose the same tests for your QA, Staging or Production environment as part of your build process.  The Selenium Toolkit for .NET makes it really easy to change the target environment without having to touch your tests.

In this post, I’ll show you how to configure the Selenium Toolkit for the following scenarios:

  • Local Selenium RC / Local Web Server
  • Local Selenium RC / Remote Web Server
  • Remote Selenium RC / Remote Web Server

Running Locally

In this scenario, both the Selenium Remote Control Host (aka Selenium RC / Selenium Server) and the web site you want to test are running on your local machine.  This is perhaps the most common scenario for developers who develop against a local environment, though it also applies to your build server if the web server you want to test is also on that machine. 

Although this configuration is limited to running tests against the current operating system and installed browsers, it provides a few advantages to developers as they can watch the tests execute or set breakpoints on the server to assist in debugging.

This is the default configuration for the Selenium Toolkit for .NET – when the tests execute, the NUnit Addin will automatically start the Selenium RC process and shut it down when the tests complete.

Assuming you have installed the Selenium Toolkit for .NET, the only configuration setting you'll need to provide is the URL of the web site you want to test. In this example, we assume that http://mywebsite is running on the local web server.

<Selenium BrowserUrl="http://mywebsite" />
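
If you’re wondering where this element lives, it’s a custom configuration section in your test project’s app.config.  Here’s a minimal sketch; I’ve elided the section-handler registration since the exact type name belongs to the toolkit’s documentation:

<configuration>
  <configSections>
    <!-- register the Selenium section handler here, per the toolkit's documentation -->
  </configSections>

  <Selenium BrowserUrl="http://mywebsite" />
</configuration>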

Running Tests against a Remote Web Server

In this scenario, Selenium RC runs locally, but the web server is on a remote machine.  You’ll still see the browser executing the tests locally, but because the web server is physically located elsewhere, it’s not as easy to debug server-side issues.  From a configuration perspective, this scenario uses the same configuration settings as above, except the URL of the server is not local.  For example, pointing the tests at a hypothetical QA host requires only a change to the URL:
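
<Selenium BrowserUrl="http://qa-server/mywebsite" />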

This scenario is typically used in a build server environment.  For example, the build server compiles and deploys the web application to a target machine using rsync, then runs a few Selenium functional tests to validate the deployment.

Executing Tests in a Remote Environment / Selenium Grid

In this scenario your local test-engine executes your tests against a remote Selenium RC process.  While this could be a server where the Selenium RC process is configured to run as a dedicated service, it’s more likely that you would use this configuration for executing your tests against a Selenium Grid.  The Selenium Grid server exposes the same API as Selenium RC, but it acts as a broker between your tests and multiple operating systems and browser configurations.

To configure the Selenium Toolkit to use a Selenium Grid, you’ll need to specify the location of the Grid Server and turn off the automatic start/stop feature:

<Selenium
     server="grid-server-name"
     port="4444"
     BrowserUrl="http://mywebsite.com">
     <runtime autoStart="false" />
</Selenium>


Thursday, October 15, 2009

A Proposal for Functional Testing

When I’m writing code, my preference is to follow test-driven development techniques where I’m writing tests as I go.  Ideally, each test fixture focuses attention on one object at a time, isolating its behavior from its dependencies.

While unit tests provide us with immediate feedback about our progress, it would be foolish to deploy a system without performing some form of integration test to ensure that the system’s components work as expected when pieced together.  Often, integration tests focus on a cohesive set of objects in a controlled environment, such as restoring a database after the test.

Eventually, you’ll need to bring all the components together and test them in real-world scenarios.  The best place to bring all these components together is the live system or staged equivalent.  The best tool for the job is a human inspecting the system. Period.

Wait, I thought this was supposed to be about functional testing?  Don’t worry, it is.

Humans may be the best tool for the job, but if you consider the amount of effort associated with code-freezes, build-reports, packaging and deployment, verification, coordination with the client and waiting testing teams, it can be really expensive to use humans for testing.  This is especially true if you deliver a failed build to your testing team: the testers, who’ve been queued up, are now unable to test and must wait for the next build.  If you were to total up all the hours from the entire team, you’d be losing at least a day or more in scheduled cost.

Functional tests can help prevent this loss in production.

In addition, humans possess an understanding of what the system should do, as well as what previous versions did.  Humans are users of the system and can contribute greatly to the overall quality of the product.  However, once a human has tested and validated a feature, revisiting these tests in subsequent builds becomes more of a check than a test.  Having to go back and check these features becomes increasingly difficult to accomplish in short-timelines as the complexity of the system grows.  Invariably, shortcuts are taken, features are missed and subtle, aggravating bugs silently sneak into the system.  While separation of concerns and good unit tests can downplay the need for full regression tests, the value of system-wide integration tests for repetitive tasks shouldn’t be discounted.

Functional tests can help here too, but these benefits are largely the fruits of the labor described in my first point about validating builds.  Most organizations can’t capitalize on this simply because they haven’t built up the base to start from.

Functional tests take a lot of criticism, however.  Let’s address some common (mis)beliefs.

Duplication of testing efforts / Diminishing returns.  Where teams have invested in test driven development, tests tend to focus on the backend code artifacts as these parts are the core logic of the application.  Using mocks and stubs, the core logic can be tested extremely well from the database layer and up, but as unit-tests cross the boundary from controller-logic into the user-interface layer, testing becomes harder to simulate: web-applications need to concern themselves with server requests; desktop applications have to worry about things like screen resolution, user input and modal dialogs.  In such team environments, testing the user-interface isn’t an attractive option since most bugs, if any, originate from the core logic that can be covered by more unit tests.  From this perspective, adding functional tests wouldn’t provide enough insight to outweigh the effort involved.

I’d agree with this perspective if the functional tests were trying to aggressively interrogate the system at the same level of detail as their backend equivalents.  Unlike unit tests, functional tests are focused on emulating what the user does and sees, not on the technical aspects under the hood.  They operate in the live system, providing a comprehensive view of the system that unit tests cannot.  In the majority of cases, a few simple tests that follow the happy path of the application may be all you need to validate the build.

Moreover, failures at this level point to problems in the build, packaging or deployment – something well beyond a typical unit test’s reach.

Functional tests are too much effort / No time for tests.  This is a common view that is applied to testing in general, which is based on a flawed assumption that testing should follow after development work is done.  In this argument, testing is seen as “double the effort”, which is an unfair position if you think about it.  If you treat testing as a separate task and wait until the components are fully written, then without a doubt the action of writing tests becomes an exercise in reverse-engineering and will always be more effort.

Functional tests, like unit tests, should be brought into the development process.  While there is some investment required to get your user-interface and its components into a test-harness, the effort to add new components and tests (should) become an incremental task.

Functional tests are too brittle / Too much maintenance.  Without doubt, the user-interface can be the most volatile part of your application, as it is subject to frequent cosmetic changes.  If you’re writing tests that depend on the contract of the user-interface, it shouldn’t be a surprise that they’re going to be impacted when that interface changes.  Claiming that your tests are the source of extra effort because of changes you introduced is an indication of a problem in your approach to testing.

Rather than reacting to changes, anticipate them: if you have a change to make, use the tests to introduce that change.  There are many techniques to accomplish this (and I may have to blog about that later), but here’s an example: to identify tests that are impacted by your change, try removing the part that needs to change and watch which tests fail.  Then find a test that resembles your new requirement and augment it to reflect the change.  The tests will fail at first, but as you implement the new requirements, they’ll slowly turn green.

As an added bonus, when you debug your code through the automated test, you won’t have to endure repetitive user-actions and keystrokes.  (Who has time for all the clicking??)

This approach works well in agile environments where stories are focused on adding or changing features for an iteration and changes to the user-interface are expected.

Adding Functional Testing to your Regime

Develop a test harness

The first step to adding functional testing into your project is the development of a test-harness that can launch the application and get it into a ready state for your tests.  Depending on the complexity of your application and how far you want to take your functional tests, this can be the largest part of the effort.  Fortunately, most test automation products provide a “recorder” application that can generate code from user activity, which can jump-start this process.  While these tools make it easy to get started, they are really only suitable for basic scenarios or for initial prototyping.  As your system evolves, you’ll quickly find that the duplication in these scripts becomes a maintenance nightmare.

To avoid this issue, you’ll want to model the screens and functional behavior of your application into modular components that hide the implementation details of the recorder tools’ output.  This approach shields you from having to re-record your tests and makes it easier to apply changes to tests.  The downside to this approach is that it may take some deep thinking on how to model your application, and it will seem as though you’re writing a lot of code to emulate what your backend code already does.  However, once this initial framework is in place, it becomes easier to add new components.  Eventually, you reach a happy place where you can write new tests without having to record anything.

The following example illustrates how the implementation details of a product editor are hidden from the test, but the user actions are clearly visible:

[Test]
public void CanOpenAnExistingProduct()
{
    using (var app = new App())
    {
        app.Login("user1", "p@ssw3rd");

        var product = new Product()
                            {
                                Id = 1,
                                Name = "Foo"
                            };

        // opens the product editor, 
        // fills it with my values
        // saves it, closes it.
        app.CreateNewProduct(product);

        // open the dialog, find the item
        ProductEditorComponent editor = app.OpenProductEditor("Foo");

        // retrieves the settings of the product from the screen
        Product actual = editor.GetEntity();

        Assert.AreEqual(product, actual);
    }
}
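
To give a sense of what sits behind those calls, here’s a hypothetical sketch of the ProductEditorComponent.  The IScreenDriver abstraction and the automation ids are made up, since the plumbing depends on your automation tool; the point is that the recorder-level details stay inside the component:

// hypothetical abstraction over your automation tool's API
public interface IScreenDriver
{
    string ReadText(string automationId);
    void Click(string automationId);
}

public class ProductEditorComponent
{
    private readonly IScreenDriver driver;

    public ProductEditorComponent(IScreenDriver driver)
    {
        this.driver = driver;
    }

    // reads the product fields off the screen and maps them to an entity
    public Product GetEntity()
    {
        return new Product
        {
            Id = int.Parse(driver.ReadText("ProductIdField")),
            Name = driver.ReadText("ProductNameField")
        };
    }

    // presses the save button; the test never sees the underlying details
    public void Save()
    {
        driver.Click("SaveButton");
    }
}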

Write Functional Unit Tests for Screen Components

Once you’ve got a basic test-harness, you should consider developing simple functional tests for user-interface components as you add them to your application.   If you can demo it to the client, you're probably ready to start writing functional tests. A few notes to consider at this stage:

  • Be pragmatic!  Screen components that are required as part of base use cases will have more functional tests than non-essential components.
  • Consider pairing developers with testers.  As the developer builds the UI, the tester writes the automation tests that verify the UI’s functionality.  Testers may guide the development of the UI to include automation ids, which reduces the amount of reverse-engineering (see the snippet after this list).
  • Write tests as new features or changes are introduced.  No need to get too granular, just verify the essentials.
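
In WPF, for example, assigning an automation id is a one-attribute change (the id shown here is hypothetical):

<Button Content="Save"
        AutomationProperties.AutomationId="SaveProductButton" />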

Verify Build Process with Functional Sanity Tests

While your functional unit tests concentrate on the behaviors of individual screen components, you’ll want to augment your build process with tests that demonstrate common user-stories that can be used to validate the build.  These tests mimic the minimum happy path.

If you’re already using a continuous integration server to run unit tests as part of each build, functional tests can be included at this stage, but they can also be relegated to nightly builds or to the release process for your quality assurance team.

Augment QA Process

As noted above, humans are a critical part of the testing of our applications and that’s not likely to change.  However, the framework that we used to validate the build can be reused by your testing team to write automation tests for their test cases.  Ideally, humans verify the stories manually, then write automation tests to represent regression tests.

Tests that require repetitive or complex time consuming procedures are ideal candidates for automation.

Conclusion

Automated functional testing can add value to your project for build verification and regression testing.  Being pragmatic about the components you automate and vigilant in your development process to ensure the tests remain in sync are the keys to their success.

How does your organization use functional testing?  Where does it work well?  What’s your story?


Wednesday, October 07, 2009

Use Windows 7 Libraries to organize your code

One of the new features that I’m really enjoying in Windows 7 is the ability to group common folders from different locations into a common organizational unit, known as a Library.  I work with a lot of different code bases and tend to generate a lot of mini-prototypes, and I’ve struggled with a good way to organize them.  The Library feature in Windows 7 offers a neat way to view and organize your files.  Here’s how I’ve organized mine.

Create your Library

  1. Open windows explorer
  2. Bring up the context-menu on the Libraries root folder and choose “New -> Library”
  3. Select folders to include in your library.

While a library can include folders from anywhere, I’ve created a logical folder “C:\Projects” with four sub-folders:

  • C:\Projects\Infusion (my employer)
  • C:\Projects\lib (group of common libraries I reference a lot)
  • C:\Projects\Experiments (small proof of concept projects)
  • C:\Projects\Personal (my pet projects)

Note that you can easily add new folders to your library by clicking on the Includes: x locations hyperlink.  This dialog also lets you move folders up and down, which makes it easy to organize the folders based on your preference.

Here’s a screen capture of my library, arranged by folder with a List view.

CodeLibrary

By default, the included folders are arranged by “Folder” and behind the scenes they’re grouped by the “Folder Path” of the included folders, which gives us the headings above our included folders.  A word of caution: while you can change the “Arrange by” and “View” settings without issue, if you change the Group-by setting (View -> Group by) there doesn’t appear to be an easy way to revert it back to the default.  If you want your headings back, you’ll have to manually add the “Folder Path” column and then set it as the group-by value, but the user-defined sort order of the libraries won’t be used.

This view is available anywhere that uses the standard common file dialog.  From within Visual Studio, it’s really helpful when adding project references, opening files and creating projects.  Being able to search all of your code using the search box in the top-right is awesome.

Add your Library to the Start Menu

There are a few hacks to put your library onto the Start menu.  You can pin the item to the Start menu, which puts it on the left side of the menu.  This technique requires a registry hack to allow Libraries to be pinned.

If you want to put your library into the right-hand side of the start menu, there is no native support for adding custom folders.  However, you can repurpose some of the existing folders.  I’ve added my library by repurposing the “Recorded TV” library, since my work PC doesn’t have any recorded TV.

Here’s how:

  1. Open the Control Panel
  2. Choose “Appearance and Personalization”
  3. Choose “Customize the Start Menu” under Taskbar and Start Menu.
  4. Set the “Recorded TV” option to “Display as a menu”

    Customize-StartMenu

  5. Next, on the Start Menu, right-click “Recorded TV”, remove the default folder locations and then add your own.
  6. Rename the “Recorded TV” to whatever you want.

Here’s a screen capture of my start menu.

StartMenu
