Thursday, December 11, 2008

Selenium Field Notes

My last few projects have leveraged both Selenium-RC and Selenium-Core.  Here are a few notes from the field:

FireFox 3 doesn't work with Selenium 1.0.0 beta 1

When working with Selenium RC out of the box, Selenium stalls when trying to launch an instance of Firefox 3.  Selenium RC drives the browser through an extension that is marked as compatible with only certain browser versions; the patch fixes the metadata for the Firefox plugin.

Instructions on how to fix the issue yourself can be found here, and within the comments there's a downloadable selenium-server.jar with the patch already applied.

Use Selenium-RC Judiciously

When you're working with Selenium Remote Control, every selenese command is sent over the network to the java application (even when working locally), so beware of redundant calls.  For example, I wrote several helper methods in my NUnit tests to group common selenese functions together:

public void SetText(string locator, string value) 
{ 
    selenium.Click(locator); 
    selenium.Type(locator,value); 
} 

Since the "Click" event is only needed for a few commands, trimming down the selenese can improve execution speed:

public void SetText(string locator, string value) 
{ 
    SetText(locator, value, false); 
} 

public void SetText(string locator, string value, bool clickBeforeType) 
{ 
    if (clickBeforeType) 
    { 
        selenium.Click(locator); 
    } 
    selenium.Type(locator, value); 
} 
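
For example (the locator names here are hypothetical), most calls now skip the extra round trip:

// plain typing -- a single selenese command over the wire
SetText("username", "jsmith");

// fields that depend on click/focus handlers opt in to the extra call
SetText("search", "selenium rc", true);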

Avoid XPath when possible

Using XPath as a location strategy for your elements can be dangerous for the long-term maintenance of your tests, as changes to the markup will undoubtedly break them.  Likewise, certain browsers (cough cough IE) have poor XPath engines and are considerably slower (IE is about 16x slower).

Strangely enough, following accessibility guidelines also makes for better functional UI testing.  So instead of XPath locators, consider:

  • Use "id" whenever feasible.
    <a id="close" href="#" onclick="javascript:foo();"><img src="close.gif"/></a> 
    selenium.Click("close");
  • Use "alt" tags for images.
    <img src="close.gif" alt="close window" onclick="javascript:foo();" />
    selenium.Click("alt=close window");
  • Use text inside anchor tags when ids or images are not used.
  • <a href="/">Home</a> 
    selenium.Click("link=Home"); 

Avoid timing code

When working with AJAX or Postback events, page load speed can vary per machine or request.  Rather than putting timing code in your NUnit code (i.e., Thread.Sleep), take advantage of one of Selenium's built-in WaitFor... selenese commands.

To use them, you supply JavaScript code as the condition; the value of the last statement is treated as the return value.

// wait up to 30 seconds for an element to appear in the DOM
selenium.WaitForCondition("var el = selenium.browserbot.getCurrentWindow().document.getElementById('element'); el != null;", "30000");

This approach allows your code to be as fast as the browser rather than set to a fixed speed.

Use Experimental Browsers

When testing, I found several cases where I hit browser security restrictions, such as uploading a file.  In those cases, you have to use *chrome for Firefox and *iehta for Internet Explorer.  These browser launchers are just like *firefox and *iexplore, except that they run with elevated privileges.
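
Switching is just a matter of the browser string passed to DefaultSelenium; a quick sketch (the URL is a placeholder):

// elevated-privilege launchers for tests that bump into browser security restrictions
ISelenium firefox = new DefaultSelenium("localhost", 4444, "*chrome", "http://localhost/myapp");
ISelenium ie = new DefaultSelenium("localhost", 4444, "*iehta", "http://localhost/myapp");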

Got a tip?  Leave a comment


Wednesday, November 19, 2008

Producing readable log4net output

As a follow up to log4net configuration made easy, a common question people ask is what tool to use for reading log4net output.

Unless you're using an AdoNetAppender, there aren't many popular choices for combing through log4net file output.  This really should be as simple as setting fixed-width columns or column delimiters in our FileAppender's layout pattern, but unfortunately it isn't that simple: log4net's base appender (AppenderSkeleton) writes Exceptions over multiple lines, making the output unsuitable for delimited parsing.

Here are a few options for producing friendly log4net output that can be easily imported or understood by common tools, such as Excel, LogParser, etc.

Format Exceptions using an IObjectRenderer

log4net is just so darn extensible!  Object renderers are one of those great hidden gems in log4net that allow you to log an object and leave the formatting to log4net configuration.  Any object reference pushed through log4net (including derived classes) will use the supplied object renderer to customize the output. 

using System;
using System.IO;
using log4net.ObjectRenderer;

public class ExceptionRenderer : IObjectRenderer
{
    public void RenderObject(RendererMap rendererMap, object obj, TextWriter writer)
    {
        Exception ex = obj as Exception;
        if (ex != null)
        {
            // format the exception to taste
            writer.Write(ex.StackTrace);
        }
    }
}

The object renderer appears in your config thusly:

<log4net>
    <!-- appenders -->
    <appender ... />
    <root ... />
    <renderer renderingClass="MyNamespace.ExceptionRenderer,MyAssembly"
              renderedClass="System.Exception" />
</log4net>

This option can be combined with the other approaches described below, or used on its own.

try
{
    // perform work
}
catch(Exception ex)
{
    // format using IObjectRenderer
    log.Warn(ex);
}

By logging just the Exception object, the IObjectRenderer does the formatting.  Conveniently, because the exception is the message, it isn't subject to the multi-line problems that break delimited output, though this may not be a suitable solution for you if it means rewriting all of your exception-handling blocks.

Redirect Exceptions using a Custom Appender

As previously mentioned, the culprit behind our messy exceptions is the AppenderSkeleton.  Technically, it's how the RollingFileAppender leverages the AppenderSkeleton's RenderLoggingEvent method: it appends content to the log based on our layout pattern, and then dumps the exception stack trace on its own line.  We can correct this behaviour by creating a new appender that folds the exception details into the message before the event is rendered.

public class CustomRollingFileAppender : RollingFileAppender
{
    protected override void Append(LoggingEvent loggingEvent)
    {
        string exceptionString = loggingEvent.GetExceptionString();

        if (String.IsNullOrEmpty(exceptionString))
        {
            // business as usual
            base.Append(loggingEvent);
        }
        else
        {
            // move our formatted exception details into the message
            LoggingEventData data = loggingEvent.GetLoggingEventData(FixFlags.All);
            data.ExceptionString = null;
            data.Message = String.Format("{0}:{1}", data.Message, exceptionString);

            LoggingEvent newLoggingEvent = new LoggingEvent(data);
            base.Append(newLoggingEvent);
        }
    }
}

The key advantage to this approach is that you won't have to change your existing logging code.

Move Exception details to a Custom Property

Though the previous approach solves our problem, we're coupling our message to our exception.  There may be some cases where you would want to separate exception details, such as importing into a database where the message is limited and the stack trace is a blob.  To accommodate, we can write our exceptionString to a custom property, which can be further customized using our layout configuration.

This code example shows the exception being logged to a custom property:

   LoggingEventData data = loggingEvent.GetLoggingEventData(FixFlags.All);
   data.ExceptionString = null;
   data.Properties["exception"] = exceptionString;

And our configuration:

<appender name="RollingLogFileAppender" type="log4net.Appender.CustomAppender">
  <file value="..\logs\log.txt" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %-5level [%t] - %message %property{exception} %newline" />
  </layout>
</appender>

Putting it all together

The final piece is defining our file layout so that it can be consumed by Excel or Log Parser. In the example below, I've customized my Header and Conversion pattern to use a pipe-delimited format, perfect for importing into Excel or a database table.

<log4net>
    <appender name="CustomAppender" type="Example.CustomRollingFileAppender,Example">
        <!-- derived from RollingFileAppender -->
        <file value="..\logs\log.txt" />
        <appendToFile value="false" />
        <layout type="log4net.Layout.PatternLayout">
            <header value="Date|Level|Thread|Logger|Message|Exception&#13;&#10;" />
            <conversionPattern value="%date|%-5level|%t|%logger|%message|%property{exception}%newline" />
        </layout>
    </appender>
    <renderer renderedClass="System.Exception" renderingClass="Example.ExceptionRenderer,Example" />
    <root>
        <level value="DEBUG" />
        <appender-ref ref="CustomAppender" />
    </root>
</log4net>

Here's a Log Parser query against the pipe-delimited output (Log Parser's TSV input format with a pipe as the separator):

logparser.exe -i:TSV -iSeparator:"|" "select Level, count(*) as Count from log.txt group by Level"


Tuesday, November 11, 2008

RGB with Opacity to Hex Tool

Bad designer! You designed a web-site in Illustrator and decided that every colour should be semi-transparent. Blue (#0000FF) with 70% opacity on top of a Grey background with 50% opacity isn't Blue anymore. Even if I could set different opacities for foreground and background, I'm completely trumped by the browser which inherits the opacity level from its parent container.

So how to fix it? Ideally, if you need transparency, your designer should apply it to the whole layer instead of individual elements, but that usually means re-exporting the Illustrator file to JPG or BMP and then using an eye-dropper tool to get the colour. Or, if you're like me, you build a tool using WPF that does basically the same thing.

opacity_tool

To use:

  1. Set the background and foreground color to taste.
  2. Adjust the Opacity of the foreground color. The background colour should bleed through the foreground.
  3. Take the effective colour in the status bar.
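
Under the hood, the effective colour is a standard alpha blend of the foreground over the background.  Here's a rough sketch of the math, assuming an opaque background (the sample values are my own):

using System;

class OpacityBlend
{
    // Blends a foreground colour at the given opacity over an opaque background colour.
    static string Blend(byte fgR, byte fgG, byte fgB, double opacity, byte bgR, byte bgG, byte bgB)
    {
        byte r = (byte)Math.Round(fgR * opacity + bgR * (1 - opacity));
        byte g = (byte)Math.Round(fgG * opacity + bgG * (1 - opacity));
        byte b = (byte)Math.Round(fgB * opacity + bgB * (1 - opacity));
        return String.Format("#{0:X2}{1:X2}{2:X2}", r, g, b);
    }

    static void Main()
    {
        // Blue (#0000FF) at 70% opacity over a mid-grey (#808080) background
        Console.WriteLine(Blend(0x00, 0x00, 0xFF, 0.7, 0x80, 0x80, 0x80)); // #2626D9
    }
}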

Tool for download here.  Code is available here.

Comments, feature requests and bug reports are welcome - just post a comment.

Wednesday, October 22, 2008

Synergy on Vista

I heard about Synergy several years back, hailed in some of my circles as the coolest thing since sliced bread; however, I've only ever had a single laptop or PC.  If you've never heard of it, it lets you share a keyboard and mouse between multiple computers.  It's a KVM without the V.

In a strange twist of events, I've gone from one laptop to three.  While tools like KeePass have eased the pain of floating passwords between machines, the worst challenge is adjusting to the radically different keyboard layouts.  Synergy with my new Bluetooth keyboard/mouse may be the answer.

Some useful links:

A few notes about configuration:

  • Make sure you can successfully ping between machines.  Consider adding entries to your hosts file to ensure proper name resolution
  • Don't forget bi-directional relationships!  If you only define one link, you can't drag your mouse back onto the other screen.
  • The configuration screens are klugey.  Just remember to click the "+" buttons when defining links -- huge grief saver.

Tuesday, October 07, 2008

Detecting Lock Workstation Session Events using WPF

A recent project at work needed to detect when the user locked and unlocked their workstation.  As I'd seen some great examples do this previously, I didn't think much of it.  However, because WPF is a very different beast compared to Windows Forms, it's a slightly different approach.

Neat stuff like detecting operating system events isn't part of the .NET framework; it requires calling out to the native Win32 API using the P/Invoke features of the CLR.  If you've done any Win32 programming, you'll know that it's largely based on handles, IntPtrs and messages.  As a breath of fresh air, the WPF API is focused primarily on building rich user interfaces and is completely devoid of legacy Win32 programming concepts.  WPF is a huge leap, but worth it.

Fortunately, interoperability with Win32 is a breeze so if you want to tap into native Windows API, it's available if you're willing to write some code.

I should point out that most of this code has been adapted from this post over at the .NET Security Blog.  As shawnfa's post goes over the API in detail, I won't cover it here; rather, I'll focus on how to pull this off in WPF.  Also, as I'm a big fan of composition over inheritance, I've pulled all the session management stuff into an encapsulated class to make it easier to "mix in".  I haven't found any issues with this approach, but I'd welcome feedback.

Tapping into the Windows API can be mixed into your app using the HwndSource class.  It requires a handle to the calling window, and since the Handle property doesn't exist on the WPF Window class, you'll have to use the WindowInteropHelper to expose it.  The only gotcha here is that the Handle isn't available until after the window has been loaded.  This is analogous to the Form.OnHandleCreated method in Windows Form programming.

Here's my adapted sample:

using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;

namespace LockedWorkstationExample
{
  public class SessionNotificationUtil : IDisposable
  {
     // from wtsapi32.h
     private const int NotifyForThisSession = 0;

     // from winuser.h
     private const int SessionChangeMessage = 0x02B1;
     private const int SessionLockParam = 0x7;
     private const int SessionUnlockParam = 0x8;

     [DllImport("wtsapi32.dll")]
     private static extern bool WTSRegisterSessionNotification(IntPtr hWnd, int dwFlags);

     [DllImport("wtsapi32.dll")]
     private static extern bool WTSUnRegisterSessionNotification(IntPtr hWnd);

     // flag to indicate if we've registered for notifications or not
     private bool registered = false;

     WindowInteropHelper interopHelper;

     /// <summary>
     /// Constructor
     /// </summary>
     /// <param name="window"></param>
     public SessionNotificationUtil(Window window)
     {
         interopHelper = new WindowInteropHelper(window);
         window.Loaded += new RoutedEventHandler(window_Loaded);
     }

     // deferred initialization logic
     void window_Loaded(object sender, RoutedEventArgs e)
     {
         HwndSource source = HwndSource.FromHwnd(interopHelper.Handle);
         source.AddHook(new HwndSourceHook(WndProc));
         EnableRaisingEvents = true;
     }

     protected bool EnableRaisingEvents
     {
         get { return registered; }
         set
         {
            // WtsRegisterSessionNotification requires Windows XP or higher
            bool haveXp =   Environment.OSVersion.Platform == PlatformID.Win32NT &&
                 (Environment.OSVersion.Version.Major > 5 || 
                 (Environment.OSVersion.Version.Major == 5 &&
                 Environment.OSVersion.Version.Minor >= 1));

            if (!haveXp)
            {
                 registered = false;
                 return;
            }

            if (value == true && !registered)
            {
                 WTSRegisterSessionNotification(interopHelper.Handle, NotifyForThisSession);
                 registered = true;
            }
            else if (value == false && registered)
            {
                 WTSUnRegisterSessionNotification(interopHelper.Handle);
                 registered = false;
            }
         }
     }

     private IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
     {
         if (msg == SessionChangeMessage)
         {
            if (wParam.ToInt32() == SessionLockParam)
            {
                OnSessionLock();
            }
            else if (wParam.ToInt32() == SessionUnlockParam)
            {
                OnSessionUnLock();
            }
         }
         return IntPtr.Zero;
     }

     private void OnSessionLock()
     {
         if (SessionChanged != null)
         {
            SessionChanged(this, new SessionNotificationEventArgs(SessionNotification.Lock));
         }
     }

     private void OnSessionUnLock()
     {
         if (SessionChanged != null)
         {
            SessionChanged(this, new SessionNotificationEventArgs(SessionNotification.Unlock));
         }
     }

     public event EventHandler<SessionNotificationEventArgs> SessionChanged;

     #region IDisposable Members
     public void Dispose()
     {
         // unhook from wtsapi
         if (registered)
         {
            EnableRaisingEvents = false;
         }
     }
     #endregion
 }

 public class SessionNotificationEventArgs : EventArgs
 {
     public SessionNotificationEventArgs(SessionNotification notification)
     {
        _notification = notification;
        _timestamp = DateTime.Now;
     }

     public SessionNotification Notification
     {
        get { return _notification; }
     }

     public DateTime TimeStamp
     {
        get { return _timestamp; }
     }

     private SessionNotification _notification;
     private DateTime _timestamp;
 }

 public enum SessionNotification
 {
     Lock = 0,
     Unlock
 }

}

At this point, I can mix in session notification wherever needed without having to introduce a base window class:

public partial class Window1 : Window
{
    public Window1()
    {
        InitializeComponent();
        SessionNotificationUtil util = new SessionNotificationUtil(this);
        util.SessionChanged += new EventHandler<SessionNotificationEventArgs>(util_SessionChanged);
    }

    void util_SessionChanged(object sender, SessionNotificationEventArgs e)
    {
        this.txtOutput.Text += String.Format("Received {0} notification at {1}\n", e.Notification, e.TimeStamp);
    }
}
Update 10/23/2008: Source code now available for download.

Casey Alexander

Well, it has been pretty quiet here on the blog for the last month only because my personal life has been pretty loud.

IMG_5263

6lbs-14oz, 20" long, October 1st.  Mom, baby and big brother are all well.

Tuesday, August 26, 2008

Vista Keyboard Shortcuts

Both my home laptop and work laptop are running different versions of Vista, and after the initial shock, I've found it to be growing on me.

This list of shortcuts covers the basics and a bit more of the Vista shortcuts.

One shortcut that I've discovered this week is tapping the SHIFT key twice.  It brings the task bar and gadgets to the foreground (works for Google Desktop search, too).

Saturday, August 09, 2008

New Role on Monday

This July will definitely go down in my books as most memorable to date.  After a major round of changes at work and a decent severance package, I spent most of July milking my extensive contact list for opportunities, playing phone tag with head hunters and spending as much time as I possibly could outside.  I've never been so relaxed... my wife claims a night-and-day difference in my outlook.

This Monday I start a senior role with a development firm specializing in emerging technologies.  I'm pretty jazzed up about this as I'm normally working on technology that customers can be comfortable with -- this will be the exact opposite: looks like I'll be working primarily with WPF and Microsoft Surface.  This will be baptism-by-fire, full-steam-ahead, bleeding-edge stuff - a great opportunity to go "all in" and focus on my technical skills.

While I expect that I'll continue to blog about TDD, guidance automation, process engineering and generally awesome web-centric code for some time to come -- we'll likely see some postcards from the edge here soon.

Friday, August 08, 2008

Legacy Projects: Planning for Refactoring

Over the last few posts, my legacy monolithic project with no unit tests has gained a build server with statistics reports, empty coverage data, and a set of unit tests for the user interface.  We're now in a good position to introduce some healthy change into the project.  Well... not quite: refactoring an existing project requires a plan, and some creative thinking to integrate change into the daily work cycle.

Have a Plan

I can't stress this enough: without a plan, you're just recklessly introducing chaos into your project.  Though it would help to do a deep technical audit, the trick is to keep the plan at a really high level.  Your main goal should be to provide management with a deliverable, such as a set of diagrams and a list of recommendations.  Each set of recommendations will likely need its own estimation and planning cycle.  Here's an approach you can use to help start your plan:

  • Whiteboard all the components of your solution.  You might want to take several tries to get it right: grouping related components together, etc.  Ask a team member to validate that all parts of the solution are represented.  When you've got a good handle on it, draw it out in Visio.  (I find a whiteboard to be less restrictive at this phase...)
  • Gather feedback on the current design from as many different sources as possible.  Team members may be able to provide pain points about the current architecture and how it has failed in the past; other solution architects may have different approaches or experiences that may lead to a more informed strategy.  Use this feedback to compile a list of faults and code smells that are present in the current code.
  • Set goals for a new architecture.  The pain points outlined by your developers may inspire you; but ideally your new architecture is clean, performs well, requires less code, secure, loosely coupled, easily testable, flexible to change and more maintainable -- piece of cake, right?
  • Redraw the components of your solution under your ideal solution architecture. It can be difficult to look past the limitations of the current design, but don't let that influence your thinking.  When you're done, compare this diagram to the current solution.   Question everything: How are they different?  What are the major obstacles to obtaining this design and how can they be overcome?  What represents the least/most effort?  What are the short versus long term changes?  What must be done together versus independently?  How does your packaging / deployment / build script / configuration / infrastructure need to change?

After this short exercise, you should have a better sense of the amount of change required and the order in which it should be done.  The next step is finding a way to introduce these changes into your release schedule. 

Introducing Change

While documenting your findings and producing a deliverable is key, perhaps the best way to introduce change into the release schedule is the direct route: tell the decision makers your plans.  An informed client/management is your best ally, but you need to speak their language. 

For example, in a project where the user-interface code is tied to service-objects which are tied directly to web-services, it's not enough to state this is an inflexible design.  However, by outlining a cost savings, reduced bugs and quicker time to market by removing a pain point (the direct coupling between UI and Web-Services prevents third parties or remote developers from properly testing their code) they're much more agreeable to scheduling some time to fix things.

For an existing project, it's very unlikely that the client will agree to a massive refactoring such as changing all of the underlying architecture for the UI at the same time.  However, if a business request touches a component that suffers a pain point, you might be able to make a case to fix things while introducing the change.  This is the general theme of refactoring: each step in the plan should be small and isolated so that the impact is minimal.  I like to think of it as a sliding-puzzle.

Introducing change to a project typically gets easier as you demonstrate results.  However, since the first steps of introducing a new design typically require a lot of plumbing and simultaneous changes, it can be a very difficult sell for the client if these plumbing changes are padded into a simple request.  To ease the transition, it might help to soften the blow by taking some of the first steps on your own: either as a proof of concept, or as an isolated example that can be used to set direction for other team members.

Here are a few things you can take on with relatively minor effort that will ease your transition.

Rethink your Packaging

A common problem with legacy projects is the confusion within the code caused by organic growth: classes are littered with multiple disjointed responsibilities, namespaces lose their meaning, inconsistent or complex relationships between assemblies, etc.  Before you start any major refactoring, now is a really good time to establish how your application will be composed in terms of namespaces and assemblies (packages).

Packaging is normally a side effect of solution design and isn't something you consider first when building an application from scratch.  However, for a legacy project where the code already exists, we can look at using packaging as the vehicle for delivering our new solution.  Some Types within your code base may move to new locations, or adopt new namespaces.  I highly recommend using assemblies as an organizational practice: instruct team members where new code should reside and guide (enforce) the development of new code within these locations.  (Just don't blindly move things around: have a plan!)

Recently, Jeffrey Palermo coined the term Onion architecture to describe a layered architecture where the domain model is centered in the "core", service layers are built upon the core, and physical dependencies (such as databases) are pushed to the outer layers.  I've seen a fair amount of designs follow this approach, and a name for it is highly welcomed -- anyone considering a new architecture should take a look at this design.  Following this principle, it's easy to think of the layers or services residing in different packages.

Introduce a Service Locator

A service locator is an effective approach to breaking down dependencies between implementations, making your code more contract-based and intrinsically more testable.  There are lots of different service locator and dependency injection frameworks out there; a common approach is to write your own locator and have it wrap around your framework of choice.  The implementation doesn't need to be complicated -- even a simple hashtable of objects will do -- and it can be upgraded later to other technologies such as Spring.NET, Unity, etc.
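
A minimal sketch of what such a locator might look like (the class and method names are invented for illustration, not taken from any particular framework):

using System;
using System.Collections.Generic;

// Bare-bones service locator: a dictionary mapping contracts to implementations.
// Register concrete services at startup; resolve by interface everywhere else.
public static class ServiceLocator
{
    private static readonly IDictionary<Type, object> services = new Dictionary<Type, object>();

    public static void Register<TContract>(TContract implementation)
    {
        services[typeof(TContract)] = implementation;
    }

    public static TContract Resolve<TContract>()
    {
        return (TContract)services[typeof(TContract)];
    }
}

Tests can register fakes before exercising the UI code, and the internals can later be swapped for a full dependency injection container without touching the calling code.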

Perhaps the greatest advantage that a Service Locator can bring to your legacy project is the ability to unhook the User Interface code from the Business Logic (Service) implementations.  This opens the door to refactoring inline user-interface code and user controls.  The fruits of this labor are clearly demonstrated in your code coverage statistics.

Not all your business objects will fit into your service locator right away, mainly because of strong coupling between the UI and BL layers (static methods, etc).  Compile a list of services that will need to be refactored, provide a high-level estimate for each one and add them to a backlog of technical debt to be worked on at a later date. 

You can move Business Layer objects into the Service Locator by following these steps (a sketch follows the list):

  • Extract an interface for the Service objects.  If your business logic is exposed as static methods, you'll have some work to convert these to instance methods.  I'll likely have a follow-up post that shows how to perform these types of refactoring using TDD as a safety net -- more later...
  • Register the service with the service locator.  This work will depend on how your Service Locator works, either through configuration settings or through some initiation sequence.
  • Replace the references to the Service object with the newly extracted interface.  If your business logic is exposed using static methods, you can convert the references to the Service object in the calling code to a property.
  • Obtain a reference to the Service object from the Service Locator.  You can either obtain a reference to the object by making an inline request to the Service Locator, or as the point above encapsulate the call in a property.  The latter approach allows you to cache a reference to the service object.
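
Putting those steps together, a hypothetical example (IOrderService, AdoOrderService and the presenter are invented names):

// 1. Extracted contract for a formerly static business-logic class
public interface IOrderService
{
    string GetOrderStatus(int orderId);
}

// 2. Registered once at application startup:
//    ServiceLocator.Register<IOrderService>(new AdoOrderService(connectionString));

// 3 & 4. The calling code references the interface and resolves it lazily through a
//        property, which also caches the reference to the service object.
public class OrderHistoryPresenter
{
    private IOrderService orderService;

    private IOrderService OrderService
    {
        get
        {
            if (orderService == null)
            {
                orderService = ServiceLocator.Resolve<IOrderService>();
            }
            return orderService;
        }
    }

    public string DescribeOrder(int orderId)
    {
        return OrderService.GetOrderStatus(orderId);
    }
}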

Next steps

Now that you have continuous integration, reports that demonstrate results, unit tests for the presentation layer, the initial ground-work for your new architecture and a plan of attack -- you are well on your way to start the refactoring process of changing your architecture from the inside out.  Remember to keep your backlog and plan current (it will change), write tests for the components you refactor, and don't bite off more than you can chew.

Good luck with the technical debt!


Friday, July 11, 2008

Automate Visual Studio from external tools

While cleaning up a code monster, a colleague and I were looking for ways to dynamically rebuild all of our web-services as part of a build script or utility, as we have dozens of them and they change somewhat frequently.  In the end, we decided that we didn't necessarily need support for modifying them within the IDE and we could just generate them using the WSDL tool.

However, while I was researching the problem I stumbled upon an easy way to drive Visual Studio without having to write an add-in or macro; useful for one-off utilities and hare-brained schemes.

Here's some ugly code, just to give you a sense for it.

You'll need references to:

  • EnvDTE - 8.0.0.0
  • VSLangProj - 7.0.3300.0
  • VSLangProj80 - 8.0.0.0
namespace AutomateVisualStudio
{
  using System;
  using EnvDTE;
  using VSLangProj80;

  public class Utility
  {
      public static void Main()
      {
          string projectPath = @"C:\Demo\Empty.csproj";
          Type type = Type.GetTypeFromProgID("VisualStudio.DTE.8.0");
          DTE dte = (DTE) Activator.CreateInstance(type);
          dte.MainWindow.Visible = false;

          dte.Solution.Create(@"C:\Temp\","tmp.sln");
          Project project = dte.Solution.AddFromFile(projectPath, true);

          VSProject2 projectV8 = (VSProject2) project.Object;
          if (projectV8.WebReferencesFolder == null)
          {
              projectV8.CreateWebReferencesFolder();
          }

          ProjectItem item = projectV8.AddWebReference("http://localhost/services/DemoWS?WSDL");
          item.Name = "DemoWS";
            
          project.Save(projectPath);
          dte.Quit();
      }
  }
}

Note that Visual Studio doesn't allow you to manipulate projects directly; you must load your project into a solution.  If you don't want to mess with your existing solution file, you can create a temporary solution and add your existing project to it.  And if you don't want to clutter up your disk with temporary solution files, just don't call the Save method on the Solution object.

If you had to build a Visual Studio utility, what would you build?


Thursday, July 10, 2008

Catching server errors with WatiN: redux

Stumbled upon this post about how to catch server errors for your WatiN tests.  The approach outlined provides a decent mechanism for detecting server errors by sub-classing the WatiN IE object.  While I do appreciate the ability to subclass, it bothers me a bit that I have to write the logic in my subclass to detect server errors.  After poking around a bit, I think there's a more generic approach that can be achieved by tapping into the NavigateError event of the native browser:

public class MyIE : IE
{
    private InternetExplorerClass ieInstance;
    private NavigateError error;

    public MyIE()
    {
        ieInstance = (InternetExplorerClass) InternetExplorer;
        ieInstance.BeforeNavigate += BeforeNavigate;
        ieInstance.NavigateError += NavigateError;
    }

    public override void WaitForComplete()
    {
        base.WaitForComplete();
        if (error != null)
        {
            throw new ServerErrorException(Text);
        }
    }

    void BeforeNavigate(string URL, int Flags, string TargetFrameName, ref object PostData, string Headers, ref bool Cancel)
    {
        error = null;
    }

    void NavigateError(object pDisp, ref object URL, ref object Frame, ref object StatusCode, ref bool Cancel)
    {
        error = new NavigateError(URL,StatusCode);
    }

    private class NavigateError
    {
        public NavigateError(object url, object statusCode)
        {
            _url = url;
            _statusCode = statusCode;
        }

        private object _url;
        private object _statusCode;
    }
}
public class ServerErrorException : Exception 
{
    public ServerErrorException(string message) : base(String.Format("A server error occurred: {0}",message))
    { } 
}

Few caveats:

  • Constructor of MyIE needs to be updated to reflect the other constructor overloads.
  • Need to ensure that URL of NavigateError is the same URL of BeforeNavigate
  • Test library needs to reference the Interop.SHDocVw wrapper for Internet Explorer
  • Only tested with IE7

While I wouldn't consider COM Interop to be a "clean" solution, it is a bit more portable between solutions.  And if it was this easy, why isn't it part of WatiN anyway?


Tuesday, July 08, 2008

Legacy Projects: Test the User Interface with Selenium or WatiN

Following up on the series of posts on Legacy Projects, my legacy project with no tests now has a build server with empty coverage data.  At this point, it's really quite tempting to start refactoring my code, adding in tests as I go, but that approach is slightly ahead of the cart.

Although tests for the backend code would help, they can't necessarily guarantee that everything will work correctly.  To be fair, the only real guarantee for the backend code would be to write tests for the existing code and then begin to refactor both tests and code.  That turns out to be a very time-consuming endeavour, as you'll end up writing the tests twice.  In addition, I'm working with the assumption that my code is filled with static methods and tight coupling, which doesn't lend itself well to testing.  I'm going to need a crowbar to fix that, and that'll come later.

It helps to approach the problem by looking at the current manual process as a form of unit testing.  It's worked well up to this point, but because it's done by hand it's a very time-consuming process that is prone to error and depends heavily on the user performing the tests.  The biggest downfall of the current process is that when the going gets tough, we are more likely to miss details.  In his book Test Driven Development: By Example, Kent Beck refers to manual testing as "test as a verb", where we test by evaluating aspects of the system.  What we need to do is turn this into "test as a noun", where the test is a "procedure to evaluate" run in an automated fashion.  By automating the process, we eliminate most of the human-related problems and save a bundle of time. 

For legacy projects, the best starting point for automation is the user interface, which isn't the norm for TDD projects.  In a typical TDD project, user interface testing tends to appear casually late in the project (if it appears at all), often because the site is incomplete and the user interface is a very volatile place; UI tests are often seen as too brittle.  However, for a legacy project the opposite is true: the site is already up and running and the user interface is relatively stable; it's more likely that any change we make to the backend systems will break the user interface.

There is some debate on the topic of where this testing should take place.  Some organizations, especially those where the Quality Assurance team is separated from the development teams, rely on automated testing suites such as Empirix (recently acquired by Oracle) to perform functional and performance tests.  These are powerful (and expensive) tools, but in my opinion are too late in the development cycle --  you want to catch minor bugs before they are released to QA, otherwise you'll incur an additional bug-fix development cycle.  Ideally, you should integrate UI testing into your build cycle using tools that your development team is familiar with.  And if you can incorporate your QA team into the development cycle to help write the tests, you're more likely to have a successful automated UI testing practice.

Of the user interface testing frameworks that integrate nicely with our build scripts, two favourites come to mind: Selenium and WatiN.

Using Selenium

Selenium is a java-based powerhouse whose key strengths are platform and browser diversity, and it's extremely scalable.  Like most java-based solutions, it's a hodge-podge of individual components that you cobble together to suit your needs; it may seem really complex, but it's a really smart design.  At its core, Selenium Core is a set of JavaScript files that manipulate the DOM.  The most common element is known as Selenium Remote-Control, which is a server-component that can act as a message-broker/proxy-server/browser-hook that can magically insert the Selenium JavaScript into any site --  it's an insanely-wicked-evil-genius solution to overcoming cross-domain scripting issues.  Because Selenium RC is written in Java, it can live on any machine, which allows you to target Linux, Mac and PC browsers.  The scalability feature is accomplished using Selenium Grid, which is a server-component that can proxy requests to multiple Selenium RC machines -- you simply change your tests to target the URL of the grid server.  Selenium's only Achilles' heel is that SSL support requires some additional effort.

A Selenium test that targets the Selenium RC looks something like this:

[Test]
public void CanPerformSeleniumSearch()
{
    ISelenium browser = new DefaultSelenium("localhost",4444, "*iexplore", "http://www.google.com");
    browser.Start();
    browser.Open("/"); 
    browser.Type("q", "Selenium RC"); 
    browser.Click("btnG");

    string body = browser.GetBodyText();

    Assert.IsTrue(body.Contains("Selenium"));

    browser.Stop(); 
}

The above code instantiates a new session against the Selenium RC service running on port 4444.  You'll have to launch the service from a command prompt, or configure it to run as a service.  There are lots of options.  The best way to get up to speed is to simply follow their tutorial...

Selenium has a FireFox extension, Selenium IDE, that can be used to record browser actions into Selenese.

Using WatiN

WatiN is a .NET port of Watir, its Ruby-based equivalent.  Although it's currently limited to Internet Explorer on Windows (version 2.0 will target Firefox), it has an easy entry path and a simple API.

The following WatiN sample is a rehash of the Selenium example.  Confession: both samples are directly from the provided documentation...

[Test]
public void CanPerformWatiNSearch()
{
    using (IE ie = new IE("http://www.google.com"))
    {
        ie.TextField(Find.ByName("q")).TypeText("WatiN");
        ie.Button(Find.ByName("btnG")).Click();

        Assert.IsTrue(ie.ContainsText("WatiN"));
    }
}

As WatiN is a browser hook, its API exposes the option to tap directly into the browser through Interop.  You may find it considerably more responsive than Selenium because the requests are marshaled via Windows calls instead of HTTP commands.  Though there is a caveat to performance: WatiN expects a Single Threaded Apartment model in order to operate, so you may have to adjust your runtime configuration.

WatiN also has a standalone application, WatiN Recorder, that can capture browser activity in C# code.

UI Testing Strategy Tips

Rather than writing an exhaustive set of regression tests, here's my approach:

  • Start Small: Begin by writing coarse UI tests that demonstrate simple functionality.  For example, a test that hits the homepage and validates that there aren't any 500 errors.  Writing complex tests that validate specific HTML markup take longer to produce and often tend to be brittle and less maintainable in the long run.
  • Map out and test functional areas:  Identify the key functional elements of the site that QA would normally regression test for a build: login, update a profile, add items to a shopping cart, checkout, search, etc.  Some of these will be definite road-blockers that you'll have to work around -- you'll quickly realize you can't guarantee profile-ids and passwords between environments, or maybe your product catalog changes too frequently.  Some will require creative thinking, others may inspire custom testing tools that can perform test-specific queries or functions.  You may even find a missing need in the backend systems that you could build and leverage as part of your tests. 
  • Write tests for functional changes:  You don't need to sit down and write an exhaustive site-wide regression fixture -- focus on the areas that you touch.  If you write tests before you make any changes, you can use these tests to help automate the debugging process.  The development effort is relatively small -- you have to test it a dozen times by hand anyway.
  • Write tests for testing bugs!!!:  What better motivation could you have?  This is what regression testing is all about!
  • Design for different environments:  The code examples above have URLs hard-coded.  Consider using a helper that reads configuration settings to retrieve or help construct URLs (see the sketch after this list) so that you can run your UI tests against your local instance, dev, build-server, QA, integration, etc.  UI tests make great build-validation utilities!
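
Here's a minimal sketch of that last tip, assuming an appSettings entry named "TestBaseUrl" (the key name and fallback URL are my own invention):

using System.Configuration;

// Builds absolute URLs for UI tests from a configurable base address so the same
// fixtures can run against local, dev, build-server, QA or integration environments.
public static class TestUrls
{
    public static string BaseUrl
    {
        get
        {
            // fall back to a local instance when no setting is supplied
            return ConfigurationManager.AppSettings["TestBaseUrl"] ?? "http://localhost/";
        }
    }

    public static string For(string relativePath)
    {
        return BaseUrl.TrimEnd('/') + "/" + relativePath.TrimStart('/');
    }
}

// usage:  using (IE ie = new IE(TestUrls.For("products/search"))) { ... }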


Sunday, July 06, 2008

Switching to LiveWriter

Up to this point, I've crafted the HTML markup for my posts this year using Notepad++.  While working with a local editor is far superior to using Blogger's editor window, I've found stylizing elements and adding hyperlinks to be somewhat time consuming, not to mention difficult to read/review/write content with all the HTML markup in the way.   Despite having better control over the markup, the largest problem with this approach is you really can't see what your post will look like until you publish, and even then, I usually follow a nervous publish/review/tweak/publish dance number to sort out all the display issues.

Recently, I downloaded LiveWriter and w.bloggar to test drive alternatives.  (Actually, I was interested in w.bloggar's ability to edit Blogger templates -- but it turns out that it doesn't work with Blogger's new layout templates.  Drat.)   So far, I'm pleasantly surprised with LiveWriter.

Although I'm pretty excited that the tool is written in .NET with support for managed addins, I am most impressed with the feature that can simulate a live preview of your post.  LiveWriter is able to pull this off by creating a temporary post against your blog and analyzing it to extract your CSS and HTML Layout.  You can toggle between editing (F11), preview (F12) and HTML (Shift + F11) really easily.

LiveWriter-Post-Preview

The biggest snag I've encountered thus far is that the HTML markup produced by LiveWriter is cleaned up with lots of extra line-feeds for readability.  While this makes reading the HTML a simple pleasure, it wreaks havoc with my current Blogger settings.

Blogger's default setting converts carriage-returns into <br /> tags.  So all the extra line breaks inserted by LiveWriter are transformed into ugly whitespace in your posts.  This feature is configurable within Blogger: Posts -> Formatting -> Convert line breaks.

Settings - Formatting - Convert Line Breaks

Unfortunately for me, this is a breaking change for most of my posts (dating back to 2004).  To fix, I have to add the appropriate <p></p> tags around my content -- fortunately, LiveWriter will automatically correct markup for paragraphs that I touch with additional whitespace.  So while the good news is my posts will have proper markup in the editor, the bad news is I have to manually edit each one.

Saturday, July 05, 2008

Legacy Projects: Coverage Data without Tests

From my previous post, Get Statistics from your Build Server, I spoke about getting meaningful data into your log output as soon as possible so that you can begin to generate reports about the state of your application.

I'm using NCover to provide code coverage analysis, but I can also get important metrics like Non-Comment Lines Of Code, number of classes, members, etc.  Unfortunately, I have no unit tests, so my coverage report contains no data.  Since NCover will only profile assemblies that are loaded into the profiler's memory space, referencing my target assembly from my test assembly isn't enough.  To compensate, I added this simple test to load the assembly into memory:

[Test]
public void CanLoadAssemblyToProvideCoverageData()
{
    System.Reflection.Assembly.Load("AssemblyName");
}

This is obviously a dirty hack, and I'll remove it the second I write some tests.  Although I only have 0% coverage, I now have a detailed report that shows over 40,000 lines of untested code.  The stage is now set to remove duplication and introduce code coverage.

Tuesday, July 01, 2008

Legacy Projects: Get Statistics from your Build Server

As I mentioned in my post, Working with Legacy .NET Projects, my latest project is a legacy application with no tests. We're migrating from .NET 1.1 to .NET 2.0, and this is the first entry in the series of dealing with legacy projects. Click here to see the starting point.

On the majority of legacy projects that I've worked on, there is often a common thread within the development team that believes the entire code base is outdated, filled with bugs and should be thrown away and rewritten from scratch. Such a proposal is a very tough sell for management, who will no doubt see zero value in spending a staggering amount only to receive exactly what they currently have, plus a handful of fresh bugs. Rewrites might make sense when accompanied by new features or platform shifts, but by and large they are a very long and costly endeavour. Refactoring the code using small steps in order to get out of Design Debt is a much more suitable approach, but cannot be done without a plan that management can get behind. Typically, management will support projects that can quantify results, such as improving server performance or page load times. However, in the context of a sprawling application without separation of concerns, estimating effort for these types of projects can be extremely difficult, and further compounded when there is no automated testing in place. It's a difficult stalemate between simple requirements and a full rewrite.

Assuming that your legacy project at least has source control, the next logical step to improve your landscape is to introduce a continuous integration server or build server. And as there are countless other posts out there describing how to set up a continuous integration server, I'm not going to repeat those good fellows.

While the benefits of a build server are immediately visible for developers, who are all too familiar with dumb-dumb errors like compilation issues due to missing files in source control, the build server can also be an important reporting tool that can be used to sell management on the state of the application. As a technology consultant who has played the part between the development team and management, I think it's fair to say that most management teams would love to claim that they understand what their development teams do, but they'd rather be spared the finer details. So if you could provide management a summary of all your application's problems graphed against a timeline, you'd be able to demonstrate the effectiveness of their investment over time. That's a pretty easy sell.

The great news is, very little is required on your part to produce the graphs: CruiseControl 1.3 has a built in Statistics Feature that uses XPath statements to extract values from your build log. Statistics are written to an xml file and csv file for easy exporting, and third party graphing tools can be plugged into the CruiseControl dashboard to produce slick looking graphs. The challenge lies in mapping the key pain points in your application to a set of quantifiable metrics and then establishing a plan that will help you improve those metrics.

Here's a common set of pain points and metrics that I want to improve/measure for my legacy project:

  • Pain: Tight Coupling (Poor Testability) -- Metrics: code coverage, number of tests.  Toolset: NCover, NUnit.
  • Pain: Complexity / Duplication (Code Size) -- Metrics: cyclomatic complexity, number of lines of code, classes and members.  Toolset: NCover, NDepend, SourceMonitor or VIL.
  • Pain: Standards Compliance -- Metrics: FxCop warnings and violations, compilation warnings.  Toolset: FxCop, MSBuild.

Ideally, before I start any refactoring or code clean-up, I want my reports to reflect the current state of the application (flawed, tightly coupled and untestable). To do this, I need to start capturing this data as soon as possible by adding the appropriate tools to my build script. While it's possible to add new metrics to your build configuration at any time, there is no way to go back and generate log data for previous builds. (You could manually check out previous builds and run the tools directly, but that would take an insane amount of time.) The CruiseControl.NET extension CCStatistics also has a tool that can reprocess your log files, which is handy if you add new metrics for data sources that have already been added to your build output.

Since adding all these tools to your build script requires some tinkering, I'll be adding them gradually. To minimize changes to my CruiseControl configuration, I can use a wildcard filter to match all files that follow a set naming convention. I'm using a "*-Results.xml" naming convention.

<!-- from ccnet.config -->
<publishers>
    <merge>
        <files>
            <file>c:\buildpath\build-output\*-Results.xml</file>
        </files>
    </merge>
</publishers>

Configuring the Statistics Publisher is really quite easy, and the great news is that the default configuration captures most of the metrics above. The out of box configuration captures the following:

  • CCNET: Build Label
  • CCNET: Error Type
  • CCNET: Error Message
  • CCNET: Build Status
  • CCNET: Build Start Time
  • CCNET: Build Duration
  • CCNET: Project Name
  • NUNIT: Test Count
  • NUNIT: Test Failures
  • NUNIT: Tests Ignored
  • FXCOP: FxCop Warnings
  • FXCOP: FxCop Errors

Here's a snippet from my ccnet.config file that shows NCover lines of code, files, classes and members. Note that I'm also using Grant Drake's NCoverExplorer extras to generate an xml summary instead of the full coverage xml output for performance reasons.

<publishers>
    <merge>
        <files>
            <file>c:\buildpath\build-output\*-Results.xml</file>
        </files>
    </merge>

    <statistics>
        <statisticList>
            <firstMatch name='NCLOC' xpath='//coverageReport/project/@nonCommentLines' include='true' />
            <firstMatch name='files' xpath='//coverageReport/project/@files' include='true' />
            <firstMatch name='classes' xpath='//coverageReport/project/@classes' include='true' />
            <firstMatch name='members' xpath='//coverageReport/project/@members' include='true' />
        </statisticList>
    </statistics>

    <!-- email, etc -->
</publishers>

I've omitted the metrics for NDepend/SourceMonitor/VIL, as I haven't fully integrated these tools into my build reports. I may revisit this later.

If you've found this useful or have other cool tools or metrics you want to share, please leave a note.

Happy Canada Day!


Wednesday, June 25, 2008

Adding Subversion Ignores from the command line

I use Subversion at work and when I'm managing files from the command prompt, I generally don't enjoy having to sift through a long list of file names with question marks next to them, wondering whether these files should be checked into source control. Folders like "bin" and "obj" and user-preference files have no place in source control -- they just seem to get in the way.

If you're using TortoiseSVN, you can hide these folders from source control simply by pulling up the context menu for the unversioned folder, selecting TortoiseSVN and "Add to ignore list". However, if you're using the command prompt, it requires a bit more effort. (Over the last few years, I've grown a well-established distrust of TortoiseSVN, as its shell overlays can cripple even the fastest machines. I really wish the TortoiseSVN guys would release their merge tool as a separate download; if you know a good diff tool, let me know.)

Because the svn:ignore property is stored as a new line delimited list, you need to pipe the results into a text file and edit them in the tool of your choice. When updating the property, I use the -F argument to specify a file instead of supplying the value in the command line.

  1. Get a list of the current ignores and pipe it into a text file:
    svn propget svn:ignore . > .ignore
  2. Edit the list in your editor:
    notepad .ignore
  3. Put the property back in:
    svn propset svn:ignore -F .ignore .
  4. Verify that your ignores work:
    svn st
  5. Commit your changes into the repository:
    svn ci . -m"Updating ignores"

Monday, June 23, 2008

Working with Legacy .NET Projects

My current project at work is a legacy application, written using .NET 1.1. The application is at least 5 years old and has had a wide range of developers. It's complex, has many third-party elements and constraints, and lots and lots of code. Like all legacy applications, they set out with the best of intentions but ended up somewhere else when new requirements started to deviate from the original design. It's safe to say that it's got challenges; it works despite its bugs, and all hope is not yet lost.

Oh, and no unit tests. Which, in my world, is a pretty big thing. Hope you like spaghetti!!

Fortunately, the client has agreed to a .NET 2.0 migration, which is a great starting place. All in all, I see this as a great refactoring exercise to slowly introduce tests and proper design. Along the way, we'll be fixing bugs, improving performance and reducing friction to change. I'll be writing some posts over the next while that talk about the strategies we're using to change our landscape. Maybe you'll find them useful for your project.

Related Posts:


Thursday, June 19, 2008

Debugging WebTrends Page Attributes

A few weeks back, I provided a specially constructed link that would allow you to debug HitBox page attributes. I had the pleasure (sarcasm intended) of attending WebTrends training this week, which revealed a similar gem...

javascript:alert(gImages[0].src)

To use, drag this link to your browser toolbar: Show WebTrends.

When clicked, the resulting alert shows all the attributes that are sent to WebTrends SmartSource data collector.

If you want to try it out, Motorcycle USA uses WebTrends.

Update 6-20-08: If you're using FireBug in FireFox, the network performance tab makes it really easy to view the querystring parameters associated with the WebTrends tracking image.

  1. Navigate to your page.
  2. Open FireBug.
  3. Select the Net tab.
  4. Click on the Images button in the menu.
  5. Find the instance of dcs.gif from the statse.webtrendslive.com site.

Tuesday, June 17, 2008

TDD Tips: Create Custom NUnit Categories

In my recent post about test naming conventions and guidelines, I mentioned that you should annotate tests for third-party and external dependencies with category attributes and limit the number of categories that you create. This post will show basic usage of categories and explain some of the reasoning behind limiting their number. I'll also show how you can create your own categories with NUnit 2.4.x.

Although it's possible to annotate all of your tests with categories, they're really only useful for marking sensitive tests, typically around logical boundaries in your application. Some of the typical categories that I mark my tests with:

  • Database: Tests that require a database to execute. You might want to exclude these tests when you're working remotely or isolate these tests if you need to validate a database deployment for an environment.
  • Integration: Tests that interact with external components you don't have much control over, such as web-services or other infrastructure.
  • Web: Tests that perform regression tests on the visual aspect of a web-site. These tests tend to be very time consuming or require special configuration, so being able to exclude them until they're required can be a big help. Often I run these type of tests when the build server kicks off a build.

Usage

Using categories is very straightforward. Here's an example of a test that is marked with a "Database" category:


using NUnit.Framework;

namespace Code
{
    [TestFixture]
    public class AdoOrderManagementProvider
    {
        [Test, Category("Database")]
        public void CanRetrieveOrderById()
        {
            // database code here
        }
    }
}

Challenges with Categories

One problem I've found with using categories is that category names can be difficult to keep consistent in large teams, mainly because the category name is a literal value passed to the attribute constructor. You either end up with several categories with different spellings, or the unclear intent of the categories becomes an obstacle that prevents developer adoption.

Fortunately, since NUnit 2.4.x it's possible to create your own custom categories by deriving from the CategoryAttribute class. (In previous releases, the CategoryAttribute class was sealed.) Creating your own custom categories as concrete classes allows the solution architect to clearly express the intent of the testing strategy, and spares developers from spelling mistakes. As an added bonus, you get Intellisense support (through XML documentation syntax), the ability to identify usages, and the ability to refactor the category much more effectively than a literal value.

Here's the code for a custom database category, and the above example modified to take advantage of it:


using System;
using NUnit.Framework;

namespace NUnit.Framework
{
    /// <summary>
    /// This test, fixture or assembly has a direct dependency on a database.
    /// </summary>
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method | AttributeTargets.Assembly, AllowMultiple = false)]
    public class RequiresDatabaseAttribute : CategoryAttribute
    {
        public RequiresDatabaseAttribute() : base("Database")
        {}
    }
}

namespace Code
{
    [TestFixture]
    public class AdoOrderManagementProvider
    {
        [Test, RequiresDatabase]
        public void CanRetrieveOrderById()
        {
            // etc...
        }
    }
}

It's important to point out that categories can be applied per Test, per Fixture or even for the entire Assembly, so you have lots of options in terms of the level of granularity.
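
For illustration, here's a minimal sketch of the same custom category applied at the fixture and assembly level. The fixture and test names are hypothetical, and the assembly-level form assumes your NUnit version honours assembly-scoped categories:

using NUnit.Framework;

// Assembly level: marks every test in the assembly (typically placed in AssemblyInfo.cs).
[assembly: RequiresDatabase]

namespace Code
{
    // Fixture level: marks every test in this fixture.
    [TestFixture, RequiresDatabase]
    public class AdoCustomerManagementProvider
    {
        [Test]
        public void CanRetrieveCustomerById()
        {
            // database code here
        }
    }
}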

Filtering Tests using Categories

The real advantage to using categories is that you can filter which tests should be included or excluded when the tests are run.

Filtering Categories within Nunit-Gui.exe

To actively include/exclude tests by category in the GUI:

  1. Click on the Categories tab in the top left
  2. Select the categories you wish to include/exclude, then click the Add button.
  3. If excluding tests, check the "exclude these categories" checkbox.

(Screenshot: filtering categories in the NUnit 2.4.7 GUI.)

Filtering Categories with Nunit-Console.exe

To include/exclude tests by category from the command line, use either the /include:<category-name> or /exclude:<category-name> parameters. It's possible to provide a list of categories by using a comma delimiter.

Example of running all tests within assemblyName.dll except for tests marked as Database or Web:

nunit-console assemblyName.dll /exclude:Database,Web

Example of running only tests marked with the Database category:

nunit-console assemblyName.dll /include:Database
Note: The name of the category is case-sensitive.

Code Available

I'm pleased to announce that I've set up a repository using Google Project hosting, where I'll be posting downloadable code samples. I've created a few simple NUnit categories based on the examples above that you can download and use for your projects.

Happy testing!


Wednesday, June 04, 2008

TDD Tips: Test Naming Conventions & Guidelines

The idea behind test driven development is that you are writing the test first. Since all code must reside in a method, the very first step before you can write any code is to name the test. If you're new to TDD, you'll find this to be a very difficult thing to do. Don't let this discourage you; I'd go so far as to say that out of all the tasks a developer must accomplish, finding names for things is perhaps the most difficult. W.H. Auden's statement shows that this "meta" thing transcends development:

Proper names are poetry in the raw. Like all poetry they are untranslatable. ~W.H. Auden

This raises a question that comes up frequently for new TDD developers starting out, as well as for experienced developers during code review: "Is there a naming convention or guidelines for unit tests?" Some believe it to be a black art, but I think it's more like acquiring a rhythm and following along. Once you've got the rhythm it gets easier.

Prior to diving into the guidelines, let's clear up some basic vocabulary:

  • Target / Subject: I often use the term "Target" or "Subject" to refer to the piece of functionality that I'm testing.
  • Fixture: Synonymous with "TestFixture", a fixture is a class that contains a set of related tests. Fixtures are classes that have been decorated with the [TestFixture] attribute.
  • Suite: Test Suites are an older style of organizing tests. They're specialized fixtures that programmatically define which Fixtures or Tests to run. NUnit supports them for backward compatibility through the [Suite] attribute. Since NUnit dynamically finds all tests with the [TestFixture] attribute, they're not as popular these days.
  • Test: You may have noticed that I capitalize Test in all my entries. Tests are methods within the Fixture that are decorated with the [Test] attribute and contain code that validates the functionality of our target.
  • Setup/TearDown: Test Fixtures can designate a special piece of code to run before every Test within that Fixture. That method is decorated with the [SetUp] attribute. Likewise, a method with the [TearDown] attribute is called at the end of every test within a fixture.
  • Fixture Setup/Fixture TearDown: Similar to constructors and finalizers, methods with the [TestFixtureSetUp] or [TestFixtureTearDown] attributes execute before and after the Fixture executes. These methods happen before [SetUp] and after [TearDown].
  • Category: The [Category] attribute, when applied to a method, associates the Test with a user-defined category.
  • Ignore: Tests with the [Ignore] attribute are skipped over when the Tests are run.
  • Explicit: Tests with the [Explicit] attribute won't run unless you manually run them.

The following are some suggestions I've adopted or recommended to others from past projects. Feel free to take 'em at face value, or leave a comment if you have some to add:

Fixtures

DO: Name Fixtures consistently
TestFixtures should follow a consistent naming convention to make tests easier to find. Choose a naming convention such as <TargetType>Fixture or <TargetType>Test and stick to it.

DO: Mimic namespaces of Target Code
To help keep your tests organized, use the same folders and namespace structures as your target assembly. This will help you locate tests for target types and vice versa. Since most Test runners group Tests by their namespace, it's really easy to run all tests for a specific namespace by selecting the container folder -- which is great for regression testing an area of code. I've got another post which talks about how to structure your Test namespaces.
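
As a quick sketch of what that mirroring might look like (all names here are hypothetical; adjust to your own conventions), the target type and its fixture sit in matching namespaces even though they live in separate assemblies:

// Production assembly: MyApp.Core
namespace MyApp.Core.Orders
{
    public class OrderManagementProvider { /* production code */ }
}

// Test assembly: MyApp.Core.Tests, mirroring the folder and namespace structure
namespace MyApp.Core.Orders
{
    using NUnit.Framework;

    [TestFixture]
    public class OrderManagementProviderFixture { /* tests for OrderManagementProvider */ }
}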

DO: Name Setup/TearDown methods consistently
When naming your fixture setup and teardown methods, pick a style and stick with it. Personally, I can't find any reason to deviate from naming these methods FixtureSetup, FixtureTearDown, Setup, and TearDown, as these provide clear names. By following a standard TestFixture structure you can cut down some of the visual noise, make tests easier to read, and produce more maintainable tests across multiple developers.
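
As a sketch, a fixture skeleton with consistently named methods might look like this (NUnit 2.4-era attribute names; the fixture name is hypothetical):

using NUnit.Framework;

[TestFixture]
public class OrderManagementProviderFixture
{
    [TestFixtureSetUp]
    public void FixtureSetup()
    {
        // runs once, before the first test in the fixture
    }

    [SetUp]
    public void Setup()
    {
        // runs before every test
    }

    [TearDown]
    public void TearDown()
    {
        // runs after every test
    }

    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
        // runs once, after the last test in the fixture
    }
}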

CONSIDER: Separating your Tests from your Production Code
As a general rule, you should try to separate your tests from your production code. If you have a requirement where you want to test in production or verify at the client's side, you can accomplish this simply by bundling the test library with your release. Still, every project is different, and tests won't necessarily impede production other than bloating up your assembly. Separate when needed, and use your gut to tell you when you should.

CONSIDER: Deriving common Fixtures from a base Fixture
In scenarios where you are testing sets of common classes or when tests share a great deal of duplication, consider creating a base TestFixture that your Fixtures can inherit.
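
A minimal sketch of the idea, assuming a set of database-backed fixtures that share setup and teardown (all names are hypothetical):

using NUnit.Framework;

public abstract class DatabaseFixtureBase
{
    [SetUp]
    public virtual void Setup()
    {
        // shared setup, e.g. open a connection or begin a transaction
    }

    [TearDown]
    public virtual void TearDown()
    {
        // shared cleanup, e.g. roll back the transaction
    }
}

[TestFixture]
public class AdoOrderManagementProviderFixture : DatabaseFixtureBase
{
    [Test]
    public void CanRetrieveOrderById()
    {
        // relies on the setup inherited from the base fixture
    }
}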

CONSIDER: Using Categories instead of Suites or Specialized Tests
Although Suites can be used to organize Tests of similar functionality together, Categories are the preferred method. Suites represent significant developer overhead and maintenance. Likewise, creating specialized folders to contain tests (ie "Database Tests") also creates additional effort as tests for a particular Type become spread over the test library. Categories offer a unique advantage in the UI and at the command-line that allows you to specify which categories should be included or excluded from execution. For example, you could execute only "Stateful" tests against an environment to validate a database deployment.

CONSIDER: Splitting Test Libraries into Multiple Assemblies
From past experience, projects go to lengths to separate tests from code but don't place a lot of emphasis on how to structure Test assemblies. Often, a single Test library is created, which is suitable for most projects. However, for large scale projects that can have hundreds of tests this approach can get difficult to manage. I'm not suggesting that you should religiously enforce test structure, but there may be logical motivators to divide your test assemblies into smaller units, such as grouping tests with third-party dependencies or as an alternative for using Categories. Again, separate when needed, and use your gut to tell you when you should. (You can always go back)

AVOID: Empty Setup methods
As a best practice, you should only write the methods that you need today. Adding methods for future use only adds visual noise for maintenance. The exception to this is when you are creating a base Fixture that contains empty methods that will be overridden by derived classes.

Tests

DO: Name Tests after Functionality
The test name should match a specific unit of functionality for the target type being tested. Some key questions you may want to ask yourself: "What is the responsibility of this class?" "What does this class need to do?" Think in terms of action words. Well-written test names should provide guidance when the test fails. For example, a test with the name CanDetermineAuthenticatedState provides more direction about how authentication states are examined than Login.
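
To make the contrast concrete, here's a small sketch (hypothetical fixture and names):

using NUnit.Framework;

[TestFixture]
public class MembershipProviderFixture
{
    // AVOID: named after the method under test; says little when it fails.
    [Test]
    public void Login()
    {
        // ...
    }

    // BETTER: named after the behaviour being verified.
    [Test]
    public void CanDetermineAuthenticatedState()
    {
        // ...
    }
}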

DO: Document your Tests
You can't assume that all of your tests will be intuitive for everyone who reviews them. Most tests require special knowledge about the functionality you're testing, so a little documentation to explain what the test is doing is helpful. Using XML documentation syntax might be overkill, but a few comments here and there are often just the right amount to help the next person understand what you need to test and how your test demonstrates that functionality.

CONSIDER: Use "Cannot" Prefix for Expected Exceptions
Since Exceptions are typically thrown when your application is performing something it wasn't designed to do, prefix "Cannot" to tests that are decorated with the [ExpectedException] attribute. Some examples: CannotAcceptNullArguments, CannotRetrieveInvalidRecord.

I would consider this a "DO" recommendation, but this a personal preference. I can't think of scenarios where this isn't the case, so this one is up for debate.

CONSIDER: Using prefixes for Different Scenarios
If your application has features that differ slightly for application roles, it's likely that your test names will overlap. Some teams have adopted a For<Scenario> syntax (CanGetPreferencesForAnonymousUser). Other teams have adopted an underscore prefix _<Scenario> (AnonymousUser_CanGetPreferences).

AVOID: Ignore Attributes with no explanation
Tests that are marked with the Ignore attribute should include a reason for why the test has been disabled. Eventually, you'll want to circle back on these tests and either fix them or alter them so that they can be used. But without an explanation, the next person will have to do a lot of investigative work to figure out that reason. In my experience, most tests with the Ignore attribute are never fixed.
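
For example, a sketch of an ignored test that carries its reason along (the fixture, test name, and reason are hypothetical):

using NUnit.Framework;

[TestFixture]
public class OrderImportFixture
{
    // The explanation travels with the test instead of leaving the next developer guessing.
    [Test, Ignore("Import feed is being rebuilt; re-enable once the new schema is deployed.")]
    public void CanImportLegacyOrders()
    {
        // ...
    }
}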

AVOID: Naming Tests after Implementation
If you find that your tests are named after the methods within your classes, that's a code smell that you're testing your implementation instead of your functionality. If you changed your method name, would the test name still make sense?

AVOID: Using underscores as word-separators
I've seen tests that use_underscores_as_word_separators_for_readability, which is so-o-o 1960. PascalCase should suffice. Imagine all the time you save not holding down the shift key.

AVOID: Unclear Test Names
Sometimes we create tests for bugs that are caught late in the development cycle, or tests to demonstrate requirements based on lengthy requirements documentation. As these are usually pretty important tests (especially for bugs that creep back in), it's important to avoid giving them vague names that represent some external requirement, like FixForBug133 or TestCase21.

Categories

DO: Limit the number of Categories
Using Categories is a powerful way to dynamically separate your tests at runtime; however, their effectiveness is diminished when developers are unsure which Category to use.

CONSIDER: Defining Custom Category Attributes
As Categories are sensitive to case and spelling, you might want to consider creating your own Category attributes by deriving from CategoryAttribute. UPDATE: Read more about custom NUnit Categories.

Well, that's all for now. Are you doing things differently, or did I miss something? Feel free to leave a comment.

Updates:

  • 6/18/08 - Added links to custom NUnit Categories


Wednesday, May 21, 2008

log4net Configuration made simple through Attributes

I'm sure this is well documented, but for my own reference and your convenience, here's one from my list of favorite log4net tips and tricks: how to instrument your code so that log4net automatically picks up your configuration.

On average, I've been so happy with how well log4net has fit my application logging needs that most of my projects end up using it: console apps, web applications, class libraries. Needless to say I use it a lot, and I get tired of writing the same configuration code over and over:

private static void Main()
{
    string basePath = AppDomain.CurrentDomain.BaseDirectory;
    string filePath = Path.Combine(basePath, "FileName.log4net");
    XmlConfigurator.ConfigureAndWatch(new FileInfo(filePath));
}

log4net documentation refers to a Configuration Attribute (XmlConfiguratorAttribute), but it can be frustrating to use if you're not sure how to set it up. The trick is how you name your configuration file and where you put it. I'll walk through how I set it up...

log4net using XmlConfiguratorAttribute Walkthrough

  1. Add an Assembly Configuration Attribute: log4net will look for this configuration attribute the first time you make a call to a logger. I typically give my configuration file a "log4net" extension. Place the following configuration attribute in the AssemblyInfo.cs file in the assembly that contains the main entry point for the application.

    [assembly: log4net.Config.XmlConfigurator(ConfigFileExtension = "log4net",Watch = true)]

  2. Create your configuration file: As mentioned previously, the name of the configuration file is important, as is where you put it. In general, the name of the configuration file should follow the convention: full-assembly-name.extension.log4net. The file needs to be in the base folder of the application: for WinForms and Console applications it resides in the same folder as the main executable; for ASP.NET applications it's the root of the web site, alongside the web.config file.

    Project Type  | Project Output | log4net file name   | Location
    WinForm App   | Program.exe    | Program.exe.log4net | with exe
    Console App   | Console.exe    | Console.exe.log4net | with exe
    Class Library | Library.dll    | N/A                 | N/A
    ASP.NET       | /bin/Web.dll   | /Web.dll.log4net    | Web root (/)

  3. Define your Configuration Settings: Copy and paste the following sample into a new file. I'm using the Rolling Appender as this creates a new log file every time the app is restarted.

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>

      <configSections>
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
      </configSections>

      <log4net>
        <!-- Define output appenders -->
        <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
          <file value="log.txt" />
          <appendToFile value="true" />
          <rollingStyle value="Once" /> <!-- new log file on restart -->
          <maxSizeRollBackups value="10" /> <!-- renames rolled files on startup 1-10, no more than 10 -->
          <datePattern value="yyyyMMdd" />
          <layout type="log4net.Layout.PatternLayout">
            <param name="Header" value="[START LOG]&#13;&#10;" />
            <param name="Footer" value="[END LOG]&#13;&#10;" />
            <conversionPattern value="%d [%t] %-5p %c [%x] - %m%n" />
          </layout>
        </appender>

        <!-- Setup the root category, add the appenders and set the default level -->
        <root>
          <level value="DEBUG" />
          <appender-ref ref="RollingLogFileAppender" />
        </root>

      </log4net>
    </configuration>
  4. Make a logging call as early as possible: In order for the configuration attribute to be invoked, you need to make a logging call in the assembly that contains that attribute. Note that I declare the logger as static readonly as a JIT optimization.

    using System;
    using log4net;

    namespace example
    {
        public class Global : System.Web.HttpApplication
        {
            private static readonly ILog log = LogManager.GetLogger(typeof(Global));

            protected void Application_Start(object sender, EventArgs e)
            {
                log.Info("Web Application Start.");
            }
        }
    }

Cheers.
