Living idyllically in a .NET, C#, TDD world
Well, it has been pretty quiet here on the blog for the last month only because my personal life has been pretty loud.
6lbs-14oz, 20" long, October 1st. Mom, baby and big brother are all well.
by bryan at 3:15 PM · 1 comments
Both my home laptop and work laptop are running different versions of Vista, and after the initial shock, I've found it to be growing on me.
This list of shortcuts covers the basics of the Vista shortcuts, and a bit more.
One shortcut that I've discovered this week is tapping the SHIFT key twice: it brings the task bar and gadgets to the foreground (works for Google Desktop search, too).
by bryan at 12:39 AM · 1 comments
This July will definitely go down in my books as most memorable to date. After a major round of changes at work and a decent severance package, I spent most of July milking my extensive contact list for opportunities, playing phone tag with head hunters and spending as much time as I possibly could outside. I've never been so relaxed... my wife claims a night-and-day difference in my outlook.
This Monday I start a senior role with a development firm specializing in emerging technologies. I'm pretty jazzed about this, as I normally work on technology that customers can be comfortable with -- this will be the exact opposite: it looks like I'll be working primarily with WPF and Microsoft Surface. This will be baptism-by-fire, full-steam-ahead, bleeding-edge stuff - a great opportunity to go "all in" and focus on my technical skills.
While I expect that I'll continue to blog about TDD, guidance automation, process engineering and generally awesome web-centric code for some time to come -- we'll likely see some postcards from the edge here soon.
by bryan at 1:08 PM · 0 comments
Over the last few posts, my legacy monolithic project with no unit tests has gained: a build server with statistics reports, (empty) coverage data, and a set of unit tests for the user interface. We're now in a really healthy position to introduce some change into our project. Well... not quite: refactoring an existing project requires a plan, with some creative thinking to integrate change into the daily work cycle.
I can't stress this enough: without a plan, you're just recklessly introducing chaos into your project. Though a deep technical audit would help, the trick is to keep the plan at a really high level. Your main goal should be to provide management with some deliverable, such as a set of diagrams and a list of recommendations. Each set of recommendations will likely need its own estimation and planning cycle. Here's an approach you can use to help start your plan:
After this short exercise, you should have a better sense of the number of changes and the order in which they should be made. The next step is finding a way to introduce these changes into your release schedule.
While documenting your findings and producing a deliverable is key, perhaps the best way to introduce change into the release schedule is the direct route: tell the decision makers your plans. An informed client/management is your best ally, but you need to speak their language.
For example, in a project where the user-interface code is tied to service objects which are tied directly to web services, it's not enough to state that this is an inflexible design. However, by outlining the cost savings, reduced bugs and quicker time to market that come from removing a pain point (the direct coupling between UI and web services prevents third parties or remote developers from properly testing their code), management is much more agreeable to scheduling some time to fix things.
For an existing project, it's very unlikely that the client will agree to a massive refactoring such as changing all of the underlying architecture for the UI at the same time. However, if a business request touches a component that suffers a pain point, you might be able to make a case to fix things while introducing the change. This is the general theme of refactoring: each step in the plan should be small and isolated so that the impact is minimal. I like to think of it as a sliding-puzzle.
Introducing change to a project typically gets easier as you demonstrate results. However, since the first steps of introducing a new design typically require a lot of plumbing and simultaneous changes, it can be a very difficult sell to the client if these plumbing changes are padded into a simple request. To ease the transition, it might help to soften the bite by taking some of the first steps on your own: either as a proof of concept, or as an isolated example that can be used to set direction for other team members.
Here are a few things you can take on with relatively minor effort that will ease your transition.
A common problem with legacy projects is the confusion within the code caused by organic growth: classes are littered with multiple disjointed responsibilities, namespaces lose their meaning, relationships between assemblies become inconsistent or complex, and so on. Before you start any major refactoring, now is a really good time to establish how your application will be composed in terms of namespaces and assemblies (packages).
Packaging is normally a side effect of solution design and isn't something you consider first when building an application from scratch. However, for a legacy project where the code already exists, we can look at using packaging as the vehicle for delivering our new solution. Some Types within your code base may move to new locations, or adopt new namespaces. I highly recommend using assemblies as an organizational practice: instruct team members where new code should reside and guide (enforce) the development of new code within these locations. (Just don't blindly move things around: have a plan!)
Recently, Jeffrey Palermo coined the term Onion architecture to describe a layered architecture where the domain model is centered in the "core", service layers are built upon the core, and physical dependencies (such as databases) are pushed to the outer layers. I've seen a fair amount of designs follow this approach, and a name for it is highly welcomed -- anyone considering a new architecture should take a look at this design. Following this principle, it's easy to think of the layers or services residing in different packages.
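To make that layering concrete, a package layout along these lines is one possibility (the assembly names here are purely illustrative, not from any particular project):

```
MyApp.Core            -- domain model and core contracts; no outward dependencies
MyApp.Services        -- business/service layer, built on top of Core
MyApp.Infrastructure  -- database, web-service and file-system implementations (outer layer)
MyApp.Web             -- user interface; depends on contracts, not implementations
```

The point is that dependencies flow inward: the outer assemblies reference Core, never the other way around.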
A service locator is an effective approach to breaking down dependencies between implementations, making your code more contract-based and intrinsically more testable. There are lots of different service locator or dependency injection frameworks out there; a common approach is to write your own Locator and have it wrap around your framework of choice. The implementation doesn't need to be too complicated, even just a hashtable of objects will do; the implementation can be upgraded to other technologies, such as Spring.net, Unity, etc.
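As a sketch of the "hashtable of objects" idea -- a minimal hand-rolled locator, assuming nothing beyond the base class library (the names `ServiceLocator`, `Register` and `Resolve` are illustrative, not from any framework):

```csharp
using System;
using System.Collections.Generic;

// Minimal hand-rolled service locator: maps a contract (interface) type to
// its implementation. The guts can later be swapped for Spring.NET, Unity, etc.
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> services =
        new Dictionary<Type, object>();

    // Register an implementation under its contract type.
    public static void Register<TContract>(TContract implementation)
    {
        services[typeof(TContract)] = implementation;
    }

    // Resolve the implementation registered for a contract type.
    public static TContract Resolve<TContract>()
    {
        return (TContract)services[typeof(TContract)];
    }
}
```

Tests can then register fakes against the same contracts before exercising user-interface code.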
Perhaps the greatest advantage that a Service Locator can bring to your legacy project is the ability to unhook the User Interface code from the Business Logic (Service) implementations. This opens the door to refactoring inline user-interface code and user controls. The fruits of this labor are clearly demonstrated in your code coverage statistics.
Not all your business objects will fit into your service locator right away, mainly because of strong coupling between the UI and BL layers (static methods, etc). Compile a list of services that will need to be refactored, provide a high-level estimate for each one, and add them to a backlog of technical debt to be worked on at a later date.
You can move Business Layer objects into the Service Locator with the following steps:
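As an illustration of the kind of change those steps involve -- moving a static Business Layer call behind a locator-resolved contract -- here's a before/after sketch. All the names (`IOrderService`, `OrderSummaryPresenter`, the tiny `Locator`) are hypothetical stand-ins:

```csharp
using System;
using System.Collections.Generic;

// Before: UI code called a static Business Layer method directly, e.g.
//   decimal total = OrderService.GetOrderTotal(orderId);
// which cannot be faked out in a unit test.

// A contract extracted from the static class.
public interface IOrderService
{
    decimal GetOrderTotal(int orderId);
}

// Minimal locator standing in for whatever framework you settle on.
public static class Locator
{
    private static readonly Dictionary<Type, object> services =
        new Dictionary<Type, object>();

    public static void Register<T>(T implementation) { services[typeof(T)] = implementation; }
    public static T Resolve<T>() { return (T)services[typeof(T)]; }
}

// After: the UI resolves the contract, so tests can register a fake.
public class OrderSummaryPresenter
{
    public decimal ShowTotal(int orderId)
    {
        IOrderService service = Locator.Resolve<IOrderService>();
        return service.GetOrderTotal(orderId);
    }
}

// A fake that a unit test might register in place of the real service.
public class FakeOrderService : IOrderService
{
    public decimal GetOrderTotal(int orderId) { return 10m; }
}
```

Each such step is small and isolated, which is exactly the sliding-puzzle quality you want.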
Now that you have continuous integration, reports that demonstrate results, unit tests for the presentation layer, the initial ground-work for your new architecture and a plan of attack -- you are well on your way to start the refactoring process of changing your architecture from the inside out. Remember to keep your backlog and plan current (it will change), write tests for the components you refactor, and don't bite off more than you can chew.
Good luck with the technical debt!
by bryan at 12:22 AM · 0 comments
While cleaning up a code monster, a colleague and I were looking for ways to dynamically rebuild all of our web services as part of a build script or utility, as we have dozens of them and they change somewhat frequently. In the end, we decided that we didn't necessarily need support for modifying them within the IDE and we could just generate them using the WSDL tool.
However, while researching the problem I stumbled upon an easy way to drive Visual Studio without having to write an add-in or macro; useful for one-off utilities and hare-brained schemes.
Here's some ugly code, just to give you a sense for it.
You'll need references to:
namespace AutomateVisualStudio
{
    using System;
    using EnvDTE;
    using VSLangProj80;

    public class Utility
    {
        public static void Main()
        {
            string projectPath = @"C:\Demo\Empty.csproj";
            Type type = Type.GetTypeFromProgID("VisualStudio.DTE.8.0");
            DTE dte = (DTE) Activator.CreateInstance(type);
            dte.MainWindow.Visible = false;

            // projects can only be manipulated inside a solution
            dte.Solution.Create(@"C:\Temp\", "tmp.sln");
            Project project = dte.Solution.AddFromFile(projectPath, true);

            VSProject2 projectV8 = (VSProject2) project.Object;
            if (projectV8.WebReferencesFolder == null)
            {
                projectV8.CreateWebReferencesFolder();
            }

            ProjectItem item = projectV8.AddWebReference("http://localhost/services/DemoWS?WSDL");
            item.Name = "DemoWS";

            project.Save(projectPath);
            dte.Quit();
        }
    }
}
Note that Visual Studio doesn't allow you to manipulate projects directly; you must load your project into a solution. If you don't want to mess with your existing solution file, you can create a temporary solution and add your existing project to it. And if you don't want to clutter up your disk with temporary solution files, just don't call the Save method on the Solution object.
If you had to build a Visual Studio utility, what would you build?
by bryan at 8:05 AM · 2 comments
Stumbled upon this post about how to catch server errors for your WatiN tests. The approach outlined provides a decent mechanism for detecting server errors by sub-classing the WatiN IE object. While I do appreciate the ability to subclass, it bothers me a bit that I have to write the logic in my subclass to detect server errors. After poking around a bit, I think there's a more generic approach that can be achieved by tapping into the NavigateError event of the native browser:
public class MyIE : IE
{
    private InternetExplorerClass ieInstance;
    private NavigateError error;

    public MyIE()
    {
        ieInstance = (InternetExplorerClass) InternetExplorer;
        ieInstance.BeforeNavigate += OnBeforeNavigate;
        ieInstance.NavigateError += OnNavigateError;
    }

    public override void WaitForComplete()
    {
        base.WaitForComplete();
        if (error != null)
        {
            throw new ServerErrorException(Text);
        }
    }

    // handlers are named OnBeforeNavigate/OnNavigateError so they don't
    // collide with the nested NavigateError class below
    void OnBeforeNavigate(string URL, int Flags, string TargetFrameName, ref object PostData, string Headers, ref bool Cancel)
    {
        error = null;
    }

    void OnNavigateError(object pDisp, ref object URL, ref object Frame, ref object StatusCode, ref bool Cancel)
    {
        error = new NavigateError(URL, StatusCode);
    }

    private class NavigateError
    {
        public NavigateError(object url, object statusCode)
        {
            _url = url;
            _statusCode = statusCode;
        }

        private object _url;
        private object _statusCode;
    }
}

public class ServerErrorException : Exception
{
    public ServerErrorException(string message)
        : base(String.Format("A server error occurred: {0}", message))
    {}
}
A few caveats:
While I wouldn't consider COM Interop to be a "clean" solution, it is a bit more portable between solutions. And if it's this easy, why isn't it part of WatiN anyway?
by bryan at 8:10 AM · 0 comments
Following up on the series of posts on Legacy Projects, my legacy project with no tests now has a build server with empty coverage data. At this point, it's really quite tempting to start refactoring my code, adding in tests as I go, but that approach puts the cart slightly before the horse.
Although Tests for the backend code would help, they can't necessarily guarantee that everything will work correctly. To be fair, the only real guarantee for the backend code would be to write Tests for the existing code and then begin to refactor both Tests and code. This turns out to be a very time-consuming endeavour, as you'll end up writing the Tests twice. In addition, I'm working with the assumption that my code is filled with tightly-coupled static methods, which don't lend themselves well to testing. I'm going to need a crowbar to fix that, and that'll come later.
It helps to approach the problem by looking at the current manual process as a form of unit testing. It's worked well up to this point, but because it's done by hand it's a very time-consuming process that is prone to error and subjective to the user performing the tests. The biggest downfall of the current process is that when the going gets tough, we are more likely to miss details. In his book Test Driven Development: By Example, Kent Beck refers to manual testing as "test as a verb", where we test by evaluating aspects of the system. What we need to do is turn this into "test as a noun", where the test is a "procedure to evaluate" in an automated fashion. By automating the process, we eliminate most of the human-related problems and save a bundle of time.
For legacy projects, the best starting point for automation is the user interface, which isn't the norm for TDD projects. In a typical TDD project, user interface testing tends to appear casually late in the project (if it appears at all), often because the site is incomplete and the user interface is a very volatile place; UI tests are often seen as too brittle. However, for a legacy project the opposite is true: the site is already up and running and the user interface is relatively stable; it's more likely that any change we make to the backend systems will break the user interface.
There is some debate on the topic of where this testing should take place. Some organizations, especially those where the Quality Assurance team is separated from the development teams, rely on automated testing suites such as Empirix (recently acquired by Oracle) to perform functional and performance tests. These are powerful (and expensive) tools, but in my opinion are too late in the development cycle -- you want to catch minor bugs before they are released to QA, otherwise you'll incur an additional bug-fix development cycle. Ideally, you should integrate UI testing into your build cycle using tools that your development team is familiar with. And if you can incorporate your QA team into the development cycle to help write the tests, you're more likely to have a successful automated UI testing practice.
Of the user interface testing frameworks that integrate nicely with our build scripts, two favourites come to mind: Selenium and WatiN.
Selenium is a java-based powerhouse whose key strengths are platform and browser diversity, and it's extremely scalable. Like most java-based solutions, it's a hodge-podge of individual components that you cobble together to suit your needs; it may seem really complex, but it's a really smart design. At its core, Selenium Core is a set of JavaScript files that manipulate the DOM. The most common element is known as Selenium Remote-Control, which is a server-component that can act as a message-broker/proxy-server/browser-hook that can magically insert the Selenium JavaScript into any site -- it's an insanely-wicked-evil-genius solution to overcoming cross-domain scripting issues. Because Selenium RC is written in Java, it can live on any machine, which allows you to target Linux, Mac and PC browsers. The scalability feature is accomplished using Selenium Grid, which is a server-component that can proxy requests to multiple Selenium RC machines -- you simply change your tests to target the URL of the grid server. Selenium's only Achilles' heel is that SSL support requires some additional effort.
A Selenium test that targets the Selenium RC looks something like this:
[Test]
public void CanPerformSeleniumSearch()
{
    ISelenium browser = new DefaultSelenium("localhost", 4444, "*iexplore", "http://www.google.com");
    browser.Start();
    browser.Open("/");
    browser.Type("q", "Selenium RC");
    browser.Click("btnG");

    string body = browser.GetBodyText();
    Assert.IsTrue(body.Contains("Selenium"));

    browser.Stop();
}
The above code instantiates a new session against the Selenium RC service running on port 4444. You'll have to launch the service from a command prompt, or configure it to run as a service. There are lots of options. The best way to get up to speed is to simply follow their tutorial...
Selenium has a FireFox extension, Selenium IDE, that can be used to record browser actions into Selenese.
WatiN is a .NET port of the Ruby equivalent, WatiR. Although it's currently limited to Internet Explorer on Windows (version 2.0 will target FireFox), it has an easy entry path and a simple API.
The following WatiN sample is a rehash of the Selenium example. Confession: both samples are directly from the provided documentation...
[Test]
public void CanPerformWatiNSearch()
{
    using (IE ie = new IE("http://www.google.com"))
    {
        ie.TextField(Find.ByName("q")).TypeText("WatiN");
        ie.Button(Find.ByName("btnG")).Click();

        Assert.IsTrue(ie.ContainsText("WatiN"));
    }
}
As WatiN is a browser hook, its API exposes the option to tap directly into the browser through Interop. You may find it considerably more responsive than Selenium because the requests are marshaled via Windows calls instead of HTTP commands. Though there is a caveat to performance: WatiN expects a Single Threaded Apartment model in order to operate, so you may have to adjust your runtime configuration.
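For NUnit, one way to make that adjustment (if I recall correctly, supported from NUnit 2.4.6 onward) is to request an STA thread in the test assembly's config file -- a sketch, assuming NUnit's TestRunner settings section:

```xml
<configuration>
  <configSections>
    <sectionGroup name="NUnit">
      <section name="TestRunner" type="System.Configuration.NameValueSectionHandler" />
    </sectionGroup>
  </configSections>
  <NUnit>
    <TestRunner>
      <!-- run tests on a single-threaded apartment thread for WatiN -->
      <add key="ApartmentState" value="STA" />
    </TestRunner>
  </NUnit>
</configuration>
```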
WatiN also has a standalone application, WatiN Recorder, that can capture browser activity in C# code.
Rather than writing an exhaustive set of regression tests, here's my approach:
by bryan at 12:17 AM · 2 comments
Up to this point, I've crafted the HTML markup for my posts this year using Notepad++. While working with a local editor is far superior to using Blogger's editor window, I've found stylizing elements and adding hyperlinks to be somewhat time consuming, not to mention difficult to read/review/write content with all the HTML markup in the way. Despite having better control over the markup, the largest problem with this approach is you really can't see what your post will look like until you publish, and even then, I usually follow a nervous publish/review/tweak/publish dance number to sort out all the display issues.
Recently, I downloaded LiveWriter and w.bloggar to test-drive alternatives. (Actually, I was interested in w.bloggar's ability to edit Blogger Templates -- but it turns out that it doesn't work with Blogger's new layout templates. Drat.) So far, I'm pleasantly surprised with LiveWriter.
Although I'm pretty excited that the tool is written in .NET with support for managed addins, I am most impressed with the feature that can simulate a live preview of your post. LiveWriter is able to pull this off by creating a temporary post against your blog and analyzing it to extract your CSS and HTML Layout. You can toggle between editing (F11), preview (F12) and HTML (Shift + F11) really easily.
The biggest snag I've encountered thus far is that the HTML markup produced by LiveWriter is cleaned up with lots of extra line-feeds for readability. While this makes reading the HTML a simple pleasure, it wreaks havoc with my current Blogger settings.
Blogger's default setting converts carriage-returns into <br /> tags. So all the extra line breaks inserted by LiveWriter are transformed into ugly whitespace in your posts. This feature is configurable within Blogger: Posts -> Formatting -> Convert line breaks.
Unfortunately for me, this is a breaking change for most of my posts (dating back to 2004). To fix, I have to add the appropriate <p></p> tags around my content -- fortunately, LiveWriter will automatically correct markup for paragraphs that I touch with additional whitespace. So while the good news is my posts will have proper markup in the editor, the bad news is I have to manually edit each one.
by bryan at 2:58 PM · 3 comments
From my previous post, Get Statistics from your Build Server, I spoke about getting meaningful data into your log output as soon as possible so that you can begin to generate reports about the state of your application.
I'm using NCover to provide code coverage analysis, but I can also get important metrics like Non-Comment Lines Of Code, number of classes, members, etc. Unfortunately, I have no unit tests, so my coverage report contains no data. Since NCover will only profile assemblies that are loaded into the profiler's memory space, referencing my target assembly from my Test assembly isn't enough. To compensate, I added this simple test to load the assembly into memory:
[Test]
public void CanLoadAssemblyToProvideCoverageData()
{
System.Reflection.Assembly.Load("AssemblyName");
}

This is obviously a dirty hack, and I'll remove it the second I write some tests. Although I only have 0% coverage, I now have a detailed report that shows over 40,000 lines of untested code. The stage is now set to remove duplication and introduce code coverage.
by bryan at 4:03 PM · 0 comments
As I mentioned in my post, Working with Legacy .NET Projects, my latest project is a legacy application with no tests. We're migrating from .NET 1.1 to .NET 2.0, and this is the first entry in the series of dealing with legacy projects. Click here to see the starting point.
On the majority of legacy projects that I've worked on, there is often a common thread within the development team that believes the entire code base is outdated, filled with bugs and should be thrown away and rewritten from scratch. Such a proposal is a very tough sell for management, who will no doubt see zero value in spending a staggering amount only to receive exactly what they currently have, plus a handful of fresh bugs. Rewrites might make sense when accompanied with new features or platform shifts, but by and large they are a very long and costly endeavour. Refactoring the code using small steps in order to get out of Design Debt is a much more suitable approach, but cannot be done without a plan that management can get behind. Typically, management will support projects that can quantify results, such as improving server performance or page load times. However, in the context of a sprawling application without separation of concerns, estimating effort for these types of projects can be extremely difficult, and further compounded when there is no automated testing in place. It's a difficult stalemate between simple requirements and a full rewrite.
Assuming that your legacy project at least has source control, the next logical step to improve your landscape is to introduce a continuous integration server or build server. And as there are countless other posts out there describing how to set up a continuous integration server, I'm not going to repeat those good fellows.
While the benefits of a build server are immediately visible for developers, who are all too familiar with dumb-dumb errors like compilation issues due to missing files in source control, the build server can also be an important reporting tool that can be used to sell management on the state of the application. As a technology consultant who has played the part between the development team and management, I think it's fair to say that most management teams would love to claim that they understand what their development teams do, but they'd rather be spared the finer details. So if you could provide management a summary of all your application's problems graphed against a timeline, you'd be able to demonstrate the effectiveness of their investment over time. That's a pretty easy sell.
The great news is, very little is required on your part to produce the graphs: CruiseControl 1.3 has a built in Statistics Feature that uses XPath statements to extract values from your build log. Statistics are written to an xml file and csv file for easy exporting, and third party graphing tools can be plugged into the CruiseControl dashboard to produce slick looking graphs. The challenge lies in mapping the key pain points in your application to a set of quantifiable metrics and then establishing a plan that will help you improve those metrics.
Here's a common set of pain points and metrics that I want to improve/measure for my legacy project:
| Pain | Metrics | Toolset |
| --- | --- | --- |
| Tight Coupling (Poor Testability) | Code Coverage, Number of Tests | NCover, NUnit |
| Complexity / Duplication (Code Size) | Cyclomatic complexity, number of lines of code, classes and members | NCover, NDepend, SourceMonitor or VIL |
| Standards Compliance | FxCop warnings and violations, compilation warnings | FxCop, MSBuild |
Ideally, before I start any refactoring or code clean-up, I want my reports to reflect the current state of the application (flawed, tightly coupled and un-testable). To do this, I need to start capturing this data as soon as possible by adding the appropriate tools to my build script. While it's possible to add new metrics to your build configuration at any time, there is no way to go back and generate log data for previous builds. (You could manually check out previous builds and run the tools directly, but that would take an insane amount of time.) The CruiseControl.NET extension CCStatistics also has a tool that can reprocess your log files, which is handy if you add new metrics for data sources that have already been added to your build output.
Since adding all of these tools into your build script requires some tinkering, I'll be adding them gradually. To minimize changes to my CruiseControl configuration, I can use a wildcard filter to match all files that follow a set naming convention. I'm using a "*-Results.xml" naming convention.
<!-- from ccnet.config -->
<publishers>
<merge>
<files>
<file>c:\buildpath\build-output\*-Results.xml</file>
</files>
</merge>
</publishers>
Configuring the Statistics Publisher is really quite easy, and the great news is that the default configuration captures most of the metrics above. The out-of-the-box configuration captures the following:
Here's a snippet from my ccnet.config file that shows NCover lines of code, files, classes and members. Note that I'm also using Grant Drake's NCoverExplorer extras to generate an xml summary instead of the full coverage xml output for performance reasons.
<publishers>
<merge>
<files>
<file>c:\buildpath\build-output\*-Results.xml</file>
</files>
</merge>
<statistics>
<statisticList>
<firstMatch name='NCLOC' xpath='//coverageReport/project/@nonCommentLines' include='true' />
<firstMatch name='files' xpath='//coverageReport/project/@files' include='true' />
<firstMatch name='classes' xpath='//coverageReport/project/@classes' include='true' />
<firstMatch name='members' xpath='//coverageReport/project/@members' include='true' />
</statisticList>
</statistics>
<!-- email, etc -->
</publishers>
I've omitted the metrics for NDepend/SourceMonitor/VIL, as I haven't fully integrated these tools into my build reports. I may revisit this later.
If you've found this useful or have other cool tools or metrics you want to share, please leave a note.
Happy Canada Day!
by bryan at 4:18 PM · 2 comments
I use Subversion at work and when I'm managing files from the command prompt, I generally don't enjoy having to sift through a long list of file names with question marks next to them, wondering whether these files should be checked into source control. Folders like "bin" and "obj" and user-preference files have no place in source control -- they just seem to get in the way.
If you're using TortoiseSVN, you can hide these folders from source control simply by pulling up the context menu for the un-versioned folder, selecting TortoiseSVN and "Add to ignore list". However, if you're using the command prompt, it requires a bit more effort. (Over the last few years, I've grown a well-established distrust of TortoiseSVN, as its shell overlays can cripple even the fastest machines. I really wish the TortoiseSVN guys would release their merge tool as a separate download; if you know a good diff tool, let me know.)
Because the svn:ignore property is stored as a new line delimited list, you need to pipe the results into a text file and edit them in the tool of your choice. When updating the property, I use the -F argument to specify a file instead of supplying the value in the command line.
svn propget svn:ignore . > .ignore
notepad .ignore
svn propset svn:ignore -F .ignore .
svn st
svn ci . -m"Updating ignores"
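For what it's worth, the .ignore file for a typical .NET project ends up containing patterns along these lines, one per line (adjust for your own tree):

```
bin
obj
*.user
*.suo
_ReSharper*
```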
by bryan at 1:28 AM · 3 comments
My current project at work is a legacy application, written using .NET 1.1. The application is at least 5 years old and has had a wide range of developers. It's complex, has many third-party elements and constraints, and lots and lots of code. Like all legacy applications, it set out with the best of intentions but ended up somewhere else when new requirements started to deviate from the original design. It's safe to say that it's got challenges; it works despite its bugs, and all hope is not yet lost.
Oh, and no unit Tests. Which, in my world, is a pretty big thing. Hope you like Spaghetti!!
Fortunately, the client has agreed to a .NET 2.0 migration, which is a great starting place. All in all, I see this as a great refactoring exercise to slowly introduce tests and proper design. Along the way, we'll be fixing bugs, improving performance and reducing friction to change. I'll be writing some posts over the next while that talk about the strategies we're using to change our landscape. Maybe you'll find them useful for your project.
Related Posts:
by bryan at 10:04 PM · 0 comments
A few weeks back, I provided a specially constructed link that would allow you to debug HitBox page attributes. I had the pleasure (sarcasm intended) of attending WebTrends training this week, which revealed a similar gem...
javascript:alert(gImages[0].src)
To use, drag this link to your browser toolbar: Show WebTrends.
When clicked, the resulting alert shows all the attributes that are sent to WebTrends SmartSource data collector.
If you want to try it out, Motorcycle USA uses WebTrends.
Update 6-20-08: If you're using FireBug in FireFox, the network performance tab makes it really easy to view the querystring parameters associated with the WebTrends tracking image.
by bryan at 7:58 PM · 0 comments
In my recent post about test naming conventions and guidelines, I mentioned that you should annotate tests for third-party and external dependencies with category attributes, and limit the number of categories that you create. This post will show basic usage of categories and explain some of the reasoning behind limiting their number. I'll also show how you can create your own categories with NUnit 2.4.x.
Although it's possible to annotate all of your tests with categories, they're really only useful for marking sensitive tests, typically around logical boundaries in your application. Some of the typical categories that I mark my tests with:
Using categories is very straightforward. Here's an example of a test that is marked with a "Database" category:
namespace Code
{
[TestFixture]
public class AdoOrderManagementProvider
{
[Test,Category("Database")]
public void CanRetrieveOrderById()
{
// database code here
}
}
}
One problem I've found with using categories is that category names can be difficult to keep consistent in large teams, mainly because the category name is a literal value that is passed to the attribute constructor. In large teams, you either end up with several categories with different spellings, or the unclear intent of the categories becomes an obstacle which prevents developer adoption.
Fortunately, since NUnit 2.4.x, it's possible to create your own custom categories by deriving from the CategoryAttribute class. (In previous releases, the CategoryAttribute class was sealed.) Creating your own custom categories as concrete classes allows the solution architect to clearly express the intent of the testing strategy, and relieves the developer of spelling mistakes. As an added bonus, you get Intellisense support (through Xml Documentation syntax), the ability to identify usages, and the ability to refactor the category much more effectively than a literal value.
Here's the code for a custom database category, and the above example modified to take advantage of it:
using System;
using NUnit.Framework;
namespace NUnit.Framework
{
/// <summary>
/// This test, fixture or assembly has a direct dependency to a database.
/// </summary>
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method | AttributeTargets.Assembly, AllowMultiple = false)]
public class RequiresDatabaseAttribute : CategoryAttribute
{
public RequiresDatabaseAttribute() : base("Database")
{}
}
}
namespace Code
{
[TestFixture]
public class AdoOrderManagementProvider
{
[Test, RequiresDatabase]
public void CanRetrieveOrderById()
{
// etc...
}
}
}
It's important to point out that categories can be applied per Test, per Fixture or even for the entire Assembly, so you have lots of options in terms of the level of granularity.
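As a sketch, a fixture-level category might look like this -- every test in the fixture inherits the category. (The fixture and test names here are just for illustration.)

```csharp
using NUnit.Framework;

// fixture-level: both tests below are treated as "Database" tests
[TestFixture, Category("Database")]
public class AdoOrderManagementProviderFixture
{
    [Test]
    public void CanRetrieveOrderById() { /* database code here */ }

    [Test]
    public void CanDeleteOrder() { /* database code here */ }
}
```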
The real advantage to using categories is that you can filter which tests should be included or excluded when the tests are run.
To actively include/exclude tests by category in the GUI:
Filtering Categories in NUnit 2.4.7.
To include/exclude tests by category from the command line use either the /include:<category-name> or /exclude:<category-name> parameters. It's possible to provide a list of categories by using a comma delimiter.
Example of running all tests within assemblyName.dll except for tests marked as Database or Web:
nunit-console assemblyName.dll /exclude:Database,Web
Example of running only tests marked with the Database category:
nunit-console assemblyName.dll /include:Database
Note: The name of the category is case-sensitive.
I'm pleased to announce that I've setup a repository using Google Project hosting. I'll be posting downloadable code samples. I've created a few simple NUnit categories based on the examples above that you can download and use for your projects:
Happy testing!
by
bryan
at
11:30 PM
0
comments
The idea behind test driven development is that you write the test first. Since all code must reside in a method, the very first step before you can write any code is to name the test. If you're new to TDD, you'll find this to be a very difficult thing to do. Don't let this discourage you; I'd go so far as to say that out of all the tasks a developer must accomplish, finding names for things is perhaps the most difficult. W.H. Auden's statement shows that this "meta" problem transcends development:
Proper names are poetry in the raw. Like all poetry they are untranslatable. ~W.H. Auden
This raises a question that comes up frequently for new TDD developers starting out, as well as for experienced developers during code review: "Is there a naming convention or set of guidelines for unit tests?" Some believe it to be a black art, but I think it's more like acquiring a rhythm and following along. Once you've got the rhythm it gets easier.
Prior to diving into the guidelines, let's clear up some basic vocabulary:
The following are some suggestions I've adopted or recommended to others from past projects. Feel free to take 'em at face value, or leave a comment if you have some to add:
DO: Name Fixtures consistently
TestFixtures should follow a consistent naming convention to make tests easier to find. Choose a naming convention such as <TargetType>Fixture or <TargetType>Test and stick to it.
DO: Mimic namespaces of Target Code
To help keep your tests organized, use the same folders and namespace structures as your target assembly. This will help you locate tests for target types and vice versa. Since most Test runners group Tests by their namespace, it's really easy to run all tests for a specific namespace by selecting by the container folder -- which is great for regression testing an area of code. I've got another post which talks about how to structure your Test namespaces.
DO: Name Setup/TearDown methods consistently
When naming your fixture setup and teardown methods, pick a style and stick with it. Personally, I can't find any reason to deviate from naming these methods FixtureSetup, FixtureTearDown, Setup, and TearDown, as these provide clear names. By following a standard TestFixture structure you can cut down some of the visual noise, make tests easier to read, and produce more maintainable tests across multiple developers.
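As a sketch, a fixture following that structure might look like this (NUnit 2.4 attribute names; the fixture name is hypothetical):

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderManagementFixture
{
    [TestFixtureSetUp]
    public void FixtureSetup()
    {
        // runs once, before the first test in the fixture
    }

    [SetUp]
    public void Setup()
    {
        // runs before every test
    }

    [TearDown]
    public void TearDown()
    {
        // runs after every test
    }

    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
        // runs once, after the last test in the fixture
    }
}
```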
CONSIDER: Separating your Tests from your Production Code
As a general rule, you should try to separate your tests from your production code. If you have a requirement to test in production or verify at the client's site, you can accomplish this simply by bundling the test library with your release. Still, every project is different, and tests won't necessarily harm production beyond bloating up your assembly. Separate when needed, and use your gut to tell you when you should.
CONSIDER: Deriving common Fixtures from a base Fixture
In scenarios where you are testing sets of common classes or when tests share a great deal of duplication, consider creating a base TestFixture that your Fixtures can inherit.
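A minimal sketch of that idea -- the type names here are made up -- is a base fixture that owns the shared setup and exposes an empty hook for derived fixtures (NUnit invokes inherited [SetUp] methods on derived fixtures):

```csharp
using NUnit.Framework;

public abstract class ProviderFixtureBase
{
    [SetUp]
    public void Setup()
    {
        // shared setup for all derived fixtures goes here
        OnSetup();
    }

    // empty hook, overridden by derived fixtures as needed
    protected virtual void OnSetup()
    {}
}

[TestFixture]
public class AdoOrderManagementProviderFixture : ProviderFixtureBase
{
    protected override void OnSetup()
    {
        // fixture-specific setup
    }

    [Test]
    public void CanRetrieveOrderById()
    {
        // etc...
    }
}
```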
CONSIDER: Using Categories instead of Suites or Specialized Tests
Although Suites can be used to organize Tests of similar functionality together, Categories are the preferred method. Suites represent significant developer overhead and maintenance. Likewise, creating specialized folders to contain tests (ie "Database Tests") also creates additional effort as tests for a particular Type become spread over the test library. Categories offer a unique advantage in the UI and at the command-line that allows you to specify which categories should be included or excluded from execution. For example, you could execute only "Stateful" tests against an environment to validate a database deployment.
CONSIDER: Splitting Test Libraries into Multiple Assemblies
From past experience, projects go to lengths to separate tests from code but don't place a lot of emphasis on how to structure Test assemblies. Often, a single Test library is created, which is suitable for most projects. However, for large scale projects that can have hundreds of tests this approach can get difficult to manage. I'm not suggesting that you should religiously enforce test structure, but there may be logical motivators to divide your test assemblies into smaller units, such as grouping tests with third-party dependencies or as an alternative for using Categories. Again, separate when needed, and use your gut to tell you when you should. (You can always go back)
AVOID: Empty Setup methods
As a best practice, you should only write the methods that you need today. Adding methods for future purposes only adds visual noise for maintenance purposes. The exception to this is when you are creating a base Fixture that contains empty methods that will be overridden by derived classes.
DO: Name Tests after Functionality
The test name should match a specific unit of functionality for the target type being tested. Some key questions you may want to ask yourself: "what is the responsibility of this class?" "What does this class need to do?" Think in terms of action words. Well written test names should provide guidance when the test fails. For example, a test with the name CanDetermineAuthenticatedState provides more direction about how authentication states are examined than Login.
DO: Document your Tests
You can't assume that all of your tests will be intuitive for everyone who reviews them. Most tests require special knowledge about the functionality you're testing, so a little documentation to explain what the test is doing is helpful. Using XML Documentation syntax might be overkill, but a few comments here and there are often just the right amount to help the next person understand what you need to test and how your test demonstrates that functionality.
CONSIDER: Use "Cannot" Prefix for Expected Exceptions
Since Exceptions are typically thrown when your application is performing something it wasn't designed to do, prefix "Cannot" to tests that are decorated with the [ExpectedException] attribute. Some examples: CannotAcceptNullArguments, CannotRetrieveInvalidRecord.
I would consider this a "DO" recommendation, but this is a personal preference. I can't think of scenarios where this isn't the case, so this one is up for debate.
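A sketch of what that looks like (the repository type is hypothetical):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class OrderRepositoryFixture
{
    [Test, ExpectedException(typeof(ArgumentNullException))]
    public void CannotAcceptNullArguments()
    {
        // constructing with a null argument should throw
        new OrderRepository(null);
    }
}
```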
CONSIDER: Using prefixes for Different Scenarios
If your application has features that differ slightly for application roles, it's likely that your test names will overlap. Some teams have adopted a For<Scenario> syntax (CanGetPreferencesForAnonymousUser). Other teams have adopted an underscore prefix _<Scenario> (AnonymousUser_CanGetPreferences).
AVOID: Ignore Attributes with no explanation
Tests that are marked with the Ignore attribute should include a reason for why the test has been disabled. Eventually, you'll want to circle back on these tests and either fix them or alter them so that they can be used. But without an explanation, the next person will have to do a lot of investigative work to figure out that reason. In my experience, most tests with the Ignore attribute are never fixed.
AVOID: Naming Tests after Implementation
If you find that your tests are named after the methods within your classes, that's a code smell that you're testing your implementation instead of your functionality. If you changed your method name, would the test name still make sense?
AVOID: Using underscores as word-separators
I've seen tests that use_underscores_as_word_separators_for_readability, which is so-o-o 1960. PascalCase should suffice. Imagine all the time you save not holding down the shift key.
AVOID: Unclear Test Names
Sometimes we create tests for bugs that are caught late in the development cycle, or tests to demonstrate requirements based on lengthy requirements documentation. As these are usually pretty important tests (especially for bugs that creep back in), it's important to avoid giving them vague test names that represent some external requirement, like FixForBug133 or TestCase21.
DO: Limit the number of Categories
Using Categories is a powerful way to dynamically separate your tests at runtime, however their effectiveness is diminished when developers are unsure which Category to use.
CONSIDER: Defining Custom Category Attributes
As Categories are sensitive to case and spelling, you might want to consider creating your own Category attributes by deriving from CategoryAttribute. UPDATE: Read more about custom NUnit Categories.
Well, that's all for now. Are you doing things differently, or did I miss something? Feel free to leave a comment.
Updates:
by
bryan
at
1:04 AM
8
comments
I'm sure this is well documented, but for my own reference and your convenience, here's one from my list of favorite log4net tips and tricks: how to instrument your code so that log4net automatically picks up your configuration.
I've been so happy with how well log4net fits my application logging needs that most of my projects end up using it: console apps, web applications, class libraries. Needless to say I use it a lot, and I get tired of writing the same configuration code over and over:
private static void Main()
{
string basePath = AppDomain.CurrentDomain.BaseDirectory;
string filePath = Path.Combine(basePath, "FileName.log4net");
XmlConfigurator.ConfigureAndWatch(new FileInfo(filePath));
}
log4net documentation refers to a Configuration Attribute (XmlConfiguratorAttribute), but it can be frustrating to use if you're not sure how to set it up. The trick is how you name your configuration file and where you put it. I'll walk through how I set it up...
| Project Type | Project Output | log4net file name | Location |
| WinForm App | Program.exe | Program.exe.log4net | with exe |
| Console App | Console.exe | Console.exe.log4net | with exe |
| Class Library | Library.dll | N/A | |
| ASP.NET | /bin/Web.dll | /Web.dll.log4net | Web root (/) |
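With the files named as in the table above, the wiring itself is a single assembly-level attribute. ConfigFileExtension and Watch are standard properties on log4net's XmlConfiguratorAttribute; the extension value shown matches the naming scheme from the table:

```csharp
using log4net.Config;

// In AssemblyInfo.cs (or any source file): picks up a config file named
// <AssemblyFileName>.log4net next to the assembly (e.g. Program.exe.log4net)
// and reloads the configuration whenever the file changes.
[assembly: XmlConfigurator(ConfigFileExtension = "log4net", Watch = true)]
```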
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
</configSections>
<log4net>
<!-- Define output appenders -->
<appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
<file value="log.txt" />
<appendToFile value="true" />
<rollingStyle value="Once" /> <!-- new log file on restart -->
<maxSizeRollBackups value="10"/> <!-- renames rolled files on startup 1-10, no more than 10 -->
<datePattern value="yyyyMMdd" />
<layout type="log4net.Layout.PatternLayout">
<param name="Header" value="[START LOG] " />
<param name="Footer" value="[END LOG] " />
<conversionPattern value="%d [%t] %-5p %c [%x] - %m%n" />
</layout>
</appender>
<!-- Setup the root category, add the appenders and set the default level -->
<root>
<level value="DEBUG" />
<appender-ref ref="RollingLogFileAppender" />
</root>
</log4net>
</configuration>
namespace example
{
public class Global : System.Web.HttpApplication
{
private static readonly ILog log = LogManager.GetLogger(typeof(Global));
protected void Application_Start(object sender, EventArgs e)
{
log.Info("Web Application Start.");
}
}
}
Cheers.
by
bryan
at
9:03 PM
6
comments
If you're following true test driven development, you write tests before you write the code. By definition you only write the code that the tests require, so you should always have 100% code coverage.
Unfortunately, this is not always the case. We have legacy projects without tests; we're forced to cut corners; we leave things to finish later that we forget about. For that reason, we look to tools to give us a sense of confidence in the quality of our code. Code coverage is often (dangerously) seen as a confidence gauge. So to follow up on a few of my other TDD posts, I want to talk about what value code coverage can provide and how you should and shouldn't use it...
Let's start by looking at what code coverage will tell us...
In some cases, code coverage can be used to contribute to a confidence level. I feel better about a large code base that has an 80% coverage than little or no coverage. But coverage is just statistical data -- it can be misleading...
Good Coverage doesn't mean Good Code
Having a high coverage metric cannot be used as an overall code quality metric. Code coverage cannot reveal that your code or tests haven't accounted for unexpected scenarios, so it's possible that buggy code with "just enough" tests can have high coverage.
Good Coverage doesn't mean Good Tests
A widely held belief of TDD is that the confidence level of the code is proportional to the quality of the tests. Code coverage tools can be very useful to developers to identify areas of the code that are missing tests, but should not be used as a benchmark for test quality. Tests can become meaningless when developers write tests to satisfy coverage reports instead of writing tests to prove functionality of the application. See the example below.
Developers can unknowingly write a test that invalidates coverage. To demonstrate, let's assume we have a really simple Person class. For the sake of argument, FirstName is always required, so we make it available through the constructor.
[TestFixture]
public class PersonTest
{
[Test]
public void CanCreatePerson()
{
Person p = new Person("Bryan");
Assert.AreEqual(p.FirstName,"Bryan");
}
}
public class Person
{
public Person(string firstName)
{
_first = firstName;
}
public virtual string FirstName
{
get { return _first; }
set { _first = value; }
}
private string _first;
}
This is all well and good. However, a code coverage report would reveal that the FirstName property setter (highlighted above) has no coverage.
Should we fix the code....
public Person(string firstName)
{
_first = firstName;
FirstName = firstName; // virtual method call in constructor
// is a FxCop violation
}
... or the test?
[Test]
public void CanCreatePerson()
{
Person p = new Person("bryan");
Assert.AreEqual(p.FirstName,"bryan");
p.FirstName = "Bryan";
Assert.AreEqual("Bryan",p.FirstName);
}
Trick question. Neither!
There are two ways to improve code coverage -- write more tests, or get rid of code. In this case, I would argue that it is better to remove the setter than to write any code just to satisfy coverage. (Wow, less really IS more!) Leave the property as read-only until some calling code needs to write to it, at which point the tests for that call site will provide the coverage you need.
"But putting the setter back in is a pain!" -- sure it is. Alternatively, you can leave it in, but make sure you do not write a test for it. If the coverage remains zero for extended periods of time, remove it later. (If you can't remove it because some calling code is writing to it, you missed something in one of your tests.)
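In other words, the trimmed class simply drops the setter until it's needed -- a sketch of what that might look like:

```csharp
public class Person
{
    private readonly string _first;

    public Person(string firstName)
    {
        _first = firstName;
    }

    // read-only until some calling code actually needs to write it;
    // the constructor test now covers everything this class does
    public virtual string FirstName
    {
        get { return _first; }
    }
}
```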
Note: In general, plain old value objects like our Person class won't need standalone tests. The exception to this is when you need tests to demonstrate specialized logic in getter/setter methods.
by
bryan
at
2:30 PM
0
comments
In my last post, I highlighted some of the test-driven benefits of using the InternalsVisibleTo attribute. In keeping with the trend of TDD posts, here's a recent change in direction I've made about how to separate your tests from your code.
There's a debate and poll going on about where you should put your tests. The poll shows that the majority of developers are putting their tests in separate projects (yeaaaa!). Bill Simser's suggestion to have tests reside within the code is a belief that balances dangerously between heresy and pragmatism. Although I'm opposed to releasing tests with production code, one point I can identify with is the overhead of keeping namespaces between your code and your tests in sync. (Sorry Bill, if I wanted my end users to run my tests, I'd give them my Test library and tell them to download NUnit.) Along the same lines, at some point our organization picked up some advice that code namespaces should reflect their location in source control. This has proven effective for maintenance, as it makes it easier to track down Types when inspecting a stack-trace. Following this advice has led us to adopt a consistent naming strategy for assemblies and their namespaces:
| Project | Namespace | Assembly |
| Component | Company.Component | Company.Component.dll |
| Test | Company.Component.Test | Company.Component.Test.dll |
This works well, but I have a few hang-ups on this design. This strategy pre-dates most of our TDD efforts, and frankly it gets in the way. Here are my issues:
However, there is some great advice in the Framework Design Guidelines book which states that assembly names and namespaces don't necessarily have to match. From Brad Abrams site:
Please keep in mind that namespaces are distinct from DLL names. Namespaces represent logical groupings for developers whereas DLLs represent packaging and deployment boundaries. DLLs can contain multiple namespaces for product factoring and other reasons, and namespaces can span multiple DLLs. Since namespace factoring is different than assembly factoring, you should design them independently.
A great example is that there is no System.IO.dll in the .NET framework: System.IO.FileStream resides in MSCorLib.dll while System.IO.FileSystemWatcher resides in System.dll. So if we apply this concept to our solution and think of Tests as a subset of functionality with different packaging purposes, our code and test libraries look like this:
| Project | Namespace | Assembly |
| Component | Company.Component | Company.Component.dll |
| Test | Company.Component | Company.Component.Test.dll |
Here's a snap of my Test Library's project properties...
Now that the namespaces are identical between projects, I never have to worry about missed namespace declarations -- I can quickly create Types in the Test library and move them to the main library when I'm done. As an added bonus, when I change a namespace using Resharper, it will change my Test library as well. Here's what the TDD flow looks like using Resharper:
by
bryan
at
3:05 PM
0
comments
Yesterday I met a cashier who needed to use a calculator when I gave him $20.35 for a $10.34 item. Experiences like this are terrifying, and rather than let myself become reliant on tools with rich user interfaces, I like to give my brain and fingers a workout every now and then and use some command line tools. Today, I needed to make some changes to a legacy .NET 1.1 application. Rather than going through the hassle of installing Visual Studio 2003, I figured I could get by with our great NAnt scripts and Notepad++ for a short while. Apart from having to download and install the .NET 1.1 SDK, I ran into a few snags:
Our NAnt scripts need to run under the .NET 1.1 framework and require a specific version of NAnt. Fortunately, when we put the project together, we assumed that not everyone would have NAnt installed on their machines, so we created a "tools" folder in our solution and included the appropriate version of NAnt. To simplify calling the local NAnt version, we created a really simple batch file:
tools\nant\bin\nant.exe -buildfile:main.build -targetframework:net-1.1 %*
The NAnt "solution" task gave me some trouble. Dependencies that were wired into the csproj file with a valid HintPath were not being found. In particular, I had problems with my version of NUnit. It was referencing a .NET 2.0 version somewhere else on the machine. While I could have treated the symptom by copying the command line out of the log file, I decided to go to the source using Reflector. The NAnt "solution" task uses the registry to identify well known assembly locations from the following locations:
HKCU\SOFTWARE\Microsoft\VisualStudio\<Version>\AssemblyFolders
HKLM\SOFTWARE\Microsoft\VisualStudio\<Version>\AssemblyFolders
HKCU\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders
HKLM\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders
I found the culprit here:
Deleting this registry key did the trick, now it compiles fine.
by
bryan
at
4:21 PM
0
comments
...or how to have all the great benefits of clean code and 100% code coverage too.
Although the .NET 2.0 Runtime has been out for quite some time, I'm still surprised that most people are not aware that the 2.0 framework supports a concept known as "Friend Assemblies", made possible using the InternalsVisibleToAttribute. For me, this handy (and dare I say awesome) attribute solves an age-old problem frequently encountered with Test Driven Development, and when I first stumbled upon it about two years ago, my jaw hit the floor and I was all nerdy giddy about it.
This has all been blogged about before, but I want to comment on some of the best practices this approach affords us. As a general rule of thumb, you should always try to keep your Unit Tests out of your production code. After all, the classes needed for testing will never be used by end-users, so to prevent bloating up your assembly you should put the tests in a different assembly and leave 'em at home when you release the code. Unfortunately, this produces a strange side-effect: Types and Methods that would normally be marked as internal or private must be made public so that the external Test assembly can access them. You're left with a difficult compromise... either choose to violate your API access rules to support testing, or forgo all unit testing and code coverage for clean code. While the practice of exposing types is relatively harmless, it can introduce some negative side-effects into your project, especially if you're producing a library that is shared with other applications or third parties. Specifically, it can hurt usability and performance:
Here's a few links that refer to these best practices:
Fortunately, the InternalsVisibleTo attribute fixes these issues. By placing the attribute in your assembly, you can keep types as internal and still allow unit testing.
Using the attribute is quite simple. The attribute is placed in the assembly that contains the internal classes and methods that you want to expose to other "friend" assemblies. The attribute lists the "friend" assembly.
using System.Runtime.CompilerServices;
[assembly:InternalsVisibleTo("assemblyName")]
The MSDN documentation uses strong names when referring to friend assemblies; however, a strong name is not actually required. This is extremely useful if you're just starting your project or not ready to strong-name the assembly. Note that if you are using a strong name, it's the full public key and not just the public key token.
[assembly:InternalsVisibleTo("assemblyName, PublicKey=fff....")]
To get the full public key of your assembly, you can use the strong name tool that ships with the .NET Framework to extract the public key:
sn -Tp Code.dll
Alternatively, David Kean has published a handy tool that can help you generate the InternalsVisibleTo attribute, so you can simply paste it into your assembly. However, his site is presently being reworked. I have the binary downloaded from his site, though I have nowhere to host the file. Give me a shout if you're interested... and David, let us know when your site is back up.
Note: Although the strong name is optional, you should be using strong names on your assemblies as a best practice to prevent this type of runtime injection. And if you go down this route, all referenced assemblies must also be signed (all the more reason why you should be using strong-naming in the first place).
This rudimentary example shows how you can create a class that takes advantage of the InternalsVisibleTo attribute. There are two assemblies: "Code" is my main assembly has the InternalsVisibleTo attribute and public facing API, "Test" is my test library that references "Code". If these assemblies weren't friends, all Types within "Code" would have to be public.
// within Code.dll
[assembly: InternalsVisibleTo("Code.Test")]
namespace Code
{
internal static class StringUtility
{
public static string ProperCase(string input)
{
CultureInfo culture = Thread.CurrentThread.CurrentCulture;
return culture.TextInfo.ToTitleCase(input.ToLower(CultureInfo.InvariantCulture));
}
}
}
// within Code.Test.dll
namespace Code.Test
{
[TestFixture]
public class StringUtilityTest
{
[Test]
public void CanGetProperCaseFromInternalClass()
{
Assert.AreEqual("Hello", StringUtility.ProperCase("HELLO"));
}
}
}
Kudos to Rick Strahl for the ProperCase string tip.
So now that you've got your internal classes with test coverage goodness, treat yourself by opening up FxCop and viewing the reduced violations report.
This screen capture of FxCop shows a few standard FxCop violations (my assembly isn't strong-named, yet) and a violating public arguments warning.
Since most FxCop rules are centered around designing public APIs, classes that are marked as internal are exempt from certain rules. This snapshot shows how our internal class isn't subject to requiring additional validation logic.
by
bryan
at
3:32 PM
0
comments