Wednesday, May 21, 2008

log4net Configuration made simple through Attributes

I'm sure this is well documented, but for my own reference and your convenience, here's one from my list of favorite log4net tips and tricks: how to instrument your code so that log4net automatically picks up your configuration.

I've been so happy with how well log4net fits my application logging needs that most of my projects end up using it: console apps, web applications, class libraries. Needless to say, I use it a lot, and I get tired of writing the same configuration code over and over:

// requires: using System.IO; using log4net.Config;
private static void Main()
{
    // manually locate and load the log4net config file at startup
    string basePath = AppDomain.CurrentDomain.BaseDirectory;
    string filePath = Path.Combine(basePath, "FileName.log4net");
    XmlConfigurator.ConfigureAndWatch(new FileInfo(filePath));
}

log4net documentation refers to a Configuration Attribute (XmlConfiguratorAttribute), but it can be frustrating to use if you're not sure how to set it up. The trick is how you name your configuration file and where you put it. I'll walk through how I set it up...

log4net using XmlConfiguratorAttribute Walkthrough

  1. Add an Assembly Configuration Attribute: log4net will look for this configuration attribute the first time you make a call to a logger. I typically give my configuration file a "log4net" extension. Place the following configuration attribute in the AssemblyInfo.cs file in the assembly that contains the main entry point for the application.

    [assembly: log4net.Config.XmlConfigurator(ConfigFileExtension = "log4net", Watch = true)]

  2. Create your configuration file: As mentioned previously, the name of the configuration file is important, as is where you put it. In general, the configuration file name follows the convention: the assembly's file name (including its .exe or .dll extension) plus the ".log4net" extension, e.g., Program.exe.log4net. The file needs to be in the base folder of the application: for WinForms and Console applications it resides in the same folder as the main executable; for ASP.NET applications it's the root of the web site, alongside the web.config file.

    Project Type     Project Output   log4net file name      Location
    WinForm App      Program.exe      Program.exe.log4net    with exe
    Console App      Console.exe      Console.exe.log4net    with exe
    Class Library    Library.dll      N/A
    ASP.NET          /bin/Web.dll     Web.dll.log4net        Web root (/)

  3. Define your Configuration Settings: Copy and paste the following sample into a new file. I'm using the RollingFileAppender with a rolling style of "Once", which creates a new log file every time the app is restarted.

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>

      <configSections>
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
      </configSections>

      <log4net>
        <!-- Define output appenders -->
        <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
          <file value="log.txt" />
          <appendToFile value="true" />
          <rollingStyle value="Once" /> <!-- new log file on restart -->
          <maxSizeRollBackups value="10"/> <!-- renames rolled files on startup 1-10, no more than 10 -->
          <datePattern value="yyyyMMdd" />
          <layout type="log4net.Layout.PatternLayout">
            <param name="Header" value="[START LOG]&#13;&#10;" />
            <param name="Footer" value="[END LOG]&#13;&#10;" />
            <conversionPattern value="%d [%t] %-5p %c [%x] - %m%n" />
          </layout>
        </appender>

        <!-- Setup the root category, add the appenders and set the default level -->
        <root>
          <level value="DEBUG" />
          <appender-ref ref="RollingLogFileAppender" />
        </root>

      </log4net>
    </configuration>
  4. Make a logging call as early as possible: In order for the configuration attribute to be invoked, you need to make a logging call in the assembly that contains that attribute. Note I declare the logger as static readonly as a JIT optimization.

    using System;
    using log4net;

    namespace example
    {
        public class Global : System.Web.HttpApplication
        {
            private static readonly ILog log = LogManager.GetLogger(typeof(Global));

            protected void Application_Start(object sender, EventArgs e)
            {
                log.Info("Web Application Start.");
            }
        }
    }
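
The same trick applies to console and WinForms apps. Here's a minimal sketch (the class name and message are placeholders) where the first logging call in Main triggers the attribute-based configuration:

using System;
using log4net;

namespace example
{
    internal class Program
    {
        private static readonly ILog log = LogManager.GetLogger(typeof(Program));

        private static void Main()
        {
            // the first call to a logger prompts log4net to process the assembly attribute
            log.Info("Application start.");
            Console.ReadLine();
        }
    }
}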

Cheers.


Thursday, May 15, 2008

TDD Tips: Getting value out of Code Coverage

If you're following true test driven development, you should be writing tests before you write the code. By definition, you only write the code that is required, so you should always have 100% code coverage.

Unfortunately, this is not always the case. We have legacy projects without tests; we're forced to cut corners; we leave things to finish later that we forget about. For that reason, we look to tools to give us a sense of confidence in the quality of our code. Code coverage is often (dangerously) seen as a confidence gauge. So to follow up on a few of my other TDD posts, I want to talk about what value code coverage can provide and how you should and shouldn't use it...

Let's start by looking at what code coverage will tell us...

  • Code coverage shows which parts of our code have been tested. This metric is usually inferred as a total percentage of code that has been tested.
  • Most coverage tools keep track of how many times methods have been visited. This value shows us how much or how little testing is represented for a specific code block, but as far as I know, there's no valuable overall metric. You could infer "top most tested" or "top least tested" metrics.

In some cases, code coverage can be used to contribute to a confidence level. I feel better about a large code base with 80% coverage than one with little or no coverage. But coverage is just statistical data -- it can be misleading...

Good Coverage doesn't mean Good Code
Having a high coverage metric cannot be used as an overall code quality metric. Code coverage cannot reveal that your code or tests haven't accounted for unexpected scenarios, so it's possible that buggy code with "just enough" tests can have high coverage.

Good Coverage doesn't mean Good Tests
A widely held belief of TDD is that the confidence level of the code is proportional to the quality of the tests. Code coverage tools can be very useful to developers to identify areas of the code that are missing tests, but should not be used as a benchmark for test quality. Tests can become meaningless when developers write tests to satisfy coverage reports instead of writing tests to prove functionality of the application. See the example below.

How a few bad tests ruin coverage

Developers can unknowingly write a test that invalidates coverage. To demonstrate, let's assume we have a really simple Person class. For the sake of argument, FirstName is always required, so we make it available through the constructor.


[TestFixture]
public class PersonTest
{
    [Test]
    public void CanCreatePerson()
    {
        Person p = new Person("Bryan");
        Assert.AreEqual("Bryan", p.FirstName);
    }
}

public class Person
{
    public Person(string firstName)
    {
        _first = firstName;
    }

    public virtual string FirstName
    {
        get { return _first; }
        set { _first = value; }
    }

    private string _first;
}

This is all well and good. However, a code coverage report would reveal that the FirstName property setter has no coverage.

Should we fix the code....


public Person(string firstName)
{
    FirstName = firstName; // virtual method call in constructor
                           // is an FxCop violation
}

... or the test?


[Test]
public void CanCreatePerson()
{
    Person p = new Person("bryan");
    Assert.AreEqual("bryan", p.FirstName);
    p.FirstName = "Bryan";
    Assert.AreEqual("Bryan", p.FirstName);
}

Trick question. Neither!

There are two ways to improve code coverage -- write more tests, or get rid of code. In this case, I would argue that it is better to remove the setter than to write any code just to satisfy coverage. (Wow, less really IS more!) Leave the property as read-only until some calling code needs to write to it, at which point the tests for that call site will provide the coverage you need.
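
Here's a minimal sketch of the trimmed-down Person class with the setter removed; the constructor test above now covers every line of it:

public class Person
{
    public Person(string firstName)
    {
        _first = firstName;
    }

    // read-only until some calling code actually needs to write to it
    public virtual string FirstName
    {
        get { return _first; }
    }

    private string _first;
}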

"But putting the setter back in is a pain!" -- sure it is. Alternatively, you can leave it in, but make sure you do not write a test for it. If the coverage remains zero for extended periods of time, remove it later. (If you can't remove it because some calling code is writing to it, you missed something in one of your tests.)

Note: In general, plain old value objects like our Person class won't need standalone tests. The exception to this is when you need tests to demonstrate specialized logic in getter/setter methods.

Coverage Tips for Your Project

  • Set goals for coverage: Talk to your team about coverage and gather their feedback early in the project. Identify areas that will be difficult to test and develop strategies to make your code more testable. Agree upon a level of acceptable coverage based on your timelines and these constraints. For most projects that start with TDD in mind, 70-80% is a very realistic target. I don't have any concrete data to back this up, but I imagine that effort increases by orders of magnitude after a certain percentage.
  • Watch for changes in coverage: Rather than looking at overall code coverage percentage as a quality metric, integrate coverage into your build or continuous integration process and look at the change in coverage between builds (see the sketch after this list). Coverage will fluctuate as a project matures; eventually it should level out and remain relatively constant between changes. Applaud when it goes up, recognize the hard work of your team when it stays the same, and investigate when it takes a steep drop. As an added bonus, the integrated coverage logs on your build server can be analyzed over time: it's amazing how developer churn, ramp-up, and changes in functionality/design/timelines can become evident in a graphed timeline of failed builds and drops in coverage.
  • Use Milestones: Whether you're in a waterfall or agile project, pick milestones in your project where you can look at coverage. I try to fit in at least one code review per iteration and kick them off with a look at code coverage reports ("Yikes! We don't have any tests for this entire namespace, maybe we should fix that.") When coverage is low, I use this time to evangelize the benefits of having tests. Set a goal for the next iteration and get buy-in from the team, management, and client for well written tests that bump up your coverage. It can be a fun motivator for the team.
  • Don't Force It: If you obsess about coverage, you're probably doing it wrong. Deliberately reworking code so that it will light up in the coverage report, or writing coverage-serving tests, yields little benefit -- let coverage come naturally by writing concise tests. If your tests don't reflect the functionality of the application, fix your tests; tests that serve only to satisfy coverage likely don't serve anybody.
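
To illustrate the "watch for changes" tip, here's a minimal sketch of a build-gate check. The file name, threshold, and plain-text format (a file holding the previous build's coverage percentage) are all hypothetical; a real integration would read whatever your coverage tool and build server actually emit:

using System;
using System.IO;

internal class CoverageGate
{
    private const double AllowedDrop = 2.0; // hypothetical threshold, in percentage points

    // exit code 1 fails the build when coverage takes a steep drop between builds
    private static int Main(string[] args)
    {
        double current = double.Parse(args[0]); // e.g. "74.3", parsed from this build's coverage report
        double previous = double.Parse(File.ReadAllText("coverage.last")); // stored by the previous build

        File.WriteAllText("coverage.last", current.ToString());

        if (current < previous - AllowedDrop)
        {
            Console.WriteLine("Coverage dropped from {0}% to {1}% -- investigate.", previous, current);
            return 1;
        }
        return 0;
    }
}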


Monday, May 12, 2008

TDD Tips: Unit Test Namespace considerations

In my last post, I highlighted some of the test-driven benefits of using the InternalsVisibleTo attribute. In keeping with the trend of TDD posts, here's a recent change in direction I've made about how to separate your tests from your code.

There's a debate and poll going on about where you should put your tests. The poll shows that the majority of developers are putting their tests in separate projects (yeaaaa!). Bill Simser's suggestion to have tests reside within the code is a belief that balances dangerously between heresy and pragmatism. Although I'm opposed to releasing tests with production code, one point I can identify with is the overhead of keeping namespaces between your code and your tests in sync. (Sorry Bill, if I wanted my end users to run my tests, I'd give them my Test library and tell them to download NUnit.) Along the same lines, at some point our organization picked up some advice that code namespaces should reflect their location in source control. This has proven effective for maintenance, as it makes it easier to track down Types when inspecting a stack-trace. Following this advice has led us to adopt a consistent naming strategy for assemblies and their namespaces:

Project     Namespace                Assembly
Component   Company.Component        Company.Component.dll
Test        Company.Component.Test   Company.Component.Test.dll

This works well, but I have a few hang-ups on this design. This strategy pre-dates most of our TDD efforts, and frankly it gets in the way. Here are my issues:

  • Namespace Mismatch: We attempt to model the same folder structure between projects and although the folder structure is the same, the namespaces are different. The type Customer would reside in Company.Component.Customer while the CustomerTest would reside in Company.Component.Test.Customer.
  • Pure TDD is difficult: When the namespaces are different, it's a lot of extra clicking if you want to create your Types as you write your Tests. You have to get out of the Test, create the Type in the Library project, switch back to the Test, and then add the appropriate namespace using statement. If you create the Type in the same file as the Test, you'll have to refactor the Test and Type namespaces when you move it to the library. Most of these issues get caught at compile time, but it's a real nuisance.

However, there is some great advice in the Framework Design Guidelines book which states that assembly names and namespaces don't necessarily have to match. From Brad Abrams site:

Please keep in mind that namespaces are distinct from DLL names. Namespaces represent logical groupings for developers whereas DLLs represent packaging and deployment boundaries. DLLs can contain multiple namespaces for product factoring and other reasons, and namespaces can span multiple DLLs. Since namespace factoring is different than assembly factoring, you should design them independently.

A great example is that there is no System.IO.dll in the .NET framework: System.IO.FileStream resides in MSCorLib.dll while System.IO.FileSystemWatcher resides in System.dll. So if we apply this concept to our solution and think of Tests as a subset of functionality with different packaging purposes, our code and test libraries look like this:

Project     Namespace           Assembly
Component   Company.Component   Company.Component.dll
Test        Company.Component   Company.Component.Test.dll
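
To make this concrete, here's a minimal sketch using the hypothetical Company.Component names from the table above -- two projects sharing one namespace:

// in Company.Component.dll (code project)
namespace Company.Component
{
    public class Customer
    {
        private string _name;

        public string Name
        {
            get { return _name; }
            set { _name = value; }
        }
    }
}

// in Company.Component.Test.dll (test project) -- same namespace, no extra usings needed
using NUnit.Framework;

namespace Company.Component
{
    [TestFixture]
    public class CustomerTest
    {
        [Test]
        public void CanSetName()
        {
            Customer c = new Customer();
            c.Name = "Bryan";
            Assert.AreEqual("Bryan", c.Name);
        }
    }
}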

Here's a snap of my Test Library's project properties: the Assembly name is Company.Component.Test, while the Default namespace is just Company.Component.

Now that the namespaces are identical between projects, I never have to worry about missed namespace declarations -- I can quickly create Types in the Test library and move them to the library when I'm done. As an added bonus, when I change the namespace using Resharper, it will change my Test library as well. Here's what the TDD flow looks like using Resharper:

  1. Write the test, refer to a new non-existent Type.
  2. Use Resharper to generate the missing class. The class is created in the same file as the test and is marked internal.
  3. Flesh out the class using additional tests.
  4. When the class is finished, right click the class and choose Refactor -> Move. Specify a new file, the name will automatically reflect the Type name.
  5. Drag the new file while holding the SHIFT key from the Test library to the code project. This will physically move the file between the projects and automatically update the project definitions.

Caveats:

  • Folder Issues: I should point out that this doesn't resolve the folder renaming issue. If you rename a folder in your code library, you'll have to do the same in the Test library. Mind you, Resharper doesn't automatically fix folders when you rename them anyway, so you're going to have to fix this yourself.
  • Maintenance Strategy: The maintenance strategy that allows you to identify the location of a Type in source control based on a stack-trace is partially broken with this design. I say partially because a stack-trace should really only be a concern for production code, and stack-traces for unit tests don't provide much in the context of a Test Runner. Still, to support troubleshooting, I encourage developers to follow a "Test" suffix naming convention for their tests.
  • IntelliSense Confusion: With your Test and Code library sharing the same namespaces, both TestFixtures and Types will show up in IntelliSense when you write code in your Test library. Some might see this as noise when writing tests; others might use it as a good holistic view of classes and their associated Tests. If this really bothers you, you could mark your tests with an attribute that hides them from IntelliSense (see the sketch below).
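
For that last point, one real candidate is the EditorBrowsable attribute from System.ComponentModel. A minimal sketch follows; note this only hides the type from IntelliSense in code that references the compiled Test assembly, and IDE behavior varies:

using System.ComponentModel;
using NUnit.Framework;

namespace Company.Component
{
    [EditorBrowsable(EditorBrowsableState.Never)] // hide from IntelliSense in consuming code
    [TestFixture]
    public class CustomerTest
    {
        [Test]
        public void CanSetName()
        {
            // ...
        }
    }
}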


Wednesday, May 07, 2008

Compiling .NET 1.1 using NAnt

Yesterday I met a cashier who needed to use a calculator when I gave him $20.35 for a $10.34 item. Experiences like this are terrifying, and rather than let myself become reliant on tools with rich user interfaces, I like to give my brain and fingers a workout every now and then and use some command line tools. Today, I needed to make some changes to a legacy .NET 1.1 application. Rather than going through the hassle of installing Visual Studio 2003, I figured I could get by with our great NAnt scripts and Notepad++ for a short while. Apart from having to download and install the .NET 1.1 SDK, I ran into a few snags:

Running NAnt in 1.1

Our NAnt scripts need to run under the .NET 1.1 framework and require a specific version of NAnt. Fortunately, when we put the project together, we assumed that not everyone would have NAnt installed on their machines, so we created a "tools" folder in our solution and included the appropriate version of NAnt. To simplify calling the local NAnt version, we created a really simple batch file:

tools\nant\bin\nant.exe -buildfile:main.build -targetframework:net-1.1 %*

Missing or Wrong References

The nant "solution" task gave me some trouble. Dependencies that were wired into the csproj file with a valid HintPath were not being found. In particular, I had problems with my version of NUnit. It was referencing a .NET 2.0 version somewhere else on the machine. While I could have treated the symptom by copying the command line out of the log file, I decided to go to the source using Reflector. The NAnt "solution" task uses the registry to identify well known assembly locations from the following locations:

HKCU\SOFTWARE\Microsoft\VisualStudio\<Version>\AssemblyFolders
HKLM\SOFTWARE\Microsoft\VisualStudio\<Version>\AssemblyFolders
HKCU\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders
HKLM\SOFTWARE\Microsoft\.NETFramework\AssemblyFolders
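
If you want to audit these keys yourself, here's a minimal sketch using the standard Microsoft.Win32 registry APIs (the VisualStudio version segment -- 7.1 for VS2003 -- is an assumption; substitute your own):

using System;
using Microsoft.Win32;

internal class AssemblyFolderAudit
{
    private static void Main()
    {
        // the same keys the NAnt "solution" task probes
        Dump(Registry.CurrentUser,  @"SOFTWARE\Microsoft\VisualStudio\7.1\AssemblyFolders");
        Dump(Registry.LocalMachine, @"SOFTWARE\Microsoft\VisualStudio\7.1\AssemblyFolders");
        Dump(Registry.CurrentUser,  @"SOFTWARE\Microsoft\.NETFramework\AssemblyFolders");
        Dump(Registry.LocalMachine, @"SOFTWARE\Microsoft\.NETFramework\AssemblyFolders");
    }

    private static void Dump(RegistryKey hive, string path)
    {
        RegistryKey key = hive.OpenSubKey(path);
        if (key == null) return;

        foreach (string name in key.GetSubKeyNames())
        {
            // the default value of each subkey holds the registered folder path
            RegistryKey sub = key.OpenSubKey(name);
            Console.WriteLine("{0} = {1}", name, sub.GetValue(null));
        }
    }
}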

I found the culprit: an AssemblyFolders entry pointing at the newer, .NET 2.0 build of NUnit elsewhere on the machine. Deleting that registry key did the trick; now it compiles fine.


Tuesday, May 06, 2008

TDD Tips: InternalsVisibleTo - Keep your API clean

...or how to have all the great benefits of clean code and 100% code coverage too.

Although the .NET 2.0 Runtime has been out for quite some time, I'm still surprised that most people are not aware that the 2.0 framework supports a concept known as "Friend Assemblies", made possible using the InternalsVisibleToAttribute. For me, this handy (and dare I say awesome) attribute solves an age-old problem frequently encountered with Test Driven Development, and when I first stumbled upon it about two years ago, my jaw hit the floor and I was all nerdy giddy about it.

This has all been blogged about before, but I want to comment on some of the best practices this approach affords us. As a general rule of thumb, you should always try to keep your Unit Tests out of your production code. After all, the classes needed for testing will never be used by end-users, so to prevent bloating up your assembly you should put the tests in a different assembly and leave 'em at home when you release the code. Unfortunately, this produces a strange side-effect: Types and Methods that would normally be marked as internal or private must be made public so that the external Test assembly can access them. You're left with a difficult compromise... either choose to violate your API access rules to support testing, or forgo all unit testing and code coverage for clean code. While the practice of exposing types is relatively harmless, it can introduce some negative side-effects into your project, especially if you're producing a library that is shared with other applications or third parties. Specifically, it can hurt usability and performance:

  • Usability of your assembly will be reduced because users will have a full gamut of Types to choose from. A clean API with only a few public facing classes is easier to understand than dozens of utility and helper classes. If you only have a handful of classes, this doesn't apply to you -- but if you've ever inherited a project with hundreds of Types and piss-poor documentation, I know you know what I'm talking about.
  • Performance of your API will be compromised if you follow FxCop recommendations -- which, btw, is good advice. With all these public facing types you'll need additional parameter-validation and error handling because you can't guarantee how third-parties will access your Types (see the sketch after this list). If your app is for internal use, you can shirk this responsibility, but be warned: the onus is on you to enforce proper use of your library and to ignore several dozen FxCop violations. If you have third-parties using your library, this extra plumbing is hard to avoid, so it's more likely the Types and Methods are kept private/internal and the tests are simply neglected. Which, IMHO, is where you really need the tests, since the bugs are more likely to be nested deep in your implementation rather than the publicly exposed API, and hey... bugs are bad for business.
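
To illustrate the extra plumbing, here's a hedged sketch (class and method names are made up) of what FxCop's argument-validation guidance pushes you toward on public members, next to what an internal member can get away with:

using System;

public class PublicApi
{
    // public: FxCop expects every argument validated before use
    public string Normalize(string input)
    {
        if (input == null)
            throw new ArgumentNullException("input");

        return input.Trim();
    }
}

internal class InternalHelper
{
    // internal: the only callers are our own code and our friend Test assembly,
    // so the defensive plumbing (and its FxCop rules) no longer applies
    internal string Normalize(string input)
    {
        return input.Trim();
    }
}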


Fortunately, the InternalsVisibleTo attribute fixes these issues. By placing the attribute in your assembly, you can keep types as internal and still allow unit testing.

Attribute Usage Examples

Using the attribute is quite simple. The attribute is placed in the assembly that contains the internal classes and methods that you want to expose to other "friend" assemblies. The attribute lists the "friend" assembly.

using System.Runtime.CompilerServices;

[assembly:InternalsVisibleTo("assemblyName")]

The MSDN documentation uses strong names when referring to the friend assemblies; however, a strong name is not required. This is extremely useful if you're just starting your project or aren't ready to strong-name the assembly. Note that if you are using a strong name, it's the full public key and not just the public key token.

[assembly:InternalsVisibleTo("assemblyName, PublicKey=fff....")]

To get the full public key of your assembly, you can use the strong name tool that ships with the .NET Framework to extract the public key:

sn -Tp Code.dll

Alternatively, David Kean has published a handy tool that can help you generate the InternalsVisibleTo attribute, so you can simply paste it into your assembly. However, his site is presently being reworked. I have the binary downloaded from his site, though I have nowhere to host the file. Give me a shout if you're interested... and David, let us know when your site is back up.

Note: Although the strong-name is optional, you should be using strong-names on your assemblies as a best practice: without one, any assembly that takes your friend assembly's name can gain access to your internals. And if you go down this route, all referenced assemblies must also be signed (all the more reason why you should be using strong-naming in the first place).

A Code Example...

This rudimentary example shows how you can create a class that takes advantage of the InternalsVisibleTo attribute. There are two assemblies: "Code" is my main assembly with the InternalsVisibleTo attribute and the public facing API; "Test" is my test library that references "Code". If these assemblies weren't friends, all Types within "Code" would have to be public.

// within Code.dll
using System.Globalization;
using System.Runtime.CompilerServices;
using System.Threading;

[assembly: InternalsVisibleTo("Code.Test")]

namespace Code
{
    internal static class StringUtility
    {
        public static string ProperCase(string input)
        {
            CultureInfo culture = Thread.CurrentThread.CurrentCulture;
            return culture.TextInfo.ToTitleCase(input.ToLower(CultureInfo.InvariantCulture));
        }
    }
}

// within Code.Test.dll
using Code;
using NUnit.Framework;

namespace Code.Test
{
    [TestFixture]
    public class StringUtilityTest
    {
        [Test]
        public void CanGetProperCaseFromInternalClass()
        {
            Assert.AreEqual("Hello", StringUtility.ProperCase("HELLO"));
        }
    }
}

Kudos to Rick Strahl for the ProperCase string tip.

The Payoff...

So now that you've got your internal classes with test coverage goodness, treat yourself by opening up FxCop and viewing the reduced violations report.

FxCop before:

This screen capture of FxCop shows a few standard FxCop violations (my assembly isn't strong-named, yet) and a "validate arguments of public methods" warning.

FxCop after:

Since most FxCop rules are centered around designing public APIs, classes that are marked as internal are exempt from certain rules. This snapshot shows that our internal class isn't required to carry the additional validation logic.



Thursday, May 01, 2008

.NET Garbage Collection Behavior for Release Code

Every so often, I pick up my copy of Jeffrey Richter's CLR via C#, which provides a great low-level look at the .NET Framework intrinsics. When I read this book two things are likely to happen: either I fall fast asleep, or I discover something that makes my head snap backward at break-neck speed. Here's a great mind bender on garbage collection. Take this simple console program:

using System;
using System.Threading;

public class Program
{
    public static void Main()
    {
        // set up a callback for every two seconds
        Timer t = new Timer(Callback, null, 0, 2000);

        Console.ReadLine();
    }

    private static void Callback(object state)
    {
        Console.WriteLine("Callback called.");
        GC.Collect();
    }
}

This simple console program when compiled in Debug mode has different behavior than when it's compiled in Release mode.

Skeptic? Try it.

Debug Mode:

  1. Compile the solution in Debug mode.
  2. Open a command-prompt and execute the app
  3. The callback is called every two seconds until the Console reads a line.

Release Mode:

  1. Compile the solution in Release mode.
  2. Open a command-prompt and execute the app
  3. The callback is only called once.

...does your neck hurt? ;-)

In Release mode, the code and the JIT compiler are optimized. At the first callback where we force garbage collection, the garbage collector determines that our timer is not used in the remainder of the Main method; it is therefore not "rooted" and can be safely garbage collected.

As this behavior would wreak havoc on debugging sessions, the JIT compiler treats un-optimized (Debug) code differently: it artificially "roots" all variables within a method to prevent them from being collected prematurely. Note that Release code running under a Visual Studio debugging session behaves the same way as Debug code, which is why you need to run the app from the command line.

You can fix this code by adding another call to our timer object further down the method. When the garbage collector runs, it will walk the stack, determine that our variable is "rooted", and our Release code will work just like its Debug counterpart.

Here's our Main method modified to prove that point:

public static void Main()
{
    Timer t = new Timer(Callback, null, 0, 2000);
    Console.ReadLine();

    // our object is now rooted and will survive garbage collection
    t.Dispose();
}
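
Another common way to keep the timer reachable is GC.KeepAlive, a BCL method that exists for exactly this scenario -- it does nothing at runtime, but it extends the object's reachability to the point of the call:

public static void Main()
{
    Timer t = new Timer(Callback, null, 0, 2000);
    Console.ReadLine();

    // the timer cannot be collected before this line executes
    GC.KeepAlive(t);
}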

Jeff also points out that simply adding code like:

t = null;

...won't change anything, since this line will be optimized out of the code during JIT compilation. In short, this means that objects don't have to fall out of scope (i.e., reach the end of the method) to be garbage collected. The garbage collector operates under the assumption that all objects are garbage until proven useful, regardless of where the object appears on the stack. So if you're not using it, the garbage collector is going to throw it out.
