
Wednesday, July 15, 2020

Exclusive Lock comes to Azure Pipelines

semaphore red left

As part of Sprint 171, the Azure DevOps team introduced a much-needed feature for Multi-Stage YAML Pipelines: the Exclusive Lock "check" that can be applied to your environments. The feature silently slipped into existence without any mention in the release notes, but I was personally thrilled to see it. (By the time this post was written, the Sprint 172 release notes had announced the feature.)

Although Multi-Stage YAML Pipelines have been available for a while, there are still some subtle differences between their functionality and what's available through Classic Release Pipelines. Fortunately, over the last few sprints, we've seen a few incremental features that help close that feature-parity gap, with more to come. One of the missing features is something known as "Deployment Queuing Settings" -- a Classic Release Pipeline feature that dictates how pipelines are queued and executed. The Exclusive Lock check solves a few of these pain points but falls short on some of the more advanced options.

In this post, I'll walk through what Exclusive Locks are, how to use them, and some other considerations.

Deployments and Environments

Let's start with a multi-stage pipeline with a few stages, where we perform CI activities and each subsequent stage deploys into an environment. Although we could write our YAML to build and deploy using standard tasks, we're going to use the special "deployment" job that tracks builds against Environments.

trigger:
  - master

stages:
  - stage: ci_stage
    ...steps to compile and produce artifacts

  - stage: dev_stage
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    dependsOn: ci_stage
    jobs:
      - deployment: dev_deploy
        environment: dev
        strategy:
          runOnce:
            deploy:
              ...steps to deploy

  - stage: test_stage
    dependsOn: dev_stage
    ...

If we were to run this hypothetical pipeline, the code would compile in the CI stage and then immediately start deploying into each environment in sequence. Although we definitely want to have our builds deploy into the environments in sequence, we might not want them to advance into the environments automatically. That's where Environment Checks come in.

Environment Checks

As part of multi-stage YAML deployments, Azure DevOps has introduced the concept of Environments, which are controlled outside of your pipeline. You can set special "Checks" on an environment that must be fulfilled before a deployment can occur. On a technical note, environment checks bubble up from the deployment job to the stage, so the checks must be satisfied before the stage is allowed to start.

For our scenario, we're going to assume that we don't want to automatically go to QA, so we'll add an Approval Check that allows our testing team to approve the build before deploying into their environment. We'll add approval checks for the other stages, too. Yay workflow!

approval-checks

At this point, everything is great: builds deploy to dev automatically and then pause at the test_stage until the testing team approves. Later, we add more developers to our project and the frequency of the builds starts to pick up. Almost immediately, the single agent build pool starts to fill up with builds and the development team start to complain that they're waiting a really long time for their build validation to complete.

Obviously, we add more build agents. Chaos ensues.

What just happen'd?

When we introduced additional build agents, we were expecting multiple CI builds to run simultaneously but we probably weren't expecting multiple simultaneous deployments! This is why the Exclusive Lock is so important.

By introducing an Exclusive Lock, all deployments are forced to happen in sequence. Awesome. Order is restored.

There unfortunately isn't a lot of documentation available for the Exclusive Lock, but according to the description:

“Adding an exclusive lock will only allow a single run to utilize this resource at a time. If multiple runs are waiting on the lock, only the latest will run. All others will be canceled.”

Most of this is obvious, but what does 'All others will be canceled' mean?

Canceling Queued Builds

My initial impression of the "all other [builds] will be canceled" got me excited -- I thought this was similar to the “deploy latest and cancel the others” setting of the Deployment Queuing Settings:

deployment-queue-settings

Unfortunately, this is not the intention of the Exclusive Lock. It focuses only on the sequencing of builds, not on the pending queue. To understand what “all others will be canceled” means, let's assume we have 3 available build agents and use the az devops CLI to trigger three simultaneous builds.

az pipelines run --project myproject --name mypipeline 
az pipelines run --project myproject --name mypipeline 
az pipelines run --project myproject --name mypipeline

In this scenario, all three CI builds happen simultaneously, but the fun happens when all three pipeline runs hit the dev_stage. As expected, the first pipeline takes the exclusive lock on the development environment while its deployment runs, and the remaining two builds queue up waiting for the lock to be released. When the first build completes, the second build is automatically marked as canceled and the last build begins deployment.

exclusive-lock-queuing

This is awesome. However, I was really hoping that I could combine the Exclusive Lock with the Approval Gate to recreate the same functionality as the Deployment Queuing option: approving the third build would cancel the previous builds. Unfortunately, this isn’t the case. I’m currently evaluating whether I can write some deployment automation in my pipeline to cancel other pending builds.
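Until then, here’s a rough sketch of what that automation could look like against the Builds REST API. This is a hypothetical example: the organization, project and PAT values are placeholders, and you’d want to verify the endpoint and payload against the official docs before relying on it. The idea is to keep the most recently queued run and cancel the rest by PATCHing their status to cancelling.

```python
import base64
import json
from urllib import request


def runs_to_cancel(queued_runs):
    """Given queued runs (dicts with 'id' and 'queueTime'), keep only the
    most recently queued run and return the rest as cancellation candidates.
    ISO-8601 timestamps sort correctly as strings."""
    ordered = sorted(queued_runs, key=lambda r: r["queueTime"])
    return ordered[:-1]  # everything except the latest


def cancel_build(organization, project, build_id, pat):
    """PATCH the build's status to 'cancelling' via the Builds REST API.
    'organization', 'project' and 'pat' are placeholders for your own values."""
    url = (f"https://dev.azure.com/{organization}/{project}"
           f"/_apis/build/builds/{build_id}?api-version=6.0")
    token = base64.b64encode(f":{pat}".encode()).decode()
    req = request.Request(
        url,
        data=json.dumps({"status": "cancelling"}).encode(),
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
        method="PATCH")
    return request.urlopen(req)
```

The selection logic is deliberately kept in a pure function so it can be tested without touching the service.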

Wrapping Up

In my opinion, Exclusive Locks are a hidden gem of Sprint 171 as they’re essential if you’re automatically deploying into an environment without an Approval Gate. This feature recreates the “deploy all in sequence” feature of Classic Release Pipelines. The jury is still out on canceling builds from automation. I’ll keep you posted.

Happy coding!

Tuesday, July 14, 2020

Using Templates to improve Pull Requests and Work-Items (Part 2)

In my previous post, I outlined how to set up templates for pull-requests. Today we’ll focus on how to configure work-items with some project-specific templates. We’ll also look at how you can create these customizations for all projects within the enterprise, and the motivations for doing so.

While having a good pull-request template can improve clarity and reduce the effort needed to approve pull-requests, having well-defined work-items is equally important, as they can drive and shape the work that needs to happen. We can define templates for our work-items to encourage work-item authors to provide the right level of detail (such as steps to reproduce a defect), or we can use templates to reduce the effort for commonly created work-items (such as fields that are commonly set when creating a technical-debt work-item).

Creating Work-Item Templates

Although you can define pull-request templates as files in your git repository, Azure DevOps doesn’t currently support customizing work-items as managed source files. This is largely due to the complexity of work-item structure and the level of customization available, so our only option to date is to manage templates through the Azure Boards user-interface. Fortunately, it’s relatively simple, and there are a few different ways you can set up and customize your templates – you can either specify customizations through the Teams configuration for your Project, or you can extract a template from an existing work-item.

As extracting from an existing work-item is easier, we’ll look at this first.

Creating Templates from Existing Work-Items

To create a template from an existing work item, simply create a new work-item that represents the content that you’d like to see in your template. The great news is that our template can capture many different elements, ranging from the description and other commonly used fields to more specialized fields like Sprint or Labels.

It’s important to note that templates are team specific, so if you’re running a project with multiple scrum teams, each team can self-organize and create templates that are unique to their needs.

Here’s an example of a user story with the description field pre-defined:

work-item-example

Once we like the content of the story, we can convert it into a template using the ellipsis menu (…) Templates –> Capture:

work-item-capture-template

The capture dialog allows us to specify which fields we want to include in our template. This typically populates with the fields that have been modified, but you can remove or add any additional fields you want:

capture-template-dialog

As some fields are stored in the template as HTML, using this technique of creating a template from an existing work-item is especially handy.

Customizing Templates

Once you’ve defined your templates, you’ll find them in Settings –> Team Configuration, under the Templates sub-navigation item.

edit-template

Applying Templates

Once you have the template(s) created, there are a few ways you can apply them to your work-items: you can apply the template to the work-item while you’re editing it, or you can apply it to the work-item from the backlog. Both activities are achieved using the ellipsis menu: Templates –> <template-name>.

The latter option of applying the template from the Backlog is extremely useful because you can apply the template to multiple items at the same time.

assign-template-from-backlog

With some creative thinking, templates can be used like macros for commonly performed activities. For example, I created a “Technical Debt” template that adds a TechDebt tag, lowers the priority, and changes the Value Area to Architectural.

Creating Work-Items from Templates

If you want to apply the template to work-items as you create them, you’ll need to navigate to a special URL that is provided with each template (Settings –> Boards: Team Configuration –> Templates).

get-link-for-template

The Copy Link option copies the template’s unique URL to the clipboard, which you can circulate to your team. Personally, I like to create a Markdown widget on my dashboard that allows team members to navigate to this URL directly.

create-work-item-from-dashboard
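For example, the widget content can be a simple list of markdown links, where each URL is the template link copied from the Templates page (the URLs below are placeholders):

```markdown
#### Create a work-item from a template
- [New User Story (team defaults)](https://dev.azure.com/{organization}/{project}/...)
- [New Technical Debt item](https://dev.azure.com/{organization}/{project}/...)
```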

Going Further – Set defaults for Process Template

Unfortunately, there’s no mechanism to specify which work-item template should be used as the default for a team. You can, however, provide these customizations at the Process level, which applies the settings to all teams using that process template. Generally speaking, you should reserve this for enterprise-wide changes.

Note that you can’t directly edit the default process templates; you’ll need to create a new process template based on the default: Organization Settings –> Boards –> Process:

process-template

Within the process, you can bring up any of the work-items into an editor that lets you rearrange the layout and contents of the work-item. To give the Description field a default value, we select the Edit option in the ellipsis menu:

edit-process-template

Remembering that certain fields are HTML, we can set the default for our user story by modifying the default options:

edit-process-template-field

Wrapping up

Hopefully these last two posts on templates for pull-requests and work-items have given you some ideas on how to quickly bring some consistency to your projects.

Happy coding!

Monday, June 29, 2020

Using Templates to improve Pull Requests and Work-Items (Part 1)

I’m always looking for ways to improve the flow of work. I have a few posts I want to share on using templates for pull requests and work-items. Today, I want to focus on some templates that you can add to your Azure DevOps pull requests to provide some additional context for the work.

Templates for Pull Requests

Pull Requests are a crucial component of our daily work. They help drive our continuous delivery workflows and because they’re accessible from our git history long after the pull-request has been completed, they can serve as an excellent reference point for the work. If you review a lot of pull-requests in your day, a well-written pull-request can make the difference between a good and bad day.

Not many folks realize that Azure DevOps supports pre-populating your pull request with a default template. It can even provide customized messages for specific branches. And because Pull Requests for Azure Repos support markdown, you can provide a template that encourages your team to provide the right amount of detail (and look good, too).

Default Pull Request Template

To create a single template for all your pull requests, create a markdown file named pull_request_template.md and place it in the root of your repository or in a folder named either .azuredevops, .vsts, or docs. For example:

  • .azuredevops/pull_request_template.md
  • .vsts/pull_request_template.md
  • docs/pull_request_template.md
  • <root>/pull_request_template.md

A sample pull request might look like:

----
Delete this section before submitting!

Please ensure you have the following:

- PR Title is meaningful
- PR Title includes work-item number
- Required reviewers is populated with people who must review these changes
- Optional reviewers is populated with individuals who should be made aware of these changes
----
# Summary

_Please provide a high-level summary of the changes and notes for the reviewers_

- [ ] Code compiles without issues or warnings
- [ ] Code passes all static code-analysis (SonarQube, Fortify SAST)
- [ ] Unit tests provided for these changes

## Related Work

These changes are related to the following PRs and work-items:

_Note: use !<number> to link to PRs, #<number> to link to work items_

## Other Notes

_if applicable, please note any other fixes or improvements in this PR_

As you can see, I've provided a section at the top with some guidance on things to do before creating the pull request, such as making sure it has a meaningful name, while the following section prompts the pull-request author to provide more detail. Your kilometrage will vary, but you may want to work with your team to make a template that fits your needs.

Pull request templates can be written in markdown, so it’s possible to include images and tables. My favourite feature is the checkboxes (- [ ]), which can be marked as completed without having to edit the content.

Branch Specific Templates

You may find the need to create templates that are specific to the target branch. To do this, create a special folder named “pull_request_template/branches” within one of the same folders mentioned above and create a markdown file with the name of the target branch. For example:

  • .azuredevops/pull_request_template/branches/develop.md
  • .azuredevops/pull_request_template/branches/release.md
  • .azuredevops/pull_request_template/branches/master.md

When creating your pull-request, Azure DevOps will attempt to find the appropriate template by matching on these templates first. If a match cannot be found, the pull_request_template.md is used as a fallback option.

Ideally, I’d prefer templates based on the source branch, as we could provide pull-request guidance for bug/*, feature/*, and hotfix/* branches. However, if we focus on develop, release and master, we can support the following scenarios:

  • develop.md: provide an overview of improvements of a feature, evidence for unit tests and documentation, links to work-items and test-cases, etc
  • release.md: provide high-level overview of the items in this release, related dependencies and testing considerations
  • master.md: (optional) provide a summary of the release and its related dependencies

Additional Templates

In addition to the branch-specific and default templates, you can create as many templates as you need. You could create specific templates for critical bug fixes, feature proposals, etc. In this scenario, I’d use that initial delete-me section to educate the user on which template they should use.

You’re obviously not limited to a single template either. If you have multiple templates available, you can mix and match from any of the available templates to fit your needs. Clicking “add template” simply appends the selected template to the body of the pull-request.

create-pull-request

Other observations

Here are a few other observations that you might want to consider:

• If the pull-request contains only a single commit, the name of the pull-request will default to the commit message. The commit message is also appended to the bottom of the pull-request automatically.
• If your pull-request contains multiple commits, the name of the pull-request is left empty. The commit messages do not prepopulate into the pull-request, but an “Add commit messages” button appears. The commit messages are added as-is to the bottom of the pull-request, regardless of where the cursor is.

Conclusion

Hopefully this sheds some light on a feature you might not have known existed. In my next post, we’ll look at how we can provide templates for work-items.

Happy Coding!

Monday, February 10, 2020

Challenges with Parallel Tests on Azure DevOps

As I wrote about last week in Adventures in Code Spelunking, relentlessly digging into problems can be a time-consuming but rewarding task.

That post centers around a tweet I made while struggling with an issue with VSTest on my Azure DevOps Pipeline. I feel I'm doing something interesting here: I've associated my automated tests with my test cases, and I'm asking the VSTest task to run all the tests in the Plan; this is considerably different from just running the tests that are contained in the test assemblies. The challenge at the time was that the test runner wasn't finding any of my tests. My spelunking exercise revealed that the runner required an array of test suites, despite the fact that the user interface restricts you to picking only one. I modified my YAML pipeline to contain a comma-delimited list of suites. Done!

Next challenge, unlocked!

Unfortunately, this would turn out to be a short victory, as I quickly discovered that although the VSTest task was able to find the test cases, the test run would simply hang with no meaningful insight as to why.

[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (64-bit .NET Core 3.1.1)
[xUnit.net 00:00:00.52]   Discovering: MyTests
[xUnit.net 00:00:00.57]   Discovered: MyTests
[xUnit.net 00:00:00.57]   Starting: MyTests
-> Loading plugin D:\a\1\a\SpecFlow.Console.FunctionalTests\TechTalk.SpecFlow.xUnit.SpecFlowPlugin.dll
-> Using default config

So, on a wild hunch, I changed my test plan so that only a single test case was automated, and it worked. What gives?

Is it me, or you? (it’s probably you)

The tests work great on my local machine, so it’s easy to fall into the trap of assuming the problem isn’t me. But to truly understand the problem is to be able to recreate it locally. And to do that, I’d need to strip away all the unique elements until I had the most basic setup.

My first assumption was that it might actually be the VSTest runner -- a possible issue with the “Run Test Plan” option I was using. So I modified my build pipeline to just run my unit tests like normal regression tests. Surprisingly, the results were the same. So, maybe it’s my tests.

On a hunch that I might have a threading deadlock somewhere in my tests, I hunted through my solution looking for rogue asynchronous methods and that notorious deadlock maker, Task.Result. There were none that I could see. So, maybe there’s a mismatch in the environment setup somehow?

Sure enough, I had some mismatches. My test runner at the command-prompt was an old version. The server build agent was using a different version of the test framework than what I had referenced in my project. After upgrading NuGet packages and Visual Studio versions and fixing the pipeline to exactly match my environment – I still was unable to reproduce the problem locally.

I have a fever, and the only prescription is more logging

Well, if it’s a deadlock in my code, maybe I can introduce some logging into my tests to put a spotlight on the issue. After some initial futzing around (I’m amazed futzing wasn’t caught by spellcheck, btw), I was unable to get any of these log messages to appear in my output. Maybe xUnit has a setting for this?

Turns out, xUnit has a great logging capability, but it requires the magical presence of the xunit.runner.json file in the working directory.

{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "diagnosticMessages": true
}

The presence of this file reveals this simple truth:

[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (64-bit .NET Core 3.1.1)
[xUnit.net 00:00:00.52]   Discovering: MyTests (method display = ClassAndMethod, method display options = None)
[xUnit.net 00:00:00.57]   Discovered: MyTests (found 10 test cases)
[xUnit.net 00:00:00.57]   Starting: MyTests (parallel test collection = on, max threads = 8)
-> Loading plugin D:\a\1\a\SpecFlow.Console.FunctionalTests\TechTalk.SpecFlow.xUnit.SpecFlowPlugin.dll
-> Using default config

And when compared to the server:

[xUnit.net 00:00:00.00] xUnit.net VSTest Adapter v2.4.1 (64-bit .NET Core 3.1.1)
[xUnit.net 00:00:00.52]   Discovering: MyTests (method display = ClassAndMethod, method display options = None)
[xUnit.net 00:00:00.57]   Discovered: MyTests (found 10 test cases)
[xUnit.net 00:00:00.57]   Starting: MyTests (parallel test collection = on, max threads = 2)
-> Loading plugin D:\a\1\a\SpecFlow.Console.FunctionalTests\TechTalk.SpecFlow.xUnit.SpecFlowPlugin.dll
-> Using default config

Yes, Virginia, there is a thread contention problem

The build agent on the server has only 2 virtual CPUs allocated, and both executing tests are likely trying to spawn additional threads to perform their asynchronous operations. By setting maxParallelThreads to 2, I was able to completely reproduce the problem from the server.
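To mimic the constrained agent locally, the same xunit.runner.json file can pin the thread count via the maxParallelThreads setting:

```json
{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "diagnosticMessages": true,
  "maxParallelThreads": 2
}
```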

I can disable parallel execution in the tests by adding the following to the assembly:

[assembly: CollectionBehavior(DisableTestParallelization = true)]

…or by disabling parallel execution in the xunit.runner.json:

{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "diagnosticMessages": true,
  "parallelizeTestCollections": false
}


Friday, February 07, 2020

Adventures in Code Spelunking

It started innocently enough. I had an Azure DevOps Test Plan that I wanted to associate some automation to. I’d wager that there are only a handful of people on the planet who’d be interested in this, and I’m one of them, but the walk-throughs in Microsoft’s online documentation seemed compatible with my setup – so why not? With some time on a Saturday afternoon and some horrible weather outside, I decided to try it out. After going through all the motions, my first attempt failed spectacularly with no meaningful errors.

I re-read the documentation, verified my setup, and it failed a dozen more times. Google and StackOverflow yielded no helpful suggestions. None.

It’s the sort of problem that would drive most developers crazy. We’ve grown accustomed to having all the answers a simple search away. Surely others have already had this problem and solved it. But when the oracle of all human knowledge comes back with a fat goose egg, you start to worry that we’ve all become a group of truly lazy developers who can only find ready-made code snippets on StackOverflow.

When you are faced with this challenge, don’t give up. Don’t throw up your hands and walk away. Surely there’s an answer, and if there isn’t, you can make one. I want to walk you through my process.

Read the logs

If the devil is in the details, surely he’ll be found in the log file. You’ve probably already scanned the logs for obvious errors; it’s okay to go back and look again. If the log file seems like gibberish at first glance, it often is. But sometimes the log contains gems that give clues as to what’s missing. Maybe the log warns that a default value is missing; maybe you’ll discover a typo in a parameter.

Read the logs, again

Amp up the verbosity on the logs if possible and try again. Developers often use verbose logging to diagnose problems that happen in the field, so the hidden detail in the verbose log may reveal further gems.

Now’s a good moment for some developer insight. Are these log messages helpful? Would someone reading the logs from your program be as delighted or frustrated with the quality of these output messages?

Keep an eye out for references to class names or methods that appear in the log or stack traces. These could lead to further clues or give you a starting point for the next stage.

Find the source

Microsoft is the largest contributor to open-source projects on GitHub, so it makes sense that they bought them. Just watching the culture shift within Microsoft in the last decade has been astounding, and now it seems that almost all of their properties have their source code freely available for public viewing. Some sleuthing may be required to find the right repository. Sometimes it’s as easy as Googling “<name-of-class> github” or following the link on a NuGet or Maven package.

But once you’ve found the source, you enter a world of magic. Best case scenario, you immediately find the control logic in the code that relates to your problem. Worst case scenario, you learn more about this component than anyone you know. Maybe you’ll discover they parse inputs as case-sensitive strings, or some conditional logic requires the presence of a parameter you’re not using.

Within GitHub, your secret weapon is the ability to search within the repository, as you can find the implementation and usages in a single search. Recent changes to GitHub’s web-interface allow you to navigate through the code by clicking on class and method names – support is limited to specific programming languages, but I’ll be in heaven when this capability expands. The point is to find a place to start and keep digging. It’ll seem weird not being able to set a breakpoint and simply run the app, but the ability to mentally trace through the code is invaluable. Practice makes perfect.

If you’re lucky, the output from the log file will help guide you. Go back and read it again.

As another developer insight – this code might be beautiful, or it might make you want to vomit. Exposure to other approaches can validate and grow your opinions on what makes good software. I encourage all developers to read as much code that isn’t theirs as they can.

After spending some time looking at the source, check out the project’s issues list. You might discover your problem is known by a different name that is only familiar to those who wrote it. Suitable workarounds might appear from other, related problems.

Roadblocks are just obstacles you haven’t overcome

If you hit a roadblock, it helps to step back and think of other ways of looking at the problem. What alternative approaches could you explore? Above all else, never start from a position where you assume everything on your end is correct. Years ago, when I worked part-time at the local computer repair shop, I learnt the hard way that the easiest and most blatantly obvious step, checking to see if it was plugged in, was the most important step not to skip. When you keep an open mind, you will never run out of options.

As evidenced by the tweet above, the error message I was experiencing had no corresponding source-code online, and all of my problems were baked into a black-box that only exists on the build server when the build runs. When the build runs… on the build server. When the build runs on the build agent… that I can install on my machine. Within minutes of installing a local build agent, I had the mysterious black-box gift-wrapped on my machine.

No source code? No problem. JetBrains’ dotPeek is a free utility that allows you to decompile and review any .NET executable.

Just dig until you hit the next obstacle. Step back, reflect. Dig differently. As I sit in a coffee shop looking out at the harsh cold of our Canadian winter, I reflect that we have it so easy compared to the original pioneers who forged their path here. That’s who you are: a pioneer cutting a path that no one has tread before. It isn’t easy, but the payoff is worth it.

Happy coding.

Sunday, March 11, 2018

On Code Reviews

Recently, a colleague reached out looking for advice on documentation for code reviews. It was a simple question, like so many others that arrive in my inbox phrased as a “quick question” yet don’t seem to have a quick answer. It struck me as odd that we didn’t have a template for this sort of thing. Why would we need this, and under what circumstances would a template help?

After some careful contemplation, I landed on two scenarios for code review. One definitely needs a template; the other does not.

Detailed Code Analysis

If you’ve been tasked with writing up a detailed analysis of a code-base, I can see the benefit of a structured document template. Strangely, I don’t have a template, though I’ve done this task many different times. The interesting part of this task is that the need for the document is often to support a business case for change, or to provide evidence to squash or validate business concerns. Understanding the driving need for the document will shape how you approach the task. For example, you may be asked to review the code to identify performance improvements. Or perhaps the business has lost confidence in their team’s ability to estimate and they want an outside party to validate that the code follows sound development practices (and isn’t a hot mess).

In general, a detailed analysis is usually painted in broad strokes with high-level findings: classes with too many dependencies, insufficient error handling, lack of unit tests, insecure coding practices. As some of these can be perceived as the opinion of the author, it’s imperative that you have hard evidence to support your findings. This is where tools like SonarQube shine, as they can highlight design and security flaws, potential defects, and even suggest how many hours of technical debt a solution carries. Tools like NDepend or JArchitect let you write SQL-like queries to find the areas that need the most attention. For example, a query to “find highly used methods that have high cyclomatic complexity and low test coverage” can identify high-yield pain points.
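As a sketch, that example query might look something like the following in NDepend’s CQLinq; the thresholds are arbitrary and the property names are from memory, so treat this as illustrative rather than copy-paste ready:

```
// find highly used methods with high complexity and low coverage
warnif count > 0
from m in Application.Methods
where m.NbMethodsCallingMe > 10
   && m.CyclomaticComplexity > 15
   && m.PercentageCoverage < 50
select m
```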

If I was to have a template for this sort of analysis, it would have:

• An executive summary that provides an overview of the analysis and how it was achieved
• A list of the top 3-5 key concerns, where each concern has a short, concise paragraph with a focus on the business impact
• A breakdown of findings in key areas:
  • Security
  • Operational Support and Diagnostics
  • Performance
  • Maintainability Concerns (code-quality, consistency, test automation and continuous delivery)

    Peer Code Review

    If we’re talking about code review for a pull request, my approach is very different. A template might be useful here, but it’s much less necessary.

    First, a number of linting tools such as FxCop and JSLint can be included in the build pipeline so that warnings and potential issues with the code are identified during the CI build and can be measured over time. Members of the team that are aware of these rules will call them out where appropriate in code reviews, or they’ll set targets for reducing these warnings over time.

    Secondly, it’s best to let the team establish stylistic rules and formatting rather than trying to enforce a standard from above. The reasoning for this should be obvious: code style can be a religious war where there is no right answer. If you set a standard from above, you’ll spend your dying breath policing a codebase where developers don’t agree with you. In the end, consistency throughout the codebase should trump personal preference, so if the team can decide like adults which rules they feel are important then they’re less likely to act like children when reviewing each other’s work.

    With linting and stylistic concerns out of the way, what should remain is a process that requires all changes to be reviewed by one or more peers, who should read the code to understand what it does and to suggest alternatives the author hadn’t considered. I’ve always seen code review as a discussion rather than policing, which is always more enjoyable.

    Monday, April 24, 2017

    My favourite Visual Studio Snippets

    It happens a fair bit: there’s a small, identical piece of code that you need to include in each project you work on. Sometimes you copy and paste it from an old project, or you simply write it from scratch. You do this over and over so much that you get used to writing it.

    Fortunately, Visual Studio “snippets” can tame this monster. Simply type a keyword and hit tab twice and – bam! – code magically appears. But if you’re like me, the thought of deviating from your project to write a snippet can seem tedious. Lucky for you, you don’t have to write them, you can just borrow my favourite snippets.

    vmbase

    My vmbase snippet includes some common boilerplate code for INotifyPropertyChanged. It includes the SetField<T> method you may see in a few of my posts. After adding this snippet, be sure to mark your class as implementing INotifyPropertyChanged and declare the PropertyChanged event, which the snippet’s NotifyPropertyChanged method raises.

    <?xml version="1.0" encoding="utf-8" ?>
    <CodeSnippets  xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
        <CodeSnippet Format="1.0.0">
            <Header>
                <Title>Base ViewModel implementation</Title>
                <Shortcut>vmbase</Shortcut>
                <Description>Inserts SetField and NotifyPropertyChanged</Description>
                <Author>BCook</Author>
                <SnippetTypes>
                    <SnippetType>Expansion</SnippetType>
                </SnippetTypes>
            </Header>
            <Snippet>
                <Imports>
                    <Import>
                        <Namespace>System.Collections.Generic</Namespace>
                    </Import>
                    <Import>
                        <Namespace>System.ComponentModel</Namespace>
                    </Import>
                    <Import>
                        <Namespace>System.Runtime.CompilerServices</Namespace>
                    </Import>
    
                </Imports>
                <Declarations>
                </Declarations>
                <Code Language="csharp"><![CDATA[
                protected bool SetField<T>(ref T field, T value, [CallerMemberName] string propertyName = null)
                {
                    if (!EqualityComparer<T>.Default.Equals(field, value))
                    {
                        field = value;
                        NotifyPropertyChanged(propertyName);
                        return true;
                    }
                    
                    return false;
                }
                
                protected void NotifyPropertyChanged([CallerMemberName] string propertyName = null)
                {
                    var handler = PropertyChanged;
                    if (handler != null)
                    {
                        handler(this, new PropertyChangedEventArgs(propertyName));
                    }
                }
    $end$]]>
                </Code>
            </Snippet>
        </CodeSnippet>
    </CodeSnippets>
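For context, the expanded snippet is meant to live inside a class that implements INotifyPropertyChanged. A minimal host class (the class and property names here are just for illustration) might look like this:

```csharp
using System.ComponentModel;

public class PersonViewModel : INotifyPropertyChanged
{
    // required by INotifyPropertyChanged; the snippet's NotifyPropertyChanged raises this event
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set { SetField(ref _name, value); }
    }

    // SetField<T> and NotifyPropertyChanged from the vmbase snippet go here
}
```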

    propvm

    The propvm snippet is similar to other “prop” snippets. It creates a property with a backing field and uses the SetField<T> method for raising property change notifications.

    <?xml version="1.0" encoding="utf-8" ?>
    <CodeSnippets  xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
        <CodeSnippet Format="1.0.0">
            <Header>
                <Title>Define a Property with SetField</Title>
                <Shortcut>propvm</Shortcut>
                <Description>Code snippet for a ViewModel property</Description>
                <Author>Bryan Cook</Author>
                <SnippetTypes>
                    <SnippetType>Expansion</SnippetType>
                </SnippetTypes>
            </Header>
            <Snippet>
                <Declarations>
                    <Literal>
                        <ID>type</ID>
                        <ToolTip>Property Type</ToolTip>
                        <Default>string</Default>
                    </Literal>
                    <Literal>
                        <ID>property</ID>
                        <ToolTip>Property Name</ToolTip>
                        <Default>MyProperty</Default>
                    </Literal>
                    <Literal>
                        <ID>field</ID>
                        <ToolTip>Field Name</ToolTip>
                        <Default>myProperty</Default>
                    </Literal>
                </Declarations>
                <Code Language="csharp"><![CDATA[
    private $type$ _$field$;
    
    public $type$ $property$
    {
        get { return _$field$; }
        set { SetField(ref _$field$, value); }
    }
    $end$]]>
                </Code>
            </Snippet>
        </CodeSnippet>
    </CodeSnippets>

    propbp

    Similar to the propdp snippet which creates WPF dependency properties, propbp creates a Xamarin.Forms BindableProperty.

    <?xml version="1.0" encoding="utf-8" ?>
    <CodeSnippets  xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
        <CodeSnippet Format="1.0.0">
            <Header>
                <Title>Define a Xamarin.Forms Bindable Property</Title>
                <Shortcut>propbp</Shortcut>
                <Description>Code snippet for Xamarin.Forms Bindable Property</Description>
                <Author>Bryan Cook</Author>
                <SnippetTypes>
                    <SnippetType>Expansion</SnippetType>
                </SnippetTypes>
            </Header>
            <Snippet>
                <Imports>
                    <Import>
                        <Namespace>Xamarin.Forms</Namespace>
                    </Import>
                </Imports>
                <Declarations>
                    <Literal>
                        <ID>type</ID>
                        <ToolTip>Property Type</ToolTip>
                        <Default>string</Default>
                    </Literal>
                    <Literal>
                        <ID>property</ID>
                        <ToolTip>Property Name</ToolTip>
                        <Default>MyProperty</Default>
                    </Literal>
                    <Literal>
                        <ID>owner</ID>
                        <ToolTip>Owner Type for the BindableProperty</ToolTip>
                        <Default>object</Default>
                    </Literal>
                </Declarations>
                <Code Language="csharp"><![CDATA[
    #region $property$
    public static BindableProperty $property$Property =
        BindableProperty.Create(
            "$property$",
            typeof($type$),
            typeof($owner$),
            default($type$),
            defaultBindingMode: BindingMode.OneWay,
            propertyChanged: On$property$Changed);
            
    private static void On$property$Changed(BindableObject sender, object oldValue, object newValue)
    {
    }
    #endregion
    
    public $type$ $property$
    {
        get { return ($type$)GetValue($property$Property); }
        set { SetValue($property$Property, value); }
    }
    
    $end$]]>
                </Code>
            </Snippet>
        </CodeSnippet>
    </CodeSnippets>

    testma

    Visual Studio ships with a super helpful testm snippet which generates an MSTest test method for you. My simple testma snippet creates an asynchronous test method.

    <?xml version="1.0" encoding="utf-8" ?>
    <CodeSnippets  xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
        <CodeSnippet Format="1.0.0">
            <Header>
                <Title>Async Test Method</Title>
                <Shortcut>testma</Shortcut>
                <Description>Inserts Test Method with async keyword</Description>
                <Author>BCook</Author>
                <SnippetTypes>
                    <SnippetType>Expansion</SnippetType>
                </SnippetTypes>
            </Header>
            <Snippet>
                <Imports>
                    <Import>
                        <Namespace>Microsoft.VisualStudio.TestTools.UnitTesting</Namespace>
                    </Import>
                    <Import>
                        <Namespace>System.Threading.Tasks</Namespace>
                    </Import>
                </Imports>
                <References>
                    <Reference>
                        <Assembly>Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll</Assembly>
                    </Reference>
                </References>        
                <Declarations>
                    <Literal>
                        <ID>method</ID>
                        <ToolTip>MethodName</ToolTip>
                        <Default>TestMethodName</Default>
                    </Literal>
                    <Literal Editable="false">
                        <ID>TestMethod</ID>
                        <Function>SimpleTypeName(global::Microsoft.VisualStudio.TestTools.UnitTesting.TestMethod)</Function>
                    </Literal>                
                </Declarations>
                <Code Language="csharp"><![CDATA[
                [$TestMethod$]
                public async Task $method$()
                {
                    // await ...
                    Assert.Fail();
                }
                
    $end$]]>
                </Code>
            </Snippet>
        </CodeSnippet>
    </CodeSnippets>

    Installing Snippets

    To install the snippets:

    1. Copy/paste each snippet into a dedicated file on your hard-drive, e.g. propbp.snippet
    2. In Visual Studio, select Tools –> Code Snippets Manager
    3. Click the Import button
    4. Navigate to the location where you put the snippets
    5. Select one or more snippet files.
    6. Click Open, then OK.

    To use them, simply type the keyword for the snippet (as defined in the Shortcut element of the snippet) and press the tab key twice.

    Note: If you have ReSharper installed, the default tab/tab keyboard shortcut might not be enabled.

    Any feedback is greatly welcomed.

    Enjoy.


    Wednesday, October 16, 2013

    Advance WorkItems to next state on check-in

    Nearly two years ago, I had a project where we used a whiteboard and post-it notes for our Kanban board. Perhaps one of my favourite aspects of using a whiteboard was the non-verbal communication that occurred between team members: when a developer would finish a task, she would stand up from her desk, take a deep breath and a well-deserved stretch, then rip her post-it note from the In-Progress column, slap it into the Ready-for-Test column and yank one from the Next column. Everyone on the team would look up, smile and go back to coding.

    Alas, whiteboards and post-it notes only work when all team members can see the board; when your teams are remote, you will need a software solution. Our organization is big on TFS, and I’ve had much luck using SEP TeamWorks to pull the data together and present it in a Kanban fashion.

    One of the challenges with using software for task tracking is that it loses that tacit capability. Choosing the wrong tool can mean you spend more time managing the tool than building software.

    Here’s a quick post that illustrates how you can leverage features of TFS workflow to automate your Kanban process a bit, so you don’t have to harass your team members so much.

    The XML schema for our User Stories, Bugs and Tasks contains elements that describe fields, user interface and workflow. The workflow element is interesting because it allows us to define the supported transitions between states and default values for fields in each state. It also supports a sweet little addition that will automatically transition a work item to a different state on check-in, simply by including the following ACTION in the transition’s ACTIONS element:

    <TRANSITION from="In Development" to="Ready for Test">
      <REASONS>
        <DEFAULTREASON value="Development Complete" />
      </REASONS>
      <FIELDS>
        <FIELD refname="System.AssignedTo">
          <ALLOWEXISTINGVALUE />
          <EMPTY />
        </FIELD>
      </FIELDS>
      <ACTIONS>
        <ACTION value="Microsoft.VSTS.Actions.Checkin" />
      </ACTIONS>
    </TRANSITION>

    Although the schema suggests that it would allow custom actions to be plugged in here, only the Microsoft.VSTS.Actions.Checkin action is supported.

    To take advantage of this feature, simply associate your work items to your check-in and mark the action as “Resolve”.

    [Screenshot: associating work items with a check-in]

    Tuesday, August 13, 2013

    Fix your code with an “On Notice” board

    [Image: my On Notice board]

    The above comes with thanks to the On Notice Generator, and my board re-iterates a lot of the guidance found on Miško Hevery’s blog. These are code patterns and anti-patterns that I’ve encountered on many projects and have strong feelings against. Some of these are actually on my Dead to Me board, but there wasn’t an online generator for that.

    I think it’s a good habit to start an On Notice board for your project – a list of offending code that should be cleaned up at some point. Often, these unsightly offenders are large and tightly woven into the fabric of our code, so they’re not something that can be fixed in a single refactoring. But by placing these offences on a visible On Notice board, they become goals that can fuel future refactorings.

    You might not be able to fix a problem in a single session, but you can add a 2-hour research task to your backlog to understand why it exists. The output of such a task might be further research tasks, or changes you could introduce to shrink the offender’s influence and eventually remove it altogether. Sometimes I bundle a bunch of these fixing tasks into a refactoring user story, or slip a few into a new feature if they’re related. Over time, the board clears up.

    Don’t forget, while you’re making these changes, write a few tests while you’re at it.

    Oh, regarding Gluten-free cookies… they look like cookies, but they are most definitely not.

    Thursday, July 18, 2013

    Unhandled exceptions in WPF applications

    When it comes to unit testing, there are a few areas of the application where I am comfortable not getting coverage. Some areas, typically in the UI, are difficult to unit test but can be easily verified by running the application manually. A few other places are very difficult to validate, such as the global error handler for your application. For the global error handler, you have to live with some manual testing and assume you’ve got it right.

    Today I discovered one of my assumptions about the global error handler was completely wrong. My app was crashing and displaying error messages; I assumed it was crashing, logging to a file and exiting politely. It was not. And as always, I’m writing this as a reminder for you and myself.

    As most know, the best place for a global exception handler is to attach an event handler to the DispatcherUnhandledException event of the application. It’s important to set the Handled property of the DispatcherUnhandledExceptionEventArgs to true to prevent the app from crashing.

    However, this will only capture exceptions on the UI thread. For all other exceptions, the runtime will look for an event handler on that thread’s stack; if no handler is found, the exception bubbles up to the AppDomain. So to capture these exceptions you should add an event handler to the AppDomain’s UnhandledException event.

    In contrast to DispatcherUnhandledExceptionEventArgs, the UnhandledExceptionEventArgs passed to this handler does not have a Handled property. I assumed that the purpose of this handler was so that we could log the error and go on about our business. As it turns out, if your code reaches this event handler the situation is completely unrecoverable. As Bill Paxton put it: “Game over, man!” – your app is going to crash and show a nasty error dialog. The only way to prevent the dialog is to use Environment.Exit(1);

    using System;
    using System.Windows;
    using log4net;
    
    namespace MyApplication
    {
        public class MyApp : Application
        {
            private static readonly ILog Log = LogManager.GetLogger(typeof(MyApp));
    
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);
    
                // handle all main UI thread related exceptions
                DispatcherUnhandledException += OnDispatcherUnhandledException;
    
                // handle all other exceptions in background threads
                AppDomain.CurrentDomain.UnhandledException += OnAppDomainUnhandledException;
            }
    
            void OnDispatcherUnhandledException(object sender, System.Windows.Threading.DispatcherUnhandledExceptionEventArgs e)
            {
                // prevent the unhandled exception from crashing the application.
                e.Handled = true;
    
                Log.Fatal("An unhandled exception has reached the UI Dispatcher.", e.Exception);
    
                // shut down the application nicely.
                Shutdown(-1);
            }
    
            void OnAppDomainUnhandledException(object sender, UnhandledExceptionEventArgs e)
            {
                var ex = e.ExceptionObject as Exception;
    
                Log.Fatal("An unhandled exception has reached the AppDomain exception handler. Application will now exit.", ex);
    
                // This exception cannot be handled, and you cannot reliably use Shutdown to exit gracefully.
                // The only way to suppress the CLR error dialog is to call Environment.Exit.
                Environment.Exit(1);
            }
        }
    }

    If you want to gracefully exit the application regardless of which thread threw the exception, the recommended approach is to:

    • Handle the exception on the background thread.
    • Marshal the exception to the UI thread and then re-throw it there.
    • Handle the exception in the Application.DispatcherUnhandledException handler.

    There’s no easy way out here, which means you need to fix the offending code. My recommendation is to use the AppDomain’s UnhandledException handler as a honeypot to find issues.
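As a sketch of the three steps above (the Task-based worker and the DoBackgroundWork method are illustrative, not from the original post):

```csharp
using System;
using System.Runtime.ExceptionServices;
using System.Threading.Tasks;
using System.Windows;

// Catch the exception on the background thread and re-throw it on the UI thread,
// where the DispatcherUnhandledException handler can mark it as Handled.
Task.Run(() =>
{
    try
    {
        DoBackgroundWork(); // hypothetical worker method
    }
    catch (Exception ex)
    {
        Application.Current.Dispatcher.BeginInvoke(new Action(() =>
        {
            // preserves the original stack trace when re-throwing
            ExceptionDispatchInfo.Capture(ex).Throw();
        }));
    }
});
```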

    High five.


    Monday, July 08, 2013

    DeploymentItems in Visual Studio 2012

    A frequent concern with writing unit tests with MSTest is how to include additional files and test data for a test run. This process has changed between Visual Studio 2010 and 2012 and it’s become a source of confusion.

    Background

    With Visual Studio 2010 and earlier, every time you ran your tests Visual Studio would copy all files related to the test to a test run folder and execute them from this location. For local development this feature allows you to compare results between test runs, but the feature is also intended to support deploying the tests to remote machines for execution.

    If your tests depend on additional files, such as external configuration files or third-party dependencies that aren’t directly referenced by the tests, you would need to enable Deployment in your testsettings and then either specify the deployment items in the testsettings file or mark each test with a DeploymentItemAttribute.
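As a quick sketch of the attribute approach (the folder and file names here are hypothetical):

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SettingsTests
{
    [TestMethod]
    [DeploymentItem(@"TestData\settings.xml")] // copied alongside the test before it runs
    public void Settings_file_is_deployed()
    {
        Assert.IsTrue(File.Exists("settings.xml"));
    }
}
```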

    What’s changed in Visual Studio 2012?

    Visual Studio 2012 has a number of changes related to the test engine that impact deployment. The most visible change is that Visual Studio 2012 no longer automatically adds a testsettings file to your solution when you add a Test project. The testsettings file can be added to your project manually, but it’s generally recommended that you don’t use it: it exists for backward compatibility, and not all features within Visual Studio 2012 work with it. Microsoft Fakes, for example, are not backward compatible.

    The biggest change related to deployment is that Visual Studio 2012 tests run directly out of the output folder by default. This adds a significant speed boost for the tests but it also means that if your tests are dependent on files that are already part of the build output, you won’t need to enable deployment at all.

    Another interesting change is that if you include a DeploymentItemAttribute in your tests, Deployment will be automatically enabled and your tests will run out of the deployment folder.

    More information can be found here.

    Monday, March 05, 2012

    How I organize my Local TFS Workspaces

    It happens several times on most projects. Developer one, let’s call him Andy, adds a third-party library into the source control repository that isn’t referenced anywhere in the Visual Studio solution file that the team uses. Andy also modifies a few files that depend on this new library and checks his changes in. Developer two, let’s call him Eric, gets the latest from source control by right-clicking the top of the Solution in the Solution Explorer and selecting “Get Latest (recursive)” from the context menu. Although Andy’s local workspace and the build server work fine, Eric believes he has the latest but his code won’t compile.

    It’s a frustrating problem with an easy fix: just get the latest copy of the source and rebuild the solution. You can get the latest from the Source Control Explorer in Visual Studio, or open a Visual Studio Command-prompt and issue this command at the root of your solution:

    tf.exe get /recursive

    I’ve worked to remedy this problem with my teams in several ways, including special buttons you can add to your IDE to make getting to the Source Control Explorer window faster. But when pair-programming on someone else’s machine my buttons aren’t always available, so I drop down to the command-line as my preferred choice. However, this sometimes has mixed results: if the command-line can’t figure out which workspace you’re in, it will sometimes get the latest from all local workspaces.

    I don’t have this problem because I structure my workspaces differently than you. Here’s how I do it.

    Multiple Workspaces per Client

    This step is optional, but I think it’s worth mentioning. Rather than use a single workspace for all clients, I create one or more workspaces that reflect the client that I’m writing code for. To keep this information visible, I name the workspace after the client instead of the computer name.

    [Screenshot: Add Workspace dialog]

    I separate my TFS-Workspaces by client for a few reasons:

    • Some of my clients have their own repository which requires me to create a workspace for their server.
    • When I finish work with a client, I can safely delete an entire workspace without concern of breaking server-mappings for other clients.

    Having multiple workspaces for the same client lets me check out the same branch more than once, which allows me to:

    • Use an older copy of the source to reproduce a defect, validate unit tests or to run code analysis
    • Work on multiple defects in isolation from one another
    • Try out a refactoring in isolation from current development
    • Review a co-worker’s shelve-set

    The practice of having multiple workspaces may not be required for all projects, but it’s a good habit to form.

    Client Workspaces separated using Folders

    As stated above, I create multiple workspaces for each client. In order to keep those workspaces organized, I keep them separated in their own folder using a simple naming convention (A,B,C, etc). This makes it simple to remove an entire workspace when no longer needed.

    Building upon the folder structure that I outlined in my last post (Using Windows 7 Libraries to Organize your code), my folder structure for a client looks like this:

    Client     Workspace Name   Folder Location
    Client1    Client1-A        C:\Projects\Infusion\Code\Client1\A
    Client1    Client1-B        C:\Projects\Infusion\Code\Client1\B
    Client2    Client2-A        C:\Projects\Infusion\Code\Client2\A

    A, B, C is a simple naming convention, and it doesn’t need to get any fancier. I’ve worked on projects with some long folder names, but I haven’t yet exceeded the 260-character path limit with MSBuild.

    Putting it all together

    With the above in place, I can check out separate copies of the same branch into different folders: Client1\A\trunk, Client1\B\trunk, etc. Opening a command-prompt at the root of my solution and executing:

    tf.exe get /recursive

    …gets me just the updated code for that branch. I especially love this approach because I can get latest before I open the solution file, which is immensely helpful because I don’t have to wait for Visual Studio to reload projects if they’ve changed.

    Code happy.

    Organize your Code with Libraries (redux)

    A while back, I mentioned how I was using the Document Library feature of Windows 7 to organize the different types of content on my machine. This approach has worked very well for me since I started, though I have made one small adjustment from the original post: I was keeping both my project documents and code files in the same folder, and I’ve since deviated from that. At the time it made sense to keep the documents and source code as close together as possible, but it wasn’t very practical for navigating the code from Visual Studio or when working with multiple TFS workspaces.

    Here’s an updated snippet from the original post. My “Code” library is comprised of the following folders:

      • C:\Projects\Infusion\Code (my employer)
      • C:\Projects\lib (group of common libraries I reference a lot; now using NuGet for this)
      • C:\Projects\Experiments (small proof of concept projects)
      • C:\Projects\Personal (my pet projects)

    The main difference is that I’ve added the “Code” folder as the primary container for all my work-related source. Each client that I work with gets their own folder below this root, which provides a convenient way to navigate all of my work projects.

    I also now use a “Projects” library that contains an entry for each client. I like this approach because I can set a particular library entry as the default save location, so any time I create a document and click “Save” it will get dumped into a folder for my current client. Here’s a quick peek at my sanitized Project Library.

    [Screenshot: my sanitized Projects library]

    My next post will show how I organize my Visual Studio workspaces within this structure.


    Monday, January 23, 2012

    Execute Batch from Visual Studio

    For as long as I can remember, I’ve kept a command-line window open while I work. It gives me a warm fuzzy feeling of how computers used to work. I tend to structure commonly-used tasks as MSBuild or NAnt scripts, and then add handy batch files that pass the appropriate parameters to the script.

    Unfortunately, most of my team-mates don’t live in the command-line, so running a batch file breaks their traditional flow.

    Here’s a short tip on how to execute batch files without having to leave the comfort of the IDE. You’ve probably seen this tip before, but as always, I often use my blog as a digital memory. If it helps you, great.

    Visual Studio supports the ability to associate tools and alternate editors with different file types. Adding support for batch files is simply a matter of opening the context menu for a file and choosing “Open With…”. Unfortunately, there’s no mechanism to supply parameters to your program, so adding support for a Command Prompt requires a small, subtle hack that passes our parameters to the program we want.

    To Associate Batch Files with a Custom Command

    First, we need to create a simple batch file that passes the arguments that Visual Studio provides onto our batch file.

    1. If you haven't already, add your batch file to your solution. These are best treated as Solution items that aren’t part of your compilation process.
    2. Open notepad and save the following script as C:\ExecuteBatch.cmd
    @cmd /c %1

    Once this is in place,

    1. Associate the batch file in Visual Studio to the command line by right-clicking on the batch file and choose "Open With...".
    2. In the dialog that appears, provide a name and associate it to the ExecuteBatch.cmd.

    [Screenshot: Open With dialog]

    Optional: You can associate the Command as the default program for this extension, by selecting your custom command and clicking on the “Set as Default” button. Note that if you edit the file frequently you might want to skip this step, but you’ll have to right-click the file and choose “Open With…” anytime you want to run your custom command.

    Gotchas & Caveats

    Just a few closing points:

    • If you set your custom command as the default, note that there is no confirmation if you accidentally double-click the batch file. If your script is potentially destructive or long-running, you might want to add a prompt at the beginning of the batch file before running.
    • The ExecuteBatch.cmd provided above will close the window immediately after the batch terminates. If you want to review the output of the script before the window closes, you might want to add a pause to the end of the script.
    • Lastly, when adding new scripts to Visual Studio, it will perform the default action when the file is added to the solution. If you don’t want to run the script when the file is added, you might want to temporarily assign a different editor (Source Code Editor) before you proceed.
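Putting the first two caveats together, a slightly more defensive ExecuteBatch.cmd might look like this (a sketch; adjust the prompt wording to taste):

```
@echo off
rem confirm before running, in case the script was double-clicked by accident
set /p confirm="Run %1? (y/n): "
if /i not "%confirm%"=="y" exit /b

cmd /c %1

rem keep the window open so the output can be reviewed
pause
```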

    Happy Coding.


    Monday, November 28, 2011

    Fixing Parallel Test Execution in Visual Studio 2010

    As the number of tests in my project grows, so does the length of my continuous integration build. Fortunately, the new parallel test execution in Visual Studio 2010 allows us to trim down the amount of time consumed by our unit tests. If your unit tests meet the criteria for thread-safety, you can configure them to run in parallel simply by adding the following to your test run configuration:

    <?xml version="1.0" encoding="UTF-8"?>
    <TestSettings name="Local" id="5082845d-c149-4ade-a9f5-5ff568d7ae62" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
      <Description>These are default test settings for a local test run.</Description>
      <Deployment enabled="false" />
      <Execution parallelTestCount="0">
        <TestTypeSpecific />
        <AgentRule name="Execution Agents">
        </AgentRule>
      </Execution>
    </TestSettings>

The ideal setting of “0” tells the test runner to determine the number of concurrent tests automatically based on the number of processors on the local machine: a single-core CPU runs 1 test at a time, a dual-core CPU runs 2, and a quad-core CPU runs 4. Technically, a quad-core hyper-threaded machine has 8 logical processors, but when parallelTestCount is set to zero the test run on that machine fails instantly:

    Test run is aborting on '<machine-name>', number of hung tests exceeds maximum allowable '5'.

    So what gives?

Well, digging through the disassembled source code for the test runner, we learn that the number of tests executing simultaneously collides with the maximum number of tests that can hang before the entire test run is considered to be in a failed state. Unfortunately, that maximum has been hardcoded to 5. Effectively, when the 6th test begins to execute, the test runner believes the other 5 executing tests are hung, so it aborts everything. Maybe the team writing this feature picked “5” as an arbitrary number, or legitimately believed there wouldn’t be more than 4 CPUs before the product shipped, or simply didn’t make the connection between the setting and the possible hardware. I do sympathize with the mistake: the developers wanted the number to be low, because a higher number could add several minutes to a build if the tests really were in a non-responsive state.
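My reading of that logic can be sketched in a few lines of Python (a paraphrase of the disassembled behaviour, not Microsoft’s actual code):

```python
# Sketch of the abort condition described above. With
# parallelTestCount="0" the runner uses one test per logical CPU and
# aborts once the concurrent tests exceed the hardcoded hang limit.
MAXIMUM_ALLOWED_HANGS = 5  # hardcoded in the VS 2010 test runner

def run_aborts(logical_cpus, parallel_test_count=0):
    concurrent = parallel_test_count or logical_cpus
    return concurrent > MAXIMUM_ALLOWED_HANGS

print(run_aborts(4))  # quad-core: False, the run is fine
print(run_aborts(8))  # hyper-threaded quad-core: True, the run aborts
```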

The Connect issue lists this feature as being fixed, although there are no posted workarounds, and there’s a lot of feedback that the feature doesn’t work on high-end machines even with the latest service pack. But it is fixed; no one knows about it.

Simply add the following to your registry (you will likely have to create the key) and configure the maximum based on your CPU. I’m showing the default value of 5, but I figure the number of CPUs + 1 is probably right.

    Windows 32 bit:
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\10.0\EnterpriseTools\QualityTools\Agent]
    "MaximumAllowedHangs"="5"
    Windows 64 bit:
    [HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0\EnterpriseTools\QualityTools\Agent]
    "MaximumAllowedHangs"="5" 

    Note: although you should be able to set the parallelTestCount setting to anything you want, overall performance is constrained by the raw computing power of the CPU, so anything more than 1 test per CPU creates contention between threads which degrades performance. Sometimes I set the parallelTestCount to 4 on my dual-core CPU to check for possible concurrency issues with the code or tests.

    Epilogue

    So what’s with the Connect issue? Having worked on enterprise software my guess is this: the defect was logged and subsequently fixed, the instructions were given to the tester and verified, but these instructions never tracked forward with the release notes or correlated back to the Connect issue. Ultimately there’s probably a small handful of people at Microsoft that actually know this registry setting exists, fewer that understand why, and those that do either work on a different team or no longer work for the company. Software is hard: one small fissure and the whole thing seems to fall apart.

    Something within the process is clearly missing. However, as a software craftsman and TDD advocate I’m less concerned that the process didn’t capture the workaround as I am that the code randomly pulls settings from the registry – this is a magic string hack that’s destined to get lost in the weeds. Why isn’t this number calculated based on the number of processors? Or better, why not make MaximumAllowedHangs configurable from the test settings file so that it can be unit tested without tampering with the environment? How much more effort would it really take, assuming both solutions would need proper documentation and tests?

    Hope this helps.

    Thursday, November 24, 2011

    iPhone to PC Adapter

Merry Happy Thanksgiving! I had some time on my hands so I decided to try something new.

    Here’s a quick review of my iPhone headset to PC adapter that I bought a few weeks ago. Hopefully this video comes just in time for Christmas ideas and Black Friday shopping.

By the way, Thanksgiving was 5 weeks ago.

    Tuesday, July 12, 2011

    Visual Studio Regular Expressions for Find & Replace

Visual Studio has had support for regular expressions in Find & Replace for several versions, but I’ve only really used it for simple searches. I recently had a problem where I needed to introduce a set of changes to a very large object model. It occurred to me that this could be greatly simplified with some pattern matching, but I was genuinely surprised to learn that Visual Studio has its own brand of regular expressions.

    After spending some time learning the new syntax I had a really simple expression to modify all of my property setters:

    Original:

    public string PropertyName
    {
        get { return _propertyName; }
        set
        {
            _propertyName = value;
            RaisePropertyChanged("PropertyName");
        }
    }

    Goal:

    public string PropertyName
    {
        get { return _propertyName; }
        set
        {
            if ( value == _propertyName )
                 return;            
            _propertyName = value;
            RaisePropertyChanged("PropertyName");
        }
    }

    Here’s a quick capture and breakdown of the pattern I used.

[Screenshot: the Find and Replace dialog showing the pattern]

    Find:

    ^{:Wh*}<{_:a+} = value;
    • ^ = beginning of line
    • { = start of capture group #1
    • :Wh = Any whitespace character
    • * = zero or more occurrences
    • } = end of capture group #1
    • < = beginning of word
    • { = start of capture group #2
• _ = the text must start with an underscore
• :a = any alphanumeric character
• + = one or more occurrences
• } = end of capture group #2
    • “ = value;” = exact text match

    Replace:

\1if (\2 == value)\n\1\treturn;\n\1\2 = value;

The replace pattern is fairly straightforward: “\1” and “\2” represent capture groups 1 and 2.  Since capture group #1 holds the leading whitespace, I reuse it in the replace pattern to keep the original padding and to base new lines on it.  For example, “\n\1\t” introduces a newline, the original whitespace and then a tab.
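For comparison, here’s the same transformation expressed in a conventional regex dialect, sketched in Python (the VS-specific tokens map onto standard ones: {:Wh*} becomes (\s*), < becomes \b, and {_:a+} becomes (_\w+)):

```python
import re

# Conventional-syntax equivalent of the VS 2010 find pattern
pattern = re.compile(r'^(\s*)\b(_\w+) = value;', re.MULTILINE)
replacement = r'\1if (\2 == value)\n\1\treturn;\n\1\2 = value;'

setter = '        _propertyName = value;\n        RaisePropertyChanged("PropertyName");'
result = pattern.sub(replacement, setter)
print(result)
```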

It seems insane that Microsoft implemented its own regular expression engine, but there are some interesting things in there, such as being able to match on quoted text, etc.

    I know this ain’t much, but hopefully it will inspire you to write some nifty expressions.  Cheers.


    Friday, March 04, 2011

    Add a Custom Toolbar for Source Control

My current project uses TFS and I spend a lot of time in and out of source control, switching between workspaces to manage defects. I found myself needing quick access to my workspaces and getting really frustrated with how clunky this operation is within Visual Studio.  There are two ways you can open source control:

1. The Source Control item in the Team Explorer tool window.  I don’t always have Team Explorer open, and when I do open it, it takes a few seconds to get details from the server.
2. View –> Other Windows –> Source Control Explorer.  Useful, but there’s too much mouse movement and clicking for it to be accessible.

So, rather than creating a custom keyboard shortcut that I would forget, I added a toolbar that is always in plain sight. It’s so convenient that I take it for granted, and when I pair with others they comment on it. So for their convenience (and yours), here’s how it’s done.

    Add a new Toolbar

    From the Menubar, select “Tools –> Customize”.  It’ll pop up this dialog. 

    Click on “New” and give your toolbar a name.

[Screenshot: Customize dialog, adding a new toolbar]

    Add the Commands

    Switch to the Commands tab and select the name of your toolbar in the Toolbar dropdown.

[Screenshot: Commands tab with the new, empty toolbar]

    Click on the “Add Command” button and select the following commands:

    • View : TfsSourceControlExplorer
    • View : TfsPendingChanges

[Screenshot: Add Command dialog with the View commands]

Now style the buttons accordingly using the “Modify Selection” button.  I’ve set mine to use the Default styling, which is just the button icon.

    Enjoy

[Screenshot: the finished source control toolbar]

    Tuesday, February 08, 2011

    Plaintext + Dropbox = Ubiquitous Text

    I am loving this. If you blog or like to jot down notes while on the go then I highly recommend the following setup.

    Dropbox

Dropbox is a small cloud-based utility that synchronizes files between all your devices. It works on PC, Mac, iOS, Android and BlackBerry.  It even has a web interface that you can browse from any computer, and it can track previous versions of files. Best of all, it's free (for 2 GB of storage).

    There are a number of applications that use Dropbox as their storage medium. My favorite so far is PlainText.

    PlainText

PlainText is a bare-bones text editor with a minimal UI that uses Dropbox as its file system. Simply download it to your iPhone, iPod Touch or iPad, log in with your Dropbox account, and your data magically syncs each time you touch the contents. The only limitation is that it doesn't have the ability to move files to different folders, but I can do that on my PC.

Now it's possible for me to sketch down an idea on my iPhone while riding the subway or streetcar, then pick up where I left off and flesh out the details on my iPad while watching TV. I can even switch between devices with almost no wait, so I can proofread or work through an idea whenever I have a free moment.

    Getting started

Setup is easy.  On your PC or Mac, go to Dropbox and set up an account; the software will download on the next page.  Install the application and associate it with your user account. The free version gives you 2 GB of storage (more if you enlist your friends) and they offer more storage through paid upgrades.

    On your iOS device, download PlainText.  In the settings, link it to your Dropbox account.

[Screenshot: linking PlainText to a Dropbox account]

    PlainText by default will create a folder in your Dropbox called “PlainText”.  This is helpful so that you’re only syncing small files.  Here’s a quick screen-capture of the available settings.

[Screenshot: the PlainText settings screen]

    You can create files and folders on your iOS device.  Any changes will automatically sync to your PC/Mac.

[Screenshot: files syncing to the Dropbox folder]

    Disclosure: This is not a paid advertisement, I just really like this configuration.  I should point out that the links to sign-up are to my referral page.  If you are vehemently opposed to this you can visit the site referral free here: https://www.dropbox.com/

    Tuesday, October 12, 2010

    Working with Existing Tests

You know, it’s easy to forget the basics after you’ve been doing something for a while.  Such is the case with TDD: I don’t have to remind myself of the fundamental “red, green, refactor” mantra every time I write a new test, it’s just baked in.  When it’s time to write something new, the good habits kick in and I write a test.  After all, this is what the Driven part of Test Driven Development is about: we drive our development through the creation of tests.

    The funny thing is, the goal of TDD isn’t to produce tests.  Tests are merely a by-product of the development of the code, and having tests that demonstrate that the code works is one of the benefits.  Once they’re written, we forget about them and move on – we only return to them if something unexpected broke.

    Wait.  Why are they breaking?  Maybe we forgot something, somewhere.

    The Safety Net Myth

    One of the reasons that tests break is because there’s a common perception that once the code is written, we no longer need the tests to drive development.  “We’ve got tests, so let’s just see what breaks after I make these changes…”

This strategy works when you want to try “what-if” scenarios or small, proper refactorings, but it falls flat for long-term coding sessions.  The value of the tests diminishes quickly the longer the coding session lasts.  Simply put, tests are not safety nets: if you go off making changes for a few days, you’re only going to find that the tests get in the way, because they don’t represent your changes and your code won’t compile.

    This may seem rudimentary, but let’s go back and review the absolute basics of TDD methodology:

    1. Start by writing a failing test. (RED)
    2. Implement the code necessary to make that test pass. (GREEN)
    3. Remove any duplication and clean it up.  (REFACTOR)

    It’s easy to forget the basics.  The very first step is to make sure we have a test that doesn’t pass before we do any work, and this is easily overlooked when we already have tests for that functionality.

    Writing tests for new functionality

If you want to introduce new functionality to your code base, challenge your team to introduce those changes to the tests first.  This may seem idealistic to some, especially if it’s been a long time since the tests were written or if no one on the team is familiar with the tests or their value.

    Here’s a ridiculously simple tip:

    1. Locate the code you think may need to change for this feature.
    2. Introduce a fatal error into the code.  Maybe comment out the return value and return null, or throw an exception.
    3. Run the tests.

    With luck, all the areas of your tests that are impacted by this code are broken.  Review these tests and ask yourself:

    • Does this test represent a valid requirement after I introduce my change?  If not, it’s safe to remove it.
    • How does this test relate to the change that I’m introducing?  Would my change alter the expected results of this test?  If yes, change the expected results. These tests should fail after you remove the fatal flaw you introduced moments ago.
    • Do any of these tests represent the new functionality I want to introduce?  If not, write that test now.

    (If nothing breaks, you’ve got a different problem.  Do some research on what it would take to get this code under a test, and write tests for new functionality.)
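As a tiny sketch of that tip (the function and test names here are hypothetical), sabotaging the code under change immediately reveals which tests cover it:

```python
# Hypothetical code under change...
def total_price(items):
    # Step 2: temporarily sabotage this function, e.g.:
    #   raise RuntimeError("which tests notice this?")
    return sum(items)

# ...and a test that covers it. With the sabotage in place this test
# breaks (step 3), telling you which tests to review with the
# questions above.
def test_sums_item_prices():
    assert total_price([1, 2, 3]) == 6

test_sums_item_prices()
```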

    Conclusion

    The duct tape programmer will argue that you can’t make an omelette without breaking some eggs, which is true – we should have the courage to stand up and fix things that are wrong.  But I’d argue that you must do your homework first - if you don’t check for other ingredients, you’re just making scrambled eggs. 

In my experience, long-term refactorings that don’t leverage the tests are a recipe for test abandonment; your tests and code should always be moments away from being able to compile.  The best way to keep the tests valid is to remember the basics: they should be broken before you start introducing changes.
