Thursday, December 26, 2013

A Quick Peek at the New Kinect for Windows

So I’ve had a few days to play with the new Kinect for Windows device and SDK, and I have lots to say, but here’s a quick post to share some immediate observations about the hardware. I’ll post some more, hopefully with video and code examples when I get a break between all the holiday festivities.

First off, I need to get some disclaimers out of the way. This is an early preview of the new Kinect for Windows, so the device, software and documentation are all preliminary and subject to change.

That being said, so far the device is rock solid. Although I don’t have access to an Xbox One for comparison, my guess is that this is very similar to the unit that ships with the console, but with a different connection and power supply. The device I have has stickers on it, and although the picture below doesn’t highlight it well, if you look closely at the sticker on the right-hand side of the device you’ll see the Xbox logo shining through slightly. No doubt the appearance will change when the official device ships next year.

WP_20131226_001

An interesting departure from KFW 1.0 is that Microsoft appears to have ditched the motorized tilt in favor of a manually adjustable base. I imagine the motor was a support problem, as it could break if you applied too much force. I will miss the ability to freak people out by making it move randomly, but from a developer perspective this puts more onus on the hardware setup because the placement of the device cannot be adjusted remotely. Maybe we can find an algorithm to infer the camera angle?
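If the new SDK continues to expose a floor-plane estimate the way the 1.x SDK did on skeleton frames, the tilt could be inferred from the plane coefficients. Here’s a minimal sketch of that idea (my own illustration, not part of the SDK): the floor plane Ax + By + Cz + D = 0 has a normal of (0, 1, 0) for a perfectly level camera, so any forward or backward tilt shows up as a Z component in the normal.

using System;

public static class KinectAngleHelper
{
    // b and c are the Y and Z coefficients of the floor plane reported by the sensor.
    // Returns the approximate tilt of the camera in degrees (0 = level).
    public static double InferTiltAngleDegrees(double b, double c)
    {
        return Math.Atan2(c, b) * (180.0 / Math.PI);
    }
}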

Another key variation is that the base of the unit has a proper camera mount screw allowing you to deploy the device with one of many existing industry standard tripods or specialized camera mounts. I will not miss the proprietary Kinect mount.

WP_20131226_002

The power supply and connection to the PC are interesting and also likely to change in the official device. The cord coming out of the camera is quite long and has a specialized connector (likely the same as the Xbox One’s). To connect this specialized connector to a PC, the camera plugs into a box with three connections: camera, USB and power. The power cord is about 3 feet long and connects to a fair-sized power brick, which then has another 3 feet of cable before reaching the outlet.

WP_20131226_003

In my next post we’ll take a look at some of the new features of the SDK.

Saturday, December 21, 2013

Guess what I’ll be doing during Christmas break?

It arrived at work just before my Christmas vacation…

WP_20131219_003

It came with stickers…

WP_20131219_005

Expect a few more posts from me over the Christmas break about the new Kinect for Windows.

By the way, there’s still time for you to get in on this as Microsoft has extended the Developer Preview for another 500 devices.

Tuesday, November 05, 2013

The first User Story for your next Project

For your next project, consider adding this story to the very first sprint:

As an administrator I want to install the program with an installer so that I can have everything in place without manual setup.

Acceptance criteria:

  • Application should run and show an empty screen
  • Application should produce a log file with debug diagnostic information that can be used to trace through the application at runtime (timestamp, thread, logging level, component, message) (see the sketch after this list). Any fatal errors during start-up should be recorded with helpful diagnostic details.
  • Any prerequisites for the installer must be known in advance or bundled with the installer
  • The installer should have a minimal user-interface and optionally support running unattended
  • The installer should put a shortcut on the desktop and start menu
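For the logging criterion above, here’s a minimal sketch of what the start-up wiring might look like, assuming log4net (the library used elsewhere on this blog); the pattern covers the fields called out in the criteria: timestamp, thread, level, component and message.

using log4net.Appender;
using log4net.Config;
using log4net.Layout;

public static class LoggingBootstrapper
{
    // Call once during application start-up, before any other component logs.
    public static void Configure(string logPath)
    {
        var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
        layout.ActivateOptions();

        var appender = new RollingFileAppender
        {
            File = logPath,
            AppendToFile = true,
            Layout = layout
        };
        appender.ActivateOptions();

        BasicConfigurator.Configure(appender);
    }
}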

Sure, this is great, but why should this be the first story in my project?

The short answer is that this is the best time to do it. An installer created at the beginning of the project is often very simple, as it will likely cover only a handful of files. If you’ve ever written an installer, you’ll know that establishing all your dependencies and user interface can be a lot of work. If you wait until the end of the project, when time is often a critical resource, you’ll fall into the trap of dropping this feature.

The beginning of your project is also a really good time to start thinking about deployment. By creating an installer you’ll have a packaging mechanism for your testers and business stakeholders and every time they install your application they will be regression testing the deployment. Bugs will be found, and the installer will get better and more robust as the project advances. From a development perspective you might think of this as re-work. Try to break out of that mindset and think of it more like spreading the work over the project in small increments.

As a final note, you may notice that some of the acceptance criteria cover non-functional requirements like diagnostic logging. I’ve found this is a great way to set scope for the developers when referring to the initial “empty screen”, and it has the added benefit of letting testers know the location of the log files. It also allows you to associate defects for weird start-up errors and crashes with a User Story.

Happy coding.

Wednesday, October 16, 2013

Advance WorkItems to the next state on check-in

Nearly two years ago, I had a project where we used a whiteboard and post-it notes for our Kanban board. Perhaps one of my favourite aspects of using a whiteboard was the non-verbal communication that occurred between team members: when a developer finished a task, she would stand up from her desk, take a deep breath and a well-deserved stretch, then rip her post-it note from the In-Progress column, slap it into the Ready-for-Test column and yank one from the Next column. Everyone on the team would look up, smile and go back to coding.

Alas, whiteboards and post-it notes only work when all team members can see the board; when your teams are remote you need a software solution. Our organization is big on TFS, and I’ve had good luck using SEP TeamWorks to simplify the data and present it in a Kanban fashion.

One of the challenges with using software for task tracking is that it loses that tacit, non-verbal communication. Choosing the wrong tool can mean you spend more time managing the tool than building software.

Here’s a quick post that illustrates how you can leverage features of TFS workflow to automate your Kanban process a bit, so you don’t have to harass your team members so much.

The XML schema for our User Stories, Bugs and Tasks contains elements that describe fields, user interface and workflow. The workflow element is interesting because it allows us to define the supported transitions between states and default values for fields in each state. It also supports a sweet little addition that will automatically transition a work item to a different state on check-in, simply by including the following Action in the transition’s ACTIONS element:

<TRANSITION from="In Development" to="Ready for Test">
  <REASONS>
    <DEFAULTREASON value="Development Complete" />
  </REASONS>
  <FIELDS>
    <FIELD refname="System.AssignedTo">
      <ALLOWEXISTINGVALUE />
      <EMPTY />
    </FIELD>
  </FIELDS>
  <ACTIONS>
    <ACTION value="Microsoft.VSTS.Actions.Checkin" />
  </ACTIONS>
</TRANSITION>

Although the schema suggests that it would allow custom actions to be plugged in here, only the Microsoft.VSTS.Actions.Checkin action is supported.

To take advantage of this feature, simply associate your work items with your check-in and set the check-in action to “Resolve”.

TFS_AssociateWorkItems

Thursday, September 12, 2013

Kanban, TFS style.

I’ve been building up a system for tracking tasks using User Stories in TFS for almost two years, and I’ve been blogging as I go. Over time, I’ve written several short emails to colleagues pointing them to the Kanban tag of my blog, but that tag list is a bit heavy. Here’s a quick recap of helpful posts that can get you up and running.

Getting Started

If you’re a developer, team lead or architect – you should start here.

  1. My first month with Kanban – a quick walkthrough that discusses my initial observations with Kanban. Old school, using a whiteboard and sticky-notes.
  2. Using Kanban with TFS – a high level overview of the process I’ve adopted for tracking Tasks in TFS using Kanban columns. Provides an overview of customizing work-items.
  3. Configuring SEP Teamworks – a walkthrough of how to set up SEP TeamWorks, my favorite free tool for viewing TFS items in a sticky-note fashion.
  4. Using User Stories with SEP Teamworks – Revisiting my workflow slightly to adapt to using User Stories instead of just individual task tracking.
  5. Setting up Areas and Iterations – Shows you how to organize your stories into TFS Areas & Iterations, and then use SEP TeamWorks features to simplify your views.

Reporting and Querying

If you’re a project manager and you want to set up SEP TeamWorks, please read:

  1. Working with TFS WorkItems in Excel – includes a walkthrough of how to set up your machine and export queries to Excel.
  2. Configuring SEP Teamworks – a walkthrough of how to set up SEP TeamWorks, my favorite free tool for viewing TFS items in a sticky-note fashion.

Advanced Customizations

  1. What’s new in SEP Teamworks 1.0.31 – highlights some of the new improvements in the latest release: Card Editor to customize views, creating linked work items.
  2. Marking Kanban items as Blocked – demonstrates how to customize your work items to take advantage of marking items as blocked.

Tuesday, August 13, 2013

Fix your code with an “On Notice” board

OnNotice

The above comes with thanks to the On Notice Generator, and my board reiterates a lot of the guidance found on Miško Hevery’s blog. These are code patterns and anti-patterns that I’ve encountered on many projects and have strong feelings against. Some of these are actually on my Dead to Me board, but there wasn’t an online generator for that.

I think it’s a good habit to start an On Notice board for your project – a list of offending code that should be cleaned up at some point. Often, these unsightly offenders are large and tightly woven into the fabric of our code, so they’re not something that can be fixed in a single refactoring. But by placing these offences on a visible On Notice board, they become goals that can fuel future refactorings.

You might not be able to fix a problem in a single session, but you can add a two-hour research task to your backlog to understand why it exists. The output of such a task might be further research tasks or changes you could introduce to shrink the offender’s influence and eventually remove it altogether. Sometimes I bundle a bunch of these fixes into a refactoring user story, or slip a few into a new feature if they’re related. Over time, the board clears up.

Don’t forget, while you’re making these changes, write a few tests while you’re at it.

Oh, regarding Gluten-free cookies… they look like cookies, but they are most definitely not.

Monday, July 22, 2013

Thanks for Saying Thanks!

A few months back, I received a nice email from the folks at SEP for my blog notes on SEP TeamWorks. It’s nice to know that someone is paying attention:

My name is Adam Scroggin and I am the product owner for SEP TeamWorks.  I came across your blog today (http://www.bryancook.net/) and just wanted to say thanks for the write ups you did.  If there are any features you want added, feel free to contact me.

Keep on writing! Adam

Adam Scroggin | Software Engineering Manager

Appreciate the feedback, Adam. I have a few more blog posts to add regarding using the tool with TFS. I’m sure you’d be happy to learn that teams within my organization continue to adopt the tool for its ease of use.

Thursday, July 18, 2013

Unhandled exceptions in WPF applications

When it comes to unit testing, there are a few areas of the application where I am comfortable not getting coverage. Some areas, typically in the UI, are difficult to unit test but can be easily verified by running the application manually. A few other places are very difficult to validate through testing at all, such as the global error handler for your application. For the global error handler, you have to live with some manual testing and assume you’ve got it right.

Today I discovered one of my assumptions about the global error handler was completely wrong. My app was crashing and displaying error messages; I assumed it was crashing, logging to a file and exiting politely. It was not. And as always, I’m writing this as a reminder for you and myself.

As most know, the best place for a global exception handler in WPF is to attach an event handler to the application’s DispatcherUnhandledException event. It’s important to set the Handled property of the DispatcherUnhandledExceptionEventArgs to true to prevent the app from crashing.

However, this will only capture exceptions on the UI thread. Exceptions on other threads will look for a handler on that thread’s stack; if none is found, they bubble up to the AppDomain. So to capture these exceptions you should also add an event handler to the AppDomain’s UnhandledException event.

In contrast to DispatcherUnhandledExceptionEventArgs, UnhandledExceptionEventArgs does not have a Handled property. I assumed that the purpose of this handler was so that we could log the error and go on about our business. As it turns out, if your code reaches this event handler the exception is completely unrecoverable. As in Bill Paxton, “Game over, man!” – your app is going to crash and show a nasty error dialog. The only way to prevent the dialog is to call Environment.Exit(1);

using System;
using System.Windows;
using System.Windows.Threading;
using log4net;

namespace MyApplication
{
    public class MyApp : Application
    {
        private static readonly ILog Log = LogManager.GetLogger(typeof(MyApp));

        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            // handle all exceptions thrown on the main UI thread
            Application.Current.DispatcherUnhandledException += OnDispatcherUnhandledException;

            // handle all other exceptions thrown on background threads
            AppDomain.CurrentDomain.UnhandledException += OnAppDomainUnhandledException;
        }

        void OnDispatcherUnhandledException(object sender, DispatcherUnhandledExceptionEventArgs e)
        {
            // prevent the unhandled exception from crashing the application.
            e.Handled = true;

            Log.Fatal("An unhandled exception has reached the UI Dispatcher.", e.Exception);

            // shut down the application nicely.
            Application.Current.Shutdown(-1);
        }

        void OnAppDomainUnhandledException(object sender, UnhandledExceptionEventArgs e)
        {
            var ex = e.ExceptionObject as Exception;

            Log.Fatal("An unhandled exception has reached the AppDomain exception handler. Application will now exit.", ex);

            // This exception cannot be handled and you cannot reliably use Shutdown here.
            // The only way to suppress the CLR error dialog is to exit the process explicitly.
            Environment.Exit(1);
        }
    }
}

If you want to gracefully exit the application regardless of which thread threw the exception, the recommended approach is to do the following (a sketch follows the list):

  • Handle the exception on the background thread.
  • Marshal the exception to the UI thread and then re-throw it there.
  • Handle the exception in the Application.DispatcherUnhandledException handler.
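Here’s a minimal sketch of that marshalling step (my illustration, not from the original post); doBackgroundWork stands in for whatever work your background thread performs:

using System;
using System.Runtime.ExceptionServices;
using System.Threading.Tasks;
using System.Windows;

public static class BackgroundWork
{
    public static void Run(Action doBackgroundWork)
    {
        Task.Run(() =>
        {
            try
            {
                doBackgroundWork();
            }
            catch (Exception ex)
            {
                // Capture preserves the original stack trace; re-throwing on the Dispatcher
                // routes the exception to Application.DispatcherUnhandledException.
                var captured = ExceptionDispatchInfo.Capture(ex);
                Application.Current.Dispatcher.BeginInvoke(new Action(() => captured.Throw()));
            }
        });
    }
}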

There’s no easy way out here, which means you need to fix the offending code. My recommendation is to use the AppDomain’s UnhandledException as a honey pot to find issues.

High five.


Monday, July 08, 2013

DeploymentItems in Visual Studio 2012

A frequent concern with writing unit tests with MSTest is how to include additional files and test data for a test run. This process has changed between Visual Studio 2010 and 2012 and it’s become a source of confusion.

Background

With Visual Studio 2010 and earlier, every time you ran your tests Visual Studio would copy all files related to the test to a test run folder and execute them from this location. For local development this feature allows you to compare results between test runs, but the feature is also intended to support deploying the tests to remote machines for execution.

If your tests depend on additional files, such as external configuration files or third-party dependencies that aren’t directly referenced by the tests, you would need to enable Deployment in your testsettings and then either specify the deployment items in the testsettings file or mark each test with a DeploymentItemAttribute.
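For reference, here’s a minimal sketch of the attribute-based approach; TestData\customers.xml is a hypothetical file included in the test project:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ImportTests
{
    [TestMethod]
    [DeploymentItem(@"TestData\customers.xml", "TestData")]
    public void Import_ReadsCustomersFile()
    {
        // The attribute copies the file to the deployment folder before the test runs.
        Assert.IsTrue(System.IO.File.Exists(@"TestData\customers.xml"));
    }
}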

What’s changed in Visual Studio 2012?

Visual Studio 2012 has a number of changes related to the test engine that impact deployment. The most visible change is that Visual Studio 2012 no longer automatically adds a testsettings file to your solution when you add a Test project. The testsettings file can still be added to your project manually, but it’s generally recommended that you don’t use it: it exists for backward compatibility and not all features of Visual Studio 2012 work with it. Microsoft Fakes, for example, won’t work with it.

The biggest change related to deployment is that Visual Studio 2012 runs tests directly out of the build output folder by default. This gives the tests a significant speed boost, and it also means that if your tests depend only on files that are already part of the build output, you won’t need to enable deployment at all.

Another interesting change is that if you include a DeploymentItemAttribute in your tests, Deployment will be automatically enabled and your tests will run out of the deployment folder.

More information can be found here.

Saturday, May 25, 2013

.props files and NuGet 2.5

My last post showed a very simple PowerShell script to automate project properties as part of a NuGet package. Shortly after posting, I exchanged a few tweets with some very smart people. The suggestion was that there’s a new feature in NuGet 2.5 that can pull .props and .targets files directly into your project without having to resort to PowerShell scripting. I had to try this.

So, I created a .props file:

<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <AssemblyOriginatorKeyFile>mykey.snk</AssemblyOriginatorKeyFile>
        <SignAssembly>true</SignAssembly>
    </PropertyGroup>
</Project>

And put it in a new package:

image

Then added my shiny new package to a new project.

…And nothing happened.

…At first. I had to unload and reload the project to see the changes.

<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="..\packages\StrongKey2.1.0.0\build\StrongKey2.props" Condition="Exists('..\packages\StrongKey2.1.0.0\build\StrongKey2.props')" />
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
    <Platform Condition=" '$(Platform)' == '' ">x86</Platform>

I’m thinking, for my example anyway, that PowerShell is better suited for this?

Thursday, May 23, 2013

Manipulating Projects with NuGet Powershell scripts

I’ve been playing with hosting NuGet packages on our internal NuGet server a fair bit recently and encountered some interesting feedback from a colleague: none of our packages are signed with a strong name. Gasp! Strong naming definitely falls into the “you really should” bucket but quickly gets put in the “not today” bucket simply because it’s a pain. If you’re familiar with the concept, the challenge with strong naming is that if you give one assembly a strong name then every assembly it references must also have one. This quickly cascades into a lot of repetitive tasks.

The pain of repetitive tasks is something that NuGet handles really well, so why not put my signing key in a NuGet package to automate this process? The concept of this package is extremely straightforward:

  1. Add my snk file to the project as content
  2. Manipulate the project properties in the install.ps1 script

I’ve really been wanting to write some PowerShell scripts for NuGet packages for a while now, but haven’t had the opportunity. My initial thought was to manipulate the project as an Xml document, but I abandoned that approach after some research showed that Visual Studio would prompt the user to reload the project.

After some Googling, I came across a HaaHa (Haack/Hanselman) presentation at MIX11, where Scott’s AddMVC3ToWebForms package had some dirty hacks to manipulate the project. There’s a really good discussion in the NuGet forums that suggests the MSBuild API is a good approach for manipulating project properties only if what you want isn’t directly available from the Visual Studio API.

Fortunately for me, manipulating the properties I wanted was dead easy so my script couldn’t be simpler:

param($installPath, $toolsPath, $package, $project)

$project.Properties.Item("AssemblyOriginatorKeyFile").Value = "mykey.snk";
$project.Properties.Item("SignAssembly").Value = "true";

$project.Save();

A few handy tips I discovered while writing this…

You can get access to the $project variable using the Package Manager Console:

$project = Get-Project

You can list the details of your objects using the get-member cmdlet:

$project | get-member

And you can get a dump of the current values, and list of properties easily:

$project.Properties
$project.Properties | select Name

Well, that’s all for now. Let’s hope this is the start of some awesome PowerShell NuGet badassness for you.

Happy Coding.

Thursday, May 16, 2013

Working with TFS work items in Excel

Visual Studio ships with a pretty good query engine that you can use to retrieve a list of work items, but if you need to change the filter criteria frequently or calculate totals for estimates or remaining work, Visual Studio can’t cut it on its own. Fortunately, the integration between TFS and the Microsoft Office suite is fantastic and we can export our data into Excel with a few simple clicks.

Hey, if you’re reading this because you’re a project manager at my office and I sent you this link, you’ll need a few things on your machine for this to work:

  1. Download a copy of Visual Studio Team Explorer. It’s basically a slimmed-down shell of Visual Studio without all the code-editing features, and it includes all the goodies you need to query work items and export to Excel. If you have Visual Studio installed, you can skip this step.
  2. You’ll need read permissions to the TFS Team Project. Reach out to your friendly IT support for this.
  3. If you’ve never connected to the TFS server before, you’ll also need the TFS Server name, Team Project Collection name, and Team Project name.

There are two ways to get TFS work items into Excel:

  • Import TFS Work Items into Excel
  • Export TFS Work Items from Visual Studio or Team Explorer into Excel

Importing TFS Work Items into Excel

Let’s assume that you’ve never connected to TFS before. Managing this from Excel is actually pretty easy.

  1. Open Excel and create a blank workbook
  2. Select the “Team” option from the Ribbon. If “Team” isn’t available in the Ribbon, you haven’t installed Team Explorer. (see above)
  3. Select “New List”.
    Excel_TeamExplorer_Addin
  4. The Connect to Team Foundation Server dialog will appear.
    TFS_ConnectToServer
  5. Click the Servers button. The Add/Remove Team Foundation Server dialog will appear.
    TFS_AddRemoveServer
  6. Click the Add button to open the Add Team Foundation Server dialog.
    TFS_AddServerpng
  7. Fill in the server name and details provided to you by IT. You’ll know you’ve entered the right information when the Preview matches the information provided.
    TFS_AddServerpng
  8. After all the connection information is provided, select the appropriate Team Project Collection, and the Team Project that you want to use.
    TFS_SelectProject
  9. Finally a dialog appears that lets you select an existing TFS query.
    Excel_SelectQuery
  10. Once you’ve selected the query, click OK. Voila!
    Excel_TFS_Goodness

Exporting TFS work items to Excel from Visual Studio

Exporting your favorite query from Visual Studio to Excel couldn’t be easier. Simply run the query and then select “Open in Microsoft Office –> Open Query in Microsoft Excel”

VS_ExportToExcel

Famous Last Words

Here are a few tips for working with work items in Excel:

  • At any time you can get the latest by clicking the “Refresh” button
  • If you make changes, you can push them back to the server by clicking the “Publish” button. (You’ll need write permissions for this)
  • This worksheet is tied directly to the server, so don’t forward the spreadsheet to individuals who don’t have access to the data. If you need to share it, copy the contents into a separate worksheet first.

Thursday, February 28, 2013

Using Areas & Iterations in SEP TeamWorks

TFS supports a concept that allows you to organize your work items into different functional areas and chronological iterations, called Areas & Iterations. This post shows you how you can use these features to groom your Work Items using SEP TeamWorks.

Areas & Iterations in TFS

Areas & Iterations can be accessed from the Team Explorer.

Visual Studio 2012

TFS2012_AreasIterations

Selecting this item opens a single dialog that allows you to configure both Areas & Iterations. Note that you’ll need to be an administrator for your Team Project in order to make changes.

Areas

The following shows an example of functional areas for a product.

TFS_Iterations

Iterations

The following shows an example of Iterations.

TFS_AreasIterations

Both Areas & Iterations can be nested, so you can structure them any way you want. The system is also flexible: if you delete an item, you’ll be prompted to select a new value for any work items that would be impacted.

Querying Work Items based on Path

When you query Work Items, you can filter on the Area or Iteration path to find all items within that part of the hierarchy.

TFS_AreaPathQuery

Using Swim Lanes with SEP TeamWorks

SEP TeamWorks doesn’t have any built-in mechanism to filter your TFS queries, but it has a fantastic feature that lets you group content using swim lanes.

Swim Lanes can be turned on for any query (except hierarchical queries); simply right-click (long-press on your touch device) to open the context menu and choose Edit Swim Lanes. You can also access the swim lanes from the Options button in the UI.

SEP_AllWorkItems

You can create Swim Lanes for most fields, assuming that the Work Items in the Query support the fields you’re grouping by:

SEP_EditSwimLanes

Here’s the same query with a swim lane enabled for Iteration Path.

SEP_AllWorkItems_with_SwimLanes

Where swim lanes really shine is that you can use them to edit your work items without opening them, simply by dragging them into another lane. Here are some great examples:

  • Bug Triage: write a query that pulls back all new bugs then use swim lanes for Severity, Priority, Found In or Triage status. 
  • Product Planning: write a query for all User Stories then use swim lanes for Area path, Iteration path, Risk, Story Points
  • Grooming the backlog: write a query for all Tasks then use swim lanes for Activity, Area, Assigned To, Remaining Effort

As you can see, with a little bit of planning and some creativity you can customize the tool to fit your needs. In the upcoming posts, I’ll look at common queries and exporting information out of TFS into other formats.

Until then, Happy Coding.

Monday, February 25, 2013

Using User Stories with SEP TeamWorks

When I first started with Kanban over a year ago, the post-it notes on my whiteboard represented Tasks and Bugs, which for the most part worked great. The two main challenges raised by the team were:

  • Developers had a hard time visualizing related tasks, which made it difficult to organize and plan effort. We also had the risk of being unable to deploy because some related tasks were unfinished.
  • Project Managers had a hard time reporting progress on high-level features. Although they had access to the Kanban board, assembling a status report from low-level tasks and bugs wasn’t practical.

Kanban is intended to be an adaptive process, meaning that it’s not a full-featured methodology; you pretty much have to add your own rules and concepts to adapt it to your needs. Other methodologies, like Scrum, have solved my problem by using User Stories to represent high-level features that you can group related tasks under. Developers could see related tasks; project managers could report progress on Stories – win/win. The only remaining challenge was how to represent User Stories on a whiteboard. (How do you associate post-it notes with a Story? Do Stories move on the board like Tasks do? If a few of the tasks for a Story are “ready for test”, does the status of the Story change?)

I struggled with this challenge until I switched over to a digital Kanban board which afforded us the ability to produce different queries over the same data. We’re using TFS 2010 as a repository and SEP TeamWorks as a front-end to visualize task effort.

Querying and Displaying User Stories

The Agile Process Template used by TFS 2010 includes a User Story work item type, which you can link Tasks to in a Parent-Child relationship. You can write a simple query to obtain all Stories along with their child Tasks.

TFS_DirectLinksQuery

Note that the TFS query that I’m using is a Work Items with Direct Links query, which allows me to pull back Children tasks as well as Related bugs. I also don’t nest my User Stories – this technically works within TeamWorks, but it seems a little odd and needs some further investigation.

SEP TeamWorks supports hierarchical tasks out of the box. Stories and their children are represented as swim lanes, with the first column representing the Story. Note that because Stories are broken into swim lanes, you can’t use TeamWorks’ swim lane feature to further refine the query. If this is something you want, you’ll have to set up different TFS queries.

SEP_BacklogQuery

Adjusting Kanban Workflows

While the workflow that I’m using for Tasks and Bugs would work for User Stories, it seems a little heavy to duplicate the entire process for Stories, so I simplified the flow to make it easier to manage and to report status on.

The flow for my Tasks and Bugs, which I’ve outlined in my previous posts, is:

  • Proposed
  • Active
  • Selected
  • In Development
  • Ready for Test
  • Testing
  • Acceptance
  • Closed.

For my User Stories, I’ve simplified it to:

  • Proposed: the story does not have all the details, yet.
  • Active: the story is well defined and is actionable.
  • In Development: the story is being developed or validated by developers
  • Testing: development has completed and the testers are running their test cases
  • Acceptance: test cases are complete and it’s ready for client review
  • Closed: the feature has been accepted / shipped / etc.

Note that SEP TeamWorks pulls the Status column from all Work Items into its UI, so I’ve repurposed my existing status definitions to keep things simple.

Daily Operations

As mentioned previously, having our Kanban board as a software solution allows us to query and represent the data in different ways. I use a combination of three different queries for my daily operations:

  • Product Planning: I query only User Stories which allows me to see the high-level status of the project at a glance. I heavily use Areas & Iterations (I’ll blog about this next) to help organize and plan the project.
  • Iteration Backlog: I use a hierarchical query, as shown above, to represent the current iteration. This query allows me to see the detailed status of each Story.
  • All Work Items: This query returns all Tasks and Bugs and it resembles the Kanban board I’ve fallen in love with. This query allows me to see bottlenecks, developer allocations, etc.

Conclusion

Having User Stories has made a big difference in how we track and organize our effort, but it’s not without its challenges. There’s a special knack to writing stories and it takes some practice to get used to the format, which I’ll have to blog about later. For now, I hope this post shows you how easy it is to introduce Stories into your Kanban flow.

Until next time, happy coding.

Saturday, February 23, 2013

New updates for SEP TeamWorks

For the last 8 months, I’ve been using SEP TeamWorks as a front-end for my all-digital, TFS-backed Kanban board. The experiment has been relatively successful, and I’m constantly revamping my model to make it fit our style of projects. The real success of using Kanban with this tool has been its adoption by other projects within the organization. While my hand-waving evangelism has played a part, this expansion is largely due to the position of my desk: I sit directly between the Project Management group and the developers and see a fair amount of foot traffic every day. Every developer, project manager, designer and executive has walked by my multiple-monitor, touch-screen environment and eventually, like bees to honey, they ask, “what the hell is that??”

While I’m very pleased to see Kanban adoption within our organization, I’m also very glad to see that SEP is using an iterative release cycle for their TeamWorks product. Late last year they added an auto-update feature, and now surprise updates make it feel like it’s Christmas all year long.

Here are two of my favorite new features from the latest release:

Revamped “Add new” menu

There are two ways you can add new work items inside TeamWorks.  The first way is through a main menu at the top of the screen; the second is a context-menu that you can bring up on an existing work item. This context-menu option has been completely reworked.

AddNewLinkedWorkItem

The new version opens a dialog window that lets you specify the WorkItem Type (Bug, Task, etc.) and relationship (Parent, Child, Related, etc.) to the current item.

AddNewLinkedWorkItemDialog

This dialog is very similar to how it appears within Visual Studio, which means I can do more from the touch screen. The other clear advantage to using the context-menu is that the newly created work item inherits the iteration and area of the linked item.

Card Editor

This release has a brand new feature that allows you to edit the appearance of your Kanban cards. This is a very simple but very welcome addition to the tool.  The tool allows you to customize the fields and layout for each type of card, and you’re able to reuse your layouts from other queries or projects.

Here’s an example of a Task that I’ve customized by placing the Remaining Work field in the bottom right-hand corner, horizontally right-aligned.

CardEditor

Here are a few customizations that I’ve used on my most recent project:

  • User Stories: add the Story points to the bottom of the card
  • Tasks: Original Estimate, Completed Work, Remaining Work
  • Bugs: Priority and Severity

Well, that’s all for now. I’ve got Inglourious Basterds queued up on my DVR and a nice cool drink waiting for me.

Happy Coding.

Tuesday, February 19, 2013

Turtles all the way down

A while back I had a heated, passionate debate about dependency injection with a colleague. He was concerned about the lifetime of objects and which object had the responsibility of creating the other. While some of his points were valid, I took the position that much of that doesn’t matter if you allow the Inversion of Control container to manage it for you – simply push dependencies through the constructor. The argument, comically, sounded a bit like this:

Colleague: “But who creates these dependencies?”

Me: “That doesn’t matter. Put them in the constructor, let the container take care of that.”

Colleague: “Ok, so who creates this object?”

Me: “Again, doesn’t matter. It’s placed in the constructor of its parent, along with any other dependencies it needs.”

Colleague: “Arrrg! Who creates that object??”

Me: “Dude! It’s turtles all the way down.”

The reference “turtles all the way down” is to what I believed to be an Iroquois cosmological myth that the earth is flat and rests on the back of a great tortoise, which in turn rests on top of a larger tortoise, which rests on top of a larger tortoise, and so on… all the way down. I recently discovered that the source of this expression is unknown and there are many different variants on the theme. The best variant is the Hindu belief that the earth sits on top of a tiger or elephant, which is on top of a tortoise, which is then, well, …uh, suppose we change the subject?

Over the years, I’ve adopted this expression to represent a fundamental lesson I was taught in object oriented programming: “objects should not have a top”. That is, objects should be open ended, allowing them to be extensible or mixed-in to other different applications and implementations. Our objects should be aware of their dependencies but not in control of their implementation or lifetime, and the framework that glues them together should be abstracted away. This idea is echoed by the Open/Closed principle and the Dependency Inversion principle, and it is demonstrated every time we write automated tests – the first consumers of our designs.

In my analogy, turtles all the way down accurately describes how I feel we should design our objects, but to my colleague’s credit it doesn’t reflect that at some point there must be a reckoning – someone must know how to assemble the parts to make a whole. I suppose Aristotle would point to an unmoved mover, responsible for the order of the universe. Douglas Adams would point to the man behind the man who probably has another man behind him. I’d point to some top-level application service that has the configuration that magically wires everything together.
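To make the idea concrete, here’s a minimal sketch (my illustration, not from the original discussion) of constructor-injected objects with a single composition root at the top:

public interface IMessageSink { void Send(string message); }

public class SmtpMessageSink : IMessageSink
{
    public void Send(string message) { /* send the message */ }
}

public class OrderProcessor
{
    private readonly IMessageSink _sink;

    // The class declares what it needs; it never decides who builds it or how long it lives.
    public OrderProcessor(IMessageSink sink) { _sink = sink; }

    public void Process() { _sink.Send("order processed"); }
}

public static class Program
{
    public static void Main()
    {
        // The "unmoved mover": the one place that knows how the parts fit together.
        // In a real project this is typically an IoC container's registration code.
        var processor = new OrderProcessor(new SmtpMessageSink());
        processor.Process();
    }
}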

It feels good to know there’s some sort of finality to the infinite regress problem: our application glues it all together.  So maybe now we can catch up on our sleep, knowing that we don’t have to realize infinity before we can close our eyes. At least until we see our application as part of a batch process in a much larger service, hosted in the cloud, resting on the back of a very large turtle.

Happy coding.

Thursday, January 24, 2013

A few helpful TDD links

A few months ago, I had an interesting discussion about TDD with a colleague. When I returned to my desk I quickly sent him a list of links that I thought he’d find useful.

Today, I found it while cleaning up my email. I think you might find it useful, too.

I came across Miško Hevery’s blog through the Google Testing Blog.

A few of my posts that you might enjoy:

Wednesday, January 09, 2013

Windows Xaml–Triggerless

I was surprised to learn that Xaml for Windows Store applications does not have support for Triggers. Triggers within WPF (DataTriggers, EventTriggers) are a simple way to introduce interactivity to a user control such as changing the display properties when an event or data condition is satisfied. Since Triggers are declared using Xaml syntax, functional behavior can be introduced by modifying a style rather than writing code in the code-behind.

In recent years, however, WPF has introduced the Visual State Manager (VSM) concept, which originally grew out of the Silverlight and WPF Toolkits before being rolled into .NET 4.0. The Visual State Manager is much easier to use because it groups previously unrelated triggers into more meaningful semantic states (MouseOver, Pressed, Normal, Disabled, etc.). Triggers have played a less important role in WPF development since then, but there were still occasions where a Data or Event Trigger made sense.

Unfortunately, triggers were not included in Xaml for Windows Store applications. (Technically, EventTrigger is included, but it only supports the FrameworkElement.Loaded event.) So where does this leave us?

Here are a few options:

Personally, I’m a fan of the Expression Blend Behaviors, especially InvokeCommandAction, which is immensely useful when building MVVM applications. I’ll post more about these with some examples soon.
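In the meantime, here’s a minimal sketch of the visual-state route described above (my own illustration, not from the post): where WPF might have used an EventTrigger, a Windows Store control can flip visual states from code. “Highlighted” and “Normal” are hypothetical states defined in the control’s template.

using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Input;

public sealed class HighlightableControl : Control
{
    protected override void OnPointerEntered(PointerRoutedEventArgs e)
    {
        base.OnPointerEntered(e);

        // Roughly what a WPF EventTrigger in a style would have done declaratively.
        VisualStateManager.GoToState(this, "Highlighted", useTransitions: true);
    }

    protected override void OnPointerExited(PointerRoutedEventArgs e)
    {
        base.OnPointerExited(e);
        VisualStateManager.GoToState(this, "Normal", useTransitions: true);
    }
}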

Until then, happy coding.

Tuesday, January 08, 2013

Visibility Hidden Behavior for WinRT

In my last post, I lamented the loss of the Hidden value in the Windows.UI.Xaml.Visibility enumeration. Here’s an ultra-simple attached property that simulates Hidden by changing the opacity and hit-testability of the element:

using Windows.UI.Xaml;

namespace Blog.VisibilityHiddenDemo
{
    public class VisibilityHiddenBehavior : DependencyObject
    {
        public static readonly DependencyProperty IsVisibleProperty = 
            DependencyProperty.RegisterAttached("IsVisible", typeof(bool), typeof(VisibilityHiddenBehavior),
                new PropertyMetadata(true, OnIsVisibleChanged));

        public static bool GetIsVisible(DependencyObject obj)
        {
            return (bool)obj.GetValue(IsVisibleProperty);
        }

        public static void SetIsVisible(DependencyObject obj, bool value)
        {
            obj.SetValue(IsVisibleProperty, value);
        }

        private static void OnIsVisibleChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var element = d as FrameworkElement;
            if (element != null)
            {
                var visible = (bool)e.NewValue;
                element.Opacity = visible ? 1 : 0;
                element.IsHitTestVisible = visible;
            }
        }        
    }
}
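For reference, a hypothetical usage from code-behind (in Xaml you would set or bind the attached property instead); backButton is a placeholder element:

// Fades the element out and disables hit testing, but keeps its space in the layout.
VisibilityHiddenBehavior.SetIsVisible(backButton, false);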

Free as in beer.

Happy coding.

Monday, January 07, 2013

Windows Xaml - No Where to Hide

With thanks to my current project, I finally have a chance to build a Xaml based Windows Store application. On the whole, I think the platform is great but it seems like a complete reboot of the developer ecosystem. Favourite tools like Reflector and WPF Snoop don't have a home and it reminds me of the first few years when .NET first came out. It's going to take a little while for us all to fully understand the change and how to make the most of it.

For today I thought it would be fun to start a new series of posts related to all the gotchas I've discovered working with Windows.UI.Xaml. Today's gotcha, Visibility.

One of the first observations you’ll make working with Xaml for Windows Store applications is that it is not WPF. WPF has matured in the System.Windows namespace over the last six years; Windows.UI.Xaml is brand-new -- it looks like WPF but it has some interesting changes.

Today's change: Visibility.

Under WPF, the System.Windows.Visibility enumeration has three possible values:

  • Visible – Display the element.
  • Hidden – Do not display the element, but reserve space for the element in layout.
  • Collapsed – Do not display the element, and do not reserve space for it in layout.

Visible seems straightforward enough, but the difference between Hidden and Collapsed is significant. The difference is shown using the Rectangle in this example:

<Window x:Class="Blog.VisibilityHiddenDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <StackPanel Orientation="Vertical">

        <Rectangle Width="100" Height="100" Fill="Green" 
                   Visibility="Visible" />
        <TextBlock HorizontalAlignment="Center">Some Content</TextBlock>
        
    </StackPanel>
</Window>

When the Rectangle is Visible it appears on the screen as expected.

VisibilityHidden_Visible

When the Rectangle is Hidden it doesn't appear, but it still takes up space.

VisibilityHidden_Hidden

When the Rectangle is Collapsed it’s as though it never existed.

VisibilityHidden_Collapsed

For applications that target Windows RT, Windows.UI.Xaml.Visibility has only two possible values:

  • Visible – Display the element.
  • Collapsed – Do not display the element, and do not reserve space for it in layout.

It makes me wonder why the folks at Microsoft decided to drop “Hidden” from the API. Ultimately, someone lost the debate and now we’re left with an enumeration that is familiar to developers but even more awkward than the original. Strange indeed.

So what does it mean that we have no Hidden? In the majority of cases, toggling between Visible and Collapsed is exactly what you want. But occasionally, you do want to remove something from the UI without changing the layout. For example, showing a button on the screen only when required.

So we don't have Hidden in Windows.UI.Xaml. Here's a few alternatives:

Opacity / IsHitTestVisible

One alternative to Hidden is to change the element’s opacity to zero, thereby making it transparent. But beware: transparent objects are still present on the screen and can be interacted with, so if you’re hiding an interactive element such as a button, you’ll also want to set its IsHitTestVisible property to False.
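A quick sketch of that idea in code-behind (myButton is a placeholder element):

// Visually hide the button but keep its layout slot; disable hit testing so it cannot be clicked.
myButton.Opacity = 0;
myButton.IsHitTestVisible = false;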

Alternate Layouts

Rather than relying on Hidden to preserve layout, another alternative is to use a different layout. Here are a few hacks that I’ve used:

Canvas Control

You can use a Canvas control and set the Canvas.Left and Canvas.Top attached properties on the children controls to position the controls exactly where you want. This seems a little heavy-handed and might only work if the content has a fixed size.

Relative positioning using Margins

The Grid control has a simple, dirty hack: if you don’t specify the Grid.Row or Grid.Column attached properties, the element will default to the first column and row. You can achieve a form of relative positioning by putting multiple controls in the first position and then using margins to offset their position.

<Grid>
   <Button Width="50" Visibility="Collapsed">
      <Image Source="/images/back.png" />
   </Button>
   <TextBlock Margin="55,0,0,0"
              Text="Text offset by margin" />
</Grid>

Grid-based layout

Another approach is to preserve the layout by defining it explicitly. Again, this is a little heavy-handed if all you need to do is hide a single element.

<Grid>
   <Grid.ColumnDefinitions>
        <ColumnDefinition Width="50" />
        <ColumnDefinition />
   </Grid.ColumnDefinitions>

   <Button Grid.Column="0" Visibility="Collapsed">
     <Image Source="/images/back.png" />
   </Button>

   <TextBlock Grid.Column="1"
        Text="Text in 2nd column" />
</Grid>

Well, that’s all for now. I’ll continue to post finds on Windows 8 Xaml under the WinRT hash tag.

Happy coding!