Thursday, August 27, 2020

Quick Start with the JIRA REST API Browser

Milky Way over Guilderton Lighthouse, Western Australia - 35mm Panorama

Looking to start programming against the JIRA API? Me too! This post walks through setting up JIRA locally on your computer and installing the JIRA API Explorer to start poking at the APIs.

Why use the REST API Browser?

Good question. Postman and other tools can poke at JIRA endpoints just as well, but the API Explorer comes equipped with help documentation and lets you quickly pick out an endpoint without having to worry about authentication, etc. You can easily toggle between GET, POST, PUT and DELETE commands for endpoints that support them, and the explorer provides a simple form for entering required and optional fields.

jira-explorer

Set up a JIRA instance in Docker

Setting up JIRA in a Docker container is pretty easy.

docker volume create --name jiraVolume
docker run -v jiraVolume:/var/atlassian/application-data/jira --name="jira" -d -p 8080:8080 atlassian/jira-software

After the Docker image is downloaded, open your browser to http://localhost:8080.

On first start, you'll be walked through a series of setup steps:

  • Set it up for me
  • Create a trial license key
  • Set up an admin user account
  • Create an empty project, use a sample project or import from an existing project.

Quick Setup

When you first navigate to the docker instance, you'll be prompted with a choice:

jira-setup-1

Let's go with "Set it up for me", and then click on "Continue to MyAtlassian"

Create a Trial License

You'll be redirected to provide information for creating a trial license. Select "JIRA Server", provide a name for your organization and leave the default values for the server name.

jira-setup-2

When you click next, you'll be redirected to a screen that shows the license key and a Confirmation dialog.

jira-setup-3

Click Yes to confirm the installation.

Create an Admin User Account and default settings

Somewhat self-explanatory, provide the details for your admin account and click Next.

jira-setup-4

You should see a setup page run for a few minutes. When this completes, you'll need to:

jira-setup-5

  • Login with the admin account you created
  • Pick your default language
  • Choose an Avatar

Create your First Project

The last step of the Quick Setup is to create your first project. Here you can create an empty project, import from an existing project or use a sample project. As we're interested in playing with the API, pick the "See it in action" option to create a JIRA project with a prepopulated backlog.

jira-setup-7

Selecting "See it in action", prompts for the style of project and the project details.

jira-setup-8

Great job. That was easy.

Install the API Browser

The API Explorer is available through the Atlassian Marketplace. To install the add-on:

  1. In the top right, select Settings > Applications
  2. Click the "Manage Apps" tab
  3. Search for "API Browser"

    manage-apps-1
  4. Click "Install"

Super easy.

Using the REST API Browser

The hardest part of using the REST API Browser is locating it after it's been installed. Fortunately, that's easy, too.

In the top-right, select Settings > System and scroll all the way to the bottom to Advanced: REST API Browser.

manage-apps-2

Try it out!

The REST API Browser is fairly intuitive: simply find the endpoint you're interested in on the right-hand side and provide the necessary fields in the form.

rest-api-browser-1

Mind your verbs...

Pay special attention to the HTTP verbs being used: GET, POST, PUT, DELETE. Many of JIRA's endpoints differ only by the verb: POST and PUT will either create or update records, and not all endpoints support GET.

rest-api-browser-2
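
If you'd rather poke at the same endpoints outside the browser, here's a rough sketch using PowerShell's Invoke-RestMethod against our local instance. The issue key, project key and credentials are placeholders for whatever your sample project contains:

# placeholders: admin credentials and the sample project's key
$base    = "http://localhost:8080/rest/api/2"
$token   = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:admin"))
$headers = @{ Authorization = "Basic $token" }

# GET: fetch an existing issue
Invoke-RestMethod -Uri "$base/issue/SAM-1" -Headers $headers -Method Get

# POST: create a new issue in the same project
$body = @{
    fields = @{
        project   = @{ key = "SAM" }
        summary   = "Created from the REST API"
        issuetype = @{ name = "Task" }
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri "$base/issue" -Headers $headers -Method Post `
                  -ContentType "application/json" -Body $body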

Wrap up

Now you have no excuse to start picking at the JIRA API.

Side note: I wrote this post using Typora, a super elegant markdown editor. I'm still experimenting with how to best integrate it with my blog platform (Blogger), but I might look at some combination of static markdown files with a command-line option to publish. I will most likely post that setup when I get there.

Until then, happy coding.

Tuesday, August 18, 2020

Cleaning up stale git branches

Broken branches

If you have pull-requests, you likely have stale branches. What’s the best way to find which branches can be safely deleted? This post will explore some approaches to find and delete stale branches.

As there are many different ways we can approach this, I’m going to start with the most generic concepts and build up to a more programmatic solution. I want to use the Azure DevOps CLI and REST APIs for this instead of git-centric commands because I want to be able to run the scripts from any computer against the latest version of the repository. This also opens up the possibility of running these activities in a PowerShell script in an Azure Pipeline, as outlined in my previous post.

Who can delete Branches?

One of the reasons branches don’t get deleted might be a permissions problem. In Azure DevOps, the default permission settings set up the creator of the branch with the permission to delete it. If you don’t have permission, the option isn’t available to you in the user-interface, and this creates a missed opportunity to remediate the issue when someone other than the author completes the PR.

pr-complete-merge-disabled-delete

pr-delete-source-branch

You can change the default setting by adding the appropriate user or group to the repository's permissions, and the existing branches will inherit it. You'll need the "Force push (rewrite history, delete branches and tags)" permission to delete a branch. See my last post on ways to apply this policy programmatically.

branch-permission-forcepush

If we want to run this from a pipeline, we would have to grant the Build Service the same permissions.
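
As a taste of what that post covers, granting a group the "Force push" permission looks roughly like this with the Azure DevOps CLI. The group descriptor, project id and repository id are assumed to be in hand; the id is the "Git Repositories" security namespace and bit 8 is ForcePush:

# sketch: grant "Force push" (bit 8) on a repository to a group
az devops security permission update `
    --id 2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87 `
    --subject $groupDescriptor `
    --token "repoV2/$projectId/$repositoryId" `
    --allow-bit 8 `
    --deny-bit 0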

Finding the Author of a Branch

One approach to cleaning up stale branches is the old-fashioned way: nagging. Simply track down the branch authors and ask them whether they're done with them.

We can find the authors for the branches with the following PowerShell + Az DevOps CLI:

$project    = "<project-name>"
$repository = "<repository-name>"

$refs = az repos ref list `
    --query "[?starts_with(name, 'refs/heads')].{name:name, uniqueName:creator.uniqueName}" `
    --project $project `
    --repository $repository | ConvertFrom-Json

$refs | Sort-Object -Property uniqueName

az-repos-ref-list-example1

Finding the Last Contributor to a Branch

Sometimes, the author of the branch isn’t the person doing the work. If this is the case, you need to track down the last person to commit against the branch. This information is available in the Azure DevOps user interface (Repos –> Branches):

branch-authors

If you want to obtain this information programmatically, az repos ref list provides us with the objectId (the SHA-1 of the most recent commit). Although az repos doesn't expose a command to retrieve the commit details, we can use the az devops invoke command to call the Get Commit REST endpoint.

When we fetch the detailed information on the branch, we want to get the author that created the commit and the date that they pushed it to the server. We want the push details because a developer may have made the commit a long time ago but only recently updated the branch.

$project    = "<project-name>"
$repository = "<repository-name>"

$refs = az repos ref list -p $project -r $repository --filter heads | ConvertFrom-Json

$results = @()

foreach ($ref in $refs) {

    $objectId = $ref.objectId

    # fetch individual commit details
    $commit = az devops invoke `
        --area git `
        --resource commits `
        --route-parameters `
            project=$project `
            repositoryId=$repository `
            commitId=$objectId | ConvertFrom-Json

    $result = [PSCustomObject]@{
        name         = $ref.name
        creator      = $ref.creator.uniqueName
        lastAuthor   = $commit.committer.email
        lastModified = $commit.push.date
    }

    $results += ,$result
}

$results | Sort-Object -Property lastAuthor

az-repos-ref-list-example

This gives us a lot of details for the last commit in each branch, but if you’ve got a lot of branches, fetching each commit individually could be really slow. So, instead we can use the same Get Commit endpoint to fetch the commit information in batches by providing a collection of objectIds in a comma-delimited format.

Note that there’s a limit to how many commits we can ask for at a time, so I’ll have to batch my batches. I could also use the Get Commits Batch endpoint that accepts the list of ids in the body of a POST message.

The following shows me batching 50 commits at a time. Your batch size may vary if your server or organization name is longer, since the ids are passed on the query string:

$batchSize = 50
$batches = [Math]::Ceiling($refs.Length / $batchSize)

for ($x = 0; $x -lt $batches; $x++)
{
    # take a batch
    $batch = $refs | Select-Object -First $batchSize -Skip ($x * $batchSize)

    # grab the ids for the batch
    $ids = ($batch | ForEach-Object { $_.objectId }) -join ','

    # ask for the commit details for these items
    $commits = az devops invoke `
        --area git `
        --resource commits `
        --route-parameters `
            project=$project `
            repositoryId=$repository `
        --query-parameters `
            searchCriteria.ids=$ids `
            searchCriteria.includePushData=true `
        --query "value[]" | ConvertFrom-Json

    # loop through this batch of commits
    for ($i = 0; $i -lt $commits.Length; $i++) {

        $ref    = $refs[($x * $batchSize) + $i]
        $commit = $commits[$i]

        # add commit information to the batch
        $ref | Add-Member -Name "author" -Value $commit.author.email -MemberType NoteProperty
        $ref | Add-Member -Name "lastModified" -Value $commit.push.date -MemberType NoteProperty

        # add the creator's email on here for easier access in the select-object statement...
        $ref | Add-Member -Name "uniqueName" -Value $ref.creator.uniqueName -MemberType NoteProperty
    }
}

$refs | Select-Object -Property name,creator,author,lastModified
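
For reference, the Get Commits Batch POST variant I mentioned might look something like this sketch. I'm assuming the resource name is commitsBatch and that the body takes the same searchCriteria fields; --in-file supplies the request body:

# sketch: Get Commits Batch via POST, passing the search criteria as the body
$criteria = @{
    ids             = @($refs | ForEach-Object { $_.objectId })
    includePushData = $true
}
$criteria | ConvertTo-Json | Set-Content batch.json

$commits = az devops invoke `
    --area git `
    --resource commitsBatch `
    --route-parameters project=$project repositoryId=$repository `
    --http-method POST `
    --in-file batch.json `
    --query "value[]" | ConvertFrom-Json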

Caveat about this approach: If you've updated the source branch by merging from the target branch, the last author will be from that target branch – which isn’t what we want. Even worse, there's no way to infer this scenario from the git commit details alone. One way we can solve this problem is to fetch the commits in the branch and walk up the parents of the commit until we find a commit that has more than one parent – this would be our merge commit from the target branch, which should have been done by the last author on the branch. Note that the parent information is only available if you query these items one-by-one, so this approach could be painfully slow. (If you know a better approach, let me know)
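
If you wanted to experiment with that walk, a sketch might look like the following. Find-LastMergeCommit is a hypothetical helper, and because it queries commits one at a time, expect it to be slow:

function Find-LastMergeCommit([string]$project, [string]$repository, [string]$objectId) {

    $commitId = $objectId

    while ($true) {

        # fetch the commit, which includes its parents when queried individually
        $commit = az devops invoke `
            --area git `
            --resource commits `
            --route-parameters project=$project repositoryId=$repository commitId=$commitId |
            ConvertFrom-Json

        # more than one parent means we've found the merge from the target branch
        if ($commit.parents.Length -gt 1) {
            return $commit
        }

        # reached the start of history without finding a merge
        if ($commit.parents.Length -eq 0) {
            return $null
        }

        $commitId = $commit.parents[0]
    }
}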

Check the Expiry

Now that we have date information associated with our branches, we can start to filter out the branches that should be considered stale. In my opinion, anything older than three weeks is a good starting point.

$date = [DateTime]::Today.AddDays( -21 )
$refs = $refs | Where-Object { $_.lastModified -lt $date }

Your kilometrage will obviously vary based on the volume of work in your repository, but on a recent project, only 10-20% of the branches were created recently.

Finding Branches that have Completed Pull-Requests

If you’re squashing your commits when your merge, you’ll find that the ahead / behind feature in the Azure DevOps UI is completely unreliable. This is because a squash merge re-writes history, so your commits in the branch will never appear in the target-branch at all. Microsoft recommends deleting the source branch when using this strategy as there is little value in keeping these branches around after the PR is completed. Teams may argue that they want to cherry-pick individual commits from the source-branch, but the practicality of that requires pristine

Our best bet to find stale branches is to look at the Pull-Request history and consider all branches that are associated to completed Pull-Requests as candidates for deletion. This is super easy, barely an inconvenience.

$prs = az repos pr list `
          --project $project `
          --repository $repository `
          --target-branch develop `
          --status completed `
          --query "[].sourceRefName" | ConvertFrom-Json

$refs | Where-Object { $prs.Contains( $_.name ) } |
        ForEach-Object {

            $result = az repos ref delete `
                        --name $_.name `
                        --object-id $_.objectId `
                        --project $project `
                        --repository $repository | ConvertFrom-Json

            Write-Host ("Success Message: {0}" -f $result.updateStatus)
        }

At first glance, this would remove about 50% of the remaining branches in our repository, leaving us with 10-20% recent branches and an additional 30-40% of branches without PRs. That's roughly a 40% reduction, and I'll take that for now. It's important to recognize this only includes the completed PRs, not the active or abandoned ones.

Wrapping Up

Using a combination of these techniques, we can easily reduce the number of stale branches and then provide the remaining list to the team to have them clean up the dregs. The majority of old branches are likely abandoned work, but there are sometimes scenarios where partially completed work is waiting on some external dependency. In that scenario, encourage the team to keep these important branches up-to-date to retain the value of the invested effort.

The best overall strategy is to not let this situation occur in the first place: give the individuals reviewing and completing PRs the permission to delete branches, and encourage teams to squash and delete branches as they complete PRs. Good habits create good hygiene.

Happy coding.

Friday, August 14, 2020

Securing Git Branches through Azure DevOps CLI

Permission Granted

I've been looking for a way to automate branch security in order to enforce branch naming conventions and to control who can create release branches. Although the Azure DevOps documentation illustrates how to do this using the tfssecurity.exe command, the documentation also suggests that tfssecurity.exe is now deprecated.

This post will walk through how to apply branch security to your Azure DevOps repository using the Azure DevOps CLI.

Understanding Azure DevOps Security

The Azure DevOps Security API is quite interesting as security can be applied to various areas of the platform, including permissions for the project, build pipeline, service-connection, git repositories, etc. Each of these areas supports the ability to assign permissions for groups or individuals to a security token. In some cases these tokens are hierarchical, so changes made at the root are inherited by child nodes. The areas that define the permissions are known as Security Namespaces, and each token has a Security Access Control List that contains Access Control Entries.

We can obtain a complete list of security namespaces by querying https://dev.azure.com/<organization>/_apis/securitynamespaces, or by querying them using the az devops cli:

az devops security permission namespace list

Each security namespace contains a list of actions, which are defined as bit flags. The following shows the "Git Repositories" security namespace:

az devops security permission namespace list `
    --query "[?contains(name,'Git Repositories')] | [0]"
{
    "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87",
    "name": "Git Repositories",
    "displayName": "Git Repositories",
    "separatorValue": "/",
    "elementLength": -1,
    "writePermission": 8192,
    "readPermission": 2,
    "dataspaceCategory": "Git",
    "actions": [
        {
            "bit": 1,
            "name": "Administer",
            "displayName": "Administer",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 2,
            "name": "GenericRead",
            "displayName": "Read",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 4,
            "name": "GenericContribute",
            "displayName": "Contribute",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 8,
            "name": "ForcePush",
            "displayName": "Force push (rewrite history, delete branches and tags)",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 16,
            "name": "CreateBranch",
            "displayName": "Create branch",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 32,
            "name": "CreateTag",
            "displayName": "Create tag",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 64,
            "name": "ManageNote",
            "displayName": "Manage notes",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 128,
            "name": "PolicyExempt",
            "displayName": "Bypass policies when pushing",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 256,
            "name": "CreateRepository",
            "displayName": "Create repository",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 512,
            "name": "DeleteRepository",
            "displayName": "Delete repository",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 1024,
            "name": "RenameRepository",
            "displayName": "Rename repository",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 2048,
            "name": "EditPolicies",
            "displayName": "Edit policies",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 4096,
            "name": "RemoveOthersLocks",
            "displayName": "Remove others' locks",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 8192,
            "name": "ManagePermissions",
            "displayName": "Manage permissions",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 16384,
            "name": "PullRequestContribute",
            "displayName": "Contribute to pull requests",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        },
        {
            "bit": 32768,
            "name": "PullRequestBypassPolicy",
            "displayName": "Bypass policies when completing pull requests",
            "namespaceId": "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"
        }
    ],
    "structureValue": 1,
    "extensionType": "Microsoft.TeamFoundation.Git.Server.Plugins.GitSecurityNamespaceExtension",
    "isRemotable": true,
    "useTokenTranslator": true,
    "systemBitMask": 0
}

Security Tokens for Repositories and Branches

The tokens within the Git Repositories Security Namespace follow the naming convention repoV2/<projectId>/<repositoryId>/<branch>. As this is a hierarchical security namespace, you can target very specific and granular permissions by adding parameters from left to right.

Examples:

  • All repositories within the organization: repoV2/
  • All repositories within a project: repoV2/<projectId>
  • All branches within a repository: repoV2/<projectId>/<repositoryId>
  • Specific branch: repoV2/<projectId>/<repositoryId>/<branch>

As the tokens are hierarchical, a really cool feature is that we can define permissions for branch patterns that don't exist yet.

While the project and repository elements are relatively self-explanatory, the git branch convention is case sensitive and expressed in a hex format. Both Jesse Houwing and the Azure DevOps blog have some good write-ups on understanding this format.

Converting Branch Names to git hex format

The following PowerShell script can produce a security token for your project, repository or branch.

function Get-RepoSecurityToken( [string]$projectId, [string]$repositoryId, [string]$branchName) {

   $builder = "repoV2/"

   if ( ![string]::IsNullOrEmpty($projectId) ) {

     $builder += $projectId
     $builder += "/"

     if ( ![string]::IsNullOrEmpty($repositoryId) ) {

        $builder += $repositoryId
        $builder += "/"

        if ( ![string]::IsNullOrEmpty( $branchName) ) {

            $builder += "refs/heads/"
            
            # remove extra values if provided
            if ( $branchName.StartsWith("/refs/heads/") ) {
                $branchName = $branchName.Replace("/refs/heads/", "")
            }
            if ( $branchName.EndsWith("/")) {
                $branchName = $branchName.TrimEnd("/")
            }

            $builder += (($branchName.Split('/')) | ForEach-Object { ConvertTo-HexFormat $_ }) -join '/'   
        }

     }
   }

   return $builder
}

function ConvertTo-HexFormat([string]$branchName) {
   return ($branchName | Format-Hex -Encoding Unicode | Select-Object -Expand Bytes | ForEach-Object { '{0:x2}' -f $_ }) -join ''
}
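
For example, with hypothetical project and repository ids, the token for the feature/* pattern comes out like this:

# example usage (guids are placeholders)
Get-RepoSecurityToken "<project-guid>" "<repository-guid>" "feature"
# => repoV2/<project-guid>/<repository-guid>/refs/heads/6600650061007400750072006500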

Obtaining Branch Name from Git Hex Format

If you're working with query results and would like to see the actual name of the branches you've assigned, this function can reverse the hex format into a human readable string.

function ConvertFrom-GitSecurityToken([string]$token) {
    $refHeads = "/refs/heads/"

    $normalized = $token

    if ($token.Contains($refHeads)) {

        $indexOf = $token.IndexOf($refHeads) + $refHeads.Length

        $firstHalf = $token.Substring(0, $indexOf)
        $secondHalf = $token.Substring($indexOf)

        $normalized = $firstHalf
        $normalized += (($secondHalf.Split('/')) | ForEach-Object { ConvertFrom-HexFormat $_ }) -join '/'  
    }

    return $normalized
}

function ConvertFrom-HexFormat([string]$hexString) {

    $bytes = [byte[]]::new($hexString.Length/2)
    
    for($i = 0; $i -lt $hexString.Length; $i += 2) {
        $bytes[$i/2] = [convert]::ToByte($hexString.Substring($i,2), 16)
    }

    return [Text.Encoding]::Unicode.GetString($bytes)
}

Fetching Group Descriptors

Although it's incredibly easy to fetch security permissions for individual users, obtaining the permissions for user groups requires a special descriptor. To make them easier to work with, I'll grab all the security groups in the organization and map them into a simple lookup table:

function Get-SecurityDescriptors() {

    $lookup = @{}

    $descriptors = az devops security group list --scope organization --query "graphGroups[]" | ConvertFrom-Json

    $descriptors | ForEach-Object { $lookup.Add( $_.principalName, $_.descriptor ) }

    return $lookup
}

Apply Branch Security

Given that we'll call the permission update command several times, we'll wrap it in a function to make it easier to use.

$namespaceId = "2e9eb7ed-3c0a-47d4-87c1-0ffdd275fd87"

function Set-BranchSecurity( $descriptor, $token, $allow = 0 , $deny = 0) {

    $result = az devops security permission update `
                --id $namespaceId `
                --subject $descriptor `
                --token $token `
                --allow-bit $allow `
                --deny-bit $deny `
                --only-show-errors | ConvertFrom-Json
}

Putting it All Together

This crude example shows how to apply the guidance laid out in the Require branches to be created in folders article to a specific repository:

$project   = "projectName"
$projectId = az devops project list --query "value[?name=='$project'].id | [0]" | ConvertFrom-Json

# grab the first repo in the project
$repoId    = az repos list --project $project --query [0].id | ConvertFrom-Json

$groups = Get-SecurityDescriptors
$administrators = $groups[ "[$project]\\Project Administrators" ]
$contributors   = $groups[ "[$project]\\Contributors" ]

# a simple array of tokens we can refer to
$tokens = @(
                Get-RepoSecurityToken( $projectId, $repoId ),             # repo - 0
                Get-RepoSecurityToken( $projectId, $repoId, "main" ),     # main - 1
                Get-RepoSecurityToken( $projectId, $repoId, "releases"),  # releases/* - 2
                Get-RepoSecurityToken( $projectId, $repoId, "feature"),   # feature/* - 3
                Get-RepoSecurityToken( $projectId, $repoId, "users")      # users/* - 4
           )
           
$CreateBranch = 16


# prevent contributors from creating branches at the root of the repository
Set-BranchSecurity $contributors, $tokens[0], -deny $CreateBranch

# limit users to only create feature and user branches
Set-BranchSecurity $contributors, $tokens[3], -allow $CreateBranch
Set-BranchSecurity $contributors, $tokens[4], -allow $CreateBranch

# restrict who can create a release
Set-BranchSecurity $admins, $token[2], -allow $CreateBranch

# allow admins to recreate master/main
Set-BranchSecurity $admins, $token[1], -allow $CreateBranch

To improve upon this, you could describe each of these expressions as a ruleset in a JSON format and apply them to all the repositories in a project.
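
The shape of that ruleset is entirely up to you, but a minimal sketch might look like this, where each entry gets resolved to a descriptor, token and bit before calling Set-BranchSecurity:

[
    { "group": "Contributors",           "branch": "",         "deny":  [ "CreateBranch" ] },
    { "group": "Contributors",           "branch": "feature",  "allow": [ "CreateBranch" ] },
    { "group": "Contributors",           "branch": "users",    "allow": [ "CreateBranch" ] },
    { "group": "Project Administrators", "branch": "releases", "allow": [ "CreateBranch" ] },
    { "group": "Project Administrators", "branch": "main",     "allow": [ "CreateBranch" ] }
]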

Some considerations:

  • Granting Build Service accounts permissions to create Tags
  • Empowering the Project Collection Administrators with the ability to override branch policy
  • Empowering certain Teams or Groups with the ability to delete feature or user branches

Happy coding!

Friday, August 07, 2020

Running Azure DevOps CLI from an Azure Pipeline

pipelines

Having automation to perform common tasks is great. Having that automation run on a regular basis in the cloud is awesome.

Today, I'd like to expand upon the sweet Azure CLI script to manage Azure DevOps User Licenses I wrote and put it in an Azure Pipeline. The details of that automation script are outlined in my last post, so take the time to check that out if you're interested, but to recap: my Azure CLI script activates and deactivates Azure DevOps user licenses when they're not being used. Our primary focus in this post will be how you can configure your pipeline to run your az devops automation on a recurring schedule.

About Pipeline Security

When our pipelines run, they operate by default using a project-specific user account: <Project Name> Build Service (<Organization Name>). For security purposes, this account is restricted to information within the Project.

If your pipelines need to access details beyond the current Project they reside in, for example if you have a pipeline that needs access to repositories in other projects, you can configure the Pipeline to use the Project Collection Build Service (<Organization Name>). This change is made, somewhat subtly, by toggling off "Limit job authorization scope to current project for non-release pipelines" (Project Settings -> Pipelines : Settings).

limit-job-scope

In both Project or Collection level scenarios, the security context of the build account is made available to our pipelines through the $(System.AccessToken) variable. There's a small trick that's needed to make the access token available to our PowerShell scripts and I'll go over this later. But for the most part, if you're only accessing information about pipelines, code changes or details about the project, the supplied Access Token should be sufficient. In scenarios where you're trying to alter elements in the project, you may need to grant some additional permissions to the build service account.

However, for the purposes of today's discussion, we want to modify user account licenses, which requires the elevated permissions of a Project Collection Administrator. I need to stress this next point: do not place the Project Collection Build Service in the Project Collection Administrators group. You'd effectively be granting any pipeline that uses this account full access to your organization. Do not do this. Here be dragons.

Ok, so if the $(System.AccessToken) doesn't have the right level of access, we need an alternate access token that does.

Set up a PAT Token

Setting up Personal Access Tokens is a fairly common activity, so I'll refer you to this document on how the token is created. As we are managing users and user licenses, we need a PAT Token created by a Project Collection Administrator with the Member Entitlement Management scope:

pat-token-member-entitlement-management

Secure Access to Tokens

Now that we have the token that can manage user licenses, we need to put it somewhere safe. Azure DevOps offers a few good options here, each with an increasing level of security and complexity.

My personal go-to is Variable Groups because they can be shared across multiple pipelines. Variable Groups also have their own Access Rights, so the owner of the variable group must authorize which pipelines and users are allowed to use your secrets.

For our discussion, we'll create a variable group "AdminSecrets" with a variable "ACCESS_TOKEN".
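
If you prefer scripting the setup, the CLI can create the group too. This is a sketch: variable-group create requires at least one seed variable, and the group id from the create call feeds the secret:

# create the group with a throw-away seed variable
$group = az pipelines variable-group create `
            --name AdminSecrets `
            --variables placeholder=true | ConvertFrom-Json

# add the PAT as a secret variable
az pipelines variable-group variable create `
    --group-id $group.id `
    --name ACCESS_TOKEN `
    --secret true `
    --value "<pat-token>"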

Create the Pipeline

With our security concerns locked down, let's create a new pipeline (Pipelines -> Pipelines -> New Pipeline) with some basic scaffolding that defines both the machine type and access to our variable group that has my access token.

name: Manage Azure Licenses

trigger: none

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: AdminSecrets

I want to call out that by using a Linux machine, we're using PowerShell Core. There are some subtle differences between PowerShell and PowerShell Core, so I would recommend that you always write your scripts locally against PowerShell Core.

Define the Schedule

Next, we'll set up the schedule for the pipeline using the cron schedule syntax.

We'll configure our pipeline to run every night at midnight:

schedules:
  # run at midnight every day
  - cron: "0 0 * * *"
    displayName: Check user licenses (daily)
    branches:
      include:
        - master
    always: true

By default, schedule triggers only run if there are changes, so we need to specify "always: true" to have this script run consistently.

Authenticate Azure DevOps CLI using PAT Token

In order to invoke our script that uses az devops functions, we need to set up the Azure DevOps CLI to use our PAT token. As a security restriction, Azure DevOps does not make secrets available to scripts, so we need to explicitly pass in the value as an environment variable.

- script: |
    az extension add -n azure-devops
  displayName: Install Azure DevOps CLI
  
- script: |
    echo $(ADO_PAT_TOKEN) | az devops login
    az devops configure --defaults organization=$(System.CollectionUri)
  displayName: Login and set defaults
  env:
    ADO_PAT_TOKEN: $(ACCESS_TOKEN)

Run PowerShell Script from ADO

Now that our pipeline has the ADO CLI installed and we're authenticated using our secure PAT token, our last step is to invoke the PowerShell script. Here I'm using the pwsh task to ensure that PowerShell Core is used; the "pwsh" task is a shortcut syntax for the standard powershell task.

Our pipeline looks like this:

name: Manage Azure Licenses

trigger: none

schedules:
  # run at midnight every day
  - cron: "0 0 * * *"
    displayName: Check user licenses (daily)
    branches:
      include:
        - master
    always: true

pool:
  vmImage: 'ubuntu-latest'

variables:
- group: AdminSecrets

steps:
- script: |
    az extension add -n azure-devops
  displayName: Install Azure DevOps CLI
  
- script: |
    echo $(ADO_PAT_TOKEN) | az devops login
    az devops configure --defaults organization=$(System.CollectionUri)
  displayName: Login and set defaults
  env:
    ADO_PAT_TOKEN: $(ACCESS_TOKEN)

- pwsh: .\manage-user-licenses.ps1
  displayName: Manage User Licenses

Combining with the Azure CLI

Keen eyes may recognize that my manage-user-licenses.ps1 from my last post also used the Azure CLI to access Active Directory, and because az login and az devops login are two separate authentication mechanisms, the approach described above won't work in that scenario. To support this, we'll also need:

  • A service-connection from Azure DevOps to Azure (a Service Principal with access to our Azure Subscription)
  • Directory.Read.All role assigned to the Service Principal
  • A script to authenticate us with the Azure CLI.

The built-in AZ CLI Task is probably our best option for this, as it provides an easy way to work with our Service Connection. However, because this task clears the authentication before and after it runs, we have to change our approach slightly and execute our script logic within the script definition of this task. The following shows an example of how we can use both the Azure CLI and the Azure DevOps CLI in the same task:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-azure-service-connection'
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
       echo $(ACCESS_TOKEN) | az devops login
       az devops configure --defaults organization=$(SYSTEM.COLLECTIONURI) project=$(SYSTEM.TEAMPROJECT)
       az pipelines list
       az ad user list

If we need to run multiple scripts or break-up the pipeline into smaller tasks as I illustrated above, we’ll need a different approach where we have more control over the authenticated context. I can dig into this in another post.

Wrap Up

As I’ve outlined in this post, we can take simple PowerShell automation that leverages the Azure DevOps CLI and run it within an Azure Pipeline securely and on a schedule.

Happy coding.

Wednesday, July 29, 2020

Managing ADO Licenses from Azure DevOps CLI

My last post introduced using JMESPath with the az devops cli, which hopefully gave you some insight into using the az cli and the az devops extension. Today I want to highlight how you can easily pull the az devops cli into PowerShell to unlock some amazing scripting ability.

A good example of this is how we can use PowerShell + az devops cli to manage Azure DevOps User Licenses.

Background

Azure DevOps is a licensed product, and while you can have unlimited free Stakeholder licenses, any developer that needs access to code repositories needs a Basic license, which costs about $7 CAD / month.

It's important to note that this cost is not tied to usage, so if you've allocated licenses that aren't being used, you're essentially paying for nothing. Interestingly, this is not a fixed monthly cost, but is prorated on a daily basis within the billing period. So if you can convert unused Basic licenses to Stakeholder licenses, you'll only pay for the days in the billing period when the license was active.
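
To put rough numbers to that proration (a back-of-the-napkin illustration using the $7 figure):

# $7 / month over a ~30-day billing period ≈ $0.23 / day
# Basic for 10 days, Stakeholder for the remaining 20:
#   10 days × $0.23 ≈ $2.33 billed, instead of the full $7.00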

If you establish a process to revoke licenses when they're not being used, you can save your organization a few dollars that would otherwise be wasted. It might not be much, but when you consider that 10 user licenses for the year is about $840, the costs do add up; something you could argue should be added to your end-of-year bonus.

Integrating the Azure DevOps CLI into PowerShell

To kick things off and to show how incredibly easy this is, let's start with this snippet:

$users = az devops user list | ConvertFrom-Json

Write-Host $users.totalCount
Write-Host $users.items[0].user.principalName

Boom. No special magic. We just call the az devops cli directly in our PowerShell script, converting the JSON result into an object by piping it through the ConvertFrom-Json commandlet. We can easily interrogate object properties and build up some conditional logic. Fun.

Use JMESPath to Simplify Results

While we could work with the results in this object directly, the result objects have a complex structure so I’m going to flatten the object down to make it easier to get at the properties we need. I only want the user’s email, license, and the dates when the user account was created and last accessed.

JMESPath makes this easy. If you missed the last post, go back and have a read to get familiar with this syntax.

function Get-AzureDevOpsUsers() {

    $query = "items[].{license:accessLevel.licenseDisplayName, email:user.principalName, dateCreated:dateCreated, lastAccessedDate:lastAccessedDate }"
    $users = az devops user list --query "$query" | ConvertFrom-Json

    return $users
}

Ok. That structure’s looking a lot easier to work with.

Finding Licenses to Revoke

Now all we need to do is find the licenses we want to convert from Basic to Stakeholder. Admittedly, this could create headaches for us if we're randomly taking away licenses that people might need in the future, but we should be safe if we target people who've never logged in or haven't logged in for 21 days.

I’m going to break this down into two separate functions. One to filter the list of users based on their current license, and another function to filter based on the last access date.

function Where-License()
{
    param(
        [Parameter(Mandatory=$true, ValueFromPipeline)]
        $Users,

        [Parameter(Mandatory=$true)]
        [ValidateSet('Basic', 'Stakeholder')]
        [string]$license
    )

    BEGIN{}
    PROCESS
    {
        $users | Where-Object -FilterScript { $_.license -eq $license }
    }
    END{}
}

function Where-LicenseAccessed()
{
    param(
        [Parameter(Mandatory=$true, ValueFromPipeline)]
        $Users,

        [Parameter()]
        [int]$WithinDays = 21,

        [Parameter()]
        [switch]$NotUsed = $false
    )
    BEGIN
    {
        $today = Get-Date
    }
    PROCESS
    {
        $Users | Where-Object -FilterScript {

            $lastAccess = (@( $_.lastAccessedDate, $_.dateCreated) | 
                            ForEach-Object { [datetime]$_ } |
                            Measure-Object -Maximum | Select-Object Maximum).Maximum

            $timespan = New-TimeSpan -Start $lastAccess -End $today

            if (($NotUsed -and $timespan.Days -gt $WithinDays) -or ($NotUsed -eq $false -and $timespan.Days -le $WithinDays)) {
                Write-Host ("User {0} last accessed within {1} days." -f $_.email, $timespan.Days)
                return $true
            }

            return $false
        }
    }
    END {}
}

If you're new to PowerShell, the BEGIN, PROCESS, END blocks may look peculiar, but they're essential for chaining the results of arrays together. There's a really good write-up on this here. But to demonstrate, we can now chain these methods together, like so:

Get-AzureDevOpsUsers | 
    Where-License -license Basic | 
        Where-LicenseAccessed -NotUsed -WithinDays 21

Revoking Licenses

And then we need the ever-so-important function to revoke licenses. This is simply a wrapper around the az devops command to improve readability.

function Set-AzureDevOpsLicense()
{
    param(
        [Parameter(Mandatory=$true)]
        $User,

        [Parameter(Mandatory=$true)]
        [ValidateSet('express','stakeholder')]
        [string]$license
    )
    Write-Host ("Setting User {0} license to {1}" -f $User.email, $license)
    az devops user update --user $User.email --license-type $license | ConvertFrom-Json
}

Putting it all Together

So we've written all these nice little functions, let's put them together into a little PowerShell haiku:

$users = Get-AzureDevOpsUsers | 
                 Where-License -license Basic | 
                 Where-LicenseAccessed -NotUsed -WithinDays 21 | 
                 ForEach-Object { Set-AzureDevOpsLicense -User $_ -license stakeholder }
Write-Host ("Changed {0} licenses." -f $users.length)

Nice. Az you can see, that wasn't hard at all, and it probably saved my boss a few bucks. There are a few ways we can make this better...

Dealing with lots of Users

The az devops user list command can return up to 1000 users but only returns 100 by default. If you have a large organization, you'll need to make a few round trips to get all the data you need.

Let’s modify our Get-AzureDevOpsUsers function to retrieve all the users in the organization in batches.

function Get-AzureDevOpsUsers() {

    param(
        [Parameter()]
        [int]$BatchSize = 100
    )

    $query = "items[].{license:accessLevel.licenseDisplayName, email:user.principalName, dateCreated:dateCreated, lastAccessedDate:lastAccessedDate }"
    
    $users = @()

    $totalCount = az devops user list --query "totalCount"
    Write-Host "Fetching $totalCount users" -NoNewline

    $intervals = [math]::Ceiling($totalCount / $BatchSize)

    for($i = 0; $i -lt $intervals; $i++) {

        Write-Host -NoNewline "."

        $skip = $i * $BatchSize;
        $results = az devops user list --query "$query" --top $BatchSize --skip $skip | ConvertFrom-Json

        $users = $users + $results
    }   

    return $users
}

Giving licenses back

If the script can taketh licenses away, it should also giveth them back. To do this, we need the means to identify who should have a license. This can be accomplished using an Azure AD User Group populated with all users that should have licenses.

To get the list of these users, the Azure CLI comes to the rescue again. Once again, learning JMESPath really helps us, because we can simplify the entire result into a basic string array:

az login --tenant <tenantid>
$licensedUsers = az ad group member list -g <groupname> --query "[].otherMails[0]" | ConvertFrom-Json

Note that I'm using the otherMails property to get the email address; your mileage may vary, but in my Azure AD this setting matches Members and Guests with their email address in Azure DevOps.

With this magic array of users, my haiku can now reassign users their license if they've logged in recently without a license (sorry mate):

$licensedUsers = az ad group member list -g ADO_LicensedUsers --query "[].otherMails[0]" | ConvertFrom-Json

$users = Get-AzureDevOpsUsers
$reactivatedUsers = $users | Where-License -license Stakeholder | 
                            Where-LicenseAccessed -WithinDays 3 | 
                            Where-Object -FilterScript { $licensedUsers.Contains($_.email) } | 
                            ForEach-Object { Set-AzureDevOpsLicense -User $_ -license express }

$deactivatedUsers = $users | Where-License -license Basic | 
                            Where-LicenseAccessed -NotUsed -WithinDays 21 | 
                            ForEach-Object { Set-AzureDevOpsLicense -User $_ -license stakeholder }

Write-Host ("Reviewed {0} users" -f $users.Length)
Write-Host ("Deactivated {0} licenses." -f $deactivatedUsers.length)
Write-Host ("Reactivated {0} licenses." -f $reactivatedUsers.length)

Wrapping up

In the last few posts, we've looked at the Azure DevOps CLI, understanding JMESPath and now integrating both into PowerShell to unleash some awesome. If you're interested in the source code for this post, you can find it here.

In my next post, we'll build upon this and show you how to integrate this script magic into an Azure Pipeline that runs on a schedule.

Happy coding.

Monday, July 27, 2020

Azure DevOps CLI Examples

I've always been a fan of Azure DevOps's extensive REST API -- it's generally well documented, consistent and it seems like you can do pretty much anything you can do from within the web-interface. As much as I love the API, I hate having to bust it out. Nowadays, my new go to tool is the Azure DevOps CLI.

The Azure DevOps CLI is actually an extension of the Azure CLI. It contains a good number of common functions that you would normally use on a daily basis, plus features that I would normally rely on the REST API for. Its real power is unlocked when it's combined with your favourite scripting language. I plan to write a few posts on this topic, so stay tuned, but for today, we'll focus on getting up and running plus some cool things you can do with the tool.

Installation

Blurg. I hate installation blog posts. Let's get this part over with:

choco install azure-cli -y
az extension add --name azure-devops

Whew. If you don't have Chocolatey installed, go here: https://chocolatey.org/install

Get Ready

Ok, so we're almost there. Just a few more boring bits. First we need to login:

az login --allow-no-subscription

A quick note on the above statement. There are a number of different login options available, but I've found az login with the --allow-no-subscription flag supports the majority of use cases. It'll launch a web browser and require you to login as you normally would, and the --allow-no-subscription flag supports scenarios where you have access to the AD tenant but don't necessarily have a subscription associated with your user account, which is probably pretty common for users who only have access to Azure DevOps.

This next bit lets us store some commonly used parameters so we don't have to keep typing them out.

az devops configure --defaults organization=https://dev.azure.com/<organization>

In case you're curious, this config file is stored in %UserProfile%\.azure\azuredevops\config

Our First Command

Let's do something basic, like getting a list of all projects:

az devops project list

If we've configured everything correctly, you should see a boatload of JSON fly by. The CLI supports different options for output, but JSON works best when paired with our automation; plus, there are some really cool things we can do with the result output by passing a JMESPath statement using the --query flag.
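
For example, if you just want a quick, human-readable view while you're exploring, the global output flag works here too:

az devops project list --output table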

Understanding JMESPath

The JavaScript kids get all the cool tools. There are probably already a few dozen different ways of querying and manipulating JSON data, and JMESPath (pronounced James Path) is no different. The syntax is a bit confusing at first and it takes a little bit of tinkering to master it. So let's do some tinkering.

The best way to demonstrate this is to use the JSON output from listing our projects. Our JSON looks something like this:

{
   "value": [
     {
        "abbreviation": null,
        "defaultTeamImageUrl": null,
        "description": null,
        "id": "<guid>",
        "lastUpdateTime": "<date>",
        "name": "Project Name",
        "revision": 89,
        "state": "wellFormed",
        "url": "<url>",
        "visibility": "private"
     },
     ...
   ]
}

It's a single object with a property called "value" that contains an array. Let's do a few examples...

Return the contents of the array

Assuming that we want the details of the projects and not the outside wrapper, we can discard the "value" property and just get its contents, which is an array.

az devops project list --query "value[]"
[
    {
        "abbreviation": null,
        "defaultTeamImageUrl": null,
        "description": null,
        "id": "<guid>",
        "lastUpdateTime": "<date>",
        "name": "Project Name",
        "revision": 89,
        "state": "wellFormed",
        "url": "<url>",
        "visibility": "private"
    },
    ...
]

Return just the first element

Because the "value" property is an array, we can get the first element.

az devops project list --query "value[0]"

{
    "abbreviation": null,
    "defaultTeamImageUrl": null,
    "description": null,
    "id": "<guid>",
    "lastUpdateTime": "<date>",
    "name": "Project Name",
    "revision": 89,
    "state": "wellFormed",
    "url": "<url>",
    "visibility": "private"
}

You can also specify ranges:

  • [:2] = everything up to the 3rd item
  • [1:3] = from the 2nd up to the 4th items
  • [1:] = everything from the 2nd item
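
For example, asking for just the names of the 2nd and 3rd projects (using our hypothetical four projects):

az devops project list --query "value[1:3].name"

[
    "Project 2",
    "Project 3"
]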

Return an array of properties

If we just wanted the id property of each element in the array, we can specify the property we want. The result assumes there are only 4 projects.

az devops project list --query "value[].id"

[
    "<guid>",
    "<guid>",
    "<guid>",
    "<guid>"
]

Return specific properties

This is where JMESPath gets a tiny bit odd. In order to get just a handful of properties we need to do a "projection" which is basically like stating what structure you want the JSON result to look like. In this case, we're mapping the id and name property to projectId and projectName in the result output.

az devops project list --query "value[].{ projectId:id, projectName:name }"

[
    {
        "projectId": "<guid>",
        "projectName": "Project 1"
    },
    {
        "projectId": "<guid>",
        "projectName": "Project 2"
    },
    ...
]

Filter the results

Here's where things get really interesting. We can put functions inside the JMESPath query to filter the results. This allows us to mix and match the capabilities of the API with the output filtering capabilities of JMESPath. This returns only the projects that are public.

az devops project list --query "value[?visibility=='public'].{ id:id, name:name }"
[
    {
        "id": "<guid>",
        "name": "Project 3"
    }
]

We could have also written this as:

--query "value[?contains(visibility,'private')].{id:id, name:name}"

Piping the results

In the above example, JMESPath assumes that the results will be an array. We can pipe the result to further refine it. In this case, we want just the first object in the resulting array.

az devops project list --query "value[?visibility=='public'].{ id:id, name:name} | [0]"

{
   "id": "<guid>",
   "name": "Project 3"
}

Piping can improve the readability of the query similar to a functional language. For example, the above could be written as a filter, followed by a projection, followed by a selection.

--query "value[?contains(visibility,'private')] | [].{id:id, name:name} | [0]"

Wildcard searches

Piping the results becomes especially important if we want just the single value of a wildcard search. For this example, I need a different JSON structure, specifically a security descriptor:

[
  {
    "acesDictionary": {
      "Microsoft.IdentityModel.Claims.ClaimsIdentity;<dynamic-value>": {
        "allow": 16,
        "deny": 0,
        "descriptor": "Microsoft.IdentityModel.Claims.ClaimsIdentity;<dynamic-value>",
        "extendedInfo": {
          "effectiveAllow": 32630,
          "effectiveDeny": null,
          "inheritedAllow": null,
          "inheritedDeny": null
        }
      }
    },
    "includeExtendedInfo": true,
    "inheritPermissions": true,
    "token": "repoV2"
  }
]

In this structure, I'm interested in getting the "allow", "deny" and "token" values, but the first element in the acesDictionary contains a dynamic value. We can use a wildcard "*" to substitute for properties we don't know at runtime.

Let's try to isolate that "allow". The path would seem like [].acesDictionary.*.allow, but because JMESPath has no idea whether this is a single element, it returns an array:

[
    [
        16
    ]
]

If we pipe the result, [].acesDictionary.*.allow | [0] we'll get a single value.

[
    16
]

Following suit and jumping ahead a bit so that I can skip to the answer, I can grab the "allow", "deny" and "token" with the following query. At this point, I trust you can figure this out by referencing all the examples I've provided. The query looks like:

--query "[].{allow:acesDictionary.*.allow | [0], deny:acesDictionary.*.deny | [0], token:token } | [0]"
{
    "allow": 16,
    "deny": 0,
    "token": "repoV2"
}

Ok! That is waay too much JMESPath. Let's get back on topic.

Using the Azure DevOps CLI

The Azure DevOps CLI is designed with commands and subcommands and has a few entry points. At each level, there are the obvious inclusions (list, add, delete, update, show), but there are a few additional commands per level.

  • az devops
    • admin
      • banner
    • extension
    • project
    • security
      • group
      • permission
        • namespace
    • service-endpoint
      • azurerm
      • github
    • team
    • user
    • wiki
      • page
  • az pipelines
    • agent
    • build
      • definition
      • tag
    • folder
    • pool
    • release
      • definition
    • runs
      • artifact
      • tag
    • variable
    • variable-group
  • az boards
    • area
      • project
      • team
    • iteration
      • project
      • team
    • work-item
      • relation
  • az repos
    • import
    • policy
      • approver-count
      • build
      • case-enforcement
      • comment-required
      • file-size
      • merge-strategy
      • required-reviewer
      • work-item-linking
    • pr
      • policy
      • reviewer
      • work-item
    • ref
  • az artifacts
    • universal

I won't go into all of these commands and subcommands, but I can showcase a few of the ones I've used most recently…

List of Projects

az devops project list --query "value[].{id:id, name:name}"

List of Repositories

az repos list --query "[].{id:id, defaultBranch:defaultBranch, name:name}" 

List of Branch Policies

az repos policy list --project <name> --query "[].{name: type.displayName, required:isBlocking, enabled:isEnabled, repository:settings.scope[0].repositoryId, branch:settings.scope[0].refName}"

Service Connections

az devops service-endpoint list --project <name> --query "[].name"

One More Thing

So while the az devops cli is pretty awesome, it has a hidden gem. If you can't find a supporting command in the az devops cli, you can always call the REST API directly from the tool using the az devops invoke command. There's a bit of hunting through documentation and available endpoints to find what you're looking for, but you can get a full list of what's available using the following:

az devops invoke --query "[?contains(area,'build')]"
az devops invoke --query "[?area=='build' && resourceName=='timeline']"

[
  {
    "area": "build",
    "id": "8baac422-4c6e-4de5-8532-db96d92acffa",
    "maxVersion": 6.0,
    "minVersion": 2.0,
    "releasedVersion": "5.1",
    "resourceName": "Timeline",
    "resourceVersion": 2,
    "routeTemplate": "{project}/_apis/{area}/builds/{buildId}/{resource}/{timelineId}"
  }
]

We can invoke this REST API call by passing in the appropriate area, resource, route and query-string parameters. Assuming I know the buildId of a recent pipeline run, the following shows me the state and status of all the stages in that build:

az devops invoke `
    --area build `
    --resource Timeline `
    --route-parameters project=myproject buildId=2058 timelineid='' `
    --query "records[?contains(type,'Stage')].{name:name, state:state, result:result}"

Tip: the route and query parameters specified in the routeTemplate are case-sensitive.

More to come

Today's post outlined how to make sense of JMESPath and some cool features of the Azure DevOps CLI. In my next few posts, I'll dig deeper into using the CLI in your favourite scripting tool.

Happy coding.

Wednesday, July 15, 2020

Exclusive Lock comes to Azure Pipelines

semaphore red left

As part of Sprint 171, the Azure DevOps team introduced a much needed feature for Multi-Stage YAML Pipelines, the Exclusive Lock "check" that can be applied to your environments. This feature silently slipped into existence without any mention of it in the release notes, but I was personally thrilled to see this. (At the time this post was written, Sprint 172 announced this feature was available)

Although Multi-Stage YAML Pipelines have been available for a while, there are still some subtle differences between their functionality and what's available through Classic Release Pipelines. Fortunately over the last few sprints we've seen a few incremental features to help close that feature parity gap, with more to come. One of the missing features is something known as "Deployment Queuing Settings" -- a Classic Release pipeline feature that dictates how pipelines are queued and executed. The Exclusive Lock check solves a few pain points but falls short on some of the more advanced options.

In this post, I'll walk through what Exclusive Locks are, how to use them and some other thoughts for consideration.

Deployments and Environments

Let's start with a multi-stage pipeline with a few stages, where we perform CI activities and each subsequent stage deploys into an environment. Although we could write our YAML to build and deploy using standard tasks, we're going to use the special "deployment" job that tracks builds against Environments.

trigger:
- master

stages:
- stage: ci_stage
  ...steps to compile and produce artifacts

- stage: dev_stage
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  dependsOn: ci_stage
  jobs:
  - deployment: dev_deploy
    environment: dev
    strategy:
      runOnce:
        deploy:
          ...steps to deploy

- stage: test_stage
  dependsOn: dev_stage
  ...

If we were to run this hypothetical pipeline, the code would compile in the CI stage and then immediately start deploying into each environment in sequence. Although we definitely want to have our builds deploy into the environments in sequence, we might not want them to advance into the environments automatically. That's where Environment Checks come in.

Environment Checks

As part of multi-stage yaml deployments, Azure DevOps has introduced the concept of Environments which are controlled outside of your pipeline. You can set special "Checks" on the environment that must be fulfilled before the deployment can occur. On a technical note, environment checks bubble up from the deployment task to the stage, so the checks must be satisfied before the stage is allowed to start.

For our scenario, we're going to assume that we don't want to automatically go to QA, so we'll add an Approval Check that allows our testing team to approve the build before deploying into their environment. We'll add approval checks for the other stages, too. Yay workflow!

approval-checks

At this point, everything is great: builds deploy to dev automatically and then pause at the test_stage until the testing team approves. Later, we add more developers to our project and the frequency of builds starts to pick up. Almost immediately, the single-agent build pool fills up with builds and the development team starts to complain that they're waiting a really long time for their build validation to complete.

Obviously, we add more build agents. Chaos ensues.

What just happen'd?

When we introduced additional build agents, we were expecting multiple CI builds to run simultaneously but we probably weren't expecting multiple simultaneous deployments! This is why the Exclusive Lock is so important.

By introducing an Exclusive Lock, all deployments are forced to happen in sequence. Awesome. Order is restored.

There unfortunately isn't a lot of documentation available for the Exclusive Lock, but according to the description:

“Adding an exclusive lock will only allow a single run to utilize this resource at a time. If multiple runs are waiting on the lock, only the latest will run. All others will be canceled.”

Most of this is obvious, but what does 'All others will be canceled' mean?

Canceling Queued Builds

My initial impression of the "all other [builds] will be canceled" wording got me excited -- I thought this was similar to the “deploy latest and cancel the others” setting of Deployment Queuing Settings:

deployment-queue-settings

Unfortunately, this is not the intention of the Exclusive Lock. It focuses only on the sequencing of deployments, not on the pending queue. To understand what “all others will be canceled” means, let's assume we have 3 available build agents and we'll use the az devops CLI to trigger three simultaneous builds.

az pipelines run --project myproject --name mypipeline 
az pipelines run --project myproject --name mypipeline 
az pipelines run --project myproject --name mypipeline

In this scenario, all three CI builds happen simultaneously but the fun happens when all three pipeline runs hit the dev_stage. As expected, the first pipeline takes the exclusive lock on the development environment while the deployment runs, and the remaining two builds queue up waiting for the exclusive lock to be released. When the first build completes, the second build is automatically marked as canceled and the last build begins its deployment.

exclusive-lock-queuing

This is awesome. However, I was really hoping that I could combine the Exclusive Lock with the Approval Gate to recreate the same functionality of the Deployment Queuing option: approving the third build would cancel the previous builds. Unfortunately, this isn’t the case. I’m currently evaluating whether I can write some deployment automation in my pipeline to cancel other pending builds.
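For what it's worth, the raw ingredients seem to be there. Below is a rough, untested bash sketch of what that automation might look like: it lists queued runs of the same pipeline (the definition id of 42 is hypothetical) and cancels each one by PATCHing its status through az devops invoke. Treat it as a starting point rather than a recipe:

# Request body that asks the Build API to cancel a run.
echo '{ "status": "cancelling" }' > cancel.json

# Find queued (not yet started) runs of pipeline definition 42 and cancel them.
for id in $(az pipelines build list --project myproject --status notStarted \
        --query '[?definition.id==`42`].id' -o tsv); do
    az devops invoke \
        --area build \
        --resource builds \
        --route-parameters project=myproject buildId=$id \
        --http-method PATCH \
        --in-file cancel.json
done

One caveat: a run that is mid-pipeline waiting on the exclusive lock reports as inProgress rather than notStarted, so a real implementation would need to be more careful about which runs it targets.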

Wrapping Up

In my opinion, Exclusive Locks are a hidden gem of Sprint 171 as they’re essential if you’re automatically deploying into an environment without an Approval Gate. This feature recreates the “deploy all in sequence” feature of Classic Release Pipelines. The jury is still out on canceling builds from automation. I’ll keep you posted.

Happy coding!

Tuesday, July 14, 2020

Using Templates to improve Pull Requests and Work-Items (Part 2)

In my previous post, I outlined how to set up templates for pull-requests. Today we’ll focus on how to configure work-items with some project-specific templates. We’ll also look at how you can create these customizations for all projects within the enterprise and the motivations for doing so.

While having a good pull-request template can improve clarity and reduce the effort needed to approve pull-requests, having well-defined work-items is equally important, as they can drive and shape the work that needs to happen. We can define templates for our work-items to encourage work-item authors to provide the right level of detail (such as steps to reproduce a defect), or we can use templates to reduce effort for commonly created work-items (such as fields that are commonly set when creating a technical-debt work-item).

Creating Work-Item Templates

Although you can define pull-request templates as files in your git repository, Azure DevOps doesn’t currently support managing work-item templates as source files. This is largely due to the complexity of the work-item structure and the level of customization available, so our only option to date is to manipulate templates through the Azure Boards user-interface. Fortunately, it’s relatively simple and there are a few different ways you can set up and customize your templates – you can either specify customizations through the Teams configuration for your Project, or you can extract a template from an existing work-item.

As extracting from an existing work-item is easier, we’ll look at this first.

Creating Templates from Existing Work-Items

To create a template from an existing work-item, simply create a new work-item that represents the content you’d like to see in your template. The great news is that our template can capture many different elements, ranging from the description and other commonly used fields to more specialized fields like Sprint or Labels.

It’s important to note that templates are team specific, so if you’re running a project with multiple scrum teams, each team can self-organize and create templates that are unique to their needs.

Here’s an example of a user story with the description field pre-defined:

work-item-example

Once we’re happy with the content of the story, we can convert it into a template using the ellipsis menu (…): Templates –> Capture:

work-item-capture-template

The capture dialog allows us to specify which fields we want to include in our template. This typically populates with the fields that have been modified, but you can remove or add any fields you want:

capture-template-dialog

As some fields are stored in the template as HTML, using this technique of creating a template from an existing work-item is especially handy.
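If you're curious what that HTML looks like, you can peek at any work-item's rich-text fields from the CLI. A quick sketch (the work-item id of 123 is hypothetical):

# Rich-text fields such as the description come back as raw HTML.
az boards work-item show --id 123 --query 'fields."System.Description"' -o tsv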

Customizing Templates

Once you’ve defined your templates, you can find them in Settings –> Team Configuration. There’s a sub-navigation item for Templates.

edit-template

Applying Templates

Once you have the template(s) created, there are a few ways you can apply them to your work-items: you can apply the template to the work-item while you’re editing it, or you can apply it to the work-item from the backlog. Both activities are achieved using the ellipsis menu: Templates –> <template-name>.

The latter option of applying the template from the Backlog is extremely useful because you can apply the template to multiple items at the same time.

assign-template-from-backlog

With some creative thinking, templates can be used like macros for commonly performed activities. For example, I created a “Technical Debt” template that adds a TechDebt tag, lowers the priority and changes the Value Area to Architectural.
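As an aside, if you'd rather script those macro-like edits than click through the UI, the az boards CLI can set the same fields at creation time. Here's a sketch along those lines; the project and title are placeholders, and the field reference names are the standard Agile-process ones (verify them against your process):

# Create a "technical debt" story with the fields the template would set.
az boards work-item create \
    --project myproject \
    --type "User Story" \
    --title "Refactor the widget service" \
    --fields "System.Tags=TechDebt" \
             "Microsoft.VSTS.Common.Priority=3" \
             "Microsoft.VSTS.Common.ValueArea=Architectural"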

Creating Work-Items from Templates

If you want to apply the template to work-items as you create them, you’ll need to navigate to a special URL that is provided with each template (Settings –> Boards: Team Configuration –> Templates).

get-link-for-template

The Copy Link option copies the template’s unique URL to the clipboard, which you can circulate to your team. Personally, I like to create a Markdown widget on my dashboard that allows team members to navigate to this URL directly.

create-work-item-from-dashboard

Going Further – Set defaults for Process Template

Unfortunately, there’s no mechanism to specify which work-item template should be used as the default for a team. You can, however, provide these customizations at the Process level, which applies the settings for all teams using that process template. Generally speaking, you should reserve this approach for enterprise-wide changes.

Note that you can’t directly edit the default process templates; you’ll need to create a new process template based on the default: Organization Settings –> Boards –> Process:

process-template

Within the process, you can open any of the work-item types in an editor that lets you re-arrange the layout and contents of the work-item. To give the Description field a default value, we select the Edit option in the ellipsis menu:

edit-process-template

Remembering that certain fields are HTML, we can set the default for our user story by modifying the default options:

edit-process-template-field

Wrapping up

Hopefully the last two posts on providing templates for pull-requests and work-items have given you some ideas on how to quickly bring some consistency to your projects.

Happy coding!

Monday, June 29, 2020

Using Templates to improve Pull Requests and Work-Items (Part 1)

I’m always looking for ways to improve the flow of work. I have a few posts I want to share on using templates for pull requests and work-items. Today, I want to focus on some templates that you can add to your Azure DevOps pull requests to provide some additional context for the work.

Templates for Pull Requests

Pull Requests are a crucial component of our daily work. They help drive our continuous delivery workflows and because they’re accessible from our git history long after the pull-request has been completed, they can serve as an excellent reference point for the work. If you review a lot of pull-requests in your day, a well-written pull-request can make the difference between a good and bad day.

Not many folks realize that Azure DevOps supports pre-populating your pull request with a default template. It can even provide customized messages for specific branches. And because Pull Requests for Azure Repos support markdown, you can provide a template that encourages your team to include the right amount of detail (and look good, too).

Default Pull Request Template

To create a single template for all your pull requests, create a markdown file named pull_request_template.md and place it in the root of your repository or in a folder named either .azuredevops, .vsts, or docs. For example:

  • .azuredevops/pull_request_template.md
  • .vsts/pull_request_template.md
  • docs/pull_request_template.md
  • <root>/pull_request_template.md
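Getting started is just a matter of committing the file. Here's a quick bash sketch that scaffolds a bare-bones default template (the content is placeholder; a fuller example follows below):

# Scaffold a default pull-request template and commit it.
mkdir -p .azuredevops
cat > .azuredevops/pull_request_template.md <<'EOF'
# Summary

_Describe your changes here_
EOF
git add .azuredevops/pull_request_template.md
git commit -m "Add default pull request template"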

A sample pull request might look like:

----
Delete this section before submitting!

Please ensure you have the following:

- PR Title is meaningful
- PR Title includes work-item number
- Required reviewers is populated with people who must review these changes
- Optional reviewers is populated with individuals who should be made aware of these changes
----
# Summary

_Please provide a high-level summary of the changes and notes for the reviewers_

- [ ] Code compiles without issues or warnings
- [ ] Code passes all static code-analysis (SonarQube, Fortify SAST)
- [ ] Unit tests provided for these changes

## Related Work

These changes are related to the following PRs and work-items:

_Note: use !<number> to link to PRs, #<number> to link to work items_

## Other Notes

_if applicable, please note any other fixes or improvements in this PR_

As you can see, I've provided a section at the top with some guidance on things to do before creating the pull request, such as making sure it has a meaningful name, while the following sections provide prompts to encourage the pull-request author to provide more detail. Your kilometrage will vary, but you may want to work with your team to craft a template that fits your needs.

Pull request templates are written in markdown, so it’s possible to include images and tables. My favourites are the checkboxes (- [ ]), which can be marked as completed without having to edit the content.

Branch Specific Templates

You may find the need to create templates that are specific to the target branch. To do this, create a special folder named “pull_request_template/branches” within one of the same folders mentioned above and create a markdown file with the name of the target branch. For example:

  • .azuredevops/pull_request_template/branches/develop.md
  • .azuredevops/pull_request_template/branches/release.md
  • .azuredevops/pull_request_template/branches/master.md

When creating your pull-request, Azure DevOps will attempt to match the target branch against these templates first. If a match cannot be found, the pull_request_template.md is used as a fallback option.

Ideally, I’d prefer templates based on the source branch, as we could provide pull-request guidance for bug/*, feature/*, and hotfix/* branches. However, if we focus on develop, release and master we can support the following scenarios:

  • develop.md: provide an overview of improvements of a feature, evidence for unit tests and documentation, links to work-items and test-cases, etc
  • release.md: provide high-level overview of the items in this release, related dependencies and testing considerations
  • master.md: (optional) provide a summary of the release and its related dependencies

Additional Templates

In addition to the branch-specific and default templates, you can create as many templates as you need. You could create specific templates for critical bug fixes, feature proposals, etc. In this scenario, I’d use that initial (delete-me) section to educate the user on which template they should use.

You’re obviously not limited to a single template, either. If you have multiple templates available, you can mix and match from any of the available templates to fit your needs. Clicking “add template” simply appends the other template to the body of the pull-request.

create-pull-request

Other observations

Here are a few other observations that you might want to consider:

  • If the pull-request contains only a single commit, the name of the pull-request will default to the commit message. The commit message is also appended to the bottom of the pull-request automatically.
  • If your pull-request contains multiple commits, the name of the pull-request is left empty. The commit messages do not prepopulate into the pull-request, but an “Add commit messages” button appears. The commit messages are added “as-is” to the bottom of the pull-request, regardless of where the keyboard cursor is.

Conclusion

Hopefully this sheds some light on a feature you might not have known existed. In my next post, we’ll look at how we can provide templates for work-items.

Happy Coding!