Monday, June 29, 2020

Using Templates to improve Pull Requests and Work-Items (Part 1)

I’m always looking for ways to improve the flow of work. I have a few posts I want to share on using templates for pull requests and work-items. Today, I want to focus on some templates that you can add to your Azure DevOps pull requests to provide some additional context for the work.

Templates for Pull Requests

Pull Requests are a crucial component of our daily work. They help drive our continuous delivery workflows and, because they're accessible from our git history long after the pull-request has been completed, they can serve as an excellent reference point for the work. If you review a lot of pull-requests in your day, a well-written pull-request can make the difference between a good day and a bad one.

Not many folks realize that Azure DevOps supports pre-populating your pull request with a default template. It can even provide customized messages for specific branches. And because Pull Requests for Azure Repos support markdown, you can provide a template that encourages your team to provide the right amount of detail (and look good, too).

Default Pull Request Template

To create a single template for all your pull requests, create a markdown file named pull_request_template.md and place it in the root of your repository or in a folder named either .azuredevops, .vsts, or docs. For example:

  • .azuredevops/pull_request_template.md
  • .vsts/pull_request_template.md
  • docs/pull_request_template.md
  • <root>/pull_request_template.md

A sample pull request might look like:

----
Delete this section before submitting!

Please ensure you have the following:

- PR Title is meaningful
- PR Title includes work-item number
- Required reviewers is populated with people who must review these changes
- Optional reviewers is populated with individuals who should be made aware of these changes
----
# Summary

_Please provide a high-level summary of the changes and notes for the reviewers_

- [ ] Code compiles without issues or warnings
- [ ] Code passes all static code-analysis (SonarQube, Fortify SAST)
- [ ] Unit tests provided for these changes

## Related Work

These changes are related to the following PRs and work-items:

_Note: use !<number> to link to PRs, #<number> to link to work items_

## Other Notes

_If applicable, please note any other fixes or improvements in this PR_

As you can see, I've provided a section at the top that offers some guidance on things to do before creating the pull request, such as making sure it has a meaningful name, while the following section provides some prompts to encourage the pull-request author to provide more detail. Your kilometrage will vary, but you may want to work with your team to make a template that fits your needs.

Pull request templates can be written in markdown, so it's possible to include images and tables. My favourite is the checkboxes (- [ ]), which can be marked as completed without having to edit the content.

Branch Specific Templates

You may find the need to create templates that are specific to the target branch. To do this, create a special folder named “pull_request_template/branches” within one of the same folders mentioned above and create a markdown file with the name of the target branch. For example:

  • .azuredevops/pull_request_template/branches/develop.md
  • .azuredevops/pull_request_template/branches/release.md
  • .azuredevops/pull_request_template/branches/master.md

When creating your pull-request, Azure DevOps will attempt to find the appropriate template by matching the target branch against these templates first. If a match cannot be found, the pull_request_template.md is used as a fallback option.

Ideally, I’d prefer different templates based on the source branch, as we could provide pull-request guidance for bug/*, feature/*, and hotfix/* branches. However, if we focus on develop, release and master we can support the following scenarios (a sketch of a develop template follows the list):

  • develop.md: provide an overview of improvements of a feature, evidence for unit tests and documentation, links to work-items and test-cases, etc.
  • release.md: provide high-level overview of the items in this release, related dependencies and testing considerations
  • master.md: (optional) provide a summary of the release and its related dependencies
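
To make that concrete, here's a minimal sketch of what a develop.md template might contain. The headings and checklist items are only suggestions based on the scenarios above, not a prescribed format:

# Feature Summary

_Describe the feature and the improvements it introduces_

- [ ] Unit tests provided for these changes
- [ ] Documentation updated

## Related Work

_Link work-items and test-cases with #<number>, and related PRs with !<number>_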

Additional Templates

In addition to the branch-specific and default templates, you can create as many templates as you need. You could create specific templates for critical bug fixes, feature proposals, etc. In this scenario, I'd use that initial (delete-me) section to educate the user on which template they should use.

You're obviously not limited to a single template either. If you have multiple templates available, you can mix and match from any of the available templates to fit your needs. Clicking the “add template” button simply appends the other template to the body of the pull-request.


Other observations

Here are a few other observations that you might want to consider:

  • If the pull-request contains only a single commit, the name of the pull-request will default to the commit message. The commit message is also appended to the bottom of the pull-request automatically.
  • If your pull-request contains multiple commits, the name of the pull-request is left empty. The commit messages do not prepopulate into the pull-request, but an “Add commit messages” button appears. The commit messages are added “as-is” to the bottom of the pull-request, regardless of where the cursor is.

Conclusion

Hopefully this sheds some light on a feature you might not have known existed. In my next post, we'll look at how we can provide templates for work-items.

Happy Coding!

Monday, June 08, 2020

Keeping your Secrets Safe in Azure Pipelines

These days, it's critical that everyone in the delivery team has a security mindset and is vigilant about keeping secrets away from prying eyes. Fortunately, Azure Pipelines has some great features to ensure that your application secrets are not exposed during pipeline execution, but it's important to adopt some best practices early on to keep things moving smoothly.

Defining Variables

Before we get too far, let's take a moment to step back and talk about the motivations for variables in Azure Pipelines. We want to use variables for things that might change in the future, but more importantly we want to use variables to prevent secrets like passwords and API keys from being entered into source control.

Variables can be defined in several different places. They can be placed as meta-data for the pipeline, in variable groups, or dynamically in scripts.

Define Variables in Pipelines

Variables can be scoped to a Pipeline. These values, which are defined through the “Variables” button when editing a Pipeline, live as meta-data outside of the YAML file.


Define Variables in Variable Groups

Variable Groups are perhaps the most common mechanism to define variables, as they can be reused across multiple pipelines within the same project. Variable Groups also support pulling their values from an Azure KeyVault, which makes them an ideal mechanism for sharing secrets across projects.

Variable Groups are defined in the “Library” section of Azure Pipelines. Variables are simply key/value pairs.


Variables are made available to the Pipeline when it runs, and although there are a few different syntaxes, I'm going to focus on what's referred to as macro syntax, which looks like $(VariableName).

variables:
- group: MyVariableGroup

steps:
- bash: |
    echo $(USERNAME)
    printenv | sort

All variables are provided to scripts as environment variables. Using printenv dumps the list of environment variables. Both the USERNAME and PASSWORD variables are present in the output.
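
As a quick aside, it's worth noting the difference between the two ways a script can read these values: macro syntax is substituted by Azure Pipelines before the script runs, while environment variables are resolved by the shell at runtime (variable names are upper-cased in the environment, and characters like '.' become underscores). A small sketch, assuming the variable group from the example above:

steps:
- bash: |
    # Macro syntax: substituted by the pipeline before the script executes
    echo "macro value: $(USERNAME)"
    # Environment variable: resolved by bash at runtime
    echo "environment value: $USERNAME"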


Define Variables Dynamically in Scripts

Variables can also be declared from within scripts using a special logging syntax.

- script: |
    token=$(curl ...)
    echo "##vso[task.setvariable variable=accesstoken]$token"
    
    
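Once set, the new variable is available to subsequent steps in the same job using the usual macro syntax. A hedged sketch, where the URL is just a placeholder:

- script: |
    # accesstoken was set by the previous step's logging command
    curl -H "Authorization: Bearer $(accesstoken)" https://example.com/api
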

Defining Secrets

Clearly, putting a clear-text password variable in your pipeline is dangerous because any script in the pipeline has access to it. Fortunately, it's very easy to lock this down by converting your variable into a secret.


Just use the lock icon to set it as a secret and then save the variable group to make it effectively irretrievable. Gandalf would be pleased.


Now, when we run the pipeline we can see that the PASSWORD variable is no longer an environment variable.


Securing Dynamic Variables in Scripts

Secrets can also be declared at runtime using scripts. You should always be mindful as to whether these dynamic variables could be used maliciously if not secured.

token=$(curl ...)
echo "##vso[task.setvariable variable=accesstoken;issecret=true]$token"
    
    

Using Secrets in Scripts

Now that we know that secrets aren't made available as environment variables, we have to explicitly provide the value to the script – effectively “opting in” – by mapping the secret to a variable that can be used during script execution:

- script: |
    echo The password is: $password
  env:
    password: $(Password)
    
    

The above is a wonderful example of heresy, as you should never output secrets to logs. Thankfully, we don't need to worry too much about this because Azure DevOps automatically masks these values before they make it to the log.


Takeaways

We should all do our part to take security concerns seriously. While it's important to enable secrets early in your pipeline development to prevent leaking information, doing so will also prevent costly troubleshooting efforts later, when variables are converted to secrets and scripts that relied on them as environment variables suddenly stop seeing them.

Happy coding.

Saturday, June 06, 2020

Downloading Artifacts from YAML Pipelines

Azure DevOps multi-stage YAML pipelines are pretty darn cool. You can describe a complex continuous integration pipeline that produces an artifact and then describe the continuous delivery workflow to push that artifact through multiple environments in the same YAML file.

In today's scenario, we're going to suppose that our quality engineering team is using their own dedicated repository for their automated regression tests. What's the best way to bring their automated tests into our pipeline? Let's assume that our test automation team has their own pipeline that compiles their tests and produces an artifact so that we can run these tests with different runtime parameters in different environments.

There are several approaches we can use. I'll describe them from most-generic to most-awesome.

Download from Azure Artifacts

A common DevOps approach, evangelized in Jez Humble's Continuous Delivery book, is pushing binaries to an artifact repository and using those artifacts in an ad-hoc manner in your pipelines. Azure DevOps has Azure Artifacts, which can be used for this purpose, but in my opinion it's not a great fit. Azure Artifacts is better suited for maven, npm and nuget packages that are consumed as part of the build process.

Don't get me wrong, I'm not calling out a problem with Azure Artifacts that will require you to find an alternative like JFrog's Artifactory; my point is that it's perhaps too generic. If we dumped our compiled assets into the artifact repository, how would our pipeline know which version we should use? And how long should we keep these artifacts around? In my opinion, you'd want better metadata about this artifact, like the source commits and the build that produced it, and you'd want these artifacts to stick around only if they're in use. Although decoupling is advantageous, when you strip something of all semantic meaning you put the onus on something else to remember, and that often leads to manual processes that break down…

If your artifacts have a predictable version number and you only ever need the latest version, there are tasks for downloading these types of artifacts. Azure Artifacts refers to these loose files as “Universal Packages”:

- task: UniversalPackages@0
  displayName: 'Universal download'
  inputs:
    command: download
    vstsFeed: '<projectName>/<feedName>'
    vstsFeedPackage: '<packageName>'
    vstsPackageVersion: 1.0.0
    downloadDirectory: '$(Build.SourcesDirectory)\someFolder'
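
For context, the same task can publish these packages in the first place. Here's a rough sketch of the publish counterpart; the feed, package name and version option are placeholders you'd adjust to your own feed:

- task: UniversalPackages@0
  displayName: 'Universal publish'
  inputs:
    command: publish
    publishDirectory: '$(Build.ArtifactStagingDirectory)'
    vstsFeedPublish: '<projectName>/<feedName>'
    vstsFeedPackagePublish: '<packageName>'
    versionOption: patch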
    
    

Download from Pipeline

Next up: the DownloadPipelineArtifact task is a full-featured built-in task that can download artifacts from different sources, such as an artifact produced in an earlier stage, a different pipeline within the project, or other projects within your ADO Organization. You can even download artifacts from projects in other ADO Organizations if you provide the appropriate Service Connection.

- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    project: 'c7233341-a9ff-4e76-9367-909816bcd16g'
    pipeline: 1
    runVersion: 'latest'
    targetPath: '$(Pipeline.Workspace)'

Note that if you're downloading an artifact from a different project, you'll need to adjust the authorization scope of the build agent. This is found in Project Settings –> Pipelines : Settings. If this setting is disabled, you'll need to adjust it at the Organization level first.


This works exactly as you'd expect it to, and the artifacts are downloaded to $(Pipeline.Workspace). Note that in the above I'm using the project guid and pipeline id, which are populated by the Pipeline Editor, but you can specify them by their name as well.
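
For illustration, a name-based version might look something like the following; ProjectName and PipelineName are placeholders, and this assumes the task resolves names the same way the Pipeline Editor does:

- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    project: 'ProjectName'
    pipeline: 'PipelineName'
    runVersion: 'latest'
    targetPath: '$(Pipeline.Workspace)'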

My only concern is there isn't anything that indicates our pipeline is dependent on another project. The pipeline dependency is silently being consumed… which feels sneaky.


Declared as a Resource

The technique I've recently been using is declaring the pipeline artifact as a resource in the YAML. This makes the pipeline reference much more obvious in the pipeline code and surfaces the dependency in the build summary.

Although this supports the ability to trigger our pipeline when new builds are available, we'll skip that for now and only download the latest version of the artifact at runtime (a sketch of the trigger variant follows the next example).

resources:
  pipelines:
    - pipeline: my_dependent_project
      project: 'ProjectName'
      source: PipelineName
      branch: master
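
If we did want new runs of the dependent pipeline to queue ours automatically, the resource declaration might look roughly like this; a sketch, where the branch filter is an assumption about what you'd want:

resources:
  pipelines:
    - pipeline: my_dependent_project
      project: 'ProjectName'
      source: PipelineName
      trigger:
        branches:
          include:
            - master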
    
    

To download artifacts from that pipeline we can use the download alias for DownloadPipelineArtifact. The syntax is more terse and easier to read. This example downloads the published artifact 'myartifact' from the declared pipeline reference. The download alias doesn't seem to offer a way to specify the download location; in this example, the artifact is downloaded to $(Pipeline.Workspace)\my_dependent_project\myartifact

- download: my_dependent_project
  artifact: myartifact
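
If you only need a subset of the artifact's files, the download step also accepts file-matching patterns; a small sketch, where the glob is just an example:

- download: my_dependent_project
  artifact: myartifact
  patterns: '**/*.zip'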
    
    

With this in place, the artifact shows up and its change history appears in the build summary.


Update: 2020/06/18! Pipelines now appear as Runtime Resources

At the time this article was written, there was an outstanding defect for referencing pipelines as resources. With this defect resolved, you can now specify the version of the pipeline resource to consume when manually kicking-off a pipeline run.

1. Start a new pipeline run
2. Open the list of resources and select the pipeline resource
3. From the list of available versions, pick the version of the pipeline to use

With this capability, we now have full traceability and flexibility to specify which pipeline resource we want!

Conclusion

So there you go. Three different ways to consume artifacts.

Happy coding!