Edward Thomson

A GitHub App for VSTS Build

September 8, 2018  •  4:42 PM

Over the last few months, I've been trying to take a broader view of DevOps. I've been working on Git for a bunch of years, and other version control systems for many more, so that will always be my home. But lately, I've been thinking a lot about build and release pipelines. So last weekend I decided to work on a fun project: using Probot to build a GitHub App that integrates with the Visual Studio Team Services build pipelines.


Over on the libgit2 project, we've been moving over to Visual Studio Team Services for our continuous integration builds and pull request validation. I'm obviously a bit biased, as I work on the product, but I'm very happy to move us over to VSTS — previously, we used a mix of CI/CD providers for different platforms, but since VSTS provides hosted Linux, Windows and macOS build agents, we're able to consolidate. Plus we have an option to run build agents on our own hardware or VMs, so we can expand our supported platform matrix to include platforms like this cute little ARM.

Raspberry Pi

One thing that VSTS hasn't fixed for us, though, is some occasionally flaky tests. We have tests that hit actual network services like Git hosting providers and HTTPS validation endpoints. And when you run nine builds for every pull request update, eventually one of those is bound to fail. So we need a way to rebuild PR builds when we hit one of these flaky tests.

Obviously, I can set everybody up with permissions to VSTS to be able to log in and rebuild things. But wouldn't it be easier if we could do that right from the pull request? I thought it would - plus it would give me an excuse to play with Probot, a simple framework to build GitHub Apps.

I was really impressed by how easy it was to build a GitHub App that integrates with VSTS build pipelines using the VSTS Node API, and how quickly I could set up an integration so that somebody can just type /rebuild in a pull request and have the VSTS build pipeline do its thing.

Results of a rebuild command

Getting Started

When you read Probot's getting started guide, you'll notice that there's a handy bootstrapping script that you can use to scaffold up a new GitHub App. And it will optionally do it with TypeScript:

npx create-probot-app --typescript my-first-app

So of course I included that flag. If I'm going to learn node.js, I might as well learn TypeScript, too. And I'm incredibly happy that I did.

Then, all I had to do was install the VSTS Node API.

npm install vso-node-api --save

And then gluing this all together is a pretty straightforward interaction between Probot, the GitHub API and the Visual Studio Team Services API.

How it Works

You can — of course — grab all this from the GitHub repository for probot-vsts-build, but here's a quick walk-through to explain how Probot, the GitHub API, and the VSTS API work and work together:

  1. Probot: set up the event listener

    First, we set up an event handler to listen for when new comments on an issue are created. (This will fire for new comments on a pull request as well.)

    app.on(['issue_comment.created'], async (context: Context) => {
      var command = context.payload.comment.body.trim()
      if (command == "/rebuild") {
        context.log.trace("Command received: " + command)
        new RebuildCommand(context).run()
      }
    })

    This will create a new RebuildCommand and run it. I decided that I might want to expand this to do additional things in the future, even though the only thing it listens to today is the /rebuild command.
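A hypothetical sketch (not the real app's code) of how that dispatch could grow to support more commands than /rebuild: extract the command token from the comment body, so anything after it could become arguments later.

```typescript
// Hypothetical helper: pull the command token out of a comment body so that
// more commands than /rebuild could be dispatched in the future.
function parseCommand(body: string): string | null {
  const trimmed = body.trim()
  if (!trimmed.startsWith('/')) {
    return null
  }
  // Only the first whitespace-delimited token names the command; anything
  // after it could become arguments for future commands.
  return trimmed.split(/\s+/)[0]
}
```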

  2. GitHub: query the issue to make sure it's a pull request

    Since we get these events for both issues and pull requests, we want to make sure that somebody didn't type /rebuild on an issue - if that were the case, there wouldn't be anything to do.

    Probot gives us a GitHub API context that we can use to query the pull request API and ensure that it's really a PR. If it's not, we'll just exit as there's nothing to do:

    var pr = await this.probot.github.pullRequests.get({ owner: this.repo_owner, repo: this.repo_name, number: this.issue_number })
    if (!pr.data.base) {
      this.log.trace('Pull request ' + this.probot.payload.issue.number + ' has no base branch')
      return null
    }

  3. GitHub: ensure the user requesting the rebuild has permission

    We want to limit the people who can request a rebuild to project collaborators. This prevents someone from (accidentally or intentionally) DoS'ing our build service. A misbehaving bot or a not-nice person could just post /rebuild over and over again in an issue and tie up our build queue, preventing PR builds from happening.

    Looking at project collaborators is, admittedly, a pretty arbitrary way to restrict things. It was pointed out that I could have also looked at write permission to the repository.

    It just turns out that this is the first way I thought to do it. 😀

    var response = await this.probot.github.repos.getCollaborators({ owner: this.repo_owner, repo: this.repo_name })
    var allowed = false
    this.log.debug('Ensuring that ' + this.user.login + ' is a collaborator')
    response.data.some((collaborator) => {
      if (collaborator.login == this.user.login) {
        allowed = true
        return true
      }
      return false
    })

  4. VSTS: load all the Team Projects for the given VSTS account

    I want to keep configuration simple, so the only thing you need to use this app is a VSTS account (URL) and a personal access token to authenticate to VSTS. VSTS has the notion of a "Team Project" which is another layer you can use to subdivide your account.

    For my personal VSTS account, I have it split up into different projects, one for each of my open source projects, so that their build pipelines aren't all jumbled together.

    VSTS Project List

    Since the build definitions for pipelines live in a Team Project, the first thing to do is look up all the projects unless the VSTS_PROJECTS environment variable is set. (This lets you skip this round-trip, at the expense of another bit of configuration.)

    if (process.env.VSTS_PROJECTS) {
      return process.env.VSTS_PROJECTS.split(',')
    }

    var coreApi = await this.connectToVSTS().getCoreApi()
    var projects = await coreApi.getProjects()
    var project_names: string[] = [ ]
    projects.forEach((p) => {
      project_names.push(p.name)
    })
    return project_names

  5. VSTS: find the build definitions for pull requests for this GitHub repository

    Once we have the list of team projects, we want to look at all the build definitions within those team projects for a definition that is triggered for pull requests in the GitHub repository where we typed /rebuild.

    So we want to query all build definitions for this GitHub repository:

    var all_definitions = await vsts_build.getDefinitions(

    Some of these definitions might be set up only for continuous integration — when something is pushed or merged into the master branch — and not for pull requests. So we want to iterate these definitions looking for the ones that have a pull request trigger configured.

    definition.triggers.some((t) => {
      if (t.triggerType.toString() == 'pullRequest') {
        var trigger = t as PullRequestTrigger
        if (!trigger.branchFilters) {
          return false
        }
        trigger.branchFilters.some((branch) => {
          if (branch == '+' + pull_request.base.ref) {
            this.log.trace('Build definition ' + definition.id + ' is a pull request build for ' + pull_request.base.ref)
            is_pr_definition = true
            return true
          }
          return false
        })
        if (is_pr_definition) {
          return true
        }
      }
      return false
    })

    (If there's one thing that I truly regret in this code, it's using a some here. It felt idiomatic at first, but a simple for loop would have been more sensible. I'll fix this up at some point.)
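A plain-loop version of that trigger check might look like this; it uses simplified stand-ins for the VSTS types (the real BuildDefinition and PullRequestTrigger objects carry many more fields than shown here):

```typescript
// Simplified stand-ins for the VSTS build definition types (assumption:
// the real types have many more fields than these).
interface TriggerLike { triggerType: string; branchFilters?: string[] }
interface DefinitionLike { id: number; triggers: TriggerLike[] }

// True if any trigger is a pull request trigger whose branch filters
// include the pull request's base branch.
function isPullRequestDefinition(definition: DefinitionLike, baseRef: string): boolean {
  for (const trigger of definition.triggers) {
    if (trigger.triggerType !== 'pullRequest' || !trigger.branchFilters) {
      continue
    }
    for (const branch of trigger.branchFilters) {
      // Branch filters use a '+' prefix for included branches.
      if (branch === '+' + baseRef) {
        return true
      }
    }
  }
  return false
}
```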

  6. VSTS: see what builds were run for this pull request

    We want to requeue builds, not start new ones. This sounds like a subtle distinction, but it ensures that the pull request gets updated with the new build status.

    That means we need to query all the builds that have been performed for this pull request for the definitions that support PR builds:

    var builds_for_project = await vsts_build.getBuilds(
        definition_for_project.build_definitions.map(({id}) => id),
        'refs/pull/' + this.issue_number + '/merge',

    (Oops — here's another thing that I just realized — since build definitions are optional in this API, we could have skipped that last query, and just left the second argument undefined. Another thing to improve when I have a minute!)
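The branch argument in that query follows GitHub's convention for the ref that holds a pull request's merge result; as a tiny helper:

```typescript
// GitHub publishes the merge result of pull request N at refs/pull/N/merge;
// that's the branch VSTS builds for pull request validation.
function pullRequestMergeRef(pullRequestNumber: number): string {
  return 'refs/pull/' + pullRequestNumber + '/merge'
}
```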

  7. VSTS: requeue those builds

    Now that we have the list of builds that were originally queued, we can requeue them.

    var queuedBuild = await vsts_build.requeueBuild(sourceBuild, sourceBuild.id, sourceBuild.project.id)

    But wait! You might notice that the VSTS API doesn't actually have a requeueBuild function. That's because it's a very new endpoint, but I noticed the "Rebuild" button in the VSTS UI:

    Rebuild button in the UI

    A quick peek at the network traffic showed that it was POSTing an empty body to the URL for the existing build endpoint for that build id. Fortunately, since the new method is against the same endpoint, I was able to look at the getBuild and deleteBuild APIs to understand how to construct a URL for that same endpoint, using its GUID, and create a request.

    var routeValues: any = {
      project: project
    }
    let queryValues: any = {
      sourceBuildId: buildId
    }

    try {
      var verData: vsom.ClientVersioningData = await this.vsoClient.getVersioningData(
      var url: string = verData.requestUrl!
      var options: restm.IRequestOptions = this.createRequestOptions(
      var res: restm.IRestResponse<Build>
      res = await this.rest.create<Build>(url, { }, options)
      var ret = this.formatResponse(res.result, TypeInfo.Build, false)
      return ret
    }
    catch (err) {
      throw err
    }

    And I can even create that as an extension method on the VSTS API:

    declare module 'vso-node-api/BuildApi' {
      interface IBuildApi {
        requeueBuild(build: Build, buildId: number, project?: string): Promise<Build>
      }
      interface BuildApi {
        requeueBuild(build: Build, buildId: number, project?: string): Promise<Build>
      }
    }

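Put together, the request boils down to a URL like the following: a sketch where the account name, project and api-version are illustrative assumptions, with only the sourceBuildId query parameter taken from the observed traffic.

```typescript
// Sketch of the requeue URL shape observed from the VSTS UI's network
// traffic: a POST of an empty body to the builds endpoint, with the
// original build passed as sourceBuildId. The account, project and
// api-version values here are illustrative assumptions.
function requeueUrl(account: string, project: string, sourceBuildId: number, apiVersion: string): string {
  const base = `https://${account}.visualstudio.com/${encodeURIComponent(project)}/_apis/build/builds`
  return `${base}?sourceBuildId=${sourceBuildId}&api-version=${apiVersion}`
}
```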
  8. GitHub: tell the user that we did it

    Finally, all we need to do is tell the user that we succeeded, so we'll post something back in that issue thread:

    await this.probot.github.issues.createComment({
      owner: this.repo_owner,
      repo: this.repo_name,
      number: this.issue_number,
      body: 'Okay, @' + this.user.login + ', I started to rebuild this pull request.'
    })

And that's it! Once we configure and deploy our GitHub App, we'll now listen for /rebuild commands and queue new builds:

Results of a rebuild command

Jekyll with VSTS and Azure

August 14, 2018  •  4:42 PM

I've been a big fan of the Jekyll website generator for a few years now. I use it for all my websites, and I've recently perfected my build pipeline from GitHub to Azure using Visual Studio Team Services. In particular, I use Jekyll with VSTS and Azure for the website for my podcast, All Things Git, and even after a few months, I couldn't be happier with this setup.

What is Jekyll?

If you're not familiar with Jekyll, it's a simple tool that takes a site as Markdown and processes it into HTML. It's a great way to create a site and it's a much, much simpler blog platform than something like WordPress.

One of the most popular Jekyll installations is on GitHub Pages, where it's easy to set up your site in a Git repository, push it and have it published to your GitHub Pages site. This is perfect for open source projects without too many custom requirements. For example, we use GitHub pages for hosting the website for libgit2, the open source project.

But if you need to scale up, it's easy to deploy Jekyll sites to Azure, and use VSTS to manage the deployment.

Outgrowing GitHub Pages

GitHub Pages is great for simple, static sites. We use it for the libgit2 site, for example. But many sites need some additional customization beyond what GitHub Pages offers. For example: you might want to enable search on your Jekyll blog using the lunr.js plugin. Or you might want to create an approval pipeline — adding a staging server so that you can review changes before moving them to production.

You can also set up a site with custom networking or execution requirements. All Things Git uses Azure CDN so that we can serve the episode audio efficiently. We also add custom executables and filters running within the web application, so that we can do things like configuring a custom Google Analytics setup. This lets us support Analytics for non-HTML files on the site (like our audio).

Obviously, GitHub Pages doesn't allow you to run arbitrary code, since it's a shared service, so hosting the site ourselves on a provider like Azure is our only option.

Define Your Build

VSTS has an incredibly flexible CI/CD system that supports a variety of platforms. It offers hosted build agents, in Azure, for Windows (of course), Linux and even macOS. But none of these agents have Jekyll installed out of the box. So our first step could be to install Jekyll.

But you know what's easier than installing Jekyll? Not installing it. Since Jekyll ships a docker image, we can simply run that for our build.

So our build steps are:

  1. Run a jekyll build in the jekyll/builder docker image. This will take our site, process it, and spit out the rendered site.
  2. Zip up jekyll's output, the results of the rendering.
  3. Publish that zip as a build artifact; we can download this to inspect it, or use the zip in a deployment.

VSTS has a YAML-based build system as well as a graphical designer. I prefer the YAML — using a file to configure your builds ties the build process strongly to the code, an idea called "configuration as code". Checking the build process in ensures that the build steps are accurate for every version being built, even as you check out historical versions or change branches.

You can add the YAML directly to the repository, creating a .vsts-ci.yml in the root of your project:

- repo: self

- task: Docker@0
  displayName: Run Jekyll
  inputs:
    action: 'Run an image'
    imageName: 'jekyll/builder:latest'
    volumes: |
      $(Build.SourcesDirectory):/srv/jekyll
      $(Build.BinariesDirectory):/srv/jekyll/_site
    containerCommand: 'jekyll build --future'
    detached: false

- task: ArchiveFiles@1
  displayName: Archive Files
  inputs:
    rootFolder: '$(Build.BinariesDirectory)'
    includeRootFolder: false
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'

- task: PublishBuildArtifacts@1
  displayName: Publish Site
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    ArtifactName: www

Enable the Build Definition

In Visual Studio Team Services, navigate to the 🚀 Build and Release area of your project, and click   + New   to create a new build definition.

If you're using a Git repository hosted in VSTS, you can just select it; for GitHub or a locally-hosted GitHub Enterprise instance, you'll need to authenticate, then you can select the repository that contains your Jekyll site.

Repository Selection

When you click continue, you'll be prompted to select a build template. VSTS supports a variety of platforms and has templates for many configurations out of the box. But we recommend using configuration as code with YAML files. Since we set that up in the last step, select the YAML option and click Apply.

YAML Configuration as Code

Finally, in the build definition configuration, set up your process. Give your build definition a name, set the agent queue to the "Hosted Linux Preview" queue (since our docker image is a Linux image) and set the YAML path to .vsts-ci.yml, which we created in the previous step. (You can simply click the   …   button to browse your repository.)


And that's it - this will set up a CI pipeline so that when you push into master, or merge a pull request, Jekyll will build out your site and package it into a zip file.

Configure the Deployment Pipeline

Once you've got your website built and packaged, you can deploy it to your web application. Navigate to the 🚀 Releases area of your project, and click "Create a release pipeline". You'll be prompted to select a template - the easiest to start with is the Azure App Service deployment template.

App Service Deployment

You'll immediately be prompted to configure the deployment environment. All you need to do is give it a name - I call mine "Production":


In the Artifacts tab on the left, click "Add an Artifact", then select the build artifact you created in the previous step and click   Add  .

Select Artifact

Once the artifact is configured, click the ⚡️ lightning bolt above the artifact to configure the Continuous deployment trigger. In the pane that opens, turn the continuous deployment trigger option to enabled. This will start a new deployment every time your build completes.

Continuous Deployment Trigger

Now you need to configure the app service to deploy to. You need to have already created a web app in the Azure portal. And you can do that on one of the budget plans — for a static web site you probably don't need much larger than the Shared plan. You could even get by with the Free plan if you don't want a custom domain.

All you have to do is select your Azure subscription (click Manage if you haven't connected your VSTS account and your Azure account), and then select the App service that you want to deploy to.

Pipeline Target

Next, give your pipeline a name. Hover over "New release pipeline" at the top of the page, and select the ✏️ to edit it.

Pipeline Name

Finally, click 💾 Save and your pipeline is configured.

A CI/CD Pipeline for Jekyll

That's it! Now you have a full CI/CD pipeline for your static web site, powered by Jekyll, VSTS and Azure.

When you push a change to the master branch, or merge a pull request into master, VSTS will start a Jekyll build on one of its Linux build agents, hosted in Azure. The results will be packaged up, then deployed to your Azure web app. It's an incredibly easy process that gives you a flexible and powerful pipeline for your website.

A security vulnerability in Git has been announced: a bug in submodule resolution can cause git clone --recursive to execute arbitrary commands.

What's the problem?

When a Git repository contains a submodule, that submodule's repository structure is stored alongside the parent's, inside the .git folder. This structure is generally stored in a folder with the same name as the submodule; however, the name of this folder is configurable by a file in the parent repository.
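That file is .gitmodules, at the root of the parent repository; the submodule's name in it (illustrative here) determines the folder under .git/modules where the submodule's repository structure is stored:

```ini
; .gitmodules in the parent repository; the name "plugin" (illustrative)
; maps to .git/modules/plugin. Vulnerable versions of Git would also
; accept a name containing "..", pointing the storage folder elsewhere.
[submodule "plugin"]
	path = plugin
	url = https://example.com/plugin.git
```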

Vulnerable versions of git allow the folder name to contain a path that is not necessarily beneath the .git directory. This can allow an attacker to carefully create a parent repository that has another Git repository checked in, as a folder inside that parent repository. Then that repository that's checked in can be added as a submodule to the parent repository. That submodule's location can be set outside of the .git folder, pointing to the checked-in repository inside the parent itself.

When you recursively clone this parent repository, Git will look at the submodule that has been configured, then look for where to store that submodule's repository. It will follow the configuration into the parent repository itself, to the repository that's been checked in as a folder. That repository will be used to check out the submodule… and, unfortunately, any hooks in that checked-in repository will be run.

So the attacker can bundle this repository configuration with a malicious post-checkout hook, and their code will be executed immediately upon your (recursive) clone of the repository.
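The fix validates submodule names so they can't escape the .git/modules directory. A simplified sketch of that kind of validation (the real check lives in Git's C code and is more thorough than this):

```typescript
// Simplified sketch: reject submodule names that could point the
// submodule's repository storage outside of .git/modules.
function isSafeSubmoduleName(name: string): boolean {
  if (name.length === 0) {
    return false
  }
  // Reject absolute paths outright.
  if (name.startsWith('/') || name.startsWith('\\')) {
    return false
  }
  // Reject any '..' path component, which would walk up and out of
  // the .git/modules directory.
  return name.split(/[\/\\]/).every((component) => component !== '..')
}
```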

Hosting providers

Thankfully, since most of us rely on a hosting provider to store our code, we can stop this vulnerability by simply blocking the repositories there. Visual Studio Team Services is actively blocking any repository that tries to set up a git submodule outside of the .git directory. I'm told that GitLab and GitHub are, too, and presumably other hosting providers are blocking these malicious repositories as well.

Upgrade your client

Blocking these repositories on the hosting providers shuts down an important attack vector, and I hope that it's unlikely that you git clone --recursive a repository that you don't trust. Despite that, you should still upgrade your client.

Git version 2.17.1 is the latest and greatest version of Git, and has been patched. But most people don't actually build from source, so your version of Git is probably provided to you by a distribution. You may have different versions available to you - ones that have had the patches applied by your vendor - so you may not be able to determine if you're vulnerable simply by looking at the version number.

Here are some simple steps to determine whether you're vulnerable, and some upgrade instructions if you are.

Are you vulnerable?

You can easily (and safely) check whether your version of Git is vulnerable to this recent security vulnerability. Run this from a temporary directory:

git init test && \
  cd test && \
  git update-index --add --cacheinfo 120000,e69de29bb2d1d6434b8b29ae775ad8c2e48c5391,.gitmodules

Note: this will not actually clone any repositories to your system, and it will not execute any dangerous commands.

If you see:

error: Invalid path '.gitmodules'
fatal: git update-index: --cacheinfo cannot add .gitmodules

Congratulations - you are already running a version of Git that is not vulnerable.

If, instead, you see nothing, then your version of Git is vulnerable and you should upgrade immediately.


Windows

Windows is quite easy to upgrade. Simply grab the newest version of Git for Windows (version 2.17.1) from https://gitforwindows.org/.


macOS

Apple ships Git with Xcode but unfortunately, they do not update it regularly, even for security vulnerabilities. As a result, you'll need to upgrade to a version provided by a third party. Homebrew is the preferred package manager for macOS.

  1. If you have not yet installed Homebrew, you can install it by running:

    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

    at a command prompt.

  2. After that, you can use Homebrew to install git:

    brew install git
  3. Add the Homebrew install location (/usr/local) to your PATH.

    echo 'export PATH="/usr/local/bin:$PATH"' >> ~/.bashrc
  4. Close all open Terminal sessions, quit Terminal.app, and re-open it.

Linux

If you're using the current version of Ubuntu or Debian, then they'll have the latest version ready. If you're on a stable system, like a server, you should be running an LTS release - a "long term support" version - where they backport security patches like this one. So you should simply need to:

  1. Get the latest information about the available software versions from the remote repository:

    Debian, Ubuntu:

    sudo apt-get update

    Red Hat, CentOS:

    sudo yum check-update
  2. Install the latest version of git:

    Debian, Ubuntu:

    sudo apt-get install git

    Red Hat, CentOS:

    sudo yum update git

Ensuring that you're patched

Now if you run:

git init test && \
  cd test && \
  git update-index --add --cacheinfo 120000,e69de29bb2d1d6434b8b29ae775ad8c2e48c5391,.gitmodules

at a command prompt, then you should see:

error: Invalid path '.gitmodules'
fatal: git update-index: --cacheinfo cannot add .gitmodules

And now you're patched against the Git security vulnerabilities, CVE-2018-11233 and CVE-2018-11235.

Thanks to Junio Hamano, Jeff King, Johannes Schindelin and the rest of the Git security community for their work to keep our source code safe and secure.

If you're interested in security vulnerabilities in Git, please join me at NDC Oslo, where I'll talk you through the details of this security issue and others.

tl;dr: If you just want the instructions for configuration, they're here.

I spend a lot of time writing cross-platform software, which means a lot of time writing code on Windows or testing my code there. So the Windows Subsystem for Linux has been a lifesaver for me, since it lets me run Linux applications — in fact, a whole Debian distribution — on my Windows machine (without needing to run a virtual machine).

I was talking to someone about this last week at the Build 2018 conference, and they mentioned that they liked WSL but they really wished that they had a GUI credential manager — like the Git Credential Manager — on the Linux side.

They were surprised when I told them that they could! 🤯

If you're not familiar with the Git Credential Manager, it allows you to authenticate to a remote Git server easily, even if you have a complex authentication pattern like Azure Active Directory or two-factor authentication. Git Credential Manager integrates into the authentication flow for services like Visual Studio Team Services, Bitbucket and GitHub and — once you're authenticated to your hosting provider — requests a new authentication token and stores it securely in the Windows Credential Manager. After the first time, you can use git to talk to your hosting provider without needing to re-authenticate; it will just use the token in the Windows Credential Manager.

This gets set up for you automatically when you install Git for Windows but you can also configure it to work with Windows Subsystem for Linux.

Git Credential Manager on Windows Subsystem for Linux

You can set it up by running[1]:

git config --global credential.helper "/mnt/c/Program\ Files/Git/mingw64/libexec/git-core/git-credential-manager.exe"

Now any git operation you perform within Windows Subsystem for Linux will use the credential manager. If you already have credentials cached for a host, it will simply read them out of the credential manager. Otherwise, you'll get the same nice UI dialog experience, even if you're in a Linux console.

This support relies on the fact that Windows Subsystem for Linux and Windows itself can interoperate and you can invoke Windows applications from WSL.[2]

  1. This is the default path for a Git for Windows installation; you may need to tweak this if you're using Cygwin or mingw.

  2. Note, however, that you do need to update to the Windows 10 April 2018 update; prior versions had a problem with sharing stdin/stdout when the Windows application was a .NET application instead of Win32. 

Introducing ntlmclient

May 6, 2018  •  11:55 PM

I’d like to announce ntlmclient, a new open source library that I built. Usually I'd be announcing it proudly and encouraging you to use my code — but this time, I’d ask you to please not use it.

See, this new library performs NTLM2 authentication. And, to be honest, I’d like to ask you to not perform NTLM2 authentication at all. But — if you really must use NTLM2 — then I suppose that this new library will do the job.

I intend to add this to libgit2, the Git library that backs clients like GitKraken and gmaster. Because regrettably, we really must use NTLM2. Many people still use NTLM2 with their on-premises Team Foundation Server instances, and we’d like all the tools that use libgit2 to be able to talk to their Git repositories hosted in TFS.

At the moment, libgit2 can already speak NTLM2 on Windows clients; using this library will enable Unix platforms to speak NTLM2 as well.

A bit of background

My first experience with NTLM was way back in 2006, when I was working at Teamprise. We were building cross-platform tools to talk to Microsoft Team Foundation Server; we had a plug-in for the Eclipse IDE, a standalone GUI tool, and a command-line client for Windows, Mac, Linux and a bunch of legacy Unix platforms.

(Today these tools live on as Microsoft Team Explorer Everywhere.)

We faced a lot of challenges reimplementing Microsoft’s tools — and one of those early features that we needed to implement was the NTLM2 authentication protocol. Since we were building a plug-in for the Eclipse IDE, we built our entire client suite in Java. And — regrettably — our HTTP stack didn’t support NTLM2 at the time, only the older LM and NTLM protocols.

But LM and NTLM are truly ancient algorithms, so modern systems disable them both, in favor of the slightly less ancient NTLM2 algorithm. So at Teamprise, we were forced to learn about, and ultimately implement, NTLM2 ourselves.

That was over a decade ago, and it certainly hasn’t gotten any better with age.

How NTLM2 works

Many people still have their Windows servers — and some of the applications on them — configured to use NTLM2. That’s because it’s not without its advantages: it’s the simplest way to enable "single sign-on". When you sign in to your local computer, it hashes your password and stores that hash in memory. This is the same hash that the server — or your Active Directory server — has stored. Later, when you communicate with a server that wants you to authenticate with NTLM2, you encrypt a shared random value that the server gives you using that hash. Then you send that encrypted value to the server — it will encrypt the same value with its hash and, if the two match, that proves you entered the same password without actually having to transmit the password itself, or even keep it in memory in plaintext.
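The challenge-response step can be sketched in a few lines. This is an illustration of the idea only, not the actual NTLM2 wire format or key derivation:

```typescript
import { createHmac } from 'node:crypto'

// Illustration of the idea only: the client keys an HMAC-MD5 with the
// stored password hash and applies it to the server's random challenge,
// proving knowledge of the hash without sending it. The real NTLM2
// exchange adds specific key derivation, a client challenge and
// structured blobs around this step.
function challengeResponse(passwordHash: Buffer, serverChallenge: Buffer): string {
  return createHmac('md5', passwordHash).update(serverChallenge).digest('hex')
}
```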

This is a clever way to allow you to authenticate to a remote server without having to type your password. But there’s a better way.


Kerberos

Kerberos also enables single sign-on, but instead of relying on ciphers like RC4 and HMAC-MD5, Kerberos is built on modern ciphers. Microsoft Active Directory is built around Kerberos, so it’s obviously well-supported on Windows, but Kerberos is also an industry standard. There are great implementations available including MIT’s and Heimdal.

However, the reality is that Kerberos requires some additional configuration on Windows servers. And this configuration is absolutely worth it on a production machine. If you want to support single sign-on, you should probably be using Kerberos in production. But if you’re just spinning up a test server, it’s sometimes worth it to just use NTLM2. And the reality is that NTLM2 over an encrypted connection like TLS is still a reasonable solution.

Fundamentally, a lot of people still use it.

So I created a new NTLM2 client library. It’s basically a port of Team Explorer Everywhere’s NTLM2 code that’s been used in production for over a decade — but it’s a port to C, with minimal dependencies. It only requires a cryptography library for the underlying cipher support. On macOS, ntlmclient will use Common Crypto, the system’s cryptography libraries. On Linux, ntlmclient uses either OpenSSL or mbedTLS, whichever library you have on your system.

So, please, don’t use NTLM2. If you need single sign-on support, you're probably best off using Kerberos. And if you don't need single sign-on support, just use Basic authentication over TLS.

But if you do need to support NTLM2 — like if you need to talk to an on-premises Team Foundation Server that wasn’t configured with an SPN for Kerberos — then I hope my new ntlmclient library helps.