Azure Functions Continuous Deployment with Azure Pipelines: Part 3 - Creating an Initial Build Pipeline

This is the third part in a series demonstrating how to set up continuous deployment of an Azure Functions App using Azure DevOps build and release pipelines.

Demo app source code is available on GitHub.

In the previous instalment we created an Azure DevOps organization and project. We can now create a build pipeline by clicking the New pipeline button:

Starting to create a new build Azure Pipeline

Next we need to specify where the source code is located; this could be hosted inside Azure DevOps using Azure Repos, or in GitHub. The demo app is located in GitHub.

Triggering an Azure Pipeline from GitHub

Next we need to give permission from GitHub to enable the pipeline to access the repository. The easiest way to do this is to click the Install our app from the GitHub Marketplace link as shown in the following screenshot:

Installing the Azure Pipelines GitHub App

You can allow access to all current (and future) repositories or specified ones:

Installing the Azure Pipelines GitHub App

Click the Install button, confirm your GitHub password, and authorize Azure Pipelines:

Authorize Azure Pipelines

You will be redirected back to Azure DevOps where you can now select the InvestFunctionApp GitHub repository.

You can now select a starter YAML template that will define what happens during the build:

Defining a starter YAML Azure Pipeline build template

Choose the ASP.NET Core template:

Defining a starter YAML Azure Pipeline build template

Click Save and run, and then Save and run again to commit the new YAML file to the repository and queue the build.

The build job will now run:

Initial build pipeline executing

This initial build will fail, but don’t worry: we need to go and edit the YAML anyway.

Defining an Azure Pipeline Build for an Azure Function App in YAML

When we chose the starter template earlier, a new file was added to the root of the repository in GitHub called azure-pipelines.yml.

This file defines the steps that make up the build using the Azure Pipelines YAML schema.

Essentially a build is made up of a number of steps; each step could be a handwritten custom script, a call to a prebuilt task, or a reference to another template.

We can now customize the YAML to define the build as follows:

pool:
  vmImage: 'Ubuntu 16.04'
  
steps:

- script: dotnet build 'src/InvestFunctionApp/InvestFunctionApp.sln' --configuration $(buildConfiguration)
  displayName: 'Build solution'
    
- script: dotnet test 'src/InvestFunctionApp/InvestFunctionApp.Tests' --configuration $(buildConfiguration) --logger trx
  displayName: 'Run unit tests'

- task: PublishTestResults@2
  inputs:
    testRunner: VSTest
    testResultsFiles: '**/*.trx'

- script: dotnet publish 'src/InvestFunctionApp/InvestFunctionApp/InvestFunctionApp.csproj' --configuration $(buildConfiguration) --output '$(Build.ArtifactStagingDirectory)/app'
  displayName: 'Package function app'

- task: PublishBuildArtifacts@1
  displayName: 'Publishing app artifact'
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)/app'
    artifactName: app

- task: CopyFiles@2
  displayName: 'Copy end to end tests'
  inputs:
    sourceFolder: 'src/InvestFunctionApp'
    targetFolder: '$(Build.ArtifactStagingDirectory)/e2etests'

- task: PublishBuildArtifacts@1
  displayName: 'Publish end to end test artifact'
  inputs:
    pathtoPublish: '$(Build.ArtifactStagingDirectory)/e2etests'
    artifactName: e2etests

In the preceding YAML, the displayName values describe what each step does (they will also appear in the Azure Pipelines GUI/logs when builds are run).

Creating Multiple Build Artifacts in Azure Pipelines

One important thing to note in the preceding YAML is that we are creating two separate build artifacts: one that contains only the Function App contents (for deployment to Azure) and one to be able to run the tests.

The Function App artifact is created by first calling dotnet publish and choosing the (temporary) output directory app with the switch --output '$(Build.ArtifactStagingDirectory)/app'. To actually create the build artifact that can be consumed in a release pipeline, the prebuilt PublishBuildArtifacts@1 task is called. This task takes its input from pathtoPublish: '$(Build.ArtifactStagingDirectory)/app' and will create a build artifact with the name app by virtue of the artifactName: app setting.

To create a separate artifact for the tests, the CopyFiles@2 task is used to copy (effectively) the entire solution, including the test projects, to the (temporary) targetFolder: '$(Build.ArtifactStagingDirectory)/e2etests'. The final PublishBuildArtifacts@1 task creates a second build artifact called e2etests.

We’ll see these artifacts used later in this series of blog posts.

The reason we create two artifacts here is to separate out what will get deployed to Azure Functions from the code/binaries that contain the tests. What we don’t want is to publish the tests, xUnit.net DLLs, etc. to the deployed Function Apps in Azure. This also helps the readability of the release pipeline and potentially helps to segregate things so that only the intended items get deployed.

We can now save the changes to azure-pipelines.yml and push them to GitHub (which will actually trigger a new CI build).

Viewing Azure Pipeline Builds

Once the changes are pushed, the Azure Pipeline that we created will notice the changes and execute. If you click on Builds you will see the build running:

Azure Pipeline Build Running

Clicking on the build will show you all the steps:

Azure Pipeline build details

Notice in the preceding screenshot that the “Build solution” step is failing. Clicking it will show you the logs for the step:

Viewing Azure Pipeline build logs

Notice in the preceding screenshot the error “buildConfiguration: command not found”. This is because in the YAML build definition we are referencing the $(buildConfiguration) variable, which is not yet defined.
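
As a quick preview of that fix, one way to define the variable is a variables block at the top of azure-pipelines.yml. This is a minimal sketch only (the Release value is an illustrative assumption):

variables:
  buildConfiguration: 'Release'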

In the next instalment of this series we’ll learn how to add variables to a pipeline and fix this problem.


Azure Functions Continuous Deployment with Azure Pipelines: Part 2 - Getting Started

This is the second part in a series demonstrating how to set up continuous deployment of an Azure Functions App using Azure DevOps build and release pipelines.

In the previous instalment we got an overview of Azure DevOps and of the app we’ll be building/deploying.

Demo app source code is available on GitHub.

Creating Azure Function Apps

There will be two Function Apps created in Azure. One will be the production version; the other will be a self-contained test environment (with a separate Azure Storage account). To keep things simple they both belong to the same resource group, and we won’t be using slots, proxies, etc.

The two Function Apps are:

  • InvestFunctionAppDemo (production)
  • InvestFunctionAppDemoTest (test)

The test environment will be used to check the deployment artifacts and as a target to execute functional end-to-end tests against.

Once these two Function Apps have been created in the Portal, all of the deployments will happen automatically from the release pipeline, including setting test/production-specific Function App settings.

If you don’t have an Azure account you can currently sign up for free.

Signing Up for Azure DevOps

Now that there are some Function Apps created in Azure to deploy to, you can deploy to them from an Azure release pipeline. To do this you’ll need to sign up for Azure DevOps.

Azure DevOps contains a number of services; in this series we’ll be using the Azure Pipelines service.

An Overview of Azure Pipelines

Azure DevOps is currently free for open source projects and small teams (with some limitations on things such as parallel job execution), with paid plans going all the way up to 1,000 users for $7,833.26 USD/month. You can find the latest pricing information as part of the product information and compare the features of different plans.

You can use Azure Pipelines on its own; you don’t have to use all of the Azure DevOps services (Boards, etc.).

Some of the features of Azure Pipelines include:

  • Build, test, and deploy for multiple languages including .NET, Python, Java, iOS, etc.
  • Container build and publish support
  • Deploy to clouds including Azure, AWS, and Google Cloud Platform
  • 10 free parallel jobs and unlimited build minutes for all open source projects
  • Advanced workflow features, testing, reporting, gates, etc.

Azure DevOps Organizations and Projects

Once you’ve signed up for Azure DevOps you’ll need an organization and a project.

“..an organization is a mechanism for organizing and connecting groups of related projects. Examples are business divisions, regional divisions, or other organizational structure. You can choose one organization for your entire company, or separate organizations for specific business units, or an organization just for you.” [Microsoft]

“Each organization contains one or more projects. Each project contains a set of features: boards and backlogs for agile planning, pipelines for continuous integration and deployment, repos for version control and management of source code and artifacts, and continuous test integration throughout the life cycle.” [Microsoft]

Creating a new Azure DevOps Project

Once the project is created, you can navigate to the Pipelines section to create and manage build and release pipelines.

In the next part of this series we’ll create an initial build pipeline and get an introduction to defining source-controlled build definitions using YAML.


Azure Functions Continuous Deployment with Azure Pipelines: Part 1 - Overview

This is the first part in a series demonstrating how to set up continuous deployment of an Azure Functions App using Azure DevOps build and release pipelines.

In this series we’ll explore (among other things) how to:

  • Build an Azure Functions App when code is pushed to GitHub
  • Run unit tests as part of the build
  • Create multiple Azure Pipeline artifacts
  • Release to a test Function App environment in Azure
  • Run automated functional end to end tests
  • Release to a production Function App environment in Azure
  • Run smoke tests after a production deployment

…in addition to a whole host of other things such as: YAML build files, Azure Pipeline variables, setting Azure Function app settings from an Azure release pipeline, and using post-deployment gates.

An Overview of Azure DevOps

Azure DevOps is a collection of cloud services to facilitate team DevOps practices and includes:

  • Azure Boards
  • Azure Pipelines
  • Azure Repos
  • Azure Test Plans
  • Azure Artifacts

In this series we’ll be focussing on Azure Pipelines to build, test, and deploy the Function App automatically when new changes are pushed to GitHub.

InvestFunctionApp Overview

The demo app source code can be found on GitHub.

The InvestFunctionApp contains a number of functions that allow the investing of money in a portfolio. The initial entry point is an HTTP-triggered function that allows the investor ID to be specified, along with how much to add to the investor’s portfolio. Depending on the investor’s target allocations, the amount will be invested in either bonds or shares.

There are 4 functions involved in this workflow (a rough sketch of the entry-point function follows the list):

  1. Portfolio function: HTTP trigger, starts the workflow, creates a queue message
  2. CalculatePortfolioAllocation function: queue trigger from the output of (1), calculates where to invest and writes to either buy-stocks queue (3) or buy-bonds queue (4)
  3. BuyStocks function: triggered from buy-stocks queue, simulates buying stocks and updates the investor’s portfolio in Azure Table storage
  4. BuyBonds function: triggered from buy-bonds queue, simulates buying bonds and updates the investor’s portfolio in Azure Table storage
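
The following is a rough sketch of what the entry-point Portfolio function (1) might look like. This is illustrative only and not the exact demo code from GitHub: the route, the query string parameter, the authorization level, and the shape of the DepositRequest message are all assumptions (the deposit-requests queue name is the one the CalculatePortfolioAllocation function triggers from in the demo code).

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

namespace InvestFunctionApp
{
    public static class Portfolio
    {
        [FunctionName("Portfolio")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = "portfolio/{investorId}")] HttpRequest req,
            string investorId,
            [Queue("deposit-requests")] out DepositRequest depositRequest, // consumed by function (2)
            ILogger log)
        {
            // Hypothetical: the deposit amount arrives as a query string value
            decimal amount = decimal.Parse(req.Query["amount"]);

            // Assigning the out parameter creates the queue message
            depositRequest = new DepositRequest { Investor = investorId, Amount = amount };

            return new OkResult();
        }
    }

    // Hypothetical message shape; the real demo app defines its own DepositRequest
    public class DepositRequest
    {
        public string Investor { get; set; }
        public decimal Amount { get; set; }
    }
}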

The app is designed to demonstrate the facilities of Azure Pipelines and is simplified in many ways including minimal error checking, auditing, transactions, security, etc.

Azure DevOps Build Pipeline

The build Azure Pipeline will be automatically triggered from changes pushed to the GitHub repository.

The build pipeline will have the following steps:

  1. Build the solution
  2. Run unit tests
  3. Publish test results
  4. Package the Function App
  5. Publish the packaged Function App as a build artifact
  6. Copy the functional end-to-end tests
  7. Publish the functional end-to-end tests as a separate build artifact

Assuming all these steps run without error, the two build artifacts can be passed to and used by the release pipeline.

Azure DevOps Release Pipeline

The release pipeline will be triggered automatically when the build pipeline completes without error.

The following diagram shows what the resulting release pipeline will look like:

Azure DevOps Release Pipeline example

The release pipeline will:

  1. Deploy the Function App to a test Function App in Azure
  2. Run functional end-to-end tests (written in xUnit.net) against this test deployment
  3. If the tests in (2) pass, deploy to production
  4. After deploying to production, call a smoke test function to verify the deployment was successful

In the next part of this series we’ll create Function Apps, sign up to Azure DevOps, create a new DevOps project, and take a closer look at the demo function app.


Azure Functions Dependency Injection with Autofac

This post refers specifically to Azure Functions V2.

If you want to write automated tests for Azure Functions methods and want to be able to control dependencies (e.g. to inject mock versions of things) you can set up dependency injection.

One way to do this is to install the AzureFunctions.Autofac NuGet package into your functions project.

Once installed, this package allows you to inject dependencies into your function methods at runtime.

Step 1: Create DI Mappings

The first step (after package installation) is to create a class that configures the dependencies. As an example, suppose there is a function method that needs to make use of an implementation of IInvestementAllocator. The following class can be added to the functions project:

using Autofac;
using AzureFunctions.Autofac.Configuration;

namespace InvestFunctionApp
{
    public class DIConfig
    {
        public DIConfig(string functionName)
        {
            DependencyInjection.Initialize(builder =>
            {
                builder.RegisterType<NaiveInvestementAllocator>().As<IInvestementAllocator>(); // Naive

            }, functionName);
        }
    }
}

In the preceding code, a constructor is defined that receives the name of the function that’s being injected into. Inside the constructor, types can be registered for dependency injection. In the preceding code the IInvestementAllocator interface is being mapped to the concrete class NaiveInvestementAllocator.
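
For context, the types involved might look something like the following minimal sketch (the actual definitions in the demo app may differ; the member names and parameter types here are assumptions):

namespace InvestFunctionApp
{
    public interface IInvestementAllocator
    {
        InvestementAllocation Calculate(decimal amount, string investor);
    }

    public class NaiveInvestementAllocator : IInvestementAllocator
    {
        public InvestementAllocation Calculate(decimal amount, string investor)
        {
            // Naive strategy: allocate everything to stocks, ignoring the investor's targets
            return new InvestementAllocation { AmountForStocks = amount, AmountForBonds = 0m };
        }
    }

    public class InvestementAllocation
    {
        public decimal AmountForStocks { get; set; }
        public decimal AmountForBonds { get; set; }
    }
}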

Step 2: Decorate Function Method Parameters

Now that the DI registrations have been configured, the registered types can be injected into function methods. To do this the [Inject] attribute is applied to one or more parameters, as the following code demonstrates:

[FunctionName("CalculatePortfolioAllocation")]
public static void Run(
    [QueueTrigger("deposit-requests")]DepositRequest depositRequest,
    [Inject] IInvestementAllocator investementAllocator,
    ILogger log)
    {
        log.LogInformation($"C# Queue trigger function processed: {depositRequest}");

        InvestementAllocation r = investementAllocator.Calculate(depositRequest.Amount, depositRequest.Investor);
    }

Notice in the preceding code the [Inject] attribute is applied to the IInvestementAllocator investementAllocator parameter. This IInvestementAllocator is the same interface that was registered earlier in the DIConfig class.

Step 3: Select DI Configuration

The final step to make all this work is to add an attribute to the class that contains the function method (that uses [Inject]). The attribute used is the DependencyInjectionConfig attribute that takes the type containing the DI configuration as a parameter, for example: [DependencyInjectionConfig(typeof(DIConfig))]

The full function code is as follows:

using AzureFunctions.Autofac;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace InvestFunctionApp
{
    [DependencyInjectionConfig(typeof(DIConfig))]
    public static class CalculatePortfolioAllocation
    {
        [FunctionName("CalculatePortfolioAllocation")]
        public static void Run(
            [QueueTrigger("deposit-requests")]DepositRequest depositRequest,
            [Inject] IInvestementAllocator investementAllocator,
            ILogger log)
        {
            log.LogInformation($"C# Queue trigger function processed: {depositRequest}");

            InvestementAllocation r = investementAllocator.Calculate(depositRequest.Amount, depositRequest.Investor);
        }
    }
}

At runtime, when the CalculatePortfolioAllocation function runs, an instance of NaiveInvestementAllocator will be supplied to the function.
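
A nice side effect of this parameter-level injection is that unit tests don’t need the DI container at all; the function method can be called directly with a test double. For example (a hedged sketch, assuming an xUnit.net test project and the hypothetical types sketched earlier, including a DepositRequest with Investor and Amount properties):

using Microsoft.Extensions.Logging.Abstractions;
using Xunit;

namespace InvestFunctionApp.Tests
{
    public class CalculatePortfolioAllocationShould
    {
        [Fact]
        public void UseTheSuppliedAllocator()
        {
            var fakeAllocator = new FakeInvestementAllocator();
            var depositRequest = new DepositRequest { Investor = "investor1", Amount = 100m };

            // Call the function method directly; no Functions host or container required
            CalculatePortfolioAllocation.Run(depositRequest, fakeAllocator, NullLogger.Instance);

            Assert.True(fakeAllocator.CalculateWasCalled);
        }

        private class FakeInvestementAllocator : IInvestementAllocator
        {
            public bool CalculateWasCalled { get; private set; }

            public InvestementAllocation Calculate(decimal amount, string investor)
            {
                CalculateWasCalled = true;
                return new InvestementAllocation();
            }
        }
    }
}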

The library also supports features such as named dependencies and multiple DI configurations; to read more, check out the project on GitHub.


Unit Testing C# File Access Code with System.IO.Abstractions

It can be difficult to write unit tests for code that accesses the file system.

It’s possible to write integration tests that read in an actual file from the file system, do some processing, and check the resultant output file (or result) for correctness. There are a number of potential problems with these types of integration tests, including the potential for them to run more slowly (real I/O access overheads), additional test file management/setup code, etc. (This does not mean that some integration tests wouldn’t be useful, however.)

The System.IO.Abstractions NuGet package can help to make file access code more testable. This package provides a layer of abstraction over the file system that is API-compatible with existing code.

Take the following code as an example:

using System.IO;
namespace ConsoleApp1
{
    public class FileProcessorNotTestable
    {
        public void ConvertFirstLineToUpper(string inputFilePath)
        {
            string outputFilePath = Path.ChangeExtension(inputFilePath, ".out.txt");

            using (StreamReader inputReader = File.OpenText(inputFilePath))
            using (StreamWriter outputWriter = File.CreateText(outputFilePath))
            {
                bool isFirstLine = true;

                while (!inputReader.EndOfStream)
                {
                    string line = inputReader.ReadLine();

                    if (isFirstLine)
                    {
                        line = line.ToUpperInvariant();
                        isFirstLine = false;
                    }

                    outputWriter.WriteLine(line);
                }
            }
        }
    }
}

The preceding code opens a text file and writes it to a new output file, with the first line converted to uppercase.

This class is not easy to unit test, however; it is tightly coupled to the physical file system by the calls to File.OpenText and File.CreateText.

Once the System.IO.Abstractions NuGet package is installed, the class can be refactored as follows:

using System.IO;
using System.IO.Abstractions;

namespace ConsoleApp1
{
    public class FileProcessorTestable
    {
        private readonly IFileSystem _fileSystem;

        public FileProcessorTestable() : this (new FileSystem()) {}

        public FileProcessorTestable(IFileSystem fileSystem)
        {
            _fileSystem = fileSystem;
        }

        public void ConvertFirstLineToUpper(string inputFilePath)
        {
            string outputFilePath = Path.ChangeExtension(inputFilePath, ".out.txt");

            using (StreamReader inputReader = _fileSystem.File.OpenText(inputFilePath))
            using (StreamWriter outputWriter = _fileSystem.File.CreateText(outputFilePath))
            {
                bool isFirstLine = true;

                while (!inputReader.EndOfStream)
                {
                    string line = inputReader.ReadLine();

                    if (isFirstLine)
                    {
                        line = line.ToUpperInvariant();
                        isFirstLine = false;
                    }

                    outputWriter.WriteLine(line);
                }
            }
        }
    }
}

The key thing to notice in the preceding code is the ability to pass in an IFileSystem as a constructor parameter. The calls to File.OpenText and File.CreateText are now redirected to _fileSystem.File.OpenText and _fileSystem.File.CreateText respectively.

If the parameterless constructor is used (e.g. in production at runtime) an instance of FileSystem will be used, however at test time, a mock IFileSystem can be supplied.

Handily, the System.IO.Abstractions.TestingHelpers NuGet package provides a pre-built mock file system that can be used in unit tests, as the following simple test demonstrates:

using System.IO.Abstractions.TestingHelpers;
using Xunit;

namespace XUnitTestProject1
{
    public class FileProcessorTestableShould
    {
        [Fact]
        public void ConvertFirstLine()
        {
            var mockFileSystem = new MockFileSystem();

            var mockInputFile = new MockFileData("line1\nline2\nline3");

            mockFileSystem.AddFile(@"C:\temp\in.txt", mockInputFile);

            var sut = new FileProcessorTestable(mockFileSystem);
            sut.ConvertFirstLineToUpper(@"C:\temp\in.txt");

            MockFileData mockOutputFile = mockFileSystem.GetFile(@"C:\temp\in.out.txt");

            string[] outputLines = mockOutputFile.TextContents.SplitLines();

            Assert.Equal("LINE1", outputLines[0]);
            Assert.Equal("line2", outputLines[1]);
            Assert.Equal("line3", outputLines[2]);
        }
    }
}

To see this in action or to learn more about file access, check out my Working with Files and Streams in C# Pluralsight course.


Testing for Thrown Exceptions in MSTest V2

In previous posts we looked at testing for thrown exceptions in xUnit.net and NUnit. In this post we’re going to see how to test in MSTest V2.

As with the previous posts, the class being tested is as follows:

public class TemperatureSensor
{
    bool _isInitialized;

    public void Initialize()
    {
        // Initialize hardware interface
        _isInitialized = true;
    }

    public int ReadCurrentTemperature()
    {
        if (!_isInitialized)
        {
            throw new InvalidOperationException("Cannot read temperature before initializing.");
        }

        // Read hardware temp
        return 42; // Simulate for demo code purposes
    }
}

And the first test to check the normal execution:

[TestMethod]
public void ReadTemperature()
{
    var sut = new TemperatureSensor();

    sut.Initialize();

    var temperature = sut.ReadCurrentTemperature();

    Assert.AreEqual(42, temperature);
}

Next, a test can be written to check that the expected exception is thrown:

[TestMethod]
public void ErrorIfReadingBeforeInitialized()
{
    var sut = new TemperatureSensor();

    Assert.ThrowsException<InvalidOperationException>(() => sut.ReadCurrentTemperature());
}

The preceding code uses the Assert.ThrowsException method. This method takes the type of the expected exception as the generic type parameter (in this case InvalidOperationException). As the method parameter, an action/function can be specified – this is the code that is expected to cause the exception to be thrown.

The thrown exception can also be captured if you need to test the exception property values:

[TestMethod]
public void ErrorIfReadingBeforeInitialized_CaptureExDemo()
{
    var sut = new TemperatureSensor();

    var ex = Assert.ThrowsException<InvalidOperationException>(() => sut.ReadCurrentTemperature());

    Assert.AreEqual("Cannot read temperature before initializing.", ex.Message);
}

To learn more about using exceptions to handle errors in C#, check out my Error Handling in C# with Exceptions Pluralsight course, or to learn more about MSTest V2 check out my Automated Testing with MSTest V2 Pluralsight course.


Testing for Thrown Exceptions in NUnit

In a previous post, testing for thrown exceptions using xUnit.net was demonstrated. In this post we’ll see how to do the same with NUnit.

Once again the class being tested is as follows:

public class TemperatureSensor
{
    bool _isInitialized;

    public void Initialize()
    {
        // Initialize hardware interface
        _isInitialized = true;
    }

    public int ReadCurrentTemperature()
    {
        if (!_isInitialized)
        {
            throw new InvalidOperationException("Cannot read temperature before initializing.");
        }

        // Read hardware temp
        return 42; // Simulate for demo code purposes
    }
}

The first test can be to test the happy path:

[Test]
public void ReadTemperature()
{
    var sut = new TemperatureSensor();

    sut.Initialize();

    var temperature = sut.ReadCurrentTemperature();

    Assert.AreEqual(42, temperature);
}

Next, a test can be written to check that the expected exception is thrown:

[Test]
public void ErrorIfReadingBeforeInitialized()
{
    var sut = new TemperatureSensor();

    Assert.Throws<InvalidOperationException>(() => sut.ReadCurrentTemperature());
}

Notice in the preceding code that any InvalidOperationException thrown will pass the test. To ensure that the thrown exception is correct, it can be captured and further asserts performed against it:

[Test]
public void ErrorIfReadingBeforeInitialized_CaptureExDemo()
{
    var sut = new TemperatureSensor();

    var ex = Assert.Throws<InvalidOperationException>(() => sut.ReadCurrentTemperature());

    Assert.AreEqual("Cannot read temperature before initializing.", ex.Message);
    // or:
    Assert.That(ex.Message, Is.EqualTo("Cannot read temperature before initializing."));
}

There are also other ways to assert against expected exceptions, such as the following:

Assert.Throws(Is.TypeOf<InvalidOperationException>()
                .And.Message.EqualTo("Cannot read temperature before initializing."), 
              () => sut.ReadCurrentTemperature());

There’s some personal preference involved when choosing a style; for example, the preceding code could be considered more verbose by some, and may muddle the distinction between the Act and Assert phases of a test.

To learn more about using exceptions to handle errors in C#, check out my Error Handling in C# with Exceptions Pluralsight course.


Testing for Thrown Exceptions in xUnit.net

When writing tests it is sometimes useful to check that the correct exceptions are thrown at the expected time.

When using xUnit.net there are a number of ways to accomplish this.

As an example consider the following simple class:

public class TemperatureSensor
{
    bool _isInitialized;

    public void Initialize()
    {
        // Initialize hardware interface
        _isInitialized = true;
    }

    public int ReadCurrentTemperature()
    {
        if (!_isInitialized)
        {
            throw new InvalidOperationException("Cannot read temperature before initializing.");
        }

        // Read hardware temp
        return 42; // Simulate for demo code purposes
    }        
}

The first test we could write against the preceding class is to check the “happy path”:

[Fact]
public void ReadTemperature()
{
    var sut = new TemperatureSensor();

    sut.Initialize();

    var temperature = sut.ReadCurrentTemperature();

    Assert.Equal(42, temperature);
}

Next a test could be written to check that if the temperature is read before initializing the sensor, an exception of type InvalidOperationException is thrown. To do this the xUnit.net Assert.Throws method can be used. When using this method the generic type parameter indicates the type of expected exception and the method parameter takes an action that should cause this exception to be thrown, for example:

[Fact]
public void ErrorIfReadingBeforeInitialized()
{
    var sut = new TemperatureSensor();

    Assert.Throws<InvalidOperationException>(() => sut.ReadCurrentTemperature());
}

In the preceding test, if an InvalidOperationException is not thrown when the ReadCurrentTemperature method is called the test will fail.

The thrown exception can also be captured in a variable to make further asserts against the exception property values, for example:

[Fact]
public void ErrorIfReadingBeforeInitialized_CaptureExDemo()
{
    var sut = new TemperatureSensor();

    var ex = Assert.Throws<InvalidOperationException>(() => sut.ReadCurrentTemperature());

    Assert.Equal("Cannot read temperature before initializing.", ex.Message);
}

The Assert.Throws method expects the exact type of exception and not derived exceptions. In the case where you want to also allow derived exceptions, the Assert.ThrowsAny method can be used.
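
For example, the following variation (a brief sketch following the same pattern as the tests above) would also pass if an exception derived from InvalidOperationException were thrown:

[Fact]
public void ErrorIfReadingBeforeInitialized_AllowDerivedExceptions()
{
    var sut = new TemperatureSensor();

    // Passes for InvalidOperationException or any exception derived from it
    Assert.ThrowsAny<InvalidOperationException>(() => sut.ReadCurrentTemperature());
}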

Similar exception testing features also exist in MSTest and NUnit frameworks.

To learn more about using exceptions to handle errors in C#, check out my Error Handling in C# with Exceptions Pluralsight course.


Customizing C# Object Member Display During Debugging

In a previous post I wrote about Customising the Appearance of Debug Information in Visual Studio with the DebuggerDisplay Attribute. In addition to controlling the high-level debugger appearance of an object, we can also exert a lot more control over how the object appears in the debugger by using the DebuggerTypeProxy attribute.

For example, suppose we have the following (somewhat arbitrary) class:

class DataTransfer
{
    public string Name { get; set; }
    public string ValueInHex { get; set; }
}

By default, in the debugger it would look like the following:

Default Debugger View

To customize the display of the object members, the DebuggerTypeProxy attribute can be applied.

The first step is to create a class to act as a display proxy. This class takes the original object as part of the constructor and then exposes the custom view via public properties.

For example, suppose that we wanted a decimal display of the hex number that is originally stored in a string property of the DataTransfer object:

class DataTransferDebugView
{
    private readonly DataTransfer _data;

    public DataTransferDebugView(DataTransfer data)
    {
        _data = data;
    }

    public string NameUpper => _data.Name.ToUpperInvariant();
    public string ValueDecimal
    {
        get
        {
            bool isValidHex = int.TryParse(_data.ValueInHex, System.Globalization.NumberStyles.HexNumber, null, out var value);

            if (isValidHex)
            {
                return value.ToString();
            }

            return "INVALID HEX STRING";
        }
    }
}

Once this view object is defined, it can be selected by decorating the DataTransfer class with the DebuggerTypeProxy attribute as follows:

[DebuggerTypeProxy(typeof(DataTransferDebugView))]
class DataTransfer
{
    public string Name { get; set; }
    public string ValueInHex { get; set; }
}

Now in the debugger, the following can be seen:

Custom debug view showing hex value as a decimal

Also notice in the preceding image, that the original object view is available by expanding the Raw View section.

To learn more about C# attributes and even how to create your own custom ones, check out my C# Attributes: Power and Flexibility for Your Code course at Pluralsight.


Prevent Secrets From Accidentally Being Committed to Source Control in ASP.NET Core Apps

One problem when dealing with developer “secrets” in development is accidentally checking them into source control. These secrets could be connection strings to dev resources, user IDs, product keys, etc.

To help prevent this from accidentally happening, the secrets can be stored outside of the project tree/source control repository. This means that when the code is checked in, there will be no secrets in the repository.

Each developer will have their secrets stored outside of the project code. When the app is run, these secrets can be retrieved at runtime from outside the project structure.

One way to accomplish this in ASP.NET Core projects is to make use of the Microsoft.Extensions.SecretManager.Tools NuGet package, which enables the user secrets command line tool. (Also, if you are targeting .NET Core 1.x, install the Microsoft.Extensions.Configuration.UserSecrets NuGet package.)

Setting Up User Secrets

After creating a new ASP.NET Core project, add a tools reference to the NuGet package to the project; this will add the following item to the project file:

<DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />

Build the project, then right-click the project and you will see a new item called “Manage User Secrets”, as the following screenshot shows:

Managing user secrets in Visual Studio

Clicking this menu item will open a secrets.json file and also add an element named UserSecretsId to the project file. The content of this element is a GUID; the GUID is arbitrary but should be unique for each and every project.

<UserSecretsId>c83d8f04-8dba-4be4-8635-b5364f54e444</UserSecretsId>

User secrets will be stored in the secrets.json file which will be in %APPDATA%\Microsoft\UserSecrets\<user_secrets_id>\secrets.json on Windows or ~/.microsoft/usersecrets/<user_secrets_id>/secrets.json on Linux and macOS. Notice these paths contain the user_secrets_id that matches the GUID in the project file. In this way each project has a separate set of user secrets.

The secrets.json file contains key-value pairs.

Managing User Secrets

User secrets can be added by editing the json file or by using the command line (from the project directory).

To list user secrets, type: dotnet user-secrets list. At the moment this will return “No secrets configured for this application.”

To set (add) a secret: dotnet user-secrets set "Id" "42"

The secrets.json file now contains the following:

{
  "Id": "42"
}

Other dotnet user-secrets commands include:

  • clear - Deletes all the application secrets
  • list - Lists all the application secrets
  • remove - Removes the specified user secret
  • set - Sets the user secret to the specified value

Accessing User Secrets in Code

To retrieve user secrets, in the Startup class, access the item by key, for example:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    var secretId = Configuration["Id"]; // returns 42
}
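
For reference, the Configuration property used above is the application’s IConfiguration instance. In ASP.NET Core 2.x, WebHost.CreateDefaultBuilder adds the user secrets configuration provider automatically when the app runs in the Development environment, so no extra wiring is needed. A minimal sketch of the surrounding Startup class:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        var secretId = Configuration["Id"]; // "42" once the secret has been set as above
    }
}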

One thing to bear in mind is that the secrets are not encrypted in the secrets.json file; as the documentation states: “The Secret Manager tool doesn't encrypt the stored secrets and shouldn't be treated as a trusted store. It's for development purposes only. The keys and values are stored in a JSON configuration file in the user profile directory.” and “You can store and protect Azure test and production secrets with the Azure Key Vault configuration provider.”

There’s a lot more information in the documentation and if you plan to use this tool you should read through it.
