Running ASP.NET Core Apps on a Synology NAS with Docker

Now I’ve got the Synology NAS up and running, I thought it would be interesting to see what the Docker support is like. You can essentially run Docker container instances on the NAS box which also means you can deploy your own custom .NET Core apps to the Synology box.

This post is organized into 3 parts:

  1. Creating and testing a Docker-enabled ASP.NET Core app locally
  2. Deploying the app to the Synology NAS via Docker Hub
  3. Deploying the app directly to the NAS (building the image on the NAS itself)

Part 1: Creating and Testing a Docker ASP.NET Core App Locally

There are a few things to set up before you can deploy and test Docker containers locally.

The first is to enable Hyper-V in Windows, which is a prerequisite of Docker Desktop for Windows:

Installing Windows Hyper-V Feature
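
If you prefer the command line to the Windows Features dialog, the Hyper-V feature can also be enabled from an elevated PowerShell prompt (the same restart requirement applies):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All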

Once you’ve enabled Hyper-V (a restart will probably be required) you can go and download and install Docker Desktop for Windows – this will allow you to enable Docker support when you create the project in Visual Studio.

Once Docker Desktop is installed you can confirm it's running with PowerShell:

PS C:\Users\Admin> docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:23:10 2020
 OS/Arch:           windows/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:16 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
PS C:\Users\Admin>

Now you can fire up Visual Studio and create a new ASP.NET Core web application and tick the Enable Docker Support checkbox:

Creating an ASP.NET Core Web App with Docker Support
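
Ticking the checkbox adds a Dockerfile to the project. The exact contents vary by Visual Studio version, but for an ASP.NET Core 3.1 app it will look something like this multi-stage Dockerfile:

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["WebApplication1.csproj", "./"]
RUN dotnet restore "WebApplication1.csproj"
COPY . .
RUN dotnet build "WebApplication1.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]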

Once the project is created, you can click the Run button in Visual Studio (it should say “Docker” next to it).

Checking the Output window for Container Tools, you should see something like:

========== Checking for Container Prerequisites ==========
Verifying that Docker Desktop is installed...
Docker Desktop is installed.
========== Verifying that Docker Desktop is running... ==========
Verifying that Docker Desktop is running...
Docker Desktop is running.
========== Verifying Docker OS ==========
Verifying that Docker Desktop's operating system mode matches the project's target operating system...
Docker Desktop's operating system mode matches the project's target operating system.
========== Pulling Required Images ==========
Checking for missing Docker images...
Pulling Docker images. To cancel this download, close the command prompt window.
docker pull mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim

After a while the build might fail with the following error: Error    CTC1001    Volume sharing is not enabled. On the Settings screen in Docker Desktop, click Shared Drives, and select the drive(s) containing your project files.  

To fix this, open the Docker Desktop UI, find the File Sharing section, and enable the C: drive to make it available to Docker:

Enabling File Sharing in Docker Desktop

Once this change is applied and Docker Desktop has restarted, click the Start button again in Visual Studio. After accepting the dialog boxes relating to the firewall and the local development certificate, the web app should start up and run successfully, and Docker Desktop should show the web app container running:

ASP.NET Core app running in Docker Desktop for Windows

Now that you have a Docker-enabled .NET Core web app and have tested it locally, you can deploy it to the Synology NAS.

Part 2: Deploying an ASP.NET Core Docker App To a Synology NAS Via Docker Hub (AKA There And Back Again – a Docker Hub Tale)

Docker Hub is a place (a “registry”) where you can store and manage Docker images. These images can then be pulled (downloaded) by a Docker host and a container started from the image.

Visual Studio has built-in support for pushing an image to Docker Hub and the Synology Docker app has the ability to pull images from Docker Hub. Images on Docker Hub can be public or private (depending on what plan you are using).

Once you’ve created a Docker Hub account, in Visual Studio go to the Build menu and choose Publish WebApplication1 (or whatever the name of your project is) and click Start. You will need to choose a publish target of Container Registry and choose Docker Hub:

Choosing Docker Hub as a publish target in Visual Studio

Click Create Profile - you'll need to supply your Docker Hub user name and password and click Save.

You can now click the Publish button and wait for a little while:

Publishing an ASP.NET Core web app to Docker Hub

You should see the app being pushed to Docker Hub:

Pushing to Docker Hub

Once the publish is complete you can head over to Docker Hub and you should see your image:

Docker Hub image

Now the image is in Docker Hub, you can enable Docker support on the Synology NAS, pull the image from Docker Hub, and start a container on the NAS.

First log into the Synology as an admin account and open the Package Center. Here you can search for “Docker” and install the Docker app:

Installing Docker support on a Synology NAS

Once you've installed the Docker app, open it, head to the Image section, click the Add button and choose Add From Url. Now head over to Docker Hub and copy the URL for your image; it will look something like this: https://hub.docker.com/r/jrdontcodetired/webapplication1:

Pulling an image from Docker Hub to a Synology NAS

Click Add and the image will be downloaded from Docker Hub to the NAS.
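
If you prefer the command line (SSH access to the NAS is covered in Part 3), the equivalent pull would be something like:

docker pull jrdontcodetired/webapplication1:latest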

Once the image has downloaded, click on it and click the Launch button. This will enable you to start a container instance from the image.

You'll need to click on Advanced Settings and go to the Port Settings tab. In the Dockerfile created by Visual Studio, the image is set to use port 80, so we need to map a port on the NAS to port 80 in the container. For example you could set up port 7500 on the NAS itself to map traffic to port 80 in the container:

Mapping Synology port to docker container port
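
This GUI setting is the Synology equivalent of the -p host:container option to docker run; the mapping above corresponds to something like:

docker run -p 7500:80 jrdontcodetired/webapplication1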

Click Apply and then Next. You will be given a summary of the settings (make sure the “Run this container after the wizard is finished” box is ticked); click Apply to finish the wizard and start the container.

You should now be able to see the container running in the Container section:

Docker container running on a Synology NAS

You can now point your browser at your NAS IP and the port you chose when starting the container, for example: http://192.168.20.17:7500/

You should now see your ASP.NET Core web app being served from the Docker container on the Synology NAS:

ASP.NET Core Web App running in a Docker container on a Synology NAS

Part 3: Directly Deploying a Docker Container to a Synology NAS

The first step is to publish the web app and copy the published files to the Synology. You could also publish directly to a folder on the NAS such as: \\SYN001\Test1\DockerPublish

In Visual Studio, from the Build menu choose Publish WebApplication1. Create a new Publish Profile, this time using a Folder target, and choose a folder on the Synology:

Publish to Synology NAS folder from Visual Studio

Click Create Profile and then click Publish. Once this is finished you should see the web app files published to the Synology folder.
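
As an alternative to the Visual Studio wizard, the same folder publish can be done from the project directory with the .NET CLI (a sketch using the example share above):

dotnet publish -c Release -o \\SYN001\Test1\DockerPublish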

In the DockerPublish folder (this is an arbitrary name) on the NAS create a new Dockerfile with the following contents:

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
# Copy the published app into the image
COPY . /app
WORKDIR /app
# The container listens on port 80 (mapped to a NAS port when the container is started)
EXPOSE 80
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

Your folder on the Synology should now look something like this:

The published web app files and Dockerfile in the DockerPublish folder

The next step is to build the Docker image on the Synology NAS. To do this you can SSH into the NAS and use docker build.

First, enable SSH access on the Synology; you can do this from the Synology Control Panel in the Terminal & SNMP section – tick the Enable SSH Service box and click Apply:

Enabling SSH on a Synology NAS

Next in Windows, open a new PowerShell window and enter:

ssh Jason@192.168.20.17

Replace “Jason” with the name of one of your admin users and the IP address with the address of your Synology NAS – you will then need to enter the user’s password.

We need to SSH in as root (or set up a new user on the NAS). Be careful working as root or you could seriously mess up your NAS or introduce security problems. To get root access enter:

sudo -i

And once again enter the password.

You can now change to the folder that contains the published web app and Dockerfile:

cd /volume1/Test1/DockerPublish

And now build the image:

docker build -t manualwebapp .

This will produce the following output:

Sending build context to Docker daemon  4.706MB
Step 1/5 : FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
3.1-buster-slim: Pulling from dotnet/core/aspnet
c499e6d256d6: Pull complete
251bcd0af921: Pull complete
852994ba072a: Pull complete
f64c6405f94b: Pull complete
9347e53e1c3a: Pull complete
Digest: sha256:a9e160dbf5ed62c358f18af8c4daf0d7c0c30f203c0dd8dff94a86598c80003b
Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
 ---> c819eb4381e7
Step 2/5 : COPY . /app
 ---> 0beff55307c9
Step 3/5 : WORKDIR /app
 ---> Running in e731c0fa1d6e
Removing intermediate container e731c0fa1d6e
 ---> b64c09a9d51e
Step 4/5 : EXPOSE 80
 ---> Running in 6fddd1f77f4e
Removing intermediate container 6fddd1f77f4e
 ---> 9aa4035379dc
Step 5/5 : ENTRYPOINT ["dotnet", "WebApplication1.dll"]
 ---> Running in 4f0b086e44d3
Removing intermediate container 4f0b086e44d3
 ---> ead6395bf486
Successfully built ead6395bf486
Successfully tagged manualwebapp:latest

If you now head to the Docker app on the Synology you will see the manualwebapp image:

Docker build image on Synology NAS

You can start a container from this image using the Synology GUI as we did before, or from the SSH session – we can start it with the following command (notice we're mapping port 7501 on the NAS to port 80 in the container):

docker run --name manualtestcontainer -p 7501:80 -d manualwebapp
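
While still in the SSH session you can verify the container started correctly, for example:

docker ps --filter name=manualtestcontainer   # confirm the container is up
docker logs manualtestcontainer               # view the ASP.NET Core startup output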

Now heading back to the Synology GUI you should see a container called manualtestcontainer running:

Docker container running on Synology NAS

Now you can head to the URL in a browser (e.g. http://192.168.20.17:7501/) and see the ASP.NET Core web app running in the Docker container:

ASP.NET Core running in Docker app running on Synology NAS

Summary

The ability to run Docker containers on a NAS is really nice: not only can you develop your own apps and deploy them as containers, you can also use images from a registry such as Docker Hub – for example MySQL, the Ghost blogging engine, and so on. You should of course only use images you trust.

If you have any cool containers running on your Synology let me know in the comments!


Adding Tuple Support to .NET Classes in C#

Edit: Updated to improve clarity (thanks to Paulo in the comments for helping to improve this article).

Tuples in C# can be created with a lightweight syntax: unlike classes, you don't have to declare a tuple type before you use it.

A tuple is an object that holds a number of arbitrary data items and has no custom behaviour. In contrast, a class or struct can have both data and custom behaviour.

For example the following creates a tuple with 2 string values:

(string, string) names = ("Sarah", "Smith");
Console.WriteLine($"First name: '{names.Item1}' Last name: '{names.Item2}'");

This code produces the output: First name: 'Sarah' Last name: 'Smith'

In the preceding code, the items inside the tuple don't have names so they are referred to as Item1 and Item2, but you could also name the items, for example:

(string firstName, string lastName) names = ("Sarah", "Smith");
Console.WriteLine($"First name: '{names.firstName}' Last name: '{names.lastName}'");

If you had a rich Person class that had both data and behaviour, you could also add support for tuple-like deconstruction and unpackaging of a Person instance into variables just like you would do with a tuple instance.

Consider the following class:

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int AgeInYears { get; set; }
    public string FavoriteColor { get; set; }
    
    // methods etc.
}

We could create a tuple as before containing the first and last name as follows:

var sarah = new Person
{
    FirstName = "Sarah",
    LastName = "Smith",
    AgeInYears = 42,
    FavoriteColor = "red"
};

(string firstName, string lastName) names = (sarah.FirstName, sarah.LastName);
Console.WriteLine($"First name: '{names.firstName}' Last name: '{names.lastName}'");

This is however a little clunky; we can modify the Person class to give a Person tuple-like deconstruction and unpacking semantics. To do this, a public void method called Deconstruct can be added, for example:

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int AgeInYears { get; set; }
    public string FavoriteColor { get; set; }

    // methods etc.

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }
}

Now the code could be changed to:

var (firstName, lastName) = sarah;
Console.WriteLine($"First name: '{firstName}' Last name: '{lastName}'");

You could also add this deconstruction/unpackaging support to a class you can’t change by declaring an extension method such as:

static class PersonExtensions
{
    public static void Deconstruct(this Person person, out string firstName, out string lastName)
    {
        firstName = person.FirstName;
        lastName = person.LastName;
    }
}

Or as another example, you could add tuple-like deconstruction & unpackaging support for the .NET String type:

static class StringExtensions
{
    public static void Deconstruct(this string s, out string original, out string upper, out string lower, out int length)
    {
        original = s;
        upper = s.ToUpperInvariant();
        lower = s.ToLowerInvariant();
        length = s.Length;
    }
}

And then write:

var (original, upper, lower, length) = "The quick brown fox";
Console.WriteLine($"Original: {original}");
Console.WriteLine($"Uppercase: {upper}");
Console.WriteLine($"Lowercase: {lower}");
Console.WriteLine($"Length: {length}");

As Paulo points out in the comments, there is no actual tuple instance per se involved here; if you look at the decompiled source that Paulo links to you can see the Person has simply been unpackaged into multiple variables.
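
In other words, the compiler lowers the deconstruction into a call to the Deconstruct method with out variables, roughly equivalent to this sketch:

// What var (firstName, lastName) = sarah; roughly compiles down to:
string firstName;
string lastName;
sarah.Deconstruct(out firstName, out lastName);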

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Variables? We Don’t Need No Stinking Variables - C# Discards

C# 7.0 introduced the concept of discards. Discards are intentionally unused, temporary dummy variables whose values we don't care about and don't want to use.

For example, the following shows the result of an addition being discarded:

_ = 1 + 1;

Note the underscore (_) – this is the discard character.

Given the preceding example, you cannot access the result of this addition, for example:

WriteLine(_); // Error CS0103  The name '_' does not exist in the current context 

Using C# Discards with Out Parameters

A more useful example is when you are working with a method that has one or more out parameters and you don't care about using the output values.

As an example, consider one of the many TryParse methods in .NET such as int.TryParse. The following code shows a method that writes to the console whether or not a string can be parsed as an int:

static void ParseInt()
{
    WriteLine("Please enter an int to validate");
    string @int = ReadLine();
    bool isValidInt = int.TryParse(@int, out int parsedInt);
    
    if (isValidInt)
    {
        WriteLine($"{@int} is a valid int");
    }
    else
    {
        WriteLine($"{@int} is NOT a valid int");
    }
}

The preceding method can be written using a discard because the out int parsedInt value is never used:

static void ParseIntUsingDiscard()
{
    WriteLine("Please enter an int to validate");
    string @int = ReadLine();

    if (int.TryParse(@int, out _))
    {
        WriteLine($"{@int} is a valid int");
    }
    else
    {
        WriteLine($"{@int} is NOT a valid int");
    }
}

We could also create an expression-bodied method using a similar approach:

static bool IsInt(string @int) => int.TryParse(@int, out _);

If you have a method that returns a lot of out values such as:

private static void GenerateDefaultCity(out string name, out string nickName, out long population, out DateTime founded)
{
    name = "London";
    nickName = "The Big Smoke";
    population = 8_000_000;
    founded = new DateTime(50, 1, 1);
}

In this case you might only care about the returned population value, so you could discard all the other out values:

GenerateDefaultCity(out _, out _, out var population, out _);
WriteLine($"Population is: {population}");

Using C# Discards with Tuples

Another use for discards is where you don’t care about all the fields of a tuple. For example the following method returns a tuple containing a name and age:

static (string name, int age) GenerateDefaultPerson()
{
    return ("Amrit", 42);
}

If you only cared about the age you could write:

var (_, age) = GenerateDefaultPerson();
WriteLine($"Default person age is {age}");

Simplifying Null Checking Code with Discards

Take the following null checking code:

private static void Display(string message)
{
    if (message is null)
    {
        throw new ArgumentNullException(nameof(message));
    }
    WriteLine(message);
}

You could refactor this to make use of throw expressions:

private static void DisplayV2(string message)
{
    string checkedMessage = message ?? throw new ArgumentNullException(nameof(message));

    WriteLine(checkedMessage);
}

In the preceding version, however, the checkedMessage variable is somewhat redundant; this could be refactored to use a discard:

private static void DisplayWithDiscardNullCheck(string message)
{
    _ = message ?? throw new ArgumentNullException(nameof(message));
    
    WriteLine(message);
}

Using C# Discards with Tasks

Take the following code:

// Warning CS1998  This async method lacks 'await' operators and will run synchronously.
Task.Run(() => SayHello());

Where the SayHello method is defined as:

private static string SayHello()
{
    string greeting = "Hello there!";
    return greeting;
}

If we don't care about the return value, we can discard the result and get rid of the compiler warning:

// With discard - no compiler warning
_ = Task.Run(() => SayHello());

If there are any exceptions, however, they will be suppressed:

await Task.Run(() => throw new Exception()); // Exception thrown
_ = Task.Run(() => throw new Exception()); // Exception suppressed
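
If you still want to know about failures in a fire-and-forget task, one approach (a sketch, not the only option) is to attach a fault-only continuation that logs the exception:

// Discard the task but still observe and log any exception
_ = Task.Run(() => throw new Exception())
        .ContinueWith(t => WriteLine(t.Exception), TaskContinuationOptions.OnlyOnFaulted);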

Pattern Matching with Switch Statements and Discards

You can also use discards in switch statements:

private static void SwitchExample(object o)
{
    switch (o)
    {
        case null:
            WriteLine("o is null");
            break;
        case string s:
            WriteLine($"{s} in uppercase is {s.ToUpperInvariant()}");
            break;
        case var _:
            WriteLine($"{o.GetType()} type not supported.");
            break;
    }
}

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Simplifying Parameter Null and Other Checks with the GuardClauses Library

Often you want to add null checks and other validation code at the start of a method to ensure all the values passed in are valid before continuing.

For example the following method checks the name and age:

public static void AddNewPerson(string name, int ageInYears)
{
    if (string.IsNullOrWhiteSpace(name))
    {
        throw new ArgumentException("Cannot be null, empty, or contain only whitespace.", nameof(name));
    }

    if (ageInYears < 1)
    {
        throw new ArgumentOutOfRangeException(nameof(ageInYears), "Must be greater than zero.");
    }

    // Add to database etc.
}

This kind of “guard” code can clutter the method and reduce readability.

One library I recently came across is the Guard Clauses library from Steve Smith.

Once this library is installed we could refactor the preceding code to look like the following:

public static void AddNewPerson(string name, int ageInYears)
{
    Guard.Against.NullOrWhiteSpace(name, nameof(name));
    Guard.Against.NegativeOrZero(ageInYears, nameof(ageInYears));

    // Add to database etc.
}

Passing a null name results in the exception: System.ArgumentNullException: Value cannot be null. (Parameter 'name')

Passing an empty string results in: System.ArgumentException: Required input name was empty. (Parameter 'name')

Passing in an age of zero results in: System.ArgumentException: Required input ageInYears cannot be zero or negative. (Parameter 'ageInYears')

The code is also more readable and succinct.

Out of the box the library comes with the following guards (taken from the documentation):

  • Guard.Against.Null (throws if input is null)
  • Guard.Against.NullOrEmpty (throws if string or array input is null or empty)
  • Guard.Against.NullOrWhiteSpace (throws if string input is null, empty or whitespace)
  • Guard.Against.OutOfRange (throws if integer/DateTime/enum input is outside a provided range)
  • Guard.Against.OutOfSQLDateRange (throws if DateTime input is outside the valid range of SQL Server DateTime values)
  • Guard.Against.Zero (throws if number input is zero)

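For example, the range guard takes the inclusive lower and upper bounds after the parameter name – a quick sketch:

public static void SetAge(int ageInYears)
{
    // Throws ArgumentOutOfRangeException if ageInYears is outside 1..130
    Guard.Against.OutOfRange(ageInYears, nameof(ageInYears), 1, 130);

    // Use the validated value...
}
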
You can also define your own reusable clauses:

// Define in this namespace so it can be used alongside the built-in guards with no additional using directives required
namespace Ardalis.GuardClauses
{
    public static class PositiveGuard
    {
        public static void Positive(this IGuardClause guardClause, int input, string parameterName)
        {
            // Note: throws for zero as well as for positive values
            if (input >= 0)
            {
                throw new ArgumentOutOfRangeException(parameterName, $"Required input {parameterName} cannot be positive.");
            }
        }
        }
    }
}

And then in a method we can write:

public static void ReportNegativeTemperature(int temp)
{
    Guard.Against.Positive(temp, nameof(temp));
    // Do something
}

And if we pass a positive (or zero) temp we get: System.ArgumentOutOfRangeException: Required input temp cannot be positive. (Parameter 'temp')

This is one of those simple libraries that can make basic tasks easier/more readable.

If you check this out and use it make sure you say thanks to Steve on Twitter and let him know @robertsjason sent you ;)


Refactoring Code to Use C# Local Functions

In a previous post I talked about the potential use of local functions to replace comments. This generated some good discussion on Twitter and in the comments.

In this post I wanted to show another use of local functions to potentially improve readability.

Improving Iteration Code Readability

Consider the following simple console app:

using System;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var names = new[]{ "Sarah Smith", "Gentry Jones", "Arnold Appleview" };

            // Output names with surname first
            foreach (var name in names)
            {
                var nameParts = name.Split(" ");
                var firstName = nameParts[0];
                var lastName = nameParts[1];
                var formattedName = $"{lastName}, {firstName}";
                Console.WriteLine(formattedName);
            }

            Console.ReadLine();
        }
    }
}

The code inside the foreach loop could be considered to be at a lower level of abstraction/more detailed than the rest of the Main method.

We could take the contents of the foreach loop and refactor it into a private method in the class as follows:

using System;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var names = new[] { "Sarah Smith", "Gentry Jones", "Arnold Appleview" };
            
            foreach (var name in names)
            {
                OutputWithSurnameFirst(name);
            }

            Console.ReadLine();
        }

        private static void OutputWithSurnameFirst(string name)
        {
            var nameParts = name.Split(" ");
            var firstName = nameParts[0];
            var lastName = nameParts[1];
            var formattedName = $"{lastName}, {firstName}";
            Console.WriteLine(formattedName);
        }
    }
}

Notice the Main method is not mixing as many levels of abstraction/detail now – we have also been able to remove the comment because the method name OutputWithSurnameFirst describes what the comment used to. If we are reading the Main method, we don't have to burden our concentration with the details of how the names are output unless we want to.

This approach is fine, but it could be argued that we have “polluted” the class with a method that is only used once in the Main method. It could also be argued that the declaration of the OutputWithSurnameFirst method is not as close to its use as a local function would be.

Let's take a look next at a version of the code that instead uses a local function:

using System;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var names = new[] { "Sarah Smith", "Gentry Jones", "Arnold Appleview" };

            foreach (var name in names)
            {
                OutputWithSurnameFirst(name);
            }

            Console.ReadLine();

            static void OutputWithSurnameFirst(string name)
            {
                var nameParts = name.Split(" ");
                var firstName = nameParts[0];
                var lastName = nameParts[1];
                var formattedName = $"{lastName}, {firstName}";
                Console.WriteLine(formattedName);
            }
        }
    }
}

In the preceding example, the local function has taken the place of the class-level method; it has however made the Main method longer overall. In this example it could be argued that the previous version is more readable.

Let’s take a look at another example next.

Improving C# Lambda Code Readability with Local Functions

In some cases, a local function may improve the readability of lambda function code.

Take the following initial code:

using System;
using System.Collections.Generic;
using System.Linq;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var strings = new[] { "Hello", "31414", "2HI9" };

            foreach (var name in strings)
            {
                IEnumerable<bool> areUpperLetters = name.Select(x => char.IsLetter(x) && char.IsUpper(x));
                Console.WriteLine($"{name} upper letters = {string.Join(",", areUpperLetters)}");
            }

            Console.ReadLine();
        }
    }
}

In the preceding code, the variable named areUpperLetters gives us a clue as to what the lambda does, which is good – often a well-named variable can really improve readability. This code could be refactored to use a local function as follows:

using System;
using System.Collections.Generic;
using System.Linq;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            var strings = new[] { "Hello", "31414", "2HI9" };

            foreach (var name in strings)
            {
                IEnumerable<bool> areUpperLetters = name.Select(IsUpperCaseLetter);
                Console.WriteLine($"{name} upper letters = {string.Join(",", areUpperLetters)}");
            }

            Console.ReadLine();

            static bool IsUpperCaseLetter(char c)
            {
                return char.IsLetter(c) && char.IsUpper(c);
            }
        }
    }
}

Now the logic that was contained in the lambda has been moved to the local function called IsUpperCaseLetter.

To see local functions in action and also learn a whole heap of C# tips, check out my C# Tips and Traps Pluralsight course. You can also start watching the course with a free trial.


Using Local Functions to Replace Comments

One idea I’ve been thinking about recently is the replacement of comments with local function calls.

This doesn't mean it's OK to have massive functions with no functional cohesion, but in some circumstances replacing a comment with a well-named local function may improve readability.

Local functions were introduced in C# 7.0. They essentially allow you to write a function inside a method, property getter/setter, etc.

As an example take the following code:

public static void ProcessSensorData(string data)
{
    // HACK: occasionally a sensor hardware glitch adds extraneous $ signs
    data = data.Replace("$", "");

    string upperCaseName = data.ToUpperInvariant();
    Save(upperCaseName);
}

private static void Save(string data)
{
    // Save somewhere etc.
    Console.WriteLine(data);
}

In the preceding code there is a hack to fix broken sensors that keep adding extra $ signs.

This could be written using a local function as follows:

public static void ProcessSensorData(string data)
{
    FixExtraneousSensorData();
    string upperCaseName = data.ToUpperInvariant();
    Save(upperCaseName);

    void FixExtraneousSensorData()
    {
        data = data.Replace("$", "");
    }
}

Notice in this version, there is a local function FixExtraneousSensorData that strips out the $ signs. This function is named to try and convey the comment that we had before: “occasionally a sensor hardware glitch adds extraneous $ signs”. Also notice the local function has direct access to the variables of the method in which it's declared, in this case data.

There are other options here of course such as creating a normal non-local class-level function and passing data to it, or perhaps creating and injecting a data sanitation class as a dependency.

Replacing Arrange, Act, Assert Comments in Unit Tests

As another example consider the following test code:

[Fact]
public void HaveSanitizedFullName()
{
    // Arrange
    var p = new Person
    {
        FirstName = "    Sarah ",
        LastName = "  Smith   "
    };

    // Act
    var fullName = p.CreateFullSanitizedName();

    // Assert
    Assert.Equal("Sarah Smith", fullName);
}

Notice the comments separating the logical test phases.

Again these comments could be replaced with local functions as follows:

[Fact]
public void HaveSanitizedFullName_LocalFunctions()
{
    Person p;
    string fullName;

    Arrange();
    Act();
    AssertResults();
    
    void Arrange()
    {
        p = new Person
        {
            FirstName = "    Sarah ",
            LastName = "  Smith   "
        };
    }

    void Act()
    {
        fullName = p.CreateFullSanitizedName();
    }

    void AssertResults()
    {
        Assert.Equal("Sarah Smith", fullName);
    }
}

Although we've rid ourselves of the comments, this version of the test is a lot longer, with more lines of code, and I think is probably not as readable. Obviously the test is very simple; if you've got a lot of arrange code, for example, you could abstract just the arrange phase.

Another option in the test code to remove the comments is to make use of the most basic unit of design – white space. So for example we could remove comments and still give a clue to the various phases as follows:

[Fact]
public void HaveSanitizedFullName_WhiteSpace()
{
    var p = new Person
    {
        FirstName = "    Sarah ",
        LastName = "  Smith   "
    };


    
    var fullName = p.CreateFullSanitizedName();

    

    Assert.Equal("Sarah Smith", fullName);
}

I think the tactical use of local functions, as in the first example replacing the hack comment, may be more useful than replacing the (arguably extraneous) arrange/act/assert comments in tests.

Let me know in the comments if you think this is a good idea, a terrible idea, or something that you might use now and again.

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Microsoft Feature Toggle Feature Flag Library: A First Look

EDIT: my Feature Management Pluralsight training course is now available.

As the creator of the .NET FeatureToggle library, which has over half a million downloads on NuGet, I recently learned with some interest (thanks @OzBobWA) that Microsoft is working on a feature toggle / feature flag library.

It's in its infancy and currently in early preview on NuGet. As I understand it, the library is being developed by the Azure team and is not currently open source/available on GitHub.

Essentially the library allows you to configure whether a feature is on or off depending on a configuration setting, such as one defined in a JSON configuration file or in Azure configuration.

Whilst the development of the library appears to be currently focused on enabling feature toggling in ASP.NET Core, I thought it would be interesting to see if I could get it to work in a .NET Core console app. I hacked together the following code to demonstrate:

using System;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.FeatureManagement;

namespace ConsoleApp1
{
    class Program
    {
        static readonly IFeatureManager FeatureManager;

        static Program()
        {
            // Setup configuration to come from config file 
            IConfigurationBuilder builder = new ConfigurationBuilder();
            builder.AddJsonFile("appsettings.json");
            var configuration = builder.Build();

            // Register services (including feature management)
            var serviceCollection = new ServiceCollection();
            serviceCollection.AddSingleton<IConfiguration>(configuration);
            serviceCollection.AddFeatureManagement();

            // build the service and get IFeatureManager instance
            var serviceProvider = serviceCollection.BuildServiceProvider();
            FeatureManager = serviceProvider.GetService<IFeatureManager>();
        }

        static void Main(string[] args)
        {
            if (FeatureManager.IsEnabled("SayHello"))
            {
                Console.WriteLine("Hello World!");
            }


            Console.WriteLine("Press any key to exit...");
            Console.ReadLine();
        } 
    }
}

This code requires the following NuGets:

    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="3.0.0-preview6.19304.6" />
    <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="2.2.0" />
    <PackageReference Include="Microsoft.FeatureManagement" Version="1.0.0-preview-009000001-1251" />

The console app attempts to read from the appsettings.json file and looks in the FeatureManagement section to see whether features are enabled or not:

{
  "FeatureManagement": {
    "SayHello": true
  }
}

In the code, strings are used to evaluate whether a feature is enabled or not, e.g. the line if (FeatureManager.IsEnabled("SayHello")) looks for a configuration value called “SayHello”.

If this value is false, the Console.WriteLine("Hello World!"); will not execute; if it is true the text “Hello World!” will be output.
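
One way to reduce the risk of typos in these magic strings (my own sketch, not something the library requires) is to centralize the feature names as constants:

static class FeatureNames
{
    public const string SayHello = nameof(SayHello);
}

// ...

if (FeatureManager.IsEnabled(FeatureNames.SayHello))
{
    Console.WriteLine("Hello World!");
}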

Comparing Microsoft.FeatureManagement to FeatureToggle

When compared to the FeatureToggle library there are some interesting differences; for example, the same app created using the FeatureToggle library would look like the following:

using System;
using FeatureToggle;

namespace ConsoleApp2
{
    class SayHelloFeature : SimpleFeatureToggle { }

    class Program
    {
        static void Main(string[] args)
        {
            if (Is<SayHelloFeature>.Enabled)
            {
                Console.WriteLine("Hello World!");
            }

            Console.WriteLine("Press any key to exit...");
            Console.ReadLine();            
        }
    }
}

Notice in the preceding code that there are no magic strings to read the toggle value; instead you define a class that represents the feature and then, by convention, the name of the class is used to locate the toggle value in configuration, which looks like:

{
  "FeatureToggle": {
    "SayHelloFeature": "false"
  }
}

Overall, whilst it is still early days for the library, it is cool that Microsoft may have their own supported feature toggling library in the future, helping to bring the concept of feature toggles (as an alternative/adjunct to feature branches in source control) to “the masses”.


Accessing Cosmos DB JSON Properties in Azure Functions with Dynamic C#

This is the eighth part in a series of articles.

When working with the Cosmos DB Microsoft.Azure.Documents.Document class, if you need to get custom properties from the document you can use the GetPropertyValue method, as we saw in part six of this series and as replicated below:

[FunctionName("PizzaDriverLocationUpdated1")]
public static void RunOperation1([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionName = "PizzaDriverLocationUpdated1",
    CreateLeaseCollectionIfNotExists = true,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

In the preceding code, the Azure Function is triggered by new/updated documents, and the data that needs processing is passed to the function by way of the IReadOnlyList<Document> modifiedDrivers parameter. If you have multiple functions that work with a document type you may end up with many duplicated GetPropertyValue calls, such as repeatedly getting the driver's name with var driverName = modifiedDriver.GetPropertyValue<string>("Name"). Also notice the use of the magic string “Name” to refer to the document property to retrieve – this is not type safe and will return null at runtime if there is no property with that name.

Another option to simplify the code and remove the magic string is to use dynamic, as the following code demonstrates:

[FunctionName("PizzaDriverLocationUpdated2")]
public static void RunOperation2([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionName = "PizzaDriverLocationUpdated2",
    CreateLeaseCollectionIfNotExists = true,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<dynamic> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.Name;

            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

Notice in the preceding code that the binding has been changed from IReadOnlyList<Document> modifiedDrivers to IReadOnlyList<dynamic> modifiedDrivers and the code has changed from var driverName = modifiedDriver.GetPropertyValue<string>("Name"); to var driverName = modifiedDriver.Name;

While this removes the magic string, it is still not type safe and an incorrectly spelled property name will not error at compile time. Furthermore, if the property does not exist, rather than returning null, an exception will be thrown in your function at runtime.
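
A third option, if you want compile-time checking of property names, is to deserialize the document JSON into your own POCO – a sketch, assuming a hypothetical Driver class and using Json.NET (Document.ToString() returns the document's JSON):

// using Newtonsoft.Json;
public class Driver
{
    public string Name { get; set; }
}

// Inside the foreach over IReadOnlyList<Document> modifiedDrivers:
var driver = JsonConvert.DeserializeObject<Driver>(modifiedDriver.ToString());
var driverName = driver.Name; // property access is checked at compile time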

If you’re not familiar with dynamic C# be sure to check out my Dynamic C# Fundamentals Pluralsight course.

You can start watching with a Pluralsight free trial.


How to Schedule Cosmos DB Data Processing With Azure Functions

This is the seventh part in a series of articles.

You can perform scheduled/batch processing of Azure Cosmos DB data by making use of timer triggers in Azure Functions.

Timer triggers allow you to set up a function to execute periodically based on a set schedule.

For example, the following attribute will cause the function to be executed once per day at 22:00:

[TimerTrigger("0 0 22 * * *")]
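
The six space-separated fields of the CRON expression are {second} {minute} {hour} {day} {month} {day-of-week}; a couple more examples:

[TimerTrigger("0 */5 * * * *")]   // every 5 minutes
[TimerTrigger("0 30 9 * * 1-5")]  // 09:30 every Monday to Friday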

As an example, in this series we’ve been using the domain of pizza delivery. Suppose that once per day the manager wanted an SMS with the total sales for the day.

To accomplish this we can combine a timer trigger, a Cosmos DB input binding, and a Twilio SMS output binding. In this example, the Cosmos DB input binding is bound to an instance of DocumentClient. This allows us to perform more complex queries against Cosmos DB, as we saw in part 2 of this series.

Once we have a DocumentClient instance, we can use LINQ to query Cosmos DB:

DateTime startOfToday = DateTime.Today;
DateTime endOfToday = startOfToday.AddDays(1).AddTicks(-1); 

decimal totalValueOfTodaysOrders = 
    client.CreateDocumentQuery<Order>(ordersCollectionUri, options)
         .Where(order => order.OrderDate >= startOfToday && order.OrderDate <= endOfToday)
         .Sum(order => order.OrderTotal);

The preceding query gets the total value of orders that have today's date, where order documents look like the following:

[
    {
        "id": "3",
        "StoreId": 2,
        "OrderTotal": 10.25,
        "OrderDate": "2019-06-06T17:17:17.7251173Z",
        "_rid": "Vg08AKOQeVQBAAAAAAAAAA==",
        "_self": "dbs/Vg08AA==/colls/Vg08AKOQeVQ=/docs/Vg08AKOQeVQBAAAAAAAAAA==/",
        "_etag": "\"00000000-0000-0000-1c2d-b2a092fd01d5\"",
        "_attachments": "attachments/",
        "_ts": 1559801067
    },
    {
        "id": "4",
        "StoreId": 2,
        "OrderTotal": 10.25,
        "OrderDate": "2019-06-06T18:18:18.7251173Z",
        "_rid": "Vg08AKOQeVQCAAAAAAAAAA==",
        "_self": "dbs/Vg08AA==/colls/Vg08AKOQeVQ=/docs/Vg08AKOQeVQCAAAAAAAAAA==/",
        "_etag": "\"00000000-0000-0000-1c37-77f607ea01d5\"",
        "_attachments": "attachments/",
        "_ts": 1559805263
    },
    {
        "id": "1",
        "StoreId": 1,
        "OrderTotal": 100,
        "OrderDate": "2019-06-06T14:14:14.7251173Z",
        "_rid": "Vg08AKOQeVQBAAAAAAAACA==",
        "_self": "dbs/Vg08AA==/colls/Vg08AKOQeVQ=/docs/Vg08AKOQeVQBAAAAAAAACA==/",
        "_etag": "\"00000000-0000-0000-1c2d-a87985f501d5\"",
        "_attachments": "attachments/",
        "_ts": 1559801050
    },
    {
        "id": "2",
        "StoreId": 1,
        "OrderTotal": 25.87,
        "OrderDate": "2019-06-06T16:16:16.7251173Z",
        "_rid": "Vg08AKOQeVQCAAAAAAAACA==",
        "_self": "dbs/Vg08AA==/colls/Vg08AKOQeVQ=/docs/Vg08AKOQeVQCAAAAAAAACA==/",
        "_etag": "\"00000000-0000-0000-1c2d-aef57e8c01d5\"",
        "_attachments": "attachments/",
        "_ts": 1559801061
    }
]
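
The Order class used in the query isn't shown in this post; based on the documents above it would look something like the following sketch (the JsonProperty attribute maps the lowercase id):

// using Newtonsoft.Json;
public class Order
{
    [JsonProperty("id")]
    public string Id { get; set; }

    public int StoreId { get; set; }
    public decimal OrderTotal { get; set; }
    public DateTime OrderDate { get; set; }
}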

Once the total order value has been calculated, a Twilio SMS can be created and written to the Twilio output binding.

The complete listing is as follows:

using System;
using System.Linq;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Twilio.Rest.Api.V2010.Account;
using Twilio.Types;

namespace DontCodeTiredDemosV2.CosmosDemos
{
    public static class DailySales
    {
        [FunctionName("DailySales")]
        public static void Run(
            [TimerTrigger("0 0 22 * * *")]TimerInfo myTimer,            
            [CosmosDB(ConnectionStringSetting = "pizzaConnection")] DocumentClient client,
            [TwilioSms(AccountSidSetting = "TwilioAccountSid", AuthTokenSetting = "TwilioAuthToken", From = "%TwilioFromNumber%")
                ] out CreateMessageOptions messageToSend,
            ILogger log)
        {
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

            Uri ordersCollectionUri = UriFactory.CreateDocumentCollectionUri(databaseId: "pizza", collectionId: "orders");

            var options = new FeedOptions { EnableCrossPartitionQuery = true }; // Enable cross partition query

            DateTime startOfToday = DateTime.Today;
            DateTime endOfToday = startOfToday.AddDays(1).AddTicks(-1); 

            decimal totalValueOfTodaysOrders = 
                client.CreateDocumentQuery<Order>(ordersCollectionUri, options)
                      .Where(order => order.OrderDate >= startOfToday && order.OrderDate <= endOfToday)
                      .Sum(order => order.OrderTotal);


            var messageText = $"Total sales for today: {totalValueOfTodaysOrders}";

            log.LogInformation(messageText);

            var managersMobileNumber = new PhoneNumber(Environment.GetEnvironmentVariable("ManagersMobileNumber"));

            var mobileMessage = new CreateMessageOptions(managersMobileNumber)
            {
                Body = messageText
            };

            messageToSend = mobileMessage;
        }
    }
}

In the preceding code, notice the From = "%TwilioFromNumber%" element of the [TwilioSms] binding, the %% means that the from number will be read from configuration, e.g. local.settings.json in the development environment. Similarly, notice the phone number that the SMS is sent to is read from configuration: var managersMobileNumber = new PhoneNumber(Environment.GetEnvironmentVariable("ManagersMobileNumber"));

Now once per day at 10pm the function will run and send an SMS to the manager with the day's sales figures.

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Executing Multiple Azure Functions When Azure Cosmos DB Documents Are Created or Modified

This is the sixth part in a series of articles.

Sometimes you may want more than one Azure Function to execute when a document is changed or inserted in Cosmos DB.

You could just use one function that performs multiple logical operations on the changed document, but there are some things to consider when doing this:

  • What if the function throws an exception during the first logical operation? (Operation 2 may not be executed.)
  • What about scaling? You/Azure won't be able to scale the two logical operations independently.
  • How long will the function execute if it performs multiple operations? Will you risk function timeouts?
  • How will you monitor the operations when they are all contained in a single function?
  • How will you update the code/fix bugs? You will have to update the entire function even if the bug relates to only one operation.
  • How will you write automated tests? They will be more complex if there are multiple operations in a single function.

In some cases you may decide the preceding points don’t matter, but if they do you will need to split the operations into multiple separate Azure Functions.

As an example, the following function contains two logical operations in a single function:

[FunctionName("PizzaDriverLocationUpdated")]
public static void RunMultipleOperations([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            // Simulate running logical operation 1
            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");

            // Simulate running logical operation 2
            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

The preceding function could be separated into two separate functions, each one containing only a single logical operation:

[FunctionName("PizzaDriverLocationUpdated1")]
public static void RunOperation1([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

[FunctionName("PizzaDriverLocationUpdated2")]
public static void RunOperation2([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

If you try and run the function app (for example in the local development functions runtime) you will see errors such as the following:

Unhealthiness detected in the operation AcquireLease for localhost_...==_...=..1 Owner='626d5aec...' Continuation="49" Timestamp(local)=...
Unhealthiness detected in the operation AcquireLease for localhost_...==_...=..0 Owner='626d5aec... Continuation="586" Timestamp(local)=...

If you then make updates/inserts you may see that only one of the two functions is executed, rather than both of them. This is due to change feed leases.

Understanding Azure Cosmos DB Change Feed Leases

The Azure Functions Cosmos DB trigger knows when documents are changed/inserted by way of the Cosmos DB change feed.

At a simple level, the change feed listens for changes made in a collection and allows these changes to be passed to other processes (such as Azure Functions) to work on.

Without a way to keep track of which changes in the underlying collection have already been “fed” out, there would be no way to know which changed documents have been passed to external process(es). This is where the lease collection comes in.

The lease collection stores a “checkpoint” for an Azure Function that is using the Cosmos DB trigger. Without this checkpoint, the function would not know if it has processed changed documents or not.

When only one function exists for a Cosmos DB collection there is no problem, as only one checkpoint needs to be stored.

When more than one function exists, there needs to be a way to store different checkpoints for different functions.

One way to do this is to use lease prefixes.

Sharing a Single Lease Collection Across Multiple Azure Functions

To use a single lease collection when you have multiple Azure Functions, you can use the LeaseCollectionPrefix property of the [CosmosDBTrigger] attribute. The value for this property needs to be unique for every function, as the following code demonstrates:

[FunctionName("PizzaDriverLocationUpdated1")]
public static void RunOperation1([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionPrefix = "PizzaDriverLocationUpdated1",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

[FunctionName("PizzaDriverLocationUpdated2")]
public static void RunOperation2([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionPrefix = "PizzaDriverLocationUpdated2",
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

In the preceding code, notice LeaseCollectionPrefix = "PizzaDriverLocationUpdated1", and LeaseCollectionPrefix = "PizzaDriverLocationUpdated2".

If the function app is run now there is no startup error and changes made to a document trigger both functions:

Executing 'PizzaDriverLocationUpdated2' (Reason='New changes on collection driver at 2019-05-31T02:49:55.6946671Z', Id=b1476848-7f98-4362-a25f-69beb714c379)
Executing 'PizzaDriverLocationUpdated1' (Reason='New changes on collection driver at 2019-05-31T02:49:55.6946679Z', Id=366fa257-3b94-4d41-94f2-2777e0b8249a)
Running operation 2 for driver 1 Amrit
Running operation 1 for driver 1 Amrit
Executed 'PizzaDriverLocationUpdated2' (Succeeded, Id=b1476848-7f98-4362-a25f-69beb714c379)
Executed 'PizzaDriverLocationUpdated1' (Succeeded, Id=366fa257-3b94-4d41-94f2-2777e0b8249a)

If you check the lease collection behind the scenes notice the lease prefixes in use as the following screenshot shows:

Azure Functions Lease Prefixes

Using Multiple Azure Cosmos DB Lease Collections with Azure Functions

Rather than sharing a single lease collection, you can instead specify completely separate collections with the LeaseCollectionName property:

[FunctionName("PizzaDriverLocationUpdated1")]
public static void RunOperation1([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionName = "PizzaDriverLocationUpdated1",
    CreateLeaseCollectionIfNotExists = true,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 1 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

[FunctionName("PizzaDriverLocationUpdated2")]
public static void RunOperation2([CosmosDBTrigger(
    databaseName: "pizza",
    collectionName: "driver",
    LeaseCollectionName = "PizzaDriverLocationUpdated2",
    CreateLeaseCollectionIfNotExists = true,
    ConnectionStringSetting = "pizzaConnection")] IReadOnlyList<Document> modifiedDrivers,
    ILogger log)
{
    if (modifiedDrivers != null)
    {
        foreach (var modifiedDriver in modifiedDrivers)
        {
            var driverName = modifiedDriver.GetPropertyValue<string>("Name");

            log.LogInformation($"Running operation 2 for driver {modifiedDriver.Id} {driverName}");
        }
    }
}

Notice in the preceding code LeaseCollectionName = "PizzaDriverLocationUpdated1", and LeaseCollectionName = "PizzaDriverLocationUpdated2". Also notice CreateLeaseCollectionIfNotExists = true; as its name suggests, this will create the lease collections if they don't already exist.

Running the function app once again and changing a document will result in both functions executing.

Because there are now two separate collections being used for leases, there will be a cost associated with having both. Also be sure to read up on RUs for your lease collections; especially if you are sharing a lease collection and using lease prefixes, keep an eye on the metrics and make sure you are not getting throttled requests on your lease collection(s).

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.
