Running ASP.NET Core Apps on a Synology NAS with Docker

Now that I’ve got the Synology NAS up and running, I thought it would be interesting to see what the Docker support is like. You can essentially run Docker container instances on the NAS, which also means you can deploy your own custom .NET Core apps to the Synology box.

This post is organized into 3 parts:

  1. Creating and testing a Docker-enabled ASP.NET Core app locally
  2. Deploying the app to the Synology NAS via Docker Hub
  3. Deploying the app directly to the NAS (without Docker Hub)

Part 1: Creating and Testing a Docker ASP.NET Core App Locally

There are a few things to set up before you can deploy and test Docker containers locally.

The first is to enable Hyper-V in Windows; this is a prerequisite of Docker Desktop for Windows:

Installing Windows Hyper-V Feature

Once you’ve enabled Hyper-V (a restart will probably be required) you can download and install Docker Desktop for Windows – this will allow you to enable Docker support when you create the project in Visual Studio.

Once Docker Desktop is installed, you can check that it’s running from PowerShell:

PS C:\Users\Admin> docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:23:10 2020
 OS/Arch:           windows/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:16 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
PS C:\Users\Admin>

Now you can fire up Visual Studio and create a new ASP.NET Core web application and tick the Enable Docker Support checkbox:

Creating an ASP.NET Core Web App with Docker Support

Once the project is created, you can click the Run button in Visual Studio (it should say “Docker” next to it).

Checking the Output window for Container Tools, you should see something like:

========== Checking for Container Prerequisites ==========
Verifying that Docker Desktop is installed...
Docker Desktop is installed.
========== Verifying that Docker Desktop is running... ==========
Verifying that Docker Desktop is running...
Docker Desktop is running.
========== Verifying Docker OS ==========
Verifying that Docker Desktop's operating system mode matches the project's target operating system...
Docker Desktop's operating system mode matches the project's target operating system.
========== Pulling Required Images ==========
Checking for missing Docker images...
Pulling Docker images. To cancel this download, close the command prompt window.
docker pull mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim

After a while the build might fail with the following error: Error    CTC1001    Volume sharing is not enabled. On the Settings screen in Docker Desktop, click Shared Drives, and select the drive(s) containing your project files.  

To fix this, open the Docker Desktop UI, find the File Sharing section, and enable the C: drive (or whichever drive contains your project files) to make it available to Docker:

Enabling File Sharing in Docker Desktop

 

Once this change is applied and Docker Desktop has restarted, click the Start button again in Visual Studio. After accepting the dialog boxes about the firewall and the local development certificate, the web app should start up and run successfully, and Docker Desktop should show the web app container running:

ASP.NET Core app running in Docker Desktop for Windows

Now that you have a Docker-enabled .NET Core web app and have tested it locally, you can deploy it to the Synology NAS.

Part 2: Deploying an ASP.NET Core Docker App To a Synology NAS Via Docker Hub (AKA There And Back Again – a Docker Hub Tale)

Docker Hub is a place (a “registry”) where you can store and manage Docker images. These images can then be pulled (downloaded) by a Docker host, and a container started from the image.

Visual Studio has built-in support for pushing an image to Docker Hub and the Synology Docker app has the ability to pull images from Docker Hub. Images on Docker Hub can be public or private (depending on what plan you are using).

Once you’ve created a Docker Hub account, in Visual Studio go to the Build menu and choose Publish WebApplication1 (or whatever the name of your project is) and click Start. You will need to choose a publish target of Container Registry and choose Docker Hub:

Choosing Docker Hub as a publish target in Visual Studio

Click Create Profile – you’ll need to supply your Docker Hub user name and password and click Save.

You can now click the Publish button and wait for a little while:

Publishing an ASP.NET Core web app to Docker Hub

You should see the app being pushed to Docker Hub:

Pushing to Docker Hub

Once the publish is complete you can head over to Docker Hub and you should see your image:

Docker Hub image

Now the image is in Docker Hub, you can enable Docker support on the Synology NAS, pull the image from Docker Hub, and start a container on the NAS.

First log into the Synology as an admin account and open the Package Center. Here you can search for “Docker” and install the Docker app:

Installing Docker support on a Synology NAS

Once you’ve installed the Docker app, open it and head to the Image section, click the Add button and choose Add From Url. Now you can head over to Docker Hub and copy the URL for your image, for example it will look something like this: https://hub.docker.com/r/jrdontcodetired/webapplication1:

Pulling an image from Docker Hub to a Synology NAS

Click Add and the image will be downloaded from Docker Hub to the NAS.

Once the image has downloaded, click on it and click the Launch button. This will enable you to start a container instance from the image.

You’ll need to click on Advanced Settings and go to the Port Settings tab. In the Dockerfile in Visual Studio, the image is set to use port 80. We need to map a port on the NAS to this port 80 in the container. For example you could set up port 7500 on the NAS itself to map traffic to port 80 in the container:

Mapping Synology port to docker container port

Click Apply and then Next. You will be given a summary of the settings (make sure the “Run this container after the wizard is finished” box is ticked); click Apply to finish the wizard and start the container.

You should now be able to see the container running in the Container section:

Docker container running on a Synology NAS

You can now point your browser at your NAS IP address and the port you chose when starting the container, for example: http://192.168.20.17:7500/

You should now see your ASP.NET Core web app being served from the Docker container on the Synology NAS:

ASP.NET Core Web App running in a Docker container on a Synology NAS

Part 3: Directly Deploying a Docker Container to a Synology NAS

The first step is to publish the web app and copy the published files to the Synology. You could also publish directly to a folder on the NAS such as: \\SYN001\Test1\DockerPublish

In Visual Studio, from the Build menu choose Publish WebApplication1. Create a new publish profile, this time using a Folder target, and choose a folder on the Synology as the destination:

Publish to Synology NAS folder from Visual Studio

Click Create Profile and then click Publish. Once this is finished you should see the web app files published to the Synology folder.

In the DockerPublish folder (this is an arbitrary name) on the NAS create a new Dockerfile with the following contents:

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
COPY . /app
WORKDIR /app
EXPOSE 80
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

Your folder on the Synology should now look something like this:

The DockerPublish folder on the NAS containing the published web app files and the Dockerfile

The next step is to build the Docker image on the Synology NAS. To do this you can SSH into the NAS and use docker build.

The first step is to enable SSH access on the Synology. You can do this from the Synology Control Panel in the Terminal & SNMP section – tick the Enable SSH Service box and click Apply:

Enabling SSH on a Synology NAS

Next in Windows, open a new PowerShell window and enter:

ssh Jason@192.168.20.17

Replace “Jason” with the name of one of your admin users and the IP address with the address of your Synology NAS – you will then need to enter the user’s password.

Next we need root access (or to set up a dedicated user on the NAS). Be careful working as root or you could seriously mess up your NAS or introduce security problems. To get root access enter:

sudo -i

And once again enter the password.

You can now change to the folder that contains the published web app and Dockerfile:

cd /volume1/Test1/DockerPublish

And now build the image:

docker build -t manualwebapp .

This will produce the following output:

Sending build context to Docker daemon  4.706MB
Step 1/5 : FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
3.1-buster-slim: Pulling from dotnet/core/aspnet
c499e6d256d6: Pull complete
251bcd0af921: Pull complete
852994ba072a: Pull complete
f64c6405f94b: Pull complete
9347e53e1c3a: Pull complete
Digest: sha256:a9e160dbf5ed62c358f18af8c4daf0d7c0c30f203c0dd8dff94a86598c80003b
Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
 ---> c819eb4381e7
Step 2/5 : COPY . /app
 ---> 0beff55307c9
Step 3/5 : WORKDIR /app
 ---> Running in e731c0fa1d6e
Removing intermediate container e731c0fa1d6e
 ---> b64c09a9d51e
Step 4/5 : EXPOSE 80
 ---> Running in 6fddd1f77f4e
Removing intermediate container 6fddd1f77f4e
 ---> 9aa4035379dc
Step 5/5 : ENTRYPOINT ["dotnet", "WebApplication1.dll"]
 ---> Running in 4f0b086e44d3
Removing intermediate container 4f0b086e44d3
 ---> ead6395bf486
Successfully built ead6395bf486
Successfully tagged manualwebapp:latest

If you now head to the Docker app on the Synology you will see the manualwebapp image:

Docker build image on Synology NAS

You can start a container from this image using the Synology GUI as we did before, or from the SSH session – we can start it with the following command (notice we’re mapping port 7501 on the NAS to port 80 in the container):

docker run --name manualtestcontainer -p 7501:80 -d manualwebapp

Now heading back to the Synology GUI you should see a container called manualtestcontainer running:

Docker container running on Synology NAS

Now you can head to the URL in a browser (e.g. http://192.168.20.17:7501/) and see the ASP.NET Core web app running in the Docker container:

ASP.NET Core running in Docker app running on Synology NAS

Summary

The ability to run Docker containers on a NAS is really nice: not only can you develop your own apps and deploy them as containers, you can also use images from a registry such as Docker Hub – MySQL, the Ghost blogging engine, and so on. You should of course only use images you trust.

If you have any cool containers running on your Synology let me know in the comments!


Synology DiskStation DS1618 Plus Setup And Initial Review

Early this year I tweeted this:

After seeing this, Synology reached out to me and asked if they could give me a unit to review. The contents of this post are my opinions based purely on my experience and this article was not pre-approved or edited by Synology.

What is a NAS?

A NAS, or Network Attached Storage device, allows you to serve files over a network. A NAS can be a purpose-built piece of hardware (like the Synology unit being discussed in this article) or a server set up with specific software to make it act like a NAS.

A NAS is like a hard disk that you can access over the network (potentially by multiple users) but depending on the hardware/software it can do a lot more. For example a NAS can enable you to fit multiple individual hard disks in a RAID configuration. RAID (Redundant Array of Inexpensive Disks or Redundant Array of Independent Disks) allows you to combine multiple disks/SSDs in a number of different ways.

RAID comes in a number of flavours (“levels”) with names such as RAID 0 or RAID 10. Each RAID level has its own benefits/trade-offs in terms of the number of redundant disks, read/write speeds, and storage efficiency. For example, given 4 hard disks you could set up RAID to allow 2 disks to fail without losing data, but you will not be able to use all the disk space on all the drives for your own storage.

In summary, a NAS is a network device that exposes file storage from one or more drives and may also use RAID for some redundancy.

One thing to bear in mind is RAID is not the same as backup. RAID gives you redundancy for hardware (drive) failure. If a drive fails in a RAID array you can usually keep working and just replace the damaged drive with a new one. You should still have a good backup strategy in place that backs up the data on the NAS, for example making sure you have off-site backups in case the building where the NAS is burns down, gets flooded, etc. There are a number of ways you could do this on the Synology, such as setting up Cloud Sync to Dropbox/OneDrive/etc. or using an actual backup service that integrates with the Synology such as Synology C2, Backblaze, etc.

Setting Up a Synology DS1618+

Synology DS1618+ Box

The first thing is to decide what redundancy characteristics you want for the data you’ll be storing on the Synology. Synology have a handy RAID calculator to help you work out how many drives will give you what amount of storage for different RAID levels.

For this device I decided I wanted 2-disk redundancy. This means that even if 2 disks fail, no data will be lost (using SHR-2). SHR (Synology Hybrid RAID) and SHR-2 are RAID-like configurations that also allow mixing disks of different sizes.

I decided to start with 6 TB of accessible storage to keep initial costs low. With 2 redundant disks under SHR-2, that means a total of 4 drives, each 3 TB in size: (4 − 2) × 3 TB = 6 TB of usable space.

There are hard drives designed specifically for NAS applications that offer features such as rotational vibration sensors, making them more suitable than normal desktop hard drives. I decided for this Synology NAS I would go with 2 Western Digital Red NAS drives and 2 Seagate IronWolf NAS drives. The reason I went with 2 different brands is to minimize the chance of a manufacturing error in a single batch taking out all the drives at once. This might however be overkill.

For maximum compatibility you should choose drives that have been verified as compatible; there is a handy compatibility list you can use, though I found it almost impossible to find drives here in Australia that exactly matched the listed model numbers/firmware/etc. The compatibility list also shows drives that are explicitly incompatible. Before ordering the drives I checked that they were not explicitly marked as incompatible, even though they did not exactly match the drives on the compatible list.

Out of the box, the DS1618+ comes with a couple of network cables, a power cord, and some mounting screws for use if you are using 2.5” drives.

DS1618+ unboxing

Installing Drives

Installing the drives is pretty easy: each bay pops open and a tray slides out, into which the hard drive is inserted. For 3.5” drives no screws are required; instead the drives are held in place by a plastic strip on each side. These plastic strips were a bit fiddly and felt a bit fragile, and I was worried I was going to snap them when removing them from the tray. Once a drive is inserted and the strips are clipped back in, however, they hold the drive solidly in place and the tray can be slotted back into the NAS.

Adding a drive to the DS1618+

The drives can be secured in place using the supplied “key” to prevent the drives from accidentally being removed.

DS1618+ Setup

Once all 4 disks were installed, I connected the power cord and the network cable to the modem/router.

Back on my PC in a browser I navigated to http://find.synology.com – this then forwarded the browser to the NAS.

A wizard-like setup process leads you through the required steps to install the NAS OS, create an admin user account, and optionally enable QuickConnect – a nice feature that lets you access the management interface on your NAS over the Internet without needing to set up complex port forwarding rules.

Once the setup is complete the web interface opens.

Synology DiskStation interface

From here you can manage the NAS and install additional packages such as Dropbox/OneDrive/etc. cloud sync. This ability to install “apps” onto the NAS is a powerful feature that helps to add extra value to the NAS proposition.

Synology Package Center

At this point I had not actually set up storage or chosen a RAID level, so I wasn’t sure what to do next.

Eventually I found the Main Menu button at the top left that allowed me to open the Storage Manager app where disks are set up – it would have been nice if this was part of the initial setup wizard/guided workflow, at least for beginner users like myself.

There are 2 key concepts: Storage Pools and Volumes. This is where things started to get a little confusing for me as a first-time NAS user: I knew that I wanted a single volume using SHR-2 but was not sure how to get there.

After a few minutes looking at the documentation I understood that a storage pool is a collection of drives. There are 2 types of storage pools: “Storage pool for better performance” and “Storage pool for higher flexibility”, with the performance pool offering “better performance but less storage management flexibility” – unfortunately the doc I was looking at didn’t link to an explanation of what this “storage management flexibility” refers to.

So I decided to just go and click the create storage pool button and see what happened. The popup then told me that the “flexible” pool is the one that supports SHR, so I chose that option.

I gave the pool the not-very-original name of “MainPool” and chose SHR-2 as the RAID type. Then I proceeded through the wizard and selected the 4 drives I’d installed to be part of this pool.

Next I clicked the create new volume button and chose the storage pool I’d just created. I set the volume size to max because I only wanted 1 volume across all 4 drives. I was then offered the choice of file system: Btrfs or ext – I went with Btrfs as it was the recommended option.

Once all this was done the NAS started running a parity consistency check on the drives.

Setting Up A Share

Now that the pool and volume are up and running, it’s time to store some files!

I went to the Shared Folder Creation wizard, created a test share, and then set up read/write permissions for myself.

Heading over to Windows File Explorer and navigating to the NAS, I was prompted for credentials, which I provided for the user I created earlier:

Connecting to Synology Shared Folder in Windows

Now I can navigate to the Test1 share and create my first NAS-ed file :)

Setup Summary and First Impressions

One thing to bear in mind is that setting up a NAS is not the same as just plugging in an external USB drive. The Synology DS1618+ offers loads of configuration options, and other than a short stumble while I learned about Storage Pools and Volumes, the process was pretty painless. I can now store and retrieve files from anywhere in the house and know that even if 2 of the 4 hard disks failed I would not lose data.

At this point I have not set up any backups so I won’t be putting any critical files on the NAS yet. I’m also looking forward to setting up things such as Cloud Sync to sync Dropbox, OneDrive, etc. to the NAS – at the moment, due to the smaller SSD sizes on my machines, I’m having to use Dropbox selective sync all the time, which is a bit annoying – having my entire Dropbox account on my local network will hopefully make things a lot nicer!

I’m also looking forward to playing with the Synology Docker container support.


New Pluralsight Course: Creating Automated Browser Tests with Selenium in C#

My newest Pluralsight course was just published and you can start watching today. Selenium is a tool that allows you to automate a web browser and simulate an end-user interacting with your web app. You can combine Selenium with a test framework such as xUnit.net to create tests that check your web app is working as expected.

Automated browser tests can complement your other types of tests such as unit and integration tests.

From the course description: “Unit and integration tests can help you catch a range of bugs, but not all of them. Even if your unit and integration tests pass, you could still deploy your web app to production and find it doesn’t work as expected. In this course, Creating Automated Browser Tests with Selenium in C#, you will gain the ability to create tests that automate the browser and simulate a real person using your web app. First, you will learn how to set up your test project and write your first test. Next, you will discover how to interact with web page elements from your tests, such as clicking a button or typing text. Finally, you will explore how to create a suite of automated web tests that are easier to maintain over time. When you are finished with this course, you will have the skills and knowledge of Selenium automated browser testing needed to help ensure your web app is working as expected before you release it to production.”

Check out the course today and if you’re not a Pluralsight member you can currently start watching for free with a Pluralsight Free Trial with Unlimited Access.


Adding Tuple Support to .NET Classes in C#

Edit: Updated to improve clarity (thanks to Paulo in the comments for helping to improve this article).

Tuples in C# are objects that can be created with a lightweight, dedicated syntax. Unlike classes, for example, you don’t have to declare a tuple type first; you can create a tuple inline.

A tuple is an object that holds a number of arbitrary data items and has no custom behaviour. In contrast, a class or struct can have both data and custom behaviour.

For example the following creates a tuple with 2 string values:

(string, string) names = ("Sarah", "Smith");
Console.WriteLine($"First name: '{names.Item1}' Last name: '{names.Item2}'");

This code produces the output: First name: 'Sarah' Last name: 'Smith'

In the preceding code the items inside the tuple don’t have names, so they are referred to as Item1 and Item2, but you can also name the items, for example:

(string firstName, string lastName) names = ("Sarah", "Smith");
Console.WriteLine($"First name: '{names.firstName}' Last name: '{names.lastName}'");

If you had a rich Person class that had both data and behaviour, you could also add support for tuple-like deconstruction and unpackaging of a Person instance into variables just like you would do with a tuple instance.

Consider the following class:

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int AgeInYears { get; set; }
    public string FavoriteColor { get; set; }
    
    // methods etc.
}

We could create a tuple as before containing the first and last name as follows:

var sarah = new Person
{
    FirstName = "Sarah",
    LastName = "Smith",
    AgeInYears = 42,
    FavoriteColor = "red"
};

(string firstName, string lastName) names = (sarah.FirstName, sarah.LastName);
Console.WriteLine($"First name: '{names.firstName}' Last name: '{names.lastName}'");

This is however a little clunky; we can modify the Person class to give a Person tuple-like deconstruction and unpackaging semantics. To do this, a public void method called Deconstruct can be added, for example:

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int AgeInYears { get; set; }
    public string FavoriteColor { get; set; }

    // methods etc.

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }
}

Now the code could be changed to:

var (firstName, lastName) = sarah;
Console.WriteLine($"First name: '{firstName}' Last name: '{lastName}'");

You could also add this deconstruction/unpackaging support to a class you can’t change by declaring an extension method such as:

static class PersonExtensions
{
    public static void Deconstruct(this Person person, out string firstName, out string lastName)
    {
        firstName = person.FirstName;
        lastName = person.LastName;
    }
}
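
With the extension method in place, the call site looks exactly the same as before – a quick sketch reusing the sarah instance from earlier:

// Resolves to PersonExtensions.Deconstruct
var (firstName, lastName) = sarah;
Console.WriteLine($"First name: '{firstName}' Last name: '{lastName}'");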

Or as another example, you could add tuple-like deconstruction & unpackaging support for the .NET String type:

static class StringExtensions
{
    public static void Deconstruct(this string s, out string original, out string upper, out string lower, out int length)
    {
        original = s;
        upper = s.ToUpperInvariant();
        lower = s.ToLowerInvariant();
        length = s.Length;
    }
}

And then write:

var (original, upper, lower, length) = "The quick brown fox";
Console.WriteLine($"Original: {original}");
Console.WriteLine($"Uppercase: {upper}");
Console.WriteLine($"Lowercase: {lower}");
Console.WriteLine($"Length: {length}");

As Paulo points out in the comments, there is no actual tuple instance per se involved here; if you look at the decompiled source that Paulo links to, you can see the Person has simply been unpackaged into multiple variables.

If you want to learn a load more C# tips check out my C# Tips and Traps course today. You can even currently start watching with a Pluralsight Free Trial with Unlimited Access.


Variables? We Don’t Need No Stinking Variables - C# Discards

C# 7.0 introduced the concept of discards. Discards are intentionally unused, temporary dummy variables whose values we don’t care about and don’t want to use.

For example, the following shows the result of an addition being discarded:

_ = 1 + 1;

Note the underscore (_) – this is the discard character.

Given the preceding example, you cannot access the result of this addition, for example:

WriteLine(_); // Error CS0103  The name '_' does not exist in the current context 

Using C# Discards with Out Parameters

A more useful example is when you are working with a method that has one or more out parameters and you don’t care about the value being output.

As an example, consider one of the many TryParse methods in .NET such as int.TryParse. The following code shows a method that writes to the console whether or not a string can be parsed as an int:

static void ParseInt()
{
    WriteLine("Please enter an int to validate");
    string @int = ReadLine();
    bool isValidInt = int.TryParse(@int, out int parsedInt);
    
    if (isValidInt)
    {
        WriteLine($"{@int} is a valid int");
    }
    else
    {
        WriteLine($"{@int} is NOT a valid int");
    }
}

The preceding method can be written using a discard because the out int parsedInt value is never used:

static void ParseIntUsingDiscard()
{
    WriteLine("Please enter an int to validate");
    string @int = ReadLine();

    if (int.TryParse(@int, out _))
    {
        WriteLine($"{@int} is a valid int");
    }
    else
    {
        WriteLine($"{@int} is NOT a valid int");
    }
}

For example, we could create an expression-bodied method using a similar approach:

static bool IsInt(string @int) => int.TryParse(@int, out _);
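
A quick usage example of this helper:

WriteLine(IsInt("123")); // True
WriteLine(IsInt("abc")); // False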

If you have a method that returns a lot of out values such as:

private static void GenerateDefaultCity(out string name, out string nickName, out long population, out DateTime founded)
{
    name = "London";
    nickName = "The Big Smoke";
    population = 8_000_000;
    founded = new DateTime(50, 1, 1);
}

In this case you might only care about the returned population value so you could discard all the other out values:

GenerateDefaultCity(out _,out _, out var population, out _);
WriteLine($"Population is: {population}");

Using C# Discards with Tuples

Another use for discards is where you don’t care about all the fields of a tuple. For example the following method returns a tuple containing a name and age:

static (string name, int age) GenerateDefaultPerson()
{
    return ("Amrit", 42);
}

If you only cared about the age you could write:

var (_, age) = GenerateDefaultPerson();
WriteLine($"Default person age is {age}");

Simplifying Null Checking Code with Discards

Take the following null checking code:

private static void Display(string message)
{
    if (message is null)
    {
        throw new ArgumentNullException(nameof(message));
    }
    WriteLine(message);
}

You could refactor this to make use of throw expressions:

private static void DisplayV2(string message)
{
    string checkedMessage = message ?? throw new ArgumentNullException(nameof(message));

    WriteLine(checkedMessage);
}

In the preceding version, however, the checkedMessage variable is somewhat redundant; this could be refactored to use a discard:

private static void DisplayWithDiscardNullCheck(string message)
{
    _ = message ?? throw new ArgumentNullException(nameof(message));
    
    WriteLine(message);
}

Using C# Discards with Tasks

Take the following code:

// Warning CS1998  This async method lacks 'await' operators and will run synchronously.
Task.Run(() => SayHello());

Where the SayHello method is defined as:

private static string SayHello()
{
    string greeting = "Hello there!";
    return greeting;
}

If we don’t care about the return value and want to discard the result and get rid of the compiler warning:

// With discard - no compiler warning
_ = Task.Run(() => SayHello());

If there are any exceptions, however, they will be suppressed:

await Task.Run(() => throw new Exception()); // Exception thrown
_ = Task.Run(() => throw new Exception()); // Exception suppressed
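
If you want to keep the fire-and-forget style but not lose exceptions entirely, one option is to handle them inside the delegate – a minimal sketch where DoWork is a hypothetical method that might throw:

_ = Task.Run(() =>
{
    try
    {
        DoWork(); // hypothetical operation that might throw
    }
    catch (Exception ex)
    {
        WriteLine($"Background task failed: {ex.Message}");
    }
});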

Pattern Matching with Switch Statements and Discards

You can also use discards in switch statements:

private static void SwitchExample(object o)
{
    switch (o)
    {
        case null:
            WriteLine("o is null");
            break;
        case string s:
            WriteLine($"{s} in uppercase is {s.ToUpperInvariant()}");
            break;
        case var _:
            WriteLine($"{o.GetType()} type not supported.");
            break;
    }
}
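
As an aside, from C# 8 the same logic can be written as a switch expression, where _ acts as the catch-all discard pattern – a minimal sketch:

private static string SwitchExpressionExample(object o) =>
    o switch
    {
        null => "o is null",
        string s => $"{s} in uppercase is {s.ToUpperInvariant()}",
        _ => $"{o.GetType()} type not supported."
    };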

If you want to learn a load more C# tips check out my C# Tips and Traps course today. You can even currently start watching with a Pluralsight Free Trial with Unlimited Access.


Simplifying Parameter Null and Other Checks with the GuardClauses Library

Often you want to add null checks and other validation code at the start of a method to ensure all the values passed in are valid before continuing.

For example the following method checks the name and age:

public static void AddNewPerson(string name, int ageInYears)
{
    if (string.IsNullOrWhiteSpace(name))
    {
        throw new ArgumentException($"Cannot be null, empty, or contain only whitespace.", nameof(name));
    }

    if (ageInYears < 1)
    {
        throw new ArgumentOutOfRangeException(nameof(ageInYears), "Must be greater than zero.");
    }

    // Add to database etc.
}

This kind of “guard” code can “clutter” the method and reduce readability.

One library I recently came across is the Guard Clauses library from Steve Smith.

Once this library is installed we could refactor the preceding code to look like the following:

public static void AddNewPerson(string name, int ageInYears)
{
    Guard.Against.NullOrWhiteSpace(name, nameof(name));
    Guard.Against.NegativeOrZero(ageInYears, nameof(ageInYears));

    // Add to database etc.
}

Passing a null name results in the exception: System.ArgumentNullException: Value cannot be null. (Parameter 'name')

Passing an empty string results in: System.ArgumentException: Required input name was empty. (Parameter 'name')

Passing in an age of zero results in: System.ArgumentException: Required input ageInYears cannot be zero or negative. (Parameter 'ageInYears')

The code is also more readable and succinct.

Out of the box the library comes with the following guards (taken from the documentation):

  • Guard.Against.Null (throws if input is null)
  • Guard.Against.NullOrEmpty (throws if string or array input is null or empty)
  • Guard.Against.NullOrWhiteSpace (throws if string input is null, empty or whitespace)
  • Guard.Against.OutOfRange (throws if integer/DateTime/enum input is outside a provided range)
  • Guard.Against.OutOfSQLDateRange (throws if DateTime input is outside the valid range of SQL Server DateTime values)
  • Guard.Against.Zero (throws if number input is zero)
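
For example, a range check might look like the following sketch (the exact overloads are documented by the library; the (input, parameterName, from, to) argument order here is an assumption for illustration):

public static void SetPercentComplete(int percentComplete)
{
    // Assumed overload shape: (input, parameterName, rangeFrom, rangeTo)
    Guard.Against.OutOfRange(percentComplete, nameof(percentComplete), 0, 100);

    // Save progress etc.
}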

You can also define your own reusable clauses:

// Define in this namespace so it can be used alongside the built-in guards with no additional using directives required
namespace Ardalis.GuardClauses
{
    public static class PositiveGuard
    {
        public static void Positive(this IGuardClause guardClause, int input, string parameterName)
        {
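            // Note: input >= 0 means a zero input is rejected as well as positive values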
            if (input >= 0)
            {
                throw new ArgumentOutOfRangeException(parameterName, $"Required input {parameterName} cannot be positive.");
            }                           
        }
    }
}

And then in a method we can write:

public static void ReportNegativeTemperature(int temp)
{
    Guard.Against.Positive(temp, nameof(temp));
    // Do something
}

And if we pass a positive (or zero) temp we get: System.ArgumentOutOfRangeException: Required input temp cannot be positive. (Parameter 'temp')

This is one of those simple libraries that can make basic tasks easier/more readable.

If you check this out and use it make sure you say thanks to Steve on Twitter and let him know @robertsjason sent you ;)


Writing Azure Functions with Function Monkey: Using Commands Without Handlers

If you’ve read the previous articles on Function Monkey you may be wondering if you always need a command handler. Sometimes you may want to accept a request into the system (for example via HTTP) and then pass that request off for further processing. For example, the HTTP data can be accepted and then the data (the “command”) put on a queue for processing. This allows the function that processes queue messages to potentially scale out, improving the overall throughput of the system.

Take the following example that allows an invoice to be submitted via HTTP. The submitted invoice is validated before simply being returned from the SubmitInvoiceCommandHandler. The output of the handler gets sent to a storage queue called “invoices”. Then we have a queue storage trigger creating and handling the ProcessInvoiceCommand.

using System.Net.Http;
using System.Threading.Tasks;
using AzureFromTheTrenches.Commanding.Abstractions;
using FluentValidation;
using FunctionMonkey.Abstractions;
using FunctionMonkey.Abstractions.Builders;
using FunctionMonkey.FluentValidation;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace FunctionApp2
{
    public class SubmitInvoiceCommand : ICommand<SubmitInvoiceCommand>
    {
        public string Description { get; set; }
        public decimal Amount { get; set; }
    }

    public class SubmitInvoiceCommandValidator : AbstractValidator<SubmitInvoiceCommand>
    {
        public SubmitInvoiceCommandValidator()
        {
            RuleFor(x => x.Description).NotEmpty();
            RuleFor(x => x.Amount).GreaterThan(0);
        }
    }

    public class SubmitInvoiceCommandHandler : ICommandHandler<SubmitInvoiceCommand, SubmitInvoiceCommand>
    {
        public Task<SubmitInvoiceCommand> ExecuteAsync(SubmitInvoiceCommand command, SubmitInvoiceCommand previousResult)
        {
            // We are not actually "handling" anything here, the handler is just returning the same command
            return Task.FromResult(command);
        }
    }

    public class ProcessInvoiceCommand : ICommand
    {
        public string Description { get; set; }
        public decimal Amount { get; set; }
    }

    public class ProcessInvoiceCommandHandler : ICommandHandler<ProcessInvoiceCommand>
    {
        private readonly ILogger Log;

        public ProcessInvoiceCommandHandler(ILogger log)
        {
            Log = log;
        }

        public Task ExecuteAsync(ProcessInvoiceCommand command)
        {
            Log.LogInformation($"Processing invoice {command.Description} {command.Amount}");
            return Task.CompletedTask;
        }
    }

    public class FunctionAppConfiguration : IFunctionAppConfiguration
    {
        public void Build(IFunctionHostBuilder builder)
        {
            builder
                .Setup((serviceCollection, commandRegistry) =>
                {
                    serviceCollection.AddTransient<IValidator<SubmitInvoiceCommand>, SubmitInvoiceCommandValidator>();
                    commandRegistry.Register<SubmitInvoiceCommandHandler>();
                    commandRegistry.Register<ProcessInvoiceCommandHandler>();
                })
                .AddFluentValidation()
                .Functions(functions => functions

                    .HttpRoute("v1/SubmitInvoice", route => route
                        .HttpFunction<SubmitInvoiceCommand>(HttpMethod.Post)
                        .OutputTo.StorageQueue("invoices"))

                    .Storage(storage => storage
                        .QueueFunction<ProcessInvoiceCommand>("invoices"))                    
                );
        }
    }
}

If we POST the JSON { "Description": "NAS", "Amount": 1000 } we get the following (abridged) output:

Executing HTTP request: {"method": "POST",  "uri": "/api/v1/SubmitInvoice"}
Executing 'SubmitInvoice' 
Executed 'SubmitInvoice'
Executing 'StqFnProcessInvoice' (Reason='New queue message detected on 'invoices')
Storage queue trigger function StqFnProcessInvoice processed a request.
Processing invoice NAS 1000.0
Executed 'StqFnProcessInvoice' 

At the moment the SubmitInvoiceCommandHandler is not doing anything useful; it’s just passing the command back out so it can be output to queue storage.

With Function Monkey you can do away with the command handler in these cases.

One way to do this is to add the NoCommandHandler() option when configuring the function app in the build method. This means that the SubmitInvoiceCommandHandler class can be deleted:

using System.Net.Http;
using System.Threading.Tasks;
using AzureFromTheTrenches.Commanding.Abstractions;
using FluentValidation;
using FunctionMonkey.Abstractions;
using FunctionMonkey.Abstractions.Builders;
using FunctionMonkey.FluentValidation;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

namespace FunctionApp2
{
    public class SubmitInvoiceCommand : ICommand<SubmitInvoiceCommand>
    {
        public string Description { get; set; }
        public decimal Amount { get; set; }
    }

    public class SubmitInvoiceCommandValidator : AbstractValidator<SubmitInvoiceCommand>
    {
        public SubmitInvoiceCommandValidator()
        {
            RuleFor(x => x.Description).NotEmpty();
            RuleFor(x => x.Amount).GreaterThan(0);
        }
    }

    public class ProcessInvoiceCommand : ICommand
    {
        public string Description { get; set; }
        public decimal Amount { get; set; }
    }

    public class ProcessInvoiceCommandHandler : ICommandHandler<ProcessInvoiceCommand>
    {
        private readonly ILogger Log;

        public ProcessInvoiceCommandHandler(ILogger log)
        {
            Log = log;
        }

        public Task ExecuteAsync(ProcessInvoiceCommand command)
        {
            Log.LogInformation($"Processing invoice {command.Description} {command.Amount}");
            return Task.CompletedTask;
        }
    }

    public class FunctionAppConfiguration : IFunctionAppConfiguration
    {
        public void Build(IFunctionHostBuilder builder)
        {
            builder
                .Setup((serviceCollection, commandRegistry) =>
                {
                    serviceCollection.AddTransient<IValidator<SubmitInvoiceCommand>, SubmitInvoiceCommandValidator>();
                    commandRegistry.Register<ProcessInvoiceCommandHandler>();
                })
                .AddFluentValidation()
                .Functions(functions => functions

                    .HttpRoute("v1/SubmitInvoice", route => route
                        .HttpFunction<SubmitInvoiceCommand>(HttpMethod.Post)
                        .Options(options => options.NoCommandHandler())
                        .OutputTo.StorageQueue("invoices"))

                    .Storage(storage => storage
                        .QueueFunction<ProcessInvoiceCommand>("invoices"))                    
                );
        }
    }
}

If we submit the same JSON request we get:

Executing HTTP request: {  "method": "POST",  "uri": "/api/v1/SubmitInvoice"}
Executing 'SubmitInvoice' 
Executed 'SubmitInvoice' 
Executing 'StqFnProcessInvoice' 
Storage queue trigger function StqFnProcessInvoice processed a request.
Processing invoice NAS 1000.0
Executed 'StqFnProcessInvoice'

Even though we no longer have an explicit handler for the SubmitInvoiceCommand, the validation still takes place.

Another option is to implement the marker interface ICommandWithNoHandler on the command; then you don’t need the .NoCommandHandler() option.
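
That might look something like the following sketch (usings omitted; the marker interface comes from Function Monkey):

public class SubmitInvoiceCommand : ICommand<SubmitInvoiceCommand>, ICommandWithNoHandler
{
    public string Description { get; set; }
    public decimal Amount { get; set; }
}

With the marker interface applied, the .Options(options => options.NoCommandHandler()) call can be removed from the HTTP route definition.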



Writing Azure Functions with Function Monkey: Validation

Function Monkey is a framework to define Azure Functions in a fluent way as opposed to using binding attributes on function methods.


In addition to offering a different way to define functions, Function Monkey offers features such as validation.

Consider the following setup that generates a greeting:

using System.Net.Http;
using System.Threading.Tasks;
using AzureFromTheTrenches.Commanding.Abstractions;
using FunctionMonkey.Abstractions;
using FunctionMonkey.Abstractions.Builders;

namespace FunctionApp2
{
    public class GenerateGreetingCommand : ICommand<string>
    {
        public string Name { get; set; }
    }

    public class GenerateGreetingHandler : ICommandHandler<GenerateGreetingCommand, string>
    {
        public Task<string> ExecuteAsync(GenerateGreetingCommand command, string previousResult) => Task.FromResult($"Hello {command.Name}");
    }

    public class FunctionAppConfiguration : IFunctionAppConfiguration
    {
        public void Build(IFunctionHostBuilder builder)
        {
            builder
                .Setup((serviceCollection, commandRegistry) =>
                {
                    commandRegistry.Register<GenerateGreetingHandler>();                    
                })
                .Functions(functions => functions
                    .HttpRoute("v1/GenerateGreeting", route => route
                        .HttpFunction<GenerateGreetingCommand>(HttpMethod.Get))
                );
        }
    }  
}

If we run this and send a JSON payload of {"Name": ""} we’ll get back a response of "Hello ".

There is currently no validation on the name in the GenerateGreetingCommand.

To add validation with Function Monkey, install the additional package FunctionMonkey.FluentValidation. This will also install FluentValidation as a dependency.

To add validation to the Name property, we create a new class that inherits from AbstractValidator<T> where T is the command we want to validate, in this case the GenerateGreetingCommand:

public class GenerateGreetingCommandValidator : AbstractValidator<GenerateGreetingCommand>
{
    public GenerateGreetingCommandValidator()
    {
        RuleFor(x => x.Name).NotEmpty();
    }
}

In the constructor we use the FluentValidation syntax to define what validation to perform on the Name property of the command. In the preceding code we are saying the name cannot be empty.

Next we need to wire up this new validator by adding the call to AddFluentValidation() and also registering the validator with serviceCollection.AddTransient<IValidator<GenerateGreetingCommand>, GenerateGreetingCommandValidator>();

So the setup now looks like:

public class FunctionAppConfiguration : IFunctionAppConfiguration
{
    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Setup((serviceCollection, commandRegistry) =>
            {
                serviceCollection.AddTransient<IValidator<GenerateGreetingCommand>, GenerateGreetingCommandValidator>();
                commandRegistry.Register<GenerateGreetingHandler>();                    
            })
            .AddFluentValidation()
            .Functions(functions => functions
                .HttpRoute("v1/GenerateGreeting", route => route
                    .HttpFunction<GenerateGreetingCommand>(HttpMethod.Get))
            );
    }
}

If we run the app again and try and submit an empty name, this time we get the following response:

{
  "errors": [
    {
      "severity": 0,
      "errorCode": "NotEmptyValidator",
      "property": "Name",
      "message": "'Name' must not be empty."
    }
  ],
  "isValid": false
}

If we wanted to enforce minimum and maximum Name length:

public class GenerateGreetingCommandValidator : AbstractValidator<GenerateGreetingCommand>
{
    public GenerateGreetingCommandValidator()
    {
        RuleFor(x => x.Name).NotEmpty()
                            .MinimumLength(5)
                            .MaximumLength(10);
    }
}

Now if we try and submit a name of “Joe”:

{
  "errors": [
    {
      "severity": 0,
      "errorCode": "MinimumLengthValidator",
      "property": "Name",
      "message": "The length of 'Name' must be at least 5 characters. You entered 3 characters."
    }
  ],
  "isValid": false
}

To add some custom validation in the form of an Action:

public class GenerateGreetingCommandValidator : AbstractValidator<GenerateGreetingCommand>
{
    public GenerateGreetingCommandValidator()
    {
        RuleFor(x => x.Name).NotEmpty()
                            .MinimumLength(5)
                            .MaximumLength(10)
                            .Custom((name, context) =>
                                {
                                    if (name == "Jason")
                                    {
                                        context.AddFailure("Jason is not a valid name");
                                    }
                                });

    }
}

Submitting a name of “Jason” now results in:

{
  "errors": [
    {
      "severity": 0,
      "errorCode": null,
      "property": "Name",
      "message": "Jason is not a valid name"
    }
  ],
  "isValid": false
}

We could also go and write unit tests for the validator, for example:
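
A minimal xUnit sketch (the test name is mine) that exercises the validator directly, with no Function Monkey plumbing involved:

[Fact]
public void RejectAnEmptyName()
{
    var validator = new GenerateGreetingCommandValidator();

    var result = validator.Validate(new GenerateGreetingCommand { Name = "" });

    Assert.False(result.IsValid);
}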

The ability to define command validation could also be useful if you had multiple ways for a client to submit requests – for example, the same command (and validation) could be triggered from both HTTP and a queue. In this case you could ensure the same validation is executed regardless of the input “channel”.


Watch All My Pluralsight Courses for FREE This weekend

This weekend Pluralsight is doing a free weekend promotion (click the above banner) which means you can start watching all my courses for FREE!

Once you’ve clicked the banner and followed the instructions, head over to my list of courses and start watching.

FREE WEEKEND STARTS FEBRUARY 7th 12:00PM MT

Enjoy :)


Writing Azure Functions with Function Monkey: Dependency Injection

In my continued exploration/experimentation with Function Monkey I thought I’d look at how easy/hard it is to inject dependencies into handlers.

Previous articles: Creating Azure Functions with Function Monkey–First Look and Refactoring an Azure Functions App to use Function Monkey.

If you’ve read the previous articles you’ll know that Function Monkey uses the concept of a command to represent “something that needs doing” and a command handler to “do the thing that needs doing”.

An Azure Function trigger results in the creation of a command, that command is passed to a handler, and the handler can return a result to the caller or an output binding.

Good practice dictates good separation of concerns, so you may want to inject dependencies into your handlers, which also makes them easier to test.

Let’s start off by defining a dependency to represent the generation of a greeting:

public interface IGreetingGenerator
{
    string GenerateGreeting();
}

And we’ll create a basic implementation:

public class TimeOfDayGreetingGenerator : IGreetingGenerator
{
    public string GenerateGreeting()
    {
        var isAfternoon = DateTime.Now.Hour >= 12;

        if (isAfternoon)
        {
            return "Good afternoon";
        }

        return "Good morning";
    }
}

We could now go and write unit tests for this TimeOfDayGreetingGenerator – however, we first need a way to deterministically supply a specific date and time.

We’ll create another abstraction to represent time, so the code becomes:

public interface IGreetingGenerator
{
    string GenerateGreeting();
}

public interface ITime
{
    DateTime Now { get; }
}

public class Time : ITime
{
    public DateTime Now => DateTime.Now;
}

public class TimeOfDayGreetingGenerator : IGreetingGenerator
{
    private readonly ITime Time;

    public TimeOfDayGreetingGenerator(ITime time)
    {
        Time = time;
    }

    public string GenerateGreeting()
    {
        var isAfternoon = Time.Now.Hour >= 12;

        if (isAfternoon)
        {
            return "Good afternoon";
        }

        return "Good morning";
    }
}

And some example tests we could write:

public class TimeOfDayGreetingGeneratorShould
{
    [Fact]        
    public void GenerateMorningGreeting()
    {
        var mockTime = new Mock<ITime>();
        mockTime.Setup(x => x.Now).Returns(new DateTime(2020, 1, 1, 11, 59, 59));
        var sut = new TimeOfDayGreetingGenerator(mockTime.Object);

        var greeting = sut.GenerateGreeting();

        Assert.Equal("Good morning", greeting);
    }

    [Fact]
    public void GenerateAfternoonGreeting()
    {
        var mockTime = new Mock<ITime>();
        mockTime.Setup(x => x.Now).Returns(new DateTime(2020, 1, 1, 13, 0, 0));
        var sut = new TimeOfDayGreetingGenerator(mockTime.Object);

        var greeting = sut.GenerateGreeting();

        Assert.Equal("Good afternoon", greeting);
    }
}

The above tests are using the xUnit.net testing framework and Moq: you can learn how to use both of these by following this Pluralsight skills path that features some of my courses. You can start watching with a free trial.

Next we’ll create a command to represent the requirement to create a greeting for a person:

public class GenerateGreetingCommand : ICommand<string>
{
    public string Name { get; set; }
}

We can now create a handler for this command that also takes an IGreetingGenerator as a constructor dependency:

public class GenerateGreetingHandler : ICommandHandler<GenerateGreetingCommand, string>
{
    private readonly IGreetingGenerator GreetingGenerator;

    public GenerateGreetingHandler(IGreetingGenerator greetingGenerator)
    {
        GreetingGenerator = greetingGenerator;
    }
    public Task<string> ExecuteAsync(GenerateGreetingCommand command, string previousResult)
    {
        return Task.FromResult($"{GreetingGenerator.GenerateGreeting()} {command.Name}");
    }
}

And we can add a test:

public class GenerateGreetingHandlerShould
{
    [Fact]
    public async Task GenerateGreetingWithName()
    {
        var mockGenerator = new Mock<IGreetingGenerator>();
        mockGenerator.Setup(x => x.GenerateGreeting()).Returns("mock greeting");
        var sut = new GenerateGreetingHandler(mockGenerator.Object);
        var command = new GenerateGreetingCommand { Name = "Amrit" };

        var greeting = await sut.ExecuteAsync(command, null);

        Assert.Equal("mock greeting Amrit", greeting);
    }
}

Now that we have tested some of the moving parts, we can put them all together with Function Monkey (note there are more test cases we should write, but we’ll keep this example short):

public class FunctionAppConfiguration : IFunctionAppConfiguration
{
    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Setup((serviceCollection, commandRegistry) =>
            {
                commandRegistry.Register<GenerateGreetingHandler>();
            })
            .Functions(functions => functions
                .HttpRoute("v1/GenerateGreeting", route => route
                    .HttpFunction<GenerateGreetingCommand>(HttpMethod.Get))
            );
    }
}

If we try and run this and submit an HTTP request to the function we’ll get the following error:

Error occurred executing command GenerateGreetingCommand
AzureFromTheTrenches.Commanding: Error occurred during command execution. Microsoft.Extensions.DependencyInjection: Unable to resolve service for type 'FunctionApp2.IGreetingGenerator' while attempting to activate 'FunctionApp2.GenerateGreetingHandler'.

This is because we haven’t wired up the dependencies which we can do by adding:

serviceCollection.AddTransient<ITime, Time>();
serviceCollection.AddTransient<IGreetingGenerator, TimeOfDayGreetingGenerator>();

This makes the entire setup look like the following:

public class FunctionAppConfiguration : IFunctionAppConfiguration
{
    public void Build(IFunctionHostBuilder builder)
    {
        builder
            .Setup((serviceCollection, commandRegistry) =>
            {
                serviceCollection.AddTransient<ITime, Time>();
                serviceCollection.AddTransient<IGreetingGenerator, TimeOfDayGreetingGenerator>();
                commandRegistry.Register<GenerateGreetingHandler>();                    
            })
            .Functions(functions => functions
                .HttpRoute("v1/GenerateGreeting", route => route
                    .HttpFunction<GenerateGreetingCommand>(HttpMethod.Get))
            );
    }
}

Running the app now and executing the function with the JSON {"Name": "Sarah"} returns "Good afternoon Sarah".

It’s nice that DI is built into Function Monkey and that the registration of dependencies is pretty simple.
