C# Source Generators: Less Boilerplate Code, More Productivity

One exciting feature of the upcoming .NET 5 is Source Generators.

Source Generators, as the name suggests, generate C# source code as part of the compilation process. Code generation is not a new concept in Visual Studio and .NET – for example, T4 templates have been around for a while now and enable you to programmatically generate/transform content that can be compiled. There are also techniques such as IL weaving that tools like Fody use to manipulate the assembly produced by the compilation process.

Source Generators essentially enable you to add new code dynamically as part of the build process, for example adding new classes based on the hand-written code in the project.

One thing to note is that Source Generators are designed to add additional generated code and not modify the code you have already written.

Source Generators can examine the existing code you have written and make decisions about what new code to generate. They can also access other files to determine what to generate.

When using Source Generators the sequence looks like this: Begin Compilation –> Any Source Generators Being Used? –> Yes –> Analyse Source Code –> Generate New Source Code –> Add Generated Source Code to Compilation –> Compile Hand-Written and Generated Source Into Output Assembly.

Creating a Simple C# Source Generator

Step 1: Creating the Source Generator

The first step is to actually define the Source Generator. This is done by creating a separate project and, once it’s created, referencing it from the project you want to add generated source to.

First off you will need Visual Studio Preview and .NET 5 Preview installed.

Once installed, open VS Preview and create a new C# .NET Standard 2.0 Class Library project called “CheeseSourceGenerator”.

Once the project is created, you’ll need to modify the project file by double clicking on it. Source Generators are currently in preview so we can expect better tooling support in the final versions. Change the project file to the following:

<Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <LangVersion>preview</LangVersion>
    </PropertyGroup>
    
    <PropertyGroup>
        <RestoreAdditionalProjectSources>https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet5/nuget/v3/index.json;$(RestoreAdditionalProjectSources)</RestoreAdditionalProjectSources>
    </PropertyGroup>
    
    <ItemGroup>
        <PackageReference Include="Microsoft.CodeAnalysis.CSharp.Workspaces" Version="3.6.0-3.20207.2" PrivateAssets="all" />
        <PackageReference Include="Microsoft.CodeAnalysis.Analyzers" Version="3.0.0-beta2.final" PrivateAssets="all" />
    </ItemGroup>
</Project>

Save the project file and build the project to check there are no errors.

The next thing to do is to actually define a Source Generator. To do this, add a new class called “Generator”, implement the ISourceGenerator interface, and decorate the class with the [Generator] attribute – both of these are from the Microsoft.CodeAnalysis namespace:

using System;
using Microsoft.CodeAnalysis;


namespace CheeseSourceGenerator
{
    [Generator]
    public class Generator : ISourceGenerator
    {
        public void Execute(SourceGeneratorContext context)
        {
            throw new NotImplementedException();
        }

        public void Initialize(InitializationContext context)
        {
            throw new NotImplementedException();
        }
    }
}

The Execute method is where the actual source code generation takes place and the Initialize method allows for some more complex scenarios. In this simple example we’ll just add code to the Execute method and leave the Initialize method empty.

We’ll just add a new class to the compilation – there is no logic involved in the generation:

using System;
using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

namespace CheeseSourceGenerator
{
    [Generator]
    public class Generator : ISourceGenerator
    {
        public void Execute(SourceGeneratorContext context)
        {
            const string source = @"
namespace GeneratedCheese
{
    public class CheeseChooser
    {
        public string BestCheeseForPasta => ""Parmigiano-Reggiano"";
        public string BestCheeseForBakedPotato => ""Mature Cheddar"";
    }
}
";
            const string desiredFileName = "CheeseChooser.cs";
            
            SourceText sourceText = SourceText.From(source, Encoding.UTF8); // If no encoding is specified the SourceText is not debuggable

            // Add the "generated" source to the compilation
            context.AddSource(desiredFileName, sourceText);
        }

        public void Initialize(InitializationContext context)
        {
            // Advanced usage
        }
    }
}

Notice in the preceding code that the SourceGeneratorContext passed to the Execute method is the object that allows us to add the source to the compilation.
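
Although this example emits a fixed class and doesn’t inspect any existing code, the same context is also the gateway for examining the code being compiled. As a rough sketch (this assumes the preview SourceGeneratorContext exposes the compilation via a Compilation property, as the current design docs describe – the API may change before release), the following generator counts the syntax trees in the compilation and bakes that number into a generated constant:

using System.Linq;
using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

namespace CheeseSourceGenerator
{
    [Generator]
    public class CompilationInfoGenerator : ISourceGenerator
    {
        public void Execute(SourceGeneratorContext context)
        {
            // Examine the existing code: count the syntax trees (roughly one per .cs file) in the compilation
            int sourceFileCount = context.Compilation.SyntaxTrees.Count();

            string source = $@"
namespace GeneratedCheese
{{
    public static class CompilationInfo
    {{
        public const int SourceFileCount = {sourceFileCount};
    }}
}}
";
            context.AddSource("CompilationInfo.cs", SourceText.From(source, Encoding.UTF8));
        }

        public void Initialize(InitializationContext context)
        {
            // Nothing needed for this sketch
        }
    }
}

The consuming project could then read GeneratedCheese.CompilationInfo.SourceFileCount like any other constant.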

Build the project. At this point no source code generation has taken place, we’ve just compiled the generator into an assembly.

Step 2: Register the C# Source Generator in a Project

Add a new .NET Core Console project to the solution called “CheeseConsole”.

Once created add a project reference to the CheeseSourceGenerator project. This will allow the console app to generate source code as part of its compilation.

The project file will now look like:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <ProjectReference Include="..\Cheese\CheeseSourceGenerator.csproj" />
  </ItemGroup>

</Project>

To actually opt in to the code generation, the CheeseConsole project file needs to be modified to add <LangVersion>preview</LangVersion> and to change the reference to the generator project to be an analyser reference:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
    <LangVersion>preview</LangVersion>
  </PropertyGroup>

  <ItemGroup>
      <ProjectReference Include="..\Cheese\CheeseSourceGenerator.csproj" 
                        OutputItemType="Analyzer"
                        ReferenceOutputAssembly="false"/>
  </ItemGroup>

</Project>

If you build everything now you should see no errors.

Step 3: Use the Generated Code

In the console app Program.cs add a using directive to the namespace that was used in the source code string, namely using GeneratedCheese;

In the Main method we can now create an instance of a CheeseChooser and make use of it. Add the following code and notice that you get IntelliSense support when referencing the BestCheeseForPasta and BestCheeseForBakedPotato properties.

The Program.cs file should look like:

using System;
using GeneratedCheese;

namespace CheeseConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            var cheeseChooser = new CheeseChooser();

            Console.WriteLine($"The best cheese for pasta is: {cheeseChooser.BestCheeseForPasta}");
            Console.WriteLine($"The best cheese for potato is: {cheeseChooser.BestCheeseForBakedPotato}");

            Console.ReadLine();
        }
    }
}

If you run the console app you should see the following:

The best cheese for pasta is: Parmigiano-Reggiano
The best cheese for potato is: Mature Cheddar

This example is very simplistic but there are a number of other use cases that I’ll cover in future posts such as:

  • Augmenting existing code
  • Auto-implementing boilerplate code (such as INotifyPropertyChanged)
  • Generation from (non C#) external file
  • Generation from database contents
  • Serialization without reflection
  • etc.


Pretty Method Display in xUnit.net

One little-known feature of the xUnit.net testing framework is the ability to write test method names in a specific way and then have them converted to a ‘pretty’ version for example in Visual Studio Test Explorer.

Take the following test method:

using ClassLibrary1;
using Xunit;

namespace XUnitTestProject2
{
    public class CalculatorShould
    {
        [Fact]
        public void Add2PositiveNumbers()
        {
            var sut = new Calculator();

            sut.Add(1);
            sut.Add(1);

            Assert.Equal(2, sut.Value);
        }
    }
}

By default, this will look like the following screenshot in Visual Studio Test Explorer:

Default xUnit.net Test Method Name Display

The first thing that can be done is to simplify the test method name display so that only the test method name is shown, without the preceding namespace and class name; for example, “XUnitTestProject2.CalculatorShould.Add2PositiveNumbers” becomes simply “Add2PositiveNumbers”. This requires only a simple configuration change.

Displaying Only Test Method Names in xUnit.net Tests

To control the rendering of method names in xUnit.net, the first thing to do is add a new file called “xunit.runner.json” to the root of the test project and set its Copy To Output Directory property to Copy if newer. This will cause the file to be copied to the output bin directory. Once this is done, if you open the project file you should see something like:

<ItemGroup>
  <None Update="xunit.runner.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>

Next, modify the json file to the following:

{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "methodDisplay": "method"
}

Notice in the preceding json configuration that methodDisplay has been set to “method”; this prevents the namespace and class being prepended to the method name in Test Explorer.

Now if you head back to Test Explorer you should see the following:

Method name only display in xUnit.net tests

Enabling Pretty Method Names in xUnit.net

In addition to shortening test method name display we can also make use of xUnit.net’s “pretty method display”.

To enable this feature, modify the json configuration file and add the "methodDisplayOptions": "all" configuration as follows:

{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "methodDisplay": "method",
  "methodDisplayOptions": "all"
}

Now the previous test can be renamed to “Add_2_positive_numbers” as follows:

[Fact]
public void Add_2_positive_numbers()
{
    var sut = new Calculator();

    sut.Add(1);
    sut.Add(1);

    Assert.Equal(2, sut.Value);
}

In test explorer this test method will show up as “Add 2 positive numbers” as the following screenshot shows:

xUnit.net pretty method display names

You can use other items in the test method name. For example, the monikers eq, ne, lt, le, gt, and ge get replaced with =, !=, <, <=, >, and >= respectively, so a test name of “Have_a_value_eq_0_when_multiplied_by_zero” would be displayed as “Have a value = 0 when multiplied by zero” – here the eq has been replaced with =.
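
Written as an ordinary test, that moniker-based name might look like the following (note that the MultiplyBy method is hypothetical and not part of the earlier Calculator example):

[Fact]
public void Have_a_value_eq_0_when_multiplied_by_zero()
{
    var sut = new Calculator();

    sut.Add(2);
    sut.MultiplyBy(0); // MultiplyBy is assumed here for illustration only

    Assert.Equal(0, sut.Value);
}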

You can also use ASCII or Unicode escape sequences, for example the test name “Divide_by_U00BD” gets displayed as “Divide by ½” and the test “Email_address_should_only_contain_a_single_U0040” gets displayed as “Email address should only contain a single @”, or “The_U2211_of_1U002C_2_and_3_should_be_6” becomes “The ∑ of 1, 2 and 3 should be 6”:

xUnit Pretty methods

You could also combine these options with "methodDisplay": "classAndMethod" and nested namespaces and classes such as the following:

namespace Given_a_cleared_calculator
{
    public class when_a_number_gt_0_is_added
    {
        [Fact]
        public void then_the_value_should_be_gt_0()
        {
            // etc.
        }

        [Fact]
        public void then_the_value_should_eq_the_one_added()
        {
            // etc.
        }
    }
}

This would produce the following tests in Test Explorer:

xUnit.net Pretty Display Names

If you want to learn more about writing tests with xUnit.net check out my Pluralsight course today.


Simplifying Parameter Null and Other Checks with the Pitcher Library

In a previous post I looked at the GuardClauses library that can simplify the usual guard checks we sometimes need to write. In the comments someone mentioned the Pitcher library that accomplishes the same thing, so I thought I’d check it out here.

First, the NuGet package needs to be installed and a using Pitcher; directive added, then we can make use of the library.

As an example, without the library you might end up with some code like the following:

public static void AddNewPerson(string name, int ageInYears)
{
    if (string.IsNullOrWhiteSpace(name))
    {
        throw new ArgumentException($"Cannot be null, empty, or contain only whitespace.", nameof(name));
    }

    if (ageInYears < 1)
    {
        throw new ArgumentOutOfRangeException(nameof(ageInYears), "Must be greater than zero.");
    }

    // Add to database etc.
}

This is just some boilerplate type code to check for null/empty strings and that the age that’s passed in is positive.

With the Pitcher library this could be refactored to the following (I’ve left the GuardClauses code commented out as a comparison):

public static void AddNewPerson(string name, int ageInYears)
{
    // GuardClauses version:
    // Guard.Against.NullOrWhiteSpace(name, nameof(name));
    // Guard.Against.NegativeOrZero(ageInYears, nameof(ageInYears));


    // Pitcher version:
    Throw.ArgumentNull.WhenNullOrWhiteSpace(name, nameof(name));

    Throw.ArgumentOutOfRange.WhenLessThan(ageInYears, mustBeMoreThan:0, nameof(ageInYears));
    // or more simply:
    Throw.ArgumentOutOfRange.WhenNegativeNumber(ageInYears, nameof(ageInYears));            
}

Pitcher allows you to throw either an ArgumentOutOfRangeException or an ArgumentNullException when using this syntax and also provides ways to throw other exception types when a given condition is true, for example:

Throw.When(ageInYears == 42, new InvalidOperationException("This age has no meaning"));

You can find the source code and more examples on GitHub and don’t forget to say hi/thanks to the project maintainer Alex Kamsteeg and tell him you heard about Pitcher here :)


You Can Watch All My Pluralsight Training Videos for Free This April

No credit card needed – sign up now and start watching all my Pluralsight training courses for free.

Some suggestions:

C#

Testing Frameworks:

Testing Tools You May Not Know About:

Expand Your Software Development Horizons:

The following are some suggested courses on topics that may not be on your radar but that you may find interesting.

Skills Paths Featuring My and Other Authors’ Courses:

If you want a ready-made “curriculum” in the form of a skills path, check out the following paths that feature some of my courses and courses by fellow Pluralsight authors:

P.S. Remember to take care of yourself physically, mentally, and emotionally during these trying times.

From the Pluralsight website: "Free April is open to anyone who is not a current, active subscriber."


Running ASP.NET Core Apps on a Synology NAS with Docker

Now that I’ve got the Synology NAS up and running, I thought it would be interesting to see what the Docker support is like. You can essentially run Docker container instances on the NAS box, which also means you can deploy your own custom .NET Core apps to the Synology box.

This post is organized into 3 parts:

  1. Creating and testing a Docker-enabled ASP.NET Core app locally
  2. Deploying the app to the Synology NAS via Docker Hub
  3. Deploying the app locally to the NAS

Part 1: Creating and Testing a Docker ASP.NET Core App Locally

There are a few things to set up to allow you to deploy and test Docker containers locally.

The first is to enable Hyper-V in Windows; this is a prerequisite of Docker Desktop for Windows:

Installing Windows Hyper-V Feature

Once you’ve enabled Hyper-V (a restart will probably be required) you can go and download and install Docker Desktop for Windows – this will allow you to enable Docker support when you create the project in Visual Studio.

Once Docker Desktop is installed and running you can check it’s running with PowerShell:

PS C:\Users\Admin> docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:23:10 2020
 OS/Arch:           windows/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:16 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
PS C:\Users\Admin>

Now you can fire up Visual Studio and create a new ASP.NET Core web application and tick the Enable Docker Support checkbox:

Creating an ASP.NET Core Web App with Docker Support

Once the project is created, you can click the Run button in Visual Studio (it should say “Docker” next to it).

Checking the Output window for Container Tools you should see something like:

========== Checking for Container Prerequisites ==========
Verifying that Docker Desktop is installed...
Docker Desktop is installed.
========== Verifying that Docker Desktop is running... ==========
Verifying that Docker Desktop is running...
Docker Desktop is running.
========== Verifying Docker OS ==========
Verifying that Docker Desktop's operating system mode matches the project's target operating system...
Docker Desktop's operating system mode matches the project's target operating system.
========== Pulling Required Images ==========
Checking for missing Docker images...
Pulling Docker images. To cancel this download, close the command prompt window.
docker pull mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim

After a while the build might fail with the following error: Error    CTC1001    Volume sharing is not enabled. On the Settings screen in Docker Desktop, click Shared Drives, and select the drive(s) containing your project files.  

To fix this, open up the Docker Desktop UI, and find the File Sharing section and enable C: drive if you want to make it available to Docker – this should fix the error:

Enabling File Sharing in Docker Desktop

Once this change is applied and Docker Desktop restarted, click the Start button again in Visual Studio. After accepting the dialog boxes to do with the firewall and local certificate, the web app should start up and run successfully, and Docker Desktop should show the web app container running:

ASP.NET Core app running in Docker Desktop for Windows

Now that you have a Docker-enabled .NET Core web app and have tested it locally, you can deploy it to the Synology NAS.

Part 2: Deploying an ASP.NET Core Docker App To a Synology NAS Via Docker Hub (AKA There And Back Again – a Docker Hub Tale)

Docker Hub is a place (a “registry”) where you can store and manage Docker images. These images can then be pulled (downloaded) by a Docker host and a container started from the image.

Visual Studio has built-in support for pushing an image to Docker Hub and the Synology Docker app has the ability to pull images from Docker Hub. Images on Docker Hub can be public or private (depending on what plan you are using).

Once you’ve created a Docker Hub account, in Visual Studio go to the Build menu and choose Publish WebApplication1 (or whatever the name of your project is) and click Start. You will need to choose a publish target of Container Registry and choose Docker Hub:

Choosing Docker Hub as a publish target in Visual Studio

Click Create Profile - you’ll need to supply your Docker Hub user name and password and click Save.

You can now click the Publish button and wait for a little while:

Publishing an ASP.NET Core web app to Docker Hub

You should see the app being pushed to Docker Hub:

Pushing to Docker Hub

Once the publish is complete you can head over to Docker Hub and you should see your image:

Docker Hub image

Now the image is in Docker Hub, you can enable Docker support on the Synology NAS, pull the image from Docker Hub, and start a container on the NAS.

First log into the Synology as an admin account and open the Package Center. Here you can search for “Docker” and install the Docker app:

Installing Docker support on a Synology NAS

Once you’ve installed the Docker app, open it and head to the Image section, click the Add button and choose Add From Url. Now you can head over to Docker Hub and copy the URL for your image, for example it will look something like this: https://hub.docker.com/r/jrdontcodetired/webapplication1:

Pulling an image from Docker Hub to a Synology NAS

Click Add and the image will be downloaded from Docker Hub to the NAS.

Once the image has downloaded, click on it and click the Launch button. This will enable you to start a container instance from the image.

You’ll need to click on Advanced Settings and go to the Port Settings tab. In the Dockerfile in Visual Studio, the image is set to use port 80. We need to map a port on the NAS to this port 80 in the container. For example you could set up port 7500 on the NAS itself to map traffic to port 80 in the container:

Mapping Synology port to docker container port

Click Apply and then Next. You will be given a summary of the settings (make sure the “Run this container after the wizard is finished” box is ticked); click Apply to finish the wizard and start the container.

You should now be able to see the container running in the Container section:

Docker container running on a Synology NAS

You can now point your browser to your NAS IP and the port you chose when starting the container, for example: http://192.168.20.17:7500/

You should now see your ASP.NET Core web app being served from the Docker container on the Synology NAS:

ASP.NET Core Web App running in a Docker container on a Synology NAS

Part 3: Directly Deploying Docker Container to a Synology NAS

The first step is to publish the web app and copy the published files to the Synology. You could also publish directly to a folder on the NAS such as: \\SYN001\Test1\DockerPublish

In Visual Studio from the Build menu choose Publish WebApplication1. Create a new Publish Profile this time using a Folder Target and as the folder choose a folder on the Synology:

Publish to Synology NAS folder from Visual Studio

Click Create Profile and then click Publish. Once this is finished you should see the web app files published to the Synology folder.

In the DockerPublish folder (this is an arbitrary name) on the NAS create a new Dockerfile with the following contents:

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
COPY . /app
WORKDIR /app
EXPOSE 80
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

Your folder on the Synology should now look like something like this:

image

The next step is to build the Docker image on the Synology NAS. To do this you can SSH into the NAS and use docker build.

The first step is to enable SSH access on the Synology, you can do this from the Synology Control Panel in the Terminal & SNMP section – tick the Enable SSH Service box and click Apply:

Enabling SSH on a Synology NAS

Next in Windows, open a new PowerShell window and enter:

ssh Jason@192.168.20.17

Replace “Jason” with the name of one of your admin users and the IP address with the address of your Synology NAS – you will then need to enter the user’s password.

We need to SSH in as root (or set up a new user on the NAS). Be careful working as root or you could seriously mess your NAS up or introduce security problems. To get root access enter:

sudo -i

And once again enter the password.

You can now change to the folder that contains the published web app and Dockerfile:

cd /volume1/Test1/DockerPublish

And now build the image:

docker build -t manualwebapp .

This will produce the following output:

Sending build context to Docker daemon  4.706MB
Step 1/5 : FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
3.1-buster-slim: Pulling from dotnet/core/aspnet
c499e6d256d6: Pull complete
251bcd0af921: Pull complete
852994ba072a: Pull complete
f64c6405f94b: Pull complete
9347e53e1c3a: Pull complete
Digest: sha256:a9e160dbf5ed62c358f18af8c4daf0d7c0c30f203c0dd8dff94a86598c80003b
Status: Downloaded newer image for mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
 ---> c819eb4381e7
Step 2/5 : COPY . /app
 ---> 0beff55307c9
Step 3/5 : WORKDIR /app
 ---> Running in e731c0fa1d6e
Removing intermediate container e731c0fa1d6e
 ---> b64c09a9d51e
Step 4/5 : EXPOSE 80
 ---> Running in 6fddd1f77f4e
Removing intermediate container 6fddd1f77f4e
 ---> 9aa4035379dc
Step 5/5 : ENTRYPOINT ["dotnet", "WebApplication1.dll"]
 ---> Running in 4f0b086e44d3
Removing intermediate container 4f0b086e44d3
 ---> ead6395bf486
Successfully built ead6395bf486
Successfully tagged manualwebapp:latest

If you now head to the Docker app on the Synology you will see the manualwebapp image:

Docker build image on Synology NAS

You can start a container from this image as we did before using the Synology GUI or from the PowerShell prompt - we can start it with the following command (notice we’re mapping port 7501 on the NAS to port 80 in the container):

docker run --name manualtestcontainer -p 7501:80 -d manualwebapp

Now heading back to the Synology GUI you should see a container called manualtestcontainer running:

Docker container running on Synology NAS

Now you can head to the URL in a browser (e.g. http://192.168.20.17:7501/) and see the ASP.NET Core web app running in the Docker container:

ASP.NET Core running in Docker app running on Synology NAS

Summary

The ability to run Docker containers on a NAS is really nice. Not only can you develop your own apps and deploy them as containers, you can also use images from a registry such as Docker Hub, for example MySQL, the Ghost blogging engine, etc. You should of course only use images you trust.

If you have any cool containers running on your Synology let me know in the comments!


Synology DiskStation DS1618 Plus Setup And Initial Review

Early this year I tweeted this:

After seeing this, Synology reached out to me and asked if they could give me a unit to review. The contents of this post are my opinions based purely on my experience and this article was not pre-approved or edited by Synology.

What is a NAS?

A NAS device or Network Attached Storage device allows you to serve files over a network. A NAS can be a purpose built piece of hardware (like the Synology unit being discussed in this article) or a server set up with specific software to make it act like a NAS.

A NAS is like a hard disk that you can access over the network (potentially by multiple users) but depending on the hardware/software it can do a lot more. For example a NAS can enable you to fit multiple individual hard disks in a RAID configuration. RAID (Redundant Array of Inexpensive Disks or Redundant Array of Independent Disks) allows you to combine multiple disks/SSDs in a number of different ways.

RAID comes in a number of flavours (“levels”) with names such as RAID 0 or RAID 10. Each RAID level has its own benefits/trade-offs in terms of the number of redundant disks, read/write speeds, and storage efficiency. For example, given 4 hard disks you could set up RAID to allow 2 disks to fail without losing data, but you will not be able to use all the disk space on all the drives for your own storage.

In summary, a NAS is a network device that exposes file storage from one or more drives and may also use RAID for some redundancy.

One thing to bear in mind is that RAID is not the same as backup. RAID gives you redundancy for hardware (drive) failure. If a drive fails in a RAID array you can usually keep working and just replace the damaged drive with a new one. You should still have a good backup strategy in place that backs up the data on the NAS, for example making sure you have off-site backups in case the building where the NAS lives burns down, gets flooded, etc. There are a number of ways you could do this on the Synology, such as setting up Cloud Sync to Dropbox/Onedrive/etc. or using an actual backup service that integrates with the Synology such as Synology C2, Backblaze, etc.

Setting Up a Synology DS1618+

Synology DS1618+ Box

The first thing is to decide what redundancy characteristics you want for the data you’ll be storing on the Synology. Synology have a handy RAID calculator to help you work out how many drives will give you what amount of storage for different RAID levels.

For this device I decided I wanted 2-disk redundancy. This means that even if 2 disks fail no data will be lost (using SHR-2). SHR (Synology Hybrid RAID) and SHR-2 are RAID-like configurations that also allow mixing disks of different sizes.

I decided I wanted to start with 6 TB of accessible storage to keep initial costs low. This means that with 2 redundant disks (using SHR-2) I need a total of 4 drives, each 3 TB in size: 4 × 3 TB = 12 TB raw, minus 2 × 3 TB for redundancy, leaves 6 TB of accessible storage.

There are hard drives that are designed for NAS applications and offer features such as rotational vibration sensors that make them more suitable than normal desktop hard drives. I decided for this Synology NAS I would go with 2 Western Digital Red NAS drives and 2 Seagate IronWolf NAS drives. The reason I went with 2 different brands is to minimize the chance of a manufacturing error in a single batch failing all the drives at once. This might however be overkill.

For maximum compatibility you should choose drives that have been verified as compatible; there is a handy compatibility list you can use, though I found it almost impossible to find drives here in Australia that exactly matched the drive model numbers/firmware/etc. On the compatibility list you can also see drives that are explicitly incompatible. Before ordering the drives I checked that they were not explicitly marked as incompatible, even though they also did not exactly match the drives on the compatible list.

Out of the box, the DS1618+ comes with a couple of network cables, a power cord, and some mounting screws for use if you are installing 2.5” drives.

DS1618+ unboxing

Installing Drives

Installing the drives is pretty easy: each bay pops open and the tray slides out, into which the hard drive is inserted. For 3.5” drives no screws are required; instead the drives are held in place by a plastic strip on each side. These plastic strips were a bit fiddly, however, and felt fragile enough that I was worried I was going to snap them when removing them from the tray. Once the drive is inserted and the plastic strips are clipped back in, they do hold the drive solidly in place and the tray can be slotted back into the NAS.

Adding a drive to the DS1618+

The drives can be secured in place using the supplied “key” to prevent the drives from accidentally being removed.

DS1618+ Setup

Once all 4 disks were installed, I connected the power cord and the network cable to the modem/router.

Back on my PC in a browser I navigated to http://find.synology.com – this then forwarded the browser to the NAS.

A wizard-like setup process leads you through the required steps to install the NAS OS, create an admin user account, and optionally enable QuickConnect, which allows you to access the management interface on your NAS over the Internet without needing to set up complex port forwarding rules – a nice feature.

Once the setup is complete the web interface opens.

Synology DiskStation interface

From here you can manage the NAS and install additional packages such as Dropbox/Onedrive/etc cloud sync. This ability to install “apps” onto the NAS is a powerful feature that helps to add extra value to the NAS proposition.

Synology Package Center

At this point I had not actually setup or chosen a RAID level so I wasn’t sure what to do next.

Eventually I found the Main Menu button at the top left that allowed me to open the Storage Manager app where disks are set up – it would have been nice if this was part of the initial setup wizard/guided workflow, at least for beginner users like myself.

There are 2 key concepts: Storage Pools and Volumes. This is where things started to get a little confusing for me as a first-time NAS user; I knew that I wanted a single volume using SHR-2 but was not sure how to get there.

After a few minutes looking at the documentation I understood that a storage pool is a collection of drives. There are 2 types of storage pools: “Storage pool for better performance” and “Storage pool for higher flexibility”, with the performance pool offering “better performance but less storage management flexibility” – unfortunately the doc I was looking at didn’t explain what this “storage management flexibility” refers to.

So I decided to just go and click the create storage pool button and see what happens. The popup then told me that the “flexible” pool is the one that supports SHR so I chose that option.

I gave the pool a not very original name of “MainPool” and chose SHR-2 as the RAID type. Then I proceeded through the wizard and selected the 4 drives I had installed to be part of this pool.

Next I clicked on the create new volume button and chose the storage pool I had just created. I set the volume size to the maximum because I only want 1 volume across all 4 drives, and then I was offered the choice of file system: Btrfs or ext – I went with Btrfs as it was the recommended option.

Once all this was done the NAS started running a parity consistency check on the drives.

Setting Up A Share

Now the pool and volume are up and running it’s time to store some files!

I went to the Shared Folder Creation wizard, created a test share, and then set up read/write permissions for myself.

Now, heading over to Windows File Explorer and navigating to the NAS, I had to provide credentials, which I did for the user I created earlier:

Connecting to Synology Shared Folder in Windows

Now I can navigate to the Test1 share and create my first NAS-ed file :)

Setup Summary and First Impressions

One thing to bear in mind is that setting up a NAS is not the same as just plugging in an external USB drive. The Synology DS1618+ offers loads of configuration options and, other than a short stumble where I learned about Storage Pools and Volumes, the process was pretty painless. I now have the ability to store and retrieve files from anywhere in the house and also know that even if 2 of the 4 hard disks failed I would not lose data.

At this point I have not set up any backups so I won’t be putting any critical files on the NAS yet. I’m also looking forward to setting up things such as Cloud Sync to sync Dropbox, Onedrive, etc. to the NAS – at the moment, due to the smaller SSD sizes on my machines, I’m having to make use of Dropbox selective sync all the time, which is a bit annoying – having my entire Dropbox account on my local network will hopefully make things a lot nicer!

I’m also looking forward to playing with the Synology Docker container support.


New Pluralsight Course: Creating Automated Browser Tests with Selenium in C#

My newest Pluralsight course was just published and you can start watching today. Selenium is a tool that allows you to automate a web browser and simulate an end-user interacting with your web app. You can combine Selenium with a test framework such as xUnit.net to create tests that check your web app is working as expected.

Automated browser tests can complement your other types of tests such as unit and integration tests.

From the course description: “Unit and integration tests can help you catch a range of bugs, but not all of them. Even if your unit and integration tests pass, you could still deploy your web app to production and find it doesn’t work as expected. In this course, Creating Automated Browser Tests with Selenium in C#, you will gain the ability to create tests that automate the browser and simulate a real person using your web app. First, you will learn how to set up your test project and write your first test. Next, you will discover how to interact with web page elements from your tests, such as clicking a button or typing text. Finally, you will explore how to create a suite of automated web tests that are easier to maintain over time. When you are finished with this course, you will have the skills and knowledge of Selenium automated browser testing needed to help ensure your web app is working as expected before you release it to production.”

Check out the course today and if you’re not a Pluralsight member you can currently start watching for free with a Pluralsight Free Trial with Unlimited Access.


Adding Tuple Support to .NET Classes in C#

Edit: Updated to improve clarity (thanks to Paulo in the comments for helping to improve this article).

Tuples in C# are objects that can be created with a specific, lightweight syntax. You don’t have to declare tuple types first like you do with classes, for example; they can instead be created inline.

A tuple is an object that holds a number of arbitrary data items and which has no custom behaviour. In contrast, a class or struct can have both data and custom behaviour.

For example the following creates a tuple with 2 string values:

(string, string) names = ("Sarah", "Smith");
Console.WriteLine($"First name: '{names.Item1}' Last name: '{names.Item2}'");

This code produces the output: First name: 'Sarah' Last name: 'Smith'

In the preceding code, the items inside the tuple don’t have names so they are referred to as Item1 and Item2 but you could also name the items, for example:

(string firstName, string lastName) names = ("Sarah", "Smith");
Console.WriteLine($"First name: '{names.firstName}' Last name: '{names.lastName}'");

If you had a rich Person class that had both data and behaviour, you could also add support for tuple-like deconstruction and unpackaging of a Person instance into variables just like you would do with a tuple instance.

Consider the following class:

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int AgeInYears { get; set; }
    public string FavoriteColor { get; set; }
    
    // methods etc.
}

We could create a tuple as before containing the first and last name as follows:

var sarah = new Person
{
    FirstName = "Sarah",
    LastName = "Smith",
    AgeInYears = 42,
    FavoriteColor = "red"
};

(string firstName, string lastName) names = (sarah.FirstName, sarah.LastName);
Console.WriteLine($"First name: '{names.firstName}' Last name: '{names.lastName}'");

This is however a little clunky; we can modify the Person class to give a Person tuple-like deconstruction and unpacking semantics. To do this, a public void method called Deconstruct can be added, for example:

class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int AgeInYears { get; set; }
    public string FavoriteColor { get; set; }

    // methods etc.

    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }
}

Now the code could be changed to:

var (firstName, lastName) = sarah;
Console.WriteLine($"First name: '{firstName}' Last name: '{lastName}'");

You could also add this deconstruction/unpackaging support to a class you can’t change by declaring an extension method such as:

static class PersonExtensions
{
    public static void Deconstruct(this Person person, out string firstName, out string lastName)
    {
        firstName = person.FirstName;
        lastName = person.LastName;
    }
}

Or as another example, you could add tuple-like deconstruction & unpackaging support for the .NET String type:

static class StringExtensions
{
    public static void Deconstruct(this string s, out string original, out string upper, out string lower, out int length)
    {
        original = s;
        upper = s.ToUpperInvariant();
        lower = s.ToLowerInvariant();
        length = s.Length;
    }
}

And then write:

var (original, upper, lower, length) = "The quick brown fox";
Console.WriteLine($"Original: {original}");
Console.WriteLine($"Uppercase: {upper}");
Console.WriteLine($"Lowercase: {lower}");
Console.WriteLine($"Length: {length}");

As Paulo points out in the comments, there is no actual tuple instance per se involved here; if you look at the decompiled source that Paulo links to you can see the Person has been unpackaged into multiple variables.
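
In other words, the var (firstName, lastName) = sarah; line compiles down to something roughly equivalent to an explicit call to Deconstruct with out arguments:

// Roughly what the compiler produces for: var (firstName, lastName) = sarah;
sarah.Deconstruct(out string firstName, out string lastName);
Console.WriteLine($"First name: '{firstName}' Last name: '{lastName}'");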

If you want to learn a load more C# tips check out my C# Tips and Traps course today. You can even currently start watching with a Pluralsight Free Trial with Unlimited Access.


Variables? We Don’t Need No Stinking Variables - C# Discards

C# 7.0 introduced the concept of discards. Discards are intentionally unused, temporary dummy variables whose values we don’t care about and don’t want to use.

For example, the following shows the result of an addition being discarded:

_ = 1 + 1;

Note the underscore (_) – this is the discard character.

Given the preceding example, you cannot access the result of this addition, for example:

WriteLine(_); // Error CS0103  The name '_' does not exist in the current context 

Using C# Discards with Out Parameters

A more useful example is when you are working with a method that has one or more out parameters and you don’t care about using the outputted value.

As an example, consider one of the many TryParse methods in .NET such as int.TryParse. The following code shows a method that writes to the console whether or not a string can be parsed as an int:

static void ParseInt()
{
    WriteLine("Please enter an int to validate");
    string @int = ReadLine();
    bool isValidInt = int.TryParse(@int, out int parsedInt);
    
    if (isValidInt)
    {
        WriteLine($"{@int} is a valid int");
    }
    else
    {
        WriteLine($"{@int} is NOT a valid int");
    }
}

The preceding method can be written using a discard because the out int parsedInt value is never used:

static void ParseIntUsingDiscard()
{
    WriteLine("Please enter an int to validate");
    string @int = ReadLine();

    if (int.TryParse(@int, out _))
    {
        WriteLine($"{@int} is a valid int");
    }
    else
    {
        WriteLine($"{@int} is NOT a valid int");
    }
}

For example, we could create an expression-bodied method using a similar approach:

static bool IsInt(string @int) => int.TryParse(@int, out _);

If you have a method that returns a lot of out values such as:

private static void GenerateDefaultCity(out string name, out string nickName, out long population, out DateTime founded)
{
    name = "London";
    nickName = "The Big Smoke";
    population = 8_000_000;
    founded = new DateTime(50, 1, 1);
}

In this case you might only care about the returned population value so you could discard all the other out values:

GenerateDefaultCity(out _, out _, out var population, out _);
WriteLine($"Population is: {population}");

Using C# Discards with Tuples

Another use for discards is where you don’t care about all the fields of a tuple. For example the following method returns a tuple containing a name and age:

static (string name, int age) GenerateDefaultPerson()
{
    return ("Amrit", 42);
}

If you only cared about the age you could write:

var (_, age) = GenerateDefaultPerson();
WriteLine($"Default person age is {age}");

Simplifying Null Checking Code with Discards

Take the following null checking code:

private static void Display(string message)
{
    if (message is null)
    {
        throw new ArgumentNullException(nameof(message));
    }
    WriteLine(message);
}

You could refactor this to make use of throw expressions:

private static void DisplayV2(string message)
{
    string checkedMessage = message ?? throw new ArgumentNullException(nameof(message));

    WriteLine(checkedMessage);
}

In the preceding version, however, the checkedMessage variable is somewhat redundant; this could be refactored to use a discard:

private static void DisplayWithDiscardNullCheck(string message)
{
    _ = message ?? throw new ArgumentNullException(nameof(message));
    
    WriteLine(message);
}

Using C# Discards with Tasks

Take the following code:

// Warning CS4014: Because this call is not awaited, execution of the current method continues before the call is completed.
Task.Run(() => SayHello());

Where the SayHello method is defined as:

private static string SayHello()
{
    string greeting = "Hello there!";
    return greeting;
}

If we don’t care about the return value, we can discard the result and get rid of the compiler warning:

// With discard - no compiler warning
_ = Task.Run(() => SayHello());

If there are any exceptions, however, they will be suppressed:

await Task.Run(() => throw new Exception()); // Exception thrown
_ = Task.Run(() => throw new Exception()); // Exception suppressed
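
If you still want fire-and-forget behaviour but don’t want failures to disappear silently, one option (a suggestion on my part, not something required by discards) is to discard a continuation that only runs on failure and logs the exception:

// Log any exception from the background task while still discarding the result
_ = Task.Run(() => SayHello())
        .ContinueWith(t => WriteLine($"Background task failed: {t.Exception}"),
                      TaskContinuationOptions.OnlyOnFaulted);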

Pattern Matching with Switch Statements and Discards

You can also use discards in switch statements:

private static void SwitchExample(object o)
{
    switch (o)
    {
        case null:
            WriteLine("o is null");
            break;
        case string s:
            WriteLine($"{s} in uppercase is {s.ToUpperInvariant()}");
            break;
        case var _:
            WriteLine($"{o.GetType()} type not supported.");
            break;
    }
}

If you want to learn a load more C# tips check out my C# Tips and Traps course today. You can even currently start watching with a Pluralsight Free Trial with Unlimited Access.


Simplifying Parameter Null and Other Checks with the GuardClauses Library

Often you want to add null checks and other validation code at the start of a method to ensure all the values passed into the method are valid before continuing.

For example the following method checks the name and age:

public static void AddNewPerson(string name, int ageInYears)
{
    if (string.IsNullOrWhiteSpace(name))
    {
        throw new ArgumentException($"Cannot be null, empty, or contain only whitespace.", nameof(name));
    }

    if (ageInYears < 1)
    {
        throw new ArgumentOutOfRangeException(nameof(ageInYears), "Must be greater than zero.");
    }

    // Add to database etc.
}

This “guard” kind of code can “clutter” the method and reduce readability.

One library I recently came across is the Guard Clauses library from Steve Smith.

Once this library is installed we could refactor the preceding code to look like the following:

public static void AddNewPerson(string name, int ageInYears)
{
    Guard.Against.NullOrWhiteSpace(name, nameof(name));
    Guard.Against.NegativeOrZero(ageInYears, nameof(ageInYears));

    // Add to database etc.
}

Passing a null name results in the exception: System.ArgumentNullException: Value cannot be null. (Parameter 'name')

Passing an empty string results in: System.ArgumentException: Required input name was empty. (Parameter 'name')

Passing in an age of zero results in: System.ArgumentException: Required input ageInYears cannot be zero or negative. (Parameter 'ageInYears')

The code is also more readable and succinct.

Out of the box the library comes with the following guards (taken from the documentation):

  • Guard.Against.Null (throws if input is null)
  • Guard.Against.NullOrEmpty (throws if string or array input is null or empty)
  • Guard.Against.NullOrWhiteSpace (throws if string input is null, empty or whitespace)
  • Guard.Against.OutOfRange (throws if integer/DateTime/enum input is outside a provided range)
  • Guard.Against.OutOfSQLDateRange (throws if DateTime input is outside the valid range of SQL Server DateTime values)
  • Guard.Against.Zero (throws if number input is zero)
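
As a quick sketch showing a couple of these other built-in guards in use (the method and parameter names here are just for illustration, and the exact overloads may vary between versions of the library):

public static void RecordExamScore(string studentName, int scorePercentage)
{
    Guard.Against.NullOrEmpty(studentName, nameof(studentName));

    // Throws if the score is outside the 0-100 range
    Guard.Against.OutOfRange(scorePercentage, nameof(scorePercentage), 0, 100);

    // Save to database etc.
}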

You can also define your own reusable clauses:

// Define in this namespace so can use alongside built-in guards with no additional namespaces required
namespace Ardalis.GuardClauses
{
    public static class PositiveGuard
    {
        public static void Positive(this IGuardClause guardClause, int input, string parameterName)
        {
            if (input >= 0)
            {
                throw new ArgumentOutOfRangeException(parameterName, $"Required input {parameterName} cannot be positive.");
            }                           
        }
    }
}

And then in a method we can write:

public static void ReportNegativeTemperature(int temp)
{
    Guard.Against.Positive(temp, nameof(temp));
    // Do something
}

And if we pass a positive (or zero) temp we get: System.ArgumentOutOfRangeException: Required input temp cannot be positive. (Parameter 'temp')

This is one of those simple libraries that can make basic tasks easier/more readable.

If you check this out and use it make sure you say thanks to Steve on Twitter and let him know @robertsjason sent you ;)
