It can be difficult to write unit tests for code that accesses the file system.

It’s possible to write integration tests that read in an actual file from the file system, do some processing, and check the resultant output file (or result) for correctness. There are a number of potential problems with these types of integration tests, including the potential for them to run more slowly (real IO access overheads), additional test file management/setup code, etc. (This does not mean that some integration tests wouldn’t be useful, however.)

The System.IO.Abstractions NuGet package can help to make file access code more testable. This package provides a layer of abstraction over the file system that is API-compatible with existing code.

Take the following code as an example:

using System.IO;

namespace ConsoleApp1
{
    public class FileProcessorNotTestable
    {
        public void ConvertFirstLineToUpper(string inputFilePath)
        {
            string outputFilePath = Path.ChangeExtension(inputFilePath, ".out.txt");

            using (StreamReader inputReader = File.OpenText(inputFilePath))
            using (StreamWriter outputWriter = File.CreateText(outputFilePath))
            {
                bool isFirstLine = true;

                string line;
                while ((line = inputReader.ReadLine()) != null)
                {
                    if (isFirstLine)
                    {
                        line = line.ToUpperInvariant();
                        isFirstLine = false;
                    }

                    outputWriter.WriteLine(line);
                }
            }
        }
    }
}


The preceding code opens a text file and writes its contents to a new output file, with the first line converted to uppercase.

This class is not easy to unit test, however, because it is tightly coupled to the physical file system through its calls to File.OpenText and File.CreateText.

Once the System.IO.Abstractions NuGet package is installed, the class can be refactored as follows:

using System.IO;
using System.IO.Abstractions;

namespace ConsoleApp1
{
    public class FileProcessorTestable
    {
        private readonly IFileSystem _fileSystem;

        public FileProcessorTestable() : this(new FileSystem()) { }

        public FileProcessorTestable(IFileSystem fileSystem)
        {
            _fileSystem = fileSystem;
        }

        public void ConvertFirstLineToUpper(string inputFilePath)
        {
            string outputFilePath = Path.ChangeExtension(inputFilePath, ".out.txt");

            using (StreamReader inputReader = _fileSystem.File.OpenText(inputFilePath))
            using (StreamWriter outputWriter = _fileSystem.File.CreateText(outputFilePath))
            {
                bool isFirstLine = true;

                string line;
                while ((line = inputReader.ReadLine()) != null)
                {
                    if (isFirstLine)
                    {
                        line = line.ToUpperInvariant();
                        isFirstLine = false;
                    }

                    outputWriter.WriteLine(line);
                }
            }
        }
    }
}



The key thing to notice in the preceding code is the ability to pass in an IFileSystem as a constructor parameter. The calls to File.OpenText and File.CreateText are now redirected to _fileSystem.File.OpenText and _fileSystem.File.CreateText respectively.

If the parameterless constructor is used (e.g. in production at runtime), an instance of FileSystem will be used; at test time, however, a mock IFileSystem can be supplied.

Handily, the System.IO.Abstractions.TestingHelpers NuGet package provides a pre-built mock file system that can be used in unit tests, as the following simple test demonstrates:

using System.IO.Abstractions.TestingHelpers;
using Xunit;

namespace XUnitTestProject1
{
    public class FileProcessorTestableShould
    {
        [Fact]
        public void ConvertFirstLine()
        {
            var mockFileSystem = new MockFileSystem();

            var mockInputFile = new MockFileData("line1\nline2\nline3");
            mockFileSystem.AddFile(@"C:\temp\in.txt", mockInputFile);

            var sut = new FileProcessorTestable(mockFileSystem);
            sut.ConvertFirstLineToUpper(@"C:\temp\in.txt");

            MockFileData mockOutputFile = mockFileSystem.GetFile(@"C:\temp\in.out.txt");

            string[] outputLines = mockOutputFile.TextContents.SplitLines();

            Assert.Equal("LINE1", outputLines[0]);
            Assert.Equal("line2", outputLines[1]);
            Assert.Equal("line3", outputLines[2]);
        }
    }
}


To see this in action or to learn more about file access, check out my Working with Files and Streams in C# Pluralsight course.

In previous posts we looked at testing for thrown exceptions in xUnit.net and NUnit. In this post we’re going to see how to test for thrown exceptions in MSTest V2.

As with the previous posts, the class being tested is as follows:

using System;

public class TemperatureSensor
{
    bool _isInitialized;

    public void Initialize()
    {
        // Initialize hardware interface
        _isInitialized = true;
    }

    public int ReadCurrentTemperature()
    {
        if (!_isInitialized)
        {
            throw new InvalidOperationException("Cannot read temperature before initializing.");
        }

        return 42; // Simulate for demo code purposes
    }
}


And the first test to check the normal execution:

[TestMethod]
public void ReadTemperature()
{
    var sut = new TemperatureSensor();

    sut.Initialize();

    var temperature = sut.ReadCurrentTemperature();

    Assert.AreEqual(42, temperature);
}


Next, a test can be written to check that the expected exception is thrown:

[TestMethod]
public void ThrowWhenReadingTemperatureBeforeInitializing()
{
    var sut = new TemperatureSensor();

    Assert.ThrowsException<InvalidOperationException>(() => sut.ReadCurrentTemperature());
}


The preceding code uses the Assert.ThrowsException method. This method takes the type of the expected exception as its generic type parameter (in this case InvalidOperationException). As the method parameter, an action/function is specified: this is the code that is expected to cause the exception to be thrown.

The thrown exception can also be captured if you need to test the exception property values:

[TestMethod]
public void ThrowWithCorrectMessageWhenReadingTemperatureBeforeInitializing()
{
    var sut = new TemperatureSensor();

    var ex = Assert.ThrowsException<InvalidOperationException>(() => sut.ReadCurrentTemperature());

    Assert.AreEqual("Cannot read temperature before initializing.", ex.Message);
}


To learn more about using exceptions to handle errors in C#, check out my Error Handling in C# with Exceptions Pluralsight course, or to learn more about MSTest V2 check out my Automated Testing with MSTest V2 Pluralsight course.

In a previous post, testing for thrown exceptions using xUnit.net was demonstrated. In this post we’ll see how to do the same with NUnit.

Once again the class being tested is as follows:

using System;

public class TemperatureSensor
{
    bool _isInitialized;

    public void Initialize()
    {
        // Initialize hardware interface
        _isInitialized = true;
    }

    public int ReadCurrentTemperature()
    {
        if (!_isInitialized)
        {
            throw new InvalidOperationException("Cannot read temperature before initializing.");
        }

        return 42; // Simulate for demo code purposes
    }
}


The first test can be to test the happy path:

[Test]
public void ReadTemperature()
{
    var sut = new TemperatureSensor();

    sut.Initialize();

    var temperature = sut.ReadCurrentTemperature();

    Assert.AreEqual(42, temperature);
}


Next, a test can be written to check that the expected exception is thrown:

[Test]
public void ThrowWhenReadingTemperatureBeforeInitializing()
{
    var sut = new TemperatureSensor();

    Assert.Throws<InvalidOperationException>(() => sut.ReadCurrentTemperature());
}


Notice in the preceding code that any InvalidOperationException thrown will pass the test. To ensure that the thrown exception is correct, it can be captured and further asserts performed against it:

[Test]
public void ThrowWithCorrectMessageWhenReadingTemperatureBeforeInitializing()
{
    var sut = new TemperatureSensor();

    var ex = Assert.Throws<InvalidOperationException>(() => sut.ReadCurrentTemperature());

    Assert.AreEqual("Cannot read temperature before initializing.", ex.Message);
    // or:
    Assert.That(ex.Message, Is.EqualTo("Cannot read temperature before initializing."));
}


There are also other ways to assert against expected exceptions, such as the following constraint-based style:

Assert.Throws(Is.TypeOf<InvalidOperationException>()
                .And.Message.EqualTo("Cannot read temperature before initializing."),
              () => sut.ReadCurrentTemperature());



There’s some personal preference involved when choosing a style; for example, the preceding code could be considered more verbose by some and may muddle the distinction between the Act and Assert phases of a test.

To learn more about using exceptions to handle errors in C#, check out my Error Handling in C# with Exceptions Pluralsight course.

Bogus is a lovely library from Brian Chavez to use in automated tests to automatically generate test data of different kinds.

As an example suppose the following class is involved in a unit test:

public class Review
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
    public int Rating { get; set; }
    public DateTimeOffset Created { get; set; }

    public override string ToString()
    {
        return $"{Id} '{Title}'";
    }
}


In a test, a Review instance may need properties populating with values. This could be done manually, for example to check the ToString() implementation:

[Fact]
public void BeRepresentedAsAString()
{
    var sut = new Review
    {
        Id = 42,
        Title = "blah blah"
    };

    Assert.Equal("42 'blah blah'", sut.ToString());
}


Notice in the preceding test, the actual Id value and title don’t really matter, only the fact that they’re joined as part of the ToString() call. In this example the values for Id and Title could be considered anonymous variables/values in that we don’t really care about them.

The following test uses the Bogus NuGet package and uses its non-fluent facade syntax:

[Fact]
public void BeRepresentedAsAString_BogusFacadeSyntax()
{
    var faker = new Faker("en"); // default en

    var sut = new Review
    {
        Id = faker.Random.Number(),
        Title = faker.Random.String()
    };

    Assert.Equal($"{sut.Id} '{sut.Title}'", sut.ToString());
}



Bogus also has a powerful fluent syntax to define what a test object will look like. To use the fluent version, a Faker<T> instance is created where T is the test object to be configured and created, for example:

[Fact]
public void BeRepresentedAsAString_BogusFluentSyntax()
{
    var reviewFaker = new Faker<Review>()
        .RuleFor(x => x.Id, f => f.Random.Number(1, 10))
        .RuleFor(x => x.Title, f => f.Lorem.Sentence());

    var sut = reviewFaker.Generate();

    Assert.Equal($"{sut.Id} '{sut.Title}'", sut.ToString());
}


The first argument to the RuleFor() methods allows the property of the Review object to be selected and the second argument specifies how the property value should be generated. There is a huge range of test data types supported. In the preceding code the Random API is used as well as the Lorem API.

Some examples of the types of auto generated data include:

• Addresses: ZipCode, City, Country, Latitude, etc.
• Commerce: Department name, ProductName, ProductAdjective, Price, etc.
• Company: CompanyName, CatchPhrase, Bs, etc.
• Date: Past, Soon, Between, etc.
• Finance: Account number, TransactionType, Currency, CreditCardNumber, etc.
• Image URL: Random image, Animals image, Nature image, etc.
• Internet: Email, DomainName, Ipv6, Password, etc.
• Lorem: single word, Words, Sentence, Paragraphs, etc.
• Name: FirstName, LastName, etc.
• Rant: Random user review, etc.
• System: FileName, MimeType, FileExt, etc.

Some of the random generated values are quite entertaining, for example Rant.Review() may produce "My co-worker Fate has one of these. He says it looks tall."; Company.Bs() may produce "transition cross-media users", and Company.CatchPhrase() may produce "Face to face object-oriented focus group".

Bogus configuration is quite powerful and allows fairly complex setup as the following code demonstrates:

[Fact]
public void CalculateAverageRatingWhenMultipleReviews()
{
    int rating = 0;

    var reviewFaker = new Faker<Review>()
        .RuleFor(x => x.Id, f => f.Random.Number(1, 10))
        .RuleFor(x => x.Rating, f => rating++);

    var productFaker = new Faker<Product>()
        .RuleFor(x => x.PricePerUnit, f => f.Finance.Amount())
        .RuleFor(x => x.Description, f => f.WaffleText(3))
        .FinishWith((f, x) =>
        {
            // Add three generated reviews (ratings 0, 1, 2) to the product
            reviewFaker.Generate(3).ForEach(r => x.AddReview(r));
        });

    var sut = productFaker.Generate();

    Assert.Equal(1, sut.AverageRating); // (0 + 1 + 2) / 3
}


The WaffleText() API is provided by one of the extensions to Bogus (WaffleGenerator.Bogus) that produces inane looking waffle text such as the following:

The Quality Of Hypothetical Aesthetic

"The parallel personal hardware cannot explain all the problems in maximizing the efficacy of any fundamental dichotomies of the logical psychic principle. Generally the requirements of unequivocal reciprocal individuality is strictly significant. On the other hand the characteristic organizational change reinforces the weaknesses in the evolution of metaphysical terminology over a given time limit. The objective of the explicit heuristic discordance is to delineate the truly global on-going flexibility or the preliminary qualification limit. A priority should be established based on a combination of functional baseline and inevitability of amelioration The Quality Of Hypothetical Aesthetic"

- Michael Stringer in The Journal of the Proactive Directive Dichotomy (20174U)

structure plan.

To make the main points more explicit, it is fair to say that;
* the value of the optical continuous reconstruction is reciprocated by what should be termed the sanctioned major issue.
* The core drivers poses problems and challenges for both the heuristic non-referent spirituality and any discrete or Philosophical configuration mode.
* an anticipation of the effects of any interpersonal fragmentation reinforces the weaknesses in the explicit deterministic service. This may be due to a lack of a doctrine of the interpersonal quality..
* any significant enhancements in the strategic plan probably expresses the strategic personal theme. This trend may dissipate due to the personal milieu.

firm assumptions about ideal major monologism evinces the universe of attitude.

The Flexible Implicit Aspiration.

Within current constraints on manpower resources, any consideration of the lessons learnt can fully utilize what should be termed the two-phase multi-media program.

For example, the assertion of the importance of the integration of doctrine of the prime remediation with strategic initiatives cannot be shown to be relevant. This is in contrast to the strategic fit.


When writing tests it is sometimes useful to check that the correct exceptions are thrown at the expected time.

When using xUnit.net there are a number of ways to accomplish this.

As an example consider the following simple class:

using System;

public class TemperatureSensor
{
    bool _isInitialized;

    public void Initialize()
    {
        // Initialize hardware interface
        _isInitialized = true;
    }

    public int ReadCurrentTemperature()
    {
        if (!_isInitialized)
        {
            throw new InvalidOperationException("Cannot read temperature before initializing.");
        }

        return 42; // Simulate for demo code purposes
    }
}



The first test we could write against the preceding class is to check the “happy path”:

[Fact]
public void ReadTemperature()
{
    var sut = new TemperatureSensor();

    sut.Initialize();

    var temperature = sut.ReadCurrentTemperature();

    Assert.Equal(42, temperature);
}


Next a test could be written to check that if the temperature is read before initializing the sensor, an exception of type InvalidOperationException is thrown. To do this the xUnit.net Assert.Throws method can be used. When using this method the generic type parameter indicates the type of expected exception and the method parameter takes an action that should cause this exception to be thrown, for example:

[Fact]
public void ThrowWhenReadingTemperatureBeforeInitializing()
{
    var sut = new TemperatureSensor();

    Assert.Throws<InvalidOperationException>(() => sut.ReadCurrentTemperature());
}


In the preceding test, if an InvalidOperationException is not thrown when the ReadCurrentTemperature method is called, the test will fail.

The thrown exception can also be captured in a variable to make further asserts against the exception property values, for example:

[Fact]
public void ThrowWithCorrectMessageWhenReadingTemperatureBeforeInitializing()
{
    var sut = new TemperatureSensor();

    var ex = Assert.Throws<InvalidOperationException>(() => sut.ReadCurrentTemperature());

    Assert.Equal("Cannot read temperature before initializing.", ex.Message);
}


The Assert.Throws method expects the exact type of exception and not derived exceptions. In the case where you want to also allow derived exceptions, the Assert.ThrowsAny method can be used.
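As a brief hedged sketch of the difference (the SensorNotReadyException type below is invented purely for illustration and is not part of the original example), Assert.Throws&lt;InvalidOperationException&gt; would fail when a derived exception type is thrown, whereas Assert.ThrowsAny&lt;InvalidOperationException&gt; passes:

```csharp
using System;
using Xunit;

// Hypothetical derived exception type, for illustration only
public class SensorNotReadyException : InvalidOperationException
{
    public SensorNotReadyException(string message) : base(message) { }
}

public class ThrowsAnyExampleShould
{
    [Fact]
    public void AllowDerivedExceptionTypes()
    {
        // ThrowsAny accepts InvalidOperationException or any type derived from it
        var ex = Assert.ThrowsAny<InvalidOperationException>(
            () => throw new SensorNotReadyException("Sensor not ready."));

        // The captured exception is the derived type actually thrown
        Assert.IsType<SensorNotReadyException>(ex);
    }
}
```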

Similar exception testing features also exist in MSTest and NUnit frameworks.

To learn more about using exceptions to handle errors in C#, check out my Error Handling in C# with Exceptions Pluralsight course.

In the (relatively) distant past, MSTest was often used by organizations because it was provided by Microsoft “in the box” with Visual Studio/.NET. Because of this, some organizations trusted MSTest over open source testing frameworks such as NUnit. This was at a time when the .NET open source ecosystem was not as advanced as it is today and before Microsoft began open sourcing some of their own products.

Nowadays MSTest is cross-platform, open source, and known as MSTest V2; as the documentation states, it “is a fully supported, open source and cross-platform implementation of the MSTest test framework with which to write tests targeting .NET Framework, .NET Core and ASP.NET Core on Windows, Linux, and Mac.”

MSTest V2 provides typical assert functionality such as asserting on the values of strings, numbers, collections, thrown exceptions, etc. Also like other testing frameworks, MSTest V2 allows the customization of the test execution lifecycle, such as running additional setup code before each test executes. The framework also allows the creation of data-driven tests (a single test method executing multiple times with different input test data) and the ability to extend the framework with custom asserts and custom test attributes.
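As a hedged illustration of the data-driven and lifecycle features (the StringCalculator class and its Add method are made up here purely for demonstration), a [TestInitialize] method runs before each test, and [DataRow] attributes feed a single [DataTestMethod] multiple sets of input data:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, for demonstration only
public class StringCalculator
{
    public int Add(int a, int b) => a + b;
}

[TestClass]
public class StringCalculatorShould
{
    private StringCalculator _sut;

    [TestInitialize]
    public void TestInitialize()
    {
        // Runs before every test method in this class
        _sut = new StringCalculator();
    }

    [DataTestMethod]
    [DataRow(1, 2, 3)]
    [DataRow(10, 32, 42)]
    [DataRow(-1, 1, 0)]
    public void AddTwoNumbers(int a, int b, int expected)
    {
        // Executes once per DataRow, with the attribute values bound to the parameters
        Assert.AreEqual(expected, _sut.Add(a, b));
    }
}
```

Each [DataRow] produces a separately reported test result, so a failure identifies exactly which input data caused it.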

You can find out more about MSTest V2 at the GitHub repository, the documentation, or check out my Pluralsight course: Automated Testing with MSTest V2.

I was asked a question on Twitter so I thought I’d write it up here.

When using the FeatureToggle library you may have some code that behaves differently if a toggle is enabled.

When writing a test, you can create a mock IFeatureToggle and set it up to be enabled (or not) and then assert the result is as expected.

The following code shows a simple console app that has an OptionsConsoleWriter.Generate method that uses a toggle to output a printing feature option:

using static System.Console;
using System.Text;
using FeatureToggle.Toggles;
using FeatureToggle.Core;

namespace ConsoleApp1
{
    public class Printing : SimpleFeatureToggle { }

    public class OptionsConsoleWriter
    {
        public string Generate(IFeatureToggle printingToggle)
        {
            var sb = new StringBuilder();

            sb.AppendLine("Options:");
            sb.AppendLine("(e)xport");
            sb.AppendLine("(s)ave");

            if (printingToggle.FeatureEnabled)
            {
                sb.AppendLine("(p)rinting");
            }

            return sb.ToString();
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Printing printingToggle = new Printing();

            string options = new OptionsConsoleWriter().Generate(printingToggle);

            Write(options);
        }
    }
}



To write a couple of simple tests for this method, you can use a mocking framework such as Moq to generate a mocked IFeatureToggle and pass it to the Generate method:

using Xunit;
using Moq;
using FeatureToggle.Core;
using ConsoleApp1;

namespace ClassLibrary1.Tests
{
    public class OptionsConsoleWriterTests
    {
        [Fact]
        public void ShouldGeneratePrintingOption()
        {
            var sut = new OptionsConsoleWriter();

            var mockPrintingToggle = new Mock<IFeatureToggle>();
            mockPrintingToggle.SetupGet(x => x.FeatureEnabled)
                              .Returns(true);

            string options = sut.Generate(mockPrintingToggle.Object);

            Assert.Contains("(p)rinting", options);
        }

        [Fact]
        public void ShouldNotGeneratePrintingOption()
        {
            var sut = new OptionsConsoleWriter();

            var mockPrintingToggle = new Mock<IFeatureToggle>();
            mockPrintingToggle.SetupGet(x => x.FeatureEnabled)
                              .Returns(false);

            string options = sut.Generate(mockPrintingToggle.Object);

            Assert.DoesNotContain("(p)rinting", options);
        }
    }
}



One way to run automated tests is to use Visual Studio’s Test Explorer. Test Explorer can be found under the Test –> Windows –> Test Explorer menu items.

In this article we’ll look at how to manage the list of tests using grouping and also how to specify custom search filter expressions.

## Grouping Tests

There are a number of ways to group tests in Test Explorer; at the highest structural level, we can group by project.

To select a grouping, click the drop-down arrow as shown in the following screenshot:

With the grouping set to Project, the test list looks as follows:

The next structural grouping is Class:

The final structural grouping is by Namespace:

There are a number of non-structural groupings.

Group by Duration:

Group by Outcome:

…and group by Traits:

## Filtering Tests

Custom filters can also be applied.

For example by file path:

Other search examples include:

• Trait:"Smoke Test"
• Message:"System.Exception"
• Class1
• Outcome:"Passed"
• Outcome:"Failed"

Subsets can also be excluded by prefixing the type of filter with a -. For example to show all tests in Class1 except failed tests: Class:"TestClass1" -Outcome:"Passed".

xUnit.net is a testing framework that can be used to write automated tests for .NET (full) framework and also .NET Core.

To get started, first create a .NET Core application; in the following example, a .NET Core console app is used.

A testing project can now be added to the solution:

This test project will come pre-configured with the relevant NuGet packages installed to start writing test code, though you may want to update the pre-configured packages to the newest NuGet versions.

The xUnit Test Project template will also create the following default test class:

using System;
using Xunit;

namespace ConsoleCalculator.Tests
{
    public class UnitTest1
    {
        [Fact]
        public void Test1()
        {

        }
    }
}


Notice in the preceding code that the Test1 method is decorated with the [Fact] attribute. This is an xUnit.net attribute that tells a test runner that it should execute the method, treat it as a test, and report whether the test passed or not.

Next, add a project reference from the test project to the project that contains the code that is to be tested; this gives the test project access to the production code.

In the production project, the following class can be added:

namespace ConsoleCalculator
{
    public class Calculator
    {
        public int Add(int a, int b)
        {
            return a + b;
        }
    }
}


Now the test class can be renamed (for example to “CalculatorTests”) and the test method changed to create a test:

using Xunit;

namespace ConsoleCalculator.Tests
{
    public class CalculatorTests
    {
        [Fact]
        public void ShouldAddTwoNumbers()
        {
            Calculator calculator = new Calculator();

            int result = calculator.Add(7, 3);

            Assert.Equal(10, result);
        }
    }
}


In the preceding code, once again the [Fact] attribute is being used, then the thing being tested is created (the Calculator class instance). The next step is to perform some kind of action on the thing being tested, in this example calling the Add method. The final step is to signal to the test runner whether the test has passed; this is done by using one of the many xUnit.net Assert methods, in the preceding code the Assert.Equal method. The first parameter is the expected value of 10; the second parameter is the actual value produced by the code being tested. So if result is 10 the test will pass, otherwise it will fail.

One way to execute tests is to use Visual Studio’s Test Explorer which can be found under the Test –> Windows –> Test Explorer menu item. Once the test project is built, the test will show up and can be executed as the following screenshot shows:

To learn more about how to get started testing .NET Core code check out my Testing .NET Core Code with xUnit.net: Getting Started Pluralsight course or check out the docs.

It’s often useful to take a step back and look at the bigger picture; this is true in different aspects of life such as health, wealth, or relationships, and is also true of software development.

When it comes to creating automated tests (as with other aspects of software development), dogmatism and absolutist schools of thought can exist.

As with all things, the decision to write tests (and how many tests, what type of tests, test coverage aims, etc.) ultimately should boil down to one question: do they add value to what you are doing?

To be clear, I absolutely believe in the creation of automated tests in many cases; however, it is good not to be dogmatic. For example, if there is a niche market that is ripe for capitalizing on, and time-to-market is the most important thing to capture this market, then an extensive suite of automated tests may slow down getting to that initial release. This of course assumes that the potential market will have some tolerance for software defects. It also depends on what the product is; medical life-critical software is probably going to have a higher quality requirement than a social media app, for example. This can be a trade-off, however: short-term delivery may be quicker, but at the expense of long-term delivery speed, because if you’re overwhelmed fixing production outages you have very little time to add new features/value.

Another aspect to consider is that of risk. What are the risks associated with defects in the software making their way into production? Different features/application areas may also have different risk profiles; for example, the “share on social media” feature may not be deemed as important as a working shopping cart. It’s also important to remember that risk is not just monetary; in the previous example a broken “share on social media” feature may bring the business into disrepute, aka “reputational risk”.

When it comes to the myriad of different types of tests (unit, integration, subcutaneous, etc.), the testing pyramid is an oft-quoted model of how many tests of each type to have in the test suite. While the testing pyramid may be of great help for someone new to automated testing to help them learn and navigate their initial steps, as experience grows the model may no longer be optimal for some of the projects being worked on. Beyond the testing pyramid, the different aspects of test types can be considered, such as execution speed, breadth/depth, reliability/brittleness, etc.

Automated tests also do not exist in and of themselves, they are part of a bigger set of processes/considerations such as pair programming, code reviews, good management, well-understood requirements, good environment management/DevOps, etc.

If you want to take a step back and look at the big picture, or know a developer or manager who wants to understand the trade-offs/benefits check out my Testing Automation: The Big Picture Pluralsight course.