Testing EventGridTrigger Azure Functions Locally (Without Using ngrok)

(This post refers to Azure Functions v2)

One way to test Azure Functions that use Event Grid triggers is to run the Function App locally and then get Azure in the cloud to invoke the function running on the local machine. As an example, suppose you want to use Event Grid to improve the reliability and responsiveness of Blob Storage processing. To do this, the documentation suggests using ngrok to forward the Event Grid calls from Azure to the locally running function. Now, when a blob is added to a container in the cloud, the locally running function on the dev machine will be invoked via ngrok.

There is a somewhat simpler solution that allows you to invoke the Event Grid triggered function locally.

This approach bypasses Event Grid completely, so it is not a substitute for proper end-to-end testing; it’s more of a development-time testing and debugging tool.

Manually Running Non HTTP-Triggered Azure Functions

You can manually trigger a non HTTP-triggered function (such as a timer triggered or Event Grid triggered function) via a special HTTP endpoint.

The endpoint is of the format: {host}/admin/functions/{function name}

For example, take the following function (which was also used in the post Improving Azure Functions Blob Trigger Performance and Reliability - Part 3: Using Event Grid to Respond to New Blobs):

public static class ProcessFoodBlobsEventGrid
{
    private static readonly string[] _meats = { "steak", "chicken", "venison" };

    [FunctionName("ProcessFoodBlobsEventGrid")]
    public static void Run(
     [EventGridTrigger]EventGridEvent blobCreatedEvent,
     [Blob("{data.url}")] string foods, // assumes small blob size so using string not stream
     [Blob("{data.url}.vegetarian")] out string vegetarian,
     [Blob("{data.url}.nonvegetarian")] out string nonVegetarian,
     ILogger log)
    {
        log.LogInformation("Processing a blob created event");

        StorageBlobCreatedEventData createdEvent = ((JObject)blobCreatedEvent.Data).ToObject<StorageBlobCreatedEventData>();

        log.LogInformation($"Blob: {createdEvent.Url}");
        log.LogInformation($"Api operation: {createdEvent.Api}");

        vegetarian = null;
        nonVegetarian = null;

        string[] foodLines = foods.Split(new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries);


        foreach (var food in foodLines)
        {
            var isMeat = _meats.Contains(food);

            if (isMeat)
            {
                nonVegetarian += food + Environment.NewLine;
            }
            else
            {
                vegetarian += food + Environment.NewLine;
            }
        }
    }
}

The preceding function when running locally in development would have the special URL: http://localhost:7071/admin/functions/ProcessFoodBlobsEventGrid

If you had a timer-triggered function called HerdCats that you wanted to manually invoke (so you didn’t have to wait for the next timed invocation) the special URL would be: http://localhost:7071/admin/functions/HerdCats

Note: when running locally in development you do not have to authenticate. If you want to manually invoke a deployed function in Azure, you need to provide an x-functions-key header that contains the function master key.
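As a rough sketch (the function app URL and key are placeholders, and this assumes an empty "input" payload is acceptable for a timer trigger such as the HerdCats example above), a deployed function could be invoked from C# along these lines:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class AdminInvoker
{
    private static readonly HttpClient _httpClient = new HttpClient();

    public static async Task InvokeHerdCatsAsync()
    {
        var request = new HttpRequestMessage(
            HttpMethod.Post,
            "https://<your-function-app>.azurewebsites.net/admin/functions/HerdCats"); // placeholder URL

        // The master key can be found under the Function App's keys in the portal
        request.Headers.Add("x-functions-key", "<master key>");

        // Even with no trigger data, an empty "input" property is supplied as JSON
        request.Content = new StringContent("{ \"input\": \"\" }", Encoding.UTF8, "application/json");

        var response = await _httpClient.SendAsync(request);
        response.EnsureSuccessStatusCode();
    }
}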

Manually Invoking an Event Grid Triggered Azure Function

When using the special URL to invoke a function, you can also provide data to be passed to the function. The type of data passed will depend on the trigger type of the function that you are invoking.

To provide data to the function, a JSON payload can be posted to the special URL. The data that is passed to the function is contained in a JSON property called “input”:

{
    "input": "trigger data goes here"
}

If the Event Grid triggered function will be invoked by a new blob event, the contents of this input property must match the event schema for an Azure Blob Storage event.

An example of event JSON (taken from the Microsoft documentation):

[{
  "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/xstoretestaccount",
  "subject": "/blobServices/default/containers/testcontainer/blobs/testfile.txt",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-06-26T18:41:00.9584103Z",
  "id": "831e1650-001e-001b-66ab-eeb76e069631",
  "data": {
    "api": "PutBlockList",
    "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
    "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
    "eTag": "0x8D4BCC2E4835CD0",
    "contentType": "text/plain",
    "contentLength": 524288,
    "blobType": "BlockBlob",
    "url": "https://example.blob.core.windows.net/testcontainer/testfile.txt",
    "sequencer": "00000000000004420000000000028963",
    "storageDiagnostics": {
      "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
    }
  },
  "dataVersion": "",
  "metadataVersion": "1"
}]

When testing the function outlined earlier, the first thing to do is ensure that there is a blob in the local blob container that will be read by the function by way of the blob input binding: [Blob("{data.url}")] string foods.

For example, in the Storage Emulator a blob called in.txt can be uploaded to the food-in container.
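A minimal sketch of creating that blob from C# against the development storage account (the container and blob names match the example; the food lines are made up):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System.Threading.Tasks;

public static class UploadTestBlob
{
    public static async Task UploadAsync()
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("food-in");
        await container.CreateIfNotExistsAsync();

        // Upload a small test blob for the function's input binding to read
        var blob = container.GetBlockBlobReference("in.txt");
        await blob.UploadTextAsync("carrot\nsteak\napple");
    }
}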

Now the new blob event data JSON needs to be modified, specifically the data.url property needs to contain the URL to the local blob: http://127.0.0.1:10000/devstoreaccount1/food-in/in.txt

A modified version with updated data.url would be as follows:

[{
  "topic": "/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/xstoretestaccount",
  "subject": "/blobServices/default/containers/testcontainer/blobs/testfile.txt",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2017-06-26T18:41:00.9584103Z",
  "id": "831e1650-001e-001b-66ab-eeb76e069631",
  "data": {
    "api": "PutBlockList",
    "clientRequestId": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
    "requestId": "831e1650-001e-001b-66ab-eeb76e000000",
    "eTag": "0x8D4BCC2E4835CD0",
    "contentType": "text/plain",
    "contentLength": 524288,
    "blobType": "BlockBlob",
    "url": "http://127.0.0.1:10000/devstoreaccount1/food-in/in.txt",
    "sequencer": "00000000000004420000000000028963",
    "storageDiagnostics": {
      "batchId": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
    }
  },
  "dataVersion": "",
  "metadataVersion": "1"
}]

The next step is to remove the surrounding [ and ], and replace the double quotes with single quotes. Then paste the resulting JSON into the input property:

{
    "input": "
  {
    'topic': '/subscriptions/{subscription-id}/resourceGroups/Storage/providers/Microsoft.Storage/storageAccounts/xstoretestaccount',
    'subject': '/blobServices/default/containers/oc2d2817345i200097container/blobs/oc2d2817345i20002296blob',
    'eventType': 'Microsoft.Storage.BlobCreated',
    'eventTime': '2017-06-26T18:41:00.9584103Z',
    'id': '831e1650-001e-001b-66ab-eeb76e069631',
    'data': {
      'api': 'PutBlockList',
      'clientRequestId': '6d79dbfb-0e37-4fc4-981f-442c9ca65760',
      'requestId': '831e1650-001e-001b-66ab-eeb76e000000',
      'eTag': '0x8D4BCC2E4835CD0',
      'contentType': 'application/octet-stream',
      'contentLength': 524288,
      'blobType': 'BlockBlob',
      'url': 'http://127.0.0.1:10000/devstoreaccount1/food-in/in.txt',
      'sequencer': '00000000000004420000000000028963',
      'storageDiagnostics': {
        'batchId': 'b68529f3-68cd-4744-baa4-3c0498ec19f0'
      }
    },
    'dataVersion': '',
    'metadataVersion': '1'
  }
"
}

Now this JSON can be POSTed to the special URL: in the case of the example in this post the URL would be: http://localhost:7071/admin/functions/ProcessFoodBlobsEventGrid

The following screenshot shows posting using Postman:

 

Using Postman to post to Event Grid triggered Azure Function

Posting will cause the Event Grid triggered function to be invoked, and the JSON contained inside the input property will be passed to the EventGridEvent blobCreatedEvent trigger parameter. The function will execute and read in the blob called “in.txt”.
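If you prefer code to Postman, a minimal sketch of making the same POST from C# might look like this (it assumes the modified payload above has been saved to a hypothetical local file called blobcreatedevent.json):

using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class InvokeEventGridFunctionLocally
{
    private static readonly HttpClient _httpClient = new HttpClient();

    public static async Task InvokeAsync()
    {
        // The file contains the { "input": "..." } payload built earlier in this post
        string payload = File.ReadAllText("blobcreatedevent.json");

        var response = await _httpClient.PostAsync(
            "http://localhost:7071/admin/functions/ProcessFoodBlobsEventGrid",
            new StringContent(payload, Encoding.UTF8, "application/json"));

        response.EnsureSuccessStatusCode();
    }
}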

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Creating Custom Azure Functions Bindings

(This article refers to Azure Functions v2)

Out of the box, Azure Functions comes with a range of triggers, input bindings, and output bindings to work with blobs, queues, HTTP, etc.

You can also create your own input and/or output bindings.

Overview of Custom Azure Function Bindings

The general process to create a custom binding is:

  1. Create a class library (e.g. .NET Standard)
  2. Create a class that implements IAsyncCollector<T>
  3. Implement the AddAsync method from the interface and add code to perform some output
  4. Create a custom C# attribute (check out my course if you’ve never created custom attributes before) to represent the binding attribute that will be used in functions
  5. Create a class that implements IExtensionConfigProvider that wires up the new binding
  6. In the Function App, create a startup class to register your custom extension

An Example Scenario – Integrating with Pushover

Pushover is a service (with accompanying phone app) that lets you send notifications to your phone. There is a 7 day free trial to start with, and the simplest way to get started is to search for the Pushover app in the app store on your phone. Once registered you can follow the instructions to set up your application/device that push notifications can be sent to. At the end of this process you will end up with a user key and an application API token key. Both of these are needed when calling the Pushover API.

In the following example, a custom Azure Function output binding will be created that can be used from any Azure Function to send notifications. The output could be anything, however; for example, you could replace the Pushover API call with a Twitter call, a LinkedIn call, etc.

The following example creates an output binding, but you can also create bindings that get input and pass it to a function.

Creating a POCO to Represent a Notification Message

In the new class library project, add a class:

namespace PushoverBindingExtensions
{
    public class PushoverNotification
    {
        public string Title { get; set; }
        public string Message { get; set; }
    }
}

Implementing an IAsyncCollector

The next step is to create a class that implements IAsyncCollector<T> where T is the “data” that we want to pass to the output binding; in our case it’s the custom POCO class we just created, but this could be a primitive type such as a string.

using Microsoft.Azure.WebJobs;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

namespace PushoverBindingExtensions
{
    internal class PushoverNotificationAsyncCollector : IAsyncCollector<PushoverNotification>
    {
        private static readonly HttpClient _httpClient = new HttpClient();

        private PushoverAttribute _pushoverAttribute;

        public PushoverNotificationAsyncCollector(PushoverAttribute attribute)
        {
            _pushoverAttribute = attribute;
        }

        public async Task AddAsync(PushoverNotification notification, CancellationToken cancellationToken = default(CancellationToken))
        {
            await SendNotification(notification);
        }

        public Task FlushAsync(CancellationToken cancellationToken = default(CancellationToken))
        {
            return Task.CompletedTask;
        }

        private async Task SendNotification(PushoverNotification notification)
        {
            var parameters = new Dictionary<string, string>
                {
                    { "token", _pushoverAttribute.AppToken },
                    { "user", _pushoverAttribute.UserKey },
                    { "title", notification.Title },
                    { "message", notification.Message }
                };

            var response = await _httpClient.PostAsync("https://api.pushover.net/1/messages.json", new FormUrlEncodedContent(parameters));
            response.EnsureSuccessStatusCode();
        }
    }
}

The key point in the preceding code is the implemented AddAsync method. This method gets called by your function when you use the binding; in this example it calls into the SendNotification method that talks to the Pushover API.

Notice that the constructor takes an instance of a PushoverAttribute which we’ll define next.

Defining a Custom Binding Attribute for Azure Functions

The following code defines a .NET attribute that will be used to decorate parameters in function run methods:

using Microsoft.Azure.WebJobs.Description;
using System;

namespace PushoverBindingExtensions
{
    [Binding]
    [AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue)]
    public class PushoverAttribute : Attribute
    {
        public PushoverAttribute(string appToken, string userKey)
        {
            AppToken = appToken;
            UserKey = userKey;
        }

        [AutoResolve]        
        public string AppToken { get; set; }

        [AutoResolve]       
        public string UserKey { get; set; }
    }
}

Note in the preceding code that the [Binding] attribute has been applied and the attribute has been limited to use on parameters and return values. (You can learn more about how to create custom attributes in my Pluralsight course).

This attribute definition has a couple of string properties to represent the Pushover user and app tokens. Notice these properties have been decorated with the [AutoResolve] attribute. This enables binding expressions for the properties and allows them to be resolved from app settings using %settingName% syntax.

Creating the Azure Function Binding Custom Extension

The next step is to create a class that implements the IExtensionConfigProvider interface. This class will define the rules that are applicable to the custom binding:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Description;
using Microsoft.Azure.WebJobs.Host.Config;

namespace PushoverBindingExtensions
{
    [Extension("PushoverExtensions")]
    public class PushoverExtensions : IExtensionConfigProvider
    {
        public void Initialize(ExtensionConfigContext context)
        {
            var rule = context.AddBindingRule<PushoverAttribute>();            

            rule.BindToCollector<PushoverNotification>(BuildCollector);            
        }

        private IAsyncCollector<PushoverNotification> BuildCollector(PushoverAttribute attribute)
        {            
            return new PushoverNotificationAsyncCollector(attribute);
        }
    }
}

The preceding class is decorated with the [Extension] attribute to mark the class as an extension, and the Initialize method is where the binding rules are defined. In this method we can add binding rules and also custom converters (for example, if we wanted to be able to bind to IAsyncCollector<string> and have it automatically converted to IAsyncCollector<PushoverNotification>). In this example we’re not defining any such converters.
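If you did want such a conversion, a hedged sketch of what the registration might look like is shown below (this assumes the AddConverter method on the binding rule and uses Newtonsoft.Json to deserialize the string):

public void Initialize(ExtensionConfigContext context)
{
    var rule = context.AddBindingRule<PushoverAttribute>();

    // Hypothetical converter: let functions bind to IAsyncCollector<string>
    // by converting a JSON string into a PushoverNotification
    rule.AddConverter<string, PushoverNotification>(
        json => JsonConvert.DeserializeObject<PushoverNotification>(json));

    rule.BindToCollector<PushoverNotification>(BuildCollector);
}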

Once the binding rule has been added (for the PushoverAttribute we created earlier) it can be configured as an output binding by calling the BindToCollector method. This method is used to create an instance of the PushoverNotificationAsyncCollector we created earlier; in this example this is done by calling the BuildCollector method, which returns an IAsyncCollector<PushoverNotification> instance.

So hopefully you’re still with me; at this point we have the custom binding created in the class library project and we can now actually use it in our functions.

Using a Custom Binding in an Azure Function

In the function app project, add a reference to the class library project containing the custom binding.

We can now use the custom binding just as we would any of the pre-supplied ones as the following function code demonstrates:

[FunctionName("SendPushoverNotification")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
    [Pushover("%appkey%", "%userkey%")] IAsyncCollector<PushoverNotification> notifications,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    // validation omitted for demo purposes
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    await notifications.AddAsync(new PushoverNotification { Title = data.Title, Message = data.Message });

    return new OkResult();
}

The preceding function code just happens to have an HTTP trigger but the custom binding can be used in functions with other triggers as well.

Notice the [Pushover] custom binding attribute being applied. The attribute decorates the notifications parameter that is of type IAsyncCollector<PushoverNotification>. (Hopefully it’s a bit clearer now how all the parts fit together…).

Now in the function body code, a notification can be sent by calling the AddAsync method and passing the PushoverNotification to be sent.

Also notice in the [Pushover] attribute the binding expressions for the app key and user key stored in app settings (or local.settings.json in the development environment). This means these sensitive keys do not need to be hard-coded.
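For local development, the settings might look something like this in local.settings.json (the key names match the binding expressions above; the values are placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "appkey": "<your Pushover application API token>",
    "userkey": "<your Pushover user key>"
  }
}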

There are a few final steps to getting this all to work.

The first is to register the custom binding extension by creating the following Startup class in the function app project:

using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;
using PushoverBindingExtensions;

namespace DontCodeTiredDemosV2
{
    public class Startup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            builder.AddExtension<PushoverExtensions>();
        }
    }
}

And in the AssemblyInfo.cs (which you can create manually) add the following:

using Microsoft.Azure.WebJobs.Hosting;

// Register custom extension of Function App startup
[assembly: WebJobsStartup(typeof(DontCodeTiredDemosV2.Startup))]

This points to the Startup class we just created.

Testing It All Out

Now just run the function app and post the following JSON to the function address (e.g. http://localhost:7071/api/SendPushoverNotification):

{
    "Title" : "Functions App",
    "Message" : "I like cheese!"
}

This will result in the Azure Function executing and sending a request to the Pushover API, which will result in a notification arriving on your phone as the following screenshot shows:

Pushover notification via Azure Functions

I hope that helps clarify a little the process of creating custom bindings in Azure Functions. If you’d like me to throw the code up on GitHub let me know :)

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Improving Azure Functions Blob Trigger Performance and Reliability - Part 4: Periodically Checking for Unprocessed Blobs

In this final part of the series we wrap up by briefly discussing some ways to check for blobs that have not been processed correctly.

When using Azure Functions, a timer trigger can be used to periodically execute a function automatically based on a CRON expression. The following code is an example of a timer-triggered function:

public static class CheckBlobs
{
    [FunctionName("CheckBlobs")]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")]TimerInfo myTimer, 
        ILogger log)
    {
        log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

        // Blob checking logic here
    }
}

There are a number of ways to check for unprocessed blobs, depending on the solution you are building; some examples (a minimal sketch of the first approach is shown after the list):

  • Check that an output blob exists for every input blob
  • Use a database to keep track of blobs that were uploaded and compare this to actual output blobs
  • If blobs are deleted after they have been processed, check there are no blobs in the container
  • Etc.
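Filling in the body of the CheckBlobs function above, a minimal sketch of the first approach might look like the following. The assumptions here are the food-in container from earlier posts in this series and that processed blobs get a matching “.vegetarian” output blob written alongside them:

public static class CheckBlobs
{
    [FunctionName("CheckBlobs")]
    public static async Task Run(
        [TimerTrigger("0 */5 * * * *")]TimerInfo myTimer,
        ILogger log)
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount; // use configuration in a real app
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("food-in");

        BlobContinuationToken token = null;
        do
        {
            var segment = await container.ListBlobsSegmentedAsync(token);
            token = segment.ContinuationToken;

            foreach (var item in segment.Results)
            {
                if (!(item is CloudBlockBlob inputBlob))
                {
                    continue;
                }

                // Skip blobs that are themselves outputs
                if (inputBlob.Name.EndsWith(".vegetarian") || inputBlob.Name.EndsWith(".nonvegetarian"))
                {
                    continue;
                }

                var expectedOutput = container.GetBlockBlobReference(inputBlob.Name + ".vegetarian");

                if (!await expectedOutput.ExistsAsync())
                {
                    log.LogWarning($"Blob {inputBlob.Name} does not appear to have been processed");
                }
            }
        } while (token != null);
    }
}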

Some things to bear in mind if implementing this kind of checking include:

  • How often/when to run the function?
  • How long after a blob is written should you allow for it to be processed normally before treating it as missed?
  • Will running this function interfere with any other processing in the system?
  • What if a blob is due to be processed (e.g. message sitting in a queue but not yet processed)? Could this create false positives or cause duplication of processing?
  • How long does the checking function take to execute? Will it take too long as the number of blobs increases and will the function be terminated by the runtime?
  • How/who do you notify of missed blobs (email, SMS, create ticket in CRM/bug system, etc.)?
  • Do you try to perform auto-retry of processing? Again, could this cause duplication, errors, etc.?

You could also use logging/Application Insights to provide you with information, or write every incoming blob name to a database and update that record when a blob has been processed; this way unprocessed blobs can be found with a simple “not yet processed” query.

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Improving Azure Functions Blob Trigger Performance and Reliability - Part 3: Using Event Grid to Respond to New Blobs

In the previous part of the series we saw how to improve the reliability of responding to new blobs by introducing a queue. This required the introduction of a Storage Queue to the solution and also that the writer of new blobs write a queue message.

In this article, instead of manually writing messages to a queue on blob creation, we use Event Grid events.

Azure Event Grid has support for Blob Storage, meaning that when a new blob is written, Event Grid will notice this. We can then trigger an Azure Function from this Event Grid event.

This approach can improve the reliability and responsiveness compared to using a simple blob trigger: “Blob storage events are reliably sent to the Event grid service which provides reliable delivery services to your applications through rich retry policies and dead-letter delivery.” [Microsoft]

Creating an Event Grid Triggered Function

The following Azure Function code is a modified version of the code used in the previous article:

public static class ProcessFoodBlobsEventGrid
{
    private static readonly string[] _meats = { "steak", "chicken", "venison" };

    [FunctionName("ProcessFoodBlobsEventGrid")]
    public static void Run(
     [EventGridTrigger]EventGridEvent blobCreatedEvent,
     [Blob("{data.url}")] string foods, // assumes small blob size so using string not stream
     [Blob("{data.url}.vegetarian")] out string vegetarian,
     [Blob("{data.url}.nonvegetarian")] out string nonVegetarian,
     ILogger log)
    {
        log.LogInformation("Processing a blob created event");

        StorageBlobCreatedEventData createdEvent = ((JObject)blobCreatedEvent.Data).ToObject<StorageBlobCreatedEventData>();

        log.LogInformation($"Blob: {createdEvent.Url}");
        log.LogInformation($"Api operation: {createdEvent.Api}");

        vegetarian = null;
        nonVegetarian = null;

        string[] foodLines = foods.Split(new[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries);


        foreach (var food in foodLines)
        {
            var isMeat = _meats.Contains(food);

            if (isMeat)
            {
                nonVegetarian += food + Environment.NewLine;
            }
            else
            {
                vegetarian += food + Environment.NewLine;
            }
        }
    }
}

In the preceding code, the [EventGridTrigger]EventGridEvent blobCreatedEvent parameter will cause the function to be triggered when an Event Grid event is directed to the function.

The input blob binding [Blob("{data.url}")] string foods uses a binding expression and accesses the data.url property from the JSON data that’s contained in the event (this comes from the event schema for Blob Storage). The 2 output bindings also use the original blob path/name and append .vegetarian or .nonvegetarian. This implementation writes output blobs to the same container as the input blob. You could also use dynamic binding in Azure Functions with imperative runtime bindings to just extract the filename from the blob and write the output blobs to a different container.
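As a hedged sketch of that imperative approach (the food-out output container name and the “process only vegetarian lines” comment are assumptions for illustration), the function could take a Binder and construct the output blob path at runtime:

[FunctionName("ProcessFoodBlobsEventGridImperative")]
public static async Task Run(
    [EventGridTrigger]EventGridEvent blobCreatedEvent,
    [Blob("{data.url}")] string foods,
    Binder binder,
    ILogger log)
{
    StorageBlobCreatedEventData createdEvent = ((JObject)blobCreatedEvent.Data).ToObject<StorageBlobCreatedEventData>();

    // Extract just the file name from the full blob URL, e.g. "in.txt"
    string fileName = Path.GetFileName(new Uri(createdEvent.Url).LocalPath);

    // Bind an output blob in a different container at runtime
    using (var writer = await binder.BindAsync<TextWriter>(
        new BlobAttribute($"food-out/{fileName}.vegetarian", FileAccess.Write)))
    {
        await writer.WriteAsync(foods); // real code would write only the vegetarian lines
    }
}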

Creating an Event Subscription for New Blobs

The function needs an event subscription to be created in Azure to recognize when new blobs are written and invoke the function. This can be done by navigating to the storage account (requires storage account v2) in the Azure Portal and clicking the Events link. You can then add a new event subscription as the following screenshot shows (note the Defined Event Types is set to Blob Created):

Creating a new Azure Event Grid Subscription to trigger an Azure Function

You can also specify subject filters to limit the event to a specific container and/or file type as the following screenshot shows:

Configuring Azure Event Grid subscription to filter on blob storage containers

You could also specify dead-lettering and retry policies in case the Function App is unable to respond.

Now when a blob is added, the event subscription will notice it and invoke the function.

Ultimately “Use the Event Grid trigger instead of the Blob storage trigger for blob-only storage accounts, for high scale, or to reduce latency.” [Microsoft]

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Improving Azure Functions Blob Trigger Performance and Reliability - Part 2: Processing Delays and Missed Blobs

This is the second part of a series of articles.

When you add a new blob, your blob-triggered function may not be triggered immediately: “If the blob container being monitored contains more than 10,000 blobs, the Functions runtime scans log files to watch for new or changed blobs. This process can result in delays. A function might not get triggered until several minutes or longer after the blob is created.” [Microsoft]

Also when scanning log files to find new blobs that need processing, there’s “no guarantee that all events are captured. Under some conditions, logs may be missed.” [Microsoft]

This means that it is possible for some new blobs to be missed and not processed.

Using a Storage Queue to Trigger Processing of New Blobs

One alternative to reduce the likelihood of missed blobs and also improve the responsiveness of blob processing is to use a slightly more complex (but still relatively straightforward) approach.

Essentially this alternative approach has the following workflow:

  1. New blob written to blob storage
  2. Write message to storage queue containing new blob path
  3. Queue-triggered function gets message from step 2
  4. Blob processing occurs

(Note that this alternative approach may not suit all situations depending on how new blobs are making their way into blob storage – whoever or whatever is writing the blob in step 1 also needs to be able to write a queue message.)

Blob Writing

This approach requires that when a blob is written, a queue message is also written.

As a simple example, this could be from client code as follows:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System.IO;
using System.Threading.Tasks;

namespace AddNewBlob
{
    class Program
    {
        static async Task Main(string[] args)
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
            CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();
            CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("food-in");
            CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference("recipe1.txt");

            await WriteBlob();
            await WriteMessage();

            async Task WriteBlob()
            {
                using (var stream = await cloudBlockBlob.OpenWriteAsync())
                using (var sw = new StreamWriter(stream))
                {
                    await sw.WriteLineAsync("carrot");
                    await sw.WriteLineAsync("steak");
                    await sw.WriteLineAsync("apple");
                }
            }

            async Task WriteMessage()
            {
                var queueClient = storageAccount.CreateCloudQueueClient();
                var queue = queueClient.GetQueueReference("food-in");
                await queue.AddMessageAsync(new Microsoft.WindowsAzure.Storage.Queue.CloudQueueMessage("recipe1.txt"));
            }
        }

        
    }
}

Or perhaps the blob data comes in via an HTTP-triggered function as follows:

public static class AddRecipe
{
    [FunctionName("AddRecipe")]
    [return: Queue("food-in")]
    public static async Task<string> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,            
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");
        
        string ingredients = await new StreamReader(req.Body).ReadToEndAsync();

        // validation/error code omitted for demo purposes

        var blobName = Guid.NewGuid().ToString();

        await WriteBlob(); // ensure blob is written *before* function returns and add message to the queue

        return blobName; // write to queue


        async Task WriteBlob()
        {
            var account = CloudStorageAccount.DevelopmentStorageAccount; // In real app load this from secure config location
            var blobClient = account.CreateCloudBlobClient();
            var blobContainer = blobClient.GetContainerReference("food-in");
            var cloudBlockBlob = blobContainer.GetBlockBlobReference(blobName);
            await cloudBlockBlob.UploadTextAsync(ingredients);
        }
    }
}

Notice in the preceding code that the writing of the blob is done explicitly to ensure that the queue message isn’t added until the blob is definitely available to be processed by the next function in the chain. (See this related GitHub issue.)

More Reliable Blob Processing

The next function is where the actual processing of the new blob is carried out; it is, however, triggered from a queue rather than relying on a blob trigger:

public static class ProcessFoodBlobs
{
    private static readonly string[] _meats = { "steak", "chicken", "venison" };      

    [FunctionName("ProcessFoodBlobs")]
    public static void Run(
        [QueueTrigger("food-in")]string newBlobPath, 
        [Blob("food-in/{queueTrigger}")] string foods,
        [Blob("food-out/{queueTrigger}.vegetarian")] out string vegetarian,
        [Blob("food-out/{queueTrigger}.nonvegetarian")] out string nonVegetarian,
        ILogger log)
    {
        vegetarian = null;
        nonVegetarian = null;

        string[] foodLines = foods.Split(new[] {"\r\n", "\n"  }, StringSplitOptions.RemoveEmptyEntries);


        foreach (var food in foodLines)
        {
            var isMeat = _meats.Contains(food);

            if (isMeat)
            {
                nonVegetarian += food + Environment.NewLine;
            }
            else
            {
                vegetarian += food + Environment.NewLine;
            }
        }    
    }
}

In the preceding code we’re making use of automatic input blob binding.

Summary

This approach may offer some benefits at the cost of some additional complexity if you have a lot of blobs being written/stored/processed. It also has some other considerations to bear in mind such as what happens if the blob is deleted or changed before the message is picked up off the queue? As with all things you should consider your own requirements and ensure you do thorough testing which includes performance/load/stress testing.

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Improving Azure Functions Blob Trigger Performance and Reliability - Part 1: Memory Usage

This is the first part of a series of articles.

When creating blob-triggered Azure Functions there are some memory usage considerations to bear in mind.

“The consumption plan limits a function app on one virtual machine (VM) to 1.5 GB of memory. Memory is used by each concurrently executing function instance and by the Functions runtime itself.” [Microsoft]

A blob-triggered function can execute concurrently and internally uses a queue: “the maximum number of concurrent function invocations is controlled by the queues configuration in host.json. The default settings limit concurrency to 24 invocations. This limit applies separately to each function that uses a blob trigger.” [Microsoft]

So, if you have 1 blob-triggered function in a Function App, with the default concurrency setting of 24, you could have a maximum of 24 (1 * 24) concurrently executing function invocations. (The documentation describes this as per-VM concurrency; with 2 VMs you could have 48 (2 VMs * 1 function * 24) concurrently executing function invocations.)

If you had 3 blob-triggered functions in a Function App (assuming 1 VM) then you could have 72 (3 * 24) concurrently executing function invocations.

Because the consumption plan “limits a function app on one virtual machine (VM) to 1.5 GB of memory”, if you are processing blobs that are non-trivial in size then you may need to consider overall memory usage.
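If memory pressure is a concern, one lever (a hedged example; the numbers here are arbitrary) is to lower the blob trigger’s internal queue settings in host.json. This reduces the per-function concurrency from the default 24 (batchSize 16 + newBatchThreshold 8) to, in this case, 12:

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 8,
      "newBatchThreshold": 4
    }
  }
}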

OutOfMemoryException When Using Azure Functions Blob Trigger

As an example, suppose the following function exists:

public static class BlobPerformanceAndReliability
{
    [FunctionName("BlobPerformanceAndReliability")]
    public static void Run(
        [BlobTrigger("big-blobs/{name}")]string blob, 
        string name, 
        [Blob("big-blobs-out")] out string foundData,
        ILogger log)
    {
        log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {blob.Length} Bytes");

        // Code to find and output a specific line
        foundData = "This line will never be reached if out of memory";
    }
}

The preceding function code is triggered by blobs in the big-blobs container, the omitted code towards the end of the function would find a specific line of text in the blob and output it to big-blobs-out.

We can create a large file (approx. 1.8 GB) with the following code in a console app:

using System.IO;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var sw = new StreamWriter(@"c:\temp\bigblob.txt"))
            {
                for (int i = 0; i < 40_000_000; i++)
                {
                    sw.WriteLine("Some line we are not interested in processing");
                }
                sw.WriteLine("Data: 42");
            }
        }
    }
}

The contents of the last line in the file will be set to “Data: 42”.

If we run the function app locally and upload this big file to the Azure Storage Emulator, the function will trigger and will error with: “System.Private.CoreLib: Exception while executing function: BlobPerformanceAndReliability. Microsoft.Azure.WebJobs.Host: One or more errors occurred. (Exception binding parameter 'blob') (Exception binding parameter 'name'). Exception binding parameter 'blob'. System.Private.CoreLib: Exception of type 'System.OutOfMemoryException' was thrown.”.

The reason for this is that when you bind a blob trigger/input to string or byte[], the entire blob will be read into memory. If the blob is too big (and/or there are other concurrently executing function invocations also processing big files) this will exceed the memory restrictions of the Functions runtime.

Processing Large Blobs with Azure Functions

Instead of binding to string or byte[], you can bind to a Stream. This will not load the entire blob into memory and will allow you to instead process it incrementally.

The function can be re-written as follows:

public static class BlobPerformanceAndReliability
{
    [FunctionName("BlobPerformanceAndReliability")]
    public static void Run(
        [BlobTrigger("big-blobs/{name}")]Stream blob,
        string name,
        [Blob("big-blobs-out/{name}")] out string foundData,
        ILogger log)
    {
        log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {blob.Length} Bytes");

        // Code to find and output a specific line            

        foundData = null; // Don't write an output blob by default

        string line;

        using (var sr = new StreamReader(blob))
        {                
            while (!sr.EndOfStream)
            {
                line = sr.ReadLine();

                if (line.StartsWith("Data"))
                {
                    foundData = line;
                    break;
                }                    
            }
        }            
    }
}

If you’re not familiar with using streams in .NET, check out my Working with Files and Streams in C# Pluralsight course.

If we force the same blob to be reprocessed with this new function code, there will be no error and the output blob containing “Data: 42” will be seen in the big-blobs-out container.

Another thing to bear in mind when processing large files is that there is a timeout on function execution.
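The timeout can be configured (up to the limit of your hosting plan) with the functionTimeout setting in host.json; for example (the value here is chosen arbitrarily for illustration):

{
  "version": "2.0",
  "functionTimeout": "00:10:00"
}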

In the next part of this series we’ll look at how to improve the responsiveness of function execution when new blobs are written and also improve the reliability and reduce the chances of blobs being missed.

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Handling Errors and Poison Blobs in Azure Functions With Azure Blob Storage Triggers

(This article applies to Azure Functions V2)

An Azure Function can be triggered by new blobs being written (or updated). If an unhandled exception occurs in the function, by default Azure Functions will retry the blob 5 times. This means the function will be triggered again for the same blob up to 5 times. If the same blob causes errors 5 times, no further attempts will be made and the processing of the blob will be “lost”.

Understanding Blob Processing Errors in Azure Functions

When a new (or updated) blob triggers a function, the Azure Functions runtime makes sure that the same blob is not processed twice (if no error occurs in the function execution). To do this the runtime makes use of “blob receipts”. These are stored in the Azure storage account associated with the function app (as defined in the AzureWebJobsStorage Function App settings).

As an example, suppose a new blob (called “followupletterrequest.data”) triggered the following function:

class FollowupLetterRequest
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public static class PoisonBlobExampleFunctions
{
    [FunctionName("PoisonBlobExampleFunctions")]
    public static void Run(
        [BlobTrigger("followup-letters/{blobname}.data")]string blobData, 
        string blobname,
        [Blob("followup-letters/{blobname}.txt")] out string letter,
        ILogger log)
    {
        var settings = new JsonSerializerSettings
        {
            MissingMemberHandling = MissingMemberHandling.Error
        };

        // This code assumes blob JSON is valid, if not an exception will be thrown
        var request = JsonConvert.DeserializeObject<FollowupLetterRequest>(blobData, settings);

        string firstName = request.FirstName;
        string lastName = request.LastName;

        letter = RenderFollowUpLetterText(firstName, lastName);
    }
    
    private static string RenderFollowUpLetterText(string firstName, string lastName)
    {
        string simulateLetterText = WaffleEngine.Text(paragraphs: 3, includeHeading: false);

        return $"Dear {firstName} {lastName}\r\n \r\n{simulateLetterText}";
    }
}

After the function runs, in the storage account under a path like “azure-webjobs-hosts/blobreceipts” the blob receipt can be seen. On a development machine using the local storage emulator the full path would be something like: “blobreceipts/desktop/DontCodeTiredDemosV2.PoisonBlobExampleFunctions.Run/"0x8D69224161F4590"/followup-letters/followupletterrequest.data”.

This full path to the blob receipt blob represents:

  • Function Id that the blob triggered (DontCodeTiredDemosV2.PoisonBlobExampleFunctions.Run)
  • Blob Container Name (followup-letters)
  • Name of triggering blob (followupletterrequest.data)
  • Triggering blob version ETag (“0x8D69224161F4590”)

 

If we now added another new blob called “followupletterrequest_bad.data” that contains bad data (e.g. a missing JSON property), so that an exception is thrown, a second blob receipt will be generated: “blobreceipts/desktop/DontCodeTiredDemosV2.PoisonBlobExampleFunctions.Run/"0x8D692245985E910"/followup-letters/followupletterrequest_bad.data”.

Because this blob generated an error, after the default number of retries (5) there will be no more attempts to process it.

Manually Forcing a Blob to Be Reprocessed

The documentation states that if the blob receipt is manually deleted, this will force the blob to be reprocessed. This may be suitable to force reprocessing of a set of blobs that failed due to some transient error such as a database or network being temporarily offline. You should obviously take care that reprocessing blobs won’t cause problems such as duplicate orders, emails, etc. or other errors in the system. You may also need to consider what would happen if blobs are retried in a different order and/or interleaved with new blobs being added. Also, blobs may not be reprocessed immediately. Using the local function runtime development environment, once the blob receipt has been deleted it seems that the function app needs restarting to cause the blob to be reprocessed (either that or I didn’t wait long enough…). Once deployed to Azure, there can be a delay between when the blob receipt is deleted and the blob being retried; the following timeline shows the delay between the blob receipt being deleted and retry attempt 1:

2019-02-14 03:40:24.374 <attempt 1 - failure>
2019-02-14 03:40:24.763 <attempt 2 - failure>
2019-02-14 03:40:24.891 <attempt 3 - failure>
2019-02-14 03:40:25.007 <attempt 4 - failure>
2019-02-14 03:40:25.117 <attempt 5 - failure>
<blob receipt deleted>
2019-02-14 04:24:24.327 <retry attempt 1 - failure>
2019-02-14 04:24:25.155 <retry attempt 2 - failure>
2019-02-14 04:24:25.288 <retry attempt 3 - failure>
2019-02-14 04:24:25.455 <retry attempt 4 - failure>
2019-02-14 04:24:25.592 <retry attempt 5 - failure>
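As a hedged sketch (the receipt path prefix is based on the local example above, and the storage SDK calls assume the WindowsAzure.Storage package used elsewhere in these posts), receipts for a given function could also be deleted programmatically:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using System.Threading.Tasks;

public static class BlobReceiptCleaner
{
    public static async Task DeleteReceiptsAsync()
    {
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var client = account.CreateCloudBlobClient();
        var container = client.GetContainerReference("azure-webjobs-hosts");

        // Prefix for receipts belonging to the example function above (local dev host id "desktop")
        string prefix = "blobreceipts/desktop/DontCodeTiredDemosV2.PoisonBlobExampleFunctions.Run/";

        BlobContinuationToken token = null;
        do
        {
            var segment = await container.ListBlobsSegmentedAsync(
                prefix, true, BlobListingDetails.None, null, token, null, null);
            token = segment.ContinuationToken;

            foreach (var item in segment.Results)
            {
                if (item is CloudBlockBlob receipt)
                {
                    await receipt.DeleteIfExistsAsync(); // forces the blob to be retried
                }
            }
        } while (token != null);
    }
}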

Automatically Responding to Blob Failures in Azure Functions

When a blob fails for the last time, information about the failure will be written as a message to a Storage queue called “webjobs-blobtrigger-poison”. The message contains a JSON payload describing the triggering blob that didn’t complete processing successfully, for example:

{
  "Type": "BlobTrigger",
  "FunctionId": "DontCodeTiredDemosV2.PoisonBlobExampleFunctions.Run",
  "BlobType": "BlockBlob",
  "ContainerName": "followup-letters",
  "BlobName": "followupletterrequest_bad.data",
  "ETag": "\"0x8D692245985E910\""
}

The information contained in the JSON can be used to alert support people about the error and take appropriate action as required, such as writing to a support ticket database or sending an email. You could also implement logic to automatically delete the blob receipt to force reprocessing, but you would probably want some kind of retry count, otherwise bad data could cause an infinite processing loop. Exactly how you handle failed blob processing will depend on the business scenario.

As an example, the following function monitors the “webjobs-blobtrigger-poison” queue and grabs the information about the failed blob:

[FunctionName("PoisonBlobQueueProcessor")]
public static void PoisonBlobQueueProcessor(
    [QueueTrigger("webjobs-blobtrigger-poison")] string message,
    ILogger log)
{
    var poisonBlobDetails = JsonConvert.DeserializeObject<dynamic>(message);

    log.LogInformation($"Found an unprocessed blob {poisonBlobDetails.ContainerName}/{poisonBlobDetails.BlobName}\r\n");
    
    // Send an email, log a ticket in a fault system, log a CRM issue, etc.            
}

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Getting Blob Metadata When Using Azure Functions Blob Storage Triggers

(This article refers to Azure Functions V2)

Basic Blob Metadata

There are a few basic pieces of metadata that are often useful.

The following code shows a simple example of a blob-triggered Azure Function:

[FunctionName("BlobMetadataExample")]
public static void Run(
    [BlobTrigger("decline-letters/{name}")]Stream myBlob, 
    string name, 
    ILogger log)
{
    log.LogInformation($"Name: {name} Size: {myBlob.Length} Bytes");
}

With the preceding code, if we add a blob called “declineletterrequest.data” to the “decline-letters” container, the function will be triggered with the output: “Name: declineletterrequest.data Size: 50 Bytes”.

Notice that the string name parameter has been automatically populated with the full name of the blob that triggered the function execution.

If you want to get the blob name and blob extension separately you could write the following:

[FunctionName("BlobMetadataExample")]
public static void Run(
    [BlobTrigger("decline-letters/{blobname}.{blobextension}")]Stream myBlob,
    string blobName,
    string blobExtension,
    ILogger log)
{
    log.LogInformation($"Name: {blobName} Extension: {blobExtension} Size: {myBlob.Length} Bytes");
}

If the preceding function executes we get the output: “Name: declineletterrequest Extension: data Size: 50 Bytes”.

In addition to being able to use this simple blob metadata in code, you can also use the elements of the triggering blob name in other bindings:

[FunctionName("BlobMetadataExample")]
public static void Run(
        [BlobTrigger("decline-letters/{blobname}.{blobextension}")]Stream myBlob,
        string blobName,
        string blobExtension,
        [Queue("output-queue-{blobextension}")] out string message,
        ILogger log)
{
    log.LogInformation($"Name: {blobName} Extension: {blobExtension} Size: {myBlob.Length} Bytes");

    message = "Hello world";
}

In the preceding code, the output queue that is written to is dependent on the extension of the triggering blob. If the triggering blob name was “declineletterrequest.bankofmars” then a message will be written to the queue “output-queue-bankofmars”, or if the triggering blob was called “declineletterrequest.bankofvenus” then a message would be written to the queue “output-queue-bankofvenus”.

You can also do a similar thing by binding an input blob binding to the contents of a triggering queue message.
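For example, a minimal sketch of that (the blob-names queue name is an assumption) where the queue message contains a blob name and {queueTrigger} binds the input blob to it:

[FunctionName("ProcessBlobNamedInQueueMessage")]
public static void Run(
    [QueueTrigger("blob-names")] string blobName,
    [Blob("decline-letters/{queueTrigger}")] string blobContents,
    ILogger log)
{
    // The input blob is bound using the content of the triggering queue message
    log.LogInformation($"Blob {blobName} contains {blobContents.Length} characters");
}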

Advanced Metadata

There are a number of additional metadata items that you can get by simply adding the correct method arguments with the correct names:

[FunctionName("BlobMetadataExample")]
public static void Run(
        [BlobTrigger("decline-letters/{blobname}.{blobextension}")]Stream myBlob,
        string blobName,
        string blobExtension,
        string blobTrigger, // full path to triggering blob
        Uri uri, // blob primary location
        IDictionary<string, string> metaData, // user-defined blob metadata
        BlobProperties properties, // blob system properties, e.g. LastModified
        ILogger log)
{
    log.LogInformation($@"
blobName      {blobName}
blobExtension {blobExtension}
blobTrigger   {blobTrigger}
uri           {uri}
metaData      {metaData.Count}
properties    {properties.Created}");
}

Executing the preceding code will give the following output:

blobName      declineletterrequest
blobExtension data
blobTrigger   decline-letters/declineletterrequest.data
uri           http://127.0.0.1:10000/devstoreaccount1/decline-letters/declineletterrequest.data
metaData      0
properties    12/02/2019 2:15:53 AM +00:00

The BlobProperties object gives you access to a host of information such as ETag, DeletedTime, ContentEncoding, etc.

You can use this additional metadata in further binding expressions, the following example shows how to bind a blob output name to the ETag of the original triggering blob:

[FunctionName("BlobMetadataExample")]
public static void Run(
[BlobTrigger("decline-letters/{blobname}.{blobextension}")]Stream myBlob,
string blobName,
BlobProperties properties,
[Blob("decline-letters/{properties.ETag}")] out string message,
ILogger log)
{
    message = "Hello world";
}

The preceding code would create an output blob with a name such as “0x8D6909193F68C10”.

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Dealing With Unprocessed Storage Queue Poison Messages in Azure Functions

If an Azure Function that is triggered by a message on a Storage Queue throws an exception, the message will automatically be returned to the queue and retried again in the future.

In addition to specifying how soon the message will be retried, you can also configure how many times the message will be retried by editing the host.json file. By default a message will be retried 5 times before finally failing. The following host.json specifies that a message should be retried 10 times before finally failing:

{
  "version": "2.0",
  "extensions": {
    "queues": {      
      "maxDequeueCount": 10
    } 
  }  
}

Handling Poison Messages in Azure Functions

Once a message has been retried the maximum number of times, it will be considered a poison message; essentially, if we kept it on the queue it would “poison” the application/function and cause harm. Poison messages will be removed from the queue and placed onto a poison queue.

For example, if the queue that triggers the function is called “input-queue”, poison messages will be moved to a queue called “input-queue-poison”.

Because we know the name of the poison queue, we can process these poison messages somehow. Exactly how you choose to process these messages will depend on the application you are building.

One thing to think about is why the message may have failed:

  • Is the message content itself corrupted?
  • Is the function code itself defective/have a bug?
  • Were the exceptions caused by a transient error in a service the function uses?
  • Etc.

You could have some automated process (function) attempt to resolve the poison messages or forward them to a human to resolve (for example writing the message to a database that a human can query).

Triggering an Azure Function From a Poison Message Queue

As an example, the following function retrieves messages from the “input-queue-poison” queue and writes out to table storage for a human to manually correct somehow:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Microsoft.WindowsAzure.Storage.Queue;

namespace DontCodeTiredDemosV2
{
    public class PoisonMessageDetails
    {        
        public string RowKey { get; set; }
        public string PartitionKey { get; set; }
        public string OriginalMessageContent { get; set; }
        public string OriginalMessageId { get; set; }
    }

    public static class HandlePoisonMessages
    {
        [FunctionName("HandlePoisonMessages")]
        [return: Table("HumanInterventionRequired")]
        public static PoisonMessageDetails Run(
            [QueueTrigger("input-queue-poison")]CloudQueueMessage poisonMessage,            
            ILogger log)
        {
            log.LogInformation($"Processing poison message {poisonMessage.Id}");            

            return new PoisonMessageDetails
            {
                RowKey = Guid.NewGuid().ToString(),
                PartitionKey = "input-queue",
                OriginalMessageContent = poisonMessage.AsString,
                OriginalMessageId = poisonMessage.Id
            };                       
        }
    }
}

Once a poison message is processed (for example with the content of “Amrit”) a row will be added to the table as the following screenshot shows:

Azure Table Storage to process poison messages

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.


Specifying How Soon a Storage Queue Message Will Be Retried in an Azure Function

By default, if an exception occurs in an Azure Function that uses a Storage Queue trigger, the message will be returned to the queue and automatically retried again in the future (up to a maximum number of times).

By default, there is no delay in how soon the message can be retried. Take the following function, which logs the dequeue count and next visible time and then throws an exception:

public static class MakeUppercase
{
    [FunctionName("MakeUppercase")]
    public static void Run(
        [QueueTrigger("input-queue")]CloudQueueMessage inputQueueItem,
        ILogger log)
    {
        log.LogInformation($"Message Dequeued : {inputQueueItem.DequeueCount} time(s)");
        log.LogInformation($"Message Next Visible : {inputQueueItem.NextVisibleTime}");

        throw new Exception("Forced exception for demonstration purposes.");
    }
}

With the preceding function, when a single message is added to the queue, the following (abbreviated) output will be seen:

[15/01/2019 11:55:49 PM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=44f95504-7a99-4f23-81f2-096f0bd434a2)
[15/01/2019 11:55:49 PM] Message Dequeued : 1 time(s)
[15/01/2019 11:55:49 PM] Message Next Visible : 16/01/2019 12:05:49 AM +00:00
[15/01/2019 11:55:50 PM] Executed 'MakeUppercase' (Failed, Id=44f95504-7a99-4f23-81f2-096f0bd434a2)
[15/01/2019 11:55:50 PM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=bbede56f-e22a-461f-945c-3f3b47114de3)
[15/01/2019 11:55:50 PM] Message Dequeued : 2 time(s)
[15/01/2019 11:55:50 PM] Message Next Visible : 16/01/2019 12:05:50 AM +00:00
[15/01/2019 11:55:50 PM] Executed 'MakeUppercase' (Failed, Id=bbede56f-e22a-461f-945c-3f3b47114de3)
[15/01/2019 11:55:50 PM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=10b7495f-9cfb-4d75-bc31-db68581b8055)
[15/01/2019 11:55:50 PM] Message Dequeued : 3 time(s)
[15/01/2019 11:55:50 PM] Message Next Visible : 16/01/2019 12:05:50 AM +00:00
[15/01/2019 11:55:50 PM] Executed 'MakeUppercase' (Failed, Id=10b7495f-9cfb-4d75-bc31-db68581b8055)
[15/01/2019 11:55:50 PM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=385beb36-80b7-47a5-ba65-b0d04f956cc6)
[15/01/2019 11:55:50 PM] Message Dequeued : 4 time(s)
[15/01/2019 11:55:50 PM] Message Next Visible : 16/01/2019 12:05:50 AM +00:00
[15/01/2019 11:55:51 PM] Executed 'MakeUppercase' (Failed, Id=385beb36-80b7-47a5-ba65-b0d04f956cc6)
[15/01/2019 11:55:51 PM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=f6acfb11-41fb-416e-88b1-e113aa4424f5)
[15/01/2019 11:55:51 PM] Message Dequeued : 5 time(s)
[15/01/2019 11:55:51 PM] Message Next Visible : 16/01/2019 12:05:51 AM +00:00
[15/01/2019 11:55:51 PM] Executed 'MakeUppercase' (Failed, Id=f6acfb11-41fb-416e-88b1-e113aa4424f5)
[15/01/2019 11:55:51 PM] Message has reached MaxDequeueCount of 5. Moving message to queue 'input-queue-poison'.

Notice in the preceding output, the next visible times don’t include a delay in when the message can potentially be retried.

The next visible time controls when the message will become visible to be consumed. The default value in Azure Functions is 0 (no delay). You may want to change this default if you want to add some delay between message retries (for example to help prevent message loss* for transient failures).

* Eventually, failed messages will be moved to a poison message queue.

The next visible time can be configured in the host.json file (we are using Azure Functions V2 in this article):

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "visibilityTimeout": "00:00:30" 
    } 
  }  
}

The visibilityTimeout value represents a timespan (HH:MM:SS) to wait before a message becomes visible next time, in the preceding configuration, 30 seconds. Running again with this new configuration, the following output can be seen:

[16/01/2019 12:13:01 AM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=1f4f7177-4de6-4f4c-98c1-48d318892112)
[16/01/2019 12:13:01 AM] Message Dequeued : 1 time(s)
[16/01/2019 12:13:01 AM] Message Next Visible : 16/01/2019 12:23:00 AM +00:00
[16/01/2019 12:13:01 AM] Executed 'MakeUppercase' (Failed, Id=1f4f7177-4de6-4f4c-98c1-48d318892112)
[16/01/2019 12:13:32 AM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=93e9a90c-dd83-410f-9435-003712f64513)
[16/01/2019 12:13:32 AM] Message Dequeued : 2 time(s)
[16/01/2019 12:13:32 AM] Message Next Visible : 16/01/2019 12:23:32 AM +00:00
[16/01/2019 12:13:33 AM] Executed 'MakeUppercase' (Failed, Id=93e9a90c-dd83-410f-9435-003712f64513)
[16/01/2019 12:14:04 AM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=7ffe157b-7186-4f84-b8eb-02c43a260352)
[16/01/2019 12:14:04 AM] Message Dequeued : 3 time(s)
[16/01/2019 12:14:04 AM] Message Next Visible : 16/01/2019 12:24:04 AM +00:00
[16/01/2019 12:14:04 AM] Executed 'MakeUppercase' (Failed, Id=7ffe157b-7186-4f84-b8eb-02c43a260352)
[16/01/2019 12:14:36 AM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=ba3c186b-8b33-4b4c-b896-724a95fa2b25)
[16/01/2019 12:14:36 AM] Message Dequeued : 4 time(s)
[16/01/2019 12:14:36 AM] Message Next Visible : 16/01/2019 12:24:36 AM +00:00
[16/01/2019 12:14:36 AM] Executed 'MakeUppercase' (Failed, Id=ba3c186b-8b33-4b4c-b896-724a95fa2b25)
[16/01/2019 12:15:07 AM] Executing 'MakeUppercase' (Reason='New queue message detected on 'input-queue'.', Id=3de85cf0-89ad-430f-8219-ebd7b1701d4d)
[16/01/2019 12:15:07 AM] Message Dequeued : 5 time(s)
[16/01/2019 12:15:07 AM] Message Next Visible : 16/01/2019 12:25:07 AM +00:00
[16/01/2019 12:15:07 AM] Executed 'MakeUppercase' (Failed, Id=3de85cf0-89ad-430f-8219-ebd7b1701d4d)
[16/01/2019 12:15:07 AM] Message has reached MaxDequeueCount of 5. Moving message to queue 'input-queue-poison'.

Notice now that the message won’t be retried for 30 seconds between each attempt (look at the “Message Next Visible” lines).

Setting a visibility timeout other than zero will not prevent other messages that arrive on the queue from being processed while waiting for retried messages to become visible again.

If you want to fill in the gaps in your C# knowledge be sure to check out my C# Tips and Traps training course from Pluralsight – get started with a free trial.
