How to get queue message type - Azure

I'm using Azure Storage Queues and I want to write some code that retrieves all queues and then finds a handler that can process the messages in each queue. For that I defined an interface like this:
public interface IHandler<T>
I have multiple implementations of this interface, such as IHandler<CreateAccount> and IHandler<CreateOrder>. I use one queue per message type, so CreateAccount messages go into the create-account-queue.
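The interface body isn't shown above; a plausible shape, with one hypothetical implementation for illustration (the Handle method name and the CreateAccountHandler class are assumptions, not part of my actual code), would be:
public interface IHandler<T>
{
    void Handle(T message);
}

// Hypothetical implementation, for illustration only.
public class CreateAccountHandler : IHandler<CreateAccount>
{
    public void Handle(CreateAccount message)
    {
        // create the account...
    }
}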
How do I hook these up? In order to find the right Handler class for a message, I first need to know the message type, but it seems that CloudQueueMessage objects don't contain that information.

Not really an answer to your question, but I will share how we're handling the exact same situation in our application.
In our application, we're sending different kinds of messages like you are and handling those messages in a background process.
What we're doing is including the message type in the message body itself. So our message typically looks like:
message: {
    type: 'Type Of Message',
    contents: {
        // message contents
    }
}
One key difference is that all messages go into a single queue (instead of different queues, as in your case). The receiver (background process) just polls that one queue, gets a message, identifies the type of the message, and calls the handler for that message accordingly.
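A minimal sketch of that dispatch, assuming the body is a JSON object with top-level type and contents fields as above and that Newtonsoft.Json is available; the registry and the handler classes (CreateAccountHandler, CreateOrderHandler, the Handle method) are illustrative, not part of the answer:
using System;
using System.Collections.Generic;
using Newtonsoft.Json.Linq;

public static class Dispatcher
{
    // Hypothetical registry: maps the "type" string to a delegate that
    // deserializes the "contents" element and invokes the matching handler.
    private static readonly Dictionary<string, Action<JToken>> Handlers =
        new Dictionary<string, Action<JToken>>
        {
            ["CreateAccount"] = contents => new CreateAccountHandler().Handle(contents.ToObject<CreateAccount>()),
            ["CreateOrder"] = contents => new CreateOrderHandler().Handle(contents.ToObject<CreateOrder>())
        };

    public static void Dispatch(string messageBody)
    {
        JObject envelope = JObject.Parse(messageBody);
        string type = (string)envelope["type"];

        if (Handlers.TryGetValue(type, out Action<JToken> handle))
            handle(envelope["contents"]);
        else
            throw new InvalidOperationException($"No handler registered for message type '{type}'.");
    }
}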

You can associate metadata with each queue. Since you mentioned that you use one queue per message type, you could put the handler name in the metadata for each queue. You can then enumerate all queues and get the metadata per queue that tells you what type of handler you should use. Here's a quick console app that demonstrates what I think you're asking for:
using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

namespace QueueDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            //get a reference to our storage account.
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse("UseDevelopmentStorage=true;");
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();

            //create our queues and add metadata showing what type of class each queue contains.
            CloudQueue queue1 = cloudQueueClient.GetQueueReference("queue1");
            queue1.Metadata.Add("classtype", "classtype1");
            queue1.CreateIfNotExists();

            CloudQueue queue2 = cloudQueueClient.GetQueueReference("queue2");
            queue2.Metadata.Add("classtype", "classtype2");
            queue2.CreateIfNotExists();

            //enumerate the queues in the storage account and look at their metadata...
            QueueContinuationToken token = null;
            List<CloudQueue> cloudQueueList = new List<CloudQueue>();
            List<string> queueNames = new List<string>();
            do
            {
                QueueResultSegment segment = cloudQueueClient.ListQueuesSegmented(token);
                token = segment.ContinuationToken;
                cloudQueueList.AddRange(segment.Results);
            }
            while (token != null);

            try
            {
                foreach (CloudQueue cloudQ in cloudQueueList)
                {
                    //call this, or else the metadata won't be populated for the queue.
                    cloudQ.FetchAttributes();
                    Console.WriteLine("Cloud Queue name = {0}, class type = {1}", cloudQ.Name, cloudQ.Metadata["classtype"]);
                    queueNames.Add(cloudQ.Name);
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Exception thrown listing queues: " + ex.Message);
                throw;
            }

            //clean up after ourselves and delete the queues.
            foreach (string oneQueueName in queueNames)
            {
                CloudQueue cloudQueue = cloudQueueClient.GetQueueReference(oneQueueName);
                cloudQueue.DeleteIfExists();
            }

            Console.ReadKey();
        }
    }
}
However, it might be easier to subclass your message type, then dequeue each message, identify which subclass you're currently looking at, and pass it to the proper handler.
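To tie the metadata idea back to the IHandler<T> interface from the question, here is a rough, untested sketch: store an assembly-qualified type name in the queue's "classtype" metadata, deserialize the message body to that type, and resolve the matching handler. The ResolveHandler call is a placeholder, not a real API; plug in your own container or lookup.
using System;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;

public static class QueueDispatcher
{
    // Assumes the queue's "classtype" metadata holds an assembly-qualified type name
    // and that a matching IHandler<T> implementation can be resolved somewhere.
    public static void ProcessNextMessage(CloudQueue queue)
    {
        queue.FetchAttributes();
        Type messageType = Type.GetType(queue.Metadata["classtype"], throwOnError: true);

        CloudQueueMessage raw = queue.GetMessage();
        if (raw == null) return;

        object body = JsonConvert.DeserializeObject(raw.AsString, messageType);

        dynamic handler = ResolveHandler(messageType); // e.g. from a DI container
        handler.Handle((dynamic)body);

        queue.DeleteMessage(raw);
    }

    private static object ResolveHandler(Type messageType)
    {
        // Placeholder: look up IHandler<messageType> in your container of choice.
        throw new NotImplementedException();
    }
}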

Related

Best practices for poison message handling with an Azure Service Bus topic

Dealing with poison messages (messages that throw an exception while being consumed) from Azure Service Bus can lead to retry loops until the number of retries reaches the maxDeliveryCount setting of the topic subscription.
Does the SequenceNumber that Azure Service Bus assigns to a message keep increasing on each failed attempt until it reaches maxDeliveryCount?
Is setting maxDeliveryCount = 1 a best practice for dealing with poison messages, so that the consumer never attempts to process a message a second time once it has failed?
Best practices depend on your application and your retry approach.
Most of the time I've noticed messages fail because of:
A dependent service not being available (Redis, SQL connection issues)
A faulty message (the message is missing a mandatory parameter or some value is incorrect)
A processing code issue (a bug in the message processing code)
For the 1st and 3rd scenarios, I created a C# web job to run and reprocess dead-letter messages.
Below is my code:
using System;
using System.Configuration;
using System.IO;
using System.Text;
using Microsoft.ServiceBus.Messaging;

internal class Program
{
    private static string connectionString = ConfigurationManager.AppSettings["GroupAssetConnection"];
    private static string topicName = ConfigurationManager.AppSettings["GroupAssetTopic"];
    private static string subscriptionName = ConfigurationManager.AppSettings["GroupAssetSubscription"];
    private static string databaseEndPoint = ConfigurationManager.AppSettings["DatabaseEndPoint"];
    private static string databaseKey = ConfigurationManager.AppSettings["DatabaseKey"];
    private static string deadLetterQueuePath = "/$DeadLetterQueue";

    // groupAssetSyncService, documentClient and log are initialized elsewhere in the
    // original application; their setup is omitted here.
    private static void Main(string[] args)
    {
        try
        {
            ReadDLQMessages(groupAssetSyncService, log);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
            throw;
        }
        finally
        {
            documentClient.Dispose();
        }

        Console.WriteLine("All messages read successfully from the dead-letter queue");
        Console.ReadLine();
    }

    public static void ReadDLQMessages(IGroupAssetSyncService groupSyncService, ILog log)
    {
        int counter = 1;
        SubscriptionClient subscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, topicName, subscriptionName + deadLetterQueuePath);
        while (true)
        {
            BrokeredMessage bMessage = subscriptionClient.Receive(TimeSpan.FromMilliseconds(500));
            if (bMessage != null)
            {
                string message = new StreamReader(bMessage.GetBody<Stream>(), Encoding.UTF8).ReadToEnd();
                groupSyncService.UpdateDataAsync(message).GetAwaiter().GetResult();
                Console.WriteLine($"{counter} message received");
                counter++;
                bMessage.Complete();
            }
            else
            {
                break;
            }
        }
        subscriptionClient.Close();
    }
}
For the 2nd scenario, we manually verify dead-letter messages (via a custom UI or Service Bus Explorer); sometimes we correct the message data, and sometimes we purge the messages and clear the queue.
I wouldn't recommend maxDeliveryCount = 1. If a transient network/connection issue occurs, the built-in retry will reprocess the message and clear it from the queue. When I was working on a finance application I kept maxDeliveryCount = 5, while in my IoT application it is 3.
If you are reading messages in batches, the whole batch will be reprocessed if an error occurs for any one of its messages.
SequenceNumber: The sequence number can be trusted as a unique identifier since it is assigned by a central and neutral authority and not by clients. It also represents the true order of arrival, and is more precise than a time stamp as an order criterion, because time stamps may not have a high enough resolution at extreme message rates and may be subject to (however minimal) clock skew in situations where the broker ownership transitions between nodes.
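If you want to observe this yourself, a small illustrative snippet (connection string and entity names below are placeholders) can log SequenceNumber and DeliveryCount while repeatedly abandoning a message: the DeliveryCount increases on each attempt, while the SequenceNumber is assigned once at enqueue time and stays the same.
using System;
using Microsoft.ServiceBus.Messaging;

class SequenceNumberDemo
{
    static void Main()
    {
        // Placeholder connection string and entity names.
        var client = SubscriptionClient.CreateFromConnectionString(
            "<connection-string>", "<topic>", "<subscription>");

        for (int attempt = 1; attempt <= 3; attempt++)
        {
            BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
            if (message == null) break;

            Console.WriteLine($"Attempt {attempt}: SequenceNumber={message.SequenceNumber}, DeliveryCount={message.DeliveryCount}");

            // Abandon to simulate a failed processing attempt; the same message comes back
            // with an incremented DeliveryCount but the same SequenceNumber.
            message.Abandon();
        }
    }
}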

In queue-triggered Azure Webjobs can an Azure Storage Queue message be modified after webjob function failure but before poisoning?

I've got queue-triggered functions in my Azure WebJobs. The normal behavior, of course, is that when a function fails MaxDequeueCount times, the message is put into the appropriate poison queue. I would like to modify the message after the error but before it is inserted into the poison queue. Example:
Initial message:
{ "Name":"Tom", "Age", 30" }
And upon failure I want to modify the message as follows and have the modified message be inserted into the poison queue:
{ "Name":"Tom", "Age", 30", "ErrorMessage":"Unable to find user" }
Can this be done?
According to the Webjobs documentation, messages will get put on the poison queue after 5 failed attempts to process the message:
The SDK will call a function up to 5 times to process a queue message.
If the fifth try fails, the message is moved to a poison queue. The
maximum number of retries is configurable.
Source: https://github.com/Azure/azure-webjobs-sdk/wiki/Queues#poison
This is the automatic behavior. But you can still handle exceptions in your WebJobs function code (so the exception doesn't leave your function and the automatic poison-message handling is not triggered) and put a modified message onto the poison queue using output bindings.
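As a rough sketch of that approach (the queue names are examples, and the JSON manipulation assumes Newtonsoft.Json is available):
public static void ProcessQueueMessage(
    [QueueTrigger("myqueue")] string messageText,
    [Queue("myqueue-poison")] out string poisonMessageText,
    TextWriter logger)
{
    poisonMessageText = null; // nothing is written to the poison queue unless we set this
    try
    {
        // process messageText ...
    }
    catch (Exception ex)
    {
        // Handle the failure ourselves so the SDK doesn't retry/poison automatically,
        // and write an enriched copy of the message to the poison queue.
        var json = Newtonsoft.Json.Linq.JObject.Parse(messageText);
        json["ErrorMessage"] = ex.Message;
        poisonMessageText = json.ToString();
        logger.WriteLine("Moved failed message to poison queue: " + ex.Message);
    }
}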
Another option would be to check the dequeueCount property, which indicates how many times processing of the message has been attempted.
You can get the number of times a message has been picked up for
processing by adding an int parameter named dequeueCount to your
function. You can then check the dequeue count in function code and
perform your own poison message handling when the number exceeds a
threshold, as shown in the following example.
public static void CopyBlob(
    [QueueTrigger("copyblobqueue")] string blobName, int dequeueCount,
    [Blob("textblobs/{queueTrigger}", FileAccess.Read)] Stream blobInput,
    [Blob("textblobs/{queueTrigger}-new", FileAccess.Write)] Stream blobOutput,
    TextWriter logger)
{
    if (dequeueCount > 3)
    {
        logger.WriteLine("Failed to copy blob, name=" + blobName);
    }
    else
    {
        blobInput.CopyTo(blobOutput, 4096);
    }
}
(also taken from above link).
Your function signature could look like this
public static void ProcessQueueMessage(
    [QueueTrigger("myqueue")] CloudQueueMessage message,
    [Queue("myqueue-poison")] CloudQueueMessage poisonMessage,
    TextWriter logger)
The default maximum number of retries is 5. You can also set this value yourself using the Queues.MaxDequeueCount property of the JobHostConfiguration instance, with code like below:
static void Main(string[] args)
{
    var config = new JobHostConfiguration();
    config.Queues.MaxDequeueCount = 5; // set the maximum number of retries
    var host = new JobHost(config);
    host.RunAndBlock();
}
Then you can update the failed queue message once the maximum number of retries has been reached. You can specify a non-existing blob container in the binding to make the function fail and so exercise the retry mechanism. Code like below:
public static void ProcessQueueMessage([QueueTrigger("queue")] CloudQueueMessage message, [Blob("container/{queueTrigger}", FileAccess.Read)] Stream myBlob, ILogger logger)
{
    string yourUpdatedString = "\"ErrorMessage\"" + ":" + "\"Unable to find user\"";
    string str1 = message.AsString;
    if (message.DequeueCount == 5) // here, the maximum number of retries is set to 5
    {
        message.SetMessageContent(str1.Replace("}", "," + yourUpdatedString + "}")); // modify the failed message here
    }
    logger.LogInformation($"Blob name:{message} \n Size: {myBlob.Length} bytes");
}
When the above is done, you can see the updated queue message in the queue-poison.
UPDATED:
Since CloudQueueMessage is a sealed class, we cannot inherit from it.
For your MySpecialPoco message, you can use JsonConvert.SerializeObject(message), code like below:
using Newtonsoft.Json;

static int number = 0;

public static void ProcessQueueMessage([QueueTrigger("queue")] object message, [Blob("container/{queueTrigger}", FileAccess.Read)] Stream myBlob, ILogger logger)
{
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
    CloudQueue queue = queueClient.GetQueueReference("queue-poison"); // get the poison queue
    CloudQueueMessage msg1 = new CloudQueueMessage(JsonConvert.SerializeObject(message));
    number++;
    string yourUpdatedString = "\"ErrorMessage\"" + ":" + "\"Unable to find user\"";
    string str1 = msg1.AsString;
    if (number == 5)
    {
        msg1.SetMessageContent(str1.Replace("}", "," + yourUpdatedString + "}"));
        queue.AddMessage(msg1);
        number = 0;
    }
    logger.LogInformation($"Blob name:{message} \n Size: {myBlob.Length} bytes");
}
The downside is that both the original and the updated queue messages end up in the poison queue.

How to DeadLetter a brokered message on custom exception

I need to forcefully move my BrokeredMessage to the dead-letter queue if I get a custom exception.
Here is the code I have used:
public static async Task Run([ServiceBusTrigger("myqueue", Connection = "myservicebus:cs")] BrokeredMessage myQueueItem, TraceWriter log)
{
    try
    {
        // process message logic..
    }
    catch (CustomException ex)
    {
        // forcefully dead-letter if a custom exception occurs
        await myQueueItem.DeadLetterAsync();
    }
}
But sometimes I get MessageLockLost exceptions when I call DeadLetterAsync(), AbandonAsync(), etc. explicitly in my code, even though the lock was not actually lost.
Can anyone suggest the best way to move a brokered message to the dead-letter queue when handling custom exceptions?
Thanks.
Not exactly what you asked for, but a creative workaround:
Add a Service Bus output binding to your function. Instead of dead-lettering the message, add a new message to the output:
public static async Task Run(
    [ServiceBusTrigger("myqueue", Connection = "mysb")] BrokeredMessage myQueueItem,
    [ServiceBus("mydlq", Connection = "mysb")] IAsyncCollector<BrokeredMessage> dlq,
    TraceWriter log)
{
    try
    {
        // process message logic..
    }
    catch (CustomException ex)
    {
        // forward to the "DLQ" when the exception occurs
        var dlqMessage = ...; // you need to create a new message here
        await dlq.AddAsync(dlqMessage);
    }
}
The original message will be successfully completed.
Note that you need to create a new BrokeredMessage and carefully copy all the data and metadata from the original message. If you have no metadata, maybe it's better to change the type of collector to something simple like IAsyncCollector<string>.
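For illustration, the catch block could look roughly like this, using BrokeredMessage.Clone() from the classic Microsoft.ServiceBus.Messaging SDK to copy the body and user properties; the property names added below are just a convention, not required by anything:
public static async Task Run(
    [ServiceBusTrigger("myqueue", Connection = "mysb")] BrokeredMessage myQueueItem,
    [ServiceBus("mydlq", Connection = "mysb")] IAsyncCollector<BrokeredMessage> dlq,
    TraceWriter log)
{
    try
    {
        // process message logic..
    }
    catch (CustomException ex)
    {
        // Clone() copies the body and the user-defined properties of the original message.
        BrokeredMessage dlqMessage = myQueueItem.Clone();
        dlqMessage.Properties["DeadLetterReason"] = ex.GetType().Name;
        dlqMessage.Properties["DeadLetterErrorDescription"] = ex.Message;
        await dlq.AddAsync(dlqMessage);
    }
}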

Azure web jobs - parallel message processing from queues not working properly

I need to provision SharePoint Online team rooms using Azure queues and web jobs.
I have created a console application and published as continuous web job with the following settings:
config.Queues.BatchSize = 1;
config.Queues.MaxDequeueCount = 4;
config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(15);
JobHost host = new JobHost();
host.RunAndBlock();
The trigger function looks like this:
public static void TriggerFunction([QueueTrigger("messagequeue")] CloudQueueMessage message)
{
    ProcessQueueMsg(message.AsString);
}
Inside the ProcessQueueMsg function I'm deserializing the received JSON message into a class and running the following operations:
I'm creating a sub site in an existing site collection;
Using the PnP provisioning engine I'm provisioning content in the sub site (lists, file uploads, permissions, quick launch, etc.).
If there is only one message in the queue to process, everything works correctly.
However, when I send two messages to the queue a few seconds apart, the second one overwrites the class properties while the first is still being processed, before the first message has finished.
I tried to run each message in a separate thread, but then the trigger function is marked as succeeded before the message is actually processed inside my function. This way I have no control over potential exceptions or message dequeueing.
I also tried to limit the number of threads to 1 and use a semaphore, but saw the same behavior:
private const int NrOfThreads = 1;
private static readonly SemaphoreSlim semaphore_ = new SemaphoreSlim(NrOfThreads, NrOfThreads);

//Inside TriggerFunction
try
{
    semaphore_.Wait();
    new Thread(ThreadProc).Start();
}
catch (Exception e)
{
    Console.Error.WriteLine(e);
}

public static void ThreadProc()
{
    try
    {
        DoWork();
    }
    catch (Exception e)
    {
        Console.Error.WriteLine(">>> Error: {0}", e);
    }
    finally
    {
        // release a slot for another thread
        semaphore_.Release();
    }
}

public static void DoWork()
{
    Console.WriteLine("This is a web job invocation: Process Id: {0}, Thread Id: {1}.", System.Diagnostics.Process.GetCurrentProcess().Id, Thread.CurrentThread.ManagedThreadId);
    ProcessQueueMsg();
    Console.WriteLine(">> Thread Done. Processing next message.");
}
Is there a way I can run my processing function for messages in parallel so that my sites are provisioned without interfering with each other?
Please let me know if you need more details.
Thank you in advance!
You're not passing in the config object to your JobHost on construction - that's why your config settings aren't having an effect. Change your code to:
JobHost host = new JobHost(config);
host.RunAndBlock();

Azure Service Bus SessionHandler issue with partitioned queue

I ran into an issue with IMessageSessionAsyncHandlerFactory where new instances of IMessageSessionAsyncHandler are not created when the volume of writes drops to 0 and then comes back up to a normal level.
To be more precise, I'm using SessionHandlerOptions with a value of 500 for MaxConcurrentSessions. This allows reading at a speed of more than 1k msg/s.
The queue I'm reading from is a partitioned queue.
The volume of messages in the queue is rather constant, but from time to time it drops to 0. When the volume gets back to the normal level, the SessionFactory is not spawning any handlers, so I'm not able to read messages anymore. It's as if the sessions were not correctly recycled or are being held in a sort of continuous wait.
Here is the code for registering the factory:
private void RegisterHandler()
{
    var sessionHandlerOptions = new SessionHandlerOptions
    {
        AutoRenewTimeout = TimeSpan.FromMinutes(1),
        MessageWaitTimeout = TimeSpan.FromSeconds(1),
        MaxConcurrentSessions = 500
    };
    _queueClient.RegisterSessionHandlerFactoryAsync(new SessionHandlerFactory(_callback), sessionHandlerOptions);
}
The factory class:
public class SessionHandlerFactory : IMessageSessionAsyncHandlerFactory
{
    private readonly Action<BrokeredMessage> _callback;

    public SessionHandlerFactory(Action<BrokeredMessage> callback)
    {
        _callback = callback;
    }

    public IMessageSessionAsyncHandler CreateInstance(MessageSession session, BrokeredMessage message)
    {
        return new SessionHandler(session.SessionId, _callback);
    }

    public void DisposeInstance(IMessageSessionAsyncHandler handler)
    {
        var disposable = handler as IDisposable;
        disposable?.Dispose();
    }
}
And the handler:
public class SessionHandler : MessageSessionAsyncHandler
{
    private readonly Action<BrokeredMessage> _callback;

    public SessionHandler(string sessionId, Action<BrokeredMessage> callback)
    {
        SessionId = sessionId;
        _callback = callback;
    }

    public string SessionId { get; }

    protected override async Task OnMessageAsync(MessageSession session, BrokeredMessage message)
    {
        try
        {
            _callback(message);
        }
        catch (Exception ex)
        {
            Logger.Error(...);
        }
    }
}
I can see that the session handlers are closed and that the factories are disposed when writing/reading is at a normal level. However, once the queue empties, no new session handlers are created afterwards. Is there a policy for allocating session IDs that forbids reallocating the same sessions after a period of inactivity?
Edit 1:
Two observations (originally shown as screenshots) illustrate the behavior: when the writer is stopped and restarted, the running reader is not able to read as much as before, and the number of sessions created after that point is also much lower than before.
The volume of messages in the queue is rather constant, but from time to time it drops to 0. When the volume gets back to the normal level, the SessionFactory is not spawning any handlers, so I'm not able to read messages anymore. It's as if the sessions were not correctly recycled or are being held in a sort of continuous wait.
When using IMessageSessionAsyncHandlerFactory to control how the IMessageSessionAsyncHandler instances are created, you could try to log the creation and destruction of all of your IMessageSessionAsyncHandler instances.
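For example, a minimal sketch of that logging, using the factory from the question with Console.WriteLine calls added (the timestamps and wording are illustrative):
public class SessionHandlerFactory : IMessageSessionAsyncHandlerFactory
{
    private readonly Action<BrokeredMessage> _callback;

    public SessionHandlerFactory(Action<BrokeredMessage> callback)
    {
        _callback = callback;
    }

    public IMessageSessionAsyncHandler CreateInstance(MessageSession session, BrokeredMessage message)
    {
        // Log every handler creation so you can see whether new sessions are picked up.
        Console.WriteLine($"{DateTime.UtcNow:O} creating handler for session {session.SessionId}");
        return new SessionHandler(session.SessionId, _callback);
    }

    public void DisposeInstance(IMessageSessionAsyncHandler handler)
    {
        // Log disposal as well, to spot handlers that are never released.
        Console.WriteLine($"{DateTime.UtcNow:O} disposing handler for session {(handler as SessionHandler)?.SessionId}");
        (handler as IDisposable)?.Dispose();
    }
}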
Based on your code, I created a console application to reproduce this issue on my side. Here is my code snippet for initializing the queue client and handling messages:
InitializeReceiver
static void InitializeReceiver(string connectionString, string queuePath)
{
    _queueClient = QueueClient.CreateFromConnectionString(connectionString, queuePath, ReceiveMode.PeekLock);
    var sessionHandlerOptions = new SessionHandlerOptions
    {
        AutoRenewTimeout = TimeSpan.FromMinutes(1),
        MessageWaitTimeout = TimeSpan.FromSeconds(5),
        MaxConcurrentSessions = 500
    };
    _queueClient.RegisterSessionHandlerFactoryAsync(new SessionHandlerFactory(OnMessageHandler), sessionHandlerOptions);
}
OnMessageHandler
static void OnMessageHandler(BrokeredMessage message)
{
    var body = message.GetBody<Stream>();
    dynamic recipeStep = JsonConvert.DeserializeObject(new StreamReader(body, true).ReadToEnd());
    lock (Console.Out)
    {
        Console.ForegroundColor = ConsoleColor.Cyan;
        Console.WriteLine(
            "Message received: \n\tSessionId = {0}, \n\tMessageId = {1}, \n\tSequenceNumber = {2}," +
            "\n\tContent: [ title = {3} ]",
            message.SessionId,
            message.MessageId,
            message.SequenceNumber,
            recipeStep.title);
        Console.ResetColor();
    }
    Task.Delay(TimeSpan.FromSeconds(3)).Wait();
    message.Complete();
}
Per my test, the SessionHandler worked as expected when the volume of messages in the queue went from normal to zero and from zero back to normal over a period of time.
I also tried to leverage QueueClient.RegisterSessionHandlerAsync to test this issue and it works as well. Additionally, I found a GitHub sample about Service Bus sessions that you could refer to.
