We followed this example (http://masstransit-project.com/MassTransit/usage/azure-functions.html) to set up Azure Functions as Azure Service Bus event (topic) subscribers using MassTransit (on .NET Core 2.1, Azure Functions 2.0).
When using Azure WebJobs this is as simple as with RabbitMQ: configure the publisher, let the subscriber configure and set up its queue, and MassTransit automatically creates one topic per event type, forwards it to the queue, and moves messages to "queue_error" after all retries have failed. Nothing has to be set up manually.
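For context, the WebJobs-style setup we are referring to looks roughly like this (a sketch: the connection string is a placeholder, the ReceiveEndpoint overload varies by MassTransit version, and the consumer construction is illustrative):
var bus = Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Host("Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...");

    cfg.ReceiveEndpoint("ordershippedconsumer-queue", e =>
    {
        // Registering the consumer is enough: MassTransit creates the topic for
        // IOrderShipped, a subscription forwarding to this queue, and (after all
        // retries fail) the ordershippedconsumer-queue_error queue.
        e.Consumer(() => new OrderShippedConsumer(log, config));
    });
});
await bus.StartAsync();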
But with Azure Functions we seem to have to manually (through Service Bus Explorer or ARM templates) add the subscriptions to the topic (which is created by the publisher on the first event it publishes), and the queues as well (though the queues don't even seem to be necessary; the events are handled directly by the consuming Azure Function topic subscribers).
Maybe we are doing something wrong; we cannot see from the docs that MassTransit will not, as it normally does, set up the subscriptions and create the queues when using Azure Functions. And it works, except when the consumer throws an exception and all configured retries have been exhausted: the event never reaches the dead-letter queue, and the error queue that MassTransit normally generates is never created.
So how do we get MassTransit to create the error queues, and MOVE the failed events there?
Our code:
[FunctionName("OrderShippedConsumer")]
public static Task OrderShippedConsumer(
[ServiceBusTrigger("xyz.events.order/iordershipped", "ordershippedconsumer-queue", Connection = "AzureServiceBus")] Message message,
IBinder binder,
ILogger logger,
CancellationToken cancellationToken,
ExecutionContext context)
{
var config = CreateConfig(context);
var handler = Bus.Factory.CreateBrokeredMessageReceiver(binder, cfg =>
{
var serviceBusEndpoint = Parse.ConnectionString(config["AzureServiceBus"])["Endpoint"];
cfg.CancellationToken = cancellationToken;
cfg.SetLog(logger);
cfg.InputAddress = new Uri($"{serviceBusEndpoint}{QueueName}");
cfg.UseRetry(x => x.Intervals(TimeSpan.FromSeconds(5)));
cfg.Consumer(() => new OrderShippedConsumer(cfg.Log, config));
});
return handler.Handle(message);
}
And the Consumer code:
public class OrderShippedConsumer : IConsumer<IOrderShipped>
{
    private readonly IConfigurationRoot config;
    private readonly ILog log;

    public OrderShippedConsumer(ILog log, IConfigurationRoot config)
    {
        this.config = config;
        this.log = log;
    }

    public async Task Consume(ConsumeContext<IOrderShipped> context)
    {
        // Handle the event
    }
}
I am trying sample code for an Azure Event Hub producer, sending some messages to an Azure Event Hub.
The event hub and its policy are correctly configured for sending and listening to messages. I am using a .NET Core 3.1 console application. However, the code doesn't move beyond the CreateBatchAsync() call. I tried debugging, and the breakpoint never reaches the next line. I tried try-catch-finally, and still no progress. Please advise what I am doing wrong here. The Event Hub on Azure shows some number of successful incoming requests.
using System;
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class Program
{
    private const string connectionString = "<event_hub_connection_string>";
    private const string eventHubName = "<event_hub_name>";

    static async Task Main()
    {
        // Create a producer client that you can use to send events to an event hub
        await using (var producerClient = new EventHubProducerClient(connectionString, eventHubName))
        {
            // Create a batch of events
            using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();

            // Add events to the batch. An event is represented by a collection of bytes and metadata.
            eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("First event")));
            eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Second event")));
            eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Third event")));

            // Use the producer client to send the batch of events to the event hub
            await producerClient.SendAsync(eventBatch);
            Console.WriteLine("A batch of 3 events has been published.");
        }
    }
}
The call to CreateBatchAsync is the first operation that needs to create a connection to Event Hubs, so this indicates that you're likely experiencing a connectivity or authorization issue.
In the default configuration, the network timeout is 60 seconds and up to 3 retries are possible, with some back-off between them.
Because of this, a failure to connect or authorize may take up to roughly 5 minutes before it manifests (up to 4 attempts of 60 seconds each, plus back-off). That said, the majority of connection errors are not eligible for retries, so the failure would normally surface after roughly 1 minute.
To aid in your debugging, I'd suggest tweaking the default retry policy to speed things up and surface an exception more quickly so that you have the information needed to troubleshoot and make adjustments. The options to do so are discussed in this sample and would look something like:
var connectionString = "<< CONNECTION STRING FOR THE EVENT HUBS NAMESPACE >>";
var eventHubName = "<< NAME OF THE EVENT HUB >>";

var options = new EventHubProducerClientOptions
{
    RetryOptions = new EventHubsRetryOptions
    {
        // Allow the network operation only 15 seconds to complete.
        TryTimeout = TimeSpan.FromSeconds(15),

        // Turn off retries.
        MaximumRetries = 0,
        Mode = EventHubsRetryMode.Fixed,
        Delay = TimeSpan.FromMilliseconds(10),
        MaximumDelay = TimeSpan.FromSeconds(1)
    }
};

await using var producer = new EventHubProducerClient(
    connectionString,
    eventHubName,
    options);
I have the following Azure Storage queue-triggered Azure Function, which is bound to an Azure Table for its output.
[FunctionName("TestFunction")]
public static async Task<IActionResult> Run(
[QueueTrigger("myqueue", Connection = "connection")]string myQueueItem,
[Table("TableXyzObject"), StorageAccount("connection")] IAsyncCollector<TableXyzObject> tableXyzObjectRecords)
{
var tableAbcObject = new TableXyzObject();
try
{
tableAbcObject.PartitionKey = DateTime.UtcNow.ToString("MMddyyyy");
tableAbcObject.RowKey = Guid.NewGuid();
tableAbcObject.RandomString = myQueueItem;
await tableXyzObjectRecords.AddAsync(tableAbcObject);
}
catch (Exception ex)
{
}
return new OkObjectResult(tableAbcObject);
}
public class TableXyzObject : TableEntity
{
public string RandomString { get; set; }
}
}
}
I am looking for a way to read 15 messages from a poison queue that is different from myqueue (the queue trigger on the above Azure Function) and batch insert them into a dynamic table (tableXyz, tableAbc, etc.) based on a few conditions in the queue message. Since we have different poison queues, we want to pick up messages from multiple poison queues (the name of the poison queue will be provided in the myqueue message). This is done to avoid spinning up a new Azure Function every time we have a new poison queue.
Following is the approach I have in mind:
--> I might have to get 15 queue messages using a new QueueClient and the ReceiveMessages(15) method of the Azure.Storage.Queues package
--> And do a batch insert using the TableBatchOperation class (cannot use an output binding)
Is there any better approach than this?
Unfortunately, storage queues don't have a great solution for this. If you want it to be dynamic, then the idea of implementing your own queue clients and table writes is probably your best option (a sketch follows below). The one thing I would suggest changing is using a timer trigger instead of a queue trigger. If you are putting a message on your trigger queue every time you add something to the poison queue, it would work as is; but if not, a timer trigger ensures that poisoned messages are handled in a timely fashion.
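For illustration, a minimal sketch of that dynamic approach, assuming the Azure.Storage.Queues and Microsoft.Azure.Cosmos.Table packages; the connection string, queue, and table names are placeholders, and TableXyzObject is the entity from the question:
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Microsoft.Azure.Cosmos.Table;

public static class PoisonQueueDrainer
{
    public static async Task DrainAsync(string connectionString, string poisonQueueName, string tableName)
    {
        var queueClient = new QueueClient(connectionString, poisonQueueName);
        var messages = (await queueClient.ReceiveMessagesAsync(maxMessages: 15)).Value;

        var table = CloudStorageAccount.Parse(connectionString)
            .CreateCloudTableClient()
            .GetTableReference(tableName);
        await table.CreateIfNotExistsAsync();

        // All entities in a single batch must share the same partition key.
        var batch = new TableBatchOperation();
        var partitionKey = DateTime.UtcNow.ToString("MMddyyyy");
        foreach (var message in messages)
        {
            batch.Insert(new TableXyzObject
            {
                PartitionKey = partitionKey,
                RowKey = Guid.NewGuid().ToString(),
                RandomString = message.MessageText
            });
        }

        if (batch.Count > 0)
            await table.ExecuteBatchAsync(batch);

        // Only delete the messages once the batch insert has succeeded.
        foreach (var message in messages)
            await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
    }
}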
Original Answer (incorrectly relating to Service Bus queues)
Bryan is correct that creating a new queue client inside your function isn't the best way to go about this. Fortunately, the Service Bus extension does allow batching. Unfortunately the docs haven't quite caught up yet.
Just make your trigger receive an array:
[QueueTrigger("myqueue", Connection = "connection")]string myQueueItem[]
You can set your max batch size in the host.json:
"extensions": {
"serviceBus": {
"batchOptions": {
"maxMessageCount": 15
}
}
}
It really annoys me that we're unable to move messages from a dead-letter queue back to the original queue for processing when using Azure Service Bus. So I figured I would try to implement this feature myself. We are using MassTransit to publish events. The queue name in ASB will be an event's full assembly name.
I've created a REST endpoint in my application to move messages from the DLQ to the original queue for reprocessing. This is where I'm stuck at the moment.
To get all messages in a DLQ, the user gives me the queue name, and I format it to point at the dead-letter queue, like this:
myproject.events.usercreatedevent -> myproject.events.usercreatedevent/$DeadLetterQueue
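As an aside, the Microsoft.Azure.ServiceBus package has a helper that produces exactly this path:
var dlqPath = EntityNameHelper.FormatDeadLetterPath("myproject.events.usercreatedevent");
// -> myproject.events.usercreatedevent/$DeadLetterQueue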
I get all the messages from this queue by using classes from the NuGet package Microsoft.Azure.ServiceBus:
public async Task RequeueMessagesAsync(string queueName)
{
    var messageReceiver = new MessageReceiver(BuildConnectionString(), queueName);
    var messages = await messageReceiver.PeekAsync(50);
    foreach (var message in messages)
    {
        var content = Encoding.UTF8.GetString(message.Body);
        var jsonObject = JsonConvert.DeserializeObject<JObject>(content);
        var destinationAddress = jsonObject["destinationAddress"].ToString();
        var messageContent = jsonObject["message"].ToString();
        var messageType = destinationAddress.Split("/").Last();
        await _bus.SendAsync(jsonObject, messageType);
    }
}
When calling _bus.SendAsync(object, address), the message ends up in a _skipped queue. I think the reason for this is that the message headers are set to JObject, and not the actual message type. I cannot use reflection to recreate the event either, as we have a lot of microservices and the source code of the event is not necessarily available. The code behind _bus.SendAsync(object, address) looks like this:
public async Task SendAsync(object message, string queueName, CancellationToken cancellationToken = default)
{
    ISendEndpoint sender = await GetSenderAsync(queueName);
    sender.ConnectSendObserver(new ErrorQueueConfiguration(_addressProvider.GetAddress("error")));
    await sender.Send(message, cancellationToken);
}
Can I trick MassTransit into forwarding this "unknown" type to my consumer by changing the message headers somehow? Has anyone successfully moved messages from a DLQ back to its original queue?
I have a scenario in which I am calling RegisterMessageHandler of the SubscriptionClient class of the Azure Service Bus library.
Basically, I am using a trigger-based approach while receiving messages from Service Bus in one of my services in a Service Fabric environment, as a stateless service.
So I am not closing the subscriptionClient object immediately; rather, I keep it open for the lifetime of the service so that it keeps receiving messages from Azure Service Bus topics.
And when the service needs to shut down (for some reason), I want to handle the cancellation token being passed into the Service Fabric service.
My question is: how can I handle the cancellation token in the RegisterMessageHandler callback, which gets called whenever a new message is received?
Also, I want to close the subscription client gracefully; that is, if a message is already being processed, I want that message to be processed completely before the connection is closed.
Below is the code I am using.
Currently we are following the below approach:
1. Locking the processing of the message using a semaphore lock and releasing the lock in the finally block.
2. Calling the cancellationToken.Register method to handle the cancellation token whenever cancellation occurs, and releasing the lock in the Register callback.
public class AzureServiceBusReceiver
{
    private SubscriptionClient subscriptionClient;
    private static Semaphore semaphoreLock;

    public AzureServiceBusReceiver(ServiceBusReceiverSettings settings)
    {
        semaphoreLock = new Semaphore(1, 1);
        subscriptionClient = new SubscriptionClient(
            settings.ConnectionString, settings.TopicName, settings.SubscriptionName, ReceiveMode.PeekLock);
    }

    public void Receive(CancellationToken cancellationToken)
    {
        var options = new MessageHandlerOptions(e =>
        {
            return Task.CompletedTask;
        })
        {
            AutoComplete = false,
        };

        subscriptionClient.RegisterMessageHandler(
            async (message, token) =>
            {
                semaphoreLock.WaitOne();
                if (subscriptionClient.IsClosedOrClosing)
                    return;
                CancellationToken combinedToken = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken, token).Token;
                try
                {
                    // message processing logic
                }
                catch (Exception ex)
                {
                    await subscriptionClient.DeadLetterAsync(message.SystemProperties.LockToken);
                }
                finally
                {
                    semaphoreLock.Release();
                }
            }, options);

        cancellationToken.Register(() =>
        {
            semaphoreLock.WaitOne();
            if (!subscriptionClient.IsClosedOrClosing)
                subscriptionClient.CloseAsync().GetAwaiter().GetResult();
            semaphoreLock.Release();
            return;
        });
    }
}
Implement the message client as an ICommunicationListener, so that when the service is closed, you can block the call until message processing is complete.
Don't use a static Semaphore, so you can safely reuse the code within your projects.
Here is an example of how you can do this.
And here's the Nuget package created by that code.
And feel free to contribute!
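For illustration, here is a minimal sketch of the shape such a listener could take, assuming the Microsoft.ServiceFabric.Services and Microsoft.Azure.ServiceBus packages; ServiceBusReceiverSettings is the settings class from the question, and the processing logic is a placeholder:
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

public class ServiceBusCommunicationListener : ICommunicationListener
{
    private readonly SubscriptionClient client;
    private readonly SemaphoreSlim inFlight = new SemaphoreSlim(1, 1);

    public ServiceBusCommunicationListener(ServiceBusReceiverSettings settings)
    {
        client = new SubscriptionClient(
            settings.ConnectionString, settings.TopicName, settings.SubscriptionName, ReceiveMode.PeekLock);
    }

    public Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        client.RegisterMessageHandler(async (message, token) =>
        {
            await inFlight.WaitAsync(token);
            try
            {
                // message processing logic
                await client.CompleteAsync(message.SystemProperties.LockToken);
            }
            finally
            {
                inFlight.Release();
            }
        }, new MessageHandlerOptions(e => Task.CompletedTask) { AutoComplete = false });

        // No listening address to publish for a message-driven listener.
        return Task.FromResult(string.Empty);
    }

    public async Task CloseAsync(CancellationToken cancellationToken)
    {
        // Wait until any in-flight message finishes, then close the client.
        await inFlight.WaitAsync(cancellationToken);
        try
        {
            await client.CloseAsync();
        }
        finally
        {
            inFlight.Release();
        }
    }

    public void Abort()
    {
        client.CloseAsync().GetAwaiter().GetResult();
    }
}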
When an exception is thrown from a WebJob, it exits without logging to Application Insights. I observed that flushing the logs to Application Insights takes a few minutes, so we are missing the exceptions here. How do we handle this?
Also, is there a way to move the message that hit the exception to the poison queue automatically, without manually inserting that message into the poison queue?
I am using the latest stable 3.x versions of the two NuGet packages:
Microsoft.Azure.WebJobs and Microsoft.Azure.WebJobs.Extensions
I created a host implementing IHost as below:
var builder = new HostBuilder()
    .UseEnvironment("Development")
    .ConfigureWebJobs(b =>
    {
        ...
    })
    .ConfigureLogging((context, b) =>
    {
        string appInsightsKey = context.Configuration["APPINSIGHTS_INSTRUMENTATIONKEY"];
        if (!string.IsNullOrEmpty(appInsightsKey))
        {
            b.AddApplicationInsights(o => o.InstrumentationKey = appInsightsKey);
            appInsights.TrackEvent("Application Insights is starting!!");
        }
    })
    .ConfigureServices(services =>
    {
        ...
    })
    .UseConsoleLifetime();

var host = builder.Build();
using (host)
{
    host.RunAsync().Wait();
}
and Function.cs
public static async void ProcessQueueMessageAsync([QueueTrigger("queue")] Message message, int dequeueCount, IBinder binder, ILogger logger)
{
    switch (message.Name)
    {
        case blah:
            ...
            break;
        default:
            logger.LogError("Invalid Message object in the queue.", message);
            logger.LogWarning("Current dequeue count: " + dequeueCount);
            throw new InvalidOperationException("Simulated Failure");
    }
}
My questions are:
1) When the default case is hit, the WebJob terminates immediately and the loggers are not flushed to App Insights, even after waiting and starting the WebJob again. As it takes a few minutes to show up in App Insights, and the WebJob stops, I am losing the error logs. How do I handle this?
2) In the sample WebJobs here, https://github.com/Azure/azure-webjobs-sdk-samples/blob/master/BasicSamples/QueueOperations/Functions.cs, they use JobHost host = new JobHost(); and if the 'FailAlways' function fails, it automatically retries 5 times and pushes the message into the poison queue. But this is not happening in my code. Is it because of the different hosts? Or do I have to add more configuration?
Try changing your function to return Task instead of void:
public static async Task ProcessQueueMessageAsync([QueueTrigger("queue")] Message message, int dequeueCount, IBinder binder, ILogger logger)
This worked for me; previously, even though I was logging the error and throwing the exception, Application Insights would show either a successful invocation or no invocation at all.
After inspecting the source code of the Application Insights SDK, it became apparent that to get an Exception in Application Insights, you must pass an exception object into the LogError call.
logger.LogError(ex, "my error message") - will result in an Application Insights Exception
logger.LogError("my error message") - will result in an Application Insights Trace
is there a way to move the message which hit the exception to poison queue automatically without manually inserting that message to poison queue?
You could set config.Queues.MaxDequeueCount = 1; in the WebJob. This is the number of times to try processing a message before moving it to the poison queue.
And where should the MaxDequeueCount configuration be added in the code?
You could set the property on JobHostConfiguration in Program.cs.
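For example, a sketch covering both hosting models: the JobHostConfiguration approach this answer refers to (WebJobs SDK 2.x), and the equivalent QueuesOptions setting for the 3.x HostBuilder model used in the question (assuming the Microsoft.Azure.WebJobs.Extensions.Storage package):
// WebJobs SDK 2.x, using JobHostConfiguration:
var config = new JobHostConfiguration();
config.Queues.MaxDequeueCount = 5; // attempts before the message is moved to the poison queue
new JobHost(config).RunAndBlock();

// WebJobs SDK 3.x, using the HostBuilder from the question:
var builder = new HostBuilder()
    .ConfigureWebJobs(b =>
    {
        b.AddAzureStorage(queues => queues.MaxDequeueCount = 5);
    });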