I'm using an Azure Function that sends an array of around 200 documents to Cosmos DB via the output binding. The function is triggered roughly 1000 times at once by queue messages.
In some cases I get the "Request rate is large" error and the function execution fails. The documentation says that when this error occurs I can retry the operation after some milliseconds, but I suspect the Azure Functions runtime is already doing that for me. I couldn't find any documentation explicitly saying that when the output binding throws that exception it will retry automatically (like the .NET LINQ library does).
Can someone point me to documentation confirming whether this is the case?
The output binding uses DocumentDB SDK 1.13.2, which already has the retry mechanism in place.
Assuming you are using Azure Functions v1: if you are using an IAsyncCollector, the Function does an UpsertDocumentAsync for each AddAsync; if you are using a single document output, UpsertDocumentAsync happens once.
In any case, the SDK retries 9 times by default on a throttled result; after that, the exception bubbles up and your Function will error. The message then goes back to the queue for retrying, as per the QueueTrigger design, and after a couple of iterations it lands in the poison (dead-letter) queue.
If you want more granular control of the flow, you can obtain the DocumentClient and do the UpsertDocumentAsync yourself inside a try/catch; if it fails more than 9 times, you can opt to send the document to another queue or retry another set of times. Something like:
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

[FunctionName("CosmosDBSample")]
public static async Task Run(
    [QueueTrigger("my-queue")] MyPOCOClass myMessage,
    [DocumentDB("test", "test", ConnectionStringSetting = "CosmosDB")] DocumentClient client,
    TraceWriter log)
{
    try
    {
        // UpsertDocumentAsync needs the collection link in addition to the document.
        await client.UpsertDocumentAsync(
            UriFactory.CreateDocumentCollectionUri("test", "test"), myMessage);
    }
    catch (DocumentClientException ex)
    {
        // retry / queue somewhere else?
        log.Warning($"DocumentClientException {ex.Message} in document {myMessage.Id}.");
    }
}
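If you manage the DocumentClient yourself instead of taking it from the binding, the throttling retries can also be tuned through the SDK's ConnectionPolicy. A minimal sketch, assuming the endpoint and key come from your own app settings (the factory name is just illustrative):
using System;
using Microsoft.Azure.Documents.Client;

public static class CosmosClientFactory
{
    // Hypothetical factory; endpoint and key would come from configuration.
    public static DocumentClient Create(string endpoint, string authKey)
    {
        var connectionPolicy = new ConnectionPolicy
        {
            RetryOptions = new RetryOptions
            {
                // Defaults are 9 attempts / 30 seconds; raise them if throttling persists.
                MaxRetryAttemptsOnThrottledRequests = 20,
                MaxRetryWaitTimeInSeconds = 60
            }
        };
        return new DocumentClient(new Uri(endpoint), authKey, connectionPolicy);
    }
}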
I have an issue with an Azure Function Service Bus trigger.
The issue is that the Azure Function does not wait for one message to finish before processing a new one. It processes messages in parallel and does not wait 5 seconds before getting the next message, but I need it to process them sequentially (as in the image below).
How can I do that?
[FunctionName("HttpStartSingle")]
public static void Run(
[ServiceBusTrigger("MyServiceBusQueue", Connection = "Connection")]string myQueueItem,
[OrchestrationClient] DurableOrchestrationClient starter,
ILogger log)
{
Console.WriteLine($"MessageId={myQueueItem}");
Thread.Sleep(5000);
}
I resolved my problem by using this config in my host.json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "maxConcurrentCalls": 1
      }
    }
  }
}
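Note that maxConcurrentCalls applies per host instance. If the Function app scales out to multiple instances, messages can still be processed in parallel across those instances, so if strict sequential processing matters you may also need to restrict scale-out (for example with the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT app setting).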
There are two approaches you can take to accomplish this:
(1) You are looking for Durable Functions with function chaining (see the sketch after this list).
For background jobs you often need to ensure that only one instance of
a particular orchestrator runs at a time. This can be done in Durable
Functions by assigning a specific instance ID to an orchestrator when
creating it.
(2) Partition the data based on the messages you are writing to the queue; that will automatically take care of message ordering, so you do not need to handle it manually in the Azure Function.
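As a rough sketch of approach (1), assuming Durable Functions 1.x, the trigger can forward each message to a singleton orchestration identified by a fixed instance ID (the function and orchestrator names below are hypothetical):
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class SequentialProcessing
{
    [FunctionName("EnqueueWork")]
    public static async Task Run(
        [ServiceBusTrigger("MyServiceBusQueue", Connection = "Connection")] string myQueueItem,
        [OrchestrationClient] DurableOrchestrationClient starter,
        ILogger log)
    {
        const string instanceId = "sequential-processor"; // fixed ID => at most one running instance

        var existing = await starter.GetStatusAsync(instanceId);
        if (existing == null
            || existing.RuntimeStatus == OrchestrationRuntimeStatus.Completed
            || existing.RuntimeStatus == OrchestrationRuntimeStatus.Failed
            || existing.RuntimeStatus == OrchestrationRuntimeStatus.Terminated)
        {
            // Start the (hypothetical) orchestrator only when no instance is currently running.
            await starter.StartNewAsync("ProcessMessageOrchestrator", instanceId, myQueueItem);
            log.LogInformation($"Started orchestration {instanceId} for message {myQueueItem}.");
        }
        else
        {
            log.LogInformation($"Orchestration {instanceId} is already running.");
        }
    }
}
A real implementation would also need to hand messages that arrive while an instance is already running to that instance (for example with RaiseEventAsync) rather than skipping them.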
In general, ordered messaging is not something I'd strive to implement, since the order can, and at some point will, be distorted. That said, in some scenarios it's required. For that, you should either use a Durable Function to orchestrate your messages or use Service Bus message sessions.
Azure Functions has recently added support for ordered message delivery (emphasis on the delivery part, as processing can still fail). It's almost the same as a normal Function, with the slight change that you need to instruct the SDK to utilize sessions.
public async Task Run(
    [ServiceBusTrigger("queue",
        Connection = "ServiceBusConnectionString",
        IsSessionsEnabled = true)] Message message, // Enable Sessions
    ILogger log)
{
    // Message.Body is the byte[] payload; MessageId is already a string.
    log.LogInformation($"C# ServiceBus queue trigger function processed message: {Encoding.UTF8.GetString(message.Body)}");
    await _cosmosDbClient.Save(...);
}
Here's a post with more details.
Warning: using sessions will require messages to be sent with a session ID, potentially requiring a change on the sending side.
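A minimal sketch of what that sending-side change might look like, assuming the Microsoft.Azure.ServiceBus client and an illustrative queue name and session ID:
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class SessionSender
{
    public static async Task SendAsync(string connectionString)
    {
        var client = new QueueClient(connectionString, "queue");

        var message = new Message(Encoding.UTF8.GetBytes("payload"))
        {
            // Messages that must be delivered in order share the same SessionId.
            SessionId = "order-12345"
        };

        await client.SendAsync(message);
        await client.CloseAsync();
    }
}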
I'm developing a Service Bus trigger in Azure Functions v1 locally with Visual Studio 2017. I want to test the example from the official docs without having to put a message on the Service Bus, so I trigger it via Postman at the endpoint POST http://localhost:7071/admin/functions/ServiceBusQueueTriggerCSharp with the body { "input": "foo" }.
This fails with a script host error: Exception while executing function: ServiceBusQueueTriggerCSharp. Microsoft.Azure.WebJobs.Host: One or more errors occurred. Exception binding parameter 'deliveryCount'. Microsoft.Azure.WebJobs.Host: Binding data does not contain expected value 'deliveryCount'.
I tried removing the deliveryCount argument, but then it fails at enqueueTimeUtc. Removing that too works. Is there a way to keep these arguments and test the Function locally?
I understand that these two arguments don't make much sense when the function is triggered via HTTP, but they could be given default values; messageId, for instance, does get a value.
Example for reference:
[FunctionName("ServiceBusQueueTriggerCSharp")]
public static void Run(
[ServiceBusTrigger("myqueue", AccessRights.Manage, Connection = "ServiceBusConnection")]
string myQueueItem,
Int32 deliveryCount, // this fails
DateTime enqueuedTimeUtc, // this fails too
string messageId,
TraceWriter log)
{
log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
log.Info($"EnqueuedTimeUtc={enqueuedTimeUtc}");
log.Info($"DeliveryCount={deliveryCount}");
log.Info($"MessageId={messageId}");
}
As of right now, if you want to be able to work with these additional metadata properties, you'll need to use a real service bus message.
In theory, the admin endpoint could be smart enough to allow you to pass additional binding data (such as deliveryCount in this case) as query parameters. I filed the following feature request to track:
https://github.com/Azure/azure-functions-host/issues/2955
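In the meantime, a quick way to exercise the trigger locally with the metadata populated is to drop a real message onto the queue, for example with a small console app using the v1-era WindowsAzure.ServiceBus package (the connection string below is a placeholder):
using Microsoft.ServiceBus.Messaging;

class Program
{
    static void Main()
    {
        // Use the same connection string as the "ServiceBusConnection" app setting.
        var connectionString = "Endpoint=sb://...";
        var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");

        // A real message gets deliveryCount, enqueuedTimeUtc and messageId populated by the broker.
        client.Send(new BrokeredMessage("foo"));
        client.Close();
    }
}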
I am encountering a major roadblock when trying to use ServiceBusTrigger in an Azure Function. I am trying to abandon or dead-letter a Service Bus message in a v2 ServiceBusTrigger. How can I do so? I've tried the following solutions, but I didn't get anywhere.
Here is the code sample I used:
public async static Task Run(Message myQueueItem, TraceWriter log, ExecutionContext context)
{
    log.Info($"C# ServiceBus queue trigger function processed message delivery count: {myQueueItem.SystemProperties.DeliveryCount}");
    QueueClient queueClient = new QueueClient("[connectionstring]", "[queueName]");
    ////await queueClient.DeadLetterAsync(myQueueItem.SystemProperties.LockToken);
    await queueClient.AbandonAsync(myQueueItem.SystemProperties.LockToken);
}
Solution 1: I tried to substitute BrokeredMessage for Message myQueueItem, like in v1, so that I could call myQueueItem.Abandon or DeadLetter at the message level. However, it came back with this exception:
Microsoft.Azure.WebJobs.Host: Exception binding parameter 'myQueueItem'. System.Private.DataContractSerialization: There was an error deserializing the object of type Microsoft.ServiceBus.Messaging.BrokeredMessage. The input source is not correctly formatted. System.Private.DataContractSerialization: The input source is not correctly formatted."
At least that lets me go one step further, to Solution 2.
Solution 2 is to use:
QueueClient queueClient = new QueueClient("[connectionstring]","[queueName]");
////await queueClient.DeadLetterAsync(myQueueItem.SystemProperties.LockToken);
await queueClient.AbandonAsync(myQueueItem.SystemProperties.LockToken);
I can use the lock token provided in the Message object; however, when I try to abandon the message with the queueClient, it says the message is gone from the queue or no longer available.
Can anybody let me know if I am on the right track? If I am not, please kindly point me in the right direction.
Service Bus messages are automatically completed or abandoned by the Azure Functions runtime based on the success or failure of the function invocation; from the docs:
The Functions runtime receives a message in PeekLock mode. It calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails.
So, the suggested way to Abandon your message is to throw an exception from function invocation.
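A minimal sketch of that approach, assuming a hypothetical IsValid check and the queue's default MaxDeliveryCount:
using System;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class AbandonByThrowing
{
    [FunctionName("AbandonByThrowing")]
    public static void Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] Message myQueueItem,
        ILogger log)
    {
        log.LogInformation($"Delivery count: {myQueueItem.SystemProperties.DeliveryCount}");

        if (!IsValid(myQueueItem))
        {
            // Throwing makes the runtime abandon the message; once the queue's
            // MaxDeliveryCount is exceeded, Service Bus dead-letters it automatically.
            throw new InvalidOperationException("Message failed validation, abandoning.");
        }

        // Returning without an exception makes the runtime complete the message.
    }

    // Hypothetical validation; replace with your own logic.
    private static bool IsValid(Message message) => message.Body != null && message.Body.Length > 0;
}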
My Azure Function calculates results for request jobs (roughly 5 s to 5 min each), where each job has a unique jobId based on the hash of the request message. Execution leads to deterministic results, so it is functionally a "pure function". Therefore we cache the results of already evaluated jobs in blob storage, keyed by jobId. All great so far.
Now, when a request for a jobId comes in, three scenarios are possible:
Result is in the cache already => then it is served from the cache.
Result is not in the cache and no function is running the evaluation => new invocation
Result is not in the cache, but some function is already working on it => wait for result
We do some custom, table-storage-based progress tracking magic to tell whether a function is already working on a given jobId.
It works, more or less, up to the point of the 5-restarts-then-poison-queue scenario; there we are quite lost.
I feel like we are hacking around something the Azure Functions internals already implement reliably, because exactly the same information can be seen on the Monitor page in the Azure portal, and it used to be visible on the Kudu WebJobs monitoring page.
How can I reliably find out in C# whether a given message (jobId) is currently being processed by some function, and when it is not?
Azure Durable Functions provide a mechanism to track the progress of execution of smaller tasks.
https://learn.microsoft.com/en-us/azure/azure-functions/durable-functions-overview
According to "Pattern #3: Async HTTP APIs", the orchestrator can provide information about the function status in a form like this:
{"runtimeStatus":"Running","lastUpdatedTime":"2017-03-16T21:20:47Z", ...}
This solves my problem of finding out whether a given message is being processed.
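A rough sketch of how that check might look from C#, assuming Durable Functions 1.x and that the jobId is used as the orchestration instance ID:
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class JobStatus
{
    // Returns true while the orchestration for this jobId is still in flight.
    public static async Task<bool> IsJobRunningAsync(DurableOrchestrationClient client, string jobId)
    {
        DurableOrchestrationStatus status = await client.GetStatusAsync(jobId);

        return status != null
            && (status.RuntimeStatus == OrchestrationRuntimeStatus.Running
                || status.RuntimeStatus == OrchestrationRuntimeStatus.Pending
                || status.RuntimeStatus == OrchestrationRuntimeStatus.ContinuedAsNew);
    }
}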
How can I reliably find out in C# whether a given message (jobId) is currently being processed by some function, and when it is not?
If you'd like to detect which message is being processed and get the message ID in a queue-triggered Azure Function, you can try the following code:
#r "Microsoft.WindowsAzure.Storage"
using System;
using Microsoft.WindowsAzure.Storage.Queue;
public static void Run(CloudQueueMessage myQueueItem, TraceWriter log)
{
log.Info($"messageid: {myQueueItem.Id}, messagebody: {myQueueItem.AsString}");
}
I have a queue receiver which reads messages from the queue and processes them (does some processing and inserts some data into an Azure table, or retrieves data).
What I observed is that any exception thrown by my processing method (SendResponseAsync()) results in a retry, i.e. redelivery of the message, up to the default of 10 times.
Can this behavior be customized so that I only retry for certain exceptions and ignore others? For example, if there is a network issue it makes sense to retry, but if it is a BadArgumentException (a poison message) I may not want to retry.
Since the retry is taken care of by the Service Bus client library, can we customize this behavior?
This is the code at the receiver end:
public MessagingServer(QueueConfiguration config)
{
    this.requestQueueClient = QueueClient.CreateFromConnectionString(config.ConnectionString, config.QueueName);
    this.requestQueueClient.OnMessageAsync(this.DispatchReplyAsync);
}

private async Task DispatchReplyAsync(BrokeredMessage message)
{
    await this.SendResponseAsync(message);
}
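One way to get that kind of per-exception control with the same client library is to turn off auto-completion via OnMessageOptions and settle each message explicitly. A sketch along those lines, reusing the QueueConfiguration and SendResponseAsync from the snippet above (the exception types chosen are only illustrative):
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public class MessagingServer
{
    private readonly QueueClient requestQueueClient;

    public MessagingServer(QueueConfiguration config)
    {
        this.requestQueueClient = QueueClient.CreateFromConnectionString(config.ConnectionString, config.QueueName);

        // Disable auto-complete so the outcome of every message is decided explicitly below.
        var options = new OnMessageOptions { AutoComplete = false, MaxConcurrentCalls = 1 };
        this.requestQueueClient.OnMessageAsync(this.DispatchReplyAsync, options);
    }

    private async Task DispatchReplyAsync(BrokeredMessage message)
    {
        try
        {
            await this.SendResponseAsync(message);
            await message.CompleteAsync();            // success: remove the message from the queue
        }
        catch (ArgumentException ex)
        {
            // Poison input: no point retrying, move it straight to the dead-letter queue.
            await message.DeadLetterAsync("BadArgument", ex.Message);
        }
        catch (Exception)
        {
            // Presumed transient failure: abandon so the message becomes visible again for a retry.
            await message.AbandonAsync();
        }
    }

    // Placeholder for the actual processing from the question.
    private Task SendResponseAsync(BrokeredMessage message) => Task.CompletedTask;
}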