I would like to have my queue retry failed WebJobs every 90 minutes, and only for 3 attempts.
When creating the queue I use the following code:
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
IRetryPolicy linearRetryPolicy = new LinearRetry(TimeSpan.FromSeconds(5400), 3); // 5400 s = 90 min
queueClient.DefaultRequestOptions.RetryPolicy = linearRetryPolicy;
CloudQueue triggerformqueue = queueClient.GetQueueReference("triggerformqueue");
triggerformqueue.CreateIfNotExists();
However, when simulating a failed WebJob attempt, the queue uses the default retry policy.
Am I missing something?
I think you might be thinking about this backwards: queues don't actually perform behavior. What I'm guessing you want is a WebJob configured to pull messages from a queue and, if it fails to process a message for some reason, retry 90 minutes later. In that case you just need to set the invisibility timeout to 90 minutes (the default is 30 seconds). This ensures that if a message isn't fully processed (i.e. GetMessage and DeleteMessage are both called), it will reappear on the queue 90 minutes later.
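For example, here is a minimal sketch of that pattern when dequeuing manually; the queue name comes from the question above, and ProcessMessage is a hypothetical placeholder for your own logic:

CloudQueue queue = queueClient.GetQueueReference("triggerformqueue");

// Hold the message invisible for 90 minutes while it is being processed.
CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(90));
if (msg != null)
{
    ProcessMessage(msg); // placeholder for your own processing logic

    // Only delete on success; if processing throws before this line, the
    // message reappears on the queue once the 90-minute timeout expires.
    queue.DeleteMessage(msg);
}

If the WebJobs SDK's QueueTrigger is doing the dequeuing for you, the equivalent knobs are config.Queues.VisibilityTimeout (the retry delay) and config.Queues.MaxDequeueCount (the 3-attempt limit); see the last question below.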
Take a look at this Getting Started with Queue Storage document for more information.
There is also something like the Azure WebJobs SDK Extensions and its ErrorTriggerAttribute (it isn't available in the 1.0.0-beta1 NuGet package yet, but you have access to the public repository):
public static void ErrorMonitor(
    [ErrorTrigger("0:30:00", 10, Throttle = "1:00:00")] TraceFilter filter,
    TextWriter log)
{
    log.WriteLine(filter.GetDetailedMessage(1)); // log or notify on error spikes
}
https://github.com/Azure/azure-webjobs-sdk-extensions#errortrigger
You need to apply your RetryPolicy when you add an item to the queue, not on the queue itself, e.g.:
var queue = queueClient.GetQueueReference("myqueue"); // queue names must be lowercase
queue.CreateIfNotExists();
var options = new QueueRequestOptions { RetryPolicy = linearRetryPolicy };
await queue.AddMessageAsync(yourMessage, null, new TimeSpan(0, delayMinutes, 0), options, null);
In Azure WebJobs, using the OnMessageOptions class, I let QueueClient.Complete(Guid) be called for me by setting the AutoComplete flag to true, and messages seem to dequeue just fine when the ProcessQueue function runs: the active message count goes down by 1 after each message is processed successfully.

However, when I want to requeue a message (because it cannot be processed currently) back to the queue that triggers the Service Bus function, as a new brokered message scheduled a minute later via BrokeredMessage.ScheduledEnqueueTimeUtc, it doesn't seem to work. The scheduled message count goes up initially, but when I come back to the queue a few hours later I see active messages in the thousands, all copies of the same message.

What is happening? I'd expect the original message to be taken off the queue because of QueueClient.Complete(Guid), and the new scheduled message to be its replacement.
Some detail:
To send the message I do the following:
var queueClient = QueueClient.CreateFromConnectionString(connectionString, queueName);
queueClient.Send(message);
queueClient.Close();
Inside the WebJob I created a ServiceBusConfiguration object, which requires an OnMessageOptions object where I set AutoComplete = true. I pass the ServiceBusConfiguration object to the JobHostConfiguration.UseServiceBus method.
Inside the WebJob's Service Bus queue-triggered function I do the following to requeue, first creating a new instance of the brokered message:
// if not available yet for processing, please requeue...
var queueClient = QueueClient.CreateFromConnectionString(connectionString, queueName);
queueClient.Send(message);
queueClient.Close();
I don't do the following (using callbacks), which may be why it isn't working?
var options = new OnMessageOptions();
options.AutoComplete = false; // to call Complete ourselves

// Callback to handle received messages
client.OnMessage(m =>
{
    var clone = m.Clone();
    clone.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(60);
    client.Send(clone);
    m.Complete();
}, options);
when I want to requeue a message (because it cannot be processed currently) back to the queue that triggers the service bus function, as a new brokered message after a minute, using BrokeredMessage.ScheduledEnqueueTimeUtc, it seems like it isn't working
If you fail to process your message, do not re-queue it. Instead, abandon it (with a reason) and it will be picked up again.
BrokeredMessage.ScheduledEnqueueTimeUtc is intended to be used for messages added to the queue. When you receive a message, you can complete, dead-letter, defer, or abandon. If you abandon a message, it will be retried, but you can't control when that will happen. If you have no other messages in the queue, it will be retried almost immediately.
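As a rough sketch of that receive-side choice (using the same QueueClient callback model as in the question; CanProcessYet and ProcessMessage are hypothetical placeholders for your own logic):

var options = new OnMessageOptions { AutoComplete = false };

client.OnMessage(m =>
{
    if (!CanProcessYet(m)) // hypothetical: your "not available yet" test
    {
        // Returns the message to the queue; the broker decides when it is retried.
        m.Abandon();
        return;
    }

    ProcessMessage(m); // placeholder for your own processing
    m.Complete();      // removes the message for good
}, options);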
Note: when you see a behaviour that you suspect is not right, having a simple repro to share would be very helpful.
If I schedule a message in the future using something like this:
from datetime import datetime, timedelta
from azure.servicebus import Message  # legacy azure-servicebus SDK

# sbs is a ServiceBusService instance; qn is the queue name
d = datetime.utcnow() + timedelta(minutes=5)
task = {"some": "object"}
sbs.send_queue_message(
    qn,
    Message(
        task,
        broker_properties={'ScheduledEnqueueTimeUtc': d}
    )
)
Then is there a way to view or delete messages that have been scheduled? send_queue_message doesn't return anything, and receive_queue_message understandably doesn't return items that are scheduled to be enqueued later, so I can't get hold of a message to pass to delete_queue_message, for example.
The Azure team seems aware of the use case, because Storage Queues have something like this feature: https://azure.microsoft.com/en-gb/blog/azure-storage-queues-new-feature-pop-receipt-on-add-message/
Basically I need to be able to schedule a message to be enqueued later, but have this cancelable. Ideally I'd also like to be able to view all future scheduled tasks, but being able to store an id that can later be used to delete the scheduled message would be sufficient.
The Azure UI shows the count of active/scheduled messages too, which suggests there should be some way to see the scheduled ones!
Would Queue storage be better for this? Or does Service Bus have some approach that might work? ScheduledEnqueueTimeUtc seems more flexible than the visibility timeout in Queue storage, so it'd be nice to stick with it if I can.
Yes, it's possible.
I don't know whether the Node.js client supports it, but with the C# client there's an alternative to the ScheduledEnqueueTimeUtc approach I've described here: using QueueClient.ScheduleMessageAsync() you can send a scheduled message and get back its SequenceNumber, which can then be used to cancel the message at any point in time via QueueClient.CancelScheduledMessageAsync(sequenceNumber).
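A minimal C# sketch of that flow (the queue name and payload are placeholders):

var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");

// Schedule the message and keep the sequence number for later cancellation.
long sequenceNumber = await client.ScheduleMessageAsync(
    new BrokeredMessage("payload"),
    DateTimeOffset.UtcNow.AddMinutes(5));

// If the work becomes unnecessary before the scheduled time, cancel it.
await client.CancelScheduledMessageAsync(sequenceNumber);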
You can use Microsoft.ServiceBus.Messaging and purge messages by enqueue time: receive the messages, filter on the enqueue time, and complete (purge) each message that was enqueued before the specific cut-off time.
using Microsoft.ServiceBus.Messaging;

MessagingFactory messagingFactory = MessagingFactory.CreateFromConnectionString(connectionString);
var client = messagingFactory.CreateMessageReceiver(resourceName, ReceiveMode.PeekLock);

BrokeredMessage message = client.Receive();

// MessageEnqueuedDateTime is the caller-supplied cut-off time.
if (message.EnqueuedTimeUtc < MessageEnqueuedDateTime)
{
    message.Complete();
}
For completeness, this can be done using the storage queue service Python SDK:
from azure.storage.queue import QueueService

account_name = '<snip>'
account_key = '<snip>'
queue_service = QueueService(account_name=account_name, account_key=account_key)

a = queue_service.put_message('queue_name', u'Hello World (delete)', visibility_timeout=30)
print(a.id)           # message id
print(a.pop_receipt)  # pop receipt
Then in another Python instance, before the visibility timeout expires (passing in the id and pop receipt captured above):
queue_service.delete_message('queue_name', message_id, pop_receipt)
I have an Azure webjob that is triggered by a queue (inherited, not originally written by me). The queue usually only has one item placed on it at a time, but on the first of every month has many items.
On these occasions it has always processed one queue item at a time, finishing processing before picking up the next.
I noticed, however, that as of a couple of months ago it started processing two files at any one time, which is causing problems.
Whilst I could refactor the code to allow for this, I really don't have the time, and the return would be minimal. I simply want it to process one item at a time again, but I can't find anything that may have caused this to change.
Are there any settings in the azure portal I should be aware of? I don't believe any code relating to the trigger itself has changed.
Thank you in advance
Sure, this can be done. Note that a WebJob can be triggered by either a Service Bus Queue or an Azure Storage Queue. Here's info for both.
For Azure Storage Queues
By default, a QueueTrigger will grab 16 messages at a time and process them in parallel. If you don't want this, set the BatchSize property on the JobHostConfiguration's Queues settings to 1 in your WebJob's static void Main method. Example:
static void Main(string[] args)
{
    JobHostConfiguration config = new JobHostConfiguration();
    config.Queues.BatchSize = 1; // process one queue message at a time
    JobHost host = new JobHost(config);
    host.RunAndBlock();
}
For Service Bus Queues
Similarly, you'll set properties on the JobHostConfiguration; with Service Bus there's a little more setup. Example:
static void Main(string[] args)
{
    JobHostConfiguration config = new JobHostConfiguration();
    ServiceBusConfiguration serviceBusConfig = new ServiceBusConfiguration();
    serviceBusConfig.MessageOptions.MaxConcurrentCalls = 1; // one message at a time
    config.UseServiceBus(serviceBusConfig);
    JobHost host = new JobHost(config);
    host.RunAndBlock();
}
In my case, the problem was that someone had scaled out the Azure App Service to 2 instances, so each instance was running its own copy of the WebJob. However, I'm marking Rob's answer as the most helpful.
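If scale-out is the cause and the job must stay single-instance, a continuous WebJob can be pinned to one instance with a settings.job file placed next to the WebJob's executable (a documented WebJobs setting):

{ "is_singleton": true }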
I'm using Azure WebJobs SDK 2.0, and when I specify VisibilityTimeout and MaxDequeueCount, a message that fails 3 times is copied to the poison queue but not removed from the original queue. You can see the DequeueCount grow greater than MaxDequeueCount while the message is still in the queue.
class Program
{
    // Please set the following connection strings in app.config for this WebJob to run:
    // AzureWebJobsDashboard and AzureWebJobsStorage
    static void Main()
    {
        var config = new JobHostConfiguration();
        if (config.IsDevelopment)
        {
            config.UseDevelopmentSettings();
        }

        config.Queues.BatchSize = 8;
        config.Queues.MaxDequeueCount = 3;
        config.Queues.VisibilityTimeout = TimeSpan.FromSeconds(5);
        config.Queues.MaxPollingInterval = TimeSpan.FromSeconds(3);

        var host = new JobHost(config);

        // The following code ensures that the WebJob will be running continuously
        host.RunAndBlock();
    }
}
The function is throwing an exception to mimic the error condition.
public class Functions
{
    // This function will get triggered/executed when a new message is written
    // on an Azure Queue called queue.
    public static void ProcessQueueMessage([QueueTrigger("queue")] string message, TextWriter log)
    {
        log.WriteLine(message);
        throw new Exception("There was an error processing the message");
    }
}
After three tries the message is moved to the poison queue, which is expected, but after 10 minutes or so the message that was moved to the poison queue appears in the original queue again.
(Screenshot: console output.)
In the queue you can see that the DequeueCount is greater than MaxDequeueCount and still the message is not deleted from the queue. (Screenshot: queue.)
In the poison queue you can see the M1 message was processed twice.
message is not removed from the queue when it fails 3 times but only copied to poison queue
As far as I know, you currently cannot use Storage SDK 8.x with the WebJobs SDK; others have reported similar issues:
Poison messages stay undeleted-but-invisible with the latest WindowsAzure.Storage 8.1.1
WebJobs v1.1.2 fails to remove poison messages from queue with v8.0.1 of Storage
If you are using Azure Storage SDK 8.x, please try to downgrade it. Also, in the second link above, asiffermann shared a workaround with a sample: write a custom QueueProcessor that creates a new CloudQueueMessage in CopyMessageToPoisonQueueAsync; please refer to it.
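A rough sketch of that workaround, assuming the WebJobs SDK's QueueProcessor extensibility point (the class name here is made up; the override copies the payload into a fresh message so the original message's delete state is not disturbed):

public class CloningQueueProcessor : QueueProcessor
{
    public CloningQueueProcessor(QueueProcessorFactoryContext context) : base(context)
    {
    }

    protected override Task CopyMessageToPoisonQueueAsync(
        CloudQueueMessage message, CloudQueue poisonQueue, CancellationToken cancellationToken)
    {
        // Copy the payload into a brand-new message so the original's pop
        // receipt stays valid and the failed message can still be deleted.
        var clone = new CloudQueueMessage(message.AsString);
        return base.CopyMessageToPoisonQueueAsync(clone, poisonQueue, cancellationToken);
    }
}

It would then be registered through a custom IQueueProcessorFactory assigned to config.Queues.QueueProcessorFactory in Main.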
Is there any way to configure triggers without attributes? I cannot know the queue names ahead of time.
Let me explain my scenario. I have one Service Bus queue, and for various reasons (complicated duplicate-suppression business logic) the messages have to be processed one at a time, so I have ServiceBusConfiguration.OnMessageOptions.MaxConcurrentCalls set to 1. Processing a message therefore holds up the whole queue until it is finished. Needless to say, this is suboptimal.
This 'one at a time' policy isn't so simple. The messages could be processed in parallel, they just have to be divided into groups (based on a field in message), say A and B. Group A can process its messages one at a time, and group B can process its own one at a time, etc. A and B are processed in parallel, all is good.
So I can create a queue for each group, A, B, C, ... etc. There are about 50 groups, so 50 queues.
I can create a queue for each, but how do I make this work with the Azure WebJobs SDK? I don't want to copy and paste a method with a different ServiceBusTrigger for each queue just so the SDK discovers something that enforces one-at-a-time per queue/group, then copy-paste again whenever another group is needed. Fetching a list of queues at startup and binding them to the function would be preferable.
I have looked around and I don't see any way to do what I want. The ITypeLocator interface is pretty hard-set on looking for attributes. I could probably abuse INameResolver, but it seems I'd still need a bunch of near-duplicate methods. Could I somehow create what the SDK is looking for at startup/runtime?
(To be clear, I know how to use INameResolver to get a queue name, as in How to set Azure WebJob queue name at runtime?, but though similar, that isn't my problem. I want to set up triggers for multiple queues at startup for the same function, to get one-at-a-time processing per queue, without repeating the trigger attribute 50 times. I figured I'd ask again since the SDK repo is fairly active and it's been a year.)
Or am I going about this all wrong? Being dumb? Missing something? Any advice on this dilemma would be welcome.
The Azure WebJobs host discovers and indexes the functions with the ServiceBusTrigger attribute when it starts, so there is no way to set up the queue triggers at runtime.
The simpler solution for you is to create a long-running job and implement the receivers manually:
public class Program
{
    private static void Main()
    {
        var host = new JobHost();
        host.CallAsync(typeof(Program).GetMethod("Process"));
        host.RunAndBlock();
    }

    [NoAutomaticTrigger]
    public static async Task Process(TextWriter log, CancellationToken token)
    {
        var connectionString = "myconnectionstring";
        // You could also load the queue names from app settings or an Azure table.
        var queueNames = new[] { "queueA", "queueB" };
        var messagingFactory = MessagingFactory.CreateFromConnectionString(connectionString);

        foreach (var queueName in queueNames)
        {
            var receiver = messagingFactory.CreateMessageReceiver(queueName);
            receiver.OnMessage(message =>
            {
                try
                {
                    // do something
                    // ...

                    // Complete the message
                    message.Complete();
                }
                catch (Exception ex)
                {
                    // Log the error
                    log.WriteLine(ex.ToString());

                    // Abandon the message so that it can be retried.
                    message.Abandon();
                }
            }, new OnMessageOptions { MaxConcurrentCalls = 1 });
        }

        // Wait until the job is stopped or restarted.
        await Task.Delay(Timeout.InfiniteTimeSpan, token);
    }
}
Otherwise, if you don't want to deal with multiple queues, you can have a look at Azure Service Bus topics/subscriptions and create a SqlFilter to send each message to the right subscription.
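For instance, a minimal sketch of that routing (the topic name, subscription names, and the "Group" property are all placeholders, assuming each message carries its group in a custom property):

var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

if (!namespaceManager.TopicExists("grouped-messages"))
{
    namespaceManager.CreateTopic("grouped-messages");
}

// One subscription per group; each SqlFilter matches a custom "Group"
// property set on the BrokeredMessage (message.Properties["Group"] = "A").
namespaceManager.CreateSubscription("grouped-messages", "groupA", new SqlFilter("Group = 'A'"));
namespaceManager.CreateSubscription("grouped-messages", "groupB", new SqlFilter("Group = 'B'"));

Each subscription then gets its own receiver with MaxConcurrentCalls = 1, so the groups run in parallel while staying strictly serial internally.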
Another option could be to create your own trigger: the Azure WebJobs SDK provides extensibility points to create your own trigger binding:
Binding Extensions Overview
Good luck!
Based on my understanding, you seem to be building a system that processes batches of messages in parallel. @Thomas's solution is good, but I think the Azure Batch service with Table storage may be better, and could replace the more complex Service Bus queue + triggered WebJobs solution.
Using Azure Batch with Table storage, you can control task creation, execute the tasks in parallel and at scale, and even monitor them; please refer to the tutorial to learn how.