I am using the latest version of the .NET SDK (Azure.Messaging.ServiceBus).
I want to renew the message lock because processing a message takes more than 5 minutes, so that other listeners/consumers of the queue can't receive the message while one consumer is still processing it.
I have tried setting the MaxAutoLockRenewalDuration property, but it didn't work: after 5 minutes another consumer receives the message before the current consumer has completed it.
Sample code which I have used:
var client = CreateQueueClient("managedQueue");
var proc = client.CreateProcessor("managedQueue", options: new ServiceBusProcessorOptions
{
    MaxConcurrentCalls = 2,
    AutoCompleteMessages = false,
    ReceiveMode = ServiceBusReceiveMode.PeekLock,
    MaxAutoLockRenewalDuration = new TimeSpan(0, 20, 0),
});
proc.ProcessMessageAsync += MessageHandler;
proc.ProcessErrorAsync += ErrorHandler;
await proc.StartProcessingAsync(); // start the processor so the handlers are invoked
Sample callback code:
async Task MessageHandler(ProcessMessageEventArgs args)
{
    try
    {
        string body = args.Message.Body.ToString();
        Console.WriteLine($"Received: {body}");
        await Task.Delay(480000); // simulate 8 minutes of processing
        // Complete the message; it is deleted from the queue.
        await args.CompleteMessageAsync(args.Message);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Exception in handler: " + ex);
    }
}
Can anyone please help me with auto-renewing the message lock duration?
Thanks
Here is the code for publishing the messages:
import { PubSub } from '@google-cloud/pubsub';

// Assumed to be configured elsewhere; shown here only so the sample compiles.
const PUBSUB_PROJECT_ID = process.env.PUBSUB_PROJECT_ID;

async function publishMessage(topicName: string) {
  console.log(`[${new Date().toISOString()}] publishing messages`);
  const pubsub = new PubSub({ projectId: PUBSUB_PROJECT_ID });
  const topic = pubsub.topic(topicName, {
    batching: {
      maxMessages: 10,
      maxMilliseconds: 10 * 1000,
    },
  });
  const n = 5;
  const dataBufs: Buffer[] = [];
  for (let i = 0; i < n; i++) {
    const data = `message payload ${i}`;
    const dataBuffer = Buffer.from(data);
    dataBufs.push(dataBuffer);
  }
  const results = await Promise.all(
    dataBufs.map((dataBuf, idx) =>
      topic.publish(dataBuf).then((messageId) => {
        console.log(`[${new Date().toISOString()}] Message ${messageId} published. index: ${idx}`);
        return messageId;
      })
    )
  );
  console.log('results:', results.toString());
}
As you can see, I am publishing 5 messages. From the user's point of view, the publish happens at the moment of await Promise.all(...), but internally the Pub/Sub library may not send at that moment. I set maxMessages to 10, so with only 5 messages queued, Pub/Sub waits for 10 seconds (maxMilliseconds) and then publishes these 5 messages.
The execution result meets my expectations:
[2020-05-05T09:53:32.078Z] publishing messages
[2020-05-05T09:53:42.209Z] Message 36854 published. index: 0
[2020-05-05T09:53:42.209Z] Message 36855 published. index: 1
[2020-05-05T09:53:42.209Z] Message 36856 published. index: 2
[2020-05-05T09:53:42.209Z] Message 36857 published. index: 3
[2020-05-05T09:53:42.209Z] Message 36858 published. index: 4
results: 36854,36855,36856,36857,36858
In fact, I think topic.publish does not call the remote Pub/Sub service directly, but pushes the message into an in-memory queue. Then there is a window of time in which the count of queued messages is checked, maybe on a tick, something like:
// hypothetical internal logic of the @google-cloud/pubsub library
setTimeout(() => {
  // if the number of queued user messages is >= maxMessages, publish them immediately
  if (getLength(messageQueue) >= maxMessages) {
    callRemotePubsubService(messageQueue);
  }
}, /* window time = */ 100);
Or does it use setImmediate() or process.nextTick()?
Note that the condition for sending messages to the service is an OR, not an AND. In other words, if either maxMessages messages are waiting to be sent OR maxMilliseconds has passed since the library received the first outstanding message, it will send the outstanding messages to the server.
The source code for the client library is available, so you can see exactly what it does. The library has a queue that it uses to track messages that haven't been sent yet. When a message is added, if the queue is now full (based on the batching settings), then it immediately calls publish. When the first message is added, it uses setTimeout to schedule a call that ultimately calls publish on the service. The publisher client has an instance of the queue to which it adds messages when publish is called.
I have millions of messages in a queue, and the first ten million or so are irrelevant. Each message has a sequential ActionId, so ideally anything < 10000000 I can just ignore or, better yet, delete from the queue. What I have so far:
let azure = require("azure");

function processMessage(sb, message) {
  // Deserialize the JSON body into an object representing the ActionRecorded event
  var actionRecorded = JSON.parse(message.body);
  console.log(`processing id: ${actionRecorded.ActionId} from ${actionRecorded.ActionTaken.ActionTakenDate}`);
  if (actionRecorded.ActionId < 10000000) {
    // When done, delete the message from the queue
    console.log(`Deleting message: ${message.brokerProperties.MessageId} with ActionId: ${actionRecorded.ActionId}`);
    sb.deleteMessage(message, function(deleteError, response) {
      if (deleteError) {
        console.log("Error deleting message: " + message.brokerProperties.MessageId);
      }
    });
  }
  // immediately check for another message
  checkForMessages(sb);
}

function checkForMessages(sb) {
  // Checking for messages
  sb.receiveQueueMessage("my-queue-name", { isPeekLock: true }, function(receiveError, message) {
    if (receiveError && receiveError === "No messages to receive") {
      console.log("No messages left in queue");
      return;
    } else if (receiveError) {
      console.log("Receive error: " + receiveError);
    } else {
      processMessage(sb, message);
    }
  });
}

let connectionString = "Endpoint=sb://<myhub>.servicebus.windows.net/;SharedAccessKeyName=KEYNAME;SharedAccessKey=[mykey]";
let serviceBusService = azure.createServiceBusService(connectionString);
checkForMessages(serviceBusService);
I've tried looking at the docs for withFilter but it doesn't seem like that applies to queues.
I don't have access to create or modify the underlying queue aside from the operations mentioned above since the queue is provided by a client.
Can I either
Filter my results that I get from the queue
speed up the queue processing somehow?
Filter my results that I get from the queue
As you found, filters as a feature are only applicable to Topics & Subscriptions.
speed up the queue processing somehow
If you were to use the @azure/service-bus package, which is the newer, faster library for working with Service Bus, you could receive the messages in ReceiveAndDelete mode until you reach the message with ActionId 9999999, close that receiver, and then create a new receiver in PeekLock mode. For more on these receive modes, see https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-transfers-locks-settlement#settling-receive-operations
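As a rough sketch of that idea, here it is expressed with the .NET Azure.Messaging.ServiceBus client used earlier on this page (the @azure/service-bus JavaScript package exposes the same ReceiveAndDelete and PeekLock modes). The queue name, the ActionId property, and the cutoff come from the question; the batch size, wait time, and class/method names are illustrative assumptions:

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class QueueDrainer
{
    public static async Task DrainAsync(string connectionString)
    {
        await using var client = new ServiceBusClient(connectionString);

        // ReceiveAndDelete removes each message as soon as it is handed out,
        // so the irrelevant messages are discarded without a settlement round trip.
        var drainReceiver = client.CreateReceiver("my-queue-name",
            new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.ReceiveAndDelete });

        bool reachedCutoff = false;
        while (!reachedCutoff)
        {
            var batch = await drainReceiver.ReceiveMessagesAsync(
                maxMessages: 100, maxWaitTime: TimeSpan.FromSeconds(5));
            if (batch.Count == 0) break; // queue is empty

            foreach (var msg in batch)
            {
                long actionId = JsonDocument.Parse(msg.Body.ToString())
                                            .RootElement.GetProperty("ActionId").GetInt64();
                if (actionId >= 9999999) { reachedCutoff = true; break; }
            }
        }
        await drainReceiver.CloseAsync();

        // From here on, receive in PeekLock mode and process the remaining messages normally.
        var receiver = client.CreateReceiver("my-queue-name",
            new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.PeekLock });
        // ... process and settle messages here ...
        await receiver.CloseAsync();
    }
}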
I'm trying to get a queue's retry logic working and I'm having an issue: the minBackoff variable doesn't seem to actually work. I see in my logs that a message gets received, fails, and then retries almost immediately. My minBackoff is set to 600 seconds.
Here's the code that sets up the queue:
NamespaceManager nsManager = NamespaceManager.CreateFromConnectionString(connectionString);
nsManager.Settings.RetryPolicy = new RetryExponential(minBackoff: TimeSpan.FromSeconds(5),
                                                      maxBackoff: TimeSpan.FromSeconds(30),
                                                      maxRetryCount: 3);
if (!nsManager.QueueExists(queueName))
{
    nsManager.CreateQueue(queueName);
}
else
{
    nsManager.DeleteQueue(queueName);
    nsManager.CreateQueue(queueName);
}

QueueClient client = QueueClient.CreateFromConnectionString(connectionString, queueName);
client.RetryPolicy = new RetryExponential(minBackoff: TimeSpan.FromSeconds(15),
                                          maxBackoff: TimeSpan.FromSeconds(600),
                                          maxRetryCount: 3);

for (int i = 0; i < 2000; i++)
{
    UserCreationSubmitted creationMessage = new UserCreationSubmitted()
    {
        CreationStatus = "Step 1",
        Id = Guid.NewGuid(),
        UserName = "user number " + i,
        Userid = Guid.NewGuid()
    };
    BrokeredMessage message = new BrokeredMessage(creationMessage);
    client.Send(message);
}
Here's the code that's not working the way I think it should:
client.RetryPolicy = new RetryExponential(minBackoff: TimeSpan.FromSeconds(15),
                                          maxBackoff: TimeSpan.FromSeconds(600),
                                          maxRetryCount: 3);
client.OnMessage(message =>
{
    UserCreationSubmitted msg = message.GetBody<UserCreationSubmitted>();
    Console.WriteLine("------------------------------");
    Console.WriteLine($"Body {msg.UserName}");
    Random rnd = new Random();
    int ranNum = rnd.Next(0, 9);
    if (msg.UserName.Contains(ranNum.ToString()))
    {
        Console.WriteLine("!!!Error!!!");
        Console.WriteLine("------------------------------");
        throw new Exception();
    }
});
Does anyone have an idea why the minBackoff and maxBackoff values don't seem to actually work here? Oddly enough, maxRetryCount is working like a trooper, so I imagine it's something in my implementation that's causing the others not to work.
RetryExponential is used by the ASB client to retry when a failure to receive a message occurs. In your code, the exception is thrown during processing inside the OnMessage API callback, after the message has already been received. The OnMessage API will then abandon the message, causing it to show up again right away.
There are a few options you could take:
Clone the message, set ScheduledEnqueueTimeUtc to the delay you want, send the clone, and then complete the original message (see the sketch below).
Defer your message, but then you can only receive it by SequenceNumber. In that case, you could create a new message that contains the original message's sequence number as its payload, schedule the new message, and send it. That way the delivery count on the original message stays accurate.
Ideally, it would be nice to abandon a message with a time span, but that's not possible with the current API.
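A rough sketch of the first option, continuing from the QueueClient in the question (the 10-minute delay and the ProcessMessage helper are hypothetical, for illustration only):

var options = new OnMessageOptions { AutoComplete = false };
client.OnMessage(message =>
{
    try
    {
        UserCreationSubmitted msg = message.GetBody<UserCreationSubmitted>();
        ProcessMessage(msg); // hypothetical processing step that may throw
        message.Complete();
    }
    catch (Exception)
    {
        // Instead of letting the pump abandon the message (which makes it
        // reappear immediately), clone it, schedule the clone for later,
        // and complete the original.
        BrokeredMessage retry = message.Clone();
        retry.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(10);
        client.Send(retry);
        message.Complete();
    }
}, options);

Because the clone is a new message, its DeliveryCount starts over; that is why the second option above tracks the original sequence number instead.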
SubscriptionClient receiver = messageFactory.CreateSubscriptionClient("NewTopic", subscriberId);

TimeSpan e = new TimeSpan(0, 0, 5, 0, 0);
RetryExponential x = new RetryExponential(e, e, e, e, 2);

OnMessageOptions options = new OnMessageOptions();
options.AutoComplete = false;
//options.AutoRenewTimeout = TimeSpan.FromMinutes(1);
options.ExceptionReceived += options_ExceptionReceived;

receiver.OnMessage(receivedMessage =>
{
    try
    {
        Console.WriteLine(receivedMessage.Label);
        bool t = receivedMessage.IsBodyConsumed;
        Console.WriteLine(string.Format("Message received: {0}", receivedMessage.GetBody<string>()));
        Console.WriteLine(receivedMessage.SequenceNumber);
        Console.WriteLine(receivedMessage.TimeToLive);
        Console.WriteLine(receivedMessage.To);
        Console.WriteLine(receivedMessage.DeliveryCount);
        receivedMessage.Abandon();
    }
    catch (Exception)
    {
        // Indicates a problem, unlock the message in the subscription.
        receivedMessage.Abandon();
    }
}, options);
Hi All,
In the RetryExponential constructor I set the maxRetryCount to 2.
I deliberately abandon the message in OnMessage to check the max retry count. Even after setting the retry count to 2, I am receiving the message more than 2 times.
--TIA
It looks like you're confusing transient fault handling with dead lettering.
The retry mechanism you're using is there to cope with distributed-computing issues such as throttling and service unavailability.
If you're unable to handle an incoming message, you usually move it to the dead-letter queue (DLQ) after some attempts, as in the sketch below.
(and you're not using the 'x' variable)
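A minimal sketch of that dead-lettering pattern, reusing the subscription receiver and options from the code above (the threshold of 3 attempts is an arbitrary choice):

receiver.OnMessage(receivedMessage =>
{
    try
    {
        Console.WriteLine(string.Format("Message received: {0}", receivedMessage.GetBody<string>()));
        receivedMessage.Complete();
    }
    catch (Exception ex)
    {
        if (receivedMessage.DeliveryCount >= 3)
        {
            // Give up after a few attempts and move the message to the dead-letter queue.
            receivedMessage.DeadLetter("ProcessingFailed", ex.Message);
        }
        else
        {
            // Unlock the message so it can be delivered and retried again.
            receivedMessage.Abandon();
        }
    }
}, options);

Note that the subscription's MaxDeliveryCount setting dead-letters a message automatically once its delivery count is exceeded, so often it is enough to just abandon and let the entity do this for you.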
I have used .NET C# code to put messages on the queue and get messages back. I have no problem accessing the queue and getting messages. Now I want to have the get-message calls run under a transaction, and I used the explicit transaction option to commit and roll back the messages.
try
{
    MQQueueManager queueManager;
    MQEnvironment.Hostname = hostName;
    MQEnvironment.Channel = channelName;
    MQEnvironment.Port = 1414;
    MQEnvironment.properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES);
    queueManager = new MQQueueManager(queueManagerName);

    // obtain a read/write queue reference
    var queue = queueManager.AccessQueue(queueName, MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_INQUIRE + MQC.MQOO_FAIL_IF_QUIESCING);
    IList<string> Messages = new List<string>();

    using (var scope = new CommittableTransaction())
    {
        CommittableTransaction.Current = scope;
        var message = new MQMessage();
        try
        {
            var getMessageOptions = new MQGetMessageOptions();
            getMessageOptions.Options += MQC.MQGMO_SYNCPOINT;
            int i = queue.CurrentDepth;
            queue.Get(message, getMessageOptions);
            Console.WriteLine(message.ReadString(message.MessageLength));
            scope.Rollback();
        }
        catch (MQException mqe)
        {
            if (mqe.ReasonCode == 2033)
            {
                Console.WriteLine("No more message available");
                Console.ReadLine();
                scope.Rollback();
            }
            else
            {
                Console.WriteLine("MQException caught: {0} - {1}", mqe.ReasonCode, mqe.Message);
                Console.ReadLine();
                scope.Rollback();
            }
        }
        CommittableTransaction.Current = null;
    }

    // closing queue
    queue.Close();
    // disconnecting queue manager
    queueManager.Disconnect();
    Console.ReadLine();
}
catch (MQException mqe)
{
    Console.WriteLine("");
    Console.WriteLine("MQException caught: {0} - {1}", mqe.ReasonCode, mqe.Message);
    Console.WriteLine(mqe.StackTrace);
    Console.ReadLine();
}
The first problem I faced was related to access to the System.Dotnet.XARecovery queue. Even though I had access to the queue to get messages from it, the program started to fail because of access rights on the recovery queue when the line below was invoked:
queue.Get(message, getMessageOptions);
Then I was given access to the recovery queue and the access-denied problem was resolved. Now, after getting the message from the queue, the messages are not rolled back after scope.Rollback() is called.
I checked the System.Dotnet.XARecovery queue and the dead-letter queue, and there was nothing there either.
Why am I not able to see the rolled-back messages in the WMQ message queue?
You have a scope.Commit(); after queue.Get(message);. After getting the message you are explicitly calling Commit. If the Get is successful, the Commit call tells the queue manager to remove the message from the queue, so there is no chance of the message being rolled back.
EDIT: The GMO_SYNCPOINT option is missing in your code. You need something like this:
MQGetMessageOptions getMessageOptions = new MQGetMessageOptions();
getMessageOptions.Options += MQC.MQGMO_SYNCPOINT;
queue.Get(message, getMessageOptions);
I figured out the solution to my problem. In my code above, if I change the line from
MQEnvironment.properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES);
to
MQEnvironment.properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
then it starts to register the transactions with the local DTC, and it works fine when rolling back or committing a message on the queue.
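Putting the pieces together, here is a condensed sketch of the working pattern (managed transport, a get under syncpoint, and an explicit commit or rollback), based only on the code shown above:

// Managed transport is required for the MQ .NET client to enlist in the
// System.Transactions transaction coordinated by the local DTC.
MQEnvironment.Hostname = hostName;
MQEnvironment.Channel = channelName;
MQEnvironment.Port = 1414;
MQEnvironment.properties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);

var queueManager = new MQQueueManager(queueManagerName);
var queue = queueManager.AccessQueue(queueName,
    MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_INQUIRE + MQC.MQOO_FAIL_IF_QUIESCING);

using (var scope = new CommittableTransaction())
{
    CommittableTransaction.Current = scope;

    var getMessageOptions = new MQGetMessageOptions();
    getMessageOptions.Options += MQC.MQGMO_SYNCPOINT; // get the message under syncpoint

    var message = new MQMessage();
    queue.Get(message, getMessageOptions);
    Console.WriteLine(message.ReadString(message.MessageLength));

    // Commit removes the message from the queue; Rollback puts it back.
    scope.Rollback();

    CommittableTransaction.Current = null;
}

queue.Close();
queueManager.Disconnect();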