Difference between peek and receive (Azure Service Bus)

Does anyone know the difference between the receive and peek operations in Azure Service Bus?
var client = new MessageReceiver("ServiceBusConnectionString", "Queue");
// difference between this one:
var peekResults = await client.PeekAsync(100);
// and this one
var receiveResults = await client.ReceiveAsync(100);
I see that I can get the same results, but I want to know which one I should use and why. Internally, what is the difference?

Peek will fetch messages without increasing the delivery counter. It's a way to "preview" messages without removing them from the queue.
Receive will increase the delivery counter. When received in ReceiveAndDelete mode, messages are removed from the queue immediately. With PeekLock mode, messages remain on the queue until they are completed; once MaxDeliveryCount is exceeded they will be dead-lettered.
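To make the difference concrete, here is a minimal sketch, assuming the Microsoft.Azure.ServiceBus MessageReceiver from the question (connection string and queue name are placeholders): peeked messages need no settlement, while messages received in PeekLock mode must be completed or abandoned.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

class PeekVsReceiveDemo
{
    static async Task Main()
    {
        var receiver = new MessageReceiver("ServiceBusConnectionString", "Queue", ReceiveMode.PeekLock);

        // Peek: reads messages without locking or removing them and without
        // touching the delivery count; there is nothing to settle afterwards.
        var peeked = await receiver.PeekAsync(100);
        Console.WriteLine($"Peeked {peeked?.Count ?? 0} messages.");

        // Receive (PeekLock): locks the messages and increments their delivery
        // count; each message must be completed or abandoned before its lock expires.
        var received = await receiver.ReceiveAsync(100);
        if (received != null)
        {
            foreach (var message in received)
            {
                // Complete removes the message; Abandon would release the lock
                // and make the message available for redelivery.
                await receiver.CompleteAsync(message.SystemProperties.LockToken);
            }
        }

        await receiver.CloseAsync();
    }
}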

Related

Peek messages does not return all messages

I have the logic below that I am using to peek at the messages on a subscription
var path = EntityNameHelper.FormatSubscriptionPath(TopicName, subscriptionName);
var receiver = new MessageReceiver(connectionString, path);
var messages = await receiver.PeekAsync(1000);
When I look at Service Bus Explorer it shows that there are 800 messages on the subscription.
However, the logic only returns 23.
Does anyone know why this happens? Is there some kind of caching or something?
Paul
That's by design. Peek and receive operations will return as many messages as the broker can at that specific moment. If you want to retrieve all the messages, you have to write some code that iterates the request one or more times until the number of items you need is reached.
If you want to raise a request with the service team to clarify this, there's a service issue tracker here.
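For the iteration itself, here is a rough sketch, assuming the same Microsoft.Azure.ServiceBus MessageReceiver as in the question: keep peeking from the last sequence number you saw until the broker returns nothing more.
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

static class PeekAllHelper
{
    public static async Task<List<Message>> PeekAllAsync(MessageReceiver receiver, int batchSize = 100)
    {
        var all = new List<Message>();
        long fromSequenceNumber = 0;

        while (true)
        {
            // PeekBySequenceNumberAsync starts peeking at the given sequence number,
            // so each iteration continues where the previous batch ended.
            var batch = await receiver.PeekBySequenceNumberAsync(fromSequenceNumber, batchSize);
            if (batch == null || batch.Count == 0)
            {
                break; // nothing more the broker can hand back right now
            }

            all.AddRange(batch);
            fromSequenceNumber = batch[batch.Count - 1].SystemProperties.SequenceNumber + 1;
        }

        return all;
    }
}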

Why do my messages always get delivered to Dead Letter Queue in Azure Service Bus?

C# .NetCore 2.2 -
Azure Service Bus 3.4.0
I have 3 queues in Azure Service Bus with the same properties. While sending messages to these queues, the messages in one of the queues always get delivered to its dead-letter queue, while the other 2 queues receive active messages.
I have tried playing with the properties - increasing TTL, maximum delivery count, etc. The properties of all 3 queues are the same; the only difference is the name of the queues.
I have used this tutorial - https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-get-started-with-queues
queue properties image
static async Task SendMessagesAsync(int numberOfMessagesToSend)
{
    try
    {
        for (var i = 0; i < numberOfMessagesToSend; i++)
        {
            // Create a new message to send to the queue.
            string messageBody = $"Message {i}";
            var message = new Message(Encoding.UTF8.GetBytes(messageBody));
            Console.WriteLine($"Sending message: {messageBody}");
            // Send the message to the queue.
            await queueClient.SendAsync(message);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine($"{DateTime.Now} :: Exception: {exception.Message}");
    }
}
How do I prevent messages from going to the dead-letter queue? Why does it happen with only 1 queue and not the other 2?
When messages are dead-lettered, a reason is added to the message as a user property. Check that property to see the reason and troubleshoot accordingly. Specifically, check the DeadLetterReason and DeadLetterErrorDescription custom properties.
The common reasons for a message to be dead-lettered are
Maximum transfer hop count exceeded
Session Id Is Null
TTLExpiredException
HeaderSizeExceeded
The messages might also have been dead-lettered due to errors while receiving them from the queue. As Sean Feldman mentioned, looking into the DeadLetterReason and DeadLetterErrorDescription properties will help you diagnose the reason clearly.
Also try increasing or setting the time to live of the messages you send if the DeadLetterReason is TTLExpiredException, because if you set the time to live of a message to a lower value, it overrides the time-to-live property of the queue.
Also check whether the queue where the messages are getting dead-lettered is a session-enabled queue and whether the messages sent have the SessionId value set.
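As a rough illustration, assuming the Microsoft.Azure.ServiceBus client used in the question (connection string and queue name are placeholders), you can read from the dead-letter sub-queue and print the DeadLetterReason and DeadLetterErrorDescription user properties for each message:
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

static class DeadLetterInspector
{
    public static async Task InspectAsync(string connectionString, string queueName)
    {
        // The dead-letter queue is a sub-queue of the main queue.
        var dlqPath = EntityNameHelper.FormatDeadLetterPath(queueName);
        var receiver = new MessageReceiver(connectionString, dlqPath, ReceiveMode.PeekLock);

        var messages = await receiver.PeekAsync(50);
        if (messages != null)
        {
            foreach (var message in messages)
            {
                message.UserProperties.TryGetValue("DeadLetterReason", out var reason);
                message.UserProperties.TryGetValue("DeadLetterErrorDescription", out var description);
                Console.WriteLine($"{message.MessageId}: {reason} - {description}");
            }
        }

        await receiver.CloseAsync();
    }
}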
Without seeing your app / messages it's hard to help, but probably there's an error in the application that is trying to consume the message. As the message could not be completed, it goes to the dead-letter queue.
Log the messages from this particular queue and see if any required properties are missing. Sometimes you're trying to deserialize to an incompatible type.
The purpose of the dead-letter queue is to hold messages that cannot be delivered to any receiver, or messages that could not be processed.

QueueClient.Complete(Guid) doesn't seem to be working when queueing another message in a service bus queue triggered function

In Azure WebJobs, in the OnMessageOptions class, I'm calling the QueueClient.Complete(Guid) method by setting the AutoComplete flag to true and messages seem to dequeue just fine when running the ProcessQueue function. Active messages count goes down by 1 after successful processing of each message. However, when I want to requeue a message (because it cannot be processed currently) back to the queue that triggers the service bus function, as a new brokered message after a minute, using BrokeredMessage.ScheduledEnqueueTimeUtc, it seems like it isn't working. Scheduled messages count seems to go up initially. I go back to the queue after a few hours and see active messages in the thousands. The copies are of the same message. What is happening? I'd expect the message to be taken off the queue because of QueueClient.Complete(Guid) and the new scheduled message to be its replacement.
Some detail:
To send the message I do the following:
var queueclient = QueueClient.CreateFromConnectionString(connectionString, queueName);
queueclient.Send(message);
queueclient.Close();
Inside the WebJob I created a ServiceBusConfiguration object, which requires an OnMessageOptions object where I set AutoComplete = true. I pass the ServiceBusConfiguration object to the JobHostConfiguration.UseServiceBus method.
Inside the WebJob's Service Bus queue-triggered function I again do the following to requeue, first creating a new instance of the brokered message.
//if not available yet for processing please requeue...
var queueclient = QueueClient.CreateFromConnectionString(connectionString, queueName);
queueclient.Send(message);
queueclient.Close();
I don't do the following / use callbacks, which may be why it isn't working?
var options = new OnMessageOptions();
options.AutoComplete = false; // to call Complete ourselves

// Callback to handle received messages
client.OnMessage(m =>
{
    var clone = m.Clone();
    clone.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(60);
    client.Send(clone);
    m.Complete();
}, options);
when I want to requeue a message (because it cannot be processed currently) back to the queue that triggers the service bus function, as a new brokered message after a minute, using BrokeredMessage.ScheduledEnqueueTimeUtc, it seems like it isn't working
If you fail to process your message, do not re-queue it. Instead, abandon it (with a reason) and it will be picked up again.
BrokeredMessage.ScheduledEnqueueTimeUtc is intended for new messages being added to the queue. When you receive a message, you can complete, dead-letter, defer, or abandon it. If you abandon a message, it will be retried, but you can't control when that will happen. If there are no other messages in the queue, it will be retried almost immediately.
Note: when you see a behaviour that you suspect is not right, having a simple repro to share would be very helpful.
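To make the suggested approach concrete, here is a minimal sketch using the same OnMessageOptions callback pattern as in the question (the old Microsoft.ServiceBus.Messaging SDK; the "ready to process" check and the processing step are placeholders): settle the message with Abandon and a reason instead of sending a clone back to the queue.
using System;
using System.Collections.Generic;
using Microsoft.ServiceBus.Messaging;

static class RetryByAbandon
{
    public static void StartListening(string connectionString, string queueName)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

        var options = new OnMessageOptions
        {
            AutoComplete = false // we settle each message ourselves
        };

        client.OnMessage(message =>
        {
            bool readyToProcess = true; // placeholder for your own "can I process this yet?" check

            if (readyToProcess)
            {
                Console.WriteLine(message.GetBody<string>()); // placeholder for real processing
                message.Complete(); // removes the message from the queue
            }
            else
            {
                // Abandon releases the lock and records a reason; the broker redelivers the
                // message (almost immediately if the queue is otherwise empty) and increments
                // its delivery count until MaxDeliveryCount is reached.
                message.Abandon(new Dictionary<string, object>
                {
                    { "AbandonReason", "Dependency not ready yet" }
                });
            }
        }, options);
    }
}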

How to listen to a queue using azure service-bus with Node.js?

Background
I have several clients sending messages to an Azure Service Bus queue. To match it, I need several machines reading from that queue and consuming the messages as they arrive, using Node.js.
Research
I have read the Azure Service Bus queues tutorial and I am aware I can use receiveQueueMessage to read a message from the queue.
However, the tutorial does not mention how one can listen to a queue and read messages as soon as they arrive.
I know I can simply poll the queue for messages, but this spams the servers with requests for no real benefit.
After searching in SO, I found a discussion where someone had a similar issue:
Listen to Queue (Event Driven no polling) Service-Bus / Storage Queue
And I know they ended up using the C# async method ReceiveAsync, but it is not clear to me:
whether that method is available for Node.js, and
whether that method reads messages from the queue as soon as they arrive, like I need.
Problem
The documentation for Node.js is close to non-existent, with that one tutorial being the only major document I found.
Question
How can my workers be notified of an incoming message in Azure Service Bus queues?
Answer
According to Azure support, it is not possible to be notified when a queue receives a message. This is valid for every language.
Work arounds
There are two main workarounds for this issue:
Use Azure topics and subscriptions. This way you can have all clients subscribed to a new-message event and have them check the queue once they receive the notification. This has several problems though: first, you have to pay for yet another Azure service, and second, you can have multiple clients trying to read the same message.
Continuous polling. Have the clients check the queue every X seconds. This solution is horrible, as you end up paying for the network traffic you generate and you spam the service with useless requests. To help minimize this there is a concept called long polling, which is so poorly documented it might as well not exist. I did find this NPM module though: https://www.npmjs.com/package/azure-awesome-queue
Alternatives
Honestly, at this point, you may be wondering why you should be using this service. I agree...
As an alternative there is RabbitMQ which is free, has a community, good documentation and a ton more features.
The downside here is that maintaining a RabbitMQ fault tolerant cluster is not exactly trivial.
Another alternative is Apache Kafka which is also very reliable.
You can receive messages from the Service Bus queue via the subscribe method, which listens for messages as they arrive. Example from the Azure documentation below:
const { delay, ServiceBusClient, ServiceBusMessage } = require("@azure/service-bus");

// connection string to your Service Bus namespace
const connectionString = "<CONNECTION STRING TO SERVICE BUS NAMESPACE>"

// name of the queue
const queueName = "<QUEUE NAME>"

async function main() {
    // create a Service Bus client using the connection string to the Service Bus namespace
    const sbClient = new ServiceBusClient(connectionString);

    // createReceiver() can also be used to create a receiver for a subscription.
    const receiver = sbClient.createReceiver(queueName);

    // function to handle messages
    const myMessageHandler = async (messageReceived) => {
        console.log(`Received message: ${messageReceived.body}`);
    };

    // function to handle any errors
    const myErrorHandler = async (error) => {
        console.log(error);
    };

    // subscribe and specify the message and error handlers
    receiver.subscribe({
        processMessage: myMessageHandler,
        processError: myErrorHandler
    });

    // Waiting long enough before closing the receiver to receive messages
    await delay(20000);

    await receiver.close();
    await sbClient.close();
}

// call the main function
main().catch((err) => {
    console.log("Error occurred: ", err);
    process.exit(1);
});
source :
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-nodejs-how-to-use-queues
I asked myself the same question; here is what I found.
Use Google Pub/Sub; it does exactly what you are looking for.
If you want to stay with Azure, the following is possible:
cloud functions can be triggered from SBS messages
trigger an event-hub event with that cloud function
receive the event and fetch the message from SBS
You can make use of serverless functions with a "ServiceBusQueueTrigger"; they are invoked as soon as a message arrives in the queue.
It's pretty straightforward to do in Node.js: you need a binding defined in function.json with the type
"type": "serviceBusTrigger",
This article (https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus#trigger---javascript-example) probably explains it in more detail.

How to guarantee azure queue FIFO

I understand that the MS Azure Queue service documentation http://msdn.microsoft.com/en-us/library/windowsazure/dd179363.aspx says first-in, first-out (FIFO) behavior is not guaranteed.
However, our application is such that ALL the messages have to be read and processed in FIFO order. Could anyone please suggest how to achieve a guaranteed FIFO using Azure Queue Service?
Thank you.
The docs say for Azure Storage queues that:
Messages in Storage queues are typically first-in-first-out, but sometimes they can be out of order; for example, when a message's visibility timeout duration expires (for example, as a result of a client application crashing during processing). When the visibility timeout expires, the message becomes visible again on the queue for another worker to dequeue it. At that point, the newly visible message might be placed in the queue (to be dequeued again) after a message that was originally enqueued after it.
Maybe that is good enough for you? Otherwise, use Service Bus.
The latest Service Bus release offers reliable message queuing: queues, topics and subscriptions.
Adding to @RichBower's answer... check out this: Azure Storage Queues vs. Azure Service Bus Queues
MSDN (link retired)
http://msdn.microsoft.com/en-us/library/windowsazure/hh767287.aspx
learn.microsoft.com
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted
Unfortunately, many answers mislead readers toward Service Bus queues, but I assume the question is about Storage queues, judging from the tags. In Azure Storage Queues, FIFO is not guaranteed, whereas in Service Bus, FIFO message ordering is guaranteed, and only with the use of a concept called sessions.
A simple scenario: if one consumer receives a message from the queue, that message is not visible to you as the second receiver, so you assume the next message you receive is actually the first message (and that is where FIFO fails :P).
Consider using Service Bus with sessions if strict ordering is your requirement.
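As a rough sketch of what sessions look like with the Microsoft.Azure.ServiceBus client (connection string and queue name are placeholders, and the queue must have been created with RequiresSession = true): every message sent with the same SessionId is handed to the session receiver in the order it was enqueued.
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

static class SessionFifoDemo
{
    public static async Task RunAsync(string connectionString, string queueName)
    {
        // Send three messages that belong to the same session.
        var sender = new MessageSender(connectionString, queueName);
        for (var i = 0; i < 3; i++)
        {
            await sender.SendAsync(new Message(Encoding.UTF8.GetBytes($"Message {i}"))
            {
                SessionId = "order-1" // same session id => ordered delivery within the session
            });
        }
        await sender.CloseAsync();

        // Accept the session and read the messages back; they arrive in enqueue order.
        var sessionClient = new SessionClient(connectionString, queueName);
        var session = await sessionClient.AcceptMessageSessionAsync("order-1");

        for (var i = 0; i < 3; i++)
        {
            var message = await session.ReceiveAsync(TimeSpan.FromSeconds(5));
            if (message == null) break;
            Console.WriteLine(Encoding.UTF8.GetString(message.Body));
            await session.CompleteAsync(message.SystemProperties.LockToken);
        }

        await session.CloseAsync();
        await sessionClient.CloseAsync();
    }
}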
I don't know how fast you want to process the messages, but if you need real FIFO, don't allow the WebJob to fetch more than one message from the queue at a time.
Use this in your Program.cs at the top of the Main function.
static void Main()
{
    var config = new JobHostConfiguration();

    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
    }

    config.Queues.BatchSize = 1; // Number of messages to dequeue at the same time.
    config.Queues.MaxPollingInterval = TimeSpan.FromMilliseconds(100); // Polling interval for the queue.

    JobHost host = new JobHost(config);

    ....your initial information...

    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
This will get one message at a time, with a polling interval of 100 milliseconds.
This is working perfectly for me with a logger WebJob that writes the trace information to files.
As mentioned here https://www.jayway.com/2013/12/20/message-ordering-on-windows-azure-service-bus-queues/ ordering is not guaranteed in Service Bus either, except when using receive-and-delete mode, which is risky.
You just need to follow the steps below to ensure message ordering:
1) Create a queue with sessions enabled (RequiresSession = true).
2) While sending a message to the queue, provide the session ID like below:
var message = new BrokeredMessage(item);
message.SessionId = "LB";
Console.WriteLine("Response from Central Scoring System : " + item);
client.Send(message);
3) While creating the receiver, accept the session and receive the messages from it:
var session = queueClient.AcceptMessageSession("LB");
var message = session.Receive();
var body = message.GetBody<string>();
var messageId = message.MessageId;
Console.WriteLine("Message Body:" + body);
Console.WriteLine("Message Id:" + messageId);
4) Messages that have the same session ID are delivered in order, so the receiver gets them back in the order they were sent.
Thanks!!
