I'm trying to get some information from the Twilio queue, and in order to get that information I have to send a request every time I want to know whether or not the queue is full. Is there a way to "watch" the queue with the help of sockets to see whether it is full or not?
Here is a blog post on deriving the message queue length from message timestamps:
How to Calculate Your Message Queue Length
I have a request to implement a dashboard showing which message in an Azure Service Bus queue was completed and when (with some info about message parameters). Unfortunately we do not have access to the receiver's code and cannot change it to log the time of message delivery. So we need to somehow subscribe to the moment when the receiver takes the message away.
I have already investigated the Azure portal API looking for something suitable, but there is no such possibility, and I have tried to find something on Stack Overflow and in Google, but with no results.
There is one idea: use two queues and an Azure Function between them. Put all messages into the first queue; the Azure Function receives a message, logs the info about it, puts it into the second queue, and waits until other services take the message away from the second queue. The second queue will always hold only one message, and this way we will be able to understand which message was delivered for sure, and when. Roughly, the function I have in mind would look like the sketch below.
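A rough sketch of that forwarding function, assuming a Service Bus trigger with an output binding (queue names and the connection setting are placeholders, not real resources):

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ForwardAndLog
{
    // Triggered by the first queue; the return value is sent on to the second queue.
    [FunctionName("ForwardAndLog")]
    [return: ServiceBus("second-queue", Connection = "ServiceBusConnection")]
    public static string Run(
        [ServiceBusTrigger("first-queue", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // Record when the message passed through, plus whatever parameters matter.
        log.LogInformation("Forwarding message at {Time}: {Body}", DateTime.UtcNow, message);
        return message;
    }
}
```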
However, what I do not like is that the second queue does not play the role of a real queue (which suggests something is wrong here and I should use something else), and the performance of such a system may not be high enough...
Any help is appreciated (articles, videos, ideas). Thank you.
I'm wondering if it is possible to send a brokered message to a queue/topic where the message is already in a deferred state?
I'm asking this because I currently have a process that does the following:

1. The process starts and a brokered message is sent to a queue (this triggers a function that records the message body as an entity in Table Storage with a 'Processing' status).
2. Additional work is done in the process.
3. If we get to the end of the process without any issues, another brokered message is sent to the queue with a completion message (this triggers the same function, which updates the entity in Table Storage with a 'Complete' status).

A sketch of the two sends is just below.
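For illustration only, the two sends might look like this with the Microsoft.ServiceBus.Messaging SDK (the queue name and message bodies are placeholders, not my actual code):

```csharp
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static class StatusMessages
{
    public static async Task RunProcessAsync(string connectionString, string id)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "status-queue");

        // Step 1: announce the start; the function records a 'Processing' entity.
        await client.SendAsync(new BrokeredMessage($"{{ \"id\": \"{id}\", \"status\": \"Processing\" }}"));

        // Step 2: ... additional work happens here ...

        // Step 3: announce completion; the same function flips the entity to 'Complete'.
        await client.SendAsync(new BrokeredMessage($"{{ \"id\": \"{id}\", \"status\": \"Complete\" }}"));
    }
}
```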
While this method mostly works, it feels clunky and fragile. I would really like to be able to send a message to the queue up front and then have the final step make that message visible on the queue so it can be consumed by the function (a Durable Function).
I thought about setting the ScheduledEnqueueTimeUtc, but I can't guarantee when the process will finish (I'm thinking worst case scenario here) so I'm not sure how long to set it.
I also looked at the Defer option for a BrokeredMessage, but it seems this can only be set by the receiver; a message cannot start out in a deferred state.
Is what I'm trying to do possible with Service Bus brokered messages? Could I set the scheduled enqueue time to some ridiculously long delay (e.g. 2 hours) so that if the message reaches that time it automatically expires and is moved to the Dead Letter queue? Should I send the initial message to the Dead Letter queue and then, once the process is complete, retrieve it and resubmit it?
Has anyone had any experience implementing a process like this: send a start message and only process the message once a completion notification has been received? I need this to be as robust as possible, as I'm dealing with financial transactions in this process.
Hopefully my explanation makes sense.
I'm wondering if it is possible to send a brokered message to a queue/topic where the message is already in a deferred state?
That's not possible. You can only delay a brand-new message, not defer it. Deferring requires a message to be received first so that it has a SequenceNumber.
Using ScheduledEnqueueTimeUtc has its challenges: you send the message into the future, but you cannot cancel it once processing is over. Instead, you could leverage QueueClient.ScheduleMessageAsync(), which returns a SequenceNumber immediately. This way you can schedule the message far into the future, but still cancel it if processing finishes earlier, as in the sketch below.
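A minimal sketch of that schedule-then-cancel pattern with the Microsoft.ServiceBus.Messaging SDK (connection string, queue name, and payload are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static class ScheduleThenCancel
{
    public static async Task RunAsync(string connectionString)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");

        // Schedule the completion message far in the future and keep the sequence number.
        long sequenceNumber = await client.ScheduleMessageAsync(
            new BrokeredMessage("completion"), DateTimeOffset.UtcNow.AddHours(2));

        // ... do the actual processing ...

        // If processing finishes early, cancel the scheduled message so it never appears.
        await client.CancelScheduledMessageAsync(sequenceNumber);
    }
}
```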
I ended up solving this issue by keeping the process of sending two messages, but refactoring my Durable Function to record the messages in Table Storage, check whether both messages have been received, and if they have, add a new message to Azure Queue Storage. A second function listens to that queue and starts its own process.
After much testing, this appears to be quite a robust solution. It doesn't matter what order the two messages arrive in, or how long they take: as soon as both of them have arrived, the second function kicks off. A sketch of the pairing logic is below.
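For illustration, the pairing check might look something like this; the Storage helpers are hypothetical stand-ins for the Table Storage and Queue Storage SDK calls, not my actual code:

```csharp
using System.Threading.Tasks;

public static class MessagePairing
{
    public static async Task HandleStatusMessageAsync(string transactionId, string status)
    {
        // Record this message in Table Storage (hypothetical helper).
        await Storage.SaveStatusAsync(transactionId, status);

        // Only when both the 'Processing' and 'Complete' rows exist for the same
        // transaction do we hand off to the second function via Queue Storage.
        if (await Storage.BothStatusesRecordedAsync(transactionId))
        {
            // Hypothetical helper: enqueue to Azure Queue Storage to trigger function 2.
            await Storage.EnqueueForSecondFunctionAsync(transactionId);
        }
    }
}
```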
I have a huge number of messages in an Azure Service Bus dead-letter queue. When I look at the messages, I see that most of them are expired.
I want to know what happens when we try to resubmit an expired dead-letter queue message back to its original queue.
Can anyone help me out by explaining this?
Thank you!
I am trying to answer two of your questions below.
When you receive an expired message from the dead-letter queue in order to process or resubmit it to the main queue (using ReceiveAsync() to receive the message), the message's state changes to deferred, so the message won't be available for receiving in the dead-letter queue anymore.
As for your question about what happens when you resubmit: the message is submitted as a brand-new message to the target queue.
We could use the FormatDeadLetterPath() method to build the entity path for the specified queue's dead-letter queue, create a receiver, and retrieve messages from the DLQ. If you'd like to resubmit a message back into the main queue, you could create and send a new message based on the one retrieved from the DLQ. You could also investigate why the message was dead-lettered by checking the DeadLetterReason and DeadLetterErrorDescription properties, as in the sketch below.
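A minimal sketch, again assuming the Microsoft.ServiceBus.Messaging SDK (queue name and connection string are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static class DlqResubmit
{
    public static async Task ResubmitOneAsync(string connectionString)
    {
        string dlqPath = QueueClient.FormatDeadLetterPath("myqueue");
        var dlqClient = QueueClient.CreateFromConnectionString(connectionString, dlqPath);
        var mainClient = QueueClient.CreateFromConnectionString(connectionString, "myqueue");

        BrokeredMessage dlqMessage = await dlqClient.ReceiveAsync();
        if (dlqMessage != null)
        {
            // Why was this message dead-lettered?
            Console.WriteLine(dlqMessage.Properties["DeadLetterReason"]);
            Console.WriteLine(dlqMessage.Properties["DeadLetterErrorDescription"]);

            // Resubmit a copy as a brand-new message, then remove the original from the DLQ.
            await mainClient.SendAsync(dlqMessage.Clone());
            await dlqMessage.CompleteAsync();
        }
    }
}
```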
This link explains dead-letter queues with a sample; please refer to it.
We are using the azure service bus to facilitate the parallel processing of messages through workers listening to a queue.
First an aggregated message is received, and then this message is split into thousands of individual messages, which are posted through a request-response pattern since we need to know when all messages have been completed in order to run a separate process.
Our problem is that the request-response method has a timeout, which causes the following issue:
Let's say we post 1000 messages to be processed and there is only one worker listening. Messages left in the queue after the timeout expires are discarded, which is something we do not want. If we set the expiry time to a value large enough to guarantee that all messages will be processed, then we run the risk of a message failing and having to wait out the whole timeout before we understand that something has gone wrong.
Is there a way to dynamically change the expiration of a single message in a request-response scenario or any other pattern that we should consider?
Thanks!
You have got things slightly wrong. The TimeToLive of an Azure Service Bus message (https://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.brokeredmessage.timetolive.aspx) is the time a message may remain on the queue before it expires if it has not been consumed.
It is not a timeout: if you post a message with a large TimeToLive, the message will simply stay on the queue for a long time. If you fail to consume a message, you should instead warn the other end that you failed to consume it.
You can do this by using another queue: put a message on that other queue containing the id of the failed message and the error, as in the sketch below.
This is an asynchronous process, so you should not be holding requests open waiting for it; work with the asynchronous nature of the problem instead.
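A minimal sketch of that error-notification idea, assuming the Microsoft.ServiceBus.Messaging SDK (the queue name and the "Error" property are placeholders of my choosing):

```csharp
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static class ErrorNotifier
{
    public static async Task ReportFailureAsync(
        string connectionString, BrokeredMessage failedMessage, string error)
    {
        var errorClient = QueueClient.CreateFromConnectionString(connectionString, "processing-errors");

        // Tie the error notification back to the original message by id.
        var errorMessage = new BrokeredMessage("processing failed")
        {
            CorrelationId = failedMessage.MessageId
        };
        errorMessage.Properties["Error"] = error;

        await errorClient.SendAsync(errorMessage);
    }
}
```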
I would appreciate your thoughts on this.
I have a node app which subscribes to a RabbitMQ queue. When it receives a message, it checks it for something and then saves it to a database.
However, if the message is missing some information or some other criterion is not yet met, I would like the subscriber to publish the message back onto the RabbitMQ queue.
I understand that logically this is just connecting to the queue and publishing the message, but is it really that simple, or is this bad practice or potentially dangerous?
Thanks for your help.
As I pointed out in the comment: when you create the connection to the queue, set autoAck = false to enable manual message acknowledgement. A message will then only be deleted from the queue once an acknowledgement for it has been received.
When the received message meets your requirements, send an ack to the queue and the message will be deleted from the queue. Otherwise, if no ack is sent, the message will stay in the queue.
As for what you mentioned in the comment, that validation may take 5 minutes: just send the ack inside the callback of the validation function, as in the sketch below.
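The question is about a node app, but the ack pattern is the same everywhere; here is a minimal sketch using the .NET RabbitMQ.Client package (6.x API; the host, queue name, and validation are placeholders):

```csharp
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class ManualAckConsumer
{
    public static void Start()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        var connection = factory.CreateConnection();
        var channel = connection.CreateModel();

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            // Hypothetical validation; it may take minutes before we decide to ack.
            if (Validate(ea.Body.ToArray()))
            {
                channel.BasicAck(ea.DeliveryTag, multiple: false); // delete from queue
            }
            // No ack -> the message remains unacknowledged on the queue.
        };

        // autoAck: false means messages are only removed once we BasicAck them.
        channel.BasicConsume(queue: "myqueue", autoAck: false, consumer: consumer);
    }

    private static bool Validate(byte[] body) => body.Length > 0; // placeholder check
}
```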
In your question, you describe two criteria for when a message may not be processed:

1. the message is missing some information, or
2. some other criterion is not yet met
The first of these appears to be a problem with the message itself, and it doesn't make much sense to re-queue a message that has a problem. The appropriate action is to log an error and drop the message (or invoke whatever error-handling logic your application contains).
The second of these is rather vague, but for the purposes of this answer we will assume that the problem is not with the message but with some other component in the system (e.g. a network connection issue). In this case, the consuming application can send a nack (negative acknowledgement), which can optionally requeue the message.
Keep in mind that in the second case it will be necessary to shut down the consumer until the error condition has resolved; otherwise the message will be redelivered and erroneously reprocessed ad infinitum until the system is back up, wasting resources on an unprocessable message.
Why use a nack instead of simply re-publishing? A nack with requeue sets the "redelivered" flag on the message, so you know it was delivered once already; see the sketch below. There are other options as well for handling bad messages.
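For illustration, inside a Received handler like the one sketched in the earlier answer (RabbitMQ.Client 6.x, with `channel` and `ea` from that handler; the one-retry policy is just an example):

```csharp
// Negative-acknowledge instead of re-publishing: the broker sets the
// redelivered flag, so a second failure is distinguishable from a first one.
if (ea.Redelivered)
{
    // Already retried once; drop it (or route it to a dead-letter exchange).
    channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: false);
}
else
{
    // First failure: requeue for one more attempt.
    channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true);
}
```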