Spring Integration - Design - Prevent Infinite Processing Loop

I have a Spring Integration flow where a scheduled processor sequentially reads messages from a JMS queue and attempts processing. If the processor finds that a message can't be processed until another event finishes, it sends the message back to the original queue and tries again later.
If it just keeps sending messages that can't be processed back to the queue, it creates an infinite loop.
So I need to hold onto them until I finish reading all the messages that already exist in the queue, and trigger a release back to the queue only once all existing messages have been read. How do I go about this?
Note that I don't want to aggregate the messages, just temporarily hold them and release them somehow. Also note that my processor is scheduled to read messages (it is not message-driven).

In this case you have to acknowledge those messages in the queue anyway and re-send them back to it using JmsTemplate (or JmsSendingMessageHandler).
The trouble with rejecting a message is that the failed message is returned to the head of the queue. That's why you see it again and again and never reach the other messages (you can work around that to some extent with concurrency).
By re-sending failed messages back to the queue instead, you place them at the tail. So the "bad" messages become available later, after the other, existing messages have been processed.
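A minimal sketch of that re-send approach, assuming a hypothetical work.queue destination and placeholder canProcess()/process() methods:

```java
import org.springframework.jms.core.JmsTemplate;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Service;

@Service
public class RequeueingProcessor {

    private final JmsTemplate jmsTemplate;

    public RequeueingProcessor(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Called by the scheduled poller for each message read from the queue.
    // By the time this runs, the JMS receive has already been acknowledged,
    // so re-sending produces a fresh message at the tail of the queue rather
    // than returning the original to the head.
    public void handle(Message<String> message) {
        if (!canProcess(message)) {
            // Hypothetical destination name; re-send to the original queue.
            jmsTemplate.convertAndSend("work.queue", message.getPayload());
            return;
        }
        process(message);
    }

    private boolean canProcess(Message<String> message) { /* domain check */ return true; }

    private void process(Message<String> message) { /* business logic */ }
}
```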

Related

I want to re-queue a message into RabbitMQ, with values added to the payload, when there is an error

I have a peculiar type of problem statement to be solved.
RabbitMQ is configured as the message broker and it's working, but when processing fails in the consumer I currently acknowledge with nack, which blindly re-queues the message with whatever payload originally came in. I want to add some more fields to it and re-queue it in simple steps.
For example:
When the consumer gets payload data from RabbitMQ, it processes it and tries to do some work based on it across multiple host machines. If one machine is not reachable, I need to process that part alone after some time.
Hence I'm planning to re-queue the failed data, with one more field for the machine name, back to the queue so it will be processed again by the existing logic itself.
How do I achieve this? Can someone help me?
When a message is requeued, it is placed in its original position in the queue, if possible. If that isn't possible (due to concurrent deliveries and acknowledgements from other consumers when multiple consumers share a queue), the message is requeued to a position closer to the queue head. This way you end up in an infinite loop (consuming and requeuing the same message).
To avoid this, positively acknowledge the message and publish it to the queue with the updated fields. Publishing the message puts it at the end of the queue, so you will be able to process it after some time.
Reference https://www.rabbitmq.com/nack.html
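A rough sketch of that ack-and-republish pattern with the RabbitMQ Java client; the queue name "work" and the failedMachine header are illustrative assumptions:

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import java.util.HashMap;
import java.util.Map;

public class RequeueWithExtraField {

    // Consumes from a hypothetical "work" queue. On failure the delivery is
    // positively acked, then a copy with an extra header is published, which
    // lands at the tail of the queue.
    public static void consume(Channel channel) throws Exception {
        channel.basicConsume("work", false, (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            try {
                process(delivery.getBody());
                channel.basicAck(tag, false);
            } catch (Exception e) {
                Map<String, Object> headers = new HashMap<>();
                if (delivery.getProperties().getHeaders() != null) {
                    headers.putAll(delivery.getProperties().getHeaders());
                }
                headers.put("failedMachine", "host-42"); // hypothetical extra field
                AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                        .headers(headers)
                        .build();
                channel.basicPublish("", "work", props, delivery.getBody());
                channel.basicAck(tag, false); // positive ack, never nack/requeue
            }
        }, consumerTag -> { });
    }

    private static void process(byte[] body) { /* business logic */ }
}
```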

How to know if the queue has already been read fully using PEEK method in Azure Service Bus

I am using Azure Service Bus REST API to receive messages.
The requirement is to have a scheduled job read messages from Azure Service Bus queues and forward them for processing. If processed successfully, then delete them from the Queue or keep them in the Queue to be processed in the next scheduled job. I am using the Peek-Lock Message (Non-Destructive Read) method (https://learn.microsoft.com/en-us/rest/api/servicebus/peek-lock-message-non-destructive-read).
The problem I am facing is inside my loop, how to know that I have read the queue fully so that I do not re-read the same queue again.
Your requirement is somewhat problematic.
"If processed successfully, then delete them from the Queue or keep them in the Queue to be processed in the next scheduled job."
Successful processing should always result in message completion. Otherwise, you're asking for trouble. When processing messages in peek-lock mode, the message is locked for up to 5 minutes. It's your responsibility to complete it if the processing is successful. If it wasn't completed, that's a sign the processing wasn't successful and it should be read again given your requirement. Do not leave successfully processed messages in the queue.
"The problem I am facing is inside my loop, how to know that I have read the queue fully so that I do not re-read the same queue again."
You shouldn't be concerned about this. Read messages and process them. If processing fails, the message will reappear; otherwise the message is removed when you complete it. If you want idempotency, i.e. to ensure that a message is not processed more than once, then upon successful processing and prior to completion store the message ID (assuming it's unique) in a data store, and validate every new message against that data store.
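As a rough sketch of this pattern with the azure-messaging-servicebus Java client; the in-memory ID set stands in for a real data store, and the queue and connection details are assumptions:

```java
import com.azure.messaging.servicebus.*;
import java.time.Duration;
import java.util.HashSet;
import java.util.Set;

public class ScheduledQueueReader {

    // Hypothetical in-memory store; use a durable data store in production.
    private final Set<String> processedIds = new HashSet<>();

    public void drainOnce(String connectionString, String queueName) {
        ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
                .connectionString(connectionString)
                .receiver()
                .queueName(queueName) // PEEK_LOCK is the default receive mode
                .buildClient();
        // receiveMessages returns once maxMessages arrive or the wait time
        // elapses, so an empty batch is the natural "drained for now" signal.
        for (ServiceBusReceivedMessage message :
                receiver.receiveMessages(10, Duration.ofSeconds(5))) {
            if (processedIds.contains(message.getMessageId())) {
                receiver.complete(message); // duplicate: remove without reprocessing
                continue;
            }
            try {
                process(message.getBody().toString());
                processedIds.add(message.getMessageId()); // record before completing
                receiver.complete(message);               // remove from the queue
            } catch (Exception e) {
                receiver.abandon(message); // lock released; message reappears
            }
        }
        receiver.close();
    }

    private void process(String body) { /* business logic */ }
}
```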

Azure Service Bus - Add a message to the queue in a deferred state

I'm wondering if it is possible to send a brokered message to a queue/topic where the message is already in a deferred state?
I'm asking this because I currently have a process that does the following ...
The process starts and a brokered message is sent to a queue (this triggers a function that records the message body as an entity in table storage with a 'Processing' status).
Additional work is done in the process
If we get to the end of the process without any issues, another brokered message is sent to the queue with a completion message (this triggers the same function that updates the entity in table storage with a 'Complete' status).
While this method is mostly working, it feels clunky and fragile. I would really like to be able to send a message to the queue and then have the final step make the message visible on the queue so it can be consumed by the function (Durable Function).
I thought about setting the ScheduledEnqueueTimeUtc, but I can't guarantee when the process will finish (I'm thinking worst case scenario here) so I'm not sure how long to set it.
I also looked at the Defer option on BrokeredMessage, but it seems deferral can only be set by the receiver; a message can't start out in a deferred state.
Is what I'm trying to do possible with Service Bus brokered messages? Could I set the scheduled enqueue time to some ridiculously long delay (e.g. 2 hours), so that if the message reaches that time it is automatically expired and moved to the Dead Letter queue? Should I send the initial message to the Dead Letter queue and then, once the process is complete, retrieve it and resubmit it?
Has anyone had any experience with implementing a process like this ... send a start message and only process the message once a completion notification has been received? I need this to be as robust as possible as I'm dealing with financial transactions in this process.
Hopefully my explanation makes sense.
"I'm wondering if it is possible to send a brokered message to a queue/topic where the message is already in a deferred state?"
That's not possible. You can only delay a brand new message, not defer it. Deferring requires a message to be received first, so that it has a SequenceNumber.
Using ScheduledEnqueueTimeUtc has its challenges: you send the message into the future, but you cannot cancel it once processing is over. Instead, you could leverage QueueClient.ScheduleMessageAsync(), which returns the SequenceNumber immediately. This way you can schedule the message far into the future, but also cancel it if processing finishes earlier.
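The answer above names the .NET QueueClient API; a comparable sketch with the azure-messaging-servicebus Java client, using scheduleMessage()/cancelScheduledMessage() and assumed connection details, might look like this:

```java
import com.azure.messaging.servicebus.*;
import java.time.OffsetDateTime;

public class ScheduleThenCancel {

    public static void main(String[] args) {
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .connectionString(System.getenv("SB_CONNECTION")) // assumed env var
                .sender()
                .queueName("work-queue") // hypothetical queue name
                .buildClient();

        // Schedule the message far into the future; the sequence number comes
        // back immediately and is the handle needed to cancel later.
        long sequenceNumber = sender.scheduleMessage(
                new ServiceBusMessage("start"),
                OffsetDateTime.now().plusHours(2));

        boolean finishedEarly = doWork(); // placeholder for the real process

        if (finishedEarly) {
            // Processing completed before the scheduled time: withdraw the message.
            sender.cancelScheduledMessage(sequenceNumber);
        }
        sender.close();
    }

    static boolean doWork() { return true; }
}
```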
I ended up solving this issue by keeping the process of sending two messages, but refactoring my durable function to record the messages in Table Storage, check that both messages have been received, and if they have, add a new message to Azure Queue Storage. A second function listens to that queue and starts its process.
After much testing, this appears to be quite a robust solution. It doesn't matter what order the two messages arrive in, or how long they take; as soon as both of them have arrived, the second function kicks off.
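A rough sketch of that correlation logic, with an in-memory map standing in for Table Storage and a placeholder enqueueToStorageQueue() for the Azure Queue Storage client (both are assumptions, not the poster's actual code):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TwoMessageCorrelator {

    // Keyed by process ID; tracks which of the two messages have arrived.
    private final Map<String, Set<String>> received = new ConcurrentHashMap<>();

    // Called by the function for every brokered message, in any order.
    public void onMessage(String processId, String kind) { // kind: "start" or "complete"
        received.computeIfAbsent(processId, id -> ConcurrentHashMap.newKeySet())
                .add(kind);
        Set<String> kinds = received.get(processId);
        // Only when both messages have arrived does the second stage kick off.
        if (kinds.contains("start") && kinds.contains("complete")) {
            enqueueToStorageQueue(processId);
            received.remove(processId);
        }
    }

    private void enqueueToStorageQueue(String processId) { /* hypothetical trigger */ }
}
```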

Amazon SQS better way of handling listeners

I have an SQS queue which holds a lot of messages (typically in the thousands). Presently I have multiple listeners (threads created from the same source), and each listener receives messages from the queue. As soon as a listener receives a message, it deletes the message from the queue; the message is processed only after it has been deleted. I am using a visibility timeout of 30 seconds.
I am not using any locks to handle duplicates, since I delete each message from the queue immediately after receiving it. I haven't seen a duplicate so far, but I am worried it might happen.
Now, the question is: which is the better way, having multiple listeners like this, or listening to the queue in a single thread and then spinning up new threads to process each message received?
Firstly, it is worth understanding the concept of message invisibility timeout.
When a message is retrieved from an Amazon SQS queue (e.g. by your thread), the message is marked as invisible in Amazon SQS. Best practice is for your thread to process the message and delete it only after processing has completed. This way, if the thread fails, the message automatically becomes visible on the queue again and another thread can process it.
With your current application design, if a thread fails then the message is lost and will not be retried. You should consider changing your code to delete the message only after it has been processed.
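A minimal sketch of that receive-process-delete order with the AWS SDK for Java v2; the queue URL and process() method are placeholders:

```java
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.*;

public class SqsWorker {

    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/work"; // hypothetical

        ReceiveMessageResponse response = sqs.receiveMessage(ReceiveMessageRequest.builder()
                .queueUrl(queueUrl)
                .maxNumberOfMessages(10)
                .visibilityTimeout(30) // message stays invisible while we work
                .build());

        for (Message message : response.messages()) {
            try {
                process(message.body());
                // Delete only after successful processing; on failure the
                // message becomes visible again when the timeout expires.
                sqs.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(queueUrl)
                        .receiptHandle(message.receiptHandle())
                        .build());
            } catch (Exception e) {
                // No delete: SQS will redeliver after the visibility timeout.
            }
        }
    }

    static void process(String body) { /* business logic */ }
}
```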
Using multiple threads to process messages is recommended, because it will allow higher message throughput by processing messages in parallel. It is also a simpler design, and simple is always best. Your alternate idea of having one process retrieve messages and then firing off threads to process the message is more complex and does not provide any benefits.
Amazon SQS queues can occasionally return the same message more than once. It is rare, but it can happen. The multiple-thread design will probably encounter this more often than the single-thread design, because multiple threads might simultaneously retrieve the same message. However, it can still happen in the single-thread model, too.
If processing the same message twice is a concern, then consider using a FIFO queue (not currently available in every AWS Region). This will guarantee that every message is received only once. Alternatively, your code would need to check whether a particular message has already been processed (eg by checking in a database).
The multiple-thread design will also allow you to scale horizontally by having multiple systems (even across multiple Availability Zones) process messages, whereas your single-thread design has a single point of failure and is less scalable.

How to put a message at the end of MQRabbit Queue

I'm working on a worker that processes messages from RabbitMQ. However, I am unsure of how to accomplish this.
If I receive a message and an error occurs during processing, how can I put the message at the end of the queue?
I'm trying to use nack or reject, but the message is always re-queued in the first position, and the other messages stay frozen!
I don't understand why the message has to be put back in the first position. I've tried to "play" with other options like requeue or AllupTo, but none of them seem to work.
Thank you in advance!
The documentation says:
"Messages can be returned to the queue using AMQP methods that feature a requeue parameter (basic.recover, basic.reject and basic.nack), or due to a channel closing while holding unacknowledged messages. Any of these scenarios caused messages to be requeued at the back of the queue for RabbitMQ releases earlier than 2.7.0. From RabbitMQ release 2.7.0, messages are always held in the queue in publication order, even in the presence of requeueing or channel closure.
With release 2.7.0 and later it is still possible for individual consumers to observe messages out of order if the queue has multiple subscribers. This is due to the actions of other subscribers who may requeue messages. From the perspective of the queue the messages are always held in the publication order."
Remember to ack your successful messages, otherwise they will not be removed from the queue.
If you need more control over your rejected messages, you should take a look at dead-letter exchanges.
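For illustration, a minimal dead-letter-exchange setup with the RabbitMQ Java client; the exchange and queue names are hypothetical. Messages rejected with requeue=false on the "work" queue are then routed to "work.dlq" instead of being discarded:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class DeadLetterSetup {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Exchange and queue that collect the dead-lettered messages.
            channel.exchangeDeclare("dlx", "fanout", true);
            channel.queueDeclare("work.dlq", true, false, false, null);
            channel.queueBind("work.dlq", "dlx", "");
            // Main queue: messages rejected with requeue=false are re-routed
            // to the "dlx" exchange rather than dropped.
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-dead-letter-exchange", "dlx");
            channel.queueDeclare("work", true, false, false, queueArgs);
        }
    }
}
```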
nack and reject either discard the message or re-queue it.
For your requirement, the following could be suitable:
Once the consumer receives the message, just before starting to process it, send an ack() back to the RabbitMQ server.
Then process the message. If any error occurs during processing, send (publish) the same message to the same queue; this will put the message at the back of the queue.
On successful processing, do nothing: the ack() has already been sent to the RabbitMQ server. Just take the next message and process it.
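A compact sketch of those three steps with the RabbitMQ Java client (queue name assumed). Note the trade-off: because the message is acked before processing, a crash between the ack and the republish will lose it:

```java
import com.rabbitmq.client.Channel;

public class AckFirstConsumer {

    public static void consume(Channel channel) throws Exception {
        channel.basicConsume("work", false, (consumerTag, delivery) -> {
            // Step 1: ack immediately, before processing, so a later
            // nack/reject can never push the message back to the head.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            try {
                process(delivery.getBody());
                // Step 3: success, nothing more to do.
            } catch (Exception e) {
                // Step 2: on failure, publish the same message again; a fresh
                // publish lands at the back of the queue.
                channel.basicPublish("", "work",
                        delivery.getProperties(), delivery.getBody());
            }
        }, consumerTag -> { });
    }

    private static void process(byte[] body) { /* business logic */ }
}
```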
