I'm implementing a Kafka consumer with a custom acknowledgement mechanism using spring-integration-kafka.
The code from this example was used.
What I'm trying to achieve is this: when an exception is thrown, the acknowledgement should not be sent back to Kafka (i.e. no offset commit should be performed), so the next fromKafka.receive(10000) call returns the same message as the previous one.
But I ran into a problem: even if the acknowledgement isn't sent to Kafka, the consumer somehow knows the offset of the next message and keeps reading new messages, despite the fact that the offset value in the offsets topic remains unchanged.
How can I make the consumer re-read a message in case of failure?
There's currently no support for re-fetching failed messages.
One thing you can do is add retry (e.g. using a request handler retry advice) downstream of the message-driven adapter.
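For illustration, here is a minimal sketch of such a retry advice with Java configuration; the channel name fromKafka, the bean names and the retry settings are placeholders rather than anything from your setup:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.stereotype.Component;

@Configuration
class RetryAdviceConfig {

    // Retry a failing handler up to 3 times with a 5-second pause between attempts.
    @Bean
    public RequestHandlerRetryAdvice retryAdvice() {
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
        FixedBackOffPolicy backOff = new FixedBackOffPolicy();
        backOff.setBackOffPeriod(5000);
        retryTemplate.setBackOffPolicy(backOff);
        RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
        advice.setRetryTemplate(retryTemplate);
        return advice;
    }
}

@Component
class KafkaRecordHandler {

    // "fromKafka" stands in for the channel fed by your message-driven adapter.
    @ServiceActivator(inputChannel = "fromKafka", adviceChain = "retryAdvice")
    public void handle(String payload) {
        // processing logic; an exception here triggers the retry advice
    }
}
```

Once the retries are exhausted you could route the failure to an error channel or a recoverer, but that is beyond this sketch.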
By not acking, the message(s) will be delivered after a restart but not during the current instantiation.
Since messages are prefetched into the adapter, one thing you could do is detect the failure, stop the adapter, drain the prefetched messages and restart.
You could inject a custom ErrorHandler to stop the adapter and signal to your downstream flow that it should ignore the draining messages.
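A very rough sketch of that stop-and-drain idea, under the assumption that the adapter is available as a bean and that the handler below sits directly downstream of it (all names are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter;
import org.springframework.stereotype.Component;

@Component
public class DrainAwareHandler {

    private final AtomicBoolean draining = new AtomicBoolean(false);

    @Autowired
    private KafkaMessageDrivenChannelAdapter<?, ?> adapter; // the inbound adapter bean

    // "fromKafka" stands in for the channel the adapter sends to.
    @ServiceActivator(inputChannel = "fromKafka")
    public void handle(Object payload) {
        if (draining.get()) {
            return; // drop records that were already prefetched before the stop
        }
        try {
            process(payload);
        } catch (RuntimeException e) {
            draining.set(true);
            adapter.stop(); // stop consuming; already-fetched records still drain through here
            throw e;
        }
    }

    private void process(Object payload) {
        // real processing goes here
    }
}
```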
EDIT
There is now a SeekToCurrentErrorHandler.
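For example, a hedged sketch against the spring-kafka 2.x API, where the error handler is set on the listener container factory (in newer versions a DefaultErrorHandler plays this role):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

@Configuration
public class KafkaErrorHandlingConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // On a listener exception, seek back to the failed record so it is
        // redelivered on the next poll instead of being skipped.
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        return factory;
    }
}
```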
Related
I have a peculiar problem to solve.
I have configured RabbitMQ as the message broker and it is working. However, when processing fails in the consumer I currently nack the message, which blindly re-queues it with the payload that originally came in. What I want instead is to add some more fields to it and re-queue it, keeping the steps simple.
For example:
When the consumer gets payload data from RabbitMQ, it processes it and tries to do some work based on it on multiple host machines. If one machine is not reachable, I need to process that part separately after some time.
Hence I'm planning to re-queue the failed data back to the queue with one more field (the machine name), so it will be processed again by the existing logic.
How can I achieve this? Can someone help me?
When a message is requeued, it will be placed back in its original position in the queue, if possible. If that isn't possible (due to concurrent deliveries and acknowledgements from other consumers when multiple consumers share a queue), the message will be requeued to a position closer to the queue head. This way you can end up in an infinite loop, consuming and requeuing the same message.

To avoid this, positively acknowledge the message and publish it to the queue with the updated fields. Publishing the message puts it at the end of the queue, so you will be able to process it again after some time.

Reference: https://www.rabbitmq.com/nack.html
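A rough sketch of that ack-then-republish approach with the RabbitMQ Java client; the queue name and the failed-host header are placeholders for whatever extra fields you want to add:

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;

import java.util.HashMap;
import java.util.Map;

public class RequeueWithExtraField {

    // Acknowledge the original delivery and publish an enriched copy to the back of the queue.
    static void ackAndRepublish(Channel channel, long deliveryTag, byte[] body,
                                String queueName, String failedHost) throws Exception {
        channel.basicAck(deliveryTag, false);

        Map<String, Object> headers = new HashMap<>();
        headers.put("failed-host", failedHost); // illustrative extra field

        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .headers(headers)
                .build();

        // Default exchange with the queue name as routing key puts it at the tail of that queue.
        channel.basicPublish("", queueName, props, body);
    }
}
```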
I'm working on a worker that consumes and processes messages from a RabbitMQ queue.
However, I am unsure of how to accomplish this.
If I receive a message and an error occurs while I'm processing it, how can I put the message at the end of the queue?
I've tried using nack or reject, but the message is always put back in the first position, and the other messages stay frozen!
I don't understand why the message has to go back to the first position. I've tried to play with other options like requeue or AllupTo, but none of them seem to work.
Thank you in advance!
Documentation says:
Messages can be returned to the queue using AMQP methods that feature a requeue parameter (basic.recover, basic.reject and basic.nack), or due to a channel closing while holding unacknowledged messages. Any of these scenarios caused messages to be requeued at the back of the queue for RabbitMQ releases earlier than 2.7.0. From RabbitMQ release 2.7.0, messages are always held in the queue in publication order, even in the presence of requeueing or channel closure.

With release 2.7.0 and later it is still possible for individual consumers to observe messages out of order if the queue has multiple subscribers. This is due to the actions of other subscribers who may requeue messages. From the perspective of the queue the messages are always held in the publication order.
Remember to ack your successful messages, otherwise they will not be removed from the queue.
If you need more control over your rejected messages, you should take a look at dead letter exchanges.
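For example, a dead-letter exchange can be attached to a queue at declaration time; a small sketch with the RabbitMQ Java client, all names being placeholders:

```java
import com.rabbitmq.client.Channel;

import java.util.HashMap;
import java.util.Map;

public class DeadLetterSetup {

    // Messages rejected/nacked with requeue=false (or expired) on "work" are routed to "work.dlx".
    static void declare(Channel channel) throws Exception {
        channel.exchangeDeclare("work.dlx", "fanout", true);
        channel.queueDeclare("work.dead", true, false, false, null);
        channel.queueBind("work.dead", "work.dlx", "");

        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", "work.dlx");
        channel.queueDeclare("work", true, false, false, args);
    }
}
```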
nack and reject either discard the message or re-queue it (depending on the requeue flag).
For your requirement, the following could be suitable:
Once the consumer receives the message, just before starting to process it, send an ack() back to the RabbitMQ server.
Then process the message. If any error occurs during processing, publish the same message back to the same queue. This puts the message at the back of the queue.
On successful processing, do nothing; the ack() has already been sent to the RabbitMQ server. Just take the next message and process it.
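A sketch of those steps with the RabbitMQ Java client (names are placeholders). Note that acking before processing trades at-least-once for at-most-once delivery: if the process crashes mid-flight, the message is lost.

```java
import java.io.IOException;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

public class AckFirstConsumer extends DefaultConsumer {

    private final String queueName;

    public AckFirstConsumer(Channel channel, String queueName) {
        super(channel);
        this.queueName = queueName;
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        // Step 1: acknowledge before processing, so the broker is free to deliver the next message.
        getChannel().basicAck(envelope.getDeliveryTag(), false);
        try {
            process(body);                       // Step 3: on success, nothing more to do
        } catch (RuntimeException e) {
            // Step 2: on failure, republish the same message; it goes to the back of the queue.
            getChannel().basicPublish("", queueName, properties, body);
        }
    }

    private void process(byte[] body) {
        // real processing goes here
    }
}
```

It would be registered with something like channel.basicConsume(queueName, false, new AckFirstConsumer(channel, queueName)) so that automatic acking is off.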
I would appreciate your thoughts on this.
I have a node app which subscribes to a RabbitMQ queue. When it receives a message, it checks it for something and then saves it to a database.
However, if the message is missing some information or some other criteria is not yet met, I would like the subscriber to publish the message back onto the RabbitMQ queue.
I understand that logically this is just connecting to the queue and publishing the message, but is it really that simple, or is this bad practice or potentially dangerous?
Thanks for your help.
As I point out in the comment, when you create the connection to the queue, enable manual message acknowledgement (i.e. do not auto-ack). A message will then not be deleted from the queue until an acknowledgement is received.
When the received message meets your requirements, send an ack back to the queue and the message will be deleted from it. Otherwise, send no ack and the message will stay in the queue.
As for the validation you mention in the comment taking 5 minutes: just send the ack in the callback of the validation function.
In your question, you describe two criteria under which a message may not be processed:
if the message is missing some information or
some other criteria is not yet met
The first of these appears to be an issue with the message, and it doesn't seem that it makes much sense to re-queue a message that has a problem. The appropriate action is to log an error and drop the message (or invoke whatever error-handling logic your application contains).
The second of these is rather vague, but for the purposes of this answer, we will assume that the problem is not with the message but with some other component in the system (e.g. perhaps a network connection issue). In this case, the consuming application can send a Nack (negative acknowledgement), which can optionally requeue the message.
Keep in mind that in the second case, it will be necessary to shut down the consumer until the error condition has been resolved; otherwise the message will be redelivered and erroneously processed ad infinitum until the system is back up, wasting resources on an unprocessable message.
Why use a nack instead of simply re-publishing?
This will set the "redelivered" flag on the message so that you know it was delivered once already. There are other options as well for handling bad messages.
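As an illustration, a small sketch with the RabbitMQ Java client that requeues via nack on the first failure and uses the redelivered flag to avoid an endless loop (the fallback behaviour here is just one possibility):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Envelope;

public class NackExample {

    // Requeue on the first failure; on redelivery, reject without requeue (or let it dead-letter).
    static void handleFailure(Channel channel, Envelope envelope) throws Exception {
        if (!envelope.isRedeliver()) {
            // multiple=false, requeue=true: put the message back; the redelivered flag will be set
            channel.basicNack(envelope.getDeliveryTag(), false, true);
        } else {
            channel.basicNack(envelope.getDeliveryTag(), false, false);
        }
    }
}
```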
My application currently uses RabbitMQ to queue and process messages to initiate data streams and to pass the streamed data to a processing area.
Because we only want one client to consume the data stream and only one client to process the streamed data, I am currently using PUSH messages.
The issue I am finding is that if I acknowledge the PUSH message to initiate the data stream and that process fails, the message will not be requeued. If I do not acknowledge the message, none of my other PUSH messages will be received until after I either acknowledge the data stream message or the process dies.
I have looked at REQUEST/REPLY messages; however, I think the same issue may apply there, since I need to requeue automatically should the process/server die.
Is it possible to use non-blocking PUSH messages?
Perhaps of value is the "qos" / prefetch setting for consumers: https://www.rabbitmq.com/consumer-prefetch.html. It's possible to set the value to greater than one, allowing a single consumer to receive more than one message at a time. Would this give you the non-blocking PUSH (read only) behaviour you're looking for?
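For reference, a minimal sketch of setting the prefetch count with the RabbitMQ Java client (the value 10 is arbitrary):

```java
import com.rabbitmq.client.Channel;

public class PrefetchSetup {

    // Allow up to 10 unacknowledged messages in flight for this consumer,
    // so a single pending ack no longer blocks further deliveries.
    static void configure(Channel channel) throws Exception {
        channel.basicQos(10);
    }
}
```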
Here is my channel set-up:
A jdbc message-store backed queue
A bridge connecting the queue to a pub-sub channel
The poller configured on the pub-sub channel is transactional
Now, when an exception is raised in any one of the subscribers, the transaction rolls back and the message is retried forever. The message is then processed again by all the subscribers. If the exception is permanent in even one subscriber, the message never gets processed successfully by any of the other subscribers either.
What is the best exception handling strategy here?
I would prefer exception handling at the subscriber level, i.e. only the failing subscriber retries, while the other subscribers process the message and move on.
How can this be implemented in spring integration?
More details here:
If the poller is made transactional and the message fails processing in at least one of the subscribers, then the message is rolled back to the message store and retried. I also configured a jdbc message store for the errorChannel. Every time the message processing fails, the message gets rolled back to the original message store and the error channel message store has one entry for each retry.
If the poller is made non-transactional and the message fails processing in the first subscriber, then the message is put to the error channel, but the second subscriber never gets the message.
It appears that something is fundamentally wrong. Is it with my configuration?
http://forum.springsource.org/archive/index.php/t-75000.html
The discussion in the above thread explains the upsides and downsides of the framework with respect to the pub-sub implementation.
We chose to go with the below approach:
Pollers will be transactional, meaning either all subscribers process the message successfully or none of them do. The message will be retried with all subscribers until all of them complete successfully.
Error handling is the subscribers' responsibility.
Only system exceptions will be bubbled back to the poller. Business exceptions will be handled by the subscriber, and the message will be put on some error channel manually.
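A hedged sketch of what such a subscriber could look like; the channel names, the BusinessException type and the header are all illustrative, not part of the original configuration:

```java
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.stereotype.Component;

@Component
public class BillingSubscriber {

    private final MessageChannel businessErrorChannel;

    public BillingSubscriber(@Qualifier("businessErrorChannel") MessageChannel businessErrorChannel) {
        this.businessErrorChannel = businessErrorChannel;
    }

    @ServiceActivator(inputChannel = "pubSubChannel")
    public void handle(Message<?> message) {
        try {
            process(message);
        } catch (BusinessException e) {
            // Business failure: swallow it here and route the message to an error channel,
            // so the transaction is not rolled back and the other subscribers are unaffected.
            businessErrorChannel.send(MessageBuilder.fromMessage(message)
                    .setHeader("failureReason", e.getMessage())
                    .build());
        }
        // System exceptions are allowed to propagate and roll back the poller's transaction.
    }

    private void process(Message<?> message) {
        // real work goes here
    }

    // hypothetical application-specific exception type
    static class BusinessException extends RuntimeException {
        BusinessException(String msg) { super(msg); }
    }
}
```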