RabbitMQ multiple consumers across multiple queues - messages delayed from being processed - multithreading

We have recently experienced unexpected behaviour with our application that is powered by RabbitMQ.
RabbitMQ version is 3.6.12 and we are using .NET Client 5.0.1
The application subscribes to two queues, one for commands, and another for events - we also use manual acknowledgements.
Our application is configured to have 7 consumers. Each has its own channel (IModel) and its own EventingBasicConsumer.
We end up processing messages when EventingBasicConsumer.Received is fired.
Our application must process messages as close as possible to when they are routed onto the queues and to date we have not had issues.
However, we recently saw that when one of our messages takes a long time to process, it delays the processing of another message even though there are many consumers (6) available and idle.
Note that we have observed this issue does not happen when an application subscribes to only a single queue; it becomes an issue when multiple queues are involved.
This is best illustrated using the following example:
We have a simple consuming application that subscribes to two queues, one for commands and one for events. This application has 7 consumers, each with their own channel and EventingBasicConsumer. We start a simple publishing application that publishes 20 messages, one second apart. Every message is an event, and so is published to the event queue, except for the 5th and 10th messages, which are commands and are sent to the command queue. Note that every event is processed without delay, whereas commands take 30 seconds.
The following describes what we observed in relation to how messages across the two queues were assigned to channels:
Once Message5 completed after 30 seconds on C1, Message9 was immediately assigned to C1 and processed without delay.
Once Message10 completed after 30 seconds on C2, Message11 was immediately assigned to C2 and processed without delay.
Hence, to us it looks like the assignment of channels is done independently per queue - meaning you can have delayed execution if some messages take a long time to process.
Is it possible that when multiple consumers are subscribing to multiple queues, RabbitMQ can assign a message to be handled by a consumer that is busy even if there are consumers that are currently idle?
Is there any documentation that explains the algorithm RabbitMQ uses to select, from a collection of consumers, which consumer's EventingBasicConsumer.Received event fires?

We have fixed this issue.
In the RMQ documentation (https://www.rabbitmq.com/api-guide.html#consuming) we came across the following:
"Each Channel has its own dispatch thread. For the most common use case of one Consumer per Channel, this means Consumers do not hold up other Consumers. If you have multiple Consumers per Channel be aware that a long-running Consumer may hold up dispatch of callbacks to other Consumers on that Channel.”
In our code, we had 2 consumers per channel, meaning consumers could hold up other consumers.
We changed to have one consumer per channel and that fixed the issue.
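For reference, a minimal sketch of the corrected wiring with the RabbitMQ .NET client; the queue names, the 1/6 consumer split, and ProcessMessage are illustrative placeholders, not our actual code:

```csharp
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
IConnection connection = factory.CreateConnection();

void StartConsumer(string queueName)
{
    // One channel per consumer: each channel has its own dispatch thread,
    // so a 30-second command handler can no longer block event dispatch.
    IModel channel = connection.CreateModel();
    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (sender, ea) =>
    {
        ProcessMessage(ea.Body);                            // placeholder for the real handler
        channel.BasicAck(ea.DeliveryTag, multiple: false);  // manual acknowledgement
    };
    channel.BasicConsume(queue: queueName, autoAck: false, consumer: consumer);
}

void ProcessMessage(byte[] body) { /* application-specific processing */ }

// 7 consumers in total across the two queues, each on its own channel.
StartConsumer("commands");
for (int i = 0; i < 6; i++) StartConsumer("events");
```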

Related

How to throttle my cron worker from pushing messages to RabbitMQ?

Context:
We have a microservice which consumes (subscribes to) messages from 50+ RabbitMQ queues.
Messages are produced for these queues in two places:
When the application encounters short, immediately executable business logic (like sending emails or notifying another service), it sends the message directly to the exchange, which in turn routes it to the queue.
When we encounter long-running or delayed-execution business logic, we write a row to a messages table holding messages that must be executed after some time.
A cron worker runs every 10 minutes, scans the messages table, and pushes the due messages to RabbitMQ.
Scenario:
Let's say the messages table has 10,000 messages that will be queued in the next cron run.
9:00 AM - The cron worker runs and queues 10,000 messages to the RabbitMQ queue.
Subscribers listening on the queue start consuming the messages, but due to some issue in the system or a slow third-party response, each message takes 1 minute to complete.
9:10 AM - The cron worker runs again, sees that 9,000+ messages are still incomplete and past due, and pushes 9,000+ duplicate messages to the queue.
Note: The subscribers that consume the messages are idempotent, so duplicate processing itself is not an issue.
Design idea I had in mind (but not the best logic):
I can have 4 statuses (RequiresQueuing, Queued, Completed, Failed).
Whenever a message is inserted, I set the status to RequiresQueuing.
When the cron worker picks up a message and pushes it to the queue successfully, I set it to Queued.
When a subscriber completes, it marks the message status as Completed/Failed.
There is an issue with the above logic: let's say RabbitMQ somehow goes down, or in some cases we have to purge the queue for maintenance.
Now the messages marked as Queued are in the wrong state, because they have to be identified again and their status changed manually.
Another Example
Let's say I have a RabbitMQ queue named events.
This events queue has 5 subscribers; each subscriber gets 1 message from the queue and posts the event via REST API to another microservice (event-aggregator). Each API call usually takes 50ms.
Use Case:
Due to high load, the number of events produced becomes 3x.
The microservice (event-aggregator) that accepts the events has also become slow in processing; its response time increased from 50ms to 1 minute.
The cron worker follows the design above and keeps queuing messages every run. The queue is becoming too large, but I also cannot increase the number of subscribers because the dependent microservice (event-aggregator) is lagging.
So if I keep sending messages to the events queue, it just bloats the queue.
https://www.rabbitmq.com/memory.html - While reading this page, I found out that RabbitMQ won't even accept connections once it reaches the high memory watermark (the default fraction is 40%). Of course this can be changed, but that requires manual intervention.
So a growing queue length affects RabbitMQ's memory, which is why I thought of throttling at the producer level.
Questions
How can I throttle my cron worker to skip a particular run, or somehow inspect the queue and detect that it is already heavily loaded so it doesn't push the messages?
How can I handle the use cases above? Is there a design that solves my problem? Has anyone faced the same issue?
Thanks in advance.
Answer
See the comments on the accepted answer for throttling using the queue's message count (queueCount).
You can combine QoS - (Quality of service) and Manual ACK to get around this problem.
Your exact scenario is documented in https://www.rabbitmq.com/tutorials/tutorial-two-python.html. That example is in Python; the tutorials for the other clients cover the same pattern.
Let's say you have 1 publisher and 5 worker scripts, all reading from the same queue, and each worker script takes 1 minute to process a message. You can set QoS at the channel level. If you set it to 1, each worker script will be allocated only 1 message at a time, so we are processing 5 messages at a time. No new messages will be delivered until one of the 5 worker scripts sends a MANUAL ACK.
If you want to increase the throughput of message processing, you can increase the worker node count.
The idea of updating table rows based on message status is not a good option: avoiding DB polling is one of the main reasons systems use queues in the first place, and it would cause a scaling issue. At some point you have to update the tables, and you would hit a bottleneck because of locking and isolation levels.
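As a sketch of the queue-count throttle mentioned in the comments: the RabbitMQ .NET client's QueueDeclarePassive reports a queue's current message count without modifying the queue. The threshold, queue name, and host below are made-up values for illustration:

```csharp
using RabbitMQ.Client;

const uint MaxBacklog = 1000; // illustrative threshold

using var connection = new ConnectionFactory { HostName = "localhost" }.CreateConnection();
using var channel = connection.CreateModel();

// Passive declare: throws if the queue doesn't exist, otherwise returns stats.
QueueDeclareOk stats = channel.QueueDeclarePassive("events");

if (stats.MessageCount >= MaxBacklog)
{
    // Queue is already heavily loaded: skip this run and leave the rows
    // in the messages table as RequiresQueuing for the next cron run.
    return;
}

// ...otherwise scan the messages table and publish the due messages...
```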

Azure Service Bus Queue Performance

I am using an Azure Service Bus queue for one of my requirements. The requirement is simple: an Azure Function acts as an API and creates multiple jobs in the queue. The Function is scalable, with new instances created on demand, so multiple Function instances execute in parallel and jobs are created in the queue in parallel, probably one job every 500ms. The jobs created by the Function are processed by a Windows service, so the sender is the Azure Function and the receiver is the Windows service. The Windows service is a single instance: a queue listener that listens to this queue and executes jobs in parallel. So the number of senders may be high, but there is one receiver instance, and the number of jobs running in parallel must be limited (to 4, since each job takes a long time and is CPU intensive). Right now I am using an Azure Service Bus queue with the following configuration, and my doubt is which configuration produces the best performance for this particular requirement.
The deletion of the job from the queue will not be an issue for me. So, can I use ReceiveAndDelete instead of Peek-Lock?
Also, right now, the order in which the listener receives items does not match the order in which they were created; I want to maintain the creation order. My requirement is maximum performance. The job done by the Windows service is a CPU-intensive task, which is why I have limited concurrency to 4, since the system has 4 cores.
Max delivery count: 4, message lock duration: 5 min, MaxConcurrentCalls: 4 (in the listener). I am new to Service Bus, so I need suggestions on this.
One more doubt: let's say the listener got 4 jobs in parallel and started executing them, and one job completes. Will the listener pick up the next item immediately, or wait for all 4 jobs to be completed (MaxConcurrentCalls: 4)?
The deletion of the job from the queue will not be an issue for me. So, can I use ReceiveAndDelete instead of Peek-Lock?
Receiving messages in PeekLock mode is less performant than ReceiveAndDelete: with ReceiveAndDelete you save the round trips to the broker that would be needed to complete each message.
Max delivery count: 4, message lock duration: 5 min, MaxConcurrentCalls: 4 (in the listener). I am new to Service Bus, so I need suggestions on this.
MaxDeliveryCount is how many times a message can be attempted before it's dead-lettered. Yours happens to equal the number of cores, but it doesn't need to; that could be just a coincidence.
MessageLockDuration will only matter if you use PeekLock receive mode. For ReceiveAndDelete it won't matter.
As for concurrency, even though your work is CPU bound, I'd benchmark whether higher concurrency is possible.
An additional parameter on the message receiver to look into would be PrefetchCount. It can improve the overall performance by making fewer roundtrips to the broker.
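A sketch of how those settings fit together, using the newer Azure.Messaging.ServiceBus processor (the connection string, queue name, and job body are placeholders; the older Microsoft.Azure.ServiceBus QueueClient exposes equivalent knobs):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");

var options = new ServiceBusProcessorOptions
{
    ReceiveMode = ServiceBusReceiveMode.ReceiveAndDelete, // skip the completion round trip
    MaxConcurrentCalls = 4, // at most 4 jobs in flight, matching the 4 cores
    PrefetchCount = 8       // fetch ahead to reduce broker round trips
};

ServiceBusProcessor processor = client.CreateProcessor("jobs", options);

processor.ProcessMessageAsync += async args =>
{
    await RunCpuIntensiveJobAsync(args.Message); // placeholder for the real work
};
processor.ProcessErrorAsync += args =>
{
    Console.Error.WriteLine(args.Exception);
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();

static Task RunCpuIntensiveJobAsync(ServiceBusReceivedMessage message) => Task.CompletedTask;
```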
One more doubt: let's say the listener got 4 jobs in parallel and started executing them, and one job completes. Will the listener pick up the next item immediately, or wait for all 4 jobs to be completed (MaxConcurrentCalls: 4)?
The listener will immediately start processing the 5th message as your concurrency is set to 4 and one message processing has been completed.
Also, right now, the order in which the listener receives items does not match the order in which they were created; I want to maintain the creation order.
To process messages in the order they were sent in you will need to send and receive messages using sessions.
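A hedged sketch of session-based ordering, assuming the queue was created with sessions enabled (names are illustrative):

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");

// Sender: stamp related messages with the same SessionId.
ServiceBusSender sender = client.CreateSender("jobs");
await sender.SendMessageAsync(new ServiceBusMessage("job-payload")
{
    SessionId = "job-stream-1" // messages within one session are delivered in order
});

// Receiver: a session processor locks a session and delivers its messages in order.
ServiceBusSessionProcessor sessions = client.CreateSessionProcessor("jobs");
sessions.ProcessMessageAsync += args => Task.CompletedTask; // handle in order here
sessions.ProcessErrorAsync += args => Task.CompletedTask;
await sessions.StartProcessingAsync();
```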
My requirement is maximum performance. The job done by the Windows service is a CPU-intensive task, which is why I have limited concurrency to 4, since the system has 4 cores.
There are multiple things to take into consideration. The location of your Windows Service would impact latency and message throughput, scaling out could help, etc.

Amazon SQS better way of handling listeners

I have an SQS queue which has a lot of messages (typically in the thousands). Presently I have multiple listeners (threads created from the same source), and each listener receives messages from the queue. As soon as a listener receives a message from the queue, it deletes the message; the message is processed only after it has been deleted from the queue. I am using a visibility timeout of 30 seconds.
I am not using any locks or similar to handle duplicates, since I delete each message from the queue immediately after receiving it. I haven't seen a duplicate so far, but I am worried it might happen.
Now, the question is: which is the better approach, having multiple listeners this way, or listening to the queue in a single thread and spinning up new threads to process each message received?
Firstly, it is worth understanding the concept of message invisibility timeout.
When a message is retrieved from an Amazon SQS queue (e.g. by your thread), the message is marked as invisible in Amazon SQS. The best practice is for your thread to process the message and delete it only after processing has completed. That way, if the thread fails, the message automatically becomes visible on the queue again and another thread can process it.
With your current application design, if a thread fails then the message is lost and will not be retried. You should consider changing your code to delete the message only after it has been processed.
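A minimal sketch of that "delete only after processing" pattern with the AWS SDK for .NET (the queue URL and Process are placeholders):

```csharp
using Amazon.SQS;
using Amazon.SQS.Model;

var sqs = new AmazonSQSClient();
const string queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder

var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
{
    QueueUrl = queueUrl,
    MaxNumberOfMessages = 10,
    WaitTimeSeconds = 20 // long polling
});

foreach (Message message in response.Messages)
{
    // If Process throws, the message is NOT deleted; it becomes visible
    // again after the 30-second visibility timeout and is retried.
    Process(message.Body);
    await sqs.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
}

static void Process(string body) { /* application-specific work */ }
```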
Using multiple threads to process messages is recommended, because it will allow higher message throughput by processing messages in parallel. It is also a simpler design, and simple is always best. Your alternate idea of having one process retrieve messages and then firing off threads to process the message is more complex and does not provide any benefits.
Amazon SQS queues can occasionally return the same message more than once. It is rare, but it can happen. The multiple-thread design will probably encounter this more often than the single-thread design, because multiple threads might simultaneously retrieve the same message. However, it could still happen in the single-thread model, too.
If processing the same message twice is a concern, then consider using a FIFO queue (not currently available in every AWS Region). This will guarantee that every message is received only once. Alternatively, your code would need to check whether a particular message has already been processed (eg by checking in a database).
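For illustration, sending to a FIFO queue looks like this (sketch; the .fifo suffix is mandatory, and the URL and ids are made up):

```csharp
using Amazon.SQS;
using Amazon.SQS.Model;

var sqs = new AmazonSQSClient();
await sqs.SendMessageAsync(new SendMessageRequest
{
    QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo", // placeholder
    MessageBody = "payload",
    MessageGroupId = "orders",          // ordering scope: FIFO within this group
    MessageDeduplicationId = "order-42" // duplicates of this id are dropped within
                                        // the 5-minute deduplication window
});
```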
The multiple-thread design will also allow you to horizontally scale by having multiple system (even across multiple Availability Zones) process messages, whereas your single-thread design has a single point of failure and is less scalable.

Failure handling for Queue Centric work pattern

I am planning to use a queue-centric design, as described here, for one of my applications. It essentially consists of an Azure queue onto which work requests are queued from the UI. A worker reads from the queue, processes the request, and deletes the message from the queue.
The 'work' done by the worker is within a transaction, so if the worker fails before completing, upon restart it picks up the same message again (as it has not been deleted from the queue) and tries to perform the operation again (up to a maximum number of retries).
To scale I could use two methods:
Multiple workers, each with a separate queue. So if I have five workers W1 to W5, I have 5 queues Q1 to Q5; each worker knows which queue to read from, and failure handling is similar to the one-queue/one-worker case.
One queue and multiple workers. Here failure/retry handling would be more involved, and might end up using the 'invisibility' time on the queue message to make sure no two workers pick up the same job. The invisibility time would have to be calculated so that it is long enough for the job to complete, yet not so long that retries happen after a long delay.
Would like to know if the 1st approach is the correct way to go? What are robust ways of handling failures in the second approach above?
You would be better off taking approach 2 - a single queue, but with multiple workers.
This is better because:
The process that delivers messages to the queue only needs to know about a single queue endpoint. This reduces complexity at this end;
Scaling the number of workers that are pulling from the queue is now decoupled from any code / configuration changes - you can scale up and down much more easily (and at runtime)
If you are worried about the visibility timeout, you can initially choose a default timespan and then, if processing looks like it's taking too long, have the worker periodically call UpdateMessage() to extend the message's invisibility.
Finally, if your worker times out and fails to complete processing of the message, it will be picked up again by some other worker for another attempt. You can also use the DequeueCount property of the message to manage the number of retries.
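A sketch of that single-queue worker loop with Azure.Storage.Queues (the connection string, queue name, timings, and the 5-retry limit are illustrative):

```csharp
using System;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

var queue = new QueueClient("<connection-string>", "work-items");

QueueMessage[] batch = await queue.ReceiveMessagesAsync(
    maxMessages: 1, visibilityTimeout: TimeSpan.FromMinutes(2));

foreach (QueueMessage message in batch)
{
    if (message.DequeueCount > 5)
    {
        // Poison message: remove it (or park it elsewhere) instead of retrying forever.
        await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
        continue;
    }

    // If the work runs long, extend the invisibility window; the update
    // returns a fresh pop receipt that must be used from then on.
    UpdateReceipt receipt = await queue.UpdateMessageAsync(
        message.MessageId, message.PopReceipt,
        visibilityTimeout: TimeSpan.FromMinutes(2));

    DoWork(message.Body); // placeholder for the transactional work

    await queue.DeleteMessageAsync(message.MessageId, receipt.PopReceipt);
}

static void DoWork(BinaryData body) { /* application-specific work */ }
```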
Multiple workers, each with a separate queue. So if I have five workers W1 to W5, I have 5 queues Q1 to Q5; each worker knows which queue to read from, and failure handling is similar to the one-queue/one-worker case.
With this approach I see following issues:
This approach makes your architecture tightly coupled (thus defeating the whole purpose of using queues). Because each worker role listens to a dedicated queue, the web application responsible for pushing messages into the queues always needs to know how many workers are running. Any time you scale your worker roles up or down, you somehow need to tell the web application so that it can start pushing messages to the appropriate queues.
If a worker role instance is taken down for whatever reason, there's a possibility that some messages may never be processed, as the other worker role instances only work on their own dedicated queues.
There may be under-utilization or over-utilization of worker role instances, depending on how the web application distributes messages across the queues. For optimal utilization, the web application would have to know about worker role utilization so it can decide which queue to send each message to. This is certainly not a desirable thing for a web application to do.
I believe #2 is the correct way to go. @Brendan Green has covered your concerns about #2 excellently in his answer.

RabbitMQ - Single concurrent worker per routing key

I'm quite new to RabbitMQ and I'm trying to see if I can achieve what I need with it.
I am looking for the Worker Queues pattern but with one caveat. I want to have only a single worker running concurrently per routing key.
An example for clarification:
If I send the following messages with routing keys, in order: a, a, b, c, I want to have only 3 workers running concurrently. When the first a message is received, a worker picks it up and handles it.
When the next a message is received while the previous a message is still being handled (not acknowledged), the new a message should wait in the queue. When the b and c messages are received, they each get a worker handling them. When the first a message is acknowledged, any worker can pick up the next a message.
Would that pattern be possible using RabbitMQ in a natural way (without writing any application code on my side to handle the locking and such)?
Edit:
Another clarification: all workers can and should handle all messages, and I don't want a queue per worker, as I want to share the load between them; the publisher doesn't know which worker should process a message. But I do want to make sure that no 2 workers are working on messages sharing the same key at the same time.
For example, if I have a publisher publishing messages with a userId field, I want to make sure no 2 workers are handling messages with the same userId at the same time.
Edit 2
Expanding on the userId example. Let's say I have a single publisher and 3 workers. The publisher publishes messages like { userId: 1, text: 'Hello' }, with varying userIds. My 3 workers all do the same thing with these messages, so any of them can handle an incoming message. But what I'm trying to achieve is to have only a single worker processing messages from a certain user at any one time. If a worker has received a message with userId 1 and is still processing it, and another message with userId 1 arrives, I want to make sure no other worker picks that message up. Messages coming in with different userIds should still be processed by other available workers.
userIds are not known beforehand, and the publisher doesn't know how many workers there are or anything specific about them; it just wants to schedule the messages for processing.
What you're asking for is not possible with routing keys, but it is built into queues with a few settings.
If you define "queue_a" for a messages, "queue_b" for b messages, etc., you can then have as many consumers connect to each queue as you want.
RabbitMQ will only deliver a given message to a single consumer of a given queue.
The way it works with multiple consumers on a single queue is basic round-robin dispatch of the messages. That is, the first message will be delivered to one of the consumers, and the next message (assuming the first consumer is still busy) will be delivered to the next consumer.
So, that should satisfy the need to deliver the message to any given consumer of the queue.
To ensure your messages have an equal chance of reaching any of the consumers (and are not all delivered to the same consumer all the time), there are a few other settings you should put in place.
First, make sure to set the consumer's no-ack setting to false (sometimes called "auto ack"). This forces you to ack the message from your code.
Second, set the "consumer prefetch" limit of each consumer to 1.
With this combination of settings, a single consumer will retrieve a single message and begin working on it. While that consumer is working, any message waiting in the queue will be delivered to other consumers if any are available. If there are none available, the message will wait in the queue until a consumer is available.
With this, you should be able to achieve the behavior you are wanting, on a given queue.
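A sketch of that combination with the RabbitMQ .NET client (queue names and Handle are illustrative). Every worker runs the same code; with prefetch 1 and manual acks, RabbitMQ hands each queue's next message to whichever consumer has no unacknowledged message:

```csharp
using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var connection = new ConnectionFactory { HostName = "localhost" }.CreateConnection();

foreach (var queueName in new[] { "queue_a", "queue_b", "queue_c" })
{
    IModel channel = connection.CreateModel(); // one channel per consumer
    channel.QueueDeclare(queueName, durable: true, exclusive: false, autoDelete: false);
    channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false); // 1 unacked message at a time
    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (sender, ea) =>
    {
        Handle(ea.Body); // placeholder for the worker's processing
        channel.BasicAck(ea.DeliveryTag, multiple: false); // manual ack releases the next message
    };
    channel.BasicConsume(queue: queueName, autoAck: false, consumer: consumer);
}

static void Handle(ReadOnlyMemory<byte> body) { /* application-specific work */ }
```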
...
Keep in mind this only applies to queues, though. Routing keys cannot be managed this way: every matched binding on an exchange causes a copy of the message to be sent to the destination queue.
