We are using the ActiveMQ scheduled message feature to trigger events and process them on time. If a cron is scheduled at 00:00:00 every day, the event ends up with a brokerInTime of something like 00:01:00, which is not the exact time. Depending on the number of crons, the number of messages published, and the underlying box, I understand there will be some delay in brokerInTime/brokerOutTime.
We have a requirement where, while processing these messages, we need the exact scheduled time (Feb 21, 00:00:00 in the above example) at which the message was supposed to be triggered, rather than the time it was actually received by the broker or dispatched.
Does ActiveMQ/JMS have any property that gives us the scheduled time at which the cron is supposed to be triggered?
Thanks.
There is no such property that the broker can apply; the OpenWire protocol only defines the BrokerInTime value, which reflects the time the message hits the queue. The scheduler makes a best effort to process scheduled messages but should not be treated as a real-time event source. JMS 1.1 has no concept of scheduled messages at all, and the JMS 2.0 API doesn't define a specific field for this low-level bit of detail either.
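If you control the producer, one workaround (not something ActiveMQ does for you) is to stamp the intended fire time onto the message yourself as a custom property when you schedule it, and read that property in the consumer instead of brokerInTime. A minimal sketch over STOMP using stompit, assuming the broker has schedulerSupport="true" enabled; the x-intended-fire-time property name is just an example:

```typescript
import stompit from "stompit";

// The time this message is *supposed* to fire: the next midnight.
const fireTime = new Date();
fireTime.setHours(24, 0, 0, 0);
const delayMs = fireTime.getTime() - Date.now();

stompit.connect({ host: "localhost", port: 61613 }, (error, client) => {
  if (error) throw error;

  const frame = client.send({
    destination: "/queue/daily.jobs",
    AMQ_SCHEDULED_DELAY: String(delayMs),            // broker-side scheduling
    "x-intended-fire-time": fireTime.toISOString(),  // our own property carrying the exact scheduled time
  });
  frame.write(JSON.stringify({ job: "daily-report" }));
  frame.end();
  client.disconnect();
});
```

The consumer then treats the x-intended-fire-time property as the schedule time and ignores the (always slightly late) brokerInTime/brokerOutTime.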
Related
Context:
We have a microservice which consumes (subscribes to) messages from 50+ RabbitMQ queues.
Producing messages for these queues happens in two places:
When the application encounters short-delay business logic (like sending emails or notifying another service), it sends the message directly to the exchange (which in turn routes it to the queue).
When we encounter long/delayed-execution business logic, we write to a messages table which holds the messages that have to be executed after some time.
A cron worker runs every 10 minutes, scans the messages table, and pushes the messages to RabbitMQ.
Scenario:
Let's say the messages table has 10,000 messages which will be queued in the next cron run:
9:00 AM - The cron worker runs and queues 10,000 messages to the RabbitMQ queue.
We do have subscribers listening to the queue that start consuming the messages, but due to some issue in the system or a slow third-party response, each message takes 1 minute to complete.
9:10 AM - The cron worker runs again 10 minutes later, sees that 9,000+ messages are still not completed and their time has passed, so it pushes 9,000+ duplicate messages to the queue.
Note: The subscribers which consume the messages are idempotent, so duplicate processing is not an issue.
Design idea I had in mind, but not the best logic:
I can have 4 statuses (RequiresQueuing, Queued, Completed, Failed).
Whenever a message is inserted, I can set the status to RequiresQueuing.
When the cron worker picks a message and pushes it successfully to the queue, I can set it to Queued.
When a subscriber completes, it marks the status as Completed / Failed.
There is an issue with the above logic: let's say RabbitMQ somehow goes down, or in some cases we have to purge the queue for maintenance.
Now the messages marked as Queued are in the wrong state, because they have to be identified again and their status changed manually.
Another Example
Let's say I have a RabbitMQ queue named events.
This events queue has 5 subscribers; each subscriber gets 1 message from the queue and posts the event via a REST API to another microservice (event-aggregator). Each API call usually takes 50ms.
Use Case:
Due to high load, the number of events produced becomes 3x.
The microservice (event-aggregator) which accepts the events also became slow; its response time increased from 50ms to 1 minute.
The cron worker follows the design mentioned above and queues messages every minute. Now the queue is becoming too large, but I also cannot increase the number of subscribers because the dependent microservice (event-aggregator) is lagging.
Now the question is: if I keep sending messages to the events queue, it just bloats the queue.
https://www.rabbitmq.com/memory.html - While reading this page, I found out that RabbitMQ blocks connections that publish once memory use reaches the high watermark fraction (default is 40% of RAM). Of course this can be changed, but that requires manual intervention.
So if the queue length grows, it affects RabbitMQ's memory; that is the reason I thought of throttling at the producer level.
Questions
How can I throttle my cron worker to skip a particular run, or somehow inspect the queue and see that it is already heavily loaded so it doesn't push the messages?
How can I handle the use cases described above? Is there a design which solves my problem? Has anyone faced the same issue?
Thanks in advance.
Answer
Check the accepted answer's comments for throttling using the queue's message count (queueCount).
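One way to implement that queue-count throttling is to have the cron worker check the queue depth before publishing and skip the run if the backlog is still large. A sketch with amqplib; the queue name, URL, and the 1,000-message threshold are placeholders to tune for your workload:

```typescript
import amqp from "amqplib";

const QUEUE = "events";
const MAX_READY_MESSAGES = 1000; // assumed threshold, not a recommendation

async function cronRun(pendingMessages: string[]): Promise<void> {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();
  try {
    // checkQueue returns { queue, messageCount, consumerCount } for an existing queue;
    // messageCount is the number of messages ready for delivery.
    const { messageCount } = await channel.checkQueue(QUEUE);
    if (messageCount > MAX_READY_MESSAGES) {
      console.log(`Backlog of ${messageCount} messages, skipping this run`);
      return;
    }
    for (const body of pendingMessages) {
      channel.sendToQueue(QUEUE, Buffer.from(body), { persistent: true });
    }
  } finally {
    await channel.close();
    await connection.close();
  }
}
```

Skipped messages simply stay in the messages table (still RequiresQueuing) and are picked up by a later run once the backlog has drained.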
You can combine QoS (quality of service) prefetch and manual ACKs to get around this problem.
Your exact scenario is documented at https://www.rabbitmq.com/tutorials/tutorial-two-python.html. That example is for Python; you can refer to the other language examples as well.
Let's say you have 1 publisher and 5 worker scripts, all reading from the same queue, and each worker script takes 1 minute to process a message. You can set QoS at the channel level. If you set it to 1, each worker script will be allocated only 1 message at a time, so we are processing 5 messages at a time. No new message will be delivered to a worker until one of the 5 worker scripts sends a MANUAL ACK.
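A minimal worker sketch of that setup with amqplib (the queue name and handleMessage are placeholders): prefetch(1) caps each consumer at one unacknowledged message, and the explicit ack is what releases the next delivery.

```typescript
import amqp from "amqplib";

async function startWorker(): Promise<void> {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();

  await channel.assertQueue("task_queue", { durable: true });
  // QoS: at most 1 unacknowledged message per consumer on this channel.
  await channel.prefetch(1);

  await channel.consume(
    "task_queue",
    async (msg) => {
      if (msg === null) return; // consumer was cancelled by the server
      try {
        await handleMessage(msg.content.toString()); // the ~1 minute of work in the scenario above
        channel.ack(msg);                            // manual ACK: only now is the next message delivered
      } catch {
        channel.nack(msg, false, true);              // requeue on failure (safe here because consumers are idempotent)
      }
    },
    { noAck: false } // manual acknowledgements
  );
}

async function handleMessage(body: string): Promise<void> {
  // placeholder for the real business logic (e.g. the slow third-party call)
}
```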
If you want to increase the throughput of message processing, you can increase the number of worker nodes.
Updating tables based on message status is not a good option; avoiding DB polling is one of the main reasons a system uses queues in the first place, and it would cause a scaling issue. At some point you have to update the tables, and you would bottleneck on locking and isolation levels.
I schedule the publisher every 3 hours and the consumer runs indefinitely. Is there a way to find out the total number of messages consumed once the queue is empty after each scheduled run?
RabbitMQ doesn't track this kind of information.
There are different ways to do that:
Add a counter on the consumer side (see the sketch after this list).
Use a monitoring system, see https://www.rabbitmq.com/prometheus.html
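A rough sketch of the consumer-side counter with amqplib (queue name and log line are placeholders): count every acknowledged message and, when checkQueue reports no ready messages left, log the total for that run and reset it. With prefetch(1) and a single consumer this is accurate; with several consumers in flight it is only approximate.

```typescript
import amqp from "amqplib";

async function startCountingConsumer(queueName: string): Promise<void> {
  const connection = await amqp.connect("amqp://localhost");
  const channel = await connection.createChannel();
  await channel.prefetch(1);

  let consumedThisRun = 0;

  await channel.consume(queueName, async (msg) => {
    if (msg === null) return;
    // ...process the message...
    channel.ack(msg);
    consumedThisRun++;

    // When nothing is left ready in the queue, treat this scheduled run as finished.
    const { messageCount } = await channel.checkQueue(queueName);
    if (messageCount === 0) {
      console.log(`Run finished: consumed ${consumedThisRun} messages from ${queueName}`);
      consumedThisRun = 0;
    }
  });
}
```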
I'm having a little trouble understanding the difference between a message that has a scheduled message time ('scheduledEnqueueTime') and the time to live (default 14 days).
What's the difference between them?
My understanding is that the longest I can put something on the queue before it wakes up and gets dequeued is 14 days (the default). Is this incorrect?
FYI - in my app I need to place messages on the queue that wake up, in some cases, up to 60 days from the current day. I see I can move the Service Bus to the Standard pricing tier, which increases the maximum time to live. Is this what I need to do?
Time to live is the duration after which Service Bus will discard the message if nobody has processed it.
With the scheduled enqueue time you can hide the message so nobody can process it until the time you want. This is independent of the time to live.
Scheduled messages do not materialize in the queue until the defined enqueue time.
Side note: you can also "defer" messages, but then you have to retrieve them explicitly from the queue (by sequence number). Scheduling is the better fit for your case.
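For the 60-day case, scheduling with the @azure/service-bus SDK looks roughly like this (a sketch; the connection string, queue name, and payload are placeholders):

```typescript
import { ServiceBusClient } from "@azure/service-bus";

async function scheduleReminder(): Promise<void> {
  const client = new ServiceBusClient("<service-bus-connection-string>");
  const sender = client.createSender("reminders");

  // The message stays hidden and only materializes in the queue at this time.
  const enqueueAt = new Date(Date.now() + 60 * 24 * 60 * 60 * 1000); // 60 days from now

  const [sequenceNumber] = await sender.scheduleMessages(
    {
      body: { reminderFor: "some-entity-id" },
      timeToLive: 24 * 60 * 60 * 1000, // TTL in ms, set independently of the schedule
    },
    enqueueAt
  );

  // Keep the sequence number if you may need sender.cancelScheduledMessages(sequenceNumber) later.
  console.log(`Scheduled message ${sequenceNumber} for ${enqueueAt.toISOString()}`);

  await sender.close();
  await client.close();
}
```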
I am using an Azure Service Bus queue for one of my requirements. The requirement is simple: an Azure Function acts as an API and creates multiple jobs in the queue. The Function is scalable and creates new instances on demand, and the jobs it creates are processed by a Windows service. So the sender is the Azure Function and the receiver is the Windows service. Since the Azure Function is scalable, many function instances execute in parallel, so jobs are created in the queue in parallel, probably one job every 500 ms. The Windows service is a single instance; it is a queue listener that listens to this queue and processes messages in parallel. So there may be many senders but only one receiver instance, and the number of jobs running in parallel must be limited (to 4, since each job takes a lot of time and CPU). Right now I am using an Azure Service Bus queue with the configuration below; my doubt is which configuration produces the best performance for this particular requirement.
Deleting the job from the queue will not be an issue for me. So, can I use Delete instead of Peek-Lock?
Also, right now, the items received by the listener are not in order. I want to maintain the order in which they were created. My requirement is maximum performance. The job done by the Windows service is a CPU-intensive task; that's why I have limited it to 4, since the system has 4 cores.
Max delivery count: 4, message lock duration: 5 minutes, MaxConcurrentCalls: 4 (in the listener). I am new to Service Bus, so I need a suggestion on this.
One more doubt: let's say the listener got 4 jobs in parallel and started executing them, and one job completed its execution. Will the listener pick the next item immediately, or wait for all 4 jobs to be completed (MaxConcurrentCalls: 4)?
Deleting the job from the queue will not be an issue for me. So, can I use Delete instead of Peek-Lock?
Receiving messages in PeekLock receive mode is less performant than ReceiveAndDelete. With ReceiveAndDelete you save the round trips to the broker needed to complete messages.
Max delivery count: 4, message lock duration: 5 minutes, MaxConcurrentCalls: 4 (in the listener). I am new to Service Bus, so I need a suggestion on this.
MaxDeliveryCount is how many times delivery of a message can be attempted before it is dead-lettered. It appears to be set equal to the number of cores, but it doesn't have to be; that could just be a coincidence.
MessageLockDuration will only matter if you use PeekLock receive mode. For ReceiveAndDelete it won't matter.
As for concurrency, even though your work is CPU-bound, I'd benchmark whether higher concurrency is possible.
An additional parameter on the message receiver to look into would be PrefetchCount. It can improve the overall performance by making fewer roundtrips to the broker.
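Putting the receive mode and concurrency together, a sketch with the @azure/service-bus SDK (the connection string, queue name, and runCpuIntensiveJob are placeholders; maxConcurrentCalls of 4 mirrors the question, not a recommendation):

```typescript
import { ServiceBusClient } from "@azure/service-bus";

const client = new ServiceBusClient("<service-bus-connection-string>");

// ReceiveAndDelete removes the message on delivery, so there is no Complete() round trip,
// but a crash mid-processing loses that message.
const receiver = client.createReceiver("jobs", { receiveMode: "receiveAndDelete" });

receiver.subscribe(
  {
    async processMessage(message) {
      await runCpuIntensiveJob(message.body); // placeholder for the Windows-service work
    },
    async processError(args) {
      console.error("Receive error:", args.error);
    },
  },
  {
    // Up to 4 messages are handled concurrently; as soon as one finishes,
    // the next available message is picked up (no waiting for the whole batch of 4).
    maxConcurrentCalls: 4,
  }
);

async function runCpuIntensiveJob(body: unknown): Promise<void> {
  // placeholder
}
```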
One more doubt: let's say the listener got 4 jobs in parallel and started executing them, and one job completed its execution. Will the listener pick the next item immediately, or wait for all 4 jobs to be completed (MaxConcurrentCalls: 4)?
The listener will immediately start processing the 5th message, since your concurrency is set to 4 and one message has completed processing.
Also, right now, the items received by the listener are not in order. I want to maintain the order in which they were created.
To process messages in the order they were sent, you will need to send and receive messages using sessions.
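A sketch of what that looks like (the queue must be created with sessions enabled; the order-123 session id is a made-up example): the sender stamps a sessionId, and a session receiver drains that session's messages in the order they were sent.

```typescript
import { ServiceBusClient } from "@azure/service-bus";

const client = new ServiceBusClient("<service-bus-connection-string>");

// Sender: every message that must stay in order shares the same sessionId.
async function sendInOrder(): Promise<void> {
  const sender = client.createSender("jobs"); // queue created with "requires session"
  await sender.sendMessages([
    { body: { step: 1 }, sessionId: "order-123" },
    { body: { step: 2 }, sessionId: "order-123" },
  ]);
  await sender.close();
}

// Receiver: lock one session and process it; within a session, messages arrive in send order.
async function receiveInOrder(): Promise<void> {
  const sessionReceiver = await client.acceptNextSession("jobs");
  const messages = await sessionReceiver.receiveMessages(10, { maxWaitTimeInMs: 5000 });
  for (const message of messages) {
    // ...process in order...
    await sessionReceiver.completeMessage(message);
  }
  await sessionReceiver.close();
}
```

Note that sessions serialize processing within a session, which works against the maximum-throughput goal; scoping ordering to one session per logical group keeps the rest parallel.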
My requirement is maximum performance. The job done by the Windows service is a CPU-intensive task; that's why I have limited it to 4, since the system has 4 cores.
There are multiple things to take into consideration. The location of your Windows service would impact latency and message throughput. Scaling out could help, etc.
I am creating an application that stores events and sends reminder emails to the people who signed up, 1 hour before the event (the time of each event is stored in the database). At first I was thinking about using cron jobs to schedule these emails, but now I am not sure that will work. Is there any other Node module that will allow me to implement the reminder email functionality?
If you have Redis available to back it, you might look at something like bull (a delayed-job sketch follows the feature list below).
From the readme:
Minimal CPU usage due to a polling-free design.
Robust design based on Redis.
Delayed jobs.
Schedule and repeat jobs according to a cron specification.
Rate limiter for jobs.
Retries.
Priority.
Concurrency.
Pause/resume—globally or locally.
Multiple job types per queue.
Threaded (sandboxed) processing functions.
Automatic recovery from process crashes.
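A sketch of the reminder flow with bull (assuming Redis on localhost; the queue name, payload, and sendReminderEmail are placeholders): when an event is stored, add a job whose delay lands 1 hour before the event, and a processor sends the email when the job becomes due.

```typescript
import Queue from "bull";

// Bull keeps delayed jobs in Redis and promotes them when their delay expires.
const reminderQueue = new Queue("event-reminders", "redis://127.0.0.1:6379");

// Call this when an event is created/stored.
export async function scheduleReminder(eventId: string, eventTime: Date): Promise<void> {
  const remindAt = eventTime.getTime() - 60 * 60 * 1000; // 1 hour before the event
  const delay = Math.max(remindAt - Date.now(), 0);
  await reminderQueue.add({ eventId }, { delay, attempts: 3 }); // retry a few times if sending fails
}

// Worker: runs when the delayed job becomes due.
reminderQueue.process(async (job) => {
  await sendReminderEmail(job.data.eventId); // placeholder for your email logic
});

async function sendReminderEmail(eventId: string): Promise<void> {
  // look up the attendees of eventId in the database and send the emails
}
```

Because the delayed jobs live in Redis, they survive application restarts, which is the main advantage over purely in-memory schedulers here.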
You can give node-schedule a try. It uses cron-style scheduling underneath.
At a regular interval, you can check whether there is an upcoming event and send the reminder to the appropriate people.
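With node-schedule the same idea looks roughly like this (a sketch; scheduleJob with a Date fires once at that time, and StoredEvent / sendReminderEmail stand in for your database and mail code):

```typescript
import schedule from "node-schedule";

interface StoredEvent {
  id: string;
  startsAt: Date;
}

// On startup (or whenever an event is created), register a one-off job
// for 1 hour before each event.
export function registerReminders(events: StoredEvent[]): void {
  for (const event of events) {
    const remindAt = new Date(event.startsAt.getTime() - 60 * 60 * 1000);
    if (remindAt.getTime() <= Date.now()) continue; // reminder time already passed

    schedule.scheduleJob(remindAt, () => {
      sendReminderEmail(event.id); // placeholder for your email logic
    });
  }
}

function sendReminderEmail(eventId: string): void {
  // look up the attendees of eventId and send the emails
}
```

Keep in mind that node-schedule holds its jobs in memory, so they must be re-registered after a restart; that is one reason a Redis-backed option like bull is often preferred for reminders scheduled days ahead.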