How does a worker know which queue to get tasks from? - node.js

I'm working on a Node.js project where I need to implement task queueing. I have picked BullMQ with Redis for this, following the documentation here:
import { Queue, Worker } from 'bullmq'
// Create a new connection in every instance
const myQueue = new Queue('myqueue', { connection: {
  host: "myredis.taskforce.run",
  port: 32856
}});
const myWorker = new Worker('myworker', async (job) => {}, { connection: {
  host: "myredis.taskforce.run",
  port: 32856
}});
After digging deeper into the documentation, I ended up with some questions:
Do I need one worker and one queue instance for the whole app? (I think this depends on the kind of tasks and operations you need.) I need one task queue that processes payments and another task queue that handles marketing emails. I can't figure out how this would work if we had only one worker and one queue instance. It would work, but it would require setting an identifier on every kind of operation and acting on each accordingly.
If I were to have many queue and worker instances, how would a worker know which queue it should listen to for tasks? In the code sample above, the worker seems to be named myworker and the queue is called myqueue. How are these two connected? How does the worker know it should listen for jobs from that specific queue without colliding with other queues and workers?
I'm quite new to tasks and queues; any help will be appreciated.

How does a worker know which queue to get tasks from?
The first argument to the Worker is supposed to be the name of the queue that you want it to pull messages from. The code you show is not doing that properly, but the doc here explains it.
Do I need one worker and one queue instance for the whole app? (I think this depends on the kind of tasks and operations you need.) I need one task queue that processes payments and another task queue that handles marketing emails. I can't figure out how this would work if we had only one worker and one queue instance. It would work, but it would require setting an identifier on every kind of operation and acting on each accordingly.
This really depends upon your design. You could have one queue that holds multiple types of things and one worker that processes whatever it finds in the queue.
Or, if you want jobs to be processed concurrently, you can create more than one worker, and those additional workers can even be in different processes.
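For example, a minimal sketch of the single-queue design with BullMQ (the queue name 'tasks', the job names, and the payload fields are just illustrative; the connection details are taken from the question):
import { Queue, Worker } from 'bullmq'

const connection = { host: 'myredis.taskforce.run', port: 32856 }

// One queue that holds every kind of job, distinguished by the job name
const tasksQueue = new Queue('tasks', { connection })

// Producers tag each job with a name describing the operation
await tasksQueue.add('payment', { orderId: 123 })
await tasksQueue.add('marketingEmail', { to: 'user@example.com' })

// One worker (or several identical ones, even in different processes)
// pulls from 'tasks' and branches on job.name
const tasksWorker = new Worker('tasks', async (job) => {
  if (job.name === 'payment') {
    // charge the customer ...
  } else if (job.name === 'marketingEmail') {
    // send the email ...
  }
}, { connection })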
If I were to have many queue and worker instances, how would a worker know which queue it should listen to for tasks? In the code sample above, the worker seems to be named myworker and the queue is called myqueue. How are these two connected? How does the worker know it should listen for jobs from that specific queue without colliding with other queues and workers?
As explained above, the first argument to the Worker is supposed to be the name of the queue that you want it to pull messages from.
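In other words, the only link between a queue and a worker is that name string. A minimal sketch of the two-queue setup described in the question (the queue names 'payments' and 'marketingEmails' are illustrative):
import { Queue, Worker } from 'bullmq'

const connection = { host: 'myredis.taskforce.run', port: 32856 }

// Two independent queues, identified in Redis by their names
const paymentsQueue = new Queue('payments', { connection })
const emailsQueue = new Queue('marketingEmails', { connection })

// Each worker subscribes to exactly one queue by passing that queue's
// name as its first argument, so workers and queues never collide
const paymentsWorker = new Worker('payments', async (job) => {
  // process a payment job here
}, { connection })

const emailsWorker = new Worker('marketingEmails', async (job) => {
  // send a marketing email here
}, { connection })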

Related

How to access a worker's queued requests?

I'm implementing a web server using nodejs which must serve a lot of concurrent requests. As nodejs processes the requests one by one, it keeps them in an internal queue (in libuv, I guess).
I also want to run my web server using cluster module, so there will be one requests queue per worker.
Questions:
1) If any worker dies, how can I retrieve its queued requests?
2) How can I put the retrieved requests into other workers' queues?
3) Is there any API to access live workers' request queues?
By No. 3 I mean that I want to keep queued requests somewhere such as Redis (if possible), so that in case of a server crash, failure, or even a hardware restart I can retrieve them.
Since you mentioned in the tags that you are already using (or want to use) Redis, you can use a Redis-based queue manager to do all the work for you.
Check out https://github.com/OptimalBits/bull (or its alternatives).
bull has a concept of a queue: you add jobs to the queue and listen to the same queue from different processes/VMs. bull will send each job to only one listener, and you can control how many jobs each listener processes at the same time (the concurrency level).
In addition, if one of the jobs fails to run (in other words, the listener of the queue threw an error), bull will try to give the same job to a different listener.
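A minimal sketch of that idea with bull (the queue name 'work', the Redis URL, and the retry/concurrency numbers are assumptions):
const Queue = require('bull')

const workQueue = new Queue('work', 'redis://127.0.0.1:6379')

// Producer side: add a job; 'attempts' asks bull to retry the job
// if its listener throws (or dies) before completing it
workQueue.add({ requestId: 42 }, { attempts: 3 })

// Consumer side (can run in a different process or VM): each job is
// delivered to only one listener; 5 is this listener's concurrency level
workQueue.process(5, async (job) => {
  // do the actual work with job.data here
})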

How to implement work stealing in SimPy 3?

I want to implement something akin to work stealing or task migration in multiprocessor systems. Details below.
I am simulating a scheduling system with multiple worker nodes (resources, each with capacity greater than one) and tasks (processes) that arrive randomly and are queued by the scheduler at a specific worker node. This is working fine.
However, I want to trigger an event when a worker node has spare capacity, so that it steals the front task from the worker with the longest wait queue.
I can implement the functionality described above. The problem is that all the tasks waiting on the worker queue from which we are stealing work receive the event notification. I want to notify ONLY the task at the front of the queue (or only N tasks at the front of the queue).
The Bank reneging example is the closest to what I want to implement. However, in that example (1) ALL the customers leave the queue when they are notified that the event was triggered, and (2) when the event is triggered, the customers leave the system; in my example, I want the task to wait at another worker instead (though it wouldn't actually wait, since that worker's queue is empty).
Old question: Can this be done in SimPy?
New questions: How can I do this in SimPy?
1) How can I have many processes waiting for a resource and listening for an event, but notify only the first one?
2) How can I make a process migrate to another resource?

Failure handling for Queue Centric work pattern

I am planning to use a queue-centric design as described here for one of my applications. It essentially consists of using an Azure queue where work requests are queued from the UI. A worker reads from the queue, processes the message, and deletes it from the queue.
The 'work' done by the worker is within a transaction, so if the worker fails before completing, upon restart it picks up the same message again (as it has not been deleted from the queue) and tries to perform the operation again (up to a maximum number of retries).
To scale I could use two methods:
Multiple workers, each with a separate queue. So if I have five workers W1 to W5, I have five queues Q1 to Q5, each worker knows which queue to read from, and failure handling is similar to the case with one queue and one worker.
One queue and multiple workers. Here, failure/retry handling would be more involved and might end up using the 'invisibility' time on the message to make sure no two workers pick up the same job. The invisibility time would have to be calculated to make sure it is enough for the job to complete, yet not so large that retries happen only after a long delay.
I would like to know if the first approach is the correct way to go. What are robust ways of handling failures in the second approach above?
You would be better off taking approach 2 - a single queue, but with multiple workers.
This is better because:
The process that delivers messages to the queue only needs to know about a single queue endpoint. This reduces complexity at this end;
Scaling the number of workers that are pulling from the queue is now decoupled from any code / configuration changes - you can scale up and down much more easily (and at runtime)
If you are worried about the visibility, you can initially choose a default timespan, and then if the worker looks like it's taking too long, it can periodically call UpdateMessage() to update the visibility of the message.
Finally, if your worker times out and fails to complete processing of the message, it will be picked up again by some other worker for another try. You can also use the DequeueCount property of the message to manage the number of retries.
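As a rough sketch of that flow from Node.js with the @azure/storage-queue SDK (the queue name, timeouts, retry limit, and the processWorkItem function are all assumptions; the same pattern applies from .NET with UpdateMessage/DequeueCount):
const { QueueServiceClient } = require('@azure/storage-queue')

const queueClient = QueueServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING)
  .getQueueClient('work-items')

const maxRetries = 5

async function pollOnce() {
  // Receive one message and hide it from other workers for 60 seconds
  const { receivedMessageItems } = await queueClient.receiveMessages({
    numberOfMessages: 1,
    visibilityTimeout: 60
  })
  if (receivedMessageItems.length === 0) return

  const msg = receivedMessageItems[0]

  // Too many failed attempts: give up on (or dead-letter) the message
  if (msg.dequeueCount > maxRetries) {
    await queueClient.deleteMessage(msg.messageId, msg.popReceipt)
    return
  }

  // If processing may run long, extend the message's invisibility
  // instead of guessing a huge timeout up front
  const updated = await queueClient.updateMessage(
    msg.messageId, msg.popReceipt, msg.messageText, 300)

  await processWorkItem(msg.messageText) // hypothetical transactional work

  // Delete only after the work completed; otherwise the message becomes
  // visible again after the timeout and another worker retries it
  await queueClient.deleteMessage(msg.messageId, updated.popReceipt)
}

// call pollOnce() in a loop or on a timer from the worker's run loop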
Multiple workers, each with a separate queue. So if I have five workers W1 to W5, I have five queues Q1 to Q5, each worker knows which queue to read from, and failure handling is similar to the case with one queue and one worker.
With this approach I see the following issues:
This approach makes your architecture tightly coupled (thus defeating the whole purpose of using queues). Because each worker role listens to a dedicated queue, the web application responsible for pushing messages into the queues always needs to know how many workers are running. Any time you scale your worker role up or down, you somehow need to tell the web application so that it can start pushing messages into the appropriate queue.
If a worker role instance is taken down for whatever reason, there's a possibility that some messages may never be processed, as the other worker role instances are working on their own dedicated queues.
There may be under-utilization or over-utilization of worker role instances depending on how the web application distributes messages among the queues. For optimal utilization, the web application would have to know about each worker role's load so that it can decide which queue to send a message to. This is certainly not a desirable thing for a web application to do.
I believe #2 is the correct way to go. @Brendan Green has covered your concerns about #2 excellently in his answer.

Distributing topics between worker instances with minimum overlap

I'm working on a Twitter project, using their streaming API, built on Heroku with Node.js.
I have a collection of topics that my app needs to process, which are pulled from MongoDB. I need to track each of these topics via the API; however, it needs to be done such that each topic is tracked only once. As each worker process expires after approximately one hour, when a worker receives SIGTERM it needs to untrack each topic assigned to it and release them back to the pool.
I've been using RabbitMQ to communicate between app and worker processes, however with this I'm a little stuck. Are there any good examples, or advice you can offer on the correct way to do this?
Couldn't the worker just send a message via the message queue to the application when it receives a SIGTERM? According to the Heroku docs on shutdown, the process is allowed a couple of seconds (10) before it will be forcefully killed.
So you can do something like this:
// listen for SIGTERM sent by heroku
process.on('SIGTERM', function () {
  // - notify app that this worker is shutting down
  messageQueue.sendSomeMessageAboutShuttingDown();
  // - shutdown process (might need to wait for async completion
  //   of message delivery to not prevent it from being delivered)
  process.exit();
});
Alternatively, you could break up your work into much smaller chunks and have workers only 'take' work that will run for a couple of minutes or even seconds at most. Your main application should be the bookkeeper: if a process doesn't complete its task within a specified time, assume it has gone missing and make the task available for another process to handle. You can probably also implement this behavior using confirms/acknowledgements in RabbitMQ.
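A rough sketch of that with amqplib and consumer acknowledgements (the queue name 'topics' and the trackTopic handler are assumptions): the broker only drops a delivery once it is acked, so if a worker dies mid-task the unacknowledged message is redelivered to another consumer.
const amqp = require('amqplib')

async function startWorker() {
  const conn = await amqp.connect('amqp://localhost')
  const channel = await conn.createChannel()
  await channel.assertQueue('topics', { durable: true })

  // Hand this worker only one unacknowledged message at a time
  channel.prefetch(1)

  channel.consume('topics', async (msg) => {
    try {
      await trackTopic(JSON.parse(msg.content.toString())) // hypothetical handler
      channel.ack(msg)               // done: remove it from the queue
    } catch (err) {
      channel.nack(msg, false, true) // failed: requeue for another worker
    }
  }, { noAck: false })
}

startWorker()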
RabbitMQ won't do this for you.
It will allow you to distribute the work to another process and/or computer, but it won't provide the kind of mechanism you need to prevent more than one process / computer from working on a particular topic.
What you want is a semaphore - a way to control access to a particular "resource" from multiple processes... a way to ensure only one process is working on a particular resource at a given time. In your case the "resource" will be the topic... but it will still be the resource that you want to control access to.
FWIW, there has been discussion of using RabbitMQ to implement a distributed semaphore in the past:
https://www.rabbitmq.com/blog/2014/02/19/distributed-semaphores-with-rabbitmq/
https://aphyr.com/posts/315-call-me-maybe-rabbitmq
But the general consensus is that this is a bad idea: there are too many edge cases and scenarios in which RabbitMQ will fail to work as a proper semaphore.
There are some node.js semaphore libraries available. I would recommend looking at them, and using one of them. Have a single process manage the semaphore and decide which other process can / cannot work on which topic.

Concurrent message processing in RabbitMQ consumer

I am new to RabbitMQ, so please excuse me if my question sounds trivial. I want to publish a message to RabbitMQ which will be processed by a RabbitMQ consumer.
My consumer machine is a multi-core machine (preferably a worker role on Azure), but QueueBasicConsumer pushes one message at a time. How can I program it to utilize all cores so that I can process multiple messages concurrently?
One solution could be to open multiple channels in multiple threads and then process the messages there. But in this case, how would I decide the number of threads?
Another approach could be to read messages on the main thread and then create a task for each message. In this case I would have to stop consuming messages when there are already many messages (beyond some threshold) in progress. I'm not sure how this could be implemented.
Thanks in advance.
Your second option sounds much more reasonable - consume on a single channel and spawn multiple tasks to handle the messages. To implement concurrency control, you could use a semaphore to control the number of tasks in flight. Before starting a task, you would wait for the semaphore to become available, and after a task has finished, it would signal the semaphore to allow other tasks to run.
You haven't specified your language/technology stack of choice, but whatever you do, try to utilise a thread pool instead of creating and managing threads yourself. In .NET, that would mean using Task.Run to process messages asynchronously.
Example C# code:
using (var semaphore = new SemaphoreSlim(MaxMessages))
{
    while (true)
    {
        var args = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
        // Block until fewer than MaxMessages tasks are in flight
        semaphore.Wait();
        Task.Run(() => ProcessMessage(args))
            .ContinueWith(t => semaphore.Release());
    }
}
Instead of controlling the concurrency level yourself, you might find it easier to enable explicit ACK control on the channel, and use RabbitMQ Consumer Prefetch to set the maximum number of unacknowledged messages. This way, you will never receive more messages than you wanted at once.
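For illustration, a minimal Node.js sketch of prefetch-limited consumption with amqplib (the .NET client exposes the same idea via BasicQos; the queue name 'jobs' and handleMessage are assumptions):
const amqp = require('amqplib')

async function main() {
  const conn = await amqp.connect('amqp://localhost')
  const channel = await conn.createChannel()
  await channel.assertQueue('jobs', { durable: true })

  // At most 8 unacknowledged messages are delivered to this consumer;
  // the broker will not push a 9th until one of them is acked
  channel.prefetch(8)

  channel.consume('jobs', async (msg) => {
    await handleMessage(msg.content) // hypothetical async processor
    channel.ack(msg)                 // acking frees a prefetch slot
  }, { noAck: false })
}

main()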
