I'm trying to figure out the best architecture for a scalable BullMQ implementation. We have a number of different services that are going to be feeding jobs into queues. In some situations we may have multiple different services feeding jobs into the same queue.
Initially I thought to contain the entire BullMQ implementation on a single instance and stand up a simple API with an endpoint that can receive jobs to be added to the queue. So any service that wants to add a job to a queue just hits a specific endpoint, and the job gets added.
I was wondering though whether an alternative approach could be to instantiate a BullMQ queue on the various services that want to add jobs to queues, and then just have the workers located on a separate service to pick up jobs from the queue when they are ready for execution? This 'worker box' can then horizontally scale up as required.
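For concreteness, here is roughly what I mean (just a sketch, with placeholder queue names and Redis connection details):

```ts
import { Queue, Worker } from 'bullmq';

// Placeholder Redis connection shared by all services.
const connection = { host: 'redis.internal', port: 6379 };

// On any producer service: instantiate the queue and add jobs.
// Several services can construct a Queue with the same name; they
// all write to the same Redis-backed structure.
const emailQueue = new Queue('email', { connection });
await emailQueue.add('send-welcome', { userId: 123 });

// On the separate 'worker box': only the Worker lives here, and it
// scales horizontally by running more instances of this process.
const worker = new Worker(
  'email',
  async (job) => {
    console.log(`processing ${job.name}`, job.data);
  },
  { connection },
);
```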
If this approach is possible, I have concerns about what the implications may be of having multiple services adding jobs to the same queue - can this cause issues or is BullMQ designed to handle such a situation?
I'm finding it difficult to find information about what standard 'best-practice' approaches are for BullMQ implementation. Any guidance greatly appreciated. Thanks.
Related
I have been researching how to efficiently solve the following use case and I am struggling to find the best solution.
Basically I have a Node.js REST API which handles requests from users of a mobile application. We want some requests to launch background tasks outside of the req/res flow because they are CPU-intensive or might just take a while to execute. We are trying to implement, or find an existing framework that can handle, different job queues in the following way (or at least something compatible with the use case):
Every user has their own set of job queues (there are different kinds of jobs).
The jobs within one specific queue have to be executed sequentially, one job at a time, but everything else can run in parallel (ideally no single queue hogs the workers, so all queues get roughly equal priority).
Some queues might fill up with hundreds of tasks at a given time but most likely they will be empty a lot of the time.
Queues need to be persistent.
We currently have a RabbitMQ solution with one queue for every kind of task, shared by all users. Users dump tasks into the same queues, which results in a queue filling up with tasks from one specific user for a long time while the rest of the users wait for those tasks to be done before their own start to be consumed. We have looked into priority queues, but we don't think that's the way to go for our use case.
The first somewhat logical solution we thought of is to create temporary queues whenever a user needs to run background jobs, and delete them when empty. Nevertheless, we are not sure having that many queues is scalable, and we are also struggling with dynamically creating RabbitMQ queues, exchanges, etc. (we have even read somewhere that it might be an anti-pattern?).
We have been doing some more research, and maybe the way to go would be with other tools such as Kafka, or Redis-based options like BullMQ or similar.
What would you recommend?
If you're on AWS, have you considered SQS? There is no limit on the number of standard queues you can create, and in-flight messages can reach up to 120k. This would seem to satisfy your requirements above.
While the SQS solution mentioned did prove to be very scalable, the amount of polling we would need to do (or the use of SNS) made the solution less than optimal. On the other hand, implementing a homemade solution via database polling was too much for our use case, and we did not have the time or computational resources to consider adding a new database to our stack.
Luckily, we ended up finding that the Pro version of BullMQ has a "Group" functionality which applies a round-robin strategy across different groups of tasks within a single queue. This fit our use case perfectly and is what we ended up using.
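For anyone finding this later, the group usage looks roughly like this (a sketch based on my reading of the BullMQ Pro docs; the queue and group names are made up):

```ts
import { QueuePro, WorkerPro } from '@taskforcesh/bullmq-pro';

const connection = { host: 'localhost', port: 6379 };
const queue = new QueuePro('tasks', { connection });

// Each user's jobs go into their own group; BullMQ Pro round-robins
// across groups, so one busy user no longer starves the others.
await queue.add('process', { payload: '...' }, { group: { id: 'user-42' } });

// Limiting group concurrency to 1 keeps each user's jobs sequential
// while different users' jobs still run in parallel.
const worker = new WorkerPro(
  'tasks',
  async (job) => {
    // ...do the work...
  },
  { connection, group: { concurrency: 1 } },
);
```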
I have the following task to implement using the AWS stack:
One job is triggered periodically and puts a message onto a queue (SQS). A worker receives this task, and based on it, additional tasks need to be created (approximately 1–10K tasks). All these tasks are put onto another queue, and additional workers process them.
This flow can be displayed in the following way:
Periodic task -> SQS -> worker_1 (creates more tasks) -> SQS -> workers_2
Based on project conventions and bureaucracy, it will take some time to create two separate services: one for worker_1, which listens for the periodic task and creates the fine-grained tasks, and one for workers_2, which just process particular tasks. Each needs Docker images, CI jobs, etc., and then has to be deployed.
So, here is the tradeoff:
1. Spend additional time and create two separate services. On the other hand, these services might be really simple, and it is debatable whether two separate projects are justified.
2. Make this one service that puts messages onto the same queue, also listens for messages on that queue, and performs the work of both worker_1 and worker_2.
Any suggestions or thoughts are appreciated!
I don't think there can be a "correct" answer to this; you already have a good list of pros and cons for both options. Some additional things I thought of:
SQS queues don't really allow you to pick out specific types of messages; you pretty much need to read everything first-in, first-out. So if you share queues, you may have less control over prioritizing messages.
For the two services to interact, they need a shared message definition. Sharing the same codebase would make it easier to dev and test the messaging code. Of course, it could also be a shared library.
Deploying both worker types in the same server/application would share resources, which might be more economical at the low end, or it might be confusing at high scale.
It may be possible to develop all the code in the same application, and leave until deployment time the decision of whether it all runs on one server with one queue, or on separate servers reading from separate queues. This seems ideal to me.
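A rough sketch of that deployment-time flexibility, using the AWS SDK for JavaScript (the queue URLs, env var, and message shape are made up for illustration):

```ts
import {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from '@aws-sdk/client-sqs';

const sqs = new SQSClient({});

// Both worker types live in the same codebase and share the message definition.
const handlers: Record<string, (body: unknown) => Promise<void>> = {
  'fan-out': async (body) => { /* worker_1: create the 1-10K fine-grained tasks */ },
  'process': async (body) => { /* worker_2: process one fine-grained task */ },
};

// Which queue(s) this instance polls is decided at deployment time via an
// env var, so the same image can run as worker_1, workers_2, or both.
async function poll(queueUrl: string): Promise<void> {
  while (true) {
    const { Messages = [] } = await sqs.send(
      new ReceiveMessageCommand({
        QueueUrl: queueUrl,
        MaxNumberOfMessages: 10,
        WaitTimeSeconds: 20, // long polling
      }),
    );
    for (const msg of Messages) {
      const { type, ...body } = JSON.parse(msg.Body!);
      await handlers[type]?.(body);
      await sqs.send(
        new DeleteMessageCommand({ QueueUrl: queueUrl, ReceiptHandle: msg.ReceiptHandle! }),
      );
    }
  }
}

for (const url of (process.env.QUEUE_URLS ?? '').split(',').filter(Boolean)) {
  void poll(url);
}
```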
I'm using Azure WebJobs as part of a project at work. These are configured as continuously running jobs that monitor a number of different queues; as queue messages are received, they cause various API commands to be run. The issue I have is that some of the API commands run quickly (i.e. a few seconds) and some run slowly (several minutes), and I'm not sure how best to split the queue handlers between the WebJobs.
For example, I could put all of the slow API command handlers in one WebJob and all of the quick handlers in a different WebJob. My concern is that the "slow" WebJob process would always be busy whereas the "quick" WebJob process would be idling most of the time.
Another approach would be to mix quick and slow handlers in the same WebJob project. My concern with that would be the quicker handlers starving the slower ones of attention, or vice versa.
A third approach would be to have a separate WebJob for each individual message handler, but given the number of message types we have to deal with I'd rather not go down that route. It also seems like overkill to be honest.
I was wondering if anyone had encountered a similar scenario and could offer any insight into how Azure WebJobs choose which message to handle when they are monitoring multiple queues. Numerous internet searches have failed to turn up any guidance or help in this area. To be clear, I'm not really after opinions as to which approach people think would be best; I'm looking for answers from people who have actually dealt with this kind of problem and can say with some degree of certainty which approach would be best given the way the Azure WebJobs API currently prioritizes queue message handling.
If you have multiple functions listening on different queues, the SDK will call them in parallel when messages are received simultaneously. You cannot set which queue should be processed first.
Depending on your configuration, you will handle them in parallel. If you think that some executions will stall others, you can split the handling into multiple WebJobs and scale them separately.
I am working on a system that has lots of tasks that are perfect for queueing, plus some home-made legacy solutions already in place that work to varying degrees. I am familiar with Gearman and have read through the RabbitMQ tutorials, and I am keen to upgrade the current solutions to one of these more robust options (leaning towards RabbitMQ at the moment because of the flexibility, scalability, and the management plugin).
I am having trouble understanding how to address a problem where user A can queue up a large number of jobs (let's say 5000) of type A, which then blocks the processing of any newly added jobs of type A until user A's jobs are done. I'd like to implement a solution that fairly shares the load, or even just round-robins between the queued users.
Does anyone have any suggestions or insights into how I might implement a solution to this?
I thought routing keys might help, but if user A's jobs are queued before user B adds theirs, they still won't be processed until user A's jobs have been consumed?
I have also thought of creating a queue for each user and job type, but I am unsure how to do this dynamically.
Perhaps I need to implement some sort of control queue that sets up queues and dynamically adjusts the worker processes to consume each newly added user-only queue. But would the workers collect jobs from the queues in a round-robin way? And how would I decide when to remove the queues?
Thanks in advance for any help!
OK, no comments from anyone, so in the end I figured out that in RabbitMQ you can consume from multiple queues in a round-robin fashion. So I built a control queue that tells consumer workers to consume from a dynamically created queue for each user's tasks; the per-user queues are periodically deleted when empty.
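Roughly what that looks like with amqplib; the queue naming, message format, and the 60-second expiry policy are my own choices for the sketch, not the only way to do it:

```ts
import amqp from 'amqplib';

async function main(): Promise<void> {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();

  // prefetch(1): the worker takes one message at a time, so when it is
  // subscribed to many per-user queues, deliveries interleave between
  // them instead of draining one user's backlog first.
  await ch.prefetch(1);

  // Control queue: producers announce a newly created per-user queue here.
  await ch.assertQueue('control');
  await ch.consume('control', async (msg) => {
    if (!msg) return;
    const { userQueue } = JSON.parse(msg.content.toString());

    // Declare the per-user queue; 'x-expires' deletes it automatically
    // after it has been unused for 60 seconds.
    await ch.assertQueue(userQueue, { arguments: { 'x-expires': 60000 } });
    await ch.consume(userQueue, (job) => {
      if (!job) return;
      // ...process this user's task...
      ch.ack(job);
    });
    ch.ack(msg);
  });
}

main().catch(console.error);
```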
I have several WorkerRoles that only do work for a short time, and it would be a waste of money to put each of them in its own instance. We could merge them into a single one, but it would be a mess, and in the distant future they are supposed to work independently as the load increases.
Is there a way to create a "multi role" WorkerRole in the same way you can create a "multi site" WebRole?
If not, I think I can create a "master worker role" that loads assemblies from a given folder, looks for RoleEntryPoint-derived classes via reflection, creates instances, and invokes their .Run() or .OnStart() methods. This "master worker role" would also rethrow unexpected exceptions and call .OnStop() on all sub-RoleEntryPoints when .OnStop() is called on the master. Would it work? What should I be aware of?
As mentioned by others, this is a very common technique for maximizing utilization of your instances. There are examples and "frameworks" that abstract the worker infrastructure from the actual work you want done, including one in this (our) sample: http://msdn.microsoft.com/en-us/library/ff966483.aspx (scroll down to "inside the implementation").
The most common ways of triggering work are:
1. Time-scheduled workers (like "cron" jobs)
2. Message-based workers (work triggered by the presence of a message)
The code sample mentioned above implements further abstractions for #2 and is easily extensible for #1.
Bear in mind, though, that all interactions with queues are based on polling. The worker will not be woken up by a new message on the queue; you need to actively query the queue for new messages. Querying too often will make Microsoft happy, but probably not you :-). Each query counts as a billed transaction (10K of those = $0.01). A good practice is to poll the queue for messages with some kind of delayed back-off. Also, get messages in batches.
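The back-off pattern is simple enough to sketch; here `getMessages` is a hypothetical stand-in for whatever batch-get call your queue SDK provides:

```ts
// Hypothetical client call -- substitute your queue SDK's batch get here.
declare function getMessages(batchSize: number): Promise<unknown[]>;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function pollWithBackoff(): Promise<void> {
  let delay = 1_000;        // start by polling every second
  const maxDelay = 60_000;  // never back off beyond one minute

  while (true) {
    const messages = await getMessages(32); // fetch in batches
    if (messages.length > 0) {
      delay = 1_000;        // work found: reset to fast polling
      // ...process the batch...
    } else {
      // Idle: double the delay, so an empty queue costs fewer billed transactions.
      delay = Math.min(delay * 2, maxDelay);
    }
    await sleep(delay);
  }
}
```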
Finally, taking this to an extreme, you can also combine web roles and worker roles in a single instance. See here for an example: http://blog.smarx.com/posts/web-page-image-capture-in-windows-azure
Multiple worker roles provide a very clean implementation. However, the cost footprint for idle role instances is going to be much higher than a single worker role.
Role-combining is a common pattern I've seen, working with ISV's on their Windows Azure deployments. You can have a background thread that wakes up every so often and runs a process. Another common implementation technique is to use an Azure Queue to send a message representing a process to execute. You can have multiple queues if you want, or a single command queue. In any case, you would have a queue listener running in a background thread, which would run in each instance. The first one to get the message processes it. You could take it further, and have a timed process pushing those messages onto the queue (maybe every 24 hours, or every hour).
Aside from CPU and memory limits, just remember that a single role can only have a maximum of 5 endpoints (less if you're using Remote Desktop).
EDIT: As of September 2011, role configuration has become much more flexible, now that you have 25 Input endpoints (accessible from the outside world) and 25 Internal endpoints (used for communication between roles) across an entire deployment. The MSDN article is here
I recently blogged about overloading a Web Role, which is somewhat related.
While there's no real issue with the solutions that have been pointed out for running multiple worker components within a single Worker Role, I just want you to keep in mind that the entire point of having distinct Worker Roles in the first place is isolation in the face of faults. If you shove everything into a single Worker Role instance, just one of those worker components behaving badly can take down every other worker component in that role. All of a sudden you're writing a lot of infrastructure to provide isolation and fault tolerance across components, which is pretty much what Azure is there to provide for you.
Again, I'm not saying it's an absolute to strictly do one thing. There's a place where multiple components under a single Worker Role makes sense (especially monetarily). I'm simply saying you should keep in mind why it's designed this way in the first place and factor that in appropriately as you plan your architecture.
Why would a 'multi role' be a mess? You could write each worker role implementation as a loosely coupled component and then compose a Worker Role from all appropriate components.
When you later need to separate some of the responsibilities out to a separate worker role, you can compose a new worker role with only this component, while at the same time removing it from the old worker role.
If you wanted to, you could employ late binding so that this could even be done without recompilation, but often I don't think that would be worth the effort.