I have two Tomcat servers running at the same time. Reports requested on server 1 are sent to server 2 for processing. So how would I go about managing the threads on server 2? For example, if I wanted to queue up the work, how would I go about doing that?
Use a message queue (such as RabbitMQ) in the middle to queue up the tasks that need to be done.
Your report-generating server can then pull jobs from the queue and work on them. If you need to slow down or speed up, you can change the number of "workers" running.
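As a rough illustration, here is what the producer and consumer sides might look like with Node.js and the amqplib client (the queue name and payload are made up; from Tomcat you would use RabbitMQ's Java client, but the pattern is the same):

```ts
// Producer (server 1) enqueues a report job; consumer (server 2) pulls and processes.
// Sketch using amqplib against a local RabbitMQ broker; "reports" is illustrative.
import amqp from 'amqplib';

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('reports', { durable: true });

  // Server 1 side: enqueue a job instead of processing it in-request.
  ch.sendToQueue('reports', Buffer.from(JSON.stringify({ reportId: 42 })), {
    persistent: true, // survive a broker restart
  });

  // Server 2 side: prefetch(1) means each worker holds one unacked job at a time,
  // so adding or removing workers directly scales throughput.
  await ch.prefetch(1);
  await ch.consume('reports', async (msg) => {
    if (!msg) return;
    const job = JSON.parse(msg.content.toString());
    await generateReport(job); // your report logic goes here
    ch.ack(msg);
  });
}

async function generateReport(job: { reportId: number }) {
  console.log('processing report', job.reportId);
}

main().catch(console.error);
```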
I'm implementing a web server using Node.js which must serve a lot of concurrent requests. As Node.js processes requests one by one, it keeps them in an internal queue (in libuv, I believe).
I also want to run my web server using the cluster module, so there will be one request queue per worker.
Questions:
1. If a worker dies, how can I retrieve its queued requests?
2. How can I put the retrieved requests into other workers' queues?
3. Is there any API to access a live worker's request queue?
By no. 3 I mean that I want to keep the queued requests somewhere such as Redis (if possible), so that in case of a server crash, failure, or even a hardware restart I can retrieve them.
Since you mentioned in the tags that you are already using (or want to use) Redis, you can use a Redis-based queue manager to do all the work for you.
Check out https://github.com/OptimalBits/bull (or its alternatives).
bull has the concept of a queue: you add jobs to the queue and listen to the same queue from different processes/VMs. bull will send each job to only one listener, and you can control how many jobs each listener processes at the same time (its concurrency level).
In addition, if one of the jobs fails to run (in other words, the queue's listener threw an error), bull will try to give the same job to a different listener.
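A minimal sketch of that flow with bull (the queue name, payload, and concurrency level are illustrative):

```ts
// Producer and consumer can live in different processes or VMs;
// bull coordinates them through Redis.
import Queue from 'bull';

const reports = new Queue('reports', 'redis://127.0.0.1:6379');

// Producer: any process can add jobs to the queue.
async function enqueue() {
  await reports.add({ userId: 7 }, { attempts: 3 }); // retried up to 3 times on failure
}

// Consumer: process up to 5 jobs at a time (the concurrency level);
// bull guarantees each job is delivered to only one processor.
reports.process(5, async (job) => {
  console.log('working on', job.data);
  // throwing here makes bull retry the job, possibly on another listener
});

enqueue().catch(console.error);
```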
I'm using a Managed Executor Service to implement a process manager which will process tasks in the background upon receiving a JMS message event. Normally there will be a small number of tasks running (maybe 10 max), but what if something happens and my application starts getting hundreds of JMS message events? How do I handle such an event?
My thought is to limit the number of threads if possible, save all the other messages to a database, and run them when a thread becomes available. Thanks in advance.
My thought is to limit the number of threads if possible, save all the other messages to a database, and run them when a thread becomes available.
The detailed answer to this question depends on which Java EE app server you run on, since they all have slightly different configuration.
Any Java EE app server will allow you to configure the thread pool size of your Managed Executor Service (MES); this is the number of worker threads for your thread pool.
Say you have 10 worker threads and you get flooded with 100 requests all at once: the MES will keep a queue of the backlogged requests, and the worker threads will take work off the queue whenever they finish, until the queue is empty.
Now, it's fine if work goes to the queue sometimes, but if your work queue grows faster than your worker threads can drain it, you will run into problems. The solution is to increase your thread pool size; otherwise the backlog will keep growing and your server will run out of memory.
What if something happens and my application starts getting hundreds of JMS message events? How do I handle such an event?
If the load on your server will be so sporadic that tasks need to be saved to a database, the best approach would be to either:
increase the thread pool size
have the server immediately reject incoming tasks when the task backlog queue is full (sketched below)
have clients do a blocking wait until the server's task queue is no longer full (I would only advise this option if client task submission is in no way connected to user experience)
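The pool size and queue capacity themselves are configured in the app server, but the reject-when-full behaviour of the second option can be sketched independently of Java EE. Here is a minimal TypeScript illustration; the class name, pool size, and queue capacity are all made up, and the real values would come from your MES configuration:

```ts
// Conceptual sketch of a fixed pool draining a bounded backlog queue.
type Task = () => Promise<void>;

class BoundedPool {
  private backlog: Task[] = [];

  constructor(workers: number, private maxBacklog: number) {
    for (let i = 0; i < workers; i++) void this.workerLoop();
  }

  // Returns false (i.e. rejects immediately) when the backlog is full.
  submit(task: Task): boolean {
    if (this.backlog.length >= this.maxBacklog) return false;
    this.backlog.push(task);
    return true;
  }

  private async workerLoop() {
    for (;;) {
      const task = this.backlog.shift();
      if (task) await task();
      else await new Promise((r) => setTimeout(r, 50)); // idle; poll again
    }
  }
}

// 10 "worker threads", room for 100 backlogged requests.
const pool = new BoundedPool(10, 100);
const accepted = pool.submit(async () => console.log('task ran'));
if (!accepted) console.warn('backlog full - task rejected');
```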
We currently process a set of tasks using queue workers in Laravel. When I run multiple php artisan queue:work processes, jobs end up running together (asynchronously). We are using Beanstalkd as the queue driver.
The issue is that in the queued work we are polling an API that only allows one concurrent session per agent_id. That is, only one API call with the same agent_id can run at a time.
We thought of spinning up multiple php artisan queue:work processes with a filter on the queue name matching the agent_id, but we have over 500 agents, so we would need 500 processes; this is not ideal.
Is there any way to implement a lock-style feature for each agent_id, so that if a job is already running for a particular agent_id the new job is sent back to the queue? Or are there any features of Beanstalkd that would allow for this?
The other option could be to gracefully handle the rejection from the API when the user is already logged in (and send the job back to the queue). But this could get messy and clutter the logs.
You could either run only a single worker that is capable of running the fetch-from-API job, or use some sort of external marshalling/lock service.
The options for that may be an internal rate-limiting system, or some kind of shared atomic locking system: a Memcached or Redis server where each worker tries to set a lock key, and only the worker that successfully sets it gets to work on the task. An advantage of that is that as soon as the API request has completed you can remove the lock, and then while that worker processes the results, a different worker can make a new request.
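A sketch of that lock with ioredis from Node/TypeScript (the key naming and TTL are assumptions; the same SET NX EX pattern is available from PHP Redis clients too):

```ts
// Per-agent lock via Redis SET NX EX: only one worker may hold an agent's lock,
// and the TTL ensures a crashed worker cannot hold it forever.
import Redis from 'ioredis';

const redis = new Redis(); // localhost:6379

async function tryLock(agentId: string, ttlSeconds = 60): Promise<boolean> {
  // 'NX' = set only if the key does not exist; 'EX' = expire after ttlSeconds.
  const res = await redis.set(`lock:agent:${agentId}`, '1', 'EX', ttlSeconds, 'NX');
  return res === 'OK';
}

async function unlock(agentId: string): Promise<void> {
  await redis.del(`lock:agent:${agentId}`);
}

// Worker side: if another job holds the agent's lock, release this job back to the queue.
async function handleJob(agentId: string, callApi: () => Promise<void>, requeue: () => void) {
  if (!(await tryLock(agentId))) {
    requeue();
    return;
  }
  try {
    await callApi();
  } finally {
    await unlock(agentId); // free the agent as soon as the API call completes
  }
}
```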
I am trying to fork cluster workers up to a maximum of 10, and only if the working load increases. Can it be done?
I have tried strong-cluster-control's setSize, but I can't find an easy way of forking automatically (for example, forking when many requests are coming in) or of closing/"suiciding" forks (maybe with a timeout if nothing is being done, as in this answer).
This is my repo's main file on GitHub
Thank you in advance!!
I assume that you already have some idea of how you would like to spread your load, so I will not include details about that and will instead focus on the interprocess communication required for this.
Notifying the master
To send arbitrary data to the master, you can use process.send() from a worker. The way I would go about this is probably something along these steps (a sketch follows the list):
The application is started
The minimum number of workers is spawned
Each worker sends the master a request message, via process.send(), every time it receives a new request
The master keeps track of all the request events from all workers
If the number of request events rises above a predefined threshold (e.g. > 100 requests/s), it spawns a new worker
If the number of request events falls below a predefined threshold, it asks one of the workers to stop accepting new requests and close itself gracefully (note that it should not simply kill the process, to avoid interrupting ongoing requests)
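A minimal sketch of those steps using Node's built-in cluster module (the thresholds, the one-second sampling window, and the port are made-up values):

```ts
// Rate-based scaling sketch. Requires Node 16+ for cluster.isPrimary
// (use cluster.isMaster on older versions).
import cluster from 'cluster';
import * as http from 'http';
import * as os from 'os';

const MAX_WORKERS = os.cpus().length;

if (cluster.isPrimary) {
  let requests = 0;

  const spawn = () =>
    cluster.fork().on('message', (m) => {
      if (m === 'request') requests++; // steps 3-4: workers report, master counts
    });

  spawn(); // start with the minimum number of workers

  setInterval(() => {
    const rate = requests; // requests seen in the last second
    requests = 0;
    const workers = Object.values(cluster.workers ?? {});
    if (rate > 100 && workers.length < MAX_WORKERS) {
      spawn(); // step 5: scale up
    } else if (rate < 10 && workers.length > 1) {
      workers[0]?.send('shutdown'); // step 6: ask one worker to stop gracefully
    }
  }, 1000);
} else {
  const server = http
    .createServer((_req, res) => {
      process.send?.('request'); // step 3: notify the master
      res.end('ok');
    })
    .listen(8000);

  process.on('message', (m) => {
    if (m === 'shutdown') server.close(() => process.exit(0)); // finish ongoing requests first
  });
}
```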
The main point is: do not focus on time, focus on rate. In an application that is supposed to handle tens to thousands of requests per second, your setTimeout() (whose task might be to kill a worker that has been idle for too long) will never fire, because Node.js evenly distributes the load across your workers. You could start with one worker, but once you reach your maximum you will never drop back to one worker under continuous load, even if there is only one request per second.
It should be noted that it is counterproductive to spawn more workers than the number of CPU cores at your disposal. It might, however, be beneficial to start with a single worker and incrementally increase the count to all cores as load increases.
I have seen a topic recommending no more than 200 threads for a server machine.
I am trying to implement a Listener class which listens to 1,000 devices; I mean, 1,000 devices sending different types of messages to the application.
I tried two different approaches:
1. Create a thread for each device at runtime, with a dynamic list holding the messages for that device, and start a thread to process the messages from that list.
But my machine would not create more than 50 threads :), and I agree it's a bad idea...
2. I created 10 different lists which hold the 10 different types of messages, and 10 processor threads for those lists; each goes to its relevant list, processes a message, and then deletes it.
But here is the problem: say I receive 50 messages from 50 devices in list 1.
By the time list 1's processor thread gets to the last (50th) message, its time limit, which is 10 seconds, will have expired.
Any ideas for the best architecture to talk to more than 500 devices and process their different types of messages within 10 seconds?
I am working in C#; my application connects to a server as a TCP/IP client.
That server in turn connects to the online devices, which send messages to it with a device id, message data, and message type; I then receive those messages from the server and reply back through it using the device id.
I think you need to partition the system differently. The listeners should be high priority but should only enqueue the requests. The queue should then be processed by a pool of workers. You could add prioritisation and other optimisations on the dequeuing side. In terms of getting every request processed within 10 seconds, it is really the second half of the system you will be optimising.
Think of a traditional queuing system. You have a queue of work requests to process. Each request has a series of attributes, say Name (string) and Priority (int). Once a work request has been queued, other workers (threads, processes, etc.) can interrogate the queue to pull out items based on priority and process them.
To meet the 10 s target, I'd say that as soon as a worker has started processing a request, a timer comes into play and will mark that request as timed out after 10 s unless the worker completes the task. Other workers can watch for the results of the work in the queue and then handle the response behaviours.
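A rough TypeScript sketch of that shape (the field names, the 4-worker pool, and the polling interval are illustrative assumptions, not a full implementation):

```ts
// Listeners enqueue work; a small pool of workers dequeues by priority;
// a 10 s timer marks any request that overruns its deadline.
interface WorkRequest {
  name: string;
  priority: number; // higher runs first
}

const queue: WorkRequest[] = [];

// Listener side: cheap and high priority - just enqueue and return.
function enqueue(req: WorkRequest) {
  queue.push(req);
  queue.sort((a, b) => b.priority - a.priority);
}

// Worker side: pull by priority, race the handler against a 10 s deadline.
async function worker(id: number, handle: (r: WorkRequest) => Promise<void>) {
  for (;;) {
    const req = queue.shift();
    if (!req) {
      await new Promise((r) => setTimeout(r, 10)); // idle; poll again
      continue;
    }
    const deadline = new Promise<'timeout'>((r) => setTimeout(() => r('timeout'), 10_000));
    const outcome = await Promise.race([handle(req).then(() => 'done' as const), deadline]);
    if (outcome === 'timeout') {
      console.warn(`worker ${id}: "${req.name}" missed the 10 s deadline`);
    }
  }
}

// e.g. a pool of 4 workers draining the queue
for (let i = 0; i < 4; i++) void worker(i, async (r) => console.log('processing', r.name));
```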
Use other highly concurrent programming models besides plain threading (though threading is one of the highly concurrent models too).
For socket/TCP/IP network messaging, use epoll on Linux 2.6.x and completion ports on Windows/MSVC.
See the document named EffoNetMsg.pdf at http://code.google.com/p/effonetmsg/downloads/list to learn more about highly concurrent programming models. We use only 2 or 3 threads for multiple listeners and more than 1,000 clients.