I am new to boost::thread_pool.
I can create a thread pool with 4 threads.
But how many tasks can I post?
More than 4. Let's say 100.
Does that mean that 96 will wait while the first 4 are processed?
From the documentation there is only a join method, which waits until all threads are done.
There is no method to check whether at least one thread is available to post new work to.
I would like to wait until at least one thread is available before posting a new task.
Is that possible?
Assuming you meant boost::asio::thread_pool, then yes: if you post 100 tasks to a pool of 4, they will all be executed in turn on the next available thread, averaging about 25 tasks per thread if they all have similar execution times.
This is the nature of a thread pool. If you want to limit the number of pending tasks, use a bounded queue. The queue capacity can be different from the number of threads in the pool.
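For the original question ("wait until at least one thread is available"), one way to get that behaviour with boost::asio::thread_pool is to guard post() with a counting semaphore that tracks in-flight tasks. A minimal C++20 sketch, assuming a hypothetical do_work function as the task body:

#include <boost/asio/post.hpp>
#include <boost/asio/thread_pool.hpp>
#include <semaphore>

void do_work(int /*task_id*/) { /* the real task body goes here */ }

int main() {
    boost::asio::thread_pool pool(4);

    // At most 4 tasks queued-or-running; posting a 5th blocks below.
    std::counting_semaphore<4> slots(4);

    for (int i = 0; i < 100; ++i) {
        slots.acquire();                   // wait for a free slot
        boost::asio::post(pool, [&slots, i] {
            do_work(i);
            slots.release();               // free the slot for the next post
        });
    }
    pool.join();                           // wait for all tasks to finish
}

Posting the 5th task blocks until one of the 4 in-flight tasks releases its slot, which is exactly the "wait for an available thread" behaviour asked about.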
I am trying to understand the concept behind the thread pool. Based on my understanding, a thread cannot be restarted once completed; one would have to create a new thread in order to execute a new task. If that understanding is right, does a ThreadPool executor create a new thread for every task that is added?
One will have to create a new thread in order to execute a new task
No. Tasks are an abstraction of a logical unit of work to perform. A task is typically a function reference/pointer with an ordered list of well-defined parameters (to pass to the function). Multiple tasks can be assigned to a given thread. A thread pool is usually a set of threads waiting for new incoming tasks to execute.
As a result, the threads of a given thread pool are created only once.
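The question is about a Java-style ThreadPool executor, but the thread-reuse behaviour is easy to observe in any pool. A small C++ sketch using boost::asio::thread_pool (printing thread ids is just a way to make the reuse visible):

#include <boost/asio/post.hpp>
#include <boost/asio/thread_pool.hpp>
#include <iostream>
#include <thread>

int main() {
    boost::asio::thread_pool pool(2);      // two threads, created once

    for (int task = 0; task < 6; ++task) {
        boost::asio::post(pool, [task] {
            // The same two ids keep appearing: the pool's threads are
            // reused task after task, never torn down and recreated.
            std::cout << "task " << task << " ran on thread "
                      << std::this_thread::get_id() << "\n";
        });
    }
    pool.join();
}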
I know "cluster" and "child_process" can use multiple cores of a CPU so that we can achieve true parallel processing.
I also know that the async event loop is single-threaded so we can only achieve concurrency.
My question is about worker_threads:
Assume my computer has a 4-core CPU and I'm executing a Node.js script. The script creates three worker threads.
Would the three worker threads make use of the remaining 3 cores in the CPU to achieve parallelism?
Or will the three worker threads only use the main core, with the remaining 3 cores left unused, just like the event loop?
Would the three worker threads make use of the remaining 3 cores in the CPU to achieve parallelism?
Yes, you can achieve parallelism. The actual CPU allocation is, of course, up to the operating system, but these will be true OS threads and will be able to take advantage of multiple CPUs.
Or will the three worker threads only use the main core, with the remaining 3 cores left unused, just like the event loop?
No. Each worker thread can use a separate CPU. Each thread has its own separate event loop.
The main time that the four threads will not be independent is when they communicate with each other via messaging, because those messages go through the recipient's event loop. So, if thread A sends a message to the main thread, that message goes into the main thread's event queue and won't be received until the main thread gets back to the event loop and retrieves the next message from the event queue. The same is true in reverse: if you send a message from the main thread to thread A while thread A is busy executing a CPU-intensive task, that message won't be received until thread A gets back to the event loop (e.g. finishes its CPU-intensive task).
Also, be careful if your threads are doing I/O (particularly disk I/O) as they may be competing for access to those resources and may get stuck waiting for other threads to finish using a resource before they can proceed.
a) I have a task which I want the server to do every X hours for every user (~5000 users). Is it better to:
1 - Create a worker thread for each user that does the task, sleeps for X hours, then starts again, with each task starting at a random time (so that most tasks are sleeping at any given moment).
2 - Create one thread that loops through the users, does the task for each user, then starts again (even if this takes more than X hours).
b) If plan 1 is used, do sleeping threads affect the performance of the server?
c) If the answer is yes, does a sleeping thread have the same effect as a thread that is doing the task?
Note that this server is not only used for this task. It is used for all the communications with the ~5000 clients.
Sleeping threads generally do not affect CPU usage. They do each consume stack memory, though, typically about 1 MB. That is not a big deal for dozens of threads. It is a big deal for 5000 threads.
Have one thread or timer dedicated to triggering the hourly work. Once per hour you can process the users. You can use parallelism if you want. Process the users using Parallel.ForEach or any other technique you like.
Whether you choose a thread or a timer doesn't matter for CPU usage in any meaningful way. Do what fits your app best.
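Parallel.ForEach is from .NET; as a rough C++ analogue, here is a sketch of one dedicated scheduler loop that fans the users out across a few worker threads once per hour. process_user and the user-id vector are made-up stand-ins for the real per-user work:

#include <algorithm>
#include <chrono>
#include <future>
#include <thread>
#include <vector>

void process_user(int /*user_id*/) { /* talk to the client, update state, ... */ }

void hourly_loop(const std::vector<int>& users) {
    const std::size_t workers =
        std::max(1u, std::thread::hardware_concurrency());

    for (;;) {
        // Fan the users out across a few threads, each taking a
        // contiguous chunk (a poor man's Parallel.ForEach).
        const std::size_t chunk = (users.size() + workers - 1) / workers;
        std::vector<std::future<void>> parts;
        for (std::size_t begin = 0; begin < users.size(); begin += chunk) {
            const std::size_t end = std::min(begin + chunk, users.size());
            parts.push_back(std::async(std::launch::async, [&users, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    process_user(users[i]);
            }));
        }
        for (auto& p : parts) p.get();     // wait for the whole batch

        std::this_thread::sleep_for(std::chrono::hours(1));
    }
}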
There are not enough details about your issue for a complete answer. However, based on the information you provided, I would:
create a timer (threading.timer)
set an interval, which will be the time between processing each "batch" of 5,000 users
Then say the method/task you want to perform is called UpdateUsers. When the timer "ticks", in the UpdateUsers method (callback):
1. stop the timer
2. loop and perform the task for each user
3. start the timer
This way you ensure that the task is performed for each user and there is no overlap if a run takes more than X hours in total. The updates will happen every Y, where Y is the interval you set on your timer. Also, this uses at most one thread, depending on how your server/service is coded.
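As a sketch of the stop-process-restart idea in C++ (UpdateUsers standing in for the real batch work): doing the work inline in a single timer thread gives the same guarantee, since the next wait cannot begin until the current batch is done.

#include <atomic>
#include <chrono>
#include <thread>

void UpdateUsers() { /* perform the task for each of the ~5000 users */ }

std::atomic<bool> keepRunning{true};

// Equivalent of "stop timer -> do the work -> start timer": the work
// happens inline between waits, so two batches can never overlap,
// even when UpdateUsers takes longer than the interval.
void timer_thread(std::chrono::hours interval) {
    while (keepRunning) {
        UpdateUsers();
        std::this_thread::sleep_for(interval);
    }
}

Started with std::thread t(timer_thread, std::chrono::hours(1)); the gap between runs is always the full interval, no matter how long a batch takes.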
I need help in enhancing a thread scheduling strategy I am working on.
Background
To set the context, I have a large number (20-30 thousand) of "tasks" that need to be executed. Each task can execute independently. In reality, execution time varies between 40 ms and 5 minutes across tasks. Also, each individual task takes the same amount of time when re-run.
I need options to control the way these tasks are executed, so I have come up with a scheduling engine that schedules these tasks based on various strategies. The most basic strategy is FCFS, i.e. the tasks get executed sequentially, one by one. The second is a batch strategy: the scheduler has a bucket size "b" which controls how many threads can run in parallel. The scheduler kicks off non-blocking threads for the first "b" tasks it gets, waits for those started tasks to complete, then proceeds with the next "b" tasks, starting them in parallel and then waiting for completion. Each set of "b" tasks processed at a time is termed a batch, hence batch scheduling.
Now, with batch scheduling, activity begins to increase at the beginning of the batch, when threads start getting created, peaks in the middle, when most of the threads are running, and then wanes as we block and wait for the threads to join back in. Batch scheduling becomes FCFS scheduling when the batch size "b" = 1.
One way to improve on batch scheduling is what I will term parallel scheduling - the scheduler will ensure, if a sufficient number of tasks is present, that "b" threads keep running at any point in time. The thread count initially ramps up to "b", then stays at "b" running threads until the last set of tasks finishes execution. To maintain "b" running threads at all times, we need to start a new thread the moment an old thread finishes execution. This approach can reduce the time taken to finish processing all the tasks compared to batch scheduling (in the average case).
Part where I need help
The logic I have for implementing parallel scheduling follows. I would be obliged if anyone can help me on:
Can we avoid the use of the startedTasks list? I am using it because I need to be sure that when Commit() exits, all tasks have completed execution, so I just loop through all startedTasks and block until they are complete. One current problem is that the list will be long.
--OR--
Is there a better way to do parallel scheduling?
(Any other suggestions/strategies are also welcome - main goal here is to shorten overall execution duration within the constraints of the batch size "b")
ParallelScheduler pseudocode
// assume all variable access/updates are thread safe
Semaphore S: with an initial capacity of "b"
Queue<Task> tasks
List<Task> startedTasks
bool stopPolling = false
bool allTasksCompleted = false

// The following method is called by a caller that wishes to
// start tasks; it can be called any number of times,
// passing various task items
METHOD void ScheduleTask( Task t )
    if the PollerThread has not been started yet, start it
    // starting PollerThread will call PollerThread_Action

    // set up the task so that when it completes, it releases
    // one unit on semaphore S
    // assume OnCompleted is executed when the task t completes
    // execution after a call to t.Start()
    t.OnCompleted() ==> S.Release(1)

    tasks.Enqueue( t )

// This method is called when the caller wishes to notify
// that no more tasks needing a ScheduleTask call remain.
METHOD void Commit()
    // assume that the following assignment is thread safe
    stopPolling = true

    // assume that the following check is done efficiently
    wait until allTasksCompleted is set to true

// this is the method the poller thread, once started, will execute
METHOD void PollerThread_Action
    while ( !stopPolling )
        if ( tasks.Count > 0 )
            Task nextTask = tasks.Dequeue()
            // block until the semaphore grants one unit
            if ( S.WaitOne() )
                // start the task in a new thread
                nextTask.Start()
                startedTasks.Add( nextTask )

    // we have been asked to stop polling;
    // this means no more tasks are going to be added to the queue,
    // so finish off the remaining tasks
    while ( tasks.Count > 0 )
        Task nextTask = tasks.Dequeue()
        if ( S.WaitOne() )
            nextTask.Start()
            startedTasks.Add( nextTask )

    // at this point, there are no more tasks in the queue and
    // each task has already been started at some point
    for every Task t in startedTasks
        t.WaitUntilComplete() // blocks if the task is running, else returns immediately

    // now all tasks are complete
    allTasksCompleted = true
Search for 'work stealing scheduler' - it is one of the most efficient generic schedulers. There are also several open source and commercial implementations around.
The idea is to have a fixed number of worker threads that take tasks from a queue. But to avoid congestion on a single queue shared by all the threads (a very bad performance problem on multi-CPU systems), each thread has its own queue. When a thread creates new tasks, it places them in its own queue. After finishing a task, a thread gets the next task from its own queue. But if its own queue is empty, it "steals" work from some other thread's queue.
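A deliberately simplified C++ sketch of that idea follows; production work-stealing schedulers use lock-free deques (e.g. the Chase-Lev deque) rather than a mutex per queue, and they drain remaining work on shutdown instead of dropping it:

#include <atomic>
#include <chrono>
#include <deque>
#include <functional>
#include <mutex>
#include <optional>
#include <thread>
#include <vector>

using Task = std::function<void()>;

class WorkStealingPool {
public:
    explicit WorkStealingPool(unsigned n) : queues_(n), mutexes_(n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this, i] { run(i); });
    }

    ~WorkStealingPool() {               // note: drops any still-queued tasks
        done_ = true;
        for (auto& w : workers_) w.join();
    }

    // Round-robin new tasks onto the per-thread queues.
    void submit(Task t) {
        unsigned i = next_++ % queues_.size();
        std::lock_guard<std::mutex> lk(mutexes_[i]);
        queues_[i].push_back(std::move(t));
    }

private:
    std::optional<Task> pop_from(unsigned i, bool steal) {
        std::lock_guard<std::mutex> lk(mutexes_[i]);
        if (queues_[i].empty()) return std::nullopt;
        // The owner pops the newest task from the back; thieves take
        // the oldest from the front, so the two rarely contend.
        Task t = steal ? std::move(queues_[i].front())
                       : std::move(queues_[i].back());
        if (steal) queues_[i].pop_front(); else queues_[i].pop_back();
        return t;
    }

    void run(unsigned me) {
        while (!done_) {
            auto t = pop_from(me, /*steal=*/false);
            // Own queue empty: try to steal from the other queues.
            for (unsigned j = 0; !t && j < queues_.size(); ++j)
                if (j != me) t = pop_from(j, /*steal=*/true);
            if (t) (*t)();
            else std::this_thread::yield();
        }
    }

    std::vector<std::deque<Task>> queues_;
    std::vector<std::mutex> mutexes_;   // one lock per queue, not one global lock
    std::vector<std::thread> workers_;
    std::atomic<unsigned> next_{0};
    std::atomic<bool> done_{false};
};

int main() {
    WorkStealingPool pool(4);
    for (int i = 0; i < 16; ++i)
        pool.submit([i] { /* task i */ });
    // Crude for a sketch: give the workers time before the pool is destroyed.
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}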
When your program knows a task needs to be run, place it in a queue data structure.
When your program starts up, also start up as many worker threads as you will need. Arrange for each thread to do a blocking read from the queue when it needs something to do. So, when the queue is empty or nearly so, most of your threads will be blocked waiting for something to go into the queue.
When the queue has plenty of tasks in it, each thread will pull one task from the queue and carry it out. When it is done, it will pull another task and do that one. Of course this means that tasks will be completed in a different order than they were started. Presumably that is acceptable.
This is far superior to a strategy where you have to wait for all threads to finish their tasks before any one of them can get another task. If long-running tasks are relatively rare in your system, you may find that you don't have to do much more optimization. If long-running tasks are common, you may want separate queues and separate threads for short- and long-running tasks, so the short-running tasks don't get starved by the long-running ones.
There is a hazard here: if some of your tasks are VERY long-running (that is, they never finish due to bugs) you'll eventually poison all your threads and your system will stop working.
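A minimal C++ sketch of the queue-plus-blocked-workers arrangement described above. The empty-task "poison pill" used for shutdown is one common convention, not the only one:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using Task = std::function<void()>;

class BlockingQueue {
public:
    void push(Task t) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(t)); }
        cv_.notify_one();
    }
    Task pop() {   // blocks until a task is available
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Task t = std::move(q_.front());
        q_.pop();
        return t;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Task> q_;
};

int main() {
    BlockingQueue queue;
    std::vector<std::thread> workers;

    // Start the workers up front; each blocks in pop() when idle.
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([&queue] {
            for (;;) {
                Task t = queue.pop();
                if (!t) return;   // empty task = shutdown signal
                t();
            }
        });

    for (int i = 0; i < 100; ++i)
        queue.push([i] { /* do task i */ });

    for (std::size_t i = 0; i < workers.size(); ++i)
        queue.push(Task{});       // one poison pill per worker
    for (auto& w : workers) w.join();
}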
You want to use a space-filling curve to subdivide the tasks. An SFC reduces a 2-D complexity to a 1-D complexity.
How do I control the number of threads that my program is working on?
I have a program that is now ready for multithreading, but one problem is that it is extremely memory intensive, so I have to limit the number of threads running to avoid running out of RAM. The main program goes through and creates a whole bunch of handles and associated threads in a suspended state.
I want the program to activate a set number of threads, and when one thread finishes, to automatically unsuspend the next thread in line until all the work has been completed. How do I do this?
Someone once mentioned something about using a thread handler, but I can't seem to find any information about how to write one or exactly how it would work.
If anyone can help, it would be greatly appreciated.
Using Windows and Visual C++.
Note: I don't need to worry about the traditional problems of shared access between threads; each one is completely independent of the others. It's more like batch processing than true multithreading of a program.
Thanks,
-Faken
Don't create threads explicitly. Create a thread pool (see Thread Pools) and queue up your work using QueueUserWorkItem. The thread pool size should be determined by the number of hardware threads available (number of cores and hyperthreading ratio) and the CPU vs. I/O ratio of your work items. By controlling the size of the thread pool, you control the maximum number of concurrent threads.
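A bare-bones Windows sketch of that suggestion, using QueueUserWorkItem and a semaphore to wait for completion (error handling omitted):

#include <windows.h>
#include <stdio.h>

#define NUM_ITEMS 10
static HANDLE g_done;   // signalled once per finished work item

DWORD WINAPI WorkItem(LPVOID context) {
    int id = (int)(INT_PTR)context;
    printf("processing item %d on pool thread %lu\n",
           id, GetCurrentThreadId());
    ReleaseSemaphore(g_done, 1, NULL);
    return 0;
}

int main(void) {
    g_done = CreateSemaphore(NULL, 0, NUM_ITEMS, NULL);

    // Hand the items to the OS-managed pool instead of creating threads.
    for (int i = 0; i < NUM_ITEMS; ++i)
        QueueUserWorkItem(WorkItem, (LPVOID)(INT_PTR)i, WT_EXECUTEDEFAULT);

    // Wait until every work item has signalled completion.
    for (int i = 0; i < NUM_ITEMS; ++i)
        WaitForSingleObject(g_done, INFINITE);

    CloseHandle(g_done);
    return 0;
}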
A suspended thread doesn't use CPU resources, but it still consumes memory, so you really shouldn't create more threads than you want running simultaneously.
It is better to have only as many threads as your maximum number of simultaneous tasks, and to use a queue to pass units of work to the pool of worker threads.
You can give work to the standard pool of threads created by Windows using the Windows Thread Pool API.
Be aware that you will share these threads and the queue used to submit work to them with all of the code in your process. If, for some reason, you don't want to share your worker threads with other code in your process, then you can create a FIFO queue, create as many threads as you want to run simultaneously and have each of them pull work items out of the queue. If the queue is empty they will block until work items are added to the queue.
There is so much to say here. There are a few ways to approach this.
You should only create as many thread handles as you plan on running at the same time, then reuse them when they complete. (Look up thread pool).
This guarantees that you can never have too many running at the same time. It raises the question of finding out when a thread completes. You can have a callback called just before a thread terminates, where a parameter of that callback is the thread handle that just finished. Use Boost.Bind and Boost.Signals for that. When the callback is called, look for another task for that thread handle and restart the thread. That way, all you have to do is add to the "tasks to do" list and the callback will remove the tasks for you. No polling needed, and no worries about too many threads.
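A rough C++ sketch of the reuse-and-callback idea, with plain std::function standing in for the Boost.Bind/Boost.Signals plumbing the answer mentions: each worker runs its task and then immediately fetches the next one from the "tasks to do" list, so threads are reused instead of recreated, and they simply end when the list is empty (matching the batch-processing framing of the question).

#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<std::function<void()>> todo;   // the "tasks to do" list
std::mutex m;

// Each worker runs a task, then "calls back" by looping to fetch the
// next one, so a finished thread is immediately reused rather than a
// new one being created.
void worker() {
    for (;;) {
        std::function<void()> task;
        {
            std::lock_guard<std::mutex> lk(m);
            if (todo.empty()) return;     // nothing left: let the thread end
            task = std::move(todo.front());
            todo.pop();
        }
        task();                           // run it, then loop
    }
}

int main() {
    for (int i = 0; i < 100; ++i)
        todo.push([i] { /* memory-intensive work item i */ });

    // Only as many threads as should ever run at once.
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
}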