With synchronous threads, you have some threads waiting for other threads to finish, as we can see in the observer pattern, for example.
An Example:
Thread 3 waits for Thread 2 before executing Method C
Thread 2 waits for Thread 1 before executing Method B
Thread 1 executes Method A
Does this scenario have any performance benefit compared with:
Thread 1
Executing Method A
Executing Method B
Executing Method C
Maybe they just do it for the sake of splitting tasks according to behavior?
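The chained-wait scenario above can be sketched in Java with Thread.join(); the class and method names here are purely illustrative:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ChainedThreads {
    static final List<String> log = Collections.synchronizedList(new ArrayList<>());

    static void methodA() { log.add("A"); }
    static void methodB() { log.add("B"); }
    static void methodC() { log.add("C"); }

    public static List<String> run() {
        log.clear();
        Thread t1 = new Thread(ChainedThreads::methodA);
        Thread t2 = new Thread(() -> {
            try { t1.join(); methodB(); }           // wait for Thread 1, then run B
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread t3 = new Thread(() -> {
            try { t2.join(); methodC(); }           // wait for Thread 2, then run C
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        t3.start(); t2.start(); t1.start();
        try { t3.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return log;
    }
}
```

In this fully chained form the three methods still run strictly one after another, so there is no speedup over a single thread calling them in sequence; the split is organizational rather than a performance optimization.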
Related
I have 30 tasks.
I want to run 4 threads at the same time to do the first 4 tasks.
Whenever a thread completes, I want to start the next task, so that there are always 4 threads running at the same time.
Once 28 tasks are completed (7 rounds of 4), only 2 tasks (2 threads) remain.
How do I solve this? I am using the threading namespace.
Thank you
You have not mentioned any particular language here, but if you are using Java, this is a classic use case for ThreadPoolExecutor.
If you are using some other language, you can write your own simplified ThreadPoolExecutor. Basically:
A thread-safe queue of tasks to be executed
4 threads reading from the queue and executing tasks
Termination logic for your threads (a thread may terminate when it finds the queue empty, or wait for some time and then try again)
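In Java, the steps above are what a fixed-size pool already gives you. A minimal sketch using Executors.newFixedThreadPool (which is backed by a ThreadPoolExecutor); the task body here is a placeholder counter standing in for real work:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TaskPool {
    public static int runTasks(int taskCount, int poolSize) {
        AtomicInteger done = new AtomicInteger();
        // Fixed pool of `poolSize` worker threads draining a shared queue:
        // at most 4 tasks run concurrently, and a worker that finishes one
        // task immediately picks up the next one from the queue.
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        for (int i = 0; i < taskCount; i++) {
            pool.submit(() -> { done.incrementAndGet(); });  // stand-in for a real task
        }
        pool.shutdown();  // accept no new tasks, but finish the queued ones
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }
}
```

With 30 tasks and a pool size of 4, the final two tasks simply run on 2 of the 4 workers while the others sit idle; no special-casing is needed.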
Background:
The Little Book of Semaphores by Allen B. Downey talks about assumptions needed to prevent thread starvation.
He states that the scheduler needs to guarantee the following:
Property 2: if a thread is ready to run, then the time it waits until it runs is bounded.
And a weak semaphore guarantees:
Property 3: if there are threads waiting on a semaphore when a thread executes signal, then one of the waiting threads has to be woken.
However, he states that even with these properties, the following code, when run by 3 or more threads (Threads A, B, C), can cause starvation:
while True:
    mutex.wait()
    # critical section
    mutex.signal()
The argument is that if A executes first and then wakes up B, A could wait on the mutex again before B releases it. When B signals, A could be woken up again, reacquire the mutex, and repeat this cycle with B. C would be starved.
Question:
Wouldn't Property 2 guarantee that C would have to be woken up by the scheduler in some finite amount of time? If so, Thread C couldn't be starved. Even if a weak semaphore does not guarantee that Thread C will be woken up, shouldn't the scheduler run it?
I thought about it a little more and realized that Property 2 guarantees that threads in a RUNNABLE state will be scheduled in a finite amount of time.
The argument in the book is that Thread C never gets to a RUNNABLE state (it stays blocked on the semaphore), so Properties 2 and 3 do not rule out starvation.
Threads 1 and 2 are executing concurrently with shared integer variables A, B, and C.
Thread 1 executes: A=4, B=5, C=B-A. Thread 2 executes: A=3, B=6, C=A+B.
Suppose no synchronization is implemented. What are all the possible values of C after the execution of this fragment?
I know that without synchronization, reads can see writes that occur later in the execution order, which is counter-intuitive (the execution is only happens-before consistent).
I am confused about what the possible values of C can be.
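One way to see the possible values is to enumerate every interleaving by brute force. Note the assumption here: each assignment is treated as atomic and executions are sequentially consistent, which is stronger than the happens-before caveat in the question; this sketch is illustrative, not from the question:

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.Consumer;

public class Interleavings {
    // Shared state for one simulated execution.
    static class State { int a, b, c; }

    // Each thread is a list of atomic steps on the shared state.
    static final List<Consumer<State>> T1 = List.of(
            s -> s.a = 4, s -> s.b = 5, s -> s.c = s.b - s.a);
    static final List<Consumer<State>> T2 = List.of(
            s -> s.a = 3, s -> s.b = 6, s -> s.c = s.a + s.b);

    static State copy(State s) {
        State t = new State(); t.a = s.a; t.b = s.b; t.c = s.c; return t;
    }

    // Recursively enumerate every merge of T1 and T2 that preserves each
    // thread's program order, recording the final value of C.
    static void explore(State s, int i, int j, Set<Integer> results) {
        if (i == T1.size() && j == T2.size()) { results.add(s.c); return; }
        if (i < T1.size()) {
            State t = copy(s); T1.get(i).accept(t); explore(t, i + 1, j, results);
        }
        if (j < T2.size()) {
            State t = copy(s); T2.get(j).accept(t); explore(t, i, j + 1, results);
        }
    }

    public static Set<Integer> possibleC() {
        Set<Integer> results = new TreeSet<>();
        explore(new State(), 0, 0, results);
        return results;  // {1, 2, 3, 8, 9, 10} under these assumptions
    }
}
```

Under those assumptions the enumeration yields C in {1, 2, 3, 8, 9, 10}: whichever thread writes C last reads the current A and B, each of which may have been written by either thread (B-A gives 1, 2, or 3; A+B gives 8, 9, or 10). Weaker memory models could admit further behaviors.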
I have a conceptual question about multithreading:
In an application using RPC through DCOM, with the multithreaded apartment configuration, the main form is freezing.
1 - If the critical section is created in the unit initialization, will the code in the critical section run in the main thread context?
2 - When you call the method to execute a task:
Thread 1 is created. (DCOM Thread)
Thread 1 creates Thread 2.
Thread 1 WaitFor Thread 2.
Thread 2 creates 4 threads to run the task faster.
Thread 2 loops, sleeping 2 seconds at a time, until the 4 threads finish. During this process the main form is supposed to be refreshed to display the percentage done. A message is posted to the main form's thread with the percentage done, but nothing happens and the main form is frozen.
3 - Is there a better way than the synchronized() method to synchronize inside one of the 4 threads when they need to CRUD (Create, Read, Update, Delete) objects in Thread 2?
4 - The 4 threads have a higher priority than the main thread; is this a problem? When does this become a problem?
The image below represents the architecture of the system:
1: No. By using a critical section, you guarantee the code runs in only one thread at a time; in practice, any thread that calls Enter will block there until any other thread that is also running that code reaches the Leave call. But this doesn't mean it will run in the main thread (check with GetCurrentThreadID).
2: You mention apartment configuration, but which apartment threading model? This defines when (D)COM will do thread synchronization for you. In practice, COM works with proxy stubs and marshalling behind the scenes to traverse apartment (and network) boundaries, unless you've selected the multithreaded apartment, in which case COM assumes the components take care of threading issues themselves.
If I understand correctly, the main form freezes on the 'Thread 1 WaitFor Thread 2' step. Instead of calling WaitFor, you'll be better off using the OnTerminate event on Thread 2.
3: I'm not sure what you mean by 'CRUD objects in Thread 2'. If it is not important to know in what order the 4 threads finish, I would suggest to call WaitFor on the threads in sequence. If it is, you should check out WaitForMultipleObjects.
4: Different priorities should not be a problem in themselves. They only become a problem when there are too many high-priority threads doing too much work, so that the normal-priority threads doing internal communication can't keep up; in that case you should review how the worker threads report their work.
Do semaphores satisfy bounded waiting, or do they only provide mutual exclusion?
Answer
It may break the bounded-waiting condition in theory, as you'll see below. In practice, it depends heavily on which scheduling algorithm is used.
The classic implementation of the wait() and signal() primitives is:
// primitive
wait(semaphore* S)
{
    S->value--;
    if (S->value < 0)
    {
        // no permit available: enqueue ourselves and block
        add this process to S->list;
        block();
    }
}

// primitive
signal(semaphore* S)
{
    S->value++;
    if (S->value <= 0)
    {
        // a non-positive value after the increment means someone is waiting
        remove a process P from S->list;
        wakeup(P);
    }
}
When a process calls wait() and fails the "if" test, it puts itself into a waiting list. If more than one process is blocked on the same semaphore, they are all put into this list (or linked together somehow, as you can imagine). When another process leaves the critical section and calls signal(), one process in the waiting list is chosen to wake up, ready to compete for the CPU again. However, it is the scheduler that decides which process to pick from the waiting list. If the scheduling is implemented in a LIFO (last in, first out) manner, for instance, it is possible that some processes are starved.
Example
T1: thread 1 calls wait(), enters critical section
T2: thread 2 calls wait(), blocked in waiting list
T3: thread 3 calls wait(), blocked in waiting list
T4: thread 1 leaves critical section, calls signal()
T5: scheduler wakes up thread 3
T6: thread 3 enters critical section
T7: thread 4 calls wait(), blocked in waiting list
T8: thread 3 leaves critical section, calls signal()
T9: scheduler wakes up thread 4
..
As you can see, although the semaphore is implemented and used correctly, thread 2 has an unbounded waiting time and may even starve, caused by the continuous arrival of new processes.
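In practice, bounded waiting can be restored by making the wait queue FIFO instead of scheduler-chosen. Java's java.util.concurrent.Semaphore, for example, takes a fairness flag for exactly this purpose; the FairMutex wrapper below is an illustrative sketch, not a standard class:

```java
import java.util.concurrent.Semaphore;

public class FairMutex {
    // Passing `true` requests a fair (FIFO) semaphore: waiting threads
    // acquire the permit in arrival order, so later arrivals cannot
    // overtake an earlier waiter indefinitely. This gives bounded waiting
    // at some cost in throughput compared to an unfair semaphore.
    private final Semaphore mutex = new Semaphore(1, true);

    public void criticalSection(Runnable body) {
        mutex.acquireUninterruptibly();  // wait()
        try {
            body.run();                  // critical section
        } finally {
            mutex.release();             // signal()
        }
    }
}
```

With a fair semaphore, the LIFO timeline above cannot happen: at T5 the scheduler would have to wake thread 2, which arrived before thread 3.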