I have two threads, t1 and t2. t1 holds an array of odd numbers and t2 holds an array of even numbers. Using the wait and notify methods, I want the threads to print the combined sequence. For example, t1 has 1,3,5,7,9 and t2 has 2,4,6,8,10, and I want to print 1,2,3,4,5,...,10, using wait and notify. Can anyone please help me?
I see it something like this.
Start
t1.start();
t2.start();
In t1 (inside a block synchronized on a shared lock object):
print next odd number
lock.notify();
lock.wait();
In t2 (synchronized on the same lock object):
print next even number
lock.notify();
lock.wait();
One correction: wait and notify must be called on a shared monitor object while holding its lock (inside a synchronized block), not on the Thread objects themselves; otherwise you get IllegalMonitorStateException. notify() wakes one thread that is waiting on that same object's monitor (it has nothing to do with thread groups). The woken thread re-acquires the lock and continues, while the notifier calls wait() to sleep until it is notified in turn.
I'm writing a program where I start N (N is a command-line argument) worker threads, and at any time 0 to N-1 of them can be waiting on another to update a variable. What's the best way for the threads to wait for this event, and the best way for one of the threads to notify all the others at once of the event occurring? This event will be sent multiple times by each thread.
sync.Cond isn't appropriate because the threads don't need to lock a resource upon waking from sleep. sync.WaitGroup won't work because I don't know how many times to call wg.Done().
Solution #1: I could use a sync.Mutex and have the thread that will eventually notify the others acquire the lock and then unlock it to notify the others, but it seems really inefficient for the others to all fight over a lock when they all just need to pop out of sleep, read a variable to see if that particular worker is now the master, and then either go back to sleep or start working.
Solution #2: Create a wrapper for sync.WaitGroup that allows keeping track of the number of waiting threads so that I can call wg.Add(-numWaitingThreads) to wake them. This sounds like a headache to figure out how to code it without all sorts of race conditions.
Solution #3: Until someone comes up with a better idea, I'll be using a list of N channels and have the notifier non-blocking-send to all of the channels except its own. Is this really the best way?
More details: I give each worker some unique credits and have a central variable for "which credit is the next to be written to the output file". When a worker finishes its work for whichever credit ID it was working on, it needs to do the following:
for centralNextCreditID != creditID {
    wait_for_centralNextCreditID_to_change()
}
saveWorkToFile()
centralNextCreditID++
wake_other_threads_waiting_for_centralNextCreditID_to_change()
To me it does seem like this is an appropriate use case for sync.Cond. You can use a *RWMutex.RLocker() for Cond.L so all goroutines can acquire the read lock simultaneously once the Cond.Broadcast() is sent.
Additionally, it may be worth making sure you hold a write lock when changing this "who's master" variable to avoid race conditions, which would make sync.Cond an even better fit.
sync.WaitGroup won't work because I don't know how many times to call wg.Done().
A WaitGroup can be used in this case: create it with a count of 1 and pass it to the N goroutines. All of them call wg.Wait(), except the one that updates the variable.
The goroutine updating the variable calls wg.Done() after a successful update, which releases all the waiting goroutines at once so they can continue executing.
The title says that you want to wake 0-N sleeping goroutines, but the body of the question indicates that you only need to wake the goroutine for the next id (if there is a goroutine waiting).
Here's how to implement the problem described in the body of the question:
// waiter sequences work according to an incrementing id.
type waiter struct {
    mu      sync.Mutex
    id      int
    waiting map[int]chan struct{}
}

func NewWaiter(firstID int) *waiter {
    return &waiter{id: firstID, waiting: make(map[int]chan struct{})}
}

// wait waits for id's turn in the sequence.
func (w *waiter) wait(id int) {
    w.mu.Lock()
    if w.id == id {
        // This id is next. Nothing to do.
        w.mu.Unlock()
        return
    }
    // Wait for our turn.
    c := make(chan struct{})
    w.waiting[id] = c
    w.mu.Unlock()
    <-c
}

// done signals that the work for the previous id is done.
func (w *waiter) done() {
    w.mu.Lock()
    w.id++
    c, ok := w.waiting[w.id]
    if ok {
        delete(w.waiting, w.id)
    }
    w.mu.Unlock()
    if ok {
        // Closing c unblocks the receive in wait.
        close(c)
    }
}
Here's how to use it:
for _, creditID := range creditIDs {
    doWorkFor(creditID)
    waiter.wait(creditID)
    saveWorkToFile()
    waiter.done()
}
WaitGroup is the best option. The reason is that it keeps its signalled state, so you are safe from deadlock even if the main thread signals too early.
If you use Cond, there is a risk that the main thread calls cond.Broadcast() BEFORE the worker thread calls cond.Wait(). Since a Cond doesn't remember that it was signalled, the worker thread will then wait forever for an event that has already happened.
Here is an example: https://go.dev/play/p/YLfvEGO2A18
The main thread broadcasts too early, the worker threads run into a deadlock.
Same case with sync.WaitGroup: https://go.dev/play/p/R6_-ULo2eJ2
The main thread releases the wait group too early, but there is no deadlock.
I have multiple threads running an infinite while True loop without them knowing of each other's existence.
Inside their respective loops I need them to check the time and do something based on it before the next iteration, something like this:
Thread:
while True:
    now = datetime.now()  # requires: from datetime import datetime
    # do something
    time.sleep(0.2)
These threads are started in my main program like this:
Main:
t1.start()
t2.start()
t3.start()
...
...
while True:
    # main program does something
On to the problem: I need all the running threads to receive the same time when they check for it.
I was thinking about creating a class with a lock and a variable to store the time: the first thread that acquires the lock saves the time in it so that the following threads can read it. But that seems kind of a hacky way of doing things (plus I wouldn't know how to check when all the threads have read the time so it can be updated).
What would be the best way, if possible, to implement this?
I am pretty new to multithreading. I have two threads, t1 and t2. Each thread increments a shared integer counter 1000 times, so the final output should be 2000.
If I use t1.join(); t2.join(); it should return 2000, since join will ensure t2 runs after t1.
But why is that not happening? If join ensures order, why do we need synchronization?
join() does not start the thread (it is already started when you call join(), so join can't "ensure order"). It waits for the thread to end. However, other threads can run while you are waiting for the thread to end.
from("direct:A")
    .process(// processing here)
    .recipientList(// expression that returns two recipients: [direct:B, direct:C]);
from("direct:B")
    .process(// processing here)...
from("direct:C")
    .process(// processing here)...
from("direct:A") behaves like a Java method call, i.e. the thread that calls it will continue into process().
So what will happen in the above case?
Say thread t1 calls from("direct:A"); then
t1 will continue into process()
and then t1 will enter recipientList().
From here onwards, will t1 call from("direct:B") and then from("direct:C") synchronously,
or
will direct:B and direct:C be called on two new threads asynchronously?
Read the recipient list documentation for a lot more detail; it is all in there. By default it processes messages synchronously. You can use the parallel processing feature of the recipient list to run this concurrently. You can also define your own thread pools.
My requirement is as follows
There is a process with multiple threads.
One of the threads (T1), gets triggered by a user event
There is a task that needs to be done in a separate thread(T2), which should be spawned by T1
Now, T1 should check that the system is not already running the task in T2. If it's not, T1 should spawn T2 and then exit. If T2 is still running, T1 should just return and log an error. I do not want to block T1 until T2 is complete.
T2 will usually take a long time, so if T1 is triggered before T2 has finished, it should just return with an error.
The intention is that under no circumstances should two instances of T2 be running.
I am using a mutex and semaphore to do this, but there may be a simpler way.
Here is what I do:
Mutex g_mutex;
Semaphore g_semaphore;
T1:
if TryLock(g_mutex) succeeds // this means T2 is not active.
spawn T2
else // This means T2 is currently doing something
return with an error.
wait (g_semaphore) // I come here only if I have spawned the thread. Now I wait for T2 to pick up the task.
// If I am here, T2 has picked up the task and I can exit.
T2:
Lock(g_mutex)
signal(g_semaphore)
Do the long task
Unlock(g_mutex)
And this works fine. But I want to know if there is a simpler way of doing this.
Do not use a mutex like this. Mutex locks should be held for the minimum time necessary. In this case, have a boolean flag t2_running, which is protected by the mutex. In T1 do:
lock g_mutex
Read t2_running
If t2_running was set, unlock g_mutex and exit with error
Set t2_running
Unlock g_mutex
Populate data for T2
Spawn T2
Wait for g_semaphore
Exit with success
T2 can then do:
Read the data
signal g_semaphore
Process the data
lock g_mutex
Clear t2_running
Unlock g_mutex
exit