Implementing the concept of "events" (with notifiers/receivers) in Golang? - multithreading

I'm wondering what the proper way is to handle the concept of "events" (with notifiers/receivers) in Golang. I suppose I need to use channels, but I'm not sure about the best way.
Specifically, I have the program below with two workers. Under certain conditions, "worker1" goes in and out of a "fast mode" and notifies this via channels. "worker2" can then receive this event. This works fine, but then the two workers are tightly coupled. In particular, if worker2 is not running, worker1 gets stuck waiting when writing to the channel.
What would be the best way in Golang to implement this logic? Basically, one worker does something and notifies any other worker that it has done so. Whether other workers listen to this event or not must not block worker1. Ideally, there could be any number of workers listening to this event.
Any suggestions?
var fastModeEnabled = make(chan bool)
var fastModeDisabled = make(chan bool)

func worker1() {
    mode := "normal"
    for {
        // under some conditions:
        mode = "fast"
        fastModeEnabled <- true // blocks until another goroutine receives
        // later, under different conditions:
        mode = "normal"
        fastModeDisabled <- true
        _ = mode // mode would drive worker1's real work
    }
}

func worker2() {
    for {
        select {
        case <-fastModeEnabled:
            fmt.Println("Fast mode started")
        case <-fastModeDisabled:
            fmt.Println("Fast mode ended")
        }
    }
}

func main() {
    go worker2()
    go worker1()
    select {} // block forever without spinning (for {} burns a CPU core)
}

Use a non-blocking write to the channel. That way, if anyone is listening, they receive the event; if no one is listening, the sender is not blocked, although the event is lost.
You could use a buffered channel so that at least some events are buffered, if you need that.
You implement a non-blocking send by using the select keyword with a default case. The default case makes it non-blocking; without it, a select blocks until one of its channels becomes usable.
Code snippet:
select {
case ch <- event:
    sent = true
default:
    sent = false
}
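Putting the two ideas together for the original example, here is a minimal sketch, assuming a small Broadcaster type of my own devising (not anything from the standard library): each listener subscribes and gets its own buffered channel, and the publisher does the non-blocking send per subscriber, so worker1 never blocks no matter how many listeners there are, including zero.

package main

import (
    "fmt"
    "sync"
    "time"
)

// Broadcaster fans an event out to any number of subscribers.
// Each subscriber gets its own buffered channel; a full channel
// is skipped rather than blocking the publisher.
type Broadcaster struct {
    mu   sync.Mutex
    subs []chan string
}

// Subscribe registers a new listener and returns its channel.
func (b *Broadcaster) Subscribe() <-chan string {
    ch := make(chan string, 4)
    b.mu.Lock()
    b.subs = append(b.subs, ch)
    b.mu.Unlock()
    return ch
}

// Publish sends the event to every subscriber without ever blocking.
func (b *Broadcaster) Publish(event string) {
    b.mu.Lock()
    defer b.mu.Unlock()
    for _, ch := range b.subs {
        select {
        case ch <- event:
        default: // this subscriber's buffer is full: drop the event
        }
    }
}

func main() {
    var b Broadcaster

    events := b.Subscribe()
    go func() { // worker2: one of possibly many listeners
        for e := range events {
            fmt.Println("received:", e)
        }
    }()

    // worker1: publishes whether or not anyone is listening.
    b.Publish("fast mode enabled")
    b.Publish("fast mode disabled")
    time.Sleep(100 * time.Millisecond) // crude: let the listener print
}

If events must never be dropped, enlarge the per-subscriber buffers or accept blocking; the trade-off is the one noted above: a non-blocking send can lose events.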

Related

How can you block a thread until one of many channels becomes readable?

I'm trying to have a thread (in Rust) that listens to multiple channels at once.
In Ada, something similar can be done using a select statement:
loop
    select
        accept Task1 do
            -- do something
        end Task1;
    or
        accept Task2 do
            -- do something else
        end Task2;
    or
        accept Task3 (param : Parameter) do
            -- do a third thing, with a given parameter
        end Task3;
        -- etc.
    end select;
end loop;
This will block the thread until one of the Task methods is invoked remotely.
I'd like to do something like this in Rust, but the following would be terrible, as I'd be wasting a ton of CPU time:
loop {
    if let Ok(message) = rx1.try_recv() {
        // do something
    }
    if let Ok(message) = rx2.try_recv() {
        // do something else
    }
}
Ideally, I'd like to block the thread entirely until one of the resources is updated. But I can't just switch out the thread, as I'd need some way of waking it back up when one of the resources is updated (which is what I think recv() does).
What's the normal way of solving this kind of problem in Rust?
AFAIK this is not possible with the standard library channels. However, you can do it with crossbeam channels and select:
select! {
    recv(rx1) -> msg => { /* Do something */ },
    recv(rx2) -> msg => { /* Do something else */ },
}

How to wait for *any* of a group of goroutines to signal something without requiring that we wait for them to do so

There are plenty of examples of how to use WaitGroup to wait for all of a group of goroutines to finish, but what if you want to wait for any one of them to finish without using a semaphore system where some process must be waiting? For example, a producer/consumer scenario where multiple producer threads add multiple entries to a data structure while a consumer is removing them one at a time. In this scenario:
We can't just use the standard producer/consumer semaphore system, because production:consumption is not 1:1, and also because the data structure acts as a cache, so the producers can be "free-running" instead of blocking until a consumer is "ready" to consume their product.
The data structure may be emptied by the consumer, in which case, the consumer wants to wait until any one of the producers finishes (meaning that there might be new things in the data structure)
Question: Is there a standard way to do that?
I've only been able to devise two methods of doing this, both using channels as semaphores:
var unitary_channel chan int = make(chan int, 1)

func my_goroutine() {
    // Produce, produce, produce!!!
    unitary_channel <- 0 // Try to push a value to the channel
    <-unitary_channel    // Remove it, in case nobody was waiting
}

func main() {
    go my_goroutine()
    go my_goroutine()
    go my_goroutine()
    for len(stuff_to_consume) > 0 { /* Consume, consume, consume */ }
    // Ran out of stuff to consume
    <-unitary_channel
    unitary_channel <- 0 // To unblock the goroutine which was exiting
    // Consume more
}
Now, this simplistic example has some glaring (but solvable) issues, like the fact that main() can't exit unless at least one my_goroutine() is still running.
The second method, instead of requiring producers to remove the value they just pushed to the channel, uses select to allow the producers to exit when the channel would block them.
var empty_channel chan int = make(chan int)

func my_goroutine() {
    // Produce, produce, produce!!!
    select {
    case empty_channel <- 0: // Push if you can
    default: // Or don't if you can't
    }
}

func main() {
    go my_goroutine()
    go my_goroutine()
    go my_goroutine()
    for len(stuff_to_consume) > 0 { /* Consume, consume, consume */ }
    // Ran out of stuff to consume
    <-empty_channel
    // Consume more
}
Of course, this one will also block main() forever if all of the goroutines have already terminated. So, if the answer to the first question is "No, there's no standard solution to this other than the ones you've come up with", is there a compelling reason why one of these should be used instead of the other?
You could use a channel with a buffer, like this:
package main

import (
    "fmt"
    "sync"
    "time"
)

// create a channel with a buffer of 1
var Items = make(chan int, 1)

var MyArray []int
var mu sync.Mutex // protects MyArray: concurrent appends and pops race without it

func main() {
    go addItems()
    go addItems()
    go addItems()
    go sendToChannel()
    for {
        fmt.Println(<-Items)
    }
}

// push numbers to the array
func addItems() {
    for x := 0; x < 10; x++ {
        mu.Lock()
        MyArray = append(MyArray, x)
        mu.Unlock()
    }
}

// push to Items and pop the array
func sendToChannel() {
    for {
        mu.Lock()
        for len(MyArray) > 0 {
            item := MyArray[0]
            MyArray = MyArray[1:]
            mu.Unlock()
            Items <- item
            mu.Lock()
        }
        mu.Unlock()
        time.Sleep(10 * time.Second)
    }
}
The for loop in main will loop forever and print anything that gets added to the channel, while sendToChannel sleeps and re-checks when the array is empty.
This way a producer is never blocked on a consumer, and the consumer can consume whenever one or more items are available.
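Another way to get the "wait until any producer adds something" behaviour, combining the question's second method with a guarded queue: a capacity-1 channel used purely as a wakeup signal, with a non-blocking send on the producer side. A minimal sketch under those assumptions; produce, consume, and wakeup are illustrative names, not a standard pattern from any library.

package main

import (
    "fmt"
    "sync"
    "time"
)

var (
    mu     sync.Mutex
    queue  []int
    wakeup = make(chan struct{}, 1) // capacity 1: coalesces signals
)

// produce appends an item, then pokes the consumer without blocking.
func produce(item int) {
    mu.Lock()
    queue = append(queue, item)
    mu.Unlock()
    select {
    case wakeup <- struct{}{}: // wake the consumer if it is waiting
    default: // a wakeup is already pending; no need for another
    }
}

// consume drains the queue, sleeping on wakeup when it runs dry.
func consume() {
    for {
        mu.Lock()
        for len(queue) > 0 {
            item := queue[0]
            queue = queue[1:]
            mu.Unlock()
            fmt.Println("consumed:", item)
            mu.Lock()
        }
        mu.Unlock()
        <-wakeup // block until any producer adds something
    }
}

func main() {
    go consume()
    for i := 0; i < 3; i++ {
        go func(base int) {
            for x := 0; x < 5; x++ {
                produce(base*10 + x)
            }
        }(i)
    }
    time.Sleep(time.Second) // crude: let the demo run
}

Because the channel coalesces to at most one pending wakeup, the consumer may occasionally wake and find the queue already drained; it just loops and waits again, which is harmless.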

How to freeze a thread and notify it from another?

I need to pause the current thread in Rust and notify it from another thread. In Java I would write:
synchronized(myThread) {
    myThread.wait();
}
and from the second thread (to resume the main thread):
synchronized(myThread) {
    myThread.notify();
}
Is it possible to do the same in Rust?
Using a channel that sends type () is probably easiest:
use std::sync::mpsc::channel;
use std::thread;

let (tx, rx) = channel();

// Spawn your worker thread, giving it `tx` and whatever else it needs
thread::spawn(move || {
    // Do whatever
    tx.send(()).expect("Could not send signal on channel.");
    // Continue
});

// Do whatever
rx.recv().expect("Could not receive from channel.");
// Continue working

The () type is used because it carries effectively zero information, which makes it clear you're only using the channel as a signal. Being zero-sized also means it's potentially faster in some scenarios (but realistically probably not faster than a normal machine-word write).
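For comparison with the Go question at the top of this page, the same signal-then-wait shape in Go is usually written with a chan struct{}; the empty struct plays the role Rust's () plays here. A minimal sketch:

package main

import "fmt"

func main() {
    done := make(chan struct{}) // struct{} carries no data, like Rust's ()

    go func() {
        // Do whatever
        done <- struct{}{} // signal the waiting goroutine
    }()

    <-done // block until the worker signals
    fmt.Println("worker signalled")
}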
If you just need to notify the program that a thread is done, you can grab its join handle and wait for it to join.
let handle = thread::spawn( ... ); // thread::spawn returns a JoinHandle
handle.join().expect("Could not join thread");
You can use std::thread::park() and std::thread::Thread::unpark() to achieve this.
In the thread you want to wait,
fn worker_thread() {
    std::thread::park();
}
in the controlling thread, which has a thread handle already,
fn main_thread(worker_thread: std::thread::Thread) {
    worker_thread.unpark();
}
Note that a parked thread can wake up spuriously, which means it can sometimes wake up without any other thread calling unpark on it. You should prepare for this situation in your code, or use something like the std::sync::mpsc::channel suggested in the accepted answer.
There are multiple ways to achieve this in Rust.
The underlying model in Java is that each object contains both a mutex and a condition variable, if I remember correctly. So using a mutex and condition variable would work...
... however, I would personally switch to using a channel instead:
the "waiting" thread has the receiving end of the channel, and waits for it
the "notifying" thread has the sending end of the channel, and sends a message
It is easier to manipulate than a condition variable, notably because there is no risk of accidentally using a different mutex when locking the variable.
The std::sync::mpsc module has two kinds of channel (asynchronous and synchronous) depending on your needs. Here, the asynchronous one matches more closely: std::sync::mpsc::channel.
There is a monitor crate that provides this functionality by combining Mutex with Condvar in a convenience structure.
(Full disclosure: I am the author.)
Briefly, it can be used like this:
let mon = Arc::new(Monitor::new(false));
{
    let mon = mon.clone();
    let _ = thread::spawn(move || {
        thread::sleep(Duration::from_millis(1000));
        mon.with_lock(|mut done| { // done is a monitor::MonitorGuard<bool>
            *done = true;
            done.notify_one();
        });
    });
}
mon.with_lock(|mut done| {
    while !*done {
        done.wait();
    }
    println!("finished waiting");
});
Here, mon.with_lock(...) is semantically equivalent to Java's synchronized(mon) {...}.

Scala: wake up sleeping thread

In Scala, how can I tell a thread: sleep for t seconds, or until you receive a message? That is, sleep at most t seconds, but wake up earlier if you receive a certain message before t is over.
The answer depends greatly on what the message is. If you're using Actors (either the old variety or the Akka variety) then you can simply state a timeout value on receive. (React isn't really running until it gets a message, so you can't place a timeout on it.)
// Old style
receiveWithin(1000) {
    case msg: Message => // whatever
    case TIMEOUT => // Handle timeout
}

// Akka style
context.setReceiveTimeout(1.second)
def receive = {
    case msg: Message => // whatever
    case ReceiveTimeout => // handle timeout
}
Otherwise, what exactly do you mean by "message"?
One easy way to send a message is to use the Java concurrent classes made for exactly this kind of thing. For example, you can use a java.util.concurrent.SynchronousQueue to hold the message, and the receiver can call the poll method which takes a timeout:
// Common variable
val q = new java.util.concurrent.SynchronousQueue[String]

// Waiting thread
val msg = q.poll(1000, java.util.concurrent.TimeUnit.MILLISECONDS)

// Sending thread will also block until the receiver is ready to take it
q.offer("salmon", 1000, java.util.concurrent.TimeUnit.MILLISECONDS)
An ArrayBlockingQueue is also useful in these situations (if you want the senders to be able to pack messages in a buffer).
Alternatively, you can use condition variables.
val monitor = new AnyRef
var messageReceived: Boolean = false

// The waiting thread...
def waitUntilMessageReceived(timeout: Int): Boolean = {
    monitor synchronized {
        // The time-out handling here is simplified for the purpose
        // of exhibition. The "wait" may wake up spuriously for no
        // apparent reason. So in practice, this would be more complicated,
        // actually.
        while (!messageReceived) monitor.wait(timeout * 1000L)
        messageReceived
    }
}

// The thread which sends the message...
def sendMessage(): Unit = monitor synchronized {
    messageReceived = true
    monitor.notifyAll()
}
Check out Await. If you have some Awaitable objects then that's what you need.
Instead of making it sleep for a given time, make it only wake up on a Timeout() msg and then you can send this message prematurely if you want it to "wake up".

Efficient consumer thread with multiple producers

I am trying to make a producer/consumer thread situation more efficient by skipping expensive event operations when possible, with something like this:
// cas(variable, compare, set) is an atomic compare-and-swap
// queue is already lock-free
running = false

// Add item to queue – producer thread(s)
if (cas(running, false, true)) {
    // We effectively obtained a lock on signalling the event
    add_to_queue()
    signal_event()
} else {
    // Most of the time, if things are busy, we should not be signalling the event
    add_to_queue()
    if (cas(running, false, true))
        signal_event()
}

...

// Process queue, single consumer thread
reset_event()
while (1) {
    wait_for_auto_reset_event() // Preferably IOCP
    for (int i = 0; i < SpinCount; ++i)
        process_queue()
    cas(running, true, false)
    if (queue_not_empty())
        if (cas(running, false, true))
            signal_event()
}
Obviously, getting these things correct is a little tricky(!), so: is the above pseudocode correct? A solution that signals the event more often than strictly needed is OK, but not one that does so for every item.
This falls into the sub-category of "stop messing about and go back to work" known as "premature optimisation". :-)
If the "expensive" event operations are taking up a significant portion of time, your design is wrong, and rather than use a producer/consumer you should use a critical section/mutex and just do the work from the calling thread.
I suggest you profile your application if you are really concerned.
Updated:
Correct answer:
Producer:
ProducerAddToQueue(pQueue, pItem) {
    EnterCriticalSection(pQueue->pCritSec)
    if (IsQueueEmpty(pQueue)) {
        SignalEvent(pQueue->hEvent)
    }
    AddToQueue(pQueue, pItem)
    LeaveCriticalSection(pQueue->pCritSec)
}
Consumer:
nCheckQuitInterval = 100; // Every 100 ms the consumer checks if it should quit.
ConsumerRun(pQueue) {
    while (!ShouldQuit()) {
        Item* pCurrentItem = NULL;
        EnterCriticalSection(pQueue->pCritSec);
        if (IsQueueEmpty(pQueue)) {
            ResetEvent(pQueue->hEvent);
        } else {
            pCurrentItem = RemoveFromQueue(pQueue);
        }
        LeaveCriticalSection(pQueue->pCritSec);
        if (pCurrentItem) {
            ProcessItem(pCurrentItem);
            pCurrentItem = NULL;
        } else {
            // Wait for items to be added.
            WaitForSingleObject(pQueue->hEvent, nCheckQuitInterval);
        }
    }
}
Notes:
The event is a manual-reset event.
The operations protected by the critical section are quick. The event is only set or reset when the queue transitions to/from empty state. It has to be set/reset within the critical section to avoid a race condition.
This means the critical section is only held for a short time, so contention will be rare.
Critical sections don't block unless they are contended. So context switches will be rare.
Assumptions:
This is a real problem not homework.
Producers and consumers spend most of their time doing other stuff, i.e. getting the items ready for the queue, processing them after removing them from the queue.
If they are spending most of the time doing the actual queue operations, you shouldn't be using a queue. I hope that is obvious.
Went through a bunch of cases and can't see an issue, but it's kinda complicated. I thought you might have an issue with queue_not_empty / add_to_queue racing, but it looks like the post-dominating CAS in both paths covers this case.
CAS is expensive (not as expensive as signal). If you expect skipping the signal to be common, I would code the CAS as follows:
bool cas(variable, old_val, new_val) {
    if (variable != old_val) return false
    asm cmpxchg
}
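The same "test before test-and-set" trick, sketched in Go (keeping one language for the added examples on this page) using sync/atomic's Bool type from Go 1.19+; tryAcquire is an illustrative name, not an established API:

package main

import (
    "fmt"
    "sync/atomic"
)

// tryAcquire does a plain load first, so the expensive atomic
// compare-and-swap only runs when it has a chance of succeeding.
func tryAcquire(running *atomic.Bool) bool {
    if running.Load() {
        return false // already set: skip the costly CAS entirely
    }
    return running.CompareAndSwap(false, true)
}

func main() {
    var running atomic.Bool
    fmt.Println(tryAcquire(&running)) // true: we set the flag
    fmt.Println(tryAcquire(&running)) // false: the cheap load short-circuits
}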
Lock-free structures like this are the stuff that Jinx (the product I work on) is very good at testing. So you might want to use an eval license to test the lock-free queue and the signal-optimization logic.
Edit: maybe you can simplify this logic.
running = false

// add item to queue – producer thread(s)
add_to_queue()
if (cas(running, false, true)) {
    signal_event()
}

// Process queue, single consumer thread
reset_event()
while (1) {
    wait_for_auto_reset_event() // Preferably IOCP
    for (int i = 0; i < SpinCount; ++i)
        process_queue()
    cas(running, true, false) // this could just be a memory-barriered store of false
    if (queue_not_empty())
        if (cas(running, false, true))
            signal_event()
}
Now that the cas/signal are always next to each other they can be moved into a subroutine.
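For instance, the shared tail could look like this in Go (a sketch; maybeSignal is my name for it, and the capacity-1 channel stands in for the event):

package main

import (
    "fmt"
    "sync/atomic"
)

// maybeSignal factors out the common tail of both paths:
// claim the flag with CAS, and signal the event only if we won.
func maybeSignal(running *atomic.Bool, event chan struct{}) {
    if running.CompareAndSwap(false, true) {
        select {
        case event <- struct{}{}: // wake the consumer
        default: // a signal is already pending
        }
    }
}

func main() {
    var running atomic.Bool
    event := make(chan struct{}, 1)
    maybeSignal(&running, event) // signals: the flag was clear
    maybeSignal(&running, event) // no-op: the flag is already set
    fmt.Println("pending signals:", len(event))
}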
Why not just associate a bool with the event? Use cas to set it to true, and if the cas succeeds, signal the event, because the event must have been clear. The waiter can then just clear the flag before it waits.
bool flag = false;

// producer
add_to_queue();
if (cas(flag, false, true)) {
    signal_event();
}

// consumer
while (true) {
    while (queue_not_empty()) {
        process_queue();
    }
    cas(flag, true, false); // clear the flag
    if (queue_is_empty())
        wait_for_auto_reset_event();
}
This way, you only wait if there are no elements on the queue, and you only signal the event once for each batch of items.
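Here is a hedged Go rendering of that flag-plus-event pattern, consistent with the other Go sketches above: an atomic.Bool as the flag and a capacity-1 channel as the auto-reset event. The mapping and all names are my assumption, not taken from the answer.

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
    "time"
)

var (
    mu    sync.Mutex
    queue []int
    flag  atomic.Bool              // the "signal pending" flag from the answer
    event = make(chan struct{}, 1) // capacity-1 channel as an auto-reset event
)

// producer: enqueue, then signal only if this call won the CAS on the flag.
func produce(item int) {
    mu.Lock()
    queue = append(queue, item)
    mu.Unlock()
    if flag.CompareAndSwap(false, true) {
        select {
        case event <- struct{}{}: // wake the consumer
        default: // a wakeup is already buffered; the consumer will see it
        }
    }
}

// consumer: drain the queue, clear the flag, and wait only when empty.
func consume() {
    for {
        for {
            mu.Lock()
            if len(queue) == 0 {
                mu.Unlock()
                break
            }
            item := queue[0]
            queue = queue[1:]
            mu.Unlock()
            fmt.Println("consumed:", item)
        }
        flag.CompareAndSwap(true, false) // clear the flag
        mu.Lock()
        empty := len(queue) == 0
        mu.Unlock()
        if empty {
            <-event // receiving "auto-resets" the event
        }
    }
}

func main() {
    go consume()
    for i := 0; i < 5; i++ {
        produce(i)
    }
    time.Sleep(500 * time.Millisecond) // crude: let the consumer finish
}

The non-blocking send in produce keeps a second producer from stalling when a wakeup is already buffered; the consumer still cannot miss a wakeup, because a buffered signal is consumed the next time it waits.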
I believe you want to achieve something like in this question:
WinForms Multithreading: Execute a GUI update only if the previous one has finished. It is specific to C# and WinForms, but the structure may well apply to you.
