Go Memory Model Happens Before (Channels with Shared State) - multithreading

I'm trying to more fully understand the nature of the Happens-Before relationship between channels and other shared state. Specifically, I want to see if some kind of memory fence is created on a channel send and receive operation.
For example, if I send a message on a channel, do all other operations that modify shared state "happen before" the send/receive operation? In my particular example, I'm only ever writing from a single goroutine and then reading from a single goroutine.
(Aside: the obvious answer in the example below is to put an instance of the Person struct on the channel directly, but that's not what I'm asking.)
package main

func main() {
    channel := make(chan int, 128)
    go func() {
        person := &sharedState[0]
        person.Name = "Hello, World!"
        channel <- 0
    }()
    index := <-channel
    person := sharedState[index]
    if person.Name != "Hello, World!" {
        // unintended race condition
    }
}

type Person struct{ Name string }

var sharedState = make([]Person, 1024)

The memory model guarantees that when the channel send executes, all operations in that goroutine that come before the channel operation are visible: a send on a channel happens before the corresponding receive completes. So in your example, the "unintended race condition" cannot happen, because when the value is received from the channel, the assignment that happened in the goroutine is visible. This, of course, assumes that there isn't another goroutine writing to that same variable. If there were, you would need to synchronize that goroutine as well to avoid the race.
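The same guarantee holds when the channel is used purely as a signal. As an illustration (a minimal sketch, not from the original question; the names done and msg are placeholders):

package main

import "fmt"

var msg string

func main() {
    done := make(chan struct{}) // unbuffered channel used only as a signal

    go func() {
        msg = "Hello, World!" // write to shared state
        done <- struct{}{}    // the send comes after the write in this goroutine
    }()

    <-done // the receive completes after the send,
    // so by the memory model the write to msg is visible here
    fmt.Println(msg)
}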

Related

Understanding mutex behaviour

I was thinking a mutex in Go would lock the data and not allow read/write by any other goroutine unless the first goroutine releases the lock. It seems like my understanding was wrong. The only way to block read/write from other goroutines is to call Lock in those goroutines as well. This ensures the critical section is accessed by one and only one goroutine.
So, I would expect this code to have a deadlock:
package main

import (
    "fmt"
    "sync"
)

type myMap struct {
    m     map[string]string
    mutex sync.Mutex
}

func main() {
    done := make(chan bool)
    ch := make(chan bool)
    myM := &myMap{
        m: make(map[string]string),
    }
    go func() {
        myM.mutex.Lock()
        myM.m["x"] = "i"
        fmt.Println("Locked. Won't release the Lock")
        ch <- true
    }()
    go func() {
        <-ch
        fmt.Println("Trying to write to the myMap")
        myM.m["a"] = "b"
        fmt.Println(myM)
        done <- true
    }()
    <-done
}
Since the first goroutine locks the struct, I would expect the second goroutine to fail to read/write to the struct, but that's not happening here.
If I add a mutex.Lock() call in the second goroutine, then there will be a deadlock.
I find it a little weird the way mutex works in Go. If I lock then Go shouldn't allow any other goroutine to read/write to it.
Can someone explain to me the mutex concept in Go?
There's no magical force field that surrounds a mutex, protecting any data structure it happens to be embedded in. If you lock a mutex, it prevents other code from locking it until it's unlocked. Nothing more, nothing less. It's well documented in the sync package.
So in your code, where there's exactly one myM.mutex.Lock(), the effect is the same as if there was no mutex.
A correct use of a mutex that protects data involves locking the mutex before updating or reading the data, and then unlocking it afterwards. Often this code will be wrapped in a function so that defer can be used:
func doSomething(myM *myMap) {
    myM.mutex.Lock()
    defer myM.mutex.Unlock()
    // ... read or update myM
}
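Applied to the question's program, a corrected version might look like the sketch below (an illustration, not part of the original answer): both goroutines lock the same mutex around every access to the map and unlock when done.

package main

import (
    "fmt"
    "sync"
)

type myMap struct {
    m     map[string]string
    mutex sync.Mutex
}

func main() {
    done := make(chan bool)
    ch := make(chan bool)
    myM := &myMap{m: make(map[string]string)}

    go func() {
        myM.mutex.Lock()
        myM.m["x"] = "i"
        myM.mutex.Unlock() // release the lock; holding it forever would block the other writer
        ch <- true
    }()

    go func() {
        <-ch
        myM.mutex.Lock() // every goroutine that touches the map must lock the same mutex
        myM.m["a"] = "b"
        fmt.Println(myM)
        myM.mutex.Unlock()
        done <- true
    }()

    <-done
}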

How to safely interact with channels in goroutines in Golang

I am new to Go and I am trying to understand the way channels in goroutines work. To my understanding, the keyword range can be used to iterate over the values of a channel until the channel is closed or the buffer runs out; hence, a for range c loop will repeatedly iterate until then.
I have the following simple function that adds value to a channel:
func main() {
    c := make(chan int)
    go printchannel(c)
    for i := 0; i < 10; i++ {
        c <- i
    }
}
I have two implementations of printchannel and I am not sure why the behaviour is different.
Implementation 1:
func printchannel(c chan int) {
    for range c {
        fmt.Println(<-c)
    }
}
output: 1 3 5 7
Implementation 2:
func printchannel(c chan int) {
    for i := range c {
        fmt.Println(i)
    }
}
output: 0 1 2 3 4 5 6 7 8
And I was expecting neither of those outputs!
Wanted output: 0 1 2 3 4 5 6 7 8 9
Shouldn't the main function and the printchannel function run on two threads in parallel, one adding values to the channel and the other reading the values up until the channel is closed? I might be missing some fundamental Go/thread concept here and pointers to that would be helpful.
Feedback on this (and my understanding to channels manipulation in goroutines) is greatly appreciated!
Implementation 1. You're reading from the channel twice - range c and <-c are both reading from the channel.
Implementation 2. That's the correct approach. The reason you might not see 9 printed is that two goroutines might run in parallel threads. In that case it might go like this:
main goroutine sends 9 to the channel and blocks until it's read
second goroutine receives 9 from the channel
main goroutine unblocks and exits. That terminates whole program which doesn't give second goroutine a chance to print 9
In cases like that you have to synchronize your goroutines. For example, like so:
func printchannel(c chan int, wg *sync.WaitGroup) {
    for i := range c {
        fmt.Println(i)
    }
    wg.Done() // notify that we're done here
}

func main() {
    c := make(chan int)
    wg := sync.WaitGroup{}
    wg.Add(1) // increase by one to wait for one goroutine to finish
    // very important to do it here and not in the goroutine,
    // otherwise you get a race condition
    go printchannel(c, &wg) // very important to pass wg by reference:
    // sync.WaitGroup is a structure, passing it
    // by value would produce incorrect results
    for i := 0; i < 10; i++ {
        c <- i
    }
    close(c)  // close the channel to terminate the range loop
    wg.Wait() // wait for the goroutine to finish
}
As to goroutines vs threads. You shouldn't confuse them and probably should understand the difference between them. Goroutines are green threads. There're countless blog posts, lectures and stackoverflow answers on that topic.
In implementation 1, range reads from the channel once, and then Println(<-c) reads from it again. The value consumed by range is discarded, hence you're skipping over every other value (0, 2, 4, 6, ...).
In both implementations, once the final i (9) has been sent to the goroutine, the program exits. Thus the goroutine does not have time to print 9. To solve it, use a WaitGroup as mentioned in the other answer, or a done channel to avoid the semaphore/mutex.
func main() {
    c := make(chan int)
    done := make(chan bool)
    go printchannel(c, done)
    for i := 0; i < 10; i++ {
        c <- i
    }
    close(c)
    <-done
}

func printchannel(c chan int, done chan bool) {
    for i := range c {
        fmt.Println(i)
    }
    done <- true
}
The reason your first implementation only returns every other number is because you are, in effect "taking" from c twice each time the loop runs: first with range, then again with <-. It just happens that you're not actually binding or using the first value taken off the channel, so all you end up printing is every other one.
An alternative approach to your first implementation would be to not use range at all, e.g.:
func printchannel(c chan int) {
    for {
        fmt.Println(<-c)
    }
}
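Note that this loop never returns on its own; once the channel is closed it would keep receiving zero values forever. A sketch (an illustration, not in the original answer) of detecting closure explicitly with the comma-ok form:

func printchannel(c chan int) {
    for {
        v, ok := <-c
        if !ok {
            return // channel was closed and drained
        }
        fmt.Println(v)
    }
}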
I could not replicate the behavior of your second implementation on my machine, but the reason for that is that both of your implementations are racy: they will terminate whenever main ends, regardless of what data may be pending in the channel or however many goroutines may be active.
As a closing note, I'd warn you not to think about goroutines as explicitly being "threads", though they have a similar mental model and interface. In a simple program like this it's not at all unlikely that Go might just do it all using a single OS thread.
Your first loop does not work as expected because you have 2 blocking channel receives and they do not execute at the same time.
When you call the goroutine the loop starts, and it waits for the first value to be sent to the channel. Effectively, think of it as <-c.
When the for loop in the main function runs, it sends 0 on the channel. At this point the range c receives the value and stops blocking the execution of the loop.
Then execution is blocked by the receive in fmt.Println(<-c). When 1 is sent on the second iteration of the loop in main, the receive in fmt.Println(<-c) reads from the channel, allowing fmt.Println to execute, thus finishing the loop body and waiting for a value at the for range c again.
Your second implementation of the looping mechanism is the correct one.
The reason it exits before printing 9 is that after the for loop in main finishes, the program goes ahead and completes execution of main.
In Go, func main is itself executed as a goroutine. Thus when the for loop in main completes, main exits, and since the print happens in a separate goroutine, it never gets a chance to run: there is nothing to block main from completing and exiting the program.
One way to solve this is to use wait groups http://www.golangprograms.com/go-language/concurrency.html
In order to get the expected result you need something in main that blocks until the goroutine confirms it has finished, before allowing the program to exit.

Reading values from a different thread

I'm writing software in Go that does a lot of parallel computing. I want to collect data from worker threads and I'm not really sure how to do it in a safe way. I know that I could use channels but in my scenario they make it more complicated since I have to somehow synchronize messages (wait until every thread sent something) in the main thread.
Scenario
The main thread creates n Worker instances and launches their work() method in a goroutine so that the workers each run in their own thread. Every 10 seconds the main thread should collect some simple values (e.g. iteration count) from the workers and print a consolidated statistic.
Question
Is it safe to read values from the workers? The main thread will only read values and each individual thread will write its own values. It would be OK if the values are a few nanoseconds out of date when read.
Any other ideas on how to implement this in an easy way?
In Go no value is safe for concurrent access from multiple goroutines without synchronization if at least one of the accesses is a write. Your case meets the conditions listed, so you must use some kind of synchronization, else the behavior would be undefined.
Channels are used if goroutines want to send values to one another. Your case is not exactly this: you don't want your workers to send updates every 10 seconds, you want your main goroutine to fetch status every 10 seconds.
So in this example I would just protect the data with a sync.RWMutex: when the workers want to modify this data, they have to acquire a write lock. When the main goroutine wants to read this data, it has to acquire a read lock.
A simple implementation could look like this:
type Worker struct {
    iterMu sync.RWMutex
    iter   int
}

func (w *Worker) Iter() int {
    w.iterMu.RLock()
    defer w.iterMu.RUnlock()
    return w.iter
}

func (w *Worker) setIter(n int) {
    w.iterMu.Lock()
    w.iter = n
    w.iterMu.Unlock()
}

func (w *Worker) incIter() {
    w.iterMu.Lock()
    w.iter++
    w.iterMu.Unlock()
}
Using this example Worker, the main goroutine can fetch the iteration using Worker.Iter(), and the worker itself can change / update the iteration using Worker.setIter() or Worker.incIter() at any time, without any additional synchronization. The synchronization is ensured by the proper use of Worker.iterMu.
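For example, the collecting side could look like the sketch below (an illustration, assuming main holds a slice workers of *Worker and the standard fmt and time packages are imported; the names are placeholders):

func collect(workers []*Worker) {
    ticker := time.NewTicker(10 * time.Second)
    defer ticker.Stop()

    for range ticker.C { // runs until the program exits
        total := 0
        for _, w := range workers {
            total += w.Iter() // safe: Iter() acquires the read lock
        }
        fmt.Println("total iterations:", total)
    }
}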
Alternatively for the iteration counter you could also use the sync/atomic package. If you choose this, you may only read / modify the iteration counter using functions of the atomic package like this:
type Worker struct {
    iter int64
}

func (w *Worker) Iter() int64 {
    return atomic.LoadInt64(&w.iter)
}

func (w *Worker) setIter(n int64) {
    atomic.StoreInt64(&w.iter, n)
}

func (w *Worker) incIter() {
    atomic.AddInt64(&w.iter, 1)
}
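As a side note (not part of the original answer), Go 1.19 and later also provide the atomic.Int64 type in sync/atomic, which keeps the same pattern but makes it harder to accidentally mix atomic and non-atomic access. A sketch:

type Worker struct {
    iter atomic.Int64 // zero value is ready to use
}

func (w *Worker) Iter() int64     { return w.iter.Load() }
func (w *Worker) setIter(n int64) { w.iter.Store(n) }
func (w *Worker) incIter()        { w.iter.Add(1) }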

Can go channel keep a value for multiple reads [duplicate]

This question already has answers here:
Multiple goroutines listening on one channel
(7 answers)
Closed 5 years ago.
I understand the regular behavior of a channel is that it empties after a read. Is there a way to keep an unbuffered channel value for multiple reads without the value been removed from the channel?
For example, I have a goroutine that generates a single piece of data for multiple downstream goroutines to use. I don't want to have to create multiple channels or use a buffered channel, which would require me to duplicate the source data (I don't even know how many copies I will need). Effectively, I want to be able to do something like the following:
func main() {
    ch := make(chan dType)
    ch <- sourceDataGenerator()
    for range DynamicRange {
        go TargetGoRoutine(ch)
    }
    close(ch) // would want this to remove the value and the channel
}

func TargetGoRoutine(ch chan dType) {
    targetCollection <- ch // want to keep the channel value after read
}
EDIT
Some feel this is a duplicate question. Perhaps, but I'm not sure. The solution here seems simple in the end, as n-canter pointed out. All it needs is for every goroutine to "recycle" the data by putting it back on the channel after use. None of the supposed "duplicates" provided this solution. Here is a sample:
package main

import (
    "fmt"
    "sync"
)

func main() {
    c := make(chan string)
    var wg sync.WaitGroup
    wg.Add(5)
    for i := 0; i < 5; i++ {
        go func(i int) {
            wg.Done()
            msg := <-c
            fmt.Printf("Data:%s, From go:%d\n", msg, i)
            c <- msg
        }(i)
    }
    c <- "Original"
    wg.Wait()
    fmt.Println(<-c)
}
https://play.golang.org/p/EXBbf1_icG
You may put the value back on the channel after reading it, but then all your goroutines will read the shared value sequentially, and you'll also need some synchronization primitive so the last goroutine doesn't block.
As far as I know, the only case where you can use a single channel for broadcasting is closing it. In that case all readers will be notified.
If you don't want to duplicate large data, maybe you'd better use some global variable. But use it carefully, because it violates the Go proverb: "Don't communicate by sharing memory; share memory by communicating."
Also look at this question How to broadcast message using channel
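For illustration, here is a minimal sketch of the close-to-broadcast pattern mentioned above (not from the original answers; the names are placeholders). Closing a channel wakes every receiver at once, though it can only signal, not carry a value:

package main

import (
    "fmt"
    "sync"
)

func main() {
    start := make(chan struct{})
    var wg sync.WaitGroup

    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            <-start // every goroutine unblocks when the channel is closed
            fmt.Println("worker", i, "released")
        }(i)
    }

    close(start) // broadcast: all receivers are notified at once
    wg.Wait()
}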

Is there a way in C++11 to prevent "normal" operations from slipping before or after an atomic operation

I'm interested in doing something like this (a single thread updates, multiple threads read bannedURLs):
atomic<bannedURLList*> bannedURLs; // global variable pointing to the currently used instance of the struct

void updateList()
{
    // no need for mutex because only 1 thread updates
    bannedURLList* newList = new bannedURLList();
    bannedURLList* oldList = bannedURLs;
    newList->initialize();
    bannedURLs = newList; // line must be after previous line, because list must be initialized before it is ready to be used
    // while refcnt on the oldList > 0 wait, then delete oldList;
}
reader threads do something like this:
{
    bannedURLs->refCnt++;
    // use bannedURLs
    bannedURLs->refCnt--;
}
The struct member refCnt is also an atomic integer.
My question is how to prevent reordering of these 2 lines:
newList->initialize();
bannedURLs=newList;
Can it be done in std:: way?
Use bannedURLs.store(newList); instead of bannedURLs = newList; (on a std::atomic the two are equivalent). Since you didn't pass a weaker ordering specifier, the store defaults to memory_order_seq_cst, which forces full ordering: neither the compiler nor the CPU may move newList->initialize() past the store.
