I am trying to wrap my head around the Send + Sync traits. I get the intuition behind Sync - this is the traditional kind of thread safety (like in C++). The object does the necessary locking (interior mutability if needed), so threads can safely access it.
But the Send part is a bit unclear. I understand why things like Rc are !Send - the object could be given to a different thread, but its non-atomic operations make that thread-unsafe.
What is the intuition behind Send? Does it mean the object can be copied/moved into another thread context, and continues to be valid after the copy/move?
Any example scenarios for "Sync but not Send" would really help. Please also point to any Rust libraries for this case (I found several for the opposite, though).
For (2), I found some threads which use structs with pointers to data on the stack/thread-local storage as examples. But these are unsafe anyway (Sync or otherwise).
Sync allows an object to be used by two threads A and B at the same time. This is trivial for non-mutable objects, but mutations need to be synchronized (performed in sequence, with the same order being seen by all threads). This is often done using a Mutex or RwLock, which allows one thread to proceed while others must wait. By enforcing a shared order of changes, these types can turn a non-Sync object into a Sync object. Another mechanism for making objects Sync is to use atomic types, which are essentially Sync primitives.
Send allows an object to be used by two threads A and B at different times. Thread A can create and use an object, then send it to thread B, so thread B can use the object while thread A no longer can. The Rust ownership model can be used to enforce this non-overlapping use. Hence the ownership model is an important part of Rust's Send thread safety, and may be the reason that Send is less intuitive than Sync when compared with other languages.
Using the above definitions, it should be apparent why there are few examples of types that are Sync but not Send. If an object can be used safely by two threads at the same time (Sync) then it can be used safely by two threads at different times (Send). Hence, Sync usually implies Send. Any exception probably relates to Send's transfer of ownership between threads, which affects which thread runs the Drop handler and deallocates the value.
Most objects can be used safely by different threads if the uses can be guaranteed to be at different times. Hence, most types are Send.
Rc is an exception. It does not implement Send. Rc allows data to have multiple owners. If one owner in thread A could send the Rc to another thread, giving ownership to thread B, there could be other owners in thread A that can still use the object. Since the reference count is modified non-atomically, the value of the count on the two threads may get out of sync and one thread may drop the pointed-at value while there are owners in the other thread.
Arc is an Rc that uses an atomic type for the reference count. Hence it can be used by multiple threads without the count getting out of sync. If the data that the Arc points to is Sync, the entire object is Sync. If the data is not Sync (e.g. a mutable type), it can be made Sync using a Mutex. Hence the proliferation of Arc<Mutex<T>> types in multithreaded Rust code.
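As a quick illustration of that pattern (a minimal sketch of my own, not part of the original answer): several threads bump a counter behind an Arc<Mutex<usize>>. The Mutex makes the inner value Sync, and the Arc's atomic count tracks the owners, so the whole thing is Send + Sync.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that each add 1 to a shared counter.
fn count_with_threads(n: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n {
        // Each clone bumps the atomic refcount; the Mutex serializes
        // the mutations, so Arc<Mutex<usize>> is Send + Sync.
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(count_with_threads(8), 8);
}
```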
Send means that a type is safe to move from one thread to another. If the same type also implements Copy, this also means that it is safe to copy from one thread to another.
Sync means that a type is safe to reference from multiple threads at the same time. Specifically, that &T is Send and can be moved/copied to another thread if T is Sync.
So Send and Sync capture two different aspects of thread safety:
Non-Send types can only ever be owned by a single thread, since they cannot be moved or copied to other threads.
Non-Sync types can only be used by a single thread at any single time, since their references cannot be moved or copied to other threads. They can still be moved between threads if they implement Send.
It rarely makes sense to have Sync without Send, as being able to use a type from different threads would usually mean that moving ownership between threads should also be possible. They are technically different, though, so it is conceivable that certain types can be Sync but not Send.
Most types that own data will be Send, as there are few cases where data can't be moved from one thread to another (and not be accessed from the original thread afterwards).
Some common exceptions:
Raw pointers are neither Send nor Sync.
Types that share ownership of data without thread synchronization (for instance Rc).
Types that borrow data that is not Sync.
Types from external libraries or the operating system that are not thread safe.
Overall
Send and Sync exist to help you reason about types when many threads are involved. In a single-threaded world, there would be no need for Send and Sync to exist.
It may also help not to think about Send and Sync as allowing you to do something, or giving you the power to do something. On the contrary, think about !Send and !Sync as ways of forbidding or preventing you from doing problematic multi-threaded stuff.
For the definition of Send and Sync
If some type X is Send, then if you have an owned X, you can move it into another thread.
This can be problematic if X is somehow related to multi/shared-ownership.
Rc has a problem with this, since having one Rc allows you to create more owned Rc's (by cloning it), but you don't want any of those to pass into other threads. The problem is that many threads could be making more clones of that Rc at the same time, and the owner counter inside it doesn't work in that multithreaded situation - even though each thread would own its own Rc, there is really only one counter, and access to it would not be synchronized.
Arc may work better. At least its owner counter is capable of dealing with the situation mentioned above, so in that regard, Arc is OK to Send. But only if the inner type is both Send and Sync. For example, an Arc<Rc<T>> is still problematic - remember that Rc forbids Send (!Send) - because multiple threads having their own owned clone of that Arc<Rc<T>> could still trigger Rc's own multi-thread problems; the Arc itself can't protect the threads from doing that. The other requirement - that for Arc<T> to be Send, T must also be Sync - is not that big of a deal, because if a type already forbids Sending, it will likely also forbid Syncing.
So if some type forbids Sending, then no matter what other types you wrap around it, you won't be able to make it "sendable" into another thread.
If some type X is Sync, then if multiple threads happened to somehow have an &X each, they all can safely use that &X.
This is problematic if &X allows interior mutability, and you'd want to forbid Sync if you want to prevent multiple threads from having an &X.
So if X has a problem with Sending, it will basically also have a problem with Syncing.
It's also problematic for Cell - which doesn't actually forbid Sending. Since Cell allows interior mutation through a mere &Cell, and that mutation access doesn't guarantee anything in a multithreaded situation, it must forbid Syncing - that is, the situation of multiple threads having a &Cell must not be allowed (in general). Regarding it being Send, an owned Cell can still be moved into another thread, as long as there won't be any &Cell's anywhere else.
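To make that concrete - a small sketch (the function name is mine) showing that an owned Cell can be moved into another thread, even though sharing a &Cell across threads would be rejected by the compiler:

```rust
use std::cell::Cell;
use std::thread;

fn move_cell_to_thread() -> i32 {
    let cell = Cell::new(1);
    // Moving the owned Cell is fine: Cell<i32> is Send.
    // Sharing a &Cell across threads would not compile: Cell is !Sync.
    let handle = thread::spawn(move || {
        cell.set(cell.get() + 41);
        cell.get()
    });
    handle.join().unwrap()
}

fn main() {
    assert_eq!(move_cell_to_thread(), 42);
}
```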
Mutex may work better. It also allows interior mutation, and it knows how to cope when many threads try to do it at once - the Mutex will only require that nothing inside of it forbids Sending - otherwise, it's the same problem that Arc would have to deal with. All being good, the Mutex is both Send and Sync.
This is not a practical example, but a curious note: if we have a Mutex<Cell> (which is redundant, but oh well), where Cell itself forbids Sync, the Mutex is able to deal with that problem and still be (or "re-allow") Sync. This is because, once a thread gets access to that Cell, we know it won't have to deal with other threads still trying to access other &Cell's at the same time, since the Mutex will be locked, preventing this from happening.
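That Mutex<Cell> curiosity can actually be exercised - a sketch, assuming nothing beyond std. Cell<i32> is !Sync on its own, but Mutex<Cell<i32>> is Sync again, so it can sit inside an Arc and be touched by several threads:

```rust
use std::cell::Cell;
use std::sync::{Arc, Mutex};
use std::thread;

fn mutex_cell_demo() -> i32 {
    // Cell is !Sync, but Mutex<Cell<i32>> is Sync: the lock ensures
    // only one thread holds a &Cell at any given moment.
    let shared = Arc::new(Mutex::new(Cell::new(0)));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let shared = Arc::clone(&shared);
        handles.push(thread::spawn(move || {
            let guard = shared.lock().unwrap();
            guard.set(guard.get() + 1); // interior mutation via &Cell
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let result = shared.lock().unwrap().get();
    result
}

fn main() {
    assert_eq!(mutex_cell_demo(), 4);
}
```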
Mutating a value across threads
In theory you could share a Mutex between threads!
If you simply move an owned Mutex, that will work, but it's of no use, since you'd want multiple threads to have some access to it at the same time.
Since it's Sync, you're allowed to share a &Mutex between threads, and its lock method indeed only requires a &Mutex.
But trying this runs into a problem. Say you're in the main thread: you create a Mutex, then a reference to it (a &Mutex), and then create another thread Z into which you try to pass the &Mutex.
The problem is that the Mutex has only one owner, and that owner is inside the main thread. If for some reason thread Z outlives the main thread, that &Mutex would be dangling. So even if the Sync on the Mutex doesn't particularly forbid you from sending/sharing &Mutex between threads, you'll likely not get it done this way, for lifetime reasons. Arc to the rescue!
Arc will get rid of that lifetime problem. Instead of being owned by a particular scope in a particular thread, the value can be multi-owned, by multiple threads.
So using an Arc<Mutex> will allow a value to be co-owned and shared, and offer interior mutability between many threads. In sum, the Mutex itself re-allows Syncing while not particularly forbidding Sending, and the Arc particularly forbids neither while offering shared ownership (avoiding lifetime problems).
Small list of types
Types that are Send and Sync are those that particularly forbid neither:
primitives, Arc, Mutex - depending on the inner types
Types that are Send and !Sync are those that offer (multithread-unsynchronized) interior mutability:
Cell, RefCell - depending on the inner type
Types that are !Send and !Sync are those that offer (multithread-unsynchronized) co-ownership:
Rc
I don't know many types that are !Send and Sync; std::sync::MutexGuard is one example - it is Sync (when the guarded data is Sync) but !Send, because some platforms require that a mutex be unlocked by the same thread that locked it.
According to
Rustonomicon: Send and Sync
A type is Send if it is safe to send it to another thread.
A type is Sync if it is safe to share between threads (T is Sync if and only if &T is Send).
Related
If I have a type that is not safe to send between threads, I wrap it with Arc<Mutex<T>>. This way I'm guaranteed that when I access it, I need to lock it first. However, Rust still complains when T does not implement Send + Sync.
Shouldn't it work for any type? In my case, T is a struct that accesses a C object through FFI, so I cannot mark it as Sync + Send.
What can I do in this case and why won't Rust accept Arc<Mutex<T>> as safe to share between threads?
Just because you are the only one accessing something (at a time) does not mean it suddenly becomes okay to access things from different threads. It merely prevents one issue: data races. But there may be other issues with moving objects across threads.
For example it's common for low-level windowing APIs to only be able to be called from the main thread. Many low-level APIs are also only callable from the thread they were initialized in. If you wrap these APIs in Rust objects, you don't want these objects moving across threads no matter what.
From what I've learned, I should always choose Arc<T> for shared read access across threads and Arc<Mutex<T>> for shared write access across threads. Are there cases where I don't want to use Arc<T>/Arc<Mutex<T>> and instead do something completely different? E.g. do something like this:
unsafe impl Sync for MyStruct {}
unsafe impl Send for MyStruct {}
let shared_data_for_writing = Arc::from(MyStruct::new());
Sharing across threads
Besides Arc<T>, we can share objects across threads using scoped threads, e.g. by using crossbeam::scope and Scope::spawn. Scoped threads allow us to send borrowed pointers (&'a T) to threads spawned in a scope. The scope guarantees that the thread will terminate before the referent is dropped. Borrowed pointers have no runtime overhead compared to Arc<T> (Arc<T> takes a bit more memory and needs to maintain a reference counter using atomic instructions).
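Since Rust 1.63 the standard library has its own scoped threads (std::thread::scope), which work the same way as the crossbeam version; a minimal sketch (my own) of borrowing a slice into two threads:

```rust
use std::thread;

// Sum a slice by borrowing it from two scoped threads.
fn parallel_sum(data: &[i32]) -> i32 {
    let (left, right) = data.split_at(data.len() / 2);
    thread::scope(|s| {
        // Plain borrows (&[i32]) cross into the threads; the scope
        // guarantees both threads finish before `data` can be dropped.
        let a = s.spawn(|| left.iter().sum::<i32>());
        let b = s.spawn(|| right.iter().sum::<i32>());
        a.join().unwrap() + b.join().unwrap()
    })
}

fn main() {
    assert_eq!(parallel_sum(&[1, 2, 3, 4]), 10);
}
```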
Mutating across threads
Mutex<T> is the most basic general-purpose wrapper for ensuring at most one thread may mutate a value at any given time. Mutex<T> has one drawback: if there are many threads that only want to read the value in the mutex, they can't do so concurrently, even though it would be safe. RwLock<T> solves this by allowing multiple concurrent readers (while still ensuring a writer has exclusive access).
Atomic types such as AtomicUsize also allow mutation across threads, but only for small values (8, 16, 32 or 64 bits – some processors support atomic operations on 128-bit values, but that's not exposed in the standard library yet; see atomic::Atomic for that). For example, instead of Arc<Mutex<usize>>, you could use Arc<AtomicUsize>. Atomic types do not require locking, but they are manipulated through atomic machine instructions. The set of atomic instructions is a bit different from the set of non-atomic instructions, so switching from a non-atomic type to an atomic type might not always be a "drop-in replacement".
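For example, a sketch of an Arc<AtomicUsize> counter, trading the lock of Arc<Mutex<usize>> for atomic instructions:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn atomic_count(n: usize) -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let mut handles = Vec::new();
    for _ in 0..n {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // No lock needed: fetch_add is an atomic read-modify-write.
            counter.fetch_add(1, Ordering::Relaxed);
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    assert_eq!(atomic_count(8), 8);
}
```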
When spawn is called, a JoinHandle is returned, but if that handle is discarded (or not available, somewhere inside a crate) the thread is "detached".
Is there any way to find all threads currently running and recover a JoinHandle for them?
...my feeling is that, in general, the answer is no.
In that case, is there any way to override either how Thread is invoked, or how JoinHandle is dropped, globally?
...but looking through the source, I can't see any way this might be possible.
As motivation, this very long discussion proposes using scopes to mandate the termination of child threads; effectively executing a join on every child thread when the scope ends. However, it requires that child threads be spawned via a custom method to work; it would be very interesting to be able to do something similar in Rust, where any thread spawned was intercepted and parented to the active ambient scope on the thread-local.
I'll accept any answer that either:
demonstrates how to recover a JoinHandle by whatever means possible
demonstrates how to override the behavior of thread::spawn() in some way so that a discarded JoinHandle from a thread invoked in some arbitrary sub-function can be recovered before it is dropped.
Is there any way to find all threads currently running and recover a JoinHandle for them?
No, this would likely impose restrictions/overhead on everyone who wanted to use threads, which is antithetical to a systems programming language.
You could write your own solution for this by using something like Arc/Weak and a global singleton. Then you have your own registry of threads.
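A minimal sketch of such a registry (spawn_registered and join_all are made-up names, not std APIs), using a global Mutex<Vec<JoinHandle>> behind a OnceLock:

```rust
use std::sync::{Mutex, OnceLock};
use std::thread::{self, JoinHandle};

// Global registry of handles; a made-up singleton, not a std facility.
static REGISTRY: OnceLock<Mutex<Vec<JoinHandle<()>>>> = OnceLock::new();

fn registry() -> &'static Mutex<Vec<JoinHandle<()>>> {
    REGISTRY.get_or_init(|| Mutex::new(Vec::new()))
}

// Spawn a thread and keep its handle instead of discarding it.
fn spawn_registered<F: FnOnce() + Send + 'static>(f: F) {
    let handle = thread::spawn(f);
    registry().lock().unwrap().push(handle);
}

// Join every registered thread; the lock is released before joining.
fn join_all() {
    let handles: Vec<_> = registry().lock().unwrap().drain(..).collect();
    for h in handles {
        h.join().unwrap();
    }
}

fn main() {
    use std::sync::atomic::{AtomicUsize, Ordering};
    static COUNT: AtomicUsize = AtomicUsize::new(0);
    spawn_registered(|| { COUNT.fetch_add(1, Ordering::SeqCst); });
    spawn_registered(|| { COUNT.fetch_add(1, Ordering::SeqCst); });
    join_all(); // every registered thread has finished here
    assert_eq!(COUNT.load(Ordering::SeqCst), 2);
}
```

The obvious caveat is that this only tracks threads started through spawn_registered; anything a library spawns directly via thread::spawn stays invisible, which is exactly the limitation the answer describes.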
is there any way to override either how Thread is invoked, or how JoinHandle is dropped, globally?
No, there is no ability to do this with the Rust libraries as they exist now. In fact, "overriding" something on that scale is fairly antithetical to the concepts of a statically-compiled language. Imagine if any library you use could decide to "override" how addition worked or what println did. Some languages do allow this dynamism, but it comes at a cost. Rust is not the right language for that.
In fact, the right solution for this is nothing new: just use dependency injection. "Starting a thread" is a non-trivial collaborator and likely doesn't belong to the purview of most libraries as it's an application-wide resource.
can be recovered before it is dropped
In Rust, values are dropped at the end of the scope that owns them. Recovering a handle before that would require running arbitrary code at the prologue of arbitrary functions anywhere in the program. Such a feature is highly unlikely to ever be implemented.
There's some discussion about creating a method that will return a handle that joins a thread when it is dropped, which might do what you want, but people still have to call it.
I've been playing around with glib, which
utilizes reference counting to manage memory for its objects;
supports multiple threads.
What I can't understand is how they play together.
Namely:
In glib each thread doesn't seem to increase refcount of objects passed on its input, AFAIK (I'll call them thread-shared objects). Is it true? (or I've just failed to find the right piece of code?) Is it a common practice not to increase refcounts to thread-shared objects for each thread, that shares them, besides the main thread (responsible for refcounting them)?
Still, each thread increases reference counts for the objects it dynamically creates itself. Should the programmer take care not to give the same names to variables in each thread, in order to prevent name collisions and memory leaks? (E.g. in my picture, thread2 shouldn't create a heap variable called output_object, or it will collide with thread1's heap variable of the same name.)
UPDATE: The answer to question 2 is no, because the visibility scopes of those variables don't intersect:
Is dynamically allocated memory (heap), local to a function or can all functions in a thread have access to it even without passing pointer as an argument.
I think that threads are irrelevant to understanding the use of reference counters. The point is rather ownership and lifetime, and a thread is just one thing that is affected by this. This is a bit difficult to explain, hopefully I'll make this clearer using examples.
Now, let's look at the given example where main() creates an object and starts two threads using that object. The question is, who owns the created object? The simple answer is that main() and both threads share this object, so this is shared ownership. In order to model this, you should increment the refcounter before each call to pthread_create(). If the call fails, you must decrement it again, otherwise it is the responsibility of the started thread to do that when it is done with the object. Then, when main() terminates, it should also release ownership, i.e. decrement the refcounter. The general rule is that when adding an owner, increment the refcounter. When an owner is done with the object, it decrements the refcounter and the last one destroys the object with that.
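For comparison (my addition - the answer itself is about C and pthreads), Rust's Arc makes this rule explicit: Arc::clone is the "increment the refcounter before pthread_create" step, and each owner's drop is the matching decrement.

```rust
use std::sync::Arc;
use std::thread;

// Rust analogue of "increment the refcount before pthread_create":
// Arc::clone adds an owner before the handoff, and the spawned
// thread's drop of its clone is the matching decrement.
fn share_with_thread() -> usize {
    let shared = Arc::new(String::from("shared object"));
    let for_thread = Arc::clone(&shared); // add an owner first
    let handle = thread::spawn(move || {
        assert_eq!(for_thread.as_str(), "shared object");
        // `for_thread` is dropped here: the thread releases its ownership.
    });
    handle.join().unwrap();
    Arc::strong_count(&shared) // back to one owner: the caller
}

fn main() {
    assert_eq!(share_with_thread(), 1);
}
```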
Now, why does the code not do this? Firstly, you could get away with adding the first thread as owner and then passing main()'s ownership to the second thread. This would save one increment/decrement operation. That still isn't what's happening, though. Instead, no reference counting is done at all, and the simple reason is that it isn't needed. The point of refcounting is to coordinate the lifetime of a dynamically allocated object between different owners that are peers. Here though, the object is created and owned by main(); the two threads are not peers but rather slaves of main(). Since main() is the master that controls start/stop of the threads, it doesn't have to coordinate the lifetime of the object with them.
Lastly, though that might be due to the example-ness of your code, I think that main simply leaks the reference, relying on the OS to clean up. While this isn't beautiful, it doesn't hurt. In general, you can allocate objects once and then use them forever without any refcounting in some cases. An example for this is the main window of an application, which you only need once and for the whole runtime. You shouldn't repeatedly allocate such objects though, because then you have a significant memory leak that will increase over time. Both cases will be caught by tools like valgrind though.
Concerning your second question, about the heap variable name clash you expect: it doesn't exist. Variable names that are function-local cannot collide. This is not because they are used by different threads - even if the same function is called twice by the same thread (think recursion!), the local variables in each call to the function are distinct. Also, variable names are for the human reader; the compiler completely eradicates them.
UPDATE:
As matthias says below, GObject is not thread-safe, only reference counting functions are.
Original content:
GObject is supposed to be thread safe, but I've never played with that myself…
I am writing code in VS2005 using its STL.
I have one UI thread that reads a vector, and a worker thread that writes to it.
I use ::boost::shared_ptr as vector element.
vector<shared_ptr<Class>> vec;
but I find that if I manipulate the vec in both threads at the same time (I can guarantee they do not visit the same area; the UI thread always reads the area that already has the information),
vec.clear() seems unable to release the resource. The problem happens in shared_ptr: it cannot release its resource.
What is the problem?
Is it because, when the vector reaches its capacity, it reallocates memory, and then the original part is invalidated?
As far as I know, iterators become invalid when reallocation happens - but why did problems also occur when I used vec[i]?
//-----------------------------------------------
What kinds of lock is needed?
I mean: if the vector's element is a shared_ptr, when thread A gets the pointer smart_p, will thread B wait until A finishes its operation on smart_p?
Or do I simply add a lock when a thread is trying to read the pointer, so that when the read operation is finished, thread B can continue to do something?
When you're accessing the same resource from more than one thread, locking is necessary. If you don't, you have all sorts of strange behaviour, like you're seeing.
Since you're using Boost, an easy way to use locking is to use the Boost.Thread library. The best kind of locks you can use for this scenario are reader/writer locks; they're called shared_mutex in Boost.Thread.
But yes, what you're seeing is essentially undefined behaviour, due to the lack of synchronisation between the threads. Hope this helps!
Edit to answer OP's second question: You should use a reader lock when reading the smart pointer out of the vector, and a writer lock when writing or adding an item to the vector (so, the mutex is for the vector only). If multiple threads will be accessing the pointed-to object (i.e., what the smart pointer points to), then separate locks should be set up for them. In that case, you're better off putting a mutex object in the object class as well.
Another alternative is to eliminate the locking altogether by ensuring that the vector is accessed in only one thread. For example, by having the worker thread send a message to the main thread with the element(s) to add to the vector.
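In Rust terms (a sketch of the same idea, since the question itself is C++/Boost), that message-passing alternative is an mpsc channel feeding the one thread that owns the vector - no lock is needed because only one thread ever touches it:

```rust
use std::sync::mpsc;
use std::thread;

// Only the receiving side ever touches the Vec, so no lock is needed.
fn collect_via_channel(n: i32) -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    let worker = thread::spawn(move || {
        for i in 0..n {
            tx.send(i).unwrap(); // worker sends elements instead of pushing
        }
        // tx dropped here, which closes the channel
    });
    let vec: Vec<i32> = rx.iter().collect(); // owning thread builds the Vec
    worker.join().unwrap();
    vec
}

fn main() {
    assert_eq!(collect_via_channel(3), vec![0, 1, 2]);
}
```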
It is possible to do simultaneous access to a list or array like this. However, std::vector is not a good choice because of its resize behavior. To do it right you'd need a fixed-size array, or special locking or copy-update behavior on resize. It also needs independent front and back pointers, again with locking or atomic update.
Another answer mentioned message queues. A shared array as I described is a common and efficient way to implement those.