I've been playing around with glib, which uses reference counting to manage memory for its objects and supports multiple threads.
What I can't understand is how the two play together. Namely:
In glib, each thread doesn't seem to increase the refcount of objects passed on its input, as far as I can tell (I'll call them thread-shared objects). Is that true, or have I just failed to find the right piece of code? Is it common practice not to increase the refcount of a thread-shared object for each thread that shares it, leaving only the main thread responsible for refcounting it?
Still, each thread increases the reference counts of the objects it dynamically creates itself. Should the programmer take care not to give variables the same names in each thread, in order to prevent name collisions and memory leaks? (E.g., in my picture, thread2 shouldn't create a heap variable called output_object, or it will collide with thread1's heap variable of the same name.)
UPDATE: The answer to question 2 is no, because the visibility scopes of those variables don't intersect:
Is dynamically allocated (heap) memory local to a function, or can all functions in a thread access it even without passing a pointer as an argument?
An illustration of my questions:
I think that threads are irrelevant to understanding the use of reference counters. The point is rather ownership and lifetime, and a thread is just one of the things affected by them. This is a bit difficult to explain; hopefully the examples below will make it clearer.
Now, let's look at the given example, where main() creates an object and starts two threads using that object. The question is: who owns the created object? The simple answer is that main() and both threads share it, so this is shared ownership. To model this, you should increment the refcounter before each call to pthread_create(). If the call fails, you must decrement it again; otherwise, it is the responsibility of the started thread to do so when it is done with the object. Then, when main() terminates, it should also release its ownership, i.e. decrement the refcounter. The general rule is: when adding an owner, increment the refcounter; when an owner is done with the object, it decrements the refcounter, and the last owner to do so destroys the object.
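To make that concrete, here's a minimal sketch of that rule using GObject refcounting and pthreads (error paths abbreviated; a real program would also skip joining a thread that failed to start):

#include <glib-object.h>
#include <pthread.h>

static void *worker(void *data)
{
    GObject *obj = G_OBJECT(data);
    /* ... use obj ... */
    g_object_unref(obj);   /* the thread releases its ownership when done */
    return NULL;
}

int main(void)
{
    GObject *obj = G_OBJECT(g_object_new(G_TYPE_OBJECT, NULL)); /* refcount == 1, owned by main() */
    pthread_t t1, t2;

    g_object_ref(obj);                               /* +1: thread 1 becomes an owner */
    if (pthread_create(&t1, NULL, worker, obj) != 0)
        g_object_unref(obj);                         /* creation failed: take the reference back */

    g_object_ref(obj);                               /* +1: thread 2 becomes an owner */
    if (pthread_create(&t2, NULL, worker, obj) != 0)
        g_object_unref(obj);

    g_object_unref(obj);   /* main() is done with the object; the last owner destroys it */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}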
Now, why does the code not do this? Firstly, you can get away with adding the first thread as an owner and then passing main()'s ownership to the second thread, which saves one increment/decrement pair. That still isn't what's happening, though. Instead, no reference counting is done at all, and the simple reason is that it isn't needed. The point of refcounting is to coordinate the lifetime of a dynamically allocated object between different owners that are peers. Here, though, the object is created and owned by main(); the two threads are not peers but rather slaves of main(). Since main() is the master that controls the start and stop of the threads, it doesn't have to coordinate the lifetime of the object with them.
Lastly, though that might be due to the example-ness of your code, I think that main simply leaks the reference, relying on the OS to clean up. While this isn't beautiful, it doesn't hurt. In general, you can allocate objects once and then use them forever without any refcounting in some cases. An example for this is the main window of an application, which you only need once and for the whole runtime. You shouldn't repeatedly allocate such objects though, because then you have a significant memory leak that will increase over time. Both cases will be caught by tools like valgrind though.
Concerning your second question, about the heap variable name clash you expect: it doesn't exist. Function-local variable names cannot collide. This is not because they are used by different threads; even if the same function is called twice by the same thread (think recursion!), the local variables in each call are distinct. Also, variable names exist for the human reader; the compiler completely eradicates them.
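A tiny illustration (the variable name is taken from your picture): each call gets its own output_object, whether the calls come from different threads or from recursion:

#include <cstdio>

void countdown(int n)
{
    int output_object = n;     // a fresh, distinct variable for every call
    if (n > 0)
        countdown(n - 1);      // the inner call's output_object is a different object
    std::printf("%d\n", output_object);   // still prints this call's own value
}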
UPDATE:
As matthias says below, GObject is not thread-safe; only the reference counting functions are.
Original content:
GObject is supposed to be thread safe, but I've never played with that myself…
Related
I am learning about computer architecture and how operating systems work. I have a few questions about how mutexes work.
Question 1
add_to_list(&list, &elem):
mutex m;
lock_mutex(m);
...
remove_from_list(&list):
mutex m;
lock_mutex(m);
...
These two functions each instantiate their own mutex, which means the two mutexes live at different places in memory, so locking one does not lock the other, and we don't accomplish what we want: the list is not protected.
How do we get two different functions to use the same mutex? Do we define a global variable? If so, how do you share this global variable throughout an entire program that is potentially spread throughout multiple files?
Question 2
mutex m;
modify_A():
lock_mutex(m);
A += 1;
modify_B():
lock_mutex(m);
B += 1;
These two functions modify different locations in memory. Does that mean I need a unique mutex for each function or piece of data? If I were to have a single global mutex used for both functions, a thread calling modify_A() would block another thread trying to call modify_B().
Which brings me to my last question...
Question 3
A mutex seems like it just blocks a thread from running a piece of code until whatever thread is currently running that same code finishes. This is to create atomicity and protect the integrity of the data being used by a thread. However, the same piece of memory can be modified from many different places in a program, which makes me think we'd have to use one mutex throughout the entire program, and that would result in a lot of needless blocking of other threads.
Considering that pretty much every function in a given program is going to be modifying data, if we use a single mutex throughout a program, that means each function call will be blocked while that mutex is in use by another thread, even if the data it needs to access is unrelated.
Doesn't that effectively eliminate the gains from having multiple threads? If only one thread can run at a given time?
I feel like I'm totally misunderstanding how mutexes work, so please ELI5!
Thanks in advance.
Yes, you make it a global variable, or otherwise accessible to the required functions through some kind of convenience method or whatever. Global variables can be shared between translation units too, but that's language/system dependent. In C you'd just put an extern mutex m in a header that everyone shares and then define that mutex as mutex m in exactly one of your translation units.
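For instance, here's a minimal C++ sketch of that pattern (file names and the list itself are mine; std::mutex stands in for the pseudocode's mutex):

// list_lock.h  (hypothetical file name)
#ifndef LIST_LOCK_H
#define LIST_LOCK_H
#include <mutex>
extern std::mutex list_mutex;   // declaration: every file that includes this sees the same mutex
#endif

// list_lock.cpp
#include "list_lock.h"
std::mutex list_mutex;          // the one and only definition

// list_ops.cpp
#include <list>
#include "list_lock.h"
std::list<int> shared_list;

void add_to_list(int elem)
{
    std::lock_guard<std::mutex> guard(list_mutex);  // same mutex object as below
    shared_list.push_back(elem);
}

void remove_from_list()
{
    std::lock_guard<std::mutex> guard(list_mutex);  // so the two functions really exclude each other
    if (!shared_list.empty())
        shared_list.pop_front();
}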
If you don't want changes to B to block other threads from modifying A, yes, you'd use two different mutexes. If you want to lock both at the same time, you would share the mutex.
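A sketch of the two-mutex version in C++ (the names are mine):

#include <mutex>

int A = 0, B = 0;
std::mutex m_A, m_B;   // one mutex per independent piece of data

void modify_A()
{
    std::lock_guard<std::mutex> guard(m_A);   // blocks only other users of A
    A += 1;
}

void modify_B()
{
    std::lock_guard<std::mutex> guard(m_B);   // runs concurrently with modify_A()
    B += 1;
}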
Multiple threads can run at the same time as long as no two of them are inside the critical section protected by a certain mutex at the same time. That's the whole point: everything goes on nice and parallel, but you use the mutex to serialize access to a specific resource or critical section that needs protection.
You typically use a mutex to protect some particular piece of shared data. If the vast majority of your code's time is spent accessing one single piece of shared data, then you won't get much of a performance improvement from threads precisely because only one thread can safely access that piece of shared data at a time.
If you happen to fall into this situation, there are more complex techniques than mutexes. Fortunately, it's fairly rare (unless you're implementing operating systems or low-level libraries) so you can get away with using mutexes for a very large fraction of your synchronization needs.
I have a threading question and what I'd qualify as a modest threading background.
Suppose I have the following (oversimplified) design and behavior:
Object ObjectA - has a reference to object ObjectB and a method MethodA().
Object ObjectB - has a reference to ObjectA, an array of elements ArrayB and a method MethodB().
ObjectA is responsible for instantiating ObjectB. ObjectB.ObjectA will point to ObjectB's instantiator.
Now, whenever some conditions are met, a new element is added in ObjectB.ArrayB and a new thread is started for this element, say ThreadB_x, where x goes from 1 to ObjectB.ArrayB.Length. Each such thread calls ObjectB.MethodB() to pass some data in, which in turn calls ObjectB.ObjectA.MethodA() for data processing.
So multiple threads call the same method ObjectB.MethodB(), and it's very likely that they do so at the very same time. There's a lot of code in MethodB that creates and initializes new objects, so I don't think there are problems there. But then this method calls ObjectB.ObjectA.MethodA(), and I don't have the slightest idea of what's going on in there. Based on the results I get, nothing wrong, apparently, but I'd like to be sure of that.
For now, I enclosed the call to ObjectB.ObjectA.MethodA() in a lock statement inside ObjectB.MethodB(), so I'm thinking this will ensure there are no clashing calls to MethodA(), though I'm not 100% sure of that. But what happens if each ThreadB_x calls ObjectB.MethodB() many times and very, very fast? Will I have a queue of calls waiting for ObjectB.ObjectA.MethodA() to finish?
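In C++ terms (the same idea as the lock statement; the class layout follows my description above, and the data parameter is made up), what I did looks roughly like this:

#include <mutex>

class ObjectA {
public:
    void MethodA(int data) { /* processing; thread-safety unknown */ }
};

class ObjectB {
    ObjectA *objectA;            // points back to the instantiator
    std::mutex methodA_lock;     // plays the role of the lock statement
public:
    void MethodB(int data)
    {
        // ... object creation and initialization, safe without locking ...
        std::lock_guard<std::mutex> guard(methodA_lock);
        objectA->MethodA(data);  // at most one thread in here at a time
    }
};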
Thanks.
Your question is very difficult to answer because of the lack of information. It depends on the average time spent in methodA, how many times this method is called per thread, how many cores are allocated to the process, the OS scheduling policy, to name a few parameters.
All things being equal, as the number of threads grows toward infinity, you can easily imagine that the probability of two threads requesting access to a shared resource simultaneously tends to one. This probability grows faster in proportion to the amount of time spent on the shared resource. That intuition is probably the reason for your question.
The main idea of multithreading is to parallelize code that can effectively be computed concurrently, and to avoid contention as much as possible. In your setup, if methodA is not pure, i.e. if it may change the state of the process (in C++ parlance, if it cannot be made const), then it is a source of contention (recall that a function can only be pure if it uses only pure functions or constants in its body).
One way of dealing with a shared resource is to protect it with a mutex, as you've done in your code. Another way is to turn its use into an asynchronous service, with one thread handling it and the others sending it requests for computation. In effect, you end up with an explicit queue of requests, but the threads making those requests are free to work on something else in the meantime. The goal is always to maximize computation time, as opposed to thread-management time, which is paid each time a thread gets rescheduled.
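A minimal sketch of that idea in C++11 (all names are mine): one service thread owns the shared resource and drains an explicit queue of requests:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

std::queue< std::function<void()> > requests;   // pending calls into the shared resource
std::mutex q_mutex;
std::condition_variable q_cv;
bool done = false;

void service_thread()      // the only thread that ever touches the shared resource
{
    for (;;) {
        std::unique_lock<std::mutex> lock(q_mutex);
        q_cv.wait(lock, [] { return done || !requests.empty(); });
        if (requests.empty())
            return;                          // shut down once drained
        std::function<void()> job = requests.front();
        requests.pop();
        lock.unlock();
        job();                               // e.g. a captured call to methodA
    }
}

void submit(std::function<void()> job)       // called from any producer thread
{
    { std::lock_guard<std::mutex> lock(q_mutex); requests.push(job); }
    q_cv.notify_one();
}

void shutdown()
{
    { std::lock_guard<std::mutex> lock(q_mutex); done = true; }
    q_cv.notify_all();
}

// usage: std::thread service(service_thread); ... submit(...) from workers ... shutdown(); service.join();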
Of course, it is not always possible to do so, e.g. when the result of methodA belongs to a strongly ordered chain of computation.
I had implemented a few methods that were being handled by individual background threads. I understand the complexity of doing things this way, but when I tested it, the results all seemed fine. Each thread accesses the same variables at times, and there is a maximum of 5 threads working at any given time. I guess I should have used SyncLock, but my question is whether there is any way the threads could have been executing without overwriting each other's variable contents. I was under the impression that each thread is allocated its own site in memory for such a variable, so even though it has the same name, in memory it is a different location mapped to a specific thread, right? So if there were collisions, you should get an error saying the variable cannot be accessed while it is in use by another thread.
Am I wrong on this?
If you are talking about local variables of a function - no, each thread has its own copy of those on its stack.
If you are talking about member variables of a class being accessed from different threads - yes, you need to protect them (unless they are read-only).
If I have the following pseudocode:
sharedVariable = somevalue;
CreateThread(threadWhichUsesSharedVariable);
Is it theoretically possible for a multicore CPU to execute code in threadWhichUsesSharedVariable() which reads the value of sharedVariable before the parent thread writes to it? For full theoretical avoidance of even the remote possibility of a race condition, should the code look like this instead:
sharedVariableMutex.lock();
sharedVariable = somevalue;
sharedVariableMutex.unlock();
CreateThread(threadWhichUsesSharedVariable);
Basically I want to know if the spawning of a thread explicitly linearizes the CPU at that point, and is guaranteed to do so.
I know that the overhead of thread creation probably takes enough time that this would never matter in practice, but the perfectionist in me is afraid of the theoretical race condition. In extreme conditions, where some threads or cores might be severely lagged and others are running fast and efficiently, I can imagine that it might be remotely possible for the order of execution (or memory access) to be reversed unless there was a lock.
I would say that your pseudocode is safe on any correctly functioning multiprocessor system. The C++ compiler cannot generate a call to CreateThread() before sharedVariable has received a correct value unless it can prove to itself that doing so is safe. You are guaranteed that your single-threaded code executes equivalently to a completely non-reordered linear execution path. Any system that "time warps" the thread creation ahead of the variable assignment is seriously broken.
I don't think declaring sharedVariable as volatile does anything useful in this case.
Given your example and if you were using Java then the answer would be "No". In Java it is not possible for the thread to spawn and read your value before the assignment operation is complete. In some other languages this might be a different story.
"Variables shared between multiple threads (e.g., instance variables of objects) have atomic assignment guaranteed by the Java language specification for all data types except longs and doubles... If a method consists solely of a single variable access or assignment, there is no need to make it synchronized for thread-safety, and every reason not to do so for performance."
reference
If your double or long is declared volatile, then you are also guaranteed that the assignment is an atomic operation.
Update:
Your example is going to work in C++ just like it works in Java. Theoretically there is no way that the thread spawning will begin or complete before the assignment, even with Out of Order Execution.
Note that your example is VERY specific, and in any other case it is recommended that you ensure the shared resource is protected properly. The new C++ standard is coming out with a lot of atomic stuff, so you could declare your variable as atomic and the assignment operation would be visible to all threads without the need for locking. CAS (compare and set) is your next best option.
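With C++11 that looks like this (a sketch; the variable name comes from your example, the rest is mine):

#include <atomic>
#include <thread>

std::atomic<int> sharedVariable(0);

int main()
{
    sharedVariable.store(42);   // happens-before the thread's start, so the thread sees it
    std::thread t([] {
        int v = sharedVariable.load();          // reads 42
        int expected = 42;
        // compare-and-set: bump 42 -> 43 only if no other thread got there first
        sharedVariable.compare_exchange_strong(expected, 43);
        (void)v;
    });
    t.join();
    return 0;
}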
I am writing code in VS2005 using its STL.
I have one UI thread that reads a vector, and a work thread that writes to it.
I use boost::shared_ptr as the vector element type.
vector< shared_ptr<Class> > vec;
But I find that if I manipulate vec from both threads at the same time (I can guarantee they do not visit the same area; the UI thread only reads areas that already hold information), vec.clear() seems unable to release the resources. The problem happens in shared_ptr: it cannot release its resource.
What is the problem?
Is it because, when the vector reaches its current capacity, it reallocates its memory, and the original part is invalidated?
As far as I know, iterators become invalid on reallocation, but why did problems also happen when I used vec[i]?
//-----------------------------------------------
What kind of lock is needed?
I mean: if the vector's element is a shared_ptr, then when thread A gets the pointer smart_p, will the other thread B wait until A finishes its operation on smart_p?
Or do I simply add a lock while a thread is reading the pointer, so that once the read operation is finished, thread B can continue to do something?
When you're accessing the same resource from more than one thread, locking is necessary. If you don't, you have all sorts of strange behaviour, like you're seeing.
Since you're using Boost, an easy way to use locking is to use the Boost.Thread library. The best kind of locks you can use for this scenario are reader/writer locks; they're called shared_mutex in Boost.Thread.
But yes, what you're seeing is essentially undefined behaviour, due to the lack of synchronisation between the threads. Hope this helps!
Edit to answer OP's second question: You should use a reader lock when reading the smart pointer out of the vector, and a writer lock when writing or adding an item to the vector (so, the mutex is for the vector only). If multiple threads will be accessing the pointed-to object (i.e., what the smart pointer points to), then separate locks should be set up for them. In that case, you're better off putting a mutex object in the object class as well.
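A sketch of that scheme with Boost.Thread (Class is from your snippet; everything else is named by me):

#include <boost/thread/locks.hpp>
#include <boost/thread/shared_mutex.hpp>
#include <boost/shared_ptr.hpp>
#include <cstddef>
#include <vector>

class Class { /* ... */ };

std::vector< boost::shared_ptr<Class> > vec;
boost::shared_mutex vec_mutex;   // protects vec itself, not the pointed-to objects

boost::shared_ptr<Class> read_element(std::size_t i)   // UI thread
{
    boost::shared_lock<boost::shared_mutex> lock(vec_mutex);   // many readers may hold this at once
    return vec[i];
}

void add_element(const boost::shared_ptr<Class> &p)    // work thread
{
    boost::unique_lock<boost::shared_mutex> lock(vec_mutex);   // exclusive: blocks all readers
    vec.push_back(p);
}

Copying the shared_ptr out under the read lock means the pointed-to object stays alive even if another thread later removes it from the vector.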
Another alternative is to eliminate the locking altogether by ensuring that the vector is accessed in only one thread. For example, by having the worker thread send a message to the main thread with the element(s) to add to the vector.
It is possible to do simultaneous access to a list or array like this. However, std::vector is not a good choice because of its resize behavior. Doing it right needs a fixed-size array, or special locking or copy-update behavior on resize. It also needs independent front and back pointers, again with locking or atomic updates.
Another answer mentioned message queues. A shared array as I described is a common and efficient way to implement those.
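For example, a single-producer/single-consumer version of such a shared array, sketched with C++11 atomics (sizes and names are mine):

#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>   // N must be a power of two for the wrap-around to stay correct
class SpscQueue {
    T buf[N];
    std::atomic<std::size_t> head{0};  // next slot to read; advanced only by the consumer
    std::atomic<std::size_t> tail{0};  // next slot to write; advanced only by the producer
public:
    bool push(const T &v)              // producer thread only
    {
        std::size_t t = tail.load(std::memory_order_relaxed);
        if (t - head.load(std::memory_order_acquire) == N)
            return false;                               // full
        buf[t % N] = v;
        tail.store(t + 1, std::memory_order_release);   // publish the element
        return true;
    }
    bool pop(T &v)                     // consumer thread only
    {
        std::size_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire))
            return false;                               // empty
        v = buf[h % N];
        head.store(h + 1, std::memory_order_release);   // free the slot
        return true;
    }
};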