This is related to How to assign unique ids to threads in a pthread wrapper? and The need for id_callback when in a multithread environment?.
When we need to differentiate among unique threads, we cannot use functions like pthread_self because thread IDs are reused. In those questions, it was suggested to use a monotonically increasing counter to provide a unique ID, due to the potential for thread ID reuse. The counter is then passed to the thread by way of arg in pthread_create.
I don't think we can maintain a map of external thread ids to unique ids because of the reuse problem. The same thread id could have multiple unique ids.
How do we retrieve the arg passed to pthread_create from outside the thread? Is it even retrievable?
I don't think we can maintain a map of external thread ids to unique
ids because of the reuse problem. The same thread id could have
multiple unique ids.
You can, as long as in this map you only keep the external thread IDs corresponding to currently running threads. When a thread exits, you remove it from the map.
The user of the map obviously only cares about currently running threads, since apparently the only way it has to identify the thread it wants is the external thread ID.
I'm not sure whether reverse/inverse counting mutex is the name of the synchronization primitive I'm looking for, and search results are vague. What I need is the ability for a thread that wants to write to an object to "lock" that object so that it waits for all existing readers to finish, while no new thread/task can attempt to acquire any access to the object (neither read nor write) until the writing thread finishes and "unlocks" it.
The question is how to design a class that behaves as such a synchronization primitive using only preexisting binary/counting/recursive semaphores.
Using a standard counting semaphore isn't suitable, since that only limits the maximum number of tasks that can access the object simultaneously. It doesn't enforce that they may only read, nor does it notify the thread that wants to write when they have finished, nor does it prevent other threads from starting to read in the meantime.
I need some kind of "counting" semaphore that is not bounded from above, but on which "register_read" or "lock_for_read" can be called (which keeps count how many simultaneous readers there are), but on which a task can call "lock_for_write", and then blocks until the count reaches 0, and after "lock_for_write" is called, any new calls to "lock_for_read" would have to block until the writing thread calls "unlock_from_write".
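What's described is essentially a writer-preferring readers-writer lock, and it can be built from binary semaphores alone using the classic "turnstile" construction. A sketch under that assumption (function names mirror those in the question; the struct and field names are mine):

```c
#include <semaphore.h>

/* A readers-writer lock built only from binary semaphores. Once a
 * writer is waiting, the closed turnstile stalls arriving readers. */
typedef struct {
    sem_t mutex;       /* protects reader_count */
    sem_t room_empty;  /* held while any reader (or the writer) is inside */
    sem_t turnstile;   /* a waiting writer closes this to new readers */
    int reader_count;
} rwlock_t;

void rw_init(rwlock_t *rw) {
    sem_init(&rw->mutex, 0, 1);
    sem_init(&rw->room_empty, 0, 1);
    sem_init(&rw->turnstile, 0, 1);
    rw->reader_count = 0;
}

void lock_for_read(rwlock_t *rw) {
    sem_wait(&rw->turnstile);   /* blocks here if a writer is waiting */
    sem_post(&rw->turnstile);
    sem_wait(&rw->mutex);
    if (++rw->reader_count == 1)       /* first reader takes the room */
        sem_wait(&rw->room_empty);
    sem_post(&rw->mutex);
}

void unlock_from_read(rwlock_t *rw) {
    sem_wait(&rw->mutex);
    if (--rw->reader_count == 0)       /* last reader releases the room */
        sem_post(&rw->room_empty);
    sem_post(&rw->mutex);
}

void lock_for_write(rwlock_t *rw) {
    sem_wait(&rw->turnstile);   /* stop new readers from entering */
    sem_wait(&rw->room_empty);  /* wait for existing readers to drain */
}

void unlock_from_write(rwlock_t *rw) {
    sem_post(&rw->room_empty);
    sem_post(&rw->turnstile);   /* reopen the door for readers */
}
```

There is no upper bound on readers: `reader_count` is a plain counter, and only the first/last reader touches `room_empty`, which is exactly the unbounded "counting" behaviour asked for.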
I have a RAM based database in the form of a linked list of trees (Each node in the list points to a tree of strings).
A set of words is given as an input and every word in this set must be searched for in the RAM database.
I thought of implementing a multithreaded search feature. The current implementation uses a two-level threading scheme. The first level of threads concurrently takes words out of the input set, and each thread of this level spawns worker threads that search for the same word in the RAM DB.
The implementation works but suffers a lot from synchronization overhead (besides the overhead of creating and terminating the threads, and the load imbalance between them), so I want to improve the scheme for better performance.
Current implementation details: Each first-level thread creates (spawns) worker threads to search for the same word. Whenever one of the worker threads finds the word in the DB, it must kill the other threads and then return the result to its parent (first-level) thread. The parent thread then grabs another word and repeats the process until there are no words left to search for. The input set is protected with a lock, and every group of worker threads (threads searching for the same word) has a protected shared pointer into the RAM DB.
The question is: what other, more efficient schemes could you suggest for such a situation?
Presumably the fastest is to have the worker threads already "running", but blocked on a condition variable (or barrier). (If you know a priori how to load-balance the trees, the threads know which trees to search; otherwise, you need a work queue of some kind.) The thread which learns the (next) word to search for then stores it in a shared location and signals the condition variable (or joins the barrier). When one thread finds the word, it sets a flag that sends the other threads (who must unfortunately be written to check it periodically) back to waiting.
I want to do bulk insert for many threads at the same time, each time each thread insert data into different collections. I know it's not thread safe if I put all data into one collection, but what if each thread insert data into a totally different collection? In such case, can I assume it's thread safe and do not have to worry about stuff?
If every thread uses its own connection then it is thread safe. There is no difference whether you insert into the same collection or different ones. The crucial part is that every thread must use its own separate connection to the database.
I am trying to understand how the TLS works, but I think that the definitions provided by Wikipedia and MSDN are different.
By reading the Wikipedia page, my understanding is that TLS is a way to make data which would normally be global/static local to each thread of a process. If this is true, though, then different threads cannot access each other's data.
According to MSDN: "One thread allocates the index, which can be used by the other threads to retrieve the unique data associated with the index", so it looks like a thread can have access to the data of other threads.
That seems to contradict what Wikipedia says; where's the catch?
Two confusions here:
Firstly, you allocate a (unique) TLS ID. Using that ID with the corresponding functions, every thread can access its own associated TLS data. Note that this ID only has to be allocated once, and that the ID (not the data!) is shared by all threads.
Secondly, every thread can access every other thread's data, whether it's TLS or not. The simple reason is that threads share a memory space (the memory space plus the threads roughly make up a process). Getting at some other thread's TLS data is more difficult, though; the owning thread could, e.g., pass a pointer to it.
In short, TLS works like a C++ map. The key is the pair of thread ID and TLS ID. The data is typically a pointer which can be used to indirectly reference some data. When accessing an element, you only supply the TLS ID; the implementation adds the calling thread's ID to form the key for the lookup. Needless to say, access to that map is thread-safe.
In my cocos2d game I have some balls which must be destroyed, and there are two threads running concurrently: the first thread adds balls to an NSMutableArray, and the second thread iterates through this array and calls the release method on each ball. I have put every operation on the array in a synchronized block with @synchronized(array), but it has no effect: every time, inside the synchronized block, the application throws the exception __NSArrayM was mutated while being enumerated.
maybe there is other way to synchronize threads?
Since you're adding objects from one thread and iterating over the same array with another thread, it seems rather pointless to multithread this part of your code.
The reason is that you cannot modify an array while iterating over it, regardless of whether you do so from the same thread or from multiple threads.
You will most likely get better results by using two arrays, one for each thread, with each thread performing the same tasks: both add objects, then both iterate over their half of the objects. How you split the objects is up to you; it could be based on screen coordinates (screen split) or some other condition (i.e. balancing the number of objects processed by each thread).