I'm trying to implement my own read/write lock using atomic types. I can easily define exclusive locks, but I can't manage to create locks for shared reader threads, like SRWLock does (see SRWLock). My question is how to implement a lock that can be used in exclusive mode (one reader/writer thread at a time) or in shared mode (multiple reader threads at a time).
I can't use a plain std::mutex because it doesn't support multiple readers. Also, I don't use Boost, so no shared_mutex either.
The shared timed mutex
There is no equivalent for that kind of read-write locking in the C++11 standard library. The good news is that there is one in C++14 and it's called shared_timed_mutex.
Take a look here:
http://en.cppreference.com/w/cpp/thread/shared_timed_mutex
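For illustration, here is a minimal sketch of how it is used (the map and function names are made up for this example; it assumes a C++14 compiler):

    #include <map>
    #include <mutex>         // std::unique_lock
    #include <shared_mutex>  // std::shared_timed_mutex, std::shared_lock (C++14)
    #include <string>

    std::map<std::string, int> table;     // hypothetical shared data
    std::shared_timed_mutex table_mutex;  // protects table

    int read_value(const std::string& key) {
        // Shared mode: any number of readers may hold the lock at once.
        std::shared_lock<std::shared_timed_mutex> lock(table_mutex);
        auto it = table.find(key);
        return it != table.end() ? it->second : -1;
    }

    void write_value(const std::string& key, int value) {
        // Exclusive mode: one writer at a time, and no concurrent readers.
        std::unique_lock<std::shared_timed_mutex> lock(table_mutex);
        table[key] = value;
    }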
Compiler support
Recent versions of GCC support shared_timed_mutex, according to the documentation, if you use the -std=c++14 compiler flag. The bad news is that Visual C++ doesn't support it yet, or at least I haven't been able to find any concrete info about it; the closest thing I've found is this feature table, which says that Shared Locking in C++ is missing.
Possible alternatives
You can implement this kind of thing using a mutex and a semaphore as described in this tutorial if you use a library that has these primitives.
If you prefer to stay within the standard library, you can implement it yourself with an std::mutex and an std::condition_variable, similarly to how it's done here or here.
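As a rough illustration of the condition-variable approach, here is a naive sketch (not production code; in particular, writers can starve under a continuous stream of readers):

    #include <condition_variable>
    #include <mutex>

    class rw_lock {
        std::mutex m_;
        std::condition_variable cv_;
        int readers_ = 0;      // threads holding the lock in shared mode
        bool writer_ = false;  // true while a thread holds it exclusively

    public:
        void lock_shared() {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !writer_; });
            ++readers_;
        }

        void unlock_shared() {
            std::lock_guard<std::mutex> lk(m_);
            if (--readers_ == 0)
                cv_.notify_all();
        }

        void lock() {  // exclusive
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !writer_ && readers_ == 0; });
            writer_ = true;
        }

        void unlock() {
            std::lock_guard<std::mutex> lk(m_);
            writer_ = false;
            cv_.notify_all();
        }
    };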
There is also shared_mutex in Boost (as you already noted), uv_rwlock_t in libuv, and pthread_rwlock in Unix-like OSes.
Related
I am trying to use Boost's upgrade_lock (following this example), but I run into a starvation issue.
I am actually using the code from this post, but I wanted an up-to-date discussion. I run 400 threads after the WorkerKiller and run into the exact same problem as anoneironaut, the author of the mentioned post.
I have seen the proposal from Howard Hinnant, but I don't really want to include more external code (moreover, I cannot get his to compile as of now), and a comment posted 6 months later states that "Boost uses a fair implementation now" (Dec 3 '12).
The Boost 1.55 documentation states that:
Note the lack of reader-writer priority policies in shared_mutex. This is
due to an algorithm credited to Alexander Terekhov which lets the OS decide
which thread is the next to get the lock without caring whether a unique lock or
shared lock is being sought. This results in a complete lack of reader or writer
starvation. It is simply fair.
And the algorithm credited to Alexander Terekhov is the one that Howard Hinnant talks about, so I would expect the Boost 1.55 implementation to behave as in Howard Hinnant's answer, which is not the case. It behaves exactly as in the question.
Why does my WorkerKiller suffer from starvation?
UPDATE: It was observed with this code on:
Debian x64, Boost 1.55 (both the Debian version and one compiled from sources), with both clang++ and g++
Ubuntu x64, Boost 1.54, with both clang++ (3.4-1ubuntu1) and g++ (4.8.1-10ubuntu9)
This is a subtle one. The difference involves the concepts of shared and upgradable ownerships, and their implementations in Boost.
Let's first get the concepts of shared ownership and upgradable ownership sorted out.
For a SharedLockable, a thread must decide beforehand whether it wants to change the object (requiring exclusive ownership) or only read from it (shared ownership suffices). If a thread with shared ownership decides it wants to change the object, it first must release its shared lock on the object and then construct a new, exclusive lock. In between these two steps, the thread holds no locks at all on the object. Attempting to construct an exclusive lock from a thread that already holds a shared lock will deadlock, as the exclusive lock constructor will block until all shared locks have been released.
UpgradeLockable overcomes this limitation by allowing a shared lock to be upgraded to an exclusive lock without releasing it. That is, the thread keeps an active lock on the mutex at all times, prohibiting other threads from obtaining an exclusive lock in the meantime. Beyond that, UpgradeLockable still allows all operations from SharedLockable; the former concept is a superset of the latter. The question you linked to is only concerned with the SharedLockable concept.
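To make the difference concrete, here is a sketch of the upgrade pattern with Boost (the container and function names are made up; boost::shared_mutex is used here because in Boost it also accepts upgrade locks):

    #include <boost/thread/locks.hpp>
    #include <boost/thread/shared_mutex.hpp>
    #include <vector>

    boost::shared_mutex mtx;  // also usable with upgrade locks in Boost
    std::vector<int> data;    // hypothetical shared container

    void append_if_new(int value) {
        // Upgradable ownership: coexists with shared (reader) locks,
        // but only one thread may hold it at a time.
        boost::upgrade_lock<boost::shared_mutex> rlock(mtx);
        if (data.empty() || data.back() != value) {
            // Atomically upgrade to exclusive ownership; the mutex is
            // never released in between, so there is no unlocked gap.
            boost::upgrade_to_unique_lock<boost::shared_mutex> wlock(rlock);
            data.push_back(value);
        }
    }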
Neither concept, as specified by Boost, requires an implementation to be fair. However, the shared_mutex, which is Boost's minimal implementation for a SharedLockable does give the fairness guarantees quoted in your question. Note that this is an additional guarantee to what the concept actually requires.
Unfortunately, the minimal implementation for upgradable ownership, the upgrade_mutex, does not give this additional guarantee. It still implements the shared ownership concept as a requirement for upgradable ownership, but since fairness is not required for a conforming implementation, it does not provide it.
As pointed out by Howard in the comments, Terekhov's algorithm can be trivially adjusted to work with upgradable locks as well, it's just that the Boost implementation does not support this currently.
I'm thinking of using POSIX robust mutexes to protect a shared resource among different processes (on Linux). However, I have some doubts about safety in different scenarios. I have the following questions:
Are robust mutexes implemented in the kernel or in user code?
If the latter, what would happen if a process crashes while in a call to pthread_mutex_lock or pthread_mutex_unlock, while the shared pthread_mutex data structure is being updated?
I understand that if a process holding the mutex dies, a thread in another process will be woken up and its pthread_mutex_lock call will return EOWNERDEAD. However, what would happen if the process dies (in the unlikely case) exactly while the pthread_mutex data structure (in shared memory) is being updated? Will the mutex get corrupted in that case? What would happen to another process mapped to the same shared memory if it were to call a pthread_mutex function?
Can the mutex still be recovered in this case?
This question applies to any pthread object with the PTHREAD_PROCESS_SHARED attribute. Is it safe to call functions like pthread_mutex_lock, pthread_mutex_unlock, pthread_cond_signal, etc. concurrently on the same object from different processes? Are they thread-safe across different processes?
From the man-page for pthreads:
Over time, two threading implementations have been provided by the
GNU C library on Linux:
LinuxThreads
This is the original Pthreads implementation. Since glibc
2.4, this implementation is no longer supported.
NPTL (Native POSIX Threads Library)
This is the modern Pthreads implementation. By comparison
with LinuxThreads, NPTL provides closer conformance to the
requirements of the POSIX.1 specification and better
performance when creating large numbers of threads. NPTL is
available since glibc 2.3.2, and requires features that are
present in the Linux 2.6 kernel.
Both of these are so-called 1:1 implementations, meaning that each
thread maps to a kernel scheduling entity. Both threading
implementations employ the Linux clone(2) system call. In NPTL,
thread synchronization primitives (mutexes, thread joining, and so
on) are implemented using the Linux futex(2) system call.
And from man futex(7):
In its bare form, a futex is an aligned integer which is touched only
by atomic assembler instructions. Processes can share this integer
using mmap(2), via shared memory segments or because they share
memory space, in which case the application is commonly called
multithreaded.
An additional remark found here:
(In case you’re wondering how they work in shared memory: Futexes are keyed upon their physical address)
Summarizing: Linux decided to implement pthreads on top of its "native" futex primitive, which indeed lives in the user process's address space. For process-shared synchronization primitives, that address space is shared memory, so the other processes will still be able to see the futex after one process dies.
What happens in case of process termination? Ingo Molnar wrote an article called Robust Futexes about just that. The relevant quote:
Robust Futexes
There is one race possible though: since adding to and removing from the
list is done after the futex is acquired by glibc, there is a few
instructions window for the thread (or process) to die there, leaving
the futex hung. To protect against this possibility, userspace (glibc)
also maintains a simple per-thread 'list_op_pending' field, to allow the
kernel to clean up if the thread dies after acquiring the lock, but just
before it could have added itself to the list. Glibc sets this
list_op_pending field before it tries to acquire the futex, and clears
it after the list-add (or list-remove) has finished
Summary
Where this leaves you on other platforms is open-ended. Suffice it to say that the Linux implementation, at least, has taken great care to meet our common-sense expectation of robustness.
Seeing that other operating systems usually resort to kernel-based synchronization primitives in the first place, it makes sense to assume that their implementations would be even more naturally robust.
According to the documentation here: http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutexattr_getrobust.html, on a fully POSIX-compliant OS a shared mutex with the robust flag will behave the way you'd expect.
The problem, obviously, is that not all OSes are fully POSIX compliant, not even those claiming to be. Process-shared mutexes, and robust ones in particular, are among those finer points that are often not part of an OS's implementation of POSIX.
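As a sketch of what this looks like on a platform that does implement it (e.g. Linux/NPTL), assuming the mutex itself lives in already-mapped shared memory (the mapping code is omitted):

    #include <cerrno>  // EOWNERDEAD
    #include <pthread.h>

    // Initialise a process-shared, robust mutex (done once, by one process).
    void init_robust_mutex(pthread_mutex_t* m) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
        pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    // Lock the mutex, recovering if the previous owner died holding it.
    int lock_with_recovery(pthread_mutex_t* m) {
        int rc = pthread_mutex_lock(m);
        if (rc == EOWNERDEAD) {
            // We now own the lock, but the protected data may be
            // inconsistent: repair it here, then mark the mutex usable.
            pthread_mutex_consistent(m);
            rc = 0;
        }
        return rc;
    }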
I have to implement kernel-level threads, but while searching on the net I found that there are three ways to create kernel-level threads in Linux:
NPTL
kthread
LinuxThreads
It was written somewhere that LinuxThreads is now abandoned, but I am unable to find the current support status of NPTL and kthread. I am also unable to find any source that simply explains how to use their functionality.
Which is the currently supported and recommended library for kernel-level threads?
Also, please share any resources for installing these libraries and using them.
You are confusing two very different definitions of "kernel thread".
LinuxThreads and NPTL are implementations of POSIX pthreads for user-space processes. They use a 1-to-1 mapping of kernel scheduling entities to user-space threads. They are sometimes described as kernel threads implementations only because they create threads that are scheduled by the kernel.
LinuxThreads is unsupported and entirely obsolete. NPTL is now part of glibc, so you already have it. There's nothing special to install. You use these the same way you use any POSIX threading library, with calls to functions like pthread_create.
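For example, a minimal sketch (compile with -pthread):

    #include <pthread.h>
    #include <cstdio>

    void* worker(void* arg) {
        int id = *static_cast<int*>(arg);
        std::printf("hello from thread %d\n", id);
        return nullptr;
    }

    int main() {
        pthread_t t;
        int id = 1;
        pthread_create(&t, nullptr, worker, &id);  // scheduled by the kernel via NPTL
        pthread_join(t, nullptr);
        return 0;
    }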
Actual kernel threads run kernel code. None of those libraries are relevant since they're all user-space libraries. Have a look at functions like kthread_run. There's no magic, no secret. Write kernel code the way similar kernel code is written. (Knowledge and experience in writing kernel code is needed. It's, unfortunately, not simple.)
I assume that if you really wanted to create a kernel thread, you would already know about these things.
I think you want to create multi-threaded applications and are trying to find info about user-level multi-threading functions.
And yes, the threads you create will be managed by the kernel itself. This is what you are looking for: POSIX Threads
I have four threads, and I need to exchange data among these threads. The function looks like the following:

    threadFunc() {
        processing;
        __sync();
        processing;
    }
Are there any sync functions in Linux that make sure the threads arrive at the same point?
On Windows, I use atomic add and atomic compare to implement __sync(), but I couldn't find the atomic compare function on Linux.
You can use GCC's Atomic builtins to do a compare and swap, but you may want to consider using a pthreads barrier instead. See the documentation for pthread_barrier_init and pthread_barrier_wait for more information. You can also read this pthreads primer for a working example of barrier usage.
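Here is a sketch of the question's __sync() point implemented with a barrier (compile with -pthread; the processing steps are placeholders):

    #include <pthread.h>

    const int kNumThreads = 4;
    pthread_barrier_t barrier;

    void* threadFunc(void*) {
        // ... first processing phase ...
        // Each thread blocks here until all kNumThreads threads arrive.
        pthread_barrier_wait(&barrier);
        // ... second processing phase ...
        return nullptr;
    }

    int main() {
        pthread_barrier_init(&barrier, nullptr, kNumThreads);
        pthread_t threads[kNumThreads];
        for (pthread_t& t : threads)
            pthread_create(&t, nullptr, threadFunc, nullptr);
        for (pthread_t& t : threads)
            pthread_join(t, nullptr);
        pthread_barrier_destroy(&barrier);
        return 0;
    }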
Can somebody show me an example of a locking mechanism based on a futex? (for a multicore x86 CPU, CentOS)
Pthreads' mutexes are implemented using futexes on recent versions of Linux. Pthreads is the standard C threading API on Linux, and is part of the POSIX standard, so you can easily port your program to other Unix-like systems. You should avoid using futexes directly unless you have very unusual needs, because they're very hard to use correctly - use pthreads, or a higher-level, language-specific API (which will almost certainly use pthreads itself).
Have a look at https://github.com/avsm/ipc-bench. They use futex in shared memory pipe implementation.
Specifically, you can check this code.
Working example: pthread mutexes use futex locks.
Code examples: these were made within months of this post in '10 but are still up to date.
http://meta-meta.blogspot.com/2010/11/linux-threading-primitives-futex.html
https://github.com/lcapaldo/futexexamples
Use-case example: IPC and inter-process synchronization are essentially the only reasons to use a futex directly from userspace. pthread mutexes will work for multi-threaded code except in extreme cases, but multi-process code is lacking in high-performance locking mechanisms as well as lock types.
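For completeness, here is a minimal sketch of a futex-based lock along the lines of Ulrich Drepper's paper "Futexes Are Tricky" (Linux-specific; error handling is glossed over, so treat it as an illustration rather than production code):

    #include <atomic>
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    // State: 0 = unlocked, 1 = locked, no waiters, 2 = locked, maybe waiters.
    class futex_lock {
        std::atomic<int> state_{0};

        long futex(int op, int val) {
            // Raw futex syscall on the lock word (assumes std::atomic<int>
            // has the same layout as int, which holds on Linux/glibc).
            return syscall(SYS_futex, reinterpret_cast<int*>(&state_),
                           op, val, nullptr, nullptr, 0);
        }

    public:
        void lock() {
            int c = 0;
            // Fast path: uncontended 0 -> 1.
            if (state_.compare_exchange_strong(c, 1))
                return;
            // Slow path: advertise contention (2) and sleep in the kernel.
            if (c != 2)
                c = state_.exchange(2);
            while (c != 0) {
                futex(FUTEX_WAIT, 2);  // sleep while the word is still 2
                c = state_.exchange(2);
            }
        }

        void unlock() {
            // If the state was 2, someone may be sleeping: wake one waiter.
            if (state_.exchange(0) == 2)
                futex(FUTEX_WAKE, 1);
        }
    };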