Do C++11 locks make a call to the kernel? [duplicate] - multithreading

I have the following situation:
Two C++11 threads are working on a calculation and they are synchronized through a std::mutex.
Thread A keeps the mutex locked until the data is ready for the operation Thread B executes. When the mutex is unlocked, Thread B starts to work.
Thread B tries to lock the mutex and is blocked until it is unlocked by Thread A.
void ThreadA(std::mutex* mtx, char* data)
{
    mtx->lock();
    // do something useful with data
    mtx->unlock();
}

void ThreadB(std::mutex* mtx, char* data)
{
    mtx->lock(); // wait until Thread A is ready
    // do something useful with data
    // .....
}
It is ensured that Thread A locks the mutex first.
Now I am wondering whether the mtx->lock() in Thread B waits actively or passively. That is, does Thread B poll the mutex state and waste processor time, or is it released passively by the scheduler when the mutex is unlocked?
In the different C++ references it is only mentioned that the thread is blocked, but not in which way.
Could it be, however, that the std::mutex implementation is heavily dependent on the platform and OS used?

It's highly implementation-defined, even for the same compiler and OS.
For example, on VC++ in Visual Studio 2010, std::mutex was implemented with a Win32 CRITICAL_SECTION. EnterCriticalSection(CRITICAL_SECTION*) has a nice feature: first it tries to acquire the CRITICAL_SECTION by iterating on the lock again and again. After a specified number of iterations, it makes a kernel call which puts the thread to sleep, only to be woken up again when the lock is released, and the whole deal starts again.
In this case the mechanism polls the lock again and again before going to sleep; only then does control switch to the kernel.
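To illustrate the spin-then-block idea (this is a sketch of the general technique, not the actual CRITICAL_SECTION code; the spin count and the sleep fallback are arbitrary choices of mine, whereas a real implementation would block in the kernel until the owner releases the lock):
#include <atomic>
#include <chrono>
#include <thread>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void spin_then_block_lock()
{
    // Phase 1: actively poll the flag a bounded number of times.
    for (int i = 0; i < 4000; ++i)
        if (!lock_flag.test_and_set(std::memory_order_acquire))
            return;                                   // got the lock while spinning
    // Phase 2: stop burning CPU; this sleep stands in for the kernel-level wait.
    while (lock_flag.test_and_set(std::memory_order_acquire))
        std::this_thread::sleep_for(std::chrono::microseconds(100));
}

void spin_then_block_unlock()
{
    lock_flag.clear(std::memory_order_release);
}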
Visual Studio 2012 came with a different implementation: std::mutex was implemented with a Win32 mutex, which shifts control to the kernel immediately. There is no active polling done by the lock.
You can read about the implementation switch in this answer: std::mutex performance compared to win32 CRITICAL_SECTION
So it is unspecified how the mutex acquires the lock, and it is best not to rely on any particular behaviour.
PS: do not lock the mutex manually; use std::lock_guard instead. Also, you might want to use a condition_variable for a more refined way of controlling your synchronization.
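To make the PS concrete, here is a minimal sketch of the scenario from the question using std::lock_guard and std::condition_variable; the names (data_ready, cv, and so on) are mine, not from the question:
#include <condition_variable>
#include <mutex>

std::mutex mtx;
std::condition_variable cv;
bool data_ready = false;   // the "condition", guarded by mtx

void ThreadA(char* data)
{
    {
        std::lock_guard<std::mutex> lock(mtx);    // unlocks automatically
        // ... prepare data ...
        data_ready = true;
    }
    cv.notify_one();                              // wake Thread B
}

void ThreadB(char* data)
{
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, [] { return data_ready; });     // sleeps until notified
    // ... do something useful with data ...
}
This also removes the dependence on how mtx->lock() behaves, since cv.wait() blocks the thread until it is notified rather than busy-polling.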

Related

What is a safe and easy way to exchange data from a threaded ISR? (Raspberry Pi)

I'm trying to develop a C/C++ userspace application on the Raspberry Pi which processes data coming from an SPI device. I'm using the WiringPi Library (function wiringPiISR) which registers a function (the real interrupt handler) that will be called from a pthreaded interrupt handler on an IRQ event.
I heard that STL containers aren't thread safe, but is it enough to have a mutex lock while executing my callback function and of course a lock in the main thread while accessing the buffer/container there?
My "real interrupt handler" which is registered through wiringPiISR looks like this
std::deque<uint8_t> buffer;
static pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER; // guards buffer
static void callback(uint8_t byte);                        // forward declaration

static void irq_handler()
{
    uint8_t data;
    while (digitalRead(IRQ_PIN) == 0)
    {
        data = spi_txrx(CMD_READBYTE);
        pthread_mutex_lock(&mutex1);
        callback(data);
        pthread_mutex_unlock(&mutex1);
    }
}

static void callback(uint8_t byte)
{
    buffer.push_back(byte);
}
Or is there an easier way to achieve the data exchange between a threaded ISR and main thread?
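For reference, the main-thread side of the mutex approach might look roughly like this (a sketch only; the draining loop and handle_byte are hypothetical names of mine, assuming the same mutex1 guards the deque):
void process_buffered_data()   // runs in the main thread
{
    pthread_mutex_lock(&mutex1);
    while (!buffer.empty())
    {
        uint8_t byte = buffer.front();
        buffer.pop_front();
        pthread_mutex_unlock(&mutex1);  // don't hold the lock while processing
        handle_byte(byte);              // hypothetical processing function
        pthread_mutex_lock(&mutex1);
    }
    pthread_mutex_unlock(&mutex1);
}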
Is that a real ISR?
Anyway, mutexes are not a good fit for an ISR, because they lead to priority inversion.
Let's look at normal mutex usage, with two threads:
1. Thread A runs and takes the mutex.
2. For some reason, thread A is preempted and thread B executes.
3. Thread B tries to take the mutex, but can't.
4. Thread B is put to sleep, allowing another thread to run, for instance thread C or thread A.
...
5. At some point, thread A will be rescheduled, resume its operation, and release the mutex.
6. When thread B is scheduled again, it takes the mutex.
Now the scenario is very different when it comes to an ISR. An ISR won't be put to sleep in favor of a lower-priority thread, so the mutex-owning thread will not run while you are in the ISR, and you will never get out of point 3.
So the real question is: "When running an IRQ handler, is it possible for other code to run?" Otherwise you are in a deadlock!

How to manage a shared POSIX semaphore with async signals in a multithreaded application

I have to write a thread-safe library that uses a POSIX semaphore (used as a mutex with initial value = 1) for synchronization. I ran into some problems managing async signals correctly. I have an application that links against this static library, and the (multi-threaded) application calls the library's functions. Access to some internal structures is controlled by a POSIX semaphore (internal to the library):
void library_func1(lib_handler *h)
{
    sem_wait(sem);
    /* do some stuff with global data */
    sem_post(sem);
}

void library_func2(lib_handler *h)
{
    sem_wait(sem);
    /* do some stuff with global data */
    sem_post(sem);
}

void library_close(lib_handler *h)
{
    ...
}
What happens if an async signal, let's say SIGINT, is raised while one thread is holding the semaphore? If I relaunch the application I'll have a deadlock, because the semaphore still exists and its value is 0. There is a function library_close that could release the semaphore when the async signal is raised, but what is the best way to do and check this (I think that function would be signal-safe only if followed by exit)? In a multi-threaded application it is usually good practice to have a single thread manage all signals: should this thread be in the library, or is it OK to launch it in the application?
Thank you all.
Linux futexes had the same problem. It is not fully solvable, but what you can do is write the pid of the process locking the semaphore somewhere in the same shared memory region. If another process tries to lock the semaphore and it is taking too long (for some value of 'too long'), it finds out which process has the semaphore locked by reading the pid from the shared memory. If that process no longer exists, you know you are in a deadlock (and you should probably just die, since the library's internal data may be in an inconsistent state).
There's still a small race with this, as the process taking the lock may die just after locking but before writing its pid. AFAIK there's no way to avoid this using semaphores. (It might work if you have a lock implementation where the pid is written to the lock variable atomically on acquire, but you would probably need to write this yourself.)
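A minimal sketch of that idea, assuming a shared-memory struct holding both the semaphore and the owner pid (the struct layout, the 5-second timeout and the function name are my own choices):
#include <semaphore.h>
#include <signal.h>
#include <time.h>
#include <errno.h>
#include <unistd.h>

/* Lives in a shared memory region (e.g. created with shm_open + mmap). */
struct shared_lock {
    sem_t sem;          /* initialised to 1 */
    pid_t owner;        /* pid of the current holder, 0 if free */
};

/* Returns 0 on success, -1 if the previous owner appears to be dead. */
int try_lock_or_detect_dead_owner(struct shared_lock *l)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    ts.tv_sec += 5;                          /* "too long" = 5 seconds */

    while (sem_timedwait(&l->sem, &ts) == -1) {
        if (errno == ETIMEDOUT) {
            pid_t owner = l->owner;
            if (owner != 0 && kill(owner, 0) == -1 && errno == ESRCH)
                return -1;                   /* holder died with the lock */
            clock_gettime(CLOCK_REALTIME, &ts);
            ts.tv_sec += 5;                  /* holder is alive: keep waiting */
        } else if (errno != EINTR) {
            return -1;
        }
    }
    l->owner = getpid();                     /* note the race mentioned above */
    return 0;
}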
The state of a static library doesn't carry over between different runs of the app and isn't shared by other apps using it; it's part of the state of the application that's using it. So your semaphore won't be left in a wonky state.

Can multithreaded code possible deadlock be avoided this way?

We know that multi-threaded code has the bane of possible deadlocks if a thread acquires a mutex lock but, before it gets a chance to release it, gets suspended by the main thread or pre-empted by the scheduler.
I am a beginner in using pthread library so please bear with me if my below query/proposed solution might be unfeasible or outright wrong.
void main()
{
    thread_create(T1, NULL, thr_function, NULL);
    suspend_thread(T1);
    acquire_lock(Lock1); // <--- possible deadlock if thr_function acquired Lock1
                         //      before main, and main suspended T1 before its release
    // do something further
}

void *thr_function(void *val)
{
    // do something
    acquire_lock(Lock1);
    // do some more things
    // do some more things
    release_lock(Lock1);
}
In the pseudo-code segment above, can't the thread runtime/compiler work together to make sure that if a thread which has acquired a mutex lock is suspended/pre-empted, it executes some 'cleanup code' that releases all the locks it holds before it goes out? The compiler/linker can identify the places inside a thread function which acquire and release a lock; then, when a thread is suspended between those two places (i.e. after the acquire but before the release), execution in the thread function would jump, via some kind of 'goto label;' inserted by the runtime, to a label where the thread releases the lock, and only then would the thread be blocked or the context switch happen. [I know that if a thread acquires more than one lock it might get messy to jump across those points to release those locks...]
But the basic idea/question is: can the thread function not do the necessary releases of acquired locks (mutexes, semaphores) before it gets blocked or leaves the running state?
No. The reason a thread holds a lock is so that it can make data temporarily inconsistent or see a consistent view of that data itself. If some scheme were to automatically release that lock before the thread made the data consistent again, other threads would acquire the lock, see the inconsistent data, and fail. Or when that thread was resumed, it would either not have the lock or have the lock and see inconsistent data itself. This is why you can only reliably suspend a thread with that thread's cooperation.
Consider this logic to add an object to a linked list protected by a mutex:
1. Acquire the lock protecting a linked list.
2. Modify the list's head pointer.
3. Modify the object's next pointer.
4. Release the lock.
Now imagine if something were to suspend the thread between steps 2 and 3. If the lock were released, other threads would see the list's head pointer pointing to an object that had not been linked to the list. And when the thread resumed, it might set the object to the wrong pointer because the list had changed.
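A sketch of those four steps in code (the Node type, the list variable and push_front are names of mine), marking the window where a forced suspension would leave the list inconsistent:
#include <pthread.h>

struct Node { Node* next; /* payload ... */ };

Node* head;                        // the list, protected by list_lock
pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

void push_front(Node* obj)
{
    pthread_mutex_lock(&list_lock);    // step 1
    Node* old_head = head;
    head = obj;                        // step 2: head now points at obj...
    // <-- a forced suspension here leaves the list inconsistent:
    //     obj is reachable from head but obj->next is still garbage
    obj->next = old_head;              // step 3: ...only now is obj linked in
    pthread_mutex_unlock(&list_lock);  // step 4
}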
The general consensus is that suspending threads is so evil that even a feeling that you might want to suspend a thread suggests an incorrect application design. There is practically no reason a properly-designed application would ever want to suspend a thread. (If you didn't want that thread to continue doing the work it was doing, why did you code it to continue doing that work in the first place?)
By the way, scheduler pre-emption is not a problem. Eventually, the thread will be scheduled again and release the lock. So long as there are other threads that can make forward progress, no harm is done. And if there are no other threads that can make forward progress, the only thing the system can do is schedule the thread that was pre-empted.
One way to avoid this kind of deadlock is to have a global, mutex-protected variable should_stop_thread which eventually gets set to true by the master thread.
The child thread checks the variable regularly and terminates in a controlled manner if it is true. "Controlled" in this sense means that all data (pointers) are valid (again) and mutex locks are released.
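A minimal C++11 sketch of that pattern, assuming the master thread joins the worker after setting the flag (all names are illustrative):
#include <mutex>
#include <thread>

std::mutex flag_mtx;
bool should_stop_thread = false;   // protected by flag_mtx

bool stop_requested()
{
    std::lock_guard<std::mutex> lock(flag_mtx);
    return should_stop_thread;
}

void worker()
{
    while (!stop_requested()) {
        // do one bounded unit of work, leaving all data consistent
        // and all locks released before checking the flag again
    }
}

int main()
{
    std::thread t(worker);
    // ... decide the worker should finish ...
    { std::lock_guard<std::mutex> lock(flag_mtx); should_stop_thread = true; }
    t.join();                      // controlled termination instead of suspension
}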

synchronising threads with mutexes

In Qt, I have a method which contains a mutex lock and unlock. The problem is that when the mutex is unlocked, it sometimes takes a long time before the other thread gets the lock back. In other words, it seems the same thread can get the lock back (the method is called in a loop) even though another thread is waiting for it. What can I do about this? One thread is a QThread and the other thread is the main thread.
You can have the thread that just unlocked the mutex relinquish the processor. On POSIX, you do that by calling pthread_yield(), and on Windows by calling Sleep(0).
That said, there is no guarantee that the thread waiting on the lock will be scheduled before your thread wakes up again.
It shouldn't be possible to release a lock and then get it back if some other thread is already waiting on it.
Check that you are actually releasing the lock when you think you do. Check that the waiting thread actually waits (and doesn't spin in a loop of trylock tests and sleeps; I actually did that once and was very puzzled at first :)).
Or, if the waiting thread really never gets time to even reach the locking code, try QThread::yieldCurrentThread(). This will stop the current thread and give the scheduler a chance to give execution to somebody else. It might cause unnecessary switching, depending on how tight your loop is.
If you want to make sure that one thread has priority over the others, an option is to use a QReadWriteLock. It's suited to the typical scenario where n threads read a value in an infinite loop while only one thread updates it, which I think is the scenario you described.
QReadWriteLock offers two ways to lock: lockForRead() and lockForWrite(). The reader threads depending on the value will use lockForRead(), while the thread updating the value (typically via the GUI) will use lockForWrite() and will have priority. You won't need to sleep or yield or anything.
Example code
Let's say you have a QReadWriteLock lock; somewhere.
"Reader" thread
forever {
lock.lockForRead();
if (condition) {
do_stuff();
}
lock.unlock();
}
"Writer" thread
// external input (eg. user) changes the thread
lock.lockForWrite(); // will block as soon as the reader lock ends
update_condition();
lock.unlock();

How do I suspend another thread (not the current one)?

I'm trying to implement a simulation of a microcontroller. This simulation is not meant to do a clock cycle precise representation of one specific microcontroller but check the general correctness of the code.
I thought of having a "main thread" executing normal code and a second thread executing ISR code. Whenever an ISR needs to be run, the ISR thread suspends the "main thread".
Of course, I want to have a feature to block interrupts.
I thought of solving this with a mutex that the ISR thread holds whenever it executes ISR code while the main thread holds it as long as "interrupts are blocked".
A POR (power on reset) can then be implemented by not only suspending but killing the main thread (and starting a new one executing the POR function).
The Windows API provides the necessary functions.
But it seems to be impossible to do the above with POSIX threads (on Linux).
I don't want to change the actual hardware independent microcontroller code. So inserting anything to check for pending interrupts is not an option.
Receiving interrupts at non well behaved points is desirable, as this also happens on microcontrollers (unless you block interrupts).
Is there a way to suspend another thread on Linux? (Debuggers must use that option somehow, I think.)
Please, don't tell me this is a bad idea. I know that is true in most circumstances. But the main code does not use standard libs or lock/mutexes/semaphores.
SIGSTOP does not work - it always stops the entire process.
Instead you can use some other signals, say SIGUSR1 for suspending and SIGUSR2 for resuming:
// At process start, call init_pthread_suspending to install the handlers.
// To suspend a thread: pthread_kill(thread_id, SUSPEND_SIG)
// To resume a thread:  pthread_kill(thread_id, RESUME_SIG)
#include <signal.h>

#define RESUME_SIG  SIGUSR2
#define SUSPEND_SIG SIGUSR1

static sigset_t wait_mask;
static __thread int suspended; // per-thread flag

void resume_handler(int sig)
{
    suspended = 0;
}

void suspend_handler(int sig)
{
    if (suspended) return;
    suspended = 1;
    do sigsuspend(&wait_mask); while (suspended);
}

void init_pthread_suspending()
{
    struct sigaction sa;
    sigfillset(&wait_mask);
    sigdelset(&wait_mask, SUSPEND_SIG);
    sigdelset(&wait_mask, RESUME_SIG);
    sigfillset(&sa.sa_mask);
    sa.sa_flags = 0;
    sa.sa_handler = resume_handler;
    sigaction(RESUME_SIG, &sa, NULL);
    sa.sa_handler = suspend_handler;
    sigaction(SUSPEND_SIG, &sa, NULL);
}
I am very annoyed by replies like "you should not suspend another thread, that is bad".
Guys, why do you assume others are idiots and don't know what they are doing? Imagine that others, too, have heard about deadlocking and still, in full consciousness, want to suspend other threads.
If you don't have a real answer to their question, why do you waste your and the readers' time?
And yes, IMO pthreads are a very short-sighted API, a disgrace for POSIX.
The HotSpot Java VM uses SIGUSR2 to implement suspend/resume for Java threads on Linux.
A procedure based on a signal handler for SIGUSR2 might be:
Providing a signal handler for SIGUSR2 allows a thread to request a lock (which has already been acquired by the signal-sending thread).
This suspends the thread.
As soon as the suspending thread releases the lock, the signal handler can (and will?) get the lock. The signal handler releases the lock immediately and leaves the signal handler.
This resumes the thread.
It will probably be necessary to introduce a control variable to make sure that the main thread is in the signal handler before starting the actual processing of the ISR.
(The details depend on whether the signal handler is called synchronously or asynchronously.)
I don't know if this is exactly how it is done in the Java VM, but I think the above procedure does what I need.
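A rough sketch of that procedure (the names isr_lock, in_handler and run_isr are mine; the SIGUSR2 handler must be installed with sigaction() at startup, and note that locking a mutex inside a signal handler is not formally async-signal-safe):
#include <pthread.h>
#include <signal.h>

static pthread_mutex_t isr_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile sig_atomic_t in_handler = 0;

// Installed for SIGUSR2 in the main ("microcontroller") thread.
static void suspend_for_isr(int sig)
{
    in_handler = 1;                     // control variable: we are parked
    pthread_mutex_lock(&isr_lock);      // blocks until the ISR thread unlocks
    pthread_mutex_unlock(&isr_lock);    // release immediately...
    in_handler = 0;                     // ...and return, resuming normal code
}

// Called by the ISR thread to run one interrupt against the main thread.
// If the main thread itself holds isr_lock, interrupts are effectively blocked.
static void run_isr(pthread_t main_thread, void (*isr)(void))
{
    pthread_mutex_lock(&isr_lock);      // take the lock first
    pthread_kill(main_thread, SIGUSR2); // ask the main thread to park itself
    while (!in_handler) { }             // wait until it is inside the handler
    isr();                              // main thread is stopped: run the ISR
    pthread_mutex_unlock(&isr_lock);    // handler returns, main thread resumes
}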
Somehow I think sending the other thread SIGSTOP works.
However, you are far better off writing some thread communication involving semaphores, mutexes and global variables.
You see, if you suspend the other thread in malloc() and you call malloc() -> deadlock.
Did I mention that lots of C standard library functions, let alone other libraries you use, will call malloc() behind your back?
EDIT:
Hmmm, no standard library code. Maybe use setjmp/longjmp() from a signal handler to simulate the POR, and a signal handler to simulate the interrupt.
TO THOSE WHO KEEP DOWNVOTING THIS: The answer was accepted for the contents after EDIT, which is a specific scenario that cannot be used in any other scenario.
Solaris has the thr_suspend(3C) call that would do what you want. Is switching to Solaris a possibility?
Other than that, you're probably going to have to do some gymnastics with mutexes and/or semaphores. The problem is that you'll only suspend when you check the mutex, which will probably be at a well-behaved point. Depending on what you're actually trying to accomplish, this might not be desirable.
It makes more sense to have the main thread execute the ISRs - because that's how the real controller works (presumably). Just have it check after each emulated instruction whether there is an interrupt pending and interrupts are currently enabled - if so, emulate a call to the ISR.
The second thread is still used - but it just listens for the conditions which cause an interrupt, and marks the relevant interrupt as pending (for the other thread to pick up later).
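A minimal sketch of that polling loop, assuming an emulated-instruction stepper (emulate_one_instruction, run_pending_isr and the flag names are placeholders of mine):
#include <atomic>

std::atomic<bool> pending_irq{false};     // set by the listener thread
bool irq_enabled = true;                  // the emulated interrupt-enable flag

void emulate_one_instruction();           // the existing emulation step
void run_pending_isr();                   // emulate the call into the ISR

void cpu_loop()
{
    for (;;) {
        emulate_one_instruction();
        // Check after every emulated instruction, as suggested above.
        if (irq_enabled && pending_irq.exchange(false))
            run_pending_isr();
    }
}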
