threads and locks - multithreading

I do not know anything about multithreaded programming, so I wanted to post a general question here. How can I do the following:
main()
run MyMethod every 30 seconds
MyMethod()
1. get data
2. do calculations
3. save result into file
How can I make sure that I finish saving results (MyMethod step 3) before main starts running MyMethod again? Basically I have to lock that thread somehow until MyMethod is done. Feel free to use any language as an example; I'm more interested in the concept of how such things are done in reality.
Thanks

You don't need synchronization. You only need to make sure the thread's work is completed before starting the next cycle, since saving happens at the end.
#include <thread>
#include <unistd.h>

void MyMethod() {
    // 1. get data  2. do calculations  3. save result into file
}

void run() {
    std::thread thrd(MyMethod);
    sleep(30);   // wait at least 30 seconds...
    thrd.join(); // ...then block until MyMethod has finished saving
}

int main() {
    while (true)
        run();
}

Related

Do QThreads run in parallel?

I have two threads running and they simply print a message. Here is a minimal example.
Here is my Header.h:
#pragma once
#include <QtCore/QThread>
#include <QtCore/QDebug>
class WorkerOne : public QObject {
Q_OBJECT
public Q_SLOTS:
void printFirstMessage() {
while (1) {
qDebug() << "<<< Message from the FIRST worker" << QThread::currentThreadId();
}
}
};
class WorkerTwo : public QObject {
Q_OBJECT
public Q_SLOTS:
void printSecondMessage() {
while (1) {
qDebug() << ">>> Message from the SECOND worker" << QThread::currentThreadId();
}
}
};
And, of course, my main:
#include <QtCore/QCoreApplication>
#include "Header.h"
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
WorkerOne kek1;
QThread t1;
kek1.moveToThread(&t1);
t1.setObjectName("FIRST THREAD");
QThread t2;
WorkerTwo kek2;
kek2.moveToThread(&t2);
t2.setObjectName("SECOND THREAD");
QObject::connect(&t1, &QThread::started, &kek1, &WorkerOne::printFirstMessage);
QObject::connect(&t2, &QThread::started, &kek2, &WorkerTwo::printSecondMessage);
t1.start();
t2.start();
return a.exec();
}
When I start application I see an expected output of it:
As you can see, the thread ids are different. They were printed to make sure the workers run on different threads.
I set a single breakpoint in printFirstMessage and run the application in debug mode attached to the debugger. Once the debugger stops at my breakpoint, I wait for a while and press Continue, so the debugger stops at the same breakpoint again.
What do I expect to see? I expect to see only one <<< Message from the FIRST worker and a lot of messages from the second worker. But what do I see? I see only two messages: the first one from the first worker and the second one from the second worker.
I pressed Continue a lot of times and the result is more or less the same. That's weird to me, because I expected the second thread to be running while the first one is stopped by debugger.
I decided to test it using std::thread and wrote the following code:
#include <thread>
#include <iostream>
void foo1() {
while (true) {
std::cout << "Function ONE\n";
}
}
void foo2() {
while (true) {
std::cout << "The second function\n";
}
}
int main() {
std::thread t1(&foo1);
std::thread t2(&foo2);
t1.join();
t2.join();
}
I set a breakpoint in the first function and start the app; after stopping at the breakpoint I hit Continue and see that the console contains a lot of messages from the second function and only one from the first (exactly what I expected with QThread as well):
Could someone explain how this works with QThread? By the way, I tested it using QtConcurrent::run instead of QThread and the result was as expected: the second function keeps running while the first one is stopped at a breakpoint.
Yes, multiple QThread instances are allowed to run in parallel. Whether they effectively run in parallel is up to your OS and depends on multiple factors:
The number of physical (and logical) CPU cores. This is typically not more than 4 or 8 on a consumer computer, and it is the maximum number of threads (including the threads of other programs and the OS itself) that can effectively run in parallel. The number of cores is much lower than the number of threads typically running on a computer. If your computer has only 1 core, you can still use multiple QThreads, but the OS scheduler will alternate between executing those threads. QThread::idealThreadCount can be used to query the number of (logical) CPU cores.
Each thread has a QThread::Priority. The OS thread scheduler may use this value to prioritize (or de-prioritize) one thread over another. A thread with a lower priority may get less CPU time than a thread with a higher priority when the CPU cores are busy.
The (workload on the) other threads that are currently running.
Debugging your program definitely alters the normal execution of a multithreaded program:
Interrupting and continuing a thread has a certain overhead. In the meantime, the other threads may still/already perform some operations.
As pointed out by G.M., most of the time all threads are interrupted when a breakpoint is hit. How fast the other threads are interrupted is not well defined.
Often a debugger has a configuration option to interrupt only a single thread while the others continue running; see e.g. this question.
The number of loops that are executed while the other thread is interrupted/started again, depends on the number of CPU instructions that are needed to perform a single loop. Calling qDebug() and QThread::currentThreadId() is definitely slower than a single std::cout.
Conclusion: you don't have any hard guarantee about the scheduling of a thread. However, in normal operation, both threads will get almost the same amount of CPU time on average, as the OS scheduler has no reason to favor one over the other. Using a debugger completely alters this normal behavior.

deadlock using condition variable

I have a question about condition_variable.
Having this code
#include <iostream>
#include <thread>
#include <chrono>
#include <mutex>
#include <condition_variable>
std::condition_variable cv;
std::mutex mut;
int value;
void sleep() {
std::unique_lock<std::mutex> lock(mut);
// sleep forever
cv.notify_one();
}
int main ()
{
std::thread th (sleep);
std::unique_lock<std::mutex> lck(mut);
if(cv.wait_for(lck,std::chrono::seconds(1))==std::cv_status::timeout) {
std::cout << "failed" << std::endl;
}
th.join();
return 0;
}
How to resolve this deadlock
Why the wait_for blocks even after the 1 sec.
Is the mut necessary for the thread th ?
Thanks.
Why the wait_for blocks even after the 1 sec?
How do you know that the main thread ever makes it to the cv.wait_for(...) call?
It's not 100% clear what you are asking, but if the program never prints "failed," and you are asking why not, then probably what happened is this: the child thread locked the mutex first and then "slept forever" while keeping the mutex locked. If that happened, the main thread would never get past the std::unique_lock<std::mutex> lck(mut); line.
Is the mut necessary for the thread th ?
That depends. You certainly don't need to lock a mutex in a thread that does nothing but "// sleep forever," but maybe the thread you are asking about is not exactly what you showed. Maybe you are really asking how wait() and notify() are supposed to be used.
I can't give a C++-specific answer, but in most programming languages and libraries, wait() and notify() are low-level primitives that are meant to be used in a very specific way:
A "consumer" thread waits by doing something like this:
mutex.lock();
while ( ! SomeImportantCondition() ) {
cond_var.wait(mutex);
}
DoSomethingThatRequiresTheConditionToBeTrue();
mutex.unlock()
The purpose of the mutex is to protect the shared data that SomeImportantCondition() tests. No other thread should be allowed to change the value that SomeImportantCondition() returns while the mutex is locked.
Also, you may already know this, but some readers might not; The reason why mutex is given in cond_var.wait(mutex) is because the wait function temporarily unlocks the mutex while it is waiting, and then it re-locks the mutex before it returns. The unlock is necessary so that a producer thread will be allowed to make the condition become true. Re-locking is needed to guarantee that the condition still will be true when the consumer accesses the shared data.
The third thing to note is that the consumer does not wait() if the condition already is true. A common newbie mistake is to unconditionally call cv.wait() and expect that a cv.notify() call in some other thread will wake it up. But a notify() will not wake the consumer thread if it happens before the consumer starts waiting.
Writing the "producer" is easier. There's no technical reason why a "producer" can't just call cond_var.notify() without doing anything else at all. But that's not very useful. Most producers do something like this:
mutex.lock();
... Do something that makes SomeImportantCondition() return true;
cond_var.notify();
mutex.unlock();
The only really important thing is that the producer locks the mutex before it touches the shared data that are tested by SomeImportantCondition(). Some programming languages will let you move the notify() call after the unlock() call. Others won't. It doesn't really matter either way.

Is the following code thread unsafe? If so, how can I make a possible result more likely to come out?

Is the screen output of the following program deterministic? My understanding is that it is not, as it could be either 1 or 2 depending on whether the latest thread to pick up the value of i picks it up before or after the other thread has written 1 into it.
On the other hand, I keep seeing the same output, as if each thread waits for the previous one to finish: I get 2 on screen in this case, or 100 if I create similar threads from t1 to t100 and join them all.
If the answer is no, the result is not deterministic, is there a way with a simple toy program to increase the odds that the one of the possible results comes out?
#include <iostream>
#include <thread>
int main() {
int i = 0;
std::thread t1([&i](){ ++i; });
std::thread t2([&i](){ ++i; });
t1.join();
t2.join();
std::cout << i << '\n';
}
(I'm compiling and running it like this: g++ -std=c++11 -lpthread prova.cpp -o exe && ./exe.)
You are always seeing the same result because the first thread starts and finishes its work before the second one even begins. This narrows the window for a race condition to occur.
But ultimately, there is still a chance that it occurs, because the ++ operation is not atomic (read value, then increment, then write).
If the two threads run at the same time (e.g., thread 1 slowed down because the CPU is busy), they can both read the same value, and the final result will be 1.

Data sharing between threads in C++

Originally coming from Java, I'm having problems with data sharing between 2 threads in C++11. I have thoroughly read through the multithreading posts here without success, and I would simply like to know why my approach is not valid C++ for multithreading.
My application in short:
I have one thread reading a hardware sensor and dumping that data to a shared data monitor
I want another thread listening for data changes on that very monitor and drawing some graphical stuff based on the new data (yes, I'm using a condition variable in my monitor)
Below is my Main class with the main method:
#include <cstdlib>
#include <iostream>
#include <thread>
#include <sweep/sweep.hpp>
#include <pcl/ModelCoefficients.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/features/normal_3d.h>
#include "include/LiDAR.h"
#include "include/Visualizer.h"
void run_LiDAR(LiDAR* lidar){
lidar->run();
}
void visualize(Visualizer* visualizer){
visualizer->run();
}
int main(int argc, char* argv[]) try {
Monitor mon; //The monitor holding shared data
LiDAR sensor(&mon); //Sensor object dumping data to the monitor
Visualizer vis(&mon); //Visualizer listening to data changes and updates the visuals accordingly
std::thread sweep_thread(run_LiDAR, &sensor); //Starting the LiDAR thread
std::cout << "Started Sweep thread" << std::endl;
std::thread visualizer_thread(visualize, vis);
std::cout << "Started Visualizer thread" << std::endl;
while(1){
//Do some calculations on the data in Monitor mon
mon.cluster();
}
} catch (...) { // a function-try-block requires a catch clause to compile
return 1;
}
The sensor thread dumping the data works well, and so does the main thread running the clustering algorithm. However, I get the following error message:
In file included from MY_DIRECTORY/Main.cpp:3: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include/c++/v1/thread:336:5: error: attempt to use a deleted function
__invoke(_VSTD::move(_VSTD::get<1>(__t)), _VSTD::move(_VSTD::get<_Indices>(__t))...);
If I comment the line:
std::thread visualizer_thread(visualize, vis);
My program builds and works...
What am I not getting?
Kind regards,
What is happening is that std::thread copies its arguments into the new thread, and Visualizer has no usable copy or move constructor, so
std::thread visualizer_thread(visualize, vis);
fails to compile. Besides, visualize() expects a pointer, so pass the address, as you already did for the sensor thread:
std::thread visualizer_thread(visualize, &vis);
As an aside, you should make sure you have a mechanism to end your threads in an orderly manner, since the data (sensor, vis) will be destroyed when main() exits, leaving the threads reading/writing deallocated data on the stack!
Dynamic allocation via std::unique_ptr or std::shared_ptr (which are movable) can also eliminate the copy issue.

Thread, ANSI C signal and Qt

I'm writing a multithreaded, plugin-based application. I will not be the author of the plugins, so I would like to prevent the main application from crashing because of a segmentation fault in a plugin. Is that possible? Or does a crash in a plugin definitely compromise the main application as well?
I wrote a sketch program using Qt because my "real" application is heavily based on the Qt library. As you can see, I forced the thread to crash by calling trimmed() on a non-allocated QString. The signal handler is correctly called, but after the thread is forced to quit the main application crashes too. Did I do something wrong? Or, as I asked before, is what I'm trying to do simply not achievable?
Please note that in this simplified version of the program I avoided plugins and used only a thread. Introducing plugins will add a new critical level, I suppose. I want to go step by step and, above all, understand whether my goal is feasible. Thanks a lot for any help or suggestions.
#include <QString>
#include <QThread>
#include <csignal>
#include <QtGlobal>
#include <QtCore/QCoreApplication>
class MyThread : public QThread
{
public:
static void sigHand(int sig)
{
qDebug("Thread crashed");
QThread* th = QThread::currentThread();
th->exit(1);
}
MyThread(QObject * parent = 0)
:QThread(parent)
{
signal(SIGSEGV,sigHand);
}
~MyThread()
{
signal(SIGSEGV,SIG_DFL);
qDebug("Deleted thread, restored default signal handler");
}
void run()
{
QString* s;
s->trimmed();
qDebug("Should not reach this point");
}
};
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
MyThread th(&a);
th.run();
while (th.isRunning());
qDebug("Thread died but main application still on");
return a.exec();
}
I'm currently working on the same issue and found this question via google.
There are several reasons your source is not working:
There is no new thread. A thread is only created if you call QThread::start. Instead you call MyThread::run directly, which executes the run method in the main thread.
You call QThread::exit to stop the thread, but it does not directly stop a thread; it asks the thread's event loop to stop. Since there is neither a separate thread nor an event loop, the call has no effect. Even if you had called QThread::start, it would not work, since writing a run method does not create a Qt event loop. To be able to use exit with any QThread, you would need to call QThread::exec first.
However, QThread::exit is the wrong method anyway. To prevent the SIGSEGV from recurring, the thread must be stopped immediately, not after its event loop processes the request. So although generally frowned upon, in this case QThread::terminate would have to be called.
But it is generally considered unsafe to call complex functions like QThread::currentThread, QThread::exit or QThread::terminate from signal handlers (they are not async-signal-safe), so you should never call them there.
Since the thread is still running after the signal handler (and I'm not sure even QThread::terminate would kill it fast enough), the signal handler returns to where it was called from, so it re-executes the instruction causing the SIGSEGV, and the next SIGSEGV occurs.
Therefore I used a different approach: the signal handler changes the register containing the instruction address to point at another function, which then runs after the signal handler exits, instead of the crashing instruction. Like:
void signalHandler(int type, siginfo_t* si, void* ccontext) {
    // platform-specific: on Linux/x86 the instruction pointer lives in
    // uc_mcontext.gregs[REG_EIP] (REG_RIP on x86-64); other platforms differ
    static_cast<ucontext_t*>(ccontext)->uc_mcontext.gregs[REG_EIP] =
        reinterpret_cast<greg_t>(&recoverFromCrash);
}
struct sigaction sa;
memset(&sa, 0, sizeof(sa));
sa.sa_flags = SA_SIGINFO;
sa.sa_sigaction = &signalHandler;
sigaction(SIGSEGV, &sa, 0);
The recoverFromCrash function is then run in the thread that caused the SIGSEGV. Since the signal handler is called for every SIGSEGV, from every thread, the function has to check which thread it is running in.
However, I did not consider it safe to simply kill the thread, since there might be other things depending on a running thread. So instead of killing it, I let it run in an endless loop (calling sleep to avoid wasting CPU time). Then, when the program is closed, it sets a global variable and the thread is terminated. (Notice that the recover function must never return, since otherwise execution would return to the function that caused the SIGSEGV.)
Called from the main thread, on the other hand, it starts a new event loop to keep the program running:
if (QThread::currentThread() != QCoreApplication::instance()->thread()) {
//sub thread
QThread* t = QThread::currentThread();
while (programIsRunning) ThreadBreaker::sleep(1);
ThreadBreaker::forceTerminate();
} else {
//main thread
while (programIsRunning) {
QApplication::processEvents(QEventLoop::AllEvents);
ThreadBreaker::msleep(1);
}
exit(0);
}
ThreadBreaker is a trivial wrapper class around QThread, since msleep, sleep and setTerminationEnabled (which has to be called before terminate) of QThread are protected and could not be called from the recover function.
But this is only the basic picture. There are a lot of other things to worry about: catching SIGFPE; catching stack overflows (check the address of the SIGSEGV, run the signal handler on an alternate stack); a bunch of defines for platform independence (64-bit, ARM, Mac); showing debug messages (trying to get a stack trace, wondering why calling gdb for it crashes the X server, wondering why calling glibc backtrace crashes the program)...
