Make scope of lock smaller in threadsafe queue - multithreading

I have a DLL with high-priority functionality that runs in a high-priority thread. This DLL needs to report progress, and a callback system is used for that. The issue is that the DLL has no control over how long the callback takes to complete, which means the high-priority functionality depends on the implementation of the callback. That is not acceptable.
The idea is to have a class in between that buffers the progress notifications and calls the callback.
I'm new to C++11 and its threading facilities and am trying to discover the possibilities. I have an implementation, but I see at least one issue: when the thread wakes up after the wait, the mutex is reacquired and stays acquired until the next wait. That means the lock is held for as long as the lengthy operation runs, so ReportProgress will block. Basically a lot of code for no gain. I thought of changing the code to this, but I don't know if it is a correct implementation:
Progress progress = queue.front();
queue.pop();
lock.unlock();
// Do lengthy operation with progress
lock.lock();
I think I need to wait on the condition variable, but that wait should not keep the queue locked for the whole lengthy operation. I don't see how this can be done. Pass a dummy lock and use a different lock to protect the queue? How should this problem be tackled in C++11?
header file
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <queue>
#include "Error.h"
#include "TypeDefinitions.h"
struct Progress
{
StateDescription State;
uint8 ProgressPercentage;
};
class ProgressIsolator
{
public:
ProgressIsolator();
virtual ~ProgressIsolator();
void ReportProgress(const Progress& progress);
void Finish();
private:
std::atomic<bool> shutdown;
std::condition_variable itemAvailable;
std::mutex mutex;
std::queue<Progress> queue;
std::thread worker;
void Work();
};
cpp file
#include "ProgressIsolator.h"
ProgressIsolator::ProgressIsolator() :
shutdown(false),
itemAvailable(),
worker([this]{ Work(); })
{
// TODO: only continue when worker thread is ready and listening?
}
ProgressIsolator::~ProgressIsolator()
{
Finish();
worker.join();
}
void ProgressIsolator::ReportProgress(const Progress& progress)
{
std::unique_lock<std::mutex> lock(mutex);
queue.push(progress);
itemAvailable.notify_one();
}
void ProgressIsolator::Finish()
{
{
std::lock_guard<std::mutex> lock(mutex); // set shutdown under the lock so the worker cannot miss the notification between its predicate check and its wait
shutdown = true;
}
itemAvailable.notify_one();
}
void ProgressIsolator::Work()
{
std::unique_lock<std::mutex> lock(mutex);
while (!shutdown)
{
itemAvailable.wait(lock);
while (!queue.empty())
{
Progress progress = queue.front();
queue.pop();
// Do lengthy operation with progress
}
}
}

A version of Work() that releases the lock while handling an item could look like this:
void ProgressIsolator::Work()
{
while (!shutdown)
{
Progress progress;
{
std::unique_lock<std::mutex> lock(mutex);
// Also wake on shutdown; otherwise Finish() would never end the wait when the queue is empty.
itemAvailable.wait(lock, [this] { return shutdown || !queue.empty(); });
if (queue.empty())
break; // woken for shutdown with nothing left to report
progress = queue.front();
queue.pop();
}
// Do lengthy operation with progress, with the lock released
}
}
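A possible refinement, sketched here under the assumption that the class members are the ones shown above: instead of popping one item per lock acquisition, the worker can move the entire queue into a local batch inside one short critical section and then report every item with the mutex released.
void ProgressIsolator::Work()
{
    while (true)
    {
        std::queue<Progress> batch;
        {
            std::unique_lock<std::mutex> lock(mutex);
            itemAvailable.wait(lock, [this] { return shutdown || !queue.empty(); });
            if (queue.empty() && shutdown)
                return;            // asked to stop and nothing left to report
            batch.swap(queue);     // take everything in one go; the shared queue is empty again
        }
        while (!batch.empty())
        {
            // Do lengthy operation with batch.front(), without holding the lock
            batch.pop();
        }
    }
}
This keeps the time ReportProgress can be blocked down to a single push or swap, no matter how long the callback takes.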

Related

How to interrupt a thread which is waiting for std::condition_variable_any in C++?

I'm reading C++ Concurrency in Action.
It shows how to implement an interruptible thread using std::condition_variable_any.
I have been trying to understand the code for more than a week, but I can't.
Below is the code and the explanation from the book.
#include <condition_variable>
#include <future>
#include <iostream>
#include <thread>
class thread_interrupted : public std::exception {};
class interrupt_flag {
std::atomic<bool> flag;
std::condition_variable* thread_cond;
std::condition_variable_any* thread_cond_any;
std::mutex set_clear_mutex;
public:
interrupt_flag() : thread_cond(0), thread_cond_any(0) {}
void set() {
flag.store(true, std::memory_order_relaxed);
std::lock_guard<std::mutex> lk(set_clear_mutex);
if (thread_cond) {
thread_cond->notify_all();
} else if (thread_cond_any) {
thread_cond_any->notify_all();
}
}
bool is_set() const { return flag.load(std::memory_order_relaxed); }
template <typename Lockable>
void wait(std::condition_variable_any& cv, Lockable& lk);
};
thread_local static interrupt_flag this_thread_interrupt_flag;
void interruption_point() {
if (this_thread_interrupt_flag.is_set()) {
throw thread_interrupted();
}
}
template <typename Lockable>
void interrupt_flag::wait(std::condition_variable_any& cv, Lockable& lk) {
struct custom_lock {
interrupt_flag* self;
// (1) What is this lk for? Why must lk already be locked when it is used in the custom_lock constructor?
Lockable& lk;
custom_lock(interrupt_flag* self_, std::condition_variable_any& cond,
Lockable& lk_)
: self(self_), lk(lk_) {
self->set_clear_mutex.lock();
self->thread_cond_any = &cond;
}
void unlock() {
lk.unlock();
self->set_clear_mutex.unlock();
}
void lock() { std::lock(self->set_clear_mutex, lk); }
~custom_lock() {
self->thread_cond_any = 0;
self->set_clear_mutex.unlock();
}
};
custom_lock cl(this, cv, lk);
interruption_point();
cv.wait(cl);
interruption_point();
}
class interruptible_thread {
std::thread internal_thread;
interrupt_flag* flag;
public:
template <typename FunctionType>
interruptible_thread(FunctionType f) {
std::promise<interrupt_flag*> p;
internal_thread = std::thread([f, &p] {
p.set_value(&this_thread_interrupt_flag);
f();
});
flag = p.get_future().get();
}
void interrupt() {
if (flag) {
flag->set();
}
};
void join() { internal_thread.join(); };
void detach();
bool joinable() const;
};
template <typename Lockable>
void interruptible_wait(std::condition_variable_any& cv, Lockable& lk) {
this_thread_interrupt_flag.wait(cv, lk);
}
void foo() {
// (2) This is my implementation of how to use interruptible wait. Is it correct?
std::condition_variable_any cv;
std::mutex m;
std::unique_lock<std::mutex> lk(m);
try {
interruptible_wait(cv, lk);
} catch (...) {
std::cout << "interrupted" << std::endl;
}
}
int main() {
std::cout << "Hello" << std::endl;
interruptible_thread th(foo);
th.interrupt();
th.join();
}
Your custom lock type acquires the lock on the internal
set_clear_mutex when it’s constructed 1, and then sets the
thread_cond_any pointer to refer to the std::condition_variable_any
passed in to the constructor 2.
The Lockable reference is stored for later; this must already be
locked. You can now check for an interruption without worrying about
races. If the interrupt flag is set at this point, it was set before
you acquired the lock on set_clear_mutex. When the condition variable
calls your unlock() function inside wait(), you unlock the Lockable
object and the internal set_clear_mutex 3.
This allows threads that are trying to interrupt you to acquire the
lock on set_clear_mutex and check the thread_cond_any pointer once
you’re inside the wait() call but not before. This is exactly what you
were after (but couldn’t manage) with std::condition_variable.
Once wait() has finished waiting (either because it was notified or
because of a spurious wake), it will call your lock() function, which
again acquires the lock on the internal set_clear_mutex and the lock
on the Lockable object 4. You can now check again for interruptions
that happened during the wait() call before clearing the
thread_cond_any pointer in your custom_lock destructor 5, where you
also unlock the set_clear_mutex.
First, I couldn't understand the purpose of Lockable& lk in mark (1) and why it must already be locked in the constructor of custom_lock. (It could just as well be locked inside the custom_lock constructor itself.)
Second, there is no example in this book of how to use the interruptible wait, so foo() in mark (2) is my guess at how to use it. Is it the correct way of using it?
You need a mutex-like object (lk in your foo function) to call the interruptible wait, just as you would need one for the plain std::condition_variable::wait function.
What's problematic (I also read the book and I have doubts about this example) is that the flag member points to a memory location inside the other thread, which could finish right before flag->set() is called. In this specific example the thread only exits after we set the flag, so that is okay, but otherwise this approach is limited in my opinion (correct me if I am wrong).
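One way to sidestep that lifetime limitation, sketched here purely as an illustration and not as the book's design (the class name is made up, and it drops the condition-variable wake-up in favour of plain polling): give the flag shared ownership so that interrupt() stays valid even after the thread has finished.
#include <atomic>
#include <memory>
#include <thread>

class polling_interruptible_thread {
    std::shared_ptr<std::atomic<bool>> flag_; // shared with the worker, so it outlives the thread if needed
    std::thread internal_thread_;
public:
    template <typename FunctionType>
    explicit polling_interruptible_thread(FunctionType f)
        : flag_(std::make_shared<std::atomic<bool>>(false)) {
        auto flag = flag_; // local copy for a C++11-friendly capture
        internal_thread_ = std::thread([flag, f] { f(*flag); }); // f polls *flag at its interruption points
    }
    void interrupt() { flag_->store(true); } // safe even if the thread has already exited
    void join() { internal_thread_.join(); }
};
The price is that the worker has to poll the flag in its loop instead of being woken out of a blocking wait, which is exactly what the book's condition_variable_any machinery exists to provide.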

Timed waiting and infinite waiting on the same condition variable?

Scenario:
I have a condition_variable based wait and signal mechanism. This works! But I need a little more than just the classic wait and signal mechanism. I need to be able to do a timed wait as well as an infinite wait "on the same condition_variable". Hence, I created a wrapper class around a condition_variable which takes care of the spurious wake up issue as well. Following is the code for that:
Code:
// CondVarWrapper.hpp
#pragma once
#include <mutex>
#include <chrono>
#include <condition_variable>
class CondVarWrapper {
public:
void Signal() {
std::unique_lock<std::mutex> unique_lock(cv_mutex);
cond_var_signalled = true;
timed_out = false;
unique_lock.unlock();
cond_var.notify_one();
}
bool WaitFor(const std::chrono::seconds timeout) {
std::unique_lock<std::mutex> unique_lock(cv_mutex);
timed_out = true;
cond_var.wait_for(unique_lock, timeout, [this] {
return cond_var_signalled;
});
cond_var_signalled = false;
return (timed_out == false);
}
bool Wait() {
std::unique_lock<std::mutex> unique_lock(cv_mutex);
timed_out = true;
cond_var.wait(unique_lock, [this] {
return cond_var_signalled;
});
cond_var_signalled = false;
return (timed_out == false);
}
private:
bool cond_var_signalled = false;
bool timed_out = false;
std::mutex cv_mutex;
std::condition_variable cond_var;
};
// main.cpp
#include "CondVarWrapper.hpp"
#include <iostream>
#include <string>
#include <thread>
int main() {
CondVarWrapper cond_var_wrapper;
std::thread my_thread = std::thread([&cond_var_wrapper]{
std::cout << "Thread started" << std::endl;
if (cond_var_wrapper.WaitFor(std::chrono::seconds(10))) {
std::cout << "Thread stopped by signal from main" << std::endl;
} else {
std::cout << "ERROR: Thread stopping because of timeout" << std::endl;
}
});
std::this_thread::sleep_for(std::chrono::seconds(3));
// Comment out the following line to see the timeout working
cond_var_wrapper.Signal();
my_thread.join();
}
Question:
The above code works, but I think there is one problem. Would I really be able to do a wait as well as a wait_for on the same condition_variable? What if a thread has acquired cv_mutex by calling CondVarWrapper::Wait() and that call never returns for some reason, and then another thread calls CondVarWrapper::WaitFor(std::chrono::seconds(3)) expecting to return if it does not succeed within 3 seconds? Wouldn't that second thread fail to return out of WaitFor after 3 seconds? In fact, it would never return, because the condition_variable wait is a timed wait but the lock on cv_mutex is not. Am I correct, or is my understanding wrong here?
If I am correct above, do I need to replace std::mutex cv_mutex with a std::timed_mutex cv_mutex and do a timed wait in CondVarWrapper::WaitFor and an infinite wait in CondVarWrapper::Wait? Or are there better/easier ways of handling it?
The mutex is released when std::condition_variable::wait is called on cond_var. So when you call CondVarWrapper::Wait from one thread, it releases the mutex inside std::condition_variable::wait and hangs in there forever; a second thread can still call CondVarWrapper::WaitFor and successfully lock cv_mutex.
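A small demonstration of that point, assuming the CondVarWrapper class from the question: one thread blocks in Wait() indefinitely while another still returns from WaitFor() once its timeout expires.
#include "CondVarWrapper.hpp" // the wrapper from the question
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    CondVarWrapper wrapper;
    // Blocks for now; cv_mutex is released while it sits inside wait().
    std::thread infinite([&wrapper] { wrapper.Wait(); });
    // Can still lock cv_mutex and gives up after 3 seconds.
    std::thread timed([&wrapper] {
        bool signalled = wrapper.WaitFor(std::chrono::seconds(3));
        std::cout << (signalled ? "signalled" : "timed out") << std::endl;
    });
    timed.join();     // returns after roughly 3 seconds and prints "timed out"
    wrapper.Signal(); // now release the infinite waiter so the program can exit
    infinite.join();
}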

thread sync using mutex and condition variable

I'm trying to implement a multi-threaded job with a producer and a consumer. Basically, when the consumer finishes the data, it notifies the producer so that the producer provides new data.
The tricky part is that, in my current implementation, the producer and the consumer both notify and wait for each other, and I don't know how to implement this part correctly.
For example, see the code below,
mutex m;
condition_variable cv;
vector<int> Q; // this is the queue the consumer will consume
vector<int> Q_buf; // this is a buffer Q into which producer will fill new data directly
// consumer
void consume() {
while (1) {
if (Q.size() == 0) { // when consumer finishes data
unique_lock<mutex> lk(m);
// how to notify producer to fill up the Q?
...
cv.wait(lk);
}
// for-loop to process the elems in Q
...
}
}
// producer
void produce() {
while (1) {
// for-loop to fill up Q_buf
...
// once Q_buf is fully filled, wait until consumer asks to give it a full Q
unique_lock<mutex> lk(m);
cv.wait(lk);
Q.swap(Q_buf); // replace the empty Q with the full Q_buf
cv.notify_one();
}
}
I'm not sure the above code using a mutex and a condition_variable is the right way to implement my idea, so please give me some advice!
The code incorrectly assumes that vector<int>::size() and vector<int>::swap() are atomic. They are not.
Also, spurious wakeups must be handled by a while loop (or by using the predicate overload of condition_variable::wait).
Fixes:
mutex m;
condition_variable cv;
vector<int> Q;
// consumer
void consume() {
while(1) {
// Get the new elements.
vector<int> new_elements;
{
unique_lock<mutex> lk(m);
while(Q.empty())
cv.wait(lk);
new_elements.swap(Q);
}
// for-loop to process the elems in new_elements
}
}
// producer
void produce() {
while(1) {
vector<int> new_elements;
// for-loop to fill up new_elements
// publish new_elements
{
unique_lock<mutex> lk(m);
Q.insert(Q.end(), new_elements.begin(), new_elements.end());
cv.notify_one();
}
}
}
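For completeness, a hypothetical driver for these two loops might look like the following; it assumes the same headers and using-directives that the snippet above omits, and it never terminates because both loops are infinite.
int main() {
    thread consumer(consume); // start the consumer loop
    thread producer(produce); // start the producer loop
    consumer.join();
    producer.join();
}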
Maybe this is close to what you want to achieve. I used two condition variables to let producers and consumers notify each other and introduced a variable denoting whose turn it is:
#include <cstdlib>
#include <ctime>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>
template<typename T>
class ReaderWriter {
private:
std::vector<std::thread> readers;
std::vector<std::thread> writers;
std::condition_variable readerCv, writerCv;
std::queue<T> data;
std::mutex mtx; // a single mutex must protect both data and turn, which readers and writers share
size_t noReaders, noWriters;
enum class Turn { WRITER_TURN, READER_TURN };
Turn turn;
void reader() {
while (1) {
{
std::unique_lock<std::mutex> lk(mtx);
while (turn != Turn::READER_TURN) {
readerCv.wait(lk);
}
std::cout << "Thread : " << std::this_thread::get_id() << " consumed " << data.front() << std::endl;
data.pop();
if (data.empty()) {
turn = Turn::WRITER_TURN;
writerCv.notify_one();
}
}
}
}
void writer() {
while (1) {
{
std::unique_lock<std::mutex> lk(mtx);
while (turn != Turn::WRITER_TURN) {
writerCv.wait(lk);
}
srand(time(NULL));
int random_number = std::rand();
data.push(random_number);
std::cout << "Thread : " << std::this_thread::get_id() << " produced " << random_number << std::endl;
turn = Turn::READER_TURN;
}
readerCv.notify_one();
}
}
public:
ReaderWriter(size_t noReadersArg, size_t noWritersArg) : noReaders(noReadersArg), noWriters(noWritersArg), turn(ReaderWriter::Turn::WRITER_TURN) {
}
void run() {
int noReadersArg = noReaders, noWritersArg = noWriters;
while (noReadersArg--) {
readers.emplace_back(&ReaderWriter::reader, this);
}
while (noWritersArg--) {
writers.emplace_back(&ReaderWriter::writer, this);
}
}
~ReaderWriter() {
for (auto& r : readers) {
r.join();
}
for (auto& w : writers) {
w.join();
}
}
};
int main() {
ReaderWriter<int> rw(5, 5);
rw.run();
}
Here's a code snippet. Since the worker threads are already synchronized, the requirement of two buffers is ruled out, so a simple queue is used to simulate the scenario:
#include "conio.h"
#include <iostream>
#include <thread>
#include <mutex>
#include <queue>
#include <atomic>
#include <condition_variable>
using namespace std;
enum state_t{ READ = 0, WRITE = 1 };
mutex mu;
condition_variable cv;
atomic<bool> running;
queue<int> buffer;
atomic<state_t> state;
void generate_test_data()
{
const int times = 5;
static int data = 0;
for (int i = 0; i < times; i++) {
data = (data + 1) % 100;
buffer.push(data);
}
}
void ProducerThread() {
while (running) {
unique_lock<mutex> lock(mu);
cv.wait(lock, []() { return !running || state == WRITE; });
if (!running) return;
generate_test_data(); //producing here
state = READ; // change the turn while still holding the lock so the consumer cannot miss it
lock.unlock();
//notify consumer to start consuming
cv.notify_one();
}
}
void ConsumerThread() {
while (running) {
unique_lock<mutex> lock(mu);
cv.wait(lock, []() { return !running || state == READ; });
if (!running) return;
while (!buffer.empty()) {
auto data = buffer.front(); //consuming here
buffer.pop();
cout << data << " \n";
}
//notify producer to start producing
if (buffer.empty()) {
state = WRITE;
cv.notify_one();
}
}
}
int main(){
running = true;
state = WRITE; // the producer goes first
thread producer = thread([]() { ProducerThread(); });
thread consumer = thread([]() { ConsumerThread(); });
//simulating gui thread
while (!getch()){
}
running = false;
cv.notify_all(); // wake both threads so they can see running == false and exit
producer.join();
consumer.join();
}
Not a complete answer, but I think two condition variables could be helpful: one named buffer_empty that the producer thread waits on, and another named buffer_filled that the consumer thread waits on. The number of mutexes, how to loop, and so on I cannot comment on, since I'm not sure about the details myself.
1. Accesses to shared variables should only be done while holding the mutex that protects them.
2. condition_variable::wait should check a condition.
3. The condition should be a shared variable protected by the mutex that you pass to condition_variable::wait.
4. The way to check the condition is to wrap the call to wait in a while loop or to use the 2-argument overload of wait (which is equivalent to the while-loop version).
Note: These rules aren't strictly necessary if you truly understand what the hardware is doing. However, these problems get complicated quickly even with simple data structures, and it will be easier to prove that your algorithm works correctly if you follow them.
Your Q and Q_buf are shared variables. Due to Rule 1, I would prefer to have them as local variables declared in the function that uses them (consume() and produce(), respectively). There will be 1 shared buffer that will be protected by a mutex. The producer will add to its local buffer. When that buffer is full, it acquires the mutex and pushes the local buffer to the shared buffer. It then waits for the consumer to accept this buffer before producing more data.
The consumer waits for this shared buffer to "arrive", then it acquires the mutex and replaces its empty local buffer with the shared buffer. Then it signals to the producer that the buffer has been accepted so it knows to start producing again.
Semantically, I don't see a reason to use swap over move, since in every case one of the containers is empty anyway. Maybe you want to use swap because you know something about the underlying memory. You can use whichever you want and it will be fast and work the same (at least algorithmically).
This problem can be done with 1 condition variable, but it may be a little easier to think about if you use 2.
Here's what I came up with. Tested on Visual Studio 2017 (15.6.7) and GCC 5.4.0. I don't need to be credited or anything (it's such a simple piece), but legally I have to say that I offer no warranties whatsoever.
#include <thread>
#include <vector>
#include <mutex>
#include <condition_variable>
#include <chrono>
std::vector<int> g_deliveryBuffer;
bool g_quit = false;
std::mutex g_mutex; // protects g_deliveryBuffer and g_quit
std::condition_variable g_producerDeliver;
std::condition_variable g_consumerAccepted;
// consumer
void consume()
{
// local buffer
std::vector<int> consumerBuffer;
while (true)
{
if (consumerBuffer.empty())
{
std::unique_lock<std::mutex> lock(g_mutex);
while (g_deliveryBuffer.empty() && !g_quit) // if we beat the producer, wait for them to push to the deliverybuffer
g_producerDeliver.wait(lock);
if (g_quit)
break;
consumerBuffer = std::move(g_deliveryBuffer); // get the buffer
}
g_consumerAccepted.notify_one(); // notify the producer that the buffer has been accepted
// for-loop to process the elems in Q
// ...
consumerBuffer.clear();
// ...
}
}
// producer
void produce()
{
std::vector<int> producerBuffer;
while (true)
{
// for-loop to fill up Q_buf
// ...
producerBuffer = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
// ...
// once Q_buf is fully filled, wait until consumer asks to give it a full Q
{ // scope is for lock
std::unique_lock<std::mutex> lock(g_mutex);
g_deliveryBuffer = std::move(producerBuffer); // ok to push to deliverybuffer. it is guaranteed to be empty
g_producerDeliver.notify_one();
while (!g_deliveryBuffer.empty() && !g_quit)
g_consumerAccepted.wait(lock); // wait for consumer to signal for more data
if (g_quit)
break;
// We will never reach this point if the buffer is not empty.
}
}
}
int main()
{
// spawn threads
std::thread consumerThread(consume);
std::thread producerThread(produce);
// run for 5 seconds
std::this_thread::sleep_for(std::chrono::seconds(5));
// signal that it's time to quit
{
std::lock_guard<std::mutex> lock(g_mutex);
g_quit = true;
}
// one of the threads may be sleeping
g_consumerAccepted.notify_one();
g_producerDeliver.notify_one();
consumerThread.join();
producerThread.join();
return 0;
}

properly ending an infinite std::thread

I have a reusable class that starts up an infinite thread. This thread can only be killed by calling a Stop function that sets a kill-switch variable. Looking around, there is quite a bit of argument over volatile vs. atomic variables.
The following is my code:
program.cpp
#include <chrono>
#include <thread>
#include "ThreadClass.h"
int main()
{
ThreadClass threadClass;
threadClass.Start();
std::this_thread::sleep_for(std::chrono::seconds(1));
threadClass.Stop();
std::this_thread::sleep_for(std::chrono::milliseconds(50));
threadClass.Stop();
}
ThreadClass.h
#pragma once
#include <atomic>
#include <thread>
class ThreadClass
{
public:
ThreadClass(void);
~ThreadClass(void);
void Start();
void Stop();
private:
void mythread();
std::atomic<bool> runThread;
std::thread the_thread;
};
ThreadClass.cpp
#include "ThreadClass.h"
ThreadClass::ThreadClass(void)
{
runThread = false;
}
ThreadClass::~ThreadClass(void)
{
}
void ThreadClass::Start()
{
runThread = true;
the_thread = std::thread(&ThreadClass::mythread, this);
}
void ThreadClass::Stop()
{
if(runThread)
{
runThread = false;
if (the_thread.joinable())
{
the_thread.join();
}
}
}
void ThreadClass::mythread()
{
while(runThread)
{
//dostuff
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
}
The code that I am presenting here mirrors an issue that our legacy code had in place. We call the Stop function twice, which will try to join the thread twice. This results in an invalid handle exception. I have coded the Stop() function to work around that issue, but my question is: why would the join fail the second time if the thread has already completed and been joined? Is there a better way programmatically to check that the thread is valid before trying to join?
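join() throws std::system_error when the thread object is no longer joinable, and a std::thread stops being joinable as soon as it has been joined once. A minimal sketch of a Stop() that is safe to call repeatedly, and from more than one thread at a time, might look like this (stopMutex is a hypothetical extra member, not part of the code above):
void ThreadClass::Stop()
{
    std::lock_guard<std::mutex> guard(stopMutex); // hypothetical member: serializes concurrent Stop() calls
    runThread = false;
    if (the_thread.joinable()) // false once the thread has been joined, so a second Stop() is a no-op
    {
        the_thread.join();
    }
}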

Qt GUI user interaction with QMessageBox from within QThread object

I'm using QThread with MyObject->moveToThread(myThread); for communication functions that take a while. A few signals and slots keep the GUI posted about the progress.
However, a situation may occur during the threaded communication that requires user interaction. Since a QMessageBox can't be created inside the thread, I was thinking of emitting a signal that would allow me to pause the thread and show the dialog. But first of all, there does not seem to be a way to pause a thread, and second, this attempt probably fails because it requires a way to pass a parameter back to the thread when resuming it.
A different approach might be to pass all parameters in question to the thread beforehand, but this may not always be an option.
How is this usually done?
Edit
Thanks for the comment #1 and getting my hopes up, but please elaborate on how to create e.g. a dialog from an object within a thread and how to pause it..
The following example code with Qt 4.8.1 and MSVC++ 2010 results in:
MyClass::MyClass created
MainWindow::MainWindow thread started
MyClass::start run
ASSERT failure in QWidget: "Widgets must be created in the GUI thread.", file kernel\qwidget.cpp, line 1299
mainwindow.h
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
namespace Ui {
class MainWindow;
}
class MainWindow : public QMainWindow
{
Q_OBJECT
public:
explicit MainWindow(QWidget *parent = 0);
~MainWindow();
private:
Ui::MainWindow *ui;
};
#endif // MAINWINDOW_H
mainwindow.cpp
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include "myclass.h"
#include <QThread>
#include <QDebug>
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::MainWindow)
{
ui->setupUi(this);
QThread *thread = new QThread();
MyClass* myObject = new MyClass();
myObject->moveToThread( thread );
connect(thread, SIGNAL( started()), myObject, SLOT(start()));
connect(myObject, SIGNAL( finished()), thread, SLOT(quit()));
connect(myObject, SIGNAL( finished()), myObject, SLOT(deleteLater()));
connect(thread, SIGNAL( finished()), thread, SLOT(deleteLater()));
thread->start();
if( thread->isRunning() )
{
qDebug() << __FUNCTION__ << "thread started";
}
}
MainWindow::~MainWindow()
{
delete ui;
}
myclass.h
#ifndef MYCLASS_H
#define MYCLASS_H
#include <QObject>
class MyClass : public QObject
{
Q_OBJECT
public:
explicit MyClass(QObject *parent = 0);
signals:
void finished();
public slots:
void start();
};
#endif // MYCLASS_H
myclass.cpp
#include "myclass.h"
#include <QMessageBox>
#include <QDebug>
MyClass::MyClass(QObject *parent) :
QObject(parent)
{
qDebug() << __FUNCTION__ << "created";
}
void MyClass::start()
{
qDebug() << __FUNCTION__ << "run";
// do stuff ...
// get information from user (blocking)
QMessageBox *msgBox = new QMessageBox();
msgBox->setWindowTitle( tr("WindowTitle") );
msgBox->setText( tr("Text") );
msgBox->setInformativeText( tr("InformativeText") );
msgBox->setStandardButtons( QMessageBox::Ok | QMessageBox::Cancel);
msgBox->setDefaultButton( QMessageBox::Ok);
msgBox->setEscapeButton( QMessageBox::Cancel);
msgBox->setIcon( QMessageBox::Information);
int ret = msgBox->exec();
// continue doing stuff (based on user input) ...
switch (ret)
{
case QMessageBox::Ok:
break;
case QMessageBox::Cancel:
break;
default:
break;
}
// do even more stuff
emit finished();
}
Use Qt::BlockingQueuedConnection in a signal/slot connection (the call to QObject::connect()).
http://doc.qt.digia.com/qt/qt.html#ConnectionType-enum
This will block your thread until the slot on the UI thread returns; the slot in the UI thread is then free to display a message box, a modal dialog, or whatever you want.
You must be sure that your worker thread is actually not the UI thread, because, as the docs say, this will cause a deadlock if the signal and slot are on the same thread (since it would block itself).
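A rough sketch of that approach (the signal name questionRequired, the slot name onQuestionRequired, and the out-parameter are illustrative, not taken from the question): the worker emits a signal carrying a pointer for the result, the connection uses Qt::BlockingQueuedConnection, and the GUI-thread slot shows the QMessageBox and writes the answer back before the worker resumes.
// Pointers are not known to the meta-type system by default, so register the
// argument type once (e.g. in main()) before making the queued connection:
qRegisterMetaType<int*>("int*");

// Connection, made on the GUI thread (e.g. in MainWindow's constructor):
connect(myObject, SIGNAL(questionRequired(int*)),
        this, SLOT(onQuestionRequired(int*)),
        Qt::BlockingQueuedConnection);

// GUI-thread slot:
void MainWindow::onQuestionRequired(int *answer)
{
    *answer = QMessageBox::question(this, tr("WindowTitle"), tr("Text"),
                                    QMessageBox::Ok | QMessageBox::Cancel);
}

// Worker-thread code inside MyClass::start():
int answer = QMessageBox::Cancel;
emit questionRequired(&answer); // blocks here until the GUI slot has returned
if (answer == QMessageBox::Ok) {
    // continue one way ...
} else {
    // ... or the other
}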
I can't give any specific code right now, but I would do it like this:
In MyClass::start() lock a QMutex.
Emit a signal e.g. messageBoxRequired().
Wait on a shared QWaitCondition using that mutex. This will also unlock the mutex while the thread is waiting.
In a slot in your MainWindow, e.g. showMessageBox(), show the message box.
Store the returned value in a member of MyClass. You can do this by offering a setter and getter which use the mutex to protect the member. Obviously, MyClass itself should only access that member through those setters/getters. (Also see QMutexLocker for that.)
Call wakeOne() or wakeAll() on the shared QWaitCondition.
The previous wait() call will return and MyClass::start() will continue execution. If I understand the docs correctly, QWaitCondition will lock the mutex again before it returns from wait(). This means you have to unlock the mutex directly after the wait() call.
You can access the message box's return value from your class member (using the thread-safe getter). A rough end-to-end sketch of this flow follows after the setter/getter implementations below.
Implementations for thread-safe setters/getters would be as follows:
void MyClass::setVariable( int value )
{
QMutexLocker locker( &_mutex ); // must be a named object; an unnamed temporary would unlock again immediately
_value = value;
}
int MyClass::getVariable() // Not sure if a 'const' modifier would work here
{
QMutexLocker locker( &_mutex );
return _value;
}
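A rough end-to-end sketch of the flow described in the steps above, with illustrative names (messageBoxRequired, showMessageBox, wakeUp and _waitCondition are assumptions, not code from the question):
// Assumed MyClass members: QMutex _mutex; QWaitCondition _waitCondition; int _value;

// Worker side, inside MyClass::start():
_mutex.lock();
emit messageBoxRequired();    // queued to the GUI thread, which runs showMessageBox()
_waitCondition.wait(&_mutex); // releases _mutex while waiting, re-locks before returning
                              // (a production version would loop on a flag here to guard against spurious wake-ups)
_mutex.unlock();              // unlock right after wait(), as described above
int ret = getVariable();      // thread-safe getter; the GUI thread stored the answer via setVariable()

// GUI side, a MainWindow slot connected to messageBoxRequired():
void MainWindow::showMessageBox()
{
    int ret = QMessageBox::question(this, tr("WindowTitle"), tr("Text"),
                                    QMessageBox::Ok | QMessageBox::Cancel);
    myObject->setVariable(ret); // thread-safe setter from above
    myObject->wakeUp();         // assumed helper that calls _waitCondition.wakeAll()
}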
