properly ending an infinite std::thread

I have a reusable class that starts up an infinite thread. This thread can only be killed by calling a stop function that sets a kill-switch variable. When looking around, there is quite a bit of argument over volatile vs. atomic variables.
The following is my code:
program.cpp
#include <windows.h>      // for Sleep
#include "ThreadClass.h"
int main()
{
ThreadClass threadClass;
threadClass.Start();
Sleep(1000);
threadClass.Stop();
Sleep(50);
threadClass.Stop();
}
ThreadClass.h
#pragma once
#include <atomic>
#include <thread>
class ThreadClass
{
public:
ThreadClass(void);
~ThreadClass(void);
void Start();
void Stop();
private:
void myThread();
std::atomic<bool> runThread;
std::thread theThread;
};
ThreadClass.cpp
#include "ThreadClass.h"
ThreadClass::ThreadClass(void)
{
runThread = false;
}
ThreadClass::~ThreadClass(void)
{
}
void ThreadClass::Start()
{
runThread = true;
theThread = std::thread(&ThreadClass::myThread, this);
}
void ThreadClass::Stop()
{
if(runThread)
{
runThread = false;
if (theThread.joinable())
{
theThread.join();
}
}
}
void ThreadClass::myThread()
{
while(runThread)
{
//dostuff
Sleep(100); //or chrono
}
}
The code that I am presenting here mirrors an issue that our legacy code had in place. We call the stop function 2 times, which will try to join the thread 2 times. This results in an invalid handle exception. I have coded the Stop() function to work around that issue, but my question is: why would the join fail the second time if the thread has completed and been joined? Is there a better way to programmatically check that the thread is valid before trying to join?
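For reference, a minimal standalone sketch (mine, not from the code above) of what happens on the second join: once join() has returned, the std::thread object no longer owns a thread of execution, joinable() reports false, and calling join() again throws std::system_error, which is why guarding with joinable() is the usual idiom.

#include <iostream>
#include <system_error>
#include <thread>

int main()
{
    std::thread t([]{ /* do stuff once */ });

    t.join();                                             // first join succeeds
    std::cout << std::boolalpha << t.joinable() << "\n";  // false: nothing left to join

    try
    {
        t.join();                                         // second join: no associated thread any more
    }
    catch (const std::system_error& e)
    {
        std::cout << "second join failed: " << e.what() << "\n";
    }

    if (t.joinable())                                     // the guard used in Stop() above
        t.join();
}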

Related

How to share data between a TThread in a DLL and the main thread?

I'm writing a DLL in C++Builder XE6 that creates a separate thread (derived from TThread) to retrieve JSON data from a REST server every X seconds (using TIdHTTP) and parse the JSON data.
The thread fills a simple struct (no dynamically allocated data) with the parsed JSON data in the Execute() method of the thread:
typedef struct
{
char MyString[40 + 1];
double MyDouble;
bool MyBool;
} TMyStruct;
The thread should store the struct in a list, for example a std::vector:
#include <vector>
std::vector<TMyStruct> MyList;
The thread will add a TMyStruct to the list:
TMyStruct Data;
...
MyList.push_back(Data);
The list will be guarded by a TCriticalSection to prevent data corruption.
The DLL exports a function to retrieve a TMyStruct from MyList.
bool __declspec(dllexport) __stdcall GetMyStruct (int Index, TMyStruct* Data)
{
...
}
Only thing is, I don't know where to put MyList...
If I make MyList a global variable, it is located in the main thread's memory and GetMyStruct() can access it directly. How does the thread access MyList?
If I make MyList a member of the TThread-derived class, it is located in the thread's memory and the thread can access it directly. How does GetMyStruct() access MyList?
What is the best/prefered/common way to store MyList and access it in a different thread?
If I make MyList a global variable, it is located in the main thread's memory and GetMyStruct() can access it directly. How does the thread access MyList?
The exact same way. All threads in a process can freely access global variables within that process. For example:
#include <vector>
#include <System.SyncObjs.hpp>
typedef struct
{
char MyString[40 + 1];
double MyDouble;
bool MyBool;
} TMyStruct;
std::vector<TMyStruct> MyList;
TCriticalSection *Lock = NULL; // why not std::mutex instead?
class TMyThread : public TThread
{
...
};
TMyThread *Thread = NULL;
...
void __fastcall TMyThread::Execute()
{
TMyStruct Data;
...
Lock->Enter();
try {
MyList.push_back(Data);
}
__finally {
Lock->Leave();
}
...
}
...
void __declspec(dllexport) __stdcall StartThread ()
{
Lock = new TCriticalSection;
Thread = new TMyThread;
}
void __declspec(dllexport) __stdcall StopThread ()
{
if (Thread) {
Thread->Terminate();
Thread->WaitFor();
delete Thread;
Thread = NULL;
}
if (Lock) {
delete Lock;
Lock = NULL;
}
}
bool __declspec(dllexport) __stdcall GetMyStruct (int Index, TMyStruct* Data)
{
if (!(Lock && Thread)) return false;
Lock->Enter();
try {
*Data = MyList[Index];
}
__finally {
Lock->Leave();
}
return true;
}
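Regarding the "why not std::mutex instead?" remark in the snippet above: the same layout works with standard library primitives instead of TCriticalSection. A sketch (mine, assuming the project's compiler has usable C++11 <mutex> support; ListLock is just a stand-in name):

#include <mutex>
#include <vector>

std::vector<TMyStruct> MyList;
std::mutex ListLock;   // replaces the TCriticalSection

void __fastcall TMyThread::Execute()
{
    TMyStruct Data;
    //...
    {
        std::lock_guard<std::mutex> guard(ListLock); // unlocks automatically, no try/__finally needed
        MyList.push_back(Data);
    }
    //...
}

bool __declspec(dllexport) __stdcall GetMyStruct (int Index, TMyStruct* Data)
{
    std::lock_guard<std::mutex> guard(ListLock);
    if (Index < 0 || Index >= (int)MyList.size()) return false;
    *Data = MyList[Index];
    return true;
}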
If I make MyList a member of the TThread-derived class, it is located in the thread's memory and the thread can access it directly. How does GetMyStruct() access MyList?
By accessing it via a pointer to the thread object. For example:
#include <vector>
#include <System.SyncObjs.hpp>
typedef struct
{
char MyString[40 + 1];
double MyDouble;
bool MyBool;
} TMyStruct;
class TMyThread : public TThread
{
protected:
void __fastcall Execute();
public:
__fastcall TMyThread();
__fastcall ~TMyThread();
std::vector<TMyStruct> MyList;
TCriticalSection *Lock;
};
TMyThread *Thread = NULL;
...
__fastcall TMyThread::TMyThread()
: TThread(false)
{
Lock = new TCriticalSection;
}
__fastcall TMyThread::~TMyThread()
{
delete Lock;
}
void __fastcall TMyThread::Execute()
{
TMyStruct Data;
...
Lock->Enter();
try {
MyList.push_back(Data);
}
__finally {
Lock->Leave();
}
...
}
void __declspec(dllexport) __stdcall StartThread ()
{
Thread = new TMyThread;
}
void __declspec(dllexport) __stdcall StopThread ()
{
if (Thread) {
Thread->Terminate();
Thread->WaitFor();
delete Thread;
Thread = NULL;
}
}
bool __declspec(dllexport) __stdcall GetMyStruct (int Index, TMyStruct* Data)
{
if (!Thread) return false;
Thread->Lock->Enter();
try {
*Data = Thread->MyList[Index];
}
__finally {
Thread->Lock->Leave();
}
return true;
}
What is the best/prefered/common way to store MyList and access it in a different thread?
That is entirely up to you to decide, based on your particular needs and project design.

How to interrupt a thread which is waiting for std::condition_variable_any in C++?

I'm reading C++ Concurrency in Action.
It introduces how to implement an interruptible thread using std::condition_variable_any.
I have been trying to understand the code for more than a week, but I couldn't.
Below are the code and the explanation from the book.
#include <condition_variable>
#include <future>
#include <iostream>
#include <thread>
class thread_interrupted : public std::exception {};
class interrupt_flag {
std::atomic<bool> flag;
std::condition_variable* thread_cond;
std::condition_variable_any* thread_cond_any;
std::mutex set_clear_mutex;
public:
interrupt_flag() : flag(false), thread_cond(0), thread_cond_any(0) {}
void set() {
flag.store(true, std::memory_order_relaxed);
std::lock_guard<std::mutex> lk(set_clear_mutex);
if (thread_cond) {
thread_cond->notify_all();
} else if (thread_cond_any) {
thread_cond_any->notify_all();
}
}
bool is_set() const { return flag.load(std::memory_order_relaxed); }
template <typename Lockable>
void wait(std::condition_variable_any& cv, Lockable& lk);
};
thread_local static interrupt_flag this_thread_interrupt_flag;
void interruption_point() {
if (this_thread_interrupt_flag.is_set()) {
throw thread_interrupted();
}
}
template <typename Lockable>
void interrupt_flag::wait(std::condition_variable_any& cv, Lockable& lk) {
struct custom_lock {
interrupt_flag* self;
// (1) What is this lk for? Why should lk already be locked when it is used in the custom_lock constructor?
Lockable& lk;
custom_lock(interrupt_flag* self_, std::condition_variable_any& cond,
Lockable& lk_)
: self(self_), lk(lk_) {
self->set_clear_mutex.lock();
self->thread_cond_any = &cond;
}
void unlock() {
lk.unlock();
self->set_clear_mutex.unlock();
}
void lock() { std::lock(self->set_clear_mutex, lk); }
~custom_lock() {
self->thread_cond_any = 0;
self->set_clear_mutex.unlock();
}
};
custom_lock cl(this, cv, lk);
interruption_point();
cv.wait(cl);
interruption_point();
}
class interruptible_thread {
std::thread internal_thread;
interrupt_flag* flag;
public:
template <typename FunctionType>
interruptible_thread(FunctionType f) {
std::promise<interrupt_flag*> p;
internal_thread = std::thread([f, &p] {
p.set_value(&this_thread_interrupt_flag);
f();
});
flag = p.get_future().get();
}
void interrupt() {
if (flag) {
flag->set();
}
};
void join() { internal_thread.join(); };
void detach();
bool joinable() const;
};
template <typename Lockable>
void interruptible_wait(std::condition_variable_any& cv, Lockable& lk) {
this_thread_interrupt_flag.wait(cv, lk);
}
void foo() {
// (2) This is my implementation of how to use interruptible wait. Is it correct?
std::condition_variable_any cv;
std::mutex m;
std::unique_lock<std::mutex> lk(m);
try {
interruptible_wait(cv, lk);
} catch (...) {
std::cout << "interrupted" << std::endl;
}
}
int main() {
std::cout << "Hello" << std::endl;
interruptible_thread th(foo);
th.interrupt();
th.join();
}
Your custom lock type acquires the lock on the internal
set_clear_mutex when it’s constructed 1, and then sets the
thread_cond_any pointer to refer to the std::condition_variable_any
passed in to the constructor 2.
The Lockable reference is stored for later; this must already be
locked. You can now check for an interruption without worrying about
races. If the interrupt flag is set at this point, it was set before
you acquired the lock on set_clear_mutex. When the condition variable
calls your unlock() function inside wait(), you unlock the Lockable
object and the internal set_clear_mutex 3.
This allows threads that are trying to interrupt you to acquire the
lock on set_clear_mutex and check the thread_cond_any pointer once
you’re inside the wait() call but not before. This is exactly what you
were after (but couldn’t manage) with std::condition_variable.
Once wait() has finished waiting (either because it was notified or
because of a spurious wake), it will call your lock() function, which
again acquires the lock on the internal set_clear_mutex and the lock
on the Lockable object 4. You can now check again for interruptions
that happened during the wait() call before clearing the
thread_cond_any pointer in your custom_lock destructor 5, where you
also unlock the set_clear_mutex.
First, I couldn't understand what the purpose of Lockable& lk in mark (1) is, and why it has to be locked already in the constructor of custom_lock. (It could be locked in the custom_lock constructor itself.)
Second, there is no example in this book of how to use the interruptible wait, so foo() at mark (2) is my guess at how to use it. Is it the correct way of using it?
You need a mutex-like object (lk in your foo function) to call the interruptible wait, just as you would need it for the plain std::condition_variable::wait function.
What's problematic (I also read the book and I have doubts about this example) is that the flag member points to a memory location inside the other thread, which could finish right before calling flag->set(). In this specific example the thread only exits after we set the flag, so that is okay, but otherwise this approach is limited in my opinion (correct me if I am wrong).
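For completeness, a small usage sketch (mine, not from the book) of how the pieces above are meant to fit together for a thread that does periodic work rather than only waiting on a condition variable: interruption_point() is polled between work items, and the thread_interrupted exception unwinds out of the thread function once interrupt() has been called.

// Hypothetical worker function; "one unit of work" stands in for real code.
void worker() {
    try {
        for (;;) {
            interruption_point();   // throws thread_interrupted once the flag is set
            // ... one unit of work ...
        }
    } catch (const thread_interrupted&) {
        // interrupted: clean up and let the thread function return normally
    }
}

// usage:
// interruptible_thread th(worker);
// th.interrupt();
// th.join();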

cannot handle QNetworkAccessManager::finished signal in multithreading

I want to serialize network requests using QNetworkAccessManager. To achieve this I wrote the following class:
#ifndef CLIENT_H
#define CLIENT_H
#include <queue>
#include <mutex>
#include <condition_variable>
#include <QtNetwork/QNetworkAccessManager>
#include <QtNetwork/QNetworkReply>
#include <QtNetwork/QNetworkRequest>
class Client : public QObject
{
Q_OBJECT
struct RequestRecord
{
RequestRecord(QString u, int o):url(u),operation(o){}
QString url;
int operation;
};
std::mutex mutex;
std::queue<RequestRecord*> requests;
QNetworkAccessManager *manager;
bool running;
std::condition_variable cv;
public:
Client():manager(nullptr){}
~Client()
{
if(manager)
delete manager;
}
void request_cppreference()
{
std::unique_lock<std::mutex> lock(mutex);
requests.push(new RequestRecord("http://en.cppreference.com",0));
cv.notify_one();
}
void request_qt()
{
std::unique_lock<std::mutex> lock(mutex);
requests.push(new RequestRecord("http://doc.qt.io/qt-5/qnetworkaccessmanager.html",1));
cv.notify_one();
}
void process()
{
manager = new QNetworkAccessManager;
connect(manager,&QNetworkAccessManager::finished,[this](QNetworkReply *reply)
{
std::unique_lock<std::mutex> lock(mutex);
RequestRecord *front = requests.front();
requests.pop();
delete front;
reply->deleteLater();
});
running = true;
while (running)
{
std::unique_lock<std::mutex> lock(mutex);
cv.wait(lock);
RequestRecord *front = requests.front();
manager->get(QNetworkRequest(QUrl(front->url)));
}
}
};
#endif // CLIENT_H
As one can see, there are 2 methods for requesting data from the network, and a process method, which should be called in a separate thread.
I'm using this class as follows:
Client *client = new Client;
std::thread thr([client](){
client->process();
});
std::this_thread::sleep_for(std::chrono::seconds(1));
client->request_qt();
std::this_thread::sleep_for(std::chrono::milliseconds(1));
client->request_cppreference();
This example illustrates 2 consecutive requests to the network from one thread and the processing of these requests in another. All works fine except my lambda is never called. Requests are sent (I checked using Wireshark), but I cannot get the replies. What is the cause?
As @thuga supposed, the problem was in the event loop. My thread is always waiting on the cv and thus cannot process events; a little hack solves the problem:
void process()
{
manager = new QNetworkAccessManager;
connect(manager,&QNetworkAccessManager::finished,[this](QNetworkReply *reply)
{
std::unique_lock<std::mutex> lock(mutex);
RequestRecord *front = requests.front();
requests.pop();
delete front;
qDebug() << reply->readAll();
processed = true;
reply->deleteLater();
});
running = true;
while (running)
{
{
std::unique_lock<std::mutex> lock(mutex);
cv.wait(lock);
if(requests.size() > 0 && processed)
{
RequestRecord *front = requests.front();
manager->get(QNetworkRequest(QUrl(front->url)));
processed = false;
QtConcurrent::run([this]()
{
while (running)
{
cv.notify_one();
QThread::msleep(10);
}
});
}
}
QCoreApplication::processEvents();
}
}
};
It's obviously not beautiful, since it uses 3 threads instead of 2, but it is Qt, with this perfect phrase:
QUrl QNetworkReply::url() const Returns the URL of the content
downloaded or uploaded. Note that the URL may be different from that
of the original request. If the
QNetworkRequest::FollowRedirectsAttribute was set in the request, then
this function returns the current url that the network API is
accessing, i.e the url emitted in the QNetworkReply::redirected
signal.
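A slightly smaller variation on the same workaround (my own sketch, untested) drops the extra notifier thread by waiting with a timeout, so the loop wakes up periodically and processEvents() can still deliver the finished signal. It assumes the same extra bool member processed as the fix above and needs <chrono> and <QCoreApplication>:

void process()
{
    manager = new QNetworkAccessManager;
    connect(manager, &QNetworkAccessManager::finished, [this](QNetworkReply *reply)
    {
        std::unique_lock<std::mutex> lock(mutex);
        delete requests.front();
        requests.pop();
        processed = true;
        reply->deleteLater();
    });
    running = true;
    processed = true;
    while (running)
    {
        {
            std::unique_lock<std::mutex> lock(mutex);
            // wake up every 10 ms even if nobody notified us
            cv.wait_for(lock, std::chrono::milliseconds(10));
            if (!requests.empty() && processed)
            {
                manager->get(QNetworkRequest(QUrl(requests.front()->url)));
                processed = false;
            }
        }
        QCoreApplication::processEvents(); // let Qt deliver finished()
    }
}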

How many mutex(es) should be used in one thread

I am working on a C++(11) project, and on the main thread I need to check the value of two variables. The value of the two variables will be set by other threads through two different callbacks. I am using two condition variables to notify changes of those two variables. Because in C++ locks are needed for condition variables, I am not sure if I should use the same mutex for the two condition variables, or if I should use two mutexes to minimize exclusive execution. Somehow I feel one mutex should be sufficient, because on one thread (the main thread in this case) the code will be executed sequentially anyway. The code on the main thread that checks (waits for) the value of the two variables won't be interleaved anyway. Let me know if you need me to write code to illustrate the problem; I can prepare that. Thanks.
Update, add code:
#include <mutex>
class SomeEventObserver {
public:
virtual void handleEventA() = 0;
virtual void handleEventB() = 0;
};
class Client : public SomeEventObserver {
public:
Client() {
m_shouldQuit = false;
m_hasEventAHappened = false;
m_hasEventBHappened = false;
}
// will be called by some other thread (for example, thread 10)
virtual void handleEventA() override {
{
std::lock_guard<std::mutex> lock(m_mutexForA);
m_hasEventAHappened = true;
}
m_condVarEventForA.notify_all();
}
// will be called by some other thread (for example, thread 11)
virtual void handleEventB() override {
{
std::lock_guard<std::mutex> lock(m_mutexForB);
m_hasEventBHappened = true;
}
m_condVarEventForB.notify_all();
}
// here waitForA and waitForB are in the main thread; they are executed sequentially
// so I am wondering if I can use just one mutex to simplify the code
void run() {
waitForA();
waitForB();
}
void doShutDown() {
m_shouldQuit = true;
}
private:
void waitForA() {
std::unique_lock<std::mutex> lock(m_mutexForA);
m_condVarEventForA.wait(lock, [this]{ return m_hasEventAHappened; });
}
void waitForB() {
std::unique_lock<std::mutex> lock(m_mutexForB);
m_condVarEventForB.wait(lock, [this]{ return m_hasEventBHappened; });
}
// I am wondering if I can use just one mutex
std::condition_variable m_condVarEventForA;
std::condition_variable m_condVarEventForB;
std::mutex m_mutexForA;
std::mutex m_mutexForB;
bool m_hasEventAHappened;
bool m_hasEventBHappened;
};
int main(int argc, char* argv[]) {
Client client;
client.run();
}
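For illustration, here is a sketch (mine, not an authoritative answer) of the single-mutex variant the question describes, reusing SomeEventObserver from above; the two condition variables can even be collapsed into one, as long as every wait re-checks its own predicate:

#include <condition_variable>
#include <mutex>

class Client : public SomeEventObserver {
public:
    virtual void handleEventA() override {
        { std::lock_guard<std::mutex> lock(m_mutex); m_hasEventAHappened = true; }
        m_condVarEvents.notify_all();
    }
    virtual void handleEventB() override {
        { std::lock_guard<std::mutex> lock(m_mutex); m_hasEventBHappened = true; }
        m_condVarEvents.notify_all();
    }
    void run() { waitForA(); waitForB(); }
private:
    void waitForA() {
        std::unique_lock<std::mutex> lock(m_mutex);
        // a notification meant for event B just causes a harmless re-check of this predicate
        m_condVarEvents.wait(lock, [this]{ return m_hasEventAHappened; });
    }
    void waitForB() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condVarEvents.wait(lock, [this]{ return m_hasEventBHappened; });
    }
    std::condition_variable m_condVarEvents; // one cv is enough once every wait has a predicate
    std::mutex m_mutex;                      // single mutex guarding both flags
    bool m_hasEventAHappened = false;
    bool m_hasEventBHappened = false;
};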

Make scope of lock smaller in threadsafe queue

I have a DLL which has high-priority functionality that runs in a high-priority thread. This DLL needs to report progress. Basically, a callback system is used. The issue is that the DLL has no control over the amount of time the callback takes to complete. This means the high-priority functionality depends on the implementation of the callback, which is not acceptable.
The idea is to have a class in between that buffers the progress notifications and calls the callback.
I'm new to C++11 and its threading functionality and am trying to discover the possibilities. I have an implementation, but I see at least one issue with it: when the thread awakens after the wait, the mutex is reacquired and stays acquired until the next wait. This means the lock is held for as long as the lengthy operation runs, so adding progress will block here. Basically a lot of code for no gain. I thought of changing the code to this, but I don't know if this is the correct implementation:
Progress progress = queue.front();
queue.pop();
lock.unlock();
// Do lengthy operation with progress
lock.lock();
I think I need to wait for the condition variable, but that should not be connected to the lock. I don't see how this can be done. Pass a dummy lock and use a different lock to protect the queue? How should this problem be tackled in C++11?
header file
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <queue>
#include "Error.h"
#include "TypeDefinitions.h"
struct Progress
{
StateDescription State;
uint8 ProgressPercentage;
};
class ProgressIsolator
{
public:
ProgressIsolator();
virtual ~ProgressIsolator();
void ReportProgress(const Progress& progress);
void Finish();
private:
std::atomic<bool> shutdown;
std::condition_variable itemAvailable;
std::mutex mutex;
std::queue<Progress> queue;
std::thread worker;
void Work();
};
cpp file
#include "ProgressIsolator.h"
ProgressIsolator::ProgressIsolator() :
shutdown(false),
worker([this]{ Work(); }) // the callback members in the original initializer list are not declared in the header above, so they are omitted here
{
// TODO: only continue when worker thread is ready and listening?
}
ProgressIsolator::~ProgressIsolator()
{
Finish();
worker.join();
}
void ProgressIsolator::ReportProgress(const Progress& progress)
{
std::unique_lock<std::mutex> lock(mutex);
queue.push(progress);
itemAvailable.notify_one();
}
void ProgressIsolator::Finish()
{
shutdown = true;
itemAvailable.notify_one();
}
void ProgressIsolator::Work()
{
std::unique_lock<std::mutex> lock(mutex);
while (!shutdown)
{
itemAvailable.wait(lock);
while (!queue.empty())
{
Progress progress = queue.front();
queue.pop();
// Do lengthy operation with progress
}
}
}
A reworked Work() that takes an item under the lock and runs the lengthy operation outside it could look like this; note that the wait predicate also has to account for shutdown, otherwise Finish() would leave the worker blocked forever on an empty queue:
void ProgressIsolator::Work()
{
while (!shutdown)
{
Progress progress;
{
std::unique_lock<std::mutex> lock(mutex);
itemAvailable.wait(lock, [this] { return shutdown || !queue.empty(); });
if (queue.empty())
continue; // woken for shutdown, nothing left to report
progress = queue.front();
queue.pop();
}
// Do lengthy operation with progress, outside the lock
}
}
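One additional detail worth noting (my own observation, not from the original post): Finish() sets shutdown and notifies without holding the mutex, so a notification can in principle fire between the worker's predicate check and its actual block and be lost. Taking the lock briefly before notifying closes that window:

void ProgressIsolator::Finish()
{
    {
        std::lock_guard<std::mutex> lock(mutex); // pairs with the predicate check in Work()
        shutdown = true;
    }
    itemAvailable.notify_one();
}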

Resources