From Windows Threading Libraries to C++11 - multithreading

I want to remove some Windows dependencies from how the threading is done in old code. How can I convert this piece of code to C++11 threading style?
MyClass run Method:
void MyClass::run()
{
    while (true)
    {
        WaitForSingleObject(startEvent, INFINITE);
        processData();
        ResetEvent(startEvent);
        SetEvent(hEvent);
    }
}
Main Update in another class:
{
    .
    .
    .
    WaitForSingleObject(myClassInstance.hEvent, INFINITE);
    ResetEvent(myClassInstance.hEvent);
    // Getting data processed by myClassInstance in the previous update call
    // Mem copies to myClassInstance to be used later by myClassInstance processData()
    SetEvent(myClassInstance.startEvent);
    .
    .
    .
}

You can create an event class easily with std::condition_variable and a bool:
class Event {
    std::condition_variable cv_;
    std::mutex mtx_;
    bool signaled_ = false;
public:
    void wait() {
        std::unique_lock<std::mutex> lock{mtx_};
        while (!signaled_) {
            cv_.wait(lock);
        }
    }

    void reset() {
        std::lock_guard<std::mutex> lock{mtx_};
        signaled_ = false;
    }

    void set() {
        {
            std::lock_guard<std::mutex> lock{mtx_};
            signaled_ = true;
        }
        cv_.notify_one();
    }
};
Resulting in the usage:
struct MyClass {
    Event start;
    Event ready;
    void processData();
    void run();
};

void MyClass::run() {
    while (true) {
        start.wait();
        processData();
        start.reset();
        ready.set();
    }
}

void main_update_in_another_class(MyClass& myClassInstance) {
    myClassInstance.ready.wait();
    myClassInstance.ready.reset();
    // Getting data processed by myClassInstance in the previous update call
    // Mem copies to myClassInstance to be used later by myClassInstance processData()
    myClassInstance.start.set();
    // Do other things that don't require access to myClassInstance
}

Related

TService and TThread in C++Builder

I'm not sure if I developed everything correctly.
At least it's working, but is it really correct?
Sometimes I also get an error when starting the service; if I try again, it starts.
Thanks for your help.
I developed a service which creates a thread "Importer".
The Importer reads the configuration, such as connection parameters, from the registry. Here I sometimes get the problem that some parameters cannot be read.
The Importer checks for files with the extension .jpg in a directory. The path of the directory is also stored in the registry.
If JPGs exist in the directory, they are imported into a database and removed from the filesystem.
Once all JPGs are imported, the "Importer" thread sleeps for x minutes, which is also configured in the registry.
The service
//---------------------------------------------------------------------------
void __fastcall TSrvCrateImage::ServiceExecute(TService *Sender) {
    while (!Terminated) {
        ServiceThread->ProcessRequests(false);
        // MyImporter->Resume();
        Sleep(1000);
    }
}
//---------------------------------------------------------------------------
void __fastcall TSrvCrateImage::ServiceStart(TService *Sender, bool &Started) {
    MyImporter = new Importer(false);
    bool valid = ReadConfig();
    if (valid) {
        Started = true;
    } else {
        Started = false;
    }
}
//---------------------------------------------------------------------------
bool __fastcall TSrvCrateImage::ReadConfig() {
    UnicodeString msg = MyImporter->ReadConfig();
    if (!msg.IsEmpty()) {
        LogMessage(msg, EVENTLOG_ERROR_TYPE);
        return false;
    }
    LogMessage("Configuration loaded.", EVENTLOG_INFORMATION_TYPE);
    return true;
}
//---------------------------------------------------------------------------
void __fastcall TSrvCrateImage::ServiceStop(TService *Sender, bool &Stopped) {
    MyImporter->Terminate();
    Stopped = true;
}
The Importer
__fastcall Importer::Importer(bool CreateSuspended)
    : TThread(CreateSuspended) {
    m_SleepMin = 0;
    m_ConfFile = new TRegConfigFile();
}
//---------------------------------------------------------------------------
void __fastcall Importer::Execute() {
    try {
        while (!Terminated) {
            ImportImageFiles();
            Sleep(m_SleepMin * 60 * 1000);
        }
    } catch (Exception &exception) {
        SrvCrateImage->LogMessage("Importer::Execute() " + exception.ToString(), EVENTLOG_ERROR_TYPE);
    }
}
Thanks a lot.
I started the service, and sometimes I get errors that some parameters do not exist.
I don't know what happens if a problem occurs.
Will the service and thread still work, or not, due to bad thread programming?
I see a number of issues with your code.
Get rid of the TService::OnExecute event handler completely. You don't need it, it is not doing anything useful, and TService will handle SCM requests internally for you when there is no OnExecute handler assigned.
What is the point of creating a new thread for the importer if you are going to make the service thread dependent on reading the config from the importer? You have two threads fighting over a single config without any synchronization between them. I would suggest having the service thread read in the config before starting the importer thread.
Your service's OnStop event handler is signaling the importer thread to terminate, but it is not waiting for the thread to fully terminate, let alone destroying the thread. It needs to do both.
With that said, try something more like this:
Service:
//---------------------------------------------------------------------------
void __fastcall TSrvCrateImage::ServiceStart(TService *Sender, bool &Started) {
    TRegConfigFile *ConfFile = ReadConfig();
    if (ConfFile) {
        MyImporter = new Importer(false, ConfFile);
        Started = true;
    } else {
        Started = false;
    }
}
//---------------------------------------------------------------------------
TRegConfigFile* __fastcall TSrvCrateImage::ReadConfig() {
    String msg;
    // read config directly, not from importer thread...
    TRegConfigFile *ConfFile = new TRegConfigFile();
    try {
        msg = ...; // read from ConfFile as needed?
    }
    catch (Exception &exception) {
        msg = exception.ToString();
    }
    if (!msg.IsEmpty()) {
        LogMessage(msg, EVENTLOG_ERROR_TYPE);
        delete ConfFile;
        return NULL;
    }
    LogMessage("Configuration loaded.", EVENTLOG_INFORMATION_TYPE);
    return ConfFile;
}
//---------------------------------------------------------------------------
void __fastcall TSrvCrateImage::ServiceStop(TService *Sender, bool &Stopped) {
    if (MyImporter) {
        MyImporter->Terminate();
        HANDLE h = (HANDLE) MyImporter->Handle;
        while (WaitForSingleObject(h, WaitHint - 100) == WAIT_TIMEOUT) {
            ReportStatus();
        }
        delete MyImporter;
    }
    Stopped = true;
}
Importer:
__fastcall Importer::Importer(bool CreateSuspended, TRegConfigFile *ConfFile)
    : TThread(CreateSuspended) {
    m_ConfFile = ConfFile;
    m_SleepMin = ...; // read from m_ConfFile as needed
    ...
}
//---------------------------------------------------------------------------
__fastcall Importer::~Importer() {
    delete m_ConfFile;
}
//---------------------------------------------------------------------------
void __fastcall Importer::Execute() {
    while (!Terminated) {
        ImportImageFiles();
        Sleep(m_SleepMin * 60 * 1000);
    }
}
//---------------------------------------------------------------------------
void __fastcall Importer::DoTerminate() {
    if (FatalException) {
        SrvCrateImage->LogMessage("Importer " + ((Exception*)FatalException)->ToString(), EVENTLOG_ERROR_TYPE);
    }
    TThread::DoTerminate();
}

How to launch an std::async thread from within an std::async thread and let the first one die once the second is launched?

What I am trying to achieve is an autonomous async thread mill, where async A does its task, launches async B and dies; async B does the same, and so on.
Example code: main.cpp
#include <chrono>
#include <functional>
#include <iostream>
#include <memory>
#include <thread>
#include "async_operation.hpp"

class operation_manager : public std::enable_shared_from_this<operation_manager> {
public:
    operation_manager() {}

    void do_operation(void) {
        std::function<void(std::shared_ptr<operation_manager>)> fun(
            [this](std::shared_ptr<operation_manager> a_ptr) {
                if (a_ptr != nullptr) {
                    a_ptr->do_print();
                }
            });
        i_ap.read(fun, shared_from_this());
    }

    void do_print(void) {
        std::cout << "Hello world\n" << std::flush;
        do_operation();
    }

private:
    async_operation i_ap;
};

int main(int argc, const char *argv[]) {
    auto om(std::make_shared<operation_manager>());
    om->do_operation();

    while (true) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    return 0;
}
Example code: async_operation.hpp
#include <chrono>
#include <functional>
#include <future>
#include <memory>
#include <thread>

class async_operation {
public:
    async_operation() {};

    template<typename T>
    void read(std::function<void(std::shared_ptr<T>)> a_callback, std::shared_ptr<T> a_ptr) {
        auto result(std::async(std::launch::async, [&]() {
            wait();
            a_callback(a_ptr);
            return true;
        }));
        result.get();
    }

private:
    void wait(void) const {
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
};
Your mistake is calling result.get() inside the async task - that causes it to block and wait for the next task to finish. What you need to do is save the futures somewhere and let them run.
Here is the modified async_operation class:
#include <vector>

std::vector<std::shared_ptr<std::future<bool>>> results;

class async_operation {
public:
    async_operation() {};

    template<typename T>
    void read(std::function<void(std::shared_ptr<T>)> a_callback, std::shared_ptr<T> a_ptr) {
        results.push_back(std::make_shared<std::future<bool>>(std::async(std::launch::async, [=]() {
            wait();
            a_callback(a_ptr);
            return true;
        })));
    }

private:
    void wait(void) const {
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
};
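A side effect of this change is that the global results vector only ever grows. A possible cleanup helper (hypothetical, not part of the original answer) could periodically drop futures whose tasks have already completed, using std::future::wait_for with a zero timeout:
#include <chrono>
#include <future>
#include <memory>
#include <vector>

// Hypothetical helper: erase futures whose async task has already finished.
// 'futures' is expected to be the global 'results' vector from above.
inline void prune_finished(std::vector<std::shared_ptr<std::future<bool>>> &futures) {
    for (auto it = futures.begin(); it != futures.end(); ) {
        if ((*it)->wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            it = futures.erase(it);   // task done, keeping the future serves no purpose
        } else {
            ++it;
        }
    }
}
Calling something like prune_finished(results) from the sleep loop in main() would keep the vector from growing without bound.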

How many mutex(es) should be used in one thread

I am working on a C++11 project and, on the main thread, I need to check the value of two variables. The values of the two variables will be set by other threads through two different callbacks. I am using two condition variables to notify changes of those two variables. Because in C++ locks are needed for condition variables, I am not sure if I should use the same mutex for the two condition variables or use two mutexes to minimize exclusive execution. Somehow, I feel one mutex should be sufficient because on one thread (the main thread in this case) the code will be executed sequentially anyway; the code on the main thread that checks (waits for) the values of the two variables won't be interleaved. Let me know if you need me to write code to illustrate the problem. I can prepare that. Thanks.
Update, add code:
#include <mutex>
#include <condition_variable>

class SomeEventObserver {
public:
    virtual void handleEventA() = 0;
    virtual void handleEventB() = 0;
};

class Client : public SomeEventObserver {
public:
    Client() {
        m_shouldQuit = false;
        m_hasEventAHappened = false;
        m_hasEventBHappened = false;
    }

    // will be called by some other thread (for example, thread 10)
    virtual void handleEventA() override {
        {
            std::lock_guard<std::mutex> lock(m_mutexForA);
            m_hasEventAHappened = true;
        }
        m_condVarEventForA.notify_all();
    }

    // will be called by some other thread (for example, thread 11)
    virtual void handleEventB() override {
        {
            std::lock_guard<std::mutex> lock(m_mutexForB);
            m_hasEventBHappened = true;
        }
        m_condVarEventForB.notify_all();
    }

    // here waitForA and waitForB are in the main thread, they are executed sequentially
    // so I am wondering if I can use just one mutex to simplify the code
    void run() {
        waitForA();
        waitForB();
    }

    void doShutDown() {
        m_shouldQuit = true;
    }

private:
    void waitForA() {
        std::unique_lock<std::mutex> lock(m_mutexForA);
        m_condVarEventForA.wait(lock, [this]{ return m_hasEventAHappened; });
    }

    void waitForB() {
        std::unique_lock<std::mutex> lock(m_mutexForB);
        m_condVarEventForB.wait(lock, [this]{ return m_hasEventBHappened; });
    }

    // I am wondering if I can use just one mutex
    std::condition_variable m_condVarEventForA;
    std::condition_variable m_condVarEventForB;
    std::mutex m_mutexForA;
    std::mutex m_mutexForB;
    bool m_shouldQuit;
    bool m_hasEventAHappened;
    bool m_hasEventBHappened;
};

int main(int argc, char* argv[]) {
    Client client;
    client.run();
}
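For reference, a sketch of the single-mutex variant the comments ask about might look like the following (the class name is mine and inheritance from SomeEventObserver is omitted for brevity); since the handlers hold the lock only long enough to flip a flag, sharing one mutex adds very little contention here:
#include <condition_variable>
#include <mutex>

// Sketch: both condition variables share one mutex, which also guards both flags.
class ClientOneMutex {
public:
    void handleEventA() {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_hasEventAHappened = true;
        }
        m_condVarEventForA.notify_all();
    }

    void handleEventB() {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_hasEventBHappened = true;
        }
        m_condVarEventForB.notify_all();
    }

    void run() {
        std::unique_lock<std::mutex> lock(m_mutex);
        // wait() releases m_mutex while blocked, so the handlers can still acquire it
        m_condVarEventForA.wait(lock, [this] { return m_hasEventAHappened; });
        m_condVarEventForB.wait(lock, [this] { return m_hasEventBHappened; });
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condVarEventForA;
    std::condition_variable m_condVarEventForB;
    bool m_hasEventAHappened = false;
    bool m_hasEventBHappened = false;
};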

properly ending an infinite std::thread

I have a reusable class that starts up an infinite thread. This thread can only be killed by calling a stop function that sets a kill-switch variable. When looking around, there is quite a bit of argument over volatile vs. atomic variables.
The following is my code:
program.cpp
#include <windows.h>      // Sleep()
#include "ThreadClass.h"

int main()
{
    ThreadClass threadClass;
    threadClass.Start();
    Sleep(1000);
    threadClass.Stop();
    Sleep(50);
    threadClass.Stop();
}
ThreadClass.h
#pragma once
#include <atomic>
#include <thread>
class ThreadClass
{
public:
    ThreadClass(void);
    ~ThreadClass(void);
    void Start();
    void Stop();

private:
    void myThread();
    std::atomic<bool> runThread;
    std::thread theThread;
};
ThreadClass.cpp
#include "ThreadClass.h"
ThreadClass::ThreadClass(void)
{
runThread = false;
}
ThreadClass::~ThreadClass(void)
{
}
void ThreadClass::Start()
{
runThread = true;
the_thread = std::thread(&mythread, this);
}
void ThreadClass::Stop()
{
if(runThread)
{
runThread = false;
if (the_thread.joinable())
{
the_thread.join();
}
}
}
void ThreadClass::mythread()
{
while(runThread)
{
//dostuff
Sleep(100); //or chrono
}
}
The code that I am presenting here mirrors an issue that our legacy code had in place. We call the stop function twice, which will try to join the thread twice. This results in an invalid handle exception. I have coded the Stop() function to work around that issue, but my question is: why would the join fail the second time if the thread has completed and joined? Is there a better way, programmatically, to verify that the thread is valid before trying to join?
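For what it's worth, the second join() fails because a successful join() leaves the std::thread non-joinable, and calling join() on a non-joinable thread throws std::system_error; the legacy code presumably lacked the joinable() check. A small sketch (not the original code) of an idempotent Stop() plus a destructor that always cleans up:
// Sketch: Stop() is safe to call any number of times because joinable()
// returns false after a successful join(); the destructor reuses it so the
// worker is always gone before the members are destroyed.
void ThreadClass::Stop()
{
    runThread = false;
    if (theThread.joinable())
    {
        theThread.join();
    }
}

ThreadClass::~ThreadClass()
{
    Stop();
}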

Implementing boost::barrier in C++11

I've been trying to rid a project of every Boost reference and switch to pure C++11.
At one point, worker threads are created which wait for a barrier to give the 'go' command, do the work (spread across the N threads) and synchronize when all of them finish. The basic idea is that the main loop gives the go order (boost::barrier::wait()) and waits for the result with the same function.
In a different project I had implemented a custom-made Barrier based on the Boost version, and everything worked perfectly. The implementation is as follows:
Barrier.h:
#include <mutex>

class Barrier {
public:
    Barrier(unsigned int n);
    void Wait(void);

private:
    std::mutex counterMutex;
    std::mutex waitMutex;
    unsigned int expectedN;
    unsigned int currentN;
};
Barrier.cpp
#include "Barrier.h"

Barrier::Barrier(unsigned int n) {
    expectedN = n;
    currentN = expectedN;
}

void Barrier::Wait(void) {
    counterMutex.lock();

    // If we're the first thread, we want an extra lock at our disposal
    if (currentN == expectedN) {
        waitMutex.lock();
    }

    // Decrease thread counter
    --currentN;

    if (currentN == 0) {
        currentN = expectedN;
        waitMutex.unlock();
        currentN = expectedN;
        counterMutex.unlock();
    } else {
        counterMutex.unlock();
        waitMutex.lock();
        waitMutex.unlock();
    }
}
This code has been used on iOS and in Android's NDK without any problems, but when I tried it in a Visual Studio 2013 project it seems that only the thread which locked a mutex may unlock it (assertion: unlock of unowned mutex).
Is there any non-spinning (blocking, like this one) barrier that I can use that works with C++11? I've only been able to find barriers that use busy-waiting, which is something I would like to avoid (unless there is really no way around it).
class Barrier {
public:
    explicit Barrier(std::size_t iCount) :
        mThreshold(iCount),
        mCount(iCount),
        mGeneration(0) {
    }

    void Wait() {
        std::unique_lock<std::mutex> lLock{mMutex};
        auto lGen = mGeneration;
        if (!--mCount) {
            mGeneration++;
            mCount = mThreshold;
            mCond.notify_all();
        } else {
            mCond.wait(lLock, [this, lGen] { return lGen != mGeneration; });
        }
    }

private:
    std::mutex mMutex;
    std::condition_variable mCond;
    std::size_t mThreshold;
    std::size_t mCount;
    std::size_t mGeneration;
};
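A minimal usage sketch for this barrier (my own example, not from the answer): each of the N worker threads calls Wait() once per iteration, and the generation counter makes the same Barrier object safe to reuse across iterations:
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Assumes the Barrier class defined directly above.
int main() {
    const std::size_t kThreads = 3;
    Barrier barrier(kThreads);

    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < kThreads; ++i) {
        workers.emplace_back([&barrier] {
            for (int iter = 0; iter < 5; ++iter) {
                // ... this thread's share of the work for the iteration ...
                barrier.Wait();   // nobody starts the next iteration early
            }
        });
    }
    for (auto &t : workers) {
        t.join();
    }
    std::cout << "all iterations finished\n";
    return 0;
}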
Use a std::condition_variable instead of a std::mutex to block all threads until the last one reaches the barrier.
class Barrier
{
private:
    std::mutex _mutex;
    std::condition_variable _cv;
    std::size_t _count;

public:
    explicit Barrier(std::size_t count) : _count(count) { }

    void Wait()
    {
        std::unique_lock<std::mutex> lock(_mutex);
        if (--_count == 0) {
            _cv.notify_all();
        } else {
            _cv.wait(lock, [this] { return _count == 0; });
        }
    }
};
Here's my version of the accepted answer above with auto-reset behavior for repeated use; this is achieved by counting down and up alternately.
/**
 * @brief Represents a CPU thread barrier
 * @note The barrier automatically resets after all threads are synced
 */
class Barrier
{
private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    size_t m_count;
    const size_t m_initial;

    enum State : unsigned char {
        Up, Down
    };
    State m_state;

public:
    explicit Barrier(std::size_t count) : m_count{ count }, m_initial{ count }, m_state{ State::Down } { }

    /// Blocks until all N threads reach here
    void Sync()
    {
        std::unique_lock<std::mutex> lock{ m_mutex };

        if (m_state == State::Down)
        {
            // Counting down the number of syncing threads
            if (--m_count == 0) {
                m_state = State::Up;
                m_cv.notify_all();
            }
            else {
                m_cv.wait(lock, [this] { return m_state == State::Up; });
            }
        }
        else // (m_state == State::Up)
        {
            // Counting back up for auto reset
            if (++m_count == m_initial) {
                m_state = State::Down;
                m_cv.notify_all();
            }
            else {
                m_cv.wait(lock, [this] { return m_state == State::Down; });
            }
        }
    }
};
It seems all of the above answers stop working when the barrier calls are placed too close together.
Example: each thread runs a while loop that looks like this:
while (true)
{
    threadBarrier->Synch();
    // do heavy computation
    threadBarrier->Synch();
    // small external calculations like timing, loop count, etc, ...
}
And here is the solution using STL:
class ThreadBarrier
{
public:
    int m_threadCount = 0;
    int m_currentThreadCount = 0;
    std::mutex m_mutex;
    std::condition_variable m_cv;

public:
    inline ThreadBarrier(int threadCount)
    {
        m_threadCount = threadCount;
    };

public:
    inline void Synch()
    {
        bool wait = false;

        m_mutex.lock();
        m_currentThreadCount = (m_currentThreadCount + 1) % m_threadCount;
        wait = (m_currentThreadCount != 0);
        m_mutex.unlock();

        if (wait)
        {
            std::unique_lock<std::mutex> lk(m_mutex);
            m_cv.wait(lk);
        }
        else
        {
            m_cv.notify_all();
        }
    };
};
And the solution for Windows:
class ThreadBarrier
{
public:
    SYNCHRONIZATION_BARRIER m_barrier;

public:
    inline ThreadBarrier(int threadCount)
    {
        InitializeSynchronizationBarrier(
            &m_barrier,
            threadCount,
            8000);
    };

public:
    inline void Synch()
    {
        EnterSynchronizationBarrier(
            &m_barrier,
            0);
    };
};
