C++11: update pthread-based code to std::thread or boost::thread

I have the following code that I would like to update to be more portable and C++11-friendly. However, I'm stuck on how to replace the pthread calls. I can use std::this_thread::get_id() to get the thread id, but I can't tell whether that thread is still alive.
pthread_t activeThread = 0;
pthread_t getCurrentThread() {
    return pthread_self();
}
bool isActiveThreadAlive() {
    if (activeThread == 0) {
        return false;
    }
    return pthread_kill(activeThread, 0) != ESRCH;
}
Potential std::thread version...
std::thread::id activeThread = std::thread::id();
std::thread::id getCurrentThread() {
    return std::this_thread::get_id();
}
bool isActiveThreadAlive() {
    if (activeThread == std::thread::id()) {
        return false;
    }
    return pthread_kill(activeThread, 0) != ESRCH; // <--- need replacement!!!
}
What the code really needs to do is know if the thread has died from an exception or some other error that caused it to terminate without releasing the object. As in the following...
std::unique_lock<std::mutex> uLock = getLock();
while (activeThread != 0) {
    if (threadWait.wait_for(uLock, std::chrono::seconds(30)) == std::cv_status::timeout) {
        if (!isActiveThreadAlive()) {
            activeThread = 0;
        }
    }
}
activeThread = getCurrentThread();
uLock.unlock();
try {
    // do stuff here.
}
catch (const std::exception&) {
}
uLock.lock();
activeThread = 0;
And before anyone asks: I do not have guaranteed control over when, where, or how the threads are created. The threads that call these functions may come from anywhere.
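For what it's worth, standard C++11 offers no portable way to ask whether an arbitrary std::thread::id still refers to a live thread. A common workaround is to make the "active" slot self-cleaning with an RAII guard, so it is released even when the worker leaves via an exception, which removes the need for a liveness poll entirely. Below is a minimal sketch of that idea, assuming a single shared slot protected by the same mutex and condition variable as above; the ActiveGuard name is invented for illustration.
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex slotMutex;
std::condition_variable threadWait;
std::thread::id activeThread;   // default-constructed id means "no active thread"

// RAII helper: claims the slot on construction and always releases it on
// destruction, even if the owning code exits via an exception.
struct ActiveGuard {
    ActiveGuard() {
        std::unique_lock<std::mutex> lock(slotMutex);
        threadWait.wait(lock, [] { return activeThread == std::thread::id(); });
        activeThread = std::this_thread::get_id();
    }
    ~ActiveGuard() {
        std::lock_guard<std::mutex> lock(slotMutex);
        activeThread = std::thread::id();
        threadWait.notify_all();
    }
};

void doGuardedWork() {
    ActiveGuard guard;   // the slot is released however this function exits
    // do stuff here.
}
This works as long as the worker terminates by returning or by throwing; a thread killed without stack unwinding (or a crashed process) is outside what standard C++ can observe.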

Related

RxCPP: stack overflow when using the retry operator indefinitely on an observable

I'm trying to make an observable that is executed again when an error is detected, but I noticed something: when on_error() is executed with the retry operator, the observable is re-run, but the current instance of the observable is still on the stack, in other words it is still alive.
I made a test to verify the behavior:
#include <string>
#include "rxcpp/rx.hpp"

class test_class
{
public:
    int a;
    test_class() {
        printf("Create Obj \n");
        a = 1;
    }
    ~test_class() {
        printf("Destroy Obj \n");
        a = 0;
    }
};

int main()
{
    // Create Observable request
    auto values = rxcpp::observable<>::create<std::string>(
        [&](rxcpp::subscriber<std::string> subscriber) {
            test_class test;
            while (subscriber.is_subscribed()) {
                std::exception_ptr eptr = std::current_exception();
                subscriber.on_error(eptr);
                int a;
                a = 2;
                subscriber.on_next("normal");
            }
        })
        .retry()
        .as_dynamic();
    values.
        subscribe(
            [](std::string v) {
                printf("OnNext: %s\n", v.c_str()); },
            [](std::exception_ptr ep) {
                printf("OnError: %s\n", rxcpp::util::what(ep).c_str()); },
            []() {
                printf("OnCompleted\n"); });
}
So my output is:
Create Obj
Create Obj
Create Obj
Create Obj
...
I expected to see "Destroy Obj" in the output as well,
and I also got a stack overflow exception.
My goal is to execute an observable that, when an error is triggered, can be restarted, but with the current instance destroyed first, in order to prevent the stack overflow exception.
Maybe there is another way to do this; could you help me?
I found a possible solution: I removed the loop inside the observable and the retry operator, and added a loop around the subscribe operation instead.
I know it is not an elegant solution, but it captures the idea of what I want to do. Could you help me with this?
What would be a better way to do this with the RxCPP library?
#include <string>
#include "rxcpp/rx.hpp"

class test_class
{
public:
    int a;
    test_class() {
        printf("Create Obj \n");
        a = 1;
    }
    ~test_class() {
        printf("Destroy Obj \n");
        a = 0;
    }
};

int main()
{
    // Create Observable request
    auto values = rxcpp::observable<>::create<std::string>(
        [&](rxcpp::subscriber<std::string> subscriber) {
            test_class test;
            //while (subscriber.is_subscribed()) {
            std::exception_ptr eptr = std::current_exception();
            subscriber.on_error(eptr);
            int a;
            a = 2;
            subscriber.on_next("normal");
            //}
        });
        //.retry()
        //.as_dynamic();
    for (;;) {
        values.
            subscribe(
                [](std::string v) {
                    printf("OnNext: %s\n", v.c_str()); },
                [](std::exception_ptr ep) {
                    printf("OnError: %s\n", rxcpp::util::what(ep).c_str()); },
                []() {
                    printf("OnCompleted\n"); });
    }
}
Here is my output:
Create Obj
OnError: bad exception
Destroy Obj
Create Obj
OnError: bad exception
Destroy Obj
without a stack overflow exception.
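For reference, the first version never prints "Destroy Obj" and eventually overflows the stack because retry() resubscribes synchronously from inside on_error, so every failed subscription's frame (including its local test_class) is still on the stack when the next attempt starts. Resubscribing from outside the operator chain, as in the workaround above, lets each subscription unwind first. A minimal sketch of that workaround factored into a helper, using only the calls already shown (the run_with_resubscribe name is hypothetical):
#include <string>
#include "rxcpp/rx.hpp"

// Resubscribe from outside the operator chain so each failed subscription
// fully unwinds (destroying its captured state) before the next attempt.
template <class Observable>
void run_with_resubscribe(Observable values)
{
    for (;;) {
        values.subscribe(
            [](std::string v) { printf("OnNext: %s\n", v.c_str()); },
            [](std::exception_ptr ep) {
                // subscribe() returns after this handler; the loop then retries
                printf("OnError: %s\n", rxcpp::util::what(ep).c_str());
            },
            []() { printf("OnCompleted\n"); });
    }
}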

multiple-readers, single-writer locks in OpenMP

There is an object shared by multiple threads for reading and writing, and I need to implement the class with a reader-writer lock that has the following behavior:
It can be declared occupied by at most one thread. Any other thread that tries to occupy it is rejected and continues with its own work rather than being blocked.
Any thread may ask at any time whether the object is occupied by itself or by others, except while it is being declared occupied or released.
Only the owner of the object may release its ownership, though other threads might try as well; if the caller is not the owner, the release is cancelled.
Performance needs to be carefully considered.
I'm doing the work with OpenMP, so I would like to implement the lock using only the APIs within OpenMP rather than POSIX or similar. I have read this answer, but it only covers solutions based on the C++ standard library. Since mixing OpenMP with the C++ standard library or the POSIX thread model may slow the program down, I wonder whether there is a good OpenMP-only solution.
I have tried the following; sometimes it works fine, but sometimes it crashes or deadlocks, and I find it hard to debug as well.
class Element
{
public:
    typedef int8_t label_t;
    Element() : occupied_(-1) {}
    // Set it occupied by thread #myThread.
    // Return whether it is set successfully.
    bool setOccupiedBy(const int myThread)
    {
        if (lock_.try_lock())
        {
            if (occupied_ == -1)
            {
                occupied_ = myThread;
                ready_.set(true);
            }
        }
        // assert(lock_.get() && ready_.get());
        return occupied_ == myThread;
    }
    // Return whether it is occupied by other threads
    // except for thread #myThread.
    bool isOccupiedByOthers(const int myThread) const
    {
        bool value = true;
        while (lock_.get() != ready_.get());
        value = occupied_ != -1 && occupied_ != myThread;
        return value;
    }
    // Return whether it is occupied by thread #myThread.
    bool isOccupiedBySelf(const int myThread) const
    {
        bool value = true;
        while (lock_.get() != ready_.get());
        value = occupied_ == myThread;
        return value;
    }
    // Clear its occupying mark by thread #myThread.
    void clearOccupied(const int myThread)
    {
        while (true)
        {
            bool ready = ready_.get();
            bool lock = lock_.get();
            if (!ready && !lock)
                return;
            if (ready && lock)
                break;
        }
        label_t occupied = occupied_;
        if (occupied == myThread)
        {
            ready_.set(false);
            occupied_ = -1;
            lock_.unlock();
        }
        // assert(ready_.get() == lock_.get());
    }
protected:
    Atomic<label_t> occupied_;
    // Locked means it is occupied by one of the threads,
    // and one of the threads might be modifying the ownership.
    MutexLock lock_;
    // Ready means it is occupied by one of the threads,
    // and none of the threads is modifying the ownership.
    Mutex ready_;
};
The atomic variable, the mutex, and the mutex lock are implemented with OpenMP directives as follows:
template <typename T>
class Atomic
{
public:
    Atomic() {}
    Atomic(T&& value) : mutex_(value) {}
    T set(const T& value)
    {
        T oldValue;
        #pragma omp atomic capture
        {
            oldValue = mutex_;
            mutex_ = value;
        }
        return oldValue;
    }
    T get() const
    {
        T value;
        #pragma omp atomic read
        value = mutex_;
        return value;
    }
    operator T() const { return get(); }
    Atomic& operator=(const T& value)
    {
        set(value);
        return *this;
    }
    bool operator==(const T& value) { return get() == value; }
    bool operator!=(const T& value) { return get() != value; }
protected:
    volatile T mutex_;
};

class Mutex : public Atomic<bool>
{
public:
    Mutex() : Atomic<bool>(false) {}
};

class MutexLock : private Mutex
{
public:
    void lock()
    {
        bool oldMutex = false;
        while (oldMutex = set(true), oldMutex == true) {}
    }
    void unlock() { set(false); }
    bool try_lock()
    {
        bool oldMutex = set(true);
        return oldMutex == false;
    }
    using Mutex::operator bool;
    using Mutex::get;
};
I also used the lock provided by OpenMP as an alternative:
class OmpLock
{
public:
    OmpLock() { omp_init_lock(&lock_); }
    ~OmpLock() { omp_destroy_lock(&lock_); }
    void lock() { omp_set_lock(&lock_); }
    void unlock() { omp_unset_lock(&lock_); }
    int try_lock() { return omp_test_lock(&lock_); }
private:
    omp_lock_t lock_;
};
By the way, I use gcc 4.9.4 and OpenMP 4.0, on x86_64 GNU/Linux.
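For comparison, one way to get the three operations described above with only the OmpLock class already shown is to protect a plain owner field with a very short lock per call: claiming, querying, and releasing each hold the lock for only a few instructions, so callers are never blocked for long and there is no separate "ready" flag to keep consistent with the lock. Below is a minimal sketch along these lines; the class and member names are mine, not from the question, and it assumes the OmpLock above is in scope.
// Occupancy flag protected by a single OpenMP lock (OmpLock as defined above).
class OccupiedSlot
{
public:
    OccupiedSlot() : owner_(-1) {}
    // Try to claim the slot for thread #myThread; returns whether
    // #myThread now owns it. Blocks for at most a few instructions.
    bool setOccupiedBy(const int myThread)
    {
        lock_.lock();
        if (owner_ == -1)
            owner_ = myThread;
        bool mine = (owner_ == myThread);
        lock_.unlock();
        return mine;
    }
    bool isOccupiedByOthers(const int myThread)
    {
        lock_.lock();
        bool value = (owner_ != -1 && owner_ != myThread);
        lock_.unlock();
        return value;
    }
    bool isOccupiedBySelf(const int myThread)
    {
        lock_.lock();
        bool value = (owner_ == myThread);
        lock_.unlock();
        return value;
    }
    // Only the current owner actually releases; other callers are ignored.
    void clearOccupied(const int myThread)
    {
        lock_.lock();
        if (owner_ == myThread)
            owner_ = -1;
        lock_.unlock();
    }
private:
    int owner_;
    OmpLock lock_;
};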

Win32 Events -- I need to be notified when the event is cleared

Win 7, x64, Visual Studio Community 2015, C++
I have a thread which I need to pause/unpause or terminate, which I currently do with manual-reset "run" and "kill" events. The loop in the thread waits for 5000 ms on each pass.
My goal is to be able to stop waiting or kill the thread while in the middle of the wait.
The problem is that, the way I currently have it set up, I need to be notified when the "run" event goes to the non-signalled state, but there is no way to do this unless I create an event with inverted polarity, which seems like a kludge. In short, I need a level-sensitive signal, not an edge-sensitive one.
Maybe the event should just toggle the run state?
This is the thread function:
DWORD WINAPI DAQ::_fakeOutFn(void *param) {
    DAQ *pThis = (DAQ *)param;
    const DWORD timeout = 5000;
    bool running = false;
    HANDLE handles[] = { pThis->hFakeTaskRunningEvent, pThis->hFakeTaskKillEvent };
    do {
        DWORD result = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        switch (result) {
        case WAIT_OBJECT_0: // Run started or continued
            running = true;
            pThis->outputIndex++;
            if (pThis->outputIndex >= pThis->numSamples)
                pThis->outputIndex = 0;
            // Wait here
            // Not sure how to cancel this if the TaskRunningEvent goes false during the wait
            DWORD result2 = WaitForMultipleObjects(2, handles, FALSE, timeout);
            // Check result2, and 'continue' the loop if hFakeTaskRunningEvent went to NON-SIGNALLED state
            break;
        case WAIT_OBJECT_0 + 1: // Kill requested
            running = false;
            break;
        default:
            _ASSERT_EXPR(FALSE, L"Wait error");
            break;
        }
    } while (running);
    return 0;
}
Use separate events for the running and resume states. Then you can reset the resume event to pause, and signal the event to resume. The running event should be used to let the thread know when it has work to do, not when it should pause that work for a period of time.
DWORD WINAPI DAQ::_fakeOutFn(void *param)
{
    DAQ *pThis = (DAQ *)param;
    bool running = false;
    HANDLE handles[] = { pThis->hFakeTaskRunningEvent, pThis->hFakeTaskKillEvent };
    do
    {
        DWORD result = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
        switch (result)
        {
        case WAIT_OBJECT_0: // Run started
        {
            running = true;
            pThis->outputIndex++;
            if (pThis->outputIndex >= pThis->numSamples)
                pThis->outputIndex = 0;
            // check for pause here
            HANDLE handles2[] = { pThis->hFakeTaskResumeEvent, pThis->hFakeTaskKillEvent };
            DWORD result2 = WaitForMultipleObjects(2, handles2, FALSE, INFINITE);
            switch (result2)
            {
            case WAIT_OBJECT_0:
                break;
            case WAIT_OBJECT_0 + 1: // Kill requested
                running = false;
                break;
            default:
                _ASSERT_EXPR(FALSE, L"Wait error");
                break;
            }
            if (!running) break;
            // continue working...
            break;
        }
        case WAIT_OBJECT_0 + 1: // Kill requested
            running = false;
            break;
        default:
            _ASSERT_EXPR(FALSE, L"Wait error");
            break;
        }
    }
    while (running);
    return 0;
}
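From the controlling side, the pause/resume/kill signalling for the scheme above might look roughly like the following. It assumes all three events were created as manual-reset (CreateEvent with bManualReset = TRUE), with the resume event initially signalled; the function and handle names simply mirror the worker code.
#include <windows.h>

// Hypothetical controller-side sketch for driving the worker shown above.
void controlFakeTask(HANDLE hThread,
                     HANDLE hFakeTaskRunningEvent,
                     HANDLE hFakeTaskResumeEvent,
                     HANDLE hFakeTaskKillEvent)
{
    SetEvent(hFakeTaskRunningEvent);     // worker starts its loop

    ResetEvent(hFakeTaskResumeEvent);    // pause: worker blocks at the inner wait
    Sleep(1000);                         // ... stay paused for a while ...
    SetEvent(hFakeTaskResumeEvent);      // resume

    SetEvent(hFakeTaskKillEvent);        // request shutdown
    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);
}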
Here I would use not events but APCs queued to the thread with a 'command' (run, pause, exit). However, I would need to know more about the task to pick the best solution. Are you writing a service?
struct DAQ
{
    HANDLE _hEvent;
    enum STATE {
        running,
        paused,
        exit
    } _state;
    DAQ()
    {
        _hEvent = 0;
    }
    ~DAQ()
    {
        if (_hEvent)
        {
            ZwClose(_hEvent);
        }
    }
    NTSTATUS Init()
    {
        return ZwCreateEvent(&_hEvent, EVENT_ALL_ACCESS, 0, NotificationEvent, FALSE);
    }
    void Close()
    {
        if (HANDLE hEvent = InterlockedExchangePointer(&_hEvent, 0))
        {
            ZwClose(hEvent);
        }
    }
    DWORD fakeOutFn()
    {
        DbgPrint("running\n");
        _state = running;
        ZwSetEvent(_hEvent, 0);
        static LARGE_INTEGER Interval = { 0, MINLONG };
        do ; while (0 <= ZwDelayExecution(TRUE, &Interval) && _state != exit);
        DbgPrint("exit\n");
        return 0;
    }
    static DWORD WINAPI _fakeOutFn(PVOID pThis)
    {
        return ((DAQ*)pThis)->fakeOutFn();
    }
    void OnApc(STATE state)
    {
        _state = state;
        static PCSTR stateName[] = { "running", "paused" };
        if (state < RTL_NUMBER_OF(stateName))
        {
            DbgPrint("%s\n", stateName[state]);
        }
    }
    static void WINAPI _OnApc(PVOID pThis, PVOID state, PVOID)
    {
        ((DAQ*)pThis)->OnApc((STATE)(ULONG_PTR)state);
    }
};
void test()
{
    DAQ d;
    if (0 <= d.Init())
    {
        if (HANDLE hThread = CreateThread(0, 0, DAQ::_fakeOutFn, &d, 0, 0))
        {
            if (STATUS_SUCCESS == ZwWaitForSingleObject(d._hEvent, FALSE, 0)) // needed so we don't queue the APC too early; in the ServiceMain case this event is not needed
            {
                d.Close();
                int n = 5;
                do
                {
                    DAQ::STATE state;
                    if (--n)
                    {
                        state = (n & 1) != 0 ? DAQ::running : DAQ::paused;
                    }
                    else
                    {
                        state = DAQ::exit;
                    }
                    ZwQueueApcThread(hThread, DAQ::_OnApc, &d, (PVOID)state, 0);
                } while (n);
            }
            ZwWaitForSingleObject(hThread, FALSE, 0);
            ZwClose(hThread);
        }
    }
}

DoModal in critical section

In a parallel loop there is a critical section. I try to show an MFC dialog with DoModal inside the critical section; however, since the main thread is waiting for the parallel threads, there is no way for my dialog to show up and run. To break this dependency, I created a separate executable and run it as a process from within my parallel loop. The process shows the dialog, gets the information, and returns, and the other threads keep running.
However, my team leader insists there is a better way to do it, which I couldn't figure out after hours of searching :\
I tried a separate thread inside the parallel for. It didn't work.
I tried CWinThread (Google says it is a GUI thread :\ which didn't help).
I tried creating an exe and running it. That worked :)
int someCriticDialog()
{
    #pragma omp critical (showCriticDlg)
    {
        CMyDialog ccc;
        ccc.DoModal();
        /* However the code below works
        CreateProcess("someCriticDlg.exe", null, &hProcess);
        WaitForSingeObject(hProcess, INFINITE);
        */
    }
}

#pragma omp parallel
for (int i = 0; i < 5; i++)
    someCriticDialog();
Let's say this is the problem:
void trouble_maker()
{
    Sleep(10000); // application stops for 10 seconds
}
You can use PostMessage + PeekMessage + a modal dialog to wait for it to finish through a GUI window:
void PumpWaitingMessages()
{
    MSG msg;
    while (::PeekMessage(&msg, NULL, NULL, NULL, PM_NOREMOVE))
        if (!AfxGetThread()->PumpMessage())
            return;
}

BEGIN_MESSAGE_MAP(CMyDialog, CDialog)
    ON_COMMAND(2000, OnDoSomething)
    ON_COMMAND(IDCANCEL, OnCancel)
END_MESSAGE_MAP()

CMyDialog::CMyDialog(CWnd* par /*=NULL*/) : CDialog(IDD_DIALOG1, par)
{
    working = false;
    stop = false;
}

BOOL CMyDialog::OnInitDialog()
{
    BOOL res = CDialog::OnInitDialog();
    //call the function "OnDoSomething", but don't call it directly
    PostMessage(WM_COMMAND, 2000, 0);
    return res;
}

void CMyDialog::OnCancel()
{
    if (working)
    {
        stop = true;
    }
    else
    {
        CDialog::OnCancel();
    }
}

void CMyDialog::OnDoSomething()
{
    HANDLE h = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)&trouble_maker, NULL, 0, NULL);
    working = true;
    for (;;)
    {
        if (WAIT_TIMEOUT != WaitForSingleObject(h, 100)) break;
        PumpWaitingMessages();
        //update progress bar or something...
        if (stop)
        {
            //terminate if it's safe
            //BOOL res = TerminateThread(h, 0);
            //CloseHandle(h);
            //CDialog::OnCancel();
            //return;
        }
    }
    working = false;
    MessageBox("done");
}
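For completeness, the process-based workaround sketched in the commented-out lines of the question would look roughly like this with the real CreateProcess signature; someCriticDlg.exe is the hypothetical helper executable from the question.
#include <windows.h>

// Launch the dialog as a separate process (which has its own message pump)
// and block this worker thread until that process exits.
bool showCriticDialogViaProcess()
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmdLine[] = L"someCriticDlg.exe";   // hypothetical helper exe

    if (!CreateProcessW(NULL, cmdLine, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return false;

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return true;
}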

Implementing boost::barrier in C++11

I've been trying to rid a project of every Boost reference and switch to pure C++11.
At one point, worker threads are created which wait for a barrier to give the 'go' command, do the work (spread across the N threads), and synchronize when all of them finish. The basic idea is that the main loop gives the go order (boost::barrier .wait()) and waits for the result with the same function.
In a different project I had implemented a custom-made Barrier based on the Boost version, and everything worked perfectly. The implementation is as follows:
Barrier.h:
class Barrier {
public:
    Barrier(unsigned int n);
    void Wait(void);
private:
    std::mutex counterMutex;
    std::mutex waitMutex;
    unsigned int expectedN;
    unsigned int currentN;
};
Barrier.cpp
Barrier::Barrier(unsigned int n) {
    expectedN = n;
    currentN = expectedN;
}

void Barrier::Wait(void) {
    counterMutex.lock();
    // If we're the first thread, we want an extra lock at our disposal
    if (currentN == expectedN) {
        waitMutex.lock();
    }
    // Decrease thread counter
    --currentN;
    if (currentN == 0) {
        currentN = expectedN;
        waitMutex.unlock();
        currentN = expectedN;
        counterMutex.unlock();
    } else {
        counterMutex.unlock();
        waitMutex.lock();
        waitMutex.unlock();
    }
}
This code has been used on iOS and in Android's NDK without any problems, but when I try it in a Visual Studio 2013 project it seems that only the thread which locked a mutex may unlock it (assertion: unlock of unowned mutex).
Is there any non-spinning (blocking, like this one) barrier implementation that I can use with C++11? I've only been able to find barriers that use busy-waiting, which is something I would like to avoid (unless there is really no reason not to).
class Barrier {
public:
    explicit Barrier(std::size_t iCount) :
        mThreshold(iCount),
        mCount(iCount),
        mGeneration(0) {
    }

    void Wait() {
        std::unique_lock<std::mutex> lLock{mMutex};
        auto lGen = mGeneration;
        if (!--mCount) {
            mGeneration++;
            mCount = mThreshold;
            mCond.notify_all();
        } else {
            mCond.wait(lLock, [this, lGen] { return lGen != mGeneration; });
        }
    }

private:
    std::mutex mMutex;
    std::condition_variable mCond;
    std::size_t mThreshold;
    std::size_t mCount;
    std::size_t mGeneration;
};
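A quick usage sketch for the generation-counting barrier above: it is reusable, so N workers can call Wait() once per iteration. The worker count and loop bounds here are illustrative.
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    const std::size_t N = 4;
    Barrier barrier(N);               // the Barrier class defined above

    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < N; ++i) {
        workers.emplace_back([&barrier, i] {
            for (int iter = 0; iter < 3; ++iter) {
                // ... this thread's share of the work ...
                barrier.Wait();       // nobody proceeds until all N have arrived
                std::printf("thread %zu finished iteration %d\n", i, iter);
            }
        });
    }
    for (auto& t : workers) t.join();
}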
Use a std::condition_variable instead of a std::mutex to block all threads until the last one reaches the barrier.
class Barrier
{
private:
    std::mutex _mutex;
    std::condition_variable _cv;
    std::size_t _count;
public:
    explicit Barrier(std::size_t count) : _count(count) { }
    void Wait()
    {
        std::unique_lock<std::mutex> lock(_mutex);
        if (--_count == 0) {
            _cv.notify_all();
        } else {
            _cv.wait(lock, [this] { return _count == 0; });
        }
    }
};
Here's my version of the accepted answer above with auto-reset behavior for repeated use; this is achieved by counting down and up alternately.
/**
 * @brief Represents a CPU thread barrier
 * @note The barrier automatically resets after all threads are synced
 */
class Barrier
{
private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    size_t m_count;
    const size_t m_initial;
    enum State : unsigned char {
        Up, Down
    };
    State m_state;
public:
    explicit Barrier(std::size_t count) : m_count{ count }, m_initial{ count }, m_state{ State::Down } { }

    /// Blocks until all N threads reach here
    void Sync()
    {
        std::unique_lock<std::mutex> lock{ m_mutex };
        if (m_state == State::Down)
        {
            // Counting down the number of syncing threads
            if (--m_count == 0) {
                m_state = State::Up;
                m_cv.notify_all();
            }
            else {
                m_cv.wait(lock, [this] { return m_state == State::Up; });
            }
        }
        else // (m_state == State::Up)
        {
            // Counting back up for auto reset
            if (++m_count == m_initial) {
                m_state = State::Down;
                m_cv.notify_all();
            }
            else {
                m_cv.wait(lock, [this] { return m_state == State::Down; });
            }
        }
    }
};
It seems the answers above don't work when two barriers are placed too close together.
For example, each thread runs a while loop that looks like this:
while (true)
{
    threadBarrier->Synch();
    // do heavy computation
    threadBarrier->Synch();
    // small external calculations like timing, loop count, etc, ...
}
And here is a solution using the STL:
class ThreadBarrier
{
public:
    int m_threadCount = 0;
    int m_currentThreadCount = 0;
    std::mutex m_mutex;
    std::condition_variable m_cv;
public:
    inline ThreadBarrier(int threadCount)
    {
        m_threadCount = threadCount;
    };
public:
    inline void Synch()
    {
        bool wait = false;
        m_mutex.lock();
        m_currentThreadCount = (m_currentThreadCount + 1) % m_threadCount;
        wait = (m_currentThreadCount != 0);
        m_mutex.unlock();
        if (wait)
        {
            std::unique_lock<std::mutex> lk(m_mutex);
            m_cv.wait(lk);
        }
        else
        {
            m_cv.notify_all();
        }
    };
};
And the solution for Windows:
class ThreadBarrier
{
public:
    SYNCHRONIZATION_BARRIER m_barrier;
public:
    inline ThreadBarrier(int threadCount)
    {
        InitializeSynchronizationBarrier(
            &m_barrier,
            threadCount,
            8000);
    };
public:
    inline void Synch()
    {
        EnterSynchronizationBarrier(
            &m_barrier,
            0);
    };
};
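A short usage sketch for the Windows wrapper above (SYNCHRONIZATION_BARRIER requires Windows 8 / Server 2012 or later); a destructor calling DeleteSynchronizationBarrier(&m_barrier) would normally be added so the barrier is torn down with the wrapper. Thread count and loop bounds are illustrative.
#include <windows.h>
#include <thread>
#include <vector>

void runWorkers()
{
    const int threadCount = 4;
    ThreadBarrier barrier(threadCount);   // the Windows wrapper above

    std::vector<std::thread> workers;
    for (int i = 0; i < threadCount; ++i) {
        workers.emplace_back([&barrier] {
            for (int iter = 0; iter < 3; ++iter) {
                // ... per-thread work ...
                barrier.Synch();          // all threads meet here each iteration
            }
        });
    }
    for (auto& t : workers) t.join();
}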
