RxCpp: stack overflow when using the retry operator indefinitely on an observable

I'm trying to build an observable that is executed again whenever an error is detected. However, I noticed that when "on_error()" is called and the "retry" operator re-runs the observable, the current invocation is still on the stack; in other words, the previous instance is still alive.
I wrote a test to verify this behavior:
#include <string>
#include "rxcpp/rx.hpp"

class test_class
{
public:
    int a;
    test_class() {
        printf("Create Obj \n");
        a = 1;
    }
    ~test_class() {
        printf("Destroy Obj \n");
        a = 0;
    }
};

int main()
{
    // Create Observable request
    auto values = rxcpp::observable<>::create<std::string>(
        [&](rxcpp::subscriber<std::string> subscriber) {
            test_class test;
            while (subscriber.is_subscribed()) {
                std::exception_ptr eptr = std::current_exception();
                subscriber.on_error(eptr);
                int a;
                a = 2;
                subscriber.on_next("normal");
            }
        })
        .retry()
        .as_dynamic();

    values.
        subscribe(
            [](std::string v) {
                printf("OnNext: %s\n", v.c_str()); },
            [](std::exception_ptr ep) {
                printf("OnError: %s\n", rxcpp::util::what(ep).c_str()); },
            []() {
                printf("OnCompleted\n"); });
}
So, my output is:
Create Obj
Create Obj
Create Obj
Create Obj
...
I expected to see "Destroy Obj" in the output as well.
I also got a stack overflow exception.
My goal is to run an observable that is restarted whenever an error is triggered, but with the current instance destroyed first, in order to prevent the stack overflow exception.
Maybe there is another way to achieve this; could you help me?

I found a possible solution: I removed the loop inside the observable and the retry operator, and instead added a loop around the subscribe call.
I know it is not an "elegant" solution, but that is the idea I want to achieve; could you help me with this?
What would be the better way to do it using the RxCpp library?
#include <string>
#include "rxcpp/rx.hpp"

class test_class
{
public:
    int a;
    test_class() {
        printf("Create Obj \n");
        a = 1;
    }
    ~test_class() {
        printf("Destroy Obj \n");
        a = 0;
    }
};

int main()
{
    // Create Observable request
    auto values = rxcpp::observable<>::create<std::string>(
        [&](rxcpp::subscriber<std::string> subscriber) {
            test_class test;
            //while (subscriber.is_subscribed()) {
            std::exception_ptr eptr = std::current_exception();
            subscriber.on_error(eptr);
            int a;
            a = 2;
            subscriber.on_next("normal");
            //}
        });
        //.retry()
        //.as_dynamic();

    for (;;) {
        values.
            subscribe(
                [](std::string v) {
                    printf("OnNext: %s\n", v.c_str()); },
                [](std::exception_ptr ep) {
                    printf("OnError: %s\n", rxcpp::util::what(ep).c_str()); },
                []() {
                    printf("OnCompleted\n"); });
    }
}
Here is my output:
Create Obj
OnError: bad exception
Destroy Obj
Create Obj
OnError: bad exception
Destroy Obj
This time there is no stack overflow exception.
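As a rough illustration of that idea (a sketch of my own, not an RxCpp facility; subscribe_with_restart and max_attempts are names I made up), the restart loop can be wrapped in a small helper so every attempt is a fresh subscription and the previous one is fully torn down, its stack frame gone, before the next one starts:

#include <string>
#include "rxcpp/rx.hpp"

// Hypothetical helper: resubscribe to a cold observable after each
// OnError/OnCompleted, up to max_attempts times. Each call to subscribe()
// returns only after the source lambda has finished, so its locals
// (the test_class instance above) are destroyed before the next attempt.
template <class Observable>
void subscribe_with_restart(Observable source, int max_attempts)
{
    for (int attempt = 0; attempt < max_attempts; ++attempt) {
        source.subscribe(
            [](std::string v) { printf("OnNext: %s\n", v.c_str()); },
            [](std::exception_ptr ep) {
                printf("OnError: %s\n", rxcpp::util::what(ep).c_str());
            },
            []() { printf("OnCompleted\n"); });
    }
}

// Usage with the 'values' observable from the work-around above:
//   subscribe_with_restart(values, 100);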

Related

use a lambda to start a thread which is a class attribute

I would like to assign a name to a thread; the thread itself must do this. The thread is a member of the class foo.
I would like to start this thread with a lambda, but unfortunately I get this error message:
no match for call to '(std::thread) (foo::start()::<lambda()>)
Can someone explain to me where the problem is?
Previously I had created a temporary thread object and moved it into the member thread "manage", but then I could not give it a name.
class foo {
public:
    int start()
    {
        this->manage([this](){
            auto nto_errno = pthread_setname_np(manage.native_handle(), "manage"); // Give thread a human-readable name (non-portable!)
            while(1){
                printf("do work");
            }
        });
        return 1;
    }
private:
    int retVal;
    std::thread manage;
};
You passed the lambda in the wrong way: a std::thread cannot be invoked like a function, and after initialization the manage thread cannot be initialized again that way. You should create a new std::thread and assign it to the member.
The following compiles and indeed prints "manage".
#include <iostream>
#include <thread>
#include <pthread.h>

using namespace std;

class foo {
public:
    int start()
    {
        manage = std::thread([this]{
            auto nto_errno = pthread_setname_np(manage.native_handle(), "manage");
            char name[16];
            pthread_getname_np(pthread_self(), &name[0], sizeof(name));
            cout << name << endl;
        });
        manage.join();
        return 1;
    }
private:
    int retVal;
    std::thread manage;
};
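For completeness, a tiny usage sketch of my own (not part of the answer): construct foo and call start(); the thread names itself, prints the name, and is joined before start() returns.

int main()
{
    foo f;
    return f.start() == 1 ? 0 : 1;
}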

properly ending an infinite std::thread

I have a reusable class that starts an infinite thread. This thread can only be killed by calling a stop function that sets a kill-switch variable. Looking around, there is quite a bit of argument over volatile vs. atomic variables.
The following is my code:
program.cpp
int main()
{
    ThreadClass threadClass;
    threadClass.Start();
    Sleep(1000);
    threadClass.Stop();
    Sleep(50);
    threadClass.Stop();
}
ThreadClass.h
#pragma once
#include <atomic>
#include <thread>
class::ThreadClass
{
public:
ThreadClass(void);
~ThreadClass(void);
void Start();
void Stop();
private:
void myThread();
std::atomic<bool> runThread;
std::thread theThread;
};
ThreadClass.cpp
#include "ThreadClass.h"
ThreadClass::ThreadClass(void)
{
runThread = false;
}
ThreadClass::~ThreadClass(void)
{
}
void ThreadClass::Start()
{
runThread = true;
the_thread = std::thread(&mythread, this);
}
void ThreadClass::Stop()
{
if(runThread)
{
runThread = false;
if (the_thread.joinable())
{
the_thread.join();
}
}
}
void ThreadClass::mythread()
{
while(runThread)
{
//dostuff
Sleep(100); //or chrono
}
}
The code I am presenting here mirrors an issue our legacy code had. We call the stop function twice, which tries to join the thread twice. This results in an invalid handle exception. I have written the Stop() function to work around that issue, but my question is: why would the join fail the second time if the thread has completed and been joined? Is there a better way, programmatically, to check that the thread is valid before trying to join?
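A minimal standalone sketch (mine, not from the question) of why the second join throws: once a std::thread has been joined it is no longer joinable, and calling join() again raises std::system_error, so checking joinable() first, as the Stop() above does, is the usual guard.

#include <iostream>
#include <thread>

int main()
{
    std::thread t([] {});
    t.join();                                    // first join succeeds
    std::cout << std::boolalpha
              << "joinable after join: " << t.joinable() << '\n';  // prints false
    try {
        t.join();                                // no thread left to wait on
    } catch (const std::system_error& e) {
        std::cout << "second join failed: " << e.what() << '\n';
    }
}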

may the queue leak data? or is there a bug in my thread code?

I have a data structure:
struct {
    mutex m;
    condition_variable cv_p2c;
    queue<int> qi;
    bool finished;
} sdf_inst;
and I have a producer that generates 100 integers and inserts them into the queue qi after taking the lock.
void producer () {
    for(int i = 0 ; i < 100 ; i++ ) {
        {
            unique_lock<mutex> ulck(sdf_inst.m); //LOCK
            sdf_inst.qi.push(i);
            cout<<"adding "<<i<<endl<<flush;
        }
        sdf_inst.cv_p2c.notify_one();
    }
    unique_lock<mutex> ulck(sdf_inst.m); //LOCK
    sdf_inst.finished=true;
    sdf_inst.cv_p2c.notify_one();
}
After all the data has been inserted, it acquires the lock, sets the finished flag, and exits.
And I have a consumer:
void consumer () {
    while(true) {
        unique_lock<mutex> ulck(sdf_inst.m); //LOCK
        sdf_inst.cv_p2c.wait(ulck,[]{return sdf_inst.qi.empty()==false || sdf_inst.finished==true ; });
        print_all();
        if(sdf_inst.finished=true) return;
    }
}
It just acquires the lock, waits for a notification from the producer, and prints all the data currently in the queue qi with the print_all function below:
void print_all () {
    while(sdf_inst.qi.empty()==false) {
        int i = sdf_inst.qi.front();
        sdf_inst.qi.pop();
        cout<<"shared_i "<< i <<endl<<flush;
    }
    return;
}
I think it should print all 100 values, but sometimes it prints only part of them.
I have studied the code carefully and found no error in the synchronization, so could the lost data be caused by the queue leaking?
I found the cause of this problem, in consumer():
if(sdf_inst.finished=true) return;
should be
if ( sdf_inst.finished )
{
return;
}
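For reference, here is a self-contained version assembled from the snippets above with the comparison fix applied (my own assembly; the thread setup in main is an assumption, not from the original post). It builds with the standard library alone and prints all 100 values.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

struct {
    std::mutex m;
    std::condition_variable cv_p2c;
    std::queue<int> qi;
    bool finished = false;
} sdf_inst;

void print_all() {
    while (!sdf_inst.qi.empty()) {
        std::cout << "shared_i " << sdf_inst.qi.front() << std::endl;
        sdf_inst.qi.pop();
    }
}

void producer() {
    for (int i = 0; i < 100; i++) {
        {
            std::unique_lock<std::mutex> ulck(sdf_inst.m);  // LOCK
            sdf_inst.qi.push(i);
        }
        sdf_inst.cv_p2c.notify_one();
    }
    std::unique_lock<std::mutex> ulck(sdf_inst.m);          // LOCK
    sdf_inst.finished = true;
    sdf_inst.cv_p2c.notify_one();
}

void consumer() {
    while (true) {
        std::unique_lock<std::mutex> ulck(sdf_inst.m);      // LOCK
        sdf_inst.cv_p2c.wait(ulck, [] {
            return !sdf_inst.qi.empty() || sdf_inst.finished;
        });
        print_all();
        if (sdf_inst.finished) return;                      // comparison, not assignment
    }
}

int main() {
    std::thread c(consumer);
    std::thread p(producer);
    p.join();
    c.join();
}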

c++11 update pthread based code to std::thread or boost::thread

I have the following code that I would like to update to be more portable and C++11-friendly. However, I'm stuck on how to replace the pthread calls. I can use std::this_thread::get_id() to get the thread id, but I can't tell whether that thread is still alive.
pthread_t activeThread = 0;

pthread_t getCurrentThread() {
    return pthread_self();
}

bool isActiveThreadAlive() {
    if(activeThread == 0) {
        return false;
    }
    return pthread_kill(activeThread, 0) != ESRCH;
}
Potential std::thread version...
std::thread::id activeThread = std::thread::id();

std::thread::id getCurrentThread() {
    return std::this_thread::get_id();
}

bool isActiveThreadAlive() {
    if(activeThread == std::thread::id()) {
        return false;
    }
    return pthread_kill(activeThread, 0) != ESRCH; // <--- need replacement!!!
}
What the code really needs to do is know if the thread has died from an exception or some other error that caused it to terminate without releasing the object. As in the following...
std::unique_lock<std::mutex> uLock = getLock();
while (activeThread != 0) {
    if (threadWait.wait_for(uLock, std::chrono::seconds(30)) == std::cv_status::timeout) {
        if (!isActiveThreadAlive()) {
            activeThread = 0;
        }
    }
}
activeThread = getCurrentThread();
uLock.unlock();
try {
    // do stuff here.
}
catch (const std::exception&) {
}
uLock.lock();
activeThread = 0;
And before anyone asks, I do not have a guarantee of control over when, where, or how the threads are created. The threads that call these functions may come from anywhere.
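One hedged sketch of an alternative (my own; gMutex, ActiveThreadGuard and doStuff are names I made up, and it assumes the std::thread::id version of activeThread above): instead of probing whether the thread is still alive, clear the marker with an RAII guard. The guard's destructor runs during exception unwinding too, so a thread that dies from an exception still releases the slot, and the timeout-plus-liveness-probe loop becomes a plain predicate wait.

#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex gMutex;                        // assumed; the original getLock() is not shown
std::condition_variable threadWait;
std::thread::id activeThread;             // default-constructed id means "no active thread"

struct ActiveThreadGuard {
    ~ActiveThreadGuard() {
        // Runs on normal return and during exception unwinding alike.
        std::lock_guard<std::mutex> lock(gMutex);
        activeThread = std::thread::id();
        threadWait.notify_all();
    }
};

void doStuff() {
    std::unique_lock<std::mutex> uLock(gMutex);
    threadWait.wait(uLock, [] { return activeThread == std::thread::id(); });
    activeThread = std::this_thread::get_id();
    uLock.unlock();

    ActiveThreadGuard guard;              // releases the slot however the rest of this function exits
    // do stuff here, possibly throwing
}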

Implementing boost::barrier in C++11

I've been trying to rid a project of every boost reference and switch to pure C++11.
At one point, thread workers are created which wait for a barrier to give the 'go' command, do the work (spread across the N threads), and synchronize when all of them finish. The basic idea is that the main loop gives the go order (boost::barrier .wait()) and waits for the result with the same function.
I had implemented a custom-made Barrier based on the Boost version in a different project, and everything worked perfectly. The implementation is as follows:
Barrier.h:
class Barrier {
public:
    Barrier(unsigned int n);
    void Wait(void);
private:
    std::mutex counterMutex;
    std::mutex waitMutex;
    unsigned int expectedN;
    unsigned int currentN;
};
Barrier.cpp
Barrier::Barrier(unsigned int n) {
    expectedN = n;
    currentN = expectedN;
}

void Barrier::Wait(void) {
    counterMutex.lock();
    // If we're the first thread, we want an extra lock at our disposal
    if (currentN == expectedN) {
        waitMutex.lock();
    }
    // Decrease thread counter
    --currentN;
    if (currentN == 0) {
        currentN = expectedN;
        waitMutex.unlock();
        currentN = expectedN;
        counterMutex.unlock();
    } else {
        counterMutex.unlock();
        waitMutex.lock();
        waitMutex.unlock();
    }
}
This code has been used on iOS and in Android's NDK without any problems, but when trying it in a Visual Studio 2013 project it seems only the thread that locked a mutex can unlock it (assertion: unlock of unowned mutex).
Is there any non-spinning (blocking, such as this one) version of a barrier that works for C++11? I've only been able to find barriers that use busy-waiting, which I would like to avoid (unless there really is no reason to).
class Barrier {
public:
    explicit Barrier(std::size_t iCount) :
        mThreshold(iCount),
        mCount(iCount),
        mGeneration(0) {
    }

    void Wait() {
        std::unique_lock<std::mutex> lLock{mMutex};
        auto lGen = mGeneration;
        if (!--mCount) {
            mGeneration++;
            mCount = mThreshold;
            mCond.notify_all();
        } else {
            mCond.wait(lLock, [this, lGen] { return lGen != mGeneration; });
        }
    }

private:
    std::mutex mMutex;
    std::condition_variable mCond;
    std::size_t mThreshold;
    std::size_t mCount;
    std::size_t mGeneration;
};
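A short usage sketch of my own (not from the answer) showing the generation-based Barrier above being reused across iterations; it assumes the class definition above together with <mutex>, <condition_variable>, <thread>, <vector> and <cstdio> is available.

#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    Barrier barrier(4);                           // the generation-based Barrier above
    std::vector<std::thread> workers;
    for (int id = 0; id < 4; ++id) {
        workers.emplace_back([&barrier, id] {
            for (int iter = 0; iter < 3; ++iter) {
                barrier.Wait();                   // everyone starts the iteration together
                std::printf("thread %d finished iteration %d\n", id, iter);
                barrier.Wait();                   // everyone finishes it before moving on
            }
        });
    }
    for (auto& w : workers) {
        w.join();
    }
}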
Use a std::condition_variable instead of a std::mutex to block all threads until the last one reaches the barrier.
class Barrier
{
private:
    std::mutex _mutex;
    std::condition_variable _cv;
    std::size_t _count;
public:
    explicit Barrier(std::size_t count) : _count(count) { }
    void Wait()
    {
        std::unique_lock<std::mutex> lock(_mutex);
        if (--_count == 0) {
            _cv.notify_all();
        } else {
            _cv.wait(lock, [this] { return _count == 0; });
        }
    }
};
Here's my version of the accepted answer above with auto-reset behavior for repetitive use; this is achieved by counting up and down alternately.
/**
 * @brief Represents a CPU thread barrier
 * @note The barrier automatically resets after all threads are synced
 */
class Barrier
{
private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    size_t m_count;
    const size_t m_initial;

    enum State : unsigned char {
        Up, Down
    };
    State m_state;

public:
    explicit Barrier(std::size_t count) : m_count{ count }, m_initial{ count }, m_state{ State::Down } { }

    /// Blocks until all N threads reach here
    void Sync()
    {
        std::unique_lock<std::mutex> lock{ m_mutex };

        if (m_state == State::Down)
        {
            // Counting down the number of syncing threads
            if (--m_count == 0) {
                m_state = State::Up;
                m_cv.notify_all();
            }
            else {
                m_cv.wait(lock, [this] { return m_state == State::Up; });
            }
        }
        else // (m_state == State::Up)
        {
            // Counting back up for auto reset
            if (++m_count == m_initial) {
                m_state = State::Down;
                m_cv.notify_all();
            }
            else {
                m_cv.wait(lock, [this] { return m_state == State::Down; });
            }
        }
    }
};
It seems none of the above answers work when two barriers are placed too close together.
Example: each thread runs a while loop that looks like this:
while (true)
{
    threadBarrier->Synch();
    // do heavy computation
    threadBarrier->Synch();
    // small external calculations like timing, loop count, etc. ...
}
And here is the solution using STL:
class ThreadBarrier
{
public:
    int m_threadCount = 0;
    int m_currentThreadCount = 0;
    std::mutex m_mutex;
    std::condition_variable m_cv;

public:
    inline ThreadBarrier(int threadCount)
    {
        m_threadCount = threadCount;
    };

public:
    inline void Synch()
    {
        bool wait = false;
        m_mutex.lock();
        m_currentThreadCount = (m_currentThreadCount + 1) % m_threadCount;
        wait = (m_currentThreadCount != 0);
        m_mutex.unlock();

        if (wait)
        {
            std::unique_lock<std::mutex> lk(m_mutex);
            m_cv.wait(lk);
        }
        else
        {
            m_cv.notify_all();
        }
    };
};
And the solution for Windows:
class ThreadBarrier
{
public:
    SYNCHRONIZATION_BARRIER m_barrier;

public:
    inline ThreadBarrier(int threadCount)
    {
        InitializeSynchronizationBarrier(
            &m_barrier,
            threadCount,
            8000);
    };

public:
    inline void Synch()
    {
        EnterSynchronizationBarrier(
            &m_barrier,
            0);
    };
};
