C++ Qt: Redirect cout from a thread to emit a signal - multithreading

In a single thread, I have this beautiful class that redirects all cout output to a QTextEdit
#include <iostream>
#include <streambuf>
#include <string>
#include <QScrollBar>
#include "QTextEdit"
#include "QDateTime"
class ThreadLogStream : public std::basic_streambuf<char>
{
public:
ThreadLogStream(std::ostream &stream, QTextEdit* text_edit) : m_stream(stream), log_window(text_edit)
{
m_old_buf = stream.rdbuf();
stream.rdbuf(this);
}
~ThreadLogStream()
{
// output anything that is left
if (!m_string.empty())
{
log_window->append(m_string.c_str());
}
m_stream.rdbuf(m_old_buf);
}
protected:
virtual int_type overflow(int_type v)
{
if (v == '\n')
{
log_window->append(m_string.c_str());
m_string.erase(m_string.begin(), m_string.end());
}
else
m_string += v;
return v;
}
virtual std::streamsize xsputn(const char *p, std::streamsize n)
{
m_string.append(p, p + n);
long pos = 0;
while (pos != static_cast<long>(std::string::npos))
{
pos = static_cast<long>(m_string.find('\n'));
if (pos != static_cast<long>(std::string::npos))
{
std::string tmp(m_string.begin(), m_string.begin() + pos);
log_window->append(tmp.c_str());
m_string.erase(m_string.begin(), m_string.begin() + pos + 1);
}
}
return n;
}
private:
std::ostream &m_stream;
std::streambuf *m_old_buf;
std::string m_string;
QTextEdit* log_window;
};
However, this doesn't work if ANY other thread (e.g. a QThread) writes to cout. QTextEdit, like every QWidget, may only be touched from the main GUI thread, so a worker thread has to hand its text over through signals and slots (a queued connection) instead of calling the widget directly.
I would like to modify this class to emit a signal rather than append to the QTextEdit directly. This requires the class to be a QObject so it can declare signals. I tried to inherit from QObject in addition to std::basic_streambuf<char> and added the Q_OBJECT macro in the body, but it didn't compile.
Could you please help me to achieve this? What should I do to get this class to emit signals that I can connect to and that are thread safe?

For those who need the full "working" answer, here it's. I just copied it because #GraemeRock asked for it.
#ifndef ThreadLogStream_H
#define ThreadLogStream_H
#include <iostream>
#include <streambuf>
#include <string>
#include <QScrollBar>
#include "QTextEdit"
#include "QDateTime"
class ThreadLogStream : public QObject, public std::basic_streambuf<char>
{
Q_OBJECT
public:
ThreadLogStream(std::ostream &stream) : m_stream(stream)
{
m_old_buf = stream.rdbuf();
stream.rdbuf(this);
}
~ThreadLogStream()
{
// output anything that is left
if (!m_string.empty())
{
emit sendLogString(QString::fromStdString(m_string));
}
m_stream.rdbuf(m_old_buf);
}
protected:
virtual int_type overflow(int_type v)
{
if (v == '\n')
{
emit sendLogString(QString::fromStdString(m_string));
m_string.erase(m_string.begin(), m_string.end());
}
else
m_string += v;
return v;
}
virtual std::streamsize xsputn(const char *p, std::streamsize n)
{
m_string.append(p, p + n);
long pos = 0;
while (pos != static_cast<long>(std::string::npos))
{
pos = static_cast<long>(m_string.find('\n'));
if (pos != static_cast<long>(std::string::npos))
{
std::string tmp(m_string.begin(), m_string.begin() + pos);
emit sendLogString(QString::fromStdString(tmp));
m_string.erase(m_string.begin(), m_string.begin() + pos + 1);
}
}
return n;
}
private:
std::ostream &m_stream;
std::streambuf *m_old_buf;
std::string m_string;
signals:
void sendLogString(const QString& str);
};
#endif // ThreadLogStream_H

The derivation needs to happen QObject-first:
class LogStream : public QObject, std::basic_streambuf<char> {
Q_OBJECT
...
};
If the goal is to modify your code minimally, there's a simpler way. You don't need to inherit from QObject and emit signals at all if you know exactly which slot the data should end up in. All you need to do is invoke that slot in a thread-safe way:
QMetaObject::invokeMethod(log_window, "append", Qt::QueuedConnection,
Q_ARG(QString, tmp.c_str()));
To speed things up, you can cache the method so that it doesn't have to be looked up every time:
class LogStream ... {
QPointer<QTextEdit> m_logWindow;
QMetaMethod m_append;
LogStream(...) :
m_logWindow(...),
m_append(m_logWindow->metaObject()->method(
m_logWindow->metaObject()->indexOfSlot("append(QString)") )) {
...
}
};
You can then invoke it more efficiently:
m_append.invoke(m_logWindow, Qt::QueuedConnection, Q_ARG(QString, QString::fromStdString(tmp)));
Finally, whenever you hold pointers to objects whose lifetime is not under your control, it's helpful to use QPointer. A QPointer resets itself to null when the pointed-to QObject is destroyed, so at worst you dereference a null pointer rather than a dangling one.
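For completeness, a minimal hook-up sketch for the signal-based ThreadLogStream above (the coutRedirect and log_window names here are illustrative, not from the original post). The object is created in the GUI thread and its signal is connected to the text widget with a queued connection, so append() always runs in the GUI thread even when worker threads write to std::cout:
// somewhere in the main window's constructor, with log_window a QTextEdit*
ThreadLogStream* coutRedirect = new ThreadLogStream(std::cout);
connect(coutRedirect, &ThreadLogStream::sendLogString,
log_window, &QTextEdit::append,
Qt::QueuedConnection);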

Related

Wait for thread queue to be empty

I am new to C++ and multithreading applications. I want to process a long list of data (potentially several thousands of entries) by dividing its entries among a few threads. I have retrieved a ThreadPool class and a Queue class from the web (it is my first time tackling the subject). I construct the threads and populate the queue in the following way (definitions at the end of the post):
ThreadPool *pool = new ThreadPool(8);
std::vector<std::function<void(int)>> *caller =
new std::vector<std::function<void(int)>>;
for (size_t i = 0; i < Nentries; ++i)
{
caller->push_back(
[=](int j){func(entries[i], j);});
pool->PushTask((*caller)[i]);
}
delete pool;
The problem is that only about as many entries as there are threads get processed, as if the program did not wait for the queue to be empty before destroying the pool. Indeed, if I put
while (pool->GetWorkQueueLength()) {}
just before the pool destructor, the whole list is correctly processed. However, I am afraid I am consuming too many resources by using a while loop. Moreover, I have not found anyone doing anything like it, so I think this is the wrong approach and the classes I use have some error. Can anyone find the error (if present) or suggest another solution?
Here are the classes I use. I suppose the problem is in the implementation of the destructor, but I am not sure.
SynchronizeQueue.hh
#ifndef SYNCQUEUE_H
#define SYNCQUEUE_H
#include <list>
#include <mutex>
#include <condition_variable>
template<typename T>
class SynchronizedQueue
{
public:
SynchronizedQueue();
void Put(T const & data);
T Get();
size_t Size();
private:
SynchronizedQueue(SynchronizedQueue const &)=delete;
SynchronizedQueue & operator=(SynchronizedQueue const &)=delete;
std::list<T> queue;
std::mutex mut;
std::condition_variable condvar;
};
template<typename T>
SynchronizedQueue<T>::SynchronizedQueue()
{}
template<typename T>
void SynchronizedQueue<T>::Put(T const & data)
{
std::unique_lock<std::mutex> lck(mut);
queue.push_back(data);
condvar.notify_one();
}
template<typename T>
T SynchronizedQueue<T>::Get()
{
std::unique_lock<std::mutex> lck(mut);
while (queue.empty())
{
condvar.wait(lck);
}
T result = queue.front();
queue.pop_front();
return result;
}
template<typename T>
size_t SynchronizedQueue<T>::Size()
{
std::unique_lock<std::mutex> lck(mut);
return queue.size();
}
#endif
ThreadPool.hh
#ifndef THREADPOOL_H
#define THREADPOOL_H
#include "SynchronizedQueue.hh"
#include <atomic>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>
class ThreadPool
{
public:
ThreadPool(int nThreads = 0);
virtual ~ThreadPool();
void PushTask(std::function<void(int)> func);
size_t GetWorkQueueLength();
private:
void WorkerThread(int i);
std::atomic<bool> done;
unsigned int threadCount;
SynchronizedQueue<std::function<void(int)>> workQueue;
std::vector<std::thread> threads;
};
#endif
ThreadPool.cc
#include "ThreadPool.hh"
#include "SynchronizedQueue.hh"
void doNothing(int i)
{}
ThreadPool::ThreadPool(int nThreads)
: done(false)
{
if (nThreads <= 0)
{
threadCount = std::thread::hardware_concurrency();
}
else
{
threadCount = nThreads;
}
for (unsigned int i = 0; i < threadCount; ++i)
{
threads.push_back(std::thread(&ThreadPool::WorkerThread, this, i));
}
}
ThreadPool::~ThreadPool()
{
done = true;
for (unsigned int i = 0; i < threadCount; ++i)
{
PushTask(&doNothing);
}
for (auto& th : threads)
{
if (th.joinable())
{
th.join();
}
}
}
void ThreadPool::PushTask(std::function<void(int)> func)
{
workQueue.Put(func);
}
void ThreadPool::WorkerThread(int i)
{
while (!done)
{
workQueue.Get()(i);
}
}
size_t ThreadPool::GetWorkQueueLength()
{
return workQueue.Size();
}
You can push tasks saying "done" instead of setting "done" via atomic variable.
So that each thread will exit by itself when seeing "done" task, and no earlier. In destructor you only need to push these tasks and join threads. This is called "poison pill".
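A minimal sketch of that poison-pill variant, reusing the classes above (using an empty std::function as the pill is my choice; the answer does not prescribe a particular sentinel):
void ThreadPool::WorkerThread(int i)
{
while (true)
{
std::function<void(int)> task = workQueue.Get();
if (!task) // an empty function object is the poison pill
break; // exit only after all earlier tasks have been consumed
task(i);
}
}
ThreadPool::~ThreadPool()
{
// one pill per worker; every real task pushed before the pills is processed first
for (unsigned int i = 0; i < threadCount; ++i)
PushTask(std::function<void(int)>{});
for (auto& th : threads)
if (th.joinable())
th.join();
}
With this variant the done flag is no longer needed at all.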
Alternatively, if you insist on your current design with the done variable, you can wait on the same condition variable you already have:
std::unique_lock<std::mutex> lck(mut);
while (!queue.empty())
{
condvar.wait(lck);
}
But then you'll need to change your notify_one to notify_all, and this may be sub-optimal.
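For illustration, here is what such a wait could look like as a hypothetical WaitUntilEmpty() member of SynchronizedQueue (my naming, not part of the original classes; it would also need to be declared in the class). Note that for this waiter ever to wake up, the queue must also notify when items are consumed, e.g. by calling condvar.notify_all() at the end of Get():
template<typename T>
void SynchronizedQueue<T>::WaitUntilEmpty()
{
std::unique_lock<std::mutex> lck(mut);
while (!queue.empty())
{
condvar.wait(lck);
}
}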
I want to process a long list of data (potentially several thousands of entries) by dividing its entries among a few threads.
You can do that with parallel algorithms, like tbb::parallel_for:
#include <tbb/parallel_for.h>
#include <vector>
void func(int entry);
int main () {
std::vector<int> entries(1000000);
tbb::parallel_for(size_t{0}, entries.size(), [&](size_t i) { func(entries[i]); });
}
If you need sequential thread ids, you can do:
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <atomic>
#include <vector>
void func(int element, int thread_id);
template<class C>
inline auto make_range(C& c) -> decltype(tbb::blocked_range<decltype(c.begin())>(c.begin(), c.end())) {
return tbb::blocked_range<decltype(c.begin())>(c.begin(), c.end());
}
int main () {
std::vector<int> entries(1000000);
std::atomic<int> thread_counter{0};
tbb::parallel_for(make_range(entries), [&](auto sub_range) {
static thread_local int const thread_id = thread_counter.fetch_add(1, std::memory_order_relaxed);
for(auto& element : sub_range)
func(element, thread_id);
});
}
Alternatively, there is std::this_thread::get_id.
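As a tiny illustration of the std::this_thread::get_id alternative, the id can be hashed if a plain integer is wanted (the hashing and printf formatting are just one way to do it):
#include <thread>
#include <functional>
#include <cstdio>
void func(int element)
{
std::size_t tid = std::hash<std::thread::id>{}(std::this_thread::get_id());
std::printf("element %d processed on thread %zu\n", element, tid);
}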

Overridden virtual function not called from thread

I am writing a base class to manage threads. The idea is to allow the thread function to be overridden in child class while the base class manages thread life cycle. I ran into a strange behavior which I don't understand - it seems that the virtual function mechanism does not work when the call is made from a thread. To illustrate my problem, I reduced my code to the following:
#include <iostream>
#include <thread>
using namespace std;
struct B
{
thread t;
void thread_func_non_virt()
{
thread_func();
}
virtual void thread_func()
{
cout << "B::thread_func\n";
}
B(): t(thread(&B::thread_func_non_virt, this)) { }
void join() { t.join(); }
};
struct C : B
{
virtual void thread_func() override
{
cout << "C::thread_func\n";
}
};
int main()
{
C c; // output is "B::thread_func" but "C::thread_func" is expected
c.join();
c.thread_func_non_virt(); // output "C::thread_func" as expected
}
I tried with both Visual Studio 2017 and g++ 5.4 (Ubuntu 16) and found the behavior to be consistent. Can someone point out where I went wrong?
== UPDATE ==
Based on Igor's answer, I moved the thread creation out of the constructor into a separate method and called that method after construction, which gives the desired behavior.
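For reference, a minimal sketch of that two-phase approach (the start() method is my naming, not from the original post):
#include <iostream>
#include <thread>
struct B
{
std::thread t;
virtual ~B() = default;
virtual void thread_func() { std::cout << "B::thread_func\n"; }
void thread_func_non_virt() { thread_func(); }
// start the thread only after the most-derived object is fully constructed
void start() { t = std::thread(&B::thread_func_non_virt, this); }
void join() { t.join(); }
};
struct C : B
{
void thread_func() override { std::cout << "C::thread_func\n"; }
};
int main()
{
C c;
c.start(); // virtual dispatch now sees the complete C object
c.join(); // prints "C::thread_func"
}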
Your program exhibits undefined behavior: there is a data race on *this between thread_func and C's (implicitly defined) constructor. While B's constructor is still running, the dynamic type of the object is B, so a virtual call made from the new thread at that point resolves to B::thread_func. The variation below passes the derived this pointer down to the base constructor to illustrate the dispatch issue, although starting the thread only after construction is complete, as in your update, remains the safe fix:
#include <iostream>
#include <thread>
using namespace std;
struct B
{
thread t;
void thread_func_non_virt()
{
thread_func();
}
virtual void thread_func()
{
cout << "B::thread_func\n";
}
B(B*ptr): t(thread(&B::thread_func_non_virt, ptr))
{
}
void join() { t.join(); }
};
struct C:public B
{
C():B(this){}
virtual void thread_func() override
{
cout << "C::thread_func\n";
}
};
int main()
{
C c; // "C::thread_func" is expected as expected
c.join();
c.thread_func_non_virt(); // output "C::thread_func" as expected
}

How to prematurely kill std::async threads before they are finished *without* using a std::atomic_bool?

I have a function that takes a callback, and I use it to do work on 10 separate threads. However, it is often the case that not all of the work is needed. For example, if the desired result is obtained on the third thread, all work still running on the remaining threads should be stopped.
This answer here suggests that it is not possible unless you have the callback functions take an additional std::atomic_bool argument, that signals whether the function should terminate prematurely.
This solution does not work for me. The workers are spun up inside a base class, and the whole point of this base class is to abstract away details of multithreading. How can I do this? I am anticipating that I will have to ditch std::async for something more involved.
#include <iostream>
#include <future>
#include <vector>
class ABC{
public:
std::vector<std::future<int> > m_results;
ABC() {};
~ABC(){};
virtual int callback(int a) = 0;
void doStuffWithCallBack();
};
void ABC::doStuffWithCallBack(){
// start working
for(int i = 0; i < 10; ++i)
m_results.push_back(std::async(&ABC::callback, this, i));
// analyze results and cancel all threads when you get the 1
for(int j = 0; j < 10; ++j){
double foo = m_results[j].get();
if ( foo == 1){
break; // but threads continue running
}
}
std::cout << m_results[9].get() << " <- this shouldn't have ever been computed\n";
}
class Derived : public ABC {
public:
Derived() : ABC() {};
~Derived() {};
int callback(int a){
std::cout << a << "!\n";
if (a == 3)
return 1;
else
return 0;
};
};
int main(int argc, char **argv)
{
Derived myObj;
myObj.doStuffWithCallBack();
return 0;
}
I'll just say that this should probably not be a part of a 'normal' program, since it could leak resources and/or leave your program in an unstable state, but in the interest of science...
If you have control of the thread loop, and you don't mind using platform features, you could inject an exception into the thread. With POSIX you can use signals for this; on Windows you would have to use SetThreadContext(). Although the exception will generally unwind the stack and call destructors, your thread may be in a system call or some other non-exception-safe place when the exception occurs.
Disclaimer: I only have Linux at the moment, so I did not test the Windows code.
#include <cstdio>
#include <string>
#include <thread>
#include <chrono>
#if defined(_WIN32)
# define ITS_WINDOWS
# include <windows.h>
#else
# define ITS_POSIX
#endif
#if defined(ITS_POSIX)
#include <signal.h>
#endif
void throw_exception()
{
throw std::string();
}
void init_exceptions()
{
volatile int i = 0;
if (i)
throw_exception();
}
bool abort_thread(std::thread &t)
{
#if defined(ITS_WINDOWS)
bool bSuccess = false;
HANDLE h = t.native_handle();
if (INVALID_HANDLE_VALUE == h)
return false;
if (INFINITE == SuspendThread(h))
return false;
CONTEXT ctx;
ctx.ContextFlags = CONTEXT_CONTROL;
if (GetThreadContext(h, &ctx))
{
#if defined( _WIN64 )
ctx.Rip = (DWORD64)(DWORD_PTR)throw_exception;
#else
ctx.Eip = (DWORD)(DWORD_PTR)throw_exception;
#endif
bSuccess = SetThreadContext(h, &ctx) ? true : false;
}
ResumeThread(h);
return bSuccess;
#elif defined(ITS_POSIX)
return 0 == pthread_kill(t.native_handle(), SIGUSR2);
#endif
return false;
}
#if defined(ITS_POSIX)
void worker_thread_sig(int sig)
{
if(SIGUSR2 == sig)
throw std::string();
}
#endif
void init_threads()
{
#if defined(ITS_POSIX)
struct sigaction sa;
sigemptyset(&sa.sa_mask);
sa.sa_flags = 0;
sa.sa_handler = worker_thread_sig;
sigaction(SIGUSR2, &sa, 0);
#endif
}
class tracker
{
public:
tracker() { printf("tracker()\n"); }
~tracker() { printf("~tracker()\n"); }
};
int main(int argc, char *argv[])
{
init_threads();
printf("main: starting thread...\n");
std::thread t([]()
{
try
{
tracker a;
init_exceptions();
printf("thread: started...\n");
std::this_thread::sleep_for(std::chrono::minutes(1000));
printf("thread: stopping...\n");
}
catch(std::string s)
{
printf("thread: exception caught...\n");
}
});
printf("main: sleeping...\n");
std::this_thread::sleep_for(std::chrono::seconds(2));
printf("main: aborting...\n");
abort_thread(t);
printf("main: joining...\n");
t.join();
printf("main: exiting...\n");
return 0;
}
Output:
main: starting thread...
main: sleeping...
tracker()
thread: started...
main: aborting...
main: joining...
~tracker()
thread: exception caught...
main: exiting...

thread safe boost intrusive list is slow

I wrapped a boost intrusive list with a mutex to make it thread safe, to be used as a producer/consumer queue.
But on Windows (MSVC 14) it's really slow: after profiling, 95% of the time is spent idle, mostly in the push() and wait_and_pop() methods.
I only have 1 producer and 2 producer/consumer threads.
Any suggestions to make this faster?
#ifndef INTRUSIVE_CONCURRENT_QUEUE_HPP
#define INTRUSIVE_CONCURRENT_QUEUE_HPP
#include <thread>
#include <mutex>
#include <condition_variable>
#include <boost/intrusive/list.hpp>
using namespace boost::intrusive;
template<typename Data>
class intrusive_concurrent_queue
{
protected:
list<Data, constant_time_size<false> > the_queue;
mutable std::mutex the_mutex;
std::condition_variable the_condition_variable;
public:
void push(Data * data)
{
std::unique_lock<std::mutex> lock(the_mutex);
the_queue.push_back(*data);
lock.unlock();
the_condition_variable.notify_one();
}
bool empty() const
{
std::unique_lock<std::mutex> lock(the_mutex);
return the_queue.empty();
}
size_t unsafe_size() const
{
return the_queue.size();
}
size_t size() const
{
std::unique_lock<std::mutex> lock(the_mutex);
return the_queue.size();
}
Data* try_pop()
{
Data* popped_ptr;
std::unique_lock<std::mutex> lock(the_mutex);
if(the_queue.empty())
{
return nullptr;
}
popped_ptr= & the_queue.front();
the_queue.pop_front();
return popped_ptr;
}
Data* wait_and_pop(const bool & external_stop = false)
{
Data* popped_ptr;
std::unique_lock<std::mutex> lock(the_mutex);
the_condition_variable.wait(lock, [&]{ return !the_queue.empty() || external_stop; });
if (external_stop){
return nullptr;
}
popped_ptr = &the_queue.front();
the_queue.pop_front();
return popped_ptr;
}
intrusive_concurrent_queue<Data> & operator=(intrusive_concurrent_queue<Data>&& origin)
{
this->the_queue = std::move(origin.the_queue);
return *this;
}
};
#endif // !INTRUSIVE_CONCURRENT_QUEUE_HPP
Try removing the per-element lock from the methods and instead lock the whole data structure once around a batch of operations, so you pay for one lock/unlock (and one condition-variable wakeup) per batch rather than per element; a sketch of that idea follows.
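One way to read that advice is to amortize the lock over many elements. A minimal sketch of that idea, using std::deque for brevity rather than the intrusive list from the question (the class and method names here are mine):
#include <deque>
#include <mutex>
#include <condition_variable>
template <typename Data>
class batched_queue
{
std::deque<Data*> the_queue;
std::mutex the_mutex;
std::condition_variable the_condition_variable;
public:
// push a whole batch under one lock and wake consumers once
void push_batch(std::deque<Data*>& batch)
{
{
std::lock_guard<std::mutex> lock(the_mutex);
for (Data* d : batch)
the_queue.push_back(d);
}
the_condition_variable.notify_all();
batch.clear();
}
// swap out everything that is currently queued in one shot
std::deque<Data*> pop_all()
{
std::unique_lock<std::mutex> lock(the_mutex);
the_condition_variable.wait(lock, [&]{ return !the_queue.empty(); });
std::deque<Data*> out;
out.swap(the_queue);
return out;
}
};
Consumers then iterate over the returned batch without touching the mutex again, which cuts the number of lock acquisitions from one per element to one per batch.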

Why doesn't this boost thread creation compile?

I wrote some multithreading code using Boost thread library. I initialized two threads in the constructor using the placeholder _1 as the argument required by member function fillSample(int num). But this doesn't compile in my Visual Studio 2010. Following is the code:
#include<boost/thread.hpp>
#include<boost/thread/condition.hpp>
#include<boost/bind/placeholders.hpp>
#define SAMPLING_FREQ 250
#define MAX_NUM_SAMPLES 5*60*SAMPLING_FREQ
#define BUFFER_SIZE 8
class ECG
{
private:
int sample[BUFFER_SIZE];
int sampleIdx;
int readIdx, writeIdx;
boost::thread m_ThreadWrite;
boost::thread m_ThreadRead;
boost::mutex m_Mutex;
boost::condition bufferNotFull, bufferNotEmpty;
public:
ECG();
void fillSample(int num); //get sample from the data stream
void processSample(); //process ECG sample, return the last processed
};
ECG::ECG() : sampleIdx(0), readIdx(0), writeIdx(0)
{
m_ThreadWrite=boost::thread((boost::bind(&ECG::fillSample, this, _1)));
m_ThreadRead=boost::thread((boost::bind(&ECG::processSample, this)));
}
void ECG::fillSample(int num)
{
boost::mutex::scoped_lock lock(m_Mutex);
while( (writeIdx-readIdx)%BUFFER_SIZE == BUFFER_SIZE-1 )
{
bufferNotFull.wait(lock);
}
sample[writeIdx] = num;
writeIdx = (writeIdx+1) % BUFFER_SIZE;
bufferNotEmpty.notify_one();
}
void ECG::processSample()
{
boost::mutex::scoped_lock lock(m_Mutex);
while( readIdx == writeIdx )
{
bufferNotEmpty.wait(lock);
}
sample[readIdx] *= 2;
readIdx = (readIdx+1) % BUFFER_SIZE;
++sampleIdx;
bufferNotFull.notify_one();
}
I already included the placeholders.hpp header file but it still doesn't compile. If I replace the _1 with 0, then it will work. But this will initialize the thread function with 0, which is not what I want. Any ideas on how to make this work?
Move the thread creation into the constructor's initialization list:
m_ThreadWrite(boost::bind(&ECG::fillSample, this, _1)), ...
A boost::thread object is not copyable, and your compiler (Visual Studio 2010) does not support its move constructor, so the assignment in the constructor body does not compile. A sketch of the resulting constructor is below.
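For reference, a sketch of the constructor the answer describes. Note that a thread's entry point must be callable with no arguments, so the _1 placeholder still has to be filled somehow before the thread can run; here it is replaced by a small wrapper loop. writeLoop and acquireNextSample are hypothetical names, not from the original code, and this assumes the mutex and condition variables are declared before the thread members so they already exist when the threads start:
ECG::ECG() : sampleIdx(0), readIdx(0), writeIdx(0),
m_ThreadWrite(boost::bind(&ECG::writeLoop, this)),
m_ThreadRead(boost::bind(&ECG::processSample, this))
{
}
void ECG::writeLoop()
{
for (int i = 0; i < MAX_NUM_SAMPLES; ++i)
fillSample(acquireNextSample()); // acquireNextSample() is a hypothetical data source
}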
