Cannot handle QNetworkAccessManager::finished signal in multithreading

I want to serialize network requests using QNetworkAccessManager. To achieve this, I wrote the following class:
#ifndef CLIENT_H
#define CLIENT_H
#include <queue>
#include <mutex>
#include <condition_variable>
#include <QtNetwork/QNetworkAccessManager>
#include <QtNetwork/QNetworkReply>
#include <QtNetwork/QNetworkRequest>
class Client : public QObject
{
Q_OBJECT
struct RequestRecord
{
RequestRecord(QString u, int o):url(u),operation(o){}
QString url;
int operation;
};
std::mutex mutex;
std::queue<RequestRecord*> requests;
QNetworkAccessManager *manager;
bool running;
std::condition_variable cv;
public:
Client():manager(nullptr){}
~Client()
{
if(manager)
delete manager;
}
void request_cppreference()
{
std::unique_lock<std::mutex> lock(mutex);
requests.push(new RequestRecord("http://en.cppreference.com",0));
cv.notify_one();
}
void request_qt()
{
std::unique_lock<std::mutex> lock(mutex);
requests.push(new RequestRecord("http://doc.qt.io/qt-5/qnetworkaccessmanager.html",1));
cv.notify_one();
}
void process()
{
manager = new QNetworkAccessManager;
connect(manager,&QNetworkAccessManager::finished,[this](QNetworkReply *reply)
{
std::unique_lock<std::mutex> lock(mutex);
RequestRecord *front = requests.front();
requests.pop();
delete front;
reply->deleteLater();
});
running = true;
while (running)
{
std::unique_lock<std::mutex> lock(mutex);
cv.wait(lock);
RequestRecord *front = requests.front();
manager->get(QNetworkRequest(QUrl(front->url)));
}
}
};
#endif // CLIENT_H
As one can see, there are two methods for requesting data from the network, and the method process(), which should be called in a separate thread.
I'm using this class as follows:
Client *client = new Client;
std::thread thr([client](){
client->process();
});
std::this_thread::sleep_for(std::chrono::seconds(1));
client->request_qt();
std::this_thread::sleep_for(std::chrono::milliseconds(1));
client->request_cppreference();
This example illustrates two consecutive requests to the network from one thread and the processing of these requests in another. Everything works fine, except that my lambda is never called. The requests are sent (I checked with Wireshark), but I cannot get the replies. What is the cause?

As @thuga supposed, the problem was the event loop. My thread is always waiting on the condition variable and thus cannot process events. A little hack solves the problem:
void process()
{
manager = new QNetworkAccessManager;
connect(manager,&QNetworkAccessManager::finished,[this](QNetworkReply *reply)
{
std::unique_lock<std::mutex> lock(mutex);
RequestRecord *front = requests.front();
requests.pop();
delete front;
qDebug() << reply->readAll();
processed = true;
reply->deleteLater();
});
running = true;
while (running)
{
{
std::unique_lock<std::mutex> lock(mutex);
cv.wait(lock);
if(requests.size() > 0 && processed)
{
RequestRecord *front = requests.front();
manager->get(QNetworkRequest(QUrl(front->url)));
processed = false;
QtConcurrent::run([this]()
{
while (running)
{
cv.notify_one();
msleep(10);
}
});
}
}
QCoreApplication::processEvents();
}
}
};
It's obviously not beautiful, since it uses 3 threads instead of 2, but it is Qt, with this perfect phrase in its documentation:
QUrl QNetworkReply::url() const — Returns the URL of the content downloaded or uploaded. Note that the URL may be different from that of the original request. If the QNetworkRequest::FollowRedirectsAttribute was set in the request, then this function returns the current URL that the network API is accessing, i.e. the URL emitted in the QNetworkReply::redirected signal.
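A cleaner alternative (a sketch, not from the question) is to let the worker thread run its ordinary Qt event loop and hand requests to it with a queued invocation, so QNetworkAccessManager::finished is delivered without any condition variable or manual processEvents() call. The NetWorker class and its enqueue()/startNext() methods are hypothetical names, and the functor overload of QMetaObject::invokeMethod assumed here requires Qt 5.10 or later:
#include <QObject>
#include <QQueue>
#include <QDebug>
#include <QtNetwork/QNetworkAccessManager>
#include <QtNetwork/QNetworkReply>
#include <QtNetwork/QNetworkRequest>
class NetWorker : public QObject
{
public:
    // May be called from any thread; the queued invocation makes the body
    // run in the thread this NetWorker lives in.
    void enqueue(const QString &url)
    {
        QMetaObject::invokeMethod(this, [this, url] {
            pending.enqueue(url);
            startNext();
        }, Qt::QueuedConnection);
    }
private:
    void startNext() // always runs in the worker thread
    {
        if (busy || pending.isEmpty())
            return;
        if (!manager) {
            manager = new QNetworkAccessManager(this);
            connect(manager, &QNetworkAccessManager::finished, this,
                    [this](QNetworkReply *reply) {
                qDebug() << reply->url() << reply->readAll().size() << "bytes";
                reply->deleteLater();
                busy = false;
                startNext(); // the reply drives the queue, so requests stay serialized
            });
        }
        busy = true;
        manager->get(QNetworkRequest(QUrl(pending.dequeue())));
    }
    QQueue<QString> pending;
    QNetworkAccessManager *manager = nullptr;
    bool busy = false;
};
Usage is then a plain moveToThread: create a QThread, move the NetWorker onto it, start the thread (its default run() spins an event loop), and call enqueue() from any thread; a QCoreApplication must exist for the event loops and the queued calls to work.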

Related

Regarding thread communication to post a task back from a child thread to the main thread

I have a requirement to post a task from a child thread back to the main thread. I am creating the child thread from the main thread and posting tasks to it. But after receiving a few callbacks from a common API, I need to execute certain tasks on the main thread only, like proxy creation, etc. So in such a scenario I have to communicate with the main thread and post that particular task onto it. I have designed LoopingThread.cpp as shown below and communicate with the main thread to post tasks onto it:
LoopingThread.cpp:
#include <iostream>
#include "loopingThread.hpp"
using namespace std;
LoopingThread::LoopingThread() : thread(nullptr), scheduledCallbacks() {
}
LoopingThread::~LoopingThread() {
if (this->thread) {
delete this->thread;
}
}
void LoopingThread::runCallbacks() {
this->callbacksMutex.lock();
if (this->scheduledCallbacks.size() > 0) {
std::thread::id threadID = std::this_thread::get_id();
std::cout<<"inside runCallback()threadId:"<<threadID<<std::endl;
// This is to allow for new callbacks to be scheduled from within a callback
std::vector<std::function<void()>> currentCallbacks = std::move(this->scheduledCallbacks);
this->scheduledCallbacks.clear();
this->callbacksMutex.unlock();
for (auto callback : currentCallbacks) {
//callback();
//this->callback();
int id = 1;
this->shared_func(id);
}
} else {
this->callbacksMutex.unlock();
}
}
void LoopingThread::shared_func(int id)
{
std::thread::id run_threadID = std::this_thread::get_id();
std::cout<<"shared_func: "<<run_threadID<<std::endl;
this->callbacksMutex.lock();
if (id == 0)
std::cout<<"calling from main,id: "<<id<<std::endl;
else if (id == 1)
std::cout<<"calling from child,id: "<<id<<std::endl;
this->callbacksMutex.unlock();
}
void LoopingThread::run() {
std::thread::id run_threadID = std::this_thread::get_id();
std::cout<<"Child_run_threadID: "<<run_threadID<<std::endl;
for (;;) {
this->runCallbacks();
// Run the tick
if (!this->tick()) {
std::cout<<"Run the tick"<<std::endl;
break;
}
}
// Run pending callbacks, this might result in an infinite loop if there are more
// callbacks scheduled from within scheduled callbacks
this->callbacksMutex.lock();
while (this->scheduledCallbacks.size() > 0) {
std::cout<<"inside scheduledCallbacks.size() > 0"<<std::endl;
this->callbacksMutex.unlock();
this->runCallbacks();
this->callbacksMutex.lock();
}
this->callbacksMutex.unlock();
}
void LoopingThread::scheduleCallback(std::function<void()> callback) {
std::cout<<"inside schedulecallback"<<std::endl;
this->callbacksMutex.lock();
this->scheduledCallbacks.push_back(callback);
this->callbacksMutex.unlock();
}
void LoopingThread::start() {
if (!this->thread) {
this->thread = new std::thread(&LoopingThread::run, this);
//std::thread::id threadID = std::this_thread::get_id();
//std::cout<<"creating thread: "<<threadID<<std::endl;
}
}
void LoopingThread::join() {
if (this->thread && this->thread->joinable()) {
this->thread->join();
std::cout<<"joining thread"<<std::endl;
}
}
main.cpp:
#include <thread>
#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include "loopingThread.hpp"
using namespace std;
std::mutex stdoutMutex;
// Example usage of LoopingThread with a classic MainThread:
class MainThread : public LoopingThread {
private:
MainThread();
public:
virtual ~MainThread();
static MainThread& getInstance();
virtual bool tick();
};
MainThread::MainThread() {}
MainThread::~MainThread() {}
MainThread& MainThread::getInstance() {
// Provide a global instance
static MainThread instance;
return instance;
}
bool MainThread::tick() {
// std::cout<<"main thread:"<<threadID<<std::endl;
std::this_thread::sleep_for(std::chrono::seconds(1));
stdoutMutex.lock();
std::cout << "tick" << std::endl;
stdoutMutex.unlock();
// Return false to stop this thread
return true;
}
void doLongAsyncTask() {
std::thread longTask([] () {
stdoutMutex.lock();
std::cout << "Beginning long task..." <<std::endl;
stdoutMutex.unlock();
std::this_thread::sleep_for(std::chrono::seconds(2));
stdoutMutex.lock();
std::cout << "Long task finished!" << std::endl;
stdoutMutex.unlock();
MainThread::getInstance().scheduleCallback([] () {
stdoutMutex.lock();
std::cout << "This is called within the main thread!" << std::endl <<
"No need to worry about thread safety or " <<
"race conditions here" << std::endl;
stdoutMutex.unlock();
});
});
longTask.detach();
}
int main() {
doLongAsyncTask();
MainThread::getInstance().start();
MainThread::getInstance().join();
MainThread::getInstance().run();
}
Now suppose the child thread receives a task of creating a proxy; then it needs to post that task back to the main thread. How can this scenario be achieved?
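One way to do this, reusing only the scheduleCallback() API already shown above: since MainThread is itself a LoopingThread, the child thread can hand work back by scheduling a callback on the singleton; the callback is then executed from run(), i.e. on whichever thread called run(). The helper function and the proxy-creation body below are hypothetical placeholders:
#include <iostream>
// Assumes MainThread (from main.cpp above) is visible here, e.g. via a header.
// Called on the child thread when it decides the proxy must be created on the
// main thread.
void requestProxyCreation() {
    MainThread::getInstance().scheduleCallback([] () {
        // Executed from MainThread::run(), between ticks - that is, on the
        // main thread if main() calls run() directly, as it does above.
        std::cout << "Creating proxy on the main thread" << std::endl;
    });
}
Note that for the scheduled lambdas to actually execute, runCallbacks() has to call callback() on each entry; in the listing above those calls are commented out in favour of shared_func(id), so the posted tasks would never run as written.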

thread sync using mutex and condition variable

I'm trying to implement a multi-threaded job with a producer and a consumer. Basically, what I want is that when the consumer finishes the data, it notifies the producer so that the producer provides new data.
The tricky part is that in my current implementation, the producer and the consumer both notify each other and wait for each other; I don't know how to implement this part correctly.
For example, see the code below:
mutex m;
condition_variable cv;
vector<int> Q; // this is the queue the consumer will consume
vector<int> Q_buf; // this is a buffer Q into which producer will fill new data directly
// consumer
void consume() {
while (1) {
if (Q.size() == 0) { // when consumer finishes data
unique_lock<mutex> lk(m);
// how to notify producer to fill up the Q?
...
cv.wait(lk);
}
// for-loop to process the elems in Q
...
}
}
// producer
void produce() {
while (1) {
// for-loop to fill up Q_buf
...
// once Q_buf is fully filled, wait until consumer asks to give it a full Q
unique_lock<mutex> lk(m);
cv.wait(lk);
Q.swap(Q_buf); // replace the empty Q with the full Q_buf
cv.notify_one();
}
}
I'm not sure the above code using a mutex and a condition_variable is the right way to implement my idea; please give me some advice!
The code incorrectly assumes that vector<int>::size() and vector<int>::swap() are atomic. They are not.
Also, spurious wakeups must be handled by a while loop (or another cv::wait overload).
Fixes:
mutex m;
condition_variable cv;
vector<int> Q;
// consumer
void consume() {
while(1) {
// Get the new elements.
vector<int> new_elements;
{
unique_lock<mutex> lk(m);
while(Q.empty())
cv.wait(lk);
new_elements.swap(Q);
}
// for-loop to process the elems in new_elements
}
}
// producer
void produce() {
while(1) {
vector<int> new_elements;
// for-loop to fill up new_elements
// publish new_elements
{
unique_lock<mutex> lk(m);
Q.insert(Q.end(), new_elements.begin(), new_elements.end());
cv.notify_one();
}
}
}
Maybe this is close to what you want to achieve. I used two condition variables to let producers and consumers notify each other, and introduced a variable denoting whose turn it is now:
#include <ctime>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
template<typename T>
class ReaderWriter {
private:
std::vector<std::thread> readers;
std::vector<std::thread> writers;
std::condition_variable readerCv, writerCv;
std::queue<T> data;
std::mutex readerMutex, writerMutex;
size_t noReaders, noWriters;
enum class Turn { WRITER_TURN, READER_TURN };
Turn turn;
void reader() {
while (1) {
{
std::unique_lock<std::mutex> lk(readerMutex);
while (turn != Turn::READER_TURN) {
readerCv.wait(lk);
}
std::cout << "Thread : " << std::this_thread::get_id() << " consumed " << data.front() << std::endl;
data.pop();
if (data.empty()) {
turn = Turn::WRITER_TURN;
writerCv.notify_one();
}
}
}
}
void writer() {
while (1) {
{
std::unique_lock<std::mutex> lk(writerMutex);
while (turn != Turn::WRITER_TURN) {
writerCv.wait(lk);
}
srand(time(NULL));
int random_number = std::rand();
data.push(random_number);
std::cout << "Thread : " << std::this_thread::get_id() << " produced " << random_number << std::endl;
turn = Turn::READER_TURN;
}
readerCv.notify_one();
}
}
public:
ReaderWriter(size_t noReadersArg, size_t noWritersArg) : noReaders(noReadersArg), noWriters(noWritersArg), turn(ReaderWriter::Turn::WRITER_TURN) {
}
void run() {
int noReadersArg = noReaders, noWritersArg = noWriters;
while (noReadersArg--) {
readers.emplace_back(&ReaderWriter::reader, this);
}
while (noWritersArg--) {
writers.emplace_back(&ReaderWriter::writer, this);
}
}
~ReaderWriter() {
for (auto& r : readers) {
r.join();
}
for (auto& w : writers) {
w.join();
}
}
};
int main() {
ReaderWriter<int> rw(5, 5);
rw.run();
}
Here's a code snippet. Since the worker threads are already synchronized, the requirement of two buffers is ruled out, so a simple queue is used to simulate the scenario:
#include "conio.h"
#include <iostream>
#include <thread>
#include <mutex>
#include <queue>
#include <atomic>
#include <condition_variable>
using namespace std;
enum state_t{ READ = 0, WRITE = 1 };
mutex mu;
condition_variable cv;
atomic<bool> running;
queue<int> buffer;
atomic<state_t> state;
void generate_test_data()
{
const int times = 5;
static int data = 0;
for (int i = 0; i < times; i++) {
data = (data + 1) % 100;
buffer.push(data);
}
}
void ProducerThread() {
while (running) {
unique_lock<mutex> lock(mu);
cv.wait(lock, []() { return !running || state == WRITE; });
if (!running) return;
generate_test_data(); //producing here
lock.unlock();
//notify consumer to start consuming
state = READ;
cv.notify_one();
}
}
void ConsumerThread() {
while (running) {
unique_lock<mutex> lock(mu);
cv.wait(lock, []() { return !running || state == READ; });
if (!running) return;
while (!buffer.empty()) {
auto data = buffer.front(); //consuming here
buffer.pop();
cout << data << " \n";
}
//notify producer to start producing
if (buffer.empty()) {
state = WRITE;
cv.notify_one();
}
}
}
int main(){
running = true;
thread producer = thread([]() { ProducerThread(); });
thread consumer = thread([]() { ConsumerThread(); });
//simulating gui thread
while (!getch()){
}
running = false;
producer.join();
consumer.join();
}
Not a complete answer, but I think two condition variables could be helpful: one named buffer_empty that the producer thread will wait on, and another named buffer_filled that the consumer thread will wait on. As for the number of mutexes, how to loop, and so on, I cannot comment, since I'm not sure about the details myself.
1. Accesses to shared variables should only be done while holding the mutex that protects them.
2. condition_variable::wait should check a condition.
3. The condition should be a shared variable protected by the mutex that you pass to condition_variable::wait.
4. The way to check the condition is to wrap the call to wait in a while loop or use the 2-argument overload of wait (which is equivalent to the while-loop version).
Note: These rules aren't strictly necessary if you truly understand what the hardware is doing. However, these problems get complicated quickly even with simple data structures, and it will be easier to prove that your algorithm is working correctly if you follow them.
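To make rules 2-4 concrete, here is a minimal, self-contained sketch of the two equivalent waiting forms (the names m, cv and ready are placeholders, not taken from the question):
#include <condition_variable>
#include <mutex>
#include <thread>
std::mutex m;
std::condition_variable cv;
bool ready = false; // the shared condition, protected by m
int main() {
    std::thread t([] {
        { std::lock_guard<std::mutex> lk(m); ready = true; }
        cv.notify_one();
    });
    std::unique_lock<std::mutex> lk(m);
    // Explicit while-loop form:
    while (!ready)
        cv.wait(lk);
    // Equivalent 2-argument overload:
    //   cv.wait(lk, [] { return ready; });
    t.join();
}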
Your Q and Q_buf are shared variables. Due to Rule 1, I would prefer to have them as local variables declared in the function that uses them (consume() and produce(), respectively). There will be 1 shared buffer that will be protected by a mutex. The producer will add to its local buffer. When that buffer is full, it acquires the mutex and pushes the local buffer to the shared buffer. It then waits for the consumer to accept this buffer before producing more data.
The consumer waits for this shared buffer to "arrive", then it acquires the mutex and replaces its empty local buffer with the shared buffer. Then it signals to the producer that the buffer has been accepted so it knows to start producing again.
Semantically, I don't see a reason to use swap over move, since in every case one of the containers is empty anyway. Maybe you want to use swap because you know something about the underlying memory. You can use whichever you want and it will be fast and work the same (at least algorithmically).
This problem can be done with 1 condition variable, but it may be a little easier to think about if you use 2.
Here's what I came up with. Tested on Visual Studio 2017 (15.6.7) and GCC 5.4.0. I don't need to be credited or anything (it's such a simple piece), but legally I have to say that I offer no warranties whatsoever.
#include <thread>
#include <vector>
#include <mutex>
#include <condition_variable>
#include <chrono>
std::vector<int> g_deliveryBuffer;
bool g_quit = false;
std::mutex g_mutex; // protects g_deliveryBuffer and g_quit
std::condition_variable g_producerDeliver;
std::condition_variable g_consumerAccepted;
// consumer
void consume()
{
// local buffer
std::vector<int> consumerBuffer;
while (true)
{
if (consumerBuffer.empty())
{
std::unique_lock<std::mutex> lock(g_mutex);
while (g_deliveryBuffer.empty() && !g_quit) // if we beat the producer, wait for them to push to the deliverybuffer
g_producerDeliver.wait(lock);
if (g_quit)
break;
consumerBuffer = std::move(g_deliveryBuffer); // get the buffer
}
g_consumerAccepted.notify_one(); // notify the producer that the buffer has been accepted
// for-loop to process the elems in Q
// ...
consumerBuffer.clear();
// ...
}
}
// producer
void produce()
{
std::vector<int> producerBuffer;
while (true)
{
// for-loop to fill up Q_buf
// ...
producerBuffer = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
// ...
// once Q_buf is fully filled, wait until consumer asks to give it a full Q
{ // scope is for lock
std::unique_lock<std::mutex> lock(g_mutex);
g_deliveryBuffer = std::move(producerBuffer); // ok to push to deliverybuffer. it is guaranteed to be empty
g_producerDeliver.notify_one();
while (!g_deliveryBuffer.empty() && !g_quit)
g_consumerAccepted.wait(lock); // wait for consumer to signal for more data
if (g_quit)
break;
// We will never reach this point if the buffer is not empty.
}
}
}
int main()
{
// spawn threads
std::thread consumerThread(consume);
std::thread producerThread(produce);
// run for 5 seconds
std::this_thread::sleep_for(std::chrono::seconds(5));
// signal that it's time to quit
{
std::lock_guard<std::mutex> lock(g_mutex);
g_quit = true;
}
// one of the threads may be sleeping
g_consumerAccepted.notify_one();
g_producerDeliver.notify_one();
consumerThread.join();
producerThread.join();
return 0;
}

How many mutex(es) should be used in one thread

I am working on a C++ (11) project, and on the main thread I need to check the values of two variables. The values of the two variables will be set by other threads through two different callbacks. I am using two condition variables to notify changes of those two variables. Because in C++ locks are needed for condition variables, I am not sure whether I should use the same mutex for the two condition variables or use two mutexes to minimize exclusive execution. Somehow, I feel one mutex should be sufficient, because on one thread (the main thread in this case) the code will be executed sequentially anyway; the code on the main thread that checks (waits for) the values of the two variables won't be interleaved. Let me know if you need me to write code to illustrate the problem; I can prepare that. Thanks.
Update, add code:
#include <mutex>
class SomeEventObserver {
public:
virtual void handleEventA() = 0;
virtual void handleEventB() = 0;
};
class Client : public SomeEventObserver {
public:
Client() {
m_shouldQuit = false;
m_hasEventAHappened = false;
m_hasEventBHappened = false;
}
// will be called by some other thread (for example, thread 10)
virtual void handleEventA() override {
{
std::lock_guard<std::mutex> lock(m_mutexForA);
m_hasEventAHappened = true;
}
m_condVarEventForA.notify_all();
}
// will be called by some other thread (for example, thread 11)
virtual void handleEventB() override {
{
std::lock_guard<std::mutex> lock(m_mutexForB);
m_hasEventBHappened = true;
}
m_condVarEventForB.notify_all();
}
// here waitForA and waitForB are in the main thread, they are executed sequentially
// so I am wondering if I can use just one mutex to simplify the code
void run() {
waitForA();
waitForB();
}
void doShutDown() {
m_shouldQuit = true;
}
private:
void waitForA() {
std::unique_lock<std::mutex> lock(m_mutexForA);
m_condVarEventForA.wait(lock, [this]{ return m_hasEventAHappened; });
}
void waitForB() {
std::unique_lock<std::mutex> lock(m_mutexForB);
m_condVarEventForB.wait(lock, [this]{ return m_hasEventBHappened; });
}
// I am wondering if I can use just one mutex
std::condition_variable m_condVarEventForA;
std::condition_variable m_condVarEventForB;
std::mutex m_mutexForA;
std::mutex m_mutexForB;
bool m_hasEventAHappened;
bool m_hasEventBHappened;
};
int main(int argc, char* argv[]) {
Client client;
client.run();
}
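For comparison, here is a minimal sketch of the single-mutex variant. It uses free functions and the hypothetical names aHappened/bHappened rather than the class above, so it is an illustration, not a drop-in replacement. Because the main thread waits for A and then for B sequentially, one mutex protecting both flags is enough; the only cost is that the two handlers now briefly contend for the same lock:
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>
std::mutex m;                       // single mutex protecting both flags
std::condition_variable cvA, cvB;
bool aHappened = false, bHappened = false;
void handleEventA() {               // called from some worker thread
    { std::lock_guard<std::mutex> lock(m); aHappened = true; }
    cvA.notify_all();
}
void handleEventB() {               // called from another worker thread
    { std::lock_guard<std::mutex> lock(m); bHappened = true; }
    cvB.notify_all();
}
int main() {
    std::thread t1(handleEventA), t2(handleEventB);
    {
        // The main thread waits for A, then for B, under the same mutex.
        std::unique_lock<std::mutex> lock(m);
        cvA.wait(lock, [] { return aHappened; });
        cvB.wait(lock, [] { return bHappened; });
    }
    std::cout << "both events seen" << std::endl;
    t1.join();
    t2.join();
}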

properly ending an infinite std::thread

I have a reusable class that starts up an infinite thread. This thread can only be killed by calling a stop function that sets a kill-switch variable. Looking around, there is quite a bit of argument over volatile vs. atomic variables.
The following is my code:
program.cpp
int main()
{
ThreadClass threadClass;
threadClass.Start();
Sleep(1000);
threadClass.Stop();
Sleep(50);
threadClass.Stop();
}
ThreadClass.h
#pragma once
#include <atomic>
#include <thread>
class ThreadClass
{
public:
ThreadClass(void);
~ThreadClass(void);
void Start();
void Stop();
private:
void myThread();
std::atomic<bool> runThread;
std::thread theThread;
};
ThreadClass.cpp
#include "ThreadClass.h"
ThreadClass::ThreadClass(void)
{
runThread = false;
}
ThreadClass::~ThreadClass(void)
{
}
void ThreadClass::Start()
{
runThread = true;
theThread = std::thread(&ThreadClass::myThread, this);
}
void ThreadClass::Stop()
{
if(runThread)
{
runThread = false;
if (theThread.joinable())
{
theThread.join();
}
}
}
void ThreadClass::myThread()
{
while(runThread)
{
//dostuff
Sleep(100); //or chrono
}
}
The code that I am presenting here mirrors an issue that our legacy code had in place. We call the stop function twice, which will try to join the thread twice. This results in an invalid handle exception. I have coded the Stop() function to work around that issue, but my question is: why would the join fail the second time if the thread has completed and been joined? Is there a better, programmatic way to check that the thread is valid before trying to join?
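As to why the second join fails: after join() returns, the std::thread object no longer represents a thread of execution, so it is no longer joinable, and joining it again throws std::system_error (invalid_argument). Guarding with joinable() before calling join(), as the Stop() above already does, is the standard check. A minimal sketch:
#include <cassert>
#include <thread>
int main() {
    std::thread t([] { /* do stuff */ });
    assert(t.joinable());   // the object still owns a thread of execution
    t.join();               // after join() it owns nothing
    assert(!t.joinable());  // a second t.join() here would throw std::system_error
}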

Write to QTcpSocket fails with different thread error

I have created a simple threaded TCP server which collects 3 lines read from the socket, and then tries to echo them back to the socket. The function echoCommand below crashes.
#include "fortunethread.h"
#include <QtNetwork>
#include <QDataStream>
FortuneThread::FortuneThread(int socketDescriptor, QObject *parent)
: QThread(parent), socketDescriptor(socketDescriptor), in(0)
{
}
void FortuneThread::run()
{
tcpSocketPtr = new QTcpSocket;
if (!tcpSocketPtr->setSocketDescriptor(socketDescriptor)) {
emit error(tcpSocketPtr->error());
return;
}
in = new QDataStream(tcpSocketPtr);
connect(tcpSocketPtr, SIGNAL(readyRead()), this, SLOT(readCommand()) );
QThread::exec();
}
void FortuneThread::echoCommand()
{
QString block;
QTextStream out(&block, QIODevice::WriteOnly);
for (QStringList::Iterator it = commandList.begin(); it != commandList.end(); ++it) {
out << "Command: " << *it << endl;
}
out << endl;
tcpSocketPtr->write(block.toUtf8());
tcpSocketPtr->disconnectFromHost();
tcpSocketPtr->waitForDisconnected();
}
void FortuneThread::readCommand()
{
while (tcpSocketPtr->canReadLine())
{
commandList << (tcpSocketPtr->readLine()).trimmed();
}
if (commandList.size() > 2)
{
echoCommand();
}
}
and here is the file where I connect up the slots/signals:
#include "fortuneserver.h"
#include "fortunethread.h"
#include <stdlib.h>
FortuneServer::FortuneServer(QObject *parent)
: QTcpServer(parent)
{
}
void FortuneServer::incomingConnection(qintptr socketDescriptor)
{
QString fortune = fortunes.at(qrand() % fortunes.size());
FortuneThread *thread = new FortuneThread(socketDescriptor, this);
connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));
thread->start();
}
The crash happens during or after the socket write, with this error:
QObject: Cannot create children for a parent that is in a different thread.
(Parent is QNativeSocketEngine(0x7f19cc002720), parent's thread is FortuneThread(0x25411d0), current thread is QThread(0x220ff90))
Since I create the tcpSocketPtr in the run() function, I know it is in the same thread as this function. Why would the socket write fail? I should point out that the write is succeeding, since I see the output in the telnet window... but the socket write still reports this error...
Just more info... I found that I should NOT put a slot in a QThread. I'm not sure how to get around this, but here is my class definition:
class FortuneThread : public QThread
{
Q_OBJECT
public:
FortuneThread(int socketDescriptor, QObject *parent);
void run();
signals:
void error(QTcpSocket::SocketError socketError);
private slots:
void readCommand();
private:
void echoCommand();
int socketDescriptor;
QDataStream *in;
QStringList commandList;
QTcpSocket *tcpSocketPtr;
};
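For what it's worth, here is one possible fix, sketched on the assumption that Qt 5 style lambda connections are available. The warning appears because the FortuneThread object itself lives in the main thread (it was created there), so its readCommand()/echoCommand() slots run on the main thread, while the socket lives in the worker thread. Keeping every socket operation in the worker thread, for example by connecting readyRead to a lambda with the socket as the context object, avoids the cross-thread write; the locals below replace the in/commandList/tcpSocketPtr members:
#include <QStringList>
#include <QTcpSocket>
#include <QThread>
void FortuneThread::run()
{
    QTcpSocket socket;                          // lives in this worker thread
    if (!socket.setSocketDescriptor(socketDescriptor)) {
        emit error(socket.error());
        return;
    }
    QStringList commands;
    QObject::connect(&socket, &QTcpSocket::readyRead, &socket,
                     [&socket, &commands]() {   // runs in the worker thread
        while (socket.canReadLine())
            commands << QString::fromUtf8(socket.readLine()).trimmed();
        if (commands.size() > 2) {
            QByteArray block;
            for (const QString &cmd : commands)
                block += "Command: " + cmd.toUtf8() + "\n";
            socket.write(block);
            socket.disconnectFromHost();
        }
    });
    QObject::connect(&socket, &QTcpSocket::disconnected, &socket,
                     []() { QThread::currentThread()->quit(); });
    exec();                                      // event loop for this thread
}
An alternative that is often recommended is to drop the QThread subclass entirely and move a plain worker QObject (owning the socket and the slots) to a QThread, so the slots and the socket end up in the same thread by construction.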
