I am trying to write a module for Node.js that wraps libspotify. The goal is to write a web app that allows remote control of a device playing music from Spotify.
I have decided to follow the spshell example to ensure thread safety and write a "Spotify Service" in plain C that starts a separate thread which calls all the API functions.
The Node.js module then just calls a few provided functions to interact with Spotify. The code for the service can be found here: http://pastebin.com/KB6uwSC8 The new thread gets started at the bottom.
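In essence, the Spotify thread runs an spshell-style event loop roughly like the sketch below (a simplified illustration, not the exact pastebin code; spotifySession and the function names are placeholders):

/* Sketch of an spshell-style event loop; the real SpotifyService.c may differ.
 * spotifySession is assumed to be created elsewhere, and notifyMainThread must
 * be registered as the notify_main_thread session callback. */
#include <pthread.h>
#include <time.h>
#include <libspotify/api.h>

extern sp_session *spotifySession;

static pthread_mutex_t notifyMutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  notifyCond  = PTHREAD_COND_INITIALIZER;
static int notifyEvents = 0;

/* libspotify calls this from one of its internal threads. */
static void notifyMainThread(sp_session *session)
{
    pthread_mutex_lock(&notifyMutex);
    notifyEvents = 1;
    pthread_cond_signal(&notifyCond);
    pthread_mutex_unlock(&notifyMutex);
}

static void *spotifyLoop(void *arg)
{
    int nextTimeout = 0;
    pthread_mutex_lock(&notifyMutex);
    for (;;) {
        if (nextTimeout == 0) {
            while (!notifyEvents)
                pthread_cond_wait(&notifyCond, &notifyMutex);
        } else {
            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts);
            ts.tv_sec  += nextTimeout / 1000;
            ts.tv_nsec += (nextTimeout % 1000) * 1000000;
            if (ts.tv_nsec >= 1000000000) {
                ts.tv_sec++;
                ts.tv_nsec -= 1000000000;
            }
            pthread_cond_timedwait(&notifyCond, &notifyMutex, &ts);
        }
        notifyEvents = 0;
        pthread_mutex_unlock(&notifyMutex);
        /* All libspotify API calls happen on this thread only. */
        do {
            sp_session_process_events(spotifySession, &nextTimeout);
        } while (nextTimeout == 0);
        pthread_mutex_lock(&notifyMutex);
    }
    return NULL;
}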
Now I call this in a simple program like the one below (the fgets calls are just a simple way to wait for the login to complete). I used C++ to get as close as possible to how node-gyp compiles the code.
#include <stdio.h>

extern "C" {
#include "objects/SpotifyService.h"
}

int main(int argc, char** argv) {
    login();

    char string[100];
    fgets(string, 100, stdin);
    fprintf(stdout, "Got: %s", string);

    logout();

    fgets(string, 100, stdin);
    fprintf(stdout, "Got: %s", string);

    return 0;
}
It works fine. I can't get this to crash.
If I use the exact same "service" in Node.js (meaning I just call login() and logout() and do nothing else), it sometimes crashes when logging out, roughly 7 or 8 times out of 10. I've tried lots of things, including:
Copying the compiler flags from node-gyp to my small example
fiddling with the thread attributes of the spotify thread
compiling on OSX and Debian
using libuv instead of plain pthreads
compiling my "service" to a shared library and call this from node
to no avail. It just crashes. It seems to crash less when called from within gdb, but that could be random.
A stack trace from gdb shows the following:
Thread 3 (Thread 0x7ffff65fd700 (LWP 21838)):
#0 0x00007ffff678f746 in ?? () from /usr/local/lib/libspotify.so.12
#1 0x00007ffff6702289 in ?? () from /usr/local/lib/libspotify.so.12
#2 0x00007ffff6702535 in ?? () from /usr/local/lib/libspotify.so.12
#3 0x00007ffff6703b5a in ?? () from /usr/local/lib/libspotify.so.12
#4 0x00007ffff6703c86 in ?? () from /usr/local/lib/libspotify.so.12
#5 0x00007ffff66c5c8b in ?? () from /usr/local/lib/libspotify.so.12
#6 0x00007ffff679a5b3 in sp_session_process_events () from /usr/local/lib/libspotify.so.12
#7 0x00007ffff6aa7839 in spotifyLoop (nervNicht=<value optimized out>) at ../src/SpotifyService.c:103
#8 0x00007ffff70118ca in start_thread () from /lib/libpthread.so.0
#9 0x00007ffff6d78b6d in clone () from /lib/libc.so.6
#10 0x0000000000000000 in ?? ()
(On OS X, gdb showed that the libspotify function being called is named "process_title".)
Since nothing has helped so far, I just don't have any idea whether I can get this to work or whether something in libspotify is simply incompatible with Node.js. I don't understand how node-gyp links the .o files; maybe something goes wrong there?
I found two other projects on GitHub that try to do this, but one of them runs the Spotify main loop in JavaScript and the other uses node 0.1.100 and libspotify 0.0.4 and hasn't been updated in 2 years. I couldn't learn anything from either of them.
OK, I've played around some more. I just ignored the logout error and continued to implement other features.
I added the creation of an sp_playlistcontainer in the logged_in callback, and apparently that helped. Since then, the node module does not crash anymore (or at least hasn't yet).
static void rootPlaylistContainerLoaded(sp_playlistcontainer* pc, void* userdata);

static sp_playlistcontainer_callbacks pc_callbacks = {
    .container_loaded = &rootPlaylistContainerLoaded,
};

static void rootPlaylistContainerLoaded(sp_playlistcontainer* pc, void* userdata) {
    int numPlaylists = sp_playlistcontainer_num_playlists(pc);
    fprintf(stdout, "Root playlist synchronized, number of Playlists: %d\n", numPlaylists);
}

static void loggedIn(sp_session* session, sp_error error) {
    if (SP_ERROR_OK != error) {
        fprintf(stderr, "Error logging in: %s\n", sp_error_message(error));
    } else {
        fprintf(stdout, "Service is logged in!\n");
    }
    // This is absolutely necessary here, otherwise subsequent callbacks can crash.
    sp_playlistcontainer *pc = sp_session_playlistcontainer(spotifySession);
    sp_playlistcontainer_add_callbacks(pc, &pc_callbacks, NULL);
}
But the sp_playlistcontainer creation must happen in the logged_in callback; when I called it from another function (say, getPlaylistNames), the program crashed, too.
I'll see if it continues to work and hope this answer can help others.
Related
I am using a QWebFrame to visualize some data, and I use the evaluateJavaScript method to update the data in JavaScript. Here is my function for this.
QWebView *webPage;

void setValue(int idx, double value) {
    webPage->page()->mainFrame()->evaluateJavaScript(
        QString("setDataValue(%1,%2)").arg(QString::number(idx)).arg(QString::number(value)));
}
In the Qt application I can call this function via a button callback as many times as I want without causing an error.
I want to call this setValue function from a separate thread to visualize incoming data. When I call setValue from a separate thread, the application crashes after the first or after a few iterations. I have tried both QThread and boost threads, but the results are the same.
void dummyTest() {
    for (int i = 0; i < 1000; i++)
        setValue(0, rand() % 150);
}
This dummyTest function also works without problems when called via a button callback, but crashes when run on a separate thread.
Here is the code for the thread initialization:
void startSerialProcessing() {
    boost::thread_attributes attr;
    attr.set_stack_size(1024);
    std::cout << "dummy processor started. \n";
    serialThread = new boost::thread(&MavLinkAL::dummyTest, this);
}
My observation is that this crash only happens when setValue is called from a separate thread. Here are the important lines from the core dump file, viewed in gdb.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007f3f0d6f361f in WTF::StringImpl::~StringImpl() ()
from /usr/lib/x86_64-linux-gnu/libQtWebKit.so.4
#1 0x00007f3f0d6583f8 in JSC::JSValue::toStringSlowCase(JSC::ExecState*) const
() from /usr/lib/x86_64-linux-gnu/libQtWebKit.so.4
#2 0x00007f3f0d68a396 in ?? () from /usr/lib/x86_64-linux-gnu/libQtWebKit.so.4
#3 0x00007f3f0d55a9f1 in ?? () from /usr/lib/x86_64-linux-gnu/libQtWebKit.so.4
#4 0x00007f3f0d56313f in ?? () from /usr/lib/x86_64-linux-gnu/libQtWebKit.so.4
Any help to solve this problem is really appreciated. Thanks.
QWebFrame functions are neither reentrant nor thread-safe. They can only be invoked from the main GUI thread. Qt allows you to deliver a message to another thread quite easily: make the setValue function a slot, and connect it to a signal emitted from the processing thread.
EDIT: As Vladimir Bershov suggested in the comment below, one can also use QMetaObject::invokeMethod() with Qt::QueuedConnection to achieve the same result.
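A rough sketch of what that can look like (the class and member names here are placeholders, not taken from the question):

#include <QObject>
#include <QMetaObject>
#include <QWebView>
#include <QWebFrame>

// Placeholder class that owns the web view; setValue is a slot so it can be
// targeted by a queued connection or by invokeMethod from any thread.
class Visualizer : public QObject
{
    Q_OBJECT
public:
    explicit Visualizer(QWebView *view, QObject *parent = 0)
        : QObject(parent), webView(view) {}

public slots:
    void setValue(int idx, double value)
    {
        // Runs in the GUI thread, where the Visualizer lives.
        webView->page()->mainFrame()->evaluateJavaScript(
            QString("setDataValue(%1,%2)").arg(idx).arg(value));
    }

private:
    QWebView *webView;
};

// From the worker thread, either emit a signal connected to setValue with
// Qt::QueuedConnection, or invoke the slot directly:
void pushValueFromWorker(Visualizer *visualizer, int idx, double value)
{
    QMetaObject::invokeMethod(visualizer, "setValue", Qt::QueuedConnection,
                              Q_ARG(int, idx), Q_ARG(double, value));
}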
I am writing a heavily multi-threaded [>170 threads] C++11 program. Each thread logs information into one file shared by all threads. For performance reasons I want to create a dedicated log thread that writes the information to the global file via fprintf(). I have no idea how to organize the structure into which the worker threads write the information so that it can then be read by the log thread.
Why don't I just call sprintf() in each worker thread and hand the output buffer to the log thread? For the formatted output into the log file I use a locale in the fprintf() calls that is different from the one used in the rest of the thread. I would therefore have to constantly switch and lock/guard the xprintf() calls to keep the locale outputs apart.
In the log thread I have one locale setting used for the whole output, while the worker threads have their own locale version.
Another reason for the log thread is that I have to "group" the output; otherwise the information from each worker thread would not appear as one block:
Wrong:
Information A Thread #1
Information A Thread #2
Information B Thread #1
Information B Thread #2
Correct:
Information A Thread #1
Information B Thread #1
Information A Thread #2
Information B Thread #2
In order to achieve this grouping I would have to guard the output in each worker thread, which slows down thread execution.
How can I save the va_list into a structure so that it can be read by the log thread and passed back to fprintf()?
I don't see how this would be done easily using the legacy C vprintf with va_lists. As you want to pass things around between threads, sooner or later you will need to use the heap in some way.
Below is a solution that uses Boost.Format for the formatting and Boost.Variant for parameter passing. The example is complete and working if you concatenate the following code blocks in order. If you compile with GCC, you need to pass the -pthread linker flag. And of course, you'll also need the two Boost libraries, which are, however, header-only. Here are the headers we will use.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <list>
#include <locale>
#include <mutex>
#include <random>
#include <string>
#include <thread>
#include <utility>
#include <vector>
#include <boost/format.hpp>
#include <boost/variant.hpp>
First, we need some mechanism to asynchronously execute some tasks, in this case, printing our logging messages. Since the concept is general, I use an "abstract" base class Spooler for this. Its code is based on Herb Sutter's talk "Lock-Free Programming (or, Juggling Razor Blades)" at CppCon 2014 (part 1, part 2). I'm not going into detail about this code because it is mostly scaffolding not directly related to your question, and I assume you already have this piece of functionality in place. My Spooler uses a std::list protected by a std::mutex as a task queue. It might be worthwhile to consider using a lock-free data structure instead.
class Spooler
{
private:
    bool done_ {};
    std::list<std::function<void(void)>> queue_ {};
    std::mutex mutex_ {};
    std::condition_variable condvar_ {};
    std::thread worker_ {};

public:
    Spooler() : worker_ {[this](){ work(); }}
    {
    }

    ~Spooler()
    {
        auto poison = [this](){ done_ = true; };
        this->submit(std::move(poison));
        if (this->worker_.joinable())
            this->worker_.join();
    }

protected:
    void
    submit(std::function<void(void)> task)
    {
        // This is basically a push_back but avoids potentially blocking
        // calls while in the critical section.
        decltype(this->queue_) tmp {std::move(task)};
        {
            std::unique_lock<std::mutex> lck {this->mutex_};
            this->queue_.splice(this->queue_.cend(), tmp);
        }
        this->condvar_.notify_all();
    }

private:
    void
    work()
    {
        do
        {
            std::unique_lock<std::mutex> lck {this->mutex_};
            while (this->queue_.empty())
                this->condvar_.wait(lck);
            const auto task = std::move(this->queue_.front());
            this->queue_.pop_front();
            lck.unlock();
            task();
        }
        while (!this->done_);
    }
};
From the Spooler, we now derive a Logger that (privately) inherits its asynchronous capabilities and adds the logging-specific functionality. It has only one member function, called log, that takes as parameters a format string and zero or more arguments to format into it, passed as a std::vector of boost::variants.
Unfortunately, this limits us to a fixed number of supported types, but that shouldn't be a big problem since the C printf doesn't support arbitrary types either. For the sake of this example, I'm only using int and double, but you can extend the list with std::string, void * pointers or whatever you need.
The log function constructs a lambda expression that creates a boost::format object, feeds it all the arguments and then writes the result to std::clog or wherever you want the formatted message to go.
The constructor of boost::format has an overload that accepts the format string and a locale. You might be interested in this one since you have mentioned setting a custom locale in the comments. The usual constructor only takes a single argument, the format string.
Note how all formatting and outputting is done on the spooler's thread.
class Logger : Spooler
{
public:
    void
    log(const std::string& fmt,
        const std::vector<boost::variant<int, double>>& args)
    {
        auto task = [fmt, args](){
            boost::format msg {fmt, std::locale {"C"}};  // your locale here
            for (const auto& arg : args)
                msg % arg;                     // feed the next argument
            std::clog << msg << std::endl;     // print the formatted message
        };
        this->submit(std::move(task));
    }
};
This is all it takes. We can now use the Logger like in this example. It is important that all worker threads are join()ed before the Logger is destructed, or it won't process all messages.
int
main()
{
    Logger logger {};
    std::vector<std::thread> threads {};
    std::random_device rnddev {};
    for (int i = 0; i < 4; ++i)
    {
        const auto seed = rnddev();
        auto task = [&logger, i, seed](){
            std::default_random_engine rndeng {seed};
            std::uniform_real_distribution<double> rnddist {0.0, 0.5};
            for (double p = 0.0; p < 1.0; p += rnddist(rndeng))
                logger.log("thread #%d is %6.2f %% done", {i, 100.0 * p});
            logger.log("thread #%d has completed its work", {i});
        };
        threads.emplace_back(std::move(task));
    }
    for (auto& thread : threads)
        thread.join();
}
Possible output:
thread #1 is 0.00 % done
thread #0 is 0.00 % done
thread #0 is 26.84 % done
thread #0 is 76.15 % done
thread #3 is 0.00 % done
thread #0 has completed its work
thread #3 is 34.70 % done
thread #3 is 78.92 % done
thread #3 is 91.89 % done
thread #3 has completed its work
thread #1 is 26.98 % done
thread #1 is 73.84 % done
thread #1 has completed its work
thread #2 is 0.00 % done
thread #2 is 10.17 % done
thread #2 is 29.85 % done
thread #2 is 79.03 % done
thread #2 has completed its work
A user submitted a bug report where my application segfaults in __fortify_fail().
I understand that this is related to building my application with Debian's "hardening" flags -D_FORTIFY_SOURCE=2 -fstack-protector.
Unfortunately, the user's backtrace does not tell me much yet, and the user is not super responsive (right now).
In order to better understand what is going on, I would like to know what __fortify_fail actually does.
This function is normally just an error reporter. Sample code from glibc is:
extern char **__libc_argv attribute_hidden;

void
__attribute__ ((noreturn))
__fortify_fail (msg)
     const char *msg;
{
  /* The loop is added only to keep gcc happy. */
  while (1)
    __libc_message (2, "*** %s ***: %s terminated\n",
                    msg, __libc_argv[0] ?: "<unknown>");
}
libc_hidden_def (__fortify_fail)
It may be called in various places where the source is meant to be fortified. "Fortification" itself is just a set of run-time checks. A sample usage, in the __openat_2 function from io/openat.c:
int
__openat_2 (fd, file, oflag)
     int fd;
     const char *file;
     int oflag;
{
  if (oflag & O_CREAT)
    __fortify_fail ("invalid openat call: O_CREAT without mode");

  return __openat (fd, file, oflag);
}
Without fortification, O_CREAT is accepted without a mode (this case is highly suspicious, but still legal).
Think of __fortify_fail as roughly printf + abort.
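For intuition, here is a minimal example (my own, not from your bug report) that ends in __fortify_fail when built with gcc -O2 -D_FORTIFY_SOURCE=2 -fstack-protector:

/* Deliberate overflow that trips a fortify check at run time. */
#include <string.h>
#include <stdio.h>

int main(void)
{
    char buf[8];
    /* sizeof(buf) is known at compile time, so strcpy is rewritten to
     * __strcpy_chk; the overflow is caught at run time and __chk_fail
     * calls __fortify_fail("buffer overflow detected"), which aborts. */
    strcpy(buf, "this string is much longer than eight bytes");
    puts(buf);
    return 0;
}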
Turning on telepathy about your question, I would guess that the user has a problem with how libc is used from user code. /lib/x86_64-linux-gnu/libc.so.6(+0xebdf0)[0x7f75d3576df0] is the place inside libc where some runtime check fails, so pd[0x49b5c0] is the place from which libc was called incorrectly.
I am trying to automate the handler equipment (a robot picks a chip and puts it onto a hardware platform) with the following requirements:
1. There are 6 sites for the handler. Once the handler puts a device onto a site, it returns an error code:
code1 for ready to test, code2 for error; while a site is still in process, no code is returned.
2. There is a master PC that controls the handler operation, and the communication between the master and the site PCs uses STAF.
3. I need to use that code to run some tests (which are already implemented and working properly).
The handler puts the devices in FIFO order: the first site returns its code first, and the last site returns its code last.
4. The site PC acts passively; the master PC determines when and how to run the tests. The site PC only knows that if the handler is ready, it should execute the tests.
So my question is: in this case, for the site PCs (Windows-based, with Perl and .NET available), is the busy-waiting method better, or does a wait-condition mechanism suit better?
For example: the sample code would be:
void runTestonSite()
{
    for (;;)
    {
        if (returnCode == code1)
        {
            testStart(arg1, arg2, arg3);
        }
    }
}
or is there any better way to do this kind of task?
#include <boost/thread.hpp>

void getReturnCode() {
    // do stuff
}

void RunTestOnSite() {
    // do stuff
}

int main(int argc, char** argv) {
    using namespace boost;
    thread thread_1 = thread(getReturnCode);
    thread thread_2 = thread(RunTestOnSite);
    // do other stuff
    thread_2.join();
    thread_1.join();
    return 0;
}
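For comparison, the "wait condition" mechanism I am asking about would look roughly like this (a sketch with placeholder names, not the real handler interface):

#include <boost/thread.hpp>
#include <iostream>

// Placeholder state shared between the code-receiving thread and the test thread.
static boost::mutex codeMutex;
static boost::condition_variable codeReady;
static int returnCode = 0;          // 0 = in process, 1 = code1 (ready), 2 = code2 (error)

// Called by whatever receives the handler's status (e.g. the STAF listener).
void onHandlerCode(int code)
{
    {
        boost::lock_guard<boost::mutex> lock(codeMutex);
        returnCode = code;
    }
    codeReady.notify_one();
}

void runTestOnSite()
{
    boost::unique_lock<boost::mutex> lock(codeMutex);
    while (returnCode != 1)          // wait for code1 (ready to test)
        codeReady.wait(lock);        // sleeps instead of burning CPU
    std::cout << "handler ready, starting tests\n";  // testStart(arg1, arg2, arg3) goes here
}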
Please advise,
thanks
How do I terminate an ongoing QProcess that is running inside a QThread and gets deleted by another QThread? I even inserted a QMutex extCmdProcessLock, which should prevent the destruction of the DbManager before the extCmdProcess has finished or timed out.
I get a segmentation fault on waitForStarted() if another thread calls delete on the DbManager.
I cannot use signals (I think) because I use the external command as part of a sequential data-processing flow.
Thank you very much for any help!
void DbManager::extCmd() {
    ...
    QMutexLocker locker(&extCmdProcessLock);
    extCmdProcess = new QProcess(this);
    QString argStr = QString("--p1=1")
                   + " --p2=3";
    extCmdProcess->start(cmd, argStr.split(QString(" ")));
    bool startedSuccessfully = extCmdProcess->waitForStarted();
    if (!startedSuccessfully) {
        extCmdProcess->close();
        extCmdProcess->kill();
        extCmdProcess->waitForFinished();
        delete extCmdProcess;
        extCmdProcess = NULL;
        return;
    }
    bool successfullyFinished = extCmdProcess->waitForFinished(-1);
    if (!successfullyFinished) {
        qDebug() << "finishing failed"; // Appendix C
        extCmdProcess->close();
        extCmdProcess->kill();
        extCmdProcess->waitForFinished(-1);
        delete extCmdProcess;
        extCmdProcess = NULL;
        return;
    }
    extCmdProcess->close();
    delete extCmdProcess;
    extCmdProcess = NULL;
}
DbManager::~DbManager() {
    qDebug() << "DB DbManager destructor called.";
    QMutexLocker locker(&extCmdProcessLock);
    if (extCmdProcess != NULL) {
        this->extCmdProcess->kill();            // added after Appendix A
        this->extCmdProcess->waitForFinished();
    }
}
Appendix A: I also get the error "QProcess: Destroyed while process is still running.", and I read that this could mean that the "delete dbmanager" call from my other thread is executed while the waitForStarted() call has not yet completed. But I really wonder why the kill() call in my destructor has not fixed this.
Appendix B: As suggested in a comment, I added waitForFinished(). Sadly, the QProcess still does not get shut down properly; the segmentation fault happens in waitForStarted() or, as shown below, in start() itself.
#0 0x00007f25e03a492a in QEventDispatcherUNIX::registerSocketNotifier () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#1 0x00007f25e0392d0b in QSocketNotifier::QSocketNotifier () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#2 0x00007f25e0350bf8 in ?? () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#3 0x00007f25e03513ef in ?? () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#4 0x00007f25e03115da in QProcess::start () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#5 0x0000000000428628 in DbManager::extCmd()
#6 0x000000000042ca06 in DbManager::storePos ()
#7 0x000000000044f51c in DeviceConnection::incomingData ()
#8 0x00000000004600fb in DeviceConnection::qt_metacall ()
#9 0x00007f25e0388782 in QObject::event () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#10 0x00007f25e0376e3f in QCoreApplicationPrivate::notify_helper () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#11 0x00007f25e0376e86 in QCoreApplication::notify () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#12 0x00007f25e0376ba4 in QCoreApplication::notifyInternal () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#13 0x00007f25e0377901 in QCoreApplicationPrivate::sendPostedEvents () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#14 0x00007f25e03a4500 in QEventDispatcherUNIX::processEvents () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#15 0x00007f25e0375e15 in QEventLoop::processEvents () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#16 0x00007f25e0376066 in QEventLoop::exec () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#17 0x00007f25e0277715 in QThread::exec () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#18 0x00007f25e027a596 in ?? () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#19 0x00007f25df9b43f7 in start_thread () from /lib/libpthread.so.0
#20 0x00007f25def89b4d in clone () from /lib/libc.so.6
#21 0x0000000000000000 in ?? ()
Appendix C: The debug output showed me that the error message "QProcess: Destroyed while process is still running." always appears when the "finishing failed" output appears. This means that my locks and/or kill attempts to protect the QProcess are failing.
Questions I wonder about:
a) If I create a QProcess object and start it, is my extCmdProcessLock unlocked? I already tried a plain lock() call instead of the QMutexLocker, but no luck.
b) The docs say the main thread will be blocked if I use QProcess this way. Do they really mean the main thread, or the thread in which the QProcess is started? I assumed the latter.
c) Is QProcess not usable in a multithreading environment? If two threads create a QProcess object and run it, do they interfere? Is the object somehow static?
Thanks for any help in filling these knowledge gaps. I really hope to get this puzzle solved.
Appendix D: After removing every delete and deleteLater() from all threads, my QProcess still gets smashed.
#0 0x00007fc94e9796b0 in QProcess::setProcessState () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#1 0x00007fc94e97998b in QProcess::waitForStarted () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#2 0x00007fc94e979a12 in QProcess::waitForFinished () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#3 0x0000000000425681 in DbManager::extCmd()
#4 0x0000000000426fb6 in DbManager::storePos ()
#5 0x000000000044d51c in DeviceConnection::incomingData ()
#6 0x000000000045fb7b in DeviceConnection::qt_metacall ()
#7 0x00007fc94e9f4782 in QObject::event () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#8 0x00007fc94e9e2e3f in QCoreApplicationPrivate::notify_helper () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#9 0x00007fc94e9e2e86 in QCoreApplication::notify () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#10 0x00007fc94e9e2ba4 in QCoreApplication::notifyInternal () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#11 0x00007fc94e9e3901 in QCoreApplicationPrivate::sendPostedEvents () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#12 0x00007fc94ea10500 in QEventDispatcherUNIX::processEvents () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#13 0x00007fc94e9e1e15 in QEventLoop::processEvents () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#14 0x00007fc94e9e2066 in QEventLoop::exec () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#15 0x00007fc94e8e3715 in QThread::exec () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#16 0x00007fc94e8e6596 in ?? () from /usr/local/Trolltech/Qt-4.7.4/lib/libQtCore.so.4
#17 0x00007fc94e0203f7 in start_thread () from /lib/libpthread.so.0
#18 0x00007fc94d5f5b4d in clone () from /lib/libc.so.6
#19 0x0000000000000000 in ?? ()
It is really bad style to use a QThread to manage a running process. I see it again and again, and it stems from a fundamental misunderstanding of how to write asynchronous applications properly. Processes are separate from your own application. QProcess provides a beautiful set of signals to notify you when it has successfully started, failed to start, and finished. Simply hook those signals to slots in an instance of a QObject-derived class of yours, and you'll be all set.
It's bad design if the number of threads in your application can significantly exceed the number of cores/hyperthreads available on the platform, or if the number of threads is tied to some unrelated runtime factor like the number of running subprocesses.
See my other answer.
You can create the QProcess on the heap, as a child of your monitoring QObject. You can connect the QProcess's finished() signal to its own deleteLater() slot, so that it automatically deletes itself when it's done. The monitoring QObject should forcibly terminate any remaining running processes when it is itself destroyed, say as a result of your application shutting down.
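For instance, something along these lines (just a sketch; ProcessMonitor and run() are made-up names, not an existing API):

#include <QObject>
#include <QProcess>
#include <QStringList>

class ProcessMonitor : public QObject
{
    Q_OBJECT
public:
    void run(const QString &program, const QStringList &arguments)
    {
        QProcess *proc = new QProcess(this);            // child of the monitor
        connect(proc, SIGNAL(finished(int,QProcess::ExitStatus)),
                proc, SLOT(deleteLater()));             // deletes itself when done
        connect(proc, SIGNAL(error(QProcess::ProcessError)),
                proc, SLOT(deleteLater()));
        proc->start(program, arguments);
    }

    ~ProcessMonitor()
    {
        // Forcibly terminate anything still running when the monitor goes away.
        foreach (QProcess *proc, findChildren<QProcess *>())
            proc->kill();
    }
};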
A further part of the question was how to execute uncontrollably long-running functions, say database queries for which there is no asynchronous API, with minimal impact, interleaved with things for which there is a good asynchronous API, such as QProcess.
A canonical way would be: do things synchronously where you must, asynchronously otherwise. You can stop the controlling object, and any running process, by invoking its deleteLater() slot -- either via a signal/slot connection, or using QMetaObject::invokeMethod() if you want to do it directly while safely crossing the thread boundary. This is the major benefit of using as few blocking calls as possible: you have some control over the processing and can stop it some of the time. With purely blocking implementation, there's no way to stop it short of using some flag variables and sprinkling your code with tests for it.
The deleteLater() will get processed any time the event loop can spin in the thread where a QObject lives. This means that it will get a chance between the database query calls -- any time when the process is running, in fact.
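Concretely, stopping it from another thread can be as simple as the following, where query is a pointer to the controlling object shown below:

// Safe to call from any thread; deleteLater() is queued and executed in the
// thread where the Query object lives, between its blocking steps.
QMetaObject::invokeMethod(query, "deleteLater", Qt::QueuedConnection);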
Untested code:
class Query : public QObject
{
    Q_OBJECT
public:
    Query(QObject * parent = 0) : QObject(parent) {
        connect(&process, SIGNAL(error(QProcess::ProcessError)), SLOT(error()));
        connect(&process, SIGNAL(finished(int,QProcess::ExitStatus)),
                SLOT(finished(int,QProcess::ExitStatus)));
    }
    ~Query() { process.kill(); }
    void start() {
        QTimer::singleShot(0, this, SLOT(slot1()));
    }

protected slots:
    void slot1() {
        // do a database query
        process.start(....);
        next = &Query::slot2;
    }

protected:
    // slot2 and slot3 don't have to be slots
    void slot2() {
        if (result == Error) {...}
        else {...}
        // another database query
        process.start(...); // yet another process gets fired
        next = &Query::slot3;
    }
    void slot3() {
        if (result == Error) {...}
        deleteLater();
    }

protected slots:
    void error() {
        result = Error;
        (this->*next)();
    }
    void finished(int code, QProcess::ExitStatus status) {
        result = Finished;
        exitCode = code;
        exitStatus = status;
        (this->*next)();
    }

private:
    QProcess process;
    enum { Error, Finished } result;
    int exitCode;
    QProcess::ExitStatus exitStatus;
    void (Query::* next)();
};
Personally, I'd check whether the database you're using has an asynchronous API. If it doesn't, but the client library has sources available, I'd do a minimal port to use Qt's networking stack to make it asynchronous. It would lower the overhead because you'd no longer have one thread per database connection, and as you got closer to saturating the CPU, the overhead wouldn't rise: ordinarily, to saturate the CPU you'd need many, many threads, since they mostly sit idle. With an asynchronous interface, the number of context switches would go down, since a thread could process one packet of data from the database and immediately process another packet from a different connection, without having to do a context switch: the execution stays within the event loop of that thread.
QProcess::waitForStarted() just signals that your process has started. The mutex in the extCmd() method then gets unlocked because you are not waiting on QProcess::waitForFinished() in that method: you exit the method while the child process is still running.
If you want a fire-and-forget type of execution, I suggest you use QProcess::startDetached.
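A minimal sketch (cmd and the arguments are placeholders echoing the question):

// Fire and forget: the child process keeps running on its own, so there is
// nothing for the DbManager (or its destructor) to manage or destroy.
QStringList args;
args << "--p1=1" << "--p2=3";
QProcess::startDetached(cmd, args);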