How to use a Shared-Variable - multithreading

I want to know your opinion on how I coded my QThread.
This part of my code handles the PWM for an LED display.
The purpose of my QThread is to keep running as smoothly as possible while a value inside it is being changed; otherwise I can see a very brief flicker on the LED display during the value change, for example when I use a QMutex.
That is why I chose the signal/slot approach.
When I need to change the brightness, which depends on time or on the value of a resistor, I emit SIGNAL_FUNCTION(qint32 new_value).
Is there a risk of crashing my application using this method?
Is there another, more correct way to handle this kind of procedure?
My code is below.
#include "luminosite.h"
LUMINOSITE::LUMINOSITE(QObject *parent, qint32 P_PIN_LUMINOSITE) :
QThread(parent)
{
PIN_LUMINOSITE = P_PIN_LUMINOSITE;
VALEUR = 1;
}
void LUMINOSITE::run()
{
qint32 TICK = 0;
while(true)
{
if (TICK == 0)
{
digitalWrite(PIN_LUMINOSITE, LOW);
}
else if (TICK == VALEUR)
{
digitalWrite(PIN_LUMINOSITE, HIGH);
}
TICK++;
if (TICK == 33)
{
TICK = 0;
}
delayMicroseconds(1);
}
}
void LUMINOSITE::CHANGEMENT_VALEUR(qint32 P_VALEUR) // SLOT
{
VALEUR = P_VALEUR;
}

If your question is only "are signals and slots thread-safe?", read this: http://doc.qt.io/qt-5/threads-qobject.html#signals-and-slots-across-threads
It only depends on the connection type you choose.
You need to choose between Qt::QueuedConnection and Qt::BlockingQueuedConnection.
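For illustration, here is a minimal sketch of what such a cross-thread connection could look like. Only LUMINOSITE and its CHANGEMENT_VALEUR slot come from the question; the Dimmer sender, its SIGNAL_FUNCTION(qint32) signal and the pin number are assumptions:
// Sketch only: Dimmer and SIGNAL_FUNCTION are hypothetical, the pin number is arbitrary.
LUMINOSITE *luminosite = new LUMINOSITE(nullptr, 18);
Dimmer *dimmer = new Dimmer;   // whatever object decides the new brightness

QObject::connect(dimmer, &Dimmer::SIGNAL_FUNCTION,
                 luminosite, &LUMINOSITE::CHANGEMENT_VALEUR,
                 Qt::QueuedConnection);   // queued delivery across threads

luminosite->start();   // run() begins executing in its own thread
Note that even with Qt::QueuedConnection the slot executes in the thread the LUMINOSITE object lives in (the thread that created it), not inside run(), so the write to VALEUR and the read in the run() loop still happen on different threads; declaring VALEUR as a QAtomicInt or std::atomic<qint32> is one simple way to make that exchange well-defined.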

Related

Worker thread suspend / resume implementation

In my attempt to add suspend / resume functionality to my Worker [thread] class, I've happened upon an issue that I cannot explain. (C++1y / VS2015)
The issue looks like a deadlock; however, I cannot reproduce it once a debugger is attached and a breakpoint is set before a certain point (see #1), so it looks like a timing issue.
The fix that I could find (#2) doesn't make a lot of sense to me, because it requires holding a mutex for longer, and in a region where client code might attempt to acquire other mutexes, which I would expect to increase the chance of a deadlock.
But it does fix the issue.
The Worker loop:
Job* job;
while (true)
{
    {
        std::unique_lock<std::mutex> lock(m_jobsMutex);
        m_workSemaphore.Wait(lock);
        if (m_jobs.empty() && m_finishing)
        {
            break;
        }
        // Take the next job
        ASSERT(!m_jobs.empty());
        job = m_jobs.front();
        m_jobs.pop_front();
    }
    bool done = false;
    bool wasSuspended = false;
    do
    {
        // #2
        { // Removing this extra scoping seemingly fixes the issue BUT
          // incurs us holding on to m_suspendMutex while the job is Process()ing,
          // which might 1, be lengthy, 2, acquire other locks.
            std::unique_lock<std::mutex> lock(m_suspendMutex);
            if (m_isSuspended && !wasSuspended)
            {
                job->Suspend();
            }
            wasSuspended = m_isSuspended;
            m_suspendCv.wait(lock, [this] {
                return !m_isSuspended;
            });
            if (wasSuspended && !m_isSuspended)
            {
                job->Resume();
            }
            wasSuspended = m_isSuspended;
        }
        done = job->Process();
    }
    while (!done);
}
Suspend / Resume is just:
void Worker::Suspend()
{
    std::unique_lock<std::mutex> lock(m_suspendMutex);
    ASSERT(!m_isSuspended);
    m_isSuspended = true;
}

void Worker::Resume()
{
    {
        std::unique_lock<std::mutex> lock(m_suspendMutex);
        ASSERT(m_isSuspended);
        m_isSuspended = false;
    }
    m_suspendCv.notify_one(); // notify_all() doesn't work either.
}
The (Visual Studio) test:
struct Job : Worker::Job
{
    int durationMs = 25;
    int chunks = 40;
    int executed = 0;

    bool Process()
    {
        auto now = std::chrono::system_clock::now();
        auto until = now + std::chrono::milliseconds(durationMs);
        while (std::chrono::system_clock::now() < until)
        { /* busy, busy */
        }
        ++executed;
        return executed < chunks;
    }
    void Suspend() { /* nothing here */ }
    void Resume() { /* nothing here */ }
};
auto worker = std::make_unique<Worker>();
Job j;
worker->Enqueue(j);
std::this_thread::sleep_for(std::chrono::milliseconds(j.durationMs)); // Wait at least one chunk.
worker->Suspend();
Assert::IsTrue(j.executed < j.chunks); // We've suspended before we finished.
const int testExec = j.executed;
std::this_thread::sleep_for(std::chrono::milliseconds(j.durationMs * 4));
Assert::IsTrue(j.executed == testExec); // We haven't moved on.
// #1
worker->Resume(); // Breaking before this call means that I won't see the issue.
worker->Finalize();
Assert::IsTrue(j.executed == j.chunks); // Now we've finished.
What am I missing / doing wrong? Why does the Process()ing of the job have to be guarded by the suspend mutex?
EDIT: Resume() should not have been holding on to the mutex at the time of notification; that's fixed -- the issue persists.
Of course the Process()ing of the job does not have to be guarded by the suspend mutex.
The access to j.executed, for the asserts as well as for the incrementing, does however need to be synchronized (either by making it a std::atomic<int> or by guarding it with a mutex, etc.).
It's still not clear why the issue manifested the way it did (since the main thread never writes to the variable); it might be a case of undefined behaviour propagating backwards in time.
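For example, a minimal sketch of the atomic variant (everything else in the test stays as posted):
#include <atomic>

struct Job : Worker::Job
{
    int durationMs = 25;
    int chunks = 40;
    std::atomic<int> executed{0};   // safe to read from the test thread while
                                    // the worker thread increments it

    bool Process()
    {
        auto until = std::chrono::system_clock::now()
                   + std::chrono::milliseconds(durationMs);
        while (std::chrono::system_clock::now() < until)
        { /* busy, busy */ }
        ++executed;                 // atomic increment
        return executed < chunks;
    }
    void Suspend() { /* nothing here */ }
    void Resume() { /* nothing here */ }
};
The asserts such as Assert::IsTrue(j.executed == testExec) keep working unchanged, because std::atomic<int> converts implicitly to int on reads.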

How to search through next available thread to do computation

I am doing multithreading in C++. This may be something very standard but I can't seem to find it anywhere or know any key terms to search for it online.
I want to do some sort of computation many times but with multiple threads. For each iteration of computation, I want to find the next available thread that has finished its previous computation to do the next iteration. I don't want to cycle through the threads in order since the next thread to be called may not have finished its work yet.
E.g.
Suppose I have a vector of int and I want to sum up the total with 5 threads. I have the to-be-updated total sum stored somewhere and the count for which element I am currently up to. Each thread looks at the count to see the next position and then takes that vector value and adds it to the total sum so far. Then it goes back to look for the count to do the next iteration. So for each iteration, the count increments then looks for the next available thread (maybe one already waiting for count; or maybe they are all busy still working) to do the next iteration. We do not increase the number of threads but I want to be able to somehow search through all the 5 threads for the first one that finish to do the next computation.
How would I go about coding this? Every way I know of involves looping through the threads in a fixed order, so I can't pick whichever one happens to be available first.
Use a semaphore (or a mutex; I always mix those two up) around a global variable telling you what is next. The semaphore locks the other threads out while you access the variable, making that access safe.
So, assuming you have an array of X elements and a global called nextfree which is initialized to 0, the pseudo-code would look like this:
while (1)
{
    <lock semaphore INT>
    if (nextfree >= X)
    {
        <release semaphore INT>
        <exit and terminate thread>
    }
    <get the data based on "nextfree">
    nextfree++;
    <release semaphore INT>
    <do your stuff with the chunk you got>
}
The point here is that each thread has exclusive access to the data structure inside the semaphore lock, and can therefore grab the next available item regardless of what the others are doing. (The other threads will have to wait in line if they finish while another thread is fetching its next data chunk. When you release, only ONE thread standing in the queue gets access; the rest keep waiting.)
There are some things to be wary of. Semaphores can lock up your system if you manage to exit at the wrong point (without releasing one) or create a deadlock.
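If you want this in real C++ rather than pseudo-code, here is a minimal sketch with a std::mutex guarding the shared index; the names (workerLoop, nextFree, total), the input vector and the summing workload are just placeholders:
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::vector<int> data(1000, 1);   // placeholder input
std::mutex m;
std::size_t nextFree = 0;         // shared "what is next" index
long long total = 0;              // shared result, also guarded by m

void workerLoop()
{
    while (true) {
        std::size_t i;
        {
            std::lock_guard<std::mutex> lock(m);   // exclusive access to nextFree
            if (nextFree >= data.size())
                return;                            // nothing left, terminate thread
            i = nextFree++;
        }
        int value = data[i];                       // "do your stuff with the chunk"
        std::lock_guard<std::mutex> lock(m);       // accumulate under the same lock
        total += value;
    }
}

int main()
{
    std::vector<std::thread> pool;
    for (int t = 0; t < 5; ++t)
        pool.emplace_back(workerLoop);
    for (auto &th : pool)
        th.join();
    std::cout << total << "\n";
}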
This is a thread pool:
template<class T>
struct threaded_queue {
using lock = std::unique_lock<std::mutex>;
void push_back( T t ) {
{
lock l(m);
data.push_back(std::move(t));
}
cv.notify_one();
}
boost::optional<T> pop_front() {
lock l(m);
cv.wait(l, [this]{ return abort || !data.empty(); } );
if (abort) return {};
auto r = std::move(data.back());
data.pop_back();
return std::move(r);
}
void terminate() {
{
lock l(m);
abort = true;
data.clear();
}
cv.notify_all();
}
~threaded_queue()
{
terminate();
}
private:
std::mutex m;
std::deque<T> data;
std::condition_variable cv;
bool abort = false;
};
struct thread_pool {
thread_pool( std::size_t n = 1 ) { start_thread(n); }
thread_pool( thread_pool&& ) = delete;
thread_pool& operator=( thread_pool&& ) = delete;
~thread_pool() = default; // or `{ terminate(); }` if you want to abandon some tasks
template<class F, class R=std::result_of_t<F&()>>
std::future<R> queue_task( F task ) {
std::packaged_task<R()> p(std::move(task));
auto r = p.get_future();
tasks.push_back( std::move(p) );
return r;
}
template<class F, class R=std::result_of_t<F&()>>
std::future<R> run_task( F task ) {
if (threads_active() >= total_threads()) {
start_thread();
}
return queue_task( std::move(task) );
}
void terminate() {
tasks.terminate();
}
std::size_t threads_active() const {
return active;
}
std::size_t total_threads() const {
return threads.size();
}
void clear_threads() {
terminate();
threads.clear();
}
void start_thread( std::size_t n = 1 ) {
while(n-->0) {
threads.push_back(
std::async( std::launch::async,
[this]{
while(auto task = tasks.pop_front()) {
++active;
try{
(*task)();
} catch(...) {
--active;
throw;
}
--active;
}
}
)
);
}
}
private:
std::vector<std::future<void>> threads;
threaded_queue<std::packaged_task<void()>> tasks;
std::atomic<std::size_t> active;
};
You give it a number of threads, either at construction or via start_thread.
You then call queue_task. This returns a std::future that tells you when the task is completed.
As threads finish a task, they go to the threaded_queue and look for more.
When a threaded_queue is destroyed, it aborts all data in it.
When a thread_pool is destroyed, it aborts all future tasks, then waits for all of the outstanding tasks to finish.
Live example.
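A minimal usage sketch, assuming the two classes above are available in the translation unit (the lambdas are just example workloads):
#include <iostream>

int main()
{
    thread_pool pool(4);                       // four worker threads

    auto answer = pool.queue_task([]{ return 6 * 7; });
    auto hello  = pool.queue_task([]{ std::cout << "hello from a worker\n"; });

    std::cout << answer.get() << "\n";         // blocks until the task ran, prints 42
    hello.wait();                              // make sure the print has happened

    pool.terminate();                          // stop the workers; the std::async
                                               // futures are joined on destruction
}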

Arduino Uno Timing Issues

I have been trying to make a robotic "gutter cleaner" for a school project. I decided to use an Arduino Uno to control the motors. It's only a forward/reverse drive and therefore only has one motor controlling the motion of the robot. There is another motor controlling the "blades" (for lack of a better word) coming out the front to fling the dirt out of the gutter.
I am using a HC-05 Bluetooth module to accept input from a smartphone and a L9110 H-bridge to control the motors separately. There are five functions: forward motion, reverse motion, blades on, stop and "autonomous". Autonomous involves the blades being on with the robot moving forward for 20 seconds, backwards for 10 seconds, repeating until the stop function is called.
The problem is that when I call the function for the autonomous, the HC-06 seems to stop receiving data and the debug println("auto fwd") spams the Serial Monitor. The "auto rev" code debug code isn't even reached. The stop function cannot run as it appears no data is being received so an infinite loop is created.
I've tried using BlinkWithoutDelay here and I honestly have no idea why this isn't working.
#include <SoftwareSerial.h> //Include the "SoftwareSerial" software in the program
#define M1A 5 //tells the software compiler to assign these varibales to these outputs on the Arduino board
#define M1B 9 //M1 motors are controlling the motion
#define M2A 4 //M2 motors control the blades
#define M2B 10
SoftwareSerial BT(3, 2); //Tells the program to assign pins 2 and 3 on the Arduino to send and receive data
void fw(); //Denoting Functions to be used
void bw();
void Stop();
void autonomous();
void bladesOn();
boolean autonom = false; //Variables
boolean blades = false;
unsigned long currentMillis = millis();
unsigned long previousMillis = 0;
const long fwdTime = 20000;
const long revTime = fwdTime/2;
void setup() {
// put your setup code here, to run once:
TCCR1B = (TCCR1B & 0b11111000) | 0x04;
BT.begin(9600);
Serial.begin(9600);
pinMode(M1A, OUTPUT);
pinMode(M1B, OUTPUT);
pinMode(M2A, OUTPUT);
pinMode(M2B, OUTPUT);
}
void loop() {
if (BT.available()) {
char input = BT.read(); //Read the incoming BlueTooth signal
Serial.println(input); //Print the BT signal to the memory
switch (input) { //IF input is 1, do this, IF input is 2, do that
case '1':
fw();
break;
case '2':
bw();
break;
case '3':
autonomous();
blades = 1;
autonom = true;
break;
case '4':
bladesOn();
blades = true;
break;
case '0':
Stop();
autonom = false;
blades = false;
break;
}
}
}
void bw() {
digitalWrite(M1A, 0); //Give an output to the M1A pin
analogWrite(M1B, 255); //Give an output to the M1B pin
digitalWrite(M2A, 0);
analogWrite(M2B, 255);
Serial.println("Backwards");
}
void fw() {
digitalWrite(M1A, 1);
analogWrite(M1B, (255 - 255));
digitalWrite(M2A, 1);
analogWrite(M2B, (255 - 255));
Serial.println("Forwards");
}
void Stop() {
digitalWrite(M1A, 0);
analogWrite(M1B, 0);
digitalWrite(M2A, 0);
analogWrite(M2B, 0);
Serial.println("Stop");
}
void autonomous() {
while (autonom == true) {
if (currentMillis - previousMillis <= fwdTime) {
//When time between last repeat of forwards/reverse and now is less than Time1, go forward
digitalWrite(M1A, 1);
analogWrite(M1B, (255 - 255));
digitalWrite(M2A, 1);
analogWrite(M2B, (255 - 255));
Serial.println("auto fwd");
}
if (currentMillis - previousMillis <= revTime) {
//When time between last repeat of forwards/reverse and now is less than Time2, go reverse
digitalWrite(M1A, 0);
analogWrite(M1B, 255);
digitalWrite(M2A, 0);
analogWrite(M2B, 255);
Serial.println("auto rev");
}
if (currentMillis - previousMillis == revTime) { //Set previoustime to currenttime
previousMillis = currentMillis;
Serial.println("Autonom");
}
}
}
void bladesOn() {
blades = true;
digitalWrite(M2A, 1);
analogWrite(M2B, 0);
Serial.println("Blades");
}
I know this is probably too long to read for some people, but any help would be very much appreciated. If you need more information, don't hesitate to ask.
PS. I am using "Arduino BT Joystick" as the Android app to control the robot, if that helps.
Thank you,
Craig.
Your logic for the autonomous() function is wrong. The Arduino will get stuck in the while loop on the second call of the function, as noted by DigitalNinja, because the autonom variable is only updated outside this loop.
Even if that weren't the case, the currentMillis variable is not updated anywhere in the code, so the test currentMillis - previousMillis <= fwdTime will always be true.
(Ps: sorry to answer like this, I don't have enough reputation to comment.)
Problems:
autonomous() contains a loop while (autonom == true) { ... } that never terminates, because autonom is never set to false inside the loop; control therefore never returns to the caller void loop() { ... }, and you never listen for Bluetooth commands again.
You do not update currentMillis anywhere, so your robot would end up stuck trying to go forward/backward forever.
It is inappropriate to write currentMillis - previousMillis == revTime, because generally speaking currentMillis might become larger than previousMillis + revTime without ever being equal. You should write currentMillis - previousMillis >= revTime instead.
Although it is not really complicated, your code is quite long and I don't have much time to spend on an answer, so I'll try to set you in the right direction by giving you a mock-up of a proper design in pseudo-code. I am confident that you'll be able to adapt it to your needs.
Please note that in this example '1' sets autonomous movement forward and '2' sets autonomous movement backward, and '3' is no longer used. This is because I think your communication protocol should be enriched so as to allow for command parameters, e.g. the duration of movement. Note that you can easily reproduce your previous behaviour for '1' and '2' by setting time = 0.
enum {
  STAY_STILL,
  MOVE_FORWARD,
  MOVE_BACKWARD
};

#define FORWARD_TIME 20000
#define BACKWARD_TIME (FORWARD_TIME / 2)

byte state = STAY_STILL;
unsigned long startMillis;
unsigned long currentMillis;  // refreshed at the top of every loop()
unsigned long time;           // how long to keep moving; 0 means "forever"

void loop() {
  currentMillis = millis();
  if (BT.available()) {
    char input = BT.read();
    switch (input) {
      case '1':
        state = MOVE_FORWARD;
        time = FORWARD_TIME;
        startMillis = currentMillis;
        break;
      case '2':
        state = MOVE_BACKWARD;
        time = BACKWARD_TIME;
        startMillis = currentMillis;
        break;
      // case '3' is pointless: instead of duplicating functionality
      // you should modify your communication protocol so as to allow
      // setting both a direction and the amount of time you want
      // the robot to move in such direction. A value of 0 could
      // stand for 'forever', all other values could be interpreted
      // as seconds.
      case '4':
        start_blades();
        break;
      case '5':
        stop_blades();
        break;
      case '0':
        state = STAY_STILL;
        break;
    }
  }
  if (state == MOVE_FORWARD || state == MOVE_BACKWARD) {
    move_robot();
  } else if (state == STAY_STILL) {
    stop_blades();
    stop_robot();
  }
  delay(10);
}

void start_blades() {
  digitalWrite(M2A, 1);
  analogWrite(M2B, 0);
  Serial.println("start blades");
}

void stop_blades() {
  digitalWrite(M2A, 0);
  analogWrite(M2B, 0);
  Serial.println("stop blades");
}

void stop_robot() {
  digitalWrite(M1A, 0);
  analogWrite(M1B, 0);
  Serial.println("stop wheels");
}

void move_robot() {
  // time == 0 : move forever in one direction
  // else      : move up until the deadline
  if (time == 0 || currentMillis - startMillis <= time) {
    if (state == MOVE_FORWARD) {
      fw();
    } else if (state == MOVE_BACKWARD) {
      bw();
    }
  } else {
    // this will turn off both blades and
    // wheels at the next iteration of the loop()
    state = STAY_STILL;
  }
}

...
// your fw() and bw() functions
...
...
// your fw() and bw() functions
...
To conclude, a minor suggestion. In your place, I would change functions like fw(), bw(), start_blades(), stop_blades(), stop_robot() to perform an action only when it is necessary, instead of setting the values and printing stuff mindlessly. In addition to reducing the overhead of long functions, it also avoids printing tons of messages through serial, making debugging easier. This can be achieved by enriching the robot's state with more information, but it falls beyond the scope of the question and this answer.
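As an illustration of that last suggestion, here is a sketch in which the motor functions only act when the requested state actually changes; the lastApplied guard is an assumption and not part of the code above:
// Hypothetical guard: remember the last command that was actually applied.
char lastApplied = 0;

void fw() {
  if (lastApplied == 'F') return;   // pins are already set for forward, do nothing
  lastApplied = 'F';
  digitalWrite(M1A, 1);
  analogWrite(M1B, 0);
  digitalWrite(M2A, 1);
  analogWrite(M2B, 0);
  Serial.println("Forwards");       // printed once per state change, not once per loop()
}

void stop_robot() {
  if (lastApplied == 'S') return;   // already stopped, skip the pin writes and the print
  lastApplied = 'S';
  digitalWrite(M1A, 0);
  analogWrite(M1B, 0);
  Serial.println("stop wheels");
}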
Well, in the end, I had a variable that was turning itself on and off apparently on its own. So when the due date came in for the project I took out the Auto function completely. Thanks for all your help guys, I mightn't have gotten it, but I learned a bit.

Concurrency in a D program

I am trying to write a simple D program with two threads (main and one spawned in main) with a daughter thread receiving message from the parent thread. Here is my piece:
import std.stdio;
import core.thread;
import std.concurrency;
import std.c.stdio;

extern (C) int kbhit();

void readSignal(){
    for(;;){
        bool stop = receiveOnly!bool();
        if(!stop)
            writeln("reading signal...");
        else
            writeln("stop reading signal...");
        Thread.getThis.sleep(1.seconds);
    }
}

int main()
{
    auto reader = spawn(&readSignal);
    bool stop = false;
    while(true){
        if(kbhit()){
            stop = true;
        }
        reader.send(stop);      // option 1
        // send(reader, stop);  // option 2
    }
    return 0;
}
Basically I wait for a keyboard hit and then intend to pause the spawned thread.
Neither of the options (option 1 from Andrei Alexandrescu, option 2 as in the dlang docs) works for me. What am I doing wrong?
It looks to me that your main thread is spinning very hard, sending thousands and thousands of messages to the reader. The reader, however, is only reading one per second. The 'true' message is backed up behind a massive number of 'false' messages.
For a situation like this, you may be better off using a shared variable, rather than using messages. Both threads will be able to safely read and write from it.
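A minimal sketch of that shared-variable approach, using a module-level shared flag with atomic loads and stores (the kbhit binding is taken from the question; the sleep lengths are arbitrary):
import core.atomic;
import core.thread;
import core.time;
import std.concurrency;
import std.stdio;

extern (C) int kbhit();

shared bool stop = false;          // flag both threads read/write atomically

void readSignal() {
    while (!atomicLoad(stop)) {
        writeln("reading signal...");
        Thread.sleep(1.seconds);
    }
    writeln("stop reading signal...");
}

void main() {
    auto reader = spawn(&readSignal);
    while (!kbhit()) {
        Thread.sleep(10.msecs);    // don't spin the main thread flat out
    }
    atomicStore(stop, true);       // tell the reader to finish
    thread_joinAll();              // wait for the spawned thread to exit
}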
As mentioned, you are flooding the client thread.
Try adding a sleep, like this:
Thread.sleep(dur!"msecs"(1500));
As mentioned by duselbaer and Anders S, in your example, the number of sends is huge, but the number of receives is limited by the sleep. So, when the thread should actually get a true, it is already preceded by an overwhelming number of falses in the queue.
Generally, if the child thread is able to receive faster than the parent thread sends, it's fine. Here is a working example produced from your code. The sleep calls may be substituted by doing some work, but the relation must generally hold.
import std.stdio;
import core.thread;
import std.concurrency;
import core.stdc.stdio;

extern (C) int kbhit();

void readSignal() {
    for(int i = 0; ; i++) {
        bool stop = receiveOnly!bool();
        if(!stop)
            writeln("reading signal... ", i);
        else
        {
            writeln("stop reading signal...");
            break;
        }
        Thread.getThis.sleep(1.msecs); // this should be no more than below
    }
}

int main() {
    auto reader = spawn(&readSignal);
    bool stop = false;
    while(!stop) {
        if(kbhit()) {
            stop = true;
        }
        send(reader, stop); // option 1
        // reader.send(stop); // option 2 does the same thing as above via UFCS
        Thread.getThis.sleep(1.msecs); // this should be no less than above
    }
    return 0;
}

Controlling a camera from Qt: interaction of OS threads with the event loop

I'm working on a Qt application to control an industrial camera, and in particular I need to trigger the camera at a particular time (when various illumination settings have been put in place, for example), and wait until a frame is returned. In the simplest case, the following code does the job nicely:
void AcquireFrame()
{
    // Runs in the main GUI thread:
    camera -> m_mutex.lock();
    camera -> frameHasArrived = false;
    camera -> m_mutex.unlock();
    camera -> triggerImageAcquisition();
    forever
    {
        camera -> m_mutex.lock();
        bool isReady = camera -> frameHasArrived;
        camera -> m_mutex.unlock();
        if (isReady)
        {
            return;
        }
        else
        {
            Sleep(10);
        }
    }
}

void callback(camera *)
{
    // Called by the camera driver from a separate OS thread - not a Qt thread -
    // when a frame is ready:
    camera -> m_mutex.lock();
    camera -> frameHasArrived = true;
    camera -> m_mutex.unlock();
}
...and most of the time this works perfectly well. However, this being the real world, occasionally the camera will fail to receive the trigger or the computer will fail to receive the frame cleanly, and the above code will then go into an infinite loop.
The obvious thing to do is to put in a timeout, so if the frame is not received within a certain time then the image acquisition can be attempted again. The revised code looks like:
void AcquireFrame()
{
    camera -> m_mutex.lock();
    camera -> frameHasArrived = false;
    camera -> m_mutex.unlock();
    camera -> triggerImageAcquisition();
    QTime timeout;
    timeout.start();
    forever
    {
        timeout.restart();
fetch:
        camera -> m_mutex.lock();
        bool isReady = camera -> frameHasArrived;
        camera -> m_mutex.unlock();
        if (isReady)
        {
            return;
        }
        else if (timeout.elapsed() > CAM_TIMEOUT)
        {
            // Assume the first trigger failed, so try again:
            camera -> triggerImageAcquisition();
            continue;
        }
        else
        {
            Sleep(10);
            goto fetch;
        }
    }
}
Now, the problem is that with this latter version the failure rate (the proportion of 'unsuccessful triggers') is much, much higher - at least an order of magnitude. Moreover, this code too will eventually find itself in an infinite loop where, however many times it tries to re-trigger the camera, it never sees a frame come back. Under these latter circumstances, killing the application and checking the camera reveals that the camera is in perfect working order and patiently waiting for its next trigger, so it doesn't appear to be a camera problem. I'm coming to the conclusion that in fact it's some sort of a system resource issue or a thread conflict, so that Qt's event loop is not allowing the camera callback to be called at the proper time.
Is this likely, and is there in fact a better way of doing this?
Update on 6th June:
For what it's worth, I've seen no more problems since I adopted the method below (having given the camera object an extra member, namely a QWaitCondition called 'm_condition'):
void AcquireFrame()
{
    bool frameReceived;
    forever
    {
        camera -> triggerImageAcquisition();
        camera -> m_mutex.lock();
        frameReceived = camera -> m_condition.wait(&camera->m_mutex, CAM_TIMEOUT);
        if (frameReceived)
        {
            // We received a frame from the camera, so can return:
            camera -> m_mutex.unlock();
            return;
        }
        // If we got to here, then the wait condition must have timed out. We need to
        // unlock the mutex, go back to the beginning of the 'forever' loop and try
        // again:
        camera -> m_mutex.unlock();
    }
}

void callback (camera *)
{
    // Called by the camera driver from a separate OS thread -
    // not a QThread - when a frame is ready:
    camera -> m_condition.wakeOne();
}
This still has the effect of pausing the main thread until we have either received a frame or experienced a timeout, but now we have eliminated the Sleep() and the Qt event loop remains in full control throughout. It's still not clear to me why the old method caused so many problems - I still suspect some sort of system resource limitation - but this new approach seems to be more lightweight and certainly works better.
Running an AcquireFrame that blocks on mutexes in the GUI thread doesn't make much sense to me unless you want to trade GUI responsiveness for latency, but I doubt you care about latency here, since the camera snaps single frames and you insist on processing them in the busy GUI thread in the first place.
Secondly, there is nothing that Qt would do to prevent the callback from getting called from the other thread, other than the other thread having lower priority and being preempted by higher priority threads completely monopolizing the CPU.
I would simply post an event to a QObject in the GUI thread (or any other QThread!) from the callback function. You can post events from any thread, it doesn't matter -- what matters is the receiver. QCoreApplication::postEvent is a static function, after all, and it doesn't check the current thread at all.
In a complex application you probably want to have the logic in a dedicated controller QObject, and not spread across QWidget-derived classes. So you'd simply post the event to the controller instance.
Do note that posting an event to an idle GUI thread will work exactly the same as using a mutex -- Qt's event loop uses a mutex and sleeps on that mutex and on messages from the OS. The beautiful thing is that Qt already does all the waiting for you, but the wait is interruptible. The posted event should have a high priority, so that it'll end up the first one in the queue and preempt all the other events. When you're ready to acquire the frame, but before you trigger it, you can probably call QCoreApplication::flush(). That's about it.
There should be no problem in having a separate image processor QObject living in a dedicated QThread to leverage multicore machines. You can then process the image into a QImage, and forward that one to the GUI thread using another event, or simply via a signal-slot connection. You can probably also notify the GUI thread when you've acquired the frame but are only beginning to process it. That way it'd be more obvious to the user that something is happening. If image processing takes long, you can even send periodic updates that'd be mapped to a progress bar.
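To make that concrete, here is a minimal sketch of posting a high-priority event from the driver callback to a controller object living in the GUI thread; the class and variable names are placeholders, not part of the original code:
#include <QCoreApplication>
#include <QEvent>
#include <QObject>

// Placeholder controller; only the idea of posting an event from the
// driver's callback thread comes from the discussion above.
class FrameController : public QObject
{
    Q_OBJECT
protected:
    void customEvent(QEvent *ev) override
    {
        if (ev->type() == QEvent::User) {
            // We are now in the GUI thread: fetch the frame from the camera
            // and hand it on for processing/display.
        }
    }
};

FrameController *g_controller = nullptr;   // created in the GUI thread at startup

void callback(camera *)
{
    // Called from the driver's own OS thread. postEvent is thread-safe and
    // does not require the caller to be a QThread.
    QCoreApplication::postEvent(g_controller,
                                new QEvent(QEvent::User),
                                Qt::HighEventPriority);
}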
The benchmark results (using release builds) are interesting but in line with the fact that Qt's event queues are internally protected by a mutex, and the event loop waits on that mutex. And the results seem to be portable between the Mac and Windows XP platforms.
Using a naked wait condition is not the fastest way, but using a naked posted event is even slower. The fastest way is to use a queued signal-slot connection. In that case, the cost of posting an event to the same thread (that's what the FrameProcessorEvents::tick() does) seems to be negligible.
Mac
warming caches...
benchmarking...
wait condition latency: avg=45us, max=152us, min=8us, n=1001
queued signal connection latency: avg=25us, max=82us, min=10us, n=1000
queued event latency: avg=71us, max=205us, min=14us, n=1000
Windows XP under VMWare Fusion
Note that results over 1ms are likely due to VMWare being not scheduled at the moment.
warming caches...
benchmarking...
wait condition latency: avg=93us, max=783us, min=8us, n=1000
queued signal connection latency: avg=46us, max=1799us, min=0us, n=1000
queued event latency: avg=117us, max=989us, min=18us, n=1001
Below is the benchmarking code.
#include <cstdio>
#include <limits>
#include <QtCore>
QTextStream out(stdout);
class TimedBase : public QObject
{
public:
TimedBase(QObject * parent = 0) : QObject(parent) { reset(); }
friend QTextStream & operator<<(QTextStream & str, const TimedBase & tb) {
return str << "avg=" << tb.avg() << "us, max=" << tb.usMax << "us, min="
<< tb.usMin << "us, n=" << tb.n;
}
void reset() { usMax = 0; n = 0; usMin = std::numeric_limits<quint32>::max(); usSum = 0; }
protected:
quint64 n, usMax, usMin, usSum;
quint64 avg() const { return (n) ? usSum/n : 0; }
void tock() {
const quint64 t = elapsed.nsecsElapsed() / 1000;
usSum += t;
if (t > usMax) usMax = t;
if (t < usMin) usMin = t;
n ++;
}
QElapsedTimer elapsed;
};
class FrameProcessorEvents : public TimedBase
{
Q_OBJECT
public:
FrameProcessorEvents(QObject * parent = 0) : TimedBase(parent) {}
public slots: // can be invoked either from object thread or from the caller thread
void tick() {
elapsed.start();
QCoreApplication::postEvent(this, new QEvent(QEvent::User), 1000);
}
protected:
void customEvent(QEvent * ev) { if (ev->type() == QEvent::User) tock(); }
};
class FrameProcessorWait : public TimedBase
{
Q_OBJECT
public:
FrameProcessorWait(QObject * parent = 0) : TimedBase(parent) {}
void start() {
QTimer::singleShot(0, this, SLOT(spinner()));
}
public: // not a slot since it must be always invoked in the caller thread
void tick() { elapsed.start(); wc.wakeAll(); }
protected:
QMutex mutex;
QWaitCondition wc;
protected slots:
void spinner() {
forever {
QMutexLocker lock(&mutex);
if (wc.wait(&mutex, 1000)) {
tock();
} else {
return;
}
}
}
};
FrameProcessorEvents * fpe;
FrameProcessorWait * fpw;
static const int avgCount = 1000;
static const int period = 5;
class FrameSender : public QObject
{
Q_OBJECT
public:
FrameSender(QObject * parent = 0) : QObject(parent), n(0), N(1) {
QTimer::singleShot(0, this, SLOT(start()));
}
protected slots:
void start() {
out << (N ? "warming caches..." : "benchmarking...") << endl;
// fire off a bunch of wait ticks
n = avgCount;
timer.disconnect();
connect(&timer, SIGNAL(timeout()), SLOT(waitTick()));
fpw->reset();
fpw->start();
timer.start(period);
}
void waitTick() {
fpw->tick();
if (!n--) {
if (!N) { out << "wait condition latency: " << *fpw << endl; }
// fire off a bunch of signal+event ticks
n = avgCount;
fpe->reset();
timer.disconnect();
connect(&timer, SIGNAL(timeout()), fpe, SLOT(tick()));
connect(&timer, SIGNAL(timeout()), SLOT(signalTick()));
}
}
void signalTick() {
if (!n--) {
if (!N) { out << "queued signal connection latency: " << *fpe << endl; }
// fire off a bunch of event-only ticks
n = avgCount;
fpe->reset();
timer.disconnect();
connect(&timer, SIGNAL(timeout()), SLOT(eventTick()));
}
}
void eventTick() {
fpe->tick();
if (!n--) {
if (!N) { out << "queued event latency: " << *fpe << endl; }
if (!N--) {
qApp->exit();
} else {
start();
}
}
}
protected:
QTimer timer;
int n, N;
};
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
QThread eThread;
QThread wThread;
eThread.start(QThread::TimeCriticalPriority);
wThread.start(QThread::TimeCriticalPriority);
fpw = new FrameProcessorWait();
fpe = new FrameProcessorEvents();
fpw->moveToThread(&eThread);
fpe->moveToThread(&wThread);
FrameSender s;
a.exec();
eThread.exit();
wThread.exit();
eThread.wait();
wThread.wait();
return 0;
}
#include "main.moc"
How much work is it to detect the trigger state and fire the camera?
If that's relatively cheap, I would have a separate thread just blocking on a trigger event and firing the camera, and then have the main thread informed by a Qt signal sent from the callback function.
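A minimal sketch of that idea, with a bridge QObject and a queued invocation from the driver callback; all names here are assumptions:
#include <QMetaObject>
#include <QObject>

// Hypothetical bridge object; the driver callback only queues a call into
// the GUI (or any other) thread instead of touching shared flags directly.
class CameraBridge : public QObject
{
    Q_OBJECT
signals:
    void frameReady();
};

CameraBridge *g_bridge = nullptr;   // created and connected in the GUI thread

void callback(camera *)
{
    // Runs on the driver's OS thread; a queued invocation is safe from here.
    QMetaObject::invokeMethod(g_bridge, "frameReady", Qt::QueuedConnection);
}

// In the GUI thread, during setup:
//   g_bridge = new CameraBridge;
//   connect(g_bridge, &CameraBridge::frameReady,
//           this, &MainWindow::onFrameReady);   // slot fetches and shows the frame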
