GTest for multithreaded process - multithreading

I am running GTest for a process of mine which is multithreaded.
I compile all the code of the process in the GTest environment and then test it through test cases.
My problem is as below:
TEST(TestCase1, TestCase11)
{
    // Point 1: here I raise one event of the class; the class has its own separate thread.
    // Point 2: if I do not put a usleep() of a few seconds here, the event in that separate thread does not get covered.
}
One thing to note here: it is an event, handled by the framework, so when Point 1 generates the event, execution returns to Point 2, but some processing continues in the context of the thread in which the event was defined. It is not a function call.
Can anyone help here? How can I remove that usleep? The time needed varies a lot because the execution time taken by the events differs.
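One approach that usually removes the need for a fixed sleep is to block the test on a synchronization primitive that the event handler signals when it is done, with a timeout as a safety net. Below is a minimal sketch under assumptions of my own (the raiseEvent call and the way the completion hook reaches the handler are hypothetical, since the asker's framework is not shown); it is not the asker's actual code.
#include <chrono>
#include <future>
#include <gtest/gtest.h>

TEST(TestCase1, TestCase11)
{
    std::promise<void> handled;
    std::future<void> done = handled.get_future();

    // Point 1: raise the event. The handler runs in the class's own thread and
    // fulfils the promise when its work is finished. How the completion hook
    // reaches the handler depends on the framework (callback, test seam, ...).
    // someObject.raiseEvent([&handled] { handled.set_value(); });   // hypothetical

    // Point 2: wait until the handler reports completion (at most 5 seconds)
    // instead of calling usleep() with a guessed duration.
    ASSERT_EQ(std::future_status::ready, done.wait_for(std::chrono::seconds(5)));

    // ... further assertions on the state produced by the event handler ...
}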

Related

Why can a sub-classed QThread simply fail to start?

This is using a sub-classed QThread based on the ideas expressed in the whitepaper "QThreads: You were not doing so wrong". It does not have an event loop, nor does it have slots. It just emits signals and stops. In fact its primary signal is the QThread finished one.
Basically I have a Qt application using a background thread to monitor stuff. Upon finding what it is looking for, it records its data and terminates.
The termination sends a signal to the main event loop part of the application, which processes it, and when done, starts the background anew. I can usually get this working for tens of seconds, but then it just seems to quit.
It seems that when the main application tries to start the thread, it doesn't really run. I base this on telemetry code that increments counters as procedures get executed.
Basically:
// In the main application. Setup not shown.
// 'background' points to the QThread subclass object.
void MainWindow::StartBackground()
{
    background->startcount++;
    background->start();
    if (background->isRunning())
    {
        background->startedcount++;
    }
}

// In the sub-classed QThread
void Background::run()
{
    runcount++;
    // Do stuff until done
}
So when I notice that my background thread doesn't seem to be running (by watching Process Explorer), I make the debugger break in and check the counts. What I see is that startcount and startedcount are equal, and have a value one greater than runcount.
So I can only conclude that the thread didn't really run, but I have been unable to find out any evidence of why.
I have not been able to find documentation on QThreads not starting due to some error condition, or on what evidence there would be of such an error.
I suppose I could set up a slot to catch started() from the thread. The starting code could loop on a timed-out semaphore, trying again and again until the started slot actually resets the semaphore. But it feels ugly.
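For what it's worth, a rough sketch of that semaphore idea might look like the following (my reconstruction, not the poster's code; startedSem is a hypothetical QSemaphore member, initial count 0):
void MainWindow::StartBackgroundAndWait()
{
    // Somewhere during setup (e.g. the constructor), started() was wired to
    // release the semaphore from the context of the freshly started thread:
    //   connect(background, &QThread::started,
    //           this, [this]() { startedSem.release(); },
    //           Qt::DirectConnection);

    do {
        background->start();                     // silently ignored if the previous run is still "running"
    } while (!startedSem.tryAcquire(1, 100));    // retry until started() fires within 100 ms
}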
EDIT - further information
So using the semaphore method, I have a way to breakpoint on failure to start.
I sampled isFinished() right before I wanted to do start(), and it was false. After my 100ms semaphore timeout it became true.
So the question seems to be evolving into 'Why does QThread sometimes emit a finished() signal before isFinished() becomes true?'
Heck of a race condition. I'd hate to spin on isFinished() before starting the next background thread.
So this may be a duplicate of
QThread emits finished() signal but isRunning() returns true and isFinished() returns false
But not exactly, because I do override run() and I have no event loop.
In particular the events 8 and 9 in that answer are not in the same order. My slot is getting a finished() before isFinished() goes true.
I'm not sure an explicit quit() is any different than letting run() return;
It sounds as if you have a race condition wherein you may end up trying to restart your thread before the previous iteration has actually finished. If that's the case then, from what I've seen, the next call to QThread::start will be silently ignored. You need to update your code so that it checks the status of the thread before restarting -- either by calling QThread::isFinished or handling the QThread::finished signal.
On the other hand... why have the thread repeatedly started/stopped? Would it not be easier to simply start the thread once? Whatever code runs within the context of QThread::run can monitor whatever it monitors and signal the main app when it finds anything of note.
Better still. Separate the monitor logic from the thread entirely...
class monitor : public QObject {
    // ...
};

QThread monitor_thread;
monitor monitor;

/*
 * Fix up any signals to/from monitor.
 */
monitor.moveToThread(&monitor_thread);
monitor_thread.start();
The monitor class can do whatever it wants, and when it's time to quit, the app can just call monitor_thread.quit().
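To flesh that suggestion out a little, a worker-object setup might look roughly like this (a sketch with illustrative names, not the poster's code): the thread is started once, and the monitor reports findings through a signal instead of terminating.
#include <QByteArray>
#include <QObject>
#include <QThread>
#include <QTimer>

class Monitor : public QObject {
    Q_OBJECT
public slots:
    void start() {
        // Poll (or wait on) whatever is being monitored without blocking the
        // GUI thread; this timer lives in the monitor's thread.
        auto *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, &Monitor::poll);
        timer->start(100);
    }
signals:
    void found(const QByteArray &data);
private slots:
    void poll() {
        // ... check the monitored resource; when something turns up:
        // emit found(data);
    }
};

// Setup, e.g. in the main window's constructor:
//   auto *thread  = new QThread(this);
//   auto *monitor = new Monitor;                    // no parent: it will be moved
//   monitor->moveToThread(thread);
//   connect(thread,  &QThread::started,  monitor, &Monitor::start);
//   connect(monitor, &Monitor::found,    this,    &MainWindow::handleFound);
//   connect(thread,  &QThread::finished, monitor, &QObject::deleteLater);
//   thread->start();
// Shutdown: thread->quit(); thread->wait();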
There is a race condition in the version of Qt I am using. I don't know if it was reported or not before, but I do not have the latest, so it's probably moot unless I can demonstrate it in the current version.
Similar bugs were reported here long ago:
QThread.isFinished returns False in slot connected to finished() signal
(the version I use is much more recent than Qt 4.8.5)
What is more important is that I can work around it with the following code:
while (isRunning())
{
    msleep(1);
}
start();
I've run a few tests, and it never seems to take more than 1ms for the race condition to settle. Probably just needs a context switch to clean up.

Node.js Event loop

Is the Node.js I/O event loop single- or multithreaded?
If I have several I/O processes, Node puts them in an external event loop. Are they processed in sequence (fastest first), or does the event loop process them concurrently (and with which limitations)?
Event Loop
The Node.js event loop runs on a single thread; this means the application code you write is evaluated on a single thread. Node.js itself uses many threads underneath through libuv, but you never have to deal with those when writing Node.js code.
Every call that involves I/O requires you to register a callback. The call also returns immediately, which allows you to do multiple I/O operations in parallel without using threads in your application code. As soon as an I/O operation is completed, its callback is pushed onto the event loop. It will be executed once all the other callbacks that were pushed onto the event loop before it have been executed.
There are a few methods to do basic manipulation of how callbacks are added to the event loop.
Usually you shouldn't need these, but every now and then they can be useful.
setImmediate
process.nextTick
At no point will there ever be two true parallel paths of execution, so all operations are inherently thread safe. There usually will be several asynchronous concurrent paths of execution that are being managed by the event loop.
Read More about the event loop
Limitations
Because of the event loop, Node doesn't have to start a new thread for every incoming TCP connection. This allows Node to service hundreds of thousands of requests concurrently, as long as you aren't calculating the first 1000 prime numbers for each request.
This also means it's important not to do CPU-intensive operations, as these will block the event loop and prevent other asynchronous paths of execution from continuing.
It's also important not to use the sync variants of the I/O methods, as these will block the event loop as well.
If you want to do CPU-heavy things, you should either delegate them to a different process that can execute the CPU-bound operation more efficiently, or write them as a Node native add-on (a sketch follows below).
Read more about use cases
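For the native add-on route mentioned above, a minimal sketch using the Node-API C interface could look like this (the countPrimes example and target name are made up for illustration, and a node-gyp build is assumed). In a production add-on the work would typically be dispatched with napi_create_async_work so it runs on a worker thread rather than blocking the event loop.
// binding.cc
#include <node_api.h>

// CPU-bound work implemented in C++ instead of on the JavaScript thread.
static napi_value CountPrimes(napi_env env, napi_callback_info info) {
    size_t argc = 1;
    napi_value argv[1];
    napi_get_cb_info(env, info, &argc, argv, nullptr, nullptr);

    int32_t limit = 0;
    napi_get_value_int32(env, argv[0], &limit);

    int32_t count = 0;
    for (int32_t n = 2; n <= limit; ++n) {
        bool prime = true;
        for (int32_t d = 2; d * d <= n; ++d) {
            if (n % d == 0) { prime = false; break; }
        }
        if (prime) ++count;
    }

    napi_value result;
    napi_create_int32(env, count, &result);
    return result;
}

static napi_value Init(napi_env env, napi_value exports) {
    napi_value fn;
    napi_create_function(env, "countPrimes", NAPI_AUTO_LENGTH, CountPrimes, nullptr, &fn);
    napi_set_named_property(env, exports, "countPrimes", fn);
    return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
From JavaScript the compiled module would then be loaded with something like require('./build/Release/heavy.node').countPrimes(100000), where the 'heavy' target name comes from a hypothetical binding.gyp.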
Control Flow
In order to manage writing many callbacks you will probably want to use a control flow library.
I believe this is currently the most popular callback based library:
https://github.com/caolan/async
I've used callbacks and they pretty much drove me crazy; I've had a much better experience using Promises. Bluebird is a very popular and fast promise library:
https://github.com/petkaantonov/bluebird
I've found this to be a pretty sensitive topic in the node community (callbacks vs promises), so by all means, use what you feel will work best for you personally. A good control flow library should also give you async stack traces, this is really important for debugging.
The Node.js process will finish when the last callback in the event loop finishes its path of execution and doesn't register any other callbacks.
This is not a complete explanation; I advise you to check out the following thread, which is pretty up to date:
How do I get started with Node.js
From Willem's answer:
The Node.js event loop runs under a single thread. Every I/O call requires you to register a callback. Every I/O call also returns immediately, this allows you to do multiple IO operations in parallel without using threads.
I would like to start my explanation with the quote above, which reflects one of the common misunderstandings of the Node.js framework that I am seeing everywhere.
Node.js does not magically handle all those asynchronous calls with just one thread while keeping that thread unblocked. It internally uses Google's V8 engine and a library called libuv (written in C++) that enables it to delegate some potentially asynchronous work to other worker threads (kind of like a pool of threads waiting for work to be delegated from the master Node thread). Later, when those threads finish their execution, they call their callbacks, and that is how the event loop knows that the execution of a worker thread has completed.
The main point and advantage of Node.js is that you never need to care about those internal threads; they stay away from your code! All the nasty synchronization work that would normally happen in multithreaded environments is abstracted away by the Node.js framework, and you can happily work on your single thread (the main Node thread) in a more programmer-friendly environment (while benefiting from all the performance enhancements of multiple threads).
Below is a good post if anyone is interested:
When is the thread pool used?
You first have to know about the Node.js implementation in order to understand the event loop.
The Node.js core implementation actually uses two components:
the V8 JavaScript runtime engine
libuv, for handling non-blocking I/O operations, threads, and concurrent operations for you
With JavaScript you write code for a single thread, but this does not mean your code executes on only one thread (and you can even execute on multiple threads using clusters in Node.js).
Now, when you want to execute some code like:
let fs = require('fs');

fs.stat('path', (err, stat) => {
    // do something with the stat
    console.log('second');
});
console.log('first');
At a high level, the execution of this code goes like this:
First, the V8 engine runs the code line by line. When it gets to fs.stat — a Node.js API, very similar to web APIs like setTimeout that the browser handles for us — the call is passed to the libuv component along with a flag and your callback. libuv executes the operation, and when it is done it sends a signal and your callback is placed on the event queue. V8 then executes the callback you set on the queue, but it always checks that the call stack is empty before going for your code on the queue. Always remember that!
Well, to understand Node.js I/O events, you must understand the Node.js event loop properly.
From the name "event loop" we understand it's a loop that runs cycle after cycle, round-robin, until no events remain in the loop or the app is closed.
The event loop is one of the topmost features in Node.js; it is what makes asynchronous programming possible in Node.js.
When the program starts, we are in a Node process with a single thread where the event loop runs. The most important thing to know is that the event loop is where all the application code that is inside callback functions is executed.
So, basically all code that is not top-level code runs in the event loop. Some parts (mostly heavy duties) might get offloaded to the thread pool
(When is the thread pool used?); the event loop will take care of those heavy duties and return the results to the event loop.
It is the heart of the Node architecture, and Node.js is built around callback functions. Callbacks are triggered as soon as some work is finished sometime in the future, because Node uses an event-triggered architecture.
When an application receives an HTTP request on a Node server, or a timer expires, or a file finishes being read, an event is emitted as soon as the work is done. The event loop will then pick up these events and call the callback functions associated with each event. It's usually said that the event loop does the orchestration, which simply means that it receives events, calls their callback functions, and offloads the more expensive tasks to the thread pool.
Now, how does all this actually work behind the scenes? In what order are these callbacks executed?
Well, when we start our Node application, the event loop starts running right away. An event loop has multiple phases, and each phase has its own callback queue. The four most important phases are: 1. expired timer callbacks, 2. I/O polling and callbacks, 3. setImmediate callbacks, and 4. close callbacks. There are other phases that are used internally by Node.
So, the first phase takes care of callbacks of expired timers, for example, from the setTimeout() function. So, if there are callback functions from timers that just expired, these are the first ones to be processed by the event loop.
The most important thing is: if a timer expires while one of the other phases is being processed, its callback will only be called once the event loop comes back to this first phase. And it works like this in all four phases.
So callbacks in each queue are processed one by one until none are left in the queue, and only then does the event loop enter the next phase. For example, suppose 1000 setTimeout timers have expired while the event loop is in the first phase: all 1000 of those callbacks will execute one by one before the loop moves on to the next phase (I/O polling and callbacks).
Next up, we have I/O polling and the execution of I/O callbacks. Here I/O stands for input/output, and polling basically means looking for new I/O events that are ready to be processed and putting them into the callback queue.
In the context of a Node application, I/O mainly means things like networking and file access, so this is the phase where probably 99% of general application code gets executed.
The next phase is for setImmediate callbacks; setImmediate is a special kind of timer that we can use if we want to process callbacks immediately after the I/O polling and execution phase.
And finally, the fourth phase is the close callbacks; in this phase, all close events are processed, for example when a server or a WebSocket shuts down.
These are the four phases of the event loop, but besides these four callback queues there are actually two other queues:
1. the process.nextTick() queue
2. the microtask queue (which is mainly for resolved promises)
If there are any callbacks in one of these two queues to be processed, they will be executed right after the current phase of the event loop finishes, instead of waiting for the entire loop/cycle to finish.
In other words, after each of these four phases, if there are any callbacks in these two special queues, they are executed right away. Now imagine that a promise resolves and returns some data from an API call while the callback of an expired timer is running: in this case, the promise callback will be executed right after the timer's callback finishes.
The same logic applies to the nextTick() queue. process.nextTick() is a function we can use when we really, really need to execute a certain callback right after the current event loop phase. It's a bit similar to setImmediate, with the difference that setImmediate only runs after the I/O callback phase.
All of the above can happen in one tick/cycle of the event loop. In the meantime, new events could have arisen in a particular phase or old ones could have expired; the event loop will handle those events in another cycle.
So now it's time to decide whether the loop should continue to the next tick or the program should exit. Node simply checks whether there are any timers or I/O tasks still running in the background; if there aren't any, it will exit the application. But if there are any pending timers or I/O tasks, Node will continue running the event loop and start the next cycle.
For example, when a Node application is listening for incoming HTTP requests, we are basically running an infinite I/O task inside the event loop, which is why Node.js keeps running and keeps listening for new HTTP requests instead of just exiting the application.
Also, when we are writing or reading a file in the background, that is also an I/O task, and it makes sense that the app doesn't exit while it's working with that file, right?
Now, the event loop in practice:
const fs = require('fs');

setTimeout(() => console.log('Timer 1 finished'), 0);

fs.readFile('test-file.txt', () => {
    console.log('I/O finished');
});

setImmediate(() => console.log('Immediate 1 finished'));

console.log('Hello from the top level code');
Output:
Well, the first line is 'Hello from the top level code'; this is expected, because it is top-level code that gets executed immediately. Then we have three more outputs. 'Timer 1 finished' is expected because of phase one, as we discussed before. You might expect 'I/O finished' to be printed before 'Immediate 1 finished', because we discussed that setImmediate runs after the I/O callback phase, but this code is not actually in an I/O cycle, so it is not running inside the event loop, because it is not running inside any callback function.
Now let's do another test:
const fs = require('fs');

setTimeout(() => console.log('Timer 1 finished'), 0);
setImmediate(() => console.log('Immediate 1 finished'));

fs.readFile('test-file.txt', () => {
    console.log('I/O finished');

    setTimeout(() => console.log('Timer 2 finished'), 0);
    setImmediate(() => console.log('Immediate 2 finished'));
    setTimeout(() => console.log('Timer 3 finished'), 0);
    setImmediate(() => console.log('Immediate 3 finished'));
});

console.log('Hello from the top level code');
Output:
The output is as expected, right? Now let's add some delay:
const fs = require('fs');

setTimeout(() => console.log('Timer 1 finished'), 0);
setImmediate(() => console.log('Immediate 1 finished'));

fs.readFile('test-file.txt', () => {
    console.log('I/O finished');

    setTimeout(() => console.log('Timer 2 finished'), 3000);
    setImmediate(() => console.log('Immediate 2 finished'));
    setTimeout(() => console.log('Timer 3 finished'), 0);
    setImmediate(() => console.log('Immediate 3 finished'));
});

console.log('Hello from the top level code');
Output:
In the first cycle everything inside the I/O callback is executed, but because of the delay, Timer 2's callback is executed in the second cycle.
Now let's add process.nextTick() and see how Node.js behaves:
const fs = require('fs');

setTimeout(() => console.log('Timer 1 finished'), 0);
setImmediate(() => console.log('Immediate 1 finished'));

fs.readFile('test-file.txt', () => {
    console.log('I/O finished');

    setTimeout(() => console.log('Timer 2 finished'), 3000);
    setImmediate(() => console.log('Immediate 2 finished'));
    setTimeout(() => console.log('Timer 3 finished'), 0);
    setImmediate(() => console.log('Immediate 3 finished'));
    process.nextTick(() => console.log('Process Next Tick'));
});

console.log('Hello from the top level code');
Output:
Well, the first of the callbacks registered inside the I/O callback to be executed is the process.nextTick() one, as expected, right? That's because nextTick callbacks stay in the microtask queue and are executed after each phase.
If you run this simple node code
console.log('starting');

setTimeout(() => {
    console.log('0sec');
}, 0);

setTimeout(() => {
    console.log('2sec');
}, 2000);

console.log('end');
What do you expect output to be?
If it's
starting
0sec
end
2sec
that is the wrong guess; we will get
starting
end
0sec
2sec
because Node will never run code that is waiting in the event loop before main() has exited.
So basically: first, main() goes on the stack, then console.log('starting'), so you see it printed first. After that, setTimeout(() => { console.log('0sec') }, 0) goes on the stack and then to the Node API (Node uses multiple threads, in a library written in C++, to run setTimeout, even though the code above is single-threaded). When its time is up, it moves to the event loop, but Node can't run it until the stack is empty. So the next line, the 2-second setTimeout, is first pushed onto the stack, then to the Node API, which waits 2 seconds, and then to the event loop. Meanwhile the next line of code is executed, console.log('end'), and so we see the 'end' message before '0sec' because of Node's non-blocking nature. Once the 'end' line is done, main() is popped off the stack and it is the turn of the code in the event loop: first the '0sec' message and after that the '2sec' message are printed.

Does timer need mutex

I use timer in my program:
timer = new QTimer();
connect(timer, SIGNAL(timeout()), this, SLOT(readData()));
timer->start(1000);
And there are also other slots which may be triggered by UI interaction:
// slot function
void on_pushbutton_triggered() { /* ... write data ... */ }
(The code is written in Qt, but I think it's a common question.)
So I worry about a potential problem: might readData() read wrong data while on_pushbutton_triggered() is writing data?
I am not so familiar with how the timer really works behind the scenes: does it run in the same thread as my program?
Will readData() and on_pushbutton_triggered() be called, executed, and finished serially, with no mutex problem (that is, without my having to use lock() and unlock())? Thank you for reading! I really hope for your hints!
Qt uses an event loop to implement concurrent activity in general, and QTimer in particular, within a single thread.
The event providers (QTimer in this case) produce events and publish them to the event loop. They are then processed according to their priority and order of publishing. This approach doesn't require any synchronization, as only one section of code is executed at a time, so it's safe to access the data.
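To make that concrete, here is a minimal, self-contained sketch (hypothetical, not the asker's code): both handlers below are dispatched by the single GUI-thread event loop, so they can never run at the same time, and the shared container needs no mutex.
#include <QApplication>
#include <QDebug>
#include <QPushButton>
#include <QTimer>
#include <QVector>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QVector<int> data;   // shared by both handlers, but only ever touched from the GUI thread

    QTimer timer;        // plays the role of readData()
    QObject::connect(&timer, &QTimer::timeout, [&data]() {
        qDebug() << "readData:" << data;
    });
    timer.start(1000);

    QPushButton button("write");   // plays the role of on_pushbutton_triggered()
    QObject::connect(&button, &QPushButton::clicked, [&data]() {
        data.append(data.size());
    });
    button.show();

    return app.exec();   // one event loop serializes the timeout and click handlers
}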
On Unix-like systems, the ps -eLf command will show information about all processes (PID column in the output) and their threads (LWP column). The NLWP column shows how many threads a particular process has.

QThread execution freezes my GUI

I'm new to multithreaded programming. I wrote this simple multithreaded program with Qt. But when I run this program it freezes my GUI, and when I click inside my window, it responds that my program is not responding.
Here is my widget class. My thread starts to count up an integer and emits it whenever the number is divisible by 1000. In my widget I simply catch this number with the signal-slot mechanism and show it in a label and a progress bar.
Widget::Widget(QWidget *parent) :
    QWidget(parent),
    ui(new Ui::Widget)
{
    ui->setupUi(this);

    MyThread *th = new MyThread;
    connect(th, SIGNAL(num(int)), this, SLOT(setNum(int)));
    th->start();
}

void Widget::setNum(int n)
{
    ui->label->setNum(n);
    ui->progressBar->setValue(n % 101);
}
And here is my thread's run() function:
void MyThread::run()
{
    for (int i = 0; i < 10000000; i++) {
        if (i % 1000 == 0)
            emit num(i);
    }
}
thanks!
The problem is with your thread code producing an event storm. The loop counts very fast -- so fast that the fact that you emit a signal only every 1000 iterations is pretty much immaterial. On modern CPUs, doing 1000 integer divisions takes on the order of 10 microseconds IIRC. If the loop were the only limiting factor, you'd be emitting signals at a peak rate of about 100,000 per second. This is not the case because the performance is limited by other factors, which we shall discuss below.
Let's understand what happens when you emit signals in a different thread from where the receiver QObject lives. The signals are packaged in a QMetaCallEvent and posted to the event queue of the receiving thread. An event loop running in the receiving thread -- here, the GUI thread -- acts on those events using an instance of QAbstractEventDispatcher. Each QMetaCallEvent results in a call to the connected slot.
The access to the event queue of the receiving GUI thread is serialized by a QMutex. On Qt 4.8 and newer, the QMutex implementation got a nice speedup, so the fact that each signal emission results in locking of the queue mutex is not likely to be a problem. Alas, the events need to be allocated on the heap in the worker thread, and then deallocated in the GUI thread. Many heap allocators perform quite poorly when this happens in quick succession if the threads happen to execute on different cores.
The biggest problem comes in the GUI thread. There seems to be a bunch of hidden O(n^2) complexity algorithms! The event loop has to process 10,000 events. Those events will be most likely delivered very quickly and end up in a contiguous block in the event queue. The event loop will have to deal with all of them before it can process further events. A lot of expensive operations happen when you invoke your slot. Not only is the QMetaCallEvent deallocated from the heap, but the label schedules an update() (repaint), and this internally posts a compressible event to the event queue. Compressible event posting has to, in worst case, iterate over entire event queue. That's one potential O(n^2) complexity action. Another such action, probably more important in practice, is the progressbar's setValue internally calling QApplication::processEvents(). This can, recursively call your slot to deliver the subsequent signal from the event queue. You're doing way more work than you think you are, and this locks up the GUI thread.
Instrument your slot and see if it's called recursively. A quick-and-dirty way of doing it is
void Widget::setNum(int n)
{
    static int level = 0, maxLevel = 0;
    level++;
    maxLevel = qMax(level, maxLevel);
    ui->label->setNum(n);
    ui->progressBar->setValue(n % 101);
    if (level > 1 && level == maxLevel - 1) {
        qDebug("setNum recursed up to level %d", maxLevel);
    }
    level--;
}
What is freezing your GUI thread is not QThread's execution, but the huge amount of work you make the GUI thread do. Even if your code looks innocuous.
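One way to relieve that pressure (my own sketch, not part of the original answer) is to rate-limit the cross-thread emissions in the worker, so the GUI queue receives a handful of updates per second instead of roughly 10,000 queued events:
#include <QElapsedTimer>
#include <QThread>

class MyThread : public QThread {
    Q_OBJECT
signals:
    void num(int value);

protected:
    void run() override {
        QElapsedTimer sinceLastEmit;
        sinceLastEmit.start();
        for (int i = 0; i < 10000000; i++) {
            // Emit at most every 50 ms; always emit the final value.
            if (sinceLastEmit.elapsed() >= 50 || i == 10000000 - 1) {
                emit num(i);
                sinceLastEmit.restart();
            }
        }
    }
};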
Side Note on processEvents and Run-to-Completion Code
I think it was a very bad idea to have QProgressBar::setValue invoke processEvents(). It only encourages the broken way people code things (continuously running code instead of short run-to-completion code). Since the processEvents() call can recurse into the caller, setValue becomes a persona-non-grata, and possibly quite dangerous.
If one wants to code in a continuous style yet keep the run-to-completion semantics, there are ways of dealing with that in C++. One is just by leveraging the preprocessor; for example code, see my other answer.
Another way is to use expression templates to get the C++ compiler to generate the code you want. You may want to leverage a template library here -- Boost Spirit has a decent starting point for an implementation that can be reused even though you're not writing a parser.
The Windows Workflow Foundation also tackles the problem of how to write sequential style code yet have it run as short run-to-completion fragments. They resort to specifying the flow of control in XML. There's apparently no direct way of reusing standard C# syntax. They only provide it as a data structure, a-la JSON. It'd be simple enough to implement both XML and code-based WF in Qt, if one wanted to. All that in spite of .NET and C# providing ample support for programmatic generation of code...
The way you implemented your thread, it does not have its own event loop (because it does not call exec()). I'm not sure if your code within run() is actually executed within your thread or within the GUI thread.
Usually you should not subclass QThread. You probably did so because you read the Qt Documentation which unfortunately still recommends subclassing QThread - even though the developers long ago wrote a blog entry stating that you should not subclass QThread. Unfortunately, they still haven't updated the documentation appropriately.
I recommend reading "You're doing it wrong" on Qt Blog and then use the answer by "Kari" as an example of how to set up a basic multi-threaded system.
But when I run this program it freezes my GUI and when I click inside my window,
it responds that your program is not responding.
Yes, because IMO you're doing so much work in the thread that it exhausts the CPU. Generally, the "program is not responding" message pops up when a process shows no progress in handling its application event queue. In your case this is what happens.
So in this case you should find a way to divide the work. Just for the sake of example, say the thread runs in chunks of 100 iterations and repeats until it completes 10,000,000 (see the sketch below).
You should also have a look at QCoreApplication::processEvents() when you're performing a lengthy operation.
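To illustrate the chunking suggestion, here is a minimal sketch (a hypothetical class of mine, and note it drops the extra thread entirely): the counting is driven by a zero-interval QTimer, so each chunk is a short run-to-completion slot call and the event loop stays responsive between chunks. The existing num(int) connection to the widget would work unchanged.
#include <QObject>
#include <QTimer>

class ChunkedCounter : public QObject {
    Q_OBJECT
public:
    explicit ChunkedCounter(QObject *parent = nullptr) : QObject(parent) {
        connect(&m_timer, &QTimer::timeout, this, &ChunkedCounter::doChunk);
        m_timer.start(0);                                // run one chunk whenever the event loop is idle
    }

signals:
    void num(int value);
    void done();

private slots:
    void doChunk() {
        const int chunkEnd = qMin(m_i + 100, m_total);   // 100 iterations per chunk (tune as needed)
        for (; m_i < chunkEnd; ++m_i) {
            if (m_i % 1000 == 0)
                emit num(m_i);
        }
        if (m_i >= m_total) {                            // finished: stop rescheduling ourselves
            m_timer.stop();
            emit done();
        }
    }

private:
    QTimer m_timer;
    int m_i = 0;
    const int m_total = 10000000;
};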

InvokeOnMainThread() not necessarily invoking on main thread - bug or feature?

I'm using a Timer and let it perform regular checks. If the test condition is true, I start a thread and let it do what it has to do.
If within that thread I want to change the UI, I'm using InvokeOnMainThread(). But as the thread was triggered from a Timer, which already is a separate thread, the InvokeOnMainThread() will invoke things on the Timer's thread and not on the real main thread. I work around it by nesting two InvokeOnMainThread() calls.
Is this working as intended, or is it a bug in the Mono framework?
Is the main thread defined as the one that triggered the current thread, or is it supposed to return the "root" thread?
NSObject.InvokeOnMainThread is, mostly, a wrapper around performSelectorOnMainThread:withObject:waitUntilDone:
Quote from documentation:
You can use this method to deliver messages to the main thread of your application. The main thread encompasses the application’s main run loop, and is where the NSApplication object receives events.
We can have a deeper look into it (it does seem weird) if you file a bug report on http://bugzilla.xamarin.com along with a self-contained test case.

Resources