QThreadPool in Qt - multithreading

I am doing some coding with OpenCV and I am processing image pixels. However, the processing takes so much time (the picture is very delayed) because I am processing each colour channel (R, G, B) sequentially. I thought I could make it faster with multi-threading, and based on my previous knowledge a thread pool is more efficient. I saw some examples online, but they all require the use of QRunnable, and my implementation should be simpler than that because I just want to pass the same function with a different channel each time.
Any ideas?

If you want to execute a function in a separate thread, you can use the QtConcurrent::run mechanism.
Suppose you have a function f taking an integer argument in a class A:
class A {
public:
    void f(int i);
};
Now if you want to call the function asynchronously from a different class, you can do:
A a;
QFuture<void> future1 = QtConcurrent::run(&a, &A::f, 1); // Call it with argument 1
QFuture<void> future2 = QtConcurrent::run(&a, &A::f, 2); // Call it with argument 2
You can use QFutureWatcher in order to get notified when the execution has finished.
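For instance, here is a minimal self-contained sketch of that pattern (assuming Qt 5, where QtConcurrent::run takes the object pointer first; a trivial body is added to f for illustration, and the project needs QT += concurrent):

#include <QCoreApplication>
#include <QDebug>
#include <QFutureWatcher>
#include <QtConcurrent>

class A {
public:
    void f(int i) { qDebug() << "processing channel" << i; }
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    A a;
    QFutureWatcher<void> watcher;
    // finished() fires once the background task completes
    QObject::connect(&watcher, &QFutureWatcher<void>::finished,
                     &app, &QCoreApplication::quit);
    watcher.setFuture(QtConcurrent::run(&a, &A::f, 1));
    return app.exec(); // an event loop is needed for the signal to be delivered
}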

Related

How to ensure the comparison result still holds in multi-threading?

Suppose there are 3 threads.
Threads 1 and 2 will increase or decrease a global variable X atomically.
thread 1:
atomic_increase(X)
thread 2:
atomic_decrease(X)
Thread 3 will check whether X is greater than some predefined value and act accordingly.
thread 3:
if( X > 5 ) {... logic 1 ...}
else {... logic 2 ....}
I think the atomic_xxx operations are not enough. They only synchronize the modifications between threads 1 and 2.
What if X is changed by thread 1 or 2 after thread 3 finishes the comparison and enters logic 1?
Do I have to use a mutex to synchronize all 3 threads when modifying or reading X?
ADD 1
BTW, logic 1 and logic 2 don't modify X.
In short: yes, reads also need to be synchronized in some way, otherwise the risk of an inconsistent read is real. A read performed between the read and the write of atomic_increase would be inconsistent.
However, if logic 1 or logic 2 modifies X, your problems don't stop there. Then you need the concept of a transaction: one that starts with a read (the X > 5 check) and ends with a write (from logic 1 or logic 2).
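To illustrate that transaction idea, here is one way to express it as a compare-and-swap loop; a C++ sketch with std::atomic (the Java analogue would be AtomicInteger.compareAndSet, and the concrete new values below are made-up stand-ins for logic 1 and logic 2):

#include <atomic>

std::atomic<int> X{0};

void updateAsTransaction()
{
    int seen = X.load(); // the read that starts the transaction
    for (;;) {
        // decide on a new value based on what we read
        int next = (seen > 5) ? seen - 5   // hypothetical "logic 1"
                              : seen + 1;  // hypothetical "logic 2"
        // the write that ends it: succeeds only if X still equals `seen`;
        // on failure compare_exchange_weak reloads X into `seen` and we retry
        if (X.compare_exchange_weak(seen, next))
            break;
    }
}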
Yes. The answer is the happens-before relationship. Let's say Thread-1 has started executing the atomic_increase method: it will hold the lock and enter the synchronized block to update X.
private void atomic_increase() {
    synchronized (lock) {
        X = X + 1; // <-- Thread-1 entered synchronized block, yet to update variable X
    }
}
Now, for Thread-3 to run its logic, it needs to read the variable X, and if that read is not synchronized (on the same monitor), the value of X it reads can be stale, since it may not have been updated by Thread-1 yet.
private void runLogic() {
    if (X > 5) { // <-- reading X here can be inconsistent: no happens-before
                 //     between atomic_increase and runLogic
    } else {
    }
}
We could have prevented this by maintaining a happens-before link between the atomic operation and the runLogic method. If runLogic is synchronized (on the same monitor), then it has to wait until Thread-1 has finished updating the variable X, so we are guaranteed to see the last updated value of X.
private void runLogic() {
    synchronized (lock) {
        if (X > 5) { // <-- reading X here will be consistent: there is a
                     //     happens-before between atomic_increase and runLogic
        } else {
        }
    }
}
The answer depends on what your application does. If neither logic 1 nor logic 2 modifies X, it is quite possible that there is no need for additional synchronization (besides using an atomic_load to read X).
I assume you use intrinsics for atomic operations, and not simply an increment inside a mutex (or inside a synchronized block in Java). E.g. in Java there is an AtomicInteger class with methods such as 'incrementAndGet' and 'get'. If you use them, there is probably no need for additional synchronization, but it depends on what you actually want to achieve with logic 1 or logic 2.
If you want to e.g. display a message when X > 5, then you can do it. By the time the message is displayed the value of X may have already changed, but the fact remains that the message was triggered by X being greater than 5 for at least some time.
In other words, without additional synchronization you only have the guarantee that logic 1 will be called if X becomes greater than 5; there is no guarantee that X will remain so during the execution of logic 1. That may or may not be OK for you.
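To make that last point concrete, here is a minimal C++ sketch of the "trigger" semantics, with std::atomic standing in for the question's atomic_xxx operations:

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> X{0};

void thread1() { X.fetch_add(1); } // atomic_increase(X)
void thread2() { X.fetch_sub(1); } // atomic_decrease(X)

void thread3()
{
    if (X.load() > 5) {
        // logic 1: X was greater than 5 at the moment of the load,
        // but it may already have changed by the time this line runs
        std::puts("X exceeded 5 at some point");
    } else {
        // logic 2: X was 5 or less at the moment of the check
        std::puts("X was 5 or less at the moment of the check");
    }
}

int main()
{
    std::thread t1(thread1), t2(thread2), t3(thread3);
    t1.join(); t2.join(); t3.join();
}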

blockingForEach(), why apply function to blocked observables

I'm having trouble understanding the point of a blocking Observable, specifically blockingForEach().
What is the point of applying a function to an Observable that we will never see? Below, I'm attempting to have my console output appear in the following order:
this is the integer multiplied by two:2
this is the integer multiplied by two:4
this is the integer multiplied by two:6
Statement comes after multiplication
My current method prints the statement before the multiplication:
fun rxTest() {
    val observer1 = Observable.just(1, 2, 3).observeOn(AndroidSchedulers.mainThread())
    val observer2 = observer1.map { response -> response * 2 }
    observer2
        .observeOn(AndroidSchedulers.mainThread())
        .subscribeOn(AndroidSchedulers.mainThread())
        .subscribe { it -> System.out.println("this is the integer multiplied by two:" + it) }
    System.out.println("Statement comes after multiplication")
}
Now I have changed my method to use blockingForEach():
fun rxTest() {
    val observer1 = Observable.just(1, 2, 3).observeOn(AndroidSchedulers.mainThread())
    val observer2 = observer1.map { response -> response * 2 }
    observer2
        .observeOn(AndroidSchedulers.mainThread())
        .subscribeOn(AndroidSchedulers.mainThread())
        .blockingForEach { it -> System.out.println("this is the integer multiplied by two:" + it) }
    System.out.println("Statement comes after multiplication")
}
1.) What happens to the transformed observables once they are no longer blocking? Wasn't that just unnecessary work, since we never see those Observables?
2.) Why does my System.out.println("Statement...") call appear before my observables' output when I'm subscribing? It's like observer2 skips its blocking method, makes the System.out call and then resumes its subscription.
It's not clear what you mean when you say that you will "never see" values emitted by an observer chain. Each value emitted in the chain is seen by the observers downstream of the point where it is emitted. The place where you subscribe to the chain is the usual place to perform a side effect, such as printing a value or storing it into a variable. Thus, the values are always seen.
In your examples, you are getting confused by how the schedulers work. When you use the observeOn() or subscribeOn() operators, you are telling the observer chain to emit values on a different thread. When you move data between threads, the destination thread has to be able to process it. If your main code is running on that same thread, you can lock yourself out, or operations will be re-ordered.
Normally, the use of blocking operations is strongly discouraged. Blocking operations can often be used when testing, because you have full control of the consequences. There are a couple of other situations where blocking may make sense. An example would be an application that requires access to a database or other resource; the application has no purpose without that resource, so it blocks until it becomes available or a timeout occurs, kicking it out.

How to pass type information through templates to instantiate objects within a thread in D

How would one pass type information into a thread, so objects of the correct types could be created in the thread using the passed info? Something like this:
struct Test // or class Test
{
    int x, y, z;
}

void testInThread(F, T...)(T args)
{
    auto obj = F(args);
    // Do stuff with obj in the new thread
}
auto tid = std.concurrency.spawn!(testInThread, Test, 1, 2, 3);
// Threads and stuff...
This doesn't compile, but I'm sure something like this should be possible. I think there's just something I'm not understanding about template parameters.
This line here would compile:
auto tid = std.concurrency.spawn(&testInThread!(Test, int, int, int), 1, 2, 3);
I'm not sure if you can make it prettier with implicit deduction of those ints, though. But the reason this compiles is that spawn expects a function, and testInThread is a template that generates a function. If you give it the compile-time argument list without a runtime argument list, you can take the address of the generated function... which is good enough for spawn.
spawn accepts a pointer to a function. What you're trying to pass it is a template for a function. If you want to pass it a templated function, that templated function must be fully instantiated - in this case something like
auto tid = std.concurrency.spawn(&testInThread!(Test, int, int, int), 1, 2, 3);
But as templates are compile-time constructs, it's not going to work to pass template arguments across threads and have a template instantiated on the other side. All templates must be instantiated at compile time. So, if the issue is really that you want to be able to pass a templated function to spawn and have it be called in the other thread, then the example above does that, but if you really want to be passing template arguments across threads, then you're out of luck.
You might want to read the template chapter from Ali Çehreli's online book on D in order to better understand templates.

What is process interleaving? (in the realm of Concurrency)

I'm not quite sure what this term means. I saw it during a course where we are learning about concurrency. I've seen a lot of definitions for data interleaving, but I couldn't find anything about process interleaving.
Looking at the term, my instinct tells me it is the use of threads to run more than one process simultaneously - is that correct?
If you imagine a process as a (possibly infinite) sequence/trace of statements (e.g. obtained by loop unfolding), then the set of possible interleavings of several processes consists of all sequences that can be formed by merging the statements of those processes while preserving the order of statements within each process.
Consider for example the processes
int i;

proctype A() {
    i = 1;
}

proctype B() {
    i = 2;
}
Then the possible interleavings are i = 1; i = 2 and i = 2; i = 1, i.e. the possible final values for i are 1 and 2. This can of course be more complex, for instance in the presence of guarded statements: then the next possible statements in an interleaving sequence are not necessarily those at the position of the next program counter, but only those that are allowed by the guard. Consider for example the proctype
proctype B() {
    if
    :: i == 0 -> i = 2
    :: else -> skip
    fi
}
Then the possible interleavings (given A() as before) are i = 1; skip and i = 2; i = 1, so there is only one possible final value for i.
Indeed the notion of interleavings is crucial for Spin's view of concurrency. In a trace semantics, the set of possible traces of concurrent processes is the set of possible interleavings of the traces of the individual processes.
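The same two-writer race is easy to replay with real threads; a small C++ sketch (std::atomic is used only so that each assignment is a single well-defined step, as in the Promela model):

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> i{0};

int main()
{
    // proctype A and proctype B from the first example, as real threads
    std::thread A([] { i.store(1); });
    std::thread B([] { i.store(2); });
    A.join();
    B.join();
    // prints 1 or 2, depending on which interleaving the scheduler chose
    std::printf("final i = %d\n", i.load());
}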
It simply means performing actions (data accesses, execution, ...) in an arbitrary order** (see the note below). In the case of concurrency, it usually refers to action interleaving.
If the processes P and Q are in parallel composition (P||Q), then their actions will be interleaved. Consider the following processes:
PLAYING = (play_music -> stop_music -> STOP).
PERFORMING = (dance -> STOP).
||PLAY_PERFORM = (PLAYING || PERFORMING).
So each primitive process can be shown as a labelled transition system, generated by the LTSA model-checking tool.
Then the possible traces as the result of action interleaving will be:
dance -> play_music -> stop_music
play_music -> dance -> stop_music
play_music -> stop_music -> dance
**note: "arbitrary" here means an arbitrary order of process execution, not of the inner sequence of code within each process. The code in each process is always executed sequentially.
If it is still something that you're not comfortable with you can take a look at: https://www.doc.ic.ac.uk/~jnm/book/firstbook/pdf/ch3.pdf
Hope it helps! :)
Operating Systems support Tasks (or Processes). But for now let's think of "Activities".
Activities can be executed in parallel. Here are two activities, P and Q:
P: abc
Q: def
a, b, c, d, e, f are operations. Each operation always has the same effect independent of what other operations may be executing at the same time (atomicity).
What is the effect of executing the two activities concurrently? We do not know for sure, but we know that it will be the same as that obtained by executing sequentially an INTERLEAVING of the two activities [interleavings are also called SCHEDULES]. Here are the possible interleavings of these two activities:
abcdef
abdcef
abdecf
abdefc
adbcef
......
defabc
That is, the operations of the two activities are sequenced in all possible ways that preserve the order in which the operations appeared in the two activities. A serial interleaving [serial schedule] of two activities is one where all the operations of one activity precede all the operations of the other activity.
The importance of the concept of interleaving is that it allows us to express the meaning of concurrent programs: The parallel execution of activities is equivalent to the sequential execution of one of the interleavings of these activities.
For detailed information: https://cis.temple.edu/~ingargio/cis307/readings/interleave.html
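The practical consequence is easiest to see when one source-level "operation" is really several steps. In this C++ sketch the increment is a load followed by a store (each individually atomic, so the program is well-defined), and the other activity's steps can interleave between them:

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> counter{0};

void activity()
{
    for (int n = 0; n < 100000; ++n) {
        // two steps: load, then store; an increment performed by the other
        // activity between these two steps is overwritten, i.e. lost
        counter.store(counter.load() + 1);
    }
}

int main()
{
    std::thread p(activity);
    std::thread q(activity);
    p.join();
    q.join();
    // 200000 only if no increments were lost; usually less
    std::cout << counter.load() << '\n';
}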

Multithread+Recursion strategies

I am just starting to learn the ins and outs of multithreaded programming and have a few basic questions that, once answered, should keep me occupied for quite some time. I understand that multithreading loses its effectiveness once you have created more threads than there are cores (due to context switching and cache flushing). With that understood, I can think of two ways to employ multithreading of a recursive function... but am not quite sure what the common way to approach the problem is. One seems much more complicated, perhaps with a higher payoff... but that's what I hope you will be able to tell me.
Below is pseudo-code for two different methods of multithreading a recursive function. I have used the terminology of merge sort for simplicity, but it's not that important. It is easy to see how to generalize the methods to other problems. Also, I will personally be employing these methods using the pthreads library in C, so the thread syntax mildly reflects this.
Method 1:
main()
{
    A = array of length N
    NUM_CORES = get number of functional cores
    chunk[NUM_CORES] = array of indices partitioning A into (N / NUM_CORES) sized chunks
    thread_id[NUM_CORES] = array of thread IDs
    thread[NUM_CORES] = array of thread type

    // start NUM_CORES threads, each working on one chunk of A
    for i = 0 to (NUM_CORES - 1) {
        thread_id[i] = thread_start(thread[i], MergeSort, chunk[i])
    }

    // wait for all threads to finish
    // merge chunks appropriately
    exit
}

MergeSort(chunk)
{
    MergeSort(lowerSubChunk)
    MergeSort(higherSubChunk)
    Merge(lowerSubChunk, higherSubChunk)
}
// Merge(,) not shown
Method 2:
main()
{
    A = array of length N
    NUM_CORES = get number of functional cores
    chunk = indices 0 and N
    thread_id[NUM_CORES] = array of thread IDs
    thread[NUM_CORES] = array of thread type

    // lock variable aka mutex
    THREADS_IN_USE = 1

    MergeSort(chunk)
    exit
}

MergeSort(chunk)
{
    lock THREADS_IN_USE
    if (THREADS_IN_USE < NUM_CORES) {
        FREE_CORE = find index of unused core
        thread_id[FREE_CORE] = thread_start(thread[FREE_CORE], MergeSort, lowerSubChunk)
        THREADS_IN_USE++
        unlock THREADS_IN_USE

        MergeSort(higherSubChunk)

        // wait for thread_id[FREE_CORE] and current thread to finish
        lock THREADS_IN_USE
        THREADS_IN_USE--
        unlock THREADS_IN_USE

        Merge(lowerSubChunk, higherSubChunk)
    }
    else {
        unlock THREADS_IN_USE
        MergeSort(lowerSubChunk)
        MergeSort(higherSubChunk)
        Merge(lowerSubChunk, higherSubChunk)
    }
}
// Merge(,) not shown
Visually, one can think of the differences between these two methods as follows:
Method 1: creates NUM_CORES separate recursion trees, each one having a single core traversing it.
Method 2: creates a single recursion tree but has all cores traversing it. In particular, whenever there is a free core, it is set to work on the "left child subtree" of the first node where MergeSort is called after the core is freed.
The problem with Method 1 is that if the running time of the recursive function varies with the distribution of values within each initial subchunk (i.e. chunk[i]), one thread could finish much faster than the others, leaving a core sitting idle while they finish. With merge sort this is not likely to be the case, since the work of MergeSort happens in Merge, whose runtime isn't affected much by the distribution of values in the (sorted) subchunks. However, with a more involved recursive function, the running time on one subchunk could be much longer!
With Method 2 it is possible to have the same problem. Again, with merge sort it's not clear, since the running time for each subchunk is likely to be similar, but the line //wait for thread_id[FREE_CORE] and current thread to finish would also require one core to wait for another. However, with Method 2, all calls to Merge run as soon as possible, as opposed to Method 1, where one must wait for NUM_CORES calls to MergeSort to finish and then do NUM_CORES - 1 merges afterward (although you can multithread this as well... to an extent).
Are both of these methods used in practice? Are there situations where one is more beneficial than the other? Is this the correct way to implement Method 2 (though the syntax might not be completely correct)? In this case, is THREADS_IN_USE a semaphore?
Thanks so much for your help!
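For comparison, the shape of Method 2 can be sketched with C++'s std::async (a sketch only, not the pthreads C of the question; here a depth parameter caps thread creation the way THREADS_IN_USE does in the pseudocode, and the runtime decides where the spawned half actually runs):

#include <algorithm>
#include <future>
#include <vector>

// Sort the lower half in a (possibly) new thread, the higher half in the
// current one, then merge; depth limits the number of spawned threads.
void mergeSort(std::vector<int>& a, std::size_t lo, std::size_t hi, int depth)
{
    if (hi - lo < 2)
        return;
    std::size_t mid = lo + (hi - lo) / 2;
    if (depth > 0) {
        auto lower = std::async(std::launch::async, mergeSort,
                                std::ref(a), lo, mid, depth - 1);
        mergeSort(a, mid, hi, depth - 1);
        lower.wait(); // wait for the spawned half before merging
    } else {
        mergeSort(a, lo, mid, 0);
        mergeSort(a, mid, hi, 0);
    }
    std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
}

int main()
{
    std::vector<int> a{5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
    mergeSort(a, 0, a.size(), 2); // two levels of spawning: up to 4 extra threads
}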
