Does RxJava Parallelization Break the Observable Contract?

Ben Christensen posted here that the best way to currently achieve parallelism in RxJava is to create another Observable and subscribe it on a scheduler as shown below.
streamOfItems.flatMap(item ->
    doStuffWithItem(item).subscribeOn(Schedulers.io())
);
However, the Observable contract says that onNext() may be called any number of times, as long as the calls do not overlap. Any operator in the rest of the chain following the one above could now easily break that rule (unless it explicitly does some sort of synchronization/serialization).
My impression is that RxJava prefers to keep a stream of emissions on one thread at a time, switching a steady sequential stream from one thread to another at specific operators, but never running it in parallel (as depicted below).
observeOn() thread -------------------------Y----Y----Y-------------
subscribeOn() thread ----X----X----X----X-----------------------------
With a parallel approach, I understand the chart may look something like this, and that looks pretty overlapped to me.
par subscribeOn() thread 3 -------------------------Y-----Y---------------
par subscribeOn() thread 2 ---------------------------Y---Y---------------
par subscribeOn() thread 1 -------------------------Y-------------Y-------
initial subscribeOn() thread ----X----X----X----X---------------------------
Did I misunderstand anything or make broad assumptions? Is parallelism not breaking the Observable contract? Does that make it not preferable in some way?

If you are using standard operators, nothing will break the Observable contract, because whenever concurrency may happen, the operators serialize their output. In your example, flatMap does this, so its output is guaranteed to be sequential (although the receiving thread may switch back and forth).
This is, however, not generally true for different stages of the same pipeline if those are separated by an asynchronous boundary or an operator that may do thread arbitration.
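For illustration, here is a minimal sketch of this behavior (RxJava 2 style; the Flowable.range pipeline and the printed output are stand-ins for the question's streamOfItems, not from the original post). The inner streams run on io() threads in parallel, yet flatMap serializes the emissions it passes downstream:

import io.reactivex.Flowable;
import io.reactivex.schedulers.Schedulers;

public class ParallelFlatMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Flowable.range(1, 8)
                // Each inner Flowable does its work on an io() thread,
                // so the work itself happens in parallel...
                .flatMap(item ->
                        Flowable.just(item)
                                .subscribeOn(Schedulers.io())
                                .map(i -> i * 10))
                // ...but flatMap serializes its output: this callback is never
                // invoked concurrently, even though the thread it runs on may
                // change from one emission to the next.
                .subscribe(result -> System.out.println(
                        Thread.currentThread().getName() + " -> " + result));

        Thread.sleep(1000); // crude wait for the io() threads (demo only)
    }
}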

Related

Kotlin coroutines multithread dispatcher and thread-safety for local variables

Let's consider this simple code with coroutines
import kotlinx.coroutines.*
import java.util.concurrent.Executors

fun main() {
    runBlocking {
        launch(Executors.newFixedThreadPool(10).asCoroutineDispatcher()) {
            var x = 0
            val threads = mutableSetOf<Thread>()
            for (i in 0 until 100000) {
                x++
                threads.add(Thread.currentThread())
                yield()
            }
            println("Result: $x")
            println("Threads: $threads")
        }
    }
}
As far as I understand, this is quite legitimate coroutine code, and it actually produces the expected results:
Result: 100000
Threads: [Thread[pool-1-thread-1,5,main], Thread[pool-1-thread-2,5,main], Thread[pool-1-thread-3,5,main], Thread[pool-1-thread-4,5,main], Thread[pool-1-thread-5,5,main], Thread[pool-1-thread-6,5,main], Thread[pool-1-thread-7,5,main], Thread[pool-1-thread-8,5,main], Thread[pool-1-thread-9,5,main], Thread[pool-1-thread-10,5,main]]
The question is what makes these modifications of local variables thread-safe (or is it thread-safe?). I understand that this loop is actually executed sequentially, but it can change the running thread on every iteration. The changes done from the thread in the first iteration should still be visible to the thread that picks up this loop on the second iteration. Which code guarantees this visibility? I tried decompiling this code to Java and digging around the coroutines implementation with a debugger, but did not find a clue.
Your question is completely analogous to the realization that the OS can suspend a thread at any point in its execution and reschedule it to another CPU core. That works not because the code in question is "multicore-safe", but because it is a guarantee of the environment that a single thread behaves according to its program-order semantics.
Kotlin's coroutine execution environment likewise guarantees the safety of your sequential code. You are supposed to program to this guarantee without any worry about how it is maintained.
If you want to descend into the details of "how" out of curiosity, the answer becomes "it depends". Every coroutine dispatcher can choose its own mechanism to achieve it.
As an instructive example, we can focus on the specific dispatcher you use in your posted code: one backed by the JDK's Executors.newFixedThreadPool. You can submit arbitrary tasks to this executor, and it will execute each one of them on a single (arbitrary) thread, but many tasks submitted together will execute in parallel on different threads.
Furthermore, the executor service provides the guarantee that the code leading up to executor.execute(task) happens-before the code within the task, and the code within the task happens-before another thread's observing its completion (future.get(), future.isDone(), getting an event from the associated CompletionService).
Kotlin's coroutine dispatcher drives the coroutine through its lifecycle of suspension and resumption by relying on these primitives from the executor service, and thus you get the "sequential execution" guarantee for the entire coroutine. A single task submitted to the executor ends whenever the coroutine suspends, and the dispatcher submits a new task when the coroutine is ready to resume (when the user code calls continuation.resume(result)).
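To make that chain concrete, here is a minimal sketch (not the actual kotlinx.coroutines implementation) of the same idea using only the executor: plain, non-volatile state mutated by one task is safely visible to the next task, because each execute() call establishes a happens-before edge between the submitting code and the submitted task:

import java.util.concurrent.Executors

fun main() {
    val pool = Executors.newFixedThreadPool(10)
    var x = 0 // deliberately not volatile and not atomic

    fun step(i: Int) {
        pool.execute {
            x++ // increments from all previous steps are visible here
            if (i < 99_999) {
                // "Resume" the next step, much like a dispatcher resuming a
                // coroutine: execute() happens-before the task body, so the
                // new value of x is published to whichever thread runs next.
                step(i + 1)
            } else {
                println("Result: $x") // prints Result: 100000
                pool.shutdown()
            }
        }
    }
    step(0)
}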

I want to know about the multi thread with future on Scala

I know a little about multithreading with futures, for example:
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

for (i <- 1 to 5) yield Future {
  println(i)
}
But here all the threads do the same work.
So, I want to know how to make two threads that do different work concurrently.
Also, is there any method to know when all the threads have completed?
Please give me something simple.
First of all, chances are you might be happy with parallel collections, especially if all you need is to crunch some data in parallel using multiple threads:
val lines = Seq("foo", "bar", "baz")
lines.par.map(line => line.length)
While parallel collections are suitable for finite datasets, Futures are more oriented towards event-like processing. In fact, a future defines a task, abstracting away from the execution details (one thread, multiple threads, how a particular task is pinned to a thread) -- all of this is controlled by the execution context. What you can do with futures is add callbacks (on success, on failure, on both), compose them with other futures, or await the result. All these concepts are nicely explained in the official documentation, which is worth reading.
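To answer the two questions directly, here is a minimal sketch (the task bodies and the 5-second timeout are illustrative, not from the original post): two futures doing different work concurrently, combined so you know when both are complete:

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object TwoDifferentTasks extends App {
  // Two futures doing different work, running concurrently.
  val wordCount = Future { "foo bar baz".split(" ").length }
  val sum       = Future { (1 to 100).sum }

  // zip yields a future that completes only when both are complete;
  // for many futures of the same type, use Future.sequence instead.
  val both: Future[(Int, Int)] = wordCount.zip(sum)

  both.foreach { case (words, total) =>
    println(s"words: $words, sum: $total")
  }

  // Or block until everything is done (fine in a demo; avoid in servers).
  Await.result(both, 5.seconds)
}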

Future vs Thread: Which is better for working with channels in core.async?

When working with channels, is future recommended or is thread? Are there times when future makes more sense?
Rich Hickey's blog post on core.async recommends using thread rather than future:
While you can use these operations on threads created with e.g. future, there is also a macro, thread, analogous to go, that will launch a first-class thread and similarly return a channel, and should be preferred over future for channel work.
~ http://clojure.com/blog/2013/06/28/clojure-core-async-channels.html
However, a core.async example makes extensive use of future when working with channels:
(defn fake-search [kind]
  (fn [c query]
    (future
      (<!! (timeout (rand-int 100)))
      (>!! c [kind query]))))
~ https://github.com/clojure/core.async/blob/master/examples/ex-async.clj
Summary
In general, thread with its channel return will likely be more convenient for the parts of your application where channels are prominent. On the other hand, any subsystems in your application that interface with some channels at their boundaries but don't use core.async internally should feel free to launch threads in whichever way makes the most sense for them.
Differences between thread and future
As pointed out in the fragment of the core.async blog post you quote, thread returns a channel, just like go:
(let [c (thread :foo)]
  (<!! c))
;= :foo
The channel is backed by a buffer of size 1 and will be closed after the value returned by the body of the thread form is put on it. (Except if the returned value happens to be nil, in which case the channel will be closed without anything being put on it -- core.async channels do not accept nil.)
This makes thread fit in nicely with the rest of core.async. In particular, it means that go + the single-bang ops and thread + the double-bang ops really are used in the same way in terms of code structure, you can use the returned channel in alt! / alts! (and the double-bang equivalents) and so forth.
In contrast, the return value of future can be deref'd (with @ or deref) to obtain the value returned by the future form's body (possibly nil). This makes future fit in very well with regular Clojure code not using channels.
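For instance:

(let [f (future :foo)]
  @f)
;= :foo
;; and @f keeps returning :foo on every later deref as well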
There's another difference in the thread pool being used -- thread uses a core.async-specific thread pool, while future uses one of the Agent-backing pools.
Of course all the double-bang ops, as well as put! and take!, work just fine regardless of the way in which the thread they are called from was started.
It sounds like he is recommending using core.async's built-in thread macro rather than Java's Thread class.
http://clojure.github.io/core.async/#clojure.core.async/thread
Aside from which threadpool things are run in (as pointed out in another answer), the main difference between async/thread and future is this:
thread returns a channel that only lets you take! the result once before you just get nil -- good if you need channel semantics, but not ideal if you want to use that result over and over
in contrast, future returns a dereffable object which, once the thread is complete, will return the answer every time you deref it, making it convenient when you want the result more than once, but this comes at the cost of channel semantics
If you want to preserve channel semantics, you can use async/thread and place the result on (and return a) async/promise-chan, which, once there's a value, will always return that value on later take!s. It's slightly more work than just calling future, since you have to explicitly place the result on the promise-chan and return it instead of the thread channel, but buys you interoperability with the rest of the core.async infrastructure.
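Here is a minimal sketch of that approach (the helper name thread-result is made up for illustration):

(require '[clojure.core.async :as async :refer [thread put! <!!]])

(defn thread-result
  "Runs f on a core.async thread, returning a promise-chan that delivers
  f's result to every taker, any number of times."
  [f]
  (let [out (async/promise-chan)]
    (thread
      (let [v (f)]
        (if (some? v)
          (put! out v)            ; deliver the value to all takers, forever
          (async/close! out))))   ; channels can't carry nil; close instead
    out))

(def c (thread-result (fn [] (+ 1 2))))
(<!! c) ;= 3
(<!! c) ;= 3 -- unlike the thread channel, the value isn't consumed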
It almost makes one wonder if there shouldn't be a core.async/thread-promise and core.async/go-promise to make this more convenient...

QThread execution freezes my GUI

I'm new to multithreaded programming. I wrote this simple multithreaded program with Qt. But when I run it, it freezes my GUI, and when I click inside my window, I get a message that the program is not responding.
Here is my widget class. My thread counts up an integer and emits it whenever the number is divisible by 1000. In my widget I simply catch this number with the signal-slot mechanism and show it in a label and a progress bar.
Widget::Widget(QWidget *parent) :
    QWidget(parent),
    ui(new Ui::Widget)
{
    ui->setupUi(this);
    MyThread *th = new MyThread;
    connect(th, SIGNAL(num(int)), this, SLOT(setNum(int)));
    th->start();
}

void Widget::setNum(int n)
{
    ui->label->setNum(n);
    ui->progressBar->setValue(n % 101);
}
And here is my thread's run() function:
void MyThread::run()
{
    for (int i = 0; i < 10000000; i++) {
        if (i % 1000 == 0)
            emit num(i);
    }
}
thanks!
The problem is your thread code producing an event storm. The loop counts very fast -- so fast that the fact that you emit a signal only every 1000 iterations is pretty much immaterial. On modern CPUs, doing 1000 integer divisions takes on the order of 10 microseconds IIRC. If the loop were the only limiting factor, you'd be emitting signals at a peak rate of about 100,000 per second. This is not the case because the performance is limited by other factors, which we shall discuss below.
Let's understand what happens when you emit signals in a different thread from where the receiver QObject lives. The signals are packaged in a QMetaCallEvent and posted to the event queue of the receiving thread. An event loop running in the receiving thread -- here, the GUI thread -- acts on those events using an instance of QAbstractEventDispatcher. Each QMetaCallEvent results in a call to the connected slot.
The access to the event queue of the receiving GUI thread is serialized by a QMutex. On Qt 4.8 and newer, the QMutex implementation got a nice speedup, so the fact that each signal emission results in locking of the queue mutex is not likely to be a problem. Alas, the events need to be allocated on the heap in the worker thread, and then deallocated in the GUI thread. Many heap allocators perform quite poorly when this happens in quick succession if the threads happen to execute on different cores.
The biggest problem comes in the GUI thread. There seems to be a bunch of hidden O(n^2) complexity algorithms! The event loop has to process 10,000 events. Those events will most likely be delivered very quickly and end up in a contiguous block in the event queue. The event loop will have to deal with all of them before it can process further events. A lot of expensive operations happen when you invoke your slot. Not only is the QMetaCallEvent deallocated from the heap, but the label schedules an update() (repaint), and this internally posts a compressible event to the event queue. Compressible event posting has to, in the worst case, iterate over the entire event queue. That's one potential O(n^2) complexity action. Another such action, probably more important in practice, is the progress bar's setValue internally calling QApplication::processEvents(). This can recursively call your slot to deliver the subsequent signal from the event queue. You're doing way more work than you think you are, and this locks up the GUI thread.
Instrument your slot and see if it's called recursively. A quick-and-dirty way of doing it is
void Widget::setNum(int n)
{
    static int level = 0, maxLevel = 0;
    level++;
    maxLevel = qMax(level, maxLevel);
    ui->label->setNum(n);
    ui->progressBar->setValue(n % 101);
    if (level > 1 && level == maxLevel - 1) {
        qDebug("setNum recursed up to level %d", maxLevel);
    }
    level--;
}
What is freezing your GUI thread is not QThread's execution, but the huge amount of work you make the GUI thread do, even if your code looks innocuous.
Side Note on processEvents and Run-to-Completion Code
I think it was a very bad idea to have QProgressBar::setValue invoke processEvents(). It only encourages the broken way people code things (continuously running code instead of short run-to-completion code). Since the processEvents() call can recurse into the caller, setValue becomes a persona non grata, and possibly quite dangerous.
If one wants to code in a continuous style yet keep the run-to-completion semantics, there are ways of dealing with that in C++. One is by leveraging the preprocessor; for example code, see my other answer.
Another way is to use expression templates to get the C++ compiler to generate the code you want. You may want to leverage a template library here -- Boost Spirit has a decent starting point of an implementation that can be reused even though you're not writing a parser.
The Windows Workflow Foundation also tackles the problem of how to write sequential-style code yet have it run as short run-to-completion fragments. They resort to specifying the flow of control in XML. There's apparently no direct way of reusing standard C# syntax. They only provide it as a data structure, a la JSON. It'd be simple enough to implement both XML- and code-based WF in Qt, if one wanted to. All that in spite of .NET and C# providing ample support for programmatic generation of code...
The way you implemented your thread, it does not have its own event loop (because it does not call exec()). I'm not sure if your code within run() is actually executed within your thread or within the GUI thread.
Usually you should not subclass QThread. You probably did so because you read the Qt documentation, which unfortunately still recommends subclassing QThread - even though the developers long ago wrote a blog entry stating that you should not. Unfortunately, they still haven't updated the documentation appropriately.
I recommend reading "You're doing it wrong" on Qt Blog and then use the answer by "Kari" as an example of how to set up a basic multi-threaded system.
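For reference, here is a minimal sketch of the worker-object pattern those posts recommend (the class and member names are illustrative, adapted to the question's code rather than taken from the blog):

// A plain QObject is moved to a QThread; QThread itself is not subclassed.
class Worker : public QObject
{
    Q_OBJECT
public slots:
    void doWork() {
        for (int i = 0; i < 10000000; ++i) {
            if (i % 1000 == 0)
                emit num(i); // reaches the GUI thread via a queued connection
        }
        emit finished();
    }
signals:
    void num(int n);
    void finished();
};

// Setup, e.g. in the Widget constructor:
//   QThread *thread = new QThread(this);
//   Worker *worker = new Worker;  // no parent, so it can be moved
//   worker->moveToThread(thread);
//   connect(thread, &QThread::started,  worker, &Worker::doWork);
//   connect(worker, &Worker::num,       this,   &Widget::setNum);
//   connect(worker, &Worker::finished,  thread, &QThread::quit);
//   connect(worker, &Worker::finished,  worker, &QObject::deleteLater);
//   connect(thread, &QThread::finished, thread, &QObject::deleteLater);
//   thread->start();

Note that this moves the loop off the GUI thread correctly, but the event-storm caveat from the first answer still applies: the emission rate should be throttled as well.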
But when I run this program it freezes my GUI and when I click inside my window,
it responds that your program is not responding.
Yes, because IMO you're doing so much work in the thread that it exhausts the CPU. Generally, the "program is not responding" message pops up when a process shows no progress in handling its application event queue. In your case, this is what happens.
So in this case you should find a way to divide the work. Just for the sake of example, say the thread runs in chunks of 100 and is repeated until it completes all 10000000 iterations.
Also, you should have a look at QCoreApplication::processEvents() when you're performing a lengthy operation.

Explanation of Text on Threading in "C# 3.0 in a Nutshell"

While reading C# 3.0 in a Nutshell by Joseph and Ben Albahari, I came across the following paragraph (page 673, first paragraph in section titled "Signaling with Wait and Pulse")
"The Monitor class provides another signalling construct via two static methods, Wait and Pulse. The principle is that you write the signalling logic yourself using custom flags and fields (enclosed in lock statements), and then introduce Wait and Pulse commands to mitigate CPU spinning. The advantage of this low-level approach is that with just Wait, Pulse, and the lock statement, you can achieve the functionality of AutoResetEvent, ManualResetEvent, and Semaphore, as well as WaitHandle's static methods WaitAll and WaitAny. Furthermore, Wait and Pulse
can be amenable in situations where
all of the wait handles are
parsimoniously challenged."
My question is, what is the correct interpretation of the last sentence?
1. A situation with a decent/large number of wait handles where WaitOne() is only occasionally called on any particular wait handle.
2. A situation with a decent/large number of wait handles where rarely does more than one thread tend to block on any particular wait handle.
3. Some other interpretation.
Would also appreciate illuminating examples of such situations and perhaps how and/or why they are more efficiently handled via Wait and Pulse rather than by other methods.
Thank you!
Edit: I found the text online here
What this is saying is that there are some situations where Wait and Pulse provides a simpler solution than wait handles. In general, this happens where:
The waiter, rather than the notifier, decides when to unblock
The blocking condition involves more than a simple flag (perhaps several variables)
You can still use wait handles in these situations, but Wait/Pulse tends to be simpler. The great thing about Wait/Pulse is that Wait releases the underlying lock while waiting. For instance, in the following example, we're reading _x and _y within the safety of a lock - and yet that lock is released while waiting so that another thread can update those variables:
lock (_locker)
{
    while (_x < 10 && _y < 20) Monitor.Wait(_locker);
}
Another thread can then update _x and _y atomically (by virtue of the lock) and then Pulse to signal the waiter:
lock (_locker)
{
    _x = 20;
    _y = 30;
    Monitor.Pulse(_locker);
}
The disadvantage of Wait/Pulse is that it's easier to get wrong (for instance, by updating a variable and forgetting to Pulse). In situations where a program with wait handles is equally simple to a program with Wait/Pulse, I'd recommend going with wait handles for that reason.
In terms of efficiency/resource consumption (which I think you were alluding to), Wait/Pulse is usually faster and lighter (as it has a managed implementation). This is rarely a big deal in practice, though. And on that point, Framework 4.0 includes low-overhead managed versions of ManualResetEvent and Semaphore (ManualResetEventSlim and SemaphoreSlim).
Framework 4.0 also provides many more synchronization options that lessen the need for Wait/Pulse:
CountdownEvent
Barrier
PLINQ / Data Parallelism (AsParallel, Parallel.Invoke, Parallel.For, Parallel.ForEach)
Tasks and continuations
All of these are much higher-level than Wait/Pulse and IMO are preferable for writing reliable and maintainable code (assuming they'll solve the task at hand).
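As a quick illustration of how much shorter these higher-level primitives read compared to hand-rolled Wait/Pulse, here is a minimal sketch using CountdownEvent (the worker bodies are placeholders):

using System;
using System.Threading;

class CountdownDemo
{
    static void Main()
    {
        // One count per worker; Wait() blocks until all of them have signaled.
        using (var done = new CountdownEvent(3))
        {
            for (int i = 0; i < 3; i++)
            {
                int id = i; // capture a stable copy for the closure
                new Thread(() =>
                {
                    Console.WriteLine("worker {0} finished", id);
                    done.Signal(); // decrement the remaining count
                }).Start();
            }
            done.Wait(); // no custom flags, locks, or Pulse needed
            Console.WriteLine("all workers done");
        }
    }
}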