Usage example when Thread is preferred over Coroutine in Kotlin - multithreading

I'm researching the concurrency topic in Kotlin, and I can't figure out why we still need threads if coroutines are such a good tool for handling concurrency in Kotlin.
I've tried to find some benefits of using threads in Kotlin, but all I can find is praise for how good coroutines are.

Sometimes you need something more straightforward, even if it's less performant, and, as mentioned in a comment by #broot, if you are writing in Java it will probably be a lot easier to use a library that doesn't expose suspend functions.
Also, threads, despite being more expensive, are a lot more accurate when handling delays. An example would be choreographing an animation: if you try to do it with the delay(randomDelay) function you will get gaps between animation changes, as the coroutine might continue several milliseconds after randomDelay has passed.
Other than that, use coroutines when possible.
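The timing point above can be sketched by measuring how far past the requested deadline a plain thread actually wakes up. This is a minimal illustration, not a benchmark; `measureOversleepMillis` is a name I've made up here, and a coroutine's delay() would typically add further jitter on top of this because it resumes only when its dispatcher schedules it.

```kotlin
// Sketch: measuring how late a timed wait actually wakes up.
// A dedicated thread using Thread.sleep typically wakes close to the
// requested time; a coroutine's delay() resumes only when its dispatcher
// gets around to it, which can add extra jitter on a busy system.
fun measureOversleepMillis(requestedMs: Long): Long {
    val start = System.nanoTime()
    Thread.sleep(requestedMs)
    val elapsedMs = (System.nanoTime() - start) / 1_000_000
    return elapsedMs - requestedMs   // how far past the deadline we woke
}

fun main() {
    val overshoot = measureOversleepMillis(50)
    println("woke ${overshoot}ms after the requested delay")
}
```

For frame-accurate animation even this overshoot matters, which is why a dedicated thread (or a frame-synchronized callback) can beat a coroutine delay loop.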

Related

Recursion calls on different cores?

So I recently was writing a program to confirm that Dijkstra's mutual exclusion algorithm works. I decided to use Kotlin because I didn't want to use C++ and manage memory myself. Today I saw that, despite the fact that my program runs sequentially, all cores are at 100% usage. Is this some JVM optimization? Or maybe Kotlin optimized my recursion calls? I should mention that my recursion is not tail recursion. Does any of you know why this happens? I did not use threads or coroutines, just to be clear.
Short answer: don't worry!
If you're not using coroutines or explicit threads, then your code should continue to execute in the same thread.
However, there's no guarantee that your thread will always be executed on the same core; the OS is free to schedule it on whichever core it thinks best at each moment.  (I don't know what criteria various OSs may use to make those decisions, but it's likely to take account of other threads and processes. And your thread can move between cores quickly enough to confuse any monitoring you may do.)
Also, even if you don't start any other threads, there are likely to be several background threads handling garbage collection, running finalisers, and other housekeeping.  If you use a GUI toolkit such as JavaFX or Swing, that will use many threads, as will a framework such as Spring.  (These will usually be marked as ‘dæmon’ threads, so they don't prevent the JVM from exiting.)
Finally, the JVM itself is likely to use system threads for background compilation of bytecode, monitoring, and so on.  (Different JVM implementations may do this differently, of course.)
(None of this is specific to Kotlin, by the way; it's the same for all Java apps.  But it'd almost certainly be different for Kotlin/JS and Kotlin/native.)
So no, activity on multiple cores does not imply that your code has been transformed or rewritten.  It just means that you're not running on the bare metal, and should trust the JVM to take care of things!
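Those background threads are easy to see for yourself. This short sketch (the helper name is my own) enumerates every live thread in the JVM; even a trivial program will typically show GC, JIT-compiler, and signal-dispatch threads alongside "main".

```kotlin
// Even a "single-threaded" program runs on a JVM with several
// housekeeping threads (GC, JIT compilation, finalizers, signal
// dispatch). This simply enumerates the live threads by name.
fun jvmThreadNames(): List<String> =
    Thread.getAllStackTraces().keys.map { it.name }.sorted()

fun main() {
    jvmThreadNames().forEach(::println)
}
```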
It was GC; here is the Reddit post I created about it:
https://www.reddit.com/r/Kotlin/comments/bp9y1t/recursion_calls_on_different_cores/

How coroutines can be faster than threads?

I am trying to find a situation where changing multithreading to coroutines will speed up the affected code section. I've discovered that coroutines use less CPU and heap space than threads, but I still can't find a case where coroutines are faster. Although I know that coroutine creation and context switching are much cheaper than the corresponding thread operations, I've measured an imperceptible speed difference (without measuring thread creation, both cases are absolutely the same).
So, is it even possible to find a case where coroutines speed up execution more than threads do?
One thing to note is that coroutines are vastly superior when you have lots and lots of them. You can create and execute thousands of coroutines without a second thought; if you attempted to do that via threads, all the overhead associated with threads might quickly kill the host. So this enables you to think about massive parallelization without having to manage worker threads and runnables. They also make it easy to implement asynchronous computation patterns which would be very unwieldy to implement with basic threads, like channels and actors.
Out of scope regarding your question, but still noteworthy, is the genericity of the concept, as the use cases for coroutines are not limited to asynchronous computation. The core of coroutines is suspendable functions, which for example also enables generators like you have in Python, which you would not immediately connect to asynchronous programming.
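The "lots and lots of them" point can be sketched as follows (this assumes the kotlinx-coroutines-core dependency is on the classpath; `countManyCoroutines` is a name invented for the illustration). Launching 100,000 coroutines costs a few objects each; 100,000 platform threads would each need a stack, typically hundreds of kilobytes to a megabyte, and would likely exhaust the host.

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.atomic.AtomicInteger

// Sketch: 100,000 concurrent coroutines are cheap because delay()
// suspends without blocking a thread. The same count of platform
// threads would need gigabytes of stack space.
fun countManyCoroutines(n: Int): Int = runBlocking {
    val completed = AtomicInteger(0)
    val jobs = List(n) {
        launch {
            delay(10)                    // suspends; no thread is blocked
            completed.incrementAndGet()
        }
    }
    jobs.joinAll()
    completed.get()
}

fun main() {
    println("completed: ${countManyCoroutines(100_000)}")
}
```

Note this demonstrates scalability of concurrency, not raw speed: for CPU-bound work on a fixed number of cores, coroutines and threads perform about the same, matching the question's observation.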

BlackBerry multithreading

I want to know about the threading concept in BlackBerry.
In Android we have AsyncTask or Handlers to communicate. Is there something similar available in BlackBerry? How do two threads communicate? How do you pass data from a background thread to the UI thread?
Concurrency is not a trivial thing. It's really difficult to craft a thread-safe solution. In BlackBerry Java the situation is even worse, as only Java ME threading APIs are available, meaning you can't use the Java SE high-level classes (like executors, locks, concurrent collections, etc.).
My advice would be not to try to port Android's AsyncTask or any other high-level concurrency-related class on your own, since you'll very likely make mistakes (unless you are well versed in concurrent programming). I myself would avoid it as much as possible. Instead, keep the concurrent code as simple and small as you can. Most of the time all you need is to refresh the GUI from a worker thread. This can be done easily with UiApplication.invokeLater and UiApplication.invokeAndWait, and you don't need to write concurrent code at all.
If you want to learn more about concurrency, I'd start with this tutorial by Oracle. It's aimed at Java SE, but almost the first half is useful for Java ME as well. In case you want to learn more advanced concurrent programming, this book is a must-read.

When should a thread generally yield?

In most languages/frameworks, there exists a way for a thread to yield control to other threads. However, I can't really think of a time when yielding from a thread was the correct solution to a given problem. When, in general, should one use Thread.yield(), sleep(0), etc?
One use case could be for testing concurrent programs, try to find interleavings that reveal flaws in your synchronization patterns. For instance in Java:
A useful trick for increasing the number of interleavings, and therefore more effectively exploring the state space of your programs, is to use Thread.yield to encourage more context switches during operations that access shared state. (The effectiveness of this technique is platform-specific, since the JVM is free to treat Thread.yield as a no-op [JLS 17.9]; using a short but nonzero sleep would be slower but more reliable.) — JCIP
Also interesting from the Java point of view is that their semantics are not defined:
The semantics of Thread.yield (and Thread.sleep(0)) are undefined [JLS 17.9]; the JVM is free to implement them as no-ops or treat them as scheduling hints. In particular, they are not required to have the semantics of sleep(0) on Unix systems (put the current thread at the end of the run queue for that priority, yielding to other threads of the same priority), though some JVMs implement yield in this way. — JCIP
This makes them, of course, rather unreliable. This is very Java-specific; however, in general I believe the following is true:
Both are low-level mechanisms that can be used to influence the scheduling order. If they are used to achieve a certain functionality, then that functionality relies on the probabilistic behavior of the OS scheduler, which seems a rather bad idea. This should be managed by higher-level synchronization constructs instead.
For testing purposes, or for forcing the program into a certain state, they seem a handy tool.
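The JCIP trick can be sketched like this: a deliberately broken (unsynchronized) counter, with a Thread.yield() placed between the read and the write to widen the race window so a test is far more likely to observe lost updates. The names here are invented for the illustration.

```kotlin
// Sketch: using Thread.yield() to encourage context switches and make a
// data race easier to observe in a test. The counter is deliberately
// NOT synchronized; the yield between read and write widens the window
// in which another thread can interleave.
var unsafeCounter = 0

fun racyIncrement(times: Int) {
    repeat(times) {
        val read = unsafeCounter
        Thread.yield()          // scheduling hint: let another thread run here
        unsafeCounter = read + 1
    }
}

fun main() {
    val t1 = Thread { racyIncrement(10_000) }
    val t2 = Thread { racyIncrement(10_000) }
    t1.start(); t2.start()
    t1.join(); t2.join()
    // Almost always well below 20_000 because of lost updates.
    println("count = $unsafeCounter (expected 20000 if it were thread-safe)")
}
```

Note that since yield is only a hint, the race is made likely, not certain; as JCIP says, a short nonzero sleep is slower but more reliable.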
When, in general, should one use Thread.yield(), sleep(0), etc?
It depends on the VM and thread model we are talking about. For me the answer is rarely, if ever.
Traditionally, some thread models were non-preemptive, and others are (or were) not mature, hence the need for Thread.yield().
I feel that Thread.yield() is like using register in C. We used to rely on it to improve the performance of our programs because in many cases the programmer was better at this than the compiler. But modern compilers are much smarter, and in far fewer cases these days can the programmer actually improve the performance of a program with the use of register or Thread.yield().
Let your OS scheduler decide for you.
So never yield, and never sleep(0), until you hit a case where sleep(0) is absolutely necessary, and document it here.
Also, context switches are costly, so I don't think a lot of people want more of them.
I know this is old, but you didn't get any good answers here.
In general yielding is a way to be polite to other threads/processes and give them a chance to run on the same CPU with minimal delay to the yielding thread.
Not all yielding is equal either. On Windows, SwitchToThread() only releases the CPU if another thread of equal or greater priority was scheduled to run on the same CPU, which means it may very well simply resume the calling thread, while Sleep(0) has looser scheduler semantics. On Linux, sched_yield() is similar to SwitchToThread(), while nanosleep() with a zero timespec seemingly marks the thread as unready for whatever period the timer slack is set to (inferred from profiling and substantiated here). Behavior on macOS is seemingly similar to Linux, but with much less timer slack; I haven't looked into it that much, though.
Yielding was far more useful in the days when uniprocessor systems were abundant, because it really helped keep the system moving. For example, on Windows, where by default Sleep(1) is actually a predictable delay of at least 15.6ms (note that this is nearly an entire frame at 60fps if you're making a game or media player), it's still pretty valid, although MessageWaitForMultipleObjectsEx should be preferred in general UI applications. Windows 10 added a new type of high-resolution waitable timer with microsecond granularity that should probably be preferred over other methods, so hopefully that kind of yielding won't be so necessary anymore either.
In the context of N:1 and N:M cooperative threading models (not common at the OS level anymore, but still employed often enough at the application level through libraries providing fibers and coroutines), yielding is definitely still useful to keep things moving.
Unfortunately it's also abused pretty often, for example yielding in a busy loop rather than waiting on a synchronization primitive because the appropriate primitive isn't obvious or because the developer is overly optimistic about how long their threads will wait for / overly pessimistic about the scheduler. But in practice on most modern multitasking OSes unless the system is extremely busy, threads waiting on a synchronization primitive will get run almost instantly when the primitive is triggered/released/whatever.
You should try to avoid yielding, especially as an alternative to using a proper synchronization method. When you do need to yield, a zero sleep or waiting on a high-resolution time source is probably better than a normal yield (I call the former a "long yield" as opposed to a "short yield"), but unless you're using the system interface directly, the implementation of sleep in your programming language/framework of choice might "optimize" sleep(0) into a short yield or even a no-op for you, sadly.
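The "wait on a synchronization primitive instead of yielding in a busy loop" advice can be sketched on the JVM with a CountDownLatch (the helper name here is invented for the illustration): the waiting thread blocks without burning CPU and is woken promptly when the worker signals.

```kotlin
import java.util.concurrent.CountDownLatch

// Sketch: instead of spinning in a loop with Thread.yield() until a
// flag flips, block on a proper synchronization primitive. The waiting
// thread consumes no CPU and is woken promptly when the latch opens.
fun awaitWorker(): Boolean {
    val done = CountDownLatch(1)
    val worker = Thread {
        Thread.sleep(50)     // simulate some work
        done.countDown()     // signal completion
    }
    worker.start()
    done.await()             // blocks without spinning or yielding
    worker.join()
    return true
}

fun main() {
    if (awaitWorker()) println("worker finished")
}
```

Compare this with a `while (!flag) Thread.yield()` spin: the latch version wastes no cycles and does not depend on scheduler behavior.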

What is so great about TPL

I have done this POC and verified that when you create 4 threads and run them on a quad-core machine, all cores get busy, so the CLR is already scheduling threads on different cores effectively. Why, then, the Task class?
I agree Task simplifies the creation and use of threads, but apart from that, what? It's just a wrapper around threads and thread pools, right? Or does it in some way help schedule threads on multicore machines?
I am specifically looking at what Task offers with respect to multicore that wasn't there with 2.0 threads.
"I agree Task simplifies the creation and use of threads"
Isn't that enough? Isn't it fabulous that it provides higher-level building blocks so that us mere mortals can build lock-free multithreaded code which is safe because really smart people like Joe Duffy have done the work for us?
If the TPL really just consisted of a way of starting a new task, it wouldn't be much use; the work-stealing etc. is nice, but probably not crucial to most of us. It's the building blocks around tasks, and in particular around the idea of a "future", which provide the value. Do you really want to write Parallel.ForEach yourself? Do you really want to work out how to perform partitioning efficiently yourself? I know that if I tried doing that, it would take me a long time and I'd certainly do a worse job of it than the PFX team.
Many of the advances in development haven't been about making it possible to do something which was impossible before - they've been about raising the abstraction level so that a problem can be solved once and then that solution reused. Do you feel the same way about the CLR itself? You could do the same thing in assembly yourself, obviously... but by raising the abstraction level, the CLR and C# make us more productive.
Although you could do everything equivalently with the TPL or the thread pool, the TPL is preferred over the thread pool for its better abstractions and scalability patterns. But it is up to the programmer: if you know exactly what you are doing, and depending on how the scheduling and synchronization requirements play out in your specific application, you could use the thread pool more effectively. There is some stuff you get for free with the TPL that you'd have to code up yourself when using the thread pool, like the few I can think of now:
work stealing
worker-thread-local pools
scheduling groups of actions, like Parallel.For
The TPL lets you program in terms of tasks, not threads. Thinking of tasks solely as a better thread abstraction would be a mistake. Tasks allow you to specify the work you want to get executed, not how you want it executed (threads). This allows you to express the potential parallelism of your application and have the TPL runtime (the scheduler and thread pool) handle how that work gets executed. This means the TPL takes a lot of the burden off you of ensuring your application performs well on a wide variety of hardware with different numbers of cores.
For example, the TPL makes it easy to implement several key design patterns that allow you to express the potential parallelism of your application.
http://msdn.microsoft.com/en-us/library/ff963553.aspx
Like Futures (mentioned by Jon) as well as pipelines and parallel loops.