Is Dart really a single-threaded programming language?

I'm very new to Dart and still learning it. As I understand it, Dart executes code in different isolates. An isolate can start up another isolate to execute some long-running code. For each isolate, there is a thread and some memory allocated to it. These isolates are isolated from each other, like a bunch of little VMs.
I also read in the Dart documentation that Dart is a single-threaded language. But think about it: each isolate has its own thread. If isolate A has thread t1 and isolate B has thread t2, then t1 and t2 are not the same thread, right?
If t1 and t2 were the same thread, then t1 and t2 couldn't execute code at the same time, which would be ridiculous. So t1 and t2 must be different threads.
If so, why do we say Dart is a single-threaded language?

Yes and no.
"Yes" in the sense that you don't have to worry about locks or mutexes.
"No" in the sense that you list.
Dart tries to offer some of the benefits of multi-threading with isolates while avoiding all of the issues with shared memory multi-threading.
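For intuition, here is a minimal sketch of the isolate idea expressed in C++ terms (this is not Dart API; the Channel class and the messages are made up for illustration). Each "isolate" owns its own state, and the only way data moves between them is through a message channel:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// A toy one-way message queue: the only link between "isolates".
class Channel {
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(int msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(msg); }
        cv_.notify_one();
    }
    int receive() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        int msg = q_.front(); q_.pop();
        return msg;
    }
};

int main() {
    Channel toWorker, toMain;
    // The "isolate": it owns all of its state and touches nothing
    // shared except the two channels.
    std::thread isolate([&] {
        int n = toWorker.receive();   // message in
        toMain.send(n * n);           // message out; no shared mutable state
    });
    toWorker.send(7);
    std::cout << "worker replied: " << toMain.receive() << "\n";
    isolate.join();
}

Within each worker, code runs on a single thread with no locks in sight; all coordination happens at the channel boundary, which is the property the Dart documentation is pointing at.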

Related

How much of a thread's code gets executed every time it is scheduled?

I have 3 threads in my program:
t1 reads frame1 of data and writes it onto a hard disk
t2 reads frame2 of data and writes it onto a hard disk
t3 reads frame3 of data and writes it onto a hard disk
When the program runs and t1, t2, and t3 are scheduled for execution one by one, how are the operations performed internally?
Ex:
say t1 -> t2 -> t3 get scheduled in this order
Scenario 1:
will t1 finish one full cycle of reading frame1 and writing frame1 before t2 is scheduled, will t2 finish one full cycle of reading frame2 and writing frame2 before t3 is scheduled, and so on?
or
Scenario 2:
can t1, t2, t3, or some or all of these threads be stopped in the middle of their execution before the next thread gets scheduled?
Which of these scenarios is correct?
I mention the hard-disk write in particular because of the possibility of a blocking fwrite call, which cannot be abandoned in the middle of its execution.
You should consider (and code and think) as if all threads are running concurrently (e.g. at the same time on different cores of your processor).
A thread usually doesn't write directly to the disk: it writes files to some file system (and the kernel buffers, e.g. in the page cache, so the disk IO can happen several seconds later).
If you need synchronization, you should make it explicit (e.g. with mutexes). If you need to synchronize file contents, consider using some file-locking machinery à la lockf(3) (but you should really avoid having several threads or processes accessing and writing the same file). BTW, stdio is buffered (so you might want to fflush(3) after fwrite(3)...).
And when the kernel schedules some thread or process, it schedules preemptively, at arbitrary times (at any machine instruction).
Read some pthread tutorial and Operating Systems: Three Easy Pieces. Read also about memory models (it is tricky).
So both of your scenarios could be, and are likely to be, wrong.
How much of a thread's code gets executed every time it is scheduled?
You should not care, and you cannot know. It can be as tiny as nothing (read about thrashing) and as large as several million machine instructions. BTW, be aware of optimizing compilers and of sequence points in C; so actually the question does not even make sense (from the observable point of view of a C programmer).
I mention the hard-disk write in particular because of the possibility of a blocking fwrite call
When the stdio library (or your application directly) is actually write(2)-ing a file descriptor, it is likely (but not certain) that the kernel will schedule tasks during such system calls. However, the actual disk IO will probably happen later.
PS. Read also about undefined behavior.
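To make that concrete, here is a minimal C++ sketch of the question's frame-writing setup, under stated assumptions: the three threads share one FILE*, read_frame is a made-up stand-in for real data acquisition, and the file name is arbitrary. Synchronization is explicit, and fflush(3) follows fwrite(3) as suggested above:

#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

std::mutex file_mutex;   // explicit synchronization, as advised

// Hypothetical stand-in for acquiring a frame of data.
void read_frame(std::vector<char>& frame, int id) {
    frame.assign(1024, static_cast<char>(id));
}

void frame_writer(std::FILE* fp, int id) {
    std::vector<char> frame;
    read_frame(frame, id);
    // The kernel may preempt us at any instruction, so we hold the mutex
    // across the whole write so one frame's bytes are never interleaved
    // with another's.
    std::lock_guard<std::mutex> lk(file_mutex);
    std::fwrite(frame.data(), 1, frame.size(), fp);
    std::fflush(fp);   // push stdio's buffer to the kernel page cache
}

int main() {
    std::FILE* fp = std::fopen("frames.bin", "wb");
    if (!fp) return 1;
    std::thread t1(frame_writer, fp, 1), t2(frame_writer, fp, 2), t3(frame_writer, fp, 3);
    t1.join(); t2.join(); t3.join();
    std::fclose(fp);
}

Note that even after fflush, the bytes sit in the page cache; the actual disk IO still happens whenever the kernel decides.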
It depends on the method (or methods) these threads are calling. If all the threads are calling the same method, and that method is synchronized, then only one thread will be processing it at a time. During that time, the rest of the threads will wait for the currently running thread to complete. If the method is not synchronized, or the threads are calling different methods, then there is no guarantee which thread will be processed first or finish first. They may also end up overwriting class-level variables.
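For readers working in C++ rather than Java, a rough equivalent of that "synchronized method" behavior is a mutex tied to the function; the function name here is hypothetical:

#include <cstdio>
#include <mutex>
#include <thread>

// Only one thread can be inside process() at a time; the static mutex
// plays the role of the monitor lock of a synchronized method.
void process(int id) {
    static std::mutex m;
    std::lock_guard<std::mutex> lk(m);
    std::printf("thread %d has the method to itself\n", id);
}

int main() {
    std::thread a(process, 1), b(process, 2), c(process, 3);
    a.join(); b.join(); c.join();
}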

Regarding Mutexes and Semaphores

Suppose there are 4 threads (T1 to T4) that need to run concurrently and 3 structs (struct1 to struct3) as resources:
T1 and T2 share struct1 (T1 writes to struct1 and T2 reads from it)
T2 and T3 share struct2 (T2 writes to struct2 and T3 reads from it)
T3 and T4 share struct3 (T3 writes to struct3 and T4 reads from it)
Because of this statement from § 41.2.4 of The C++ Programming Language (4th edition) by Bjarne Stroustrup:
"Two threads have a data race if both can access a memory location
simultaneously and at least one of their accesses is a write. Note
that defining “simultaneously” precisely is not trivial. If two
threads have a data race, no language guarantees hold: the behavior is
undefined."
It becomes clear that there is a need for synchronization.
1 - Which of these primitives is suitable for this application: just mutexes, or semaphores?
2 - If mutexes are the choice, we would need 3 mutexes, one for each structure, right?
3 - Would using a mutex around a given non-atomic operation block CPU time of other threads?
Your use case is kind of abstract, so better solutions might be available. But based just on the information you provided:
1) Use mutexes. I do not see how semaphores could help except when used as mutexes. A semaphore can be useful when you share more resources, but in your case it is only one at a time.
If all four threads accessed the first free struct, or if your structs were queues, a semaphore could help.
2) Right, one mutex per structure.
3) Yes, it could; that is the idea. You do not want T1 to write while T2 is reading struct1, and vice versa. The worst case would be T1 blocking T2, which has already blocked T3, which has blocked T4.
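As a minimal sketch of 2) and 3), assuming C++11's std::mutex and a made-up Data struct: one mutex per structure, and each thread holds at most one lock at a time, which keeps the chained-blocking worst case short-lived:

#include <mutex>
#include <thread>

struct Data { int payload = 0; };   // hypothetical shared structure

Data struct1, struct2, struct3;
std::mutex m1, m2, m3;              // one mutex per structure

void t1() {                         // writes struct1
    std::lock_guard<std::mutex> lk(m1);
    struct1.payload = 42;
}

void t2() {                         // reads struct1, writes struct2
    int v;
    { std::lock_guard<std::mutex> lk(m1); v = struct1.payload; }
    { std::lock_guard<std::mutex> lk(m2); struct2.payload = v + 1; }
}

void t3() {                         // reads struct2, writes struct3
    int v;
    { std::lock_guard<std::mutex> lk(m2); v = struct2.payload; }
    { std::lock_guard<std::mutex> lk(m3); struct3.payload = v + 1; }
}

void t4() {                         // reads struct3
    std::lock_guard<std::mutex> lk(m3);
    int v = struct3.payload;
    (void)v;
}

int main() {
    std::thread a(t1), b(t2), c(t3), d(t4);
    a.join(); b.join(); c.join(); d.join();
}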
1 - Three semaphores, one for each queue; see the Producer–consumer problem.
2 - One of the semaphores could be a mutex; binary semaphores are much like mutexes.
3 - If you have to wait on a semaphore or mutex, you are placed in the OS's not-ready queue, waiting for the release, and so you don't use any CPU (except for the thousands of cycles the context switch costs).
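Here is a sketch of that producer-consumer arrangement, assuming a C++20 compiler for std::counting_semaphore; the buffer size and values are arbitrary. One semaphore counts free slots, one counts filled slots, and a binary semaphore serves as the mutex from 2):

#include <array>
#include <cstdio>
#include <semaphore>
#include <thread>

constexpr int N = 4;
std::array<int, N> buffer;
std::counting_semaphore<N> slots(N);   // free cells
std::counting_semaphore<N> items(0);   // filled cells
std::binary_semaphore turn(1);         // mutual exclusion, used like a mutex
int head = 0, tail = 0;

void producer() {
    for (int v = 0; v < 8; ++v) {
        slots.acquire();               // wait for a free slot (no CPU burned)
        turn.acquire();
        buffer[head] = v; head = (head + 1) % N;
        turn.release();
        items.release();               // signal one item available
    }
}

void consumer() {
    for (int i = 0; i < 8; ++i) {
        items.acquire();               // wait for an item
        turn.acquire();
        int v = buffer[tail]; tail = (tail + 1) % N;
        turn.release();
        slots.release();               // hand the slot back
        std::printf("consumed %d\n", v);
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join(); c.join();
}

A thread blocked in acquire() sits in the OS wait queue exactly as described in 3).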

Force function in separate thread on Corona SDK, similar to "dispatch_async block" in iOS

Can a function be forced to be called in a separate thread using Corona SDK?
How?
edit:
So what I felt was slowing down my system did not actually depend on asynchronous calls. It was a table view that had to be filled with 1000+ elements. It turns out it was a bug in an earlier version of the Corona SDK; updating to the latest build made the table-view row insertion much more rapid.
The closest you can get in Lua (and the Corona SDK) is coroutines, but these are not really threads; rather (to quote Programming in Lua):
A coroutine is similar to a thread (in the sense of multithreading): a line of execution, with its own stack, its own local variables, and its own instruction pointer; but sharing global variables and mostly anything else with other coroutines. The main difference between threads and coroutines is that, conceptually (or literally, in a multiprocessor machine), a program with threads runs several threads concurrently. Coroutines, on the other hand, are collaborative: a program with coroutines is, at any given time, running only one of its coroutines, and this running coroutine only suspends its execution when it explicitly requests to be suspended.
http://www.lua.org/pil/9.html
Unfortunately, if you approach coroutines hoping that they will be like threads, you'll be disappointed.
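If you want to see that collaborative model in conventional code, here is a minimal sketch using C++20 coroutines instead of Lua; the Task wrapper is hand-rolled for illustration, not a standard type. There is only one line of execution, and each coroutine runs only until it explicitly suspends, exactly as the quote describes:

#include <coroutine>
#include <cstdio>

struct Task {
    struct promise_type {
        Task get_return_object() {
            return Task{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
    std::coroutine_handle<promise_type> handle;
    explicit Task(std::coroutine_handle<promise_type> h) : handle(h) {}
    ~Task() { if (handle) handle.destroy(); }
    bool resume() {                     // run until the next suspension point
        if (!handle.done()) handle.resume();
        return !handle.done();
    }
};

Task worker(const char* name) {
    for (int i = 0; i < 3; ++i) {
        std::printf("%s step %d\n", name, i);
        co_await std::suspend_always{}; // explicit, voluntary suspension
    }
}

int main() {
    Task a = worker("A");
    Task b = worker("B");
    // One thread interleaves the two coroutines by hand; at any given
    // time only one of them is running.
    bool ra = true, rb = true;
    while (ra || rb) {
        if (ra) ra = a.resume();
        if (rb) rb = b.resume();
    }
}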

QProcess, QEventLoop - of any use for parallel processing?

I wonder whether I could use QEventLoop (QProcess?) to parallelize multiple calls to the same function with Qt. What precisely is the difference from QtConcurrent or QThread? What, more precisely, are a process and an event loop? I read that QCoreApplication must exec() as early as possible in the main() method, so I wonder how it differs from the main thread.
Could you point me to some good references on processes and threads with Qt? I went through the official docs and these things remain unclear.
Thanks and regards.
Process and thread are not Qt-specific concepts. You can search for "process vs. thread" anywhere for that distinction to be explained. For instance: What resources are shared between threads?
Though related concepts, spawning a new process is a more "heavyweight" form of parallelism than spawning a new thread within your existing process. Processes are protected from each other by default, while threads of execution within a process can read and write each other's memory directly. The protection you get from spawning processes comes at a greater run-time cost...and since independent processes can't read each other's memory, you have to share data between them using methods of inter-process communication.
Odds are that you want threads, because they're simpler to use in a case where one is writing all the code in a program. Given all the complexities in multithreaded programming, I'd suggest looking at a good book or reading some websites to start with. See: What are some good resources for learning threaded programming?
But if you want to dive in and just get a feel for how threading in Qt looks, you can spend time looking at the examples:
http://qt-project.org/doc/qt-4.8/examples-threadandconcurrent.html
QtConcurrent is an abstraction library that makes it easier to implement some kinds of parallel programming patterns. It's built on top of the QThread abstractions, and there's nothing it can do that you couldn't code yourself by writing to QThread directly. But it might make your code easier to write and less prone to errors.
As for an event loop...that is merely a generic term for how any given thread of execution in your program waits for work items to process, processes them, and can decide when it is no longer needed. If a thread's job were merely to start up, do some math, and exit...then it wouldn't need an event loop. But starting and stopping a thread takes time and churns resources. So typically threads live for longer periods of time, and have an event loop that knows how to wait for events it needs to respond to.
If you build on top of QtConcurrent, you won't have to worry about an event loop in your worker threads because they are managed automatically in a thread pool. The word count example is pretty simple to see:
http://qt-project.org/doc/qt-4.8/qtconcurrent-wordcount-main-cpp.html
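For a feel of how little ceremony that takes, here is a minimal sketch, assuming Qt 5's QtConcurrent module (QT += concurrent in the .pro file); slowSquare is a made-up stand-in for real work:

#include <QCoreApplication>
#include <QDebug>
#include <QFuture>
#include <QtConcurrent/QtConcurrentRun>

// Hypothetical CPU-bound function we want off the main thread.
int slowSquare(int x) {
    return x * x;   // imagine heavy work here
}

int main(int argc, char* argv[]) {
    QCoreApplication app(argc, argv);
    // QtConcurrent::run schedules the call on the global thread pool:
    // no QThread subclass, and no event loop to manage in the worker.
    QFuture<int> future = QtConcurrent::run(slowSquare, 21);
    future.waitForFinished();          // block here just for the demo
    qDebug() << "result:" << future.result();
    return 0;                          // we never even needed app.exec()
}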

Threads (PThreads) stopping execution and going into a wait state

I have been having this problem wherein my threads stop execution and go into a wait state (reason: unknown). Pseudocode is posted below, followed by some explanation.
int arr[1000];

T1 {
    tmp = arr[i];
}

T2 {
    tmp = arr[i];
}

T3 {
    arr[i] = value;
}

main() {
    spawns threads and waits for them to finish;
}
So the only thing shared across these threads is the array: T3 writes into it, and T1 and T2 read from it and use the values for some purpose.
When I execute the program, all three threads work fine and do what is required of them, but when I run it for longer periods, after a while they stop executing and go into a wait state. The threads are still in the process mix, but idle. I do not know why this is happening and would greatly appreciate it if someone could provide any useful pointers as to how I can resolve this problem.
For sure, there is no bug in the provided example. The real bug in your code is somewhere else - this is typical in multithreaded applications; you shouldn't focus on just this particular array. Look for the bug elsewhere. Even when you think there is nothing else thread-related where a multithreaded deadlock could occur, there is something, for sure!
Sounds like you might be suffering from a deadlock. Do your threads ever hold more than one mutex at a time? (e.g. pthread_mutex_lock(&mutex1); pthread_mutex_lock(&mutex2);) If so, do they always lock their simultaneously-held mutexes in the same order? If they don't, that would be a problem.
If thread T1 does the aforementioned locking sequence, but thread T2 locks mutex2 and then mutex1 (while still holding mutex2), then that is all that is needed to cause an occasional deadlock... which would mean that T1 is holding mutex1 and waiting for mutex2 to become available, while simultaneously T2 is holding mutex2 and waiting for mutex1, and both are stuck forever. Or, if you're really unlucky, the deadlock could involve a cycle of 3 or more mutexes.
Note that even if your own code isn't locking a second mutex explicitly, it's possible that some library or function call that your code calls out to is locking its own mutex internally. That, combined with your own held mutex, could be sufficient for a deadlock also.
Your best bet would be to run your program under a debugger (for example gdb), and then when it locks up, break into the debugger and print out the current stack trace of each thread (via the "where" command) to see where it is blocked.
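For reference, here is the hazardous pattern sketched in C++, together with the standard fix: either agree on one global lock order, or acquire both mutexes atomically (std::scoped_lock, C++17, which uses a deadlock-avoidance algorithm):

#include <mutex>
#include <thread>

std::mutex mutex1, mutex2;

// The pattern described above would be:
//   T1: lock(mutex1); lock(mutex2);
//   T2: lock(mutex2); lock(mutex1);   // opposite order -> occasional deadlock
// The fix below locks both at once, so the textual order no longer matters.

void t1_work() {
    std::scoped_lock lk(mutex1, mutex2);
    // ... touch data guarded by both mutexes ...
}

void t2_work() {
    std::scoped_lock lk(mutex2, mutex1);   // safe despite the reversed order
    // ...
}

int main() {
    std::thread a(t1_work), b(t2_work);
    a.join(); b.join();
}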
There was a problem with the way I was using one of the calls to an external library, which caused me to go into self-deadlock mode. Thanks to everyone who tried to help me.
I was going to say, check your mutex locking and unlocking, especially when using conditions, but you found your problem.
By the way... you don't necessarily need to use a mutex to read or write values in a shared array. Check out the gcc atomic operations if you need more speed!
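As a sketch of that last point, std::atomic is the portable face of those gcc builtins; assuming the question's access pattern, it lets the readers and the writer share the array without any mutex (values here are arbitrary):

#include <array>
#include <atomic>
#include <cstdio>
#include <thread>

// Globals are zero-initialized, so the readers never see indeterminate values.
std::array<std::atomic<int>, 1000> arr;

void writer() {                        // plays the role of T3
    for (int i = 0; i < 1000; ++i)
        arr[i].store(i * 2, std::memory_order_release);
}

void reader() {                        // plays the role of T1 and T2
    long sum = 0;
    for (int i = 0; i < 1000; ++i)
        sum += arr[i].load(std::memory_order_acquire);
    std::printf("sum = %ld\n", sum);
}

int main() {
    std::thread w(writer), r1(reader), r2(reader);
    w.join(); r1.join(); r2.join();
}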
