Has anyone seen a programming language that handles threads like this?

Most of the multithreaded work I have done has been in C/C++, Python, or Delphi (Object Pascal). All on Windows. I'll use Delphi for my discussion here. Delphi has a nice class called TThread which abstracts the thread creation process. The class provides an Execute method which is the created thread's thread function. You override that method and typically create a loop within it that exits when the thread is terminated. You do the thread's work inside the loop.
One of the recurring tasks that crops up is keeping track (carefully) of which code gets executed in the thread's context and which code gets executed by external threads in an external context, with synchronization objects guarding data shared by the threads. All basic thread programming stuff. One of the recurring annoyances is creating functions to allow external threads to submit or retrieve data, and moving data from public thread-safe memory objects to those private to the thread and back the other way.
I was wondering if anyone has ever seen a programming language that makes this simpler? Here's what I would love as a programming thread idiom. Let's take a Delphi TThread sub-class created for this discussion. Suppose I could mark class methods with one of three keywords like Private, PublicExecuteInAnyContext, or PublicExecuteInPrivateContext. Here's how they would work.
Private: Private methods would only execute in the thread's context. The compiler would automatically add code that would raise an exception if a code path led to that method being executed in a context outside of the host thread. (E.g. - "Error, attempt to execute method private to thread $AEB from thread $EE0").
PublicExecuteInAnyContext: methods marked as such could be called by the thread that owns the method and any external thread. Any data objects referenced in these methods would automatically be guarded with synchronization objects, with the option to override the default choice and supply your own. (Mutex or semaphore instead of Critical Section, etc.)
PublicExecuteInPrivateContext: Methods marked with this keyword would execute in the thread's context, but are callable by any thread. This option would allow for two strategies for dealing with calls to such methods by external threads:
1) Mode 1 - Block calling thread: the calling thread would block until the method returned. In other words, the compiler would automatically write code to make the calling thread block. Any parameters passed by the calling thread would be copied into variables private to the host thread. The method would not execute until the host thread received control. When the host thread exited the method, the calling thread would be released and would have any results returned by the method copied into its own private variable space.
2) Mode 2 - Do not block the calling thread: this would allow for the additional argument of a callback function. Any parameters passed to the method by an external thread would be copied into the thread's private variable space. The compiler would again hold the execution of the method until the host thread received control, but would let the calling thread continue without blocking. When the host thread completed execution of the method, if a callback function was specified, it would call that function ** but in the context of the original thread that called the method, not in the host thread context **.
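For comparison, here is a rough sketch (my own illustration, not an existing language feature) of how mode 2 can be approximated in today's Delphi with a thread-private work queue: external threads post closures without blocking, and each closure later runs in the worker thread's own context. TWorkerThread and Post are made-up names, and the locking that the hypothetical compiler would generate has to be written by hand.

// uses System.Classes, System.SysUtils, System.SyncObjs,
//      System.Generics.Collections, Winapi.Windows (for INFINITE)
type
  TWorkerThread = class(TThread)
  private
    FLock: TCriticalSection;        // guards FQueue, which is shared between contexts
    FWorkArrived: TEvent;           // signalled whenever new work is queued
    FQueue: TQueue<TProc>;
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Post(const AWork: TProc);   // safe to call from any thread
  end;

constructor TWorkerThread.Create;
begin
  inherited Create(True);           // created suspended; the owner calls Start
  FLock := TCriticalSection.Create;
  FWorkArrived := TEvent.Create(nil, False, False, '');
  FQueue := TQueue<TProc>.Create;
end;

destructor TWorkerThread.Destroy;
begin
  Terminate;
  FWorkArrived.SetEvent;            // wake Execute so it can notice Terminated
  inherited;                        // TThread.Destroy waits for the thread
  FQueue.Free;
  FWorkArrived.Free;
  FLock.Free;
end;

procedure TWorkerThread.Post(const AWork: TProc);
begin
  FLock.Enter;
  try
    FQueue.Enqueue(AWork);
  finally
    FLock.Leave;
  end;
  FWorkArrived.SetEvent;
end;

procedure TWorkerThread.Execute;
var
  Work: TProc;
begin
  while not Terminated do
  begin
    FWorkArrived.WaitFor(INFINITE);
    repeat
      FLock.Enter;
      try
        if FQueue.Count > 0 then
          Work := FQueue.Dequeue
        else
          Work := nil;
      finally
        FLock.Leave;
      end;
      if Assigned(Work) then
        Work();                     // runs in the worker thread's private context
    until not Assigned(Work);
  end;
end;

Mode 1 could be layered on top of this by having Post create a per-call event and wait on it until the posted closure has finished.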
A language that would provide this kind of automatic thread handling would, at least to me, be a lot more fun to do multithreading with than the way I have to do it now. Has anybody seen a programming language, or a module/hack for one of the mainstream languages, that provides this kind of multi-threading model?

I don't think it matches your description exactly, but Erlang provides one of the easiest concurrency models I've seen. It uses a share-nothing approach, which may or may not sound appealing to you, but I found it really interesting. It's also really easy to create distributed systems with it, if you ever have the need. Check out "Getting Started with Erlang", it's a great tutorial that covers almost all parts of the language.

Wanted to mention that LabVIEW really makes getting started with multithreading a breeze. I found an article on the Internet that explains this better than I could:
If you have programmed in a traditional, textual language before, the data-flow paradigm of LabVIEW can be somewhat hard to embrace. The data-flow paradigm stipulates that it does not matter where on the 2D surface of the block diagram a particular component is placed in relation to other components, but what other components it is wired to. A particular component-node does not execute until all its inputs are available; it executes, however, as soon as all its inputs are available, regardless of what else might be executing at the same time, that is - in parallel with potentially many other things.
You need extensive experience with multi-threading programming in a traditional language, however, to truly appreciate how easy it is to achieve multi-threading with LabVIEW and how naturally the parallelism arises - whether intentionally, or not. While LabVIEW does not magically dissolve all the challenges related to parallel processing, it certainly makes it considerably simpler to get started with it. In fact, in contrast with the traditionally sequential textual programming languages, it is the sequential execution that comes at a price in LabVIEW - parallelism is almost free.
Taken from http://saberrobotics.org/?id=34

Related

What's wrong with using TThread.Resume? [duplicate]

Long ago, when I started working with threads in Delphi, I was making threads start themselves by calling TThread.Resume at the end of their constructor, and still do, like so:
constructor TMyThread.Create(const ASomeParam: String);
begin
  inherited Create(True);
  try
    FSomeParam := ASomeParam;
    //Initialize some stuff here...
  finally
    Resume;
  end;
end;
Since then, Resume has been deprecated in favor of Start. However, Start can only be called from outside the thread, and cannot be called from within the constructor.
I have continued to design my threads using Resume as shown above, although I know it's been deprecated - only because I do not want to have to call Start from outside the thread. I find it a bit messy to have to call:
FMyThread := TMyThread.Create(SomeParamValue);
FMyThread.Start;
Question: What's the reason why this change was made? I mean, what is so wrong about using Resume that they want us to use Start instead?
EDIT After Sedat's answer, I guess this really depends on when, within the constructor, the thread actually begins executing.
The short and pithy answer is because the authors of the TThread class didn't trust developers to read or to understand the documentation. :)
Suspending and resuming a thread is a legitimate operation for only a very limited number of use cases. In fact, that limited number is essentially "one": Debuggers
Undesirables
The reason it is considered undesirable (to say the least) is that problems can arise if a thread is suspended while (for example) it owns a lock on some other synchronization object such as a mutex or semaphore etc.
These synchronization objects are specifically designed to ensure the safe operation of a thread with respect to other threads accessing shared resources, so interrupting and interfering with these mechanisms is likely to lead to problems.
A debugger needs a facility to directly suspend a thread irrespective of these mechanisms for surprisingly similar reasons.
Consider for example that a breakpoint involves an implicit (or you might even say explicit) "suspend" operation on a thread. If a debugger halts a thread when it reaches a break-point then it must also suspend all other threads in the process precisely because they will otherwise race ahead doing work that could interfere with many of the low level tasks that the debugger might be asked to then do.
The Strong Arm of the Debugger
A debugger cannot "inject" nice, polite synchronization objects and mechanisms to request that these other threads suspend themselves in a co-ordinated fashion with some other thread that has been unceremoniously halted (by a breakpoint). The debugger has no choice but to strong-arm the threads and this is precisely what the Suspend/Resume APIs are for.
They are for situations where you need to stop a thread "Right now. Whatever you are doing I don't care, just stop!". And later, to then say "OK, you can carry on now with whatever it was you were doing before, whatever it was.".
Well Behaved Threads Behave Well Toward Each Other
It should be patently obvious that this is not how a well-behaved thread interacts with other threads in normal operation (if it wishes to maintain a state of "normal" operation and not create all sorts of problems). In those normal cases a thread very much does and should care what those other threads are doing and ensure that it doesn't interfere, using appropriate synchronization techniques to co-ordinate with those other threads.
In those cases, the legitimate use case for resuming a thread is similarly reduced to just one: you have created and initialised a thread that you do not wish to run immediately, but to start execution at some later point in time under the control of some other thread.
But once that new thread has been started, subsequent synchronization with other threads must be achieved using those proper synchronization techniques, not the brute force of suspending it.
Start vs Suspend/Resume
Hence it was decided that Suspend/Resume had no real place on a general purpose thread class (people implementing debuggers could still call the Windows APIs directly) and instead a more appropriate "Start" mechanism was provided.
Hopefully it should be apparent that even though this Start mechanism employs the exact same API that the deprecated Resume method previously employed, the purpose is quite different.
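If the objection is purely the two-line call site, one hedged workaround (my own sketch, not part of the answer above; the names are illustrative) is a class function that wraps construction and Start:

type
  TMyThread = class(TThread)
  private
    FSomeParam: String;
  protected
    procedure Execute; override;
  public
    constructor Create(const ASomeParam: String);
    class function CreateAndStart(const ASomeParam: String): TMyThread;
  end;

constructor TMyThread.Create(const ASomeParam: String);
begin
  inherited Create(True);              // created suspended; no Resume here
  FSomeParam := ASomeParam;
end;

class function TMyThread.CreateAndStart(const ASomeParam: String): TMyThread;
begin
  Result := TMyThread.Create(ASomeParam);
  Result.Start;                        // Start is called from outside the constructor
end;

procedure TMyThread.Execute;
begin
  while not Terminated do
  begin
    // the thread's work goes here
  end;
end;

Call sites then stay a one-liner: FMyThread := TMyThread.CreateAndStart(SomeParamValue);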

Why is it wrong to access GUI elements from another thread? [duplicate]

In every GUI library I've used (Swing, Android, Windows Forms, WPF) there's this golden rule saying that one cannot access or modify GUI elements from another thread (other than the GUI thread). I suppose this rule applies to any GUI library. Breaking this rule will most likely cause the application to crash. However, I've been wondering recently, why is it so? I couldn't find any profound explanation. So what is the low-level explanation of this rule?
No piece of software is thread-safe unless it is explicitly designed and built to be so.
A GUI is a complex and stateful beast, making it thread-safe would be 'prohibitively expensive'.
There is a very simple reason for this. Usually UI functions are not thread-safe (as making them thread-safe would pessimize performance).
Of those you listed, some may be wrappers around existing mechanisms, so you have to answer the question indirectly via the underlying GUI framework. In case of multi-platform GUI frameworks like e.g. Qt, you will also have the lowest-common denominator that determines what is possible and what isn't.
Now, why is access to the GUI not thread-safe? In the cases I'm most familiar with (win32 and X11), accesses are often performed indirectly by sending requests and sometimes waiting for the according answer. This usually works in an atomic way, even across process boundaries, so that is not directly the cause of the problem. However, if you do so from multiple threads, the worst that can happen is that data is modified in an uncoordinated way. For example, if you read, modify and write the same widget from two threads, these operations might be interleaved, so that only one thread's modifications will actually be applied.
There are other reasons for not supporting cross-thread access:
In win32, the message queue is thread-local, which means that only the thread that created a window will actually find and be able to handle messages for that window. I guess this is a legacy from times when processes were single-threaded and the message queue was simply a global. Making it thread-local is the same approach as the one used for making errno thread-safe.
Another reason is that support objects are created inside a process that represent some GUI element. For example, MFC (on top of win32) uses a map from the OS's widget handle to the C++ object representing it. That map is stored in thread-local storage (which follows the thread-local message queue) and the access to the C++ objects is not guarded by a mutex. Accessing these objects from different threads is bad, not because they represent GUI objects but because they are not synchronized, simple as that.
If you think about modifying the structure of a widget tree (like e.g. the DOM tree in a browser), you either have very detailed knowledge of what other parts of the application are doing or you need to lock access to the whole tree before every operation just to be safe. Needless to say, this effectively prevents any parallel operations, so you can also take the next step and require all operations to come from one thread and thus save the whole multithreading overhead.
That said, I believe that Qt and C# (and probably others) actually do support some cross-thread operations. They will work some (more or less obscure) magic that forwards the calls to the GUI thread and forwards the results back to the calling thread again. In other words, they try to make the necessary inter-thread communication more convenient for the programmer, while retaining the efficiency and simplicity of the single-threaded GUI. This is not restricted to GUI handling though but rather a general approach, only that it is especially important for the GUI.
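As an illustration of that "forward the call to the GUI thread" idea in Delphi terms (TWorker, Form1 and ProgressBar1 are assumed names; this relies on the anonymous-method overload of TThread.Queue available in recent Delphi versions):

procedure TWorker.ReportProgress(AValue: Integer);
begin
  TThread.Queue(nil,
    procedure
    begin
      // Executed later in the main (GUI) thread, where touching the VCL is safe.
      Form1.ProgressBar1.Position := AValue;
    end);
end;

procedure TWorker.Execute;
var
  I: Integer;
begin
  for I := 1 to 100 do
  begin
    if Terminated then Exit;
    Sleep(50);                          // stand-in for real work
    ReportProgress(I);                  // each call captures its own AValue
  end;
end;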
As far as I know, that is simply not true: every object in Java may be accessed concurrently, as long as thread-safe techniques are correctly applied. The fact is that Java Swing objects are mostly not prepared for multithreading, so you'll have to perform external synchronization.
There are several instances in which you need several threads to interoperate in a GUI: Games, visual effects, user events...
More information about the GUI and multithreading:
https://docs.oracle.com/javase/tutorial/uiswing/concurrency/dispatch.html

Thread in Tcl is not really working as C threads

In the Tclsh thread package, a created thread does not share variables and namespaces with the main thread, which is quite different from the C implementation of threads. Why this contradiction in the Tcl thread design? Or am I missing something in the code? Do all scripting languages have a similar threading design?
Below is a quote from the Tcl thread documentation PDF, for thread::create:
... All other extensions must be loaded explicitly into each thread that needs to use them.
It's not a contradiction. It's just a different model. It has its advantages and its disadvantages. The key disadvantage you already know: scripts and variables are not shared (unless you take special steps). The key advantage is that the Tcl implementation has no big global locks, and that makes it much easier to use multi-core hardware effectively and means that there are very few gotchas when doing so. Contrast this with the Python Global Interpreter Lock, which is necessary because Python uses the C-like global shared state model.
At the low level, Tcl's threading is strongly isolated, with plenty of thread-specific data behind the scenes so that locks can be avoided (including in the memory management a lot of the time, which would otherwise be a key bottleneck). Inter-thread communications are based on top of Tcl's built-in event queueing system; when two threads communicate, one sends a message and (optionally) waits for the other to respond, with the receiver getting the message placed on its internal queue of events until it is in a state that is ready to handle it. This does slow down inter-thread communications, but is much faster when they're not communicating.
It is actually similar to one way you'd use threads in C: message passing. Of course, you can use threads in other ways as well in C. But message passing is one way to completely avoid deadlocks since the semaphores/mutexes can be completely managed around the message queues and you don't need them anywhere else in your code.
This is in fact what Tcl implements at the C level. And it is in fact why it was done this way: to avoid the need for semaphores (to prevent the user from deadlocking himself).
Most other scripting languages simply provide a thin wrapper around pthreads, so you can deadlock yourself if you're not careful. I remember that way back in the early 2000s the general advice for threaded programming in C and most other languages was to implement a message passing architecture to avoid deadlocks.
Since Tcl generally takes the view that the API exposed at the script level should be high level, the threading support was implemented with a message passing architecture built in. Of course, there is also the convenient fact that this avoids having to make the Tcl interpreter thread-safe (and thus introducing mutexes all over the interpreter source code).
Making interpreters thread-safe is non-trivial. Some languages suffer mysterious crashes to this day when running threaded applications. Some languages took over a decade to iron out all their threading bugs. Tcl just decided not to try. The Tcl interpreter is small enough and spins up quickly, so the solution was simply to run one interpreter per thread.

What are some of the core principles needed to master multi-threading using Delphi?

I am kind of new to programming in general (about 8 months on and off in Delphi and a little Python here and there) and I am in the process of buying some books.
I am interested in learning about concurrent programming and building multi-threaded apps using Delphi. Whenever I do a search for "multithreading Delphi" or "Delphi multithreading tutorial" I seem to get conflicting results as some of the stuff is about using certain libraries (Omnithread library) and other stuff seems to be more geared towards programmers with more experience.
I have studied quite a few books on Delphi and for the most part they seem to kind of skim the surface and not really go into depth on the subject. I have a friend who is a programmer (he uses c++) who recommends I learn what is actually going on with the underlying system when using threads as opposed to jumping into how to actually implement them in my programs first.
On Amazon.com there are quite a few books on concurrent programming but none of them seem to be made with Delphi in mind.
Basically I need to know what are the main things I should be focused on learning before jumping into using threads, if I can/should attempt to learn them using books that are not specifically aimed at Delphi developers (don't want to confuse myself reading books with a bunch of code examples in other languages right now) and if there are any reliable resources/books on the subject that anyone here could recommend.
Short answer
Go to the OmniThreadLibrary site, install it, and read everything there.
Longer answer
You asked for some info so here goes:
Here's some stuff to read:
http://delphi.about.com/od/kbthread/Threading_in_Delphi.htm
I personally like: Multithreading - The Delphi Way.
(It's old, but the basics still apply)
Basic principles:
Your basic VCL application is single threaded.
The VCL was not built with multi-threading in mind; thread support is bolted on, and most VCL components are not thread-safe.
The way this is done is by making threads wait on each other, so if you want a fast application, be careful when and how you communicate with the VCL.
Communicating with the VCL
Your basic thread is a descendant of TThread with its own members.
These are per-thread variables; as long as you only use these you won't have any problems.
My favorite way of communicating with the main window is by using custom Windows messages and PostMessage to communicate asynchronously.
If you want to communicate synchronously you will need to use a critical section or the Synchronize method.
See this article for example: http://edn.embarcadero.com/article/22411
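A minimal sketch of that custom-message approach (WM_WORKER_PROGRESS, TMainForm, TWorker and FMainWindowHandle are illustrative names, not from the article):

// uses Winapi.Windows, Winapi.Messages, Vcl.Forms, Vcl.ComCtrls
const
  WM_WORKER_PROGRESS = WM_USER + 1;

type
  TMainForm = class(TForm)
    ProgressBar1: TProgressBar;
  private
    procedure WMWorkerProgress(var Msg: TMessage); message WM_WORKER_PROGRESS;
  end;

procedure TMainForm.WMWorkerProgress(var Msg: TMessage);
begin
  // Runs in the main thread, so it is safe to touch VCL controls here.
  ProgressBar1.Position := Integer(Msg.WParam);
end;

// In the worker thread (FMainWindowHandle is an HWND captured in the main
// thread before the worker starts); PostMessage returns immediately:
procedure TWorker.ReportProgress(AValue: Integer);
begin
  PostMessage(FMainWindowHandle, WM_WORKER_PROGRESS, WPARAM(AValue), 0);
end;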
Communicating between threads
This is where things get tricky, because you can run into all sorts of hard-to-debug synchronization issues.
My advice: use OmnithreadLibrary, also see this question: Cross thread communication in Delphi
Some people will tell you that reading and writing integers is atomic on x86, but this is not 100% true, so don't rely on that naively; you'll most likely get subtle issues wrong and end up with hard-to-debug code.
Starting and stopping threads
In old Delphi versions TThread.Suspend and TThread.Resume were used; however, these are no longer recommended and should be avoided (in the context of thread synchronization).
See this question: With what delphi Code should I replace my calls to deprecated TThread method Suspend?
Also have a look at this question although the answers are more vague: TThread.resume is deprecated in Delphi-2010 what should be used in place?
You can use suspend and resume to pause and restart threads, just don't use them for thread synchronization.
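If what you actually want is pause/resume behaviour, a common alternative that avoids Suspend/Resume entirely (my own sketch; TPausableThread is an assumed name) is to have the thread wait on a manual-reset event at a safe point in its loop:

type
  TPausableThread = class(TThread)
  private
    FRunGate: TEvent;                   // signalled = run, reset = paused
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Pause;
    procedure Unpause;
  end;

constructor TPausableThread.Create;
begin
  inherited Create(True);               // the owner starts it with Start
  FRunGate := TEvent.Create(nil, True, True, '');  // manual-reset, initially "run"
end;

destructor TPausableThread.Destroy;
begin
  Terminate;
  FRunGate.SetEvent;                    // make sure Execute is not stuck waiting
  inherited;
  FRunGate.Free;
end;

procedure TPausableThread.Pause;
begin
  FRunGate.ResetEvent;
end;

procedure TPausableThread.Unpause;
begin
  FRunGate.SetEvent;
end;

procedure TPausableThread.Execute;
begin
  while not Terminated do
  begin
    FRunGate.WaitFor(INFINITE);         // blocks here only while paused
    if Terminated then Exit;
    // do one unit of work, then loop back and check the gate again
  end;
end;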
Performance issues
Putting WaitFor, Synchronize, etc. in your thread effectively stops your thread until the action it's waiting for has occurred.
In my opinion this defeats a big purpose of threads: speed
So if you want to be fast you'll have to get creative.
A long time ago I wrote an application called Life32, a display program for Conway's Game of Life that can generate patterns very fast (millions of generations per second on small patterns).
It used a separate thread for calculation and a separate thread for display.
Displaying is a very slow operation that does not need to be done every generation.
The generation thread included display code that removes stuff from the display (when in view) and the display thread simply sets a boolean that tells the generation thread to also display the added stuff.
The generation code writes directly to the video memory using DirectX, no VCL or Windows calls required and no synchronization of any kind.
If you move the main window the application will keep on displaying on the old location until you pause the generation, thereby stopping the generation thread, at which point it's safe to update the thread variables.
If the threads are not 100% synchronized the display happens a generation too late, no big deal.
It also features a custom memory manager that avoids the thread-safe slowness that's in the standard memory manager.
By avoiding any and all forms of thread synchronization I was able to eliminate the overhead from 90%+ (on smallish patterns) to 0.
You really shouldn't get me started on this, but anyway, my suggestions:
Try hard to not use the following:
TThread.Synchronize
TThread.WaitFor
TThread.OnTerminate
TThread.Suspend
TThread.Resume, (except at the end of constructors in some Delphi versions)
TApplication.ProcessMessages
Use the PostMessage API to communicate to the main thread - post objects in lParam, say (a sketch of this follows after this list).
Use a producer-consumer queue to communicate to secondary threads, (not a Windows message queue - only one thread can wait on a WMQ, making thread pooling impossible).
Do not write directly from one thread to fields in another - use message-passing.
Try very hard indeed to create threads at application startup and to not explicitly terminate them at all.
Do use object pools instead of continually creating and freeing objects for inter-thread communication.
The result will be an app that performs well, does not leak, does not deadlock and shuts down immediately when you close the main form.
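To make the "post objects in lParam" suggestion concrete, here is a sketch (WM_DATA_READY, TResultData, TWorker and FMainWindowHandle are illustrative names): ownership of the object passes to the main thread, which frees it once it has been consumed.

const
  WM_DATA_READY = WM_USER + 2;

type
  TResultData = class
    Text: string;
  end;

// Worker thread side:
procedure TWorker.PostResult(const AText: string);
var
  Data: TResultData;
begin
  Data := TResultData.Create;
  Data.Text := AText;
  if not PostMessage(FMainWindowHandle, WM_DATA_READY, 0, LPARAM(Data)) then
    Data.Free;                          // message was not queued; avoid a leak
end;

// Main form handler (declared with: message WM_DATA_READY):
procedure TMainForm.WMDataReady(var Msg: TMessage);
var
  Data: TResultData;
begin
  Data := TResultData(Msg.LParam);
  try
    Memo1.Lines.Add(Data.Text);         // safe: this runs in the main thread
  finally
    Data.Free;
  end;
end;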
What Delphi should have had built-in:
TWinControl.PostObject(anObject:TObject) and TWinControl.OnObjectRx(anObject:TObject) - methods to post objects from a secondary thread and fire a main-thread event with them. A trivial PostMessage wrap to replace the poor performing, deadlock-generating, continually-rewritten TThread.Synchronize.
A simple, unbounded producer-consumer class that actually works for multiple producers/consumers. This is, like, 20 lines of TObjectQueue descendant but Borland/Embarcadero could not manage it. If you have object pools, there is no need for complex bounded queues.
A simple thread-safe, blocking, object pool class - again, really simple with Delphi since it has class variables and virtual constructors, e.g. creating a lot of buffer objects:
myPool := TObjectPool.Create(1024, TMyBuffer);
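Along those lines, a rough sketch of such a pool (my own illustration; TPoolable, TObjectPool, Get and Put are assumed names) using a semaphore to block Get while the pool is empty:

// uses System.SyncObjs, System.Generics.Collections, Winapi.Windows
type
  // Pool items derive from a small base class so the constructor can be
  // called virtually through a class reference (TObject.Create is not virtual).
  TPoolable = class
  public
    constructor Create; virtual;
  end;
  TPoolableClass = class of TPoolable;

  TObjectPool = class
  private
    FLock: TCriticalSection;
    FAvailable: TSemaphore;             // counts pooled objects
    FItems: TStack<TPoolable>;
  public
    constructor Create(ACount: Integer; AClass: TPoolableClass);
    destructor Destroy; override;
    function Get: TPoolable;            // blocks until an object is available
    procedure Put(AObject: TPoolable);
  end;

constructor TPoolable.Create;
begin
  inherited Create;
end;

constructor TObjectPool.Create(ACount: Integer; AClass: TPoolableClass);
var
  I: Integer;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FAvailable := TSemaphore.Create(nil, ACount, ACount, '');
  FItems := TStack<TPoolable>.Create;
  for I := 1 to ACount do
    FItems.Push(AClass.Create);
end;

destructor TObjectPool.Destroy;
begin
  while FItems.Count > 0 do
    FItems.Pop.Free;
  FItems.Free;
  FAvailable.Free;
  FLock.Free;
  inherited;
end;

function TObjectPool.Get: TPoolable;
begin
  FAvailable.WaitFor(INFINITE);         // waits while the pool is empty
  FLock.Enter;
  try
    Result := FItems.Pop;
  finally
    FLock.Leave;
  end;
end;

procedure TObjectPool.Put(AObject: TPoolable);
begin
  FLock.Enter;
  try
    FItems.Push(AObject);
  finally
    FLock.Leave;
  end;
  FAvailable.Release;
end;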
I thought it might be useful to actually try to compile a list of things that one should know about multithreading.
Synchronization primitives: mutexes, semaphores, monitors
Delphi implementations of synchronization primitives: TCriticalSection, TMREWSync, TEvent
Atomic operations: some knowledge about what operations are atomic and what not (discussed in this question)
Windows API multithreading capabilities: InterlockedIncrement, InterlockedExchange, ...
OmniThreadLibrary
Of course this is far from complete. I made this community wiki so that everyone can edit.
Appending to all the other answers I strongly suggest reading a book like "Modern Operating Systems" or any other one going into multithreading details.
This may seem like overkill, but it will make you a better programmer and you definitely get a very good insight into threading/processes in an abstract way - so you learn why and how to use critical sections or semaphores on examples (like the dining philosophers problem or the sleeping barber problem).

How to define threadsafe?

Threadsafe is a term that is thrown around in documentation; however, there is seldom an explanation of what it means, especially in a language that is understandable to someone learning threading for the first time.
So how do you explain Threadsafe code to someone new to threading?
My ideas for options at the moment are:
A list of what makes code thread safe vs. thread unsafe
The book definition
A useful metaphor
Multithreading leads to non-deterministic execution - You don't know exactly when a certain piece of parallel code is run.
Given that, this wonderful multithreading tutorial defines thread safety like this:
Thread-safe code is code which has no indeterminacy in the face of any multithreading scenario. Thread-safety is achieved primarily with locking, and by reducing the possibilities for interaction between threads.
This means no matter how the threads are run in particular, the behaviour is always well-defined (and therefore free from race conditions).
Eric Lippert says:
When I'm asked "is this code thread safe?" I always have to push back and ask "what are the exact threading scenarios you are concerned about?" and "exactly what is correct behaviour of the object in every one of those scenarios?".
It is unhelpful to say that code is "thread safe" without somehow communicating what undesirable behaviors the utilized thread safety mechanisms do and do not prevent.
G'day,
A good place to start is to have a read of the POSIX paper on thread safety.
Edit: Just the first few paragraphs give you a quick overview of thread safety and re-entrant code.
HTH
cheers,
I may be wrong, but one of the criteria for being thread safe is to use local variables only. Using global variables can have undefined results if the same function is called from different threads.
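As a concrete illustration of that point (my own sketch, not from the answer above): a shared global counter incremented from several threads. The plain Inc loses updates because its read-modify-write is not atomic; the interlocked form is thread safe.

var
  GCounter: Integer = 0;

procedure UnsafeIncrementLoop;
var
  I: Integer;
begin
  for I := 1 to 1000000 do
    Inc(GCounter);                      // race: two threads can read the same old value
end;

procedure SafeIncrementLoop;
var
  I: Integer;
begin
  for I := 1 to 1000000 do
    AtomicIncrement(GCounter);          // InterlockedIncrement on older Delphi versions
end;

Run two threads over UnsafeIncrementLoop and the final count will usually fall well short of 2,000,000; with SafeIncrementLoop it is exact.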
A thread safe function / object (hereafter referred to as an object) is an object which is designed to support multiple concurrent calls. This can be achieved by serialization of the parallel requests or some sort of support for intertwined calls.
Essentially, if the object safely supports concurrent requests (from multiple threads), it is thread safe. If it is not thread safe, multiple concurrent calls could corrupt its state.
Consider a log book in a hotel. If a person is writing in the book and another person comes along and starts to concurrently write his message, the end result will be a mix of both messages. This can also be demonstrated by several threads writing to an output stream.
I would say to understand thread safe, start with understanding difference between thread safe function and reentrant function.
Please check The difference between thread-safety and re-entrancy for details.
Thread-safe code is code that won't fail because the same data was changed in two places at once. Thread safe is a smaller concept than concurrency-safe, because it presumes that it was in fact two threads of the same program, rather than (say) hardware modifying data, or the OS.
A particularly valuable aspect of the term is that it lies on a spectrum of concurrent behavior, where thread safe is the strongest, interrupt safe is a weaker constraint than thread safe, and reentrant even weaker.
In the case of thread safe, this means that the code in question conforms to a consistent API and makes use of resources such that other code in a different thread (such as another, concurrent instance of itself) will not cause an inconsistency, so long as it also conforms to the same use pattern. The use pattern MUST be specified for any reasonable expectation of thread safety to be had.
The interrupt safe constraint doesn't normally appear in modern userland code, because the operating system does a pretty good job of hiding this, however, in kernel mode this is pretty important. This means that the code will complete successfully, even if an interrupt is triggered during its execution.
The last one, reentrant, is almost guaranteed with all modern languages, in and out of userland, and it just means that a section of code may be entered more than once, even if an earlier entry has not yet proceeded out of that section of code. This can happen in the case of recursive function calls, for instance. It's very easy to violate the language-provided reentrancy by accessing a shared global state variable in the non-reentrant code.
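A small illustration of that last point (assumed names, my own sketch): the first routine keeps its working state in a unit-global variable, so a second concurrent or nested call can clobber it mid-flight; the second keeps everything local and is reentrant.

var
  GBuffer: string;                      // shared state: not reentrant, not thread safe

function GreetNonReentrant(const AName: string): string;
begin
  GBuffer := 'Hello, ' + AName;         // another call can overwrite GBuffer here
  Result := GBuffer;
end;

function GreetReentrant(const AName: string): string;
var
  Buffer: string;                       // local state only
begin
  Buffer := 'Hello, ' + AName;
  Result := Buffer;
end;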
