Are details of the MonoTouch garbage collection published anywhere? I am interested in knowing how it works on the iPhone. I'd like to know:
How often it runs, and whether there are any constraints that might stop it from running.
Whether it is completely thread safe, so that objects passed from one thread to another are handled properly, or if there are constraints we should be aware of.
Whether there is any benefit in manually calling the garbage collector before initiating an action that will use memory.
How it handles low-memory notifications and running out of memory.
Such information would help us understand the stacks and thread information that we have from application logs.
[Edit] I've now found the information at Hans Boehm's site, but that is very generic and lists the various options and choices the implementer has, including how threads are handled. What I am after here is MonoTouch-specific information.
The garbage collector is the same one used in Mono, the source code is here:
https://github.com/mono/mono/tree/master/libgc
It is completely thread safe and multi-core safe, which means that multiple threads can allocate objects and the collector can run in the presence of multiple threads.
That being said, your question is a little bit tricky, because you are not really asking about the garbage collector when you say "so objects passed from one thread to another are handled properly, or if there are constraints that one should be aware of".
That is not really a garbage collector question, but an API question, and this depends vastly on the API that you are calling. The rules are the same as for .NET: instance methods are not thread safe, while static methods are thread safe by default, unless the API explicitly states otherwise.
Now, UI APIs like UIKit or CoreGraphics are no different from any other GUI toolkit available in the world. UI toolkits are not thread safe, so you cannot assume that a UILabel created on the main thread can safely be accessed from another thread. That is why you have to call "BeginInvokeOnMainThread" on an NSObject to ensure that any methods you call on UIKit objects are executed only on the main thread.
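As a minimal sketch of that pattern in MonoTouch (the controller, label, and DoExpensiveWork method are invented for illustration):

```csharp
using System.Drawing;
using System.Threading;
using MonoTouch.UIKit;

public class StatusViewController : UIViewController
{
    UILabel statusLabel;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        statusLabel = new UILabel(new RectangleF(20, 60, 280, 40));
        View.AddSubview(statusLabel);
        StartBackgroundWork();
    }

    void StartBackgroundWork()
    {
        new Thread(() =>
        {
            string result = DoExpensiveWork();   // hypothetical long-running call

            // UIKit objects must only be touched on the main thread, so
            // marshal the UI update back before assigning to the label.
            BeginInvokeOnMainThread(() => statusLabel.Text = result);
        }).Start();
    }

    string DoExpensiveWork()
    {
        Thread.Sleep(1000);   // stand-in for real work
        return "done";
    }
}
```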
That is just one example.
Check http://monotouch.net/Documentation/Threading for more information
Low memory notifications are delivered by the operating system to your UIViewControllers, not to Mono's GC, so you need to take appropriate action in those cases.
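For example, a UIViewController can override DidReceiveMemoryWarning and drop anything it can recreate later (the image cache here is a hypothetical example):

```csharp
using System.Collections.Generic;
using MonoTouch.UIKit;

public class GalleryViewController : UIViewController
{
    // Hypothetical cache of decoded images owned by this controller.
    readonly Dictionary<string, UIImage> imageCache = new Dictionary<string, UIImage>();

    public override void DidReceiveMemoryWarning()
    {
        base.DidReceiveMemoryWarning();

        // The OS is asking us to shed memory. Mono's GC will not do this for
        // us, so release caches and other recreatable resources here.
        imageCache.Clear();
    }
}
```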
The SDL documentation for threading states:
NOTE: You should not expect to be able to create a window, render, or receive events on any thread other than the main one.
The glfw documentation for glfwCreateWindow states:
Thread safety: This function must only be called from the main thread.
I have read about issues regarding the glut library from people who have tried to run the windowing functions on a second thread.
I could go on with these examples, but I think you get the point I'm trying to make. A lot of cross-platform libraries don't allow you to create a window on a background thread.
Now, two of the libraries I mentioned are designed with OpenGL in mind, and I get that OpenGL is not designed for multithreading and you shouldn't do rendering on multiple threads. That's fine. The thing that I don't understand is why the rendering thread (the single thread that does all the rendering) has to be the main one of the application.
As far as I know, neither Windows nor Linux nor macOS imposes any restrictions on which threads can create windows. I do know that windows have affinity to the thread that creates them (only that thread can receive input for them, etc.); but still, that thread does not need to be the main one.
So, I have three questions:
Why do these libraries impose such restrictions? Is it because there is some obscure operating system that mandates that all windows be created on the main thread, and so all operating systems have to pay the price? (Or did I get it wrong?)
Why do we have this imposition that you should not do UI on a background thread? What do threads have to do with windowing, anyways? Is it not a bad abstraction to tie your logic to a specific thread?
If this is what we have and can't get rid of it, how do I overcome this limitation? Do I make a ThreadManager class and yield the main thread to it so it can schedule what needs to be done in the main thread and what can be done in a background thread?
It would be amazing if someone could shed some light on this topic. All the advice I see thrown around is to just do input and UI both on the main thread. But that's just an arbitrary restriction if there isn't a technical reason why it isn't possible to do otherwise.
PS: Please note that I am looking for a cross platform solution. If it can't be found, I'll stick to doing UI on the main thread.
While I'm not quite up to date on the latest releases of macOS/iOS, as of 2020 Apple's UIKit and AppKit were not thread safe. Only one thread can safely change UI objects, and unless you go to a lot of trouble, that's going to be the main thread. Even if you do go to all the trouble of closing the window manager connection and so on, you're still going to end up with only one thread doing UI. So the limitation still applies on at least one major system.
While it's possibly unsafe to directly modify the contents of a window from any other thread, you can do software rendering to an offscreen bitmap image from any thread you like, taking as long as you like, and then hand the finished image over to the main thread for display. (That "possibly" is why cross-platform toolkits disallow it or tell you not to do it: sometimes it might work, but you can't say why, or even that it will keep working.)
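A rough sketch of that hand-over, using WinForms and C# purely for illustration (the form, PictureBox, and rendering code are invented for the example):

```csharp
using System;
using System.Drawing;
using System.Threading;
using System.Windows.Forms;

public class OffscreenDemo : Form
{
    readonly PictureBox view = new PictureBox { Dock = DockStyle.Fill };

    public OffscreenDemo()
    {
        Controls.Add(view);
    }

    protected override void OnShown(EventArgs e)
    {
        base.OnShown(e);
        new Thread(RenderLoop) { IsBackground = true }.Start();
    }

    void RenderLoop()
    {
        // Expensive software rendering, entirely off the UI thread.
        var image = new Bitmap(640, 480);
        using (var g = Graphics.FromImage(image))
            g.Clear(Color.CornflowerBlue);

        // Only the hand-over touches the UI, and it runs on the UI thread.
        BeginInvoke(new Action(() => view.Image = image));
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new OffscreenDemo());
    }
}
```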
With Vulkan and DirectX 12 (and I think but am not sure Metal) you can render from multiple threads. Woohoo! Of course now you have to figure out how to do all the coordination and locking and cross-synching without making the whole thing slower than single threaded, but at least you have the option to try.
Adding to the excellent answer by Matt, with Qt programs you can use invokeMethod and postEvent to have background threads update the UI safely.
It's highly unlikely that any of these frameworks actually care about which thread is the 'main thread', i.e., the one that called the entry point to your code. The real restriction is that you have to do all your UI work on the thread that initialized the framework, i.e., the one that called SDL_Init in your case. You will usually do this in your main thread. Why not?
Multithreaded code is difficult to write and difficult to understand, and in UI work, introducing multithreading makes it difficult to reason about when things happen. A UI is a very stateful thing, and when you're writing UI code, you usually need to have a very good idea about what has happened already and what will happen next -- those things are often undefined when multithreading is involved. Also, users are slow, so multithreading the UI is not really necessary for performance in normal cases. Because of all this, making a UI framework thread-safe isn't usually considered beneficial. (multithreading compute-intensive parts of your rendering pipeline is a different thing)
Single-threaded UI frameworks have a dispatcher of some sort that you can use to enqueue activities that should happen on the main thread when it next has time. In SDL, you use SDL_PushEvent for this. You can call that from any thread.
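The same idea, sketched in C# rather than SDL (MainThreadDispatcher is an invented name; it only shows the shape of such a dispatcher, not any particular library's API):

```csharp
using System;
using System.Collections.Concurrent;

static class MainThreadDispatcher
{
    static readonly ConcurrentQueue<Action> queue = new ConcurrentQueue<Action>();

    // Safe to call from any thread.
    public static void Post(Action action)
    {
        queue.Enqueue(action);
    }

    // Called once per iteration of the main/UI loop, on the main thread only.
    public static void Drain()
    {
        Action action;
        while (queue.TryDequeue(out action))
            action();
    }
}
```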
This question is not about "should I block my main thread" (it is generally a bad idea to block a main/STA/UI thread for messaging and UI operations), but about why WinRT C++/CX doesn't allow any blocking of the main thread at all, compared to iOS, Android, and even C# (although await doesn't actually block).
Is there a fundamental difference in the way Android or iOS block the main thread? Why is WinRT the only platform that doesn't allow any form of blocking synchronization?
EDIT: I'm aware of co_await in VS2015, but due to backward compatibility my company still uses VS2013.
Big topic, covered at break-neck speed. This continues a tradition that started a long time ago in COM; WinRT inherits just about all of the same concepts, although it did get cleaned up considerably. The fundamental design consideration is that thread safety is one of the most difficult aspects of library design, and that any library has classes that are fundamentally thread-unsafe; if the consumer of the library is not aware of it, he'll easily create a nasty bug that is excessively difficult to diagnose.
This is an ugly problem for a company that relies on a closed-source business model and a 1-800 support phone number. Such phone calls can be very unpleasant, threading bugs invariably require telling a programmer "you can't do that, you'll have to rewrite your code". Rarely an acceptable answer, not at SO either :)
So thread safety is not treated as an afterthought that the programmer needs to get right by himself. A WinRT class explicitly specifies whether or not it is thread-safe (the ThreadingModel attribute) and, if it is used in an unsafe way anyway, what should happen to make it thread-safe (the MarshallingBehavior attribute). This is mostly a runtime detail, but do note how compiler warning C4451 can even make these attributes produce a compile-time diagnostic.
The "used in an unsafe way anyway" clause is what you are asking about. WinRT can make a class that is not thread-safe safe by itself but there is one detail that it can't figure out by itself. To make it safe, it needs to know whether the thread that creates an object of the class can support the operating system provided way to make the object safe. And if the thread doesn't then the OS has to create a thread by itself to give the object a safe home. Solves the problem but that is pretty inefficient since every method call has to be marshalled.
You have to make a promise, cross-your-heart-hope-to-die style. The operating system can avoid creating a thread if your thread solves the producer-consumer problem. Better known as "pumping the message loop" in Windows vernacular. Something the OS can't figure out by itself since you typically don't start to pump until after you created a thread-unsafe object.
And there is just one more promise you make: you also promise that the consumer doesn't block and stop accepting messages from the message queue. Blocking is bad; the implication is that worker threads can't continue while the consumer is blocked. And worse, much worse, blocking is pretty likely to cause deadlock, the threading problem that's always a significant risk when there are two synchronization objects involved: the one you block on, and the other hidden inside the OS, waiting for the call to complete. Diagnosing a deadlock when you can't see the state of one of the sync objects that caused it is generally unpleasant.
Emphasis on promise: there isn't anything the OS can do if you break the promise and block anyway. It will let you, and it doesn't necessarily have to be fatal; often it causes nothing more than an unresponsive UI. It is different in managed code that runs on the CLR: if it blocks, then the CLR will pump. That mostly works, but it can cause some pretty bewildering re-entrancy bugs. That mechanism doesn't exist in native C++. Deadlock isn't actually that hard to diagnose, but you do have to find the thread that's waiting for the STA thread to get back to business. Its stack trace tells the tale.
Do beware of these attributes when you use C++/CX. Unless you explicitly provide them, you'll create a class that's always considered thread-safe (ThreadingModel = Both, MarshallingType = Standard). An aspect that is not often actually tested; it will be the client code that ruins that expectation. Well, you'll get a phone call and you'll have to give an unpleasant answer :) Also note that OSX and Android are hardly the only examples of runtime systems that don't provide the WinRT guarantees; the .NET Framework does not either.
In a nutshell: because the policy for WinRT apps was "thou shalt not block the UI thread", and the C++ PPL runtime enforces this policy while the .NET runtime does not; look at ppltasks.h and search for "prevent Windows Runtime STA threads from blocking the UI". (Note that although .NET doesn't enforce this policy, it lets you accidentally deadlock yourself instead.)
If you have to block the thread, there are ways to do it using Win32 IPC mechanisms (like waiting on an event that will be signaled by your completion handler) but the general guidance is still "don't do that" because it has a poor UX.
This question already has answers here:
Why are most UI frameworks single threaded?
In every GUI library I've used (Swing, Android, Windows Forms, WPF) there's this golden rule saying that one cannot access or modify GUI elements from another thread (other than the GUI thread). I suppose this rule applies to any GUI library. Breaking this rule will most likely cause the application to crash. However, I've been wondering recently: why is it so? I couldn't find any profound explanation. So what is the low-level explanation of this rule?
No piece of software is thread-safe unless it is explicitly designed and built to be so.
A GUI is a complex and stateful beast, making it thread-safe would be 'prohibitively expensive'.
There is a very simple reason for this. Usually UI functions are not thread-safe (as making them thread-safe would pessimize performance).
Of those you listed, some may be wrappers around existing mechanisms, so you have to answer the question indirectly via the underlying GUI framework. In case of multi-platform GUI frameworks like e.g. Qt, you will also have the lowest-common denominator that determines what is possible and what isn't.
Now, why is access to the GUI not thread-safe? In the cases where I'm most familiar with (win32 and X11), accesses are often performed indirectly by sending requests and sometimes waiting for the according answer. This usually works in an atomic way, even across process boundaries, so that is not directly cause of the problem. However, if you do so from multiple threads, the worst that can happen is that data is modified in an uncoordinated way. For example, if you read, modify and write the same widget from two threads, these operations might be interleaved, so that only one thread's modifications will actually be applied.
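A small, framework-free C# illustration of that interleaving (the title field stands in for widget state, and the sleep just widens the race window):

```csharp
using System;
using System.Threading;

class LostUpdateDemo
{
    static string title = "";

    static void Main()
    {
        var t1 = new Thread(() => Append(" [thread 1]"));
        var t2 = new Thread(() => Append(" [thread 2]"));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();

        // Depending on interleaving, one suffix may be missing: both threads
        // read the same old value, and the second write silently wins.
        Console.WriteLine(title);
    }

    static void Append(string suffix)
    {
        string current = title;      // read
        Thread.Sleep(10);            // widen the race window
        title = current + suffix;    // modify and write back
    }
}
```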
There are other reasons for not supporting cross-thread access:
In win32, the queue with the messages is thread-local, which means that only the thread that created a window will actually find and be able to handle messages for that window. I guess this is a legacy from times where processes were single-threaded and the message queue was simply a global. Making it thread-local is the same approach as the one used for making errno thread-safe.
Another reason is that support objects are created inside a process to represent GUI elements. For example, MFC (on top of win32) uses a map from the OS's widget handle to the C++ object representing that widget. That map is stored in thread-local storage (which follows from the thread-local message queue), and access to the C++ objects is not guarded by a mutex. Accessing these objects from different threads is bad, not because they represent GUI objects, but because they are not synchronized, simple as that.
If you think about modifying the structure of a widget tree (like e.g. the DOM tree in a browser), you either have very detailed knowledge of what other parts of the application are doing or you need to lock access to the whole tree before every operation just to be safe. Needless to say, this effectively prevents any parallel operations, so you can also take the next step and require all operations to come from one thread and thus save the whole multithreading overhead.
That said, I believe that Qt and C# (and probably others) actually do support some cross-thread operations. They will work some (more or less obscure) magic that forwards the calls to the GUI thread and forwards the results back to the calling thread again. In other words, they try to make the necessary inter-thread communication more convenient for the programmer, while retaining the efficiency and simplicity of the single-threaded GUI. This is not restricted to GUI handling though but rather a general approach, only that it is especially important for the GUI.
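For instance, in WinForms (C#) a worker thread can use Control.Invoke to run a delegate on the GUI thread and get its result back; the form and control names below are invented for the example:

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

public class SearchForm : Form
{
    readonly TextBox queryBox = new TextBox { Dock = DockStyle.Top };

    public SearchForm()
    {
        Controls.Add(queryBox);
    }

    protected override void OnShown(EventArgs e)
    {
        base.OnShown(e);
        new Thread(Worker) { IsBackground = true }.Start();
    }

    void Worker()
    {
        // Invoke blocks this worker until the GUI thread has run the delegate,
        // then hands the return value back to the calling thread.
        string query = (string)Invoke(new Func<string>(() => queryBox.Text));

        // ... use 'query' for long-running work off the GUI thread ...
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new SearchForm());
    }
}
```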
As far as I know, that is simply not true: every object in Java may be accessed concurrently, as long as thread-safe techniques are correctly applied. The fact is that Java Swing objects are mostly not prepared for multithreading, so you'll have to perform external synchronization.
There are several instances in which you need several threads to interoperate in a GUI: Games, visual effects, user events...
More information about the GUI and multithreading:
https://docs.oracle.com/javase/tutorial/uiswing/concurrency/dispatch.html
I have heard that garbage collection leads to bad software design. Is that true? It is true that we don't care about the lifetime of objects in garbage-collected languages, but does that have an effect on program design?
If an object asks other objects to do something on its behalf until further notice, the object's owner should notify it when its services are not required. Having code abandon such objects without notifying them that their services won't be required anymore would be bad design, but that would be just as true in a garbage-collected framework as in a non-GC framework.
Garbage-collected frameworks, used properly, offer two advantages:
In many cases, objects are created for the purpose of encapsulating values therein, and references to the objects are passed around as proxies for that data. Code receiving such references shouldn't care about whether other references exist to those same objects or whether it holds the last surviving references. As long as someone holds a reference to an object, the data should be kept. Once nobody needs the data anymore, it should cease to exist, but nobody should particularly notice.
In non-GC frameworks, an attempt to use a disposed object will usually generate Undefined Behavior that cannot be reliably trapped (and may allow code to violate security policies). In many GC frameworks, it's possible to ensure that attempts to use disposed resources will be trapped deterministically and cannot undermine security.
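In .NET, for instance, well-behaved disposable objects throw ObjectDisposedException instead of producing undefined behavior (a tiny demonstration, not tied to any particular framework):

```csharp
using System;
using System.IO;

class DisposedAccessDemo
{
    static void Main()
    {
        var stream = new MemoryStream();
        stream.Dispose();
        try
        {
            stream.WriteByte(1);   // the stream has already been disposed
        }
        catch (ObjectDisposedException e)
        {
            // The misuse is trapped deterministically rather than corrupting memory.
            Console.WriteLine("Trapped: " + e.Message);
        }
    }
}
```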
In some cases, garbage collection will allow a programmer to "get away with" designs that are sloppier than would be tolerable in a non-GC system. A GC-based framework will also, however, allow the use of many good programming patterns which could not be implemented as efficiently in a non-GC system. For example, if a program uses multiple worker threads to find the optimal solution for a problem, and has a UI thread which periodically wants to show the best solution found so far, the UI thread would want to know that when it asks for a status update it will get a solution that has been found, but won't want to burden the worker threads with the synchronization necessary to ensure that it has the absolute latest solution.
In a non-GC system, thread synchronization would be unavoidable, since the UI thread and worker thread would have to coordinate who was going to delete a status object that becomes obsolete while it's being shown. In a GC-based system, however, the GC would be able to tell whether the UI thread managed to grab a reference to a status object before it got replaced, and thus resolve whether the object needed to be kept alive long enough for the UI thread to display it. The GC would sometimes have to force thread synchronization to find all reachable references, but occasional synchronization for the GC may pose less of a performance drain than the frequent thread synchronization required in a non-GC system.
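A minimal C# sketch of that pattern (the Solution type and SolutionBoard class are invented for illustration): workers publish their latest result by swapping a reference, the UI thread reads whatever reference is current, and the GC keeps any still-referenced old solution alive for as long as the UI thread needs it.

```csharp
using System.Threading;

class Solution
{
    public double Score;
}

class SolutionBoard
{
    Solution best;   // latest published solution, or null

    // Called by worker threads whenever they find a better solution.
    // No lock is needed for the hand-off, and the replaced object needs no
    // explicit cleanup: once nothing references it, the GC reclaims it.
    public void Publish(Solution candidate)
    {
        Volatile.Write(ref best, candidate);
    }

    // Called by the UI thread. It may see a slightly stale solution,
    // but never a half-freed one.
    public Solution Latest()
    {
        return Volatile.Read(ref best);
    }
}
```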
I am kind of new to programming in general (about 8 months, on and off, in Delphi and a little Python here and there) and I am in the process of buying some books.
I am interested in learning about concurrent programming and building multi-threaded apps using Delphi. Whenever I do a search for "multithreading Delphi" or "Delphi multithreading tutorial" I seem to get conflicting results, as some of the material is about using certain libraries (OmniThreadLibrary) and other material seems to be geared towards programmers with more experience.
I have studied quite a few books on Delphi and for the most part they seem to kind of skim the surface and not really go into depth on the subject. I have a friend who is a programmer (he uses c++) who recommends I learn what is actually going on with the underlying system when using threads as opposed to jumping into how to actually implement them in my programs first.
On Amazon.com there are quite a few books on concurrent programming but none of them seem to be made with Delphi in mind.
Basically I need to know: what are the main things I should focus on learning before jumping into using threads; whether I can or should attempt to learn them using books that are not specifically aimed at Delphi developers (I don't want to confuse myself reading books with a bunch of code examples in other languages right now); and whether there are any reliable resources/books on the subject that anyone here could recommend.
Short answer
Go to OmniThreadLibrary, install it, and read everything on the site.
Longer answer
You asked for some info so here goes:
Here's some stuff to read:
http://delphi.about.com/od/kbthread/Threading_in_Delphi.htm
I personally like: Multithreading - The Delphi Way.
(It's old, but the basics still apply)
Basic principles:
Your basic VCL application is single threaded.
The VCL was not built with multithreading in mind; rather, thread support is bolted on, and most VCL components are not thread-safe.
The way this is handled is by making the calling thread wait, so if you want a fast application, be careful about when and how you communicate with the VCL.
Communicating with the VCL
Your basic thread is a descendant of TThread with its own members.
These are per-thread variables; as long as you use only these, you won't have any problems.
My favorite way of communicating with the main window is by using custom Windows messages and PostMessage to communicate asynchronously.
If you want to communicate synchronously, you will need to use a critical section or the Synchronize method.
See this article for example: http://edn.embarcadero.com/article/22411
Communicating between threads
This is where things get tricky, because you can run into all sorts of hard-to-debug synchronization issues.
My advice: use OmniThreadLibrary; also see this question: Cross thread communication in Delphi
Some people will tell you that reading and writing integers is atomic on x86, but this is not 100% true, so don't rely on it in a naive way, because you'll most likely get the subtle details wrong and end up with hard-to-debug code.
Starting and stopping threads
In old Delphi versions, TThread.Suspend and TThread.Resume were used; however, these are no longer recommended and should be avoided (in the context of thread synchronization).
See this question: With what delphi Code should I replace my calls to deprecated TThread method Suspend?
Also have a look at this question although the answers are more vague: TThread.resume is deprecated in Delphi-2010 what should be used in place?
You can use suspend and resume to pause and restart threads, just don't use them for thread synchronization.
Performance issues
Putting WaitFor..., Synchronize, etc. in your thread effectively stops your thread until the action it's waiting for has occurred.
In my opinion this defeats a big purpose of threads: speed.
So if you want to be fast you'll have to get creative.
A long time ago I wrote an application called Life32.
It's a display program for Conway's Game of Life that can generate patterns very fast (millions of generations per second on small patterns).
It used a separate thread for calculation and a separate thread for display.
Displaying is a very slow operation that does not need to be done every generation.
The generation thread included display code that removes stuff from the display (when in view) and the display thread simply sets a boolean that tells the generation thread to also display the added stuff.
The generation code writes directly to the video memory using DirectX, no VCL or Windows calls required and no synchronization of any kind.
If you move the main window the application will keep on displaying on the old location until you pause the generation, thereby stopping the generation thread, at which point it's safe to update the thread variables.
If the threads are not 100% synchronized the display happens a generation too late, no big deal.
It also features a custom memory manager that avoids the thread-safe slowness that's in the standard memory manager.
By avoiding any and all forms of thread synchronization I was able to eliminate the overhead from 90%+ (on smallish patterns) to 0.
You really shouldn't get me started on this, but anyway, my suggestions:
Try hard to not use the following:
TThread.Synchronize
TThread.WaitFor
TThread.OnTerminate
TThread.Suspend
TThread.Resume (except at the end of constructors in some Delphi versions)
TApplication.ProcessMessages
Use the PostMessage API to communicate to the main thread - post objects in lParam, say.
Use a producer-consumer queue to communicate to secondary threads, (not a Windows message queue - only one thread can wait on a WMQ, making thread pooling impossible).
Do not write directly from one thread to fields in another - use message-passing.
Try very hard indeed to create threads at application startup and to not explicitly terminate them at all.
Do use object pools instead of continually creating and freeing objects for inter-thread communication.
The result will be an app that performs well, does not leak, does not deadlock and shuts down immediately when you close the main form.
What Delphi should have had built-in:
TWinControl.PostObject(anObject: TObject) and TWinControl.OnObjectRx(anObject: TObject) - methods to post objects from a secondary thread and fire a main-thread event with them. A trivial PostMessage wrapper to replace the poorly performing, deadlock-generating, continually rewritten TThread.Synchronize.
A simple, unbounded producer-consumer class that actually works for multiple producers/consumers. This is, like, 20 lines of TObjectQueue descendant but Borland/Embarcadero could not manage it. If you have object pools, there is no need for complex bounded queues.
A simple thread-safe, blocking object pool class - again, really simple with Delphi since it has class variables and virtual constructors, e.g. creating a lot of buffer objects:
myPool := TObjectPool.Create(1024, TMyBuffer);
I thought it might be useful to actually try to compile a list of things that one should know about multithreading.
Synchronization primitives: mutexes, semaphores, monitors
Delphi implementations of synchronization primitives: TCriticalSection, TMREWSync, TEvent
Atomic operations: some knowledge about what operations are atomic and what not (discussed in this question)
Windows API multithreading capabilities: InterlockedIncrement, InterlockedExchange, ...
OmniThreadLibrary
Of course this is far from complete. I made this community wiki so that everyone can edit.
Appending to all the other answers, I strongly suggest reading a book like "Modern Operating Systems" or any other book that goes into multithreading details. It may seem like overkill, but it will make you a better programmer and you will definitely get very good insight into threading and processes in an abstract way, so you learn why and how to use critical sections or semaphores through examples (like the dining philosophers problem or the sleeping barber problem).