Long ago, when I started working with threads in Delphi, I made threads start themselves by calling TThread.Resume at the end of their constructor, and I still do, like so:
constructor TMyThread.Create(const ASomeParam: String);
begin
  inherited Create(True);
  try
    FSomeParam := ASomeParam;
    //Initialize some stuff here...
  finally
    Resume;
  end;
end;
Since then, Resume has been deprecated in favor of Start. However, Start can only be called from outside the thread; it cannot be called from within the constructor.
I have continued to design my threads using Resume as shown above, although I know it's been deprecated - only because I do not want to have to call Start from outside the thread. I find it a bit messy to have to call:
FMyThread := TMyThread.Create(SomeParamValue);
FMyThread.Start;
Question: What's the reason why this change was made? I mean, what is so wrong about using Resume that they want us to use Start instead?
EDIT After Sedat's answer, I guess this really depends on when, within the constructor, the thread actually begins executing.
The short and pithy answer is because the authors of the TThread class didn't trust developers to read or to understand the documentation. :)
Suspending and resuming a thread is a legitimate operation for only a very limited number of use cases. In fact, that limited number is essentially "one": Debuggers
Undesirables
The reason it is considered undesirable (to say the least) is that problems can arise if a thread is suspended while (for example) it owns a lock on some other synchronization object such as a mutex or semaphore.
These synchronization objects are specifically designed to ensure the safe operation of a thread with respect to other threads accessing shared resources, so interrupting and interfering with these mechanisms is likely to lead to problems.
A debugger needs a facility to directly suspend a thread irrespective of these mechanisms for surprisingly similar reasons.
Consider for example that a breakpoint involves an implicit (or you might even say explicit) "suspend" operation on a thread. If a debugger halts a thread when it reaches a break-point then it must also suspend all other threads in the process precisely because they will otherwise race ahead doing work that could interfere with many of the low level tasks that the debugger might be asked to then do.
The Strong Arm of the Debugger
A debugger cannot "inject" nice, polite synchronization objects and mechanisms to request that these other threads suspend themselves in a co-ordinated fashion with some other thread that has been unceremoniously halted (by a breakpoint). The debugger has no choice but to strong-arm the threads, and this is precisely what the Suspend/Resume APIs are for.
They are for situations where you need to stop a thread "Right now. Whatever you are doing I don't care, just stop!". And later, to then say "OK, you can carry on now with whatever it was you were doing before, whatever it was.".
Well Behaved Threads Behave Well Toward Each Other
It should be patently obvious that this is not how a well-behaved thread interacts with other threads in normal operation (if it wishes to maintain a state of "normal" operation and not create all sorts of problems). In those normal cases a thread very much does and should care what those other threads are doing and ensure that it doesn't interfere, using appropriate synchronization techniques to co-ordinate with those other threads.
In those cases, the legitimate use case for Resuming a thread is similarly reduced to just one, single mode: you have created and initialised a thread that you do not wish to run immediately, but to start execution at some later point in time under the control of some other thread.
But once that new thread has been started, subsequent synchronization with other threads must be achieved using those proper synchronization techniques, not the brute force of suspending it.
Start vs Suspend/Resume
Hence it was decided that Suspend/Resume had no real place on a general purpose thread class (people implementing debuggers could still call the Windows API's directly) and instead a more appropriate "Start" mechanism was provided.
Hopefully it should be apparent that even though this Start mechanism employs the exact same API that the deprecated Resume method previously employed, the purpose is quite different.
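For completeness, here is a minimal sketch of the pattern this nudges you toward: do all initialisation in the constructor, leave the thread suspended, and wrap creation and Start in a small class function so call sites stay a one-liner. TMyThread mirrors the question's example; CreateAndStart is an illustrative helper name, not part of the RTL.
type
  TMyThread = class(TThread)
  private
    FSomeParam: String;
  protected
    procedure Execute; override;
  public
    constructor Create(const ASomeParam: String);
    class function CreateAndStart(const ASomeParam: String): TMyThread;
  end;

constructor TMyThread.Create(const ASomeParam: String);
begin
  inherited Create(True);   // stays suspended; no Resume/Start in here
  FSomeParam := ASomeParam;
  //Initialize some stuff here...
end;

class function TMyThread.CreateAndStart(const ASomeParam: String): TMyThread;
begin
  Result := TMyThread.Create(ASomeParam);
  Result.Start;             // called by the creating thread, after the
                            // constructor has fully completed
end;

procedure TMyThread.Execute;
begin
  // ... do the work using FSomeParam ...
end;
With that, the call site from the question collapses back to a single line: FMyThread := TMyThread.CreateAndStart(SomeParamValue);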
Related
I have found this nice component called TBackgroundWorker. However, people are criticizing it (on SO) because it uses TerminateThread. Here is the "faulty" code:
destructor TBackgroundWorker.Destroy;
begin
  if IsWorking then
  begin
    TerminateThread(fThread.Handle, 0);
    Cleanup(True);
    raise EBackgroundWorker.CreateFmt(SInvalidExit, [Name]);
  end;
  inherited Destroy;
end;
To me it seems a valid destructor. Is it? Should I worry?
Is there a better solution?
In my opinion, the destructor is valid.
Forcibly terminating a thread is wrong. Also, raising an exception in a destructor may kill the whole application. However, please don't ignore the context.
We are talking about a proxy object that wraps a thread. If such a component is running, destroying it is comparable to killing a running thread. The proxy should fail fast and report such misuse, not try to paper over it. Besides, this is a third-party component, which does not know the intent of the application's developer.
I suppose you disagree with me; otherwise, we wouldn't be having this conversation. Let's see what the alternatives are.
Canceling the task and terminating the thread gracefully, no exception message. With this approach, we are guessing the intention of the developer. If the developer has made a mistake, he or she may never know until it is too late. The application would have unexpected behaviors, and it is very complicated to figure out the source of the issue.
Ignoring the running thread and destroying the component anyway, without raising an exception. Seems like turning a deterministic machine into a non-deterministic one. Do we even need to discuss this?
Just raising an exception. Because the thread is still running, the variables and stack trace may hold misleading states, which makes debugging much more difficult.
I believe we all like to discover the bugs in the early stage of development and offer a reliable and stable application to our customers. Should we stop doing that because there is no valid use case for the tool we need to use?
There is always a valid use case for something. If I am wrong, please enlighten me.
To me it seems a valid destructor. Is it? Should I worry?
It is bad destructor code.
First, everything bad you have heard about TerminateThread is true. There is no safe way to terminate a thread, it may leave the application in an unstable state, and you should never use that function unless you want to immediately close the application, too. And in such cases it is better to just exit the process altogether. See: Calling TerminateThread on a Windows thread when app exits
Windows started picking up the really big pieces of TerminateThread garbage on the sidewalk, but it’s still garbage on the sidewalk
Now the history.
Originally, there was no TerminateThread function. The original designers felt strongly that no such function should exist because there was no safe way to terminate a thread, and there's no point having a function that cannot be called safely. But people screamed that they needed the TerminateThread function, even though it wasn't safe, so the operating system designers caved and added the function because people demanded it. Of course, those people who insisted that they needed TerminateThread now regret having been given it.
It's one of those "Be careful what you wish for" things.
Additionally, the destructor raises an exception, which is something Delphi destructors should never, ever do. Raising an exception in a destructor (one that is not caught and handled within a try..except block) will cause irreparable memory leaks in the application.
Is there a better solution?
Yes. Since the Cleanup method will call fThread.Free, which waits for thread completion and performs a normal thread shutdown, there is no need to call TerminateThread.
Instead of forcing thread termination, it is better to Cancel the thread and give it time to terminate itself gracefully. This may also require calling WaitFor, although pumping Windows messages at that point could interfere with other application code.
destructor TBackgroundWorker.Destroy;
begin
  if IsWorking then
  begin
    Cancel;
    // WaitFor;
    Cleanup(True);
  end;
  inherited Destroy;
end;
Ultimately, it is not in the domain of the component to handle what happens if the thread is still running during shutdown. If there is a need to handle such a scenario and prevent shutdown, then it needs to be handled from outside code.
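As a hedged sketch of what that outside code might look like, assuming only the IsWorking and Cancel members visible in the snippets above (fWorker and the OnCloseQuery wiring are illustrative):
procedure TMainForm.FormCloseQuery(Sender: TObject; var CanClose: Boolean);
begin
  if fWorker.IsWorking then
  begin
    fWorker.Cancel;     // ask the worker to finish gracefully
    CanClose := False;  // veto shutdown for now; close again once the worker
                        // signals completion (e.g. from its finished event)
  end;
end;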
Generally, I would avoid using this component as generalized solutions can create more problems than they are worth. Waiting for a thread by pumping messages is not the greatest design. It may work well in some circumstances and not in others.
It would be better architecture to rely on the TThread.WaitFor function. However, TThread.WaitFor is a blocking call, so that behavior may not fit well into the TBackgroundWorker architecture and its desired behavior.
Note: I didn't fully inspect the code of the TBackgroundWorker component, so there may be other issues that are not covered in this post.
Given a situation where thread A has to dispatch work to thread B, is there any synchronisation mechanism that allows thread A to not return, but remain usable for other tasks, until thread B is done, after which thread A can return?
This is not language specific, but simple C would be a great choice for responses.
This could be absolutely counterintuitive; it actually sounds as such, but I have to ask before presuming...
Please Note This is a made up hypothetical situation that I'm interested in. I am not looking for a solution to an existing problem, so alternative concurrency solutions are completely pointless. I have no code for it, and if I were in it I can think of a few alternative code engineering solutions to avoid this setup. I just wish to know if a thread can be usable, in some way, while waiting for a signal from another thread, and what synchronisation mechanism to use for that.
UPDATE
As I mentioned above, I know how to synchronise threads etc. I'm only interested in the situation that I have presented here. Mutexes, semaphores, locks and all kinds of mechanisms will synchronise access to resources, synchronise the order of events, synchronise all kinds of concurrency issues, yes. But I'm not interested in how to do it properly. I just have this made-up situation and I wish to know if it can be addressed with a mechanism as described prior.
UPDATE 2
It seems I have opened up a portal for people who think they are experts in concurrency to teleport in and lecture about how they think the rest of the world does not know how threading works. I simply asked if there is a mechanism for this situation, not a workaround, not 'the proper way to synchronise', not a better way to do it. I already know what I would do and would never be in this made-up situation. It's simply hypothetical.
After much research, thought, and review, I have come to the conclusion that it's like asking:
Does a calculator have a mode where I can simply enter a series of 5 digits and automatically get their sum on the screen?
No, it does not have such a mode ready. But I can still get the sum with a few extra clicks, using the plus and eventually the equals button.
If I really wanted a thread that can continue while listening for a condition of some sort, I could easily implement a personal class or object around the OS/kernel/SDK thread or whatever and make use of that.
So at a low level, my answer is no, there is no such mechanism.
If a thread is waiting, then it's waiting. If it can continue executing then it is not really 'waiting', in the concurrency meaning of waiting. Otherwise there would be some other term for this state (Alert Waiting, anyone?). This is not to say it is not possible, just not with one simple low level predefined mechanism similar to a mutex or semaphore etc. One could wrap the required functionality in some class or object etc.
Having said that, there are Interrupts and Interrupt handlers, which come close to addressing this situation. However, an interrupt has to be defined, with its handler. The interrupts may actually be running on another thread (not to say a thread per interrupt). So a number of objects are involved here.
You have a misunderstanding about how mutexes are typically used.
If you want to do some work, you acquire the mutex to figure out what work you need to do. You do this because "what work you need to do" is shared between the thread that decides what work needs to be done and the thread that's going to do the work. But then you release the mutex that protects "what work you need to do" while you do the work.
Then, when you finish the work, you acquire the mutex that protects your report that the work is done. This is needed because the status of the work is shared with other threads. You set that status to "done" and then you release the mutex.
Notice that no thread holds the mutex for very long, just for the microscopic fraction of a second it needs to check on or modify shared state. So to see if work is done, you can acquire the mutex that protects the reporting of the status of that work, check the status, and then release the mutex. The thread doing the work will not hold that mutex for longer than the tiny fraction of a second it needs to change that status.
If you're holding mutexes so long that you worry at all about waiting for them to be released, you're either doing something wrong or using mutexes in a very atypical way.
So use a mutex to protect the status of the work. If you need to wait for work to be done, also use a condition variable. Only hold that mutex while changing, or checking, the status of the work.
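As a minimal sketch of that advice (the asker suggested C, where the same shape uses pthread_mutex_t and pthread_cond_t; here Delphi's TMonitor plays the role of the mutex plus condition variable, and TWorkStatus, FinishWork and WaitUntilDone are illustrative names):
uses
  Winapi.Windows; // only for INFINITE; Windows-targeted sketch

type
  TWorkStatus = class   // any TObject can be used with TMonitor
  public
    Done: Boolean;
  end;

// Worker side: do the work with no lock held, then flip the flag and wake waiters.
procedure FinishWork(Status: TWorkStatus);
begin
  // ... the actual work happens here, unlocked ...
  TMonitor.Enter(Status);
  try
    Status.Done := True;
    TMonitor.PulseAll(Status);         // wake every thread blocked in Wait
  finally
    TMonitor.Exit(Status);
  end;
end;

// Consumer side: blocks only when it genuinely has nothing else to do.
procedure WaitUntilDone(Status: TWorkStatus);
begin
  TMonitor.Enter(Status);
  try
    while not Status.Done do
      TMonitor.Wait(Status, INFINITE); // releases the lock while waiting
  finally
    TMonitor.Exit(Status);
  end;
end;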
But if a thread attempts to acquire an already-acquired mutex, that thread will be forced to wait until the thread that originally acquired the mutex releases it. So, while that thread is waiting, can it actually be usable? This is where my question is.
If you consider any case where one thread might slow another thread down to be "waiting", then you can never avoid waiting. All that has to happen is one thread accesses memory and that might slow another thread down. So what do you do, never access memory?
When we talk about one thread "waiting" for another, what we mean is waiting for the thread to do actual work. We don't worry about the microscopic overhead of inter-thread synchronization both because there's nothing we can do about it and because it's negligible.
If you literally want to find some way that one thread can never, ever slow another thread down, you'll have to re-design pretty much everything we use threads for.
Update:
For example, consider some code that has a mutex and a boolean. The boolean indicates whether or not the work is done. The "assign work" flow looks like this:
Create a work object with a mutex and a boolean. Set the boolean to false.
Dispatch a thread to work on that object.
The "do work" flow looks like this:
Do work. (The mutex is not held here.)
Acquire mutex.
Set boolean to true.
Release mutex.
The "is work done" flow looks like this:
Acquire mutex.
Copy boolean.
Release mutex.
Look at copied value.
This allows one thread to do work and another thread to check if the work is done any time it wants to while doing other things. The only case where one thread waits for the other is the one-in-a-million case where a thread that needs to check if the work is done happens to check right at the instant that the work has just finished. Even in that case, it will typically block for less than a microsecond as the thread that holds the mutex only needs to set one boolean and release the mutex. And if even that bothers you, most mutexes have a non-blocking "try to lock" function (which you would use in the "check if work is done" flow so that the checking thread never blocks).
And this is the normal way mutexes are used. Actual contention is the exception, not the rule.
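Here is a minimal Delphi sketch of those three flows, using TCriticalSection from System.SyncObjs as the mutex and a Boolean as the status (TWorkItem and its method names are illustrative); TryIsDone shows the non-blocking "try to lock" variant mentioned above.
uses
  System.SyncObjs;

type
  TWorkItem = class
  private
    FLock: TCriticalSection;   // the mutex protecting the status
    FDone: Boolean;            // the status: is the work done?
  public
    constructor Create;
    destructor Destroy; override;
    procedure MarkDone;                              // end of the "do work" flow
    function IsDone: Boolean;                        // the "is work done" flow
    function TryIsDone(out Done: Boolean): Boolean;  // never blocks
  end;

constructor TWorkItem.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FDone := False;               // "assign work" flow: the boolean starts false
end;

destructor TWorkItem.Destroy;
begin
  FLock.Free;
  inherited;
end;

procedure TWorkItem.MarkDone;
begin
  // The work itself was done without the lock; it is held only for the flag.
  FLock.Acquire;
  try
    FDone := True;
  finally
    FLock.Release;
  end;
end;

function TWorkItem.IsDone: Boolean;
begin
  FLock.Acquire;                // held only long enough to copy the boolean
  try
    Result := FDone;
  finally
    FLock.Release;
  end;
end;

function TWorkItem.TryIsDone(out Done: Boolean): Boolean;
begin
  Result := FLock.TryEnter;     // returns False instead of blocking
  if Result then
  try
    Done := FDone;
  finally
    FLock.Release;
  end;
end;
The checking thread can call IsDone between its other tasks whenever it likes, or TryIsDone if even a microscopic block is unacceptable.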
I know Synchronize must be used in the Execute procedure, but should it be used in Create and Destroy methods too, or is it safe to do whatever I want?
I know Synchronize must be used in the Execute procedure.
That is somewhat vague. You need to use Synchronize when you have code that must execute on the main thread. So the answer to whether or not you will need to use Synchronize depends crucially on what the code under consideration actually does. The question that you must ask yourself, and which is one that only you can answer, is do you have code that must run on the main thread?
As a general rule it would be considered prudent for you not to need to call Synchronize outside the Execute method. If you can find a way to avoid doing so then that would be wise. Remember that the ideal scenario with threads is that they never need to block with Synchronize if at all possible.
You might also wish to consider which thread executes the constructor and destructor.
The constructor Create runs in the thread that calls it. It does not run in the newly created thread. Therefore it is unlikely that you would need to use Synchronize there.
The destructor Destroy runs in the thread that calls it. Typically this is the thread that calls Free on the thread object. And usually that would be called from the same thread that originally created the thread. The common exception to that is a FreeOnTerminate thread which calls Free from the thread.
There is a need to use Synchronize() when the code is executing outside of the context of the main (GUI) thread of the application. Therefore the answer to your question depends on whether the constructor and destructor are called from that thread or not.
If you are unsure, you can check by comparing the result of the Windows API function GetCurrentThreadId() with the variable MainThreadID - if they are equal, the code executes in the context of the main thread.
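A minimal sketch of that check (RunningInMainThread is an illustrative helper name; GetCurrentThreadId is Windows-specific and MainThreadID comes from the System unit):
uses
  Winapi.Windows;

function RunningInMainThread: Boolean;
begin
  // MainThreadID holds the id of the thread the RTL regards as the main one.
  Result := GetCurrentThreadId = MainThreadID;
end;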
Threads that have FreeOnTerminate set will have their destructor called from another thread context, so you would need to use Synchronize() or Queue(). Or you can use the termination event (OnTerminate) the VCL already provides; I believe it is executed in the main thread, but check the documentation for details.
First of all, you don't want to call Synchronize() unnecessarily, because that simply defeats the purpose of using a thread. So the decision should be based on whether (a) it's possible to encounter race conditions with shared data, or (b) you'll be using VCL code, which usually has to run on the main thread.
It's unlikely you would need to synchronise in the constructor, because TThread instances are usually created from the main thread already. (The exception being if you're creating some TThread instances from another child thread.)
NOTE: It won't cause any harm though because Synchronize() already checks if you're on the main thread and will call the synchronised method immediately if you are.
class procedure TThread.Synchronize(ASyncRec: PSynchronizeRecord; QueueEvent: Boolean = False);
var
  SyncProc: TSyncProc;
  SyncProcPtr: PSyncProc;
begin
  if GetCurrentThreadID = MainThreadID then
    ASyncRec.FMethod
As for the destructor there are 3 usage patterns:
The TThread instance destroys itself (via FreeOnTerminate).
Another thread (possibly the main thread) can WaitFor the instance to finish, then destroy it.
You can intercept the OnTerminate event. This is fired when the instance is finished, and you could then destroy it.
NOTE: The OnTerminate event will already be synchronised.
procedure TThread.DoTerminate;
begin
  if Assigned(FOnTerminate) then Synchronize(CallOnTerminate);
end;
Given the above, the only time you might need to synchronise is if the thread self-destructs.
However, I'd advise that you rather avoid putting code into your destructor that might need to be synchronised. If you need some results of a calculation from your thread instance, OnTerminate is the more appropriate place to get this.
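A hedged sketch of that suggestion, assuming a TMyThread that exposes its outcome in some public member (ResultText here); StartWork and HandleThreadDone are illustrative names, while FreeOnTerminate, OnTerminate and Start are standard TThread members:
procedure TMainForm.StartWork;
var
  Worker: TMyThread;
begin
  Worker := TMyThread.Create(True);        // created suspended
  Worker.FreeOnTerminate := True;          // self-destructs after OnTerminate has fired
  Worker.OnTerminate := HandleThreadDone;  // DoTerminate synchronises this call
  Worker.Start;
end;

procedure TMainForm.HandleThreadDone(Sender: TObject);
begin
  // Already running on the main thread here (see DoTerminate above),
  // so it is safe to touch the VCL and to read the thread's results.
  Caption := TMyThread(Sender).ResultText;
end;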
To add to what has been said in other answers...
You never need to use Synchronize at all. Synchronize may be useful, however, in the following circumstance:
In the context of your thread you need to execute code that touches objects that have affinity to the main thread.
You require your thread to block until that code has been executed.
Even in that case, there are other ways to achieve the same goal, but Synchronize provides a convenient way to satisfy those two needs. If you need only one of those two items, there are better strategies available.
On topic #1, the obvious objects are user interface objects. These are objects that have thread affinity to the main thread simply by virtue of the fact that the main thread is continually reading and writing the properties of those objects (not least because it needs to paint them to the screen, etc.) and it does so at its own convenience. This means that your thread cannot safely access those components with a guarantee that the main thread will not also be accessing or modifying them at the same time. In order to prevent corruption, the thread has to pass the work to the main thread (since the main thread can only do one thing at a time and can't, obviously, interfere with itself). Synchronize simply places the work onto the main thread's queue and waits until the main thread gets around to completing it before returning.
This gets to point #2. Do you need to (or, equally, can you afford to) wait around until the main thread finishes the work? There are three cases and two options.
Yes, you can or must wait. (Synchronize is a good fit)
No, you cannot wait. (Synchronize is not a good fit)
Don't care. (Synchronize is easy, so it's a sensible option)
If you are simply updating a status display that will soon be overwritten anyway and your thread has more pressing issues, then it's probably sensible to just post a message to the main thread and carry on doing things, for example. If your thread is just waiting around doing nothing, mostly, and it's not worth the time to code anything more sophisticated, then Synchronize is just fine, and it can be replaced with something better if needs dictate so in the future.
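A hedged sketch of that "post a message and carry on" alternative (Windows/VCL only), assuming the thread stores the target window handle in a field captured in its constructor; WM_STATUS_UPDATE, FMainWindowHandle and ReportProgress are illustrative names:
uses
  Winapi.Windows, Winapi.Messages;

const
  WM_STATUS_UPDATE = WM_APP + 1;   // custom message id for progress updates

// Thread side: fire-and-forget, never waits on the main thread.
procedure TMyThread.ReportProgress(PercentDone: Integer);
begin
  // FMainWindowHandle must be captured on the main thread (e.g. in the
  // constructor): reading a control's Handle from a worker thread can
  // create the window handle in the wrong thread.
  PostMessage(FMainWindowHandle, WM_STATUS_UPDATE, WPARAM(PercentDone), 0);
end;

// Main-form side: declare a handler such as
//   procedure WMStatusUpdate(var Msg: TMessage); message WM_STATUS_UPDATE;
// and update the status display there, at the main thread's convenience.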
As others have said, it really depends on what you are doing. The more important question, I think, at least conceptually, is to sort out when you need to worry about concurrency and when you don't. Any time you have more than one thread that requires access to a single resource, you need to use some sort of mechanism to coordinate that access to avoid the threads crashing into each other. Synchronize is one of those methods, but it is neither the least nor the last of them.
POSIX specifies two thread cancellation types, PTHREAD_CANCEL_ASYNCHRONOUS and PTHREAD_CANCEL_DEFERRED (set by pthread_setcanceltype(3)), determining when pthread_cancel(3) should take effect. By my reading, the POSIX manual pages do not say much about these, but the Linux manual page says the following about PTHREAD_CANCEL_ASYNCHRONOUS:
The thread can be canceled at any time. (Typically, it will be canceled immediately upon receiving a cancellation request, but the system doesn't guarantee this.)
I am curious about the meaning of "the system doesn't guarantee this". I can easily imagine this happening on multicore/multi-CPU systems (before a context switch). But what about single-core systems:
Could we have a thread not cancelled immediately when cancellation is requested and cancellation is enabled (pthread_setcancelstate(3)) and cancel type set to PTHREAD_CANCEL_ASYNCHRONOUS?
If yes, under what conditions could this happen?
I am mainly curious about Linux (LinuxThreads / NPTL), but also more generally about POSIX standard compliant way of viewing this cancellation business.
Update/Clarification: The real practical concern here is the use of resources that are destroyed immediately after calling pthread_cancel(), where the targeted thread has cancellation enabled and set to type PTHREAD_CANCEL_ASYNCHRONOUS! So the point really is: is there even a tiny possibility for the cancelled thread in this case to continue running normally after a context switch (even for a very short time)?
Thanks to Damon's answer, the question reduces to signal delivery and handling in relation to the next context switch.
Update 2: I answered my own question to point out that this is a bad concern and that the underlying program design should be addressed at a fundamentally different conceptual level. I hope this "wrong" question is useful for others wondering about the mysteries of asynchronous cancellation.
The meaning is just what it says: It's not guaranteed to happen instantly. The reason for this is that a certain "liberty" for implementation details is needed and accounted for in the standard.
For example, under Linux/NPTL, cancellation is implemented by sending signal nr. 32. The thread is cancelled when the signal is received, which usually happens at the next kernel-to-user switch, or at the next interrupt, or at the end of the time slice (which may accidentally be immediately, but usually is not). A signal is never received while the thread isn't running, however. So the real catch here is actually that signals are not necessarily received immediately.
If you think about it, it isn't even possible to do it much differently, either. Since you can pthread_cleanup_push some handlers which the operating system must execute (it cannot just blast the thread out of existence!), the thread must necessarily run to be cancelled. There is no guarantee that any particular thread (including the one you want to cancel) is running at the exact time you cancel a thread, so there can be no guarantee that it is cancelled immediately.
Except of course, hypothetically, if the OS was implemented in a way as to block the calling thread and schedule the to-be-cancelled thread so it executes its handlers, and only unblocks pthread_cancel afterwards. But since pthread_cancel isn't specified as blocking, this would be an utterly nasty surprise. It would also be somewhat unacceptable because of interfering with execution time limits and scheduler fairness.
So, either your cancel state is "disabled", in which case nothing happens. Or it is "enabled" and the cancel type is "deferred", in which case the thread cancels when calling a function that is listed as a cancellation point in pthreads(7).
Or the type is "asynchronous", in which case, as stated above, the OS will do "something" to cancel the thread as soon as it deems appropriate -- not at a precise, well-defined time, but "soon". In the case of Linux, by sending a signal.
If you need to wonder when the asynchronous cancellation happens, you are doing something terribly wrong.
Following Standards: You are pulling the ground from under your own feet by deliberately creating or allowing code to exist whose correctness depends on assumptions about the platform (single core, particular implementation, whatever). It is almost always better, if possible, to follow the standards (and to document clearly when it is not possible). The name PTHREAD_CANCEL_ASYNCHRONOUS itself suggests the meaning asynchronous, which is different from immediate or even almost immediate. The original poster specifically states single core, but why should you allow code to exist that will break in non-deterministic ways when it is put to run on truly parallel machines (multiple cores or CPUs), where it is practically impossible to guarantee immediacy (that would require stopping the other cores from running, or waiting for a context switch, or some other terrible hack which your OS/CPU is not going to support just to satisfy your unconventional wishes)?
Asynchronous thread cancellation mode is not meant for guaranteed immediate cancellation of a thread. Hence it is a terribly confusing hack to use it that way, even if it would work.
Async-Cancel-Safety: If you are concerned about the mechanism of asynchronous cancellation, it raises the suspicion that the threads in question (because of a lack of independence) are maybe not purely computational or not written in an async-cancel-safe manner.
POSIX specifies only three functions as async-cancel-safe: pthread_cancel(3), pthread_setcancelstate(3), and pthread_setcanceltype(3) - see IEEE Std 1003.1, 2013 Edition, 2.9.5. This cancellation mode is only suitable for purely computational tasks that do not call (other than purely computational) library functions; such code would not provide cancellation points if the threads were set to run in the default deferred cancellation mode. Hence the rationale for defining such a mode.
It is possible to write async-cancel-safe code by disabling cancellation during critical sections. But library writers (including POSIX library implementors) in general should not have to care about async-cancel-safety, for reasons of general convention, avoiding complexity, and even avoiding performance overhead. Because library writers should not have to care, you should never expect async-cancel-safety unless it is explicitly stated.
If your code is not async-cancel-safe (because, for example, it calls other libraries, including POSIX/standard C libraries, without temporarily disabling cancellation or changing the cancellation mode) and asynchronous cancellation occurs, you might leak resources (memory, etc.), leave behind inconsistent state and locked mutexes deadlocking other threads, and summon many other problems, imaginable and unimaginable. (If you are writing in C++, it seems you will have other issues to deal with due to POSIX thread cancellation's close association with exception handling.)
I have a threading problem with Delphi. I guess this is common in other languages too. I have a long process which I run in a thread and which fills a list in the main window. But if some parameters change in the meantime, I should stop the currently executing thread and start again from the beginning. Delphi suggests terminating a thread by setting Terminated := True (via the Terminate method) and checking this flag's value in the thread. However, my problem is this: the long-running part is buried in a library call, and inside this call I cannot check the Terminated flag. Therefore I have to wait for the library call to finish, which affects the whole program.
What is the preferred way to do in this case? Can I kill the thread immediately?
The preferred way is to modify the code so that it doesn't block without checking for cancellation.
Since you can't modify the code, you can't do that; you either have to live with the background operation (but you can disassociate it from any UI, so that its completion will be ignored), or alternatively you can try terminating it (the TerminateThread API will rudely terminate any thread given its handle). Termination isn't clean, though; as Rob says, any locks held by the thread will be abandoned, and any cross-thread state protected by such locks may be left corrupted.
Can you consider calling the function in a separate executable? Perhaps using RPC (pipes or TCP, rather than shared memory, owing to the same lock problem), so that you can terminate a process rather than terminating a thread? Process isolation will give you a good deal more protection. So long as you aren't relying on cross-process named things like mutexes, it should be far safer than killing a thread.
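As a hedged sketch of the "disassociate it from any UI" idea above: let the un-cancellable library call run to completion, but tag each worker with a generation number so a stale result is simply discarded. TLibraryCallThread, RunLongLibraryCall, CurrentGeneration and ShowResults are illustrative names; TThread.Queue is the standard RTL call.
uses
  System.Classes;

type
  TLibraryCallThread = class(TThread)
  private
    FGeneration: Integer;   // copied from MainForm.CurrentGeneration at creation
  protected
    procedure Execute; override;
  end;

procedure TLibraryCallThread.Execute;
var
  Results: TStringList;
  Gen: Integer;
begin
  Gen := FGeneration;
  Results := RunLongLibraryCall;        // the blocking call we cannot interrupt
  TThread.Queue(nil,
    procedure
    begin
      if Gen = MainForm.CurrentGeneration then
        MainForm.ShowResults(Results)   // still the latest request: show it
      else
        Results.Free;                   // parameters changed meanwhile: discard it
    end);
end;
Whenever the parameters change, the main thread bumps CurrentGeneration and launches a fresh worker; the old one finishes in the background and its result is quietly dropped.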
The threads need to co-operate to achieve a graceful shutdown. I am not sure if Delphi offers a mechanism to abort another thread; such mechanisms are available in .NET and Java, but they should be considered an option of last resort, and the state of the application is indeterminate after they have been used.
If you can kill a thread at an arbitrary point, then you may kill it while it is holding a lock in the memory allocator (for example). This will leave your program open to hanging when your main thread next needs to access that lock.
If you can't modify the code to check for termination, then just set its priority really low, and ignore it when it returns.
I wrote this in reply to a similar question:
I use an exception-based technique that's worked pretty well for me in a number of Win32 applications.
To terminate a thread, I use QueueUserAPC to queue a call to a function which throws an exception. However, the exception that's thrown isn't derived from the type "Exception", so will only be caught by my thread's wrapper procedure.
I've used this with C++Builder apps very successfully. I'm not aware of all the subtleties of Delphi vs C++ exception handling, but I'd expect it could easily be modified to work.