Multithreading questions

1) This has probably been asked a lot before, and I have read quite a lot about Sleep vs WaitForSingleObject, but I am still a little confused. As an example, I have a simple background thread which is called from the data-processing thread to show some message to the user without blocking the data-processing thread. Which is better in this case in terms of performance (CPU usage / CPU time): http://pastebin.com/VuhfZUEg or http://pastebin.com/eciK92ze
I suspect that the shorter the Sleep time and the more flags I have, the worse the performance gets. On the other hand, with a longer Sleep time performance will be better, but the reaction delay will increase, and with a lot of threads the reaction delay will eventually become noticeable to the user on a low-end machine. Is that correct? So what is the best way to keep these "dormant", inactive threads?
2) Is SetEvent blocking (SendMessage) or non-blocking (PostMessage)?
3) In the TForm.OnCreate event I have the following code:
procedure TFormSubsystem.FormCreate(Sender: TObject);
begin
  Application.OnException := LogApplicationException;
  Application.OnActivate := InitiateApplication;
end;

procedure TFormSubsystem.LogApplicationException(Sender: TObject; E: Exception);
var
  ErrorFile: TextFile;
  ErrorInfo: String;
begin
  AssignFile(ErrorFile, AppPath + 'Error.log');
  if FileExists(AppPath + 'Error.log') then
    Append(ErrorFile)
  else
    Rewrite(ErrorFile);
  ErrorInfo := DateTimeToStr(Now) + ' Unhandled exception';
  if Assigned(Sender) then
    ErrorInfo := ErrorInfo + ' in ' + Sender.UnitName + '->' + Sender.ClassName;
  ErrorInfo := ErrorInfo + ': ' + E.Message;
  try
    WriteLn(ErrorFile, ErrorInfo);
  finally
    CloseFile(ErrorFile);
  end;
end;
It's not the best way to log errors, but it's simple. The question is: what happens if there is an exception inside TSomeThreadAncestor.Execute or inside a method called via Synchronize?
4) What exactly is the difference between Synchronize and Queue? Which one should I use when my background threads interact with the GUI? I don't have race conditions and I already use something like semaphores.
5) Is it safe to use constructions like this?
procedure TShowBigDialoxBoxThread.Execute;
begin
  while ThreadNotTerminated do
  begin
    EventHandler.WaitFor(INFINITE);
    if not ThreadNotTerminated then
      Continue;
    EventHandler.ResetEvent;
    Synchronize(
      procedure
      begin
        MessageDlgBig(FMsg, FDlgType, FButtons, FHelpContext, FDefaultButton, FDlgMinWidth);
      end); // this kind of Synchronize call looks fishy
  end;
end;
Or should I stick with calling a class method, as in the examples I provided above?
Edit:
I currently use Delphi XE5.

Polling in a thread's core loop is bad practice if you could wait with a blocking function instead. What you need to understand is that a context switch is very expensive in terms of CPU usage. The only case where you should poll is when there is no other option (i.e. you cannot be notified when there is work to do).
The difference is the following:
Polling:
Your thread gives control to the scheduler by calling Sleep(), which forces a context switch. After the given period (and a bit more), the scheduler gives control back to your thread, causing another context switch. Your thread checks whether it has anything to do and, if not, calls Sleep() again, forcing yet another context switch. So your thread cannot respond until the Sleep() period is over, and it causes at least two extra context switches per Sleep() period.
Blocking:
Your thread gives control to the scheduler by calling a wait function (e.g. WaitForSingleObject or TEvent.WaitFor), which forces a context switch, but it will not wake up until you signal it. That means only one context switch per operation, and an instant response from the signalled thread.
Also: SetEvent() is not blocking, but you should have asked that as a separate question.
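As a minimal sketch (not the code from your pastebin links), the blocking approach could look something like this; TBlockingWorker, FWakeEvent, Wake and Stop are illustrative names:

uses
  Winapi.Windows, System.Classes, System.SyncObjs;

type
  // Worker that sleeps on an event and wakes instantly when signalled,
  // instead of polling flags inside a Sleep() loop.
  TBlockingWorker = class(TThread)
  private
    FWakeEvent: TEvent; // auto-reset event
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Wake; // called by the data-processing thread when there is work
    procedure Stop; // terminate and release the wait so Execute can exit
  end;

constructor TBlockingWorker.Create;
begin
  inherited Create(False);
  FWakeEvent := TEvent.Create(nil, False, False, ''); // auto-reset, initially unsignalled
end;

destructor TBlockingWorker.Destroy;
begin
  FWakeEvent.Free;
  inherited;
end;

procedure TBlockingWorker.Wake;
begin
  FWakeEvent.SetEvent;
end;

procedure TBlockingWorker.Stop;
begin
  Terminate;
  FWakeEvent.SetEvent; // wake the thread so it can observe Terminated
end;

procedure TBlockingWorker.Execute;
begin
  while not Terminated do
  begin
    if FWakeEvent.WaitFor(INFINITE) = wrSignaled then
    begin
      if Terminated then
        Break;
      // ... show the message / do the background work here ...
    end;
  end;
end;

The data-processing thread then calls Wake only when there is something to show, so the worker consumes no CPU in between, and Stop lets it exit promptly at shutdown.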

Related

Ensure all TThread.Queue methods complete before thread self-destructs

I have discovered that if a method queued with TThread.Queue calls a method that invokes TApplication.WndProc (e.g. ShowMessage) then subsequent queued methods are allowed to run before the original method has completed. Worse still, they don't seem to be invoked in FIFO order.
[Edit: Actually they do start in FIFO order. With ShowMessage it looks like the later one ran first because there is a call to CheckSynchronize before the dialog appears. This unqueues the next method and runs it, not returning until the latter method has completed. Only then does the dialog appear.]
I'm trying to ensure that all methods queued from the worker thread to run in the VCL thread run in strict FIFO order, and that they all complete before the worker thread is destroyed.
My other constraint is that I am trying to maintain strict separation of the GUI from the business logic. The thread in this case is part of the business logic layer so I can't use PostMessage from an OnTerminate handler to arrange for the thread to be destroyed (as recommended by a number of contributors elsewhere). So I'm setting FreeOnTerminate := True in a final queued method just before TThread.Execute exits. (Hence the need for them to execute in strict FIFO order.)
This is how my TThread.Execute method ends:
finally
  // Queue a final method to execute in the main thread that will set an event
  // allowing this thread to exit. This ensures that this thread can't exit
  // until all of the queued procedures have run.
  Queue(
    procedure
    begin
      if Assigned(fOnComplete) then
      begin
        fOnComplete(Self);
        // Handler sets fWorker.FreeOnTerminate := True and fWorker := nil
      end;
      SetEvent(fCanExit);
    end);
  WaitForSingleObject(fCanExit, INFINITE);
end;
but as I said this doesn't work because this queued method executes before some of the earlier queued methods.
Can anyone suggest a simple and clean way to make this work, or a simple and clean alternative?
[The only idea I've come up with so far that maintains separation of concerns and modularity is to give my TThread subclass a WndProc of its own. Then I can use PostMessage directly to this WndProc instead of the main form's. But I'm hoping for something a bit more light-weight.]
Thanks for the answers and comments so far. I now understand that my code above with a queued SetEvent and WaitForSingleObject is functionally equivalent to calling Synchronize at the end instead of Queue because Queue and Synchronize share the same queue. I tried Synchronize first and it failed for the same reason as the code above fails - the earlier queued methods invoke message handling so the final Synchronize method runs before the earlier queued methods have completed.
So I'm still stuck with the original problem, which now boils down to: Can I cleanly ensure that all of the queued methods have completed before the worker thread is freed, and can I cleanly free the worker thread without using PostMessage, which requires a window handle to post to (that my business layer doesn't have access to).
I've also updated the title better to reflect the original problem, although I'd be happy for an alternative solution that doesn't use TThread.Queue if appropriate. If someone can think up a better title then please edit it.
Another update: This answer by David Heffernan suggests using PostMessage with a window specially created by AllocateHWnd in the general case where TThread.Queue isn't available or suitable. Significantly, it's never safe to use PostMessage to the main form, because that window can be spontaneously recreated, changing its handle and causing all subsequent messages posted to the old handle to be lost. This makes a strong argument for adopting this particular solution: there's no additional overhead to creating a hidden window in my case, since any application using PostMessage should do this anyway - i.e. my separation-of-concerns argument is irrelevant.
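For reference, a rough sketch of that hidden-window approach might look like this (WM_WORKER_DONE, TWorkerOwner and the handler body are illustrative, not code from my application):

uses
  Winapi.Windows, Winapi.Messages, System.Classes;

const
  WM_WORKER_DONE = WM_APP + 1; // illustrative private message

type
  // Lives in the business layer; owns a hidden window so the worker thread
  // has a stable handle to post to (unlike the main form, whose handle can
  // be recreated).
  TWorkerOwner = class
  private
    FWndHandle: HWND;
    procedure WndProc(var Msg: TMessage);
  public
    constructor Create;
    destructor Destroy; override;
    property Handle: HWND read FWndHandle;
  end;

constructor TWorkerOwner.Create;
begin
  inherited Create;
  FWndHandle := AllocateHWnd(WndProc);
end;

destructor TWorkerOwner.Destroy;
begin
  DeallocateHWnd(FWndHandle);
  inherited;
end;

procedure TWorkerOwner.WndProc(var Msg: TMessage);
begin
  if Msg.Msg = WM_WORKER_DONE then
  begin
    // runs in the main thread: free the worker, fire an OnComplete event, etc.
  end
  else
    Msg.Result := DefWindowProc(FWndHandle, Msg.Msg, Msg.WParam, Msg.LParam);
end;

// In the worker thread, as the last statement of Execute:
//   PostMessage(OwnerHandle, WM_WORKER_DONE, 0, 0);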
TThread.Queue() is a FIFO queue. In fact, it shares the same queue that TThread.Synchronize() uses. But you are correct that message handling causes queued methods to execute. This is because TApplication.Idle() calls CheckSynchronize() whenever the message queue goes idle after processing new messages. So if a queued/synched method invokes message processing, that can allow other queued/synched methods to begin running even while the earlier method is still running.
If you want to ensure a queue method is called before the thread terminates, you should be using Synchronize() instead of Queue(), or use the OnTerminate event instead (which is triggered by Synchronize()). What you are doing in your finally block is effectively the same as what the OnTerminate event already does natively.
Setting FreeOnTerminate := True in a queued method is asking for a memory leak. FreeOnTerminate is evaluated immediately when Execute() exits, before DoTerminate() is called to trigger the OnTerminate event (which is an oversight in my opinion, as evaluating it that early prevents OnTerminate from deciding at termination time whether a thread should free itself or not after OnTerminate exits). So if the queued method runs after Execute() has exited, there is no guarantee that FreeOnTerminate will be set in time. Waiting for a queued method to finish before returning control to the thread is exactly what Synchronize() is meant for. Synchronize() is synchronous, it waits for the method to exit. Queue() is asynchronous, it does not wait at all.
I fixed this problem by adding a call to Synchronize() at the end of my Execute() method. This forces the thread to wait till all of the calls added with Queue() are completed on the main thread before the call added with Synchronize() can be called.
type
  TMyThread = class(TThread)
  private
    procedure QueueMethod;
    procedure DummySync;
  protected
    procedure Execute; override;
  end;

procedure TMyThread.QueueMethod;
begin
  // Do something on the main thread
  UpdateSomething;
end;

procedure TMyThread.DummySync;
begin
  // You don't need to do anything here. It's just used
  // as a fence to stop the thread ending before all the
  // queued messages are processed.
end;

procedure TMyThread.Execute;
begin
  while SomeCondition do
  begin
    // Some process
    Queue(QueueMethod);
  end;
  Synchronize(DummySync);
end;
This is the solution I finally adopted.
I used a Delphi TCountdownEvent to track the number of outstanding queued methods from my thread, incrementing the count just before queuing a method, and decrementing it as the final act of the queued method.
Just before my override of TThread.Execute returns, it waits for the TCountdownEvent object to be signalled, i.e. when the count reaches zero. This is the crucial step that guarantees that all of the queued methods have completed before Execute returns.
Once all of the queued methods are complete, it calls Synchronize with an OnComplete handler - thanks to Remy for pointing out that this is equivalent to but simpler than my original code that used Queue and WaitForSingleObject. (OnComplete is like OnTerminate, but called before Execute returns so that the handler can modify FreeOnTerminate.)
The only wrinkle is that TCountdownEvent.AddCount works only if the count is already greater than zero. So I wrote a class helper to implement ForceAddCount:
procedure TCountdownEventHelper.ForceAddCount(aCount: Integer);
begin
  if not TryAddCount(aCount) then
    Reset(aCount);
end;
Normally this would be risky, but in my case we know that by the time the thread starts waiting for the number of outstanding queued methods to reach zero, no more methods can be queued (so from that point, once the count hits zero it will stay at zero).
This doesn't completely solve the problem of queued methods that handle messages, in that individual queued methods could still appear to run out of order. But I do now have the guarantee that all queued methods run asynchronously but will have completed before the thread exits. This was the primary goal, because it allows the thread to clean itself up without the risk of losing queued methods.
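To make the shape of this concrete, a minimal sketch of the pattern could look like the following (illustrative names; FPending is the TCountdownEvent field, ForceAddCount is the helper above, and the actual data hand-off to the GUI is elided):

procedure TMyWorker.Execute;
begin
  while not Terminated and HaveWork do
  begin
    PrepareNextResult;           // fill the fields that QueuedUpdate will read
    FPending.ForceAddCount(1);   // count the method *before* queuing it
    Queue(QueuedUpdate);
  end;

  // Block until every queued method above has completed in the main thread.
  FPending.WaitFor(INFINITE);    // TCountdownEvent from System.SyncObjs; INFINITE from Winapi.Windows

  // Now it is safe to run the completion handler synchronously; it may set
  // FreeOnTerminate before Execute returns.
  Synchronize(
    procedure
    begin
      if Assigned(FOnComplete) then
        FOnComplete(Self);
    end);
end;

procedure TMyWorker.QueuedUpdate;
begin
  try
    UpdateGui;                   // main-thread GUI work
  finally
    FPending.Signal;             // decrement as the queued method's last act
  end;
end;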
A few thoughts:
FreeOnTerminate is not the end of the world if you want your thread to delete itself.
Semaphores let you maintain a count, should you feel the need; such constructs do exist.
There's nothing to stop you writing or using your own queueing primitives and AllocateHWnd if you want some fine grained control.

Delphi check if TThread is still running

I am using TThread. I just need to find out if the thread is still running, and terminate the application if it takes too long.
Ex:
MyThread := TMyThread.Create;
MyThread.Resume;
MyThread.Terminate;

total := 0;
while MyThread.isRunning = True do // <-- I need something like this
begin
  // show message to the user to wait a little longer, we are still doing stuff
  Sleep(1000);
  total := total + 1000;
  if total > 60000 then
    Exit;
end;
Is this possible with Delphi?
The straight answer to your question is the Finished property of TThread. However, I don't recommend that. It leads you to that rather nasty Sleep based loop that you present in the question.
There are a number of cleaner options, including at least the following:
Use WaitFor to block until the thread completes. This will block the calling thread, which precludes showing UI or responding to the user.
Use MsgWaitForMultipleObjects in a loop, passing the thread handle. This allows you to wait until the thread completes, but also service the UI.
Implement an event handler for the OnTerminate event. This allows you to be notified in an event driven manner of the thread's demise.
I recommend that you use one of these options instead of your home-grown Sleep based loop. In my experience, option 3 generally leads to the cleanest code. But it is hard to give rigid advice without knowing all the requirements. Only you have that knowledge.
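For example, option 2 might be sketched roughly like this (the 250 ms slice, the function name and the timeout handling are only illustrative):

uses
  Winapi.Windows, System.Classes, Vcl.Forms;

// Wait up to TimeoutMs for AThread to finish while still servicing the UI.
// Returns True if the thread finished, False if the timeout elapsed first.
function WaitForThreadWithTimeout(AThread: TThread; TimeoutMs: Cardinal): Boolean;
var
  H: THandle;
  Start, Res: Cardinal;
begin
  H := AThread.Handle;
  Start := GetTickCount;
  repeat
    Res := MsgWaitForMultipleObjects(1, H, False, 250, QS_ALLINPUT);
    if Res = WAIT_OBJECT_0 then
      Exit(True);                  // thread handle signalled: it has finished
    if Res = WAIT_OBJECT_0 + 1 then
      Application.ProcessMessages; // keep the UI responsive while waiting
  until GetTickCount - Start >= TimeoutMs;
  Result := False;                 // still running after TimeoutMs
end;

Your 60-second limit would then be something like: if not WaitForThreadWithTimeout(MyThread, 60000) then Exit;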

Do I need a thread-safe string list to prevent a deadlock in this scenario?

I'm working with a simple static thread pool: there are 4 threads, each with its own queue, that process individual lines from a string list. After a thread has completed one of the requests in its queue, it synchronizes an event, which is handled in the parent thread. This is done by calling DoComplete(), like so:
procedure TDecoderThread.DoComplete(const Line: Integer; const Text: String);
begin
  FLine := Line;
  FText := Text;
  Synchronize(SYNC_OnComplete);
end;

procedure TDecoderThread.SYNC_OnComplete;
begin
  if Assigned(FOnComplete) then
    FOnComplete(Self, FText, FLine); // triggers the event, which is handled in the parent thread
end;
On the other end, in their parent thread, these events are handled with this procedure:
procedure TDecoder.ThreadComplete(Sender: TDecoderThread; const Text: String;
  const Line: Integer);
begin
  FStrings[Line] := Text; // updates the original line in the list with the new text
end;
Since I have 4 different threads, each of which might call this OnComplete() event at the same time as each other, do I also have to worry about thread protecting this FStrings: TStrings? Could two threads triggering their OnComplete() event at the same time cause a deadlock in their parent thread when writing to this string list? Or would the main thread be smart enough to wait until one of them is done before handling the other?
PS - Yes, this little project was an attempt to answer another previous question from someone else here on SO, which has been answered far differently, but in order to get myself a little more familiar with multi-threading, I continued this sample project anyway.
Since the OnComplete event is triggered by Synchronize(), you do not need a thread-safe lock around the FStrings list: all access to the list is delegated to the main thread, so only one OnComplete event handler can actually run at a time. If you were not using Synchronize(), you would need such a lock around FStrings if items were being added or removed (and thus reallocating the list memory), or if other threads were reading values from FStrings while the worker threads were still running. If the worker threads are the only ones accessing FStrings, and they only assign existing items, there is no risk of concurrent access to an individual item, so no lock would be needed.
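For completeness, if the writes did not go through Synchronize(), a lock along these lines would be one way to protect the list (a sketch only; FLock is an extra TCriticalSection field that does not exist in the question's code):

uses
  System.SyncObjs;

procedure TDecoder.ThreadComplete(Sender: TDecoderThread; const Text: String;
  const Line: Integer);
begin
  FLock.Enter;      // FLock: TCriticalSection, created and freed by TDecoder
  try
    FStrings[Line] := Text;
  finally
    FLock.Leave;
  end;
end;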

Can I execute TDataSet.DisableControls in worker thread without wrapping it with Synchronize()?

First of all, I am not sure that it is good design to allow a worker thread to disable controls. However, I am curious whether I can do it safely without synchronizing with the GUI.
The code in TDataSet looks like this:
procedure TDataSet.DisableControls;
begin
  if FDisableCount = 0 then
  begin
    FDisableState := FState;
    FEnableEvent := deDataSetChange;
  end;
  Inc(FDisableCount);
end;
So it looks safe to do. The situation would be different in the case of EnableControls, but DisableControls only seems to increment a lock counter and assign an event that is fired later during EnableControls.
What do you think?
It looks safe to do so, but things may go wrong because these flags are used in code that may be in the middle of being executed at the moment you call this method from your thread.
I would Synchronise the call to DisableControls, because you want your thread to start using this dataset only if no controls are using it.
The call to EnableControls can be synchronised too, or you can post a message to the form using PostMessage. That way, the thread doesn't have to wait for the main thread.
But my gut feeling tells me that it may be better not to use the same dataset for the GUI and the thread at all.
Without having looked up the actual code: It might be safe, as long as you can be sure that the main thread currently does not access FDisableCount, FDisableState and FEnableEvent. There is the possibility of a race condition here.
I would still recommend that you call DisableControls from within the main thread.
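A sketch of keeping both calls on the main thread, as suggested above, might look like this (FDataSet and the anonymous-method style are illustrative):

procedure TWorkerThread.Execute;
begin
  Synchronize(
    procedure
    begin
      FDataSet.DisableControls;  // runs in the main thread
    end);
  try
    // ... work with FDataSet in the worker thread ...
  finally
    Synchronize(
      procedure
      begin
        FDataSet.EnableControls; // or post a message to the form instead
      end);
  end;
end;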

Why does my multi-threaded app sometimes hang when it closes?

I'm using several critical sections in my application. The critical sections prevent large data blobs from being modified and accessed simultaneously by different threads.
AFAIK it's all working correctly except sometimes the application hangs when exiting. I'm wondering if this is related to my use of critical sections.
Is there a correct way to free TCriticalSection objects in a destructor?
Thanks for all the answers. I'm looking over my code again with this new information in mind. Cheers!
As Rob says, the only requirement is that you ensure that the critical section is not currently owned by any thread. Not even the thread about to destroy it. So there is no pattern to follow for correctly destroying a TCriticalSection, as such. Only a required behaviour that your application must take steps to ensure occurs.
If your application is hanging then I doubt it is the free'ing of any critical section that is responsible. As MSDN says (in the link that Rob posted), DeleteCriticalSection() (which is ultimately what free'ing a TCriticalSection calls) does not block any threads.
If you were free'ing a critical section that other threads were still trying to access, you would get access violations and other unexpected behaviours, not deadlocks, as this little code sample demonstrates:
implementation

uses
  SyncObjs;

type
  TWorker = class(TThread)
  protected
    procedure Execute; override;
  end;

var
  cs: TCriticalSection;
  worker: TWorker;

procedure TForm2.FormCreate(Sender: TObject);
begin
  cs := TCriticalSection.Create;
  worker := TWorker.Create(True);
  worker.FreeOnTerminate := True;
  worker.Start;
  Sleep(5000);
  cs.Enter;
  ShowMessage('will AV before you see this');
end;

{ TWorker }

procedure TWorker.Execute;
begin
  inherited;
  cs.Free;
end;
Add this to the implementation section of a form's unit, correcting the "TForm2" reference for the FormCreate() event handler as required.
In FormCreate() this creates a critical section then launches a thread whose sole purpose is to free that section. We introduce a Sleep() delay to give the thread time to initialise and execute, then we try to enter the critical section ourselves.
We can't of course because it has been free'd. But our code doesn't hang - it is not deadlocked trying to access a resource that is owned by something else, it simply blows up because, well, we're trying to access a resource that no longer exists.
You could be even more sure of creating an AV in this scenario by NIL'ing the critical section reference when it is free'd.
Now, try changing the FormCreate() code to this:
cs := TCriticalSection.Create;
worker := TWorker.Create(True);
worker.FreeOnTerminate := True;
cs.Enter;
worker.Start;
Sleep(5000);
cs.Leave;
ShowMessage('appearances can be deceptive');
This changes things... now the main thread will take ownership of the critical section - the worker thread will now free the critical section while it is still owned by the main thread.
In this case however, the call to cs.Leave does not necessarily cause an access violation. All that occurs in this scenario (afaict) is that the owning thread is allowed to "leave" the section as it would expect to (it doesn't of course, because the section has gone, but it seems to the thread that it has left the section it previously entered) ...
... in more complex scenarios an access violation or other error is quite possible, as the memory previously used for the critical section object may have been re-allocated to some other object by the time you call its Leave() method, resulting in a call into some other unknown object or access to invalid memory, etc.
Again, changing the worker.Execute() so that it NIL's the critical section ref after free'ing it would ensure an access violation on the attempt to call cs.Leave(), since Leave() calls Release() and Release() is virtual - calling a virtual method with a NIL reference is guaranteed to AV (ditto for Enter() which calls the virtual Acquire() method).
In any event:
Worst case: an exception or weird behaviour
"Best" case: the owning thread appears to believe it has "left" the section as normal.
In neither case is a deadlock or a hang going to occur simply as the result of when a critical section is free'd in one thread in relation to when other threads then try to enter or leave that critical section.
All of which is a roundabout way of saying that it sounds like you have a more fundamental race condition in your threading code, not directly related to the free'ing of your critical sections.
In any event, I hope my little bit of investigative work might set you down the right path.
Just make sure nothing still owns the critical section. Otherwise, MSDN explains, "the state of the threads waiting for ownership of the deleted critical section is undefined." Other than that, call Free on it like you do with all other objects.
AFAIK it's all working correctly except sometimes the application hangs when exiting. I'm wondering if this is related to my use of critical sections.
Yes it is. But the problem is likely not in the destruction. You probably have a deadlock.
Deadlocks are when two threads wait on two exclusive resources, each wanting both of them and each owning only one:
//Thread1:
FooLock.Enter;
BarLock.Enter;
//Thread2:
BarLock.Enter;
FooLock.Enter;
The way to fight these is to order your locks. If some thread wants two of them, it has to enter them only in specific order:
//Thread1:
FooLock.Enter;
BarLock.Enter;
//Thread2:
FooLock.Enter;
BarLock.Enter;
This way deadlock will not occur.
Many things can trigger deadlock, not only TWO critical sections. For instance, you might have used SendMessage (synchronous message dispatch) or Delphi's Synchronize AND one critical section:
//Thread1:
OnPaint:
FooLock.Enter;
FooLock.Leave;
//Thread2:
FooLock.Enter;
Synchronize(SomeProc);
FooLock.Leave;
Synchronize and SendMessage send messages to Thread1. To dispatch those messages, Thread1 needs to finish whatever work it's doing. For instance, OnPaint handler.
But to finish painting, it needs FooLock, which is taken by Thread2 which waits for Thread1 to finish painting. Deadlock.
The way to solve this is either to never use Synchronize and SendMessage (the best way), or at least to use them outside of any locks.
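For example, the second option might look roughly like this (BuildResult, ShowResult and Snapshot are illustrative names):

procedure TWorkerThread.Execute;
var
  Snapshot: string;  // whatever data the GUI needs
begin
  FooLock.Enter;
  try
    Snapshot := BuildResult;  // touch shared state only while the lock is held
  finally
    FooLock.Leave;
  end;
  // No lock is held here, so the main thread can finish its OnPaint handler
  // (which may itself need FooLock) and then service this call.
  Synchronize(
    procedure
    begin
      ShowResult(Snapshot);
    end);
end;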
Is there a correct way to free TCriticalSection objects in a destructor?
It does not matter where you are freeing TCriticalSection, in a destructor or not.
But before freeing the TCriticalSection, you must ensure that all the threads that could have used it are stopped, or are in a state where they cannot possibly try to enter this section anymore.
For example, if your thread enters this section while dispatching a network message, you have to ensure network is disconnected and all the pending messages are processed.
Failing to do that will in most cases trigger access violations, sometimes nothing (if you're lucky), and rarely deadlocks.
There is nothing magical about TCriticalSection, or about critical sections themselves. Try replacing the TCriticalSection objects with plain API calls:
uses
  Windows, ...;

var
  CS: TRTLCriticalSection;

...
EnterCriticalSection(CS);
...
// here goes the code that you have to protect from simultaneous access by multiple threads
...
LeaveCriticalSection(CS);
...

initialization
  InitializeCriticalSection(CS);

finalization
  DeleteCriticalSection(CS);
Switching to the API will not harm the clarity of your code but may, perhaps, help to reveal hidden bugs.
You NEED to protect all critical sections using a try..finally block.
Use TRTLCriticalSection instead of a TCriticalSection class. It's cross-platform, and TCriticalSection is only an unnecessary wrapper around it.
If any exception occurs during the data processing, then the critical section is not left, and another thread may block.
If you want a fast response, you can also use TryEnterCriticalSection for some user-interface processing or such.
Here are some good practice rules:
make your TRTLCriticalSection a property of a Class;
call InitializeCriticalSection in the class constructor, then DeleteCriticalSection in the class destructor;
use EnterCriticalSection()... try... do something... finally LeaveCriticalSection(); end;
Here is a code sample:
type
  TDataClass = class
  protected
    fLock: TRTLCriticalSection;
  public
    constructor Create;
    destructor Destroy; override;
    procedure SomeDataProcess;
  end;

constructor TDataClass.Create;
begin
  inherited;
  InitializeCriticalSection(fLock);
end;

destructor TDataClass.Destroy;
begin
  DeleteCriticalSection(fLock);
  inherited;
end;

procedure TDataClass.SomeDataProcess;
begin
  EnterCriticalSection(fLock);
  try
    // some data process
  finally
    LeaveCriticalSection(fLock);
  end;
end;
If the only explicit synchronisation code in your app is through critical sections then it shouldn't be too difficult to track this down.
You indicate that you have only seen the deadlock on termination. Of course this doesn't mean that it cannot happen during normal operation of your app, but my guess (and we have to guess without more information) is that it is an important clue.
I would hypothesise that the error may be related to the way in which threads are forcibly terminated. A deadlock such as you describe would happen if a thread terminated whilst still holding the lock, but then another thread attempted to acquire the lock before it had a chance to terminate.
A very simple thing to do which may fix the problem immediately is to ensure, as others have correctly said, that all uses of the lock are protected by Try/Finally. This really is a critical point to make.
There are two main patterns for resource lifetime management in Delphi, as follows:
lock.Acquire;
try
  DoSomething();
finally
  lock.Release;
end;
The other main pattern is pairing acquisition/release in Create/Destroy, but that is far less common in the case of locks.
Assuming that your usage pattern for the locks is as I suspect (i.e. acquire and release inside the same method), can you confirm that all uses are protected by try/finally?
If your application only hangs or deadlocks on exit, check the OnTerminate event for all threads, and make sure the main thread signals the other threads to terminate and then waits for them before freeing them. It is important not to make any synchronised calls in the OnTerminate event. That can cause a deadlock: the main thread waits for the worker thread to terminate, while the worker's synchronised call is waiting on the main thread.
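A sketch of such a shutdown sequence (assuming FWorkers is a TList<TWorkerThread> of threads created with FreeOnTerminate = False):

procedure TMainForm.FormClose(Sender: TObject; var Action: TCloseAction);
var
  Worker: TWorkerThread;
begin
  for Worker in FWorkers do
    Worker.Terminate;    // just sets the flag; also signal any wait events here
  for Worker in FWorkers do
  begin
    // Per the advice above, the workers should not make synchronised calls
    // from their OnTerminate handlers while we block waiting for them.
    Worker.WaitFor;
    Worker.Free;
  end;
  FWorkers.Clear;
end;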
Don't delete critical sections in an object's destructor; sometimes this will cause your application to crash.
Use a separate method which deletes the critical section instead:
procedure someobject.deleteCritical();
begin
  DeleteCriticalSection(criticalSection);
end;

destructor someobject.destroy();
begin
  // Do your release tasks here
  inherited;
end;
1) First call the method that deletes the critical section.
2) Then release (free) the object.
