Delphi check if TThread is still running - multithreading

I am using TThread. I just need to find out whether the thread is still running, and terminate the application if it takes too long.
Example:
MyThread := TMyThread.Create;
MyThread.Resume;
MyThread.Terminate;
total := 0;
while MyThread.IsRunning do // <-- I need something like this
begin
  // show a message asking the user to wait a little longer, we are still doing stuff
  Sleep(1000);
  total := total + 1000;
  if total > 60000 then
    Exit;
end;
Is this possible with Delphi?

The straight answer to your question is the Finished property of TThread. However, I don't recommend that: it leads you to the rather nasty Sleep-based loop that you present in the question.
There are a number of cleaner options, including at least the following:
1. Use WaitFor to block until the thread completes. This will block the calling thread, which will preclude showing UI or responding to the user.
2. Use MsgWaitForMultipleObjects in a loop, passing the thread handle. This allows you to wait until the thread completes, but also service the UI.
3. Implement an event handler for the OnTerminate event. This allows you to be notified in an event-driven manner of the thread's demise.
I recommend that you use one of these options instead of your home-grown Sleep based loop. In my experience, option 3 generally leads to the cleanest code. But it is hard to give rigid advice without knowing all the requirements. Only you have that knowledge.
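For illustration, here is a minimal sketch of option 3, assuming a TMyThread worker and a form with a MyThreadDone method (these names are placeholders, not taken from the question):

procedure TForm1.StartWork;
begin
  MyThread := TMyThread.Create(True);   // create suspended
  MyThread.FreeOnTerminate := True;     // thread frees itself after OnTerminate returns
  MyThread.OnTerminate := MyThreadDone; // fired in the main thread
  MyThread.Start;                       // Resume is deprecated in recent Delphi versions
end;

procedure TForm1.MyThreadDone(Sender: TObject);
begin
  // Runs in the main (GUI) thread after Execute has finished,
  // so it is safe to update the UI here.
  MyThread := nil;                      // the thread is about to free itself
  ShowMessage('Worker finished');
end;

A TTimer in the main thread can then enforce the 60-second limit without any blocking loop.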

Related

Multithreading questions

1) This has probably been asked a lot before, and I have read quite a lot about Sleep vs WaitForSingleObject, but I am still a little confused. As an example, I have a simple background thread which is called from a data-processing thread to show some message to the user without blocking the data-processing thread. Which is better in this case in terms of performance (CPU usage / CPU time): http://pastebin.com/VuhfZUEg or http://pastebin.com/eciK92ze
I suspect that the shorter the Sleep time and the more flags I have, the worse the performance. On the other hand, with a longer Sleep time performance will be better but the reaction delay will increase, and with a lot of threads the reaction delay will eventually become noticeable to the user on a low-end machine. Is that correct? So what is the best way to keep the "dormant", inactive threads?
2) Is SetEvent blocking (like SendMessage) or non-blocking (like PostMessage)?
3) In TForm.OnCreate event I have a following code:
procedure TFormSubsystem.FormCreate(Sender: TObject);
begin
  Application.OnException := LogApplicationException;
  Application.OnActivate := InitiateApplication;
end;

procedure TFormSubsystem.LogApplicationException(Sender: TObject; E: Exception);
var
  ErrorFile: TextFile;
  ErrorInfo: String;
begin
  AssignFile(ErrorFile, AppPath + 'Error.log');
  if FileExists(AppPath + 'Error.log') then
    Append(ErrorFile)
  else
    Rewrite(ErrorFile);
  ErrorInfo := DateTimeToStr(Now) + ' Unhandled exception';
  if Assigned(Sender) then
    ErrorInfo := ErrorInfo + ' in ' + Sender.UnitName + '->' + Sender.ClassName;
  ErrorInfo := ErrorInfo + ': ' + E.Message;
  try
    WriteLn(ErrorFile, ErrorInfo);
  finally
    CloseFile(ErrorFile);
  end;
end;
It's not the best way to log errors, but it's simple. The question is: what happens if an exception is raised inside TSomeThreadAncestor.Execute or inside a method called via Synchronize?
4) What exactly is the difference between Synchronize and Queue? Which one should I use when my background threads interact with the GUI? I don't have race conditions and I already use something like semaphores.
5) Is it safe to use such constructions?
procedure TShowBigDialoxBoxThread.Execute;
begin
  while ThreadNotTerminated do
  begin
    EventHandler.WaitFor(INFINITE);
    if not ThreadNotTerminated then
      Continue;
    EventHandler.ResetEvent;
    Synchronize(
      procedure
      begin
        MessageDlgBig(FMsg, FDlgType, FButtons, FHelpContext, FDefaultButton, FDlgMinWidth);
      end); // this kind of Synchronize call looks fishy
  end;
end;
Or should I stick with calling a class method like in examples I provided above?
Edit:
I currently use Delphi XE5.
Polling in a thread's core loop is bad practice if you could wait with a blocking function instead. What you need to understand is that a context switch is very expensive in terms of CPU usage. The only case in which you should poll is when there is no other option (you cannot be notified when to do the work).
The difference is the following:
Polling:
Your thread gives control to the scheduler by calling Sleep(), which forces a context switch. After the given period (and a bit), the scheduler gives control back to your thread, initiating another context switch. Your thread checks whether it has anything to do and, if not, calls Sleep() again, forcing yet another context switch. So your thread cannot respond until its Sleep() period is over, and it forces at least two additional context switches per Sleep() period.
Blocking:
Your thread gives control to the scheduler by calling a blocking wait function (for example WaitForSingleObject() or TEvent.WaitFor()), which forces a context switch, but it will not wake up until you signal it. That means only one context switch per operation, and an immediate response from the signalled thread.
Also: SetEvent() is not blocking, but you should have asked that in a separate question.
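For illustration, a minimal sketch of the blocking pattern with a TEvent (TNotifierThread, FWakeUp and FMsg are placeholder names, not taken from the pastebin links; a real implementation would also protect FMsg against concurrent access):

uses
  System.Classes, System.SysUtils, System.SyncObjs;

type
  TNotifierThread = class(TThread)
  private
    FWakeUp: TEvent;   // signalled by the data-processing thread
    FMsg: string;
    procedure ShowPendingMessage;
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Notify(const AMsg: string);  // called from another thread
  end;

constructor TNotifierThread.Create;
begin
  inherited Create(False);
  FWakeUp := TEvent.Create(nil, False, False, '');  // auto-reset event
end;

destructor TNotifierThread.Destroy;
begin
  FWakeUp.Free;
  inherited;
end;

procedure TNotifierThread.Notify(const AMsg: string);
begin
  FMsg := AMsg;
  FWakeUp.SetEvent;  // non-blocking: it just wakes the waiting thread
end;

procedure TNotifierThread.ShowPendingMessage;
begin
  // runs in the main thread via Synchronize; show FMsg to the user here
end;

procedure TNotifierThread.Execute;
begin
  while not Terminated do
    // Blocks without consuming CPU; one context switch per wake-up,
    // unlike a Sleep()/flag-check polling loop.
    if FWakeUp.WaitFor(INFINITE) = wrSignaled then
    begin
      if Terminated then
        Break;
      Synchronize(ShowPendingMessage);
    end;
end;

To shut the thread down cleanly, call Terminate followed by FWakeUp.SetEvent so the wait is released.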

Thread Pool Class development [closed]

I run a multithreaded application and I want to limit the number of threads running on my machine.
The code concept currently goes like this (it is just a draft to show the major ideas behind it):
// a list with all the threads I have
type
  TMyThreadList = TList<TMyCalcThread>;

  // a pool class: checks how many threads are active and tries to start
  // additional threads once the number of running threads is below
  // max_thread_value_running
  TMyThreadPool = class
    ThreadList: TMyThreadList;
    StillRunningThreads: Integer;
    StartFromThreadID: Integer;
  end;

var
  aThreadList: TMyThreadList;

procedure TMainForm.CallCreateThreadFunction( < THREAD PARAMS > );
begin
  // just create, but do not start here
  MyThread := TMyCalcThread.Create( < THREAD PARAMS > );
  MyThread.OnTerminate := What_To_do;
  // add all threads to a list
  aThreadList.Add(MyThread);
end;

/// (A)
procedure TMainForm.What_To_do(Sender: TObject);
var
  start: Integer;
begin
  max_thread_value_running := max_thread_value_running - 1;
  if max_thread_value_running < max_thread_count then
  begin
    start := max_thread_count - max_thread_value_running;
    StartThreads(start, StartFromThreadID);
  end;
end;

procedure TMainForm.StartThreads(HowMany, FromIndex: Integer);
var
  i: Integer;
begin
  for i := FromIndex to HowMany + FromIndex do
  begin
    // Thread[i].Start
  end;
end;

procedure TMainForm.ButtonClick(Sender: TObject);
begin
  /// main VCL form, create threads
  for i := 0 to allThreads do
  begin
    CallCreateThreadFunction( < THREAD PARAMS > );
  end;
  // ...
  /// (B)
  while StillRunningThreads > 0 do
  begin
    Application.ProcessMessages;
  end;
end;
The complete idea is a small list with the threads; on every individual thread's terminate step I update the number of running threads and start as many new threads as are now allowed (A). Instead of a call to WaitForSingleObject() I do a loop at the end to wait for all threads to finish execution (B).
I did not find any full example of this design on the net. Am I approaching a valid design, or will I run into some trouble which I have not considered yet?
Any comment on the design, or a better class design, is welcome.
Don't try to micro-manage threads like this. Just don't. Either use the Win API threadpool calls, or make your own threadpool from a producer-consumer queue and TThread instances.
When a task is completed, I suggest that the work thread call an 'OnCompletion' TNotifyEvent of the task class with the task as the parameter. This can be set by the issuer of the task to anything they might wish, e.g. PostMessage-ing the task instance to the GUI thread for display.
Micro-managing threads, continually creating/waiting/terminating/destroying, Application.ProcessMessages loops, etc., is just horrible and almost sure to go wrong at some point.
Waiting for ANYTHING in a GUI event-handler is just bad. The GUI system is a state-machine and you should not wait inside it. If you want to issue a task from the GUI thread, do so but don't wait for it - fire it off and forget it until it is completed and gets posted back to a message-handler procedure.
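As an illustration of the producer-consumer approach, here is a minimal sketch using TThreadedQueue from System.Generics.Collections; the TWorkItem type, queue depth and worker count are assumptions, not part of the question:

uses
  System.Classes, System.SysUtils, System.SyncObjs,
  System.Generics.Collections;

type
  TWorkItem = TProc;  // a task is just an anonymous procedure in this sketch

  TPoolWorker = class(TThread)
  private
    FQueue: TThreadedQueue<TWorkItem>;  // shared, owned by the pool creator
  protected
    procedure Execute; override;
  public
    constructor Create(AQueue: TThreadedQueue<TWorkItem>);
  end;

constructor TPoolWorker.Create(AQueue: TThreadedQueue<TWorkItem>);
begin
  FQueue := AQueue;
  inherited Create(False);
end;

procedure TPoolWorker.Execute;
var
  Item: TWorkItem;
begin
  // Each worker blocks on the shared queue; no polling and no
  // create/terminate/destroy churn per task.
  while not Terminated do
  begin
    if FQueue.PopItem(Item) <> wrSignaled then
      Break;            // queue was shut down
    if Assigned(Item) then
      Item();           // run the task; report completion via an event or PostMessage
  end;
end;

// Setting up the pool and issuing work (e.g. from the main form):
//   FQueue := TThreadedQueue<TWorkItem>.Create(1000);
//   for i := 1 to MaxThreads do
//     FWorkers.Add(TPoolWorker.Create(FQueue));
//   FQueue.PushItem(procedure begin DoOneCalculation; end);
// To stop: call FQueue.DoShutDown, then wait for and free the workers.

The workers are created once and reused; the number of TPoolWorker instances is the thread limit, so no counting of running threads is needed.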
Basically, thread pools are a nice feature (there is an implementation in the Win32 API, although I don't have any experience with it).
There is one basic stumbling block, however: you need to remember that a task may be delayed until a free thread is available. If you need synchronization between different tasks (e.g. tasks waiting on other tasks) then you have a serious deadlock problem:
Just assume that all running tasks wait for a single task which is waiting for a free thread...
A similar problem can also happen if your threads wait for the main thread to react while the main thread waits for a new task to start.
If your tasks don't need any further synchronization (e.g. once a task is finished it will just mark itself as finished and the main thread will then later on read the result) you don't need to worry about this.
As a small side note:
I would use two separate lists: One for free (suspended) threads and one for running threads.
I'd consider using an existing thread-pool implementation (like the WinAPI CreateThreadpool) before creating my own...
The loop (B) takes a lot of CPU power away from the threads. Don't wait actively in a loop for the threads; use one of the WaitFor... functions.
Reuse your threads and have a list of "todos" that the threads will execute. Thread creation and destruction can be expensive.
Apart from that I'd recommend that you use an existing library instead of reinventing the wheel. Or use at least the Windows API thread pool functions.

Ensure all TThread.Queue methods complete before thread self-destructs

I have discovered that if a method queued with TThread.Queue calls a method that invokes TApplication.WndProc (e.g. ShowMessage) then subsequent queued methods are allowed to run before the original method has completed. Worse still, they don't seem to be invoked in FIFO order.
[Edit: Actually they do start in FIFO order. With ShowMessage it looks like the later one ran first because there is a call to CheckSynchronize before the dialog appears. This unqueues the next method and runs it, not returning until the latter method has completed. Only then does the dialog appear.]
I'm trying to ensure that all methods queued from the worker thread to run in the VCL thread run in strict FIFO order, and that they all complete before the worker thread is destroyed.
My other constraint is that I am trying to maintain strict separation of the GUI from the business logic. The thread in this case is part of the business logic layer so I can't use PostMessage from an OnTerminate handler to arrange for the thread to be destroyed (as recommended by a number of contributors elsewhere). So I'm setting FreeOnTerminate := True in a final queued method just before TThread.Execute exits. (Hence the need for them to execute in strict FIFO order.)
This is how my TThread.Execute method ends:
finally
  // Queue a final method to execute in the main thread that will set an event
  // allowing this thread to exit. This ensures that this thread can't exit
  // until all of the queued procedures have run.
  Queue(
    procedure
    begin
      if Assigned(fOnComplete) then
      begin
        fOnComplete(Self);
        // Handler sets fWorker.FreeOnTerminate := True and fWorker := nil
      end;
      SetEvent(fCanExit);
    end);
  WaitForSingleObject(fCanExit, INFINITE);
end;
but as I said this doesn't work because this queued method executes before some of the earlier queued methods.
Can anyone suggest a simple and clean way to make this work, or a simple and clean alternative?
[The only idea I've come up with so far that maintains separation of concerns and modularity is to give my TThread subclass a WndProc of its own. Then I can use PostMessage directly to this WndProc instead of the main form's. But I'm hoping for something a bit more light-weight.]
Thanks for the answers and comments so far. I now understand that my code above with a queued SetEvent and WaitForSingleObject is functionally equivalent to calling Synchronize at the end instead of Queue because Queue and Synchronize share the same queue. I tried Synchronize first and it failed for the same reason as the code above fails - the earlier queued methods invoke message handling so the final Synchronize method runs before the earlier queued methods have completed.
So I'm still stuck with the original problem, which now boils down to: Can I cleanly ensure that all of the queued methods have completed before the worker thread is freed, and can I cleanly free the worker thread without using PostMessage, which requires a window handle to post to (that my business layer doesn't have access to).
I've also updated the title better to reflect the original problem, although I'd be happy for an alternative solution that doesn't use TThread.Queue if appropriate. If someone can think up a better title then please edit it.
Another update: This answer by David Heffernan suggests using PostMessage with a special AllocateHWnd in the general case if TThread.Queue isn't available or suitable. Significantly, it's never safe to use PostMessage to the main form because the window can be spontaneously recreated changing its handle, which would cause all subsequent messages to the old handle to be lost. This makes a strong argument for me adopting this particular solution, since there's no additional overhead to creating a hidden window in my case since any application using PostMessage should do this - i.e. my separation of concerns argument is irrelevant.
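For reference, a minimal sketch of that AllocateHWnd approach (WM_WORKER_DONE, TWorkerOwner and FOwnerWnd are illustrative names, not taken from the linked answer):

uses
  Winapi.Windows, Winapi.Messages, System.Classes;

const
  WM_WORKER_DONE = WM_APP + 1;   // hypothetical private message

type
  TWorkerOwner = class
  private
    FWnd: HWND;                  // hidden window owned by the business layer
    procedure WndMethod(var Msg: TMessage);
  public
    constructor Create;
    destructor Destroy; override;
  end;

constructor TWorkerOwner.Create;
begin
  inherited Create;
  // AllocateHWnd must be called from the main thread; it creates a hidden
  // window whose handle is not affected by VCL form recreation.
  FWnd := AllocateHWnd(WndMethod);
end;

destructor TWorkerOwner.Destroy;
begin
  DeallocateHWnd(FWnd);
  inherited;
end;

procedure TWorkerOwner.WndMethod(var Msg: TMessage);
begin
  if Msg.Msg = WM_WORKER_DONE then
  begin
    // Runs in the main thread; safe to free the worker or notify the UI here.
  end
  else
    Msg.Result := DefWindowProc(FWnd, Msg.Msg, Msg.WParam, Msg.LParam);
end;

// In the worker thread, as the very last act of Execute:
//   PostMessage(FOwnerWnd, WM_WORKER_DONE, 0, LPARAM(Self));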
TThread.Queue() is a FIFO queue. In fact, it shares the same queue that TThread.Synchronize() uses. But you are correct that message handling does cause queued methods to execute. This is because TApplication.Idle() calls CheckSynchronize() whenever the message queue goes idle after processing new messages. So if a queued/synched method invokes message processing, that can allow other queued/synched methods to begin running even if the earlier method is still running.
If you want to ensure a queued method is called before the thread terminates, you should be using Synchronize() instead of Queue(), or use the OnTerminate event instead (which is triggered by Synchronize()). What you are doing in your finally block is effectively the same as what the OnTerminate event already does natively.
Setting FreeOnTerminate := True in a queued method is asking for a memory leak. FreeOnTerminate is evaluated immediately when Execute() exits, before DoTerminate() is called to trigger the OnTerminate event (which is an oversight in my opinion, as evaluating it that early prevents OnTerminate from deciding at termination time whether a thread should free itself or not after OnTerminate exits). So if the queued method runs after Execute() has exited, there is no guarantee that FreeOnTerminate will be set in time. Waiting for a queued method to finish before returning control to the thread is exactly what Synchronize() is meant for. Synchronize() is synchronous, it waits for the method to exit. Queue() is asynchronous, it does not wait at all.
I fixed this problem by adding a call to Synchronize() at the end of my Execute() method. This forces the thread to wait until all of the calls added with Queue() have completed on the main thread before the call added with Synchronize() can run.
type
  TMyThread = class(TThread)
  private
    procedure QueueMethod;
    procedure DummySync;
  protected
    procedure Execute; override;
  end;

procedure TMyThread.QueueMethod;
begin
  // Do something on the main thread
  UpdateSomething;
end;

procedure TMyThread.DummySync;
begin
  // You don't need to do anything here. It's just used
  // as a fence to stop the thread ending before all the
  // queued messages are processed.
end;

procedure TMyThread.Execute;
begin
  while SomeCondition do
  begin
    // Some process
    Queue(QueueMethod);
  end;
  Synchronize(DummySync);
end;
This is the solution I finally adopted.
I used a Delphi TCountdownEvent to track the number of outstanding queued methods from my thread, incrementing the count just before queuing a method, and decrementing it as the final act of the queued method.
Just before my override of TThread.Execute returns, it waits for the TCountdownEvent object to be signalled, i.e. when the count reaches zero. This is the crucial step that guarantees that all of the queued methods have completed before Execute returns.
Once all of the queued methods are complete, it calls Synchronize with an OnComplete handler - thanks to Remy for pointing out that this is equivalent to but simpler than my original code that used Queue and WaitForSingleObject. (OnComplete is like OnTerminate, but called before Execute returns so that the handler can modify FreeOnTerminate.)
The only wrinkle is that TCountdownEvent.AddCount works only if the count is already greater than zero. So I wrote a class helper to implement ForceAddCount:
type
  TCountdownEventHelper = class helper for TCountdownEvent
    procedure ForceAddCount(aCount: Integer);
  end;

procedure TCountdownEventHelper.ForceAddCount(aCount: Integer);
begin
  if not TryAddCount(aCount) then
    Reset(aCount);
end;
Normally this would be risky, but in my case we know that by the time the thread starts waiting for the number of outstanding queued methods to reach zero, no more methods can be queued (so from that point, once the count hits zero it will stay at zero).
This doesn't completely solve the problem of queued methods that handle messages, in that individual queued methods could still appear to run out of order. But I do now have the guarantee that all queued methods run asynchronously but will have completed before the thread exits. This was the primary goal, because it allows the thread to clean itself up without the risk of losing queued methods.
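A minimal sketch of how this might look in practice (fPending is a hypothetical TCountdownEvent field created with an initial count of 0; fOnComplete is the completion handler from the question; ForceAddCount is the helper shown above):

procedure TWorkerThread.QueueNotification;
begin
  fPending.ForceAddCount(1);   // count the method out before queuing it
  Queue(
    procedure
    begin
      try
        // ... update the GUI / call back into the business layer ...
      finally
        fPending.Signal;       // count it back in as the very last act
      end;
    end);
end;

procedure TWorkerThread.Execute;
begin
  try
    // ... the actual work, calling QueueNotification as needed ...
  finally
    fPending.WaitFor;          // block until every queued method has completed
    Synchronize(
      procedure
      begin
        if Assigned(fOnComplete) then
          fOnComplete(Self);   // handler may set FreeOnTerminate := True here
      end);
  end;
end;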
A few thoughts:
FreeOnTerminate is not the end of the world if you want your thread to delete itself.
Semaphores let you maintain a count, should you feel the need; there are such constructs.
There's nothing to stop you writing or using your own queueing primitives and AllocateHWnd if you want some fine grained control.

Do I need a thread-safe string list to prevent a deadlock in this scenario?

I'm working with a simple static thread pool, where there are 4 threads, each with a queue, that process individual lines from a string list. After each thread has completed one of the requests in its queue, it synchronizes an event, which is handled in the parent thread. This is done by calling DoComplete() after it's done, like so:
procedure TDecoderThread.DoComplete(const Line: Integer; const Text: String);
begin
  FLine := Line;
  FText := Text;
  Synchronize(SYNC_OnComplete);
end;

procedure TDecoderThread.SYNC_OnComplete;
begin
  if Assigned(FOnComplete) then
    FOnComplete(Self, FText, FLine); // triggers the event, which is handled in the parent thread
end;
On the other end, in their parent thread, these events are handled with this procedure:
procedure TDecoder.ThreadComplete(Sender: TDecoderThread; const Text: String;
  const Line: Integer);
begin
  FStrings[Line] := Text; // updates the original line in the list with the new text
end;
Since I have 4 different threads, each of which might call this OnComplete() event at the same time as each other, do I also have to worry about thread protecting this FStrings: TStrings? Could two threads triggering their OnComplete() event at the same time cause a deadlock in their parent thread when writing to this string list? Or would the main thread be smart enough to wait until one of them is done before handling the other?
PS - Yes, this little project was an attempt to answer another previous question from someone else here on SO, which has been answered far differently, but in order to get myself a little more familiar with multi-threading, I continued this sample project anyway.
Since the OnComplete event is being triggered by Synchronize(), you do not need a thread-safe lock around the FStrings list: all access to the list is delegated to the main thread, so only one OnComplete event handler can actually run at a time. If you were not using Synchronize(), you would need such a lock around FStrings if items were being added or removed (thus reallocating the list memory), or if other threads were reading values from FStrings, while the worker threads were still running. If the processing threads are the only ones accessing FStrings, and each item is written by only one thread, there is no risk of concurrent access to an individual item, so no lock would be needed.
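For the case where Synchronize() is not used and the workers call the handler directly, a guard could look like this minimal sketch (FLock is a hypothetical TCriticalSection field created and freed by TDecoder, not part of the question's code):

uses
  System.SyncObjs;

procedure TDecoder.ThreadComplete(Sender: TDecoderThread; const Text: String;
  const Line: Integer);
begin
  // Only needed when this handler can run concurrently in different threads.
  FLock.Enter;
  try
    FStrings[Line] := Text;
  finally
    FLock.Leave;
  end;
end;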

OmniThreadLibrary: How to detect when all recursively scheduled (=pooled) threads have completed?

Let's say I have to recursively iterate over items stored in a tree structure in the background and I want to walk this tree using multiple threads from a thread pool (one thread per "folder" node). I have already managed to implement this using several different low and high-level approaches provided by the OmniThreadLibrary.
However, what I haven't figured out yet is how to properly detect that the scan has actually completed, i.e. that every last leaf node has been processed.
I found various examples on the net that either checked whether GlobalThreadPool.CountExecuting + GlobalThreadPool.CountQueued <= 0 or that used an IOmniTaskGroup.WaitForAll(). Unfortunately, none of these approaches appears to work for me. The check always returns True too early, i.e. while there are still some tasks running. None of the examples I looked at used recursion, though - and those that did did not use a thread pool - so is this maybe just not a good combination in the first place?
Here's a (very) simplified example code snippet of how I'm trying to do this at the moment:
procedure CreateScanFolderTask(const AFolder: IFolder);
begin
  CreateTask(ScanFolder)
    .SetParameter('Folder', AFolder)
    .Schedule();
end;

procedure ScanFolder(const ATask: IOmniTask);
var
  lFolder,
  lCurrentFolder: IFolder;
begin
  if ATask.CancellationToken.IsSignalled then Exit;
  lCurrentFolder := ATask.Param['Folder'].AsInterface as IFolder;
  DoSomethingWithItemsInFolder(lCurrentFolder.Items);
  for lFolder in lCurrentFolder.Folders do
  begin
    if ATask.CancellationToken.IsSignalled then Exit;
    CreateScanFolderTask(lFolder);
  end;
end;

begin
  GlobalOmniThreadPool.MaxExecuting := 8;
  CreateScanFolderTask(FRootFolder);
  // ??? wait for recursive scan to finish
  OutputResult();
end.
One example implementation for the wait that I have tried was this (based on an example found on About.com):
while GlobalOmniThreadPool.CountExecuting + GlobalOmniThreadPool.CountQueued > 0 do
  Application.ProcessMessages;
But this appears to always exit immediately, right after the "root task" has finished. Even when I add an artificial delay using Sleep() calls it still exits too early. It seems that a "gap" occurs between one task being struck off the list of executing tasks and the tasks that were scheduled inside it being added to the list of queued tasks...
Actually, instead of waiting for the scan to finish, I would much prefer to use an event handler (I'd also rather not use Application.ProcessMessages, as I will need this in form-less applications too). I already tried both IOmniTaskControl.OnTerminated and a TOmniEventMonitor, but as these fire for every finished task I still somehow need to check whether the current one was the last one, which again boils down to the same problem as above.
Or is there maybe a better way to create the tasks that would avoid this problem?
A simple way is to count the 'folders to be processed' yourself. Increment a value every time you create a folder task and decrement it every time a folder is processed.
var
  counter: TOmniCounter;

counter.Value := 0;

procedure ScanFolder(const ATask: IOmniTask);
var
  lFolder,
  lCurrentFolder: IFolder;
begin
  if ATask.CancellationToken.IsSignalled then Exit;
  lCurrentFolder := ATask.Param['Folder'].AsInterface as IFolder;
  DoSomethingWithItemsInFolder(lCurrentFolder.Items);
  for lFolder in lCurrentFolder.Folders do
  begin
    if ATask.CancellationToken.IsSignalled then Exit;
    counter.Increment;
    CreateScanFolderTask(lFolder);
  end;
  counter.Decrement;
end;
What I usually do is to count all the issued 'folderScan' objects out and count them back in again.
Each time a new TfolderScan is needed, the creating TfolderScan calls a factory for it. The factory increments a CS-protected 'taskCount' as well as creating the TfolderScan. Every time a TfolderScan is completed, it calls the factory's 'OnComplete' method, which decrements the CS-protected 'taskCount'. If, in 'OnComplete', the count is decremented to 0, there can be no TfolderScan left and the whole search must be complete. The thread that manages to decrement the count to 0 can do whatever is needed to signal completion - PostMessage() to a main form, or call an 'OnSearchComplete' event.
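A minimal sketch of that count-out/count-in idea, independent of OmniThreadLibrary (TScanTracker, FTaskCount and OnSearchComplete are illustrative names, not from the answer):

uses
  System.Classes, System.SyncObjs;

type
  TScanTracker = class
  private
    FTaskCount: Integer;              // protected by interlocked operations
    FOnSearchComplete: TNotifyEvent;
  public
    procedure TaskIssued;             // call just before scheduling a folder task
    procedure TaskCompleted;          // call as the last act of a folder task
    property OnSearchComplete: TNotifyEvent read FOnSearchComplete write FOnSearchComplete;
  end;

procedure TScanTracker.TaskIssued;
begin
  TInterlocked.Increment(FTaskCount);
end;

procedure TScanTracker.TaskCompleted;
begin
  // Whichever thread brings the count to zero signals overall completion.
  if TInterlocked.Decrement(FTaskCount) = 0 then
    if Assigned(FOnSearchComplete) then
      FOnSearchComplete(Self);        // e.g. PostMessage to the GUI from here
end;

The important detail is that a folder task calls TaskIssued for each child before calling TaskCompleted for itself (and the root task is counted out before it is scheduled), so the count can never touch zero while work is still outstanding.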
Just as a side note: instead of checking (GlobalOmniThreadPool.CountExecuting + GlobalOmniThreadPool.CountQueued > 0), use (not GlobalOmniThreadPool.IsIdle). This hides implementation details and is more efficient.

Resources