I am working on a small monitoring application. Some threads will communicate with devices via SNMP, TCP and ICMP, while other threads perform calculations.
All of the results have to be shown in the GUI (on forms or tab sheets).
I am thinking about the following possibilities:
use Synchronize from every worker thread;
use a shared buffer and the Windows messaging mechanism: a thread puts a message into the shared buffer (queue) and notifies the GUI with a Windows message;
use a separate thread which waits on synchronization primitives (events, semaphores, etc.) and again calls Synchronize, but only from that single GUI-dedicated thread, or uses a critical section around the GUI update;
UPDATE (proposed by a workmate): use a shared buffer and a TTimer on the main form which periodically (every 100-1000 ms) checks and consumes the shared buffer, instead of Windows messaging. (Does this have any benefit over messaging?)
Other?
Dear experts, please explain what the best practice is, or what the advantages and disadvantages of the above alternatives are.
UPDATE:
As an idea:
//shared buffer + send message variant
The global LogEvent function will be called from everywhere (including worker threads):
procedure LogEvent(S: String);
var
liEvent: IEventMsg;
begin
liEvent := TEventMsg.Create; //Interfaced object
with liEvent do
begin
Severity := llDebug;
EventType := 'General';
Source := 'Application';
Description := S;
end;
MainForm.AddEvent(liEvent); //Invoke main form directly
end;
In the main form, which owns the events ListView and the shared list (fEventList: TInterfaceList, which is already thread-safe), we would have:
procedure TMainForm.AddEvent(aEvt: IEventMsg);
begin
fEventList.Add(aEvt);
PostMessage(Self.Handle, WM_EVENT_ADDED, 0, 0);
end;
Message handler:
procedure WMEventAdded(var Message: TMessage); message WM_EVENT_ADDED;
...
procedure TMainForm.WMEventAdded(var Message: TMessage);
var
liEvt: IEventMsg;
ListItem: TListItem;
begin
fEventList.Lock;
try
while fEventList.Count > 0 do
begin
liEvt := IEventMsg(fEventList.First);
fEventList.Delete(0);
with lvEvents do //TListView
begin
ListItem := Items.Add;
ListItem.Caption := SeverityNames[liEvt.Severity];
ListItem.SubItems.Add(DateTimeToStr(now));
ListItem.SubItems.Add(liEvt.EventType);
ListItem.SubItems.Add(liEvt.Source);
ListItem.SubItems.Add(liEvt.Description);
end;
end;
finally
fEventList.UnLock;
end;
end;
Is there anything bad about this? The main form is created ONCE at application startup and destroyed on application exit.
Use Synchronize from every worker thread
This would probably be the simplest approach to implement, but as others have indicated will lead to your IO threads being blocked. This may/may not be a problem in your particular application.
However it should be noted that there are other reasons to avoid blocking. Blocking can make performance profiling a little trickier because it effectively pushes up the time spent in routines that are "hurrying up and waiting".
Use shared buffer and windows messaging mechanism
This is a good approach with a few special considerations.
If your data is extremely small, PostMessage can pack it all into the parameters of the message making it ideal.
However, since you make mention of a shared buffer, it seems you might have a bit more data. This is where you have to be a little careful. Using a "shared buffer" in the intuitive sense can expose you to race conditions (but I'll cover this in more detail later).
The better approach is to create a message object and pass ownership of the object to the GUI.
Create a new object containing all the details required for the GUI to update.
Pass the reference to this object via the additional parameters in PostMessage.
When the GUI finishes processing the message it is responsible for destroying it.
This neatly avoids the race conditions.
WARNING: You need to be certain the GUI gets all your messages, otherwise you will have memory leaks. You must check the return value of PostMessage to confirm it was actually posted, and destroy the object yourself if it was not.
This approach works quite well if the data can be sent in light-weight objects.
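As a rough sketch of that hand-off (the TGuiUpdate class, the WM_GUI_UPDATE constant and the handler name are invented for the example):

const
  WM_GUI_UPDATE = WM_USER + 1; // invented custom message id

type
  TGuiUpdate = class // carries everything the GUI needs; treat it as immutable once posted
  public
    Source: string;
    Description: string;
  end;

// Worker side: allocate, post, and free immediately if the post fails.
procedure NotifyGui(AFormHandle: HWND; const ASource, ADescription: string);
var
  Upd: TGuiUpdate;
begin
  Upd := TGuiUpdate.Create;
  Upd.Source := ASource;
  Upd.Description := ADescription;
  if not PostMessage(AFormHandle, WM_GUI_UPDATE, 0, LPARAM(Upd)) then
    Upd.Free; // the message never reached the queue, so the GUI will never free it
end;

// GUI side: take ownership and free the object once it has been processed.
procedure TMainForm.WMGuiUpdate(var Message: TMessage); // declared with: message WM_GUI_UPDATE;
var
  Upd: TGuiUpdate;
begin
  Upd := TGuiUpdate(Message.LParam);
  try
    // ...update the ListView/labels from Upd here...
  finally
    Upd.Free;
  end;
end;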
Use separate thread ...
Using any kind of separate intermediate thread still requires similar considerations for getting the relevant data to the new thread - which then still has to be passed to the GUI in some way. This would probably only make sense if your application needs to perform aggregation and time-consuming calculations before updating the GUI. In the same way that you don't want to block your IO threads, you don't want to block your GUI thread.
Use shared buffer and TTimer in main form
I mentioned earlier that the "intuitive idea" of a shared buffer, meaning: "different threads reading and writing at the same time"; exposes you to the risk of race conditions. If in the middle of a write operation you start reading data, then you risk reading data in an inconsistent state. These problems can be a nightmare to debug.
In order to avoid these race conditions you need to fall back on other synchronisation tools such as locks to protect the shared data. Locks of course bring us back to the blocking issues, albeit in a slightly better form. This is because you can control the granularity of the protection desired.
This does have some benefits over messaging:
If your data structures are large and complex, your messages might be inefficient.
You won't need to define a rigorous messaging protocol to cover all update scenarios.
The messaging approach can lead to a duplication of data within the system because the GUI keeps its own copy of the data to avoid race conditions.
There is a way to improve the idea of shared data, only if applicable: Some situations give you the option of using immutable data structures. That is: data structures that do not change after they've been created. (NOTE: The message objects mentioned earlier should be immutable.) The benefit of this is that you can safely read the data (from any number of threads) without any synchronisation primitives - provided you can guarantee the data doesn't change.
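For completeness, here is a minimal sketch of the timer variant, reusing the thread-safe fEventList from the question (the timer name is invented). Because the TTimer event runs in the main thread, the list lock is the only synchronisation needed:

// Fires every few hundred milliseconds on the main thread.
procedure TMainForm.tmrPollEventsTimer(Sender: TObject);
var
  liEvt: IEventMsg;
begin
  fEventList.Lock;
  try
    while fEventList.Count > 0 do
    begin
      liEvt := IEventMsg(fEventList.First);
      fEventList.Delete(0);
      // ...add liEvt to lvEvents exactly as in WMEventAdded above...
    end;
  finally
    fEventList.UnLock;
  end;
end;

Worker threads keep calling fEventList.Add as before; the only difference from the messaging variant is the latency of up to one timer interval before the GUI picks the events up.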
The best approach is to define a custom Windows message and just call PostMessage() to notify the GUI.
const
  WM_MY_MESSAGE = WM_USER + 1;
  WM_ANO_MESSAGE = WM_USER + 2;

type
  TMyForm = class(TForm)
    ...
  private
    procedure OnMyMessage(var Msg: TMessage); message WM_MY_MESSAGE;
    procedure OnAnoMessage(var Msg: TMessage); message WM_ANO_MESSAGE;
    ...
  end;

// post the notification from wherever it originates, using the form's handle:
PostMessage(Self.Handle, WM_MY_MESSAGE, 0, 0);
See this great article for full explanation.
This is a lighter/faster approach that relies on the OS's internal message queue.
Related
I found a Delphi library named EventBus and I think it will be very useful, since Observer is my favorite design pattern.
While studying its source code, I found a piece of code that seems to exist for multithreading-safety reasons, shown below (the getter and setter methods of the Active property).
TSubscription = class(TObject)
private
FActive: Boolean;
procedure SetActive(const Value: Boolean);
function GetActive: Boolean;
// ... other members
public
constructor Create(ASubscriber: TObject;
ASubscriberMethod: TSubscriberMethod);
destructor Destroy; override;
property Active: Boolean read GetActive write SetActive;
// ... other methods
end;
function TSubscription.GetActive: Boolean;
begin
TMonitor.Enter(self);
try
Result := FActive;
finally
TMonitor.exit(self);
end;
end;
procedure TSubscription.SetActive(const Value: Boolean);
begin
TMonitor.Enter(self);
try
FActive := Value;
finally
TMonitor.exit(self);
end;
end;
Could you please tell me whether or not the lock protection for FActive is necessary, and why?
Summary
Let me start by making this point as clear as possible: Do not attempt to distill multi-threaded development into a set of "simple" rules. It is essential to understand how the data is shared in order to evaluate which of the available concurrency protection techniques would be correct for a particular situation.
The code you have presented suggests the original authors had only a superficial understanding of multi-threaded development. So it serves as a lesson in what not to do.
First, locking the Boolean for read/write access in that way serves no purpose at all. I.e. each read or write is already atomic.
Furthermore, in cases where the property does need protection for concurrent access: it fails abysmally to provide any protection at all.
The net effect is redundant ineffective code that can trigger pointless wait states.
Thread-safety
In order to evaluate 'thread-safety', the following concepts should be understood:
If 2 threads 'race' for the opportunity to access a shared memory location, one will be first, and the other second. In the absence of other factors, you have no control over which thread would 'start' its access first.
Your only control is to block the 'second' thread from concurrent access if the 'first' thread hasn't finished its critical work.
The word "critical" has loaded meaning and may take some effort to fully understand. Take note of the explanation later about why a Boolean variable might need protection.
Critical work refers to all the processing required for the operation on the shared data to be deemed complete.
It's related to concepts of atomic operations or transactional integrity.
The 'second' thread could either be made to wait for the 'first' thread to finish or to skip its operation altogether.
Note that if the shared memory is accessed concurrently by both threads, then there's the possibility of inconsistent behaviour based on the exact ordering of the internal sub-steps of each thread's processing.
This is the fundamental risk and area of concern when thinking about thread-safety. It is the base principle from which other principles are derived.
'Simple' reads and writes are (usually) atomic
No concurrent operations can interfere with the reading/writing of a single byte of data. You will always either get the value in its entirety or replace the value in its entirety.
This concept extends to multiple bytes up to the machine architecture's bit size, but it does have a caveat, known as tearing.
When a memory address is not aligned on the bit size, then there's the possibility of the bytes spanning the end of one aligned location into the beginning of the next aligned location.
This means that reading/writing the bytes may take 2 operations at the machine level.
As a result 2 concurrent threads could interleave their sub-steps resulting in invalid/incorrect values being read. E.g.
Suppose one thread writes $ffff over an existing value of $0000 while another reads.
"Valid" reads would return either $0000 or $ffff depending on which thread is 'first'.
If the sub-steps run concurrently, then the reading thread could return invalid values of $ff00 or $00ff.
(Note that some platforms might still guarantee atomicity in this situation, but I don't have the knowledge to comment in detail on this.)
To reiterate: single byte values (including Boolean) cannot span aligned memory locations. So they're not subject to the tearing issue above. And this is why the code in the question that attempts to protect the Boolean is completely pointless.
When protection is needed
Although reads and writes in isolation are atomic, it's important to note that when a value is read and impacts a write decision, then this cannot be assumed to be thread-safe. This is best explained by way of a simple example.
Suppose 2 threads invert a shared boolean value: FBool := not FBool;
2 threads means this happens twice and once both threads have finished, the boolean should end up having its starting value. However, each is a multi-step operation:
Read FBool into a location local to the thread (either stack or register).
Invert the value.
Write the inverted value back to the shared location.
If there's no thread-safety mechanism employed then the sub-steps can run concurrently. And it's possible that both threads:
Read FBool; both getting the starting value.
Both threads invert their local copies.
Both threads write the same inverted value to the shared location.
And the end result is that the value is inverted when it should have been reverted to its starting value.
Basically the critical work is clearly more than simply reading or writing the value. To properly protect the boolean value in this situation, the protection must start before the read, and end after the write.
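To make that concrete in the style of the EventBus code above (Toggle is a hypothetical method, added only for illustration), the lock has to span the whole read-modify-write:

procedure TSubscription.Toggle;
begin
  TMonitor.Enter(Self);
  try
    FActive := not FActive; // read, invert and write all happen under one lock
  finally
    TMonitor.Exit(Self);
  end;
end;

Of course this only helps if every other piece of code that touches FActive takes the same lock.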
The important lesson to take away from this is that thread-safety requires understanding how the data is shared. It's not feasible to produce an arbitrary generic safety mechanism without this understanding.
And this is why any such attempt as in the EventBus code in the question is almost certainly doomed to be deficient (or even an outright failure).
First of all, I am not sure that it is good design to allow a worker thread to disable controls. However, I am curious whether I can do it safely without synchronizing with the GUI.
The code in TDataSet looks like this:
procedure TDataSet.DisableControls;
begin
if FDisableCount = 0 then
begin
FDisableState := FState;
FEnableEvent := deDataSetChange;
end;
Inc(FDisableCount);
end;
So it looks safe to do. The situation would be different in the case of EnableControls, but DisableControls only seems to increment a lock counter and assign the event that is later fired by EnableControls.
What do you think?
It looks safe to do so, but things may go wrong because these flags are used in code that may be in the middle of being executed at the moment you call this method from your thread.
I would Synchronise the call to DisableControls, because you want your thread to start using this dataset only if no controls are using it.
The call to EnableControls can be synchronised too, or you can post a message to the form using PostMessage. That way, the thread doesn't have to wait for the main thread.
But my gut feeling tells me it may be better not to use the same dataset for the GUI and the thread at all.
Without having looked up the actual code: It might be safe, as long as you can be sure that the main thread currently does not access FDisableCount, FDisableState and FEnableEvent. There is the possibility of a race condition here.
I would still recommend that you call DisableControls from within the main thread.
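As a rough sketch of that recommendation (the data module and dataset names are invented; the anonymous-method overloads of Synchronize and Queue require Delphi 2010 or later):

procedure TWorkerThread.Execute;
begin
  // Marshal the call to the main thread instead of touching the dataset directly.
  Synchronize(
    procedure
    begin
      dmData.cdsItems.DisableControls;
    end);
  try
    // ...fill or scan the dataset here, in the worker thread...
  finally
    Queue( // Queue does not block the worker, unlike Synchronize
      procedure
      begin
        dmData.cdsItems.EnableControls;
      end);
  end;
end;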
I'm using several critical sections in my application. The critical sections prevent large data blobs from being modified and accessed simultaneously by different threads.
AFAIK it's all working correctly except sometimes the application hangs when exiting. I'm wondering if this is related to my use of critical sections.
Is there a correct way to free TCriticalSection objects in a destructor?
Thanks for all the answers. I'm looking over my code again with this new information in mind. Cheers!
As Rob says, the only requirement is that you ensure that the critical section is not currently owned by any thread. Not even the thread about to destroy it. So there is no pattern to follow for correctly destroying a TCriticalSection, as such. Only a required behaviour that your application must take steps to ensure occurs.
If your application is locking up then I doubt it is the free'ing of any critical section that is responsible. As MSDN says (in the link that Rob posted), DeleteCriticalSection() (which is ultimately what free'ing a TCriticalSection calls) does not block any threads.
If you were free'ing a critical section that other threads were still trying to access you would get access violations and other unexpected behaviours, not deadlocks, as this little code sample should help you demonstrate:
implementation
uses
syncobjs;
type
tworker = class(tthread)
protected
procedure Execute; override;
end;
var
cs: TCriticalSection;
worker: Tworker;
procedure TForm2.FormCreate(Sender: TObject);
begin
cs := TCriticalSection.Create;
worker := tworker.Create(true);
worker.FreeOnTerminate := TRUE;
worker.Start;
sleep(5000);
cs.Enter;
showmessage('will AV before you see this');
end;
{ tworker }
procedure tworker.Execute;
begin
inherited;
cs.Free;
end;
Add this to the implementation section of a form's unit, correcting the "TForm2" reference in the FormCreate() event handler as required.
In FormCreate() this creates a critical section then launches a thread whose sole purpose is to free that section. We introduce a Sleep() delay to give the thread time to initialise and execute, then we try to enter the critical section ourselves.
We can't of course because it has been free'd. But our code doesn't hang - it is not deadlocked trying to access a resource that is owned by something else, it simply blows up because, well, we're trying to access a resource that no longer exists.
You could be even more sure of creating an AV in this scenario by NIL'ing the critical section reference when it is free'd.
Now, try changing the FormCreate() code to this:
cs := TCriticalSection.Create;
worker := tworker.Create(true);
worker.FreeOnTerminate := TRUE;
cs.Enter;
worker.Start;
sleep(5000);
cs.Leave;
showmessage('appearances can be deceptive');
This changes things... now the main thread will take ownership of the critical section - the worker thread will now free the critical section while it is still owned by the main thread.
In this case however, the call to cs.Leave does not necessarily cause an access violation. All that occurs in this scenario (afaict) is that the owning thread is allowed to "leave" the section as it would expect to (it doesn't of course, because the section has gone, but it seems to the thread that it has left the section it previously entered) ...
... in more complex scenarios an access violation or other error is quite possible, as the memory previously used for the critical section object may have been re-allocated to some other object by the time you call its Leave() method, resulting in a call on some other unknown object or access to invalid memory, etc.
Again, changing the worker.Execute() so that it NIL's the critical section ref after free'ing it would ensure an access violation on the attempt to call cs.Leave(), since Leave() calls Release() and Release() is virtual - calling a virtual method with a NIL reference is guaranteed to AV (ditto for Enter() which calls the virtual Acquire() method).
In any event:
Worst case: an exception or weird behaviour
"Best" case: the owning thread appears to believe it has "left" the section as normal.
In neither case is a deadlock or a hang going to occur simply as the result of when a critical section is free'd in one thread in relation to when other threads then try to enter or leave that critical section.
All of which is a roundabout way of saying that it sounds like you have a more fundamental race condition in your threading code, not directly related to the free'ing of your critical sections.
In any event, I hope my little bit of investigative work might set you down the right path.
Just make sure nothing still owns the critical section. Otherwise, MSDN explains, "the state of the threads waiting for ownership of the deleted critical section is undefined." Other than that, call Free on it like you do with all other objects.
AFAIK it's all working correctly except sometimes the application hangs when exiting. I'm wondering if this is related to my use of critical sections.
Yes it is. But the problem is likely not in the destruction. You probably have a deadlock.
Deadlocks are when two threads wait on two exclusive resources, each wanting both of them and each owning only one:
//Thread1:
FooLock.Enter;
BarLock.Enter;
//Thread2:
BarLock.Enter;
FooLock.Enter;
The way to fight these is to order your locks. If some thread wants two of them, it has to enter them only in specific order:
//Thread1:
FooLock.Enter;
BarLock.Enter;
//Thread2:
FooLock.Enter;
BarLock.Enter;
This way deadlock will not occur.
Many things can trigger deadlock, not only TWO critical sections. For instance, you might have used SendMessage (synchronous message dispatch) or Delphi's Synchronize AND one critical section:
//Thread1:
OnPaint:
FooLock.Enter;
FooLock.Leave;
//Thread2:
FooLock.Enter;
Synchronize(SomeProc);
FooLock.Leave;
Synchronize and SendMessage send messages to Thread1. To dispatch those messages, Thread1 needs to finish whatever work it's doing. For instance, OnPaint handler.
But to finish painting, it needs FooLock, which is taken by Thread2 which waits for Thread1 to finish painting. Deadlock.
The way to solve this is either to never use Synchronize and SendMessage (the best way), or at least to use them outside of any locks.
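As a rough sketch of the second option (the field, form and control names are invented; anonymous-method Synchronize needs Delphi 2010+): copy the shared data while holding the lock, release the lock, and only then call Synchronize.

procedure TWorkerThread.ReportStatus;
var
  LocalCopy: string;
begin
  FooLock.Enter;
  try
    LocalCopy := FSharedText; // grab a private copy under the lock
  finally
    FooLock.Leave;
  end;
  Synchronize(
    procedure
    begin
      Form1.StatusBar1.SimpleText := LocalCopy; // GUI call made without holding FooLock
    end);
end;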
Is there a correct way to free TCriticalSection objects in a destructor?
It does not matter where you are freeing TCriticalSection, in a destructor or not.
But before freeing TCriticalSection, you must ensure that all the threads that could have used it, are stopped or are in a state where they cannot possibly try to enter this section anymore.
For example, if your thread enters this section while dispatching a network message, you have to ensure network is disconnected and all the pending messages are processed.
Failing to do that will in most cases trigger access violations, sometimes nothing (if you're lucky), and rarely deadlocks.
There is nothing magical about TCriticalSection, or about critical sections themselves. Try replacing the TCriticalSection objects with plain API calls:
uses
Windows, ...
var
CS: TRTLCriticalSection;
...
EnterCriticalSection(CS);
....
// here goes the code that you have to protect from access by multiple threads simultaneously
...
LeaveCriticalSection(CS);
...
initialization
InitializeCriticalSection(CS);
finalization
DeleteCriticalSection(CS);
Switching to the API will not harm the clarity of your code, but it may help to reveal hidden bugs.
You NEED to protect all critical sections using a try..finally block.
Use TRTLCriticalSection instead of a TCriticalSection class. It's cross-platform, and TCriticalSection is only an unnecessary wrapper around it.
If an exception occurs while processing the data, the critical section is not left, and another thread may block.
If you want fast response, you can also use TryEnterCriticalSection for some User Interface process or such.
Here are some good practice rules:
make your TRTLCriticalSection a property of a Class;
call InitializeCriticalSection in the class constructor, then DeleteCriticalSection in the class destructor;
use EnterCriticalSection()... try... do something... finally LeaveCriticalSection(); end;
Here is some code sample:
type
TDataClass = class
protected
fLock: TRTLCriticalSection;
public
constructor Create;
destructor Destroy; override;
procedure SomeDataProcess;
end;
constructor TDataClass.Create;
begin
inherited;
InitializeCriticalSection(fLock);
end;
destructor TDataClass.Destroy;
begin
DeleteCriticalSection(fLock);
inherited;
end;
procedure TDataClass.SomeDataProcess;
begin
EnterCriticalSection(fLock);
try
// some data process
finally
LeaveCriticalSection(fLock);
end;
end;
If the only explicit synchronisation code in your app is through critical sections then it shouldn't be too difficult to track this down.
You indicate that you have only seen the deadlock on termination. Of course this doesn't mean that it cannot happen during normal operation of your app, but my guess (and we have to guess without more information) is that it is an important clue.
I would hypothesise that the error may be related to the way in which threads are forcibly terminated. A deadlock such as you describe would happen if a thread terminated whilst still holding the lock, but then another thread attempted to acquire the lock before it had a chance to terminate.
A very simple thing to do which may fix the problem immediately is to ensure, as others have correctly said, that all uses of the lock are protected by Try/Finally. This really is a critical point to make.
There are two main patterns for resource lifetime management in Delphi, as follows:
lock.Acquire;
Try
DoSomething();
Finally
lock.Release;
End;
The other main pattern is pairing acquisition/release in Create/Destroy, but that is far less common in the case of locks.
Assuming that your usage pattern for the locks is as I suspect (i.e. acquire and release inside the same method), can you confirm that all uses are protected by Try/Finally?
If your application only hangs/deadlocks on exit, please check the OnTerminate event of all threads. Typically the main thread signals the other threads to terminate and then waits for them before freeing them. It is important not to make any synchronised calls in the OnTerminate event: this can cause a deadlock, because the main thread is waiting for the worker thread to terminate, while the Synchronize call is waiting on the main thread.
Don't delete critical sections in the object's destructor; sometimes this will cause your application to crash.
Use a separate method which deletes the critical section.
procedure someobject.deleteCritical();
begin
DeleteCriticalSection(criticalSection);
end;
destructor someobject.destroy();
begin
  // Do your other release tasks here
  inherited;
end;
1) First you call deleteCritical;
2) then you release (free) the object.
I have a small client-server application, where server sends some messages to the client using named pipes. The client has two threads - main GUI thread and one "receiving thread", that keeps receiving the messages sent by server via the named pipe. Now whenever some message is received, I'd like to fire a custom event - however, that event should be handled not on the calling thread, but on the main GUI thread - and I don't know how to do it (and whether it's even possible).
Here's what I have so far:
tMyMessage = record
mode: byte;
//...some other fields...
end;
TMsgRcvdEvent = procedure(Sender: TObject; Msg: tMyMessage) of object;
TReceivingThread = class(TThread)
private
FOnMsgRcvd: TMsgRcvdEvent;
//...some other members, not important here...
protected
procedure MsgRcvd(Msg: tMyMessage); dynamic;
procedure Execute; override;
public
property OnMsgRcvd: TMsgRcvdEvent read FOnMsgRcvd write FOnMsgRcvd;
//...some other methods, not important here...
end;
procedure TReceivingThread.MsgRcvd(Msg: tMyMessage);
begin
if Assigned(FOnMsgRcvd) then FOnMsgRcvd(self, Msg);
end;
procedure TReceivingThread.Execute;
var Msg: tMyMessage;
begin
//.....
while not Terminated do begin //main thread loop
//.....
if (msgReceived) then begin
//message was received and now is contained in Msg variable
//fire OnMsgRcvdEvent and pass it the received message as parameter
MsgRcvd(Msg);
end;
//.....
end; //end main thread loop
//.....
end;
Now I'd like to be able to create event handler as member of TForm1 class, for example
procedure TForm1.MessageReceived(Sender: TObject; Msg: tMyMessage);
begin
//some code
end;
that wouldn't be executed in the receiving thread, but in main UI thread. I'd especially like the receiving thread to just fire the event and continue in the execution without waiting for the return of event handler method (basically I'd need something like .NET Control.BeginInvoke method)
I'm really a beginner at this (I only started learning how to define custom events a few hours ago), so I don't know if it's even possible or if I'm doing something wrong. Thanks a lot in advance for your help.
You've had some answers already, but none of them mentioned the troubling part of your question:
tMyMessage = record
mode: byte;
//...some other fields...
end;
Please take note that you can't do all the things you may take for granted in a .NET environment when you use Delphi or some other wrapper for native Windows message handling. You may expect to be able to pass random data structures to an event handler, but that won't work. The reason is the need for memory management.
In .NET you can be sure that data structures that are no longer referenced from anywhere will be disposed off by the garbage collection. In Delphi you don't have the same kind of leeway, you will need to make sure that any allocated block of memory is also freed correctly.
In Windows, a message receiver is either a window handle (an HWND) which you SendMessage() or PostMessage() to, or a thread which you PostThreadMessage() to. In both cases a message can carry only two data members, both of machine word width: the first of type WPARAM, the second of type LPARAM. You cannot simply send or post any random record as a message parameter.
All the message record types Delphi uses have basically the same structure, which maps to the data size limitation above.
If you want to send data to another thread which consists of more than two 32 bit sized variables, then things get tricky. Due to the size limits of the values that can be sent you may not be able to send the whole record, but only its address. To do that you would dynamically allocate a data structure in the sending thread, pass the address as one of the message parameters, and reinterpret the same parameter in the receiving thread as the address of a variable with the same type, then consume the data in the record, and free the dynamically allocated memory structure.
So depending on the amount of data you need to send to your event handler you may need to change your tMyMessage record. This can be made to work, but it's more difficult than necessary because type checking is not available for your event data.
I'd suggest to tackle this a bit differently. You know what data you need to pass from the worker threads to the GUI thread. Simply create a queueing data structure that you put your event parameter data into instead of sending them with the message directly. Make this queue thread-safe, i.e. protect it with a critical section so that adding or removing from the queue is safe even when attempted simultaneously from different threads.
To request a new event handling, simply add the data to your queue. Only post a message to the receiving thread when the first data element is added to a previously empty queue. The receiving thread should then receive and process the message, and continue to pop data elements from the queue and call the matching event handlers until the queue is empty again. For best performance the queue should be locked as shortly as possible, and it should definitely be unlocked again temporarily while the event handler is called.
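A minimal sketch of such a queue (the PushMessage method, FQueue: TThreadList, FFormHandle and WM_MSG_QUEUED are invented names): the sender copies the record into a heap block, and the form owns and disposes it.

const
  WM_MSG_QUEUED = WM_USER + 1; // invented notification message

type
  PMyMessage = ^tMyMessage;

// Receiving thread: push a copy of the record and notify only on empty -> non-empty.
procedure TReceivingThread.PushMessage(const Msg: tMyMessage);
var
  P: PMyMessage;
  List: TList;
  WasEmpty: Boolean;
begin
  New(P);
  P^ := Msg;
  List := FQueue.LockList;
  try
    WasEmpty := List.Count = 0;
    List.Add(P);
  finally
    FQueue.UnlockList;
  end;
  if WasEmpty then
    PostMessage(FFormHandle, WM_MSG_QUEUED, 0, 0);
end;

// Form: pop until the queue is empty, keeping it unlocked while the handler runs.
procedure TForm1.WMMsgQueued(var Message: TMessage); // declared with: message WM_MSG_QUEUED;
var
  P: PMyMessage;
  List: TList;
begin
  repeat
    List := FQueue.LockList;
    try
      if List.Count = 0 then
        Exit;
      P := List[0];
      List.Delete(0);
    finally
      FQueue.UnlockList;
    end;
    try
      MessageReceived(Self, P^); // the TForm1 event handler from the question
    finally
      Dispose(P);
    end;
  until False;
end;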
You should use the PostMessage (asynchronous) or SendMessage (synchronous) API to send a message to a window. You could also use some kind of "queue", or use the fantastic OmniThreadLibrary to do this (highly recommended).
Declare a private member
FReceivedMessage: tMyMessage;
And a protected procedure
procedure PostReceivedMessage;
begin
  if Assigned(FOnMsgRcvd) then FOnMsgRcvd(Self, FReceivedMessage);
  // no need to clear FReceivedMessage here: tMyMessage is a record, not an object reference
end;
And change the code in the loop to
if (msgReceived) then begin
  //message was received and now is contained in Msg variable
  //fire OnMsgRcvdEvent and pass it the received message as parameter
  FReceivedMessage := Msg;
  Synchronize(PostReceivedMessage);
end;
If you want to do it completely asynch use PostMessage API instead.
Check docs for Synchronize method. It's designed for tasks like yours.
My framework can do this for you if you wish to check it out (http://www.csinnovations.com/framework_overview.htm).
A certain form in our application displays a graphical view of a model. The user can, amongst loads of other stuff, initiate a transformation of the model that can take quite some time. This transformation sometimes proceeds without any user interaction, at other times frequent user input is necessary. While it lasts the UI should be disabled (just showing a progress dialog) unless user input is needed.
Possible Approaches:
Ignore the issue, just put the transformation code in a procedure and call that. Bad because the app seems hung in cases where the transformation needs some time but requires no user input.
Sprinkle the code with callbacks: This is obtrusive - you’d have to put a lot of these calls in the transformation code - as well as unpredictable - you could never be sure that you’d found the right spots.
Sprinkle the code with Application.ProcessMessages: Same problems as with callbacks. Additionally you get all the issues with ProcessMessages.
Use a thread: This relieves us from the “obtrusive and unpredictable” part of 2. and 3. However it is a lot of work because of the “marshalling” that is needed for the user input - call Synchronize, put any needed parameters in tailor-made records etc. It’s also a nightmare to debug and prone to errors.
//EDIT: Our current solution is a thread. However it's a pain in the a** because of the user input. And there can be a lot of input code in a lot of routines. This gives me a feeling that a thread is not the right solution.
I'm going to embarass myself and post an outline of the unholy mix of GUI and work code that I've produced:
type
// Helper type to get the parameters into the Synchronize'd routine:
PGetSomeUserInputInfo = ^TGetSomeUserInputInfo;
TGetSomeUserInputInfo = record
FMyModelForm: TMyModelForm;
FModel: TMyModel;
// lots of in- and output parameters
FResult: Boolean;
end;
{ TMyThread }
function TMyThread.GetSomeUserInput(AMyModelForm: TMyModelForm;
AModel: TMyModel; (* the same parameters as in TGetSomeUserInputInfo *)): Boolean;
var
GSUII: TGetSomeUserInputInfo;
begin
GSUII.FMyModelForm := AMyModelForm;
GSUII.FModel := AModel;
// Set the input parameters in GSUII
FpCallbackParams := @GSUII; // FpCallbackParams is a Pointer field in TMyThread
Synchronize(DelegateGetSomeUserInput);
// Read the output parameters from GSUII
Result := GSUII.FResult;
end;
procedure TMyThread.DelegateGetSomeUserInput;
begin
with PGetSomeUserInputInfo(FpCallbackParams)^ do
FResult := FMyModelForm.DoGetSomeUserInput(FModel, (* the params go here *));
end;
{ TMyModelForm }
function TMyModelForm.DoGetSomeUserInput(Sender: TMyModel; (* and here *)): Boolean;
begin
// Show the dialog
end;
function TMyModelForm.GetSomeUserInput(Sender: TMyModel; (* the params again *)): Boolean;
begin
// The input can be necessary in different situations - some within a thread, some not.
if Assigned(FMyThread) then
Result := FMyThread.GetSomeUserInput(Self, Sender, (* the params *))
else
Result := DoGetSomeUserInput(Sender, (* the params *));
end;
Do you have any comments?
I think as long as your long-running transformations require user interaction, you're not going to be truly happy with any answer you get. So let's back up for a moment: Why do you need to interrupt the transformation with requests for more information? Are these really questions you couldn't have anticipated before starting the transformation? Surely the users aren't too happy about the interruptions, either, right? They can't just set the transformation going and then go get a cup of coffee; they need to sit and watch the progress bar in case there's an issue. Ugh.
Maybe the issues the transformation encounters are things that could be "saved up" until the end. Does the transformation need to know the answers immediately, or could it finish everything else, and then just do some "fix-ups" afterward?
Definitely go for a threaded option (even after your edit, saying you find it complex). The solution that duffymo suggests is, in my opinion, very poor UI design (even though it's not explicitly about the appearance, it is about how the user interfaces with your application). Programs that do this are annoying, because you have no idea how long the task will take, when it will complete, etc. The only way this approach could be made better would be by stamping the results with the generation date/time, but even then you require the user to remember when they started the process.
Take the time/effort and make the application useful, informative and less frustrating for your end user.
For an optimal solution you will have to analyse your code anyway, and find all the places to check whether the user wants to cancel the long-running operation. This is true both for a simple procedure and a threaded solution - you want the action to finish after a few tenths of a second to have your program appear responsive to the user.
Now what I would do first is to create an interface (or abstract base class) with methods like:
IModelTransformationGUIAdapter = interface
function isCanceled: boolean;
procedure setProgress(AStep: integer; AProgress, AProgressMax: integer);
procedure getUserInput1(...);
....
end;
and change the procedure to have a parameter of this interface or class:
procedure MyTransformation(AGuiAdapter: IModelTransformationGUIAdapter);
Now you are prepared to implement things in a background thread or directly in the main GUI thread, the transformation code itself will not need to be changed, once you have added code to update the progress and check for a cancel request. You only implement the interface in different ways.
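For example (sketch only, continuing the invented interface above, with an invented form class and progress bar), the main form itself can implement the adapter for the non-threaded case:

type
  TModelForm = class(TForm, IModelTransformationGUIAdapter)
  private
    FCanceled: Boolean; // set from a Cancel button's OnClick
  public
    function isCanceled: Boolean;
    procedure setProgress(AStep: Integer; AProgress, AProgressMax: Integer);
    // ...plus getUserInput1 and the other methods of the interface...
  end;

function TModelForm.isCanceled: Boolean;
begin
  Result := FCanceled;
end;

procedure TModelForm.setProgress(AStep: Integer; AProgress, AProgressMax: Integer);
begin
  ProgressBar1.Max := AProgressMax;
  ProgressBar1.Position := AProgress;
  ProgressBar1.Update; // repaint immediately without pumping the whole message queue
end;

A thread-backed implementation would instead marshal these calls with Synchronize (or messages), while MyTransformation itself stays unchanged.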
I would definitely go without a worker thread, especially if you want to disable the GUI anyway. To make use of multiple processor cores you can always find parts of the transformation process that are relatively separated and process them in their own worker threads. This will give you much better throughput than a single worker thread, and it is easy to accomplish using AsyncCalls. Just start as many of them in parallel as you have processor cores.
Edit:
IMO this answer by Rob Kennedy is the most insightful yet, as it does not focus on the details of the implementation, but on the best experience for the user. This is surely the thing your program should be optimised for.
If there really is no way to either get all information before the transformation is started, or to run it and patch some things up later, then you still have the opportunity to make the computer do more work so that the user has a better experience. I see from your various comments that the transformation process has a lot of points where the execution branches depending on user input. One example that comes to mind is a point where the user has to choose between two alternatives (like horizontal or vertical direction) - you could simply use AsyncCalls to initiate both transformations, and there are chances that the moment the user has chosen his alternative both results are already calculated, so you can simply present the next input dialog. This would better utilise multi-core machines. Maybe an idea to follow up on.
TThread is perfect and easy to use.
Develop and debug your slow function.
When it is ready, put the call into the TThread's Execute method.
Use the OnTerminate event to find out when your function has finished.
For user feedback, use Synchronize!
I think your folly is thinking of the transformation as a single task. If user input is required as part of the calculation, and the input asked for depends on the calculation up to that point, then I would refactor the single task into a number of tasks.
You can then run a task, ask for user input, run the next task, ask for more input, run the next task, etc.
If you model the process as a workflow, it should become clear what tasks, decisions and user input is required.
I would run each task in a background thread to keep the user interface interactive, but without all the marshaling issues.
Process asynchronously by sending a message to a queue and have the listener do the processing. The controller sends an ACK message to the user that says "We've received your request for processing. Please check back later for results." Give the user a mailbox or link to check back and see how things are progressing.
While I don't completely understand what you're trying to do, what I can offer is my view on a possible solution. My understanding is that you have a series of n things to do, and along the way decisions on one could cause one or more different things to be added to the "transformation". If this is the case, then I would attempt to separate (as much as possible) the GUI and decisions from the actual work that needs to be done. When the user kicks off the "transformation", I would (not in a thread yet) loop through each of the necessary decisions without performing any work - just asking the questions required to do the work and then pushing each step, along with its parameters, onto a list.
When the last question is answered, spawn your thread, passing it the list of steps to run along with their parameters. The advantage of this method is that you can show a progress bar ("step 1 of n") to give the user an idea of how long it might take when they come back after getting their coffee.
I'd certainly go with threads. Working out how a thread will interact with the user is often difficult, but the solution that has worked well for me is to not have the thread interact with the user, but have the user side GUI interact with the thread. This solves the problem of updating the GUI using synchronize, and gives the user more responsive activity.
So, to do this, I use various variables in the thread, accessed by Get/Set routines that use critical sections, to contain status information. For starters, I'd have a "Cancelled" property for the GUI to set to politely ask the thread to stop. Then a "Status" property that indicates if the thread is waiting, busy or complete. You might have a "human readable" status to indicate what is happening, or a percentage complete.
To read all this information, just use a timer on the form and update. I tend to have a "statusChanged" property too, which is set if one of the other items needs refreshing, which stops too much reading going on.
This has worked well for me in various apps, including one which displays the status of up to 8 threads in a list box with progress bars.
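A stripped-down sketch of that pattern (all names invented; TCriticalSection comes from the SyncObjs unit): the thread exposes its state through locked getters and setters, and the form polls it with a TTimer.

type
  TWorkerThread = class(TThread)
  private
    FLock: TCriticalSection;
    FStatusText: string;
    FCancelled: Boolean;
    function GetStatusText: string;
    procedure SetStatusText(const Value: string);
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Cancel;
    property StatusText: string read GetStatusText;
  end;

constructor TWorkerThread.Create;
begin
  inherited Create(False);
  FLock := TCriticalSection.Create;
end;

destructor TWorkerThread.Destroy;
begin
  inherited; // waits for the thread to finish before the lock is freed
  FLock.Free;
end;

procedure TWorkerThread.Cancel;
begin
  FCancelled := True; // a single Boolean write is atomic, so no lock is needed here
end;

function TWorkerThread.GetStatusText: string;
begin
  FLock.Enter; // strings are not safe to read while another thread reassigns them
  try
    Result := FStatusText;
  finally
    FLock.Leave;
  end;
end;

procedure TWorkerThread.SetStatusText(const Value: string);
begin
  FLock.Enter;
  try
    FStatusText := Value;
  finally
    FLock.Leave;
  end;
end;

procedure TWorkerThread.Execute;
begin
  while not (Terminated or FCancelled) do
  begin
    SetStatusText('working...');
    // ...do one slice of the real work...
  end;
  SetStatusText('finished');
end;

// On the form, a TTimer simply reads the property:
procedure TMainForm.tmrStatusTimer(Sender: TObject);
begin
  if Assigned(FWorker) then
    lblStatus.Caption := FWorker.StatusText;
end;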
If you decide going with Threads, which I also find somewhat complex the way they are implemented in Delphi, I would recommend the OmniThreadLibrary by Primož Gabrijelčič or Gabr as he is known here at Stack Overflow.
It is the simplest to use threading library I know of. Gabr writes great stuff.
If you can split your transformation code into little chunks, then you can run that code when the processor is idle: just create an event handler and hook it up to the Application.OnIdle event. Make sure that each chunk of code is fairly short (no longer than the time you are willing to let the application stay unresponsive, say half a second). The important thing is to set the Done flag to False at the end of your handler:
procedure TMyForm .IdleEventHandler(Sender: TObject;
var Done: Boolean);
begin
{Do a small bit of work here}
Done := false;
end;
So, for example, if you have a loop, use a while loop instead of a for loop and make the loop variable a field of the form. Set it to zero before assigning the OnIdle event, then perform, say, 10 iterations per OnIdle call until you reach the end of the loop.
Count := 0;
Application.OnIdle := IdleEventHandler;
...
...
procedure TMyForm .IdleEventHandler(Sender: TObject;
var Done: Boolean);
var
LocalCount : Integer;
begin
LocalCount := 0;
while (Count < MaxCount) and (LocalCount < 10) do
begin
{Do a small bit of work here}
Inc(Count);
Inc(LocalCount);
end;
Done := false;
end;