Is it safe to set a boolean value in a thread from another one?

I'm wondering if the following (pseudo) code is safe to use. I know about the Terminated flag, but I need to set some sort of cancel flag for the recursive search operation from the main thread while keeping the worker thread running. I will also check the Terminated property there; that check is missing from this pseudo code.
type
  TMyThread = class(TThread)
  private
    FCancel: Boolean;
    procedure RecursiveSearch(const ItemID: Integer);
  protected
    procedure Execute; override;
  public
    procedure Cancel;
  end;

procedure TMyThread.Cancel;
begin
  FCancel := True;
end;

procedure TMyThread.Execute;
begin
  RecursiveSearch(0);
end;

procedure TMyThread.RecursiveSearch(const ItemID: Integer);
begin
  if not FCancel then
    RecursiveSearch(ItemID);
end;

procedure TMainForm.ButtonCancelClick(Sender: TObject);
begin
  MyThread.Cancel;
end;
Is it safe to set the boolean field FCancel from the main thread this way? Wouldn't it collide with the reading of this flag in the RecursiveSearch procedure when the button on the main form (main thread) is pressed? Or will I have to add e.g. a critical section for reading and writing this value?
Thanks a lot

It's perfectly safe to do this. The reading thread will always read either true or false. There will be no tearing because a Boolean is just a single byte. In fact the same is true for an aligned 32 bit value in a 32 bit process, i.e. Integer.
This is what is known as a benign race. There is a race condition on the boolean variable since one thread reads whilst another thread writes, without synchronisation. But the logic of this program is not adversely affected by the race. In more complex scenarios, such a race could be harmful and then synchronisation would be needed.

Writing to a boolean field from different threads is thread safe - meaning, the write operation is atomic. No observer of the field will ever see a "partial value" as the value is being written to the field. With larger data types, partial writes are a real possibility because multiple CPU instructions are required to write the value to the field.
So, the actual write of the boolean is not a thread safety issue. However, how observers are using that boolean field may be a thread safety issue. In your example, the only visible observer is the RecursiveSearch function, and its use of the FCancel value is pretty simple and harmless. The observer of the FCancel state does not change the FCancel state, so this is a straight / acyclic producer-consumer type dependency.
If instead the code was using the boolean field to determine whether a one-time operation needed to be done, simple reads and writes to the boolean field would not be enough, because the observer of the boolean field also needs to modify the field (to mark that the one-time operation has been done). That's a read-modify-write cycle, and that is not safe when two or more threads perform the same steps at just the right time. In that situation, you can put a mutex lock around the one-time operation (and the boolean field check and update), or you can use InterlockedExchange to update and test the boolean field without a mutex. You could also move the one-time operation into a static type constructor and not have to maintain any locks yourself (though .NET may use locks behind the scenes for this).
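As a rough Delphi sketch of the interlocked approach described above (assumptions: a Windows target, an Integer flag rather than a Boolean, and a hypothetical DoOneTimeOperation routine that is not part of the original code):

uses
  Windows;

var
  OneTimeDone: Integer = 0; // 0 = not yet run, 1 = already run (or in progress)

procedure EnsureOneTimeOperation;
begin
  // InterlockedExchange atomically stores 1 and returns the previous value,
  // so exactly one caller observes 0 and performs the work.
  if InterlockedExchange(OneTimeDone, 1) = 0 then
    DoOneTimeOperation; // hypothetical one-time work
end;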

I agree that writing a boolean from one thread and reading from another is thread safe. However, be careful with incrementing - this is not atomic and may cause a decidedly non-benign race condition in your code depending on the implementation. Increment/Decrement normally turns into three separate machine instructions - load/inc/store.
This is what the InterlockedIncrement, InterlockedDecrement, and InterlockedExchange Win32 API calls are for - to enable 32-bit increments, decrements, and exchanges to occur atomically without a separate synchronization object.
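A minimal Delphi sketch of the difference (FCounter is an assumed shared Integer, not from the original code; only the second routine is safe when called from several threads at once):

uses
  Windows;

var
  FCounter: Integer = 0; // shared between threads

procedure UnsafeIncrement;
begin
  Inc(FCounter); // load / inc / store: two threads can interleave here and lose an update
end;

procedure SafeIncrement;
begin
  InterlockedIncrement(FCounter); // one atomic read-modify-write
end;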

Yes, it is safe. You need to use critical sections only when you are reading and writing from/to other threads in ways that can conflict; within the same thread it is always safe.
BTW, the way you have the RecursiveSearch method defined, if FCancel = False then you'll get a stack overflow :)
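For reference, a hedged sketch of how the real method presumably looks once it recurses into child items and also honours Terminated (GetChildCount and GetChildID are hypothetical helpers, not part of the original code):

procedure TMyThread.RecursiveSearch(const ItemID: Integer);
var
  i: Integer;
begin
  if Terminated or FCancel then
    Exit; // stop descending as soon as either flag is set
  // ... examine the current item here ...
  for i := 0 to GetChildCount(ItemID) - 1 do
    RecursiveSearch(GetChildID(ItemID, i)); // recurse into children, not the same item
end;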

Related

Is it necessary to do Multi-thread protection for a Boolean property in Delphi?

I found a Delphi library named EventBus and I think it will be very useful, since the Observer is my favorite design pattern.
While studying its source code, I found a piece of code that seems to exist for multithreading-safety reasons; it is shown below (the property Active's getter and setter methods).
TSubscription = class(TObject)
private
  FActive: Boolean;
  procedure SetActive(const Value: Boolean);
  function GetActive: Boolean;
  // ... other members
public
  constructor Create(ASubscriber: TObject;
    ASubscriberMethod: TSubscriberMethod);
  destructor Destroy; override;
  property Active: Boolean read GetActive write SetActive;
  // ... other methods
end;

function TSubscription.GetActive: Boolean;
begin
  TMonitor.Enter(Self);
  try
    Result := FActive;
  finally
    TMonitor.Exit(Self);
  end;
end;

procedure TSubscription.SetActive(const Value: Boolean);
begin
  TMonitor.Enter(Self);
  try
    FActive := Value;
  finally
    TMonitor.Exit(Self);
  end;
end;
Could you please tell me whether the lock protection for FActive is necessary or not, and why?
Summary
Let me start by making this point as clear as possible: Do not attempt to distill multi-threaded development into a set of "simple" rules. It is essential to understand how the data is shared in order to evaluate which of the available concurrency protection techniques would be correct for a particular situation.
The code you have presented suggests the original authors had only a superficial understanding of multi-threaded development. So it serves as a lesson in what not to do.
First, locking the Boolean for read/write access in that way serves no purpose at all. I.e. each read or write is already atomic.
Furthermore, in cases where the property does need protection for concurrent access: it fails abysmally to provide any protection at all.
The net effect is redundant ineffective code that can trigger pointless wait states.
Thread-safety
In order to evaluate 'thread-safety', the following concepts should be understood:
If 2 threads 'race' for the opportunity to access a shared memory location, one will be first, and the other second. In the absence of other factors, you have no control over which thread would 'start' its access first.
Your only control is to block the 'second' thread from concurrent access if the 'first' thread hasn't finished its critical work.
The word "critical" has loaded meaning and may take some effort to fully understand. Take note of the explanation later about why a Boolean variable might need protection.
Critical work refers to all the processing required for the operation on the shared data to be deemed complete.
It's related to concepts of atomic operations or transactional integrity.
The 'second' thread could either be made to wait for the 'first' thread to finish or to skip its operation altogether.
Note that if the shared memory is accessed concurrently by both threads, then there's the possibility of inconsistent behaviour based on the exact ordering of the internal sub-steps of each thread's processing.
This is the fundamental risk and area of concern when thinking about thread-safety. It is the base principle from which other principles are derived.
'Simple' reads and writes are (usually) atomic
No concurrent operations can interfere with the reading/writing of a single byte of data. You will always either get the value in its entirety or replace the value in its entirety.
This concept extends to multiple bytes up to the machine architecture bit size; but does have a caveat, known as tearing.
When a memory address is not aligned on the bit size, then there's the possibility of the bytes spanning the end of one aligned location into the beginning of the next aligned location.
This means that reading/writing the bytes may take 2 operations at the machine level.
As a result 2 concurrent threads could interleave their sub-steps resulting in invalid/incorrect values being read. E.g.
Suppose one thread writes $ffff over an existing value of $0000 while another reads.
"Valid" reads would return either $0000 or $ffff depending on which thread is 'first'.
If the sub-steps run concurrently, then the reading thread could return invalid values of $ff00 or $00ff.
(Note that some platforms might still guarantee atomicity in this situation, but I don't have the knowledge to comment in detail on this.)
To reiterate: single byte values (including Boolean) cannot span aligned memory locations. So they're not subject to the tearing issue above. And this is why the code in the question that attempts to protect the Boolean is completely pointless.
When protection is needed
Although reads and writes in isolation are atomic, it's important to note that when a value is read and impacts a write decision, then this cannot be assumed to be thread-safe. This is best explained by way of a simple example.
Suppose 2 threads invert a shared boolean value: FBool := not FBool;
2 threads means this happens twice, and once both threads have finished, the boolean should end up with its starting value. However, each inversion is a multi-step operation:
Read FBool into a location local to the thread (either stack or register).
Invert the value.
Write the inverted value back to the shared location.
If there's no thread-safety mechanism employed then the sub-steps can run concurrently. And it's possible that both threads:
Read FBool; both getting the starting value.
Both threads invert their local copies.
Both threads write the same inverted value to the shared location.
And the end result is that the value is inverted when it should have been reverted to its starting value.
Basically the critical work is clearly more than simply reading or writing the value. To properly protect the boolean value in this situation, the protection must start before the read, and end after the write.
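A minimal Delphi sketch of that wider protection, assuming a TCriticalSection instance FLock (from the SyncObjs unit) shared by both threads and a shared FBool field; neither name comes from the original code:

procedure ToggleSharedFlag;
begin
  FLock.Acquire;
  try
    FBool := not FBool; // read, invert and write now form one indivisible critical region
  finally
    FLock.Release;
  end;
end;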
The important lesson to take away from this is that thread-safety requires understanding how the data is shared. It's not feasible to produce an arbitrary generic safety mechanism without this understanding.
And this is why any such attempt as in the EventBus code in the question is almost certainly doomed to be deficient (or even an outright failure).

Is TStringList thread safe?

Is it ok to read data from a TStringList without any form of synchronization? For example, synchronization with the main thread.
Example code
var
  MyStringList: TStringList; // declared globally

procedure TForm1.JvThread1Execute(Sender: TObject; Params: Pointer);
var
  x: Integer;
begin
  for x := 0 to MaxInt do
    MyStringList.Add(FloatToStr(Random));
end;

procedure TForm1.ButtonClick(Sender: TObject);
var
  x: Integer;
  SumOfRandomNumbers: Double;
begin
  SumOfRandomNumbers := 0;
  for x := 0 to MyStringList.Count - 1 do
    SumOfRandomNumbers := SumOfRandomNumbers + StrToFloat(MyStringList.Strings[x]);
end;

or should I protect access to MyStringList with EnterCriticalSection?

var
  MyStringList: TStringList; // declared globally

procedure TForm1.JvThread1Execute(Sender: TObject; Params: Pointer);
var
  x: Integer;
begin
  for x := 0 to MaxInt do
  begin
    EnterCriticalSection(MySemaphore);
    MyStringList.Add(FloatToStr(Random));
    LeaveCriticalSection(MySemaphore);
  end;
end;

procedure TForm1.ButtonClick(Sender: TObject);
var
  x: Integer;
  SumOfRandomNumbers: Double;
begin
  SumOfRandomNumbers := 0;
  for x := 0 to MyStringList.Count - 1 do
  begin
    EnterCriticalSection(MySemaphore);
    SumOfRandomNumbers := SumOfRandomNumbers + StrToFloat(MyStringList.Strings[x]);
    LeaveCriticalSection(MySemaphore);
  end;
end;
First, no TStringList is not thread-safe.
Second, attempting to make it so would be a terrible idea for a low-level container that in the vast majority of cases would not be shared across multiple threads.
Third, the naive code you propose to make it thread-safe is woefully insufficient. It falls well short of making it truly thread-safe - which is part of the problem in trying to do so generically.
In the text of your question you ask:
Is it ok to read data from TStringList without any form of synchronisation?
Yes it is okay. In fact, that is preferred because it is more efficient.
However, if the data is shared across threads, you may run into problems. Which is why you should minimise the amount of data (not just string lists) shared across threads. And if you do need to share data, do so in a suitably controlled fashion.
Expanding on point 3
The reason your code is not thread-safe is that it falls short of protecting all your data from shared access. This is a common misunderstanding in multi-threaded development: "I just need to wrap certain operations with locks and all will be fine."
The point is, if your list is shared, you are:
Sharing the structures that represent the container.
AND you are sharing the data members (the actual strings) themselves.
When dealing with strings, this goes a step further, because the way Delphi manages strings means they could be shared (through internal reference counting) with other strings of the same value in an entirely different area of the application.
While it is possible your proposed locking strategy might be suitable for your current requirements, it is far from being generally thread-safe.
Conclusion
If you want to write thread-safe code the onus is on you to:
Understand the data access paths.
Minimise sharing between threads (by far the best bang for buck).
And to implement the best strategy to share the data safely (of which there are many options, and locking is not guaranteed to be best in any case).
Sidenote
I indicated earlier that your locking technique only "might be suitable for your current requirements" because I do not believe you have really given an indication as to your real requirements. If you have, then you really do need to take note of the following:
In the code you have presented there would be absolutely no benefit in making your TStringList "thread-safe". You populate the list in a loop, and you read values in a second loop. You're doing absolutely nothing to use the data concurrently.
The closest your code should come to multi-threading is: It would be a good idea to process both loops off the main thread to avoid blocking the UI. In which case, the background thread should NOT share its TStringList instance. And can simply synchronise with the main thread to report the result (and possibly progress updates).
By not sharing data that doesn't need to be shared, you can bypass the need for locks entirely. They would be an unnecessary overhead. And you can be happy that TStringList doesn't have a built-in "thread-safety" mechanism.
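A hedged sketch of that non-sharing approach (assuming Classes and SysUtils in the uses clause; the class name, the item count and the way the result is reported are all illustrative, not from the question):

type
  TSumThread = class(TThread)
  private
    FSum: Double;
    procedure ReportResult;
  protected
    procedure Execute; override;
  end;

procedure TSumThread.Execute;
var
  List: TStringList;
  x: Integer;
begin
  List := TStringList.Create; // local to this thread: no locking needed
  try
    for x := 0 to 99999 do
      List.Add(FloatToStr(Random));
    FSum := 0;
    for x := 0 to List.Count - 1 do
      FSum := FSum + StrToFloat(List[x]);
  finally
    List.Free;
  end;
  Synchronize(ReportResult); // hand only the finished result back to the main thread
end;

procedure TSumThread.ReportResult;
begin
  Form1.Caption := FloatToStr(FSum); // runs in the main thread, so touching the UI is safe
end;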
No, it isn't. There is no mechanism inside TStringList that locks, for example, Add() or GetStrings().
Unfortunately there is nothing built in like TThreadList, which is a thread-safe wrapper for TList. But you could build that easily on your own.
Here is a simple example of a synchronized decorator for TStringList, in which I cover the case of Add():
TThreadStringList = class
private
  FStringList: TStringList;
  FCriticalSection: TRTLCriticalSection;
  // ...
public
  function Add(const S: string): Integer;
  // ...
end;

// ...

function TThreadStringList.Add(const S: string): Integer;
begin
  EnterCriticalSection(FCriticalSection);
  try
    Result := FStringList.Add(S);
  finally
    LeaveCriticalSection(FCriticalSection);
  end;
end;
It should be easy to apply this to all other methods you need.
Bear in mind that you have to initialize the critical section before you can use it, and delete it afterwards.
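For completeness, a sketch of that lifetime handling, assuming the constructor and destructor are declared in the class above (they are not shown in the original snippet):

constructor TThreadStringList.Create;
begin
  inherited Create;
  InitializeCriticalSection(FCriticalSection); // must happen before any Enter/Leave
  FStringList := TStringList.Create;
end;

destructor TThreadStringList.Destroy;
begin
  FStringList.Free;
  DeleteCriticalSection(FCriticalSection); // only after no thread can still use it
  inherited Destroy;
end;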

Delphi: preferred way of protection using Critical Sections

I have an object x that needs to be accessed from several (5+) threads. The structure of the object is
Tx = class
private
  Fx: Integer;
public
  property x: Integer read Fx write Fx;
  // etc.
end;

Which is the better (more elegant) way of protection:
a)
Tx = class
private
  Fx: Integer;
public
  property x: Integer read Fx write Fx;
public
  constructor Create;              // <-- create the critical section here
  destructor Destroy; override;    // <-- destroy it here
  // etc.
end;

var
  cs: TCriticalSection;
  Obj: Tx;

function GetSafeObject: Tx;
begin
  cs.Enter;
  try
    Result := Obj;
  finally
    cs.Leave;
  end;
end;

and always access the object as GetSafeObject.x := 3;
or b)

Tx = class
private
  Fx: Integer;
  FCS: TCriticalSection;
public
  property x: Integer read GetX write SetX;
public
  constructor Create;              // <-- create the critical section here
  destructor Destroy; override;    // <-- destroy it here
  // etc.
end;
where
function Tx.GetX: Integer;
begin
  FCS.Enter;
  try
    Result := Fx;
  finally
    FCS.Leave;
  end;
end;

and always access the object normally. I guess the first option is more elegant, even if both methods should work fine. Any comments?
Go for option B, making the critical section internal to the object. If the user of the class has to make use of an external function to safely access the instance, it is inevitable that somebody won't and the house will come tumbling down.
You also need to think about what operational semantics you want to protect from multiple concurrent reads and writes. If you put a lock inside your getter and setter, you can guarantee that your object is internally coherent, but users of your object may still see multithreading artifacts. For example, if thread A writes 10 to a property of your object, and thread B writes 50 to that property of the same object, only one of them can be the last one in. If A happens to go first, A will observe that it wrote a 10 to the property, but when it reads the value back again it sees B's 50, which snuck in during the gap between A's write and its subsequent read.
Note also that you don't really need a lock to protect a single integer field. Aligned pointer sized integer writes are atomic operations on just about every hardware system today. You definitely need a lock to protect multi-piece data like structs or multi-step operations like changing two related fields at the same time.
If there is any way you can rework your design to make these objects local to a particular operation on a thread, do it. Making local copies of data may increase your memory footprint slightly, but it can dramatically simplify your code for multithreading and run faster than leaving mutex landmines all over the app. Look for other simplifying assumptions as well - if you can set up your system so that the object is immutable while its available to multiple threads, then the object doesn't need any lock protections at all. Read-only data is good for sharing across threads. Very very good.
Making the CS a member of the object and using the CS inside the property getter/setter methods is the correct approach. The other approach does not work, because it locks and unlocks the CS before the object is actually accessed, so the property value is not protected at all.
An easy way is to have a thread-safe wrapper around the object, similar to TThreadList. The wrapper needs two methods: Lock (to enter the critical section and return the inner object) and Unlock (to leave the critical section).
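A minimal sketch of such a wrapper (the wrapper name is illustrative; it assumes the Tx class from the question, the SyncObjs unit, and that the wrapper owns the inner instance):

type
  TSafeX = class
  private
    FLock: TCriticalSection;
    FInner: Tx;
  public
    constructor Create;
    destructor Destroy; override;
    function Lock: Tx;    // enter the critical section and hand out the inner object
    procedure Unlock;     // leave the critical section
  end;

constructor TSafeX.Create;
begin
  inherited Create;
  FLock := TCriticalSection.Create;
  FInner := Tx.Create;
end;

destructor TSafeX.Destroy;
begin
  FInner.Free;
  FLock.Free;
  inherited;
end;

function TSafeX.Lock: Tx;
begin
  FLock.Enter;
  Result := FInner;
end;

procedure TSafeX.Unlock;
begin
  FLock.Leave;
end;

Callers would then bracket every access: X := Wrapper.Lock; try X.x := 3; finally Wrapper.Unlock; end;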

Why does my multi-threaded app sometimes hang when it closes?

I'm using several critical sections in my application. The critical sections prevent large data blobs from being modified and accessed simultaneously by different threads.
AFAIK it's all working correctly except sometimes the application hangs when exiting. I'm wondering if this is related to my use of critical sections.
Is there a correct way to free TCriticalSection objects in a destructor?
Thanks for all the answers. I'm looking over my code again with this new information in mind. Cheers!
As Rob says, the only requirement is that you ensure that the critical section is not currently owned by any thread. Not even the thread about to destroy it. So there is no pattern to follow for correctly destroying a TCriticalSection, as such. Only a required behaviour that your application must take steps to ensure occurs.
If your application is locking then I doubt it is the free'ing of any critical section that is responsible. As MSDN says (in the link that Rob posted), the DeleteCriticalSection() (which is ultimately what free'ing a TCriticalSection calls) does not block any threads.
If you were free'ing a critical section that other threads were still trying to access you would get access violations and other unexpected behaviours, not deadlocks, as this little code sample should help you demonstrate:
implementation

uses
  SyncObjs;

type
  TWorker = class(TThread)
  protected
    procedure Execute; override;
  end;

var
  cs: TCriticalSection;
  worker: TWorker;

procedure TForm2.FormCreate(Sender: TObject);
begin
  cs := TCriticalSection.Create;
  worker := TWorker.Create(True);
  worker.FreeOnTerminate := True;
  worker.Start;
  Sleep(5000);
  cs.Enter;
  ShowMessage('will AV before you see this');
end;

{ TWorker }

procedure TWorker.Execute;
begin
  inherited;
  cs.Free;
end;
Add this to the implementation section of a form's unit, correcting the "TForm2" reference in the FormCreate() event handler as required.
In FormCreate() this creates a critical section then launches a thread whose sole purpose is to free that section. We introduce a Sleep() delay to give the thread time to initialise and execute, then we try to enter the critical section ourselves.
We can't of course because it has been free'd. But our code doesn't hang - it is not deadlocked trying to access a resource that is owned by something else, it simply blows up because, well, we're trying to access a resource that no longer exists.
You could be even more sure of creating an AV in this scenario by NIL'ing the critical section reference when it is free'd.
Now, try changing the FormCreate() code to this:
cs := TCriticalSection.Create;
worker := TWorker.Create(True);
worker.FreeOnTerminate := True;
cs.Enter;
worker.Start;
Sleep(5000);
cs.Leave;
ShowMessage('appearances can be deceptive');
This changes things... now the main thread will take ownership of the critical section - the worker thread will now free the critical section while it is still owned by the main thread.
In this case however, the call to cs.Leave does not necessarily cause an access violation. All that occurs in this scenario (afaict) is that the owning thread is allowed to "leave" the section as it would expect to (it doesn't of course, because the section has gone, but it seems to the thread that it has left the section it previously entered) ...
... in more complex scenarios an access violation or other error is quite possible, as the memory previously used for the critical section object may have been re-allocated to some other object by the time you call its Leave() method, resulting in a call into some other unknown object or access to invalid memory, etc.
Again, changing the worker.Execute() so that it NIL's the critical section ref after free'ing it would ensure an access violation on the attempt to call cs.Leave(), since Leave() calls Release() and Release() is virtual - calling a virtual method with a NIL reference is guaranteed to AV (ditto for Enter() which calls the virtual Acquire() method).
In any event:
Worst case: an exception or weird behaviour
"Best" case: the owning thread appears to believe it has "left" the section as normal.
In neither case is a deadlock or a hang going to occur simply as the result of when a critical section is free'd in one thread in relation to when other threads then try to enter or leave that critical section.
All of which is a roundabout way of saying that it sounds like you have a more fundamental race condition in your threading code, not directly related to the freeing of your critical sections.
In any event, I hope my little bit of investigative work might set you down the right path.
Just make sure nothing still owns the critical section. Otherwise, MSDN explains, "the state of the threads waiting for ownership of the deleted critical section is undefined." Other than that, call Free on it like you do with all other objects.
AFAIK it's all working correctly except sometimes the application hangs when exiting. I'm wondering if this is related to my use of critical sections.
Yes it is. But the problem is likely not in the destruction. You probably have a deadlock.
Deadlocks are when two threads wait on two exclusive resources, each wanting both of them and each owning only one:
//Thread1:
FooLock.Enter;
BarLock.Enter;
//Thread2:
BarLock.Enter;
FooLock.Enter;
The way to fight these is to order your locks. If some thread wants two of them, it has to enter them only in specific order:
//Thread1:
FooLock.Enter;
BarLock.Enter;
//Thread2:
FooLock.Enter;
BarLock.Enter;
This way deadlock will not occur.
Many things can trigger deadlock, not only TWO critical sections. For instance, you might have used SendMessage (synchronous message dispatch) or Delphi's Synchronize AND one critical section:
//Thread1:
OnPaint:
FooLock.Enter;
FooLock.Leave;
//Thread2:
FooLock.Enter;
Synchronize(SomeProc);
FooLock.Leave;
Synchronize and SendMessage send messages to Thread1. To dispatch those messages, Thread1 needs to finish whatever work it's doing. For instance, OnPaint handler.
But to finish painting, it needs FooLock, which is taken by Thread2 which waits for Thread1 to finish painting. Deadlock.
The way to solve this is either to never use Synchronize and SendMessage (the best way), or at least to use them outside of any locks.
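In the style of the snippets above, Thread2 could be reworked so the synchronous call happens only after the lock has been released (PrepareResult is a hypothetical placeholder for whatever work actually needs the lock):

//Thread2 (reworked):
FooLock.Enter;
try
  PrepareResult;       // do the shared-data work under the lock
finally
  FooLock.Leave;
end;
Synchronize(SomeProc); // notify the main thread only after the lock is released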
Is there a correct way to free TCriticalSection objects in a destructor?
It does not matter where you are freeing TCriticalSection, in a destructor or not.
But before freeing TCriticalSection, you must ensure that all the threads that could have used it, are stopped or are in a state where they cannot possibly try to enter this section anymore.
For example, if your thread enters this section while dispatching a network message, you have to ensure network is disconnected and all the pending messages are processed.
Failing to do that will in most cases trigger access violations, sometimes nothing (if you're lucky), and rarely deadlocks.
There is nothing magical about TCriticalSection, or about critical sections themselves. Try replacing the TCriticalSection objects with plain API calls:
uses
  Windows, ...

var
  CS: TRTLCriticalSection;

...
EnterCriticalSection(CS);
...
// here goes your code that you have to protect from access by multiple threads simultaneously
...
LeaveCriticalSection(CS);
...

initialization
  InitializeCriticalSection(CS);

finalization
  DeleteCriticalSection(CS);
Switching to API will not harm clarity of your code, but, perhaps, help to reveal hidden bugs.
You NEED to protect every use of the critical section with a try..finally block.
Use a TRTLCriticalSection instead of a TCriticalSection class. It's cross-platform, and TCriticalSection is only an unnecessary wrapper around it.
If any exception occurs during the data processing, the critical section is not left, and another thread may block.
If you want a fast response, you can also use TryEnterCriticalSection for some user-interface processing or such.
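As a sketch of that last point (reusing the fLock field from the TDataClass sample below; the TrySomeDataProcess method itself is hypothetical and would need declaring in the class):

procedure TDataClass.TrySomeDataProcess;
begin
  if TryEnterCriticalSection(fLock) then
  begin
    try
      // got the lock without blocking: do the guarded work here
    finally
      LeaveCriticalSection(fLock);
    end;
  end;
  // else the lock is busy: skip this round so the UI stays responsive
end;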
Here are some good practice rules:
make your TRTLCriticalSection a property of a Class;
call InitializeCriticalSection in the class constructor, then DeleteCriticalSection in the class destructor;
use EnterCriticalSection()... try... do something... finally LeaveCriticalSection(); end;
Here is some code sample:
type
  TDataClass = class
  protected
    fLock: TRTLCriticalSection;
  public
    constructor Create;
    destructor Destroy; override;
    procedure SomeDataProcess;
  end;

constructor TDataClass.Create;
begin
  inherited;
  InitializeCriticalSection(fLock);
end;

destructor TDataClass.Destroy;
begin
  DeleteCriticalSection(fLock);
  inherited;
end;

procedure TDataClass.SomeDataProcess;
begin
  EnterCriticalSection(fLock);
  try
    // some data process
  finally
    LeaveCriticalSection(fLock);
  end;
end;
If the only explicit synchronisation code in your app is through critical sections then it shouldn't be too difficult to track this down.
You indicate that you have only seen the deadlock on termination. Of course this doesn't mean that it cannot happen during normal operation of your app, but my guess (and we have to guess without more information) is that it is an important clue.
I would hypothesise that the error may be related to the way in which threads are forcibly terminated. A deadlock such as you describe would happen if a thread terminated whilst still holding the lock, but then another thread attempted to acquire the lock before it had a chance to terminate.
A very simple thing to do which may fix the problem immediately is to ensure, as others have correctly said, that all uses of the lock are protected by Try/Finally. This really is a critical point to make.
There are two main patterns for resource lifetime management in Delphi, as follows:
lock.Acquire;
Try
DoSomething();
Finally
lock.Release;
End;
The other main pattern is pairing acquisition/release in Create/Destroy, but that is far less common in the case of locks.
Assuming that your usage pattern for the locks is as I suspect (i.e. acquire and release inside the same method), can you confirm that all uses are protected by Try/Finally?
If your application only hangs/deadlocks on exit, please check the OnTerminate event for all threads, and check that the main thread signals the other threads to terminate and then waits for them before freeing them. It is important not to make any synchronised calls in the OnTerminate event: this can cause a deadlock, because the main thread waits for the worker thread to terminate while the worker's Synchronize call is waiting on the main thread.
Don't delete the critical section in the object's destructor; this can sometimes cause your application to crash. Use a separate method that deletes the critical section:

procedure someobject.DeleteCritical;
begin
  DeleteCriticalSection(criticalSection);
end;

destructor someobject.Destroy;
begin
  // do your release tasks here
  inherited;
end;

1) First call DeleteCritical.
2) After that, release (free) the object.

Accessing Variable in Parent Form from OnTimer Event - Getting Exception

I'm getting an exception in an OnTimer event handler (TTimer) that, when executed, increments an integer variable in the parent form. The timers need to be able to access an incremented integer that is used as an id.
My first question is: How can I tell in Delphi 2007 which code is running in which thread? Is there a way in debug mode to inspect this so I can determine for sure?
Secondly, if I need to access and modify variables in a parent form from another thread, what is the best way to do that? It seems like sometimes Delphi allows me to access these variables "incorrectly" without giving an exception and other times it does give an exception.
Just to be sure: On one hand you are talking about a timer event, on the other about multithreading. Those are two totally different ways of running code in parallel.
A timer will always be run in the main thread. It should be safe there to access everything that was created and is being used in the main thread. In fact, a timer event can only occur when no other main-thread code is running, because it needs the application's message handler to process the timer message. So it fires either outside of any event-handling code, or when one of your event handlers calls Application.ProcessMessages.
A thread is very different from this. In this case, the code in different threads runs independently of the others. If running on a multi-processor (or multi-core) machine, it is even possible they truly run in parallel. There are quite a few issues you may have this way; in particular the Delphi VCL (up to and including Delphi XE) is not thread safe, so calls to any VCL class must only be made from the main thread (there are a few exceptions to this rule).
So, please first clarify whether you are talking about timers or true multithreading, before expecting any useful answers.
How can I tell in Delphi 2007 which code is running in which thread? Is there a way in debug mode to inspect this so I can determine for sure?
You can set a breakpoint and when execution stops look at the threads debug window. Double click on each thread to see its callstack in the callstack debug window. You can also use the Win32 function GetCurrentThreadId to find out about the current thread (e.g. for logging, or to determine if the current thread is the main thread etc).
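For example, a small sketch of such a check (MainThreadID comes from the System unit, GetCurrentThreadId from Windows, and Format needs SysUtils):

if GetCurrentThreadId = MainThreadID then
  OutputDebugString('running in the main thread')
else
  OutputDebugString(PChar(Format('running in worker thread %d', [GetCurrentThreadId])));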
Since you are not showing any code it is hard to be more specific. Just to be sure: code in a timer event handler is not getting executed in a different thread. You won't have concurrent-access issues if you are just using timers, not real background threads.
Secondly, if I need to access and modify variables in a parent form from another thread, what is the best way to do that? It seems like sometimes Delphi allows me to access these variables "incorrectly" without giving an exception and other times it does give an exception.
If you really are in another thread and access a shared variable you can see all sorts of things happening if you don't protect that access. It might work ok most of the time or you get strange values. If you just want to modify an integer in a thread-safe manner, look at InterlockedIncrement. Otherwise you could use a critical section, mutex, monitor... JEDI has some useful classes in the JclSynch unit for that.
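A minimal sketch of the InterlockedIncrement route for the id counter (the GetNextId function and the FNextId: Integer field on the form are hypothetical names, not from the question):

function TForm1.GetNextId: Integer;
begin
  // Atomically increments the shared field and returns the new value,
  // so no two threads can ever receive the same id.
  Result := InterlockedIncrement(FNextId);
end;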
You are asking two questions, so I'll answer them in two answers.
Your first question is about using TTimers; those always run in the main thread.
Most likely, your exception is an access violation.
If it is, it is usually caused by either of these:
a) your parent form is already destroyed when your TTimer fires.
b) you do not have a reference yet to your parent form when your TTimer fires.
b is easy: just check if your reference is nil.
a is more difficult and depends on how you reference your parent form.
Basically you want to make sure your reference gets nil when the parent is being destroyed or removed.
If you reference your parent form through a global variable (in this example through Form2), then you should have TForm2 make the Form2 variable nil using the OnDestroy event like this:
unit Unit2;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
  Dialogs;

type
  TForm2 = class(TForm)
    procedure FormDestroy(Sender: TObject);
  private
    { Private declarations }
  public
    { Public declarations }
  end;

var
  Form2: TForm2;

implementation

{$R *.dfm}

procedure TForm2.FormDestroy(Sender: TObject);
begin
  Form2 := nil;
end;

end.
If you are using a field reference to your parent form (like FMyForm2Reference), then you should add a Notification method like this:
unit Unit1;

interface

uses
  Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms,
  Dialogs, Unit2;

type
  TForm1 = class(TForm)
  private
    FMyForm2Reference: TForm2;
  protected
    procedure Notification(AComponent: TComponent; Operation: TOperation); override;
  public
  end;

var
  Form1: TForm1;

implementation

{$R *.dfm}

procedure TForm1.Notification(AComponent: TComponent; Operation: TOperation);
begin
  inherited Notification(AComponent, Operation);
  if (Operation = opRemove) then
    if (AComponent = FMyForm2Reference) then
      FMyForm2Reference := nil;
end;

end.
Regards,
Jeroen Pluimers
You are asking two questions, so I'll answer them in two answers.
Your second question is about making sure only 1 thread accessing 1 variable in a form at a time.
Since the variable is on a form, the best way is to use the Synchronize method for this.
There is an excellent example of this that ships with Delphi, in the thrddemo.dpr project, where the SortThds.pas unit has this method showing how to use it:
procedure TSortThread.VisualSwap(A, B, I, J: Integer);
begin
  FA := A;
  FB := B;
  FI := I;
  FJ := J;
  Synchronize(DoVisualSwap);
end;
Good luck,
Jeroen Pluimers

Resources