I would like to have only one instance of an .exe running, and am using a mutex as follows in app::InitInstance():
hMutex = OpenMutex(MUTEX_ALL_ACCESS, 0, _T("app.0"));
if (!hMutex)
hMutex = CreateMutex(0, 0, _T("app.0"));
In app::ExitInstance() I have
int iii = ReleaseMutex(hMutex);
where hMutex is a global variable: HANDLE hMutex;
This works and limits the app to one instance. However, upon closing, GetLastError() reports the following message: "Attempt to release mutex not owned by caller."
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682411(v=vs.85).aspx
Remarks
The ReleaseMutex function fails if the calling thread does not own the
mutex object.
A thread obtains ownership of a mutex either by creating it with the
bInitialOwner parameter set to TRUE or by specifying its handle in a
call to one of the wait functions. When the thread no longer needs to
own the mutex object, it calls the ReleaseMutex function so that
another thread can acquire ownership.
A thread can specify a mutex that it already owns in a call to one of
the wait functions without blocking its execution. This prevents a
thread from deadlocking itself while waiting for a mutex that it
already owns. However, to release its ownership, the thread must call
ReleaseMutex one time for each time that it obtained ownership (either
through CreateMutex or a wait function).
And
HANDLE WINAPI CreateMutex(
_In_opt_ LPSECURITY_ATTRIBUTES lpMutexAttributes,
_In_ BOOL bInitialOwner,
_In_opt_ LPCTSTR lpName
);
Ownership or release of the mutex is not required in this case, since you aren't protecting any resource. You may, however, issue a CloseHandle on it, which decreases the reference count; when the last handle is closed, the mutex object is destroyed. You don't need to call that either, as the OS will do it for you (it maintains the reference-counting mechanism).
You may, however, need to close it as soon as possible when a second instance is detected (possibly while displaying a dialog box). In that case, close the handle just before displaying the dialog ("Another instance is running"). If you don't, consider this scenario:
Application is normally started.
Another instance (by mistake) is launched, the new process detects it and displays a dialog box (without closing the handle).
You close the application opened in step 1 (but you don't close the dialog shown in step 2).
Now you attempt to launch the application again - it will say the application is already running, when in fact it is not running (only a dialog is displayed).
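A minimal sketch of this approach (assuming an MFC-style InitInstance as in the question; AfxMessageBox and the return value are illustrative and may differ in your app):
// In InitInstance(); hMutex is the global HANDLE from the question.
hMutex = CreateMutex(NULL, FALSE, _T("app.0"));   // FALSE: we never take ownership
if (hMutex != NULL && GetLastError() == ERROR_ALREADY_EXISTS)
{
    // Another instance already created the mutex. Close our handle *before*
    // showing any dialog, so the named mutex disappears as soon as the first
    // instance exits, even if this dialog is still on screen.
    CloseHandle(hMutex);
    hMutex = NULL;
    AfxMessageBox(_T("Another instance is running"));
    return FALSE;   // abort startup of this instance
}
// Normal startup continues. No ReleaseMutex is needed in ExitInstance;
// just CloseHandle(hMutex), or let the OS clean it up.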
Related
What is the right way to close a thread in WinAPI? The threads don't use common resources.
I am creating threads with CreateThread, but I don't know how to close them correctly, because some suggest using TerminateThread, others ExitThread; what is the correct way to close them?
Also, where should I call the closing function: in WM_CLOSE or WM_DESTROY?
Thanks in advance.
The "nicest" way to close a thread in Windows is by "telling" the thread to shutdown via some thread-safe signaling mechanism, then simply letting it reach its demise its own, potentially waiting for it to do so via one of the WaitForXXXX functions if completion detection is needed (which is frequently the case). Something like:
Main thread:
// some global event all threads can reach, declared elsewhere as: HANDLE ghStopEvent;
ghStopEvent = CreateEvent(NULL, TRUE, FALSE, NULL); // manual-reset, initially non-signaled
// create the child thread
HANDLE hThread = CreateThread(NULL, 0, ThreadProc, NULL, 0, NULL);
//
// ... continue other work.
//
// tell thread to stop
SetEvent(ghStopEvent);
// now wait for thread to signal termination
WaitForSingleObject(hThread, INFINITE);
// important. close handles when no longer needed
CloseHandle(hThread);
CloseHandle(ghStopEvent);
Child thread:
DWORD WINAPI ThreadProc(LPVOID pv)
{
    // do threaded work
    while (WaitForSingleObject(ghStopEvent, 1) == WAIT_TIMEOUT)
    {
        // do thread busy work
    }
    return 0;
}
Obviously things can get a lot more complicated once you start putting it in practice. If by "common" resources you mean something like the ghStopEvent in the prior example, it becomes considerably more difficult. Terminating a child thread via TerminateThread is strongly discouraged because no logical cleanup is performed at all. The warnings in the TerminateThread documentation are self-explanatory and should be heeded. With great power comes....
Finally, you are not required to call ExitThread explicitly from the thread itself, and though you can do so, I strongly advise against it in C++ programs. It is called for you once the thread procedure logically returns from the ThreadProc. I prefer the model above simply because it is dead easy to implement and supports full RAII cleanup of C++ objects, which neither ExitThread nor TerminateThread provides. For example, from the ExitThread documentation:
...in C++ code, the thread is exited before any destructors can be called
or any other automatic cleanup can be performed. Therefore, in C++
code, you should return from your thread function.
Anyway, start simple. Get a handle on things with super-simple examples, then work your way up from there. There are a ton of multi-threaded examples on the web; learn from the good ones and challenge yourself to identify the bad ones.
Best of luck.
So you need to figure out what sort of behaviour you need.
The following is a brief description of the two methods, taken from the documentation:
"TerminateThread is a dangerous function that should only be used in the most extreme cases. You should call TerminateThread only if you know exactly what the target thread is doing, and you control all of the code that the target thread could possibly be running at the time of the termination. For example, TerminateThread can result in the following problems:
If the target thread owns a critical section, the critical section will not be released.
If the target thread is allocating memory from the heap, the heap lock will not be released.
If the target thread is executing certain kernel32 calls when it is terminated, the kernel32 state for the thread's process could be inconsistent.
If the target thread is manipulating the global state of a shared DLL, the state of the DLL could be destroyed, affecting other users of the DLL."
So if you need your thread to terminate at any cost, call this method.
As for ExitThread, this is more graceful. By calling ExitThread, you're telling Windows that you're done with the calling thread, so the rest of the code isn't going to get called. It's a bit like calling exit(0).
"ExitThread is the preferred method of exiting a thread. When this function is called (either explicitly or by returning from a thread procedure), the current thread's stack is deallocated, all pending I/O initiated by the thread is canceled, and the thread terminates. If the thread is the last thread in the process when this function is called, the thread's process is also terminated."
There is a singleton object of the EventHandler class that receives events from the main thread. It records the input in a vector and creates a thread that runs a lambda function, which waits for some time before deleting the input from the vector, to prevent repeated execution of the event for that input for a while.
But I'm getting a "mutex destroyed while busy" error. I'm not sure where it happened or how it happened. I'm not even sure what it means, because the object should never be destructed, being a singleton. Some help would be appreciated.
class EventHandler{
public:
    std::mutex simpleLock;
    std::vector<UInt32> stuff;

    void RegisterBlock(UInt32 input){
        stuff.push_back(input);
        std::thread removalCallBack([&](UInt32 input){
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
            simpleLock.lock();
            auto it = Find(stuff, input);
            if (it != stuff.end())
                stuff.erase(it);
            simpleLock.unlock();
        }, input);
        removalCallBack.detach();
    }

    virtual EventResult ReceiveEvent(UInt32 input){
        simpleLock.lock();
        if (Find(stuff, input) != stuff.end()){
            RegisterBlock(input);
            //dostuff
        }
        simpleLock.unlock();
    }
};
What is happening is that a thread is created
std::thread removalCallBack([&](UInt32 input){
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    simpleLock.lock();
    ...
removalCallBack.detach();
And then, since removalCallBack is a local variable of the function RegisterBlock, when the function exits the destructor for removalCallBack gets called, which invokes std::terminate().
Documentation for thread destructor
~thread(); (since C++11)
Destroys the thread object. If *this still has an associated running thread (i.e. joinable() == true), std::terminate() is called.
But depending on timing, simpleLock may still be owned by the thread (i.e. be busy) when the thread exits, which according to the spec leads to undefined behavior - in your case, the "destroyed while busy" error.
To avoid this error, either allow the thread object to outlive the function (e.g. don't make it a local variable), or block until the thread finishes before the function returns, using thread::join.
Dealing with cleaning up after threads can be tricky, especially if they are essentially used as different programs occupying the same address space; in those cases a manager thread, much like the one you thought of, is often created whose only job is to reclaim thread-related resources. Your situation is a little easier because of the simplicity of the work done in the thread created by removalCallBack, but there is still cleanup to do.
If the thread object is created with new, then although the system resources used by the underlying thread will get cleaned up, the memory the C++ object uses will remain allocated until delete is called.
Also, consider that if the program exits while there are threads running, the threads will be terminated, and if a mutex is locked when that happens, once again there will be undefined behavior.
What is usually done to guarantee that a thread is no longer running is to join with it, but although the C++ documentation doesn't spell this out, the pthread_join man page states:
Once a thread has been detached, it can't be joined with pthread_join(3) or be made joinable again.
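A minimal sketch of one way to apply this advice, keeping the worker threads as members and joining them before the handler is torn down (only an illustration: Find, UInt32 and the 200 ms delay are taken from the question, std::lock_guard is used so the mutex is released even if erase() throws, and it assumes, as the original code does, that RegisterBlock is only called from one thread at a time):
// Sketch only (not the asker's code): keep the worker threads joinable and
// join them before the EventHandler goes away, so no thread can still hold
// simpleLock when the mutex itself is destroyed.
class EventHandler{
public:
    std::mutex simpleLock;
    std::vector<UInt32> stuff;
    std::vector<std::thread> workers;   // owned here, joined in the destructor

    void RegisterBlock(UInt32 input){
        stuff.push_back(input);
        workers.emplace_back([this, input]{
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
            std::lock_guard<std::mutex> guard(simpleLock);   // released automatically
            auto it = Find(stuff, input);
            if (it != stuff.end())
                stuff.erase(it);
        });
    }

    ~EventHandler(){
        for (auto& t : workers)
            if (t.joinable())
                t.join();
    }
};
The trade-off is that the workers vector grows until the handler is destroyed; a real implementation would prune finished threads or use a small pool, but the sketch shows the ownership and join points.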
I would like to confirm here if I understood correctly how TCriticalSection and Synchronize operate.
As far as I know right now, Synchronize uses SendMessage (update: or at least used it in older VCL versions, as mentioned in a couple of comments below), which suspends the currently executing thread (as well as any other thread), unlike PostMessage, which doesn't, and then executes the required function from the main thread. In a way, SendMessage "stops" multithreading while executing.
But I am not sure about TCriticalSection. Let's say for example I create something like this:
// Global variables somewhere in my code any thread can access
boost::scoped_ptr<TCriticalSection> ProtectMyVarAndCallFnction(new TCriticalSection);
int MyVariable1;
void CallMyFunctionThatAlsoModifiesMoreStuff() { /* do even more here */ };
// Thread code within one of the threads
try {
    ProtectMyVarAndCallFnction->Acquire();
    MyVariable1++;
    CallMyFunctionThatAlsoModifiesMoreStuff();
}
__finally {
    ProtectMyVarAndCallFnction->Release();
}
Now, my question is: how does the critical section "know" that I am protecting MyVariable1 in this case, as well as whatever the called function may modify?
If I understood it correctly, it doesn't, and it is my responsibility to call Acquire() in any thread that wants to change MyVariable1 or call this function (or do either of the two). In other words, I think of TCriticalSection as a user-defined block covering whatever I have logically assigned to it. It may be a set of variables or a particular function, as long as I call Acquire() within all of the threads that might write to this block or use this function. For example, "DiskOp" may be my name for the TCriticalSection that guards writes to disk, and "Internet" may be the name of the TCriticalSection that guards functions retrieving data from the Internet. Did I get it correctly?
Also, within this context, does TCriticalSection therefore always need to be a global kind of variable?
SendMessage suspends the currently executing thread (as well as any other thread).
No, that is incorrect. SendMessage does not suspend anything. SendMessage merely delivers a message synchronously. The function does not return until the message has been delivered, that is, until the window procedure of the target window has executed. And because the window procedure is always called on the thread that owns the window, the calling thread may need to be blocked until the window's owning thread is ready to execute the window procedure. It most definitely doesn't suspend all threads in the process.
How does the critical section know that I am protecting MyVariable1?
It doesn't. It's entirely up to you to make sure that all uses of MyVariable1 that need protection are given protection. A critical section is a form of mutex. A mutex ensures that only one thread of execution can hold it at any instant in time.
As long as I call Acquire() within all of the threads that might write to this block or use this function.
That's not really it either. The "within all of the threads" is a mis-think. You need to be thinking about "at all sections of code that use the variable".
Does a critical section therefore always need to be a global kind of variable?
No, a critical section can be a global variable. But it need not be.
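To illustrate the point about protecting sections of code rather than the variable itself, here is a minimal C++ sketch (std::mutex stands in for TCriticalSection, and the names are invented for the example):
#include <mutex>

// The lock object has no knowledge of which data it "protects";
// every piece of code that touches MyVariable1 must take the SAME lock.
std::mutex MyVariable1Lock;   // plays the role of the TCriticalSection
int MyVariable1 = 0;

void IncrementFromAnyThread()
{
    std::lock_guard<std::mutex> guard(MyVariable1Lock);
    MyVariable1++;                 // protected
}

void BuggyWriteFromAnotherThread()
{
    MyVariable1 = 42;              // NOT protected: this code path forgot to
                                   // take the lock, so a data race remains
}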
I have an application where most of the actions take some time, and I want to keep the GUI responsive at all times. The basic pattern of any action triggered by the user is as follows:
prepare the action (in the main thread)
execute the action (in a background thread while keeping the gui responsive)
display the results (in the main thread)
I tried several things to accomplish this but all of them are causing problems in the long run (seemingly random access violations in certain situations).
1) Prepare the action, then invoke a background thread and, at the end of the background thread, use Synchronize to call an OnFinish event in the main thread.
2) Prepare the action, then invoke a background thread and, at the end of the background thread, use PostMessage to inform the GUI thread that the results are ready.
3) Prepare the action, then invoke a background thread, then busy-wait (while calling Application.ProcessMessages) until the background thread is finished, then proceed with displaying the results.
I cannot come up with another alternative, and none of these worked perfectly for me. What is the preferred way to do this?
1) Is the 'original Delphi' way. It forces the background thread to wait until the synchronized method has been executed, and exposes the system to more deadlock potential than I am happy with. TThread.Synchronize has been re-written at least twice. I used it once, on D3, and had problems. I looked at how it worked. I never used it again.
2) Is the design I use most often. I use app-lifetime threads (or thread pools), create inter-thread comms objects and queue them to background threads using a producer-consumer queue based on a TObjectQueue descendant. The background thread/s operate on the data/methods of the object, store results in the object and, when complete, PostMessage() the object (cast to lParam) back to the main thread for GUI display of the results in a message handler (where the lParam is cast back again). The background threads and the main GUI thread then never operate on the same object at the same time and never have to directly access any fields of each other.
I use a hidden window in the GUI thread (created with RegisterWindowClass and CreateWindow) for the background threads to PostMessage to, with the comms object in LParam and the 'target' TWinControl (usually a TForm class) as WParam. The trivial wndproc for the hidden window just uses TWinControl.Perform() to pass on the LParam to a message handler of the form. This is safer than PostMessaging the object directly to a TForm.Handle - that handle can, unfortunately, change if the window is recreated. The hidden window never calls RecreateWindow() and so its handle never changes.
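A minimal Win32/C++ sketch of the hidden-window idea described above (only an illustration: WM_APP_RESULT, ResultData and the class name are invented; the VCL version would forward to TWinControl.Perform() instead of handling the data directly):
#include <windows.h>

const UINT WM_APP_RESULT = WM_APP + 1;   // invented message id for this sketch

struct ResultData { int value; };        // the "comms object" for this sketch

LRESULT CALLBACK HiddenWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_APP_RESULT)
    {
        ResultData* data = reinterpret_cast<ResultData*>(lParam);
        // ... hand 'data' to the form/GUI code here, then recycle or delete it
        delete data;
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}

// Created once on the GUI thread:
//   WNDCLASS wc = {};
//   wc.lpfnWndProc   = HiddenWndProc;
//   wc.hInstance     = GetModuleHandle(NULL);
//   wc.lpszClassName = TEXT("ResultSink");
//   RegisterClass(&wc);
//   HWND hSink = CreateWindow(TEXT("ResultSink"), TEXT(""), 0, 0, 0, 0, 0,
//                             HWND_MESSAGE, NULL, wc.hInstance, NULL);
//
// A worker thread then posts a result with:
//   PostMessage(hSink, WM_APP_RESULT, 0, reinterpret_cast<LPARAM>(new ResultData{ 42 }));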
Producer-consumer queues 'out from GUI', inter-thread comms classes/objects and PostMessage() 'in to GUI' WILL work well - I've been doing it for decades.
Re-using the comms objects is fairly easy too - just create a load in a loop at startup, (preferably in an initialization section so that the comms objects outlive all forms), and push them onto a P-C queue - that's your pool. It's easier if the comms class has a private field for the pool instance - the 'releaseBackToPool' method then needs no parameters and, if there is more than one pool, ensures that the objects are always released back to their own pool.
3) Can't really improve on David Heffernan's comment. Just don't do it.
You can implement the pattern in question by using OTL, as demonstrated by the OTL author here
You could communicate data between threads as messages.
Thread1:
allocate memory for a data structure
fill it in
send a message to Thread2 with the pointer to this structure (you could either use Windows messages or implement a queue, ensuring its enqueue and dequeue methods don't have race conditions)
possibly receive a response message from Thread2...
Thread2:
receive the message with the pointer to the data structure from Thread1
consume the data
deallocate the data structure's memory
possibly send a message back to Thread1 in a similar fashion (perhaps reusing the data structure, but then you don't deallocate it)
You may end up with more than one non-GUI thread if you want your GUI to be not only live but also responsive to new input while input that takes a long time to process is still being processed.
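A generic sketch of such a queue, written in C++ rather than Delphi just to show the shape of the pattern (WorkItem, WorkQueue and the field names are invented for the example):
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>
#include <string>

// Hypothetical work item passed between threads by pointer.
struct WorkItem {
    int jobId;
    std::string payload;
};

class WorkQueue {
    std::queue<std::unique_ptr<WorkItem>> items;
    std::mutex lock;
    std::condition_variable notEmpty;
public:
    void Enqueue(std::unique_ptr<WorkItem> item) {
        {
            std::lock_guard<std::mutex> g(lock);   // enqueue under the lock
            items.push(std::move(item));
        }
        notEmpty.notify_one();                     // wake a waiting consumer
    }
    std::unique_ptr<WorkItem> Dequeue() {          // blocks until an item arrives
        std::unique_lock<std::mutex> g(lock);
        notEmpty.wait(g, [this]{ return !items.empty(); });
        auto item = std::move(items.front());
        items.pop();
        return item;
    }
};
Thread1 would Enqueue a filled-in WorkItem; Thread2 blocks in Dequeue, consumes it, and the unique_ptr frees the memory automatically, replacing the manual allocate/deallocate steps listed above (or the item can be sent back for reuse instead of being freed).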
Is there any way to determine if an object is locked in C#? I am in the unenviable position, through design, of reading from a queue inside a class and needing to dump the contents into a collection in that class. But that collection is also read from and written to through an interface outside the class. So obviously there may be cases where the collection is being written to at the same time I want to write to it.
I could program around it, say by using a delegate, but it would be ugly.
You can always call the static TryEnter method on the Monitor class, using a value of 0 for the time to wait. If the object is locked, the call will return false.
However, the problem here is that you need to make sure the object you pass to it is the same object that the other code actually locks on when synchronizing access to the list.
It's generally bad practice to use the object whose access is being synchronized as the object to lock on (it exposes too much of the internal details of the object).
Remember, the lock could be taken on anything else, so just calling this on the list itself is pointless unless you are sure the list is what is being locked on.
Monitor.TryEnter will succeed if the object isn't locked, and will return false if, at this very moment, the object is locked. However, note that there's an implicit race here: the instant this method returns, the object may not be locked any more.
I'm not sure that a static call to TryEnter with a timeout of 0 guarantees that the lock will not be acquired if it is available. What I did to verify, in debug builds, that the sync variable was locked was the following:
#if DEBUG
// Make sure we're inside a lock of the SyncRoot by trying to lock it.
// If we're able to lock it, that means that it wasn't locked in the first
// place. Afterwards, we release the lock if we had obtained it.
bool acquired = false;
try
{
    acquired = Monitor.TryEnter(SyncRoot);
}
finally
{
    if (acquired)
    {
        Monitor.Exit(SyncRoot);
    }
}
Debug.Assert(acquired == false, "The SyncRoot is not locked.");
#endif
Monitor.IsEntered
Determines whether the current thread holds the lock on the specified object.
Available since 4.5
Currently you may call Monitor.TryEnter to inspect whether an object is locked or not.
In .NET 4.0, the CLR team is going to add a "lock inspection API".
Here is a quotation from Rick Byers' article:
lock inspection
We're adding some simple APIs to ICorDebug which allow you to explore managed locks (Monitors). For example, if a thread is blocked waiting for a lock, you can find what other thread is currently holding the lock (and if there is a time-out).
So, with this API you will be able to check:
1) What object is holding a lock?
2) Who’s waiting for it?
Hope this helps.