DLL_PROCESS_DETACH only one thread remaining - multithreading

--- Symptoms
When I load my DLL from a subthread of the host app and the host app closes, only one thread remains by the time DLL_PROCESS_DETACH is called. This is bad: it causes memory leaks, and the required cleanup can't be done.
When I load my DLL from the MAIN thread of the host app and the host app closes,
all threads created in the DLL are still running when DLL_PROCESS_DETACH is called.
This is good, because I can do all the cleanup work required.
My DLL_PROCESS_ATTACH contains NO code. No thread is created, no API function is called.
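For reference, a minimal sketch of the DllMain shape just described (the names are hypothetical, not the real DllTest.dll code):

#include <windows.h>

BOOL APIENTRY DllMain(HINSTANCE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        // intentionally empty: no thread created, no API function called
        break;
    case DLL_PROCESS_DETACH:
        // cleanup: stop worker threads, make the final socket call, free memory
        break;
    }
    return TRUE;
}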
--- Purpose of this DLL, use case
I need a DLL that can run in various host apps where I do not know when exactly my DLL is loaded and unloaded.
Some of those host apps obviously load my DLL from within a thread, e.g. the thread is running a script and the script uses exported functions of my DLL.
The general problem: when the DLL is first loaded from a subthread of the host app, it is not unloaded properly, because all threads seem to have been removed by the time DLL_PROCESS_DETACH is called. This not only causes memory leaks; the DLL also can't do its internal cleanup work, i.e. stopping threads and making a final socket connection used to communicate with a server.
All of this works fine when the DLL is loaded from the main thread (as in my test host) or in the specific host apps I tested.
Below are two stack traces from debug sessions of the C++ DLL running in Visual Studio 17.
The first is the bad one, where the DLL is loaded from a subthread.
The second is the good one, where the DLL was loaded from the main thread.
// exit stack: DLL loaded in subthread; breakpoint in `DLL_PROCESS_DETACH`. This is BAD
DllTest.dll!DllTest_app::~DllTest_app() Line 176 C++
[External Code]
DllTest.dll!DllTest_app::destroy() Line 208 C++
DllTest.dll!DllMain(HINSTANCE__ * hModule, unsigned long ul_reason_for_call, void * lpReserved) Line 43 C++
[External Code]
DllTest_test.exe!exit_or_terminate_process(const unsigned int return_code) Line 130 C++
DllTest_test.exe!common_exit(const int return_code, const _crt_exit_cleanup_mode cleanup_mode, const _crt_exit_return_mode return_mode) Line 271 C++
DllTest_test.exe!exit(int return_code) Line 283 C++
[External Code]
// exit stack: DLL loaded in main thread; breakpoint in `DLL_PROCESS_DETACH`. This is GOOD
DllTest.dll!DllTest_app::~DllTest_app() Line 175 C++
[External Code]
DllTest.dll!DllTest_app::destroy() Line 208 C++
DllTest.dll!DllMain(HINSTANCE__ * hModule, unsigned long ul_reason_for_call, void * lpReserved) Line 43 C++
[External Code]
DllTest_test.exe!DllTestWrap::Unload(int code, int bdeleteerror) Line 139 C++
DllTest_test.exe!DllTestWrap::~DllTestWrap() Line 66 C++
[External Code]
DllTest_test.exe!_execute_onexit_table::__l22::<lambda>() Line 198 C++
DllTest_test.exe!__crt_seh_guarded_call<int>::operator()<void <lambda>(void),int <lambda>(void) & __ptr64,void <lambda>(void) >(__acrt_lock_and_call::__l3::void <lambda>(void) && setup, _execute_onexit_table::__l22::int <lambda>(void) & action, __acrt_lock_and_call::__l4::void <lambda>(void) && cleanup) Line 199 C++
DllTest_test.exe!__acrt_lock_and_call<int <lambda>(void) >(const __acrt_lock_id lock_id, _execute_onexit_table::__l22::int <lambda>(void) && action) Line 882 C++
DllTest_test.exe!_execute_onexit_table(_onexit_table_t * table) Line 222 C++
DllTest_test.exe!common_exit(const int return_code, const _crt_exit_cleanup_mode cleanup_mode, const _crt_exit_return_mode return_mode) Line 211 C++
DllTest_test.exe!exit(int return_code) Line 283 C++
[External Code]
How can I accomplish that when DLL_PROCESS_DETACH is called, all threads created in the DLL are still running and not terminated, regardless of whether the DLL is loaded from a subthread or the main thread of the host app?
Is there something that can be done via compiler or linker settings, or a workaround?
Thank you in advance for every hint.

It seems like one of the threads in your app calls exit(), which leads to calling the ExitProcess() API: https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-exitprocess
As documented there, ExitProcess terminates all other threads and only then calls DllMain() with DLL_PROCESS_DETACH.
Calling ExitProcess can also lead to a deadlock if one of the terminated threads held a mutex that the last thread, while executing DLL_PROCESS_DETACH, tries to acquire.
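A common way to cope with this in the DLL (a sketch, not the asker's code) is to check the lpReserved parameter: per the DllMain documentation, on DLL_PROCESS_DETACH it is non-NULL when the process is terminating (the other threads are already gone) and NULL on a FreeLibrary unload, where full cleanup is still safe:

#include <windows.h>

BOOL APIENTRY DllMain(HINSTANCE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
{
    if (ul_reason_for_call == DLL_PROCESS_DETACH)
    {
        if (lpReserved != NULL)
        {
            // Process termination (exit()/ExitProcess()): other threads are
            // already dead; do not wait for them or take locks they may have
            // held. At most, flush or persist state that needs no other thread.
        }
        else
        {
            // FreeLibrary unload: threads are still running, so full cleanup
            // is safe here, e.g. signal workers to stop and close the socket.
        }
    }
    return TRUE;
}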

Related

Getting an exception from MSVC address sanitizer before even entering WinMain

I wanted to try out the new address sanitizer for MSVC, and after enabling it in my project I'm getting an access-violation exception whose call stack says it originates in __acrt_initialize() Line 291. Some research shows that this happens before the main function is called, and indeed, a breakpoint on the first line of WinMain is not reached before the exception occurs. With the sanitizer disabled, I am not getting any exceptions.

Data race in MFC in afxCurrentResourceHandle

We have an issue in an MFC-based application related to the "current MFC state" and threads. In the main thread we call the VisualManager, because we want to have fancy toolbars. The call ends up in the CMFCVisualManagerOffice2007::OnUpdateSystemColors function located inside afxvisualmanageroffice2007.cpp, which changes the "current resource handle" by calling the AfxSetResourceHandle function. This function gets the "current module state" and changes the "current resource handle" from MyApp.exe to mfc140u.dll. This by itself is fine, because the assets for the VisualManager are located in that DLL and the change is restored back to MyApp.exe in all cases.
However, what is not fine is that we spawn a new thread just before the call to the VisualManager (using AfxBeginThread). This thread needs to load some strings from the string table (using the CString class), but it sometimes fails to do so. It fails because there is a race with the main thread over the AFX_MODULE_STATE::m_hCurrentResourceHandle variable. The thread expects it to be set to MyApp.exe, but the main thread changes it to mfc140u.dll and back; the "current resource handle" is effectively a global variable.
So, my questions are: 1) Are we doing something obviously wrong in managing our MFC-based threads? Should we somehow copy or protect the "module state" so our new thread is immune to the change the main thread is making? Should we make MFC create something like a per-thread variable / state? 2) I believe Microsoft is wrong here, changing what is effectively a global variable and breaking other threads' expectations; the VisualManager should obtain the handle and pass it to all its functions as a parameter. Am I right?
EDIT:
Hi @iinspectable, @cha, I have an update; sorry it took so long. Steps to reproduce: open Visual Studio 2015 Update 3 and create a new MFC application through the wizard, making sure it has "Project style" and "Visual style and colors" set to "Office" and "Office 2007 (Blue theme)". Open the file afxvisualmanageroffice2007.cpp from the MSVS folder and put 4 breakpoints inside the CMFCVisualManagerOffice2007::OnUpdateSystemColors function where it calls AfxSetResourceHandle. Open the file MFCApplication1.cpp in your newly created project folder, put the code [1] below into the CMFCApplication4App::InitInstance function just before CMainFrame* pMainFrame = new CMainFrame;, and put a breakpoint into the thread proc.
Now build and run the MFC application in debug mode. On each breakpoint hit, use the freeze-thread and thaw-thread functions from the Threads window to arrange the main thread in the middle of CMFCVisualManagerOffice2007::OnUpdateSystemColors, just after it sets the global variable via AfxSetResourceHandle, and the worker thread just before CStringT::LoadString. Now the LoadString will fail, because it looks for the string inside mfc140ud.dll instead of using the resource chain and MFCApplication1.exe.
I believe this is Microsoft's bug (changing a global variable for a while); my code base is full of innocent CString::LoadString calls which rely on a carefully and correctly constructed resource chain with various plug-in DLLs and the .exe at the end. If this is not Microsoft's bug, then it is my bug for relying on MFC to provide a usable resource chain, and I would need to create my own resource-chain-like functionality and use it everywhere I load strings and other stuff from resources.
// [1]
AFX_THREADPROC thread_proc = [](LPVOID pParam) {
    CString str;
    str.LoadString(IDS_CAPTION_TEXT);  // races with AfxSetResourceHandle in the main thread
    UINT ret = 0;
    return ret;
};
::AfxBeginThread(thread_proc, (LPVOID)nullptr);
// Same result with the ::AfxBeginThread(CRuntimeClass*) overload.
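One possible mitigation, sketched below (LoadStringFromModule is a hypothetical helper, not an MFC API), is to bypass the shared "current resource handle" in worker threads by using the CStringT::LoadString overload that takes an explicit HINSTANCE. Note this also skips the MFC resource chain, so it only fits when you know which module holds the string:

// Sketch: load the string from an explicitly chosen module so that a
// concurrent AfxSetResourceHandle() in another thread cannot redirect it.
CString LoadStringFromModule(HINSTANCE hInst, UINT nID)
{
    CString str;
    str.LoadString(hInst, nID);  // overload that ignores the module-state handle
    return str;
}
// Worker thread usage:
// CString caption = LoadStringFromModule(AfxGetInstanceHandle(), IDS_CAPTION_TEXT);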

File accesses fail inside in-process COM server compiled with VS2013, with access-violation in release build

I am attempting to use a C++ COM server compiled with VS2013. It appears unable to access local files when compiled as 'Release'. By contrast, it exhibits no problems with local file access when compiled as 'Debug' or when compiled using VS2010. Both the COM client and COM server are compiled as 32-bit. I haven't tested with VS2012.
In order to localise the problem, I used the following fragment of code:
{
    // Probe: open a local file and read the first 4 bytes.
    std::basic_ifstream<char, std::char_traits<char> >
        in_strm(L"P:\\pp\\1302\\Examples\\copy.pp", std::ios_base::in | std::ios_base::binary);
    if (in_strm)
    {
        int x = 1;
        in_strm.read(reinterpret_cast<char*>(&x), sizeof(int));
        bool result = (x == 0 && in_strm.good());
        in_strm.close();
    }
}
The above code works fine within the COM client, but throws an access violation with the following call-stack when executed within the COM server.
ntdll.dll!_RtlpWaitOnCriticalSection@8() Unknown
ntdll.dll!_RtlEnterCriticalSection@4() Unknown
msvcr120.dll!_lock_file(_iobuf * pf) Line 223 C
ConvertServer13001.dll!std::basic_filebuf<char,std::char_traits<char> >::_Lock() Line 353 C++
msvcp120d.dll!std::basic_istream<char,std::char_traits<char> >::_Sentry_base::_Sentry_base(std::basic_istream<char,std::char_traits<char> > & _Istr={...}) Line 97 C++
msvcp120d.dll!std::basic_istream<char,std::char_traits<char> >::sentry::sentry(std::basic_istream<char,std::char_traits<char> > & _Istr={...}, bool _Noskip=true) Line 117 C++
msvcp120d.dll!std::basic_istream<char,std::char_traits<char> >::read(char * _Str=0x060bbea8, __int64 _Count=4) Line 730 C++
I have checked the "last error" in the Visual Studio debugger using "$err,hr", and no error is reported. The local file definitely exists and is not read-only.
I suspect that Microsoft have added a new COM security setting that I must handle in a particular way. Please could anyone advise me regarding this or anything else that might be an issue here?
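To isolate whether the CRT stream layer itself is involved, a Win32-level version of the same probe could be tried (a sketch; it bypasses iostreams entirely, so a failing open yields a meaningful GetLastError() code):

#include <windows.h>
#include <cstdio>

void ProbeFile(const wchar_t* path)
{
    // Open and read the first 4 bytes without touching the CRT stream locks.
    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
    {
        std::printf("open failed, GetLastError() = %lu\n", GetLastError());
        return;
    }
    int x = 1;
    DWORD bytesRead = 0;
    BOOL ok = ReadFile(h, &x, sizeof(int), &bytesRead, NULL);
    std::printf("read ok=%d, bytes=%lu, x=%d\n", ok, bytesRead, x);
    CloseHandle(h);
}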

Delphi: Exceptions when using ADO components in objects that are created in a plugin DLL on threads

The situation:
I have an application and a plugin DLL, both written in Delphi 7.
The DLL exports 3 functions: createobject: pointer, runobject(instance: pointer), freeobject(instance: pointer).
createobject creates an instance of a DLL-internal work object and returns a pointer to that object.
runobject takes this instance pointer as a parameter and uses it to start a processing function in the object instance the pointer points to.
freeobject takes the instance pointer and frees the internal object it points to.
I did this so that I can create multiple work-object instances from the plugin DLL.
Now, the application sets up 2 worker threads. While setting up the 2 threads, the plugin DLL is dynamically loaded twice via LoadLibrary (once for each thread) and the exported functions are handed to the thread. (Note: because it's the same DLL with the same filename, the DLL is loaded only once into my application and just the reference count of the loaded DLL goes to 2.)
Each worker thread starts, calls CoInitialize(nil) to initialize the COM system (because I want to use ADO components), then creates its own DLL-internal object via the DLL function createobject, and then calls runobject with the returned instance pointer as parameter.
The code inside runobject uses ADOConnection + ADOQuery components to read from a database.
The ADO components are created inside the work object and nothing is shared between the 2 threads... no global vars used.
The problem:
I get strange random access violations while the 2 object instances, each on its own thread, use their own ADO components to read from the DB!?
Both threads start to read some database rows. Then, at some random time and "random place" in the ADOQuery read code, exceptions are raised.
"Random place" means that the exceptions sometimes occur in the call to adoquery.open, sometimes in the call to adoquery.next...
The ADO code is really simple... it looks like this:
with adoquery do
begin
  sql.clear;
  sql.add('select * from sometable');
  open;
  while not eof do
  begin
    test := fieldbyname('test').asstring;
    next;
  end;
  close
end;
I did some testing:
a) If I use only 1 thread (so only 1 work object inside the DLL is being created), then everything works fine.
b) If I make a copy of the DLL file with another filename but the same code inside, and thread_1 loads dll_1 and thread_2 loads dll_2, then these 2 identical DLLs are really both loaded into my application and everything works fine. (Note: LoadLibrary in this test was called from the context of the main thread, not the context of each worker thread, but that seemed to be no problem because no exceptions occurred.)
c) If I don't use the DLL at all and just create my 2 work objects directly on my 2 threads, then everything works fine.
The exceptions only occur when I use ADO components in 2 separate work objects that are created on 2 threads and the creation code of the work object is inside a DLL which is loaded only once into my application.
Questions:
If I call exported functions from a DLL, does the LoadLibrary call which loaded the DLL have to be made from within the context of the calling thread? I don't think this is the case (see test b)), but perhaps somebody knows better!? Could this be causing my problems? If it is, there would seem to be no way of using functions from one DLL on multiple threads!?
Does anybody have an idea what causes these strange exceptions?
Any help/idea/explanation/suggestion is greatly appreciated.
I found it. The problem was that I had to switch the Delphi memory manager to multithreaded mode with IsMultiThread := True inside the DLL code! I had already done this in the main application code, but the DLL seems to use its own copy of the IsMultiThread flag, or even its own copy of the Delphi memory manager. After adding IsMultiThread := True to the DLL code, everything works fine now.

Make a Simple wxWidgets Program Without Memory Leaks

I'm trying to make a basic wxWidgets program that doesn't leak any memory (I'm developing on Windows 7, using Visual Studio 2010, and trying to use the CRT debug facilities to check for leaks).
I started from the OpenGL sample and gradually pared it down. After adding CRT calls to the OnExit method of my wxApp object (the only place I ever even saw it mentioned), I realized that memory was apparently being leaked everywhere.
I gradually worked it down further until I arrived at this sample, which makes the CRT spit out a huge load of leaks:
#include <wx/glcanvas.h>
#include <wx/wxprec.h>
#ifndef WX_PRECOMP
    #include <wx/wx.h>
#endif
#ifdef __WXMSW__
    #include <wx/msw/msvcrt.h>
#endif
#if !defined(_INC_CRTDBG)// || !defined(_CRTDBG_MAP_ALLOC)
    #error "Debug CRT functions have not been included!"
#endif

class App : public wxApp {
public:
    bool OnInit(void);
    int OnExit(void);
};

bool App::OnInit(void) {
    if (!wxApp::OnInit()) return false;
    return true;
}

int App::OnExit(void) {
    return wxApp::OnExit();
}

int WINAPI WinMain(HINSTANCE h_instance, HINSTANCE h_prev_instance, wxCmdLineArgType cmd_line, int cmd_show) {
    int leaks = _CrtDumpMemoryLeaks();
    if (leaks) {
        DebugBreak();  // halt in the debugger when leaks were reported
    }
    return EXIT_SUCCESS;
}

#pragma comment(lib,"wxbase29ud.lib")
#pragma comment(lib,"wxmsw29ud_gl.lib")
#pragma comment(lib,"wxmsw29ud_core.lib")
#pragma comment(lib,"wxpngd.lib")
#pragma comment(lib,"wxzlibd.lib")
#pragma comment(lib,"comctl32.lib")
#pragma comment(lib,"rpcrt4.lib")
Notice that the class App is not used anywhere. The function definitions outside the class are necessary to prevent it from being optimized away. If the class App is not present, then no errors occur.
The questions are: why isn't this working? How can I make a leak-free wxWidgets program? How should I use _CrtDumpMemoryLeaks()? Why aren't there resources about this, and if there are, where are they? The best I could find was this, which only suggested using the CRT, but didn't actually say how. Help?
It is possible that these are not real memory leaks. When you call _CrtDumpMemoryLeaks(), it goes through the heap looking for objects that have not been freed and reports them as leaks. Since you are calling it before your application has ended, anything still allocated on the heap will show up as a leak.
I'm pretty sure that wxWidgets creates some global objects (for example, I know there are wxEmptyString, wxDefaultPosition and so forth, and I daresay there are others that do actually perform some allocations) that will not be destroyed until after the end of your application. _CrtDumpMemoryLeaks() would need to be called after that point in order not to show false positives.
You can try to get the CRT to call _CrtDumpMemoryLeaks() automatically on program exit, as explained on MSDN.
There is also a related question here that might help you.
Edit: I've tried this myself by adding the following code to the top of my App::OnInit() method, and the only leak I get shown is a 64-byte one, which matches my forced leak. So it doesn't look like all wx applications are leaky. However, I also tried it with your code, and I do get leaks reported.
_CrtSetReportMode(_CRT_ERROR, _CRTDBG_MODE_FILE);
_CrtSetReportFile(_CRT_ERROR, _CRTDBG_FILE_STDERR);
int tmpDbgFlag = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);  // read the current flags
tmpDbgFlag |= _CRTDBG_LEAK_CHECK_DF;                   // request an automatic leak dump at exit
_CrtSetDbgFlag(tmpDbgFlag);
// Force a leak
malloc(64);
Edit 2: You need to include the following line after your App class definition so that wxWidgets uses your App class as the application object (and provides its own WinMain). I'm guessing that whatever it does in wxApp requires this line in order to clean itself up properly:
IMPLEMENT_APP(App)
Edit 3: I also found, on the wxWidgets page you linked to, that the startup code automatically calls _CrtSetDbgFlag() for you in debug mode. So you get leak detection without having to add the code yourself. You can test this by allocating some memory and not freeing it.
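Putting the pieces together, a minimal sketch of the corrected skeleton under the assumptions above (IMPLEMENT_APP supplies WinMain, and in debug builds the wx startup code enables the automatic leak dump):

#include <wx/wxprec.h>
#ifndef WX_PRECOMP
    #include <wx/wx.h>
#endif
#ifdef __WXMSW__
    #include <wx/msw/msvcrt.h>  // routes new/delete through the debug CRT heap
#endif

class App : public wxApp {
public:
    virtual bool OnInit() {
        if (!wxApp::OnInit()) return false;
        // In debug builds wxWidgets sets _CRTDBG_LEAK_CHECK_DF itself, so
        // anything still allocated at process exit is dumped automatically.
        return true;
    }
};

IMPLEMENT_APP(App)  // provides WinMain and destroys the app object cleanly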
