Garbage Collector: finalize isn't always called?

My question is related to Java finalize: how can I free a non-GC resource even if there's a mistake.
Is finalize NOT always called by most garbage collectors? If so, why not? And is there any GC that guarantees finalize is called before the program exits normally?
I use boehm-gc in a project. Does boehm-gc guarantee that finalizers are called before the program exits normally? If not, is there any way to run finalizers on a normal exit (so to speak, call GC_gcollect before main returns)?
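For what it's worth, here is a minimal sketch of that last idea, assuming the standard Boehm GC API (GC_register_finalizer, GC_gcollect, GC_invoke_finalizers). The resource and finalizer below are made up, and whether a finalizer actually runs still depends on the object being found unreachable, which a conservative collector cannot always guarantee:
#include <gc.h>
#include <cstdio>

// Hypothetical finalizer that would release a non-GC resource.
static void release_resource(void *obj, void *client_data)
{
    std::printf("finalizer ran for %p\n", obj);
}

int main()
{
    GC_INIT();

    void *res = GC_MALLOC(16);
    GC_register_finalizer(res, release_resource, nullptr, nullptr, nullptr);
    res = nullptr;              // drop the last visible reference

    // Explicitly collect and run any pending finalizers before a normal exit.
    GC_gcollect();
    GC_invoke_finalizers();
    return 0;
}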

Related

What happens when a panic hook panics?

What happens if the function I passed to std::panic::set_hook panics?
I can imagine many ways of reacting to this: consider this UB, abort the program like C++ does, invoke the panic handler again for the new panic, simply abort the execution of the hook... What exactly does Rust promise here?
Context. I'm writing a web app with Rust/WASM backend and I would like to make a panic hook that sends any errors to the server for debugging. This involves a network operation, which can itself fail. So I'm trying to figure out how I can ensure some reasonable behavior in this double-failure scenario.
It's not documented outside of the source code.
The source code for the panic entry point in std has this comment:
// If this is the third nested call (e.g., panics == 2, this is 0-indexed),
// the panic hook probably triggered the last panic, otherwise the
// double-panic check would have aborted the process. In this case abort the
// process real quickly as we don't want to try calling it again as it'll
// probably just panic again.
So the answer to your question is either "invoke the panic handler again for the new panic" or "abort the program" depending on how many times the hook already panicked.
This all assumes you aren't using #![no_std]. If you are then you're either disabling panicking altogether or you are implementing your own panic handler with #[panic_handler], in which case you get to decide what happens yourself.

JNI code is executed concurrently with garbage collection.

It is written at http://psy-lob-saw.blogspot.com/2015/12/safepoints.html:
A Java thread is at a safepoint while executing JNI code. Before
crossing the native call boundary the stack is left in a consistent
state before handing off to the native code. This means that the
thread can still run while at a safepoint.
How is it possible? After all, I can pass an object's reference to JNI.
In JNI I can set a field in that object.
It is clear that it can't be collected (we have a local reference). But it can be moved to the old generation by the GC during a full collection.
So, we have the following situation:
GC collector:                              | Thread executing JNI code:
compact the old generation and             | modify object fields that can be
move objects from the young generation     | moved right now! A catastrophe.
to the old generation.                     |
How does the JVM deal with that?
Almost every JNI call has a safepoint guard. Whenever you invoke a JNI function from a native method, a thread switches from in_native to in_vm state. A part of this transition is a safepoint check.
See ThreadStateTransition::transition_from_native(), which calls JavaThread::check_safepoint_and_suspend_for_native_trans(thread):
// Slow path when the native==>VM/Java barriers detect a safepoint is in
// progress or when _suspend_flags is non-zero.
// Current thread needs to self-suspend if there is a suspend request and/or
// block if a safepoint is in progress.
That is, a thread calling JNI function while GC is active will be suspended until GC completes.
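To make that concrete, here is a sketch (not code from the question; the class and field names are invented): the native side never dereferences the object's memory directly. It only holds an opaque jobject handle, and every read or write goes through a JNIEnv call, each of which goes through the native==>VM transition and safepoint check described above, so it simply blocks while a moving GC is in progress.
#include <jni.h>

// Hypothetical native method for: native void setValue(int v); in com.example.Foo
extern "C" JNIEXPORT void JNICALL
Java_com_example_Foo_setValue(JNIEnv *env, jobject obj, jint newValue)
{
    // Each JNI call below crosses the native==>VM barrier and therefore
    // blocks if a safepoint (e.g. a moving/compacting GC) is in progress.
    jclass cls = env->GetObjectClass(obj);
    jfieldID fid = env->GetFieldID(cls, "value", "I");

    // By the time the write happens, the VM has let this thread back in,
    // so the object cannot be halfway through being moved.
    env->SetIntField(obj, fid, newValue);
}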

When does the garbage collector run when calling Haskell exports from C?

When exporting a Haskell function to be called from C, when does Haskell's garbage get collected? If C owns main, then there is no way to predict the next call into Haskell. This question is especially pertinent when running single-threaded Haskell or without parallel GC.
When you initialize the GHC runtime, you can pass RTS flags to it via argc and argv like so:
RtsConfig conf = defaultRtsConfig;
conf.rts_opts_enabled = RtsOptsAll;
hs_init_ghc(&argc, &argv, conf);
This lets you set options to, for example, fix a smaller maximum heap size or use a compaction algorithm on the nursery to further reduce allocation. Further, note that there is an idle GC whose interval can be set (or disabled), and if you link the threaded runtime, it should run whether or not you ever yield back to a Haskell call.
Edit: I haven't actually performed experiments to verify the following, but if we look at the source of hs_init_ghc we see that it initializes signal handlers, which should include the timer handler that responds to SIGVTALRM, and indeed it also starts the timer, which (on POSIX) calls timer_create so that those signals are raised at regular intervals. In turn, this should periodically "wake up" the RTS whether or not anything is happening, which in turn should mean that it will run the idle GC whether or not the system yields back to Haskell from C. But again, I have only read the code and commentary, not tested this myself.
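As a concrete sketch of wiring this up (not from the original answer; the exported function name and the -I0.3 idle-GC interval are just illustrative), the RTS options can be passed through the argv you hand to hs_init_ghc:
#include "HsFFI.h"
#include "Rts.h"

extern "C" void myHaskellExport(void);   // hypothetical foreign-exported Haskell function

int main(void)
{
    // Fake argv carrying RTS options; -I0.3 requests an idle GC every 0.3 s
    // (only effective with the threaded runtime), -I0 would disable it.
    char *args[] = { (char *)"app", (char *)"+RTS", (char *)"-I0.3", (char *)"-RTS", nullptr };
    int argc = 4;
    char **argv = args;

    RtsConfig conf = defaultRtsConfig;
    conf.rts_opts_enabled = RtsOptsAll;
    hs_init_ghc(&argc, &argv, conf);

    myHaskellExport();                   // Haskell's GC can run during this call

    hs_exit();
    return 0;
}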

winapi apc function parameter passing - what is the best practice

Hi, I'm using WinAPI's QueueUserAPC to invoke an APC function call in another thread.
My question is: what is the best practice for passing a parameter to it?
I'm referring to the object's lifetime and the allocation/deallocation responsibility.
DWORD WINAPI QueueUserAPC(PAPCFUNC pfnAPC, HANDLE hThread, ULONG_PTR dwData);
I am using dwData to pass a pointer to some data, and I was wondering how I should handle it.
I need to make sure that it lives until the receiving thread has finished using it.
Should I use a smart pointer to make sure the data is deallocated when it is no longer used?
I guess that allocating in the calling thread and deallocating in the receiving one is possible, but probably not such a good thing.
Anything else that can be done?
I think I would like to avoid synchronization between the two just to notify that the receiving thread is done with the data...
Thanks!
Allocating in the sending thread and deallocating in the receiving one is easy, but it has one main drawback: it may leak. Even if you handle a send failure, the receiving thread may finish before it gets a chance to execute the APC.
Probably the easiest way to avoid the leak is to create a queue for sent data (maybe one queue per thread); when a thread finishes, you traverse its queue and free all the pending data.
But as usual, the devil is in the details...
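Here is a minimal sketch of the alloc-in-sender, free-in-receiver pattern with a smart pointer marking the ownership hand-off (the payload type and function names are invented for illustration). It still has the leak described above if the target thread never enters an alertable wait, so the per-thread cleanup queue remains the more robust option:
#include <windows.h>
#include <memory>
#include <string>

struct ApcPayload {
    std::string message;
};

// Runs in the target thread the next time it enters an alertable wait.
void CALLBACK MyApcProc(ULONG_PTR dwData)
{
    // Take ownership back; the payload is freed when this scope ends.
    std::unique_ptr<ApcPayload> payload(reinterpret_cast<ApcPayload *>(dwData));
    // ... use payload->message ...
}

bool SendToThread(HANDLE hThread, std::string msg)
{
    auto payload = std::make_unique<ApcPayload>();
    payload->message = std::move(msg);

    // Hand the raw pointer to the APC; release ownership only if queueing
    // succeeded, otherwise the unique_ptr frees the payload on return.
    if (QueueUserAPC(MyApcProc, hThread, reinterpret_cast<ULONG_PTR>(payload.get())) == 0)
        return false;

    payload.release();
    return true;
}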

Is the first thread that gets to run inside a Win32 process the "primary thread"? Need to understand the semantics

I create a process using CreateProcess() with the CREATE_SUSPENDED flag and then go ahead and create a little patch of code inside the remote process to load a DLL and call a function (exported by that DLL), using VirtualAllocEx() (with ..., MEM_RESERVE | MEM_COMMIT, PAGE_EXECUTE_READWRITE) and WriteProcessMemory(), then calling FlushInstructionCache() on that patch of memory with the code.
After that I call CreateRemoteThread() to invoke that code, giving me hRemoteThread. I have verified that the remote code works as intended. Note: this code simply returns; it does not call any APIs other than LoadLibrary() and GetProcAddress(), followed by a call to the exported stub function, which currently just returns a value that then gets passed on as the exit status of the thread.
Now comes the peculiar observation: remember that the PROCESS_INFORMATION::hThread is still suspended. When I simply ignore hRemoteThread's exit code and also don't wait for it to exit, all goes "fine". The routine that calls CreateRemoteThread() returns and PROCESS_INFORMATION::hThread gets resumed and the (remote) program actually gets to run.
However, if I call WaitForSingleObject(hRemoteThread, INFINITE) or do the following (which has the same effect):
DWORD exitCode = STILL_ACTIVE;
while(STILL_ACTIVE == exitCode)
{
    Sleep(500);
    if(!GetExitCodeThread(hRemoteThread, &exitCode))
        break;
}
followed by CloseHandle(), this leads to hRemoteThread finishing before PROCESS_INFORMATION::hThread gets resumed, and the process simply "disappears". It is enough to allow hRemoteThread to finish somehow, without resuming PROCESS_INFORMATION::hThread, to cause the process to die.
This looks suspiciously like a race condition, since under certain circumstances hRemoteThread may still be faster and the process would likely still "disappear", even if I leave the code as is.
Does that imply that the first thread that gets to run within a process becomes automatically the primary thread and that there are special rules for that primary thread?
I was always under the impression that a process finishes when its last thread dies, not when a particular thread dies.
Also note: there is no call to ExitProcess() involved here in any way, because hRemoteThread simply returns and PROCESS_INFORMATION::hThread is still suspended when I wait for hRemoteThread to return.
This happens on Windows XP SP3, 32bit.
Edit: I have just tried Sysinternals Process Monitor to see what's happening, and I could verify my observations from before. The injected code does not crash or anything; instead, I see that if I don't wait for the thread, it doesn't exit before I close the program into which the code was injected. I'm wondering whether the call to CloseHandle(hRemoteThread) should be postponed or something ...
Edit+1: it's not CloseHandle(). If I leave that out just for a test, the behavior doesn't change when waiting for the thread to finish.
The first thread to run isn't special.
For example, create a console app which creates a suspended thread and terminates the original thread (by calling ExitThread). This process never terminates (on Windows 7 anyway).
Or make the new thread wait for five seconds then exit. As expected, the process will live for five seconds and exit when the secondary thread terminates.
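A quick sketch of that second experiment (not code from the question):
#include <windows.h>

DWORD WINAPI SecondaryThread(LPVOID)
{
    Sleep(5000);     // the process stays alive while this thread runs
    return 0;        // the process exits when its last thread ends
}

int main()
{
    CreateThread(nullptr, 0, SecondaryThread, nullptr, 0, nullptr);
    ExitThread(0);   // ends only the original ("primary") thread, not the process
}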
I don't know what's happening with your example. The easiest way to avoid the race is to make the new thread resume the original thread.
Speculating now, I do wonder if what you're doing isn't likely to cause problems anyway. For example, what happens to all the DllMain calls for the implicitly loaded DLLs? Are they unexpectedly happening on the wrong thread, are they being skipped, or are they postponed until after your code has run and the main thread starts?
Odds are good that the thread with the main (or equivalent) function calls ExitProcess (either explicitly or in its runtime library). ExitProcess, well, exits the entire process, including killing all threads. Since the main thread doesn't know about your injected code, it doesn't wait for it to finish.
I don't know that there's a good way to make the main thread wait for yours to complete...

Resources