I need to check for memory leaks in a Java application that makes heavy use of JNI (C++ code).
When I attach libumem, the process exits after receiving SIGKILL (signal 9).
When does a process receive SIGKILL?
How is libumem causing it?
OS: Solaris 8.
Solaris 8 does NOT have libumem by default! Thus the program was not able to start up at all.
I experienced a problem with my program at a customer site: the process suddenly disappears, and I'm trying to find out why. The program is written in C++ and runs on modern Linux systems (RHEL/CentOS).
What I checked so far:
the program prints nothing on standard output or standard error, which it normally does when an exception is thrown, since my handler prints a backtrace before aborting (a sketch of such a handler follows this question)
dmesg does not include anything meaningful (such as an OOM-killer message or any other indication that the process was killed).
I have very limited access to the customer environment, so I asked them to run gdb and provide us with the log. The gdb script attaches to the process, catches throw and the signals SIGTERM, SIGUSR1, SIGUSR2, SIGINT, SIGSEGV, SIGABRT, SIGBUS, SIGILL and SIGQUIT, and puts breakpoints on exit and _exit. The gdb log contains no indication that the process caught any of these, nor that it received SIGKILL (and I believe that would normally be logged).
Any other ideas what else I could check?
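For reference, a minimal sketch of the kind of handler I mean, using glibc's execinfo API (the set of signals handled and the 64-frame limit are illustrative):

    #include <execinfo.h>
    #include <signal.h>
    #include <unistd.h>

    // On a fatal signal, write a raw backtrace to stderr, then re-raise the
    // signal with its default action so the process still dies (and can dump
    // core). backtrace_symbols_fd() writes straight to a file descriptor and
    // does not call malloc(); backtrace() itself may allocate on first use,
    // which is why it is called once at startup below.
    static void crash_handler(int sig) {
        void* frames[64];
        int n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
        signal(sig, SIG_DFL);
        raise(sig);
    }

    int main() {
        void* warmup[1];
        backtrace(warmup, 1);  // force lazy initialisation outside the handler
        int sigs[] = {SIGSEGV, SIGABRT, SIGBUS, SIGILL};
        for (int sig : sigs)
            signal(sig, crash_handler);
        // ... application code ...
    }

If a handler like this never prints anything and dmesg is clean, an external SIGKILL (which cannot be caught) becomes the more likely suspect.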
I am running a Linux program that uses a lot of memory. If I terminate it manually using Ctrl-C, it does the necessary memory clean-up. Now I'm trying to terminate the program from a script. What is an elegant way to do so? I'm hoping for something similar to Ctrl-C so it can do its memory clean-up. Will using the "kill -9" command do this?
What do you mean by memory clean-up?
Keep in mind that memory will be freed anyway, regardless of the killing signal.
The default kill signal, SIGTERM (15), gives the application a chance to do some additional work, but that has to be implemented with a signal handler (see the sketch below).
Signal handling in C++
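A minimal sketch of such a handler, assuming the clean-up is done in main once a flag is set (the names and the idle loop are illustrative):

    #include <signal.h>
    #include <unistd.h>

    // The handler only sets a flag; the main loop notices it and does the
    // real clean-up, since almost nothing is async-signal-safe inside a
    // signal handler itself.
    static volatile sig_atomic_t g_stop = 0;

    static void on_term(int) { g_stop = 1; }

    int main() {
        struct sigaction sa = {};
        sa.sa_handler = on_term;
        sigaction(SIGTERM, &sa, nullptr);  // plain "kill <pid>" sends SIGTERM
        sigaction(SIGINT,  &sa, nullptr);  // Ctrl-C sends SIGINT

        while (!g_stop) {
            // ... do work ...
            sleep(1);
        }
        // ... free memory, flush files, etc. ...
        return 0;
    }

A script can then use plain kill <pid> (or kill -TERM <pid>). kill -9 sends SIGKILL, which cannot be caught, so the clean-up code would never run.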
I'm sending SIGKILL to a process on Linux; during exit it encounters a memory bug and aborts, generating a core dump. I didn't think this was possible on any Unix system, yet this is what I observe. Is it possible for a process killed by signal 9 to die from some other signal and leave a core dump?
No, a process can't catch SIGKILL, but there is the possibility of a "process watcher" or wrapper (see the sketch below).
Are you sure that no other processes are spawned to watch this process?
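To illustrate, a hypothetical wrapper of that kind: the child really is killed by signal 9, while anything that happens afterwards (including a crash and core dump) belongs to the wrapper process:

    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>

    // Hypothetical wrapper: run the real program as a child and report
    // exactly how it died. SIGKILL shows up as WTERMSIG(status) == 9; the
    // wrapper itself is a separate process and can still crash on its own.
    int main(int argc, char* argv[]) {
        if (argc < 2)
            return 1;
        pid_t pid = fork();
        if (pid == 0) {
            execvp(argv[1], &argv[1]);  // child: become the real program
            _exit(127);                 // exec failed
        }
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            fprintf(stderr, "child died from signal %d%s\n", WTERMSIG(status),
                    WCOREDUMP(status) ? " (core dumped)" : "");
        else if (WIFEXITED(status))
            fprintf(stderr, "child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }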
I am getting a crash in a thread. While debugging the core dump with gdb, I want to see the state of the thread just before the crash.
In my program I raise a signal for that thread and handle it (along the lines of the sketch after this question). It would be helpful to know the thread's state before it crashed and before the signal was raised for it. Is it possible to obtain this information from gdb?
Thanks
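For context, a minimal sketch of that setup, assuming SIGUSR1 is the signal being raised (the worker's loop is a placeholder):

    #include <pthread.h>
    #include <signal.h>
    #include <unistd.h>

    // pthread_kill() delivers a signal to one specific thread; the handler
    // then runs on that thread's stack, on top of whatever it was doing.
    static void on_usr1(int) {
        // only async-signal-safe calls belong in a handler
        write(STDERR_FILENO, "SIGUSR1 handled in worker thread\n", 33);
    }

    static void* worker(void*) {
        for (int i = 0; i < 3; ++i)
            sleep(1);  // placeholder for real work
        return nullptr;
    }

    int main() {
        signal(SIGUSR1, on_usr1);
        pthread_t tid;
        pthread_create(&tid, nullptr, worker, nullptr);
        sleep(1);
        pthread_kill(tid, SIGUSR1);  // raise the signal for that one thread
        pthread_join(tid, nullptr);
    }

Because the handler runs on the target thread's stack, the frames below the signal trampoline in that thread's bt are its state at the moment the signal arrived.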
With "Reversible Debugging" of gdb 7.4 it is possible. Look here for a little tutorial.
Please refer to this page
http://linux-hacks.blogspot.com/2009/07/looking-at-thread-state-inside-gdb.html
I have a server program which doesn't have a very clean/graceful shutdown (it isn't supposed to terminate in general). When tracing memory leaks I run it under valgrind, but in the end I have to kill the process with a signal (^C). I generally try to terminate the process when things are quiet, but even then some threads may be busy processing jobs, and the memory they hold causes false alarms. To assist such analysis, is there any way (or tool) in valgrind to print the backtrace of the threads when the program exits (by a signal)?
I know it's inconvenient, but could you get your program to dump core when it gets this signal, and then diagnose the core dump with gdb? A sketch of the idea follows.
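This assumes core dumps are enabled (ulimit -c unlimited); the handler name and the idle loop are illustrative:

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    // On Ctrl-C, dump core instead of exiting quietly: SIGABRT's default
    // action writes a core file (if "ulimit -c" permits), which gdb can then
    // load together with the binary.
    static void dump_core(int) {
        signal(SIGABRT, SIG_DFL);  // ensure abort() takes the default path
        abort();
    }

    int main() {
        signal(SIGINT, dump_core);
        // ... server main loop ...
        for (;;)
            pause();
    }

Loading the binary and the core file into gdb then lets you inspect where every thread was at the moment of the interrupt.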
I'm not sure I quite understand your question, but you can print the backtrace of all threads with gdb (this also works while the program runs under valgrind, by attaching gdb through valgrind's embedded gdbserver via the vgdb utility):
thread apply all bt