This is on a Red Hat EL5 machine with a 2.6.18-164.2.1.el5 x86_64 kernel, using gcc 4.1.2 and gdb 7.0.
When I run my application with gdb and break in while it's running, several of my threads show the following call stack when I do a backtrace:
#0 0x000000000051d7da in pthread_cond_wait ()
#1 0x0000000100000000 in ?? ()
#2 0x0000000000c1c3b0 in ?? ()
#3 0x0000000000c1c448 in ?? ()
#4 0x00000000000007dd in ?? ()
#5 0x000000000051d630 in ?? ()
#6 0x00007fffffffdc90 in ?? ()
#7 0x000000003b1ae84b in ?? ()
#8 0x00007fffffffdd50 in ?? ()
#9 0x0000000000000000 in ?? ()
Is this a symptom of a common problem?
Is there a known issue with viewing the call stack while waiting on a condition?
The problem is that pthread_cond_wait is written in hand-coded assembly and apparently doesn't have a proper unwind descriptor (required on x86_64 to unwind the stack) in your build of glibc. This problem may have recently been fixed here.
You can try to build and install the latest glibc (note: if you screw up installation, your machine will likely become unbootable; approach with extreme caution!), or just live with "bogus" stack traces from pthread_cond_wait.
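For what it's worth, you don't need the original application to reproduce this. A stripped-down sketch like the following (my own example, not from the question) parks a thread in pthread_cond_wait() forever; breaking in with gdb on an affected glibc build shows the same garbage frames above frame #0:

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

static void *waiter(void *)
{
    pthread_mutex_lock(&m);
    while (true)                    // nobody ever signals, so we stay blocked
        pthread_cond_wait(&c, &m);
    return 0;
}

int main()
{
    pthread_t t;
    pthread_create(&t, 0, waiter, 0);
    pthread_join(t, 0);             // keep the process alive for gdb
    return 0;
}

Build with g++ -g -pthread, attach gdb, and run thread apply all bt; on a fixed glibc the frames above pthread_cond_wait resolve to waiter() and start_thread() instead of garbage.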
Generally, synchronization is required when multiple threads share a single resource.
In such a case, when you interrupt the program, you'll see that only one thread is running (i.e., accessing the resource) and the other threads are waiting inside pthread_cond_wait().
So I don't think pthread_cond_wait() itself is problematic.
If your program hangs in a deadlock or its performance doesn't scale, the cause might lie in how pthread_cond_wait() is used.
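For reference, the usual pattern looks like the sketch below (the names are mine, not from the question). Time spent blocked inside pthread_cond_wait() here is entirely normal; hangs usually come from a producer that forgets to signal, or a waiter that doesn't re-check its predicate in a loop:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int work_available = 0;

void *consumer(void *)
{
    pthread_mutex_lock(&lock);
    while (!work_available)          // loop guards against spurious wakeups
        pthread_cond_wait(&ready, &lock);
    work_available = 0;              // consume the work item
    pthread_mutex_unlock(&lock);
    return 0;
}

void producer()
{
    pthread_mutex_lock(&lock);
    work_available = 1;
    pthread_cond_signal(&ready);     // wake one waiting consumer
    pthread_mutex_unlock(&lock);
}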
That looks like a corrupt stack trace to me. For example:
#9 0x0000000000000000 in ?? ()
There shouldn't be code at NULL.
I am using zmq PUSH and PULL sockets, and recently started observing a SIGABRT crash in the zmq_poll() operation.
The error/exit log is "Permission denied (src/tcp_connecter.cpp:361)"
(gdb) bt
#0 0x00007ffff76d053f in raise () from /lib64/libc.so.6
#1 0x00007ffff76ba895 in abort () from /lib64/libc.so.6
#2 0x00007ffff7f59ace in zmq::zmq_abort(char const*) () from /lib64/libzmq.so.5
#3 0x00007ffff7f9ef36 in zmq::tcp_connecter_t::connect() () from /lib64/libzmq.so.5
#4 0x00007ffff7f9f060 in zmq::tcp_connecter_t::out_event() () from /lib64/libzmq.so.5
#5 0x00007ffff7f6bc2c in zmq::epoll_t::loop() () from /lib64/libzmq.so.5
#6 0x00007ffff7f9ffba in thread_routine () from /lib64/libzmq.so.5
#7 0x00007ffff75d058e in start_thread () from /lib64/libpthread.so.0
#8 0x00007ffff77956a3 in clone () from /lib64/libc.so.6
Could anyone help me here?
The process is part of a container running in Kubernetes. This issue started occurring suddenly, and I haven't been able to recover from it.
Thanks,
In the meantime, I resolved the issue.
The zmq interface on Host A was trying to connect to Host B, and the above error was observed on Host A.
This issue started occurring after Host B was restarted, and I noticed that an ip6tables rule had been added on Host B as part of its restart. The rule was doing "reject with admin prohibited" in the INPUT and FORWARD chains on Host B. (I have to search my notes for the exact rule.)
Because of this, the zmq client on Host A was ending up in the crash shown above. I believe a crash (SIGABRT) should not be the outcome of hitting such a rule on the peer end, since a SIGABRT cannot be handled in application code.
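To illustrate why it can't be handled in your code (a sketch with a hypothetical endpoint, not the poster's program): zmq_connect() returns immediately, and the actual TCP connect runs later on libzmq's internal I/O thread -- the zmq::tcp_connecter_t and epoll_t frames in the backtrace. An ip6tables "admin prohibited" reject surfaces on the connecting side as an errno the library apparently doesn't expect there (EACCES, as far as I can tell), which trips the internal assertion and aborts the whole process:

#include <zmq.h>

int main()
{
    void *ctx  = zmq_ctx_new();
    void *push = zmq_socket(ctx, ZMQ_PUSH);

    // Returns 0 immediately; the TCP connect itself happens on the
    // I/O thread, so no error is reported here even if the peer
    // rejects the connection.
    zmq_connect(push, "tcp://host-b:5555");    // hypothetical address

    zmq_send(push, "hello", 5, 0);  // the abort, if it comes, fires on the
                                    // I/O thread, not inside any of these calls
    zmq_close(push);
    zmq_ctx_destroy(ctx);
    return 0;
}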
This question already has answers here:
Debugging core files generated on a Customer's box
(5 answers)
Closed 2 years ago.
So I have my core dump after setting the ulimit (ulimit -c unlimited).
The core dump comes from another system that is experiencing some issues.
I have copied the core over to my dev system to examine it.
I go into gdb:
$ gdb -c core
...
Core was generated by `./ovcc'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fedd95678a9 in ?? ()
[Current thread is 1 (LWP 15155)]
(gdb) symbol-file ./ovcc
Reading symbols from ./ovcc...
(gdb) bt
#0 0x00007fedd95678a9 in ?? ()
#1 0x0000000000000002 in ?? ()
#2 0x000055e01cd5e7e0 in ?? ()
#3 0x00007fedd21e9e00 in ?? ()
#4 0x0000000000000201 in ?? ()
#5 0x000055e01cd5e7e0 in ?? ()
#6 0x0000000000000201 in ?? ()
#7 0x0000000000000000 in ?? ()
(gdb)
I checked the compile and link commands, and they both include "-g", and I can visually step through the program with the Codium debugger!
So why can't I see where the executable is crashing?
What have I missed?
Is the problem the fact that the core was created on another system?
Is the problem the fact that the core was created on another system?
Yes, exactly.
See this answer for possible solutions.
Update:
So does this mean I can only debug the program on the system where it is both built and crashes?
It is certainly not true that you can only debug a core on the system where the binary was both built and crashed -- I debug core dumps from different systems every day, and in my case the build host, the host where the program crashed, and the host on which I debug are all separate.
One thing I just noticed: your style of loading the core (gdb -c core followed by symbol-file) doesn't work for PIE executables (at least when using GDB 10.0). This may be a bug in GDB.
The "regular" way of loading the core is:
gdb ./ovcc core
See if that gives you better results. (You still need to arrange for matching DSOs, as linked answer shows how to do.)
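For example (the paths are placeholders): copy the libraries from the system that produced the core into a local directory, then point GDB at it before loading the core so the DSOs match:

mkdir -p /tmp/crash-root
# copy /lib64, /usr/lib64, ... from the crashing system into /tmp/crash-root
gdb ./ovcc
(gdb) set sysroot /tmp/crash-root
(gdb) core-file core
(gdb) bt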
I have an application which was running fine for the last 15 days, with the functions below getting called multiple times, but today it crashed in fopen. I have pasted the bt below. Can someone please advise what might have gone wrong? From the backtrace it doesn't seem to be memory corruption, as all thread data and stack variables look good. Can it be related to some bug in RHEL 5.x?
(gdb) bt
#0 0x00fe4410 in __kernel_vsyscall ()
#1 0x0057ab10 in raise () from /lib/libc.so.6
#2 0x0057c421 in abort () from /lib/libc.so.6
#3 0x005b367b in __libc_message () from /lib/libc.so.6
#4 0x005bc8bd in _int_malloc () from /lib/libc.so.6
#5 0x005be247 in malloc () from /lib/libc.so.6
#6 0x005aa8ef in __fopen_internal () from /lib/libc.so.6
#7 0x005aa9bc in fopen@@GLIBC_2.1 () from /lib/libc.so.6
#8 0x0811cbff in file_timer_expiry (p_mod_ctx=0xb07e4c8, p_timer_ctx=0x7ce78368)
#9 0x08117c33 in timer_handler (timerId=0xad54aa50, p_timer_info=0x7ce78368, p_module_context=0xb07e4c8)
#10 0x08397b43 in ProcessTimerTable (vc=0xae6edb8, nw=0xa89fd380)
#11 0x0839974c in Schedule (nw=0xa89fd380, f=0x832027e <BaseUpdate>, ctxt=0x9955e98)
#12 0x080730a1 in DriverWhile (p_info=0x95f68c8, W=0x84a698c, policy=2 '\002')
#13 0x080732e1 in start_id (args=0x95f68c8)
#14 0x006e7912 in start_thread () from /lib/libpthread.so.0
#15 0x0062747e in clone () from /lib/libc.so.6
#16 0x00000000 in ?? ()
A crash inside malloc implementation is (in 99.99% of cases) a result of heap corruption.
It is likely that your program has printed a message, similar to
*** glibc detected *** ./a.out: double free or corruption (!prev): 0x0000000000c6ed50 ***
to the terminal on which it ran.
To find heap corruption, use Valgrind or (better) Address Sanitizer (supported by recent versions of GCC and Clang).
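For example, a deliberately broken program like this sketch (not your code, obviously) corrupts the heap in a way glibc may or may not catch at the point of the error, while the same binary rebuilt with -fsanitize=address reports the exact line of the second free:

#include <cstdlib>

int main()
{
    char *p = static_cast<char *>(std::malloc(16));
    std::free(p);
    std::free(p);   // double free: glibc may abort here, much later
                    // (e.g. inside an unrelated fopen), or not at all;
                    // ASan flags this line deterministically
    return 0;
}

Build with g++ -g -fsanitize=address and run normally; Valgrind (valgrind ./a.out) needs no rebuild but runs much slower.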
I've been tracking a bug triggered at the launch of my program. Here is the backtrace provided by gdb:
(gdb) bt
#0 0xb753f571 in llvm::cl::parser<llvm::FunctionPass* (*)()>::getOption(unsigned int) const ()
from ./libgdl.so
#1 0xb79aeab4 in llvm::cl::generic_parser_base::findOption(char const*) ()
from ./libgdl.so
#2 0xb753f679 in llvm::RegisterPassParser<llvm::RegisterRegAlloc>::NotifyRemove(char const*) ()
from ./libgdl.so
#3 0xaf35f0b6 in llvm::MachinePassRegistry::Add(llvm::MachinePassRegistryNode*) () from /usr/lib/i386-linux-gnu/libLLVM-3.1.so.1
#4 0xaef42b16 in ?? () from /usr/lib/i386-linux-gnu/libLLVM-3.1.so.1
#5 0xb7fece9b in ?? () from /lib/ld-linux.so.2
In fact, the crash is due to the system using LLVM 3.1 (for a graphics-related task) while I'm using LLVM 3.0, which is embedded in my program (libgdl.so):
When libLLVM-3.1.so.1 wants to call the NotifyRemove function, the call is forwarded to my version of LLVM in libgdl.so, and it leads to the crash, as the versions are incompatible.
Is there any way to prevent such a mess?
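One common way to prevent this kind of symbol interposition (a sketch assuming the GNU toolchain; gdl_* stands in for whatever your real exported API is) is to stop libgdl.so from exporting its embedded LLVM 3.0 symbols, using a linker version script:

/* gdl.map: export only the library's own API and hide everything
   else, including the embedded LLVM 3.0 symbols, so calls from
   libLLVM-3.1.so.1 can no longer bind to them */
{
  global:
    gdl_*;
  local:
    *;
};

Link libgdl.so with g++ -shared -Wl,--version-script=gdl.map ... and the dynamic linker will resolve libLLVM-3.1's internal calls within libLLVM-3.1 itself.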
I'm building a shared library on Linux. The ".so" library was successfully created, but when I tried to link it to a test application (with an empty main) and run the executable, I got a segmentation fault: "Segmentation fault (core dumped)".
When I tried to debug it with gdb and check the backtrace, I got this output:
Program received signal SIGSEGV, Segmentation fault.
0x0073d5df in std::_Rb_tree_decrement(std::_Rb_tree_node_base*) () from /usr/lib/libstdc++.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.12.1-4.i686 libgcc-4.4.5-2.fc13.i686 libstdc++-4.4.5-2.fc13.i686 zlib-1.2.3-23.fc12.i686
(gdb) backtrace
#0 0x0073d5df in std::_Rb_tree_decrement(std::_Rb_tree_node_base*) () from /usr/lib/libstdc++.so.6
#1 0x0012d70c in ?? () from /opt/cuda/lib/libcudart.so.3
#2 0x0012df0c in ?? () from /opt/cuda/lib/libcudart.so.3
#3 0x0012c88a in ?? () from /opt/cuda/lib/libcudart.so.3
#4 0x00121435 in __cudaRegisterFatBinary () from /opt/cuda/lib/libcudart.so.3
#5 0x005d7bfd in __sti____cudaRegisterAll_55_tmpxft_00000fe6_00000000_26_MonteCarloPaeo_SM10_cpp1_ii_3a8af011()
() from libsharedCUFP.so
#6 0x005db40d in __do_global_ctors_aux () from libsharedCUFP.so
#7 0x005a8748 in _init () from libsharedCUFP.so
#8 0x008abd00 in _dl_init_internal () from /lib/ld-linux.so.2
#9 0x0089d88f in _dl_start_user () from /lib/ld-linux.so.2
I'm not familiar with gdb debugging, and it's the first time I'm trying to build a shared library on Linux, but it seems to me that it has something to do with dynamic linking of the library.
If anyone has any idea about this error and could help me, I would be grateful.
It doesn't have anything to do with dynamic linking or shared libraries - one of the constructors in libsharedCUFP.so (I assume this is your shared library) is most probably passing an illegal address to a function in libcudart.so, which crashes.
You simply need to debug your code.
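To see why the trace never reaches main(): nvcc generates a global constructor that calls __cudaRegisterFatBinary, and global constructors in a shared library run from _init/_dl_init while the dynamic loader is still bringing the library in. A sketch of the C++ equivalent:

#include <cstdio>

// A global object's constructor in a shared library runs from
// __do_global_ctors_aux / _dl_init (frames #6-#8 in the backtrace),
// i.e. while the loader initializes the .so, before main() starts.
struct AtLoadTime {
    AtLoadTime() { std::printf("runs before main()\n"); }
};

static AtLoadTime instance;   // nvcc emits a generated constructor like
                              // this that calls __cudaRegisterFatBinary

So even with an empty main(), a bad pointer handed to libcudart in that constructor crashes the test program at load time.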