I'm debugging an OpenCV app compiled as C++11 (I use OpenCV 2.4.10). The app has two threads that do some image processing on the CPU (no GPU functions are used, although libopencv_gpu.so is among the linked libraries).
Using gdb I noticed that instead of just two threads (the main thread and one more thread created by it), there are actually three threads running:
(gdb) info threads
Id Target Id Frame
78 Thread 0x7fffe2ff5700 (LWP 20531) "app_name" 0x00007ffff5bb2f3d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
2 Thread 0x7fffe3c42700 (LWP 20454) "app_name" 0x00007ffff5bdf12d in poll () at ../sysdeps/unix/syscall-template.S:81
* 1 Thread 0x7ffff7fab800 (LWP 20450) "app_name" 0x00007ffff5bb2f3d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
Threads 1 and 78 (gdb IDs) are executing my code; I added a sleep call to each so I could confirm they are mine.
Thread 2, I believe, is created before main() is even entered. As far as I could tell, it does nothing but call poll() in a loop.
I'm new to gdb; can you tell me how to find out who creates this thread and what its purpose is? Is it related to OpenCV or to C++11? When I compile the same app with OpenCV4Tegra and run it on a Tegra K1 board, thread 2 does not exist.
EDIT
This is the backtrace from the creation of thread 2. It seems that libusb creates it, but I don't know why yet:
(gdb) backtrace
#0 __pthread_create_2_1 (newthread=0x7fffea79c438, attr=0x0, start_routine=0x7fffea5941c0, arg=0x0) at pthread_create.c:466
#1 0x00007fffea5943df in ?? () from /lib/x86_64-linux-gnu/libusb-1.0.so.0
#2 0x00007fffea5926a5 in ?? () from /lib/x86_64-linux-gnu/libusb-1.0.so.0
#3 0x00007fffea58b715 in libusb_init () from /lib/x86_64-linux-gnu/libusb-1.0.so.0
#4 0x00007ffff2f06a0e in ?? () from /usr/lib/x86_64-linux-gnu/libdc1394.so.22
#5 0x00007ffff2ef5465 in dc1394_new () from /usr/lib/x86_64-linux-gnu/libdc1394.so.22
#6 0x00007ffff6f615e9 in CvDC1394::CvDC1394() () from /usr/local/lib/libopencv_highgui.so.2.4
#7 0x00007ffff6f373f0 in _GLOBAL__sub_I_cap_dc1394_v2.cpp () from /usr/local/lib/libopencv_highgui.so.2.4
#8 0x00007ffff7dea13a in call_init (l=<optimized out>, argc=argc@entry=3, argv=argv@entry=0x7fffffffdcd8, env=env@entry=0x7fffffffdcf8) at dl-init.c:78
#9 0x00007ffff7dea223 in call_init (env=<optimized out>, argv=<optimized out>, argc=<optimized out>, l=<optimized out>) at dl-init.c:36
#10 _dl_init (main_map=0x7ffff7ffe1c8, argc=3, argv=0x7fffffffdcd8, env=0x7fffffffdcf8) at dl-init.c:126
#11 0x00007ffff7ddb30a in _dl_start_user () from /lib64/ld-linux-x86-64.so.2
(gdb) quit
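For reference, thread-creation points like this can be caught by breaking on pthread_create before the program starts; the breakpoint fires once for each new thread, and in this case it hits during dynamic-loader initialization, before main() is ever reached. A minimal gdb session sketch:
gdb ./app_name
(gdb) break pthread_create
(gdb) run
(gdb) backtrace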
Related
The stack traces of some threads show nothing except __nanosleep_nocancel when examining the core dump with gdb on Debian. I observed this while analyzing the per-thread stack traces from a core dump that the kernel generated when the application detected an anomaly and aborted.
Thread 5 (Thread 0x7f8b307bf700 (LWP 27000)):
#0 ......Application function .....
#1 ......Application function .....
#2 ......Application function .....
#3 ......Application function .....
#4 0x00007f8b303c9494 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007f8b2f666aff in __libc_ifunc_impl_list () from /lib/x86_64-linux-gnu/libc.so.6
#6 0x0000000000000000 in ?? ()
Thread 3 (Thread 0x7f8b30685700 (LWP 27025)):
#0 0x00007f8b303d27dd in __nanosleep_nocancel () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000000000000000 in ?? ()
Thread 2 (Thread 0x7f8b2eb31700 (LWP 27032)):
#0 0x00007f8b303d27dd in __nanosleep_nocancel () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000000000000000 in ?? ()
Thread 1 (Thread 0x7f8b306c3700 (LWP 27022)):
#0 0x00007f8b303d2f9f in raise () from /lib/x86_64-linux-gnu/libpthread.so.0
Here the stack traces of threads 2 and 3 show only __nanosleep_nocancel, whereas I expected them to look like thread 5's.
Any leads on this would be greatly appreciated.
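Not an answer, but one routine check: frames like "#1 0x0000000000000000 in ?? ()" usually mean gdb could not unwind past the first frame, which often happens when the core is analyzed against libraries that differ from those on the crashed host. A sketch of pointing gdb at the right library set before loading the core; the paths are hypothetical:
gdb
(gdb) file ./app
(gdb) set sysroot /copy/of/target/rootfs
(gdb) core-file ./core
(gdb) thread apply all bt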
Two threads in the same process, using a rwlock object stored in shared memory, crash during a pthreads stress test. I have spent a while trying to find memory corruption or a deadlock, but nothing so far. Is this just a less-than-optimal way of informing me that I have created a deadlock? Any pointers on tools or methods for debugging this?
Thread 5 "tms_test" received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff28a7700 (LWP 3777)]
0x00007ffff761e428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff761e428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007ffff762002a in __GI_abort () at abort.c:89
#2 0x00007ffff76607ea in __libc_message (do_abort=do_abort@entry=1, fmt=fmt@entry=0x7ffff77776cc "%s") at ../sysdeps/posix/libc_fatal.c:175
#3 0x00007ffff766080e in __GI___libc_fatal (message=message@entry=0x7ffff79c4ae0 "The futex facility returned an unexpected error code.") at ../sysdeps/posix/libc_fatal.c:185
#4 0x00007ffff79be7e5 in futex_fatal_error () at ../sysdeps/nptl/futex-internal.h:200
#5 futex_wait (private=<optimized out>, expected=<optimized out>, futex_word=0x7ffff7f670d9) at ../sysdeps/unix/sysv/linux/futex-internal.h:77
#6 futex_wait_simple (private=<optimized out>, expected=<optimized out>, futex_word=0x7ffff7f670d9) at ../sysdeps/nptl/futex-internal.h:135
#7 __pthread_rwlock_wrlock_slow (rwlock=0x7ffff7f670cd) at pthread_rwlock_wrlock.c:67
#8 0x00000000004046e3 in _memstat (offset=0x7fffdc0b11a5, func=0x0, lineno=0, size=134, flag=1 '\001') at tms_mem.c:107
#9 0x000000000040703b in TmsMemReallocExec (in=0x7fffdc0abb81, size=211, func=0x43f858 "_malloc_thread", lineno=478) at tms_mem.c:390
#10 0x000000000042a008 in _malloc_thread (arg=0x644c11) at tms_test.c:478
#11 0x000000000041a1d6 in _threadStarter (arg=0x644c51) at tms_mem.c:2384
#12 0x00007ffff79b96ba in start_thread (arg=0x7ffff28a7700) at pthread_create.c:333
#13 0x00007ffff76ef82d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
(gdb)
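One thing worth double-checking before suspecting pthreads itself: a rwlock placed in shared memory must be initialized with the PTHREAD_PROCESS_SHARED attribute, and it must remain properly aligned and uncorrupted for its whole lifetime. Notably, the futex word in the trace above (futex_word=0x7ffff7f670d9) is not 4-byte aligned; the futex syscall rejects unaligned words with EINVAL, which glibc surfaces exactly as "The futex facility returned an unexpected error code." A minimal initialization sketch follows; the struct and function names are invented for illustration:

#include <pthread.h>

/* Hypothetical layout for the shared segment. The lock must be
 * naturally aligned (its embedded futex word must be 4-byte aligned),
 * which holds automatically if the struct itself sits at an aligned
 * offset in the mapping. */
struct shared_area {
    pthread_rwlock_t lock;
    /* ... application data ... */
};

int init_shared_rwlock(struct shared_area *area)
{
    pthread_rwlockattr_t attr;
    int rc;

    if ((rc = pthread_rwlockattr_init(&attr)) != 0)
        return rc;
    /* Required when the mapping is shared between processes;
     * harmless when only one process uses it. */
    if ((rc = pthread_rwlockattr_setpshared(&attr, PTHREAD_PROCESS_SHARED)) != 0) {
        pthread_rwlockattr_destroy(&attr);
        return rc;
    }
    rc = pthread_rwlock_init(&area->lock, &attr);
    pthread_rwlockattr_destroy(&attr);
    return rc;
}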
It's pretty hard to debug something that is not documented well. I tried to find any helpful information about "The futex facility returned an unexpected error code", but it does not seem to be specified in the futex documentation.
In my case this message was generated by sem_wait(sem), where sem was no longer a valid sem_t pointer: I was accidentally overwriting the memory it pointed to with random integers after initializing it with sem_init(sem, 1, 1).
Try checking whether you are passing a valid pointer to the locking function.
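A minimal sketch of that failure mode: the semaphore is valid after sem_init, then an out-of-bounds write clobbers it, and the next sem_wait trips the futex error. The buffer name and sizes are invented for illustration:

#include <semaphore.h>
#include <string.h>

static char buf[16];
static sem_t sem;           /* may be laid out right after buf */

void setup(void)
{
    sem_init(&sem, 1, 1);   /* semaphore starts out valid */
    memset(buf, 0x41, 32);  /* BUG: 32 bytes into a 16-byte buffer;
                               if sem follows buf in memory, its
                               futex word is now garbage */
}

void worker(void)
{
    sem_wait(&sem);         /* can now abort with "The futex facility
                               returned an unexpected error code." */
}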
I was getting this error when I declared my sem_t mutex as a local variable.
I'm currently debugging kernel code using KGDB.
Whenever I break in, I naturally land in KGDB's interrupt handler.
Under GDB I ran the following command.
info threads
and the output would be
7 Thread 7 (rcu_sched) 0x0000000000000000 in irq_stack_union ()
6 Thread 5 (kworker/0:0H) 0x0000000000000000 in irq_stack_union ()
5 Thread 3 (ksoftirqd/0) 0x0000000000000000 in irq_stack_union ()
4 Thread 2 (kthreadd) 0x0000000000000000 in irq_stack_union ()
3 Thread 1 (init) 0x0000000000000000 in irq_stack_union ()
2 Thread 3754 (Xorg) 0x0000000000000000 in irq_stack_union ()
* 1 Thread 4294967294 (shadowCPU0) kgdb_breakpoint ()
at kernel/debug/debug_core.c:1042
I would then step through the code expecting to end up in a different thread (I'm interested in Xorg); however, after I step, the next executing thread becomes the CPU idle loop:
info thread
* 1 Thread 4294967294 (shadowCPU0) cpu_idle_loop () at kernel/cpu/idle.c:116
How can I switch my debug context to Xorg or any other thread? Additionally, what does irq_stack_union () mean? Is the thread idle, pending interrupts?
According to the official documentation, it is simply thread threadno:
https://sourceware.org/gdb/onlinedocs/gdb/Threads.html
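For example, to point the debug context at the Xorg thread from the listing above and inspect it:
(gdb) thread 2
(gdb) bt
Note that under KGDB this only changes which thread gdb examines; when you continue or step, the kernel scheduler still decides what runs next, which is why stepping can land you in the idle loop rather than in Xorg.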
I have a multithreaded program that I'm trying to debug. When I run info thread in GDB, I get the following:
(gdb) info thread
Id Target Id Frame
8 Thread 0x7fffe77fd700 (LWP 17425) "SocketWriter" 0x00007ffff7bc9b2f in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
7 Thread 0x7fffe73fc700 (LWP 17426) "SocketWriter" 0x00007ffff7bc9b2f in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
6 Thread 0x7fffe7fff700 (LWP 17423) "SocketReader" 0x00007ffff7bcc66d in read () from /usr/lib/libpthread.so.0
5 Thread 0x7fffe7bfe700 (LWP 17424) "SocketReader" 0x00007ffff7bcc66d in read () from /usr/lib/libpthread.so.0
* 4 Thread 0x7ffff4810700 (LWP 17422) "unittest" 0x00007ffff7bcc38c in __lll_lock_wait () from /usr/lib/libpthread.so.0
3 Thread 0x7ffff4c11700 (LWP 17421) "receiver" 0x00007ffff7bcc38c in __lll_lock_wait () from /usr/lib/libpthread.so.0
2 Thread 0x7ffff5a3b700 (LWP 17420) "unittest" 0x00007ffff634e553 in select () from /usr/lib/libc.so.6
1 Thread 0x7ffff7fc9780 (LWP 17419) "unittest" 0x00007ffff7bc9b2f in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
It would be excellent if I could make GDB display the parent/child relationships between the threads, something like the following:
(gdb) info thread
Id Target Id Frame
1 Thread 0x7ffff7fc9780 (LWP 17419) "unittest" 0x00007ffff7bc9b2f in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
3 Thread 0x7ffff4c11700 (LWP 17421) "receiver" 0x00007ffff7bcc38c in __lll_lock_wait () from /usr/lib/libpthread.so.0
8 Thread 0x7fffe77fd700 (LWP 17425) "SocketWriter" 0x00007ffff7bc9b2f in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
6 Thread 0x7fffe7fff700 (LWP 17423) "SocketReader" 0x00007ffff7bcc66d in read () from /usr/lib/libpthread.so.0
2 Thread 0x7ffff5a3b700 (LWP 17420) "unittest" 0x00007ffff634e553 in select () from /usr/lib/libc.so.6
5 Thread 0x7fffe7bfe700 (LWP 17424) "SocketReader" 0x00007ffff7bcc66d in read () from /usr/lib/libpthread.so.0
* 4 Thread 0x7ffff4810700 (LWP 17422) "unittest" 0x00007ffff7bcc38c in __lll_lock_wait () from /usr/lib/libpthread.so.0
7 Thread 0x7fffe73fc700 (LWP 17426) "SocketWriter" 0x00007ffff7bc9b2f in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
For example, thread 3 is the parent of threads 8, 6, and 2, and thread 1 is the parent of everything.
Does such functionality exist? I have not seen reference to it, if it does.
gdb doesn't print this information because it doesn't exist in your program -- there is no way for gdb to discover it once the threads have been created.
There are perhaps two ways it could be done.
First, you could set a breakpoint on the thread-creation function and record the information. This is readily done from Python. Then you can write a new command, also in Python, to format the output the way you like.
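A rough sketch of the recording part without the Python layer, using plain gdb breakpoint commands; it only logs creation points as they happen rather than building a real tree:
(gdb) break pthread_create
(gdb) commands
> silent
> printf "new thread created from:\n"
> bt 2
> continue
> end
(gdb) run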
The problem with this approach is that it won't work if you "attach" to a running program. It will be too late to capture the information.
Another method is if you have extra information available in your program that describes the hierarchy. Then you can write a new command in Python that extracts this information to display things as you like.
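As a sketch of that second method (none of this is from the question itself): route thread creation through a thin wrapper that records parent/child LWP ids in a table, which a custom gdb command can then read and match against the LWP numbers shown by info threads. All names here are hypothetical:

#include <errno.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#define MAX_EDGES 256

/* Parent/child kernel thread ids, inspectable from gdb with
 * "print thread_edges". */
struct thread_edge { pid_t parent, child; };
struct thread_edge thread_edges[MAX_EDGES];
static int n_edges;
static pthread_mutex_t edges_mu = PTHREAD_MUTEX_INITIALIZER;

struct trampoline { void *(*fn)(void *); void *arg; pid_t parent; };

static void *run_and_record(void *p)
{
    struct trampoline t = *(struct trampoline *)p;
    free(p);
    pthread_mutex_lock(&edges_mu);
    if (n_edges < MAX_EDGES) {
        thread_edges[n_edges].parent = t.parent;
        thread_edges[n_edges].child = (pid_t)syscall(SYS_gettid);
        n_edges++;
    }
    pthread_mutex_unlock(&edges_mu);
    return t.fn(t.arg);
}

/* Use this everywhere instead of calling pthread_create directly. */
int traced_pthread_create(pthread_t *th, const pthread_attr_t *attr,
                          void *(*fn)(void *), void *arg)
{
    struct trampoline *t = malloc(sizeof *t);
    if (t == NULL)
        return ENOMEM;
    t->fn = fn;
    t->arg = arg;
    t->parent = (pid_t)syscall(SYS_gettid);
    int rc = pthread_create(th, attr, run_and_record, t);
    if (rc != 0)
        free(t);
    return rc;
}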
I'm trying to use libvlc from within node.js using node-ffi, and while it seems to work well for the basic media player functionality, I keep getting crashes, segmentation faults, and general freezes when I try to use libvlc's asynchronous event system and integrate it with node's EventEmitter. The code I'm using so far is hosted at https://gist.github.com/2644721 but doesn't seem to work.
GDB produces a mixed bag of results, but the last crash I received was:
Program received signal SIGSEGV, Segmentation fault.
0x000000000057cc86 in v8::Function::Call(v8::Handle<v8::Object>, int, v8::Handle<v8::Value>*) ()
(gdb) bt
#0 0x000000000057cc86 in v8::Function::Call(v8::Handle<v8::Object>, int, v8::Handle<v8::Value>*) ()
#1 0x00007ffff5997a41 in CallbackInfo::DispatchToV8(CallbackInfo*, void*, void**) ()
from /home/adam/node_modules/node-ffi/compiled/0.6/linux/x64/ffi_bindings.node
#2 0x00007ffff5997adb in CallbackInfo::WatcherCallback(uv_async_s*, int) ()
from /home/adam/node_modules/node-ffi/compiled/0.6/linux/x64/ffi_bindings.node
#3 0x00000000007be12f in ev_invoke_pending ()
#4 0x00000000007c2087 in ev_run ()
#5 0x00000000007b597f in uv_run ()
#6 0x000000000052a147 in node::Start(int, char**) ()
#7 0x00007ffff63ca76d in __libc_start_main ()
from /lib/x86_64-linux-gnu/libc.so.6
#8 0x0000000000524fe5 in _start ()
It's obvious I'm doing something wrong here; the node-ffi documentation says it's really easy to cause this sort of behaviour if you make a mistake. I'm thinking perhaps the callback isn't being run from the same thread or scope, but I'm not sure how to check or even fix that. Any help would be appreciated...
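One way to at least check the thread theory: when the SIGSEGV fires, ask gdb which thread took the signal and where the others are:
(gdb) thread
(gdb) info threads
(gdb) thread apply all bt
The first command shows which thread crashed. In the backtrace above, the crash already unwinds through uv_run into node::Start, i.e. the main event-loop thread, so the dispatch at least appears to reach V8's thread.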
Here is a more detailed trace of what appears to be the same crash, this time with debug symbols available:
Program received signal SIGSEGV, Segmentation fault.
IsGlobalObject (this=0x1)
at /build/buildd/nodejs-0.6.17/deps/v8/src/objects-inl.h:796
796 in /build/buildd/nodejs-0.6.17/deps/v8/src/objects-inl.h
(gdb) bt
#0 IsGlobalObject (this=0x1)
at /build/buildd/nodejs-0.6.17/deps/v8/src/objects-inl.h:796
#1 v8::internal::Invoke (construct=<optimised out>, func=..., receiver=...,
argc=2, args=0x7fffffffdeb0, has_pending_exception=0x7fffffffde1f)
at /build/buildd/nodejs-0.6.17/deps/v8/src/execution.cc:101
#2 0x00000000005ae967 in v8::internal::Execution::Call (callable=...,
receiver=..., argc=2, args=0x7fffffffdeb0,
pending_exception=0x7fffffffde1f, convert_receiver=<optimised out>)
at /build/buildd/nodejs-0.6.17/deps/v8/src/execution.cc:175
#3 0x000000000057cd31 in v8::Function::Call (this=0xc0aae0, recv=..., argc=2,
argv=0x7fffffffdeb0) at /build/buildd/nodejs-0.6.17/deps/v8/src/api.cc:3601
#4 0x00007ffff5997a41 in CallbackInfo::DispatchToV8(CallbackInfo*, void*, void**) ()
from /home/adam/node_modules/node-ffi/compiled/0.6/linux/x64/ffi_bindings.node
#5 0x00007ffff5997adb in CallbackInfo::WatcherCallback(uv_async_s*, int) ()
from /home/adam/node_modules/node-ffi/compiled/0.6/linux/x64/ffi_bindings.node
#6 0x00000000007be12f in ev_invoke_pending (loop=0xb9dea0)
at src/unix/ev/ev.c:2149
#7 0x00000000007c2087 in ev_run (loop=0xb9dea0, flags=0)
at src/unix/ev/ev.c:2525
#8 0x00000000007b597f in uv_run (loop=<optimised out>) at src/unix/core.c:194