Debugging a failed node-ffi callback / segmentation fault - node.js

I'm trying to use libvlc from within node.js via node-ffi. It seems to work great for the basic media-player functionality, but I keep getting crashes, segmentation faults and general freezes in my program when I try to use libvlc's asynchronous event system and integrate it with node's EventEmitter. The code I'm using so far is hosted at https://gist.github.com/2644721 but doesn't seem to work.
GDB produces a mixed bag of results, but the last crash I received was:
Program received signal SIGSEGV, Segmentation fault.
0x000000000057cc86 in v8::Function::Call(v8::Handle<v8::Object>, int, v8::Handle<v8::Value>*) ()
(gdb) bt
#0 0x000000000057cc86 in v8::Function::Call(v8::Handle<v8::Object>, int, v8::Handle<v8::Value>*) ()
#1 0x00007ffff5997a41 in CallbackInfo::DispatchToV8(CallbackInfo*, void*, void**) ()
from /home/adam/node_modules/node-ffi/compiled/0.6/linux/x64/ffi_bindings.node
#2 0x00007ffff5997adb in CallbackInfo::WatcherCallback(uv_async_s*, int) ()
from /home/adam/node_modules/node-ffi/compiled/0.6/linux/x64/ffi_bindings.node
#3 0x00000000007be12f in ev_invoke_pending ()
#4 0x00000000007c2087 in ev_run ()
#5 0x00000000007b597f in uv_run ()
#6 0x000000000052a147 in node::Start(int, char**) ()
#7 0x00007ffff63ca76d in __libc_start_main ()
from /lib/x86_64-linux-gnu/libc.so.6
#8 0x0000000000524fe5 in _start ()
It's obvious I'm doing something wrong here; the node-ffi documentation says it's really easy to cause this sort of behaviour if you misuse the API. I'm thinking perhaps the callback isn't being run from the same thread or scope, but I'm not sure how to check or even fix that. Any help would be appreciated.
Here is another backtrace, this time from a node build with debug symbols:
Program received signal SIGSEGV, Segmentation fault.
IsGlobalObject (this=0x1)
at /build/buildd/nodejs-0.6.17/deps/v8/src/objects-inl.h:796
796 in /build/buildd/nodejs-0.6.17/deps/v8/src/objects-inl.h
(gdb) bt
#0 IsGlobalObject (this=0x1)
at /build/buildd/nodejs-0.6.17/deps/v8/src/objects-inl.h:796
#1 v8::internal::Invoke (construct=<optimised out>, func=..., receiver=...,
argc=2, args=0x7fffffffdeb0, has_pending_exception=0x7fffffffde1f)
at /build/buildd/nodejs-0.6.17/deps/v8/src/execution.cc:101
#2 0x00000000005ae967 in v8::internal::Execution::Call (callable=...,
receiver=..., argc=2, args=0x7fffffffdeb0,
pending_exception=0x7fffffffde1f, convert_receiver=<optimised out>)
at /build/buildd/nodejs-0.6.17/deps/v8/src/execution.cc:175
#3 0x000000000057cd31 in v8::Function::Call (this=0xc0aae0, recv=..., argc=2,
argv=0x7fffffffdeb0) at /build/buildd/nodejs-0.6.17/deps/v8/src/api.cc:3601
#4 0x00007ffff5997a41 in CallbackInfo::DispatchToV8(CallbackInfo*, void*, void**) ()
from /home/adam/node_modules/node-ffi/compiled/0.6/linux/x64/ffi_bindings.node
#5 0x00007ffff5997adb in CallbackInfo::WatcherCallback(uv_async_s*, int) ()
from /home/adam/node_modules/node-ffi/compiled/0.6/linux/x64/ffi_bindings.node
#6 0x00000000007be12f in ev_invoke_pending (loop=0xb9dea0)
at src/unix/ev/ev.c:2149
#7 0x00000000007c2087 in ev_run (loop=0xb9dea0, flags=0)
at src/unix/ev/ev.c:2525
#8 0x00000000007b597f in uv_run (loop=<optimised out>) at src/unix/core.c:194
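One way to check the threading suspicion in gdb (a generic inspection recipe, nothing specific to the gist) is to see which thread actually takes the fault and compare its stack with the others:
(gdb) run
Program received signal SIGSEGV, Segmentation fault.
(gdb) info threads
(gdb) thread apply all bt
In both traces above the crash is reached via uv_async/ev_invoke_pending/uv_run, i.e. on the libuv event-loop thread, and the second trace shows IsGlobalObject (this=0x1), which suggests an invalid handle being passed into v8::Function::Call rather than a call from the wrong thread.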

Related

Stack traces of a few threads show nothing except __nanosleep_nocancel in a generated core dump

The stack trace of the thread shows nothing except __nanosleep_nocancel when examining the core dump with GDB on Debian. This was observed while analyzing the thread stack traces from the core dump generated by the kernel, which is triggered by the application when an anomaly is found.
Thread 5 (Thread 0x7f8b307bf700 (LWP 27000)):
#0 ......Application function .....
#1 ......Application function .....
#2 ......Application function .....
#3 ......Application function .....
#4 0x00007f8b303c9494 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007f8b2f666aff in __libc_ifunc_impl_list () from /lib/x86_64-linux-gnu/libc.so.6
#6 0x0000000000000000 in ?? ()
Thread 3 (Thread 0x7f8b30685700 (LWP 27025)):
#0 0x00007f8b303d27dd in __nanosleep_nocancel () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000000000000000 in ?? ()
Thread 2 (Thread 0x7f8b2eb31700 (LWP 27032)):
#0 0x00007f8b303d27dd in __nanosleep_nocancel () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000000000000000 in ?? ()
Thread 1 (Thread 0x7f8b306c3700 (LWP 27022)):
#0 0x00007f8b303d2f9f in raise () from /lib/x86_64-linux-gnu/libpthread.so.0
Here threads 2 and 3's stack traces show only __nanosleep_nocancel, whereas I would expect stack traces like thread 5's.
Any leads on this would be greatly appreciated.

The futex facility returned an unexpected error code?

Two threads in the same process, using an rwlock object stored in shared memory, crash during a pthreads stress test. I spent a while trying to find memory corruption or a deadlock, but nothing so far. Is this just a less-than-optimal way of informing me that I have created a deadlock? Any pointers on tools/methods for debugging this?
Thread 5 "tms_test" received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff28a7700 (LWP 3777)]
0x00007ffff761e428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff761e428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1 0x00007ffff762002a in __GI_abort () at abort.c:89
#2 0x00007ffff76607ea in __libc_message (do_abort=do_abort@entry=1, fmt=fmt@entry=0x7ffff77776cc "%s") at ../sysdeps/posix/libc_fatal.c:175
#3 0x00007ffff766080e in __GI___libc_fatal (message=message@entry=0x7ffff79c4ae0 "The futex facility returned an unexpected error code.") at ../sysdeps/posix/libc_fatal.c:185
#4 0x00007ffff79be7e5 in futex_fatal_error () at ../sysdeps/nptl/futex-internal.h:200
#5 futex_wait (private=, expected=, futex_word=0x7ffff7f670d9) at ../sysdeps/unix/sysv/linux/futex-internal.h:77
#6 futex_wait_simple (private=, expected=, futex_word=0x7ffff7f670d9) at ../sysdeps/nptl/futex-internal.h:135
#7 __pthread_rwlock_wrlock_slow (rwlock=0x7ffff7f670cd) at pthread_rwlock_wrlock.c:67
#8 0x00000000004046e3 in _memstat (offset=0x7fffdc0b11a5, func=0x0, lineno=0, size=134, flag=1 '\001') at tms_mem.c:107
#9 0x000000000040703b in TmsMemReallocExec (in=0x7fffdc0abb81, size=211, func=0x43f858 "_malloc_thread", lineno=478) at tms_mem.c:390
#10 0x000000000042a008 in _malloc_thread (arg=0x644c11) at tms_test.c:478
#11 0x000000000041a1d6 in _threadStarter (arg=0x644c51) at tms_mem.c:2384
#12 0x00007ffff79b96ba in start_thread (arg=0x7ffff28a7700) at pthread_create.c:333
#13 0x00007ffff76ef82d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
(gdb)
It's pretty hard to debug something that is not documented well. I was trying to find any helpful information about "The futex facility returned an unexpected error code", but it seems that it isn't covered in the futex documentation.
In my case this message was generated by sem_wait(sem), where sem wasn't a valid sem_t pointer. I was accidentally overwriting it (the memory pointed to by sem) with some random integers after initializing it with sem_init(sem, 1, 1).
Try checking whether you are passing a valid pointer to the locking function.
I was getting this error when I declared the sem_t mutex as a local variable.
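To make the failure mode above concrete, here is a minimal sketch of the intended usage (names and layout are illustrative only; compile with gcc -pthread): the sem_t has to live in memory that remains valid and untouched for as long as anything may wait on it. Overwriting the bytes behind the pointer after sem_init, or letting a stack-local sem_t disappear while another thread still uses it, produces exactly this kind of futex abort.

#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Put the semaphore in shared, long-lived memory so every thread or
       process sees the same, still-valid sem_t (not a stack local). */
    sem_t *sem = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (sem == MAP_FAILED) { perror("mmap"); return 1; }

    /* pshared = 1: shareable across processes; initial value 1. */
    if (sem_init(sem, 1, 1) != 0) { perror("sem_init"); return 1; }

    sem_wait(sem);
    /* ... critical section; nothing else may scribble over *sem ... */
    sem_post(sem);

    sem_destroy(sem);
    munmap(sem, sizeof(sem_t));
    return 0;
}

The crashes described above correspond to breaking one of the two commented rules: writing random integers over *sem after sem_init, or using a sem_t whose storage has already gone away.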

OpenCV app compiled with C++11 creates extra thread

I'm debugging an OpenCV app compiled with C++11 (I use OpenCV 2.4.10). The app has two threads that do some image processing on the CPU (no GPU functions are used, but I also included libopencv_gpu.so in the linked libraries).
Using gdb I noticed that instead of just two threads (the main process thread and another thread it creates) I found 3 threads running:
(gdb) info threads
Id Target Id Frame
78 Thread 0x7fffe2ff5700 (LWP 20531) "app_name" 0x00007ffff5bb2f3d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
2 Thread 0x7fffe3c42700 (LWP 20454) "app_name" 0x00007ffff5bdf12d in poll () at ../sysdeps/unix/syscall-template.S:81
* 1 Thread 0x7ffff7fab800 (LWP 20450) "app_name" 0x00007ffff5bb2f3d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
Threads 1 and 78 (gdb IDs) are executing my code. I added a sleep call in each one to make sure those are my threads.
Thread 2 (gdb ID) is created before entering the main function of the main process, I believe. As far as I could tell from debugging, the thread with ID 2 just calls the poll() function all the time.
I'm new to gdb; can you tell me how to find out who creates this thread and what its purpose is? Is this related to OpenCV or to C++11? When I compile the same app using OpenCV4Tegra and run it on a Tegra K1 board, thread number 2 does not exist.
EDIT
This is the backtrace when thread number 2 is created. It seems that libusb creates it, but I don't know why yet:
(gdb) backtrace
#0 __pthread_create_2_1 (newthread=0x7fffea79c438, attr=0x0, start_routine=0x7fffea5941c0, arg=0x0) at pthread_create.c:466
#1 0x00007fffea5943df in ?? () from /lib/x86_64-linux-gnu/libusb-1.0.so.0
#2 0x00007fffea5926a5 in ?? () from /lib/x86_64-linux-gnu/libusb-1.0.so.0
#3 0x00007fffea58b715 in libusb_init () from /lib/x86_64-linux-gnu/libusb-1.0.so.0
#4 0x00007ffff2f06a0e in ?? () from /usr/lib/x86_64-linux-gnu/libdc1394.so.22
#5 0x00007ffff2ef5465 in dc1394_new () from /usr/lib/x86_64-linux-gnu/libdc1394.so.22
#6 0x00007ffff6f615e9 in CvDC1394::CvDC1394() () from /usr/local/lib/libopencv_highgui.so.2.4
#7 0x00007ffff6f373f0 in _GLOBAL__sub_I_cap_dc1394_v2.cpp () from /usr/local/lib/libopencv_highgui.so.2.4
#8 0x00007ffff7dea13a in call_init (l=<optimized out>, argc=argc@entry=3, argv=argv@entry=0x7fffffffdcd8, env=env@entry=0x7fffffffdcf8) at dl-init.c:78
#9 0x00007ffff7dea223 in call_init (env=<optimized out>, argv=<optimized out>, argc=<optimized out>, l=<optimized out>) at dl-init.c:36
#10 _dl_init (main_map=0x7ffff7ffe1c8, argc=3, argv=0x7fffffffdcd8, env=0x7fffffffdcf8) at dl-init.c:126
#11 0x00007ffff7ddb30a in _dl_start_user () from /lib64/ld-linux-x86-64.so.2
(gdb) quit
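(For reference, a backtrace like the one above can be captured with a generic gdb recipe that is not specific to this app: break on thread creation before any threads exist, then print the stack each time the breakpoint fires.)
(gdb) break pthread_create
(gdb) run
Breakpoint 1, __pthread_create_2_1 (...) at pthread_create.c:...
(gdb) backtrace
(gdb) continue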

How to list backtraces of all threads non-interactively (without needing to press the ENTER key)?

I am in a gdb session trying to debug a core dump with more than 200 threads.
When I do thread apply all bt in gdb, I have to press the Enter key repeatedly to see more threads. It's quite annoying. Is there a way to run the command without pressing Enter?
Thanks for any info.
EDIT:
Here is a sample output:
(gdb) thread apply all bt
Thread 409 (Thread 7505):
#0 0x00007ffff1d6961c in ?? ()
#1 0x0000000000000000 in ?? ()
...
...
...
...
<snipping out 20 some backtraces>
...
...
...
...
Thread 390 (Thread 10529):
#0 0x00007ffff1d6961c in ?? ()
#1 0x0000001300000000 in ?? ()
#2 0x00007fffe860bd50 in ?? ()
#3 0x00007fffe8464690 in ?? ()
#4 0x0000000000000014 in ?? ()
---Type <return> to continue, or q <return> to quit---
Disable the pager by using:
set height 0
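Alternatively, set pagination off has the same effect. If you want the backtraces without any interactive session at all, gdb can also be run in batch mode from the shell; for example (the executable and core file names below are placeholders):
gdb --batch -ex "thread apply all bt" ./myprogram core > all_backtraces.txt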

Why is ImageMagick within node.js crashing?

I'm using node.js, node-vips and libvips compiled with ImageMagick to convert and resize images. I'm getting segmentation faults and failed assertions when I try to resize more than a couple of images.
I've had so many different crashes that I'm not sure where to begin. I started off with libvips 7.26.8 and have also tried 7.30.7. This is with node v0.8.17 compiled from source, on a fairly standard, clean Ubuntu box.
#0 0x0000158ac3765c59 in ?? ()
Cannot access memory at address 0x7fffa837ec90
#0 0x0000000000000000 in ?? ()
#1 0x0000000000000000 in ?? ()
node: ../deps/uv/src/unix/stream.c:729: uv__stream_io: Assertion `!!(events & EV_READ) ^ !!(events & EV_WRITE)' failed.
Aborted (core dumped)
#0 0x00007f5ed4df9425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007f5ed4dfcb8b in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x00007f5ed4df20ee in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3 0x00007f5ed4df2192 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#4 0x00000000005d1bc8 in uv__stream_io (loop=<optimized out>, w=<optimized out>, events=<optimized out>) at ../deps/uv/src/unix/stream.c:729
#5 0x00000000005c6ac2 in ev_invoke_pending (loop=0xdb34c0 <default_loop_struct>) at ../deps/uv/src/unix/ev/ev.c:2145
#6 0x00000000005c2986 in uv__poll (loop=0xdb27e0 <default_loop_struct>) at ../deps/uv/src/unix/core.c:246
#7 uv__run (loop=0xdb27e0 <default_loop_struct>) at ../deps/uv/src/unix/core.c:257
#8 0x00000000005c2c60 in uv_run (loop=0xdb27e0 <default_loop_struct>) at ../deps/uv/src/unix/core.c:265
#9 0x000000000057d9f7 in node::Start(int, char**) ()
#10 0x00007f5ed4de476d in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6
#11 0x0000000000575245 in _start ()
#0 0x00000000006b13b8 in v8::Object::SetHiddenValue(v8::Handle<v8::String>, v8::Handle<v8::Value>) ()
#1 0x0000000000593f5a in node::SlabAllocator::Allocate(v8::Handle<v8::Object>, unsigned int) ()
#2 0x0000000000591d04 in node::StreamWrap::OnAlloc(uv_handle_s*, unsigned long) ()
#3 0x00000000005d08fb in uv__read (stream=0x1fa7f90) at ../deps/uv/src/unix/stream.c:575
#4 0x00000000005d1a1a in uv__stream_io (loop=<optimized out>, w=<optimized out>, events=<optimized out>) at ../deps/uv/src/unix/stream.c:745
#5 0x00000000005c6ac2 in ev_invoke_pending (loop=0xdb34c0 <default_loop_struct>) at ../deps/uv/src/unix/ev/ev.c:2145
#6 0x00000000005c29df in uv__poll (loop=0xdb27e0 <default_loop_struct>) at ../deps/uv/src/unix/core.c:248
#7 uv__run (loop=0xdb27e0 <default_loop_struct>) at ../deps/uv/src/unix/core.c:257
#8 0x00000000005c2c60 in uv_run (loop=0xdb27e0 <default_loop_struct>) at ../deps/uv/src/unix/core.c:265
#9 0x000000000057d9f7 in node::Start(int, char**) ()
#10 0x00007f065aa1176d in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6
#11 0x0000000000575245 in _start ()
More often than not I get one of the first two errors, i.e. no stack trace. These all occurred while trying to resize 6 or so images; occasionally they all succeed without error, but usually it segfaults after the first one or two have been resized.
How on earth do I go about debugging this?
In the unit test for the node-vips plugin there's a comment that reads:
this test will crash if vips is compiled with imagemagick support because imagemagick crashes when called from libeio
Why is this? Is it still true? I thought ImageMagick was perfectly thread-safe; what about it makes it unsafe to call from libeio/libuv?
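For the "how do I go about debugging this" part, one generic starting point (not specific to node-vips) is to run node directly under gdb so the crash is caught in a live process rather than reconstructed from a core file:
gdb --args node resize-test.js
(gdb) run
Program received signal SIGSEGV, Segmentation fault.
(gdb) thread apply all bt
Here resize-test.js is just a placeholder for whichever script reproduces the crash; building node and libvips with debug symbols makes the resulting traces far more informative.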
