I am trying to migrate the Kafka library used in my application from kafka-node to node-rdkafka. My application runs on EC2 with Ubuntu 14, and the Node version is 12.14.0. I have tried two versions of node-rdkafka (2.10.0 & 2.10.1). Both versions throw a segmentation fault during deployment.
This is the error I am getting in gdb:
Thread 16 "rdk:broker-1" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffce7fc700 (LWP 32445)]
0x00007fffdd81ba9b in ?? () from /lib/x86_64-linux-gnu/libssl.so.1.0.0
What could be the possible cause for this issue?
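For diagnosis, one quick check is which OpenSSL version your Node binary was built against, since the crash is inside the system's libssl.so.1.0.0 and a mismatch between that library and the OpenSSL your Node expects is a plausible suspect (a minimal sketch, not specific to node-rdkafka):
// Print the OpenSSL version this Node binary was compiled against,
// to compare with the libssl.so.1.0.0 that appears in the gdb output.
console.log(process.versions.openssl);
console.log(process.versions.node);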
Related
Hello, I have the following problem running a Xenomai demo: "prologue failed for thread" EINVAL.
debian:~/xenomai_mercury_lib/demo$ sudo ./alchemy/altency
0"000.665| WARNING: [main] prologue failed for thread <anonymous>, EINVAL
== Sampling period: 100 us
== Test mode: periodic user-mode task
== All results in microseconds
0"000.997| WARNING: [main] prologue failed for thread alt-display-2077, EINVAL
altency: failed to create display task, code -22
What I have:
Debian 10.10.0-amd64, installed inside VirtualBox
Xenomai 3.1 Mercury, installed and built for a 32-bit target
Xenomai configure line:
../xenomai-3.1/configure --enable-lores-clock --with-core=mercury --enable-smp --enable-pshared CFLAGS="-m32 -O2" LDFLAGS="-m32"
Maybe something is missing in the underlying OS? Something to install?
Do you have any ideas?
Thanks a lot.
Hi, I am using the image below as the base for my FastAPI application's Docker image:
FROM tensorflow/tensorflow:latest
When I run the container it starts, but I am getting this error:
2021-06-23 23:31:50.516749: F tensorflow/core/lib/monitoring/sampler.cc:42] Check failed: bucket_limits_[i] > bucket_limits_[i - 1] (0 vs. 10)
qemu: uncaught target signal 6 (Aborted) - core dumped
[2021-06-23 23:31:50 +0530] [1] [WARNING] Worker with pid 2697 was terminated due to signal 6
And when I call the API, I am not getting a response. Does the API call just take time, or can you tell me where it is going wrong?
I am guessing you are using a Mac with an M1 chip, as this is a qemu bug; qemu is the upstream component we use for running Intel containers on M1 chips, and this issue hasn't been solved yet. I suggest you try to build TensorFlow for aarch64 Linux from source.
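To confirm that the container really is being emulated, you can compare the architecture inside the container with the host's (a quick check; the image tag is the one from the Dockerfile above):
uname -m                                                # on the host: arm64 on an M1 Mac
docker run --rm tensorflow/tensorflow:latest uname -m   # in the container: x86_64 means qemu emulation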
Node version: 4.8.0
Platform: Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux
Node crashed during garbage collection, but without any other high-level pattern (maybe related to https://github.com/nodejs/node/issues/3715).
Unfortunately I don't have any code to reproduce it, as I was not able to isolate the problem.
This is the crash stack trace captured with the segfault-handler module:
PID 24495 received SIGSEGV for address: 0x3809f3d021f8
<path_node_modules>/segfault-handler/build/Release/segfault-handler.node(+0x1a5b)[0x7f7dd565ca5b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f7dd9c20890]
/usr/bin/nodejs(_ZN2v88internal20MarkCompactCollector22ProcessWeakCollectionsEv+0xfd)[0xaec4dd]
/usr/bin/nodejs(_ZN2v88internal20MarkCompactCollector15MarkLiveObjectsEv+0x214)[0xaf3a14]
/usr/bin/nodejs(_ZN2v88internal20MarkCompactCollector14CollectGarbageEv+0x11)[0xaf47e1]
/usr/bin/nodejs(_ZN2v88internal4Heap11MarkCompactEv+0x60)[0xaaafe0]
/usr/bin/nodejs(_ZN2v88internal4Heap24PerformGarbageCollectionENS0_16GarbageCollectorENS_15GCCallbackFlagsE+0x4c0)[0xac2be0]
/usr/bin/nodejs(_ZN2v88internal4Heap14CollectGarbageENS0_16GarbageCollectorEPKcS4_NS_15GCCallbackFlagsE+0x238)[0xac30f8]
/usr/bin/nodejs(_ZN2v88internal4Heap15HandleGCRequestEv+0x8f)[0xac3aef]
/usr/bin/nodejs(_ZN2v88internal10StackGuard16HandleInterruptsEv+0x31c)[0xa6041c]
/usr/bin/nodejs(_ZN2v88internal18Runtime_StackGuardEiPPNS0_6ObjectEPNS0_7IsolateE+0x2b)[0xca51ab]
[0x2f2137d0963b]
And sometimes this other stack:
PID 7545 received SIGSEGV for address: 0x68233500009
/home/documentapp/node_modules/segfault-handler/build/Release/segfault-handler.node(+0x1a5b)[0x7f89249bfa5b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f8928f83890]
/usr/bin/nodejs(_ZN2v88internal32IncrementalMarkingMarkingVisitor26VisitFixedArrayIncrementalEPNS0_3MapEPNS0_10HeapObjectE+0x3fe)[0xad51ee]
/usr/bin/nodejs(_ZN2v88internal18IncrementalMarking4StepElNS1_16CompletionActionENS1_18ForceMarkingActionENS1_21ForceCompletionActionE+0x30c)[0xad2a7c]
/usr/bin/nodejs(_ZN2v88internal8NewSpace15SlowAllocateRawEiNS0_19AllocationAlignmentE+0x78)[0xb00f18]
/usr/bin/nodejs(_ZN2v88internal4Heap11AllocateRawEiNS0_15AllocationSpaceES2_NS0_19AllocationAlignmentE+0x109)[0xa64719]
/usr/bin/nodejs(_ZN2v88internal4Heap20AllocateFillerObjectEibNS0_15AllocationSpaceE+0x19)[0xaabd19]
/usr/bin/nodejs(_ZN2v88internal7Factory15NewFillerObjectEibNS0_15AllocationSpaceE+0x2d)[0xa64c5d]
/usr/bin/nodejs(_ZN2v88internal29Runtime_AllocateInTargetSpaceEiPPNS0_6ObjectEPNS0_7IsolateE+0x5e)[0xca52ee]
[0x1e31ede06355]
Can someone give me a hint on how I can find the problem? Thanks.
If you prefer, you can also answer on the Node issues that I have created:
https://github.com/nodejs/node/issues/11606
Additional information:
Node framework: express, Sails.js
My native modules, found with find node_modules -name '*.node', are:
node_modules/bcrypt/build/Release/bcrypt_lib.node
node_modules/bcrypt/build/Release/obj.target/bcrypt_lib.node
node_modules/segfault-handler/build/Release/segfault-handler.node
node_modules/segfault-handler/build/Release/obj.target/segfault-handler.node
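For completeness, the handler is registered at process startup roughly like this (a minimal sketch; the log file name is illustrative):
// Register segfault-handler as early as possible so a native crash
// appends a stack trace like the ones above to the given file.
const SegfaultHandler = require('segfault-handler');
SegfaultHandler.registerHandler('crash.log');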
The problem seems to be caused by MongoDB logs that fill up the disk space at some point. It was actually hard to see because we clean these up periodically, so it was not critical at the moment I checked.
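In hindsight, a quick check along these lines would have surfaced it (assuming the default MongoDB log location; adjust the path for your setup):
df -h                      # overall usage per filesystem
du -sh /var/log/mongodb    # size of the MongoDB log directory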
I'm programming in C# with MonoDevelop for an Ubuntu program. When the program is started, it gets a runtime error. The error message from GDB is here:
Native stacktrace:
/usr/bin/mono() [0x80e6431]
/usr/bin/mono() [0x8062b70]
[0xb77b940c]
/lib/i386-linux-gnu/libc.so.6(_IO_file_underflow+0x68) [0xb76cb9c8]
/lib/i386-linux-gnu/libc.so.6(_IO_default_uflow+0x19) [0xb7612449]
/lib/i386-linux-gnu/libc.so.6(__uflow+0x90) [0xb7612260]
/lib/i386-linux-gnu/libc.so.6(getc+0xb2) [0xb7608772]
/usr/lib/i386-linux-gnu/liblua5.1.so.0(luaL_loadfile+0xdc) [0xb429fd8c]
/usr/lib/vlc/plugins/lua/liblua_plugin.so(+0xa055) [0xb42d8055]
/usr/lib/vlc/plugins/lua/liblua_plugin.so(+0xa220) [0xb42d8220]
/usr/lib/vlc/plugins/lua/liblua_plugin.so(+0xd689) [0xb42db689]
/usr/lib/vlc/plugins/lua/liblua_plugin.so(+0xa546) [0xb42d8546]
/usr/lib/libvlccore.so.5(+0x916a0) [0xb52706a0]
/usr/lib/libvlccore.so.5(vlc_module_load+0x573) [0xb5270de3]
/usr/lib/libvlccore.so.5(module_need+0x42) [0xb5271302]
/usr/lib/libvlccore.so.5(+0x27716) [0xb5206716]
/lib/i386-linux-gnu/libpthread.so.0(+0x6d4c) [0xb774fd4c]
/lib/i386-linux-gnu/libc.so.6(clone+0x5e) [0xb768ddde]
Debug info from gdb:
TagLib: MP4: No audio tracks
Unable to attach: program terminated with signal SIGSEGV, Segmentation fault.
Could anyone give a hand with this?
Thank you very much.
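One way to get more context is to run the program under gdb directly and print all thread backtraces when it faults (a sketch; substitute your own executable name, and note that Mono uses some signals internally, so gdb may also stop on signals that are harmless):
gdb --args mono YourApp.exe
(gdb) run
(gdb) bt                     # native backtrace of the faulting thread
(gdb) thread apply all bt    # backtraces for every thread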
I have a process that works perfectly on the same machine in 2 accounts, but when I copy the process to another account and run it, I get a core dump.
When I run the process with strace, at the end I get:
--- SIGBUS (Bus error) # 0 (0) ---
+++ killed by SIGBUS (core dumped) +++
When I open the core dump I get:
#0 0x000000360046fed3 in malloc_consolidate () from /lib64/libc.so.6
#1 0x00000036004723fd in _int_malloc () from /lib64/libc.so.6
#2 0x000000360047402a in malloc () from /lib64/libc.so.6
#3 0x00000036004616ba in __fopen_internal () from /lib64/libc.so.6
#4 0x0000000000fe9652 in LogMngr::OpenFile (this=0x2aaaaad17010, iLogIndex=0) at LogMngr.c:801
I can see it is something to do with opening the file for logging, but why does it happen only in one account while the other is fine?
You can get a SIGBUS from an unaligned memory access. Are you using something like mmap, shared memory regions, or something similar?
Any core dump inside malloc always indicates heap corruption, and heap corruption in general is sneaky like that: it may never show up on machine A, sometimes show up on machine B, and always show up on machine C.
Valgrind will likely point you straight at the problem.
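A run along these lines is usually enough to show the first invalid read or write (the binary name is a placeholder):
valgrind --tool=memcheck --track-origins=yes ./your_process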