I'm using Logstash 8.3.3 on macOS (Apple Silicon) and have created around 60 pipelines. Logstash starts up fine with fewer than 10 pipelines; anything more than that causes the JVM to crash:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000000000000, pid=41063, tid=63747
#
# JRE version: OpenJDK Runtime Environment Temurin-11.0.15+10 (11.0.15+10) (build 11.0.15+10)
# Java VM: OpenJDK 64-Bit Server VM Temurin-11.0.15+10 (11.0.15+10, mixed mode, tiered, compressed oops, concurrent mark sweep gc, bsd-amd64)
# Problematic frame:
# C 0x0000000000000000
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
[thread 64771 also had an error]
[thread 68867 also had an error]
# An error report file with more information is saved as:
# /Users/hashamrasheed/Work/logstash-8.3.3/bin/hs_err_pid41063.log
[thread 60419 also had an error]
[thread 64259 also had an error]
[thread 59907 also had an error]
[thread 71939 also had an error]
[thread 172547 also had an error][thread 70659 also had an error]
[thread 73475 also had an error]
[thread 174339 also had an error]
[thread 174087 also had an error]
#
# If you would like to submit a bug report, please visit:
# https://github.com/adoptium/adoptium-support/issues
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
At this point, I'm not sure how to debug this issue. I've also increased the JVM heap size and stack size to 4g and 10m respectively, but the crash still happens.
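For reference, these are roughly the settings I changed (a sketch; either in config/jvm.options or via the LS_JAVA_OPTS environment variable before starting Logstash):
# config/jvm.options (relevant lines only)
-Xms4g
-Xmx4g
-Xss10m
# or from the shell:
export LS_JAVA_OPTS="-Xms4g -Xmx4g -Xss10m"
bin/logstash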
I was able to resolve this issue by upgrading the SQLite version used by the logstash-input-mongodb plugin. After upgrading it to 3.28.0, everything is working now.
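For anyone hitting the same thing, the kind of check and upgrade involved looks roughly like this (a sketch; the gem file name and paths are illustrative, only bin/logstash-plugin and the vendored gems directory are standard Logstash locations):
# see which plugin version is installed
bin/logstash-plugin list --verbose | grep -i mongodb
# find the SQLite dependency the plugin pulls in
grep -ril sqlite vendor/bundle/jruby/*/gems/logstash-input-mongodb-*/ | head
# after pinning the newer SQLite dependency and rebuilding the plugin gem:
bin/logstash-plugin install --no-verify /path/to/rebuilt-logstash-input-mongodb.gem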
Related
When running my application on the Amazon Corretto JVM I encountered the following error. What does this mean?
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fcde9765caa, pid=1, tid=144
#
# JRE version: OpenJDK Runtime Environment Corretto-11.0.18.10.1 (11.0.18+10) (build 11.0.18+10-LTS)
# Java VM: OpenJDK 64-Bit Server VM Corretto-11.0.18.10.1 (11.0.18+10-LTS, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
# Problematic frame:
# V [libjvm.so+0xc03caa] ObjectSampleCheckpoint::add_to_leakp_set(Method const*, unsigned long)+0x7a
When reporting a JVM crash, always include the hs_err_pid*.log report produced by the JVM. A short error message is not enough to reach a definitive conclusion.
In your case, however, the cause is most likely the JVM bug JDK-8236743.
Upgrade to JDK 17+, where the issue is already fixed, or disable OldObjectSample events in your JFR recording.
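If staying on JDK 11, one way to turn the event off is to record with a custom settings file (a sketch; the no-oldobject.jfc file name and app.jar are illustrative, while the jdk.OldObjectSample event name, the bundled default.jfc profile, and the settings=/filename= options are standard JFR):
# start from the bundled profile and disable the leak-profiling event
cp "$JAVA_HOME/lib/jfr/default.jfc" no-oldobject.jfc
# in no-oldobject.jfc, set the event to:
#   <event name="jdk.OldObjectSample"><setting name="enabled">false</setting></event>
java -XX:StartFlightRecording=settings=no-oldobject.jfc,filename=rec.jfr -jar app.jar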
Hello, I have the following problem running a Xenomai demo: "prologue failed for thread", EINVAL.
debian:~/xenomai_mercury_lib/demo$ sudo ./alchemy/altency
0"000.665| WARNING: [main] prologue failed for thread <anonymous>, EINVAL
== Sampling period: 100 us
== Test mode: periodic user-mode task
== All results in microseconds
0"000.997| WARNING: [main] prologue failed for thread alt-display-2077, EINVAL
altency: failed to create display task, code -22
What I have:
Debian 10.10.0-amd64, installed inside VirtualBox
Xenomai 3.1 (Mercury core), installed and built for a 32-bit target
Xenomai configure command:
../xenomai-3.1/configure --enable-lores-clock --with-core=mercury --enable-smp --enable-pshared CFLAGS="-m32 -O2" LDFLAGS="-m32"
Maybe something is missing in the underlying OS? Something I need to install?
Do you have any ideas?
Thanks a lot.
After an Apache Spark executor JVM crash in a C++ library, I'm unable to locate the hs_err_pid.log file specified in the executor's output log. Here's an example of the executor output log:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f6326dce8b0, pid=28580, tid=0x00007f630ea57700
#
# JRE version: OpenJDK Runtime Environment (8.0_212-b01) (build 1.8.0_212-8u212-b01-1~deb9u1-b01)
# Java VM: OpenJDK 64-Bit Server VM (25.212-b01 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libessence-jni.so+0x18b0] Java_com_evernote_service_nts_indexer_lib_Essence_EssProcess+0x0
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1559573462307_0002/container_1559573462307_0002_01_000005/hs_err_pid28580.log
10:50:00:562 [Executor task launch worker for task 41] INFO .....NtsLibInternalIndexerProcessor(NtsLibInternalIndexerProcessor.java:50) process Process for user: 18432
[thread 140063422109440 also had an error]
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
But when I SSH to the target worker machine to look for hs_err_pid28580.log, I can't find any trace of the file. I've tried:
vglazkov@reindex-cluster-vg-w-0:~$ sudo find / -name hs_err_pid28580.log
vglazkov@reindex-cluster-vg-w-0:~$
vglazkov@reindex-cluster-vg-w-0:~$ sudo ls -la /hadoop/yarn/nm-local-dir/usercache/root/appcache/
total 12
drwx--x--- 3 yarn yarn 4096 Jun 4 10:46 .
drwxr-x--- 4 yarn yarn 4096 May 15 15:47 ..
drwx--x--- 3 yarn yarn 4096 Jun 4 10:48 application_1557935076075_0097
But in the last case, the directory named application_1557935076075_0097 does not match my application ID application_1559573462307_0002, and it does not contain any hs_err_pid.log files.
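One detail worth noting here (a hedged sketch, not specific to this cluster): the crash-report location can be pinned with the standard -XX:ErrorFile JVM flag via spark.executor.extraJavaOptions, so the file is written outside the YARN container directory, which may be removed once the container exits. The /var/log/jvm path below is illustrative and must exist on the workers; %p is expanded to the PID by the JVM:
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:ErrorFile=/var/log/jvm/hs_err_pid%p.log" \
  ...   # remaining submit arguments unchanged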
My mistake - after 6-8 hours of running the programs on Java, I get this log, hs_err_pid6662.log,
and this:
[testuser@apus ~]$ sh /home/progr/work/import.sh
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: Resource temporarily unavailable
The programs run every five minutes and try to import/export from Oracle.
How can I fix this?
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (gcTaskThread.cpp:48), pid=6662, tid=0x00007f429a675700
#
--------------- T H R E A D ---------------
Current thread (0x00007f4294019000): JavaThread "Unknown thread" [_thread_in_vm, id=6696, stack(0x00007f429a575000,0x00007f429a676000)]
Stack: [0x00007f429a575000,0x00007f429a676000], sp=0x00007f429a674550, free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
VM Arguments:
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -
Launcher Type: SUN_STANDARD
Environment Variables:
JAVA_HOME=/usr/java/jdk1.8.0_102
# JRE version: (8.0_102-b14) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
Memory: 4k page, physical 24591972k(6051016k free), swap 12369916k(11359436k free)
I am running programs like sqoop-import and sqoop-export on Java every 5 minutes.
example:
#!/bin/bash
hadoop jar /home/progr/import_sqoop/oracle.jar
CDH version: 5.11.1
Java version: jdk1.8.0_102
OS: Red Hat Enterprise Linux Server release 6.9 (Santiago)
Mem free:
total used free shared buffers cached
Mem: 24591972 20080336 4511636 132036 334456 2825792
-/+ buffers/cache: 16920088 7671884
Swap: 12369916 1008664 11361252
[Image: Host Memory Usage]
The maximum heap memory is limited to 1 GB by default. You need to increase it:
JRE version: (8.0_102-b14) (build )
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -
Try the following to increase this to 2048 MB (or higher if required):
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
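For example, in the import script mentioned above, the export would go before the hadoop invocation (a sketch based on the script shown earlier):
#!/bin/bash
# give the Hadoop/Sqoop client JVM 2 GB instead of the 1 GB default
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
hadoop jar /home/progr/import_sqoop/oracle.jar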
Reference:
Pig: Hadoop jobs Fail
https://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201104.mbox/%3C5FFFF0E4-B3BA-420A-ADE3-B422A66E8B11#yahoo-inc.com%3E
I just downloaded SoapUI 4.0.1 and tried to run it on Ubuntu 11.10. I ran the file soapui.sh. The application started up and the window actually appeared, but after a few seconds it closed. Looking at the terminal, I saw that the JVM had crashed. Below are the details of the error:
(process:4183): GLib-GObject-CRITICAL **: /build/buildd/glib2.0-2.30.0/./gobject/gtype.c:2708: You forgot to call g_type_init()
(process:4183): GLib-GObject-CRITICAL **: g_object_new: assertion `G_TYPE_IS_OBJECT (object_type)' failed
(process:4183): GLib-GObject-CRITICAL **: g_object_ref: assertion `G_IS_OBJECT (object)' failed
Problematic frame:
C [libgconf-2.so.4+0x15b99] gconf_enum_to_string+0xd59
Can anyone help? Thanks.
Look here: http://www.eviware.com/forum/viewtopic.php?f=13&t=7736
If you are using soapui.sh to start SoapUI, look in ..../soapui-4.0.1/bin/soapui.sh and uncomment this line:
#uncomment to disable browser component
#JAVA_OPTS="$JAVA_OPTS -Dsoapui.jxbrowser.disable=true"
If you used the installer and start SoapUI via the launcher, add -Dsoapui.jxbrowser.disable=true to soapUI-*.vmoptions instead.
That should do the trick.
I also have the same issue
--
DUMP
...
# JRE version: 6.0_33-b03
# Java VM: Java HotSpot(TM) Server VM (20.8-b03 mixed mode linux-x86 )
# Problematic frame:
# C [libgconf-2.so.4+0x176aa] __float128+0x176aa
...
OS:Fedora release 16 (Verne)
uname:Linux 3.3.2-6.fc16.i686 #1 SMP Sat Apr 21 13:23:12 UTC 2012 i686
libc:glibc 2.14.90 NPTL 2.14.90
...
--
This jxbrowser...jar works together with xulrunner-2.8...jar, and its native code is not fully compatible with your OS dependencies.
jxbrowser is used for HTML rendering, but SoapUI also works without it.
--
It also works in FC16.