What causes this fatal error in the Amazon Corretto JVM? - memory-leaks

When running my application on the Amazon Corretto JVM, I encountered the following error. What does it mean?
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fcde9765caa, pid=1, tid=144
#
# JRE version: OpenJDK Runtime Environment Corretto-11.0.18.10.1 (11.0.18+10) (build 11.0.18+10-LTS)
# Java VM: OpenJDK 64-Bit Server VM Corretto-11.0.18.10.1 (11.0.18+10-LTS, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
# Problematic frame:
# V [libjvm.so+0xc03caa] ObjectSampleCheckpoint::add_to_leakp_set(Method const*, unsigned long)+0x7a

When reporting a JVM crash, always include the full hs_err_pid<pid>.log file produced by the JVM; a short error message is not enough to draw a definitive conclusion.
In your case, however, the reason is most likely the JVM bug JDK-8236743.
Upgrade to JDK 17 or later, where the issue is already fixed, or disable OldObjectSample events in your JFR recording.
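One way to disable the event (a minimal sketch, assuming JDK 11 and a recording started from the command line; file names and paths here are illustrative) is to copy the JDK's default JFR settings file, switch the jdk.OldObjectSample event off in the copy, and point the recording at it:
# Copy the default settings shipped with the JDK, then edit the copy so the
# <event name="jdk.OldObjectSample"> block has <setting name="enabled">false</setting>.
cp "$JAVA_HOME/lib/jfr/default.jfc" no-oldobject.jfc
# Start the application with a recording that uses the edited settings.
java -XX:StartFlightRecording=settings=no-oldobject.jfc,filename=app.jfr -jar app.jar
The same edited file can also be passed to an already-running JVM with jcmd <pid> JFR.start settings=no-oldobject.jfc.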

Related

MongoDB Input Plugin for Logstash: JVM Crashes with Fatal Error

I'm using Logstash 8.3.3 on macOS (Apple Silicon) and have created around 60 pipelines in Logstash. Logstash starts up fine if I use fewer than 10 pipelines; anything more than that results in the JVM crashing:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000000000000000, pid=41063, tid=63747
#
# JRE version: OpenJDK Runtime Environment Temurin-11.0.15+10 (11.0.15+10) (build 11.0.15+10)
# Java VM: OpenJDK 64-Bit Server VM Temurin-11.0.15+10 (11.0.15+10, mixed mode, tiered, compressed oops, concurrent mark sweep gc, bsd-amd64)
# Problematic frame:
# C 0x0000000000000000
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
[thread 64771 also had an error]
[thread 68867 also had an error]
# An error report file with more information is saved as:
# /Users/hashamrasheed/Work/logstash-8.3.3/bin/hs_err_pid41063.log
[thread 60419 also had an error]
[thread 64259 also had an error]
[thread 59907 also had an error]
[thread 71939 also had an error]
[thread 172547 also had an error][thread 70659 also had an error]
[thread 73475 also had an error]
[thread 174339 also had an error]
[thread 174087 also had an error]
#
# If you would like to submit a bug report, please visit:
# https://github.com/adoptium/adoptium-support/issues
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
At this point, I'm not sure how to debug this issue. I've also increased the JVM heap size and stack size to 4g and 10m, respectively.
I was able to resolve this issue by upgrading the SQLite version used by the logstash-input-mongodb plugin. After upgrading it to 3.28.0, everything is working now.
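For anyone hitting the same crash, a quick way to see which SQLite library your Logstash installation actually bundles (a sketch; the install path is taken from the log above, and the artifact name may differ on your system):
cd /Users/hashamrasheed/Work/logstash-8.3.3
# List bundled SQLite artifacts; the version is usually part of the file name.
find . -iname '*sqlite*' | sort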

Anyone experiencing Error code -46 on Java SDK 1.20

The latest Java SDK (1.20) seems to throw a NoSuchMethodError when trying to authenticate with the access point using onAuthenticationRequired(). Once the exception is thrown, all subsequent attempts to connect to the bridge result in "Error - Code 46 Message bridge not responding".
Is anyone else experiencing this behaviour? The code is executed on:
java version "1.6.0_27"
OpenJDK Runtime Environment (IcedTea6 1.12.8) (6b27-1.12.8)
OpenJDK Zero VM (build 20.0-b12, mixed mode)
Exception in thread "Thread-25" java.lang.NoSuchMethodError: java.lang.Integer.compare(II)I
at com.philips.lighting.hue.sdk.fbp.PHBridgeVersionManager$1.compare(PHBridgeVersionManager.java:83)
at com.philips.lighting.hue.sdk.fbp.PHBridgeVersionManager$1.compare(PHBridgeVersionManager.java:1)
at java.util.Arrays.mergeSort(Arrays.java:1283)
at java.util.Arrays.mergeSort(Arrays.java:1294)
at java.util.Arrays.sort(Arrays.java:1223)
at java.util.Collections.sort(Collections.java:176)
at com.philips.lighting.hue.sdk.fbp.PHBridgeVersionManager.setFallbackBridgeVersion(PHBridgeVersionManager.java:130)
at com.philips.lighting.hue.sdk.fbp.PHBridgeVersionManager.setBridgeVersion(PHBridgeVersionManager.java:365)
at com.philips.lighting.hue.sdk.connection.impl.PHBridgeInternal.processResponse(PHBridgeInternal.java:450)
at com.philips.lighting.hue.sdk.connection.impl.PHBridgeInternal$1.run(PHBridgeInternal.java:122)
Integer.compare was introduced in Java 7 and you're clearly using Java 6 based on the error message.
Try using Java 7 or Java 8.
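If you have several JDKs installed, a quick way to confirm which runtime the application actually uses and to switch to a newer one could look like this (the install path is a hypothetical example; adjust it for your system):
java -version                                  # currently reports 1.6.0_27 (OpenJDK Zero VM)
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk   # hypothetical location of a JDK 7 install
export PATH="$JAVA_HOME/bin:$PATH"
java -version                                  # should now report a 1.7+ runtime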

Hadoop error log jvm sqoop

My mistake: after 6-8 hours of running Java programs I get the log hs_err_pid6662.log and this output:
[testuser#apus ~]$ sh /home/progr/work/import.sh
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: Resource temporarily unavailable
The programs run every five minutes and try to import/export from Oracle. How can I fix this?
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (gcTaskThread.cpp:48), pid=6662, tid=0x00007f429a675700
#
--------------- T H R E A D ---------------
Current thread (0x00007f4294019000): JavaThread "Unknown thread" [_thread_in_vm, id=6696, stack(0x00007f429a575000,0x00007f429a676000)]
Stack: [0x00007f429a575000,0x00007f429a676000], sp=0x00007f429a674550, free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
VM Arguments:
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -
Launcher Type: SUN_STANDARD
Environment Variables:
JAVA_HOME=/usr/java/jdk1.8.0_102
# JRE version: (8.0_102-b14) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
Memory: 4k page, physical 24591972k(6051016k free), swap 12369916k(11359436k free)
I am running programs like sqoop-import and sqoop-export via Java every 5 minutes. For example:
#!/bin/bash
hadoop jar /home/progr/import_sqoop/oracle.jar.
CDH version: 5.11.1
Java version: jdk1.8.0_102
OS: Red Hat Enterprise Linux Server release 6.9 (Santiago)
Output of free:
total used free shared buffers cached
Mem: 24591972 20080336 4511636 132036 334456 2825792
-/+ buffers/cache: 16920088 7671884
Swap: 12369916 1008664 11361252
Host Memory Usage
The maximum heap memory is (by default) limited to 1 GB. You need to increase it:
JRE version: (8.0_102-b14) (build )
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -
Try the following to increase this to 2048 MB (or higher if required):
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
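Applied to the import script from the question, that could look like the following sketch (the 2048m value is just the example above; tune it to your workload):
#!/bin/bash
# Raise the heap of the client-side Hadoop/Sqoop JVM before launching the job.
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
hadoop jar /home/progr/import_sqoop/oracle.jar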
Reference:
Pig: Hadoop jobs Fail
https://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201104.mbox/%3C5FFFF0E4-B3BA-420A-ADE3-B422A66E8B11#yahoo-inc.com%3E

Error while starting android studio

I am getting this error while starting Android Studio. Can you please help?
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f6b09ea7208, pid=4501, tid=0x00007f6a7aafc700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build 1.8.0_131-b11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# J 6230 C1 org.iq80.snappy.UnsafeMemory.copyLong([BI[BI)V (116 bytes) @ 0x00007f6b09ea7208 [0x00007f6b09ea71a0+0x68]

JVM crashes when running SOAPUI on Ubuntu

I just downloaded SOAPUI 4.0.1 and tried to run it on Ubuntu 11.10. I ran the file soapui.sh. The application started up and the window actually appeared, but after a few seconds it closed. Looking at the terminal, I saw that the JVM had crashed. Below are the details of the error:
(process:4183): GLib-GObject-CRITICAL **: /build/buildd/glib2.0-2.30.0/./gobject/gtype.c:2708: You forgot to call g_type_init()
(process:4183): GLib-GObject-CRITICAL **: g_object_new: assertion `G_TYPE_IS_OBJECT (object_type)' failed
(process:4183): GLib-GObject-CRITICAL **: g_object_ref: assertion `G_IS_OBJECT (object)' failed
Problematic frame:
C [libgconf-2.so.4+0x15b99] gconf_enum_to_string+0xd59
Can anyone help? Thanks.
Look here: http://www.eviware.com/forum/viewtopic.php?f=13&t=7736
If you are using soapui.sh to start soapUI, look in ..../soapui-4.0.1/bin/soapui.sh and uncomment this line:
#uncomment to disable browser component
#JAVA_OPTS="$JAVA_OPTS -Dsoapui.jxbrowser.disable=true"
If you used the installer and start soapUI via the launcher, add -Dsoapui.jxbrowser.disable=true to soapUI-*.vmoptions instead. That should do the trick.
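For clarity, after the edit the line in soapui.sh is active (no leading #); if you use the launcher instead, the soapUI-*.vmoptions file simply gains this one line:
-Dsoapui.jxbrowser.disable=true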
I also have the same issue
--
DUMP
...
# JRE version: 6.0_33-b03
# Java VM: Java HotSpot(TM) Server VM (20.8-b03 mixed mode linux-x86 )
# Problematic frame:
# C [libgconf-2.so.4+0x176aa] __float128+0x176aa
...
OS:Fedora release 16 (Verne)
uname:Linux 3.3.2-6.fc16.i686 #1 SMP Sat Apr 21 13:23:12 UTC 2012 i686
libc:glibc 2.14.90 NPTL 2.14.90
...
--
The jxbrowser...jar works together with xulrunner-2.8...jar, and its native code is not fully compatible with your OS dependencies.
jxbrowser is used for HTML rendering, but soapUI also works without it.
--
It also works on FC16.
