JRuby OutOfMemoryError with a very small example - garbage-collection

I have a question concerning JRuby garbage collection. An OutOfMemoryError led me to the following minimal example. I run it with JAVA_MEM=-Xmx500m ruby -w -J-verbose:gc to limit the heap to 500 MB and show the GC output.
(1..5_000_000).to_a
(1..5_000_000).to_a
This works fine. However, running this:
(1..5_000_000).to_a
(1..5_000_000).to_a
(1..5_000_000).to_a
I get an OutOfMemoryError (full stack trace below). This puzzles me, since the garbage collector should be able to free the memory used by the earlier arrays; they are never assigned to anything.
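One way to probe whether the intermediate arrays are reclaimable at all would be to drop the reference and request a collection between the allocations (a rough diagnostic sketch; GC.start is only a hint to the JVM, not a guarantee):
3.times do
  a = (1..5_000_000).to_a  # build the large array
  a = nil                  # drop the only reference to it
  GC.start                 # request a collection; the JVM may or may not comply
end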
Full stack trace:
java.lang.OutOfMemoryError: Java heap space
at org.jruby.RubyFixnum.newFixnum(RubyFixnum.java:211)
at org.jruby.RubyRange.to_a(RubyRange.java:406)
at org.jruby.RubyRange$INVOKER$i$0$0$to_a.call(RubyRange$INVOKER$i$0$0$to_a.gen)
at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:306)
at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:293)
at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:131)
at app.performance.invokeOther4:to_a(app/performance.rb)
at app.performance.RUBY$script(app/performance.rb:3)
at java.lang.invoke.LambdaForm$DMH/1172823033.invokeStatic_LLLLLLL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$DMH/1009311031.invokeSpecial_LLLLLLLL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$NFI/1072591159.invoke_LLLLLLLL_L(LambdaForm$NFI)
at java.lang.invoke.LambdaForm$DMH/871997608.invokeStatic_LL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$NamedFunction.invokeWithArguments(LambdaForm.java:1136)
at java.lang.invoke.LambdaForm.interpretName(LambdaForm.java:625)
at java.lang.invoke.LambdaForm.interpretWithArguments(LambdaForm.java:604)
at java.lang.invoke.LambdaForm$LFI/515921628.interpret_L(LambdaForm$LFI)
at java.lang.invoke.LambdaForm$NFI/1072591159.invoke_LLLLLLLL_L(LambdaForm$NFI)
at java.lang.invoke.LambdaForm$DMH/871997608.invokeStatic_LL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$NamedFunction.invokeWithArguments(LambdaForm.java:1136)
at java.lang.invoke.LambdaForm.interpretName(LambdaForm.java:625)
at java.lang.invoke.LambdaForm.interpretWithArguments(LambdaForm.java:604)
at java.lang.invoke.LambdaForm$LFI/1903586078.interpret_L(LambdaForm$LFI)
at java.lang.invoke.LambdaForm$NFI/939187759.invoke_LLLLLLLLL_L(LambdaForm$NFI)
at java.lang.invoke.LambdaForm$DMH/871997608.invokeStatic_LL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$NamedFunction.invokeWithArguments(LambdaForm.java:1136)
at java.lang.invoke.LambdaForm.interpretName(LambdaForm.java:625)
at java.lang.invoke.LambdaForm.interpretWithArguments(LambdaForm.java:604)
at java.lang.invoke.LambdaForm$LFI/1903586078.interpret_L(LambdaForm$LFI)
at java.lang.invoke.LambdaForm$NFI/939187759.invoke_LLLLLLLLL_L(LambdaForm$NFI)
at java.lang.invoke.LambdaForm$DMH/871997608.invokeStatic_LL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$NamedFunction.invokeWithArguments(LambdaForm.java:1136)
at java.lang.invoke.LambdaForm.interpretName(LambdaForm.java:625)
JRuby version:
jruby 9.0.1.0 (2.2.2) 2015-09-02 583f336 OpenJDK 64-Bit Server VM 24.79-b02 on 1.7.0_79-b14 +jit [linux-amd64]

Related

What causes this fatal error in the Amazon Corretto JVM?

When running my application on the Amazon Corretto JVM I encountered the following error. What does this mean?
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fcde9765caa, pid=1, tid=144
#
# JRE version: OpenJDK Runtime Environment Corretto-11.0.18.10.1 (11.0.18+10) (build 11.0.18+10-LTS)
# Java VM: OpenJDK 64-Bit Server VM Corretto-11.0.18.10.1 (11.0.18+10-LTS, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
# Problematic frame:
# V [libjvm.so+0xc03caa] ObjectSampleCheckpoint::add_to_leakp_set(Method const*, unsigned long)+0x7a
When reporting a JVM crash, always include the full hs_err_pid*.log crash dump produced by the JVM. A short error message is not enough to reach a definitive conclusion.
In your case, however, the reason is most likely the JVM bug JDK-8236743.
Upgrade to JDK 17+, where the issue is already fixed, or disable OldObjectSample events in your JFR recording.
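If you cannot upgrade right away, the event can also be switched off in the recording settings. A rough sketch, assuming the recording is started from a copy of the JDK's default.jfc (my-settings.jfc is a made-up name; jdk.OldObjectSample is the standard event name, but verify it in your settings file):
<event name="jdk.OldObjectSample">
  <setting name="enabled">false</setting>
</event>
Then start the JVM with -XX:StartFlightRecording=settings=/path/to/my-settings.jfc so the modified settings are used.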

Hadoop error log jvm sqoop

After 6-8 hours of running Java programs, I get the log file hs_err_pid6662.log and this output:
[testuser@apus ~]$ sh /home/progr/work/import.sh
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: Resource temporarily unavailable
The programs run every five minutes and try to import/export from Oracle.
How can I fix this?
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (gcTaskThread.cpp:48), pid=6662, tid=0x00007f429a675700
#
--------------- T H R E A D ---------------
Current thread (0x00007f4294019000): JavaThread "Unknown thread" [_thread_in_vm, id=6696, stack(0x00007f429a575000,0x00007f429a676000)]
Stack: [0x00007f429a575000,0x00007f429a676000], sp=0x00007f429a674550, free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
VM Arguments:
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -
Launcher Type: SUN_STANDARD
Environment Variables:
JAVA_HOME=/usr/java/jdk1.8.0_102
# JRE version: (8.0_102-b14) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
Memory: 4k page, physical 24591972k(6051016k free), swap 12369916k(11359436k free)
I am running programs like sqoop-import and sqoop-export on Java every 5 minutes.
example:
#!/bin/bash
hadoop jar /home/progr/import_sqoop/oracle.jar.
CDH version 5.11.1
java version jdk1.8.0_102
OS:Red Hat Enterprise Linux Server release 6.9 (Santiago)
Mem free:
total used free shared buffers cached
Mem: 24591972 20080336 4511636 132036 334456 2825792
-/+ buffers/cache: 16920088 7671884
Swap: 12369916 1008664 11361252
Host Memory Usage (screenshot)
The maximum heap memory is (by default) limited to 1 GB. You need to increase it:
JRE version: (8.0_102-b14) (build )
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -
Try the following to increase it to 2048 MB (or higher if required).
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
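For example, you could place the export in the wrapper script before the hadoop invocation (import.sh and the jar path are taken from the question; adjust them to your setup):
#!/bin/bash
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
hadoop jar /home/progr/import_sqoop/oracle.jar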
Reference:
Pig: Hadoop jobs Fail
https://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201104.mbox/%3C5FFFF0E4-B3BA-420A-ADE3-B422A66E8B11#yahoo-inc.com%3E

I have just begun to use Android Studio and I can't seem to get my Gradle build to sync with my application. Here is what it shows:

7:46:20 PM Gradle sync started
7:46:35 PM Gradle sync failed: Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at https://docs.gradle.org/2.10/userguide/gradle_daemon.html
Please read the following process output to find out more:
-----------------------
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Consult IDE log for more details (Help | Show Log)
The JVM version is 1.7.0_79 and the Android Studio version is 2.1.1.
Error occurred during initialization of VM Could not reserve enough space for object heap Error: Could not create the Java Virtual Machine.
There's not enough space available in RAM. To fix this, go to /android-studio-dir/bin and edit studio.vmoptions and studio64.vmoptions to increase -Xmx and reserve more memory for Java. Note that the number of active processes may influence this.
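For example, studio64.vmoptions might end up containing lines like these (the values are only illustrative; pick what your machine can spare):
-Xms512m
-Xmx2048m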
Probably the /tmp location is full. Found this somewhere:
Use the df command:
df
You should see an output with a line like this:
tmpfs 102400 102312 88 100% /tmp
To change the size of the /tmp mount:
sudo mount -o remount,size=2G /tmp
Done! Now it should work.

How to launch JBoss on Win7?

I tried to launch it from cmd with this command:
C:...\jboss-eap-6.4.0\bin>standalone.bat
but it failed with error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
How do I launch JBoss? Did I do something wrong?
JBoss could be trying to use more memory than Java permits it by default. You can try increasing the heap size.
Add this to your standalone.bat before running it again:
set "JAVA_OPTS=-Xms128M -Xmx512M -XX:MaxPermSize=256M"

Spark - UbuntuVM - insufficient memory for the Java Runtime Environment

I'm trying to install Spark 1.5.1 on an Ubuntu 14.04 VM. After un-tarring the file, I changed into the extracted folder and executed the command "./bin/pyspark", which should fire up the pyspark shell. But I got an error message as follows:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5550000, 715849728, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 715849728 bytes for committing reserved memory.
An error report file with more information is saved as:
/home/datascience/spark-1.5.1-bin-hadoop2.6/hs_err_pid2750.log
Could anyone please give me some directions to sort out the problem?
You need to set the Spark memory options in the conf/spark-defaults.conf file to values that suit your machine. For example,
usr1#host:~/spark-1.6.1$ cp conf/spark-defaults.conf.template conf/spark-defaults.conf
nano conf/spark-defaults.conf
spark.driver.memory 512m
For more information, refer to the official documentation: http://spark.apache.org/docs/latest/configuration.html
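Alternatively, the driver memory can be passed directly when launching the shell; --driver-memory is a standard spark-submit/pyspark option (pick a value your VM can actually back):
./bin/pyspark --driver-memory 512m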
Pretty much what it says: the JVM could not allocate the roughly 700 MB (715849728 bytes) it asked for, so give the VM more RAM.
