Docker Elasticsearch on VM - Linux

I created a VM with Alpine 3.7 and I am trying to start a container with Elasticsearch.
The VM's memory situation is:
alpine:~/elastic$ free -m
             total       used       free     shared    buffers     cached
Mem:          7483        242       7241          0         34        128
-/+ buffers/cache:          79       7404
Swap:         4095          0       4095
When I try to run Elasticsearch, I get this error:
elasticsearch | OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000637299000000, 2555904, 1) failed; error='Operation not permitted' (errno=1)
elasticsearch | OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007337ad000000, 2555904, 1) failed; error='Operation not permitted' (errno=1)
elasticsearch | #
elasticsearch | # There is insufficient memory for the Java Runtime Environment to continue.
elasticsearch | # Native memory allocation (mmap) failed to map 2555904 bytes for committing reserved memory.
elasticsearch | # Can not save log file, dump to screen..
elasticsearch | #
elasticsearch | # There is insufficient memory for the Java Runtime Environment to continue.
elasticsearch | # Native memory allocation (mmap) failed to map 2555904 bytes for committing reserved memory.
elasticsearch | # Possible reasons:
elasticsearch | # The system is out of physical RAM or swap space
elasticsearch | # In 32 bit mode, the process size limit was hit
elasticsearch | # Possible solutions:
elasticsearch | # Reduce memory load on the system
elasticsearch | # Increase physical memory or swap space
elasticsearch | # Check if swap backing store is full
elasticsearch | # Use 64 bit Java on a 64 bit OS
elasticsearch | # Decrease Java heap size (-Xmx/-Xms)
elasticsearch | # Decrease number of Java threads
elasticsearch | # Decrease Java thread stack sizes (-Xss)
elasticsearch | # Set larger code cache with -XX:ReservedCodeCacheSize=
elasticsearch | # This output file may be truncated or incomplete.
elasticsearch | #
elasticsearch | # Out of Memory Error (os_linux.cpp:2651), pid=29, tid=0x00007337c59ab700
I haven't copied the entire error output.
Could anyone help me?

I solved it by setting this kernel parameter:
kernel.pax.softmode=1
with
# echo 1 > /proc/sys/pax/softmode
Note: to make the setting persistent across reboots, add kernel.pax.softmode=1 to /etc/sysctl.conf.
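For reference, a minimal sketch of the whole sequence (assuming the hardened Alpine kernel exposes the PaX softmode knob; on some builds the proc path is /proc/sys/kernel/pax/softmode rather than /proc/sys/pax/softmode):
# apply the setting at runtime (run as root)
echo 1 > /proc/sys/pax/softmode
# make it persistent across reboots
echo "kernel.pax.softmode=1" >> /etc/sysctl.conf
sysctl -p   # reload /etc/sysctl.conf now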
Thanks

Related

JDK 1.8 -XX:+UseLargePages behavior when there are not enough huge pages left on the OS

I am currently unsure how to optimize HugePages usage for JVM applications that use Netty, with the -XX:+UseLargePages option enabled and G1GC in use.
I have also set the heap and metaspace minimum and maximum sizes to the same values.
My application looks fine, but I was wondering what happens if there are no free huge pages left on the system, since the JVM also uses additional native memory to allocate direct memory buffers, etc.
(Assume that the application started up normally and consumes additional huge pages in the off-heap memory area.)
I've read the following page, but it has no description of the behavior when the JVM fails to allocate huge pages.
https://www.oracle.com/java/technologies/javase/largememory-pages.html
I use CentOS 7 and OpenJDK 1.8.0_151-b12 for the testbed before deployment.
If allocating large pages fails, OpenJDK 8 or later falls back to allocating regular pages.
src/hotspot/share/memory/virtualspace.cpp:
if (base != NULL) {
  [...]
} else {
  // failed; try to reserve regular memory below
  if (UseLargePages && (!FLAG_IS_DEFAULT(UseLargePages) ||
                        !FLAG_IS_DEFAULT(LargePageSizeInBytes))) {
    log_debug(gc, heap, coops)("Reserve regular memory without large pages");
  }
}
All GC implementations use the ReservedSpace helper for allocating memory, so this is not GC-specific.
You can easily test that behavior on Linux by restricting available large pages:
$ echo 16 > /proc/sys/vm/nr_hugepages
$ cat /proc/meminfo | grep HugePages
AnonHugePages: 40960 kB
HugePages_Total: 16
HugePages_Free: 16
HugePages_Rsvd: 0
HugePages_Surp: 0
$ java -XX:+UseLargePages Test
OpenJDK 64-Bit Server VM warning: Failed to reserve large pages memory req_addr: 0x0000000000000000 bytes: 251658240 (errno = 12).
OpenJDK 64-Bit Server VM warning: Failed to reserve large pages memory req_addr: 0x0000000707c00000 bytes: 4164943872 (errno = 12).
OpenJDK 64-Bit Server VM warning: Failed to reserve large pages memory req_addr: 0x0000000000000000 bytes: 67108864 (errno = 12).
OpenJDK 64-Bit Server VM warning: Failed to reserve large pages memory req_addr: 0x0000000000000000 bytes: 67108864 (errno = 12).
$ echo $?
0
strace confirms the failed allocation attempt and the successful retry with the same size but without MAP_HUGETLB:
11631 mmap(NULL, 251658240, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_ANONYMOUS|MAP_HUGETLB, -1, 0) = -1 ENOMEM (Cannot allocate memory)
11631 mmap(NULL, 251658240, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x7f35d489c000
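To confirm at runtime whether a running JVM actually got its memory on huge pages, a quick diagnostic along these lines works (<jvm-pid> is a placeholder for the JVM's process id):
# system-wide counters: HugePages_Free/HugePages_Rsvd change when pages are handed out
grep HugePages /proc/meminfo
# count 2 MB hugetlb mappings in the JVM process; 0 means the fallback to regular pages was taken
awk '/KernelPageSize:/ && $2 == 2048' /proc/<jvm-pid>/smaps | wc -l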

Cassandra 3.11.3 and RHEL 7 compatibility

I am trying to run Cassandra v3.11.3 on a RHEL 7 system, but I am facing memory errors while trying to start Cassandra.
The error is the following:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000000080000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /../../../hs_err_pid16347.log
hs_err_pid16347.log:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2760), pid=16347, tid=0x00007f7a61dec700
#
# JRE version: (8.0_212-b04) (build )
# Java VM: OpenJDK 64-Bit Server VM (25.212-b04 mixed mode linux-amd64 compressed oops)
# Core dump written. Default location: /../logs/core or core.16347
#
--------------- T H R E A D ---------------
Current thread (0x00007f7a5800dae0): JavaThread "Unknown thread" [_thread_in_vm, id=16349, stack(0x00007f7a61dad000,0x00007f7a61ded000)]
Stack: [0x00007f7a61dad000,0x00007f7a61ded000], sp=0x00007f7a61deb520, free space=249k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0xa7ffb2] VMError::report_and_die()+0x2e2
V [libjvm.so+0x4cae47] report_vm_out_of_memory(char const*, int, unsigned long, VMErrorType, char const*)+0x67
V [libjvm.so+0x8c0ed0] os::pd_commit_memory(char*, unsigned long, unsigned long, bool)+0x100
V [libjvm.so+0x8b85ff] os::commit_memory(char*, unsigned long, unsigned long, bool)+0x1f
V [libjvm.so+0xa7c2ac] VirtualSpace::initialize(ReservedSpace, unsigned long)+0x20c
V [libjvm.so+0x5cf3e6] Generation::Generation(ReservedSpace, unsigned long, int)+0x96
V [libjvm.so+0x4d4b80] DefNewGeneration::DefNewGeneration(ReservedSpace, unsigned long, int, char const*)+0x30
V [libjvm.so+0x8e24ef] ParNewGeneration::ParNewGeneration(ReservedSpace, unsigned long, int)+0x2f
V [libjvm.so+0x5d0d26] GenerationSpec::init(ReservedSpace, int, GenRemSet*)+0x3d6
V [libjvm.so+0x5bbbaf] GenCollectedHeap::initialize()+0x20f
V [libjvm.so+0xa4326a] Universe::initialize_heap()+0x16a
V [libjvm.so+0xa43553] universe_init()+0x33
V [libjvm.so+0x613000] init_globals()+0x50
V [libjvm.so+0xa24cc5] Threads::create_vm(JavaVMInitArgs*, bool*)+0x4e5
V [libjvm.so+0x68c691] JNI_CreateJavaVM+0x51
C [libjli.so+0x7f44] JavaMain+0x84
C [libpthread.so.0+0x7ea5] start_thread+0xc5
I did check "free -m" and there is enough memory available, but I still get this memory-related error. What could it be related to? -Xmn1G is defined here.
Since there seemed to be enough memory available, I also checked "ulimit -a" and updated the limits as recommended in this link: https://serverfault.com/questions/662992/java-on-linux-insufficient-memory-even-though-there-is-plenty-of-available-memor
I rebooted the system and tried to start Cassandra again, but I am still facing the same issue. Are there any additional system-level settings that need to be updated here?
Also, can anyone please share the Cassandra 3.11 and operating system compatibility matrix?
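For reference, these are the system-level knobs usually checked for this class of failure (purely a diagnostic sketch, not from the original post):
free -m                                           # overall RAM and swap
ulimit -a                                         # per-user limits: max memory size, virtual memory, max user processes
sysctl vm.overcommit_memory vm.overcommit_ratio   # kernel overcommit policy
sysctl vm.max_map_count                           # mmap count limit (often raised for Cassandra)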

Unable to locate JVM fatal error log file (hs_err_pid.log) after Dataproc Spark Job crash

After an Apache Spark executor JVM crash in a C++ library, I'm unable to locate the hs_err_pid.log file that is specified in the executor's output log. Here's an example of the executor output log:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f6326dce8b0, pid=28580, tid=0x00007f630ea57700
#
# JRE version: OpenJDK Runtime Environment (8.0_212-b01) (build 1.8.0_212-8u212-b01-1~deb9u1-b01)
# Java VM: OpenJDK 64-Bit Server VM (25.212-b01 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libessence-jni.so+0x18b0] Java_com_evernote_service_nts_indexer_lib_Essence_EssProcess+0x0
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /hadoop/yarn/nm-local-dir/usercache/root/appcache/application_1559573462307_0002/container_1559573462307_0002_01_000005/hs_err_pid28580.log
10:50:00:562 [Executor task launch worker for task 41] INFO .....NtsLibInternalIndexerProcessor(NtsLibInternalIndexerProcessor.java:50) process Process for user: 18432
[thread 140063422109440 also had an error]
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
But when I SSH to the target worker machine to locate hs_err_pid28580.log, I can't find any trace of this file. I've tried:
vglazkov@reindex-cluster-vg-w-0:~$ sudo find / -name hs_err_pid28580.log
vglazkov@reindex-cluster-vg-w-0:~$
vglazkov@reindex-cluster-vg-w-0:~$ sudo ls -la /hadoop/yarn/nm-local-dir/usercache/root/appcache/
total 12
drwx--x--- 3 yarn yarn 4096 Jun 4 10:46 .
drwxr-x--- 4 yarn yarn 4096 May 15 15:47 ..
drwx--x--- 3 yarn yarn 4096 Jun 4 10:48 application_1557935076075_0097
But in the last case, the directory named application_1557935076075_0097 does not match my applicationId application_1559573462307_0002 and does not contain any hs_err_pid.log files.
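A likely factor (an assumption, not something confirmed in the thread) is that YARN removes the container's working directory once the application finishes, taking hs_err_pid28580.log with it. If you want the report to land in a predictable place on the next crash, one approach is to point -XX:ErrorFile outside the container directory, for example via spark.executor.extraJavaOptions (the /tmp path below is only an illustration):
# hypothetical: write executor crash reports to /tmp so they outlive the YARN container directory
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:ErrorFile=/tmp/hs_err_pid%p.log" \
  ...   # rest of the job arguments unchanged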

Hadoop error log jvm sqoop

My problem: after 6-8 hours of running Java programs I get this log, hs_err_pid6662.log,
and this:
[testuser@apus ~]$ sh /home/progr/work/import.sh
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: retry: Resource temporarily unavailable
/usr/bin/hadoop: fork: Resource temporarily unavailable
The programs run every five minutes and try to import/export from Oracle.
How can I fix this?
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create GC thread. Out of system resources.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (gcTaskThread.cpp:48), pid=6662, tid=0x00007f429a675700
#
--------------- T H R E A D ---------------
Current thread (0x00007f4294019000): JavaThread "Unknown thread" [_thread_in_vm, id=6696, stack(0x00007f429a575000,0x00007f429a676000)]
Stack: [0x00007f429a575000,0x00007f429a676000], sp=0x00007f429a674550, free space=1021k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
VM Arguments:
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -
Launcher Type: SUN_STANDARD
Environment Variables:
JAVA_HOME=/usr/java/jdk1.8.0_102
# JRE version: (8.0_102-b14) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.102-b14 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
Memory: 4k page, physical 24591972k(6051016k free), swap 12369916k(11359436k free)
I am running programs like sqoop-import and sqoop-export on Java every 5 minutes.
Example:
#!/bin/bash
hadoop jar /home/progr/import_sqoop/oracle.jar
CDH version: 5.11.1
Java version: jdk1.8.0_102
OS: Red Hat Enterprise Linux Server release 6.9 (Santiago)
Free memory:
             total       used       free     shared    buffers     cached
Mem:      24591972   20080336    4511636     132036     334456    2825792
-/+ buffers/cache:  16920088    7671884
Swap:     12369916    1008664   11361252
Host Memory Usage
The maximum heap memory is (by default) limited to 1 GB. You need to increase it:
JRE version: (8.0_102-b14) (build )
jvm_args: -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -
Try the following to increase this to 2048 MB (or higher if required):
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
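For example, placed at the top of the import script from the question (the 2048m value is just an illustration; size it to your workload):
#!/bin/bash
# give the hadoop/sqoop client JVM a larger heap before launching the job
export HADOOP_CLIENT_OPTS="-Xmx2048m ${HADOOP_CLIENT_OPTS}"
hadoop jar /home/progr/import_sqoop/oracle.jar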
Reference:
Pig: Hadoop jobs Fail
https://mail-archives.apache.org/mod_mbox/hadoop-mapreduce-user/201104.mbox/%3C5FFFF0E4-B3BA-420A-ADE3-B422A66E8B11#yahoo-inc.com%3E

Spark - UbuntuVM - insufficient memory for the Java Runtime Environment

I'm trying to install Spark 1.5.1 on an Ubuntu 14.04 VM. After un-tarring the file, I changed into the extracted folder and executed the command "./bin/pyspark", which should fire up the pyspark shell. But I got the following error message:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5550000, 715849728, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 715849728 bytes for committing reserved memory.
An error report file with more information is saved as:
/home/datascience/spark-1.5.1-bin-hadoop2.6/hs_err_pid2750.log
Could anyone please give me some directions to sort out the problem?
You need to set spark.executor.memory (and, for the pyspark shell, spark.driver.memory) in the conf/spark-defaults.conf file to a value that fits your machine. For example:
usr1@host:~/spark-1.6.1$ cp conf/spark-defaults.conf.template conf/spark-defaults.conf
nano conf/spark-defaults.conf
spark.driver.memory 512m
For more information, refer to the official documentation: http://spark.apache.org/docs/latest/configuration.html
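Alternatively (not part of the original answer, just the equivalent one-off form), the same limit can be passed on the command line when launching the shell:
# set the driver heap for this session only
./bin/pyspark --driver-memory 512m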
Pretty much what it says: the JVM could not allocate the ~0.7 GB (715849728 bytes) it asked for, so give the VM more RAM.
