Expiring Daemon because JVM heap space is exhausted - android-studio

I just updated Android Studio to 3.5 Beta 1 and I'm getting the
Expiring Daemon because JVM heap space is exhausted
message while the build is running. The build is also taking longer to complete. Does anyone have any idea what's causing this?

This can be fixed by increasing the configured max heap size for the project.
Through IDE:
Add the lines below to the gradle.properties file. The memory size (1) can be adjusted based on the available RAM:
org.gradle.daemon=true
org.gradle.jvmargs=-Xmx2560m
Through GUI:
In the Settings, search for 'Memory Settings' and increase the IDE max heap size and the Daemon max heap size according to the available system RAM.
(1)
$ man java
...
-Xmxsize
Specifies the maximum size (in bytes) of the memory allocation pool in bytes. This value
must be a multiple of 1024 and greater than 2 MB. Append the letter k or K to indicate
kilobytes, m or M to indicate megabytes, g or G to indicate gigabytes. The default value
is chosen at runtime based on system configuration. For server deployments, -Xms and
-Xmx are often set to the same value. See the section "Ergonomics" in Java SE HotSpot
Virtual Machine Garbage Collection Tuning Guide at
http://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/index.html.
The following examples show how to set the maximum allowed size of allocated memory to
80 MB using various units:
-Xmx83886080
-Xmx81920k
-Xmx80m
The -Xmx option is equivalent to -XX:MaxHeapSize.
...

I was able to solve this for my React Native project by configuring the following:
1. gradle.properties
org.gradle.daemon=true
org.gradle.configureondemand=true
org.gradle.jvmargs=-Xmx4g -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
2. app/build.gradle
android {
    dexOptions {
        javaMaxHeapSize "3g"
    }
}

The solution is to increase the Android build memory.
As you add more modules to your app, the demand placed on the Android build system grows, and the default memory settings will no longer work. To avoid OutOfMemoryErrors during Android builds, uncomment the alternate Gradle memory setting present in /android/gradle.properties:
org.gradle.jvmargs=-Xmx2048m -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
You can find gradle.properties inside the android folder.
P.S.
What are we doing here, and why does it help?
Let me clarify some basic terminology for understanding the whole thing.
Daemon: a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user.
Android Studio 2.1 enables a new feature, Dex In Process, which can dramatically increase the speed of full clean builds as well as improve Instant Run performance.
To take advantage of Dex In Process, you’ll need to modify your gradle.properties file and increase the amount of memory allocated to the Gradle Daemon VM by 1 GB, to a minimum of 2 GB, using the org.gradle.jvmargs property:
Specifies the JVM arguments used for the daemon process.
The setting is particularly useful for tweaking memory settings.
org.gradle.jvmargs=-Xmx2048m
Default value:
-Xmx1024m -XX:MaxPermSize=256m
The default Gradle Daemon VM memory allocation is 1 gigabyte — which is insufficient to support dexInProcess, so to take advantage you’ll need to set it to at least 2 gigabytes.
Dex in process works by allowing multiple DEX processes to run within a single VM that’s also shared with Gradle, which is why you need to allocate the extra memory before it can be enabled — that memory will be shared between Gradle and multiple DEX processes.
If you’ve increased the javaMaxHeapSize in your module-level build.gradle file beyond the default of 1 gigabyte, you’ll need to increase the memory assigned to the Gradle Daemon correspondingly.
When there’s enough memory assigned, Dex in Process is enabled by default, improving overall build performance and removing the overhead of starting multiple parallel VM instances. The result is a significant improvement in all build times, including Instant Run, incremental, and full builds.
Source :
https://medium.com/google-developers/faster-android-studio-builds-with-dex-in-process-5988ed8aa37e
https://rnfirebase.io/#increasing-android-build-memory

Balance memory consumption and build speed using Gradle options. For example:
Android Studio 2022.1.1 (PC RAM 16GB)
Gradle v7.3.3 (./gradle/wrapper/gradle-wrapper.properties)
AGP v7.2.0 (./build.gradle)
com.android.tools.build:gradle:7.2.0
Cache Fix Gradle Plugin
org.gradle.android.cache-fix:org.gradle.android.cache-fix.gradle.plugin:2.5.3
This Google Services dependency version supports Gradle Configuration Cache
com.google.gms:google-services:4.3.5
./gradle.properties
android.enableJetifier=true
android.jetifier.ignorelist=bcprov-jdk15on
android.useAndroidX=true
kapt.incremental.apt=true
kapt.use.worker.api=true
kotlin.daemon.jvm.options=-Xms1g -Xmx4g
manifestmerger.enabled=true
org.gradle.caching=true
org.gradle.configureondemand=true
org.gradle.daemon=true
org.gradle.jvmargs=-XX:InitialHeapSize=1g -XX:MaxHeapSize=6g -XX:MaxPermSize=2g -XX:MaxMetaspaceSize=2g -XX:NewSize=1g -XX:MaxNewSize=2g -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8
org.gradle.parallel=true
org.gradle.unsafe.configuration-cache=true
org.gradle.unsafe.configuration-cache-problems=warn
Useful links:
https://proandroiddev.com/how-we-reduced-our-gradle-build-times-by-over-80-51f2b6d6b05b
https://developer.android.com/studio/build/profile-your-build#using-the-gradle---profile-option

In my case it was probably some kind of Gradle bug. We didn't actually have any memory problems, but the message kept appearing. My solution was:
gradlew --no-daemon
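If you want the same behaviour without passing the flag on every build, the daemon can also be disabled project-wide via the standard Gradle property below (note the trade-off: repeated builds are slower without a daemon):
org.gradle.daemon=false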

Related

OSX - Unable to update Android Studio - OutOfMemoryError: Java heap size

Whenever I try to update Android Studio I keep getting the same error. In the past I've had to manually uninstall and download the new build, but it's annoying to have to do that every time, especially for small updates (minor versions) like this one. Please see the attached screenshots to see the problem I'm having.
I'm on OSX 10.14.6 Mojave. Everything else is up-to-date. I can see that I'm not running low on free memory either.
Anyone have any idea what's going on? Thanks
Java heap size is set at VM creation time and has nothing to do with the physical memory available, i.e. it doesn't grow as needed until there's no more virtual memory, the way a native program's heap would.
Go to Help/Change memory settings, and increase the heap size as you see fit.
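If that dialog isn't available in your version, you can make the same change by hand via Help > Edit Custom VM Options, which opens the IDE's vmoptions file (studio.vmoptions; the exact file name varies by platform). A minimal sketch, with the 4 GB value as an example to adjust to your RAM:
-Xms1g
-Xmx4g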

Multiple Cassandra nodes go down

We have a 12-node Cassandra cluster across 2 different datacenters. We are migrating the data from a SQL DB to Cassandra through a .NET application, and there is another .NET app that reads data from Cassandra. Recently we have been seeing one node or another go down (nodetool status shows DN and the service is stopped on it). Below is the output of nodetool status. We have to restart the service to get the node working again, but it stops again.
https://ibb.co/4P1T453
Path to the log: https://pastebin.com/FeN6uDGv
So in looking through your pastebin, I see a few things that can be adjusted.
First I'm reasonably sure that this is your primary issue:
Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out,
especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK or run Cassandra as root.
From GNU Error Codes:
Macro: int ENOMEM
“Cannot allocate memory.” The system cannot allocate more virtual
memory because its capacity is full.
-Xms12G, -Xmx12G, -Xmn3000M,
How much RAM is on your instance? From what I'm seeing your node is dying from an OOM (Out of Memory error). My guess is that you're designating too much RAM to the heap, and there isn't enough for the OS/page-cache. In fact, I wouldn't designate much more than 50%-60% of RAM to the heap.
For example, I mostly build instances on 16GB of RAM, and I've found that a 10GB max heap is about as high as you'd want to go on that.
-XX:+UseParNewGC, -XX:+UseConcMarkSweepGC
In fact, as you're using CMS GC, I wouldn't go higher than 8GB for max heap size.
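Where exactly the heap is configured depends on how Cassandra was installed; with 3.11 it is typically conf/jvm.options (or MAX_HEAP_SIZE/HEAP_NEWSIZE in cassandra-env.sh). A sketch assuming jvm.options, using the 8GB figure above and keeping your existing new-gen size:
-Xms8G
-Xmx8G
-Xmn3000M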
Maximum number of memory map areas per process (vm.max_map_count) 65530 is too low,
recommended value: 1048575, you can change it with sysctl.
This means you haven't adjusted your limits.conf or sysctl.conf. Check through the guide (DSE 6.0 - Recommended Production Settings), but generally it's a good idea to add the following to these files:
/etc/limits.conf
* - memlock unlimited
* - nofile 100000
* - nproc 32768
* - as unlimited
/etc/sysctl.conf
vm.max_map_count = 1048575
Note: After adjusting sysctl.conf, you'll want to run a sudo sysctl -p or reboot.
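To confirm the new values actually took effect, you can check them with standard Linux commands (run the ulimit checks in a shell belonging to the user that runs Cassandra):
sysctl vm.max_map_count
ulimit -l
ulimit -n
The first should report 1048575, and ulimit -l should report unlimited for max locked memory.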
Is swap disabled? : false,
You will want to disable swap. If Cassandra starts swapping contents of RAM to disk, things will get really slow. Run a swapoff -a and then edit /etc/fstab and remove any swap entries.
tl;dr; Summary
Set your initial and max heap sizes to 8GB (heap new size is fine).
Modify your limits.conf and sysctl.conf files appropriately.
Disable swap.
It's also a good idea to get on the latest version of 3.11 (3.11.4).
Hope this helps!

private bytes increase for a javaw process in java 8

My project has moved from Java 7 to Java 8.
After switching to Java 8, we are seeing the memory consumed by the process grow higher over time.
Here are the investigations we have done:
The issue appears only after migrating from Java 7 to Java 8.
Since metaspace is the main memory-related change from Java 7 to Java 8, we monitored metaspace, and it does not grow by more than 20 MB.
The heap also remains consistent.
Now the only path left is to analyze how memory is distributed to the process in Java 7 vs. Java 8, specifically private bytes. Any thoughts or links here would be appreciated.
NOTE: this javaw application is a Swing-based application.
UPDATE 1: After analyzing the native memory with the NMT tool, we generated a diff of the memory occupied compared to a baseline. We found that the heap remained the same but the threads are leaking all this memory. Since there is no change in the heap, I am assuming that this leak comes from native code.
So the challenge still remains open. Any thoughts on how to analyze the memory occupied by all the threads would be helpful here.
Below are the snapshots taken from native memory tracking.
In this picture, you can see that 88 MB was added under threads, where the arena and resource handle counts increased a lot.
In this picture, you can see that 73 MB was added under malloc, but no method name is shown here.
So please share some insight into understanding these two screenshots.
You may try another GC implementation like G1, introduced in Java 7 and the default GC as of Java 9. To do so, just launch your Java apps with:
-XX:+UseG1GC
There's also an interesting functionality with G1 GC, introduced in Java 8u20, that can look for duplicated Strings in the heap and "deduplicate" them (this only works if you activate G1, not with Java 8's default GC).
-XX:+UseStringDeduplication
Be sure to test your system thoroughly before going to production with such a change!
Here you can find a nice description of the different GCs you can use.
I encountered the exact same issue.
Heap usage was constant, only metaspace increased, and NMT diffs showed a slow but steady leak in the memory used by threads, specifically in the arena allocation. I tried to fix it by setting the MALLOC_ARENA_MAX=1 env var, but that was not fruitful. Profiling native memory allocation with jemalloc/jeprof showed no leakage that could be attributed to client code, pointing instead to a JDK issue: the only smoking gun there was the memory leak due to malloc calls which, in theory, should come from JVM code.
Like you, I found that upgrading the JDK fixed the problem. The reason I am posting an answer here is because I know the reason it fixes the issue - it's a JDK bug that was fixed in JDK8 u152: https://bugs.openjdk.java.net/browse/JDK-8164293
The bug report mentions Class/malloc increase, not Thread/arena, but a bit further down one of the comments clarifies that the bug reproduction clearly shows increase in Thread/arena.
consider optimising the JVM options
Parallel Collector(throughput collector)
-XX:+UseParallelGC
concurrent collectors (low-latency collectors)
-XX:+UseConcMarkSweepGC
use String Duplicates remover
-XX:+UseStringDeduplication
optimise compact ratio
-XXcompactRatio:
and refer
link1
link2
In this answer of mine you can see information and references on how to profile the native memory of the JVM to find memory leaks. In short, see this.
UPDATE
Did you use the -XX:NativeMemoryTracking=detail option? The results are straightforward: they show that most of the memory is allocated by malloc. :) It's a little bit obvious. Your next step is to profile your application. To analyze native methods and Java code I use (and we use in production) flame graphs with perf_events. Look at this blog post for a good start.
Note that your memory increased for threads; most likely the number of threads in your application is growing. Before perf, I recommend analyzing thread dumps taken before/after to check whether the number of Java threads grows and why. You can get thread dumps with jstack/jvisualvm/jmc, etc.
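For reference, a typical NMT baseline/diff session with the standard JDK tools looks roughly like the sketch below (<pid> and yourapp.jar are placeholders for your own process and application):
java -XX:NativeMemoryTracking=detail -jar yourapp.jar
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory detail.diff
jstack <pid> > threads-after.txt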
This issue does not occur with Java 8 update 152. The exact root cause of why it occurred with earlier versions has still not been clearly identified.

Swappiness in JVM

I stumbled upon an interesting problem last week when I noticed that one of our production servers, which runs Apache HTTP Server in front of Tomcat, stopped, reporting an HTTP outage.
On investigating the issue further, it seemed that the JVM was causing memory pages to be swapped out quickly. This resulted in the swap space being completely filled, causing a memory issue the next time a page was moved to swap.
Investigating further, it seems that there is a JVM swappiness factor that's set to 60% by default on some of our Linux distributions. Based on some research, it seems that this could be a high value for a web service that has high traffic. Our swap space was set to 2 GB.
The swap details are:
Filename Type Size Used Priority
/dev/sda3 partition 2096472 1261420 -1
From /proc/meminfo
SwapCached: 944668 kB
Our JVM properties are as follows:
-Xmx6g -Xms4g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:PermSize=512M -XX:MaxPermSize=1024M -XX:NewSize=2g -XX:MaxNewSize=2g -XX:ParallelGCThreads=8
The server runs with 12 GB RAM.
Swappiness does not go well with the JVM's GC process. So I tried reducing the swappiness to 0, but that did not change anything. We still see cases where the entire swap space is consumed, resulting in an OutOfMemoryError.
How can I tune the JVM performance?
The swappiness factor is actually a system-wide setting in Linux, not JVM-specific.
My personal experience with some kinds of applications is that they need swap to be turned off in order to work without hiccups, period. While I can't say for sure if your application belongs to this category, I have seen apps being swapped out despite enough free RAM being available. As you pointed out, GC and swap don't mix well. Swappiness is just an indication to the OS, so its impact on how much gets swapped out may be larger or smaller depending on circumstances. My suggestion would be to try turning swap completely off.
In order to do this, you need to be root or have sudo access. Comment out the line describing swap in /etc/fstab, which will prevent swap from being turned on after a reboot. Then, in order to turn swap off for the current server run, run swapoff -a. This may take up to a few minutes if there's data in swap that needs to be pulled back into RAM. Then, check the last line of the output of free to ensure that the total size of available swap is 0. After this, observe your app in order to determine if turning swap off solved your issue.
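For reference, the sequence described above looks roughly like this (the fstab line shown is only an example, matching the /dev/sda3 partition from the question):
sudo swapoff -a
sudo vi /etc/fstab        # comment out the swap entry, e.g. "/dev/sda3 none swap sw 0 0"
free -m                   # the Swap row should now show 0 total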
If your physical RAM is 12 GB and your heap is set to only 6 GB (max), then you have plenty of RAM. Is this server dedicated to the Tomcat instance?
Take a look at the verbose GC log files to see what the memory usage is over time. Check your access log files to confirm whether the access requests have increased over time.
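If GC logging is not already enabled, the Java 8 flags for it look like the following (a sketch; the log path is an example to adjust):
-verbose:gc -Xloggc:/var/log/tomcat/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps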

how to check the heap size allocated to the JVM on Linux

I have Apache Tomcat as my web server.
I want to check what heap size is allocated to the JVM on Linux.
Also, where can I modify it?
A simple way on Linux is to run the following:
ps -ef | grep tomcat
Look for the starting and maximum JVM memory:
-Xms1024m -Xmx4096m
In this case it is allocating 1 GB on startup and the maximum is 4 GB.
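If no -Xms/-Xmx flags appear in the process arguments, the JVM falls back to its ergonomic defaults; you can see what those resolve to on your machine with a standard HotSpot flag:
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize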
You can easily check the heap memory allocation using JConsole. If the path to your JRE/JDK is set up correctly on the system, you should be able to start it with the jconsole command from anywhere.
For managing your heap memory allocation you can have a look here: http://javahowto.blogspot.com/2006/06/6-common-errors-in-setting-java-heap.html
The heap size used by Tomcat is defined in its configuration.
This is the place where you can both check and change it.
If you're unsure about where this configuration is saved, I'd suggest looking at the Tomcat documentation where this is explained together with all configuration values.
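As a concrete example, a common convention is to put the heap settings into a bin/setenv.sh file, which Tomcat's startup scripts pick up if it exists (the values below are simply the ones from the ps output above; adjust them to your machine):
export CATALINA_OPTS="-Xms1024m -Xmx4096m"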
If you need more information from the server but cannot log into it interactively (or don't have a GUI or JMX set up, etc.), you can include javamelody in your POM file/libs and it will create a page at host:8080/monitoring with all kinds of good information, including heap size, GC statistics and permgen size.
This is NOT a safe thing to leave running in a production environment - if you need it all the time at least lock it down!
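If you go this route, the Maven dependency is usually pulled in roughly as below (coordinates quoted from memory, so verify the artifact and pick a current release yourself; the version element is left as a placeholder):
<dependency>
    <groupId>net.bull.javamelody</groupId>
    <artifactId>javamelody-core</artifactId>
    <version>RELEASE_VERSION</version>
</dependency>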
