How is total memory in Java calculated?

If I have 8GB of RAM and I use the following on a 64-bit JVM:
max heap size 6144MB
max perm gen space 2048MB
stack size 2MB
Q1: Is perm gen space allocated from the max heap, or is it separate?
Q2: If it is separate, will the JVM start with the above settings, or will it give an error because heap + permgen + stack + program data would exceed the total RAM?

First of all, remember that the parameter you set with -Xmx (since that's the way I suppose you are setting your heap size) is the size of the heap available to your Java code, not the amount of memory the JVM will consume. The difference comes from housekeeping structures that the JVM keeps (garbage collector structures, JIT overhead, etc.), memory sometimes allocated by native code, buffers, and so on. The size of this additional memory depends on the JVM version, the app you are running, and other factors, but I've seen JVMs allocate twice as much RAM as the heap size visible to the application. For the average case, I usually consider 50% to be a safe margin, with 20-30% acceptable. If you set your heap size close to the amount of RAM in your machine, you will hit swap and performance will suffer.
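If you want to see that split from inside a running JVM, the standard java.lang.management API reports heap and non-heap usage separately. A minimal sketch (the class name is just for illustration); note that even heap + non-heap still understates the real process footprint, since thread stacks, GC bookkeeping and native allocations aren't reported here:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapVsProcess {
    public static void main(String[] args) {
        // Runtime.maxMemory() reflects the -Xmx limit, i.e. the heap visible to your code
        long maxHeap = Runtime.getRuntime().maxMemory();

        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage(); // perm gen / metaspace, code cache, ...

        System.out.printf("max heap (-Xmx):    %d MB%n", maxHeap / (1024 * 1024));
        System.out.printf("heap committed:     %d MB%n", heap.getCommitted() / (1024 * 1024));
        System.out.printf("non-heap committed: %d MB%n", nonHeap.getCommitted() / (1024 * 1024));
    }
}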
Now for the enumerated questions:
Perm gen is a separate space from the heap, at least in Oracle's JDK 6. It is separate because it is governed by completely different memory-management rules than the regular heap. By the way, 2 GB of perm gen space is huge - are you sure you really need it?
Regarding the second question, see above. If this is Oracle's JDK, you are likely to run into trouble, since the perm gen and heap sizes add up, and on top of that there will be additional memory, usually on the order of 20-50% of your 6 GB heap; together with the heap and perm space this will be more than your RAM. This setup may appear to work at first, but once both the heap and perm gen usages come close to their configured limits, you could run out of memory.

Heap and permgen are separate memory areas of the JVM. As such, you will be consuming virtually all the memory on the system. It is always better to leave about 20% of RAM free so the OS and other tasks can execute properly.
Also, 2 GB for perm space is a huge figure. Have you looked at optimising your jars so that only the relevant classes are present in the classpath?

This depends on the JVM and the version of the JVM.
In HotSpot Java 6, the PermGen space is independent of the max heap size argument (-Xmx and -Xms control only the young/old generation sizes). The PermGen space size is controlled by -XX:PermSize and -XX:MaxPermSize. See Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning.
UPDATE: In HotSpot Java 8 there is no PermGen space anymore; class metadata now lives in a native-memory area called Metaspace, outside the Java heap.
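If you're not sure which pools your particular JVM has and whether they count against -Xmx, you can list them at runtime: on HotSpot 6/7 this shows a Perm Gen pool marked as non-heap, on 8+ a Metaspace pool instead. A small sketch (class name is my own):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class ListMemoryPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long max = pool.getUsage().getMax(); // -1 means "undefined"
            System.out.printf("%-25s type=%-15s max=%s%n",
                    pool.getName(),
                    pool.getType(), // heap or non-heap
                    max < 0 ? "undefined" : (max / (1024 * 1024)) + " MB");
        }
    }
}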

Application fails when free memory is low but available memory is high

I am building data models via an app called Sisense on Linux. Lately the process fails with an out-of-memory error. Running free -h, I see that the failure occurs when free memory is low, but before it actually reaches zero and even though there is still plenty of available memory.
Here is the exception:
Failed to build custom table: Rule_pre; BE#521691 SQL error: SafeModeException:
Safe-Mode triggered due to memory pressure. Pod physical memory: 5.31 GB available, 2.87 GB
used, 8.19 GB total. Server physical memory: 4.86 GB available, 28.67 GB used,
33.54 GB total. Application total virtual memory: 2.54 GB. The server exceeded 85% capacity
(28.67/33.54). Possible ways to reduce memory pressure: increase server memory, adjust data
modelling (M2M, un-indexed string fields, etc.), reduce number of simultaneous queries
And here is the output of free -h where you can see the declining memory in the center "free" column. Once free memory got below 235 MB I saw the above exception.
The free util man page has these definitions for free and available memory:
free Unused memory (MemFree and SwapFree in /proc/meminfo)
available
Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)
From what I read on the internet, there seems to be a casual attitude about low free memory - that it is not an issue. But the failure coincides with free memory getting too low. If I understand the man page, available memory is for starting new applications. I am assuming, then, that available memory is not available to the existing application that fails, and that free memory is indeed what matters. But any confirmation from others or additional explanation would be appreciated. I'd also be curious about opinions on whether this may constitute a memory leak, or whether I should simply allocate more memory somehow, perhaps at the Linux layer.
I think I have enough understanding here. Free memory never goes below 200MB whether a build fails or succeeds. It does not appear to be an indicator of the issue. A successful build will also show a drop in free memory to 200MB.
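For what it's worth, the two numbers free prints come straight from /proc/meminfo (MemFree and MemAvailable), so you can also log them from inside the application while a build runs. A minimal Java sketch, Linux only:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MemInfo {
    public static void main(String[] args) throws IOException {
        // MemFree feeds the "free" column, MemAvailable the "available" column of free(1)
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            if (line.startsWith("MemTotal:")
                    || line.startsWith("MemFree:")
                    || line.startsWith("MemAvailable:")) {
                System.out.println(line);
            }
        }
    }
}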

How the garbage collector works with Xmx and Xms values

I have some doubts about how the JVM garbage collector would work with different values of Xmx and Xms and different machine memory sizes.
How would the garbage collector work in the following scenarios?
1. Machine memory size = 7.5GB
Xmx = 1024Mb
Number of processes = 16
Xms = 512Mb
I know 16*512Mb already exceeds the machine memory size. How would the garbage collector work in this scenario? I think the memory usage would be the entire 7.5GB in this case. Will the processes be able to do anything, or will they all be stuck?
2. Machine memory size = 7.5GB
Xmx = 320MB
Xms is not defined.
Number of Processes = 16
In this case, 16*320Mb should be less than 7.5GB. But in my case, memory usage is again reaching 7.5GB. Is that possible? Or do I probably have a memory leak in my application?
So basically I want to understand: when does the garbage collector run? Does it run whenever the memory used by the application reaches exactly the Xmx value, or are they not related at all?
There's a couple of things to understand here and then consider in your situation.
Each JVM process has its own virtual address space, which is protected from other processes by the operating system. The OS maps physical ranges of addresses (called pages) to the virtual address space of each process. When more physical pages are required than are available, pages that have not been used for a while will be written to disk (called paging) and can then be reused. When the data of these saved pages is required again they are read back to the same or different physical page. By doing this you can easily run 16 or more JVMs all with a heap of 1Gb on a machine with 8Gb of physical memory. The problem is that the more paging to disk that is required the more you are going to degrade the performance of your applications since disk IO is orders of magnitude slower than RAM access. This is also the reason that the heap space of a single JVM should not be bigger than physical memory.
The reason for having the -Xms and -Xmx options is so you can specify the initial and maximum size of the heap. As your application runs and requires more heap space, the JVM is able to increase the heap size within these bounds. A lot of the time these values are set to be the same, to eliminate the overhead of having to resize the heap while the application is running. Most operating systems only allocate physical pages when they're required, so in your situation making -Xms small won't change the amount of paging that occurs.
The key point here is it's the virtual memory system of the operating system that makes it possible to appear to be using more memory than you physically have in your machine.
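To see the resizing between -Xms and -Xmx for yourself, you can watch Runtime.totalMemory() (the committed heap) climb towards Runtime.maxMemory() as an application allocates. A small sketch; the class name and the flag values are just illustrative, e.g. run it with -Xms64m -Xmx1g:

import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        List<byte[]> retained = new ArrayList<byte[]>();

        // Allocate in 10 MB steps and keep the arrays reachable so the heap has to grow.
        for (int i = 0; i < 50; i++) {
            retained.add(new byte[10 * 1024 * 1024]);
            System.out.printf("committed=%4d MB  max=%4d MB  used=%4d MB%n",
                    rt.totalMemory() / (1024 * 1024),
                    rt.maxMemory() / (1024 * 1024),
                    (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
        }
    }
}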

What is the difference between heap and swap memory?

What is the difference between heap and swap memory in Ubuntu (or any OS)? How does this affect choosing Cassandra?
Heap memory is what the JVM uses; swap is what the OS uses to push infrequently used pages out to disk and save memory. It is strongly recommended to disable swap on C* hosts: old-gen objects in the JVM may get pushed out to disk, and when a GC occurs and touches them, it will be very slow. If it can, C* will pin its memory to prevent itself from being swapped, but you should disable swap anyway.
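If you want to check whether any of a running JVM's pages have actually been swapped out, Linux exposes that per process. A minimal sketch reading /proc/self/status (the VmSwap field needs a reasonably recent kernel):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SwapCheck {
    public static void main(String[] args) throws IOException {
        // VmSwap in /proc/self/status shows how much of *this* process
        // (including parts of the JVM heap) is currently swapped out to disk.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmRSS:") || line.startsWith("VmSwap:")) {
                System.out.println(line);
            }
        }
    }
}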

Is Linux RSS not equivalent to Java Xmx + MaxMetaspaceSize? [duplicate]

This is my ps -eo snapshot; one of the processes occupies 2.1GB of memory.
The max size of its heap is 768MB and the max size of its metaspace is 256MB.
So I assumed the process could not occupy more than 1024MB (768+256), but it does.
What is included in "RSS" besides the heap and metaspace? And how can I monitor what is inside "RSS", the way a heap or stack analyzer does?
The RSS is the size of all the memory used for any purpose, including the JVM itself, shared libraries, thread stacks, direct memory, memory-mapped files, native memory use, and native graphics components. The heap and metaspace are just two memory regions.
Note the virtual memory size is 15 GB.
To see what the memory is used for, you can dump /proc/{pid}/smaps, which shows all the memory regions (and there will be hundreds) and how much of each one is resident. (IntelliJ running on my machine has 403 memory regions.)
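If you just want a quick total rather than reading hundreds of regions by eye, the per-mapping Rss: lines in that file can be summed up programmatically. A rough sketch for the current process, Linux only:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SmapsRss {
    public static void main(String[] args) throws IOException {
        long regions = 0;
        long rssKb = 0;
        for (String line : Files.readAllLines(Paths.get("/proc/self/smaps"))) {
            if (line.startsWith("Size:")) {
                regions++;                                  // one "Size:" line per mapping
            } else if (line.startsWith("Rss:")) {
                String[] parts = line.trim().split("\\s+"); // e.g. "Rss:   440 kB"
                rssKb += Long.parseLong(parts[1]);
            }
        }
        System.out.printf("%d memory regions, total RSS = %d MB%n", regions, rssKb / 1024);
    }
}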

Why did the JVM calculate the PS Survivor Space size so low for the parallel collector?

I am using the JDK 1.6.0_16 JVM for a Java application that is hosted on a Linux machine with an 80-core Intel processor.
When starting the Java application I have only two options configured, -Xms2048m -Xmx8000m, in the JVM options (after the java command).
I see that PS Old Gen is calculated as 5.21G and PS Eden is calculated as 2.6G, but the PS Survivor Space is only 25MB.
I have exactly the same JVM in production, and there the PS Survivor Space size is shown as 888MB. I am seeing these sizes in the Java Mission Control Memory tab.
The cache size (from the output of /proc/cpuinfo) shows 24656 on both the UAT and production boxes.
I don't think it will make any difference to the JVM, but I'll still mention that there was very low load on the machine at the time of starting the JVM.
Can you please advise what parameters the JVM considers when calculating the PS Survivor Space size?
From the Oracle GC tuning article 1 and article 2:
Survivor Space Sizing
The parameter SurvivorRatio can be used to tune the size of the survivor spaces, but this is often not important for performance. For example, -XX:SurvivorRatio=6 sets the ratio between eden and a survivor space to 1:6.
In other words, each survivor space will be one-sixth the size of eden, and thus one-eighth the size of the young generation (not one-seventh, because there are two survivor spaces).
If survivor spaces are too small, copying collection overflows directly into the tenured generation. If survivor spaces are too large, they will be uselessly empty.
The NewSize and MaxNewSize parameters control the new generation’s minimum and maximum size. Regulate the new generation size by setting these parameters equal. The bigger the younger generation, the less often minor collections occur.
NewRatio: the size of the young generation relative to the old generation is controlled by NewRatio. For example, setting -XX:NewRatio=3 means that the ratio between the young and old generation is 1:3; the combined size of eden and the survivor spaces will be one-fourth of the heap.
As Peter Lawrey correctly noted, survivor sizing depends on the type of your application. From the GC tuning article by Oracle, here are the guidelines:
First decide the maximum heap size you can afford to give the virtual machine. Then plot your performance metric against young generation sizes to find the best setting
If the total heap size is fixed, then increasing the young generation size requires reducing the tenured generation size. Keep the tenured generation large enough to hold all the live data used by the application at any given time, plus some amount of slack space (10 to 20% or more).
Subject to the previously stated constraint on the tenured generation: Grant plenty of memory to the young generation and increase the young generation size as you increase the number of processors, because allocation can be parallelized. The default is calculated from NewRatio and the -Xmx setting
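To see what sizes the parallel collector actually ended up choosing on a given box (the same numbers Mission Control shows), you can query the memory pools at runtime. A small sketch; the ratio printout at the end is my own addition:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class SurvivorSizeCheck {
    public static void main(String[] args) {
        long eden = 0;
        long survivor = 0;
        // With the parallel collector the pools are named "PS Eden Space",
        // "PS Survivor Space" and "PS Old Gen".
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long committed = pool.getUsage().getCommitted();
            if (pool.getName().contains("Eden")) {
                eden = committed;
            } else if (pool.getName().contains("Survivor")) {
                survivor = committed;
            }
            System.out.printf("%-20s committed=%5d MB%n", pool.getName(), committed / (1024 * 1024));
        }
        if (survivor > 0) {
            // With -XX:SurvivorRatio=N this should be roughly N, but the parallel
            // collector is free to resize the spaces while the application runs.
            System.out.printf("effective eden:survivor ratio ~ %.1f%n", (double) eden / survivor);
        }
    }
}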
Can you please advise what parameters the JVM considers when calculating the PS Survivor Space size?
It must be large enough to never actually fill up after a collection of the Eden space; otherwise you will get full GCs, which is undesirable.
What an optimal survivor space size is depends on your application. I suggest you test your application under realistic loads with a larger Eden and survivor space than you imagine useful, see how much of that space ever gets used, and add 50% to 100% based on what you see is used.
The machine has 256G of physical memory out of which ~200G
The default heap size is 32 GB and I suggest you use this default unless you have a good reason to reduce it.
-XX:SurvivorRatio=1
This is usually a bad idea; having a higher survivor ratio such as 8 is usually better.
Setting the value to 8 didn't have any effect
Most likely you have a low allocation rate. I usually set a large young space, e.g. -Xmn8g or even -Xmn24g, but whether this is a good or bad idea depends on your application.

Resources