What is Tomcat's committed heap memory? - Linux

We are monitoring Tomcat servers, and I found that whenever committed heap memory reaches the max heap memory, Tomcat crashes or throws an OOM error, but at the same time heap used memory is at a normal level. Could someone explain what committed memory is, and why the JVM crashes while heap used is normal?
See the following graph to get an idea.

Committed space is space that is not virtual; it is memory that has actually been assigned to a given partition of the JVM heap. When you use the optional sizing switches -Xms and -Xmx, not everything is initially committed to a particular partition of memory. As the generations need to expand, they are permitted to grow into the "virtual" (reserved but not yet committed) space.
Your chart looks like how things should be working. As your used space grows toward the committed size, the committed space expands toward the maximum. Once committed reaches the maximum size, that's it; there is nowhere left to grow. If the JVM can't save itself with a last-ditch GC, then down it goes with an OutOfMemoryError.
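If you want to see these three numbers for yourself from inside the JVM (rather than from your monitoring tool), a minimal sketch using the standard MemoryMXBean looks like the following; the class name is made up, and these are the same values a JMX-based monitor reads remotely:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapStats {
    public static void main(String[] args) {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();
        // init      -> roughly what -Xms asked for at startup
        // used      -> bytes currently occupied by objects (live or not yet collected)
        // committed -> memory the JVM has actually claimed from the OS (always >= used)
        // max       -> the -Xmx ceiling; committed can grow up to this and no further
        System.out.printf("init=%d used=%d committed=%d max=%d%n",
                heap.getInit(), heap.getUsed(), heap.getCommitted(), heap.getMax());
    }
}
```

Sampling these over time should reproduce your chart: used climbs, committed steps up behind it, and the OOM happens when committed is pinned at max and a full GC still cannot free enough used space.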

Related

How to increase memory at startup?

Is there an option for Node.js to increase the initially allocated memory?
https://futurestud.io/tutorials/node-js-increase-the-memory-limit-for-your-process
The --max-old-space-size flag seems to increase the maximum memory, but what about the initial memory?
Kind of like -Xmx and -Xms for the JVM.
V8 developer here. The short answer is: no.
The reason no such option exists is that adding fresh pages to the heap is so fast that there is no significant benefit to doing it up front.
V8 does have a flag --initial-old-space-memory, but it doesn't increase the initial allocation. Instead, what it means is "don't bother doing (old-space) GC while the heap size is below this limit". If you set that to, e.g., 1000 (MB), and then allocate 800MB of unreachable objects, and then just wait, then V8 will sit around forever with 800MB of garbage on the heap and won't lift a finger to get rid of any of that.
I'm not sure in what scenario this behavior would be useful (it's not like it will turn off GC entirely; GC will just run less frequently, but fewer GCs on a bigger heap don't necessarily add up to less total time than more GCs on a smaller heap), so I would strongly recommend measuring the effect on your particular workload carefully before using this flag -- if it were a good idea to have it on by default, then it would be on by default!
If I had to guess: this flag might be beneficial if you know that (1) your application will have a large amount of "eternal" (=lives as long as the app is running) data on the heap, and (2) you can estimate the amount of that data with reasonable accuracy. E.g.: if you know that at any given time, your old-space will consist of 500MB of always-reachable-anyway data plus any potentially-freeable-garbage, you could use this flag to tell V8 "if old-space size is below 600MB (=500MB plus a little), then don't bother trying to find garbage, it won't be worth the effort".

What is the difference between heap and swap memory?

What is the difference between heap and swap memory in Ubuntu (or any OS)? How does this affect choices when running Cassandra?
Heap memory is what the JVM uses; swap is what the OS uses to push infrequently used pages onto disk and save memory. It is strongly recommended to disable swap on C* hosts, because old-generation objects in the JVM may get pushed onto disk, and when a GC occurs and touches them it will be very slow. If it can, C* will pin its memory to prevent itself from being swapped, but you should disable swap anyway.

VxWorks memory allocation failure even though there is enough memory

I am rather new to VxWorks, and I am building an RTP application that needs to allocate some memory dynamically. I have configured the kernel for a memory size of 750MB.
I am allocating memory in 10 blocks, each of size 32MB, at the very beginning of the program, but after the 5th or 6th block allocation I get an allocation failure with the message memPartAlloc: block too big 15912260 bytes (0x10 aligned) in partition 0xe004608 on the console.
How could memory allocation be failing when there is enough memory available? I do not think memory had fragmented enough for allocation to fail right at the beginning of my program, and as per the output of memShow(), there is indeed enough free memory to satisfy the request.
If memory has indeed fragmented for some strange reason, is there some way to compact the free space and continue in VxWorks?
This is an old question, so this answer may be moot now, and is to an extent speculation based on the limited information in the question.
Whilst the kernel may be configured to support 750MB, this will be the total memory available. Some of this will be used by the OS image, although we wouldn't expect that to be much, and we can assume that at least 700MB should be available for use.
Some extra memory will be used to provide the stacks for each task; how much is very application dependent, as it is specified in the taskSpawn() call. You can check this, but again, it is unlikely to make a significant difference.
Let's be generous and assume that you really only have 650MB. This should, in theory, be plenty.
And yet we have this error:
memPartAlloc: block too big 15912260 bytes (0x10 aligned) in partition 0xe004608
What can be happening? And what does this mean?
This error tells you that the memory allocator could not allocate the memory, as the request was too large. Interestingly, the request is 15912260 bytes, which is not 32MB; it is actually a shade over 15MB. So it would be worth checking what you are actually requesting.
Secondly, this error message is coming from memPartAlloc. Are you allocating memory using malloc() or memPartAlloc()? The distinction matters, since malloc() will allocate memory from the system memory partition, whereas memPartAlloc() allocates memory from a user-specified, and created, partition.
If you are using memPartAlloc, ensure that you are allocating memory from the correct partition, and that it has been created with enough memory to fulfill the request.
EDIT:
As it appears that this was an RTP, you should also confirm that the RTP has a large enough heap allocated. This is specified via an environment variable, as this answer describes.

Why does the Java 8 GC not collect for over 11 hours?

Context: 64-bit Oracle Java SE 1.8.0_20-b26
For over 11 hours, my running Java 8 app has been accumulating objects in the tenured generation (close to 25%). So I manually clicked the Perform GC button in jconsole, and you can see the precipitous drop in heap memory on the right of the chart. I don't have any special VM options turned on except for -XX:NewRatio=2.
Why does the GC not clean up the tenured generation?
This is fully expected and desirable behavior. The JVM has been successfully avoiding a Major GC by performing timely Minor GCs all along. A Minor GC, by definition, does not touch the Tenured Generation, and the key idea behind generational garbage collectors is that precisely this pattern will emerge.
You should be very satisfied with how your application is humming along.
The throughput collector's primary goal is, as its name says, throughput (via GCTimeRatio). Its secondary goal is pause times (MaxGCPauseMillis). Only as a tertiary goal does it consider keeping the memory footprint low.
If you want to achieve a low heap size you will have to relax the other two goals.
You may also want to lower MaxHeapFreeRatio to allow the JVM to yield back memory to the OS.
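As an illustration (not part of the original answer), you can check what these goals are currently set to on a HotSpot JVM via its diagnostic MXBean. The flags queried below are real HotSpot flags, but the com.sun.management API is HotSpot-specific and the class name here is made up, so treat this as a sketch:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class GcGoalFlags {
    public static void main(String[] args) {
        // HotSpot-specific diagnostic bean; not part of the standard Java API.
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // The goals discussed above, in the throughput collector's priority
        // order: throughput, pause time, then footprint.
        for (String flag : new String[] {
                "GCTimeRatio", "MaxGCPauseMillis",
                "MinHeapFreeRatio", "MaxHeapFreeRatio"}) {
            System.out.println(hotspot.getVMOption(flag));
        }
    }
}
```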
Why does the GC not clean up the tenured generation?
Because it doesn't need to.
It looks like your application is accumulating tenured garbage at a relatively slow rate, and there was still plenty of space for tenured objects. The "throughput" collector generally only runs when a space fills up. That is the most efficient in terms of CPU usage ... which is what the throughput collector optimizes for.
In short, the GC is working as intended.
If you are concerned by the amount of memory that is being used (because the tenured space is not being collected), you could try running the application with a smaller heap. However, the graph indicates that the application's initial behavior may be significantly different from its steady-state behavior. In other words, your application may require a large heap to start with. If that is the case, then reducing the heap size could stop the application from working, or at least make the startup phase a lot slower.
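If you want to confirm this from inside the application rather than by watching jconsole, a small sketch using the standard GarbageCollectorMXBean shows how often each collector has actually run. The collector names (e.g. "PS MarkSweep" for the old generation under the throughput collector) vary by GC configuration, and the class name is made up:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCounters {
    public static void main(String[] args) {
        // Under the throughput collector, seeing the old-generation collector
        // stay at count=0 for hours is exactly the "only collect when a space
        // fills up" behavior described above.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d, time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```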

Major GC in JVM issue

The JVM heap is divided into two spaces: the old generation and the young generation. After a major GC, there will be freed space in the old generation following the compacting/sweep process. I am wondering whether the free space gained during a major GC still belongs to the old generation, or whether it could be moved to the young generation.
In other words, I am asking whether there is a fixed size/boundary between the old generation space and the young generation space.
In HotSpot, there are options for that:
-XX:+UseAdaptiveSizePolicy
-XX:+UseAdaptiveGCBoundary
However, these can still be ignored by the VM. It's part of the dark auto-tuning magic.
For simplicity, just assume that the division between old and young is fixed. The same applies to Eden and the survivor spaces.
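One way to see what your particular VM is actually doing with the generation sizes, rather than assuming, is to watch the per-pool figures over time; a minimal sketch using the standard MemoryPoolMXBean (the pool names, such as "PS Old Gen" or "PS Eden Space", depend on which collector is in use, and the class name is made up):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class GenerationSizes {
    public static void main(String[] args) {
        // Print committed and max for every memory pool; if the boundary
        // between generations is being moved adaptively, the committed
        // figures per generation will change between samples.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            System.out.printf("%-20s committed=%,d max=%,d%n",
                    pool.getName(), usage.getCommitted(), usage.getMax());
        }
    }
}
```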
I think there is a boundary between the generations, but the sizes of some generations may be changeable, since -Xms and -Xmx are not the same. When an object is collected, the garbage collector marks the space the object used as available.
It is like deleting a file on your disk: the OS just marks the file path as inaccessible and makes the space available for the next write.
Generations are like disk partitions, but generations can shrink or grow their space.
