Potential memory leak in SUSE Linux

I have a SUSE server running Tomcat with my web application (which has threads running in the backend to update the database).
The server has 4 GB of RAM and Tomcat is configured to use a maximum of 1 GB.
After running for a few days, the free command shows that the system has only 300 MB of free memory. Tomcat uses only 400 MB and no other process seems to use an unreasonable amount of memory.
Adding up the memory usage of all processes (returned by the ps aux command) shows only 2 GB in use.
Is there any way to identify whether there is a leak at the system level?
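
The "missing" memory is usually held by the kernel (page cache, slab, shared memory) rather than leaked by a process. A minimal sketch of commands that typically account for the gap (nothing here is SUSE-specific):

    free -m                      # the buffers/cache figure is reclaimable memory, not a leak
    cat /proc/meminfo            # check Slab, SReclaimable, Shmem and PageTables
    slabtop -o | head -20        # kernel slab caches (dentry/inode) can grow to gigabytes
    ps aux --sort=-rss | head    # largest resident processes, to compare against free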

Related

Netty WebClient memory leak in Tomcat server

I am observing a swap memory issue on our Tomcat servers, which are installed on Linux machines, and when I tried to collect a heap dump, I got this while analyzing it:
16 instances of "io.netty.buffer.PoolArena$HeapArena", loaded by "org.apache.catalina.loader.ParallelWebappClassLoader # 0x7f07994aeb58", occupy 201,697,824 (15.40%) bytes.
I have seen in the blog post "Memory accumulated in netty PoolChunk" that adding -Dio.netty.allocator.type=unpooled showed a significant reduction in memory. Where do we need to add this property on our Tomcat servers?
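
A hedged answer sketch: on a stock Tomcat installation, JVM system properties are usually added to CATALINA_OPTS in bin/setenv.sh (create the file if it does not exist); the exact location may differ if your distribution packages Tomcat differently:

    # $CATALINA_BASE/bin/setenv.sh
    CATALINA_OPTS="$CATALINA_OPTS -Dio.netty.allocator.type=unpooled"
    export CATALINA_OPTS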

Docker doesn't kill containers on OOM

I made two containers that both malloc in a loop until the server runs out of memory, on a remote server running Debian 9 with swap enabled (4 GB RAM, 1 GB swap). When running a single one (the host doesn't run any other services, pretty much only dockerd), it gets killed in a minute or so and everything is fine. Running two or three at the same time causes the server to lock up, making SSH unresponsive. Why don't these containers (I assume they have really high OOM scores) get killed by the OOM killer?
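
One hedged workaround sketch (the image name is a placeholder): giving each container an explicit memory limit lets the cgroup OOM killer terminate it before the host itself starts thrashing swap and becomes unreachable:

    # cap the container at 512 MB of RAM and forbid it from using swap
    docker run --rm -m 512m --memory-swap=512m malloc-loop-image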

Why does Java 8 allocate 1.07 GB of Metaspace but use only 81 MB?

I am analyzing the GC log from my application.
I wonder why my JVM allocated 1.07 gigabytes for Metaspace but used only 81 megabytes.
I use jdk8_8.91.14 (Oracle JDK) without any additional memory settings.
Those numbers come from analyzing the GC log file (-XX:+PrintGCDetails) with http://gceasy.io/.
All used metadata was allocated shortly after the application started, and it stays that way for the whole application lifetime.
Why are the JVM defaults so wasteful when it comes to metadata?
It seems that in my case I just waste 1 GB of memory.
How can I safely tune Metaspace so that it starts small (like 52 MB), grows only when needed, and grows in small chunks?
I am running the application on a virtual machine, CentOS Linux release 7.2.1511 (Core).
Inside that VM, I have Docker with Ubuntu 14.04.4 LTS.
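
A hedged tuning sketch for HotSpot 8 (the values and jar name are illustrative, not recommendations): MetaspaceSize sets the committed size at which the first Metaspace-induced GC is triggered, MaxMetaspaceSize caps growth, and the free-ratio flags control how much headroom is kept after a Metaspace GC. Note also that some tools report reserved Metaspace, which by default includes roughly 1 GB of address space reserved for the compressed class space (-XX:CompressedClassSpaceSize); reserved address space is not the same as used physical memory.

    java -XX:MetaspaceSize=64m \
         -XX:MaxMetaspaceSize=256m \
         -XX:MinMetaspaceFreeRatio=10 \
         -XX:MaxMetaspaceFreeRatio=20 \
         -jar myapp.jar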

Buffer/cache uses 100% of memory

I have a Linux box with CentOS 6.6 installed and 7 GB of RAM, running Apache on top of it. Every night the buffers and cache consume 6 GB of the 7 GB, but when I check through the top command no process uses that much RAM; only the buffers/cache do. Please help.
Linux tries to make good use of all the free memory, so it uses it to cache system I/O (file data read from or written to disk) in order to reduce further disk access (in your case, serving the static content faster).
It dynamically shrinks the buffers/cache when processes require more space, for example when the Apache configuration is changed to use more modules or spawn more workers.
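
A quick way to see this, as a sketch: the "-/+ buffers/cache" line that free prints on CentOS 6 shows how much memory would be available to applications once the reclaimable cache is subtracted:

    free -m
    # dropping the caches (rarely needed; only to demonstrate they are reclaimable):
    sync && echo 3 > /proc/sys/vm/drop_caches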

Java OutOfMemoryError in Windows Azure Virtual Machine

When I run my Java application on a Windows Azure Ubuntu 12.04 VM,
with 4 × 1.6 GHz cores and 7 GB of RAM, I get the following out-of-memory error after a few minutes:
java.lang.OutOfMemoryError: GC overhead limit exceeded
I have a swap size of 15 GB, and the max heap size is set to 2 GB. I am using Oracle Java 1.6. Increasing the max heap size only delays the out-of-memory error.
It seems the JVM is not doing garbage collection.
However, when I run the above Java application on my local Windows 8 PC (Core i7) with the same JVM parameters, it runs fine. The heap size never exceeds 1 GB.
Is there any extra setting on a Windows Azure Linux VM for running Java apps?
On the Azure VM, I used the following JVM parameter
-XX:+HeapDumpOnOutOfMemoryError
to get a heap dump. The heap dump shows that an actor mailbox and Camel messages are taking up all of the 2 GB.
In my Akka application, I have used Akka Camel Redis to publish processed messages to a Redis channel.
The out-of-memory error goes away when I stub out the above Camel actor. It looks as though the Akka Camel Redis actor
is not performant on the VM, which has a slower CPU clock speed than my Xeon CPU.
Shing
The GC throws this exception when too much time is spent in garbage collection without collecting much. I believe the default thresholds are more than 98% of CPU time spent on GC while less than 2% of the heap is recovered.
This is to prevent applications from running for an extended period of time while making no progress because the heap is too small.
You can turn this check off with the command-line option -XX:-UseGCOverheadLimit.
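
A hedged example of how these flags fit together on the command line (heap size, dump path and jar name are illustrative):

    java -Xmx2g \
         -XX:+HeapDumpOnOutOfMemoryError \
         -XX:HeapDumpPath=/var/log/myapp/heap.hprof \
         -XX:-UseGCOverheadLimit \
         -jar myapp.jar

Bear in mind that disabling the overhead limit usually just postpones a plain java.lang.OutOfMemoryError: Java heap space; the underlying retention (here, the growing actor mailbox) still has to be fixed.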

Resources