Buffer/cache uses 100% of memory - Linux

I have a Linux box running CentOS 6.6 with 7 GB of RAM and Apache on top of it. Every night the buffer/cache consumes 6 GB of the 7 GB, but when I check through the top command, no process uses that much RAM; only the buffer/cache does. Please help.

Linux tries to make good use of all free memory, so it uses it to cache system I/O (file data passing between disk and memory) in order to reduce further disk access (in your case, serving the static content faster).
It dynamically shrinks the buffer/cache when processes require more space, for example if you change the Apache configuration to load more modules or spawn more workers.
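
You can verify from the shell that this memory is reclaimable; a quick sketch (the drop_caches step only demonstrates the point and is best avoided on a busy production box):

    # On CentOS 6, the "-/+ buffers/cache" row of free shows memory use
    # excluding the reclaimable page cache:
    free -m

    # The kernel counters behind those numbers:
    grep -E '^Buffers|^Cached' /proc/meminfo

    # As root, drop the page cache and watch free memory jump back up:
    sync && echo 3 > /proc/sys/vm/drop_caches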

Related

GraphX: disk space running low

I am currently using Apache Spark with GraphX. I have noticed lately that when I run my application with a lot of data, it uses a large part of my disk: before I start the program about 8 GB are free, and while the application runs that drops to 1 GB. When I close the application the space is restored, but not in full; I have lost some gigabytes. At first I thought it had to do with swap memory and logs, but I cannot find what is stored on my disk after the execution of the application.
Can someone explain why this is happening?
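
The space most likely goes to Spark's local scratch directories, where shuffle and spill files are written and where leftovers can survive an unclean shutdown. A sketch of where to look, assuming default settings (the /mnt path below is illustrative):

    # Spark spills shuffle and cache blocks under spark.local.dir
    # (default /tmp); check what a run leaves behind:
    du -sh /tmp/spark-* /tmp/blockmgr-* 2>/dev/null

    # Optionally point scratch space at a dedicated disk in
    # conf/spark-defaults.conf:
    # spark.local.dir    /mnt/spark-scratch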

Too many Cassandra processes on the server

The following is a screenshot of htop on my dev server (sorted by MEM% used):
I have only one Cassandra instance running, but there are many Cassandra processes in htop, which are taking up 16 GB of RAM.
The server is not being used in production, so no queries are being run on it at the moment.
I don't understand why so many Cassandra processes are running on my system, or how I can control this. Any suggestions will be highly appreciated.
The many rows are not separate processes: by default htop lists every JVM thread on its own line (press H to toggle), so they are all threads of the single Cassandra process sharing the same memory.
Cassandra is a greedy process; it won't release RAM unless asked to.
You do not need to worry about the used RAM. If another process requests memory, Cassandra will give it back.
Cassandra can typically take up to 16 GB of RAM, which is the minimum production recommendation from a performance point of view. Along with the Cassandra tables themselves, other allocations such as the JVM heap share that memory. As mentioned above, it is a memory-intensive technology.
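
If you want to bound Cassandra's footprint on a dev box, the heap can be capped in cassandra-env.sh; the values below are illustrative, not a recommendation:

    # conf/cassandra-env.sh -- set both together:
    MAX_HEAP_SIZE="4G"
    HEAP_NEWSIZE="800M"

    # Confirm there is really just one Cassandra process (as opposed
    # to the per-thread rows htop shows):
    ps -ef | grep '[c]assandra' | wc -l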

Why does Java 8 allocate 1.07 GB of Metaspace but use only 81 MB?

I am analyzing a GC log from my application.
I wonder why my JVM allocated 1.07 GB for Metaspace but used only 81 MB.
I use jdk8_8.91.14 (Oracle JDK) without any additional memory settings.
Those numbers come from analyzing the GC log file (-XX:+PrintGCDetails) with http://gceasy.io/
All used metadata was allocated shortly after the application started, and it stays that way for the whole application lifetime.
Why are the JVM defaults so wasteful when it comes to metadata?
It seems that in my case I just waste 1 GB of memory.
How can I safely tune Metaspace so it starts small (like 52 MB), grows only when needed, and grows in small chunks?
I am running the application on a virtual machine, CentOS Linux release 7.2.1511 (Core).
Inside that VM, I have Docker with Ubuntu 14.04.4 LTS.
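
For what it's worth, the 1.07 GB figure is likely reserved virtual address space rather than committed RAM: HotSpot reserves 1 GB for the compressed class space by default, and GC log analyzers report the reservation. A sketch of the relevant HotSpot flags (sizes are illustrative, and app.jar stands in for your application):

    # -XX:MetaspaceSize=52m            first GC-triggering threshold
    # -XX:MaxMetaspaceSize=256m        hard cap on metaspace
    # -XX:CompressedClassSpaceSize=64m shrink the 1 GB default reservation
    java -XX:MetaspaceSize=52m \
         -XX:MaxMetaspaceSize=256m \
         -XX:CompressedClassSpaceSize=64m \
         -XX:+PrintGCDetails \
         -jar app.jar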

Potential memory leak on SUSE

I have a SUSE server running Tomcat with my web application (which has background threads updating a database).
The server has 4 GB of RAM and Tomcat is configured to use a maximum of 1 GB.
After running for a few days, the free command shows that the system has only 300 MB of free memory. Tomcat uses only 400 MB, and no other process seems to use an unreasonable amount of memory.
Adding up the memory usage of all processes (returned by the ps aux command) accounts for only 2 GB.
Is there any way to identify whether there is a leak at the system level?
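
Before suspecting a leak, check whether the "missing" memory is reclaimable page cache or kernel slab; a sketch of the usual checks, assuming a standard procps setup:

    free -m                        # look at the buffers/cache columns first
    grep -E 'Cached|Buffers|Slab|SReclaimable' /proc/meminfo
    slabtop -o | head -15          # kernel slab consumers
    ps aux --sort=-rss | head      # top userland consumers by resident size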

GC in server mode not collecting memory

An IIS-hosted WCF service is consuming a large amount of memory, around 18 GB, and the server has slowed down.
I analyzed a minidump file and it shows only 1 GB of active objects. I understand the GC is not clearing the memory, and the GC must be running in server mode on a 64-bit system. Any idea why the whole computer is stalling and the app is taking huge amounts of memory?
The GC was running in server mode; it had been configured that way for better performance. I understand that the server-mode GC improves throughput because collections are not triggered as frequently while plenty of memory is available, and server mode has a higher limit on memory usage. The problem here was that when the process reached that high limit, the CLR triggered a collection that tried to clear the huge 18 GB in one shot; it consumed 90% of system resources and the other applications lagged.
We tried restarting, but it was taking forever, so we had to kill the process. With workstation-mode GC everything now runs smoothly and cleanly. The only difference is some delay in response time due to GC kicking in after about 1.5 GB of allocation.
One more piece of information: .NET 4.5 includes GC revisions that resolve this issue.
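
The switch described above is a one-line configuration change; a sketch, assuming the service's .config file is editable (for IIS-hosted apps the setting can also live in Aspnet.config):

    <!-- Use workstation GC instead of server GC -->
    <configuration>
      <runtime>
        <gcServer enabled="false"/>
      </runtime>
    </configuration>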
