Unknown memory utilization in Ubuntu 14.04 Trusty

I'm running Ubuntu Trusty 14.04 on a new machine with 8 GB of RAM, and it seems to lock up periodically, with nothing in the syslog. I've installed Nagios and have been watching the graphs, and memory usage climbs from 7% to 72% in a span of just 10 minutes. Only Node.js processes are running on the server, and in top every process shows normal memory consumption. Even after stopping the Node processes, memory utilization stays the same.
free agrees, claiming I'm using more than 5.7G of memory:
free -h
             total       used       free     shared    buffers     cached
Mem:          7.8G       6.5G       1.3G       2.2M       233M       612M
-/+ buffers/cache:       5.7G       2.1G
Swap:         2.0G         0B       2.0G
This other formula for totaling the memory roughly agrees:
# ps -e -orss=,args= | sort -b -k1,1n | awk '{total = total + $1}END{print total}'
503612
If the processes only total about 500 MiB (503,612 KiB of RSS), where's the rest of the memory going?

I found the solution, so I'm updating this question with it:
echo 2 > /proc/sys/vm/drop_caches
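For context on why this helps: writing 2 to drop_caches asks the kernel to reclaim slab objects such as dentries and inodes, which ps does not attribute to any process, so they never show up in a per-process RSS total. If you want to confirm that slab memory is the culprit before dropping it, /proc/meminfo and slabtop (from procps) can show it; a quick check might look like:
# how much memory is in kernel slab, and how much of it is reclaimable
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo
# top slab caches, one-shot output (run as root)
slabtop -o | head -n 15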
Dropping the caches resolved my issue, so I added the same command to cron to run every 5 minutes on each of my Ubuntu servers.
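For reference, a minimal root cron entry for this would look something like the sketch below (the file name and the preceding sync are my assumptions; syncing first is a common precaution because drop_caches only discards clean, reclaimable objects):
# hypothetical /etc/cron.d/drop-caches entry, run as root every 5 minutes
*/5 * * * * root sync; echo 2 > /proc/sys/vm/drop_caches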

Related

buff/cache gets filled, how to clear it on an Azure VM

My RAM filled up completely and buff/cache occupied all of the memory. Please let me know how to clear it. This is a Linux VM on Azure running Bitnami WordPress. When I run free -m I get this output:
$ free -m
              total        used        free      shared  buff/cache   available
Mem:          64320       19319        3164         115       41837       44168
Swap:             0
I tried to reproduce this in my environment and saw similar buff/cache usage. To clear the buff/cache, use one of the commands below. Linux offers three options for clearing the cache:
First - clear page cache only
echo 1 > /proc/sys/vm/drop_caches
Second - clear dentries and inodes
echo 2 > /proc/sys/vm/drop_caches
Third - clear page cache and dentries and inodes
echo 3 > /proc/sys/vm/drop_caches
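Equivalently, the same tunable can be set through sysctl, which is handy under sudo where the shell redirection above would otherwise fail; it is also generally worth running sync first so dirty pages are written back, since drop_caches only frees clean cache:
sync
sudo sysctl vm.drop_caches=3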
When I ran this on a machine with 912M of total RAM and 144M used, I had 352M free beforehand; after executing the command, buff/cache dropped from 415M to 58M and free RAM increased to 708M.
Reference:
Clear RAM Memory Cache, Buffer, and Swap Space on Linux, by Arpit Saini

Explanation of available disk space on Linux

Can you explain why the available space is only 78G, when the difference between 905G and 782G is 123G?
Where is the other 45G?
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2        905G  782G   78G  92% /var
The file system is ext3.
I checked, and this is normal on ext3: the filesystem reserves 5% of the disk for the root user by default, and 5% of 905G is roughly 45G, which accounts for the missing space. To free the reserved blocks, set the reserved percentage to zero (substituting your own device, e.g. /dev/md2):
tune2fs -m 0 /dev/sda1
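If you want to verify the reservation before changing it, tune2fs can report the current reserved block count (here assuming /dev/md2 is the filesystem from the question):
# show the number of blocks reserved for root on an ext2/3/4 filesystem
tune2fs -l /dev/md2 | grep -i 'reserved block count'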

Diagnosing the SIGKILL signal

I'm trying to run a simulation on my local computer at university, but after some iterations it is killed by SIGKILL. Even when I check the available swap space, it shows that I still have enough:
:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          3937       2091       1845          0         64        677
-/+ buffers/cache:       1349       2587
Swap:         3860        738       3122
The same thing happens when I use another server over SSH:
:~$ free -m
             total       used       free     shared    buffers     cached
Mem:        129043      98281      30761         52          4      32901
-/+ buffers/cache:       65375      63668
Swap:         4095        120       3975
When I run it on my own laptop it works properly. I'd really appreciate it if someone could help me out.
Are you checking the swap space after the fact or during the run? If there is a memory crunch, the operating system's out-of-memory killer (the OOM killer) may kill the process; depending on configuration it may pick the worst offender, a random process, or something else entirely. Run the sar command and look at the system state around the time your process was killed.
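As a concrete starting point, the kernel logs every OOM kill, and sar (from the sysstat package, assuming it is installed and collecting samples) can replay memory and swap pressure around the time of death; a session might look like:
# did the OOM killer fire, and which process did it pick?
dmesg | grep -i 'killed process'
grep -i 'out of memory' /var/log/syslog   # /var/log/messages on some distributions
# memory (-r) and swap (-S) utilization over today's samples
sar -r
sar -S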

CentOS 7 fails to boot crash kernel and generate dump in /var/crash

We have an issue where our CentOS 7 server will not generate a kernel dump file in /var/crash upon kernel panic. It appears the crash kernel never boots. We've followed the RHEL guide (http://red.ht/1sCztdv) on configuring crash dumps, and at first glance everything appears to be configured correctly. We are triggering a panic like this:
echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
This causes the system to freeze: we get no messages on the console and the console becomes unresponsive. At this point I would expect the system to boot the crash kernel and begin writing a dump out to /var/crash. I've left it in this frozen state for up to 30 minutes to give it time to complete the entire dump, yet after a hard cold reboot /var/crash is empty.
Additionally, I've replicated the configuration in a KVM virtual machine and kdump works as expected there. So there is either something wrong with my configuration on the physical system, or something odd about that hardware config that causes the hang rather than the dump.
Our server is an HP G9 with 24 cores and 128GB of memory. Here are some other details:
[user@host]$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-123.el7.x86_64 root=UUID=287798f7-fe7a-4172-a35a-6a78051af4d2 ro rd.lvm.lv=vg_sda/lv_root vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_sda/lv_swap crashkernel=auto vconsole.keymap=us rhgb nosoftlockup intel_idle.max_cstate=0 mce=ignore_ce processor.max_cstate=0 idle=mwait isolcpus=2-11,14-23
[user@host]$ systemctl is-active kdump
active
[user@host]$ cat /etc/kdump.conf
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31 -c
[user@host]$ cat /proc/iomem | grep Crash
  2b000000-357fffff : Crash kernel
[user@host]$ dmesg | grep Reserving
[    0.000000] Reserving 168MB of memory at 688MB for crashkernel (System RAM: 131037MB)
[user@host]$ df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/vg_sda-lv_root  133G  4.7G  128G   4% /
devtmpfs                     63G     0   63G   0% /dev
tmpfs                        63G     0   63G   0% /dev/shm
tmpfs                        63G  9.1M   63G   1% /run
tmpfs                        63G     0   63G   0% /sys/fs/cgroup
/dev/sda1                   492M  175M  318M  36% /boot
/dev/mapper/vg_sdb-lv_data  2.8T  145G  2.6T   6% /data
After modifying the following parameters we were able to reliably get crash dumps:
Changed crashkernel=auto to crashkernel=1G. I'm not sure why we need 1G, as the formula indicated 128M plus 64M for every 1TB of RAM.
/etc/sysconfig/kdump: removed everything from KDUMP_COMMANDLINE_APPEND except irqpoll nr_cpus=1, resulting in: KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1"
/etc/kdump.conf: added compression ("-c") to the makedumpfile core_collector line. The combined result is sketched below.
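Putting those together, the relevant pieces of configuration would look roughly like this (the GRUB_CMDLINE_LINUX line is abridged to the changed parameter; regenerating grub.cfg, rebooting, and restarting kdump afterwards is the standard CentOS 7 procedure):
# /etc/default/grub -- then: grub2-mkconfig -o /boot/grub2/grub.cfg
GRUB_CMDLINE_LINUX="... crashkernel=1G ..."
# /etc/sysconfig/kdump
KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1"
# /etc/kdump.conf
path /var/crash
core_collector makedumpfile -l --message-level 1 -d 31 -c
# apply and re-test (sysrq must still be enabled as shown above)
systemctl restart kdump
echo c > /proc/sysrq-trigger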
I'm not 100% sure why this works, but it does. I would love to know what others think.
Eric
Eric,
1G seems a bit large; I've never seen anything larger than 200M for a normal server. I'm not sure about the sysconfig settings. Compression is a good idea, but I don't think it would affect the issue, since your target is close to total memory and you're only dumping the kernel ring.

Why is the system CPU time (% sy) high?

I am running a script that loads big files. I ran the same script on a single-core OpenSuSE server and on a quad-core PC. As expected, it is much faster on my PC than on the server. But the script slows the server down and makes it impossible to do anything else on it.
My script is essentially:
for 100 iterations:
    load saved data (about 10 MB)
time myscript (on the PC)
real    0m52.564s
user    0m51.768s
sys     0m0.524s
time myscript (on the server)
real    32m32.810s
user    4m37.677s
sys     12m51.524s
I wonder why "sys" is so high when I run the code on the server. I used the top command to check memory and CPU usage.
There is still free memory, so swapping is not the reason. %sy is very high, which is probably why the server is slow, but I don't know what is causing it. The process using the highest percentage of CPU (99%) is myscript. %wa is zero in the screenshot, but sometimes it gets very high (50%).
When the script is running, the load average stays above 1, but I have never seen it as high as 2.
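One way to see where that kernel time actually goes is strace's syscall summary mode; a quick profile of the run (myscript stands in for the real script name) would be:
# tally time spent per system call; -f follows child processes
strace -c -f ./myscript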
I also checked my disk:
strt:~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 16480 MB in 2.00 seconds = 8247.94 MB/sec
Timing buffered disk reads: 20 MB in 3.44 seconds = 5.81 MB/sec
john@strt:~> df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       245G  102G  131G  44% /
udev            4.0G  152K  4.0G   1% /dev
tmpfs           4.0G   76K  4.0G   1% /dev/shm
I have checked these things, but I am still not sure what the real problem on my server is or how to fix it. Can anyone identify a probable reason for the slowness, and what the solution could be?
Or is there anything else I should check?
Thanks!
You're seeing high sys activity because loading the data requires system calls, which execute in the kernel. It may be possible to resolve the slowness without upgrading the server: you can lower the script's scheduling priority so it yields to other work. See the man pages for nice and renice, and note especially:
Niceness values range from -20 (the highest priority, lowest niceness) to 19 (the lowest priority, highest niceness).
$ ps -lp 941
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 0 941 1 0 70 -10 - 1713 poll_s ? 00:00:00 sshd
$ nice -n 19 ./test.sh
My niceness value is 19
$ renice -n 10 -p 941
941 (process ID) old priority -10, new priority 10
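Applied to the question's workload, the same idea would look like the sketch below. The ionice part goes beyond the original answer, but since the script is dominated by file loads, lowering its I/O priority as well can help keep the server responsive (ionice ships with util-linux; class 3 is the idle class):
# run the script at the lowest CPU priority and idle I/O priority
ionice -c 3 nice -n 19 ./myscript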
