Please help me: my live application sometimes throws an out-of-memory exception (Java heap space), even though I set the max heap size to 512M, which is half of the virtual server's memory.
I've searched on Google and traced my server as shown in the attached image.
Can anyone tell me where the error is, please?
The data in the console is below:
System load:   0.01                Processes:        74
Usage of /:    16.2% of 29.40GB    Users logged in:  0
Memory usage:  60%
Swap usage:    0%
developer@pc:/$ free -m
             total       used       free     shared    buffers     cached
Mem:           994        754        239          0         24        138
-/+ buffers/cache:        592        401
Swap:            0          0          0
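For reference, a minimal sketch of how the 512M limit is typically passed and how a heap dump can be captured on the next OutOfMemoryError; the launch command, jar name and dump path below are placeholders, not taken from the actual deployment:

$ java -Xmx512m \
       -XX:+HeapDumpOnOutOfMemoryError \
       -XX:HeapDumpPath=/tmp/app-oom.hprof \
       -jar myapp.jar
# The resulting .hprof file can be opened in a heap analyzer (e.g. Eclipse MAT)
# to see which objects are filling the 512M heap when the exception is thrown.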
I am using the forever package to run my Node.js script (not a web server). However, because of it I have a memory leak, and even after stopping all processes my memory is still taken:
root@aviok-cdc-elas-001:~# forever stopall
info:    No forever processes running
root@aviok-cdc-elas-001:~# forever list
info:    No forever processes running
root@aviok-cdc-elas-001:~# free -lm
             total       used       free     shared    buffers     cached
Mem:         11721       6900       4821          5        188       1242
Low:         11721       6900       4821
High:            0          0          0
-/+ buffers/cache:        5469       6252
Swap:            0          0          0
Also worth mentioning: there is no memory leak from the script when it is run locally without forever. I run it on an Ubuntu server. And if I were to reboot the server now:
root@aviok-cdc-elas-001:~# reboot
Broadcast message from root@aviok-cdc-elas-001
(/dev/pts/0) at 3:19 ...
The system is going down for reboot NOW!
My RAM would be free again:
root@aviok-cdc-elas-001:~# free -lm
             total       used       free     shared    buffers     cached
Mem:         11721       1259      10462          5         64        288
Low:         11721       1259      10462
High:            0          0          0
-/+ buffers/cache:         905      10816
Swap:            0          0          0
I also want to mention that when my script finishes what it is doing (and it does eventually), I call db.close() and process.exit() to make sure everything is shut down on my script's side. However, even after that the RAM is still taken. Now, I am aware that forever will run the script again after it exits. So my questions are:
How do I tell forever to not execute script again if it is finished?
How do I stop forever properly so it does NOT take any RAM after I stopped it?
The reason I am using the forever package for this is that my script needs a lot of time to do what it does; my SSH session would end, and so would a Node script run the regular way.
From what I can see, the RAM isn't taken away or leaking; it's being used by Linux as file system cache (because unused RAM is wasted RAM).
Looking at the -/+ buffers/cache line of your free output, 6252 MB is actually available to processes; the portion of the 6900 MB of "used" RAM that is buffers and page cache will be reduced automatically by Linux when processes request memory.
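If you want to convince yourself that this is reclaimable cache rather than a leak, a rough check (diagnostic only, run as root, and not something to do routinely on a busy box) is to drop the caches and rerun free:

$ sync                                             # flush dirty pages first
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # drop clean page cache, dentries and inodes
$ free -lm                                         # "buffers"/"cached" shrink, "free" grows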
If you want a long-running process to keep running after you log out (or after your SSH session gets killed), you have various options that don't require forever:
Background the process, making sure that any "logout" signals are ignored:
$ nohup node script.js &
Use a terminal multiplexer like tmux or screen.
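For example, a rough tmux workflow (the session name is arbitrary):

$ tmux new -s longjob     # start a named session
$ node script.js          # run the script inside it
# detach with Ctrl-b d; the session and the script survive the SSH logout
$ tmux attach -t longjob  # reattach later to check progress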
I have been running a tcpdump-based script on Ubuntu for some time, and recently I was asked to run it on CentOS 6.5; I'm noticing some very interesting differences.
I'm running tcpdump 4.6.2 and libpcap 1.6.2 on both setups; both are actually running on the same hardware (dual-booted).
I'm running the same command on both OSes:
sudo /usr/sbin/tcpdump -s 0 -nei eth9 -w /mnt/tmpfs/eth9_rx.pcap -B 2000000
From "free -k", I see about 2G allocated on Ubuntu
Before:
free -k
             total       used       free     shared    buffers     cached
Mem:      65928188    1337008   64591180       1164      26556      68596
-/+ buffers/cache:    1241856   64686332
Swap:     67063804          0   67063804
After:
free -k
             total       used       free     shared    buffers     cached
Mem:      65928188    3341680   62586508       1160      26572      68592
-/+ buffers/cache:    3246516   62681672
Swap:     67063804          0   67063804
expr 3341680 - 1337184
2004496
On CentOS, I see twice the amount of memory (4 GB) being allocated by the same command:
Before:
free -k
             total       used       free     shared    buffers     cached
Mem:      16225932     394000   15831932          0      15308      85384
-/+ buffers/cache:     293308   15932624
Swap:      8183804          0    8183804
After:
free -k
             total       used       free     shared    buffers     cached
Mem:      16225932    4401652   11824280          0      14896      84884
-/+ buffers/cache:    4301872   11924060
Swap:      8183804          0    8183804
expr 4401652 - 394000
4007652
As the command shows, I'm listening on an interface and dumping into a RAM disk.
On Ubuntu, I can capture packets at line rate for large packets (10G, 1024-byte frames).
But on CentOS, I can only capture packets at 60% of line rate (10G, 1024-byte frames).
Also, both OSes are running the same NIC driver version and driver configuration.
My goal is to achieve the same performance on CentOS as I have with Ubuntu.
I googled around, and there seems to be some magic in how libpcap behaves with different kernels. I'm curious whether there are any kernel-side options I have to tweak on the CentOS side to achieve the same performance as on Ubuntu.
This has been answered. According to Guy Harris from tcpdump/libpcap, the difference is due to CentOS 6.5 running a 2.6.x kernel. Below is his response:
"3.2 introduced the TPACKET_V3 version of the "T(urbo)PACKET" memory-mapped packet capture mechanism for PF_PACKET sockets; newer versions of libpcap (1.5 and later) support TPACKET_V3 and will use it if the kernel supports it. TPACKET_V3 makes much more efficient use of the capture buffer; in at least one test, it dropped fewer packets. It also might impose less overhead, so that asking for a 2GB buffer takes less kernel memory."
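A quick way to sanity-check this on both boxes (just a sketch; the exact version strings will differ) is to compare the kernel and libpcap versions:

$ uname -r            # CentOS 6.5 ships a 2.6.32 kernel; TPACKET_V3 needs >= 3.2
$ tcpdump --version   # prints both the tcpdump and libpcap versions (libpcap >= 1.5 needed)
# Only the box that satisfies both requirements gets the more efficient
# TPACKET_V3 capture path described above.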
I've recently activated OPcache, but it doesn't appear to be working.
It's confirmed as activated via phpinfo().
As you can see:
0 hits
1 miss
1 cached script (the OPcache GUI)
What am I missing?
The server is a CentOS 6.5 Linux VPS running PHP 5.5.
A bit more info about the OPcache configuration and status:
opcache_enabled true
cache_full false
restart_pending false
restart_in_progress false
used_memory 8.54 MB
free_memory 503.46 MB
wasted_memory 0 bytes
current_wasted_percentage 0.00%
buffer_size 4194304
used_memory 446.41 kB
free_memory 3.56 MB
number_of_strings 4895
num_cached_scripts 1
num_cached_keys 1
max_cached_keys 65407
hits 0
start_time Sat, 26 Jul 14 23:20:32 +0000
last_restart_time never
oom_restarts 0
hash_restarts 0
manual_restarts 0
misses 1
blacklist_misses 0
blacklist_miss_ratio 0.00%
opcache_hit_rate 0.00%
It looks like you are using CGI rather than mod_php5. The shared memory area (SMA) is used for both, but it only persists from request to request with the latter.
I had this issue on a WHM/cPanel server today. As TerryE suggests, you are probably running CGI or suPHP. Change to DSO.
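A rough way to check which SAPI Apache is actually using on a CentOS box (the module and binary names are assumptions and vary between setups):

$ httpd -M 2>/dev/null | grep -i php   # "php5_module" listed => mod_php loaded as DSO
$ php -i | grep 'Server API'           # note: the CLI reports its own SAPI, not Apache's
# If PHP requests are instead handled by CGI/suPHP, each request runs in its own
# short-lived process, so the OPcache shared memory area never survives between
# requests - which matches the 0-hit counters above.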
Hi, I just installed Cloudera Manager on my cluster: 1 namenode and 4 datanodes, each datanode with 64 GB RAM, a 24-core Xeon CPU, sixteen 1 TB SAS disks, etc.
I installed a brand-new Red Hat Linux and upgraded it to 6.5; each disk has been logically set up as RAID 0 since there is no JBOD option available on the array controller.
I am running a Hive query, and here is the top output from one of the datanodes. I am confused and wondering whether an experienced Hadoop admin could help me understand if my cluster is working fine.
Why is only 1 task running out of 897 while the other 896 are sleeping? There are 2271 mappers for that Hive query and it is only 80% done on the mapper side.
The load average is 8.66. I read that if your computer is working hard, the load average should be around the number of cores. Is my datanode working hard enough?
69 of the 70 GB of memory shows as "used", yet the active YARN processes seem to have a fairly low memory cost; how could those 64 GB of memory be so easily used up?
Here is the top output:
top - 22:50:24 up 1 day, 8:24, 3 users, load average: 8.66, 8.50, 7.95
Tasks: 897 total, 1 running, 896 sleeping, 0 stopped, 0 zombie
Cpu(s): 32.3%us, 5.2%sy, 0.0%ni, 62.3%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st
Mem: 70096068k total, 69286800k used, 809268k free, 222268k buffers
Swap: 4194296k total, 0k used, 4194296k free, 61468376k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
439 yarn 20 0 1417m 591m 19m S 193.9 0.9 1:06.12 java
561 yarn 20 0 1401m 581m 19m S 193.2 0.8 0:19.75 java
721 yarn 20 0 1415m 561m 19m S 172.0 0.8 0:08.54 java
611 yarn 20 0 1415m 574m 19m S 127.0 0.8 0:16.87 java
354 yarn 20 0 1428m 595m 19m S 121.4 0.9 0:35.96 java
27418 yarn 20 0 1513m 483m 18m S 13.6 0.7 18:26.14 java
16895 hdfs 20 0 1438m 410m 18m S 9.6 0.6 103:23.70 java
3726 hdfs 20 0 860m 249m 21m S 1.7 0.4 2:12.28 java
I am fairly new to system administration, and any metric tool or common-sense advice will be much appreciated! Thanks!
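Not a full answer, but a minimal diagnostic sketch (assuming standard Linux tools are installed) for the two points above: comparing the load average with the core count, and seeing where the "used" memory actually goes:

$ nproc                            # number of cores to compare the load average against
$ free -m                          # the "cached" column is page cache, not process memory
$ ps aux --sort=-rss | head -n 10  # the processes that actually hold RAM (RSS)
# In the top output above, ~61 GB of the "used" memory is reported as cached,
# i.e. file data the kernel keeps in RAM and gives back as soon as the YARN
# containers ask for more.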
I am using an NFS-mounted file system on a Linux-based embedded box. I have a few shared libraries whose sizes vary from 1 MB to 20 MB, and I am running an application that depends on these libraries.
While running the application, I checked /proc/TaskPID/smaps:
Size: 4692 kB
Rss: 1880 kB
Pss: 1880 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 1880 kB
Private_Dirty: 0 kB
Referenced: 1880 kB
Anonymous: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Now, as per my understanding, this means the library is only partially loaded (since Rss reports a smaller value than Size)? If so, when some other portion is referenced, bringing that part into memory (I hope my understanding is correct) will be more costly on an NFS-mounted system. So can we make it load everything before running?
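As far as I know, the dynamic linker has no option to pre-fault every page of a library, but two common mitigations are sketched below (the library paths and the application name are placeholders, not taken from the question):

# 1. Read the libraries once at startup so their pages are already in the local
#    page cache; later demand paging then avoids extra NFS round trips:
$ cat /nfs/libs/libfoo.so.1 /nfs/libs/libbar.so.1 > /dev/null
# 2. Resolve all dynamic symbols eagerly at program start instead of lazily on
#    first call (slower startup, but fewer page-ins later):
$ LD_BIND_NOW=1 ./my_application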