Node forever (npm package) memory leak on the server - node.js

I am using the forever package to run my Node.js script (not a web server). However, because of it I have a memory leak, and even after stopping all processes my memory is still taken:
root@aviok-cdc-elas-001:~# forever stopall
info: No forever processes running
root@aviok-cdc-elas-001:~# forever list
info: No forever processes running
root@aviok-cdc-elas-001:~# free -lm
             total       used       free     shared    buffers     cached
Mem:         11721       6900       4821          5        188       1242
Low:         11721       6900       4821
High:            0          0          0
-/+ buffers/cache:        5469       6252
Swap:            0          0          0
Also to mention, there is no memory leak from the script when it is run locally without forever. I run it on an Ubuntu server. And if I were to reboot the server now:
root@aviok-cdc-elas-001:~# reboot
Broadcast message from root@aviok-cdc-elas-001
(/dev/pts/0) at 3:19 ...
The system is going down for reboot NOW!
My RAM would be free again:
root@aviok-cdc-elas-001:~# free -lm
             total       used       free     shared    buffers     cached
Mem:         11721       1259      10462          5         64        288
Low:         11721       1259      10462
High:            0          0          0
-/+ buffers/cache:         905      10816
Swap:            0          0          0
I also want to mention that when my script finishes what it is doing (and it does, eventually), I call db.close and process.exit to make sure everything is shut down on my script's side. However, even after that, the RAM stays taken. Now I am aware that forever will run the script again after it is killed. So my questions are:
How do I tell forever not to execute the script again once it has finished?
How do I stop forever properly so it does NOT take any RAM after I stop it?
The reason I am using the forever package for this is that my script needs a lot of time to do what it does, and otherwise my SSH session would end, and so would a Node script started the regular way.

From what I can see, the RAM isn't being taken away or leaking; it's being used by Linux as file system cache (because unused RAM is wasted RAM).
Of the 6900 MB of "used" RAM, roughly 1430 MB (188 MB of buffers plus 1242 MB of cache) is file system cache, and Linux reduces that amount automatically when processes request memory; the "-/+ buffers/cache" row shows usage with the cache factored out.
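If you want to convince yourself that this really is reclaimable cache rather than a leak, one rough check (purely illustrative; dropping caches is harmless but temporarily slows file access while the cache refills) is:
$ free -lm                                         # note the "buffers" and "cached" columns
$ sync                                             # flush dirty pages to disk first
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # ask the kernel to drop page cache, dentries and inodes
$ free -lm                                         # "buffers"/"cached" shrink and "free" grows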
If you want a long-running process to keep running after you log out (or after your SSH session gets killed), you have various options that don't require forever:
Background the process, making sure that any "logout" signals are ignored:
$ nohup node script.js &
Use a terminal multiplexer like tmux or screen.
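For example, with tmux (screen works the same way, just with different key bindings) the workflow would look roughly like this; the session name "longjob" is only an example:
$ tmux new -s longjob        # start a new named session
$ node script.js             # run the script inside that session
# detach with Ctrl-b d, then log out; the script keeps running on the server
$ tmux attach -t longjob     # later, reattach to check on progress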

Related

How to kill locked Node process in WSL

My web application running in WSL occasionally gets stuck. I am able to close the script, but the process keeps running in the background and keeps my files locked.
Detailed info
The web application runs with webpack dev server listening for changes in the code. When doing git operations, the files are sometimes locked and I cannot make changes.
I can see the process by running
$ ps aux
The process is taking a lot of memory.
I tried killing the process with
$ kill -9 604
$ pkill -f node
$ kill -SIGKILL 604
But none of them work.
I even tried to kill the process from Task Manager, but it's still there.
(Windows Subsystem for Linux running on Windows 10)
Hello, same problem on Windows 11:
sudo kill -9 2769
mike 2769 1.5 5.7 13169872 1914268 ? Z 11:54 7:42 /home/mike/.nvm/versions/node/v14.17.4/bin/node /mnt/d/dev/repo/mikecodeur/react-course-app/example/react-fundamentals/node_modules/react-scripts/scripts/start.js
mike 2907 0.1 0.0 0 0 ? Z 11:54 0:36 [node] <defunct>
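The Z state and <defunct> in that ps output mean these are zombie processes: they have already exited, so kill -9 has nothing left to kill, and the entry only disappears once the parent process reaps it (or the parent itself exits). So the usual options are to deal with the parent, or on WSL simply to restart the whole distribution from Windows; a rough sketch (2769 is the PID from the output above, <PPID> is a placeholder, and wsl --shutdown needs a reasonably recent wsl.exe):
$ ps -o ppid= -p 2769        # print the parent PID of the zombie
$ kill <PPID>                # terminate that parent so the zombie gets reaped
# or, from a Windows PowerShell/cmd prompt, stop WSL entirely:
> wsl --shutdown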

Java heap out of memory exception tomcat linux

Please help me, my live application sometimes throws an out of memory exception (Java heap space),
even though I set the maximum heap size to 512M, half of the virtual server's size.
I've searched on Google and traced my server as in the attached image.
Can anyone tell me where the error is, please?
The data in the console is below:
System load: 0.01 Processes: 74
Usage of /: 16.2% of 29.40GB Users logged in: 0
Memory usage: 60%
Swap usage: 0%
developer@pc:/$ free -m
             total       used       free     shared    buffers     cached
Mem:           994        754        239          0         24        138
-/+ buffers/cache:         592        401
Swap:            0          0          0
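As a general pointer (not specific to this server): on Linux, Tomcat heap limits are normally set through CATALINA_OPTS in $CATALINA_HOME/bin/setenv.sh, and enabling a heap dump on OutOfMemoryError is the quickest way to see what is actually filling the 512M heap. A sketch with example values only:
# $CATALINA_HOME/bin/setenv.sh (create the file if it does not exist)
CATALINA_OPTS="$CATALINA_OPTS -Xms256m -Xmx512m"                 # explicit heap bounds
CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError"   # write a heap dump when the OOM hits
CATALINA_OPTS="$CATALINA_OPTS -XX:HeapDumpPath=/tmp"             # where the .hprof file goes
export CATALINA_OPTS
The resulting .hprof file can then be opened in a heap analyser such as Eclipse MAT to see which objects are retaining the memory.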

libpcap performance and behavior differences between Ubuntu 14.04 and CentOS 6.5

I have been running a tcpdump-based script on Ubuntu for some time, and recently I was asked to run it on CentOS 6.5; I'm noticing some very interesting differences.
I'm running tcpdump 4.6.2 and libpcap 1.6.2 on both setups, and both are actually running on the same hardware (dual booted).
I'm running the same command on both OSes:
sudo /usr/sbin/tcpdump -s 0 -nei eth9 -w /mnt/tmpfs/eth9_rx.pcap -B 2000000
From "free -k", I see about 2G allocated on Ubuntu
Before:
free -k
             total       used       free     shared    buffers     cached
Mem:      65928188    1337008   64591180       1164      26556      68596
-/+ buffers/cache:    1241856   64686332
Swap:     67063804          0   67063804
After:
free -k
             total       used       free     shared    buffers     cached
Mem:      65928188    3341680   62586508       1160      26572      68592
-/+ buffers/cache:    3246516   62681672
Swap:     67063804          0   67063804
expr 3341680 - 1337184
2004496
On CentOS, I see twice the amount of memory (4G) being allocated by the same command:
Before:
free -k
             total       used       free     shared    buffers     cached
Mem:      16225932     394000   15831932          0      15308      85384
-/+ buffers/cache:     293308   15932624
Swap:      8183804          0    8183804
After:
free -k
             total       used       free     shared    buffers     cached
Mem:      16225932    4401652   11824280          0      14896      84884
-/+ buffers/cache:    4301872   11924060
Swap:      8183804          0    8183804
expr 4401652 - 394000
4007652
With this command, I'm listening on an interface and dumping into a RAM disk.
On Ubuntu, I can capture packets at line rate for large size packets (10G, 1024 byte frames)
But on CentOS, I can only capture packets at 60% of line rate (10G, 1024 byte frames)
Also, both OSes are running the same NIC driver version and driver configuration.
My goal is to achieve the same performance on CentOS as I have with Ubuntu.
I googled around and it seems libpcap behaves differently with different kernels. I'm curious whether there are any kernel-side options I have to tweak on the CentOS side to achieve the same performance as on Ubuntu.
This has been answered. According to Guy Harris from tcpdump/libpcap, the difference is due to CentOS 6.5 running a 2.6.x kernel. Below is his response:
"
3.2 introduced the TPACKET_V3 version of the "T(urbo)PACKET" memory-mapped packet capture mechanism for PF_PACKET sockets; newer versions of libpcap (1.5 and later) support TPACKET_V3 and will use it if the kernel supports it. TPACKET_V3 makes much more efficient use of the capture buffer; in at least one test, it dropped fewer packets. It also might impose less overhead, so that asking for a 2GB buffer takes less kernel memory."
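In other words this is a kernel-side difference rather than a tunable: CentOS 6.5 ships a 2.6.32 kernel, so libpcap cannot use TPACKET_V3 there, and the same -B 2000000 request ends up costing roughly twice the kernel memory, which matches the 2G vs 4G figures above. You can confirm what each box is running with:
$ uname -r               # kernel version (3.13 on Ubuntu 14.04, 2.6.32 on CentOS 6.5)
$ tcpdump --version      # prints both the tcpdump and libpcap versions in use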

ubuntu 14.04.1 server idle load average 1.00

Scratching my head here. Hoping someone can help me troubleshoot.
I have a Dell PowerEdge SC1435 server which had been running with a previous version of ubuntu for a while. (I believe it was 13.10 server x64)
I recently reformatted the drive (SSD) and installed ubuntu server 14.04.1 x64.
All seemed fine through the install but the machine hung on first boot at the end of the kernel output, just before I would expect the screen to clear and a logon prompt appear. There were no obvious errors at the end of the kernel output that I saw. (There was a message about "not using cpu thermal sensor that is unreliable" but that appears to be there regardless of whether it boots or not)
I gave it a good 5 minutes and then forced a reboot. To my surprise it booted to the logon prompt in about 1-2 seconds after bios post. I rebooted again and it seemed to pause for a few extra seconds where it hung before, but proceeded to the login screen. Rebooting again it was fast again. So at this point I thought it was just one of those random one-off glitches that I would never explain so I moved on.
I installed a few packages (exactly the same packages installed on the same OS version on other hardware), did apt upgrade and dist-upgrade, then rebooted. It seemed to hang again, so I drove to the datacentre and connected a console only to get a blank screen. Forced reboot again. (Also set up IPMI for remote rebooting and got rid of the grub recordfail so it would not wait for me to press enter!)
That was very late last night. I came home, did a few reboots with no issue so went to bed.
Today I did a reboot again to check it and again it crashed somewhere. I remotely force rebooted it.
At this point I started digging a little more and immediately noticed something really strange:
top - 14:18:35 up 8 min, 1 user, load average: 1.00, 0.85, 0.45
Tasks: 148 total, 1 running, 147 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.3 sy, 0.0 ni, 99.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 33013620 total, 338928 used, 32674692 free, 9740 buffers
KiB Swap: 3906556 total, 0 used, 3906556 free. 47780 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 33508 2772 1404 S 0.0 0.0 0:03.82 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kworker/u16:0
8 root 20 0 0 0 0 S 0.0 0.0 0:00.24 rcu_sched
9 root 20 0 0 0 0 S 0.0 0.0 0:00.02 rcuos/0
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuos/1
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuos/2
This server is completely unused and idle, yet it has a 1 minute load average of exactly 1.00?
As I watch the other values, the 5 minute and 15 minute averages also appear to be heading towards 1.00, so I assume they will all reach 1.00 at some point. (The "1 running" is the top process itself.)
I have never had this before and since I have no idea what is causing the startup crashing, I am assuming at this point that the two are likely related.
What I would like to do is identify (and hopefully eliminate) what is causing that false load average and my crashing issue.
So far I have been unable to identify what process could be waiting for a resource of some kind to generate that load average.
I would very much appreciate it if someone could help me to try and track it down.
top shows all processes pretty much always sleeping. Some occasionally pop up to the top, but I think that's pretty normal. CPU usage mostly shows 100% idle, with very occasional dips to 99% or so.
nmon doesn't show me much; everything just looks idle.
iotop shows pretty much no traffic whatsoever (again, very occasional spots of disk access).
Interrupt frequency seems low, way below 100/sec from what I can see.
I saw numerous Google discussions suggesting this:
echo 100 > /sys/module/ipmi_si/parameters/kipmid_max_busy_us
...no effect.
RAM in the server is ECC and test passes.
Server install was 'minimal' (F4 option) with OpenSSH server ticked during install.
Installed a few packages afterwards including vim, bcache-tools, bridge-utils, qemu, software-properties-common, open-iscsi, qemu-kvm, cpu-checker, socat, ntp and nodejs. (Think that is about it)
I have tried disabling and removing the bcache kernel module: no effect.
Stopped the iscsi service: no effect (although there is absolutely nothing configured on this server yet).
I will leave it there before this gets insanely long. If anyone could help me try to figure this out it would be very much appreciated.
Cheers,
James
The load average of 1.0 is an artefact of the bcache write-back thread staying in uninterruptible sleep; on Linux, a task in that state counts towards the load average even though it uses no CPU. It may be corrected in 3.19 kernels or newer. See this Debian bug report, for instance.
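If you want to confirm this on a running system before changing kernels, look for tasks stuck in uninterruptible sleep (state D), since each of those adds 1 to the Linux load average even though it consumes no CPU; for example:
$ ps -eo state,pid,comm | awk '$1 == "D"'    # list tasks in uninterruptible sleep
# a bcache write-back kernel thread showing up here persistently would explain the constant 1.00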

Error initializing sockets: port=6000. Address already in use

I launched a simulator program, developed in C++, on my Ubuntu 11 machine. When I kill this process from the Linux process list and want to run it again, I get this error:
Error initializing sockets: port=6000. Address already in use
I used the lsof command to find the PID of the process:
saman@jack:~$ lsof -i:6000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rcssserve 8764 saman 3u IPv4 81762 0t0 UDP *:x11
After that I tried to kill PID 8764, but I still get the error.
How can I fix it?
I think the problem you are having is that if the socket is not shut down correctly, it stays reserved and waits for a timeout before being closed by the kernel.
Try doing a netstat -nutap and see if there's a line like this:
tcp 0 0 AAA.AAA.AAA.AAA:6000 XXX.XXX.XXX.XXX:YYYY TIME_WAIT -
If that's the case, you just have to wait until the kernel drops it (approx. 30 seconds) before you can open a socket on port 6000 without conflict.
It would seem that port 6000 is used by the X windowing system (the GUI part of Linux), which is probably just restarted when you kill the process... either you'll need to run the simulation without X Windows running, or tweak the code to use a different port.
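Either way, it is worth checking exactly which process is holding port 6000, and in which state, before killing anything; note that in the lsof output above the port is held as UDP by rcssserve itself and is only displayed as "x11" because /etc/services maps port 6000 to that name. Two quick checks (sudo is needed to see other users' PIDs):
$ sudo ss -tunap | grep ':6000'      # all TCP/UDP sockets on port 6000, with state and owning PID
$ sudo fuser -v 6000/udp             # who holds UDP port 6000 (add -k to kill it; use with care)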
