How to kill a locked Node process in WSL - windows-10

My web application running in WSL occasionally gets stuck. I am able to close the script, but the process keeps running in the background and has my files locked.
Detailed info
The web application runs with webpack dev server listening for changes in the code. When doing git operations, the files are sometimes locked and I cannot make changes.
I can see the process by running
$ ps aux
The process is taking a lot of memory.
I tried killing the process with
$ kill -9 604
$ pkill -f node
$ kill -SIGKILL 604
but none of them work.
I even tried to kill the process from Task Manager, but it's still there.
(Windows Subsystem for Linux running on Windows 10)

Hello, same problem on Windows 11. sudo kill -9 2769 has no effect:
mike 2769 1.5 5.7 13169872 1914268 ? Z 11:54 7:42 /home/mike/.nvm/versions/node/v14.17.4/bin/node /mnt/d/dev/repo/mikecodeur/react-course-app/example/react-fundamentals/node_modules/react-scripts/scripts/start.js
mike 2907 0.1 0.0 0 0 ? Z 11:54 0:36 [node] <defunct>
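The Z in the STAT column and the <defunct> marker mean these are zombie processes: they have already exited and only their process-table entries remain, so no signal (not even SIGKILL) can remove them; they only disappear once their parent reaps them. Under WSL, a rough workaround (assuming you can afford to restart the whole distro) is to shut the WSL VM down from the Windows side and start it again:
# inside WSL: confirm the stuck PID really is a zombie (STAT shows Z)
$ ps -o pid,ppid,stat,cmd -p 2769
# from PowerShell or cmd on the Windows side: stop all running distros and the WSL VM
> wsl --shutdown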

Related

sudo ./jetty Stop or Start Failure

The Jetty on our Linux server is not installed as a service, as we have multiple Jetty servers on different ports, and we use the commands ./jetty.sh stop and ./jetty.sh start to stop and start Jetty.
However, when I add sudo to the command, the server never stops/starts successfully. When I run sudo ./jetty.sh stop, it shows
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
and the server was not stopped.
When I run sudo ./jetty.sh start, it shows
Starting Jetty: FAILED Tue Apr 23 23:07:15 CST 2019
How could this happen? From my understanding, using sudo gives you more power and privileges to run commands. If you can successfully execute a command without sudo, then it should never fail with sudo, since sudo only grants additional superuser privileges.
When run as a regular user, it uses paths under $HOME; when run as root, it uses system paths.
The error you got ...
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
... means that a stale PID file was left behind for a process that no longer exists.
Short answer: the behaviour is different when you run it as root (as a service) versus as a user (just an application).
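A minimal way to check for and clear the stale PID file (a sketch, assuming the service-style path /var/run/jetty.pid from the error message):
# see which PID the script thinks is running, and whether that process still exists
$ sudo cat /var/run/jetty.pid
$ ps -p "$(sudo cat /var/run/jetty.pid)" || sudo rm /var/run/jetty.pid
# then try starting again
$ sudo ./jetty.sh start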

Node forever (npm package) memory leak on the server

I am using the forever package to run my Node.js script (not a web server). However, because of it I have a memory leak, and even after stopping all processes my memory is still taken:
root@aviok-cdc-elas-001:~# forever stopall
info: No forever processes running
root@aviok-cdc-elas-001:~# forever list
info: No forever processes running
root@aviok-cdc-elas-001:~# free -lm
                    total     used     free   shared  buffers   cached
Mem:                11721     6900     4821        5      188     1242
Low:                11721     6900     4821
High:                   0        0        0
-/+ buffers/cache:           5469     6252
Swap:                   0        0        0
Also worth mentioning: there is no memory leak from the script when run locally without forever. I run it on an Ubuntu server. And if I were to reboot the server now:
root@aviok-cdc-elas-001:~# reboot
Broadcast message from root@aviok-cdc-elas-001
(/dev/pts/0) at 3:19 ...
The system is going down for reboot NOW!
My RAM would be free again:
root@aviok-cdc-elas-001:~# free -lm
                    total     used     free   shared  buffers   cached
Mem:                11721     1259    10462        5       64      288
Low:                11721     1259    10462
High:                   0        0        0
-/+ buffers/cache:            905    10816
Swap:                   0        0        0
I also want to mention that when my script finishes what it is doing (and it does, eventually), I have db.close and process.exit calls to make sure everything is shut down from my script's side. However, even after that, the RAM stays taken. I am aware that forever will run the script again after it is killed. So my questions are:
How do I tell forever to not execute script again if it is finished?
How do I stop forever properly so it does NOT take any RAM after I stopped it?
The reason I am using the forever package is that my script needs a lot of time to do what it does; my SSH session would end, and so would a Node script started in the regular way.
From what I can see, the RAM isn't taken away or leaking; it's being used by Linux as file system cache (because unused RAM is wasted RAM).
Of the 6900 MB shown as "used", the -/+ buffers/cache line shows only 5469 MB actually in use by processes; the rest is buffers and page cache, which Linux will release automatically when processes request memory.
If you want a long-running process to keep running after you log out (or after your SSH session gets killed), you have various options that don't require forever:
Background the process, making sure that any "logout" signals are ignored:
$ nohup node script.js &
Use a terminal multiplexer like tmux or screen.
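For the tmux route, a minimal session sketch (assuming tmux is installed and script.js is your long-running script):
$ tmux new -s longjob         # start a named session
$ node script.js              # run the script inside it; detach with Ctrl-b d
$ tmux attach -t longjob      # reattach later from a new SSH session to check progress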

Inconsistent systemd startup of freeswitch

I have two problems running freeswitch from systemd:
EDIT 2 - I have moved the slow start-up question here (Freeswitch pauses on check_ip at boot on centos 7.1); although they may be related, it's probably better as a standalone question.
EDIT - I have noticed something else. Look at these next lines, captured from the terminal output when running it from there. The gap is 4 minutes, but it has been around 10 minutes before. I noticed it because I was trying to find out why port 8021 was taking several minutes to accept the fs_cli connection. Why does this happen? It has never happened to me before and I've installed loads of FS boxes. It does the same thing on both 1.7 and today's 1.6.
2015-10-23 12:57:35.280984 [DEBUG] switch_scheduler.c:249 Added task 1 heartbeat (core) to run at 1445601455
2015-10-23 12:57:35.281046 [DEBUG] switch_scheduler.c:249 Added task 2 check_ip (core) to run at 1445601455
2015-10-23 13:01:31.100892 [NOTICE] switch_core.c:1386 Created ip list rfc6598.auto default (deny)
I sometimes get double processes started. Here is my status line after such an occurrence:
# systemctl status freeswitch -l
freeswitch.service - freeswitch
Loaded: loaded (/etc/systemd/system/multi-user.target.wants/freeswitch.service)
Active: activating (start) since Fri 2015-10-23 01:31:53 BST; 18s ago
Main PID: 2571 (code=exited, status=0/SUCCESS); : 2742 (freeswitch)
CGroup: /system.slice/freeswitch.service
├─usr/bin/freeswitch -ncwait -core -db /dev/shm -log /usr/local/freeswitch/log -conf /usr/local/freeswitch/conf -run /usr/local/freeswitch/run
└─usr/bin/freeswitch -ncwait -core -db /dev/shm -log /usr/local/freeswitch/log -conf /usr/local/freeswitch/conf -run /usr/local/freeswitch/run
Oct 23 01:31:53 fswitch-1 systemd[1]: Starting freeswitch...
Oct 23 01:31:53 fswitch-1 freeswitch[2742]: 2743 Backgrounding.
and there are two processes running.
The PID file is sometimes not written fast enough for systemd to pick it up, but no matter how fast I check after seeing this message, the file is always there by the time I look:
Oct 23 02:00:26 arribacom-sbc-1 systemd[1]: PID file /usr/local/freeswitch/run/freeswitch.pid not readable (yet?) after start.
Now, in (2) everything seems to work ok, and I can shut down the freeswitch process using
systemctl stop freeswitch
without any issues, but in (1) it just doesn't seem to do anything.
I'm wondering if the two are related, and that freeswitch is reporting back to systemd that the program is running before it actually is. Then systemd is either starting up another process or (sometimes) not.
Can anyone offer any pointers? I have tried to mail the freeswitch users list but despite being registered I simply cannot get any emails to appear on the list (but that's another problem).
* Update *
If I remove -ncwait, the double process start seems to happen less often, but I still get the "can't read PID" warning, so I'm sure there's still an issue present, possibly around timing.
I'm on CentOS 7.1, and my FreeSWITCH version is
FreeSWITCH Version 1.7.0+git~20151021T165609Z~9fee9bc613~64bit (git
9fee9bc 2015-10-21 16:56:09Z 64bit)
and here's my freeswitch.service file (some things have been commented out until I understand what they are doing and any side effects they may have):
[Unit]
Description=freeswitch
After=syslog.target network.target
#
[Service]
Type=forking
PIDFile=/usr/local/freeswitch/run/freeswitch.pid
PermissionsStartOnly=true
ExecStart=/usr/bin/freeswitch -nc -core -db /dev/shm -log /usr/local/freeswitch/log -conf /u
ExecReload=/usr/bin/kill -HUP $MAINPID
#ExecStop=/usr/bin/freeswitch -stop
TimeoutSec=120s
#
WorkingDirectory=/usr/bin
User=freeswitch
Group=freeswitch
LimitCORE=infinity
LimitNOFILE=999999
LimitNPROC=60000
LimitSTACK=245760
LimitRTPRIO=infinity
LimitRTTIME=7000000
#IOSchedulingClass=realtime
#IOSchedulingPriority=2
#CPUSchedulingPolicy=rr
#CPUSchedulingPriority=89
#UMask=0007
#
[Install]
WantedBy=multi-user.target
In the current master branch, take the two files from the debian/ directory:
freeswitch-systemd.freeswitch.service -- should go as /lib/systemd/system/freeswitch.service
freeswitch-systemd.freeswitch.tmpfile -- should go as /usr/lib/tmpfiles.d/freeswitch.conf
You probably need to adapt the paths, or build FreeSWITCH to use standard Debian paths.
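The tmpfiles.d entry matters because on systems where the run directory lives under /run (tmpfs) it disappears at reboot, so the PID file can never be written, which reproduces the "not readable (yet?)" symptom. As an illustrative sketch only (the path is an assumption matching the unit file above, not the shipped Debian file):
# /usr/lib/tmpfiles.d/freeswitch.conf - recreate the run directory at boot
d /usr/local/freeswitch/run 0750 freeswitch freeswitch -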

ubuntu 14.04.1 server idle load average 1.00

Scratching my head here. Hoping someone can help me troubleshoot.
I have a Dell PowerEdge SC1435 server which had been running with a previous version of ubuntu for a while. (I believe it was 13.10 server x64)
I recently reformatted the drive (SSD) and installed ubuntu server 14.04.1 x64.
All seemed fine through the install but the machine hung on first boot at the end of the kernel output, just before I would expect the screen to clear and a logon prompt appear. There were no obvious errors at the end of the kernel output that I saw. (There was a message about "not using cpu thermal sensor that is unreliable" but that appears to be there regardless of whether it boots or not)
I gave it a good 5 minutes and then forced a reboot. To my surprise it booted to the logon prompt in about 1-2 seconds after bios post. I rebooted again and it seemed to pause for a few extra seconds where it hung before, but proceeded to the login screen. Rebooting again it was fast again. So at this point I thought it was just one of those random one-off glitches that I would never explain so I moved on.
I installed a few packages (the exact same packages installed on the same OS version on other hardware), did apt upgrade and dist-upgrade, then rebooted. It seemed to hang again, so I drove to the datacentre and connected a console, only to get a blank screen. Forced a reboot again. (Also set up IPMI for remote rebooting and got rid of the grub recordfail so it would not wait for me to press enter!)
That was very late last night. I came home, did a few reboots with no issue so went to bed.
Today I did a reboot again to check it, and again it crashed somewhere. I remotely force-rebooted it.
At this point I started digging a little more and immediately noticed something really strange.
top - 14:18:35 up 8 min, 1 user, load average: 1.00, 0.85, 0.45
Tasks: 148 total, 1 running, 147 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.3 sy, 0.0 ni, 99.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 33013620 total, 338928 used, 32674692 free, 9740 buffers
KiB Swap: 3906556 total, 0 used, 3906556 free. 47780 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 33508 2772 1404 S 0.0 0.0 0:03.82 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kworker/u16:0
8 root 20 0 0 0 0 S 0.0 0.0 0:00.24 rcu_sched
9 root 20 0 0 0 0 S 0.0 0.0 0:00.02 rcuos/0
10 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuos/1
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcuos/2
This server is completely unused and idle, yet it has a 1 minute load average of exactly 1.00?
As I watch the other values, the 5 minute and 15 minute averages also appear to be heading towards 1.00, so I assume they will all reach 1.00 at some point. (The "1 running" is the top process itself.)
I have never had this before and since I have no idea what is causing the startup crashing, I am assuming at this point that the two are likely related.
What I would like to do is identify (and hopefully eliminate) what is causing that false load average and my crashing issue.
So far I have been unable to identify what process could be waiting for a resource of some kind to generate that load average.
I would very much appreciate it if someone could help me to try and track it down.
top shows all processes pretty much always sleeping. Some occasionally pop to the top of the list, but I think that's pretty normal. CPU usage mostly shows 100% idle, with very occasional dips to 99% or so.
nmon doesn't show me much; everything just looks idle.
iotop shows pretty much no traffic whatsoever (again, very occasional spots of disk access).
Interrupt frequency seems low, way below 100/sec from what I can see.
I saw numerous Google discussions suggesting this:
echo 100 > /sys/module/ipmi_si/parameters/kipmid_max_busy_us
...no effect.
RAM in the server is ECC and test passes.
Server install was 'minimal' (F4 option) with OpenSSH server ticked during install.
Installed a few packages afterwards including vim, bcache-tools, bridge-utils, qemu, software-properties-common, open-iscsi, qemu-kvm, cpu-checker, socat, ntp and nodejs. (Think that is about it)
I have tried disabling and removing the bcache kernel module: no effect.
Stopped the iscsi service: no effect (although there is absolutely nothing configured on this server yet).
I will leave it there before this gets insanely long. If anyone could help me try to figure this out it would be very much appreciated.
Cheers,
James
The load average of 1.0 is an artefact of the bcache writeback thread staying in uninterruptible sleep. It may be corrected in 3.19 kernels or newer. See this Debian bug report, for instance.
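Background for anyone hitting this: Linux counts tasks in uninterruptible sleep (state D) toward the load average even though they use no CPU, which is why everything looks idle while the load sits at 1.00. A quick, generic way to spot such tasks (a sketch, not specific to bcache):
# list tasks currently in uninterruptible sleep; their STAT field contains "D"
$ ps -eo pid,stat,comm | awk '$2 ~ /D/'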

JBoss5 and JBoss7 installation conflict on the same Linux machine

I had a properly working JBoss7 installation, but recently my teammate installed JBoss 5.1.0 GA on my machine. Since then I am facing two problems and am still unable to resolve them.
Whenever I stop JBoss with the init.d script, I get this error.
[root ~]# service jboss stop
Stopping jboss-as: [ OK ]
[root ~]# *** JBossAS process (25571) received KILL signal ***
grep: /var/run/jboss-as/jboss-as-standalone.pid: No such file or directory
Could there be a conflict with the process ID file that JBoss generates to check whether the server is running or not?
I suspect there is a conflict with the other JBoss5 installation.
The second issue is that I am unable to connect to the server via jboss-cli.sh
[root bin]# sh jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect localhost
The controller is not available at localhost:9999
[disconnected /] connect localhost
One thing I want you to check: the result of ps auxwww | grep jboss.
I can see two processes; is this a conflict? With PIDs:
root 25970 0.0 0.0 161476 1960 pts/0 S 07:58 0:00 su - jboss -c LAUNCH_JBOSS_IN_BACKGROUND=1 JBOSS_PIDFILE=/var/run/jboss-as/jboss-as-standalone.pid /usr/share/jboss-as/bin/standalone.sh -c standalone.xml
jboss 25973 0.0 0.0 106096 1344 ? Ss 07:58 0:00 /bin/sh /usr/share/jboss-as/bin/standalone.sh -c standalone.xml
jboss 26022 8.7 8.7 1027368 342776 ? Sl 07:58 0:45 /usr/java/jdk1.7.0_25/bin/java -D[Standalone] -server -XX:+TieredCompilation -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Djboss.server.default.config=standalone.xml -Dspring.profiles.active=dev -Dorg.jboss.boot.log.file=/usr/share/jboss-as/standalone/log/boot.log -Dlogging.configuration=file:/usr/share/jboss-as/standalone/configuration/logging.properties -jar /usr/share/jboss-as/jboss-modules.jar -mp /usr/share/jboss-as/modules -jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone -Djboss.home.dir=/usr/share/jboss-as -c standalone.xml
root 26365 0.0 0.0 103244 848 pts/0 S+ 08:07 0:00 grep jboss
I can see multiple processes started by the sh standalone.sh command.
Is this the interference?
If there is no PID file, then you might have a permissions issue on /var/run/jboss-as.
You should also check JBOSS_HOME/standalone/log/console.log to see if there are any errors there.
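A quick way to check, and if necessary fix, ownership of the PID directory (a sketch, assuming the service runs as the jboss user shown in the ps output above):
$ ls -ld /var/run/jboss-as
$ sudo chown jboss:jboss /var/run/jboss-as   # only if it turns out to be owned by root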
Replying after a long time, but we were able to successfully run two instances of JBoss by using a port binding offset; when connecting via jboss-cli.sh, supply the incremented port.
If you set port.binding.offset=2, then connect to port 10001, i.e. 9999+2.
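As a sketch of what that looks like in practice (assuming the standard JBoss AS 7 offset property jboss.socket.binding.port-offset; adapt if your configuration wires the offset differently):
# start the second instance with every socket binding shifted by 2
$ ./standalone.sh -Djboss.socket.binding.port-offset=2
# connect the CLI to the shifted native management port (9999 + 2)
$ ./jboss-cli.sh --connect --controller=localhost:10001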
