"forever list" says "No forever process running" but it is running an app - node.js

I have started an app with
forever start app.js
After that I typed,
forever list
and it shows:
The "sys" module is now called "util". It should have a similar interface.
info: No forever processes running
But I checked my processes with
ps aux | grep node
and it shows:
root 1184 0.1 1.5 642916 9672 ? Ss 05:37 0:00 node /usr/local/bin/forever start app.js
root 1185 0.1 2.1 641408 13200 ? Sl 05:37 0:00 node /var/www/app.js
ubuntu 1217 0.0 0.1 7928 1060 pts/0 S+ 05:41 0:00 grep --color=auto node
I cannot control the process, since it does not show up in "forever list".
How can I make forever aware of its running processes so that I can control them?

forever list should be invoked as the same user that started the processes.
Generally that is the root user (in the case of Ubuntu upstart, unless specified otherwise), so you can switch to the root user with sudo su and then try forever list again.
PS: I moved to pm2 recently, which has a lot more features than forever.

I had the same problem today.
In my case, I'm using NVM and forgot that it doesn't set/modify the global node path, so I had to set it manually:
export NODE_PATH="/root/.nvm/v0.6.0/bin/node"

If you run forever start app.js from an init.d script, you should later type sudo HOME=/home/pi/devel/web-app -u root forever list to get the correct list.
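As far as I know, forever keeps its process metadata under the invoking user's home directory ($HOME/.forever by default), which is why the HOME override matters. A rough sketch of an init.d-style start plus the matching list call, reusing the paths from this example as placeholders:
# sketch only: keep HOME consistent so forever finds its state in $HOME/.forever
export HOME=/home/pi/devel/web-app
forever start /home/pi/devel/web-app/app.js
# later, list with the same HOME and the same user
sudo HOME=/home/pi/devel/web-app -u root forever list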

A fix for this would be great.
I encountered this one as well.
I believe this issue was logged here.
What I can recommend for now is to find the process that's using your node port, e.g. 3000, like so:
sudo lsof -t -i:3000
That command will show the process ID.
Then kill the process by running:
kill PID
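The two steps can also be combined into a single line (a sketch, still assuming port 3000 as above):
kill $(sudo lsof -t -i:3000)    # lsof -t prints only the PID, which is fed straight to kill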

sudo su
forever list
This will output the correct list (the processes started by the root user).

Related

How detect and restart a Node.js script running via SSH?

I'm running a .js script with Node via SSH on my web host (Bluehost). I have shared hosting, so I just downloaded/unzipped node and I run the script in the SSH terminal like so:
> ./node/bin/node ./script.js
The script continuously prints some output in an endless loop, but after some time (about an hour) it gets killed by the server.
How do I detect that and restart the script?
I tried to create a cron job that runs restart.sh every minute, in the hope of rerunning my shell command if the process is not detected:
#!/bin/bash
if pgrep node >/dev/null
then
echo "Process is running." > /home2/xxxx/txt.txt
else
ps aux > /home2/xxxx/txt.txt
fi
but I don't see any node processes in txt.txt:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
xxx 2 0.0 0.0 113292 2696 ? SN 05:27 0:00 /bin/bash /home2/xxx/restart.sh
xxx 4 0.0 0.0 155460 3988 ? RN 05:27 0:00 ps aux
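For what it's worth, a sketch of a restart.sh that actually restarts the script in the else branch could look like the following; the home directory and the node/script paths are assumptions based on the question:
#!/bin/bash
# sketch: restart the script when no matching node process is found (paths assumed)
if pgrep -f "script.js" >/dev/null
then
echo "Process is running." > /home2/xxxx/txt.txt
else
cd /home2/xxxx || exit 1
nohup ./node/bin/node ./script.js >> script.log 2>&1 &
echo "Process restarted at $(date)." > /home2/xxxx/txt.txt
fi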
You can use pm2 for monitoring your node process
npm install -g pm2
Start a node process:
pm2 start myscript.js -n "My process"
PM2 site : https://pm2.keymetrics.io/
PM2 will restart your script if needed.
To see node processes:
pm2 list
But if your script ends after 1 hour, perhaps there is a problem with your code.
You can find output and error logs in the user's ~/.pm2/logs directory.
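If the host allows it, pm2 can also bring processes back after a reboot and tail their logs. These are standard pm2 subcommands, though the startup hook is unlikely to be usable on shared hosting without root:
pm2 save      # snapshot the currently running pm2 processes
pm2 startup   # prints the command that registers pm2 at boot (usually needs root)
pm2 logs      # tail stdout/stderr of the managed processes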

supervisord not killing all spawned node processes on stop command

I encountered something weird when deploying a new service with supervisord. These are the relevant parts:
# supervisord.conf
[program:express]
command=yarn re-express-start
# package.json
{
"scripts": {
"re-express-start": "node lib/js/client/Express.bs.js",
}
}
When I run supervisorctl start, the node server is started as expected. But after I run supervisorctl stop, the server keeps on running even though supervisor thinks it's been killed.
If I change the supervisord.conf file to execute node lib/js/client/Express.bs.js directly (without going through yarn), then this works as expected. But I want to go through the package.json-defined script.
I looked at what the process tree looks like, but I don't quite understand why this happens. Below are the processes before and after stopping the supervisord-managed service.
$ ps aux | grep node
user 12785 1.4 3.5 846404 72912 ? Sl 16:30 0:00 node /usr/bin/yarn re-express-start
user 12796 0.0 0.0 4516 708 ? S 16:30 0:00 /bin/sh -c node lib/js/client/Express.bs.js
user 12797 5.2 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12830 0.0 0.0 14216 1004 pts/1 S+ 16:30 0:00 grep --color=auto node
$ pstree -c -l -p -s 12785
systemd(1)───supervisord(7153)───node(12785)─┬─sh(12796)───node(12797)─┬─{node}(12798)
│ └─{node}(12807)
├─{node}(12786)
└─{node}(12795)
$ supervisorctl stop express
$ ps aux | grep node
user 12797 0.7 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12975 0.0 0.0 14216 980 pts/1 S+ 16:32 0:00 grep --color=auto node
$ pstree -c -l -p -s 12797
systemd(1)───node(12797)─┬─{node}(12798)
└─{node}(12807)
$ kill 12797
$ ps aux | grep node
root 13426 0.0 0.0 14216 976 pts/1 S+ 16:37 0:00 grep --color=auto node
From the above, the "actual" workload process doing the server work has PID 12797. It is spawned by the supervised process and nested a couple of levels down.
Stopping the program via supervisorctl stops the processes with PIDs 12785 and 12796, but not 12797, which gets reparented to the init process.
Any ideas on what is happening here? Is this due to something ignoring some SIGxxx signals? I assume it's the yarn invocation somehow eating those, but I don't know how, or how to reconfigure it.
I ran into this issue as well when I was running a node Express app. The problem seemed to be that I was having supervisor call npm start, which refers to the package.json start script. That script simply calls node app.js. The solution seemed to be to call that command directly from the supervisor config file, like so:
[program:node]
...
command=node app.js
...
stopasgroup=true
stopsignal=QUIT
In addition, I added stopasgroup and changed the stopsignal to QUIT. The stopsignal change seemed to be required in order to properly kill the process.
I can now freely call supervisorctl restart node:node_00 without getting any ERROR (spawn error) errors.
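Applied to the original yarn setup, a sketch of the program section might look like this; stopasgroup/killasgroup are standard supervisord options that deliver the stop/kill signal to the whole process group, which is what catches the intermediate sh and the node child (the directory path is an assumption):
[program:express]
command=node lib/js/client/Express.bs.js   ; call node directly instead of going through yarn
directory=/path/to/project                 ; assumption: set this to the project root
stopasgroup=true                           ; send the stop signal to the whole process group
killasgroup=true                           ; SIGKILL the whole group if the stop signal is ignored
stopsignal=QUIT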

Why does trying to kill a process in Docker container take me out of it?

I have a v6.10.0 Node server on my macOS that is automatically started from the CMD in the Dockerfile. Normally, in my local un-containerized development environment, I use CTRL+C to kill the server. Not being able to (or not knowing how to) do this in the container, I resort to ps aux | grep node to try to kill the processes manually. So I get something like this:
myapp [master] :> kubectl exec -it web-3127363242-xb50k bash
root@web-3127363242-xb50k:/usr/src/app# ps aux | grep node
root 15 0.4 0.9 883000 35804 ? Sl 05:49 0:00 node /usr/src/app/node_modules/.bin/concurrent --kill-others npm run start-prod npm run start-prod-api
root 43 0.1 0.6 743636 25240 ? Sl 05:49 0:00 node /usr/src/app/node_modules/.bin/better-npm-run start-prod
root 44 0.1 0.6 743636 25140 ? Sl 05:49 0:00 node /usr/src/app/node_modules/.bin/better-npm-run start-prod-api
root 55 0.0 0.0 4356 740 ? S 05:49 0:00 sh -c node ./bin/server.js
root 56 0.0 0.0 4356 820 ? S 05:49 0:00 sh -c node ./bin/api.js
root 57 18.6 4.9 1018088 189416 ? Sl 05:49 0:08 node ./bin/server.js
root 58 13.9 5.2 1343296 197576 ? Sl 05:49 0:06 node ./bin/api.js
root 77 0.0 0.0 11128 1024 ? S+ 05:50 0:00 grep node
When I try to kill one of them with
kill -9 15
I am taken out of my container's shell and back to my computer's shell. When I enter the container again, I see that the process is still there with the same process ID. This example uses a Kubernetes pod, but I believe I get the same result when entering a Docker container with the docker exec command.
Every Docker container has an entrypoint process that is either set in the Dockerfile, using the ENTRYPOINT or CMD declarations, or specified in the run command: docker run myimage:tag "entrypoint_command". When the ENTRYPOINT process is killed, I think the container gets killed as well. docker exec, as I understand it, is kind of like "attaching" a command to a container. But if the ENTRYPOINT is down, there is no container to attach to.
Kubernetes will restart a container after a failure, as far as I understand it, which might be why you see the process come back up. I haven't really worked with Kubernetes, but I'd try playing around with the way the replications are scaled to terminate your process.
Containers isolate your desired app as pid 1 inside the namespace, the desired app being your ENTRYPOINT, or CMD if you don't have an entrypoint defined. If killing a process results in pid 1 exiting, the container will immediately stop (similar to killing pid 1 on a Linux host), along with all of the other pids. If the container has a restart policy, it will be restarted, and the processes will get the same pids as the last time it ran (all else being equal, which it often is inside a container).
To keep the container from stopping, you'll need to adjust your entrypoint to stay up even when the child process is killed. That said, having the container exit is typically the preferred behavior for handling unexpected errors, since it gets you back to a clean state.
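As a small illustration of the pid 1 point, a Dockerfile sketch (the image tag and script path are assumptions based on the question): with the exec form of CMD, node itself runs as pid 1, receives the signal from docker stop or a pod deletion, and killing it ends the container; with the shell form, a /bin/sh wrapper becomes pid 1 and signals may never reach node.
# sketch only; image tag and script path are assumptions
FROM node:6.10.0
WORKDIR /usr/src/app
COPY . .
# exec form: node is pid 1, so docker stop's SIGTERM goes straight to it
CMD ["node", "./bin/server.js"]
# shell form (avoid): CMD node ./bin/server.js  ->  /bin/sh -c becomes pid 1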

Error: listen EADDRINUSE node JS

I have 2 Sails applications, one depending on the other.
The first I run on port 1337, the second on 1338.
Everything was working fine until yesterday. I'm on a Mac. Now I can only run the one on 1337; starting 1338 in a second terminal tab gives me:
Error: listen EADDRINUSE :::1338
If I run killall -9 node
it kills the 1337 one, but then when I try to rerun 1337 I also get Error: listen EADDRINUSE :::1337.
If I run killall -9 node in the 1337 tab,
I get: No matching processes belonging to you were found
and cannot run either application.
Only restarting the terminal helps.
Is there any system setting that I can adjust?
I'm a pretty new Mac user.
Some process is occupying your 1338 port.
I am not using a Mac myself, but I think this might help you check what is using the port; just switch out "80" for "1338":
http://www.databasically.com/2011/06/02/mac-os-x-find-the-program-running-on-a-port/
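On macOS that check boils down to something like the following (a sketch; swap in your own port number):
sudo lsof -nP -iTCP:1338 -sTCP:LISTEN   # show the process listening on port 1338
kill <PID>                              # then kill it using the PID from the output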
Try this:
ps ax | grep node
You'll get a list similar to the following:
7200 pts/1 Sl+ 0:00 node /usr/bin/nodemon app.js
11431 pts/1 S+ 0:00 sh -c node app.js
11432 pts/1 Sl+ 0:02 node app.js
11971 pts/4 S+ 0:00 grep --color=auto node
Kill all node processes with
sudo kill -9 <pid>
Now run your apps (both ports) again.
If you still get errors, then check the availability of that port with
netstat -anp | grep <portNumber>
Hi guys, I just found out the problem.
It is so stupid.
I'm using tunnelclick for VPN and it is running on port 1337.
Thank you guys for your help!
If you're using VS Code on Windows, you need to first kill the previous instances of your Node app.
Open a Bash terminal in VS Code and run the command below:
cmd "/C TASKKILL /IM node.exe /F"

httpd's pid file only contains one ID even though it spawned many

I want to have multiple httpd services running on a CentOS box, so that if I'm developing a mod_perl script and need to restart one of them, the others can run independently. I had this setup on Windows and am migrating.
Naturally this means separate PID files. I configure mine using the PidFile directive in httpd.conf, and point the init.d script to the same place. It creates the file okay, but does not populate it with all PIDs:
$ sudo killall httpd ; sudo service httpd-dev restart
Stopping httpd: cat: /var/run/httpd/httpd-dev.pid: No such file or directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
Starting httpd: [ OK ]
$ sudo cat /var/run/httpd/httpd-dev.pid
18279
$ ps -A | grep httpd
18279 ? 00:00:00 httpd
18282 ? 00:00:00 httpd
18283 ? 00:00:00 httpd
18284 ? 00:00:00 httpd
18285 ? 00:00:00 httpd
18286 ? 00:00:00 httpd
18287 ? 00:00:00 httpd
18288 ? 00:00:00 httpd
18289 ? 00:00:00 httpd
...why might this be? It makes it hard to kill just my dev httpd processes later, when there will be other httpds running. I can't just use killall forever...
$ httpd -v
Server version: Apache/2.2.24 (Unix)
I should note that CentOS 6.4 minimal didn't come with killproc installed, so I changed my init.d script to use
kill -9 `cat ${pidfile}`
instead. I guess killproc would search out child PIDs? So I have to install Python to install killproc, just to use init scripts for httpd?
There are two things here:
Your single Apache instance may have several PIDs associated with it, depending on the MPM selected. However, this should not affect you, since you only need to kill the PID written into the PID file, and that process will take down the rest of the Apache instance with it.
If you try to run several Apache instances side by side, you have to specify a different PID file for each one. Then you can decide which instances you want to kill: process the PID file of each instance you select. Giving the same PID file to several instances and expecting each of them to put their own PID into the same file will not work.
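A minimal sketch of what that looks like in practice; the config file path and Listen port are assumptions, while the PID file path follows the question:
# in the dev instance's config (e.g. /etc/httpd/conf/httpd-dev.conf): its own PidFile and port
PidFile /var/run/httpd/httpd-dev.pid
Listen 8080
# start the dev instance against its own config
sudo httpd -f /etc/httpd/conf/httpd-dev.conf -k start
# stop only the dev instance: signal the parent PID from its PID file,
# which also shuts down that instance's child processes
sudo kill $(cat /var/run/httpd/httpd-dev.pid)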
