My nginx is not starting on port 80.
I have added the following details:
$ nginx -s reload
2016/03/23 16:11:27 [error] 24992#0: invalid PID number "" in "/run/nginx.pid"
$ ps -ef | grep nginx
root 25057 2840 0 16:16 pts/1 00:00:00 grep --color=auto nginx
$ kill -9 25057
bash: kill: (25057) - No such process
$ service nginx start
Nothing..
Please provide a solution.
Running nginx -s reload without first starting nginx will result in an error, because nginx looks for the file containing its master PID when you tell it to reload. In your case it seems that nginx wasn't running, so the file containing that PID doesn't exist.
By running kill -9 25057 you tried to kill your own ps -ef | grep nginx command, which had already exited, so you got "No such process".
To make sure all is well, I would stop nginx with nginx -s stop, then start it with nginx, and then run nginx -s reload to check that everything works. In any case, the log file /var/log/nginx/error.log might tell you if something bad is going on.
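As a sketch, that sequence looks like this (sudo assumed where needed):
$ sudo nginx -s stop       # stop any running master (errors if none is running)
$ sudo nginx               # start a fresh master; this writes /run/nginx.pid
$ sudo nginx -s reload     # should now succeed, since the PID file exists
$ tail -f /var/log/nginx/error.log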
If that works, you can try accessing http://localhost:80 (or however you have configured nginx), and also follow the error log /var/log/nginx/error.log and the access log /var/log/nginx/access.log.
As a side note: if this by any chance happens to be a case where nginx is reloaded by some other tool like confd, you should also check whether nginx actually stores its PID in /run/nginx.pid as opposed to /var/run/nginx/nginx.pid.
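If you're unsure which path your nginx actually uses, the pid directive in the main config (or the compiled-in --pid-path default) is what counts; a quick check, assuming the usual config location, could be:
$ grep pid /etc/nginx/nginx.conf                  # pid directive, if set
$ nginx -V 2>&1 | tr ' ' '\n' | grep pid-path     # compiled-in default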
Let's talk about what we have here first:
$ nginx -s reload
2016/03/23 16:11:27 [error] 24992#0: invalid PID number "" in "/run/nginx.pid"
It's probably because the /run/nginx.pid file is empty, which causes issues with the stop|start|restart commands, so you have to edit it with sudo and put in it the PID of your currently running nginx master process. Now, let's have a look at the next lines, which are connected with this.
$ ps -ef | grep nginx
root 25057 2840 0 16:16 pts/1 00:00:00 grep --color=auto nginx
$ kill -9 25057
bash: kill: (25057) - No such process
Here you're trying to kill something that is NOT the main nginx process. First run the following command to see the PIDs of the nginx master process and its worker:
$ ps -aux | grep "nginx"
root 17711 0.0 0.3 126416 6632 ? Ss 18:29 0:00 nginx: master process nginx -c /etc/nginx/nginx.conf
www-data 17857 0.0 0.2 126732 5588 ? S 18:32 0:00 nginx: worker process
ubuntu 18264 0.0 0.0 12916 984 pts/0 S+ 18:51 0:00 grep --color=auto nginx
Next, kill both:
$ sudo kill -9 17711
$ sudo kill -9 17857
and then try to start nginx again.
$ service nginx start
Nothing..
Have nothing to say here ;)
Summary:
I think editing the /run/nginx.pid file with the nginx master process PID should solve the issue. So, following my example above, the file should look like this:
17711
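If you prefer not to open an editor as root, tee can write it for you (17711 being the master PID from my example; substitute your own):
$ echo 17711 | sudo tee /run/nginx.pid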
Hope that helps!
I had this problem. I restarted nginx.service and that fixed it.
On Ubuntu, run sudo systemctl restart nginx.service and then run sudo nginx -s reload.
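Yet another approach is to repopulate the PID file from the running master process and then reload. The one-liner below assumes the nginx master process runs as root (hence the grep root):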
ps -ef | grep nginx | grep root | grep -v grep | awk '{ print $2 }' > /run/nginx.pid
nginx -s reload
I'm trying to kill some processes on Ubuntu 18.04, for which I am using the pkill command. But I am not able to suppress the Killed message for some reason.
Here are the processes that are running:
# ps -a
PID TTY TIME CMD
2346 pts/0 00:00:00 gunicorn
2353 pts/0 00:00:00 sh
2360 pts/0 00:00:00 gunicorn
2363 pts/0 00:00:00 gunicorn
2366 pts/0 00:00:00 ps
My attempts to kill the process and suppress the output:
# 1st attempt
# pkill -9 gunicorn 2>&1 /dev/null
pkill: only one pattern can be provided
Try `pkill --help' for more information.
#2nd attempt (this killed the process, but I got the output `Killed` and had to press `enter` to get back to the command line)
# pkill -9 gunicorn > /dev/null
root@my-ubuntu:/# Killed
#3rd attempt (behavior similar to the previous attempt)
# pkill -9 gunicorn 2> /dev/null
root@my-ubuntu:/# Killed
root@my-ubuntu:/#
What is it that I am missing?
I think you want this syntax:
pkill -9 gunicorn &>/dev/null
The &> is a somewhat newer addition in Bash (4.0, I think) that is a shorthand way of redirecting both stdout and stderr.
Also, are you running pkill from the same terminal session that gunicorn was started on? I don't think pkill prints a message like "Killed", which makes me wonder if that is coming from somewhere else, most likely the shell itself reporting the killed job.
You might be able to suppress it by running set +m in the terminal (to disable job monitoring). To reenable, run set -m.
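A minimal sketch of that suggestion (the Killed message comes from the shell's job control, not from pkill, which is why redirecting pkill's output alone doesn't silence it):
$ set +m              # disable job-control notifications in this shell
$ pkill -9 gunicorn
$ set -m              # re-enable them afterwards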
I found the only way to prevent output from pkill was to use the advice here:
https://www.cyberciti.biz/faq/how-to-redirect-output-and-errors-to-devnull/
command 1>&- 2>&-
This closes stdout/stderr for the command.
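Applied to the command from the question:
$ pkill -9 gunicorn 1>&- 2>&-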
I encountered something weird when deploying a new service with supervisord. These are the relevant parts:
# supervisord.conf
[program:express]
command=yarn re-express-start
# package.json
{
"scripts": {
"re-express-start": "node lib/js/client/Express.bs.js",
}
}
When I run supervisorctl start, the node server is started as expected. But after I run supervisorctl stop, the server keeps on running even though supervisor thinks it's been killed.
If I change the supervisord.conf file to execute node lib/js/client/Express.bs.js directly (without going through yarn), then this works as expected. But I want to go through the package.json-defined script.
I looked at what the process tree looks like, but I don't quite understand why this happens. Below are the processes before and after stopping the supervisord-managed service.
$ ps aux | grep node
user 12785 1.4 3.5 846404 72912 ? Sl 16:30 0:00 node /usr/bin/yarn re-express-start
user 12796 0.0 0.0 4516 708 ? S 16:30 0:00 /bin/sh -c node lib/js/client/Express.bs.js
user 12797 5.2 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12830 0.0 0.0 14216 1004 pts/1 S+ 16:30 0:00 grep --color=auto node
$ pstree -c -l -p -s 12785
systemd(1)───supervisord(7153)───node(12785)─┬─sh(12796)───node(12797)─┬─{node}(12798)
│ └─{node}(12807)
├─{node}(12786)
└─{node}(12795)
$ supervisorctl stop express
$ ps aux | grep node
user 12797 0.7 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12975 0.0 0.0 14216 980 pts/1 S+ 16:32 0:00 grep --color=auto node
$ pstree -c -l -p -s 12797
systemd(1)───node(12797)─┬─{node}(12798)
└─{node}(12807)
$ kill 12797
$ ps aux | grep node
root 13426 0.0 0.0 14216 976 pts/1 S+ 16:37 0:00 grep --color=auto node
From the above, the "actual" workload process doing the server stuff has PID 12797. It is spawned by the supervisor process and nested a couple of levels down.
Stopping the service stops the processes with PIDs 12785 and 12796, but not 12797, which gets reparented to the init process instead.
Any ideas on what is happening here? Is this due to something ignoring some SIGxxx signals? I assume it's the yarn invocation somehow eating those, but I don't know how, or how to reconfigure it.
I ran into this issue as well when I was running a node Express app. The problem seemed to be that I was having supervisor call npm start, which refers to the package.json start script. That script simply calls node app.js. The solution was to call that command directly from the supervisor config file, like so:
[program:node]
...
command=node app.js
...
stopasgroup=true
stopsignal=QUIT
In addition, I added stopasgroup and changed the stopsignal to QUIT. The stopsignal seemed to be required in order to properly kill the process.
I can now freely call supervisorctl restart node:node_00 without getting any ERROR (spawn error) failures.
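If you'd rather keep going through yarn, an untested sketch of the same idea would be to have supervisord signal the whole process group, so that the intermediate sh and the node child receive the stop signal too:
[program:express]
command=yarn re-express-start
stopasgroup=true
killasgroup=true
stopsignal=QUIT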
I have 2 Sails applications, one depending on the other.
The first runs on port 1337, the second on 1338.
Everything was working fine until yesterday. I'm on a Mac; now I can run only the app on 1337, and starting the one on 1338 in a second terminal tab gives me:
Error: listen EADDRINUSE :::1338
If I run killall -9 node, it kills the 1337 app, but then when I try to rerun 1337 I also get Error: listen EADDRINUSE :::1337.
If I run killall -9 node in the 1337 tab,
I get: No matching processes belonging to you were found,
and then I cannot run either application.
Only restarting the terminal helps.
Is there any system setting that I can adjust?
I'm a pretty new Mac user.
Some process is occupying your 1338 port.
I am not using a Mac myself, but I think this might help you check what is using the port; just switch out "80" for "1338":
http://www.databasically.com/2011/06/02/mac-os-x-find-the-program-running-on-a-port/
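On a Mac this typically boils down to lsof (sudo may be needed to see processes owned by other users):
$ sudo lsof -i :1338      # shows what is listening on port 1338
$ kill -9 <pid>           # then kill that PID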
Try this:
ps ax | grep node
and you'll get a list similar to the following:
7200 pts/1 Sl+ 0:00 node /usr/bin/nodemon app.js
11431 pts/1 S+ 0:00 sh -c node app.js
11432 pts/1 Sl+ 0:02 node app.js
11971 pts/4 S+ 0:00 grep --color=auto node
Kill each of the node processes with
sudo kill -9 <pid>
now run your apps (on both ports) again.
If you still get errors, check whether that port is still in use with
netstat -an | grep <portNumber>
(on Linux, netstat -anp also shows which process owns it).
Hi guys, I just found out the problem.
It is so stupid.
I'm using TunnelClick for VPN and it is running on port 1337.
Thank you guys for your help!!!!
If you're using VS Code on Windows, you need to first kill the previous instances of your Node app.
Open a Bash terminal in VS Code and run the command below:
cmd "/C TASKKILL /IM node.exe /F"
I'm trying to start an application (newsbeuter) at boot but I can't.
I'm trying with:
tmux new-session -d -s main
tmux new-window -t main:1 '/usr/bin/newsbeuter'
tmux is up but newsbeuter doesn't start:
ps -ef | grep -i tmux
root 2118 1 0 16:09 ? 00:00:00 tmux new-session -d -s main
pi 2245 2211 0 16:09 pts/1 00:00:00 grep --color=auto -i tmux
ps -ef | grep -i news
pi 2247 2211 0 16:09 pts/1 00:00:00 grep --color=auto -i news
Could you help me please?
Many thanks and sorry for my English!
Upon startup, Newsbeuter will look for the URLs file, first in $XDG_CONFIG_HOME/newsbeuter (usually ~/.config/newsbeuter), then in ~/.newsbeuter (the file should be named urls). If it doesn't find one, it quits with an error message. I suspect that's what's happening in your case: since you're starting things from /etc/rc.local, your $HOME is not your user's, so Newsbeuter doesn't find the file and quits.
One way to correct this would be to su into your user before starting Newsbeuter.
Another would be to provide the path to urls explicitly with --url-file=/home/username/.newsbeuter/urls (and also --cache-file, probably --config-file as well).
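For example, from /etc/rc.local that could look like the following (assuming your user is pi, as in your ps output; the file paths are illustrative):
su - pi -c "tmux new-session -d -s main"
su - pi -c "tmux new-window -t main:1 '/usr/bin/newsbeuter --url-file=/home/pi/.newsbeuter/urls --cache-file=/home/pi/.newsbeuter/cache.db'"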
To see a possible error message, run tmux set set-remain-on-exit on before the tmux new-window, and afterwards attach to the new window and press Ctrl-B Page Up to scroll back through the output.
I want to have multiple httpd services running on a CentOS box, so that if I'm developing a mod_perl script and need to restart one of them, the others can run independently. I had this setup on Windows and am migrating.
Naturally this means separate PID files. I configure mine using the PidFile directive in httpd.conf, and point the init.d script to the same place. It creates the file okay, but does not populate it with all PIDs:
$ sudo killall httpd ; sudo service httpd-dev restart
Stopping httpd: cat: /var/run/httpd/httpd-dev.pid: No such file or directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
Starting httpd: [ OK ]
$ sudo cat /var/run/httpd/httpd-dev.pid
18279
$ ps -A | grep httpd
18279 ? 00:00:00 httpd
18282 ? 00:00:00 httpd
18283 ? 00:00:00 httpd
18284 ? 00:00:00 httpd
18285 ? 00:00:00 httpd
18286 ? 00:00:00 httpd
18287 ? 00:00:00 httpd
18288 ? 00:00:00 httpd
18289 ? 00:00:00 httpd
...why might this be? It makes it hard to kill just my dev httpd processes later, when there will be other httpds running. I can't just use killall forever...
$ httpd -v
Server version: Apache/2.2.24 (Unix)
I should note that CentOS 6.4 minimal didn't come with killproc installed, so I changed my init.d script to use
kill -9 `cat ${pidfile}`
instead. I guess killproc would search out child PIDs? So I have to install Python to install killproc, just to use init scripts for httpd?
There are two things here:
Your single Apache instance might have several PIDs associated with it, depending on the type of MPM selected. However, this should not affect you, since you only need to signal the PID written into the PID file, and that parent process will take down the rest of the Apache instance. Note that this relies on a catchable signal such as SIGTERM; kill -9 sends SIGKILL, which the parent cannot trap, so its children survive.
If you try to run several Apache instances side by side, you'll have to specify a different PID file for each. Then you can decide which instances you want to stop: you process the PID file of each selected instance. Giving the same PID file to several instances and expecting each to put its own PID into that one file will not work.
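As a sketch, each instance gets its own config file with its own PidFile (and its own Listen port; the paths and port below are illustrative), and you stop a single instance by signalling only the PID from its file, letting the parent shut down its own children:
# /etc/httpd/conf/httpd-dev.conf
PidFile /var/run/httpd/httpd-dev.pid
Listen 8080

$ sudo httpd -f /etc/httpd/conf/httpd-dev.conf    # start the dev instance
$ sudo kill `cat /var/run/httpd/httpd-dev.pid`    # TERM the parent; it stops its workers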