Here is my /root/.bash_profile:
export DISPLAY=:42 && cd /home/df/SimulatedRpu-ex/bin && ./SimulatedRpu-V1 &
When I start my server, I run ps aux | grep SimulatedRpu and here is the output:
root 2758 0.2 1.0 62316 20416 ? Sl 14:35 0:00 ./SimulatedRpu-V1
root 3197 0.5 0.9 61428 19912 pts/0 Sl 14:35 0:00 ./SimulatedRpu-V1
root 3314 0.0 0.0 5112 716 pts/0 S+ 14:35 0:00 grep SimulatedRpu
So the program prints an error message saying the port is already in use.
But why does the command in /root/.bash_profile start twice?
Please help me, thank you! By the way, I use Red Hat Enterprise Linux 5.5.
The profile is read every time you log in. So just by logging in to run the ps aux | grep SimulatedRpu, you run the profile once more and thus start a new process.
You should put the command into an init script instead.
[EDIT] You should also run Xvnc in the same script - that way, you can start and stop the display server together with your app.
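A minimal sketch of such a SysV init script for RHEL 5.5 (the file name /etc/init.d/simulatedrpu and the Xvnc invocation are assumptions; adjust paths and options to your setup):

#!/bin/bash
# /etc/init.d/simulatedrpu - minimal sketch, not a full LSB-compliant init script
# chkconfig: 345 99 01
# description: starts Xvnc on display :42 and SimulatedRpu-V1
case "$1" in
  start)
    Xvnc :42 &                                   # assumed display server command; adjust options
    export DISPLAY=:42
    cd /home/df/SimulatedRpu-ex/bin && ./SimulatedRpu-V1 &
    ;;
  stop)
    killall SimulatedRpu-V1
    killall Xvnc
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac

Register it once with chkconfig --add simulatedrpu so it starts at boot instead of at every login.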
Try it like this:
if ! ps aux | grep '[S]imulatedRpu'; then
export DISPLAY=:42 && cd /home/df/SimulatedRpu-ex/bin && ./SimulatedRpu-V1 &
fi;
This way it first checks whether the application is already running. The [] around the S prevent grep from finding itself ;)
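If pgrep (part of procps) is available on your system, the same check can be written without the bracket trick; this is just an equivalent sketch of the snippet above:

if ! pgrep -f 'SimulatedRpu-V1' > /dev/null; then
    export DISPLAY=:42 && cd /home/df/SimulatedRpu-ex/bin && ./SimulatedRpu-V1 &
fi

pgrep never matches its own process, so no [S] workaround is needed.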
If I'd like to kill all instances of the Thunar file manager, I'd do
killall thunar
which gives me
thunar: no process found
But this FM is definitely running!
Similarly ps aux | grep thunar doesn't find anything and yields:
cadoiz 27791 0.0 0.0 9588 2656 pts/0 S+ 11:33 0:00 grep --color=auto thunar
killall seems to be case-sensitive, and the process name is actually Thunar with a capital T, so this works:
killall Thunar
See this Debian forum thread discussing the topic.
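If the exact capitalization is unknown, both grep and killall can match case-insensitively. A small sketch, assuming GNU grep and the killall from psmisc:

ps aux | grep -i '[t]hunar'    # -i ignores case; the [t] keeps grep from matching itself
killall -I thunar              # -I / --ignore-case matches the name regardless of case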
I encountered something weird when deploying a new service with supervisord. These are the relevant parts:
# supervisord.conf
[program:express]
command=yarn re-express-start
# package.json
{
"scripts": {
"re-express-start": "node lib/js/client/Express.bs.js",
}
}
When I run supervisorctl start, the node server is started as expected. But after I run supervisorctl stop, the server keeps on running even though supervisor thinks it's been killed.
If I change the supervisord.conf file to execute node lib/js/client/Express.bs.js directly (without going through yarn), then this works as expected. But I want to go through the package.json-defined script.
I looked at what the process tree looks like, but I don't quite understand why this happens. Below are the processes before and after stopping the supervisord-managed service.
$ ps aux | grep node
user 12785 1.4 3.5 846404 72912 ? Sl 16:30 0:00 node /usr/bin/yarn re-express-start
user 12796 0.0 0.0 4516 708 ? S 16:30 0:00 /bin/sh -c node lib/js/client/Express.bs.js
user 12797 5.2 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12830 0.0 0.0 14216 1004 pts/1 S+ 16:30 0:00 grep --color=auto node
$ pstree -c -l -p -s 12785
systemd(1)───supervisord(7153)───node(12785)─┬─sh(12796)───node(12797)─┬─{node}(12798)
                                             │                         └─{node}(12807)
                                             ├─{node}(12786)
                                             └─{node}(12795)
$ supervisorctl stop express
$ ps aux | grep node
user 12797 0.7 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12975 0.0 0.0 14216 980 pts/1 S+ 16:32 0:00 grep --color=auto node
$ pstree -c -l -p -s 12797
systemd(1)───node(12797)─┬─{node}(12798)
                         └─{node}(12807)
$ kill 12797
$ ps aux | grep node
root 13426 0.0 0.0 14216 976 pts/1 S+ 16:37 0:00 grep --color=auto node
From the above, the "actual" workload process doing the server work has PID 12797. It is spawned by the supervisord-managed process and nested a couple of levels below it.
Stopping the service through supervisorctl stops the processes with PIDs 12785 and 12796, but not 12797, which instead gets reparented to the init process.
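For reference, the parent and process-group IDs of those processes can be checked before stopping the service (the PIDs are the ones from the listing above and will differ between runs):

ps -o pid,ppid,pgid,cmd -p 12785,12796,12797    # show parent PID and process group for each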
Any ideas on what is happening here? Is this due to something ignoring some SIGxxx signals? I assume it's the yarn invocation somehow eating those, but I don't know how, or how to reconfigure it.
I ran into this issue as well when I was running a node Express app. The problem seemed to be that I was having supervisor call npm start, which refers to the package.json start script; that script simply calls node app.js. The solution seemed to be to call that command directly from the supervisor config file, like so:
[program:node]
...
command=node app.js
...
stopasgroup=true
stopsignal=QUIT
In addition, I added stopasgroup and changed the stopsignal to QUIT. The stopsignal seemed to be required in order to properly kill the process.
I can now freely call supervisorctl restart node:node_00 without having any ERROR (spawn error) errors.
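If you apply the same change, supervisord has to pick up the new configuration before it takes effect. A typical sequence (the program name node:node_00 is from my setup and will differ for you):

supervisorctl reread               # re-read supervisord.conf
supervisorctl update               # apply changed program sections
supervisorctl restart node:node_00 # restart under the new settings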
I have a new regression suite that uses the Wiremock standalone JAR. In order to ensure this is running on the server, I have this script called checkwiremock.sh
#!/bin/bash
cnt=$(ps -eaflc --sort stime | grep wiremock-standalone-2.11.0.jar |grep -v grep | wc -l)
if(test $cnt -eq 1);
then
echo "Service already running..."
else
echo "Starting Service"
nohup java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324 --verbose &
fi
The script works as expected when run manually:
./checkwiremock.sh
However, when started from crontab,
* * * * * /bin/bash /etc/opt/wiremock/checkwiremock.sh
Wiremock returns
No response could be served as there are no stub mappings in this WireMock instance.
The only difference I can see between the manually started process and the cron-started process is the TTY:
root 31526 9.5 3.2 1309736 62704 pts/0 Sl 11:28 0:01 java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324
root 31729 22.0 1.9 1294104 37808 ? Sl 11:31 0:00 java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324
Can't figure out what is wrong here.
Server details:
Red Hat Enterprise Linux Server release 6.5 (Santiago)
*Edit: corrected paths to ones actually used
Add a cd at the top of checkwiremock.sh so the cron-started process runs from the same working directory as your manual run:
cd /path/to/shell/script
WireMock standalone loads its stubs from the mappings/ and __files/ folders relative to the current working directory. When you run the script manually you happen to be in the right directory, but cron starts the script from a different one (typically the user's home), which is why the instance it launches serves no stub mappings.
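With the paths from the question, a sketch of the adjusted script could look like this (it assumes the mappings/ and __files/ folders live next to the jar in /etc/opt/wiremock; adjust if they are elsewhere):

#!/bin/bash
# Run from the directory holding WireMock's mappings/ and __files/ folders,
# so the cron-started instance finds the same stubs as a manual run.
cd /etc/opt/wiremock || exit 1
cnt=$(ps -eaflc --sort stime | grep wiremock-standalone-2.11.0.jar | grep -v grep | wc -l)
if [ "$cnt" -ge 1 ]; then
    echo "Service already running..."
else
    echo "Starting Service"
    nohup java -jar /etc/opt/wiremock/wiremock-standalone-2.11.0.jar --port 1324 --verbose &
fi

Alternatively, WireMock's --root-dir option can point the instance at the stub directory explicitly, regardless of where the script is started from.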
My nginx is not starting on port 80.
I have added the following details:
$ nginx -s reload
2016/03/23 16:11:27 [error] 24992#0: invalid PID number "" in "/run/nginx.pid"
$ ps -ef | grep nginx
root 25057 2840 0 16:16 pts/1 00:00:00 grep --color=auto nginx
$ kill -9 25057
bash: kill: (25057) - No such process
$ service nginx start
Nothing..
Please provide a solution.
Running nginx -s reload without first starting nginx will result in an error, because nginx looks for the file containing its master PID when you tell it to reload. In your case nginx wasn't running, so the file containing that PID doesn't exist.
By running kill -9 25057 you tried to kill your own ps -ef | grep nginx command, which no longer existed, so you got "No such process".
To make sure all is well I would stop nginx with nginx -s stop, then start it with nginx, followed by nginx -s reload to check that all is well. In any case, the error log at /var/log/nginx/error.log might tell you if something bad is going on.
If that works, you can try accessing http://localhost:80 (or however you have configured nginx) and also follow the error log and the access log (/var/log/nginx/error.log and /var/log/nginx/access.log).
As a side note: if this by any chance happens to be a case where nginx is reloaded by some other tool like confd, you should also check whether nginx actually stores its PID in /run/nginx.pid as opposed to /var/run/nginx/nginx.pid.
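Once nginx is up, a quick sanity check could look like this (nginx -t and ps -C are standard tools; the pid file path is the one from the error message above):

nginx -t                      # test the configuration for syntax errors
ps -o pid,user,cmd -C nginx   # list the master and worker processes with their owners
cat /run/nginx.pid            # should contain the PID of the master process shown above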
Let's talk about what we have here first:
$ nginx -s reload
2016/03/23 16:11:27 [error] 24992#0: invalid PID number "" in "/run/nginx.pid"
It's probably because the /run/nginx.pid file is empty, which causes issues with the stop|start|restart commands, so you have to edit it with sudo and put the PID of your currently running nginx master process into it. Now let's have a look at the next lines, which are connected with this.
$ ps -ef | grep nginx
root 25057 2840 0 16:16 pts/1 00:00:00 grep --color=auto nginx
$ kill -9 25057
bash: kill: (25057) - No such process
Here you're trying to kill a process that is NOT the nginx master process (25057 was just your own grep). First run the following command to see the PIDs of the nginx master process and its worker:
$ ps -aux | grep "nginx"
root 17711 0.0 0.3 126416 6632 ? Ss 18:29 0:00 nginx: master process nginx -c /etc/nginx/nginx.conf
www-data 17857 0.0 0.2 126732 5588 ? S 18:32 0:00 nginx: worker process
ubuntu 18264 0.0 0.0 12916 984 pts/0 S+ 18:51 0:00 grep --color=auto nginx
Next, kill both:
$ sudo kill -9 17711
$ sudo kill -9 17857
and then try to start nginx again.
$ service nginx start
Nothing..
I have nothing to add here ;)
Summary:
I think editing the /run/nginx.pid file with the nginx master process PID should solve the issue. So, according to my example above, the file should look like this:
17711
Hope that helps!
I had this problem. I restarted nginx.service and that fixed it.
Run sudo systemctl restart nginx.service and then run sudo nginx -s reload on Ubuntu.
ps -ef | grep nginx | grep root | grep -v grep | awk '{ print $2 }' > /run/nginx.pid
nginx -s reload
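If the grep chain matches more than one root-owned line, the pid file ends up containing several numbers. A variant that writes only the master PID (pgrep -o picks the oldest matching process, which is the master) might look like this:

pgrep -o -x nginx > /run/nginx.pid
nginx -s reload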
I have a Bitnami Jenkins VM. How do I tell which user Jenkins is running as? I suspect it is Tomcat.
If you have access to the GUI, you can go to "Manage Jenkins" > "System Information" and look for "user.name".
I would use ps to get the uid of the process, and grep for that in /etc/passwd
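A minimal sketch of that approach, assuming Jenkins shows up as a java process (the awk step just maps the numeric UID back to a user name in /etc/passwd):

uid=$(ps -o uid= -C java | head -n 1 | tr -d ' ')   # UID of the first java process found
awk -F: -v uid="$uid" '$3 == uid {print $1}' /etc/passwd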
You could also create a Jenkins job containing a shell script box with the "whoami" command.
Use this command to see which user and processes your Jenkins server runs as:
ps axufwwww | grep 'jenkins\|java'
To interpret the results, look for:
jenkins 1087 0.0 0.0 18740 396 ? S 08:00 0:00 /usr/bin/daemon --name=jenkins
jenkins 1088 1.6 20.7 3600900 840116 ? Sl 08:00 2:12 \_ /usr/bin/java
1087 and 1088 are the PIDs; they might differ for you. The first column shows the user the processes run as, here jenkins.
ps aux | grep '/usr/bin/daemon' | grep 'jenkins' | awk '{print $1}'
The command lists running processes, greps for a daemon process whose command line includes the string 'jenkins', and finally prints the first column of the matching line, which is the user running Jenkins.
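A shorter variant, assuming pgrep is available and the Jenkins command line contains the string jenkins (true for both the daemon wrapper above and a Tomcat-deployed jenkins.war):

ps -o user= -p "$(pgrep -f jenkins | head -n 1)"    # print the owner of the first matching process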