How do you kill the thunar process? - linux

If I'd like to kill all instances of that file manager, I'd do
killall thunar
which gives me
thunar: no process found
But this FM is definitely running!
Similarly ps aux | grep thunar doesn't find anything and yields:
cadoiz 27791 0.0 0.0 9588 2656 pts/0 S+ 11:33 0:00 grep --color=auto thunar

killall seems to be case-sensitive, and for some reason Thunar with a capital T works:
killall Thunar
See this Debian forum thread discussing the topic.
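If you'd rather not guess the capitalisation, psmisc's killall also has an ignore-case flag, so the following should work however the process name is capitalised:
killall -I thunar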

Related

supervisord not killing all spawned node processes on stop command

I encountered something weird when deploying a new service with supervisord. These are the relevant parts:
# supervisord.conf
[program:express]
command=yarn re-express-start
# package.json
{
  "scripts": {
    "re-express-start": "node lib/js/client/Express.bs.js"
  }
}
When I run supervisorctl start, the node server is started as expected. But after I run supervisorctl stop, the server keeps on running even though supervisor thinks it's been killed.
If I change the supervisord.conf file to execute node lib/js/client/Express.bs.js directly (without going through yarn), then this works as expected. But I want to go through the package.json-defined script.
I looked at what the process tree looks like, but I don't quite understand why this happens. Below are the processes before and after stopping the supervisord-managed service.
$ ps aux | grep node
user 12785 1.4 3.5 846404 72912 ? Sl 16:30 0:00 node /usr/bin/yarn re-express-start
user 12796 0.0 0.0 4516 708 ? S 16:30 0:00 /bin/sh -c node lib/js/client/Express.bs.js
user 12797 5.2 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12830 0.0 0.0 14216 1004 pts/1 S+ 16:30 0:00 grep --color=auto node
$ pstree -c -l -p -s 12785
systemd(1)───supervisord(7153)───node(12785)─┬─sh(12796)───node(12797)─┬─{node}(12798)
                                             │                         └─{node}(12807)
                                             ├─{node}(12786)
                                             └─{node}(12795)
$ supervisorctl stop express
$ ps aux | grep node
user 12797 0.7 2.7 697648 56384 ? Sl 16:30 0:00 /usr/bin/node lib/js/client/Express.bs.js
root 12975 0.0 0.0 14216 980 pts/1 S+ 16:32 0:00 grep --color=auto node
$ pstree -c -l -p -s 12797
systemd(1)───node(12797)─┬─{node}(12798)
                         └─{node}(12807)
$ kill 12797
$ ps aux | grep node
root 13426 0.0 0.0 14216 976 pts/1 S+ 16:37 0:00 grep --color=auto node
From the above, the "actual" workload process doing the server work has PID 12797. It is spawned by the supervised process and nested a few levels down.
Stopping the service via supervisor stops the processes with PIDs 12785 and 12796, but not 12797, which gets reparented to the init process.
Any ideas what is happening here? Is something ignoring some SIGxxx signals? I assume the yarn invocation is somehow swallowing them,
but I don't know how, or how to reconfigure it.
I ran into this issue as well when I was running a node Express app. The problem seemed to be that I was having supervisor call npm start which refers to the package.json start script. That script simply calls node app.js. The solution seemed to be to directly call that command from the supervisor config file like so:
[program:node]
...
command=node app.js
...
stopasgroup=true
stopsignal=QUIT
In addition, I added stopasgroup and changed the stopsignal to QUIT. The stopsignal seemed to be required in order to properly kill the process.
I can now freely call supervisorctl restart node:node_00 without having any ERROR (spawn error) errors.
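If you want to keep going through yarn instead of calling node directly, a sketch of the same idea applied to the original program block (stopasgroup and killasgroup are standard supervisord options; whether you also need a different stopsignal depends on your setup):
[program:express]
command=yarn re-express-start
stopasgroup=true
killasgroup=true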

"No such process" when trying to kill a running python script

I saw and tried many solutions.
I used ps aux | grep script.py to get the PID of the process and got the following output:
bioseq 24739 0.0 0.0 112884 1200 pts/1 R+ 13:20 0:00 grep --color=auto /script.py
I then typed kill 112884 and got the output 112884: No such process.
I also tried a similar command with grep -i, which yielded a different PID; kill <pid> also yielded <pid>: No such process.
Try pkill to kill the process, but also check your cron: it's possible that you kill the process but a crontab entry restarts it constantly.
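A minimal sketch of both checks, run as the owning user (the script name is taken from the question):
pkill -f script.py
crontab -l | grep script.py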
First of all, check whether the listed process is a zombie process; if so, you cannot kill it. Its lifetime depends on its parent process.
If you add the u flag to the ps call, it also displays the STAT column, which shows Z for zombie processes.
If it is a zombie process, this has a straightforward explanation:
How to kill zombie process
If it is not a zombie process, try the killall [process name] command. It expects a process name, e.g. killall gedit, which kills all such processes.
For more, refer to man killall
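As a quick sketch of that check (assuming the script runs under a python interpreter):
ps -o pid,stat,cmd -C python
A Z in the STAT column marks a zombie; any other state means the process can be signalled normally.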

unable to start an application with tmux at startup

I'm trying to start an application (newsbeuter) at boot but I can't.
I'm trying with:
tmux new-session -d -s main
tmux new-window -t main:1 '/usr/bin/newsbeuter'
Tmux is up but newsbeuter doesn't start:
ps -ef | grep -i tmux
root 2118 1 0 16:09 ? 00:00:00 tmux new-session -d -s main
pi 2245 2211 0 16:09 pts/1 00:00:00 grep --color=auto -i tmux
ps -ef | grep -i news
pi 2247 2211 0 16:09 pts/1 00:00:00 grep --color=auto -i news
Could you help me please?
Many thanks, and sorry for my English!
Upon startup, Newsbeuter will look for its urls file, first in $XDG_CONFIG_HOME/.config/newsbeuter, then in ~/.newsbeuter (the file should be named urls). If it doesn't find one, it quits with an error message. I suspect that's what's happening in your case: since you're starting things from /etc/rc.local, your $HOME is not your user's, so Newsbeuter doesn't find the file and quits.
One way to correct this would be to su into your user before starting Newsbeuter.
Another would be to provide the path to urls explicitly with --url-file=/home/username/.newsbeuter/urls (and also --cache-file, probably --config-file as well).
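Putting those two suggestions together, a sketch of what the /etc/rc.local lines might look like (the username pi and the file locations are assumptions, adjust them to your setup):
su - pi -c "tmux new-session -d -s main"
su - pi -c "tmux new-window -t main:1 '/usr/bin/newsbeuter --url-file=/home/pi/.newsbeuter/urls --cache-file=/home/pi/.newsbeuter/cache.db'"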
To see a possible error message, do tmux set set-remain-on-exit before the tmux new-window, and afterwards attach to the new window and press Ctrl-B Page Up.

Why does the command in /root/.bash_profile start twice?

Here is my /root/.bash_profile:
export DISPLAY=:42 && cd /home/df/SimulatedRpu-ex/bin && ./SimulatedRpu-V1 &
When I start my server, I run ps aux | grep SimulatedRpu and here is the output:
root 2758 0.2 1.0 62316 20416 ? Sl 14:35 0:00 ./SimulatedRpu-V1
root 3197 0.5 0.9 61428 19912 pts/0 Sl 14:35 0:00 ./SimulatedRpu-V1
root 3314 0.0 0.0 5112 716 pts/0 S+ 14:35 0:00 grep SimulatedRpu
So the program prints an error message saying the port is already in use.
But why does the command in /root/.bash_profile start twice?
Please help me, thank you! By the way, I use Red Hat Enterprise Linux 5.5.
The profile is read every time you log in. So just by logging in to run the ps aux | grep SimulatedRpu, you run the profile once more and thus start a new process.
You should put the command into an init script instead.
[EDIT] You should also run Xvnc in the same script - that way, you can start and stop the display server together with your app.
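A minimal sketch of such an init script for a SysV-style system like RHEL 5.5 (the script name is an assumption, and the Xvnc startup mentioned above is omitted):
#!/bin/sh
# /etc/init.d/simulatedrpu
case "$1" in
  start)
    export DISPLAY=:42
    cd /home/df/SimulatedRpu-ex/bin && ./SimulatedRpu-V1 &
    ;;
  stop)
    killall SimulatedRpu-V1
    ;;
esac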
Try it like
if ! ps aux | grep '[S]imulatedRpu'; then
    export DISPLAY=:42 && cd /home/df/SimulatedRpu-ex/bin && ./SimulatedRpu-V1 &
fi
This way it first checks whether the application is already running, and only starts it if it isn't. The [] around the S are there to prevent grep from finding itself ;)

Why is my nohup invalid in putty?

In my putty terminal, I typed the following command:
[username#vm186 bin]$ nohup ./mongod --dbpath ~/mongodb-data/ &
[1] 5967
[username#vm186 bin]$ nohup: appending output to `nohup.out'
Then ps showed that nohup apparently had no effect:
[username#vm186 bin]$ ps -auxw | grep mongo
username 5967 0.0 0.0 76172 4716 pts/8 Sl 10:03 0:00 ./mongod --dbpath /home/username/mongodb-data/
username 6140 0.0 0.0 61192 780 pts/8 S+ 10:04 0:00 grep mongo
So when I close the window, mongod will receive the signal and quit.
What's wrong with my command? Or is something wrong with my putty configuration?
On my system (FreeBSD) nohup won't show with ps, but the program it starts will show, and will survive closing putty. Did your program exit after closing putty?
Nohup is not supposed to keep running. It just redirects standard output and standard error, sets SIGHUP to be ignored, and executes the program you requested. The requested program completely replaces nohup but inherits the file descriptors and the ignored SIGHUP; that's what prevents the process from terminating when you log out. For more information, look at the source. You're probably using nohup from coreutils.
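A small illustration of that replacement, using sleep as a stand-in for mongod:
nohup sleep 300 &
ps -o pid,comm -p $!
The ps output shows sleep rather than nohup, because nohup exec'd the target program after setting up the output redirection and the ignored SIGHUP.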
