I found a bunch of long-running scripts (script.php) on a server and want to kill them all.
ps aux | grep script.php
user 6270 0.1 0.1 375580 50476 ? Ss Aug20 2:18 php /path/to/script.php
user 6290 0.1 0.1 375580 50476 ? Ss 15:34 0:00 php /path/to/script.php
user 7439 0.1 0.1 375580 50476 ? Ss Aug18 2:05 php /path/to/script.php
user 8270 0.1 0.1 375580 50476 ? Ss Aug17 7:18 php /path/to/script.php
user 8548 0.1 0.1 375580 50476 ? Ss Aug15 0:15 php /path/to/script.php
user 8898 0.1 0.1 375580 50476 ? Ss Aug17 3:01 php /path/to/script.php
user 9875 0.1 0.1 375580 50476 ? Ss Aug18 2:18 php /path/to/script.php
I can kill them one at a time like so:
kill 6270
But how can I kill all of them at once?
On Linux you can use the pkill command:
pkill -f "php /path/to/script.php"
See http://en.wikipedia.org/wiki/Pkill and http://www.unix.com/man-page/opensolaris/1/pkill/ for details.
It should be something like pkill -f script.php: the -f flag matches against the full command line, which is needed here because the process name itself is php, not script.php.
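Before sending signals, it is worth previewing what a pattern matches; pgrep accepts the same matching options as pkill. A minimal sketch using the path from the question:
pgrep -af "php /path/to/script.php"    # list matching PIDs with their full command lines
pkill -f "php /path/to/script.php"     # then send SIGTERM to all of them
pkill -9 -f "php /path/to/script.php"  # escalate to SIGKILL only if they ignore SIGTERM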
Related
I'm really struggling with how to find processes by name in Linux. I'm sure it's probably something simple that I'm missing.
Are you looking for the ps command?
Here's an example:
nabil@LAPTOP:~$ ps xua | grep python
rootwsl 327 0.0 0.1 29568 17880 ? Ss Jan30 0:02 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
rootwsl 411 0.0 0.1 108116 20740 ? Ssl Jan30 0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
nabil 106387 0.0 0.0 3444 736 pts/1 S+ 23:26 0:00 grep --color=auto python
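If you only need the matching PIDs rather than the full listing, pgrep also avoids the artifact in the last line above, where grep matches its own command line. A small sketch:
pgrep -a python        # print PID and full command line of each match
pgrep -u nabil python  # restrict the match to processes owned by one user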
I'm running an rsync daemon (providing a mirror for the SaneSecurity signatures).
rsync is started like this (from runit):
/usr/bin/rsync -v --daemon --no-detach
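For reference, under runit that invocation would typically live in a run script along these lines (a sketch; the service directory path is hypothetical):
#!/bin/sh
# /etc/sv/rsync/run -- exec and --no-detach keep rsync in the
# foreground so runsv can supervise it
exec /usr/bin/rsync -v --daemon --no-detach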
And the config contains:
use chroot = no
munge symlinks = no
max connections = 200
timeout = 30
syslog facility = local5
transfer logging = no
log file = /var/log/rsync.log
reverse lookup = no
[sanesecurity]
comment = SaneSecurity ClamAV Mirror
path = /srv/mirror/sanesecurity
read only = yes
list = no
uid = nobody
gid = nogroup
But what I'm seeing is a lot of "lingering" rsync processes:
# ps auxwww|grep rsync
root 423 0.0 0.0 4244 1140 ? Ss Oct30 0:00 runsv rsync
root 2529 0.0 0.0 11156 2196 ? S 15:00 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 4788 0.0 0.0 20536 2860 ? S 15:10 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 5094 0.0 0.0 19604 2448 ? S 15:13 0:00 /usr/bin/rsync -v --daemon --no-detach
root 5304 0.0 0.0 11156 180 ? S 15:15 0:00 /usr/bin/rsync -v --daemon --no-detach
root 5435 0.0 0.0 11156 180 ? S 15:16 0:00 /usr/bin/rsync -v --daemon --no-detach
root 5797 0.0 0.0 11156 180 ? S 15:19 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 5913 0.0 0.0 20536 2860 ? S 15:20 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 6032 0.0 0.0 20536 2860 ? S 15:21 0:00 /usr/bin/rsync -v --daemon --no-detach
root 6207 0.0 0.0 11156 180 ? S 15:22 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 6292 0.0 0.0 20544 2744 ? S 15:23 0:00 /usr/bin/rsync -v --daemon --no-detach
root 6467 0.0 0.0 11156 180 ? S 15:25 0:00 /usr/bin/rsync -v --daemon --no-detach
root 6905 0.0 0.0 11156 180 ? S 15:29 0:00 /usr/bin/rsync -v --daemon --no-detach
(it's currently 15:30)
So there are processes (some not even having dropped privileges!) hanging around since 15:10, 15:13, and the like.
And what are they doing?
Let's check:
# strace -p 5304
strace: Process 5304 attached
select(4, [3], NULL, [3], {25, 19185}^C
strace: Process 5304 detached
<detached ...>
# strace -p 5797
strace: Process 5797 attached
select(4, [3], NULL, [3], {48, 634487}^C
strace: Process 5797 detached
<detached ...>
This happened with both the rsync shipped with Ubuntu Xenial and the one installed from a PPA (currently rsync 3.1.2-1~ubuntu16.04.1york0).
One process is created for each connection. Before a client selects a module, the process does not know whether it should drop privileges.
You can easily create such a process yourself:
nc $host 873
You will notice that the connection is not closed after 30s, because that timeout setting is only a disk I/O timeout. The rsync client has a --contimeout option, but a corresponding server-side option seems to be missing.
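A quick way to reproduce and watch such a lingering process (a sketch; substitute your mirror's hostname for $host):
nc $host 873 &            # connect to the daemon but never request a module
sleep 60
ps aux | grep '[r]sync'   # the per-connection rsync process is still alive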
In the end, I resorted to invoking rsync from (x)inetd instead of running it standalone.
service rsync
{
    disable         = no
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/bin/timeout
    server_args     = -k 60s 60s /usr/bin/rsync --daemon
    log_on_failure  += USERID
    flags           = IPv6
}
As an additional twist, I wrapped the rsync invocation with timeout, adding another safeguard against long-running processes.
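With the wrapper in place, repeating the idle nc probe from above should show the connection being reaped (a sketch, assuming the 60-second timeout from the config):
nc $host 873 &            # open an idle connection again
sleep 70
ps aux | grep '[r]sync'   # no per-connection process should linger past the timeout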
Is there any way to resolve this issue without installing a login manager?
I've enabled auto login for startx, following the steps from the link below:
How to make auto login work in Ubuntu? (no display manager)
Auto login is functioning now.
The device has a command-line installation from the minimal Lubuntu 16.10 mini.iso, without any desktop: only the kernel and some restricted modules. The only environment installed is Fluxbox.
After booting into Fluxbox, I can't open gnome-terminal at all until I do the steps below. xterm starts fine.
Pressing Ctrl+Alt+Del in the running Fluxbox drops me to a tty for a second or two, but because auto login is enabled it automatically redirects me from tty1 back to Fluxbox. So, in order to remain in the tty, I keep pressing Ctrl+C continuously.
Now, from the tty, I run:
sudo -i
su myusername
startx
Back in Fluxbox, I can run the terminal normally.
Do you have any clues why I can't open the terminal without doing the above?
Trying to start gnome-terminal from xterm at the first login:
gnome-terminal
Error constructing proxy for org.gnome.Terminal:/org/gnome/Terminal/Factory0: Error calling StartServiceByName for org.gnome.Terminal: Timeout was reached
I am not sure whether DBUS_SESSION_BUS_ADDRESS is set or not.
/var/log/Xorg.0.log
Output of env from xterm (after logging in manually again and running startx):
TERM=xterm
SHELL=/bin/bash
WINDOWID=8388621
XTERM_SHELL=/bin/bash
USER=root
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
SUDO_USER=xdpsx
SUDO_UID=1000
USERNAME=root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
MAIL=/var/mail/root
PWD=/home/xdpsx
LANG=en_US.UTF-8
XTERM_LOCALE=en_US.UTF-8
XTERM_VERSION=XTerm(324)
HOME=/root
SUDO_COMMAND=/bin/su
SHLVL=2
LOGNAME=root
LESSOPEN=| /usr/bin/lesspipe %s
DISPLAY=:0.0
SUDO_GID=1000
LESSCLOSE=/usr/bin/lesspipe %s %s
XAUTHORITY=/home/xdpsx/.Xauthority
COLORTERM=truecolor
_=/usr/bin/env
Output of ps aux | grep dbus from xterm (after logging in manually again and running startx):
message+ 668 0.0 0.0 6420 3936 ? Ss 19:11 0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
nobody 811 0.0 0.1 9316 4000 ? S 19:11 0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/var/run/NetworkManager/dnsmasq.pid --listen-address=127.0.1.1 --cache-size=0 --conf-file=/dev/null --proxy-dnssec --enable-dbus=org.freedesktop.NetworkManager.dnsmasq --conf-dir=/etc/NetworkManager/dnsmasq.d
xdpsx 1375 0.0 0.0 6136 3460 ? Ss 19:11 0:00 /usr/bin/dbus-daemon --session --address=systemd: --nofork --nopidfile --systemd-activation
xdpsx 1381 0.0 0.0 6136 3316 ? S 19:11 0:00 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
xdpsx 1505 0.0 0.0 7004 312 ? S 19:12 0:00 dbus-launch --autolaunch fedd8908d0d244c498876a97f5b34c28 --binary-syntax --close-stderr
xdpsx 1506 0.0 0.0 6136 3060 ? Ss 19:12 0:00 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
xdpsx 1529 0.0 0.0 6136 3356 ? S 19:14 0:00 /usr/bin/dbus-daemon --config-file=/usr/share/defaults/at-spi2/accessibility.conf --nofork --print-address 3
root 1834 0.0 0.0 5144 828 pts/1 S+ 19:25 0:00 grep --color=auto dbus
Thank you.
I have been seeing a similar problem since upgrading to Ubuntu 16.10. I discovered that I can fix the environment and start gnome-terminal like this:
dbus-update-activation-environment --systemd --all
gnome-terminal &
If that doesn't work, then you could try this:
dbus-launch gnome-terminal &
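To settle the asker's uncertainty about DBUS_SESSION_BUS_ADDRESS, a quick check plus a manual fallback might look like this (a sketch; dbus-launch --sh-syntax prints shell-exportable variable assignments):
echo "$DBUS_SESSION_BUS_ADDRESS"   # empty output means no session bus address is exported
eval "$(dbus-launch --sh-syntax)"  # start a session bus and export its address
gnome-terminal &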
I'm facing an issue.
We have a cleanup script used to remove old files, and sometimes we need to stop it and start it again later, as with the processes below. We use kill -STOP $pid and kill -CONT $pid in check.sh to control clean.sh, where $pid ranges over all the PIDs of clean.sh (here, 23939 and 25804):
root 4321 0.0 0.0 74876 1184 ? Ss 2015 0:25 crond
root 23547 0.0 0.0 102084 1604 ? S 2015 0:00 \_ crond
root 23571 0.0 0.0 8728 972 ? Ss 2015 0:00 \_ /bin/bash -c bash /home/test/sbin/check.sh >>/home/test/log/check.log 2>&1
root 23577 0.0 0.0 8732 1092 ? S 2015 0:00 \_ bash /home/test/sbin/check.sh
root 23939 0.0 0.0 8860 1192 ? S 2015 0:45 \_ bash /home/test/bin/clean.sh 30
root 25804 0.0 0.0 8860 620 ? S 2015 0:00 \_ bash /home/test/bin/clean.sh 30
root 25805 0.0 0.0 14432 284 ? T 2015 0:00 \_ ls -d ./455bb4cba6142427156d2b959b8b0986/120x60/ ./455bb4cba6142427156d2b959b8b0986/80x
root 25808 0.0 0.0 3816 432 ? S 2015 0:00 \_ wc -l
check.sh stopped clean.sh and, hours later, started it again. But there is a strange thing: after the stop and continue, the child process running 'ls -d ...' is still stopped (note its T state above).
Could you tell me whether this is caused by incorrect use of the signals, and how I can fix it?
OK, it seems my description was not clear; apologies for my English...
I'm not sure of the reason, but there is a way to solve it:
kill -CONT $pid
pkill -CONT -P $pid
The second command sends SIGCONT to the direct children of $pid, so the stopped child processes continue as well.
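As a sketch, the stop/continue logic in check.sh could be wrapped in hypothetical helpers so the children are always signalled together with the script (a signal sent to a single PID does not propagate to its children):
pause_clean() {
    kill -STOP "$1"        # stop the clean.sh shell itself
    pkill -STOP -P "$1"    # and its direct children (ls, wc, ...)
}
resume_clean() {
    kill -CONT "$1"        # resume the shell
    pkill -CONT -P "$1"    # resume any children left in the stopped (T) state
}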
I have a bunch of processes owned by apache that have been running for days because they are stuck.
apache 11173 0.1 0.0 228248 27744 ? Ss Sep27 3:58 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 12609 0.1 0.0 228244 27744 ? Ss Sep18 19:30 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 14646 0.1 0.0 228244 27744 ? Ss Sep17 21:30 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 15900 0.1 0.0 228244 27744 ? Ss Sep20 15:46 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 16169 0.1 0.0 228248 27752 ? Ss Sep22 12:16 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 16887 0.1 0.0 228244 27748 ? Ss Sep21 14:04 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 16950 0.1 0.0 228244 27744 ? Ss Sep28 2:25 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 19195 0.1 0.0 228244 27748 ? Ss Sep23 10:29 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 24605 0.1 0.0 228248 27752 ? Ss Sep24 8:48 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 26442 0.1 0.0 228244 27744 ? Ss 03:45 0:50 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 29714 0.1 0.0 228248 27752 ? Ss Sep25 7:06 php /var/www/html/myproj/symfony cron:aggregation --env=prod
apache 31031 0.1 0.0 228248 27752 ? Ss Sep26 5:30 php /var/www/html/myproj/symfony cron:aggregation --env=prod
I need to kill them all. And obviously I want to do it safely.
Thus, ideally I should kill them as apache using something like this:
kill 11173
The problem is that the apache user doesn't have a shell.
So it seems the only way is to escalate to root and kill the processes as root. But that is not safe at all (I may kill other processes by mistake).
Has anybody got a better solution?
Thanks,
Daniele
sudo -u apache kill 11173
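Since sudo -u runs a single command, no login shell is needed, and pkill run as apache can only signal apache's own processes, which is exactly the safety wanted here. A sketch using the command line from the listing above as the pattern:
pgrep -af "symfony cron:aggregation"                # preview the matching PIDs first
sudo -u apache pkill -f "symfony cron:aggregation"  # send SIGTERM to all of them, as apache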
This probably belongs on http://serverfault.com, I guess... but if you want to kill all processes owned by the apache user, run pkill -u apache as root (killall apache would only match processes literally named apache, and the stuck ones here are php). Alternatively, since apache has no login shell, change identity with su -s /bin/sh apache and kill your processes there using kill as you did.