I have several instances of a process (i.e. with a common command line). I would like to kill all of them in one go. How can I achieve that?
Options:
killall
ps|awk|xargs kill
tag-and-kill in htop
Killall is super powerful, but I find it hazardous to use indiscriminately. Option 2 is awkward to use, but I often find myself in environments that don't have killall; also, leaving out the xargs bit on the first pass lets me review the condemned processes before I swing the blade. Ultimately, I usually favour htop, since it lets me pick and choose before hitting the big "k".
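Option 2 can be sketched as follows; this is a hedged example where `sleep 300` stands in for whatever command line you actually want to match:

```shell
# Start a stand-in process so the pipeline has something to match.
sleep 300 &
# Pass 1, without xargs: review the condemned processes first.
# The [s] trick keeps the grep process itself out of the match.
ps -ef | grep '[s]leep 300'
# Pass 2: extract the PID column and hand it to kill.
# (xargs -r, a GNU extension, skips kill entirely if nothing matched.)
ps -ef | grep '[s]leep 300' | awk '{print $2}' | xargs -r kill
```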
You are probably looking for the killall command. For example:
killall perl
would kill all Perl processes running on your machine. See http://linux.die.net/man/1/killall for more details.
killall will do that for you. Use man killall for the options but I usually do:
killall myProgName
Just be very careful (e.g., use ps first to make sure it will kill only what you want).
NOTE: killall is the answer... IF you're on Linux. SysV also has a killall command, but it does something very, very different: it is part of the shutdown sequence and kills essentially every process prior to system halt. So, yes, killall is the easiest, but if you often shuttle between Linux and SysV systems, I'd recommend writing a quick script to do what you want instead.
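For that cross-platform case, pgrep/pkill are worth knowing: they originated on Solaris and ship with procps on Linux, so they behave consistently on both families. A hedged sketch, with `sleep 301` standing in for your program's command line:

```shell
# Stand-in for the processes you want to kill.
sleep 301 &
# Review the matches first; -f matches the full command line,
# and -a (a procps extension) prints it alongside the PID.
pgrep -fa 'sleep 301'
# Then signal them (SIGTERM by default).
pkill -f 'sleep 301'
```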
Related
Is it possible to find out all the programs being executed on Linux? Many scripts and other executables are launched and killed during the lifetime of a system, and I would like to get a list of these (or a printout when execution starts). I am looking at this to understand program flow on an embedded board.
Typing ps aux in a terminal will give information about the running processes (start time, CPU time, and so on), which is a starting point for keeping track of them.
There's a kernel interface which will notify a client program of every fork, exec, and exit. See this answer: https://stackoverflow.com/a/8255487/5844347
Take a look at ps -e, and perhaps a crontab entry if you want to collect that information periodically.
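The periodic-snapshot idea can be sketched like this. In practice the loop would run forever (or from cron); N, INTERVAL, and the log path here are illustrative choices, not anything prescribed:

```shell
# Take N snapshots of the process table, one every INTERVAL seconds.
N=2
INTERVAL=1
LOG=/tmp/proc-log.txt
i=0
while [ "$i" -lt "$N" ]; do
    date                      # timestamp each snapshot
    ps -e -o pid,ppid,comm    # one line per process: PID, parent PID, command
    sleep "$INTERVAL"
    i=$((i + 1))
done >> "$LOG"
```

Note that polling like this will miss short-lived processes between snapshots, which is exactly why the kernel notification interface mentioned above exists.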
I have multiple cron jobs written in Perl and they appear to be causing a high server load.
When I run top through SSH, it shows that Perl is using the most CPU and memory. However, since there are multiple cron jobs, I need to know specifically which one is using the most resources.
Is there any way to check which of the Perl files is using the most resources?
Note the PID of the process that top shows using the CPU, then do:
ps -ef | grep perl
Match the PID to one listed and you'll see the full command line of the Perl process for the high-CPU job.
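You can also ask ps for one PID directly instead of grepping. A minimal sketch, using `$$` (the shell's own PID) as a stand-in for the PID you noted from top:

```shell
# Print only the full command line (args) of one process, no header (=).
ps -o args= -p $$
```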
Well, if you look at ps -ef you should see which script maps to that process ID. You could also use strace -fTt -p <pid> to attach to a specific process ID and see what it's doing.
Or you could modify the script to change $0 to something meaningful that tells you which script is which.
But it's hard to say without a bit more detail. Is there any chance a script takes longer to run than the interval between cron firings? Because once a cron job starts backlogging, the load will slowly get worse as more and more instances pile up behind it.
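If that backlog scenario turns out to be the problem, one common mitigation is to serialize runs with flock(1) from util-linux, so a new invocation exits immediately instead of piling up. A hedged sketch (the lock path and job path are placeholders):

```shell
# In the crontab this would look something like:
#   */5 * * * * flock -n /tmp/myjob.lock /usr/bin/perl /path/to/job.pl
# Demonstrated here with a trivial command; -n means fail fast if locked.
flock -n /tmp/myjob.lock sh -c 'echo "job ran"'
```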
For example, the bash PID is 3000, and I want to limit the child PIDs to the range [3001, 3010].
I want this because I am writing an infinite while loop in bash, and the PIDs will explode.
while true; do
    something
    sleep 5
done
Every loop iteration spawns at least three child processes (true, something, sleep), so the PID counter grows by at least three per iteration. After a while, ps aux will show awkwardly big PIDs, and I think that is not a good thing.
The PIDs don't "explode"; they are recycled by the kernel. You can see the maximum PID number in /proc/sys/kernel/pid_max, and you can modify this value if you wish.
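That ceiling is easy to inspect (the paths are Linux-specific):

```shell
# PIDs wrap around to low free numbers once the counter reaches this value.
cat /proc/sys/kernel/pid_max
# Root can raise it, e.g.:
#   echo 4194304 > /proc/sys/kernel/pid_max
```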
No, that is totally impossible: the PID is assigned by the kernel during the fork(2) system call, and a user-level application or library cannot change it. The kernel does recycle PIDs eventually, as Alex answered.
You might loop on fork, exiting the child immediately if its PID (from getpid) is unsuitable, but that approach is insane, since you might need to fork several thousand times before getting lucky.
Perhaps namespaces(7) could be relevant.
There is no sane way to do that from user space, and no sane reason to want to.
If you are convinced that you have to solve this problem, and you have the sources to something, hack it to run an infinite loop with a sleep between iterations. Then you will stay in the same PID forever.
(Or slightly more off the wall, hack Bash so that sleep and something are built-ins, too. For the record, true is already a Bash built-in, so it will not actually spawn a subprocess.)
I am developing a sandbox on Linux, and now I am stuck on how to terminate all the processes in the sandbox.
My sandbox works as follows:
At first, only one process runs in the sandbox.
Then it can create several child processes.
Those children may create subprocesses of their own.
A parent process may exit before its children do.
Finally, the sandbox must terminate all of these processes.
I used to do this with killall or pkill -u plus a unique user attached to the sandbox, but that doesn't seem to work against a program that calls fork() rapidly.
Then I read the source code of pkill and realized that pkill is not atomic.
So how can I achieve my goal?
You could use process groups (setpgid(2)) and sessions (setsid(2)), but I wouldn't qualify what you describe as a sandbox (in particular, if one of the processes is setuid, or changes its process group or session itself, you'll lose it; read execve(2) carefully, and several times!). Notice that kill(2) with a negative pid kills an entire process group.
Read a good book like Advanced Linux Programming. Consider also using chroot(2).
And explain what you really want to do, and why. Sandboxing is harder than you think. See also capabilities(7), credentials(7), and SELinux.
I'm writing a program to be run from the Linux user space which spawns another process. How can it determine which files were modified by the spawned process after it completes?
Call it under strace and parse the output you care about.
Inject your own replacement for fopen(3) (e.g. via LD_PRELOAD) that records the names and modes.
Maybe g++ itself spawns other processes? Then "strace -fF -efile program" plus some filtering will probably help you.
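The strace suggestions above can be sketched like this; the log path and the traced command are placeholders, and the exact syscall names in the output (open vs. openat) vary across strace and kernel versions:

```shell
# -f follows children; -e trace=file logs file-related syscalls only.
strace -f -e trace=file -o /tmp/files.log \
    sh -c 'echo hi > /tmp/strace-demo.txt'
# Files opened for writing show up with write-mode flags:
grep -E 'O_WRONLY|O_RDWR' /tmp/files.log
```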