Kill all subprocesses that are using a file - multithreading

So I have a (Python) script that creates subprocesses in separate threads. And sometimes, when it's all finished, ghost subprocesses are somehow left hanging around, and they have a handle on some files which I'd like to get back. Process Explorer shows these leftover processes still holding the file handles.
If I use taskkill /F /PID <pid> on each individual PID value for <pid>, I can get rid of the processes and resume work. Can I somehow automate this in CMD by asking for the list of processes that still have a handle to files matching a pattern? I don't know how to do that.

Related

Linux shell scripting: How can I stop the first program when the second one has finished?

I have two programs in Linux (shell scripts, for example):
NeverEnding.sh
AllwaysEnds.sh
The first one never stops, so I want to run it in the background.
The second one stops with no problem.
I would like to make a Linux shell script that calls them both, but automatically stops (kills, for example) the first one when the second one has finished.
Specific command-line tools allowed, if needed.
You can send the first one into the background with & and get its PID from $!. Then, after the second one finishes in the foreground, you can kill the first:
#!/bin/bash
NeverEnding.sh &
pid=$!
AllwaysEnds.sh
kill $pid
You don't actually need to save the PID in a variable, since $! is only updated when you start a background process; saving it just makes the script easier to read.
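For example, this minimal variant (a sketch using the same script names as above) works without the variable:
#!/bin/bash
NeverEnding.sh &
AllwaysEnds.sh
kill $!   # $! still refers to NeverEnding.sh, since nothing else was backgrounded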

Monitor multiple instances of same process

I'm trying to monitor multiple instances of the same process. I can't for the life of me do this without running into a problem.
All the examples I have seen so far on the internet involve me writing out the PID or monitoring the process itself. The issue is that if one instance fails, it doesn't mean all the rest have failed as well.
In order to write out the PID for each process, I'd probably have to start each process with a short delay to record the correct PID, since the way I record the PID is by probing the process name.
If I'm wrong about this, please correct me. But so far I haven't found a way to monitor each individual process when they all have the same name.
To add to the above, the processes are started from a batch script and each one runs in its own screen session (ffmpeg would otherwise not be able to run in the background).
If anyone can point me vaguely in the right direction on how to do this in Linux I would really appreciate it. I read somewhere that it would be possible to set up symlinks which would then give me fake process names and that way I can monitor the 'fake' process name.
See man wait. For example, in a shell script:
wget "$url1" &
pid1=$!
wget "$url2" &
pid2=$!
wait $pid1 $pid2
will launch both wget processes and wait until both of them have finished (or failed).
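To monitor several instances individually, as the question asks, here is a sketch along the same lines that remembers each PID and checks each exit status separately (the wget commands and URL variables are just placeholders):
#!/bin/bash
pids=()
for url in "$url1" "$url2" "$url3"; do
    wget "$url" &
    pids+=("$!")                 # remember the PID of each instance
done

for pid in "${pids[@]}"; do
    if wait "$pid"; then
        echo "PID $pid finished successfully"
    else
        echo "PID $pid failed with exit status $?"
    fi
done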

How to set process ID in Linux for a specific program

I was wondering if there is some way to force Linux to use a specific process ID for an application before running it. I need to know the process ID in advance.
Actually, there is a way to do this. Since kernel 3.3 with CONFIG_CHECKPOINT_RESTORE set (which it is in most distros), there is /proc/sys/kernel/ns_last_pid, which contains the last PID generated by the kernel. So, if you want to set the PID of a forked program, you need to perform these steps:
Open /proc/sys/kernel/ns_last_pid and get fd
flock it with LOCK_EX
write PID-1
fork
Voilà! The child will have the PID that you wanted.
Also, don't forget to unlock (flock with LOCK_UN) and close ns_last_pid.
You can check out the C code on my blog here.
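A rough shell sketch of those steps (run as root; the target PID 5000 and the sleep command are arbitrary placeholders, and note that the flock is only advisory, so an unrelated fork can still grab the PID first):
target=5000
exec 3> /proc/sys/kernel/ns_last_pid   # open the file and keep fd 3
flock -x 3                             # lock it, as the C version does
echo $((target - 1)) >&3               # the next fork should receive $target
sleep 1000 &                           # fork the program you actually care about
echo "started PID $!"
flock -u 3                             # unlock ...
exec 3>&-                              # ... and close the fd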
As many have already suggested, you cannot set a PID directly, but shells usually have facilities for finding out the PID of the last forked process.
For example, in bash you can launch an executable in the background (by appending &) and find its PID in the variable $!.
Example:
$ lsof >/dev/null &
[1] 15458
$ echo $!
15458
On CentOS 7.2 you can simply do the following:
Let's say you want to execute the sleep command with a PID of 1894.
echo 1893 | sudo tee /proc/sys/kernel/ns_last_pid >/dev/null; sleep 1000
(However, keep in mind that if by chance another process is spawned in the extremely brief window between the echo and the sleep command, you could end up with a PID of 1895 or higher. I've tested it hundreds of times and it has never happened to me. If you want to guarantee the PID, you will need to lock the file, write to it, start sleep, and then unlock the file, as suggested in Ruslan's answer above.)
There's no way to force a specific PID for a process. As Wikipedia says:
Process IDs are usually allocated on a sequential basis, beginning at 0 and rising to a maximum value which varies from system to system. Once this limit is reached, allocation restarts at 300 and again increases. In Mac OS X and HP-UX, allocation restarts at 100. However, for this and subsequent passes any PIDs still assigned to processes are skipped.
You could just repeatedly call fork() to create new child processes until you get a child with the desired PID. Remember to call wait() often, or you will hit the per-user process limit quickly.
This method assumes that the OS assigns new PIDs sequentially, which appears to be the case, e.g., on Linux 3.3.
The advantage over the ns_last_pid method is that it doesn't require root permissions.
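A hedged shell approximation of that idea (every & forks exactly one child; the target PID 5000 and my_command are placeholders; the loop assumes sequential allocation with nothing else forking in between, and will spin until the PID counter wraps around if the target has already been passed):
#!/bin/bash
target=5000
while :; do
    true &                                    # throwaway child, consumes one PID
    pid=$!
    wait "$pid"
    [ $((pid + 1)) -eq "$target" ] && break   # the next fork should get $target
done
my_command &                                  # hypothetical program you want at $target
echo "got PID $!"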
Every process on a Linux system is created by fork(), so there should be no way to force a specific PID.
From Linux 5.5 you can pass an array of PIDs to the clone3 system call to be assigned to the new process, up to one for each nested PID namespace, from the inside out. This requires CAP_SYS_ADMIN or (since Linux 5.9) CAP_CHECKPOINT_RESTORE over the PID namespace.
If you are not concerned with PID namespaces, use an array of size one.

Confusion with PIDs and processes on Linux

From reading docs and posts online, most people say that to kill a process in Linux, only the command kill <pid> is needed.
For example to kill memcached would be kill $(cat memcached.pid)
But for pretty much every process that I've tried to kill, including the one above, this did not work. I managed to get it to work with a different command:
ps aux | grep (process name here)
That command, for whatever reason, would get a different PID, which would work when killing the program.
I guess my question is, why are there different PIDs? Isn't the point of an ID to be unique? Why do celery, memcached, and other processes all have different PIDs when using the ps aux | grep command versus the PID in the .pid file? Is this some kind of error in my configuration, or is it meant to be like this?
Also, where is it possible to get all arguments and descriptions for an executable in Linux?
I know the man command is useful for some functions, but it won't work for many executables, like celery for example.
Thanks!
The process ID (PID) is assigned by the operating system on the fly when a process starts up. It's unique in the sense that no two running processes have the same ID at the same time, but the value is not guaranteed to be the same from one run of a program to another. The best way to think of it is like a "now serving" ticket: it identifies the process only for as long as it lives. If the PID in a .pid file doesn't match what ps shows, the file is usually stale (left over from an earlier run) or it records the main daemon process rather than the worker processes you found with grep.
You are correct that you can look up an ID via ps and grep, though you may find it easier to just use:
pgrep (process name here)
Also, if you just want to kill the process, you can even skip the above step and use:
pkill (process name here)
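For example (the process names here are just placeholders):
pgrep memcached            # print the PID of every process named memcached
pkill memcached            # send SIGTERM to every process named memcached
pkill -f 'celery worker'   # -f matches against the full command line, not just the name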

Linux I/O to a running daemon / process

Is it possible to do I/O with a running process?
I have multiple game servers running like this:
cd /path/to/game/server/binary
./binary arg1 arg2 ... argn &
Is it possible to write a message to a server if I know the process ID?
Something like this would be handy:
echo "quit" > process1234
Where process1234 is the process with PID 1234.
The game server is not a binary written by me; it is a Call of Duty binary, so I can't change anything in the code.
Yes, you can start up the process with a pipe as its stdin and then write to the pipe. You can use a named or anonymous pipe.
Normally a parent process is needed to do this: it creates an anonymous pipe and supplies it to the child process as its stdin. popen() does this, and many libraries also implement it (see Perl's IPC::Open2 for example).
Another way would be to run it under a pseudo tty, which is what "screen" does. Screen itself may also have a mechanism for doing this.
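A minimal named-pipe sketch (the FIFO path and server command line are placeholders; the FIFO needs a writer held open, otherwise the server sees end-of-file after the first echo):
mkfifo /tmp/gameserver.in
./binary arg1 arg2 < /tmp/gameserver.in &   # the server reads its stdin from the FIFO
exec 3> /tmp/gameserver.in                  # keep a writer open so stdin doesn't hit EOF
echo "quit" >&3                             # send commands to the server whenever needed
exec 3>&-                                   # closing the last writer ends the server's stdin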
Only if the process is listening for some message somewhere. For instance, your game server can be waiting for input on a file, over a network connection, or from standard input.
If your process is not actively listening for something, the only things you can really do are halt or kill it.
Now if your process is waiting on standard input, and you ran it like so:
$ myprocess &
Then (in Linux) you should be able to try the following:
$ jobs
[1]+ Running myprocess &
$ fg 1
And at this point you are typing standard input into your process.
You can only do that if the process is explicitly designed for that.
But since your example is asking the process to quit, I'd recommend trying signals. First try sending the TERM (i.e. terminate) signal, which is the default:
kill <pid>
If that doesn't work, you can try other signals such as QUIT:
kill -QUIT <pid>
If all else fails, you can use the KILL signal. This is guaranteed (*) to stop the process, but the process will have no chance to clean up:
kill -KILL <pid>
* - In the past, kill -KILL would not work if the process was hung on a flaky network file server. I don't know if they ever fixed this.
I'm pretty sure this would work, since the server has a console on stdin:
echo "quit" > /proc/<server pid>/fd/0
You mention in a comment below that your process does not appear to read from the console on fd 0. But it must read from some fd. Run ls -l /proc/<server pid>/fd/ and look for one that's pointing at /dev/pts/, if the process is running in a gnome-terminal or xterm or something.
If you want to do a few simple operations on your server, use signals as mentioned elsewhere. Set up signal handlers in the server and have each signal perform a different action e.g.:
SIGINT: Reread config file
SIGHUP: quit
...
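For a server you control (not the stock Call of Duty binary), a bash trap sketch of that idea, where reload_config and shutdown_server are hypothetical functions:
#!/bin/bash
reload_config()   { echo "rereading config file"; }    # hypothetical handler
shutdown_server() { echo "shutting down"; exit 0; }    # hypothetical handler

trap reload_config   INT    # kill -INT <pid>  triggers a config reload
trap shutdown_server HUP    # kill -HUP <pid>  triggers a clean quit

while :; do sleep 1; done   # stand-in for the server's main loop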
Highly hackish, don't do this if you have a saner alternative, but you can redirect a process's file descriptors on the fly if you have ptrace permissions.
$ echo quit > /tmp/quitfile
$ gdb binary 1234
(gdb) call dup2(open("/tmp/quitfile", 0), 0)
(gdb) continue
open("/tmp/quitfile", O_RDONLY) returns a file descriptor to /tmp/quitfile. dup2(..., STDIN_FILENO) replaces the existing standard input by the new file descriptor.
We inject this code into the application using gdb (but with numeric constants, since the #define constants may not be available), and ta-da.
Simply run it under screen and don't background it. Then you can either connect to it with screen interactively and tell it to quit, or (with a bit of expect hackery) write a script that will connect to screen, send the quit message, and disconnect.
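A sketch of that approach using screen's own command interface instead of expect (the session name cod is arbitrary; screen's stuff command types the given text into the server's terminal):
screen -dmS cod ./binary arg1 arg2        # start the server inside a detached screen session
screen -S cod -p 0 -X stuff $'quit\r'     # later: type "quit" followed by Enter into that session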
