Difference between qdel and kill commands [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 9 years ago.
Are there any differences between running the qdel command and kill -9 to kill a job that is running on several compute nodes of an HPC cluster?
The effect of kill -9 seems to be immediate, while qdel takes 5-10 minutes to change the job status from running (R) to canceled (C) before it stops.

kill -9 is a sledgehammer, machine gun and nuclear bomb all wrapped in one. Processes killed by kill -9 get no chance to clean up any resources they may have allocated. kill -9 will not directly remove your jobs from the job queue.
Think of kill -9 as the zombies in World War Z. Kill at all costs, no matter what.
As I understand it, qdel is a little friendlier in that it does two things, not necessarily in this order:
- performs a controlled stop of the job, allowing it to clean up
- removes the job from the job queue
You can think of qdel as Dr. Kevorkian... He's all nice and friendly and wants to help you so he cooperates to a point... But ultimately, he's there to kill you too.
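The practical difference is the signal: qdel first asks nicely, roughly SIGTERM followed later by SIGKILL (exact behavior varies by batch system), while kill -9 skips straight to SIGKILL. A minimal sketch of the graceful path, using sleep as a stand-in for one of the job's processes:

```shell
# Graceful shutdown, roughly what a scheduler does per-process
# (batch-system details vary): SIGTERM lets the job clean up.
sleep 300 &            # stand-in for one of the job's processes
pid=$!
kill -TERM "$pid"      # polite: handlers run, temp files can be removed
wait "$pid"
status=$?
echo "job process exited with status $status"   # 143 = 128 + 15 (SIGTERM)
```

With kill -KILL instead, the same wait would report 137 (128 + 9) and no handler would have run.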

Related

Who terminates my processes when I shut down my Linux desktop (after they called setsid())? [closed]

Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 2 days ago.
On a usual recent Linux desktop (in my case: Debian 11 with X11 and Plasma), imagine that I start a process which turns into a background process by fork()ing and then calling setsid().
(in my particular case, all that happens inside the execution of something like subprocess.Popen(["sleep", "999999"], start_new_session=True) in Python, if I understand it correctly)
I can see that my process survives a logout. That's fine. But when I shut down or reboot my machine, at some point it gets terminated, right? I assume that it will receive SIGTERM from somewhere, and SIGKILL a bit later if it's still alive. SIGHUP would not surprise me either. Is that correct?
But what part of the system exactly does that? What are the modalities? When does that happen (i.e. what parts of the system can I assume to be still operating at that time)? How much time does the process have before getting killed? And who exactly does that (systemd? the display manager? something else?)?
I tried to handle SIGTERM in my Python code, and to make some cleanup there. It seems to work partially, but gets interrupted. The entire shutdown procedure takes just a few seconds, so it's not stuck. I tried finding something in journalctl -xb -1, but it's large, and I was not able to find any interesting traces.

How to see who kills execution of the script? [closed]

Closed 4 years ago.
There's a complicated script that starts other scripts. It all runs for about 6 hours. But I've noticed that one or two child scripts are being killed from time to time.
All I get is a line in the log saying that the script was killed.
How do I get some info on who kills it? Is that possible?
The nature of killing a process does not provide an originator. A bit is set in a kernel structure associated with the process, indicating a signal is pending. If the signalling process does not indicate it is signalling, there's no way to find out.
Some processes do in fact announce their signalling. On Linux, the OOM (Out of Memory) killer might write a log entry to /var/log/messages. If the reason for the signalling to your script is an OOM condition, this might be the place to look.
See also Who "Killed" my process and why?
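While the sender usually can't be recovered, the parent can at least learn which signal ended the child: a process killed by signal N exits with status 128+N. A small demonstration (SIGKILL here is a stand-in for whatever the OOM killer would send):

```shell
# A process killed by signal N exits with status 128+N, so the parent
# script can log *which* signal ended a child, if not who sent it.
sleep 300 &
pid=$!
kill -9 "$pid"        # simulate an external SIGKILL, e.g. the OOM killer
wait "$pid"
status=$?
echo "child exited with status $status"   # 137 = 128 + 9 (SIGKILL)
```

Logging this status next to the "script is killed" line would at least distinguish a SIGKILL (137, consistent with the OOM killer) from a SIGTERM (143, some process asked it to stop).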

Pkill guarantees [closed]

Closed 5 years ago.
Can pkill guarantee the following situation never happens:
1. I use pkill -f "abc"
2. pkill finds the process by name and remembers its pid
3. the process ends
4. Linux starts a new process with the same pid
5. pkill kills the process started at step 4
PIDs do wrap and do eventually get reused. However, PIDs assigned to recently running processes are not reused soon afterwards, so in practice the problem you're worried about never happens.
It is theoretically possible as far as I can tell.
However, that would require all of the following:
- pkill was running slowly enough that a whole run of new process IDs could be allocated between finding the process and killing it
- the rest of the system was running fast enough to create all those processes and cycle back around to the recently freed pid
- as pointed out in the comments, you are root or the new process is running as the same user (otherwise the final kill would simply fail with a permission error)
It's possible there is some way of attacking pkill so it's that slow, but such an attack would almost certainly be a kernel bug.
I've never been in a situation where worrying about this problem was the right design decision.
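The numbered steps can be reproduced by hand with pgrep + kill, which is essentially what pkill does internally (the pattern and sleep duration here are illustrative). The lookup and the kill are separate syscalls, which is exactly the theoretical window being discussed:

```shell
# Step-by-step version of the race described above; in practice the
# window between lookup (pgrep) and kill is far too small for pid reuse.
sleep 287 &                       # the process we intend to kill
target=$!
found=$(pgrep -fx 'sleep 287')    # step 2: find the pid by command line
kill $found                       # step 5: kill whatever owns that pid *now*
wait "$target"
status=$?                         # 143 = 128 + 15: TERM landed on our target
echo "target exited with status $status"
```

For the race to bite, the pid found by pgrep would have to be freed and reassigned before the kill line runs, which is the scenario the answer above calls theoretical at best.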

Is it ok to use "kill" as a standard-way to end processes? [closed]

Closed 7 years ago.
I want to play a video using vlc in a loop until an operation is finished. My thoughts would be to do something like this:
vlc -f --loop loopvideo.mpg &
# [do some other stuff]
if [ "$finished" = true ]; then
    killall vlc
fi
Coming from Windows originally, I still feel that killing a process is something bad you only do when something has crashed. Is there a "cleaner" approach on *nix systems than the kill command, or is this just fine?
kill is fine: by default it sends SIGTERM, which asks the process to terminate. You can make this explicit with killall -s TERM vlc if you like. The process can catch the signal and close itself down gracefully.
To truly kill a process you would send KILL rather than TERM. SIGKILL cannot be caught or ignored: the kernel stops the process immediately, without giving it a chance to clean up. You may have seen this command in its shorter form, kill -9 <pid>, where -9 = -KILL.
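The graceful path can be seen directly: a child process installs a TERM handler and exits cleanly when signalled (a small sketch; a well-behaved program like vlc does something equivalent internally). No such handler is possible for KILL.

```shell
# A child that traps SIGTERM and shuts down cleanly; SIGKILL could not
# be intercepted this way.
sh -c 'trap "echo cleaning up; exit 0" TERM; sleep 30 & wait' &
pid=$!
sleep 1            # give the child time to install its trap
kill -TERM "$pid"  # what kill/killall send by default
wait "$pid"
rc=$?
echo "child exit code: $rc"   # 0: the handler ran and exited cleanly
```

Swap kill -TERM for kill -KILL and the handler never runs: the child is simply gone, and wait reports 137 instead of 0.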

Executing a few commands in bash once an executable closes? [closed]

Closed 8 years ago.
I have an application that will shut itself down at a specific time, is there a way for a bash script to execute some commands (like move log files, clean temp files) after the application has closed then restart it?
It's possible to reinvent this wheel, though not at all a good idea.
This can be as simple as:
while :; do
    ./run-your-process
    do-some-cleanup
done
But really -- don't. Use runit, upstart, daemontools, systemd, supervisord, or one of the many, many other tools which will automate this process for you.
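For the systemd route, a unit like the following covers the whole requirement: restart on exit, with a cleanup step run after each exit. This is a sketch; the unit, binary, and script names are placeholders, not anything from the question.

```ini
# myapp.service -- restart after every exit, running cleanup in between
[Unit]
Description=Self-terminating app, restarted with cleanup

[Service]
ExecStart=/usr/local/bin/myapp
ExecStopPost=/usr/local/bin/cleanup-logs.sh
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

ExecStopPost runs whether the service exited cleanly or crashed, which is exactly the "move log files, clean temp files" step before the restart.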
If you start the main process in your script in the background, you can turn on job control in bash with set -bm, trap SIGCHLD, and do your cleanup and restart in the signal handler:
#!/bin/bash
set -bm
childexit() {
    # ... cleanup goes here ...
    mainprocess &   # restart
}
trap 'childexit' SIGCHLD
mainprocess &
while true; do wait; done   # keep the script alive so the trap can fire
