There's a complicated script that starts other scripts, and the whole thing runs for about 6 hours. I've noticed that one or two of the child scripts get killed from time to time.
All I get is a line in the log saying that the script was killed.
How do I find out who killed it? Is that even possible?
Killing a process does not, by itself, record who did it. The kernel simply marks a signal as pending in a structure associated with the target process. If the sender does not announce itself, and the signal is SIGKILL (which cannot be caught, and is what a bare "Killed" message usually means), the dying process has no way to log who sent it.
Some senders do announce themselves. On Linux, the OOM (out-of-memory) killer writes an entry to the kernel log (visible in /var/log/messages, dmesg, or journalctl -k, depending on the distribution). If the reason your script is being killed is an OOM condition, that is the place to look.
See also Who "Killed" my process and why?
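For catchable signals (i.e. anything other than SIGKILL), the process can at least record who sent them itself. A minimal sketch, assuming Linux and Python 3 (the environment the related questions describe); the sender's PID and UID come from the kernel's siginfo structure:

    import signal

    # Block SIGTERM so it stays pending until we collect it with sigwaitinfo(),
    # which exposes the sender's PID and UID. SIGKILL cannot be observed this
    # way, so an outright "Killed" still leaves only the kernel log to consult.
    signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGTERM})
    info = signal.sigwaitinfo({signal.SIGTERM})   # blocks until SIGTERM arrives
    print(f"got SIGTERM from pid={info.si_pid}, uid={info.si_uid}")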
Related
Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 days ago.
Improve this question
On a typical recent Linux desktop (in my case: Debian 11 with X11 and Plasma), imagine that I start a process which turns into a background process by fork()ing and then calling setsid().
(In my particular case, all of that happens via something like subprocess.Popen(["sleep", "999999"], start_new_session=True) in Python, if I understand it correctly.)
I can see that my process survives a logout; that's fine. But when I shut down or reboot my machine, at some point it gets terminated, right? I assume that it will receive SIGTERM from somewhere, and SIGKILL a bit later if it is still alive. SIGHUP would not surprise me either. Is that correct?
But which part of the system exactly does that, and how? When does it happen (i.e. which parts of the system can I assume to be still operating at that time)? How much time does the process have before it is killed? And who exactly does the killing (systemd? the display manager? something else?)?
I tried handling SIGTERM in my Python code and doing some cleanup there. It seems to work partially, but it gets interrupted. The entire shutdown procedure takes just a few seconds, so nothing is actually stuck. I tried finding something in journalctl -xb -1, but the output is large and I was not able to find any useful traces.
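For reference, a minimal sketch of the pattern being described, assuming Python 3 on Linux and a process that has already been detached with start_new_session=True; at shutdown, systemd normally delivers SIGTERM first and escalates to SIGKILL after a stop timeout, so the handler needs to stay short:

    import signal
    import sys
    import time

    def on_sigterm(signum, frame):
        # Keep this quick: anything slow risks being cut off by the
        # follow-up SIGKILL that arrives after the stop timeout.
        with open("/tmp/shutdown-cleanup.log", "a") as f:   # hypothetical cleanup action
            f.write("got SIGTERM, cleaning up\n")
        sys.exit(0)

    signal.signal(signal.SIGTERM, on_sigterm)

    while True:                  # stand-in for the real long-running work
        time.sleep(60)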
Can pkill guarantee the following situation never happens:
1. I use pkill -f "abc"
2. pkill finds the process by name and remembers its pid
3. the process ends
4. Linux starts a new process with the same pid
5. pkill kills the process started at step 4
PIDs do wrap around and eventually get reused; however, a PID that belonged to a recently running process is not handed out again soon.
So, in practice, the problem you're worried about essentially never happens.
It is theoretically possible, as far as I can tell. However, that would require all of the following:
- pkill runs slowly enough that a whole batch of new process IDs can be allocated between finding the process and killing it;
- the rest of the system runs fast enough to create all those processes, wrap the PID counter, and hand the recently freed pid out again;
- as pointed out in the comments, either you are root or the new process happens to run as the same user (otherwise the kill would simply fail with a permission error).
It's possible there is some way of attacking pkill to make it that slow, but such an attack would almost certainly amount to a kernel bug.
I've never been in a situation where worrying about this problem was the right design decision.
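If the race still worries you, newer kernels can avoid it entirely. A minimal sketch, assuming Linux ≥ 5.3 and Python ≥ 3.9, using a pidfd so the signal can only ever reach the process the descriptor was opened for (the PID value 12345 is just a placeholder for whatever you found by name, e.g. via pgrep -f "abc"):

    import os
    import signal

    pid = 12345   # hypothetical PID found by name

    try:
        # A pidfd refers to this exact process, not to whatever later
        # happens to hold the same PID number.
        pidfd = os.pidfd_open(pid)
    except ProcessLookupError:
        pidfd = None              # the process was already gone

    if pidfd is not None:
        try:
            signal.pidfd_send_signal(pidfd, signal.SIGTERM)
        except ProcessLookupError:
            pass                  # it exited in between; no recycled PID gets hit
        finally:
            os.close(pidfd)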
I am just wondering how a fork bomb works. I know there are similar questions, but the answers aren't quite what I am looking for (or maybe I just haven't come across the right one).
How does it work in terms of processes?
Do children keep being produced and then replicating themselves? Is the only way to get out of it to reboot the system?
Are there any long lasting consequences on the system because of a fork bomb?
Thanks!
How does it work in terms of processes?
It creates so many processes that the system is not able to create any more.
Do children keep being produced and then replicating themselves?
Yes; the "fork" in the name refers to the fork() system call. Each process keeps creating copies of itself, and every copy does the same, so the number of processes grows exponentially.
Is the only way to get out of it to reboot the system?
No, it can be stopped by automated safeguards, e.g. limiting the number of processes per user (ulimit -u, i.e. RLIMIT_NPROC).
Are there any long lasting consequences on the system because of a fork bomb?
A fork bomb itself does not change any data, but while it runs it can cause timeouts, unreachable services, or out-of-memory conditions. A well-designed system should recover once it is stopped but, well, reality may differ.
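As an illustration of the "limiting the number of processes per user" safeguard, a minimal sketch, assuming Linux and Python 3. The limit of 200 is arbitrary, and RLIMIT_NPROC counts all processes owned by the user, so run it only in a throwaway environment:

    import errno
    import os
    import resource
    import signal
    import sys
    import time

    # Cap this user's total number of processes/threads (soft and hard limit).
    resource.setrlimit(resource.RLIMIT_NPROC, (200, 200))

    children = []
    try:
        while True:                    # the "bomb": fork as fast as possible
            pid = os.fork()
            if pid == 0:
                time.sleep(60)         # child lingers, counting against the limit
                os._exit(0)
            children.append(pid)
    except OSError as e:
        if e.errno != errno.EAGAIN:
            raise
        # fork() was refused: the per-user limit contained the bomb.
        print(f"fork refused after {len(children)} children", file=sys.stderr)
    finally:
        for pid in children:           # clean up the lingering children
            try:
                os.kill(pid, signal.SIGTERM)
            except ProcessLookupError:
                pass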
I understand that the directories under /proc/* are named after PIDs.
I have a custom process that is killed and respawned every few minutes.
What are the chances of a PID (for example, 1009) being reused by the custom process after the counter wraps around pid_max?
Is it likely enough to happen that my code should deal with it?
High enough that you should expect it to happen and be prepared to deal with it. The actual probability will of course depend on how often other processes are being created on your system. There is certainly no guarantee that it won't happen, though, so you must assume that it will.
"What are the odds" is a statistics question, and the answer depends on how many other processes there are, and how often they fork() and how often they exit(), so the exact answer is difficult to calculate. Anywhere between "almost impossible to happen" and "nearly guaranteed to happen every minute."
If the question is "could this happen in my lifetime and should I handle that in my code" then the answer is yes.
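One way to "deal with it" in code is to remember more than the bare PID. A minimal sketch, assuming Linux with /proc mounted; it records the process start time (field 22 of /proc/<pid>/stat), which a recycled PID will not share, and the PID 1009 is just the example from the question:

    from pathlib import Path

    def start_time(pid: int) -> int:
        """Process start time in clock ticks (field 22 of /proc/<pid>/stat)."""
        stat = Path(f"/proc/{pid}/stat").read_text()
        # The command name sits in parentheses and may contain spaces, so
        # split off everything up to the closing ')' before splitting fields.
        fields_after_comm = stat.rsplit(")", 1)[1].split()
        return int(fields_after_comm[19])   # field 22 overall

    pid = 1009                        # the example PID from the question
    try:
        recorded = start_time(pid)    # remember it when you first learn the PID
    except FileNotFoundError:
        recorded = None               # no such process right now

    # Later: only treat the PID as "still my process" if the start time matches.
    def is_same_process(pid: int, recorded_start) -> bool:
        try:
            return start_time(pid) == recorded_start
        except FileNotFoundError:
            return False              # the PID is gone (or was never there)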
I am using a Linux computer (Ubuntu) with n processors (15, as listed by /proc/cpuinfo). I have to run several applications and would like to run one on each processor. Is there a way to assign a processor to each application, or is that something Linux does automatically?
Thank you very much
What you are looking for is called affinity.
Linux should already handle this on its own, but there are ways of changing the affinity of a process: the sched_setaffinity(2) system call, and the command-line tool taskset(1).
taskset is used to set or retrieve the CPU affinity of a running process given its PID or to launch a new COMMAND with a given CPU affinity.
Using taskset you can launch a process that will only become eligible to run on the cores you specify.
I'm not entirely sure they're the best tool for the job, but you might also want to investigate cgroups; I am fairly sure the cpuset controller also allows pinning processes to specific CPUs.
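For completeness, a minimal sketch of both routes mentioned above, assuming Linux and Python 3; the application commands are placeholders:

    import os
    import subprocess

    print("CPUs this process may run on:", os.sched_getaffinity(0))   # 0 = this process

    # Pin the current process to CPU 0 only (wrapper around sched_setaffinity(2)).
    os.sched_setaffinity(0, {0})

    # Or launch one application per CPU, each pinned with taskset -c <cpu>.
    applications = ["./app_a", "./app_b", "./app_c"]    # hypothetical commands
    procs = [
        subprocess.Popen(["taskset", "-c", str(cpu), cmd])
        for cpu, cmd in enumerate(applications)
    ]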