I am trying to understand inter-process communication in C using pipes.
In the following code snippet I fork my program.
That's why I think both processes should run without waiting for each other. But when I run it, I can type something on my keyboard, and only AFTER THIS does the parent process print my entered text in uppercase letters, followed by the message "Hello, I am the parent process process. I've waited."
It's exactly this order:
Hello! I am the child process. Why does my parent process wait for me?
Test input
TEST INPUT
Hello, I am the parent process process. I've waited.
Process finished with exit code 0
But I expected both to run in parallel, so the parent process should exit before I've entered anything.
I also can't understand why I get the text converted to uppercase letters before I get the message "Hello, I am the parent process process. I've waited." In my code it's the reverse order: first I print the message, and after that I print the "test input" in uppercase letters.
This is my code:
https://github.com/marvpaul/CPipesTest/blob/master/main.c
The read() call on the pipe will wait (block) until the corresponding write() to that pipe happens, so the two processes serialize at that point instead of running independently.
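A minimal Python sketch of the same blocking behavior (not your C program, just the pipe semantics across a fork):

import os, time

r, w = os.pipe()                  # r is the read end, w is the write end
pid = os.fork()
if pid == 0:                      # child
    os.close(r)
    time.sleep(2)                 # the parent's read() blocks for these 2 seconds
    os.write(w, b"hello\n")
    os._exit(0)
else:                             # parent
    os.close(w)
    print(os.read(r, 1024))       # returns only once the child has written
    os.waitpid(pid, 0)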
import queue, threading, time

q = queue.Queue()
for i in [3, 2, 1]:
    def f():
        time.sleep(i)
        print(i)
        q.put(i)
    threading.Thread(target=f).start()
print(q.get())
For this piece of code, it returns 1. The reason is that the queue is FIFO, and 1 is put in first because that thread slept the least time.
Extended question:
If I go on to run q.get() twice more, it still outputs the same value 1 rather than 2 and 3. Can anyone tell me why that is? Does it have anything to do with threading?
Another extended question:
When the code finishes running completely, but there are still threads that haven't finished, will they get shut down immediately as the whole program finishes?
q.get()
#this gives me 1, but I suppose it should give me 2
q.get()
#this gives me 1, but I suppose it should give me 3
Update:
This is Python 3 code.
Assuming the language is Python 3:
The second and third calls to q.get() return 1 because each of the three threads puts a 1 into the queue. There is never a 2 or a 3 in the queue.
I don't fully understand what to expect in this case (I'm not a Python expert), but the function f does not appear to capture the value of the loop variable i. The i inside f is the same variable as the i in the loop, and the loop has already left i == 1 before any of the three threads wakes up from sleeping. So, in all three threads, i == 1 by the time q.put(i) is called.
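If each thread was meant to see its own value of i, the usual fix is to bind it at definition time, for example as a default argument. A minimal sketch (not the original code):

import queue, threading, time

q = queue.Queue()
for i in [3, 2, 1]:
    def f(i=i):                      # the default argument captures i's value now
        time.sleep(i)
        q.put(i)
    threading.Thread(target=f).start()

print(q.get())                       # 1 (shortest sleep finishes first)
print(q.get())                       # 2
print(q.get())                       # 3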
When the code finishes running completely, but there are still threads that haven't finished, will they get shut down immediately?
No. The process won't exit until all of its threads (including the main thread) have terminated. If you want to create a thread that will be automatically, forcibly, abruptly terminated when all of the "normal" threads are finished, then you can make that thread a daemon thread.
See https://docs.python.org/3/library/threading.html, and search for "daemon".
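A minimal sketch of the difference (a hypothetical worker, not the question's code):

import threading, time

def background():
    while True:                      # never finishes on its own
        time.sleep(1)

t = threading.Thread(target=background)
t.daemon = True                      # without this line, the process would never exit
t.start()
print("main thread done")            # process exits here; the daemon is killed abruptly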
I have a monitoring program that I'd like to have check on various processes in the system and know when they terminate. I'd also like to know their exit code, in case they crash. However, my program is not a parent of the processes to be monitored.
In Windows, this is easy: OpenProcess for SYNCHRONIZE rights, WaitForMultipleObjectsEx to wait for any of them to terminate, then GetExitCodeProcess to find out why it terminated (with NTSTATUS error codes if the reason was an exception).
But in Linux, the equivalent of these, waitpid, only works on your own child processes, not unrelated processes. We tried ptrace, but this caused its own issues, such as greatly slowing down signal processing.
This program is intended to run as root.
Is there a way to implement this, other than just polling /proc/12345 until it disappears?
I can't think of an easy way to collect the termination statuses, but for simple death events you can, as root, inject an open() call for a file whose other end you hold, and then select() on your end of the file descriptor.
When the other process dies, its end of the file is closed, and that shows up as an event on the file descriptor you are holding.
A (very ugly) example:
mkfifo /tmp/fifo #A channel to communicate death events
sleep 1000 & #Simulate your victim process
echo $! #Make note of the pid you want; it is referred to as $thePid below
#In another terminal
sudo gdb -ex "attach $thePid" -ex ' call open("/tmp/fifo",0,0)' -ex 'quit'
exec 3>/tmp/fifo
ruby -e 'fd = IO.select([IO.for_fd(3)]); puts "died" '
#In yet another terminal
kill $thePid #the previous terminal will print `died` immediately
#even though it's not the parent of $thePid
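For the watcher, a rough Python equivalent of the ruby one-liner, under the same assumptions (fd 3 is our write end of /tmp/fifo, set up with exec 3>/tmp/fifo as above):

import select

# Blocks until the read end injected into the victim is closed, i.e. the
# victim died; the fifo then reports a pending condition on our fd 3.
select.select([3], [], [])
print("died")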
I have a question about Linux processes that I cannot figure out.
This problem comes from the book "Advanced Bash-Scripting Guide" (I have simplified the code):
#!/bin/bash
# spawn.sh
sleep 1
sh $0   # spawn a child shell here; each child recurses, forever
exit 0  # never reached
When I run ./spawn.sh in the shell, the process just sits there, and after a while there are a lot of "sh spawn.sh" processes.
I think the relationship among the processes now looks like this:
./spawn.sh process (pid: 10000) ---> child process (pid: 10001) ---> child process (pid: 10002) ---> child process (pid: 10003) ---> and so on
When I press Control-C in the shell, the parent process terminates, and all its child processes terminate too. This is what I cannot understand: why do all the child processes perish? I think the relationship among processes should instead become:
init (pid: 1) ---> child process (pid: 10001) ---> child process (pid: 10002) ---> child process (pid: 10003) ---> and so on
But the fact is, it is as if the parent process sends a signal to its child process when it terminates, causing all the child processes to perish one by one. Is this normal, or a feature of shell scripts?
Thank you very much in advance.
When I press Control-C in the shell, the parent process terminates, and all its child processes terminate too. This is what I cannot understand: why do all the child processes perish?
When you hit Ctrl-C, SIGINT is sent not only to the parent process but to the entire foreground process group. This means every process in the group gets a SIGINT, so they all die. To see this in action, add a line like
trap "echo process $$ exiting" INT
to the top of the script; each process will then report its own PID as it dies.
A quick way to see that children don't react to their parents' demise is to have a script spawn a single child and then kill the parent.
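For example, a minimal sketch of that experiment (in Python rather than shell, with a made-up sleep, so just a sketch):

import os, time, signal

pid = os.fork()
if pid == 0:                                  # child
    time.sleep(2)                             # by now the parent is dead
    print("child alive, new parent: %d" % os.getppid())  # typically 1 (init)
    os._exit(0)
else:                                         # parent kills itself outright;
    os.kill(os.getpid(), signal.SIGKILL)      # no signal is sent to the child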
Control-C is killing the most recent child, not the original shell. When sh $0 exits, the next line of code causes the current shell to exit as well, which causes a cascade of exits all the way back to the original parent.
I think I know the answer now. It is because Control-C sends SIGINT not only to the shell script's process but also to its subprocesses. Since these processes do not trap SIGINT, they terminate.
The command kill -2 is not the same as Ctrl-C (kill -2 signals a single process, while Ctrl-C signals the whole foreground process group); for details, see: http://ajhaupt.blogspot.com/2011/01/whats-difference-between-ctrl-c-and.html
And thank you all for helping me :)
The behavior I describe is currently observed on OS X.
Bash script parent-script.sh contains a command eval $SCRIPT where SCRIPT is a string such as ./another-script.sh.
If another-script.sh is a long-running script, and I send a signal to parent-script while it runs, what appears to happen (I checked using pstree) is that the subshell executing another-script is not terminated; it becomes a child of launchd.
Its stdout is still bound to the terminal that launched parent-script.
How can I modify this behavior? Also, what actually causes this behavior? (So that I can learn when to expect it.)
The Linux tag is here because I would like to know whether Linux behaves differently.
I am also beginning to realize I am perhaps scratching the surface of a very deep topic. So links to good reading material are welcome!
This is normal and expected.
You sent a signal (presumably SIGTERM) to parent-script and it died, but no signal was sent to another-script. It keeps on running.
This is different from what happens when the parent-script job is running interactively on a terminal and you type ^C (or ^Z). In that case, a signal (SIGINT for ^C, SIGTSTP for ^Z) is automatically sent to the whole foreground process group. Since another-script is in the same process group as parent-script (by default), they both get the signal and they both die.
If you want another-script to die automatically when its parent dies in any other context than when it's a job running in a terminal with job control, you have a few options.
parent-script can trap the SIGTERM signal. In the signal handler, it kills its child, and then exits itself. This, of course, is not reliable: if parent-script crashes or is killed with an untrappable signal, the cleanup won't happen. But it's usually considered good enough.
Often, another-script will naturally exit when its parent exits and you don't have to do anything special. This is often true if the two processes are exchanging data through pipes: the child will get an EOF (if reading) or a SIGPIPE or EPIPE (if writing) and therefore notice that its parent is gone and exit by itself.
Otherwise, another-script can take explicit steps to check periodically whether its parent is gone, and exit automatically if it finds it has become a child of init/launchd. Again, you can use a pipe or another IPC mechanism for this, but the simplest is probably for it to check its own parent process ID; a sketch of this option follows.
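A minimal sketch of that last option, as a Python stand-in for the long-running script (the message and interval are made up):

import os, sys, time

start_ppid = os.getppid()            # remember who our parent is at startup
while True:
    if os.getppid() != start_ppid:   # parent died; we were reparented to init/launchd
        sys.exit("parent is gone, exiting")
    time.sleep(1)                    # ...do a slice of the real work here...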
I tried to emulate what you observed (on Mac OS X 10.8.4), using the pair of scripts:
parent-script.sh
SCRIPT=./another-script.sh
trap "echo Exiting; exit 1" 0 1 2 3 13 15
eval $SCRIPT
trap 0
echo "Normal exit"
another-script.sh
for i in $(seq 1 100)
do echo "Sequence $i"; sleep 2; done
When I ran it, I got, for example:
$ ./parent-script.sh
Sequence 1
Sequence 2
Sequence 3
Sequence 4
^CExiting
Exiting
$
When I modified another-script.sh like this:
trap '' 1 2 3 13 15
for i in $(seq 1 100)
do echo "Sequence $i"; sleep 2; done
I got the (aberrant?) behaviour:
$ ./parent-script.sh
Sequence 1
Sequence 2
Sequence 3
Sequence 4
^CSequence 5
Sequence 6
^CSequence 7
^CSequence 8
Sequence 9
^CSequence 10
^CSequence 11
^\Sequence 12
Sequence 13
Sequence 14
Sequence 15
Sequence 16
Sequence 17
Sequence 18
Sequence 19
Sequence 20
./parent-script.sh: line 3: 71135 Abort trap: 6 ./another-script.sh
Exiting
Exiting
$
I had to create another terminal window and send kill -6 71135 (the child's PID) to stop the run, which I think is completely wrong. I sent interrupts to the parent process; it should exit immediately (via the trap) when I do that, leaving the child process running, since the child has insulated itself from interrupts. I've been exasperated by this behaviour before. The eval is not required to get the effect; it is sufficient to execute the script without it.
However, ksh also behaved the same way. So, either Apple has been careful to introduce the same bug into both shells or that's the way someone (POSIX?) dictates that shells should behave.
Python 2.7.3 on Solaris 10
Questions
When my subprocess crashes with a segmentation fault (core dump), or a user kills it externally from the shell with SIGTERM or SIGKILL, my main program's signal handler handles a SIGTERM (-15) and my parent program exits. Is this real, or is it a bad Python build?
Background and Code
I have a Python script that first spawns a worker management thread. The worker management thread then spawns one or more worker threads. I have other stuff going on in my main thread that I cannot block. My management and worker threads are rock-solid. My services run for years without restarts, but then there is this subprocess.Popen scenario:
In the run method of the worker thread, I am using:
class workerThread(threading.Thread):
    def __init__(self):
        super(workerThread, self).__init__()
        ...
    def run(self):
        ...
        atempfile = tempfile.NamedTemporaryFile(delete=False)
        myprocess = subprocess.Popen(['third-party-cmd', 'with', 'arguments'],
                                     shell=False, stdin=subprocess.PIPE,
                                     stdout=atempfile, stderr=subprocess.STDOUT,
                                     close_fds=True)
...
I need to use myprocess.poll() to check for process termination, because I need to scan atempfile until I find relevant information (the file may be > 1 GiB), and I need to terminate the process on user request or when it has been running too long. Once I find what I am looking for, I stop checking the stdout temp file. I clean it up after the external process is dead and before the worker thread terminates. I need the stdin PIPE in case I have to inject a response to something interactive in the child's stdin stream.
In my main program, I install SIGINT and SIGTERM handlers to perform cleanup if my main Python program is terminated with SIGTERM, or with SIGINT (Ctrl-C) when running from the shell.
Does anyone have a solid Python 2.x recipe for child signal handling in threads (e.g. sigprocmask via ctypes)?
Any help would be very much appreciated. I am just looking for an 'official' recipe or the BEST hack, if one even exists.
Notes
I am using a restricted build of Python; I must use 2.7.3. third-party-cmd is a program I do not have the source for, so modifying it is not possible.
There are many things in your description that look strange. First, you have a couple of different threads and processes. Who is crashing, who is receiving SIGTERM, who is receiving SIGKILL, and due to which operations?
Second: why does your parent receive SIGTERM? It can't be sent implicitly; someone must be sending a kill to your parent process, either directly or indirectly (for example, by signalling the whole process group).
Third: how is your program terminating if you are handling SIGTERM? By default, a program terminates on SIGTERM only when the signal is not handled; if it is handled, the program is not terminated. What is really happening?
Suggestions:
$ cat crsh.c
#include <stdio.h>
int main(void)
{
int *f = 0x0;
puts("Crashing");
*f = 0;
puts("Crashed");
return 0;
}
$ cat a.py
import subprocess, sys
print('begin')
p = subprocess.Popen('./crsh')
a = raw_input()
print(a)
p.wait()
print('end')
$ python a.py
begin
Crashing
abcd
abcd
end
This works. No signal is delivered to the parent. Did you isolate the problem in your program?
If the problem is a signal sent to multiple processes: can you use setpgid to set up a separate process group for the child?
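For instance, a sketch against the Popen call from the question (preexec_fn runs in the child after fork and before exec, so the child starts in its own group):

import os, subprocess

# The child gets its own process group, so group-wide signals aimed at the
# parent's group (e.g. Ctrl-C sending SIGINT to the foreground group) no
# longer reach it.
myprocess = subprocess.Popen(['third-party-cmd', 'with', 'arguments'],
                             shell=False, stdin=subprocess.PIPE,
                             close_fds=True,
                             preexec_fn=os.setpgrp)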
Is there any reason for creating the temporary file? These are 1 GiB files being created in your temporary directory. Why not pipe stdout?
If you're really sure you need to handle signals in your parent program (why not try/except KeyboardInterrupt, for example?): could signal()'s unspecified behavior in multithreaded programs be causing those problems (for example, a signal dispatched to a thread that does not handle signals)?
NOTES
The effects of signal() in a multithreaded process are unspecified.
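If signal handling in the parent is unavoidable, one common recipe (my suggestion, not something from the question) is to install handlers only in the main thread and keep that thread responsive by joining workers with a timeout:

import signal, threading, time

def handler(signum, frame):
    print("main thread got signal %d" % signum)

signal.signal(signal.SIGTERM, handler)   # CPython runs handlers in the main thread

def work():
    time.sleep(30)                       # stand-in for the worker's real job

t = threading.Thread(target=work)
t.start()
while t.is_alive():
    t.join(0.5)   # a short timeout keeps the main thread free to run handlers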
Anyway, try to explain more precisely what the threads and processes of your program are, what they do, how the signal handlers were set up and why, who is sending signals, and who is receiving them.