Exit status of $? using python when segmentation fault occurred - python-3.x

I need to execute echo $? using python3 and capture the exit status. I need this especially for capturing the Segmentation fault (core dumped) status.
I tried:
>>> os.system('echo $?')
0
0
I got 0 0. Also, for a segfault:
>>> os.system('./a.out')
Segmentation fault (core dumped)
35584
After the above command, I again got:
>>> os.system('echo $?')
0
0
Also, why is 0 getting printed twice?
I went through the documentation of Python 3, which says:
os.system(command)
On Unix, the return value is the exit status of the process encoded in the format specified for wait(). Note that POSIX does not specify the meaning of the return value of the C system() function, so the return value of the Python function is system-dependent.
Does this say something about such behavior? Please help me clarify this.
Note: I already ran ulimit -c unlimited before all the above steps. The expected result should be non-zero, or 139 to be specific.
Edit: I am wondering if there is a limitation here!
Thanks!

No, you don't need to execute echo $?, and it wouldn't be useful. The exit status of the program is the return value of the function os.system. That's what the number 35584 is. The documentation of os.system tells you to read the documentation of os.wait, which explains:
a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero); the high bit of the low byte is set if a core file was produced.
However, note that depending on the shell, with os.system('./a.out'), you may be getting the exit status of a.out or the exit status of the shell itself. Normally there's no difference, because the exit status of the shell is the exit status of the last command it executes. But if the command dies from a signal, then there is a difference. The shell won't kill itself with the same signal, it will return a status that encodes the signal. In most shells, that's 128 + signal_number. For example, if the program dies of signal 11 (segfault on Linux) and leaves a core dump, then its status as returned by wait is 11. But if there's a shell in between, then the shell will exit normally with the exit code 128+11. That's what you're seeing: 35584 is (128 + 11) << 8.
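This encoding can be checked directly with the wait() status macros, which Python exposes in the os module; a quick sanity check of the numbers above:

```python
import os

status = 35584                    # the value os.system('./a.out') returned
assert status == (128 + 11) << 8  # i.e. the shell's exit code, shifted into the high byte

print(os.WIFEXITED(status))       # True  -> the *shell* exited normally
print(os.WEXITSTATUS(status))     # 139   -> its exit code, 128 + SIGSEGV
print(os.WIFSIGNALED(status))     # False -> the shell itself was not killed by a signal
```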
To avoid this complication, use subprocess.call or one of its variants (you can use subprocess.run if you don't need your code to run on Python <= 3.4). Note that subprocess.call returns the exit status directly as a plain int: a death by signal is reported as a negative signal number, not in the wait() encoding.
returncode = subprocess.call(['./a.out'], shell=False)
if returncode >= 0:
    print('The program exited normally with status {}.'.format(returncode))
else:
    print('The program was killed by signal {}.'.format(-returncode))
If you run os.system('echo $?'), this starts a new shell. You're printing the initial value of $? in that shell, before it has run any command, and the initial value of $? in a shell is 0.
You see 0 twice in the interactive environment because the first one is the one printed by the echo command and the second one is the value of the Python expression. Compare os.system('echo hello').
Note that with os.system, you can't access the output of the command, so if you print something with echo, you can't use it in the program. You'd have to use functions in the subprocess module for that, but you need this only if you need the output of ./a.out, not to get its exit status.
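For example, a minimal subprocess.run sketch (assuming Python >= 3.5) that captures both the output and the exit status in one call; the sh command here is just a stand-in for any program:

```python
import subprocess

# run a command, capturing stdout; result.returncode replaces 'echo $?'
result = subprocess.run(['sh', '-c', 'echo hello; exit 3'],
                        stdout=subprocess.PIPE)
print(result.stdout.decode().strip())  # hello
print(result.returncode)               # 3
```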

When running:
>>> os.system('echo $?')
0
0
If your previous command was successful, the first 0 is printed by echo $?, and the second is the return code of the os.system('echo $?') call itself, which has just succeeded, so another 0 is printed.
The return code of the script/command that you execute is returned directly to your Python program by the os.system function, so you do not need to use echo $?.
Examples:
$ more return_code*
::::::::::::::
return_code1.py
::::::::::::::
import os
print(os.system('sleep 1'))
# will print 0 after 1 second
::::::::::::::
return_code2.py
::::::::::::::
import os
print(os.system('ls abcdef'))
# will print a nonzero return code if the file abcdef is not present in your working directory
Executions:
$ python return_code1.py
0
and
$ python return_code2.py
ls: cannot access 'abcdef': No such file or directory
512
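The 512 above is the wait()-encoded form of exit code 2 (2 << 8); os.WEXITSTATUS decodes such values. A self-contained check, using the shell's exit as a stand-in for any failing command:

```python
import os

status = os.system('exit 3')   # the shell exits with code 3
print(status)                  # 768 on Unix, i.e. 3 << 8
print(os.WEXITSTATUS(status))  # 3
```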

I have written the following code for the above question and it worked as expected. I used the subprocess.Popen() method to achieve my requirement, and sample.returncode to get the exit status of the shell.
import subprocess
import sys

def run_cmd():
    ret = 0
    sample_cmd = "./a.out"
    sample = subprocess.Popen(sample_cmd, shell=True, stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out_stdout, out_stderr = sample.communicate()
    if sample.returncode != 0:
        print("OUTPUT: %s\nERROR: %s\n" % (out_stdout, out_stderr))
        print("Command: %s \nStatus: FAIL " % (sample_cmd))
        sys.stdout.flush()
        if sample.returncode == 139:
            print('Segmentation fault (core dumped) occurred... with status:', sample.returncode)
            ret = sample.returncode
        else:
            ret = 1
    else:
        print("OUTPUT: %s\n" % (out_stdout))
        print("Command: %s \nStatus: PASS " % (sample_cmd))
        ret = 0
    return ret
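Since ./a.out isn't available here, a self-contained way to reproduce the behavior is a child that sends itself SIGSEGV (a hypothetical stand-in for the crashing program); note that without a shell in between, subprocess reports the signal as a negative returncode rather than 139:

```python
import signal
import subprocess
import sys

# a child that dies of SIGSEGV -- a stand-in for ./a.out
crasher = [sys.executable, '-c',
           'import os, signal; os.kill(os.getpid(), signal.SIGSEGV)']

proc = subprocess.Popen(crasher)   # shell=False: no shell in between
proc.wait()
print(proc.returncode)             # -11: negative signal number, not 139

# 139 only appears when a shell sits in between and reports 128 + 11
print(128 + signal.SIGSEGV)        # 139
```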

Related

How to get exit code of a script launched via system() in C++?

I would like to run a script in a C++ application and capture its exit code. So I did this in the app:
std::string script_path = "/home/john/script.sh";
int i = system(script_path.c_str());
std::cout << "ERROR: " << i << std::endl;
I wrote a simple script to see if it would catch the error number:
#!/bin/sh
exit 5
but the program shows:
ERROR: 1280
and I don't know why, since I'm returning 5 in the script. How could I fix it? I use Linux.
From man 3 system:
RETURN VALUE
If command is NULL, then a nonzero value if a shell is available, or 0 if no shell is available.
If a child process could not be created, or its status could not be retrieved, the return value is -1 and errno is set to indicate the error.
If a shell could not be executed in the child process, then the return value is as though the child shell terminated by calling _exit(2) with
the status 127.
If all system calls succeed, then the return value is the termination status of the child shell used to execute command. (The termination status of a shell is the termination status of the last command it executes.)
In the last two cases, the return value is a "wait status" that can be examined using the macros described in waitpid(2). (i.e., WIFEXITED(),
WEXITSTATUS(), and so on).
You could use:
std::cout << "ERROR: " << ( (i != -1 && i != 127 && WIFEXITED(i)) ? WEXITSTATUS(i) : -1) << std::endl;
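The same waitpid(2) macros are exposed by Python's os module, so the decoding of 1280 can be sanity-checked there as well:

```python
import os

status = 1280                  # what system("...script.sh") reported
print(os.WIFEXITED(status))    # True  -> script exited normally
print(os.WEXITSTATUS(status))  # 5     -> 1280 == 5 << 8
```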

How to get the multiple child process exit status whether it is failed or success in linux shell scripting

How can I return or get the exit status of each child process individually?
here is the child process
process()
{
rem=$(( $PID % 2 ))
if [ $rem -eq 0 ]
then
echo "Number is even $PID"
exit 0
else
echo "Number is odd $PID"
exit 1
fi
echo "fred $return"
exit $rem
}
for i in {1..100}; do
process $i &
PID="$!"
echo "$PID:$file"
PID_LIST+="$PID "
done
for process in ${PID_LIST[@]};do
echo "current_PID=$process"
wait $process
exit_status=$?
echo "$process => $exit_status"
done
echo " The END"
What I am expecting is that every even number's exit status is 0 and every odd number's exit status is 1.
But the above script gives the output below, where a few even numbers have exit status 1 and a few odd numbers have exit status 0.
Can someone correct me?
16687:
16688:
/home/nzv1dtr/sample_file.sh: line 3: % 2 : syntax error: operand expected (error token is "% 2 ")
16689:
Number is odd 16687
16690:
Number is even 16688
16691:
Number is odd 16689
current_PID=16687
16687 => 1
current_PID=16688
16688 => 1
current_PID=16689
Number is even 16690
16689 => 0
current_PID=16690
16690 => 1
current_PID=16691
16691 => 0
There is a bit more going on here. Essentially you're on the right track; wait can collect and report the return status of a child, like so:
for i in {0..20}; do
if [[ $((i % 2)) -eq 1 ]]; then
/bin/true &
else
/bin/false &
fi
a[${i}]=$!
done
for i in ${a[@]}; do
wait ${i}; echo "PID(${i}) returned: $?"
done
Why do you not see the same?
Well, for starters, process is not (really) a process but a function (hence, as mentioned in a comment, exit is not the correct way to terminate it: called in a script, it would terminate the whole script, not just the function). It does become a process, but how it becomes one matters. The shell spawns a new subshell and runs your function in it (which is why the exit is not deadly to the outer script). The state your shell was in at the time it spawned the subshell is what matters here.
You're also comparing against ${PID}, which is actually the previous subshell's PID, and on the first call it is unset, which yields the syntax error. You probably wanted $$, except that, per the paragraph above, all the functions (subshells) would then see the same value (that of the parent process).
Equipped with that information, a minimal change to your script is to use $$ in the process function, export the function so that we can use it in the new shell instance we fork, and track the PID of that new shell:
process()
{
rem=$(( $$ % 2 ))
...
}
export -f process
for i in {1..100}; do
bash -c "process" $i &
...
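Since the thread is about collecting per-child exit statuses, the same spawn-then-wait bookkeeping can be sketched in Python with os.fork and os.waitpid (a hypothetical illustration, not part of the original script):

```python
import os

# spawn children and remember each PID, like PID_LIST+="$PID " in the shell
pids = {}
for i in range(6):
    pid = os.fork()
    if pid == 0:
        os._exit(i % 2)             # child: exit 0 if i is even, 1 if odd

    pids[pid] = i

# collect each child's status individually, like: wait "$pid"; echo $?
for pid, i in sorted(pids.items()):
    _, status = os.waitpid(pid, 0)
    print(pid, '=>', os.WEXITSTATUS(status))
```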

Input loop with FIFO problems

I'm having some trouble using FIFOs for stdin.
I have a script like this:
#!/usr/bin/env ruby
while true do
data = gets
puts "Got: #{data}"
end
Then I run it like:
$ ./script < input.fifo &
$ echo testdata > input.fifo
It will print something like:
Got: testdata
Got:
Got:
Got:
Got:
Got:
etc.
My suspicion is that something is wrong with the FIFO: something is not getting cleared out after it is sent to the script.
I tried the same thing with a C program with a similar input loop using scanf("%d", ...), and it acted like this:
$ echo 1 > input.fifo
Got: 1
Got: 1
Got: 1
Got: 1
etc.
So it would seem that the last thing in the FIFO gets stuck there. In the ruby example, it is a null line, because gets captures the \n. In the second it is the 1 itself.
Can anyone offer any insight?
Thanks!
The situation is simple: after echo 1 > input.fifo, the file input.fifo was opened, "1" was written to it, and it was closed.
The problem is that it was closed. When a FIFO is closed on the writing side, that means "end of file" for the reading side. So if you check the return code of scanf in your C example, it will equal the EOF constant.
And after "end of file", once you have read all the data from the FIFO, any further read will always return immediately and report "end of file" again.
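The effect is easy to reproduce with a plain pipe, which behaves the same way once the writing end is closed; a small Python sketch:

```python
import os

r, w = os.pipe()                # a pipe behaves like the FIFO here
os.write(w, b'1\n')
os.close(w)                     # writer closes -> reader hits EOF after the data

reader = os.fdopen(r)
print(repr(reader.readline()))  # '1\n'  -- the data
print(repr(reader.readline()))  # ''     -- EOF: returns immediately
print(repr(reader.readline()))  # ''     -- and keeps doing so forever
```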

Implementing background processing

UPDATE:
There was an obvious debugging step I forgot. What happens if I try a command like ps & in the regular old bash shell? The answer is that I see the same behavior. For example:
[ahoffer@uw1-320-21 ~/program1]$ ps &
[1] 30166
[ahoffer@uw1-320-21 ~/program1]$ PID TTY TIME CMD
26423 pts/0 00:00:00 bash
30166 pts/0 00:00:00 ps
<no prompt!>
If I then press Enter, the command shell reports that the job is done and the console displays the prompt:
[ahoffer@uw1-320-21 ~/program1]$ ps&
[1] 30166
[ahoffer@uw1-320-21 ~/program1]$ PID TTY TIME CMD
26423 pts/0 00:00:00 bash
30166 pts/0 00:00:00 ps
[1] Done ps
[ahoffer@uw1-320-21 ~/program1]$
PS: I am using PuTTY to access the Linux machine via SSH on port 22.
ORIGINAL QUESTION:
I am working on a homework assignment. The task is to implement part of a command shell interpreter on Linux using functions like fork(), exec(). I have a strange bug that occurs when my code executes a command as a background process.
For example, in the code below, the command ls correctly executes and prints its output to the console. When the command is finished, the event loop in the calling code correctly prints the prompt, "% ", to the console.
However, when ls & is executed, ls executes correctly and its output is printed to the console. However, the prompt, " %", is never printed!
The code is simple. Here is what the pseudo code looks like:
int child_pid;
if ( (child_pid=fork()) == 0 ) {
//child process
...execute the command...
}
else {
//Parent process
if( delim == ';' )
waitpid(child_pid);
}
//end of function.
The parent process blocks if the delimiter is a semicolon. Otherwise the function ends and the code re-enters the event loop. However, if the parent sleeps while the background command executes, the prompt appears correctly:
...
//Parent process
if( delim == ';' ) {
waitpid(child_pid)
}
else if( delim == '&' ) {
sleep(1);
//The prompt, " %", is correctly printed to the
// console when the parent wakes up.
}
No one in class knows why this happens. The OS is RedHat Enterprise 5 and the compiler is g++.
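What a shell does for a background job is skip the blocking wait and poll for completion while it keeps handling input; a minimal Python sketch of that polling approach using os.WNOHANG (an illustration, not the assignment's required code):

```python
import os
import time

pid = os.fork()
if pid == 0:
    os._exit(7)                       # child: stand-in for the backgrounded command

# parent: a shell running "cmd &" must not block in waitpid; it polls
while True:
    done, status = os.waitpid(pid, os.WNOHANG)   # returns (0, 0) while child runs
    if done == pid:
        break
    time.sleep(0.01)                  # meanwhile it could print prompts, read input

print('[1] Done, status', os.WEXITSTATUS(status))
```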

How to write data to existing process's STDIN from external process?

I'm seeking ways to write data to an existing process's STDIN from external processes, and found the similar question How do you stream data into the STDIN of a program from different local/remote processes in Python? on Stack Overflow.
In that thread, @Michael says that we can get the file descriptors of an existing process at a path like the one below, and are permitted to write data to them on Linux.
/proc/$PID/fd/
So, I've created a simple script listed below to test writing data to the script's STDIN (and TTY) from external process.
#!/usr/bin/env python
import os, sys
def get_ttyname():
for f in sys.stdin, sys.stdout, sys.stderr:
if f.isatty():
return os.ttyname(f.fileno())
return None
if __name__ == "__main__":
print("Try commands below")
print("$ echo 'foobar' > {0}".format(get_ttyname()))
print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
print("read :: [" + sys.stdin.readline() + "]")
This test script shows the paths of its STDIN and TTY and then waits for someone to write to its STDIN.
I launched this script and got messages below.
Try commands below
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/3308/fd/0
So, I executed the commands echo 'foobar' > /dev/pts/6 and echo 'foobar' > /proc/3308/fd/0 from another terminal. After executing both commands, the message foobar was displayed twice on the terminal the test script is running on, but that's all. The line print("read :: [" + sys.stdin.readline() + "]") was not executed.
Are there any ways to write data from external processes to an existing process's STDIN (or other file descriptors), i.e. to invoke execution of the line print("read :: [" + sys.stdin.readline() + "]") from other processes?
Your code will not work.
/proc/pid/fd/0 is a link to the /dev/pts/6 file.
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/pid/fd/0
So both commands write to the terminal. The input goes to the terminal, not to the process.
It will work if stdin initially is a pipe.
For example, test.py is :
#!/usr/bin/python
import os, sys
if __name__ == "__main__":
print("Try commands below")
print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
while True:
print("read :: [" + sys.stdin.readline() + "]")
pass
Run this as:
$ (while [ 1 ]; do sleep 1; done) | python test.py
Now from another terminal write something to /proc/pid/fd/0 and it will come to test.py
I want to leave here an example I found useful. It's a slight modification of the while true trick above, which failed intermittently on my machine.
# pipe cat to your long running process
( cat ) | ./your_server &
server_pid=$!
# send a line to your cat process; it will close cat and, in my hypothetical case, the server too
echo "quit" > "/proc/$server_pid/fd/0"
It was helpful to me because for particular reasons I couldn't use mkfifo, which is perfect for this scenario.
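An alternative that avoids /proc entirely is to hand the child a pipe for its stdin up front; a sketch with subprocess, where the echoing "server" is a hypothetical stand-in for a real long-running process:

```python
import subprocess
import sys

# a stand-in long-running "server" that reads its stdin until EOF
server_code = 'import sys\nfor line in sys.stdin: print("got", line.strip())'
server = subprocess.Popen([sys.executable, '-c', server_code],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE)

out, _ = server.communicate(b'quit\n')   # write to its stdin, then close it
print(out.decode().strip())              # got quit
print(server.returncode)                 # 0
```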
