Linux commands - abort operation with timeout

I'm writing a script and at some point I call "command1", which does not stop until CTRL+C is pressed.
Is there a way to specify a timeout for a command? Something like:
command1 arguments -timeout 10
How do I express CTRL+C in textual form?
Thx!

You can use the timeout command from GNU coreutils (you may need to install it first, but it comes in most, if not all, Linux distributions):
timeout [OPTION] DURATION COMMAND [ARG]...
For instance:
timeout 5 ./test.sh
will terminate the script after 5 seconds of execution. If you want to send a KILL signal instead of the default TERM, use the -s KILL option (the separate -k DURATION option sends a follow-up KILL if the command is still running DURATION after the first signal).
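For example (assuming GNU coreutils timeout, with the same placeholder test.sh as above):
timeout -s KILL 5 ./test.sh    # send SIGKILL instead of the default SIGTERM after 5 seconds
timeout -k 2 5 ./test.sh       # send SIGTERM after 5 seconds, then SIGKILL 2 seconds later if it is still running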
Here you have the full description of the timeout command.

I just tried
jekyll -server & sleep 10;pkill jekyll
That could work for your situation.
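Adapted to the command1 from the question, the same idea looks like this (a sketch; the 10-second window is just an example):
command1 arguments &    # run command1 in the background
CMD_PID=$!              # remember its PID
sleep 10                # give it 10 seconds
kill "$CMD_PID"         # then send it SIGTERM (use kill -9 for SIGKILL)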

In your script you can add a pause with sleep: sleep 10 would wait 10 seconds. As for exiting the program without CTRL+C, look into the exit command. exit 0 means OK; there are other exit codes, but I don't know what they all mean off the top of my head.
exit 1
exit 2..... so on and so forth
Update
@Celada
No need to bash me. You could have just said "maybe you didn't understand the question correctly." Stack Overflow is here to help people learn, not to tear them down; they invented Reddit for that. As for the exit codes, you can force the program to exit by calling exit with a code. The table below is taken directly from linux.die.net.
Exit Code  Meaning                            Example                  Comments
1          Catchall for general errors        let "var1 = 1/0"         Miscellaneous errors, such as "divide by zero"
2          Misuse of shell builtins                                    According to Bash documentation; seldom seen, usually defaults to exit code 1
126        Command invoked cannot execute                              Permission problem or command is not an executable
127        "command not found"                                         Possible problem with $PATH or a typo
128        Invalid argument to exit           exit 3.14159             exit takes only integer args in the range 0 - 255 (see footnote)
128+n      Fatal error signal "n"             kill -9 $PPID of script  $? returns 137 (128 + 9)
130        Script terminated by Control-C                              Control-C is fatal error signal 2 (130 = 128 + 2, see above)
255*       Exit status out of range           exit -1                  exit takes only integer args in the range 0 - 255
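To see the 128+n rule from the table in action, here is a minimal sketch you can paste into a shell:
sleep 30 &    # start a throwaway background job
kill -9 $!    # kill it with SIGKILL (signal 9)
wait $!       # collect its exit status
echo $?       # prints 137 (128 + 9)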

Related

Linux wait process

I have a basic script to test the Linux wait command; the code is below.
#!/bin/bash
echo "testing wait command 1" &
process_id=$!
echo "testing wait command 2" &
wait $process_id
echo "command 1 completed"
echo "command 2 completed"
According to my understanding, the output should look like this:
testing wait command 1
testing wait command 2
command 1 completed
command 2 completed
But the actual output is this
testing wait command 2
testing wait command 1
command 1 completed
command 2 completed
I do not understand why command 2 comes before command 1.
If I remove & from echo "testing wait command 1" &, the output is what I expected.
I use GNU bash, version 5.0.18(1)-release (x86_64-slackware-linux-gnu) and have run your script several
times and always got testing wait command 1 printed first.
That being said, when & is used Bash forks a new process, and you cannot expect a given order in which the kernel scheduler will run the different processes (or, more technically speaking, threads) on the system. It may seem to you that all processes on the system run at the same time, but in fact the scheduler constantly switches between them and lets each of them use the CPU. This switch is known as a context switch, and it happens so frequently that humans don't even notice it. A process can be preempted at any time, not only to let another process use CPU resources but also due to hardware interrupts, which are triggered by hardware and handled by the kernel. You cannot even expect that script execution will take the same amount of time every time you run it.
This means command 2 can run and complete before command 1. If you want a strict order, remove the '&' after command 1.
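If you want to keep the background execution but still enforce the order, you can wait on each command's PID before moving on; a minimal sketch of the rewritten test script:
#!/bin/bash
echo "testing wait command 1" &
wait $!    # block until command 1 has finished
echo "testing wait command 2" &
wait $!    # block until command 2 has finished
echo "command 1 completed"
echo "command 2 completed"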

timeout in shell script and report those input with timeout

I would like to conduct an analysis using the program Arlsumstat_64bit with thousands of input files.
Arlsumstat_64bit reads input files (.arp) and writes a result file (sumstat.out).
Each input appends a new line to the result file (sumstat.out) based on the argument "0 1".
Therefore, I wrote a shell script to process all the input files (*.arp) in the same folder.
However, if an input file contains errors, the shell script gets stuck without any subsequent processing. Therefore, I found the "timeout" command to deal with this issue.
I made a shell script as follows:
#!/bin/bash
for sp in $(ls *.arp) ;
do
    echo "process start: $sp"
    timeout 10 arlsumstat_64bit ${sp}.arp sumstat.out 1 0
    rm -r ${sp}.res
    echo "process done: $sp"
done
However, I still need to know which input files failed.
How can I make a list telling me which input files timed out?
See the man page for the timeout command http://man7.org/linux/man-pages/man1/timeout.1.html
If the command times out, and --preserve-status is not set, then exit
with status 124. Otherwise, exit with the status of COMMAND. If no
signal is specified, send the TERM signal upon timeout. The TERM
signal kills any process that does not block or catch that signal.
It may be necessary to use the KILL (9) signal, since this signal
cannot be caught, in which case the exit status is 128+9 rather than
124.
You should find out which exit codes are possible for the program arlsumstat_64bit. I assume it exits with status 0 on success; otherwise the script below will not work. If you need to distinguish between timeouts and other errors, it should not itself use exit status 124 (or 137, i.e. 128+9), which timeout uses to indicate a timeout. You can then check the exit status of your command to distinguish between success, error or timeout as necessary.
To keep the script simple I assume you don't need to distinguish between timeouts and other errors.
I added some comments where I modified your script to improve it or to show alternatives.
#!/bin/bash
# don't parse the output of ls
for sp in *.arp
do
    echo "process start: $sp"
    # instead of using "if timeout 10 arlsumstat_64bit ..." you could also run
    # timeout 10 arlsumstat_64bit... and check the value of `$?` afterwards,
    # e.g. if you want to distinguish between error and timeout.
    # $sp will already contain .arp so ${sp}.arp is wrong
    # use quotes in case a file name contains spaces
    if timeout 10 arlsumstat_64bit "${sp}" sumstat.out 1 0
    then
        echo "process done: $sp"
    else
        echo "processing failed or timeout: $sp"
    fi
    # If the result for foo.arp is foo.res, the .arp must be removed
    # If it is foo.arp.res, rm -r "${sp}.res" would be correct
    # use quotes
    rm -r "${sp%.arp}.res"
done
The code below should work for you:
#!/bin/bash
# iterate over the .arp files directly instead of parsing ls
for sp in *.arp
do
    echo "process start: $sp"
    # $sp already contains the .arp extension
    timeout 10 arlsumstat_64bit "$sp" sumstat.out 1 0
    if [ $? -eq 0 ]
    then
        echo "process done successfully: $sp"
    else
        echo "process failed: $sp"
    fi
    echo "Deleting ${sp}.res"
    rm -r "${sp}.res"
done
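If you specifically want a list of the inputs that timed out, rather than all failures, you can test for timeout's exit status 124 and append those file names to a log. A minimal sketch (the timeout.log file name is just an example):
#!/bin/bash
for sp in *.arp
do
    echo "process start: $sp"
    timeout 10 arlsumstat_64bit "$sp" sumstat.out 1 0
    status=$?
    if [ "$status" -eq 124 ]; then
        # 124 is the exit status timeout uses when the time limit is hit
        echo "$sp" >> timeout.log
        echo "process timed out: $sp"
    elif [ "$status" -ne 0 ]; then
        echo "process failed (exit $status): $sp"
    else
        echo "process done: $sp"
    fi
    rm -r "${sp}.res"
done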

The Linux timeout command and exit codes

In a Linux shell script I would like to use the timeout command to end another command if some time limit is reached. In general:
timeout -s SIGTERM 100 command
But I also want my shell script to exit when the command fails for some reason. If the command fails early enough, the time limit will not be reached, and timeout will exit with exit code 0. Thus the error cannot be trapped with trap or set -e; at least, I have tried it and it did not work. How can I achieve what I want to do?
Your situation isn't very clear because you haven't included your code in the post.
timeout does exit with the exit code of the command if it finishes before the timeout value.
For example:
timeout 5 ls -l non_existent_file
# outputs: ls: cannot access non_existent_file: No such file or directory
echo $?
# outputs 2 (which is the exit code of ls)
From man timeout:
If the command times out, and --preserve-status is not set, then
exit with status 124. Otherwise, exit with the status of COMMAND. If
no signal is specified, send the TERM signal upon timeout. The TERM
signal kills any process that does not block or catch that signal.
It may be necessary to use the KILL (9) signal, since this signal
cannot be caught, in which case the exit status is 128+9 rather than
124.
See BashFAQ 105 to understand the pitfalls of set -e.
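So, rather than relying on set -e, you can check timeout's exit status explicitly and make the script exit on both failure and timeout. A minimal sketch (command is the placeholder from the question):
timeout -s SIGTERM 100 command
status=$?
if [ "$status" -eq 124 ]; then
    echo "command timed out" >&2
    exit "$status"
elif [ "$status" -ne 0 ]; then
    echo "command failed with exit code $status" >&2
    exit "$status"
fi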

defer pipe process to background after text match

So I have a bash command to start a server, and it outputs some lines before getting to the point where it outputs something like "Server started, Press Control+C to exit". How do I pipe this output so that when this line occurs I put this process in the background and continue with another script/function (i.e. to do stuff that needs to wait until the server starts, such as running tests)?
I want to end up with 3 functions
start_server
run_tests
stop_server
I've got something along the lines of:
function read_server_output {
    while read data; do
        printf '%s\n' "$data"
        if [[ $data == "Server started, Press Control+C to exit" ]]; then
            # do something here to put the server process in the background
            # so I can run another function
            :
        fi
    done
}
function start_server {
    # start the server and pipe its output to another function to check it's running
    start-server-command | read_server_output
}
function run_tests {
    # do some stuff
    :
}
function stop_server {
    # stop the server
    :
}
# run the bash script code
start_server
run_tests
stop_server
Possibly related question: SH/BASH - Scan a log file until some text occurs, then exit. How?
Thanks in advance; I'm pretty new to this.
First, a note on terminology...
"Background" and "foreground" are controlling-terminal concepts, i.e., they have to do with what happens when you type ctrl+C, ctrl+Z, etc. (which process gets the signal), whether a process can read from the terminal device (a "background" process gets a SIGTTIN that by default causes it to stop), and so on.
It seems clear that this has little to do with what you want to achieve. Instead, you have an ill-behaved program (or suite of programs) that needs some special coddling: when the server is first started, it needs some hand-holding up to some point, after which it's OK. The hand-holding can stop once it outputs some text string (see your related question for that, or the technique below).
There's a big potential problem here: a lot of programs, when their output is redirected to a pipe or file, produce no output until they have printed a "block" worth of output, or are exiting. If this is the case, a simple:
start-server-command | cat
won't print the line you're looking for (so that's a quick way to tell whether you will have to work around this issue as well). If so, you'll need something like expect, which is an entirely different way to achieve what you want.
Assuming that's not a problem, though, let's try an entirely-in-shell approach.
What you need is to run the start-server-command and save the process-ID so that you can (eventually) send it a SIGINT signal (as ctrl+C would if the process were "in the foreground", but you're doing this from a script, not from a controlling terminal, so there's no key the script can press). Fortunately sh has a syntax just for this.
First let's make a temporary file:
#! /bin/sh
# myscript - script to run server, check for startup, then run tests
TMPFILE=$(mktemp -t myscript.XXXXXX) || exit 1 # create /tmp/myscript.<unique>
trap "rm -f $TMPFILE" 0 1 2 3 15 # arrange to clean up when done
Now start the server and save its PID:
start-server-command > $TMPFILE & # start server, save output in file
SERVER_PID=$! # and save its PID so we can end it
trap "kill -INT $SERVER_PID; rm -f $TMPFILE" 0 1 2 3 15 # adjust cleanup
Now you'll want to scan through $TMPFILE until the desired output appears, as in the other question. Because this requires a certain amount of polling you should insert a delay. It's also probably wise to check whether the server has failed and terminated without ever getting to the "started" point.
while ! grep '^Server started, Press Control+C to exit$' "$TMPFILE" >/dev/null; do
    # message has not yet appeared, is server still starting?
    if kill -0 $SERVER_PID 2>/dev/null; then
        # server is running; let's wait a bit and try grepping again
        sleep 1 # or other delay interval
    else
        echo "ERROR: server terminated without starting properly" 1>&2
        exit 1
    fi
done
(Here kill -0 is used to test whether the process still exists; if not, it has exited. The "cleanup" kill -INT will produce an error message, but that's probably OK. If not, either redirect that kill command's error-output, or adjust the cleanup or do it manually, as seen below.)
At this point, the server is running and you can do your tests. When you want it to exit as if the user hit ctrl+C, send it a SIGINT with kill -INT.
Since there's a kill -INT in the trap set for when the script exits (0) as well as when it's terminated by SIGHUP (1), SIGINT (2), SIGQUIT (3), and SIGTERM (15)—that's the:
trap "do some stuff" 0 1 2 3 15
part—you can simply let your script exit at this point, unless you want to specifically wait for the server to exit too. If you want that, perhaps:
kill -INT $SERVER_PID; rm -f $TMPFILE # do the pre-arranged cleanup now
trap - 0 1 2 3 15 # don't need it arranged anymore
wait $SERVER_PID # wait for server to finish exit
would be appropriate.
(Obviously none of the above is tested, but that's the general framework.)
Probably the easiest thing to do is to start it in the background and block on reading its output. Something like:
{ start-server-command & } | {
    while read -r line; do
        echo "$line"
        echo "$line" | grep -q 'Server started' && break
    done
    cat &
}
echo script continues here after server outputs 'Server started' message
But this is a pretty ugly hack. It would be better if the server could be modified to perform a more specific action which the script could wait for.
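For completeness, here is a rough sketch of how the temporary-file approach from the first answer could be wrapped into the three functions the question asks for (start-server-command and the "Server started" message are taken from the question; everything else is illustrative):
#!/bin/sh
start_server() {
    TMPFILE=$(mktemp) || exit 1
    start-server-command > "$TMPFILE" &      # run the server in the background
    SERVER_PID=$!
    # poll the captured output until the startup message appears
    until grep -q 'Server started, Press Control+C to exit' "$TMPFILE"; do
        if ! kill -0 "$SERVER_PID" 2>/dev/null; then
            echo "ERROR: server terminated without starting properly" 1>&2
            exit 1
        fi
        sleep 1
    done
}
run_tests() {
    echo "running tests against the server"  # placeholder for the real tests
}
stop_server() {
    kill -INT "$SERVER_PID"                  # same effect as pressing Ctrl+C
    wait "$SERVER_PID"
    rm -f "$TMPFILE"
}
start_server
run_tests
stop_server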

Detect program-controlled termination versus crash in Linux

A program running under Linux can terminate for a number of reasons: the program might finish its needed computations and simply exit (normal exit); the code might detect some problem and throw an exception (early exit); or the system might stop the execution because the program tried to do something it should not, such as accessing protected memory (crash).
Is there a reliable and consistent way I can distinguish between normal/early exit and a crash? That is,
% any_program
...time passes and prompt re-appears...
% (type something here that tells me if the program crashed)
For example, are there values of $? that indicate crashes versus program-controlled termination?
The bash man page states:
The return value of a simple command is its exit status, or 128+n if
the command is terminated by signal n.
You can check for various signals indicating a crash, such as SIGSEGV (11) and SIGABRT (6), by seeing if $? is 139 or 134 respectively:
$ any_program
$ if [ $? = 139 -o $? = 134 ]; then
> echo "Crashed!"
> fi
At the very least, if $? is greater than 128, it indicates something unusual happened, although it may be that the user killed the program by hitting ctrl-c and not an actual crash.
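If you want the signal's name rather than hard-coding a few numbers, a small sketch using bash's kill -l (which translates a signal number into its name):
any_program
status=$?
if [ "$status" -gt 128 ]; then
    sig=$((status - 128))
    echo "Terminated by signal $sig ($(kill -l "$sig"))"   # e.g. 139 gives 11 (SEGV)
fi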
