How can I "delay" in Zsh with a float-point number? - linux

In Zsh there is a wait command (wait for a process or job), a while loop (which could spin until Seconds == Delay), and a sched command (run something later if the shell is still running), but no "delay" command. If there were, I fear it would be limited to whole-second delays. I need a "delay" statement that essentially causes the procedure/task to do almost nothing for the time specified as a floating-point number of seconds, or until a certain clock time.
Most scripts would use "sleep", but I would like the delay timer to run without having to open any I/O; I am pursuing the ideal that nearly anything can be accomplished from within Zsh.
Does anyone know how to make a function (or perhaps a builtin/module) perform a floating-point idle delay in seconds?

I'll argue that you are making the wrong assumption. zsh is a shell, and therefore its purpose is to be a shell. One important part of being a shell is being a POSIX-compatible shell. Moreover, zsh is largely backward compatible with bash, which in turn is backward compatible with the Bourne shell, which should be a POSIX shell.
That means that zsh must have access to sleep since sleep is required for a POSIX shell.
And that is as far as the POSIX-compatibility argument goes. Now for a practical argument: most systems implement sleep with GNU coreutils, which allows floating-point arguments. Therefore the following is POSIX-portable:
if ! sleep 0.03; then
    sleep 1
fi
It should work as a fine-grained delay in most cases, while still not breaking in the rare cases where the OS does not use GNU coreutils. As far as I am aware, those rare cases are just AIX and HP-UX systems.

It seems that as long as the I/O is confined to builtins, the I/O neither creates a noticeable lag nor depends on anything outside of Zsh. With helpful input from grochmal and a number of experiments, I have come up with a simple looped file descriptor for the read builtin combined with the : (null) builtin:
: $(read -u 1 -t 10)
The standard output of the read command is connected to Zsh for expansion as an argument to : (null), so it is guaranteed to receive no input. Knowing that it will never receive input, we have read listen to standard output with -u 1. The timeout option of Zsh's read accepts floating-point numbers, and it should behave consistently on any system that runs Zsh. Finally, even if the ERREXIT option is on, the read timeout-failure status should not be a problem, because read actually runs in a subshell that is destined to end anyway, and : always returns true. If the ERRRETURN option is on, I don't know the behavior yet, but the fix would be to append || : to the read command.
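For instance, a defensive variant of the one-liner (my own guess at the fix; I have not tested it under ERRRETURN) would be:
# Hypothetical ERRRETURN-safe form: force a zero status inside the
# command substitution so a timed-out read cannot trip error handling.
: $(read -u 1 -t 2.5 || :)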
Now it is possible to create a function, or an alias to an anonymous function, that interprets any manner of argument or input to reliably create a delay of a floating-point number of seconds:
# function sleep { -- optional switch-out for the system command
# after POSIX & GNU compatibility verified.
function delay {
    emulate -LR zsh -o extendedglob -o nullglob
    local Delay=1.
    # Match one or more digits, an optional fractional part, and an optional
    # unit suffix: s(econds), m(inutes), h(ours), d(ays), or w(eeks).
    if [[ $1 == (#b)([[:digit:]](#c1,).(#c0,1)[[:digit:]](#c0,))(s|m|h|d|w|) ]]
    then
        if [[ $match[2] == (s|) ]] Delay=$match[1]
        if [[ $match[2] == (m) ]] Delay=$[ $match[1] * 60. ** 1 ]
        if [[ $match[2] == (h) ]] Delay=$[ $match[1] * 60. ** 2 ]
        if [[ $match[2] == (d) ]] Delay=$[ ($match[1] * 60. ** 2) * 24 ]
        if [[ $match[2] == (w) ]] Delay=$[ (($match[1] * 60. ** 2) * 24) * 7 ]
        # Idle until the read on file descriptor 1 times out.
        : $(read -u 1 -t $Delay)
    else
        print -u 2 "Invalid delay time: $1"
        return 1
    fi
}
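A quick usage sketch (assuming the function above has been loaded):
delay 0.25     # a quarter of a second
delay 1.5m     # 90 seconds
delay 2h       # two hours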

Related

Recover after "kill 0"

I have a script that invokes kill 0. I want to invoke that script from another script, and have the outer script continue to execute. (kill 0 sends a signal, defaulting to SIGTERM, to every process in the process group of the calling process; see man 2 kill.)
kill0.sh:
#!/bin/sh
kill 0
caller.sh:
#!/bin/sh
echo BEFORE
./kill0.sh
echo AFTER
The current behavior is:
$ ./caller.sh
BEFORE
Terminated
$
How can I modify caller.sh so it prints AFTER after invoking kill0.sh?
Modifying kill0.sh is not an option. Assume that kill0.sh might read from stdin and write to stdout and/or stderr before invoking kill 0, and I don't want to interfere with that. I still want the kill 0 command to kill the kill0.sh process itself; I just don't want it to kill the caller as well.
I'm using Ubuntu 16.10 x86_64, and /bin/sh is a symlink to dash. That shouldn't matter, and I prefer answers that don't depend on that.
This is of course a simplified version of a larger set of scripts, so I'm at some risk of having an XY problem, but I think that a solution to the problem as stated here should let me solve the actual problem. (I have a wrapper script that invokes a specified command, capturing and displaying its output, with some other bells and whistles.)
One solution
You need to trap the signal in the parent, but enable it in the child. So a script like run-kill0.sh could be:
#!/bin/sh
echo BEFORE
trap '' TERM
(trap 15; exec ./kill0.sh)
echo AFTER
The first trap disables the TERM signal. The second trap in the sub-shell re-enables the signal (using the signal number instead of the name — see below) before running the kill0.sh script. Using exec is a minor optimization — you can omit it and it will work the same.
Digression on obscure syntactic details
Why 15 instead of TERM in the sub-shell? Because when I tested it with TERM instead of 15, I got:
$ sh -x run-kill0.sh
+ echo BEFORE
BEFORE
+ trap '' TERM
+ trap TERM
trap: usage: trap [-lp] [arg signal_spec ...]
+ echo AFTER
AFTER
$
When I used 15 in place of TERM (twice), I got:
$ sh -x run-kill0.sh
+ echo BEFORE
BEFORE
+ trap '' 15
+ trap 15
+ exec ./kill0.sh
Terminated: 15
+ echo AFTER
AFTER
$
Using TERM in place of the first 15 would also work.
Bash documentation on trap
Studying the Bash manual for trap shows:
trap [-lp] [arg] [sigspec …]
The commands in arg are to be read and executed when the shell receives signal sigspec. If arg is absent (and there is a single sigspec) or equal to ‘-’, each specified signal’s disposition is reset to the value it had when the shell was started.
A second solution
The second sentence is the key: trap - TERM should (and empirically does) work.
#!/bin/sh
echo BEFORE
trap '' TERM
(trap - TERM; exec ./kill0.sh)
echo AFTER
Running that yields:
$ sh -x run-kill0.sh
+ echo BEFORE
BEFORE
+ trap '' TERM
+ trap - TERM
+ exec ./kill0.sh
Terminated: 15
+ echo AFTER
AFTER
$
I've just re-remembered why I use numbers and not names (but my excuse is that the shell — it wasn't Bash in those days — didn't recognize signal names when I learned it).
POSIX documentation for trap
However, in Bash's defense, the POSIX spec for trap says:
If the first operand is an unsigned decimal integer, the shell shall treat all operands as conditions, and shall reset each condition to the default value. Otherwise, if there are operands, the first is treated as an action and the remaining as conditions.
If action is '-', the shell shall reset each condition to the default value. If action is null ( "" ), the shell shall ignore each specified condition if it arises.
This is clearer than the Bash documentation, IMO. It states why trap 15 works. There's also a minor glitch in the presentation. The synopsis says (on one line):
trap n [condition...]trap [action condition...]
It should say (on two lines):
trap n [condition...]
trap [action condition...]

Shell script's return value is different in normal vs. debugging-mode execution

I was solving this "recursive factorial" problem (a function computing n! for an argument n). This is the bash shell script code I came up with; I give one integer as an argument:
#!/bin/bash
# Script name: RecFact.sh
#
# recursive factorial
factorial(){
    if [ $1 -eq 0 ]; then
        return 1
    fi
    pro=`expr $pro \* $1`
    factorial `expr $1 - 1`
    return $pro
}
pro=1
factorial $1
echo "$?"
The problem is that when I run it in the terminal with 1 to 5 as the one argument it needs (e.g. ./RecFact.sh 5), the returned value (like 120 for 5) is correct.
But when the argument goes above 5, the result comes out all wrong (like 208 for 6, instead of 720).
What's really strange is that if I run it in debugging mode (e.g. sh -x ./RecFact.sh 6), the debugger gives the correct value (like 720 for 6) for every input value.
What could be the reason?
The exit code (which you inspect with $?) is in the range 0-255. And indeed, 720 modulo 256 (720 = 2*256 + 208) gives you 208.
Instead of abusing $? you should use a dedicated variable to convey the result.
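A minimal sketch of that fix, printing the result on stdout instead of smuggling it through the exit status (the function name and structure mirror the question's script, but this rewrite is mine):
#!/bin/bash
# Recursive factorial that reports its result on stdout instead of via
# `return`, so values above 255 survive intact.
factorial() {
    if [ "$1" -le 1 ]; then
        echo 1
    else
        local sub
        sub=$(factorial $(( $1 - 1 )))
        echo $(( $1 * sub ))
    fi
}
factorial "$1"    # e.g. ./RecFact.sh 6 -> 720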
In normal mode, i.e. when you execute ./RecFact.sh 6, you are actually executing
bash RecFact.sh 6
since your script starts with #!/bin/bash.
But in debug mode
sh -x ./RecFact.sh 6
you are executing it with sh. On my system sh is a link to dash. This may be the case on your system too.
bash and dash are two different shells; most commands work the same, but they are different. Hence you are seeing two different outputs.
So in your script, if you change #!/bin/bash to #!/bin/sh, it will work fine.
Also, #employee of the month is correct: do not abuse $?.

bash script read pipe or argument

I want my script to read a string either from stdin, if it's piped, or from an argument. So first I want to check whether some text is piped in, and if not, it should use an argument as input. My code looks something like this:
value=$(cat) # read from stdin
if [ "$pipe" != "" ]; then # check that the pipe is not empty
    # Do something with the piped string
else
    # Do something with the argument string
fi
The problem is that when nothing is piped, the script halts and waits for Ctrl-D, and I don't want that. Any suggestions on how to solve this?
Thanks in advance.
/Tomas
What about checking the argument first?
if (($#)) ; then
    process "$1"
else
    cat | process
fi
Or just take advantage of the same behaviour of cat:
cat "$@" | process
If you only need to know if it's a pipe or a redirection, it should be sufficient to determine if stdin is a terminal or not:
if [ -t 0 ]; then
    # stdin is a tty: process command line
else
    # stdin is not a tty: process standard input
fi
[ (aka test) with -t is equivalent to the libc isatty() function.
The above will work with both something | myscript and myscript < infile. This is the simplest solution, assuming your script is for interactive use.
The [ command is a builtin in bash and some other shells, and since [/test with -t is in POSIX, it's portable too (not relying on Linux, bash, or GNU utility features).
There's one edge case: test -t also returns false if the file descriptor is invalid, but it would take some slight adversity to arrange that. test -e will detect this, assuming you have a filename such as /dev/stdin to use.
The POSIX tty command can also be used, and handles the adversity above. It will print the tty device name and return 0 if stdin is a terminal, and will print "not a tty" and return 1 in any other case:
if tty >/dev/null ; then
    # stdin is a tty: process command line
else
    # stdin is not a tty: process standard input
fi
(with GNU tty, you can use tty -s for silent operation)
A less portable way, though certainly acceptable on a typical Linux, is to use GNU stat with its %F format specifier, which returns the text "character special file", "fifo", or "regular file" in the cases of a terminal, a pipe, and a redirection respectively. stat requires a filename, so you must provide a specially-named file of the form /dev/stdin, /dev/fd/0, or /proc/self/fd/0, and use -L to chase symlinks:
stat -L -c "%F" /dev/stdin
This is probably the best way to handle non-interactive use (since you can't make assumptions about terminals then), or to detect an actual pipe (FIFO) distinct from redirection.
There is a slight gotcha with %F in that you cannot use it to tell the difference between a terminal and certain other device files, for example /dev/zero or /dev/null which are also "character special files" and might reasonably appear. An unpretty solution is to use %t to report the underlying device type (major, in hex), assuming you know what the underlying tty device number ranges are... and that depends on whether you're using BSD style ptys or Unix98 ptys, or whether you're on the actual console, among other things. In the simple case %t will be 0 though for a pipe or a redirection of a normal (non-special) file.
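For instance, a small probe along those lines (GNU stat assumed; interpreting the major number is system-dependent):
# Print stdin's underlying device major number in hex; on a typical Linux
# box, 0 indicates a pipe or a redirection from a regular file.
stat -L -c "%t" /dev/stdin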
More general solutions to this kind of problem are to use bash's read with a timeout (read -t 0 ...) or non-blocking I/O with GNU dd (dd iflag=nonblock).
The latter will allow you to detect a lack of input on stdin: dd will return an exit code of 1 if there is nothing ready to read. However, these are more suitable for non-blocking polling loops than for a one-off check: there is a race condition when you start two or more processes in a pipeline, as one may be ready to read before another has written.
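A minimal polling sketch of the read approach (assuming bash 4 or later, where a zero timeout reports whether input is already available without consuming any of it):
# Succeeds only if data is already buffered on stdin.
if read -t 0; then
    IFS= read -r line
    printf 'stdin had: %s\n' "$line"
else
    echo 'no input pending'
fi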
It's easier to check for command line arguments first and fallback to stdin if no arguments. Shell Parameter Expansion is a nice shorthand instead of the if-else:
value=${*:-`cat`}
# do something with $value
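A usage sketch of that one-liner (myscript is a stand-in name for your script):
# ./myscript hello        -> value comes from the argument
# echo hello | ./myscript -> value comes from stdin
value=${*:-$(cat)}
printf '%s\n' "$value"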

Is there a way to find the running time of the last executed command in the shell?

Is there a command like time that can display the running time details of the last or past executed commands on the shell?
zsh has some built in features to time how long commands take.
If you enable the inc_append_history_time option with
setopt inc_append_history_time
then the time taken to run every command is saved in your history and can later be viewed with history -D.
I do not know how it is in bash, but in zsh you can define preexec and precmd functions that save the current time to the variables $STARTTIME (in preexec) and $ENDTIME (in precmd), so you will be able to find the approximate running time. Or you can define an accept-line function so that it prepends the time before each command.
UPDATE:
This is the code, which will store elapsed times in the $_elapsed array:
preexec () {
    (( $#_elapsed > 1000 )) && set -A _elapsed $_elapsed[-1000,-1]
    typeset -ig _start=SECONDS
}
precmd() { set -A _elapsed $_elapsed $(( SECONDS-_start )) }
Then if you run sleep 10s:
% set -A _elapsed # Clear the times
% sleep 10s
% sleep 2s ; echo $_elapsed[-1]
10
% echo $_elapsed
0 10 2
No need for four variables. No problems with names or additional delays. Just note that the $_elapsed array may grow very big, so you need to delete the first items; this is done by the following piece of code: (( $#_elapsed > 1000 )) && set -A _elapsed $_elapsed[-1000,-1].
UPDATE2:
Found a script that supports zsh-style precmd and preexec in bash. You may need to remove typeset -ig (I used it just to force $_start to be an integer) and replace set -A var ... with var=( ... ) to get this working. And I do not know how to slice arrays or get their length in bash.
Script: http://www.twistedmatrix.com/users/glyph/preexec.bash.txt (web.archive)
UPDATE3:
Found one problem: if you hit return on an empty line, preexec does not run while precmd does, so you will get meaningless values in the $_elapsed array. To fix this, replace the precmd function with the following code:
precmd () {
    (( _start >= 0 )) && set -A _elapsed $_elapsed $(( SECONDS-_start ))
    _start=-1
}
Edit 3:
The structure of this answer:
1. There is no ready-made way to time a command that has already been run.
2. There are ways to deduce a guesstimate of the duration of a command's run time.
3. A proof of concept is shown (starting with the hypothesis that it can't be done and ending with the conclusion that the hypothesis was wrong).
4. There are hacks you can put into place beforehand that will record the elapsed time of every command you run.
5. Conclusion
The answer, labeled by its parts according to the outline above:
Part 1 - the short answer is "no"
Original
Nope, sorry. You have to use time.
Part 2 - maybe you can deduce a result
In some cases if a program writes output files or information in log files, you might be able to deduce running time, but that would be program-specific and only a guesstimate. If you have HISTTIMEFORMAT set in Bash, you can look at entries in the history file to get an idea of when a program started. But the ending time isn't logged, so you could only deduce a duration if another program was started immediately after the one you're interested in finished.
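For instance, with timestamps enabled, recent history entries show when each command started (the format string here is just an example):
# Show start times alongside the five most recent history entries (bash):
HISTTIMEFORMAT='%F %T ' history 5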
Part 3 - a hypothesis is falsified
Hypothesis: Idle time will be counted in the elapsed time
Edit:
Here is an example to illustrate my point. It's based on the suggestion by ZyX, but would be similar using other methods.
In zsh:
% precmd() { prevstart=$start; start=$SECONDS; }
% preexec() { prevend=$end; end=$SECONDS; }
% echo "T: $SECONDS ps: $prevstart pe: $prevend s: $start e: $end"
T: 1491 ps: 1456 pe: 1458 s: 1459 e: 1491
Now we wait... let's say for 15 seconds, then:
% echo "T: $SECONDS"; sleep 10
T: 1506
Now we wait... let's say for 20 seconds, then:
% echo "T: $SECONDS ps: $prevstart pe: $prevend s: $start e: $end"
T: 1536 ps: 1492 pe: 1506 s: 1516 e: 1536
As you can see, I was wrong. The start time (1516) minus the previous end time (1506) is the duration of the command (sleep 10). Which also shows that the variables I used in the functions need better names.
Hypothesis falsified - it is possible to get the correct elapsed time without including the idle time
Part 4 - a hack to record the elapsed time of every command
Edit 2:
Here are the Bash equivalents to the functions in ZyX's answer (they require the script linked to there):
preexec () {
    (( ${#_elapsed[@]} > 1000 )) && _elapsed=(${_elapsed[@]: -1000})
    _start=$SECONDS
}
precmd () {
    (( _start >= 0 )) && _elapsed+=($(( SECONDS - _start )))
    _start=-1
}
After installing preexec.bash (from the linked script) and creating the two functions above, the example run would look like this:
$ _elapsed=() # Clear the times
$ sleep 10s
$ sleep 2s ; echo ${_elapsed[@]: -1}
10
$ echo ${_elapsed[@]}
0 10 2
Part 5 - conclusion
Use time.
I take it you are running commands that take a long time and not realizing at the beginning that you would like to time how long they take to complete. In zsh if you set the environment variable REPORTTIME to a number, any command taking longer than that number of seconds will have the time it took printed as if you had run it with the time command in front of it. You can set it in your .zshrc so that long running commands will always have their time printed. Note that time spent sleeping (as opposed to user/system time) is not counted towards triggering the timer but is still tracked.
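For example, in zsh (the 10-second threshold is arbitrary):
# In ~/.zshrc: print time-style statistics for any command whose combined
# user and system CPU time exceeds 10 seconds.
REPORTTIME=10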
I think you can only get timing statistics for commands you run using the command 'time'.
From the man page:
time [options] command [arguments...]
DESCRIPTION
The time command runs the specified program command with the given arguments. When command finishes,
time writes a message to standard error giving timing statistics about this program run.
I wrote a tool (disclaimer!) which tracks the command start and stop time, besides other command metadata (e.g. cwd or modified files), by hooking into a running shell process (currently only bash is supported). In contrast to the other solutions posted here, it also correctly tracks the run time of async commands, e.g. sleep 5 &.
Check it out here:
https://github.com/tycho-kirchner/shournal
One can use the history command with epoch seconds %s and the built-in variable $EPOCHSECONDS to calculate when the command finished by leveraging only $PROMPT_COMMAND.
# Save start time before executing command (does not work due to PS0 sub-shell)
# preexec() {
#     STARTTIME=$EPOCHSECONDS
# }
# PS0=preexec

# Save end time, without duplicating commands when pressing Enter on an empty line
precmd() {
    local st=$(HISTTIMEFORMAT='%s ' history 1 | awk '{print $2}');
    if [[ -z "$STARTTIME" || (-n "$STARTTIME" && "$STARTTIME" -ne "$st") ]]; then
        ENDTIME=$EPOCHSECONDS
        STARTTIME=$st
    else
        ENDTIME=0
    fi
}

__timeit() {
    precmd;
    if ((ENDTIME - STARTTIME >= 0)); then
        printf 'Command took %d seconds.\n' "$((ENDTIME - STARTTIME))";
    fi
    # Do not forget your:
    # - OSC 0 (set title)
    # - OSC 777 (notification in gnome-terminal, urxvt; note, this one has preexec and precmd as OSC 777 features)
    # - OSC 99 (notification in kitty)
    # - OSC 7 (set url) - out of scope for this question
}

export PROMPT_COMMAND=__timeit
Note: If you have ignoredups in your $HISTCONTROL, then this will not report back for a command that is re-run.
Reference (copied from my own answer to a similar question):
Use PS0 and PS1 to display execution time of each bash command

Multi-threaded BASH programming - generalized method?

Ok, I was running POV-Ray on all the demos, but POV's still single-threaded and wouldn't utilize more than one core. So, I started thinking about a solution in BASH.
I wrote a general function that takes a list of commands and runs them in the designated number of sub-shells. This actually works but I don't like the way it handles accessing the next command in a thread-safe multi-process way:
It takes, as an argument, a file with commands (one per line).
To get the "next" command, each process ("thread") will:
1. wait until it can create a lock file, with: ln $CMDFILE $LOCKFILE
2. read the command from the file,
3. modify $CMDFILE by removing the first line,
4. remove the $LOCKFILE.
Is there a cleaner way to do this? I couldn't get the sub-shells to read a single line from a FIFO correctly.
Incidentally, the point of this is to enhance what I can do on a BASH command line, and not to find non-bash solutions. I tend to perform a lot of complicated tasks from the command line and want another tool in the toolbox.
Meanwhile, here's the function that handles getting the next line from the file. As you can see, it modifies an on-disk file each time it reads/removes a line. That's what seems hackish, but I'm not coming up with anything better, since FIFOs didn't work without setvbuf() in bash.
#
# Get/remove the first line from FILE, using LOCK as a semaphore (with
# short sleep for collisions). Returns the text on standard output,
# returns zero on success, non-zero when file is empty.
#
parallel__nextLine()
{
    local line rest file=$1 lock=$2

    # Wait for lock...
    until ln "${file}" "${lock}" 2>/dev/null
    do
        sleep 1
        [ -s "${file}" ] || return $?
    done

    # Open, read one "line", save the "rest" back to the file:
    exec 3<"$file"
    read line <&3 ; rest=$(cat<&3)
    exec 3<&-

    # After the last line, make sure the file is empty:
    ( [ -z "$rest" ] || echo "$rest" ) > "${file}"

    # Remove lock and 'return' the line read:
    rm -f "${lock}"
    [ -n "$line" ] && echo "$line"
}
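A worker-loop usage sketch for that function ($CMDFILE and $LOCKFILE are the question's names; note that eval-ing lines trusts the contents of the command file):
# Each background "thread" pulls and runs lines until the file is drained.
while line=$(parallel__nextLine "$CMDFILE" "$LOCKFILE"); do
    eval "$line"
done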
# adjust these as required
args_per_proc=1       # 1 is fine for long-running tasks
procs_in_parallel=4
xargs -n$args_per_proc -P$procs_in_parallel povray < list
Note that the nproc command, coming soon to coreutils, will automatically determine the number of available processing units, which can then be passed to -P.
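A sketch of how that would plug in once nproc is available (it ships with modern coreutils):
# Size the worker pool to the machine automatically.
procs_in_parallel=$(nproc)
xargs -n"$args_per_proc" -P"$procs_in_parallel" povray < list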
If you need real thread safety, I would recommend migrating to a better scripting system.
With python, for example, you can create real threads with safe synchronization using semaphores/queues.
Sorry to bump this after so long, but I pieced together a fairly good solution for this, IMO.
It doesn't work perfectly, but it will limit the script to a certain number of child tasks running, and then wait for all the rest at the end.
#!/bin/bash
pids=()
# Block until the pool is back down to six jobs, then remember the survivors.
thread() {
    local this
    while [ ${#} -gt 6 ]; do
        this=${1}
        wait "$this"
        shift
    done
    pids=($1 $2 $3 $4 $5 $6)
}
for i in 1 2 3 4 5 6 7 8 9 10
do
    sleep 5 &
    pids=( ${pids[@]-} $! )
    thread ${pids[@]}
done
# Wait for any jobs still running at the end.
for pid in ${pids[@]}
do
    wait "$pid"
done
It seems to work great for what I'm doing (handling parallel uploading of a bunch of files at once) and keeps it from breaking my server, while still making sure all the files get uploaded before the script finishes.
I believe you're actually forking processes here, not threading. I would recommend looking for threading support in a different scripting language like Perl, Python, or Ruby.
