Redirected output hangs when using tee - linux

I'd like to provide an optional logging parameter in a bash script, and would like to use exec to redirect output through tee from the beginning. However, starting the tee process causes the script to hang, I believe because stdout is not closed:
# Output to a log file, if set
if [[ $OPT_LOG ]]; then
exec > >(tee -a $OPT_LOG)
fi
I've attempted to close with:
exec >&-
But it still hangs. Is there another way to close tee properly so the script exits cleanly at the end of execution?

It seems that, for some reason, using tee stops the prompt ($PS1) from appearing, because the shell script has not exited. As a workaround, I generally add a short sleep after calling tee.
#!/bin/bash
exec > >(tee -a mylog)
sleep .1
# my code

Converting comments into an answer, with minor edits.
I noted:
The following code exits OK for me on Mac OS X 10.10.3.
OPT_LOG=file.name
if [[ $OPT_LOG ]]
then exec > >(tee -a $OPT_LOG)
fi
for ((i = 0; i < 10; i++))
do
echo "Logging message $i at $(date)"
sleep 1
done
Your problem is probably in the code you've not shown. What commands are you running? What do you get from bash -x yourscript.sh?
And chatraed observed
If you remove the sleep and date calls from your example, the script no longer exits properly, as Andrew reported:
OPT_LOG=file.name;
if [[ $OPT_LOG ]]; then exec > >(tee -a $OPT_LOG); fi;
for ((i = 0; i < 10; i++)); do echo "Logging message $i"; done
And I responded:
Now that's an interesting observation! I can reproduce your result.
I experimented a bit:
I added a pwd before the for loop with 'just the echo', and that didn't affect things (but pwd is probably a Bash built-in).
I added bash -c 'exit 0' (which is not a built-in) and the code terminated OK.
I tried using > >(tee -a $OPT_LOG &) and the output didn't appear — on screen or in the file. (I find this surprising, but it was an attempted workaround, not a major part of the work.)
My impression is that Andrew has found a bug in Bash that could viably be reported. (See the Bash manual on Reporting Bugs for how to do that.) Failing to exit simply because no external command has been executed since the I/O redirection does not seem like acceptable behavior to me. I can confirm that Apple's Bash 3.2.57 has the problem; so does Bash 4.3.27 (which I built for myself, so it is patched with the fixes for the ShellShock bug).
It is relatively unusual that a shell script does not invoke any external command, and invoking any external command after the exec redirection seems to suppress the bug.
It also confirms that chatraed's workaround works, albeit more slowly than minimally necessary. For production use, I'd choose sleep 0, which is most of a tenth of a second faster than sleep 0.1. And sleeping for a fraction of a second only works on some systems; classically (and according to POSIX), sleep does not sleep for fractional seconds. OTOH, if you write sleep 0.1, on systems without support for fractional seconds, including the leading 0 probably makes it sleep for zero seconds anyway; writing .1 might or might not have the same effect.
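For completeness, a more deterministic alternative to sleeping is to wait for tee itself. Here is a minimal sketch, assuming Bash 4.4 or later, where wait accepts the PID of the last process substitution (which Bash makes available in $!):
#!/bin/bash
if [[ $OPT_LOG ]]; then
    exec > >(tee -a "$OPT_LOG")
    tee_pid=$!           # PID of the process substitution running tee
fi
echo "Logging message at $(date)"   # ... script body ...
if [[ $OPT_LOG ]]; then
    exec >&-             # close stdout so tee sees EOF and can exit
    wait "$tee_pid"      # block until tee has flushed everything
fi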

Related

Parallel run and wait for processes from subshell

Hi all! I'm trying to make something like a parallel tool for the shell, simply because the functionality of parallel is not enough for my task. The reason is that I need to run different versions of a compiler.
Imagine that I need to compile 12 programs with different compilers, but I can run only 4 of them simultaneously (otherwise the PC runs out of memory and crashes :). I also want to be able to observe what's going on with each compile, so I execute every compile in a new window.
Just to make it easier, here I'll replace the compiler with a small script, sleep.sh, that waits and then prints its process ID:
#!/bin/bash
sleep 30
echo $$
So the main script, parallel_run.sh, should look like this:
#!/bin/bash
for i in {0..11}; do
    xfce4-terminal -H -e "./sleep.sh" &
    pids[$i]=$!
    pstree -p ${pids[$i]}
    if (( $i % 4 == 0 )); then
        for pid in ${pids[*]}; do
            wait $pid
        done
    fi
done
The problem is that with $! I get the PID of xfce4-terminal and not of the program it executes. So if I look at the pstree output of the 1st iteration, I can see output from the main script:
xfce4-terminal(31666)----{xfce4-terminal}(31668)
|--{xfce4-terminal}(31669)
and sleep.sh reports that its PID was 30876 at that time. Thus wait does not work at all in this case.
Q: How do I get the right PID of a compiler that runs in a subshell?
Or maybe there is another way to solve a task like this?
It seems there is no way to trace the PID from parent to child if you start the process in a new xfce4-terminal, because the terminal process dies right after it has executed the given command. So I came to a solution which is not perfect, but acceptable in my situation: I run the compiler processes in the background and redirect their output to .log files. Then I run tail on those log files in the terminals, and when the compilers from the current batch are done I kill all tails that belong to the current $USER, then run the next batch.
#!/bin/bash
for i in {1..8}; do
    ./sleep.sh > ./process_$i.log &               # compiler stand-in, output to a log
    prcid=$!                                      # PID of the compiler itself
    xfce4-terminal -e "tail -f ./process_$i.log" &
    pids[$i]=$prcid
    if (( $i % 4 == 0 )); then
        for pid in ${pids[*]}; do
            wait $pid
        done
        killall -u $USER tail                     # close the log viewers for this batch
    fi
done
Hopefully there will be no other tails running at that time :)
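For what it's worth, a slightly simpler variant of the same idea (an untested sketch) is to reset the PID array after each batch, so wait only blocks on the current batch:
#!/bin/bash
pids=()
for i in {1..8}; do
    ./sleep.sh > "./process_$i.log" &
    pids+=($!)                       # PID of the compiler, not the terminal
    xfce4-terminal -e "tail -f ./process_$i.log" &
    if (( i % 4 == 0 )); then
        wait "${pids[@]}"            # block until this batch finishes
        pids=()                      # start the next batch with a clean array
        killall -u "$USER" tail      # close this batch's log viewers
    fi
done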

How can I make a ksh script terminate itself if there are any issues?

I have written a few ksh scripts, about 6 of them.
They are written to handle huge data files, something like 207 MB big. While running a script, it sometimes gets stuck and does not end.
Human intervention is then required.
In a production environment, I want them to run automatically, and to end automatically if there are any issues, without the need for any human intervention.
If there are some issues with a file, the script should end and start executing the next file.
How can I make it terminate itself if it gets stuck?
I assume that the only way you see the issues is that the script takes too long. In that case, a simple script that kills the process after a timeout should be sufficient:
#!/bin/bash
# Killerscript: kill process $1 if it is still running after $2 seconds
PID=$1
TIME=$2
typeset -i i
i=0
while [ $i -lt $TIME ] ; do
    if ps $PID > /dev/null ; then   # still running?
        i=$i+1
        sleep 1
    else
        exit 0                      # process finished on its own
    fi
done
kill $PID                           # timed out: kill it
Your workflow would then be something like:
#!/bin/bash
process_1 &
killerscript $! 60
process_2 &
killerscript $! 30
...
If you have other ways to detect issues in your processes, you can easily add them to the loop in your killerscript.
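As an aside, on systems with GNU coreutils the timeout utility provides the same behavior without a helper script; the workflow above would reduce to something like this (the process names are the placeholders from the example):
#!/bin/bash
timeout 60 process_1    # kill process_1 if it runs longer than 60 seconds
timeout 30 process_2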

How to check if a file's size is not growing, and if not, kill the $$ of the script

I am trying to figure out a way to monitor the files I am dumping from my script. If no growth is seen in the child files, then kill my script.
I am doing this to free up resources when they are not needed. Here is what I have in mind, but I think my approach is going to put a burden on the CPU. Can anyone suggest a more efficient way of doing this?
The script below is supposed to poll every 15 seconds and collect two sizes of the same file; if the two samples are the same, then exit.
checkUsage() {
    while true; do
        sleep 15
        fileSize=$(stat -c%s "$1")
        sleep 10
        fileSizeNew=$(stat -c%s "$1")
        if [ "$fileSize" == "$fileSizeNew" ]; then
            echo -e "[Error]: No activity noted on this window from 5 sec. Exiting..."
            kill -9 $$
        fi
    done
}
And I am planning to call it as follow (in background):
checkUsage /var/log/messages &
I would also accept a solution that monitors a tail command and exits if nothing is being printed by it. To be clear: the end goal of this question is to check whether the file has been modified in the last 15 seconds, and if not, to exit or throw some error.
I have achieved this with the script above, but I don't know whether it is the smartest way of doing it. I have asked this question to get other people's views on whether there is an alternative or better way of doing it.
I would base the check on the file modification time instead of the size, so something like this (untested code):
checkUsage() {
    while true; do
        # Test if the file mtime is more than 'second arg' seconds old, 10 seconds by default
        if [ $(( $(date +"%s") - $(stat -c%Y /var/log/messages) )) -gt ${2-10} ]; then
            echo -e "[Error]: No activity noted on this window from ${2-10} sec. Exiting..."
            return 1
        fi
        # Sleep 'first arg' seconds, 15 seconds by default
        sleep ${1-15}
    done
}
The idea is to compare the file mtime with the current time; if the difference is greater than the second argument (in seconds), print the message and return.
And then I would call it like this later (or with no args to use the defaults):
checkUsage 20 10 || exit 1
which would exit the script with code 1 when the function returns from its otherwise infinite loop (that is, once the file stops being modified).
Edit: rereading this, the target file could be a parameter too, to allow better reuse of the function; that is left as an exercise for the reader.
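For instance, a parameterized variant might look like this (an equally untested sketch; the argument order is just one possible choice):
checkUsage() {
    # $1: file to watch; $2: poll interval (default 15s); $3: max age (default 10s)
    local file=$1 interval=${2-15} maxage=${3-10}
    while true; do
        if [ $(( $(date +"%s") - $(stat -c%Y "$file") )) -gt "$maxage" ]; then
            echo "[Error]: $file not modified for more than $maxage sec. Exiting..."
            return 1
        fi
        sleep "$interval"
    done
}
checkUsage /var/log/messages 15 10 || exit 1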
If you are on Linux, on a local file system (ext4, Btrfs, ...), not a network file system, then you could consider the inotify(7) facilities: something can be triggered when a file or directory changes or is accessed.
In particular, you might have some incron job through an incrontab(5) file; maybe it could communicate with some other job...
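For illustration, an incrontab(5) entry (added with incrontab -e) has the form <path> <mask> <command>, where incron expands $@ to the watched path; a sketch with a hypothetical handler script:
/var/log/messages IN_MODIFY /usr/local/bin/on-change.sh $@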
PS. I am not sure I understand what you really want to do...
I suppose an external program is modifying /var/log/messages.
If that is the case, below is my script (with minor changes to yours):
#!/bin/bash
# Bash script to monitor changes to a file
checkUsage() # Note that U is in caps
{
    while true
    do
        sleep 15
        fileSize=$(stat -c%s "$1")
        sleep 10
        fileSizeNew=$(stat -c%s "$1")
        if [ "$fileSize" == "$fileSizeNew" ]
        then
            echo -e "[Notice : ] no changes noted in $1 : gracefully exiting"
            exit # previously this was kill -9 $$
            # Changing this to exit ends the program gracefully;
            # use kill -9 only for a process that is not under your control.
            # kill -9 sends the SIGKILL signal.
        fi
    done
}
checkUsage "$1" # I have added this to your script
# End of the script
Save the script as checkusage and run it like:
./checkusage /var/log/messages &
Edit:
Since you're looking for better solutions, I would suggest inotifywait; thanks to the other answerer for the suggestion.
Below is my code:
while inotifywait -t 10 -q -e modify "$1" >/dev/null
do
    sleep 15 # as you said, the polling should happen every 15 seconds
done
echo "Script exited gracefully: $1 has not been changed"
Below are the details from the inotifywait manpage:
-t <seconds>, --timeout <seconds>
    Exit if an appropriate event has not occurred within <seconds> seconds.
    If <seconds> is zero (the default), wait indefinitely for an event.
-e <event>, --event <event>
    Listen for specific event(s) only. The events which can be listened
    for are listed in the EVENTS section. This option can be specified
    more than once. If omitted, all events are listened for.
-q, --quiet
    If specified once, the program will be less verbose. Specifically,
    it will not state when it has completed establishing all inotify watches.
modify (event)
    A watched file or a file within a watched directory was written to.
Notes
You might have to install the inotify-tools package first to make use of the inotifywait command. Check the inotify-tools page on GitHub.

Keep a script running through ssh after logout

This is the first question I have posted here. I tried to do a thorough search, but if I missed something (and the answer is obvious somewhere else), please just let me know.
I have a script that runs a program for me, here it is:
csv_file=../data/teste_nohup.csv
trace_file=../data/gnp.trace

declare -i n=100
declare -i p=1
declare -i counter=0

while [ $counter -lt 3 ]; do
    n=100
    while true; do
        nice -19 sage gnptest.py ${n} ${p} | tee -a ${csv_file}
        notify-send "finished test gnp ${n} ${p}"
    done
done
So what I'm trying to do is run the gnptest.py program a few times, and have the results written to the csv_file.
The problem is that, depending on the input, the program may take a long time to complete. So I'd like to connect to the server over ssh, start the program, close the terminal, and check the output file from time to time.
I've tried nohup and disown. nohup creates a huge nohup.out file, full of errors that I don't get when running the script normally (it complains about the -lt operand, for example). But the biggest problem I'm facing is that neither command (nohup or disown -h) runs the program and sends the output to the file I've specified in the csv_file variable, which is done using the tee command. Also, neither of them seems to keep running after I log out...
Any help will be much appreciated.
Thanks in advance!!
I have just joined, so I cannot add a comment.
Please try using redirection instead of tee in the script.
And to get rid of nohup.out, use the following to run the script:
nohup script.sh > /dev/null 2>&1 &
If the above produces an error, use:
nohup script.sh > /dev/null 2>&1 </dev/null &
Hope this will help.
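Applied to the loop in the question, replacing the tee pipeline with a plain append redirection might look like this (untested sketch):
# Append directly to the CSV instead of piping through tee,
# so the command no longer needs a terminal to write to.
nice -19 sage gnptest.py ${n} ${p} >> ${csv_file}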

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script in order to run a sequence of child scripts and send an email upon overall completion, e.g. topscript.sh:
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$?
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group#mydomain.com
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log, I see nothing, even though top shows that myscript.sh is currently executing and is therefore presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user#host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with the following content:
#!/bin/bash
for ((i = 0; i < 10; i++)); do
    echo $i of 9
    sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the things being run in your subscripts use a large output buffer? I know that when I've run Python scripts before, they have a pretty big output buffer, so you don't see any output for a while. Do you actually see the entire output in the email that gets sent at the end of topscript.sh? Is it just that you don't see the output while the processes are running?
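If a Python child script is the one buffering, forcing unbuffered output is one possible hedge; for a hypothetical myscript.py, either of these should work with CPython:
python -u myscript.py              # run the interpreter unbuffered
PYTHONUNBUFFERED=1 ./myscript.py   # equivalent via the environment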
try
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer (shipped with the expect package) is not always available as a standard binary on old-style Unix platforms and may require finding and installing a package that provides it.
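On GNU/Linux, stdbuf from coreutils is an alternative sketch of the same idea: it forces line-buffered stdout for programs that use C stdio's default buffering (it will not affect programs that manage their own buffers):
stdbuf -oL topscript.sh >/var/log/topscript.log 2>&1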
I hope this helps.
