Terminating half of a pipe on Linux does not terminate the other half

I have a filewatch program:
#!/bin/sh
# On Linux, uses inotifywait -mcre close_write; on OS X, uses fswatch.
set -e
[[ "$#" -ne 1 ]] && echo "args count" && exit 2
if [[ `uname` = "Linux" ]]; then
    inotifywait -mcre close_write "$1" | sed 's/,".*",//'
elif [[ `uname` = "Darwin" ]]; then
    # sed on OS X/BSD wants -l for line-buffering
    fswatch "$1" | sed -l 's/^[a-f0-9]\{1,\} //'
fi
echo "fswatch: $$ exiting"
And a construct I'm trying to use from a script (and I am testing it on the command line on CentOS now):
filewatch . | while read line; do echo "file $line has changed\!\!"; done &
What I am hoping this does is let me process, one line at a time, the output of inotify, which of course emits one line for each file it detects a change on.
Now for my script to clean stuff up properly I need to be able to kill this whole backgrounded pipeline when the script exits.
So I run it, and then if I kill either the first part of the pipe or the second part, the other part does not terminate.
I think that if I kill the while read line part (which should be sh, or zsh in the case of running on the command line), then filewatch should be receiving a SIGPIPE. OK, I am not handling that, so I guess it can keep running.
If I kill filewatch, though, it looks like zsh continues with its while read line. Why?
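A likely reason (my reading, not stated in the thread): killing the filewatch script does not kill its children, and inotifywait (or fswatch) and sed still hold the write end of the pipe open, so the while loop's read never sees EOF. One way to clean everything up at once, sketched under the assumption of bash with job control enabled, is to give the backgrounded pipeline its own process group and signal the whole group on exit:
#!/bin/bash
# Sketch, my addition: with job control (set -m), a backgrounded pipeline
# becomes a job in its own process group, and the kill builtin accepts a
# jobspec, which signals every process in that group -- the filewatch
# script, inotifywait/fswatch, sed, and the while-read subshell.
set -m
filewatch . | while read -r line; do
    echo "file $line has changed"
done &
trap 'kill %% 2>/dev/null' EXIT   # on script exit, take down the whole background job
# ... rest of the script ...
Without set -m, running the pipeline under setsid and killing the resulting process group (kill -- -PGID) is a common alternative.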

Related

Getting PID of launched process

For some monitoring purposes, for a given list of files (whose paths change daily), I want to launch a tail -f in the background searching for specific strings.
How do I properly get the exact PID of the tail commands generated by the for loop, so I can kill them later?
Here's what I'm trying to do, which I think gives me the PID of the script and not the tail process.
for f in $(find file...)
do
    tail -f $f | while read line
    do
        case $line in
            *string_to_search*)
                echo " $line" | mutt -s "string detected in file : $1" mail@mail.com
                ;;
        esac
    done &
    echo $! >> pids.txt
done
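No answer is quoted above, so here is a sketch of one common workaround (my addition): as written, $! is the PID of the subshell running the while read loop, and tail is a sibling process created for the other half of the pipeline, so the PIDs collected in pids.txt never point at tail. Reversing the structure, so that tail is the backgrounded command and the reader lives in a process substitution, makes $! the tail PID:
for f in $(find file...)                  # "file..." left as in the question
do
    # tail is now the command being backgrounded, so $! below is tail's PID;
    # the reader runs in a process substitution and exits when tail dies.
    tail -f "$f" > >(
        while read -r line; do
            case $line in
                *string_to_search*) echo " $line" | mutt -s "string detected in file : $1" mail@mail.com ;;
            esac
        done
    ) &
    echo $! >> pids.txt
done
# Later, to stop the monitors:
xargs -r kill < pids.txt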

bash: loop through process output and terminate the process

I need some help with the following:
I use Linux to script commands sent to a device. I need to run a logcat command against the device, grep its output as it is being generated, and look for a particular string. Once this string is found, I want my script to move on to the following command.
In pseudocode:
for line in "adb shell logcat | grep TestProccess"
do
if "TestProccess test service stopped" in line:
print line
print "TestService finished \n"
break
else:
print line
done
adb shell logcat | grep TestProcess | while read line
do
    echo "$line"
    if [ "$line" = "TestProcess test service stopped" ]
    then
        echo "TestService finished"
        break
    fi
done
adb shell logcat | grep -Fqm 1 "TestProcess test service stopped" && echo "Test Service finished"
The grep flags:
-F - treat the string literally, not as a regular expression
-q - don't print anything to standard output
-m 1 - stop after the first match
The command after && only executes if grep finds a match. As long as you "know" grep will eventually match and you want to continue unconditionally once it returns, just leave off the && and everything after it.
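If the matching lines should also be printed while waiting, another option (my addition, not from the answers quoted here) is sed, which echoes each line and quits at the first match; the upstream adb and grep then exit on SIGPIPE the next time they write (GNU grep's --line-buffered avoids stalling on block buffering):
adb shell logcat | grep --line-buffered TestProcess | sed '/test service stopped/q'
echo "TestService finished"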
You could use an until loop.
adb shell logcat | grep TestProccess | until read line && [[ "$line" =~ "TestProccess test service stopped" ]]; do
    echo "$line"
done && echo -e "$line\nTestService finished"

Bash: How do I make sub-processes of a script be terminated, when the script is terminated?

The question applies to a script such as the following:
Script
#!/bin/sh
SRC="/tmp/my-server-logs"
echo "STARTING GREP JOBS..."
for f in `find ${SRC} -name '*log*2011*' | sort --reverse`
do
    (
        OUT=`nice grep -ci -E "${1}" "${f}"`
        if [ "${OUT}" != "0" ]
        then
            printf '%7s : %s\n' "${OUT}" "${f}"
        else
            printf '%7s %s\n' "(none)" "${f}"
        fi
    ) &
done
echo "WAITING..."
wait
echo "FINISHED!"
Current behavior
Pressing Ctrl+C in the console terminates the script but not the already running grep processes.
Write a trap for Ctrl+C, and in the trap kill all of the subprocesses. Put this before your wait command.
function handle_sigint()
{
    for proc in `jobs -p`
    do
        kill $proc
    done
}
trap handle_sigint SIGINT
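Placement in the script above, per the answer's note that the trap must be installed before wait (my arrangement, no new logic):
# ... the for loop that launches the ( grep ... ) & subshells ...
trap handle_sigint SIGINT   # handler defined as above
echo "WAITING..."
wait
echo "FINISHED!"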
A simple alternative is using a cat pipe. The following worked for me:
echo "-" > test.text;
for x in 1 2 3; do
( sleep $x; echo $x | tee --append test.text; ) &
done | cat
If I press Ctrl-C before the last number is printed to stdout, the remaining subshells are killed as well. It also works if the text-generating command is something that takes a long time, such as "find /"; i.e. it is not only the connection to stdout through cat that is killed but the child process itself.
For large scripts that make extensive use of subprocesses, the easiest way to ensure the intended Ctrl-C behaviour is to wrap the whole script in such a subshell, e.g.
#!/usr/bin/bash
(
...
) | cat
I am not sure though if this has the exactly same effect as Andrew's answer (i.e. I'm not sure what signal is sent to the subprocesses). Also I only tested this with cygwin, not with a native Linux shell.

Bash - Update terminal title by running a second command

On my terminal in Ubuntu, I often run programs which keep running for a long time. And since there are a lot of these programs, I keep forgetting which terminal is for which program, unless I tab through all of those. So I wanted to find a way to update my terminal title to the program name, whenever I run a command. I don't want to do it manually.
I use gnome-terminal, but the answer shouldn't really depend on that. Basically, if I'm able to run a second command, then I can simply use the gconftool command to update the title. So I was hoping to find a way to capture the command in bash and update the title after every command. How do I do that?
I have some answers for you :) You're right that it shouldn't matter that you're using gnome-terminal, but it does matter what command shell you're using. This is a lot easier in zsh, but in what follows I'm going to assume you're using bash, and that it's a fairly recent version (> 3.1).
First of all:
Which environment variable would contain the current 'command'?
There is an environment variable which has more-or-less what you want - $BASH_COMMAND. There's only one small hitch, which is that it will only show you the last command in a pipe. I'm not 100% sure what it will do with combinations of subshells, either :)
So I was hoping to find a way to capture the command in bash and update the title after every command.
I've been thinking about this, and now that I understand what you want to do, I realized the real problem is that you need to update the title before every command. This means that the $PROMPT_COMMAND and $PS1 environment variables are out as possible solutions, since they're only executed after the command returns.
In bash, the only way I can think of to achieve what you want is to (ab)use the DEBUG SIGNAL. So here's a solution -- stick this at the end of your .bashrc:
trap 'printf "\033]0;%s\007" "${BASH_COMMAND//[^[:print:]]/}"' DEBUG
To get around the problem with pipes, I've been messing around with this:
function settitle () {
    export PREV_COMMAND=${PREV_COMMAND}${@}
    printf "\033]0;%s\007" "${PREV_COMMAND//[^[:print:]]/}"
    export PREV_COMMAND=${PREV_COMMAND}' | '
}
export PROMPT_COMMAND=${PROMPT_COMMAND}';export PREV_COMMAND=""'
trap 'settitle "$BASH_COMMAND"' DEBUG
but I don't promise it's perfect!
Try this:
trap 'echo -ne "\033]2;$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//g")\007"' DEBUG
Thanks to the history 1 it works even with complicated expressions like:
true && (false); echo $? | cat
For which approaches relying on $BASH_COMMAND or $@ fail. For example, simon's displays:
true | echo $? | cat
Thanks to Gilles and simon for providing inspiration.
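For reference, an illustration of what the sed strips (my addition; the history number shown is arbitrary): history 1 prints the last history entry prefixed by its number, and the sed removes that prefix so only the command line goes into the title:
$ history 1
 1016  true && (false); echo $? | cat
$ history 1 | sed "s/^[ ]*[0-9]*[ ]*//g"
true && (false); echo $? | cat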
I see what stoutie is trying to do, but it's a lot more work than needed, and this way doesn't cause all sorts of the other potentially bad things that can occur as a result of redefining 'cd' and putting in all of that testing just to change directories. Bash has built-in support for most of this.
You can put this in your .bashrc anywhere after you set your current PS1 prompt (this way it just prepends to it):
# If this is an xterm set the titlebar to user@host:dir
case "$TERM" in
xterm*|rxvt*)
    PS1="\[\e]0;\u@\h: \w\a\]$PS1"
    ;;
*)
    ;;
esac
The OP asked for bash, but others might be interested to learn that (as mentioned above) this is indeed a lot easier using the zsh shell. Example:
# Set window title to command just before running it.
preexec() { printf "\x1b]0;%s\x07" "$1"; }
# Set window title to current working directory after returning from a command.
precmd() { printf "\x1b]0;%s\x07" "$PWD" }
In preexec, $1 contains the command as typed (requires shell history to be enabled, which seems to be a fair assumption), $2 the expanded command (shell aliases etc.) and $3 the "very expanded" command (shell function bodies). (more)
I'm doing something like this, to show my pwd in the title, which could be modified to do whatever you want to do with the title:
function title { echo -en "\033]2;$1\007"; }
function cd { dir=$1; if [ -z "$dir" ]; then dir=~; fi; builtin cd "$dir" && title `pwd`; }
I just threw this in my ~/.bash_aliases.
Update
I ran into strange bugs with my original answer. I ended up picking apart the default Ubuntu PS1 and breaking it into parts only to realize one of the parts was the title:
# simple prompt
COLOR_YELLOW_BOLD="\[\033[1;33m\]"
COLOR_DEFAULT="\[\033[0m\]"
TITLE="\[\e]0;\u#\h:\w\a\]"
PROMPT="\w\n$ "
HUH="${debian_chroot:+($debian_chroot)}"
PS1="${COLOR_YELLOW_BOLD}${TITLE}${HUH}${PROMPT}${COLOR_DEFAULT}"
Without breaking into variables, it would look like this:
PS1="\[\033[1;33m\]\[\e]0;\u#\h:\w\a\]${debian_chroot:+($debian_chroot)}\w\n$ \[\033[0m\]"
I have tested three methods; all work. Use whichever one you like.
export PROMPT_COMMAND='echo -ne "\033]2;$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//g")\007"'
trap 'echo -ne "\033]2;$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//g")\007"' DEBUG
trap 'echo -ne "\e]0;"; echo -n $BASH_COMMAND; echo -ne "\a"' DEBUG
Please note: if you use $BASH_COMMAND, it does not recognize bash aliases; PROMPT_COMMAND shows the command that has just finished, while the trap shows the command that is currently running.
Based on the need to auto-position PuTTY windows, I have modified my /etc/bash.bashrc file on a Debian/Ubuntu system. I have posted the full contents for completeness, but the relevant bit starts at the # Display command ... comment line.
# System-wide .bashrc file for interactive bash(1) shells.
# To enable the settings / commands in this file for login shells as well,
# this file has to be sourced in /etc/profile.
# If not running interactively, don't do anything
[ -z "$PS1" ] && return
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, overwrite the one in /etc/profile)
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
# Display command run in title which allows us to distinguish Kitty/Putty
# windows and re-position easily using AutoSizer window utility. Based on a
# post here: http://mg.pov.lt/blog/bash-prompt.html
case "$TERM" in
xterm*|rxvt*)
# Show the currently running command in the terminal title:
# http://www.davidpashley.com/articles/xterm-titles-with-bash.html
show_command_in_title_bar()
{
case "$BASH_COMMAND" in
*\033]0*)
# The command is trying to set the title bar as well;
# this is most likely the execution of $PROMPT_COMMAND.
# In any case nested escapes confuse the terminal, so don't
# output them.
;;
*)
echo -ne "\033]0;${USER}#${HOSTNAME}: ${BASH_COMMAND}\007"
;;
esac
}
trap show_command_in_title_bar DEBUG
;;
*)
;;
esac
# Commented out, don't overwrite xterm -T "title" -n "icontitle" by default.
# If this is an xterm set the title to user@host:dir
#case "$TERM" in
#xterm*|rxvt*)
#    PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD}\007"'
#    ;;
#*)
#    ;;
#esac
# enable bash completion in interactive shells
if ! shopt -oq posix; then
    if [ -f /usr/share/bash-completion/bash_completion ]; then
        . /usr/share/bash-completion/bash_completion
    elif [ -f /etc/bash_completion ]; then
        . /etc/bash_completion
    fi
fi
# if the command-not-found package is installed, use it
if [ -x /usr/lib/command-not-found -o -x /usr/share/command-not-found/command-not-found ]; then
    function command_not_found_handle {
        # check because c-n-f could've been removed in the meantime
        if [ -x /usr/lib/command-not-found ]; then
            /usr/bin/python /usr/lib/command-not-found -- "$1"
            return $?
        elif [ -x /usr/share/command-not-found/command-not-found ]; then
            /usr/bin/python /usr/share/command-not-found/command-not-found -- "$1"
            return $?
        else
            printf "%s: command not found\n" "$1" >&2
            return 127
        fi
    }
fi
You can set up bash so that it sends a certain escape sequence to the terminal every time it starts an external program. If you use the escape sequence that terminals use to update their titles, your problem should be solved.
I have used that before, so I know it is possible, but I cannot remember it off the top of my head and do not have time to research the details right now.
Some of the old methods were removed from gnome-terminal 3.14 due to these two bugs (724110 and 740188).
In Ubuntu 20.04
PS1=$PS1"\[\e]0;New_Terminal_Name\a\]"
\[ begins a sequence of non-printing characters.
\e]0; is the escape sequence for setting the terminal title. The terminal reads this sequence and sets the title to the characters that follow; the 0 selects the title property.
New_Terminal_Name is the title we gave.
\a is the ASCII bell character; here it also marks the end of the title.
\] ends a sequence of non-printing characters.
We can create a function for future use
function set_title(){
    if [ -z "$PS1_BACK" ]; then # set a backup if it is empty
        PS1_BACK="$PS1"
    fi
    TITLE="\[\e]0;$*\a\]"
    PS1="${PS1_BACK}${TITLE}"
}
Open the ~/.bashrc file in your home directory with a text editor and append the above function at the end of it. Save and close.
To use it immediately source it to the current terminal.
source ~/.bashrc
We can use it then like this
set_title <New terminal tab title>
My terminal window titler script
This dynamic backgrounded script shows all running commands with their PID numbers and elapsed time in seconds. For example, if I run du -h | less, it builds a title looking like:
204640 6 du -h | 204641 6 less
When no command (other than itself) is running, it does not change the terminal title, so standard behaviour works normally.
The first run starts the background task. A second run in the same terminal asks whether to kill the previously backgrounded task.
Save this into a file, set the execute flag, then run it without arguments:
cat <<"EOF" >titleWin.sh
#!/bin/bash
## Ask for kill process if already started
mapfile -t pids < <(ps -C ${0##*/} ho pid)
for pid in ${pids[#]} ;do
if [[ $pid != $$ ]] && [ -d /proc/$pid ]; then
echo -n "STARTED: [$pid]: ${0##*/}. Kill them (Y/n)? "
read -rsn 1 act
case $act in
n|N ) echo No;;
* ) echo Yes;kill $pid ;;
esac
exit
fi
done
## Title win for xterm or screen (or tmux).
case $TERM in
xterm*|rxvt* ) titleFmt='\e];%s\a';;
screen* ) titleFmt='\ek%s\e\\';;
* ) echo "Unable to title window.";exit 1;;
esac
tty=$(tty)
## Date to epochseconds converter
exec {dateout}<> <(:)
exec {datein}> >(exec stdbuf -o0 date -f - +%s >&$dateout)
DPID=$!
trap "echo TRAP;kill $DPID" 1 2 3 6 9 15
# Main loop
while :;do
string=""
while read -r pid wday mon day time year cmd; do
if [[ $pid != $$ ]] && [[ $pid != $PPID ]] && [[ $pid != $BASHPID ]] &&
[[ $pid != $DPID ]] && [ "${cmd#*pid,lstart,cmd}" ] &&
[ -d /proc/$pid ] ;then
echo >&${datein} $wday $mon $day $time $year
read -ru $dateout date
string+="$pid $((EPOCHSECONDS-date)) $cmd | "
fi
done < <(exec ps --tty ${tty#*/dev/} ho pid,lstart,cmd)
[[ "$string" ]] && printf "$titleFmt" "${string% | }"
sleep .333
done &
EOF
chmod +x titleWin.sh
./titleWin.sh

Bash script does not continue to read the next line of file

I have a shell script that saves the output of each command it executes to a CSV file. It reads the commands it has to execute from a file which is in this format:
ffmpeg -i /home/test/videos/avi/418kb.avi /home/test/videos/done/418kb.flv
ffmpeg -i /home/test/videos/avi/1253kb.avi /home/test/videos/done/1253kb.flv
ffmpeg -i /home/test/videos/avi/2093kb.avi /home/test/videos/done/2093kb.flv
You can see each line is an ffmpeg command. However, the script just executes the first line. Just a minute ago it was doing nearly all of the commands. It was missing half for some reason. I edited the text file that contained the commands and now it will only do the first line. Here is my bash script:
#!/bin/bash
# Shell script utility to read a file line by line.
# Once a line is read, it is run through the processLine() function.

# Function processLine
processLine(){
    line="$@"
    START=$(date +%s.%N)
    eval $line > /dev/null 2>&1
    END=$(date +%s.%N)
    DIFF=$(echo "$END - $START" | bc)
    echo "$line, $START, $END, $DIFF" >> file.csv 2>&1
    echo "It took $DIFF seconds"
    echo $line
}

# Store file name
FILE=""

# Get file name as command line argument,
# else read it from the standard input device
if [ "$1" == "" ]; then
    FILE="/dev/stdin"
else
    FILE="$1"
    # make sure file exists and is readable
    if [ ! -f $FILE ]; then
        echo "$FILE : does not exists"
        exit 1
    elif [ ! -r $FILE ]; then
        echo "$FILE: can not read"
        exit 2
    fi
fi

# read $FILE using the file descriptors
# Set loop separator to end of line
BAKIFS=$IFS
IFS=$(echo -en "\n\b")
exec 3<&0
exec 0<$FILE
while read line
do
    # use $line variable to process line in processLine() function
    processLine $line
done
exec 0<&3
# restore $IFS which was used to determine what the field separators are
BAKIFS=$ORIGIFS
exit 0
Thank you for any help.
UPDATE 2
It's the ffmpeg commands rather than the shell script that aren't working. But I should have been using just "\b", as Paul pointed out. I am also making use of Johannes's shorter script.
I think that should do the same and seems to be correct:
#!/bin/bash
CSVFILE=/tmp/file.csv
cat "$@" | while read line; do
    echo "Executing '$line'"
    START=$(date +%s)
    eval $line &> /dev/null
    END=$(date +%s)
    let DIFF=$END-$START
    echo "$line, $START, $END, $DIFF" >> "$CSVFILE"
    echo "It took ${DIFF}s"
done
no?
ffmpeg reads STDIN and exhausts it. The solution is to call ffmpeg with:
ffmpeg </dev/null ...
See the detailed explanation here: http://mywiki.wooledge.org/BashFAQ/089
Update:
Since ffmpeg version 1.0, there is also the -nostdin option, so this can be used instead:
ffmpeg -nostdin ...
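Applied inside a reading loop, the same fix looks like this (a sketch; commands.txt stands in for whatever file holds the ffmpeg lines):
while read -r line; do
    # /dev/null on stdin stops ffmpeg from swallowing the rest of the command file
    eval "$line" < /dev/null > /dev/null 2>&1
done < commands.txt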
I just had the same problem.
I believe ffmpeg is responsible for this behaviour.
My solution for this problem:
1) Call ffmpeg with an "&" at the end of your ffmpeg command line.
2) Since the script will now not wait for the ffmpeg process to complete, we have to prevent it from starting several ffmpeg processes at once. We achieve this by delaying the loop pass while there is at least one running ffmpeg process.
#!/bin/bash
cat FileList.txt |
while read VideoFile; do
    <place your ffmpeg command line here> &
    FFMPEGStillRunning="true"
    while [ "$FFMPEGStillRunning" = "true" ]; do
        Process=$(ps -C ffmpeg | grep -o -e "ffmpeg")
        if [ -n "$Process" ]; then
            FFMPEGStillRunning="true"
        else
            FFMPEGStillRunning="false"
        fi
        sleep 2s
    done
done
I would add echos before and after the eval: before, to see what it's about to eval (in case it's treating the whole file as one big long line), and after, to see whether it's continuing (in case one of the ffmpeg commands is taking forever).
Unless you are planning to read something from standard input after the loop, you don't need to preserve and restore the original standard input (though it is good to see you know how).
Similarly, I don't see a reason for dinking with IFS at all. There is certainly no need to restore the value of IFS before exit - this is a real shell you are using, not a DOS BAT file.
When you do:
read var1 var2 var3
the shell assigns the first field to $var1, the second to $var2, and the rest of the line to $var3. In the case where there's just one variable - your script, for example - the whole line goes into the variable, just as you want it to.
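A quick illustration (my addition):
echo "one two three four" | { read var1 var2 var3; echo "$var3"; }   # prints: three four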
Inside the process line function, you probably don't want to throw away error output from the executed command. You probably do want to think about checking the exit status of the command. The echo with error redirection is ... unusual, and overkill. If you're sufficiently sure that the commands can't fail, then go ahead with ignoring the error. Is the command 'chatty'; if so, throw away the chat by all means. If not, maybe you don't need to throw away standard output, either.
The script as a whole should probably diagnose when it is given multiple files to process since it ignores the extraneous ones.
You could simplify your file handling by using just:
cat "$#" |
while read line
do
processline "$line"
done
The cat command automatically reports errors (and continues after them) and processes all the input files, or reads standard input if there are no arguments left. The use of double quotes around the variable means that it is passed as a single unit (and therefore unparsed into separate words).
The use of date and bc is interesting - I'd not seen that before.
All in all, I'd be looking at something like:
#!/bin/bash
# Time execution of commands read from a file, line by line.
# Log commands and times to CSV logfile "file.csv"
processLine(){
    START=$(date +%s.%N)
    eval "$@" > /dev/null
    STATUS=$?
    END=$(date +%s.%N)
    DIFF=$(echo "$END - $START" | bc)
    echo "$line, $START, $END, $DIFF, $STATUS" >> file.csv
    echo "${DIFF}s: $STATUS: $line"
}
cat "$@" |
while read line
do
    processLine "$line"
done
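Usage would then be, for example (the script name is hypothetical):
./timecmds.sh commands.txt
# or, reading the command list from standard input:
./timecmds.sh < commands.txt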
