How can I access the STDIN of a subprocess? - linux

I want to run the command:
nc localhost 9998
Then I want my script to monitor a file and echo the contents of the file to this subprocess whenever the file changes.
I can't work out the redirection scheme. How can I get access to the STDIN of the subprocess?

How about
tail -f "$file" | nc localhost 9998
tail -f keeps nc's stdin open and streams every line appended to $file as it arrives.
Edit:
Since you already have a buffer, then you can try something like this:
while true; do
    # Your stuff here.
    buf=$(yourfunctionhere)
    buffer="$buffer$buf"
    if [ -n "$buffer" ]; then
        echo "$buffer" | nc localhost 9998
        # Empty buffer on success.
        if [ $? -eq 0 ]; then
            buffer=""
        fi
    fi
done

mkfifo X
some_program <X >output &
create_input >X
some_program will block reading X until create_input writes to it.
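To make that concrete with the asker's nc command, here is a small sketch. One caveat worth knowing: nc will see EOF as soon as the last writer closes the FIFO, so a long-lived writer should hold the pipe open on a spare file descriptor:
mkfifo /tmp/ncpipe
nc localhost 9998 < /tmp/ncpipe &   # blocks until a writer opens the FIFO

exec 3> /tmp/ncpipe                 # hold the write end open so nc never sees EOF
echo "first update"  >&3            # each write goes straight to nc's stdin
echo "second update" >&3
exec 3>&-                           # closing fd 3 delivers EOF; nc can then exit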

Two solutions that I found acceptable:
1) Use a coprocess; this gives access to the stdin and stdout of the process created by the coproc command via the COPROC array (${COPROC[0]} is the coprocess's stdout, ${COPROC[1]} its stdin). See the sketch after this answer.
2) What I ultimately did is separate my application into two code blocks, as shown below. The first block writes to stdout, which is then piped to the stdin of the second block. This gives me a clean way to buffer data when there are issues with netcat in the second code block:
{ while true; do
      # write to STDOUT
  done; } |
{ while true; do
      nc localhost 9998
  done; }
(in actuality the script is far more complex as the second command provides to-disk buffering when netcat is unable to connect, but the use of the pipe provides buffering so that data isn't lost when a network issue interrupts netcat)
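For completeness, a minimal sketch of option 1, assuming bash 4+ (where coproc exposes the child's stdout on ${COPROC[0]} and its stdin on ${COPROC[1]}):
#!/bin/bash
coproc nc localhost 9998             # start nc as a coprocess

echo "some data" >&"${COPROC[1]}"    # write to nc's stdin
# read -r reply <&"${COPROC[0]}"     # read from nc's stdout, if it ever replies

exec {COPROC[1]}>&-                  # close nc's stdin (older bash: eval "exec ${COPROC[1]}>&-")
wait "$COPROC_PID"                   # nc exits once it sees EOF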

I found a solution using diff and a simple bash script.
The following script executes cat $file > $namedpipe whenever the file changes. This is the script I made, check-file.sh:
#!/bin/bash
file=$1
tmp=$(mktemp)
cp "$file" "$tmp"
namedpipe=$(mktemp)
rm -f "$namedpipe"
mkfifo "$namedpipe"

function cleanup() {
    echo "end of program"
    rm -f "$tmp"
    rm -f "$namedpipe"
    exit 1
}
trap cleanup SIGINT

tail -f "$namedpipe" 2> /dev/null | netcat localhost 9998 &

while true; do
    diff=$(diff "$file" "$tmp")
    if [ ! -z "$diff" ]; then
        cat "$file" > "$namedpipe"
        cp "$file" "$tmp"
    fi
    sleep 1
done
This script takes the name of a file as input. For example, try these commands in your environment (with netcat -l 9998 running):
touch /tmp/test
bash check-file.sh /tmp/test &
echo "change 1" > /tmp/test
sleep 1
echo "change 2" > /tmp/test
sleep 1
echo "change 3" > /tmp/test
Note: The temp file and the named pipe get cleaned up by the trap, so you can interrupt this script gracefully.
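As an aside, if the inotify-tools package is available, the polling diff/sleep loop could be replaced with an event-driven one. A sketch, reusing the $file and $namedpipe variables from the script above:
# Block until the file is written and closed, then forward its contents.
while inotifywait -qq -e close_write "$file"; do
    cat "$file" > "$namedpipe"
done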


Use of echo >> produces inconsistent results

I've been trying to understand a problem that's cropped up with some of the scripts we use at work.
To generate many of our script logs, we utilize the exec command and file redirects to print all output from the script to both the terminal and a log file. Occasionally, for information that doesn't need to be displayed to the user, we do a straight redirect to the log file.
The issue we're seeing occurs on the last line of output to the file when we're printing the number of errors that occurred during that execution: The text doesn't get printed to the file.
In an attempt to diagnose the problem, I wrote a simplified version of our production script (script1.bash) and a test script (script2.bash) to try to tease out the problem.
script1.bash
#!/bin/bash

log_name="${USER}_`date +"%Y%m%d-%H%M%S"`_${HOST}_${1}.log"
log="/tmp/${log_name}"
log_tmp="/tmp/temp_logs"

err_count=0

finish()
{
    local ecode=0
    if [ $# -eq 1 ]; then
        ecode=${1}
    fi

    # This is the problem line
    echo "Error Count: ${err_count}" >> "${log}"

    mvlog
    local success=$?

    exec 1>&3 2>&4

    if [ ${success} -ne 0 ]; then
        echo ""
        echo "WARNING: Failed to save log file to ${log_tmp}"
        echo ""
        ecode=$((ecode+1))
    fi

    exit ${ecode}
}

mvlog()
{
    local ecode=1

    if [ ! -d "${log_tmp}" ]; then
        mkdir -p "${log_tmp}"
        chmod 775 "${log_tmp}"
    fi

    if [ -d "${log_tmp}" ]; then
        rsync -pt --bwlimit=4096 "${log}" "${log_tmp}/${log_name}" 2> /dev/null
        [ $? -eq 0 ] && ecode=0

        if [ ${ecode} -eq 0 ]; then
            rm -f "${log}"
        fi
    fi
}

exec 3>&1 4>&2 > >(tee "${log}") 2>&1

ecode=0

echo
echo "Some text"
echo

finish ${ecode}
script2.bash
#!/bin/bash
runs=10000
logdir="/tmp/temp_logs"
if [ -d "${logdir}" ]; then
    rm -rf "${logdir}"
fi
for i in $(seq 1 ${runs}); do
    echo "Conducting run #${i}/${runs}..."
    ${HOME}/bin/script1.bash ${i}
done
echo "Scanning logs from runs..."
total_count=`find "${logdir}" -type f -name "*.log*" | wc -l`
missing_count=`grep -L 'Error Count:' ${logdir}/*.log* | grep -c /`
echo "Number of runs performed: ${runs}"
echo "Number of log files generated: ${total_count}"
echo "Number of log files missing text: ${missing_count}"
My first test indicated roughly 1% of the time the line isn't written to the log file. I then proceeded to try several different methods of handling this line of output.
#1: Echo and Wait
echo "Error Count: ${err_count}" >> "${log}"
wait
#2: Alternate print method
printf "Error Count: %d\n" ${err_count} >> "${log}"
#3: No Explicit File Redirection
echo "Error Count: ${err_count}"
#4: Echo and Sleep
echo "Error Count: ${err_count}" >> "${log}"
sleep 0.2
Of these, #1 and #2 each had a 1% fail rate while #4 had a staggering 99% fail rate. #3 was the only methodology that had a 0% fail rate.
At this point, I'm at a loss for why this behavior is occurring, so I'm asking the gurus here for any insight.
(Note that the simple solution is to implement #3, but I want to know why this is happening.)
Without testing, this looks like a race condition between your script and tee. It's generally better to avoid multiple programs writing to the same file at the same time.
If you do insist on having multiple writers, make sure they are all in append mode, in this case by using tee -a. Appends to the local filesystem are atomic, so all writes should make it (this is not necessarily true for networked file systems).
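For script1.bash that would be a one-line change (a sketch; the rest of the script stays as-is):
# tee now opens the log with O_APPEND, so its writes and the script's direct
# `echo ... >> "${log}"` both land atomically at the end of the file.
exec 3>&1 4>&2 > >(tee -a "${log}") 2>&1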

Unable to access array values outside of function in shell script [duplicate]

Please explain to me why the very last echo statement is blank? I expect that XCODE is incremented in the while loop to a value of 1:
#!/bin/bash
OUTPUT="name1 ip ip status" # normally output of another command with multi line output
if [ -z "$OUTPUT" ]
then
    echo "Status WARN: No messages from SMcli"
    exit $STATE_WARNING
else
    echo "$OUTPUT"|while read NAME IP1 IP2 STATUS
    do
        if [ "$STATUS" != "Optimal" ]
        then
            echo "CRIT: $NAME - $STATUS"
            echo $((++XCODE))
        else
            echo "OK: $NAME - $STATUS"
        fi
    done
fi
echo $XCODE
I've tried using the following statement instead of the ++XCODE method
XCODE=`expr $XCODE + 1`
and it too won't print outside of the while statement. I think I'm missing something about variable scope here, but the ol' man page isn't showing it to me.
Because you're piping into the while loop, a sub-shell is created to run the while loop.
Now this child process has its own copy of the environment and can't pass any
variables back to its parent (as in any unix process).
Therefore you'll need to restructure so that you're not piping into the loop.
Alternatively you could run in a function, for example, and echo the value you
want returned from the sub-process.
http://tldp.org/LDP/abs/html/subshells.html#SUBSHELL
The problem is that processes put together with a pipe are executed in subshells (and therefore have their own environment). Whatever happens within the while does not affect anything outside of the pipe.
Your specific example can be solved by rewriting the pipe to
while ... do ... done <<< "$OUTPUT"
or perhaps
while ... do ... done < <(echo "$OUTPUT")
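Applied to the script in the question, the rewrite might look like this (a sketch; XCODE now survives because the loop runs in the current shell, not a subshell):
XCODE=0
while read NAME IP1 IP2 STATUS
do
    if [ "$STATUS" != "Optimal" ]
    then
        echo "CRIT: $NAME - $STATUS"
        ((++XCODE))
    else
        echo "OK: $NAME - $STATUS"
    fi
done <<< "$OUTPUT"
echo $XCODE   # now prints 1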
This should work as well (because the echo and the while loop are in the same subshell):
#!/bin/bash
cat /tmp/randomFile | ( while read line
do
    LINE="$LINE $line"
done && echo $LINE )
One more option:
#!/bin/bash
cat /some/file | while read line
do
    var="abc"
    echo $var | xsel -i -p # stash the value in the X primary selection
done
var=$(xsel -o -p) # read it back out of the primary selection
echo $var
EDIT:
Here, xsel is a requirement (install it).
Alternatively, you can use xclip:
xclip -i -selection clipboard
instead of
xsel -i -p
I got around this when I was making my own little du:
ls -l | sed '/total/d ; s/  */\t/g' | cut -f 5 |
    ( SUM=0; while read SIZE; do SUM=$(($SUM+$SIZE)); done; echo "$(($SUM/1024/1024/1024))GB" )
The point is that I make a subshell with ( ) containing my SUM variable and the while, but I pipe into the whole ( ) instead of into the while itself, which avoids the gotcha.
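Applied to the question's variable, that same trick looks like this (a sketch; the count is usable inside the parentheses, but still disappears once the subshell exits):
echo "$OUTPUT" | ( XCODE=0
while read NAME IP1 IP2 STATUS; do
    [ "$STATUS" != "Optimal" ] && ((++XCODE))
done
echo "XCODE is $XCODE" )   # visible here, inside the ( )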
#!/bin/bash
OUTPUT="name1 ip ip status"
+ export XCODE=0;
if [ -z "$OUTPUT" ]
...
        echo "CRIT: $NAME - $STATUS"
-       echo $((++XCODE))
+       export XCODE=$(( $XCODE + 1 ))
    else
...
echo $XCODE
see if those changes help
Another option is to write the results to a file from the subshell and then read it back in the parent shell, something like:
#!/bin/bash
EXPORTFILE=/tmp/exportfile${RANDOM}
cat /tmp/randomFile | while read line
do
    LINE="$LINE $line"
    echo $LINE > $EXPORTFILE
done
LINE=$(cat $EXPORTFILE)

Background rsync and pid from a shell script

I have a shell script that does a backup. I run this script from cron, but the problem is that the backup is heavy, so it is possible for a second rsync to start before the first one finishes.
I thought to launch rsync from a script, then get its PID and write it to a file, so the script can check whether the process exists (via whether this file exists).
If I put rsync in the background I get the PID, but I don't know how to tell when rsync finishes; if I run rsync in the foreground, I can't get the PID before the process finishes, so I can't write the PID file.
I don't know the best way to keep control of rsync and know when it finishes.
My script
#!/bin/bash
pidfile="/home/${USER}/.rsync_repository"
if [ -f $pidfile ];
then
    echo "PID file exists " $(date +"%Y-%m-%d %H:%M:%S")
else
    rsync -zrt --delete-before /repository/ /mnt/backup/repositorio/ < /dev/null &
    echo $$ > $pidfile
    # If I uncomment this 'rm' and rsync is running in background, the file is deleted so I can't "control" when rsync finishes
    # rm $pidfile
fi
Can anybody help me?!
Thanks in advance !! :)
# check to make sure the script isn't still running
# if it's still running, then exit this script
sScriptName="$(basename $0)"
if [ $(pidof -x ${sScriptName} | wc -w) -gt 2 ]; then
    exit
fi
pidof finds the PIDs of a named process
-x tells it to look for scripts too
${sScriptName} is just the name of the script... you can hardcode this
wc -w returns the number of words found (here, the number of PIDs)
-gt 2 means more than one instance is running (the script itself, plus one for the subshell running the pidof check)
if more than one instance is running, then this exits the script
Let me know if this works for you.
Test both for presence of pid file and status of the running process like this:
#!/bin/bash
pidfile="/home/${USER}/.rsync_repository"
is_running=0
if [ -f $pidfile ];
then
    echo "PID file exists " $(date +"%Y-%m-%d %H:%M:%S")
    previous_pid=`cat $pidfile`
    # grep -v grep keeps the grep command itself out of the count
    is_running=`ps -ef | grep $previous_pid | grep -v grep | wc -l`
fi
if [ $is_running -gt 0 ];
then
    echo "Previous process didn't quit yet"
else
    rsync -zrt --delete-before /repository/ /mnt/backup/repositorio/ < /dev/null &
    echo $! > $pidfile   # $! is the backgrounded rsync's PID ($$ would be the script's own)
fi
Hope this helps!!!
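For the record, since the script itself starts rsync, it can also just record $! and wait on it. A minimal sketch (paths as in the question; kill -0 merely tests whether the PID is still alive):
#!/bin/bash
pidfile="/home/${USER}/.rsync_repository"

if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2> /dev/null; then
    echo "Previous rsync is still running, exiting"
    exit 0
fi

rsync -zrt --delete-before /repository/ /mnt/backup/repositorio/ < /dev/null &
echo $! > "$pidfile"   # $! is the background rsync's PID ($$ would be the script's)

wait $!                # blocks until rsync exits; $? becomes rsync's exit status
rm -f "$pidfile"       # rsync has definitely finished, safe to clean up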

In Bash, how to not create the redirect output file once the command fails

Usually we redirect a command's output to a file, like this:
cat a.txt >> output.txt
As I tried it, if cat fails, output.txt is still created, which isn't what I expected. I know I could test like this:
if [ "$?" -ne "0" ]; then
    rm output.txt
fi
But this causes extra trouble when an output.txt already exists prior to my cat execution. So I would also need to record the state of output.txt before the cat: if output.txt already existed, I must not rm it by mistake... and even then there's a race condition: what if some other process creates output.txt right before my cat runs?
So is there any simple way to ensure that, if the command fails, the redirected output.txt is removed, or better, never created in the first place?
Fixed output file names are bad news; don't use them.
You should probably redesign the processing so that you have a date-stamped file name. Failing that, you should use the mktemp command to create a temporary file, have the command you want executed write to that, and when the command is successful, you can move the temporary to the 'final' output — and you can automatically clean up the temporary on failure.
outfile="./output-$(date +%Y-%m-%d.%H:%M:%S).txt"
tmpfile="$(mktemp ./gadget-maker.XXXXXXXX)"
trap "rm -f '$tmpfile'; exit 1" 0 1 2 3 13 15
if cat a.txt > "$tmpfile"
then mv "$tmpfile" "$outfile"
else rm "$tmpfile"
fi
trap 0
You can simplify the outfile to output.txt if you insist (but it isn't safe). You can use any prefix you like with the mktemp command. Note that by creating the temporary file in the current directory, where the final output file will be created too, you avoid cross-device file copying at the mv phase of operations — it is a link() and an unlink() system call (or maybe even a rename() system call if such a thing exists on your machine; it does on Mac OS X) only.
You can't tell that the command has failed until it terminates, and by then it might have produced some output.
Probably a more useful condition is to avoid creating the output file until the command actually produces some output, and not worry about its status code.
This comes close:
command | { IFS= read -rn1 -d '' a &&
    { printf %s "$a" >> output.txt
      cat >> output.txt
    }
}
However, if the first character output by command is a NUL byte, the NUL won't be written to the output file. Since the extension of the output file is .txt, that's unlikely in this particular case, but it could be handled by adding the command
[[ -z $a ]] && printf '\0' >> output.txt
after the printf and before the cat.
I think this will work, check this out.
[ -e output.txt ] && (mv output.txt output.txt_bkp)
cat a.txt > /dev/null 2>&1; [ $? -eq 0 ] && (cat a.txt > output.txt)
Another way, as suggested by Jonathan:
[ -e output.txt ] && (mv output.txt output.txt_bkp)
if cat a.txt > /dev/null 2>&1
then
    cat a.txt > output.txt
fi

BASH: redirect for log dillema / duplicate redirection for each loop iteration

I've got a redirect dilemma that I can't get past in a bash backup script I'm developing in CentOS 6.4. I want to redirect all output to two separate files: one tmp and one permanent. The script loops through an external source list and I'd like for the tmp log files to be specific to the source, so that I can send an email if that specific source had errors containing that log (and conversely remove the tmp if the backup completes without error).
I'm using exec to tee my output:
exec > >(tee -a ${templog} /var/log/rob/rob.log) 2>&1
This works if I place it at the top of the script, but there the variable isn't defined yet, so I can't do source-specific logs.
If I place it within the while loop, it picks up the variable, but it then writes a number of copies of each line determined by the total iterations of the loop; for the example below, I have four sources it iterates through, so I get the output for each source in quadruplicate:
-S-07/11/14 09:15:35 ROB-Source Process for cc2-gamma has started-S-
-S-07/11/14 09:15:35 ROB-Source Process for cc2-gamma has started-S-
-S-07/11/14 09:15:35 ROB-Source Process for cc2-gamma has started-S-
-S-07/11/14 09:15:35 ROB-Source Process for cc2-gamma has started-S-
Share cc2-gamma is not Mounted. Try 1 of 5 to mount...
Share cc2-gamma is not Mounted. Try 1 of 5 to mount...
Share cc2-gamma is not Mounted. Try 1 of 5 to mount...
Share cc2-gamma is not Mounted. Try 1 of 5 to mount...
Is there a different way to tee the output within the loop to prevent this (without touching each line of course)? Or is there something rotten in my loops that I'm not seeing? Here's the whole script. Please excuse the mess and style.. I'm clearly not finished. I didn't include the config.conf and backup source file as they don't affect the output. Let me know if needed. Thanks.
#!/bin/bash
#V.2014.0723 - Radiation Oncology Backup script

#declarations
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/rob

source /rob/conf/config.conf

while read smbdir 'smbpath' exclfile drive foldername; do

    #loop declarations
    mountedfile=/rob/${smbdir}.MOUNTED
    runningfile=/rob/${smbdir}.RUNNING
    lastrunfile=/rob/${smbdir}_${foldername}.LASTRUN
    templog=/rob/${smbdir}_${foldername}.TMPLOG
    errorfile=/rob/${smbdir}_${foldername}.HAD_ERRORS
    backupfile=/rob/${baname:0:3}_rtbackup.sql.bz2 # for the -l section below -- sql backup of backup.sql

    #exec > >(tee -a ${templog} /var/log/rob/rob.log) 2>&1

    ### SOURCE BACKUP ##############################################################################################
    if [ "$1" == "-s" ]
    then
        exec > >(tee -a /var/log/rob/rob.log ${templog}) 2>&1
        #Write Source STDOUT and STDERR to both permanent and temporary log file. Must be in loop to use variables.
        #exec > >(tee -a ${templog} /var/log/rob/rob.log) 2>&1
        #exec > >(tee -a /var/log/rob.log ${templog}) 2>

        if [ "${sources_active}" == "1" ]
        then
            echo "-S-$(date "+%m/%d/%y %T") ROB-Source Process for $smbdir has started-S-"

            # unmount all cifs shares, due to duplicate mounts; write file to prevent concurrency
            umount -a -t cifs > /dev/null

            # The following will test to see if the source is mounted, and if not, mount it.
            for i in {1..5}
            do
                if mountpoint -q /mnt/${smbdir}/${drive}/${foldername}
                then
                    echo "Share ${smbdir} is Mounted."
                    touch $mountedfile
                    break
                else
                    sleep 2
                    echo "Share ${smbdir} is not Mounted. Try $i of 5 to mount..."
                    mkdir -p /mnt/${smbdir}/${drive}/${foldername} > /dev/null
                    mount -t cifs ${smbpath} -o ro,username=<USER>,password=<PW>,workgroup=<DOMAIN> /mnt/${smbdir}/${drive}/${foldername}
                fi
            done

            # Test to see if the above was successful, and if rob is not already running, run the backup.
            if [[ -f ${mountedfile} && ! -f ${runningfile} ]]
            then
                src="/mnt/${smbdir}/$drive"
                dst="/backup/rob/"
                touch ${runningfile}
                /root/bin/rtbackup -m /mnt -p ${src}/${foldername} -b ${dst} -x #${exclfile}
                if [ "$?" -ne "0" ]; then
                    #Errors Running RTBackup
                    rm -f ${runningfile} > /dev/null 2>&1
                    rm -f ${mountedfile} > /dev/null 2>&1
                    echo "$(date "+%m/%d/%y %T") Source Process for ${smbdir} had errors running:-SSS"
                    echo "$errors" >&2
                    touch ${errorfile}
                    exit 1
                else
                    echo "What the hell is this doing?"
                fi
                #NO Errors Running RTBACKUP
                rm -f ${templog}
                rm -f ${runningfile} > /dev/null 2>&1
                rm -f ${mountedfile} > /dev/null 2>&1
                echo "$(date "+%m/%d/%y %T") Source Process for ${smbdir} did not have any errors"
            else
                #backup will *NOT* run, cleaning up and logging
                rm -f ${mountedfile} > /dev/null 2>&1
                echo "$(date "+%m/%d/%y %T") ${smbdir} could not be mounted, or is already in progress. Backup could not complete."
                touch ${errorfile}
                tail /var/log/rob/robso.log | mail -s "ROBSO Failed to run for ${smbdir} on ${baname}" ${email}
            fi

            echo "-F-$(date "+%m/%d/%y %T") ROB-Source Process for ${smbdir} has finished-F-"
            #break
        elif [[ "${sources_active}" == "0" ]]
        then
            echo "***$(date "+%m/%d/%y %T") ROB-Source Process for ${smbdir} did not run because the job is not set as active***"
            #break
        fi
    fi

done < /rob/conf/${baname}.conf

if [ $? -eq 10 ]; then exit 0; fi
You can use curly braces to redirect a set of commands; as it says in the bash manual about command grouping, "When commands are grouped, redirections may be applied to the entire command list". It behaves more-or-less like an anonymous function.
{
    command1
    command2
} > >(tee -a ${templog} /var/log/rob/rob.log) 2>&1
You can do the same with a named function, too, if you're so inclined. Redirections attached to a function definition are performed each time the function is executed, and the expansions in them happen at call time in the function's own context, so $1 below refers to the function's first argument.
# The redirection is re-expanded on every call, so each call can log to a different file.
your_log_command() {
    command1
    command2
} > >(tee -a $1 /var/log/rob/rob.log) 2>&1

your_log_command $templog
your_log_command $something_else
