In Bash, how to avoid creating the redirected output file when the command fails - linux

Usually we may redirect a command output to a file, as following:
cat a.txt >> output.txt
As I found, if cat fails, output.txt is still created, which isn't what I expected. I know I could test like this:
if [ "$?" -ne "0"]; then
rm output.txt
fi
But this causes problems if output.txt already existed before my cat ran.
So I would also need to record whether output.txt existed before running cat, so that I don't rm by mistake an output.txt that was already there... and even then there is a race condition: what if some other process creates output.txt just before my cat runs?
So is there any simple way to ensure that, if the command fails, output.txt is removed, or better yet never created?

Fixed output file names are bad news; don't use them.
You should probably redesign the processing so that you use a date-stamped file name. Failing that, use the mktemp command to create a temporary file, have the command write to that temporary file, and when the command succeeds, move the temporary to the 'final' output. You can automatically clean up the temporary on failure.
outfile="./output-$(date +%Y-%m-%d.%H:%M:%S).txt"
tmpfile="$(mktemp ./gadget-maker.XXXXXXXX)"
trap "rm -f '$tmpfile'; exit 1" 0 1 2 3 13 15
if cat a.txt > "$tmpfile"
then mv "$tmpfile" "$outfile"
else rm "$tmpfile"
fi
trap 0
You can simplify the outfile to output.txt if you insist (but it isn't safe). You can use any prefix you like with the mktemp command. Note that by creating the temporary file in the current directory, where the final output file will also be created, you avoid copying data across devices during the mv: it is only a link() and an unlink() system call (or perhaps a single rename() system call, if such a thing exists on your machine; it does on Mac OS X).
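If you end up doing this in several places, the same idea can be wrapped in a small function. This is only a sketch; the name write_if_ok and its argument handling are mine, not part of the answer above:
write_if_ok() {
    # usage: write_if_ok final_name command [args...]
    # Run the command with stdout sent to a temporary file in the current
    # directory; only on success is the temporary renamed to the final name,
    # so a failed command leaves no output file behind.
    local out=$1; shift
    local tmp
    tmp=$(mktemp "./.${out##*/}.XXXXXXXX") || return 1
    if "$@" > "$tmp"
    then mv "$tmp" "$out"
    else rm -f "$tmp"; return 1
    fi
}
For example, write_if_ok output.txt cat a.txt behaves like cat a.txt > output.txt, except that output.txt only appears when cat succeeds.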

You can't tell that the command has failed until it terminates, and by then it might have produced some output.
Probably a more useful condition is to avoid creating the output file until the command actually produces some output, and not worry about its status code.
This comes close:
command | { IFS= read -rn1 -d '' a &&
  { printf %s "$a" >> output.txt
    cat >> output.txt
  }
}
However, if the first character output by command is a NUL byte, the NUL won't be written to the output file. Since the extension of the output file is .txt, that's unlikely in this particular case, but it could be handled by adding the command
[[ -z $a ]] && printf '\0' >> output.txt
after the printf and before the cat.
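Putting those pieces together (command here stands in for whatever you actually want to run), the complete snippet would be:
command | { IFS= read -rn1 -d '' a &&
  { printf %s "$a" >> output.txt
    [[ -z $a ]] && printf '\0' >> output.txt
    cat >> output.txt
  }
}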

I think this will work; check it out.
[ -e output.txt ] && (mv output.txt output.txt_bkp)
cat a.txt > /dev/null 2>&1; [ $? -eq 0 ] && (cat a.txt > output.txt)
Another way, as suggested by Jonathan:
[ -e output.txt ] && (mv output.txt output.txt_bkp)
if cat a.txt > /dev/null 2>&1
then
    cat a.txt > output.txt
fi

Related

Why is a part of the code inside a (False) if statement executed?

I wrote a small script which:
prints the content of a file (generated by another application) on paper with a matrix printer
prints the same line into a backup file
removes the original file.
The script is run every minute by a cronjob and works fine as long as there are files to print. If there are no files to print, it prints an empty line on the matrix printer and in the backup file. I don't understand why this happens, as I implemented an if statement that checks whether there is a file to print before the print command is executed. This behaviour only happens when the script is executed by cron, not when I execute it manually with ./script.sh. What's the reason for this, and how can I solve it?
Something I noticed on the side is that if I place an echo "hi" command in the script, it's printed to the matrix printer and the backup file. I expected it to be printed to the console, since there is no >> something after it. How does this work?
The script:
#!/bin/bash
# Make sure the backup directory exists
if [ ! -d /home/user/backup_logprint ]
then
    mkdir /home/user/backup_logprint
fi
# Print the records if there are any
date=`date +%Y-%m-%d`
filename='_logprint_backup'
printer_path="/dev/usb/lp0"
if [ `ls /tmp/ | grep logprint | wc -l` -gt 0 ]
then
    for f in `ls /tmp | grep logprint`
    do
        echo `cat /tmp/$f` >> "/home/user/backup_logprint/$date$filename"
        echo `cat /tmp/$f` >> $printer_path
        rm "/tmp/$f"
    done
fi
There's no need for ls or an if statement. Just use a proper glob in the for loop; if no files match, the loop won't be entered.
#!/bin/bash
# Don't check first; just let mkdir decide if
# anything actually needs to be created.
d=/home/user/backup_logprint
mkdir -p "$d"
filename=$(date +"$d/%Y-%m-%d_logprint_backup")
printer_path="/dev/usb/lp0"
# Cause non-matching globs to expand to an empty
# sequence instead of being treated literally.
shopt -s nullglob
for f in /tmp/*logprint*; do
    cat "$f" > "$printer_path" && mv "$f" "$d"
done
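If nullglob is unfamiliar, here is a quick way to see its effect (the array name files is just for illustration):
shopt -s nullglob
files=(/tmp/*logprint*)
echo "${#files[@]} matching files"   # prints 0 when nothing matches; without
                                     # nullglob the unmatched pattern is kept
                                     # literally and the count would be 1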

Bash output to screen and logfile differently

I have been trying to get a bash script to output different things on the terminal and logfile but am unsure of what command to use.
For example,
#!/bin/bash
freespace=$(df -h / | grep -E "/" | awk '{print $4}')
greentext="\033[32m"
bold="\033[1m"
normal="\033[0m"
logdate=$(date +"%Y%m%d")
logfile="$logdate"_report.log
exec > >(tee -i $logfile)
echo -e $bold"Quick system report for "$greentext"$HOSTNAME"$normal
printf "\tSystem type:\t%s\n" $MACHTYPE
printf "\tBash Version:\t%s\n" $BASH_VERSION
printf "\tFree Space:\t%s\n" $freespace
printf "\tFiles in dir:\t%s\n" $(ls | wc -l)
printf "\tGenerated on:\t%s\n" $(date +"%m/%d/%y") # US date format
echo -e $greentext"A summary of this info has been saved to $logfile"$normal
I want to omit the last output (echo "A summary...") in the logfile while displaying it in the terminal. Is there a command to do so? It would be great if a general solution can be provided instead of a specific one because I want to apply this to other scripts.
EDIT 1 (after applying >&6):
Files in dir: 7
A summary of this info has been saved to 20160915_report.log
Generated on: 09/15/16
One option:
exec 6>&1 # save the existing stdout
exec > >(tee -i $logfile) # like you had it
#... all your outputs
echo -e $greentext"A summary of this info has been saved to $logfile"$normal >&6
# writes to the original stdout, saved in file descriptor 6 ------------^^^
The >&6 sends echo's output to the saved file descriptor 6 (the terminal, if you're running this from an interactive shell) rather than to the output path set up by tee (which is on file descriptor 1). Tested on bash 4.3.46.
References: "Using exec" and "I/O Redirection"
Edit: As the OP found, the >&6 message is not guaranteed to appear after the lines printed by tee off stdout. One option is to use script, e.g., as in the answers to this question, instead of tee, and then print the final message outside of script. Per the docs, the stdbuf answers to that question won't work with tee.
Try a dirty hack:
#... all your outputs
echo >&6 # <-- New line
echo -e $greentext ... >&6
Or, equally hackish (note that, per the OP, this worked):
#... all your outputs
sleep 0.25s # or whatever time you want <-- New line
echo -e ... >&6

Execute and delete command from a file

I have multiple files, each with an insanely long list of commands. I can't run them all in one go, so I need a smart way to read and execute the commands from a file, and delete each command after it completes.
So far I have tried
for i in filename.txt ; do ; execute $i ; sed -s 's/$i//' ; done ;
but it doesn't work. Before I introduced sed, $i was executing. Now even that is not working.
I thought of a workaround where I read the first line, run it, and delete the first line, repeating until the file is empty.
Any better ideas or commands?
This should work for you; list.txt is your file containing the commands.
Make sure you backup the command file before running.
while read line; do $line; sed -i '1d' list.txt; done < "list.txt"
sed -i edits in place, so list.txt will be changed as the loop runs and you will end up with an empty file.
I think what you want to do is something like this:
while read -r -- i; do $i; sed -i "0,/$i/s/$i//;/^$/d" filename.txt; done < filename.txt
The file is read into the loop. Each line is executed, and the sed command will delete only the first entry it finds, then delete the empty line.
I think one way to do it is to keep a source file of all the commands to be executed, and have the script that executes the commands also write a second log file that lists the commands as they are executed.
If you need to resume the process, you work on the lines in the source file that are not present in the log file.
logfile=commands.log
srcfile=commands.src
oldfile=commands.old
trap "mv $oldfile $logfile; exit 1" 0 1 2 3 13 15
[ -f $logfile ] || cp /dev/null $logfile
cp $logfile $oldfile
comm -23 $srcfile $logfile |
while read -r line
do
    echo "$line" >> $oldfile
    ($line) < /dev/null
done
mv $oldfile $logfile
trap 0
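As an aside, the asker's own fallback idea (run the first line, then delete it, until the file is empty) can be written directly. A rough sketch, with cmds.txt as a placeholder name; back the file up before trying it:
while [ -s cmds.txt ]              # while the file is non-empty
do
    cmd=$(head -n 1 cmds.txt)      # take the first line
    eval "$cmd"                    # run it
    tail -n +2 cmds.txt > cmds.txt.tmp && mv cmds.txt.tmp cmds.txt   # drop that line
done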

Bash shell `if` command returns something `then` do something

I am trying to do an if/then statement, where if there is non-empty output from an ls | grep something command then I want to execute some statements. I do not know the syntax I should be using. I have tried several variations of this:
if [[ `ls | grep log ` ]]; then echo "there are files of type log";
Well, that's close, but you need to finish the if with fi.
Also, if just runs a command and executes the conditional code if the command succeeds (exits with status code 0), which grep does only if it finds at least one match. So you don't need to check the output:
if ls | grep -q log; then echo "there are files of type log"; fi
If you're on a system with an older or non-GNU version of grep that doesn't support the -q ("quiet") option, you can achieve the same result by redirecting its output to /dev/null:
if ls | grep log >/dev/null; then echo "there are files of type log"; fi
But since ls also returns nonzero if it doesn't find a specified file, you can do the same thing without the grep at all, as in D.Shawley's answer:
if ls *log* >&/dev/null; then echo "there are files of type log"; fi
You also can do it using only the shell, without even ls, though it's a bit wordier:
for f in *log*; do
# even if there are no matching files, the body of this loop will run once
# with $f set to the literal string "*log*", so make sure there's really
# a file there:
if [ -e "$f" ]; then
echo "there are files of type log"
break
fi
done
As long as you're using bash specifically, you can set the nullglob option to simplify that somewhat:
shopt -s nullglob
for f in *log*; do
echo "There are files of type log"
break
done
Or without if; then; fi:
ls | grep -q log && echo 'there are files of type log'
Or even:
ls *log* &>/dev/null && echo 'there are files of type log'
The if built-in executes a shell command and selects the block based on the return value of the command. ls returns a distinct status code if it does not find the requested files, so there is no need for the grep part. The [[ utility is actually a built-in command in bash, IIRC, that evaluates conditional expressions. I could be wrong on that part since I rarely stray far from Bourne shell syntax.
Anyway, if you put all of this together, then you end up with the following command:
if ls *log* > /dev/null 2>&1
then
    echo "there are files of type log"
fi

Bash: Create a file if it does not exist, otherwise check to see if it is writeable

I have a bash program that will write to an output file. This file may or may not exist, but the script must check permissions and fail early. I can't find an elegant way to make this happen. Here's what I have tried.
set +e
touch $file
set -e
if [ $? -ne 0 ]; then exit;fi
I keep set -e on for this script so it fails if there is ever an error on any line. Is there an easier way to do the above script?
Why complicate things?
file=exists_and_writeable
if [ ! -e "$file" ] ; then
touch "$file"
fi
if [ ! -w "$file" ] ; then
echo cannot write to $file
exit 1
fi
Or, more concisely,
( [ -e "$file" ] || touch "$file" ) && [ ! -w "$file" ] && echo cannot write to $file && exit 1
Rather than check $? on a different line, check the return value immediately like this:
touch file || exit
As long as your umask doesn't restrict the write bit from being set, you can just rely on the return value of touch
You can use -w to check if a file is writable (search for it in the bash man page).
if [[ ! -w $file ]]; then exit; fi
Why must the script fail early? By separating the writable test and the file open() you introduce a race condition. Instead, why not try to open (truncate/append) the file for writing, and deal with the error if it occurs? Something like:
$ echo foo > output.txt
$ if [ $? -ne 0 ]; then echo "Couldn't echo foo" >&2; exit 1; fi
As others mention, the "noclobber" option might be useful if you want to avoid overwriting existing files.
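For completeness, the noclobber option mentioned above works like this (just a demonstration; output.txt is a placeholder name):
set -o noclobber
echo foo > output.txt    # fails with an error if output.txt already exists
echo foo >| output.txt   # >| explicitly overrides noclobber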
Open the file for writing. In the shell, this is done with an output redirection. You can redirect the shell's standard output by putting the redirection on the exec built-in with no argument.
set -e
exec >shell.out # exit if shell.out can't be opened
echo "This will appear in shell.out"
Make sure you haven't set the noclobber option (which is useful interactively but often unusable in scripts). Use > if you want to truncate the file if it exists, and >> if you want to append instead.
If you only want to test permissions, you can run : >foo.out to create the file (or truncate it if it exists).
If you only want some commands to write to the file, open it on some other descriptor, then redirect as needed.
set -e
exec 3>foo.out
echo "This will appear on the standard output"
echo >&3 "This will appear in foo.out"
echo "This will appear both on standard output and in foo.out" | tee /dev/fd/3
(/dev/fd is not supported everywhere; it's available at least on Linux, *BSD, Solaris and Cygwin.)
