Preventing scripts from stopping on errors - Linux

My issue is likely a simple one; I just haven't found a satisfactory answer anywhere I've looked so far.
I have a script which runs a program/script; when it encounters an error, it just hangs and will not continue.
What is the best method to fix this?
To clarify: usually there is no output when it hangs, but sometimes there is an error message, depending on which script I am using to work on my data set.
Example script:
#!/bin/bash
# iterate over a NUL-delimited stream of directory names
while IFS='' read -r -d '' dirname; do
    # ...then list files in each directory:
    for file in "$dirname"/*; do
        # ignore directory contents that are not files
        [[ -f $file ]] || continue
        # run the analysis tool on packet dumps
        if [[ $file == *.dmp ]]; then
            echo "$dirname"
            # with packet summary line, packet details expanded, packet bytes
            tshark -PVx -r "$file" >> "$dirname/TEXT_out.txt"
            #ls "$dirname"
            echo "complete"
            continue
        fi
    done
done < <(find . -type d -print0)

In a nutshell, you are asking how to handle a child process that hangs, from the parent's perspective. If that is the basic issue you are facing, then you can check this link.
HTH!
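If the hang is inside the external tool (tshark in your example), one common workaround is to cap each invocation's runtime so the loop can move on. A minimal sketch, assuming GNU coreutils timeout is available, dropped into the loop from your script:
# give tshark at most 5 minutes per file, then continue the loop
if ! timeout 300 tshark -PVx -r "$file" >> "$dirname/TEXT_out.txt"; then
    # a nonzero status also covers the timeout case (exit code 124)
    echo "tshark failed or timed out on $file" >&2
fi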


Attempting to use the output of the ls command as an array, line by line [closed]

I need to find the encrypted (.enc) files in a folder and decrypt them.
I find the .enc files with:
if [[ -n "$(ls -A /sodaman/tempPrabhu/temp/*.enc 2>/dev/null)" ]]; then
I used enc_files=($( ls *.enc )) but it treats all the files as one and fails: it reads the entire output as a single line. So I replaced it with mapfile to decrypt the files one by one, but that throws an error:
test.sh: line 31: syntax error near unexpected token `<' test.sh: line 31: ` mapfile -t enc_files < <(ls *.enc)'
Below is the script:
if [[ -n "$(ls -A /sodaman/tempPrabhu/temp/*.enc 2>/dev/null)" ]]; then
    #create array of encrypted files
    mapfile -t enc_files < <(ls *.enc)
    enc_files=($( ls *.enc ))
    #echo Creating array $enc_files
    #decrypt all encrypted files.
    echo Creating loop for encrypted files
    for m in "${enc_files[@]}"
    do
        d=$(echo "$m" | cut -f 1 -d '.')
        echo $d
        d=$d.dat
        echo $d
        /empty/extproc/hfencrypt_plus $m $d Decrypt /empty/extproc/hfsymmetrickey.dat log.log infa91punv
        echo /empty/extproc/hfencrypt_plus $m $d Decrypt /empty/extproc/hfsymmetrickey.dat log.log infa91punv
        if [[ -f "$d" ]]; then
            mv $m /empty/sodaman/tempPrabhu/blr_temp
            #echo Moving file to encrypted archive : mv "$m" /empty/sodaman/enc_archive
            echo removing log file : rm log.log
            rm log.log
        else
            echo File was not decrypted successfully
        fi
    done
fi
Here's a refactoring which avoids several of the http://shellcheck.net/ violations in your attempt.
for file in /sodaman/tempPrabhu/temp/*.enc; do
    # guard against the glob matching nothing (nullglob is not set)
    test -e "$file" || continue
    # prefer parameter expansion over cut
    d=${file%%.*}.dat
    if /empty/extproc/hfencrypt_plus "$file" "$d" Decrypt /empty/extproc/hfsymmetrickey.dat log.log infa91punv
    then
        # assume hfencrypt sets its exit code
        mv "$file" /empty/sodaman/tempPrabhu/blr_temp
    else
        # print diagnostics to stderr,
        # mentioning which file failed and which script emitted the warning
        echo "$0: $file was not decrypted successfully" >&2
        sed "s%^%log.log: $file: %" log.log >&2
    fi
    rm -f log.log
done
This assumes that you wanted to loop over the files in the directory you examine at the beginning of your script, not in the current directory (perhaps see also What exactly is current working directory?), and that the encryption utility sets its exit code to nonzero if encryption failed (perhaps see also Why is testing "$?" to see if a command succeeded or not, an anti-pattern?, which discusses idioms around conditions involving errors). I added a sed command to include the output from log.log in the diagnostics after a failure, though perhaps you would like a different error-handling strategy (exit immediately and let the user troubleshoot? Or rename the log file to a unique name and keep it around for later?).
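For instance, the rename-and-keep variant might look like this inside the loop (a sketch; the naming scheme is an assumption):
# instead of rm -f log.log: keep each failure log under a unique name
if [ -s log.log ]; then
    mv log.log "log.$(basename "$file").$(date +%s)"
fi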
Don't parse the output of ls; use find instead:
find /sodaman/tempPrabhu/temp/ -maxdepth 1 -name "*.enc"
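To consume find's output safely even with unusual file names, NUL-terminate it and read it the same way as in the first question on this page (a sketch):
while IFS= read -r -d '' f; do
    # run the decryption on "$f" here
    printf 'found %s\n' "$f"
done < <(find /sodaman/tempPrabhu/temp/ -maxdepth 1 -name '*.enc' -print0)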

Not every command is being run in a while loop

I am trying to make a script that watches a folder and automatically encodes files that go into that folder using HandBrake. I want to do this by monitoring the folder with inotify, putting new additions to the folder into a list, then using a cron job to encode them overnight. However, when using a while loop to go over the list, HandBrake only encodes the first file; then the script carries on past the loop without doing every file in the list. Here is the script that is calling HandBrake:
#!/bin/bash
while IFS= read -r line
do
    echo "$(basename "$line")"
    HandBrakeCLI -Z "Very Fast 1080p30" -i "$line" -o "$line.m4v"
    rm "$line"
done < list.txt
> list.txt
When testing the loop with a simple echo instead of the HandBrakeCLI command, it works fine and prints out every file, so I have no idea what is wrong.
Here is the script that monitors the folder, in case that is the problem:
#!/bin/bash
if ! [ -f list.txt ]
then
    touch list.txt
fi
inotifywait -m -e create --format "%w%f" tv-shows | while read FILE
do
    echo "$FILE" >> list.txt
done
Any help would be great, thanks.
EDIT:
Just to be more specific: the script works fine for the first file in list.txt, encoding it and removing the old version without a problem, but then it doesn't do any of the others in the list.
Taken from here
HandBrakeCLI reads from its standard input, so inside the loop it consumes the rest of list.txt, leaving nothing for the next read. To solve the problem, feed it empty input:
echo "" | HandBrakeCLI ......
or redirect its stdin from /dev/null:
HandBrakeCLI ...... < /dev/null
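Applied to the script above, the whole loop becomes (a sketch using the /dev/null form):
#!/bin/bash
while IFS= read -r line
do
    echo "$(basename "$line")"
    # /dev/null on stdin stops HandBrakeCLI from consuming the rest of list.txt
    HandBrakeCLI -Z "Very Fast 1080p30" -i "$line" -o "$line.m4v" < /dev/null
    rm "$line"
done < list.txt
> list.txt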

Silent while loop in bash

I am looking to create a bash script that keeps checking for a file in a directory and performs a certain operation on it. I am using a while loop; if the file does not exist, I want the while loop to stay quiet and keep checking the condition. Here is what I created, but it keeps throwing an error that the file is not found when the file is not there.
while [ ! -f /home/master/applications/tmp/mydata.txt ]
do
    cat mydata.txt
    rm mydata.txt
    sleep 1
done
There are two issues in your implementation:
You should use the same (absolute or relative) path in your while loop's test statement [ ! -f $file ] and in your cat and rm commands. The cat command looks for the file in the current working directory (pwd), while your while statement may be checking somewhere else; hence your implementation is buggy and won't work as expected if your pwd isn't /home/master/applications/tmp.
You need to move your cat and rm commands after the while block. It doesn't make sense to cat a file that doesn't exist; I think you misplaced those commands.
Try this:
file="/home/master/applications/tmp/mydata.txt"
while [ ! -f "$file" ]
do
sleep 1
done
cat $file
rm $file
EDIT
As per the suggestion from @Ivan, you could use until instead of while, as it suits your requirement more directly:
file="/home/master/applications/tmp/mydata.txt"
until [ -f "$file" ]; do sleep 1; done
cat "$file"
rm "$file"
Making a different assumption than abhiarora, I'll guess maybe you meant for the file to reappear, and you want it shown each time.
file=/home/master/applications/tmp/mydata.txt
while :
do
    if [[ -f "$file" ]]
    then
        echo "$(<"$file")"
        rm "$file"
    fi
    sleep 1
done
This creates an infinite loop. If that's NOT what you wanted, use abhiarora's solution.
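If polling once a second feels wasteful, an event-driven variant is possible (a sketch, assuming the inotify-tools package used elsewhere on this page is installed):
file=/home/master/applications/tmp/mydata.txt
dir=$(dirname "$file")
# block until something is created in or moved into the directory
while inotifywait -qq -e create -e moved_to "$dir"
do
    if [[ -f "$file" ]]
    then
        echo "$(<"$file")"
        rm "$file"
    fi
done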

Bash script to iterate contents of a directory, moving only the files not currently open by another process

I have people uploading files to a directory on my Ubuntu Server.
I need to move those files to the final location (another directory) only when I know these files are fully uploaded.
Here's my script so far:
#!/bin/bash
cd /var/uploaded_by_users
for filename in *; do
    lsof $filename
    if [ -z $? ]; then
        # file has been closed, move it
    else
        echo "*** File is open. Skipping..."
    fi
done
cd -
However, it's not working: it says some files are open when that's not true. I assumed $? would hold 0 if the file was closed and 1 if it wasn't, but I think that's wrong.
I'm not a Linux expert, so I'm looking for how to implement this simple script, which will run from a cron job every minute.
[ -z $? ] checks whether $? is zero-length. Since $? will never be a null string, that check always fails, so the else part is always executed.
You need to test for numeric zero instead, as below:
lsof "$filename" >/dev/null; lsof_status=$?
if [ "$lsof_status" -eq 0 ]; then
# file is open, skipping
else
# move it
fi
Or more simply (as Benjamin pointed out):
if lsof "$filename" >/dev/null; then
    # file is open, skip it
else
    # move it
fi
Using negation, we can shorten the if statement (as dimo414 pointed out):
if ! lsof "$filename" >/dev/null; then
    # move it
fi
You can shorten it even further, using &&:
for filename in *; do
    lsof "$filename" >/dev/null && continue # skip if the file is open
    # move the file
done
You may not need to worry about when the write is complete, if you are moving the file to a different location in the same file system. As long as the client is using the same file descriptor to write to the file, you can simply create a new hard link for the upload file, then remove the original link. The client's file descriptor won't be affected by one of the links being removed.
cd /var/uploaded_by_users
for f in *; do
    ln "$f" /somewhere/else/"$f"
    rm "$f"
done

For every file modification, copy it into another file (bash)

I want to run a service that listens for modifications to a file; for every addition to the file, it should remove the added content from that file and append it to another file.
I tried this code, but it is not working; it seems to go into an infinite loop:
inotifywait -m -e modify "$1" |
while read folder eventlist eventfile
do
    cat "$1" >> $DESTINATION_FILE
    > $1
done
Each time you truncate the file, that registers as a modification, which triggers another truncation, etc. Try testing if the file contains anything in the body of the loop.
inotifywait -m -e modify "$1" |
while read folder eventlist eventfile
do
    # Only copy-and-clear if the file is not empty
    if [ -s "$1" ]; then
        cat "$1" >> "$DESTINATION_FILE"
        # What if the file is modified here?
        > "$1"
    fi
done
See my comment between the cat and the truncation. Any modifications made at that point would never reach $DESTINATION_FILE, because you would erase them before the next iteration of the loop. This isn't really avoidable unless your operating system allows you to obtain a lock on $1 prior to the cat, then release the lock after the truncation, so that only one process can write to the file at a time.
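On Linux, flock(1) can provide such a lock, provided every writer cooperates by taking the same lock before appending (a sketch; the .lock file name and the cooperating writer are assumptions):
# reader side: hold an exclusive lock while copying and truncating "$1"
(
    flock -x 9
    cat "$1" >> "$DESTINATION_FILE"
    > "$1"
) 9> "$1.lock"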
As pointed out by chepner, reverting the changes will also be treated as a file modification.
A way out is:
remove the -m parameter
manually implement the while loop in bash
e.g.
cp "$1" "$1.bak"
while true; do inotifywait -e modify "$1" | {
read folder eventlist eventfile;
cat "$1" >> "$DESTINATION_FILE";
# OR
# diff "$1" "$1.bak" >> "$DESTINATION_FILE";
cp "$1.bak" "$1";
}
done
Note: I haven't tested the above code myself.
Note 2: There may be atomicity issues: there are windows during which file modifications are not being monitored, so if someone writes to the "$1" file while the cat or cp operations are in progress, those changes will be missed.
