How to handle errors in a shell script - Linux

I am writing a shell script to install my application. The script has a number of commands such as copy, unzip, move, if, and so on. I want to know about the error if any of the commands fails. Also, I don't want the script to exit with a code other than zero.
Order of script execution (root-file.sh):
./script-to-install-mongodb
./script-to-install-jdk8
./script-to-install-myapplicaiton
Sample script file:
cp sourceDir destinationDir
unzip filename
if [ true ]; then
    # success code
fi
In root-file.sh, I want to know, via a variable or a message, whether any command in my scripts failed.
I don't want to write code to check every command's status. Sometimes cp or mv may fail due to an invalid directory. At the end of the script execution, I want to know whether all commands executed successfully or whether there was an error.
Is there a way to do this?
Note: I am using plain sh, not bash.

The status of the last command is stored in the special variable $?; you can save it into a named variable right after the command runs:
unzip filename
unzipStatus=$?
./script1.sh
script1Status=$?
if [ "$unzipStatus" -eq 0 ] && [ "$script1Status" -eq 0 ]
then
    echo "Everything successful!"
else
    echo "Unsuccessful."
fi

Well, as you are using a plain shell script to achieve this, there's not much external tooling, so the default $? should be of help. You may want to check the return value in between the steps of the script. The code will look like this:
./script_1
retval=$?
if [ "$retval" -eq 0 ]; then
    echo "script_1 executed successfully ..."
else
    echo "script_1 failed with a non-zero exit code!"
    exit 1
fi
./script_2
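To avoid repeating that check after every single command, a small wrapper can collect failures for you. A minimal sketch (run_step and failures are hypothetical names), which also always exits zero as the question asks:
failures=""

run_step() {
    "$@" || failures="$failures $1"
}

run_step ./script-to-install-mongodb
run_step ./script-to-install-jdk8
run_step ./script-to-install-myapplicaiton

if [ -z "$failures" ]; then
    echo "All commands executed successfully."
else
    echo "These steps failed:$failures"
fi
exit 0    # always exit zero, as requested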
Let me know if this adds any value to your scenario.

Exception handling in Linux shell scripting can be done as follows:
command || fallback_command
If you have multiple commands, you can do:
(command_one && command_two) || fallback_command
Here fallback_command can be an echo, logging details to a file, etc.
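For example, a minimal sketch that appends a timestamped message to a hypothetical install.log if either step fails (cp and unzip stand in for the real commands):
(cp sourceDir destinationDir && unzip filename) || \
    echo "$(date): install step failed" >> install.log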
I don't know if you have tried putting set -x at the top of your script to see detailed execution.
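For reference, a minimal sketch; with set -x the shell prints each command (prefixed with +) before running it:
#!/bin/sh
set -x                       # trace: print each command before it runs
cp sourceDir destinationDir
unzip filename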

Want to give my 2 cents here. Run your script like this:
sh root-file.sh 2> errors.txt
Then grep for the relevant patterns in errors.txt:
grep -e "root-file.sh: line" -e "script-to-install-mongodb.sh: line" -e "script-to-install-jdk8.sh: line" -e "script-to-install-myapplicaiton.sh: line" errors.txt
The output of the above grep command will display the commands that had errors, along with their line numbers. Let's say the output is:
test.sh: line 8: file3: Permission denied
You can then go straight to that line number (here, line 8) to find the issue; in vi, :8 jumps to line 8. This can also be automated by printing the specific line from your shell script, here line 8:
head -8 test.sh | tail -1
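A sketch of the automated version, assuming the error lines keep the "file: line N:" format shown above:
grep -o "test.sh: line [0-9]*" errors.txt | while read -r _ _ n; do
    head -"$n" test.sh | tail -1
done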
Hope it helps.

Related

I have some bash scripts. How do I store the success or failure of their execution in a file?

So I wanted to know how to write some kind of command in each of the scripts that will output whether the script was successful or not and append that to a file. Is it possible?
You can achieve this by using trap:
https://www.linuxjournal.com/content/bash-trap-command
Add these lines to your bash file:
# add these lines at the top of your bash script
echo success > your_output_file
trap 'echo failed > your_output_file' ERR
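A minimal sketch of this in context; /tmp/status.txt stands in for your_output_file, and the failing cp is just a demonstration:
#!/bin/bash
echo success > /tmp/status.txt
trap 'echo failed > /tmp/status.txt' ERR
cp /nonexistent/file /tmp    # any failing command rewrites the status file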
You can also redirect standard output and standard error to files. For example:
some_command >> standard_out_file 2>> standard_error_file
This appends the command's standard output to the first file and its standard error to the second.

shell script can't see files in remote directory

I'm trying to write an interactive script on a remote server, whose default shell is zsh. I've been trying two different approaches to get this to work:
Approach 1: ssh -t <user>@<host> "$(<serverStatusReport.sh)"
Approach 2: ssh <user>@<host> "bash -s" < serverStatusReport.sh
I've been using approach 1 just fine up until now, when I ran into the following issue - I have a block of code that runs depending on whether certain files exist in the current directory:
filename="./service_log.*"
if ls $filename 1> /dev/null 2>&1 ; then
echo "$filename found."
##process files
else
echo "$filename not found."
fi
If I ssh into the server and run the command directly, I see "$filename found."
If I run the block of code above using Approach 1, I see "$filename not found".
If I copy this block into a new script (lets call this script2), and run it using Approach 2, then I see "$filename found".
I can't for the life of me figure out where this discrepancy is coming from. I thought that the difference may be that script2 is piped into bash whereas my original script is being run with zsh... but considering that running the same command verbatim on the server, with its default zsh shell, returns correctly... I'm stumped.
:( any help would be greatly appreciated!
I guess that when executing your approach 1 it is the local shell that expands "$(<serverStatusReport.sh)", not the remote. You can easily check this with:
ssh -t <user>@<host> "$(<hostname)"
Is the serverStatusReport.sh script also in the PATH on the local host?
What I do not understand is why you get this message instead of an error message.

Linux Bash - redirect errors to file

My objective is to run a command in the background and only create a log if something goes wrong.
Can someone tell me if this command is OK for that?
bash:
./command > app/tmp/logs/model/123.log 2>&1 & echo $! >> /dev/null &
The command itself is unimportant (just a random PHP script).
And/or explain how to route the results of my command to a file only if there is an error?
Also, I can't understand what "echo $!" does (I've copied this from elsewhere)...
Thanks in advance!
If I understand correctly, your goal is to run command in the background and to leave a log file only if an error occurred. In that case:
{ ./command >123.log 2>&1 && rm -f 123.log; } &
How it works:
{...} &
This runs whatever is in braces in the background. The braces are not strictly needed here for this exact command but including them causes no harm and might save you from an unexpected problem later.
./command >123.log 2>&1
This runs command and saves all output to 123.log.
&&
This runs the command that follows only if command succeeded (in other words, if command set its exit code to zero).
rm -f 123.log
This removes the log file. Since this command follows the &&, it is only run if command succeeded.
Discussion
You asked about:
echo $! >> /dev/null
echo $! displays the process ID of the previous command that was run in the background. In this case that would be ./command. This display, however, is sent to /dev/null which is, effectively, a trash can.
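Applied to the command line from the question, the same pattern would look like:
{ ./command > app/tmp/logs/model/123.log 2>&1 && rm -f app/tmp/logs/model/123.log; } &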

BASH shell script works properly at command prompt but doesn't work with crontab

Here is the script that I want to execute with crontab.
#!/bin/bash
# File of the path is /home/ksl7922/Memory_test/run_process.sh
# 'mlp' is the name of the process, and 'ksl7922' is my user account.
pgrep mlp > /home/ksl7922/proc.txt
# This writes the IDs of the 'mlp' processes to proc.txt; the next line counts them
result=`sed -n '$=' /home/ksl7922/proc.txt`
echo "result = ${result}"
# If fewer than six 'mlp' processes are running, read one line from the text
# file, delete it from the file, and execute that line.
if ((result < 6)); then
    filename="/home/ksl7922/Memory_test/task_reserved.txt"
    cat $filename | while read LINE
    do
        # Delete the first line from the file.
        sed -i '1d' $filename
        # Execute this line.
        eval $LINE
        break
    done
else
    echo "You're doing great."
fi
After that, I edited the crontab and checked it with crontab -l:
*/20 * * * * sh /home/ksl7922/Memory_test/run_process.sh
This script works properly from the command line; however, it doesn't work properly with crontab.
It seems the shell script does run under crontab, because the 'proc.txt' file was generated and the first line of 'task_reserved.txt' was removed.
However, I didn't see any messages, nor the result file of the 'mlp' processes.
Since I'm not good at English, I'm afraid you may not fully understand my intention.
Anyway, can anyone let me know how to handle this?
My bet is the PATH environment variable is not correctly set within cron. Insert
echo $PATH > /tmp/cron-path.txt
to see what value it currently has. Perhaps you need to manually set it to a proper value within your script.
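For example, a sketch of setting PATH explicitly at the top of the crontab (the value shown is a typical default; use whatever your interactive shell reports):
PATH=/usr/local/bin:/usr/bin:/bin
*/20 * * * * sh /home/ksl7922/Memory_test/run_process.sh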
This is actually a FAQ:
https://askubuntu.com/questions/23009/reasons-why-crontab-does-not-work
https://askubuntu.com/questions/117978/script-doesnt-run-via-crontab-but-works-fine-standalone
If you don't have a mail setup on your system for cron to forward error messages from your script, it's good practice to manually redirect all error messages to your preferred location, e.g.
#!/bin/bash
{
    date
    pgrep mlp > /home/ksl7922/proc.txt
    ... snip ...
    fi
} &> /tmp/cron-msg.txt
Have you checked the execute permission for the script? The file should have executable permission.
ls -ltr /home/ksl7922/Memory_test/run_process.sh
chmod 755 /home/ksl7922/Memory_test/run_process.sh

Execute a particular command only for the first time in a shell

I have a script.
I need to execute a particular command only the first time I run this script in my shell. On later runs it should not be executed, while the rest of the commands should still run.
How can I implement this? All pointers are welcome.
Thanks,
Sen
This is the code I tried to implement:
start_time=`date +%s`
echo $script_instance
if [ "$script_instance" = true ]; then
    end_time=`date +%s`
    echo '#########################################################################'
    echo '# Build Date : '`date`
    echo '# Compilation time : '`expr $end_time - $start_time` s
    echo '#########################################################################'
else
    echo '-------------------------------------------------------------------------'
    echo 'Updating the APIs'
    echo '-------------------------------------------------------------------------'
fi
script_instance=true
export script_instance
This is not working correctly. Please correct me if I am wrong somewhere.
You should probably check for the effect of the possible script execution, not the mere fact that it has run.
For instance, does it build anything? If so, check for the intended output.
Otherwise, if all else fails, just create a flag file with touch $HOME/.some_hidden_file and check for its existence.
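A minimal sketch of that flag-file approach ($HOME/.myscript_ran is a hypothetical marker name):
flag="$HOME/.myscript_ran"
if [ ! -f "$flag" ]; then
    echo "First run: executing the one-time command"
    # one-time command goes here
    touch "$flag"
fi
# the rest of the commands run every time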
Setting variables is not persistent across shell sessions in UNIX.
You can set an environment variable the first time you're running the script:
csh/tcsh: setenv MYVARIABLE something
bash: export MYVARIABLE="something"
Then check with an if-clause whether the variable is set. If it is, do the other stuff; if not, this is the first time the script is executed.
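A sketch of that check (MYVARIABLE as above); note this only works if the script is sourced into the current shell, since an exported variable does not outlive the script's own process:
if [ -z "$MYVARIABLE" ]; then
    echo "First run in this session: doing the one-time work"
    export MYVARIABLE="something"
else
    echo "Already ran in this session"
fi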
I don't know the context, but maybe you can just remove executable permission from the script at the end:
chmod a-x $0
Then, until you chmod a+x <scriptname> again, you will not be able to execute it. You will get "Permission denied".
Note after comment:
In this case, you can split the script into always.sh and firstonly.sh. Use chmod a-x $0 in firstonly.sh, and in always.sh do:
[ -x firstonly.sh ] && ./firstonly.sh
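A sketch of what firstonly.sh might look like under this scheme:
#!/bin/sh
# firstonly.sh: run the one-time commands, then remove our own execute bit
echo "one-time setup goes here"
chmod a-x "$0"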
Alternatively, you can use some kind of flag file as suggested in other answers.
