Why is the if expression always true in my bash script? (Linux)

I'm very new to shell scripting, and I wrote this code to copy an input file from directory new1 to directory new2 if the file doesn't exist in the second directory.
The problem is that the first if expression is always true, and the code always prints "file copied successfully" even if the file exists in the second directory.
Here is my code:
while true; do
echo "enter a file name from directory new1 to copy it to directory new2 "
echo "or enter ctrl+c to exit: "
read input
i=0
cd ~/new2
if [ -f ~/new1/$input ]; then
i=1
fi
if [ $i -eq 0 ];then
cp ~/new1/$input ~/new2/
echo "####" $input "copied successfully ####"
else
echo "#### this file exist ####"
fi
done
I would appreciate it if anyone could tell me how to fix this problem.

You are comparing the wrong file. In addition, you probably want to refactor your logic. There is no need to keep a separate variable to remember what you just did.
while true; do
echo "enter a file name from directory new1 to copy it to directory new2 "
echo "or enter ctrl+c to exit: "
read input
#i=0 # no use
#cd ~/new2 # definitely no use
if [ -f ~/new2/"$input" ]; then # fix s/new1/new2/
# diagnostics to stderr; prefix messages with script's name
echo "$0: file ~/new2/$input already exists" >&2
else
cp ~/new1/"$input" ~/new2/
echo "$0: ~/new1/$input copied to ~/new2 successfully" >&2
fi
done
Take care to make your diagnostic messages specific enough to be useful. Too many beginner scripts tell you "file not found" 23 times but you don't know which of the 50 files you tried to access were not found. Similarly, including the name of the script or tool which produces a diagnostic in the diagnostic message helps identify the culprit and facilitate debugging as you start to build scripts which call scripts which call scripts ...
As you learn to use the command line, you will find that scripts which require interactive input are a dog to use because they don't offer command history, tab completion of file names, and other niceties which are trivially available to any tool which accepts a command-line file argument.
cp -u already does what this script attempts to implement, so the script isn't particularly useful per se.
Note also that while tilde expansion is specified by POSIX, some very old Bourne shells do not support ~; if you need portability to those, spell out $HOME instead. Your script otherwise seems to be compatible with POSIX sh and could actually benefit from some Bash extensions such as [[ if you are going to use Bash features anyway.
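To illustrate the cp -u point, here is a small sketch; the temporary directory layout stands in for ~/new1 and ~/new2:

```shell
#!/bin/bash
# Sketch: let cp itself decide whether to copy. -u copies only when the source
# is newer than the destination, or when the destination is missing, which
# covers what the interactive loop above implements by hand.
base=$(mktemp -d)
mkdir -p "$base/new1" "$base/new2"
echo "hello" > "$base/new1/demo.txt"

cp -u "$base/new1/demo.txt" "$base/new2/"   # copies: destination missing

echo "changed" > "$base/new2/demo.txt"      # destination is now at least as new
cp -u "$base/new1/demo.txt" "$base/new2/"   # skipped: source is not newer

cat "$base/new2/demo.txt"                   # still shows "changed"
```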

Related

have arbitrary executable inherit errexit, if script is bash

I have a folder of executable scripts, and some of them have Python shebangs, while others have Bash shebangs, etc. We have a cron job that runs this folder of scripts nightly, and the hope is that any error in any script will exit the job.
The scripts are run with something like: for FILE in $FILES; do ./$FILE; done
The scripts are provided by various people, and while the Python scripts always exit after an error, sometimes developers forget to add set -e in their Bash scripts.
I could have the for-loop use bash -e, but then I need to detect whether the current script is Bash/Python/etc.
I could set -e from the parent script, and then source scripts, but I still need to know which language each script is in, and I'd prefer them to run as subshells so script contributors don't have to worry about messing up the parent.
Grepping the shebangs is a short tweak, but knowing the flexibility of Bash, I'd be surprised if there weren't a way to "export" an option that affects all child scripts, in the same way you can export a variable. And there have been many cases in general where I've forgotten set -e, so it would be nice to know more options for fool-proofing things.
I see some options for inheriting -e for subshells involved in command substitution, but not in general.
Disclaimer: Never, ever do this! It's a huge disservice to everyone involved. You will introduce failures both in scripts with meticulous error handling, and in scripts without it.
Anyway, no one likes being told "don't do that" on Stack Overflow, so my suggestion would be to identify the scripts and invoke them with their shebang string plus -e:
for f in ./*
do
# Determine if the script is a shell script
if [[ $(file -i "$f") == *text/x-shellscript* ]]
then
# Read the first line
read -r shebang < "$f"
# The script shouldn't have been identified as a shell script without
# a shebang, but check anyways
if [[ $shebang != "#!"* ]]
then
echo "No idea what $f is" >&2
continue
fi
# Strip off the #! and run it with -e and the file
shebang=${shebang#??}
$shebang -e "$f"
else
# It's some other kind of executable, just run it directly
"$f"
fi
done
Here's a script with correct error handling that now stops working:
#!/bin/bash
my-service start
ret=$?
if [ $ret -eq 127 ]
then
# Use legacy invocation instead
start-my-service
ret=$?
fi
exit "$ret"
Here's a script without error handling that now stops working:
#!/bin/sh
err=$(grep "ERROR" file.log)
if [ -z "$err" ]
then
echo "Run was successful"
exit 0
else
echo "Run failed: $err"
exit 1
fi
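As an aside, the "export an option" mechanism the question asks about does exist for the bash-to-bash case: when the SHELLOPTS variable is exported, any child process that is itself bash enables the listed set -o options (errexit included) at startup. It does nothing for sh or Python children, and it carries exactly the hazards illustrated above, so treat this as a sketch of the mechanism rather than a recommendation:

```shell
#!/bin/bash
# Sketch: an exported SHELLOPTS propagates `set -o` options (errexit included)
# to child processes that are themselves bash. Non-bash children are unaffected.
set -e
export SHELLOPTS

# The child script has no `set -e` of its own, yet it stops at `false`,
# because the inherited SHELLOPTS enables errexit when the child bash starts.
if bash -c 'false; echo "this line is never reached"'; then
    result="child succeeded"
else
    result="child failed as expected"
fi
echo "$result"
```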

Backup the first argument on bash script

I wrote a script to back up the first argument that the user passes to the script:
#!/bin/bash
file=$1/$(date +"_%Y-%m-%d").tar.gz
if [ $1 -eq 0 ]
then
echo "We need first argument to backup"
else
if [ ! -e "$file" ]; then
tar -zcvf $1/$(date +"_%Y-%m-%d").tar.gz $1
else
exit
fi
fi
The results I want from the script are:
back up the folder given as the first argument
save the backup file into that folder with a date-time format in its name.
But the script does not run when I try to pass the argument. What's wrong with the script?
The backup part of your script seems to be working well, but not the part where you check that $1 is not empty.
Firstly, you need quotes around $1 to prevent it from expanding to nothing. Without the quotes, the shell sees the test as
if [ -eq 0 ]
and throws an error.
Secondly, it would be better to use the -z operator to test whether the variable is empty:
if [ -z "$1" ]
Now your script should work as expected.
I see several problems:
As H. Gourlé pointed out, the test for whether an argument was passed is wrong. Use if [ -z "$1" ] to check for a missing/blank argument.
Also, it's almost always a good idea to wrap variable references in double-quotes, as in "$1" above. You do this in the test for whether $file exists, but not in the tar command. There are places where it's safe to leave the double-quotes off, but the rules are complicated; it's easier to just always double-quote.
In addition to checking whether $1 was passed, I'd recommend checking whether it corresponds to a directory (or possibly file) that actually exists. Use something like:
if [ -z "$1" ]; then
echo "$0: We need first argument to backup" >&2
elif [ ! -d "$1" ]; then
echo "$0: backup source $1 not found or is not a directory" >&2
BTW, note how the error messages start with $0 (the name the script was run as) and are directed to error output (the >&2 part)? These are both standard conventions for error messages.
This isn't serious, but it really bugs me: you calculate $1/$(date +"_%Y-%m-%d").tar.gz, store it in the file variable, test to see whether something by that name exists, and then calculate it again when creating the backup file. There's no reason to do that; just use the file variable again. The reason it bugs me is partly that it violates the DRY ("Don't Repeat Yourself") principle, partly that if you ever change the naming convention you have to change it consistently in two places or the script will not work, and partly because in principle it's possible that the script will run just at midnight, and the first calculation will get one day and the second will get a different day.
Speaking of naming conventions, there's a problem with how you store the backup file. If you put it in the directory that's being backed up, then the first day you'll get a .tar.gz file containing the previous contents of the directory. The second day you'll get a file containing the regular contents plus the first backup file. Thus, the second day's backup will be about twice as big. The third day's backup will contain the regular contents, plus the first two backup files, so it'll be four times as big. And the fourth day's will be eight times as big, then 16 times, then 32 times, etc.
You need to either store the backup file somewhere outside the directory being backed up, or add something like --exclude="*.tar.gz" to the arguments to tar. The disadvantage of the --exclude option is that it may exclude other .tar.gz files from the backup, so I'd really recommend the first option. And if you followed my advice about using "$file" everywhere instead of recalculating the name, you only need to make a change in one place to change where the backup goes.
One final note: run your scripts through shellcheck.net. It'll point out a lot of common errors and bad practices before you discover them the hard way.
Here's a corrected version of the script (storing the backup in the directory, and excluding .tar.gz files; again, I recommend the other option):
#!/bin/bash
file="$1/$(date +"_%Y-%m-%d").tar.gz"
if [ -z "$1" ]; then
echo "$0: We need first argument to backup" >&2
elif [ ! -d "$1" ]; then
echo "$0: backup source $1 not found or is not a directory" >&2
elif [ -e "$file" ]; then
echo "$0: A backup already exists for today" >&2
else
tar --exclude="*.tar.gz" -zcvf "$file" "$1"
fi
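The first option (storing the backup somewhere outside the directory being backed up) could look like the sketch below; the backup function name and the second, destination-directory argument are illustrative assumptions, not part of the original script:

```shell
#!/bin/bash
# Sketch of the recommended option: keep archives *outside* the directory
# being backed up, so yesterday's backup never ends up inside today's.
# The function name and the second (destination) argument are illustrative.
backup() {
    src=$1
    backupdir=$2    # destination directory, outside "$src"
    file="$backupdir/$(basename "$src")_$(date +%Y-%m-%d).tar.gz"
    if [ -z "$src" ]; then
        echo "$0: We need first argument to backup" >&2; return 1
    elif [ ! -d "$src" ]; then
        echo "$0: backup source $src not found or is not a directory" >&2; return 1
    elif [ -e "$file" ]; then
        echo "$0: A backup already exists for today" >&2; return 1
    fi
    mkdir -p "$backupdir"
    tar -zcf "$file" "$src"    # no --exclude needed: archives live elsewhere
}

# demo with temporary directories
tmp=$(mktemp -d)
mkdir "$tmp/data" && echo "x" > "$tmp/data/a.txt"
backup "$tmp/data" "$tmp/archives"
```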

Reading multiple files/folders with shellscript

I am using the read command to capture file/folder names and then checking whether they exist, but the script below only works with a single file/folder and does not capture multiple files/folders. Please help!
Thank you!
echo -n "Please enter a name of file/folder you wish to backup: "
read FILE
while [ ! -e "$FILE" ] ;do
read -p "The file ["$FILE"] does not exist."
echo -n "Please enter a name of file/folder you wish to backup: "
read FILE
done
I recommend taking an approach similar to this:
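One way to handle several names at once is to read them into an array (read -a is a bashism that splits the input line on whitespace) and re-prompt while any of them is missing. This is only a sketch; the check_all helper is an illustrative name:

```shell
#!/bin/bash
# Sketch: read several whitespace-separated names into an array (read -a is
# a bashism) and keep prompting while at least one of them does not exist.
check_all() {                  # succeeds only if every given file/folder exists
    for f in "$@"; do
        [ -e "$f" ] || { echo "The file [$f] does not exist." >&2; return 1; }
    done
}

read -r -p "Please enter names of files/folders you wish to backup: " -a FILES
while ! check_all "${FILES[@]}"; do
    read -r -p "Please enter names of files/folders you wish to backup: " -a FILES
done
```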
You can use CTRL + C to force an exit.
Please note that this should really be an if statement, to ensure the proper flow of logic, because a && b || c is not equivalent to if-then-else logic:
if [ "foo" = "foo" ]; then
echo expression evaluated as true
else
echo expression evaluated as false
fi
https://github.com/koalaman/shellcheck/wiki/SC2015

How to create a shell script that can scan a file for a specific word?

One of the questions that I was given for my Computer Science GCSE was:
Write a shell script that takes a string input from a user, asks for a file name and reports whether that string is present in the file.
Whichever way I try, I cannot create a working shell script.
I don't need you to give me the whole answer; however, I have no idea where to start. I can input the variable and the file name, but I have no idea how to search for the chosen word in the chosen file. Any ideas?
Using grep can get this working, for example
viewEntry()
{
echo "Entering view entry"
echo -n "Enter Name: "
read input
if grep -q "$input" datafile
then
echo ""
echo -n "Information -> "
grep -w "$input" datafile
echo ""
else
echo "/!\Name Not Found/!\\"
fi
echo "Exiting view entry"
echo ""
}
datafile is the file you would be reading from. Then, making use of grep's -q and -w options, you should be able to search your chosen file.
This site does a great job explaining grep and your exact problem: http://www.cyberciti.biz/faq/howto-use-grep-command-in-linux-unix/
The following shell script is a very quick approach to doing what you asked:
#!/bin/sh
# the line above tells your shell which program should execute this script;
# a comment must not share that line, or the kernel passes it to the interpreter
echo "Please enter the filename: "
read filename # read user input into variable filename
count=`grep -c "$1" "$filename"` # count matches; $1 is the search string passed as the first argument
if [ "$count" -gt 0 ] # check if count is greater than 0
then
echo "String is present:" "$1"
else
echo "String not found:" "$1"
fi
You should look at some tutorials to get the basics of shell scripting. Your task isn't very complex, so after some reading you should be able to understand what the script does and modify it according to your needs.
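A variant that prompts for both the string and the file name, as the exercise describes, can rely on grep's exit status alone (-q prints nothing). This is a sketch; the helper name search_in_file is illustrative:

```shell
#!/bin/bash
# Sketch: prompt for both values and rely only on grep's exit status
# (-q prints nothing; it reports found/not found via the status alone).
search_in_file() {                  # illustrative helper: $1 = string, $2 = file
    if grep -q -- "$1" "$2"; then   # -- protects patterns beginning with "-"
        echo "String is present: $1"
    else
        echo "String not found: $1"
    fi
}

printf "Please enter the search string: "
read -r pattern
printf "Please enter the filename: "
read -r filename
search_in_file "$pattern" "$filename"
```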

Equivalent of %~dp0 (retrieving source file name) in sh

I'm converting some Windows batch files to Unix scripts using sh. I have problems because some behavior is dependent on the %~dp0 macro available in batch files.
Is there any sh equivalent to this? Any way to obtain the directory where the executing script lives?
The problem (for you) with $0 is that it is set to whatever command line was used to invoke the script, not the location of the script itself. This can make it difficult to get the full path of the directory containing the script, which is what you get from %~dp0 in a Windows batch file.
For example, consider the following script, dollar.sh:
#!/bin/bash
echo $0
If you run it, you'll get output that depends on how it was invoked:
# ./dollar.sh
./dollar.sh
# /tmp/dollar.sh
/tmp/dollar.sh
So to get the fully qualified directory name of a script I do the following:
cd `dirname $0`
SCRIPTDIR=`pwd`
cd -
This works as follows:
cd to the directory of the script, using either the relative or absolute path from the command line.
Get the absolute path of this directory and store it in SCRIPTDIR.
Go back to the previous working directory using cd -.
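The same three steps can be collapsed into one command substitution; the cd then happens in a subshell, so the caller's working directory is never touched:

```shell
#!/bin/bash
# One-line variant of the cd/pwd/cd - dance: the cd runs in a subshell,
# so the current directory of the caller is left unchanged.
SCRIPTDIR=$(cd "$(dirname "$0")" && pwd)
echo "$SCRIPTDIR"
```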
Yes, you can! It's in the arguments. :)
look at
${0}
combining that with
${var%Pattern}
Remove from $var the shortest part of $Pattern that matches the back end of $var.
what you want is just
${0%/*}
I recommend the Advanced Bash Scripting Guide
(that is also where the above information is from).
Especially the part on Converting DOS Batch Files to Shell Scripts
might be useful for you. :)
If I have misunderstood you, you may have to combine that with the output of pwd, since ${0%/*} only contains the path the script was called with!
Try the following script:
#!/bin/bash
called_path=${0%/*}
stripped=${called_path#[^/]*}
real_path=`pwd`$stripped
echo "called path: $called_path"
echo "stripped: $stripped"
echo "pwd: `pwd`"
echo "real path: $real_path"
This needs some work though.
I recommend using Dave Webb's approach unless that is impossible.
In bash under linux you can get the full path to the command with:
readlink /proc/$$/fd/255
and to get the directory:
dir=$(dirname $(readlink /proc/$$/fd/255))
It's ugly, but I have yet to find another way.
I was trying to find the path of a script that was being sourced from another script. That was my problem: when sourcing, the text just gets copied into the calling script, so $0 always returns information about the calling script.
I found a workaround that only works in bash: $BASH_SOURCE always holds the info about the script in which it is referenced, even when that script is sourced; it correctly resolves to the original (sourced) script.
The correct answer is this one:
How do I determine the location of my script? I want to read some config files from the same place.
It is important to realize that in the general case, this problem has no solution. Any approach you might have heard of, and any approach that will be detailed below, has flaws and will only work in specific cases. First and foremost, try to avoid the problem entirely by not depending on the location of your script!
Before we dive into solutions, let's clear up some misunderstandings. It is important to understand that:
Your script does not actually have a location! Wherever the bytes end up coming from, there is no "one canonical path" for it. Never.
$0 is NOT the answer to your problem. If you think it is, you can either stop reading and write more bugs, or you can accept this and read on.
...
Try this:
${0%/*}
This should work for the bash shell:
dir=$(dirname "$(readlink -m "$BASH_SOURCE")")
Test script:
#!/bin/bash
echo "$(dirname "$(readlink -m "$BASH_SOURCE")")"
Run test:
$ ./somedir/test.sh
/tmp/somedir
$ source ./somedir/test.sh
/tmp/somedir
$ bash ./somedir/test.sh
/tmp/somedir
$ . ./somedir/test.sh
/tmp/somedir
This is a script that can get a shell file's real path whether it is executed or sourced.
Tested in bash, zsh, ksh, dash.
BTW: you should clean up the verbose code yourself.
#!/usr/bin/env bash
echo "---------------- GET SELF PATH ----------------"
echo "NOW \$(pwd) >>> $(pwd)"
ORIGINAL_PWD_GETSELFPATHVAR=$(pwd)
echo "NOW \$0 >>> $0"
echo "NOW \$_ >>> $_"
echo "NOW \${0##*/} >>> ${0##*/}"
if test -n "$BASH"; then
echo "RUNNING IN BASH..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=${BASH_SOURCE[0]}
elif test -n "$ZSH_NAME"; then
echo "RUNNING IN ZSH..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=${(%):-%x}
elif test -n "$KSH_VERSION"; then
echo "RUNNING IN KSH..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=${.sh.file}
else
echo "RUNNING IN DASH OR OTHERS ELSE..."
SH_FILE_RUN_PATH_GETSELFPATHVAR=$(lsof -p $$ -Fn0 | tr -d '\0' | grep "${0##*/}" | tail -1 | sed 's/^[^\/]*//g')
fi
echo "EXECUTING FILE PATH: $SH_FILE_RUN_PATH_GETSELFPATHVAR"
cd "$(dirname "$SH_FILE_RUN_PATH_GETSELFPATHVAR")" || return 1
SH_FILE_RUN_BASENAME_GETSELFPATHVAR=$(basename "$SH_FILE_RUN_PATH_GETSELFPATHVAR")
# Iterate down a (possible) chain of symlinks as lsof of macOS doesn't have -f option.
while [ -L "$SH_FILE_RUN_BASENAME_GETSELFPATHVAR" ]; do
SH_FILE_REAL_PATH_GETSELFPATHVAR=$(readlink "$SH_FILE_RUN_BASENAME_GETSELFPATHVAR")
cd "$(dirname "$SH_FILE_REAL_PATH_GETSELFPATHVAR")" || return 1
SH_FILE_RUN_BASENAME_GETSELFPATHVAR=$(basename "$SH_FILE_REAL_PATH_GETSELFPATHVAR")
done
# Compute the canonicalized name by finding the physical path
# for the directory we're in and appending the target file.
SH_SELF_PATH_DIR_RESULT=$(pwd -P)
SH_FILE_REAL_PATH_GETSELFPATHVAR=$SH_SELF_PATH_DIR_RESULT/$SH_FILE_RUN_BASENAME_GETSELFPATHVAR
echo "EXECUTING REAL PATH: $SH_FILE_REAL_PATH_GETSELFPATHVAR"
echo "EXECUTING FILE DIR: $SH_SELF_PATH_DIR_RESULT"
cd "$ORIGINAL_PWD_GETSELFPATHVAR" || return 1
unset ORIGINAL_PWD_GETSELFPATHVAR
unset SH_FILE_RUN_PATH_GETSELFPATHVAR
unset SH_FILE_RUN_BASENAME_GETSELFPATHVAR
unset SH_FILE_REAL_PATH_GETSELFPATHVAR
echo "---------------- GET SELF PATH ----------------"
# USE $SH_SELF_PATH_DIR_RESULT BELOW
I have tried $0 before, namely:
dirname $0
and it just returns "." even when the script is being sourced by another script:
. ../somedir/somescript.sh
