Bash script fails due to spaces in absolute path variable - Linux

I am running the following script to copy files from a CIFS share mounted on the system to a destination system. The absolute path of the CIFS share contains a few spaces, and the script fails for that path; I tried running it against another path that doesn't contain spaces and it works fine. It seems there is some issue with the way I have declared the absolute path for the CIFS share:
#!/bin/bash
set -x
BASEPATH="/mnt/smbdisks/IT_linux/IT Linux Systems Dev & Support/Testing/Operation/Hello"
ADVICES="World Country State"
make_folder()
{
if [ ! -d "${1}" ]
then
echo "Warning: [${1}] Folder does not exist, trying to create..."
mkdir "$1"
if [ $? != 0 ]
then
echo "Unable to create folder "${1}" - exiting"
exit 1
fi
fi
}
sync_to_apj()
{
FROM=$1
TO=$2
TUNNEL='ssh -A -i /home/linux/.ssh/id_rsa_hostname root@hostname01.exampple.com ssh -q'
EXCLUDE='--exclude Completed --exclude Failed'
echo in folder [${BASEPATH}]
echo "Now running copying from ${FROM}/tmp/ to root@hostname01:/common/shared/test/${TO}/"
rsync -av -e "${TUNNEL}" "${BASEPATH}/${FROM}/tmp/" root@hostname01:/common/shared/test/${TO}/ ${EXCLUDE}
if [ $? != 0 ]
then
echo "Issue with rsync of $1 advices - exiting"
exit 3
fi
# Set perms to JBOSS.JBOSS on our newly copied files
echo " .. and adjusting permissions to jboss.jboss on root@hostname01:/common/shared/test"
ssh -A -i ~/.ssh/id_rsa_hostname root@hostname01 "ssh -q hostname01.example.com 'chown -R jboss.jboss /common/shared/test'"
}
# Main
echo --- START `date` - $0
echo BASEPATH = ["${BASEPATH}"]
for each_advice in ${ADVICES}
do
echo " Syncing ${each_advice}"
#DEST_ADVICE=`echo ${each_advice} | sed -e 's:$:_advices:g'`
DEST_ADVICE=`echo ${each_advice}`
make_folder "${BASEPATH}/${each_advice}/tmp"
make_folder "${BASEPATH}/${each_advice}/New"
echo "Moving pdf files from ${each_advice} to ${each_advice}/tmp"
cd "${BASEPATH}"
mv ${each_advice}/*.{PDF,pdf} ${each_advice}/tmp 2>/dev/null
sync_to_apj "${each_advice}" "${DEST_ADVICE}"
echo "Moving pdf files from ${each_advice}/tmp to ${each_advice}/New"
cd "${BASEPATH}"
mv ${each_advice}/tmp/*.{PDF,pdf} ${each_advice}/New 2>/dev/null
done
echo --- DONE `date` - $0
It fails with the following error:
+ '[' '!' -d '/mnt/smbdisks/IT_linux/IT Linux Systems Dev & Support/Testing/Operation/Hello/World/tmp' ']'
+ echo 'Warning: [/mnt/smbdisks/IT_linux/IT Linux Systems Dev & Support/Testing/Operation/Hello/World/tmp] Folder does not exist, trying to create...'
+ mkdir '/mnt/smbdisks/IT_linux/IT Linux Systems Dev & Support/Testing/Operation/Hello/World/tmp'
mkdir: cannot create directory `/mnt/smbdisks/IT_linux/IT Linux Systems Dev & Support/Testing/Operation/Hello/World/tmp': No such file or directory
+ '[' 1 '!=' 0 ']'
+ echo 'Unable to create folder /mnt/smbdisks/IT_linux/IT' Linux Systems Dev '&' Support/Testing/Operation/Hello/World/tmp - exiting'
+ exit 1
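Note that in the trace above the path reaches mkdir as a single, correctly quoted argument, so the spaces themselves are not being split; "No such file or directory" is what plain mkdir reports when an intermediate directory is missing. A minimal sketch of one possible cause and fix, assuming the parent folder .../Hello/World does not yet exist on the share:
# Plain mkdir cannot create nested directories in one go:
mkdir "/mnt/smbdisks/IT_linux/IT Linux Systems Dev & Support/Testing/Operation/Hello/World/tmp"
# mkdir: cannot create directory ...: No such file or directory
# mkdir -p creates the missing parents as well; spaces in the path are fine
# as long as the argument stays quoted:
mkdir -p "/mnt/smbdisks/IT_linux/IT Linux Systems Dev & Support/Testing/Operation/Hello/World/tmp"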

How to precede the output generated by "exec &" with a time/date in Linux bash scripts?

I have the following script file that writes files to s3 from a local file system:
#!/bin/bash
CURR_DIR=`dirname $0`
SCRIPT_NAME="$(basename $0)"
LOG_FILE=$(echo $SCRIPT_NAME | cut -f 1 -d '.')
TODAY=$(date '+%Y-%m-%d')
NOW=$(date -d "$(date +%Y-%m-%d)" +%Y"-"%m"-"%d)
LOG_PATH="$CURR_DIR"/logs/"$LOG_FILE"-$TODAY.log
LOG="[$(date '+%Y-%m-%d %H:%M:%S,%3N')] INFO {$LOG_FILE} -"
ERROR_LOG="[$(date '+%Y-%m-%d %H:%M:%S,%3N')] ERROR {$LOG_FILE} -"
BUCKET="s3.bucket.example"
OUT_FOLDER="path/to/folderA"
S3_PUSH="s3://$BUCKET/$OUT_FOLDER"
exec &>> $LOG_PATH
echo "$LOG Copying files to local out folder..." >> $LOG_PATH
cp /path/to/folderA/*.* /path/to/folderB
echo "$LOG Command returned code:" $?
if [ "$(ls -A path/to/folderA/)" ]; then
FILES="$(ls path/to/folderA/*)"
for file in $FILES ; do
echo "$LOG File $file found for sync" >> $LOG_PATH
echo "$LOG Pushing $file to S3 /Folder..." >> $LOG_PATH
echo -n "$LOG " ; s3cmd put -c /home/config/.s3cfg "$file" "$S3_PUSH"/
echo "$LOG Command returned code:" $?
echo "$LOG Copying $file to local backup..." >> $LOG_PATH
mv "$file" /path/to/folderA/backup/
echo "$LOG Command returned code:" $? >> $LOG_PATH
RCC=$?
if [ $? -eq 0 ]
then
echo "$LOG Command returned code:" $?
else
echo "$ERROR_LOG Command returned code:" $?
fi
done
else
echo "$LOG No files found for sync." >> $LOG_PATH
fi
The output comes out in a specific grok pattern that I need in order to parse this output as logs into Elasticsearch; however, the output from line 27 is as follows:
[2021-09-02 08:15:25,629] INFO {TestGrokScriptPattern} - upload: '/path/to/folderA/File.txt' -> 's3://s3.bucket.example/Path/To/Bucket/File.txt' [1 of 1]
0 of 0 0% in 0s 0.00 B/s done
That "upload ..." and "0 of 0 0%..." output is created by the exec & command executed on line 16.
How can I get that output to not go onto the next line without the date, time and script name preceding it, so that it doesn't break the log pattern I am trying to create?
Rather than redirect output on each line, you can wrap the body of the script in a single block and then handle the output of the entire block in one place. You can then process that output with the stream editor sed. For example:
if true; then # Always true. Just simplifies redirection.
echo "Doing something..."
command_with_output
command_with_more_output
echo "Done."
fi 2>&1 | sed "s/^/${LOG}/" > "${LOG_PATH}"
The sed expression means: Substitute (s) the beginning of each line (^) with the contents of the LOG variable.
Using 2>&1 before the pipe sends the block's stderr through sed as well, which also eliminates the need for the exec &>> $LOG_PATH command.
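Applied to the script in the question, a minimal sketch (assuming the LOG, LOG_PATH and S3_PUSH definitions from the top of the original script are kept) could look like this:
if true; then # Always true. Just simplifies redirection.
echo "Copying files to local out folder..."
cp /path/to/folderA/*.* /path/to/folderB
echo "Command returned code: $?"
for file in /path/to/folderA/*; do
[ -e "$file" ] || continue # skip if the glob matched nothing
echo "Pushing $file to S3..."
s3cmd put -c /home/config/.s3cfg "$file" "$S3_PUSH"/
echo "Command returned code: $?"
mv "$file" /path/to/folderA/backup/
done
fi 2>&1 | sed "s/^/${LOG}/" >> "$LOG_PATH"
Note that LOG is computed once at the top of the script, so every prefixed line carries that single timestamp.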

cp command can't parse a path with a wildcard in it

I have a function I wrote in bash that copies files.
It was written so it would be less painful for us to turn our batch scripts that use xcopy into bash scripts, because the copy commands on Linux work a little differently.
The function does several things:
It creates a path to the target directory if it doesn't exist yet.
It uses cp to copy files
it uses cp -r to copy directories.
it uses rsync -arv --exclude-from=<FILE> to copy all the files and folders in a given directory except the files/folders listed in FILE
The problem is that when I try to copy files with a * it gives me an error:
cp: cannot stat 'some dir with * in it': No such file or directory.
I found out that I can instead write something like cp "<dir>/"*".<extension>" "<targetDir>" and the command itself works. But when I try to pass that to my function, it gets 3 arguments instead of 2.
How can I use the cp command in my function while still being able to pass a path with a wildcard in it? Meaning the argument has double quotes at the beginning and end of the path, for example: Copy "<somePath>/*.zip" "<targetDir>"
function Copy {
echo "number of args is: $#"
LastStringInPath=$(basename "$2")
if [[ "$LastStringInPath" != *.* ]]; then
mkdir -p "$2"
else
newDir=$(dirname "$2")
mkdir -p "newDir"
fi
if [ "$#" == "2" ]; then
echo "Copying $1 to $2"
if [[ -d $1 ]]; then
cp -r "$1" "$2"
else
cp "$1" "$2"
fi
if [ $? -ne 0 ]; then
echo "Error $? while trying to copy $1 to $2"
exit 1
fi
else
rsync -arv --exclude-from="$3" "$1" "$2"
if [ $? -ne 0 ]; then
echo "Error $? while trying to copy $1 to $2"
exit 1
fi
fi
}
Okay, so I couldn't solve this with the suggestions I was given. What was happening is that either the * was expanded before it was sent to the function, or it wouldn't expand at all inside the function. I tried different methods and eventually decided to rewrite the function so that it would instead support multiple arguments.
The expansion of the wildcard happens before it is sent to my function, and the Copy function does all the things it did before while supporting more than one file/dir to copy.
function Copy {
argumentsArray=( "$@" )
#Check if last argument has the word exclude, in this case we must use rsync command
if [[ ${argumentsArray[$#-1],,} == exclude:* ]]; then
mkdir -p "$2"
#get file name from the argument
excludeFile=${3#*:}
rsync -arv --exclude-from="$excludeFile" "$1" "$2"
if [ $? -ne 0 ]; then
echo "Error while trying to copy $1 to $2"
exit 1
fi
else
mkdir -p "${argumentsArray[$#-1]}"
if [[ -d $1 ]]; then
cp -r "${argumentsArray[@]}"
if [ $? -ne 0 ]; then
exit 1
fi
else
cp "${argumentsArray[@]}"
if [ $? -ne 0 ]; then
exit 1
fi
fi
fi
}
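For illustration, a few hypothetical calls (the paths are made up) that the rewritten function is meant to handle:
# The calling shell expands the wildcard, so Copy receives one source
# argument per matching file, with the destination as the last argument:
Copy /some/source/*.zip /target/dir
# Copying a single directory still works:
Copy /some/source/dir /target/dir
# rsync mode: the last argument names the exclude file after "exclude:":
Copy /some/source/dir/ /target/dir exclude:/path/to/excludes.txt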

Using Local and Remote Variables in SSH

I have written a shell script which SSHes to a remote host and does some processing. The code that executes remotely has to use local variables which are read from a properties file. My code is below. The code is not executed properly; it gives the error
-printf: unknown primary or command.
Please help me with this.
Note: datadir, username and ftphostname are defined in the properties file.
. config.properties
ssh $username@$ftphostname << EOF
filelist=;
filelist=($(find "$datadir" -type f -printf "%T@ %p\n"| sort -n | head -5 | cut -f2- -d" "));
filecount=\${#filelist[@]};
while [ \${#filelist[@]} -gt 0 ]; do
checkCount=;
filesSize=$(wc -c \${filelist[@]}|tail -n 1 | cut -d " " -f1) ;
if [ "\$filesSize" == "\$fileSizeStored" ]; then
fileSizeStored=0;
printf "\n*********** \$(date) ************* " >> /home/chisan/logs/joblogs.log;
echo "Moved below files" >> /home/joblogs.log;
for i in "\${filelist[@]}"
do
# echo "file is \$i"
checkCount=0;
mv \$i /home/outputdirectory/;
if [ $? -eq 0 ]; then
echo "File Moved to the server: \$i" >> /home/joblogs.log;
else
echo "Error: Failed to move file: \$i" >> /home/joblogs.log;
fi
done
filelist=($(find "$datadir" -type f -printf '%T@ %p\n' | sort -n | head -5 | cut -f2- -d" "));
else
((checkCount+=1));
sleep 4;
fileSizeStored=\$filesSize;
fi
done
EOF
But this one works
#ssh to remote system and sort the files and fetch the files which are copied first(based on modification time)
ssh -o StrictHostKeyChecking=no user@server 'filelist=($(find /home/data -type f -printf "%T@ %p\n" | sort -n | head -5 | cut -f2- -d" "));
# filelist array variable holds the file names which have the oldest modification date.
#check the directory until it has atleast one file.
while [ ${#filelist[@]} -gt 0 ]; do
filesSize=$(wc -c "${filelist[@]}"|tail -n 1 | cut -d " " -f1) ;
#filesSize contains the total size of the files that are in the filelist array.
if [ -e "$HOME/.storeFilesSize" ]; then
fileSizeStored=$(cat "$HOME/.storeFilesSize");
if [ "$filesSize" == "$fileSizeStored" ]; then
echo "Moved below files" >> /home/joblogs.log;
for i in "${filelist[@]}"
do
mv "$i" /home/dmpdata1 &>/dev/null;
if [ $? -eq 0 ]; then
echo "File Moved to the server: $i" >>/home/joblogs.log;
else
echo "Error: Failed to move file: $i" >>/home/joblogs.log;
fi
done
filelist=($(find /home/data -type f -printf "%T@ %p\n" | sort -n | head -5 | cut -f2- -d" "));
else
sleep 4;
echo "$filesSize" > "$HOME/.storeFilesSize";
fi
else
echo "creating new file";
echo "$filesSize" > "$HOME/.storeFilesSize";
fi
done'
I will not answer directly (i.e., not with your specific needs and actions), but give a generic possibility showing how to use local and remote variables:
Your master script should create a "specific script" locally,
and then copy it over and run it remotely (with additional arguments if needed).
Generic example of Master script :
#local Master script: This script creates a local script,
# and then copy it to remotehost and start it
#Some local variables will be defined here.
#They can be used below, and will be replaced by their value locally
localvar1="...."
localvar2="...."
#now we create the script
cat > /tmp/localscript_to_be_copied_to_remote.sh <<EOF
#remote_script
for i in ..... ; do
something ;
somethingelse
done
......
.....
EOF
#in the above, each time you used "$localvar1" or "$localvar2", the script
# /tmp/localscript_to_be_copied_to_remote.sh will instead have their values,
# as the local shell will replace them on the fly during the cat > ... <<EOF .
# if you want to have some remotevariable "as is" (and not as their local value) in the script,
# write them as "\$remotevariable" there, instead of "$remotevariable", so the local shell
# won't interpret them during the 'cat', and the script will receive "$remotevariable"
# as is, instead of its local value.
#then you copy the script:
scp -p /tmp/localscript_to_be_copied_to_remote.sh user@remotehost:/some/dir/name.sh
#and you run it:
# UNCOMMENT the line below ONLY when /tmp/localscript_to_be_copied_to_remote.sh is correct!
# ssh user@remotehost "/some/dir/name.sh" #+ maybe some parameters as well
#end of local Master script.
You then run the local master script and have it create the tmp file locally (which you can check, to make sure it is what should run on the remote host), and then copy it to the remote host and execute it there.
Specific example of master script :
#!/bin/bash
local1="/tmp /var /usr /home" # this will be the default list of dirs (on the remote host)
# that the script will print the size of (+ any additional parameters)
cat > /tmp/printsizes.bash <<EOF
#!/bin/bash
for dir in $local1 "\$@" ; do
du -ks "\$dir"
done
EOF
scp -p /tmp/printsizes.bash user@remotehost:/tmp/print_dir_sizes.bash
ssh user@remotehost "/tmp/print_dir_sizes.bash /etc /root"
This (weird...) example will create a LOCAL script containing:
#!/bin/bash
for dir in /tmp /var /usr /home "$@" ; do
du -ks "$dir"
done
And will execute it with:
ssh user@remotehost "/tmp/print_dir_sizes.bash /etc /root"
so it will do remotely:
for dir in /tmp /var /usr /home /etc /root ; do
du -ks "$dir"
done
I hope it helps to see how to use local and remote variables...
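As a quick, hypothetical illustration of the escaping rule (the variable names here are made up):
# $localvar is expanded while the heredoc is written out;
# \$HOME survives as a literal $HOME and is expanded on the remote host.
localvar="value from the local machine"
cat > /tmp/demo.sh <<EOF
echo "local value was: $localvar"
echo "remote HOME is: \$HOME"
EOF
# /tmp/demo.sh now contains:
#   echo "local value was: value from the local machine"
#   echo "remote HOME is: $HOME"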

How to check for a folder on 2 machines using a shell script?

I am working on a shell script which I need to run on machineX. It will check for a certain folder, named in the format YYYYMMDD, inside the folder MAPPED_LOCATION on two other machines - machineP and machineQ. So the path will be like this on both machineP and machineQ -
/bat/testdata/t1_snapshot/20140311
And inside the above folder path there will be some files. Below is my shell script -
#!/bin/bash
readonly MACHINES=(machineP machineQ)
readonly MAPPED_LOCATION=/bat/testdata/t1_snapshot
readonly FILE_TIMESTAMP=20140311
# old code which I was using to get the latest folder inside each machine (P and Q)
dir1=$(ssh -o "StrictHostKeyChecking no" david@${MACHINES[0]} ls -dt1 "$MAPPED_LOCATION"/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] | head -n1)
dir2=$(ssh -o "StrictHostKeyChecking no" david@${MACHINES[1]} ls -dt1 "$MAPPED_LOCATION"/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] | head -n1)
dir3=$MAPPED_LOCATION/$FILE_TIMESTAMP # /bat/testdata/t1_snapshot/20140311
echo $dir1
echo $dir2
echo $dir3
if dir3 path exists in both the machines (P and Q) and number of files is greater than zero in each machine
then
# then do something here
echo "Hello World"
else
# log an error - folder is missing or number of files is zero in which servers or both servers
fi
Now what I need to do is: if the path /bat/testdata/t1_snapshot/20140311 exists on both machines and the number of files is greater than zero on both machines, then do something. Else, if the folder is missing on either server or the number of files is zero on either server, I will exit the shell script with a non-zero status and a message describing the actual error.
How can I do this in shell script?
Update:-
for machine in $MACHINES; do
dircheck=($(ssh -o "StrictHostKeyChecking no" david@${machine} [[ ! -d "$dir3" ]] \&\& exit 1 \; ls -t1 "$dir3"))
#On the ssh command, we exit 1 if the folder doesn't exist. We check the return code with `$?`
if [[ $? != 0 ]] ;then
echo "Folder doesn't exist on $machine";
exit 1
fi
# check number of files retrieved
if [[ "${dircheck[@]}" = 0 ]] ;then
echo "0 Files on server $machine";
exit 1
fi
#all good for $machine here
done
echo "Everything is Correct"
If I add a new empty folder 20140411 on machineP and then execute the above script, it always prints out -
echo "Everything is Correct"
In fact, I didn't add any folder on machineQ. Not sure what the problem is?
Another Update-
I have created an empty folder 20140411 on machineP only, and then I ran the script in debug mode -
david@machineX:~$ ./test_file_check_1.sh
+ FILERS_LOCATION=(machineP machineQ)
+ readonly FILERS_LOCATION
+ readonly MEMORY_MAPPED_LOCATION=/bexbat/data/be_t1_snapshot
+ MEMORY_MAPPED_LOCATION=/bexbat/data/be_t1_snapshot
+ readonly FILE_TIMESTAMP=20140411
+ FILE_TIMESTAMP=20140411
+ dir3=/bexbat/data/be_t1_snapshot/20140411
+ echo /bexbat/data/be_t1_snapshot/20140411
/bexbat/data/be_t1_snapshot/20140411
+ for machine in '$FILERS_LOCATION'
+ dircheck=($(ssh -o "StrictHostKeyChecking no" david@${machine} [[ ! -d "$dir3" ]] \&\& exit 1 \; ls -t1 "$dir3"))
++ ssh -o 'StrictHostKeyChecking no' david@machineP '[[' '!' -d /bexbat/data/be_t1_snapshot/20140411 ']]' '&&' exit 1 ';' ls -t1 /bexbat/data/be_t1_snapshot/20140411
+ [[ 0 != 0 ]]
+ [[ '' = 0 ]]
+ echo 'Everything is Correct'
Everything is Correct
What you want to do is ls the remote directory and retrieve the result in an array variable (remove the -d flag from ls, which lists only the folder itself rather than its contents, and the head -n1 command, which prints only the first entry).
I also added a check for directory existence, [[ -d "$dir3" ]], before executing the ls, and escaped the && so it is not interpreted by the local bash script.
[[ -d "$dir3" ]] \&\& ls -t1 "$dir3"
To define a bash array, add an extra ( ) around the command substitution, then compare the array size.
dir3="$MAPPED_LOCATION/$FILE_TIMESTAMP" # /bat/testdata/t1_snapshot/20140311
for machine in ${MACHINES[*]}; do
dir3check=($(ssh -o "StrictHostKeyChecking no" david@${machine} [[ -d "$dir3" ]] \&\& ls -t1 "$dir3"))
if [[ "${#dir3check[@]}" -gt 0 ]] ;then
# then do something here
echo "Hello World"
else
# log an error - folder is missing or number of files is zero in server $machine
fi
done
UPDATE:
for machine in ${MACHINES[*]}; do
dircheck=($(ssh -o "StrictHostKeyChecking no" david@${machine} [[ ! -d "$dir3" ]] \&\& exit 1 \; ls -t1 "$dir3"))
#On the ssh command, we exit 1 if the folder doesn't exist. We check the return code with `$?`
if [[ $? != 0 ]] ;then
echo "Folder doesn't exist on $machine";
exit 1
fi
# check number of files retrieved
if [[ "${#dircheck[@]}" = 0 ]] ;then
echo "0 Files on server $machine";
exit 1
fi
#all good for $machine here
done
#all good for all machines here
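For context, the loop above relies on the definitions from the question; a self-contained sketch (re-using the question's host and path names as assumptions) might look like this:
#!/bin/bash
readonly MACHINES=(machineP machineQ)
readonly MAPPED_LOCATION=/bat/testdata/t1_snapshot
readonly FILE_TIMESTAMP=20140311
dir3="$MAPPED_LOCATION/$FILE_TIMESTAMP"
for machine in "${MACHINES[@]}"; do
dircheck=($(ssh -o "StrictHostKeyChecking no" david@${machine} [[ ! -d "$dir3" ]] \&\& exit 1 \; ls -t1 "$dir3"))
if [[ $? != 0 ]] ;then
echo "Folder doesn't exist on $machine"
exit 1
fi
if [[ "${#dircheck[@]}" = 0 ]] ;then
echo "0 Files on server $machine"
exit 1
fi
done
echo "Everything is Correct"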

BASH script: handling paths with escaped spaces

I have a bash script which I would like to have handle spaces. I know there are a ton of questions on here about this, but I was unable to resolve my problem.
According to what I've read, the following should work. The space in
../tool_profile/OS\ Firmware/updater is escaped, and in the script the $2 variable is enclosed in quotes when it is assigned to DEST.
If I pass this path to ls enclosed in quotes or with escaped spaces on the command line, it works.
example script command:
./make_initramfs.sh initramfs_root/ ../tool_profile/OS\ Firmware/updater/ initramfs
error from ls in script:
ls: cannot access ../tool_profile/OS Firmware/updater/: No such file or directory
make_initramfs.sh:
#!/bin/bash
if [ $# -ne 3 ]; then
echo "Usage: `basename $0` <root> <dest> <archive_name>"
exit 1
fi
ROOT=$1
DEST="$2"
NAME=$3
echo "[$DEST]"
# cd and hide output
cd $ROOT 2&>/dev/null
if [ $? -eq 1 ]; then
echo "invalid root: $ROOT"
exit 1
fi
ls "$2" # doesn't work
ls "$DEST" # doesn't work
# check for 'ls' errors
#if [ $? -eq 1 ]; then
# echo "invalid dest: $DEST"
# exit 1
#fi
#sudo find . | sudo cpio -H newc -o | gzip --best > "$DEST"/"$NAME"
Thank you for any clues to what I am doing wrong! ^_^
Okay... so right as I submitted this I realized what I was doing wrong.
I was passing two relative paths in and changing to the first one before verifying the second one. So once I changed directory, the second relative path was no longer valid. I will post an updated script once I get it finished.
Edit: I finished my script. See below.
Edit2: I updated this based on everyone's comments. Thanks everyone!
make_initramfs.sh:
#!/bin/bash
if (( $# != 2 )); then
echo "Usage: `basename $0` <root> <dest>"
exit 1
fi
root="$1"
archive="${2##*/}"
dest="$PWD/${2%/*}"
# cd and hide errors
cd "$root" &>/dev/null
if (( $? != 0 )); then
echo "invalid path: $root"
exit 1
fi
if [ ! -d "$dest" ]; then
echo "invalid path: $dest"
exit 1
fi
if [ "$archive" = "" ]; then
echo "no archive file specified"
exit 1
fi
if [ `whoami` != root ]; then
echo "please run this script as root or using sudo"
exit 1
fi
find . | cpio -H newc -o | gzip --best > "$dest"/"$archive"
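A hypothetical invocation with a path containing spaces (the archive name is made up; the script takes the destination directory and archive name as a single second argument):
# quoted path with spaces:
sudo ./make_initramfs.sh initramfs_root/ "../tool_profile/OS Firmware/updater/initramfs"
# or, with escaped spaces instead of quotes:
sudo ./make_initramfs.sh initramfs_root/ ../tool_profile/OS\ Firmware/updater/initramfs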
