Bash script: if a file exists and is larger than a given size - Linux

*Note: I edited this, so my final functioning code is below.
OK, so I'm writing a bash script to back up our MySQL database to a directory, delete the oldest backup once 10 exist, and output the results of the backup to a log so I can create alerts if it fails. Everything works great except the if loop to output the results. Thanks again for the help, guys; the code is below!
#! /bin/bash
#This creates a variable with the date stamp to add to the filename
now=$(date +"%m_%d_%y")
#This moves the bash shell to the directory of the backups
cd /dbbkp/backups/
#Counts the number of files in the directory with the *.sql extension and deletes the oldest once more than 10 exist.
[[ $(ls -ltr *.sql | wc -l) -gt 10 ]] && rm $(ls -ltr *.sql | awk 'NR==1{print $NF}')
#Moves the bash shell to the mysql bin directory to run the backup script
cd /opt/GroupLink/everything_HelpDesk/mysql/bin/
#command to run and dump the mysql db to the directory
./mysqldump -u root -p dbname > /dbbkp/backups/ehdbkp_$now.sql --protocol=socket --socket=/tmp/GLmysql.sock --password=password
#Echo the results to the log file
#Change back to the directory you created the backup in
cd /dbbkp/backups/
#If loop to check that the backup exists and is the proper size
if find ehdbkp_$now.sql -type f -size +51200c 2>/dev/null | grep -q .; then
echo "The backup has run successfully" >> /var/log/backups
else
echo "The backup was unsuccessful" >> /var/log/backups
fi

Alternatively, you could use stat instead of find.
if [ $(stat -c %s ehdbkp_$now.sql 2>/dev/null || echo 0) -gt 51200 ]; then
echo "The backup has run successfully"
else
echo "The backup was unsuccessful"
fi >> /var/log/backups
Option -c %s tells stat to return the size of the file in bytes. This takes care of both the presence of the file and the size being greater than 51200. When the file is missing, stat errs out, so we redirect the error message to /dev/null. The logical or || gets executed only when the file is missing, so the comparison becomes [ 0 -gt 51200 ], which is false.
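For example, when the file is missing, the command substitution falls back to 0 (the file name here is just illustrative):
$ stat -c %s ehdbkp_missing.sql 2>/dev/null || echo 0
0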

To check if the file exists and is larger than 51200 bytes, you could rewrite your if like this:
if find ehdbkp_$now.sql -type f -size +51200c 2>/dev/null | grep -q .; then
echo "The backup has run successfully"
else
echo "The backup has was unsuccessful"
fi >> /var/log/backups
Other notes:
The find takes care of two things at once: it checks that the file exists and that its size is greater than 51200 bytes.
We redirect stderr to /dev/null to hide the error message if the file doesn't exist.
If there is a file matching both conditions, then grep will match and exit with success; otherwise it will exit with failure.
The final outcome of the grep is what decides the if condition.
I moved the >> /var/log/backups after the closing fi, as it's equivalent this way and avoids duplication.
Btw if is NOT a loop, it's a conditional.
UPDATE
As @glennjackman pointed out, a better way to write the if, without grep:
if [[ $(find ehdbkp_$now.sql -type f -size +51200c 2>/dev/null) ]]; then
...

Related

Why is a part of the code inside a (False) if statement executed?

I wrote a small script which:
prints the content of a file (generated by another application) on paper with a matrix printer
prints the same line into a backup file
removes the original file.
The script is run every minute by a cron job and works fine as long as there are files to print. If there are no files to print, it prints an empty line on the matrix printer and in the backup file. I don't understand why this happens, as I implemented an if statement which checks whether there is a file to print before the print command is executed. This behaviour only happens when the script is executed by cron, not when I execute it manually with ./script.sh. What's the reason for this, and how can I solve it?
Something I noticed on the side is that if I place an echo "hi" command in the script, it's printed to the matrix printer and the backup file. I expected it to be printed to the console when there is no >> something behind it. How does this work?
The script:
#!/bin/bash
# Make sure the backup directory exists
if [ ! -d /home/user/backup_logprint ]
then
mkdir /home/user/backup_logprint
fi
# Print the records if there are any
date=`date +%Y-%m-%d`
filename='_logprint_backup'
printer_path="/dev/usb/lp0"
if [ `ls /tmp/ | grep logprint | wc -l` -gt 0 ]
then
for f in `ls /tmp | grep logprint`
do
echo `cat /tmp/$f` >> "/home/user/backup_logprint/$date$filename"
echo `cat /tmp/$f` >> $printer_path
rm "/tmp/$f"
done
fi
There's no need for ls or an if statement. Just use a proper glob in the for loop; if no files match, the loop body won't be entered.
#!/bin/bash
# Don't check first; just let mkdir decide if
# anything actually needs to be created.
d=/home/user/backup_logprint
mkdir -p "$d"
filename=$(date +"$d/%Y-%m-%d_logprint_backup")
printer_path="/dev/usb/lp0"
# Cause non-matching globs to expand to an empty
# sequence instead of being treated literally.
shopt -s nullglob
for f in /tmp/*logprint*; do
cat "$f" > "$printer_path" && mv "$f" "$d"
done
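To see what nullglob changes when nothing matches (assuming /tmp holds no file with logprint in its name):
$ shopt -u nullglob
$ for f in /tmp/*logprint*; do echo "got: $f"; done
got: /tmp/*logprint*
$ shopt -s nullglob
$ for f in /tmp/*logprint*; do echo "got: $f"; done
$
With nullglob unset, the unmatched pattern is passed through literally, so the loop would run once with a bogus name.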

Renaming directories at multiple levels using find from bash

I'm looping over the results of find and renaming every one of those folders. My problem is that when I encounter /aaaa/logs/ and, after it, /aaaa/logs/bbb/logs, the command mv /aaaa/logs/bbb/logs /aaaa/log/bbb/log can't find the folder, because its parent has already been renamed. That is, find may still report the name as /aaaa/logs/bbb/logs when the script has already moved that subtree to /aaaa/log/bbb/.
Simple code:
#!/bin/bash
script_log="/myPath"
echo "Info" > $script_log
search_names_folders=`find /home/ -type d -name "logs*"`
while read -r line; do
mv $line ${line//logs/log} >>$script_log 2>&1
done <<< "$search_names_folders"
My solution is:
#!/bin/bash
script_log="/myPath"
echo "Info" > $script_log
search_names_folders=`find /home/ -type d -name "logs*"`
while read -r line; do
number_of_occurrences=$(grep -o "logs" <<< "$line" | wc -l)
if [ "$number_of_occurrences" != "1" ]; then
real_path=${line//logs/log} ## get the full path, the suffix will be incorrect
real_path=${real_path%/*} ## get the prefix until the last /
suffix=${line##*/} ## get the real suffix
line=$real_path/$suffix ## add the full correct path to line
mv $line ${line//logs/log} >>$script_log 2>&1
fi
done <<< "$search_names_folders"
But it's a bad idea. Does anyone have other solutions?
Thanks!
Use the -depth option to find. This makes it process directory contents before it processes the directory itself.
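A minimal sketch of the original loop reworked with -depth (note that with depth-first order only the last path component should be rewritten, since the parent directories have not been renamed yet):
#!/bin/bash
script_log="/myPath"
echo "Info" > "$script_log"
# -depth lists the deepest directories first, so each directory is
# renamed while its (still unrenamed) parent path is intact.
find /home/ -depth -type d -name "logs*" | while read -r line; do
    parent=${line%/*}    # path up to the last /
    base=${line##*/}     # the directory's own name
    mv "$line" "$parent/${base//logs/log}" >> "$script_log" 2>&1
done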

How to extract only file name return from diff command?

I am trying to prepare a bash script to sync 2 directories, but I am not able to get the file names returned by diff; every time, the output gets split into separate words.
Here is my code :
#!/bin/bash
DIRS1=`diff -r /opt/lampp/htdocs/scripts/dev/ /opt/lampp/htdocs/scripts/www/ `
for DIR in $DIRS1
do
echo $DIR
done
And if I run this script I get output something like this:
Only
in
/opt/lampp/htdocs/scripts/www/:
file1
diff
-r
"/opt/lampp/htdocs/scripts/dev/File
1.txt"
"/opt/lampp/htdocs/scripts/www/File
1.txt"
0a1
>
sa
das
Only
in
/opt/lampp/htdocs/scripts/www/:
File
1.txt~
Only
in
/opt/lampp/htdocs/scripts/www/:
file
2
-
second
Actually I just want the file names where I find a difference, so I can take a particular action (either copy or delete).
Thanks
I don't think diff produces output which can be parsed easily for your purposes. It's possible to solve your problem by iterating over the files in the two directories and running diff on each pair, using the return value from diff instead (and throwing the diff output away).
The code to do this is a bit long, but here it is:
DIR1=./one # set as required
DIR2=./two # set as required
# Process any files in $DIR1 only, or in both $DIR1 and $DIR2
find $DIR1 -type f -print0 | while read -d $'\0' -r file1; do
relative_path=${file1#${DIR1}/};
file2="$DIR2/$relative_path"
if [[ ! -f "$file2" ]]; then
echo "'$relative_path' in '$DIR1' only"
# Do more stuff here
elif diff -q "$file1" "$file2" >/dev/null; then
echo "'$relative_path' same in '$DIR1' and '$DIR2'"
# Do more stuff here
else
echo "'$relative_path' different between '$DIR1' and '$DIR2'"
# Do more stuff here
fi
done
# Process files in $DIR2 only
find $DIR2 -type f -print0 | while read -d $'\0' -r file2; do
relative_path=${file2#${DIR2}/};
file1="$DIR1/$relative_path"
if [[ ! -f "$file2" ]]; then
echo "'$relative_path' in '$DIR2 only'"
# Do more stuff here
fi
done
This code leverages find -print0 together with read -d $'\0' to safely handle file names which contain spaces, something that would be very difficult to get right by parsing diff output.
Of course this doesn't do anything regarding files which have the same contents but different names or are located in different directories.
I tested by populating two test directories as follows:
echo "dir one only" > "$DIR1/dir one only.txt"
echo "dir two only" > "$DIR2/dir two only.txt"
echo "in both, same" > $DIR1/"in both, same.txt"
echo "in both, same" > $DIR2/"in both, same.txt"
echo "in both, and different" > $DIR1/"in both, different.txt"
echo "in both, but different" > $DIR2/"in both, different.txt"
My output was:
'dir one only.txt' in './one' only
'in both, different.txt' different between './one' and './two'
'in both, same.txt' same in './one' and './two'
'dir two only.txt' in './two' only
Use the -q flag and avoid the for loop:
diff -rq /opt/lampp/htdocs/scripts/dev/ /opt/lampp/htdocs/scripts/www/
If you only want the names of the files that differ:
diff -rq /opt/lampp/htdocs/scripts/dev/ /opt/lampp/htdocs/scripts/www/ | grep -Po '(?<=^Files ).*(?= and )' | while read -r file; do
echo "$file"
done
-q --brief
Output only whether files differ.
But you should definitely check out rsync: http://linux.die.net/man/1/rsync
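For example, a dry run that shows what rsync would copy or delete to make www/ mirror dev/ (drop -n to actually apply the changes):
rsync -avn --delete /opt/lampp/htdocs/scripts/dev/ /opt/lampp/htdocs/scripts/www/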

Shell script: Count files, delete 'X' oldest file

I am new to scripting. Currently I have a script that backs up a directory every day to a file server, and deletes the oldest file beyond 14 days. My issue is that I need it to count the actual files and delete the oldest one beyond the 14 newest. When going by days, if the file server or host is down for a few days or longer, the script will delete several days' worth of backups when it comes back up, or even all of them, depending on the downtime. I want it to always have 14 backups.
I tried searching around and could only find solutions related to deleting by date, like what I have now.
Thank you for the help/advice!
Here is the code I have; sorry, it's my first attempt at scripting:
#! /bin/sh
#Check for file. If not found, the connection to the file server is down!
if [ -f /backup/connection ]; then
echo "File Server is connected!"
#Directory to be backed up.
backup_source="/var/www/html/moin-1.9.7"
#Backup directory.
backup_destination="/backup"
#Current date to name files.
date=`date '+%m%d%y'`
#naming the file.
filename="$date.tgz"
echo "Backing up directory"
#Creating the back up of the backup_source directory and placing it into the backup_destination directory.
tar -cvpzf $backup_destination/$filename $backup_source
echo "Backup Finished!"
#Search for files older than '+X' days (14 here) and delete them.
find /backup -type f -ctime +13 -exec rm -rf {} \;
else
echo "File Server is NOT connected! Date:`date '+%m-%d-%y'` Time:`date '+%H:%M:%S'`" > /user/Desktop/error/`date '+%m-%d-%y'`
fi
Something along these lines might work:
ls -1t /path/to/directory/ | head -n 14 | tail -n 1
In the ls command, -1 lists just the file names (nothing else) and -t sorts them by modification time (newest first). Piping through head takes just the first 14 lines of that output, then tail -n 1 takes just the last of those. This should give you the file that is 14th newest.
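Building on that, here is a sketch that deletes everything except the 14 newest backups. It parses ls, so it assumes file names without spaces or newlines; xargs -r (a GNU extension) skips running rm when there is nothing to delete:
ls -1t /backup/*.tgz | tail -n +15 | xargs -r rm --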
Here is another suggestion. The following script simply enumerates the backups, which eases the task of keeping track of the last n backups. If you need to know the actual creation date, you can check the file metadata, e.g. using stat.
#!/bin/sh
set -e
backup_source='somedir'
backup_destination='backup'
retain=14
filename="backup-$retain.tgz"
check_fileserver() {
nc -z -w 5 file.server.net 80 2>/dev/null || exit 1
}
backup_advance() {
if [ -f "$backup_destination/$filename" ]; then
echo "removing $filename"
rm "$backup_destination/$filename"
fi
for i in $(seq "$retain" -1 2); do
file_to="backup-$i.tgz"
file_from="backup-$(($i - 1)).tgz"
if [ -f "$backup_destination/$file_from" ]; then
echo "moving $backup_destination/$file_from to $backup_destination/$file_to"
mv "$backup_destination/$file_from" "$backup_destination/$file_to"
fi
done
}
do_backup() {
tar czf "$backup_destination/backup-1.tgz" "$backup_source"
}
check_fileserver
backup_advance
do_backup
exit 0
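Since the rotated names no longer encode a date, check a file's modification time to see when a given backup was actually taken, e.g.:
stat -c '%y' backup/backup-3.tgz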

Looping through files in different directory given command line argument

I'm trying to extend a script that implements something like a recycling bin for files on Linux. The code that I'm extending is at the bottom.
In my extension, when the script is presented with the command line argument -cleanup I want to loop through files that are in the /home/7/bearm/.garbage directory, and have the user decide whether they want to delete the file or not.
However, I don't know how to detect when the command line argument is there. The command line can have other parameters; I just want to loop through the files when -cleanup is used.
I also do not know how to loop through files that are in a different directory (/home/7/bearm/.garbage).
How would I go about doing these things?
set directory = '/home/7/bearm/.garbage/'
if(! -d "$directory") then
mkdir .garbage
mv .garbage /home/7/bearm/
endif
set n = 1
while ($n <= $#argv)
set file = $argv[$n]
if(-d $file) then
#do nothing
echo "Cannot trash directory $file"
else
mv $file /home/7/bearm/.garbage
echo "Trashed $file"
endif
@ n++
end
du -h /home/7/bearm/.garbage
To test whether the arguments contain -cleanup, you can do this (tested with ash on Minix3):
if echo "$*" | grep -- "-cleanup" >/dev/null 2>&1; then
echo "-cleanup is present..."
fi
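For example (the script name here is hypothetical):
$ ./trash.sh file1.txt -cleanup file2.txt
-cleanup is present...
Note that this is a quick-and-dirty substring test: it also triggers on arguments that merely contain -cleanup, such as --no-cleanup.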
Moreover, if you want a proper solution using long GNU-style options, see http://www.sputnick-area.net/scripts/getopts_long_example.sh and http://www.sputnick-area.net/scripts/getopts_long.sh
A bash version of your pseudo script :
#!/bin/bash
directory='/home/7/bearm/.garbage/'
mkdir -p "$directory"
for arg; do
if [[ -d $arg ]]; then
#do nothing
echo "Cannot trash directory $arg" >&2
else
mv "$arg" "$directory"
echo "Trashed $arg"
fi
done
du -sh "$directory"
Feel free to improve it with a -cleanup switch.
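A minimal sketch of what that switch might look like, assuming -cleanup is passed as the first argument (the prompt text is illustrative):
#!/bin/bash
directory='/home/7/bearm/.garbage/'
mkdir -p "$directory"
if [[ $1 == -cleanup ]]; then
    for f in "$directory"*; do
        [[ -e $f ]] || continue    # empty bin: the glob stayed literal
        read -r -p "Delete '${f##*/}'? [y/N] " answer
        [[ $answer == [Yy]* ]] && rm -- "$f"
    done
    exit 0
fi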
