How to get echo to print only deleted file paths?

I'm trying to write a script, to be run from cron, that creates daily mysqldumps in a directory and also checks all the backups in that directory, removing any that are older than 7 days.
My functions work correctly; it's just my last echo command that is not doing what I want. This is what I have so far:
DBNAME=database
DATE=`date +\%Y-\%m-\%d_\%H\%M`
SQLFILE=$DBNAME-${DATE}.sql
curr_dir=$1
#MAIN
mysqldump -u root -ppassword --databases $DBNAME > $SQLFILE
echo "$SQLFILE has been successfully created."
#Remove files older than 7 days
for filepath in "$curr_dir"*
do
find "$filepath" -mtime +7 -type f -delete
echo "$filepath has been deleted."
done
exit
The backup creation and the removal of old files both work. But my problem is that echo "$filepath has been deleted." prints every file in the directory instead of just the files older than 7 days that were deleted. Where am I going wrong here?
EDIT (Full solution):
This is the full solution that ended up working for me, using everyone's advice from the answers and comments. It works as a cron job. I had to give mysqldump an explicit output path because the files were being created in the root directory instead of the path passed in argument $1. The if statement also checks that $1 is the specific directory I want files to be deleted in. Thank you everyone for the help!
#Variables
DBNAME=database
DATE=`date +\%Y-\%m-\%d_\%H\%M`
SQLFILE=$DBNAME-${DATE}.sql
curr_dir=$1
#MAIN
mysqldump -u root -ppassword --databases $DBNAME > /path/to/db-backups/directory/$SQLFILE
echo "$SQLFILE has been successfully created."
#Remove files older than 7 days
for filepath in "$curr_dir"*
do
if [[ $1 = "/path/to/db-backups/directory" ]]; then
find "$filepath" -mtime +7 -type f -delete -exec sh -c 'printf "%s has been deleted.\n" "$#"' _ {} +
fi
done
exit

You can merge the echo into the find:
find "$filepath" -mtime +7 -type f -delete -exec echo '{}' "has been deleted." \;
The -delete action is effectively a shortcut for -exec rm '{}' \;, and find runs all the actions in the sequence you specify them in.
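Since the actions run in order, GNU find can also report deletions without spawning a shell at all; a minimal sketch (assuming GNU find), which prints each path just before removing it:
find "$filepath" -mtime +7 -type f -print -delete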

Related

A bash script to unrar all files in a sub directory in linux

I'm on Ubuntu server 18.04.
My main goal is to run a script from a parent directory which unrars all the files inside all subdirectories of that parent directory.
I have also installed unrar (apt install unrar); it is located at /usr/bin/unrar.
This is what I have come up with so far, but it does not seem to work:
for dir in 'pwd/*/'
do
dir=${dir%*/}
cd dir
for file in dir/*/
do
"/usr/bin/unrar" x dir/*.r* dir/
done
I've found a working script for Windows which uses 7zip here
Here's a starting example; adjust pattern and echo statements as needed:
#!/bin/bash
pattern='*/*.rar'
archives=($pattern)
if [[ "${archives[#]}" == "$pattern" ]]; then
echo NONE 1>&2
exit
fi
for rpath in "${archives[@]}"; do
dir=${rpath%/*}
rar=${rpath##*/}
pushd "$dir" > /dev/null
echo -e "\ndir: $dir"
echo unrar args "$rar"
popd > /dev/null
done
If you just want to unrar every rar file in the directory where it was found, find can do that directly. It takes some getting used to, but it's well worth learning.
find . -name '*.rar' -execdir unrar {} \;
Briefly, -execdir says to run the command up to \; on each found file, in the directory where it was found; the {} placeholder gets replaced with the file name.
find . -name '*.rar' -execdir unrar {} \;
This didn't work for me, so I changed it to this and it worked:
find . -name '*.rar' -execdir unrar e -r {} \;
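Building on that, find's -exec and -execdir also act as tests, true only when the command exits 0, so a hedged sketch that removes each archive only after a successful extraction could be:
find . -name '*.rar' -execdir unrar e -r {} \; -delete
Here -delete fires only for archives that unrar extracted without error.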

How to delete older files but keep recent ones during backup?

I have a remote server that copies 30-some backup files to a local server every day and I want to remove the old backups if and only if a newer backup successfully copied.
With the different approaches I tried, I managed to erase older files, but I ran into the problem that if one new backup was found, ALL the older ones were deleted.
I have something like (picture this with 20 virtual machines):
vm001-2019-08-01.bck
vm001-2019-07-28.bck
vm002-2019-08-01.bck
vm003-2019-07-29.bck
vm004-2019-08-01.bck
vm004-2019-07-31.bck
vm004-2019-07-30.bck
vm004-2019-07-29.bck
...
And I'd want to erase all but keep only the most recent ones.
i.e.: erase:
vm001-2019-07-28.bck
vm002-2019-07-29.bck
vm004-2019-07-31.bck
vm004-2019-07-30.bck
vm004-2019-07-29.bck
and keep only:
vm001-2019-08-01.bck
vm002-2019-08-01.bck
vm003-2019-07-29.bck
vm004-2019-08-01.bck
The problem I had is that if there is any recent backup of any machine, files like vm003-2019-07-29.bck get deleted because they are older, even though they belong to different machines.
I know there are several variants of this question on the site, but I can't quite get this to work.
I've been trying variants of this code:
#!/bin/bash
for i in ./*.bck
do
echo "found" "$i"
if [[ -n $(find "$i" -type f -mmin -1440) ]]
then
echo "$i"
find "$i" -type f -mmin +1440 -exec rm -f "$i" {} +
fi
done
(The echos are for debugging purposes only)
At this time, this code finds both the newer and the older files, but doesn't delete anything. If I put find "$i" -type f -mmin +1440 -exec echo "$i" {} +, it never prints anything, as if find "$i" is not finding anything; but when I run it as a standalone command in the terminal (minus the -exec part), it does.
I've tested this script by generating files with different timestamps using touch -d, but had no success.
Unless you add the -name test before the filename, find is going to consider "$i" to be the name of a directory to search in. So your find command should be:
find -name "$i" -type f -mmin -1440
which will search in the current directory. Or
find /path/to/dir -name "$i" -type f -mmin -1440
which will search in a directory named "/path/to/dir".
But, based on BashFAQ/099, I would do this to delete all but the newest file for each VM (untested):
#!/bin/bash
declare -A newest # associative array to store name of newest file for each VM
for f in *
do
vm=${f%%-*} # extracts vm name from filename (i.e. vm001 from vm001-2019-08-01.bck)
if [[ -f $f && $f -nt ${newest["$vm"]} ]]
then
newest["$vm"]=$f
fi
done
for f in *
do
vm=${f%%-*}
if [[ -f $f && $f != ${newest["$vm"]} ]]
then
rm "$f"
fi
done
This is set up to run against files in the current directory. It assumes that the files are named as shown in the question (the VM name is separated from the rest of the file name by a hyphen). Associative arrays require Bash 4 or higher.
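To see what the ${f%%-*} expansion is doing, a quick hypothetical check:
f=vm001-2019-08-01.bck
echo "${f%%-*}"    # prints vm001: %%-* strips the longest suffix starting at a hyphen
Note this assumes the VM name itself contains no hyphen; otherwise a different delimiter would be needed.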

Delete all files older than 30 days, based on file name as date

I'm new to bash. I have a task to delete all files older than 30 days, which I can figure out based on the file name, formatted Y_M_D.ext, e.g. 2019_04_30.txt.
I know I can list all files with ls in the folder containing them. I know I can get today's date with date, and can format it to match the file names with date "+%Y_%m_%d".
I know I can delete files using rm.
How do I tie all this together into a bash script that deletes files older than 30 days from today?
In pseudo-Python it would look something like:
for file in folder:
    if date(file.name) < today - 30 days:
        delete file
I am by no means a systems administrator, but you could consider a simple shell script along the lines of:
# Generate the date in the proper format
discriminant=$(date -d "30 days ago" "+%Y_%m_%d")
# Find files based on the filename pattern and test against the date.
find . -maxdepth 1 -type f -name "*_*_*.txt" -printf "%P\n" |
while IFS= read -r FILE; do
if [ "${discriminant}" ">" "${FILE%.*}" ]; then
echo "${FILE}";
fi
done
Note that this will probably be considered a "layman" solution by a professional. Maybe this is handled better by awk, which I am unfortunately not accustomed to using.
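Once the printed names look right, the only change needed is to swap the echo inside the loop for an actual removal:
rm -- "${FILE}"
The string comparison is reliable here only because %Y_%m_%d-formatted dates sort lexicographically in chronological order.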
Here is another solution to delete log files older than 30 days:
#!/bin/bash
# An array that contains the paths of the directories to clean
rep_log=("/etc/var/log" "/test/nginx/log")
echo "Cleaning logs - $(date)."
#loop for each path provided by rep_log
for element in "${rep_log[@]}"
do
#display the directory
echo "$element";
nb_log=$(find "$element" -type f -mtime +30 -name "*.log*"| wc -l)
if [[ $nb_log != 0 ]]
then
find "$element" -type f -mtime +30 -delete
echo "Successfull!"
else
echo "No log to clean !"
fi
done
The array lets you include multiple directories in which to delete files:
rep_log=("/etc/var/log" "/test/nginx/log")
We fill the variable by searching the given directory for files that are older than 30 days and whose names contain .log, then counting the matches:
nb_log=$(find "$element" -type f -mtime +30 -name "*.log*"| wc -l)
We then check whether the count is non-zero (positive); if so, we delete:
find "$element" -type f -mtime +30 -delete
To delete files older than X days you can use one of these commands and schedule it in /etc/crontab:
find /PATH/TO/LOG/* -mtime +10 | xargs -d '\n' rm
or
find /PATH/TO/LOG/* -type f -mtime +10 -exec rm -f {} \;
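For instance, a hypothetical /etc/crontab entry (the sixth field is the user to run as) that performs the cleanup daily at 02:30:
30 2 * * * root find /PATH/TO/LOG -type f -mtime +10 -exec rm -f {} \;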

Include folder name in renaming a file in linux

I've already used the command below to rename the files in multiple directories and change JPG to jpg, so I have consistency:
find . -name '*.JPG' -exec sh -c 'mv "$0" "${0%.JPG}.jpg"' {} \;
Do you have any idea how to change that to include the folder name in the name of the file?
I am executing that in a folder that contains about 2000 folders (SKUs of products), and inside every SKU folder there are 9 images: 1.jpg, 2.jpg, ..., 9.jpg.
So the bottom line is I have 2000 sets of images named 1.jpg through 9.jpg. I need those file names to be unique, for example:
folder-name-1.jpg ... folder-name-2.jpg ... and so on, in every folder.
Any help will be appreciated.
For example, I can do as follows:
$ find . -iname '*.jpg' | while IFS= read -r fn; do name=$(basename "$fn"); dir=$(dirname "$fn"); mv "$fn" "$dir/$(basename "$dir")-$name"; done
./lib/bukovina/version.jpg -> ./lib/bukovina/bukovina-version.jpg
./lib/bukovina.jpg -> ./lib/lib-bukovina.jpg
You can use this find one-liner:
find . -name '*.jpg' -execdir \
bash -c 'd="${PWD##*/}"; [[ "$1" != "$d-"* ]] && mv "$1" "./$d-$1"' - '{}' \;
This command uses a safe approach: it checks whether the image name is already prefixed by the current directory name, so you can run it multiple times and image names won't be prefixed again after the first run.
To get the folder name of a file you can do $(basename $(dirname ${FILE})), where ${FILE} is a path that may be relative but must contain at least one folder before the file name. This should not be a problem with find; if it is, just run it from one directory up.
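A quick hypothetical check of that expansion:
FILE=./lib/bukovina/version.jpg
basename "$(dirname "${FILE}")"    # prints bukovina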
find . -name '*.JPG' -exec sh -c 'd=$(dirname "$0"); mv "$0" "$d/$(basename "$d")-$(basename "${0%.JPG}").jpg"' {} \;
Or, to start from a different directory:
find ../<dirname> -name '*.JPG' -exec sh -c 'd=$(dirname "$0"); mv "$0" "$d/$(basename "$d")-$(basename "${0%.JPG}").jpg"' {} \;

check if find command return something (in bash script)

I have the following bash script on my server:
today=$(date +"%Y-%m-%d")
find /backups/www -type f -mtime -1|xargs tar uf /daily/backup-$today.tar
As you can see, it creates backups of files modified/created in the last 24h. However, if no files are found, it creates a corrupted tar file. I would like to wrap it in an if..fi statement so it doesn't create empty/corrupted tar files.
Can someone help me modify this script?
Thanks
You can check whether the find succeeded, then check whether its result is empty:
today=$(date +"%Y-%m-%d")
results=$(find /backups/www -type f -mtime -1)
if [[ 0 == $? ]] ; then
if [[ -z $results ]] ; then
echo "No files found"
else
tar uf /daily/backup-$today.tar $results
fi
else
echo "Search failed"
fi
find /backups/www -type f -mtime -1 -exec tar uf /daily/backup-$today.tar {} +
Using -exec is preferable to xargs: there's no pipeline needed, and it handles file names with spaces, newlines, and other unusual characters without extra work. The {} at the end is a placeholder for the file names, and + (rather than \;) ends the -exec clause and packs as many file names as possible into each invocation of the command.
As a bonus it won't execute the command if no files are found.
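If you do want to keep a pipeline, a null-delimited sketch (assuming GNU find and xargs) avoids the same filename pitfalls:
find /backups/www -type f -mtime -1 -print0 | xargs -0 -r tar uf /daily/backup-$today.tar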
One relatively simple trick would be this:
today=$(date +"%Y-%m-%d")
touch /backups/www/.timestamp
find /backups/www -type f -mtime -1|xargs tar uf /daily/backup-$today.tar
That way you're guaranteed to always find at least one file (and it's minimal in size).
xargs -r does nothing if there is no input.
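Applied to the script in the question, that is a one-flag change (GNU xargs assumed; -r is short for --no-run-if-empty):
find /backups/www -type f -mtime -1 | xargs -r tar uf /daily/backup-$today.tar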
