how to delete files from an array after a check? - linux

I have this code:
#!/bin/bash
path="/home/asdf"
dateminusoneday=$(date +%m --date='-1 month')
date=$(date +"%Y-$dateminusoneday-%d")
list=$(find /home/asdf | grep -P '\d{4}\-\d{2}\-\d{2}' -o)
listArray=($list)
for i in "${listArray[#]}"
do
echo $i
if [[ $i < $date ]]; then
echo "delete file"
else
echo "no need delete this file" fi done
I need to delete the files whose date is older than that cutoff, but I can't get it to work.
What would be the best way to do this?
Thanks, all.

From your code I see that you are trying to delete files older than one month. If I'm not mistaken, and you can accept that one month == 30 days, you can use a one-liner like this:
find "$path" -mtime +30 -delete
If you want exactly one month (not 30 days), you can use:
#!/bin/bash
path="/home/asdf"
number_of_days=$((($(date '+%s')-$(date -d '1 month ago' '+%s'))/86400))
find "$path" -mtime +$number_of_days -delete

Related

Why is the check if a file exists in shell always false?

I created a cron job using bash to delete files older than 3 days, but the age check with mtime +3 &> /dev/null is always false. Here's the script:
now=$(date)
# create log file
file_names=('*_takrib_golive.gz' '*_takrib_golive_filestore.tar.gz')
touch /media/nfs/backup/backup_delete.log
echo "Date: $now" >> /media/nfs/backup/backup_delete.log
for filename in "${file_names[#]}";
do
echo $filename
if ls /media/nfs/backup/${filename} &> /dev/null
then
echo "backup files exist"
if find /media/nfs/backup -maxdepth 1 -mtime +3 -name ${filename} -ls &> /dev/null
then
echo "The following backup file was deleted" >> /media/nfs/backup/backup_delete.log
find /media/nfs/backup -maxdepth 1 -mtime +3 -name ${filename} -delete
else
echo "There are no ${filename} files older than 3 days in /media/nfs/backup" &>> /media/nfs/backup/backup_delete.log
fi
else
echo "No ${filename} files found in /media/nfs/backup" >> /media/backup/backup_delete.log
fi
done
exit 0
The check if find /media/nfs/backup -maxdepth 1 -mtime +3 -name ${filename} -ls &> /dev/null always goes to the else branch, even though files older than 3 days are in the directory.
You are not quoting the -name argument, so the shell expands the wildcard before find sees it, into the name of a file which already exists.
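For reference, the minimal change to your original script would be quoting the pattern in both find calls, so that find rather than the shell does the matching (this keeps the rest of the script as-is; $filename is still the glob from your file_names array):
# Quoting keeps the glob out of the shell's hands; find matches it itself
find /media/nfs/backup -maxdepth 1 -mtime +3 -name "${filename}" -ls
find /media/nfs/backup -maxdepth 1 -mtime +3 -name "${filename}" -delete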
I would refactor this rather extensively anyway. Don't parse ls output and perhaps simplify this by making it more obvious when to quote and when not to.
Untested, but hopefully vaguely useful still:
#!/bin/bash
backup=/media/nfs/backup
backuplog=$backup/backup_delete.log
# no need to touch if you write to the file anyway
date +"Date: %C" >> "$backuplog"
# Avoid using a variable, just loop over the stems
for stem in takrib_golive takrib_golive_filestore.tar
do
# echo $filename
# Avoid parsing ls; instead, loop over matches
for filename in "$backup"/*_"$stem".gz; do
pattern="*_$stem.gz"
if [ -e "$filename" ]; then
echo "backup files exist"
if files=$(find "$backup" -maxdepth 1 -mtime +3 -false -o -name "$pattern" -print -delete)
then
echo "The following backup file was deleted" >> "$backuplog"
echo "$files" >> "$backuplog"
else
echo "There are no $pattern files older than 3 days in $backup" >> "$backuplog"
fi
else
echo "No $pattern files found in $backup" >> "$backuplog"
fi
# Either way, we can break the loop after one iteration
break
done
done
# no need to explicitly exit 0
The for + if [ -e ... ] arrangement is slightly clumsy, but that's how you check if a wildcard matched any files. If the wildcard did not match, if [ -e will check for a file whose name is literally the wildcard expression itself, and fail.
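If you prefer to avoid that idiom, bash's nullglob option makes an unmatched wildcard expand to nothing, so the loop body simply never runs; a short sketch using the same backup path:
#!/bin/bash
shopt -s nullglob                 # unmatched globs expand to an empty list
backup=/media/nfs/backup

for filename in "$backup"/*_takrib_golive.gz; do
    # We only get here if at least one file actually matched
    echo "backup file exists: $filename"
done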

Delete files created more than 7 days ago regardless of the modified time

I have to delete log files created more than 7 days ago, even if they were modified within the last 7 days. But the only solution I can find anywhere is based on the find command's mtime option, as below:
find /path/to/files -mtime +7 -exec rm {} \;
What is a possible solution to this problem?
If you're using a filesystem that tracks file birth time and a current enough Linux kernel, glibc and GNU coreutils, you can do something like
weekago=$(date -d "last week" +%s)
for file in /path/to/files/*; do
birthdate=$(stat -c %W "$file")   # %W: birth time in seconds since the epoch, 0 if unknown
if [[ $birthdate -gt 0 && $birthdate -lt $weekago ]]; then
printf "%s\n" "$file"
# Uncomment when you're sure you're getting just the files you want
# rm -- "$file"
fi
done
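To check up front whether your filesystem records birth time at all, you can print the %W field directly; GNU stat reports 0 when the birth time is unknown. A quick check might look like:
# Epoch birth time followed by the file name; 0 means no birth time recorded
stat -c '%W %n' /path/to/files/*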

Find all files whose filename falls in a specific date range on Terminal/Linux

I have a surveillance camera which captures images based on my given conditions. The images are saved on my Linux machine. The image naming convention is given below:
CAPTURE04.YYYYMMDDHHMMSS.jpg
The directory contains the following files -
CAPTURE04.20171020080501.jpg
CAPTURE04.20171021101309.jpg
CAPTURE04.20171021101913.jpg
CAPTURE04.20171021102517.jpg
CAPTURE04.20171021103422.jpg
CAPTURE04.20171022103909.jpg
CAPTURE04.20171022104512.jpg
CAPTURE04.20171022105604.jpg
CAPTURE04.20171022110101.jpg
CAPTURE04.20171022112513.jpg ... and so on.
Now I'm trying to find a way to get all files within a specific date-time range, based on the filename, using a terminal command.
Note: this needs to follow the filename (YYYYMMDDHHMMSS), not the file created/modified time.
For example, I need to get all files whose filename is between 2017-10-20 08:30:00 and 2017-10-22 09:30:00.
I've been searching Google and got the following command:
find -type f -newermt "2017-10-20 08:30:00" \! -newermt "2017-10-22 09:30:00" -name '*.jpg'
It returns the files which were created/modified in the given date range, but I need to find files based on the filename range, so it does not work for my case.
I also tried the following command:
find . -maxdepth 1 -size +1c -type f \( -name 'CAPTURE04.20171020083000*.jpg' -o -name 'CAPTURE04.2017102209300*.jpg' \) | sort -n
This is not working. :(
Please help me write the correct command. Thanks in advance.
Complete find + bash solution:
find . -type f -regextype posix-egrep -regex ".*CAPTURE04\.[0-9]{14}\.jpg" -exec bash -c \
'fn=${0##*/}; d=${fn:10:-4};
[[ $d -ge 20171020083000 && $d -le 20171022093000 ]] && echo "$0"' {} \;
fn=${0##*/} - obtaining file basename
d=${fn:10:-4} - extracting datetime section from the file's basename
[[ $d -ge 20171020083000 && $d -le 20171022093000 ]] && echo "$0" - prints the filepath only if its datetime section is in the specified range
One way (bash), not an elegant one:
ls CAPTURE04.2017102008{30..59}*.jpg CAPTURE04.2017102009{00..30}*.jpg 2>/dev/null
Since the maxdepth option is used, all files are in the current directory, so this can be done in a loop with globs:
for file in CAPTURE04.201710{20..22}*.jpg; do
if [[ $file > CAPTURE04.20171020083000 && $file < CAPTURE04.20171022093000 ]]; then
# ... do something with "$file"
fi
done
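If you want to collect the matches for later use instead of handling them one at a time, the same comparison can fill an array; a sketch with the bounds from the question:
#!/bin/bash
shopt -s nullglob   # so an unmatched glob yields no iterations
matches=()
for file in CAPTURE04.201710{20..22}*.jpg; do
    if [[ $file > CAPTURE04.20171020083000 && $file < CAPTURE04.20171022093000 ]]; then
        matches+=("$file")
    fi
done
(( ${#matches[@]} )) && printf '%s\n' "${matches[@]}"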

How to display true if find is not empty

I am very new to bash. I just started learning last week. I am trying to search for a file name.
How can I display a message if the file is found?
This is what I have, but it keeps saying 'no':
echo ' [Enter] a file name '
read findFile
if [[ -n $(find /$HOME -type f -name "findFile") ]]
then
echo 'yes'
else
echo 'no'
fi
A few issues:
Use var= or read var when defining a variable, but $var when using it.
There is no reason to keep searching after finding a file, so do something like below, where find will -quit after finding a single file and return it as a result of the -print
#!/bin/bash
echo ' [Enter] a file name '
read findFile
if [[ -f $(find "$HOME" -type f -name "$findFile" -print -quit) ]]; then
echo 'yes'
else
echo 'no'
fi
Note that the option -quit will work on GNU and FreeBSD operating systems (which means this will work in most cases), but for example, you will need to change it to -exit on NetBSD.
You can see this answer from Unix/Linux StackExchange for details on this option.
Also note, per Adaephon's comment, that although the / is not needed in front of $HOME, it's not wrong and the files will still be found.
Use wc to count the number of lines in the find output:
if [ "$(find "$HOME" -type f -name "$findFile" 2> /dev/null | wc -l)" -gt 0 ]; then
echo 'yes'
else
echo 'no'
fi
The 2> /dev/null part hides possible error messages.

Check if directory has changed

I am working on a backup script and I've got a problem. I would like to back up my documents to an FTP server. Because I don't like encfs, I am trying to do this with 7-Zip and encrypted archives. This is working well, but I would like to create a new archive only when a file inside a subdirectory has changed, so that lftp only uploads the changed ones.
My code snippet looks like this:
cd /mnt/HD_a2/documents
for i in */
do 7za a -t7z /mnt/HD_a2/encrypted/ul_doc/"${i%/}.7z" -p1234 -mhe "$i"
done
How can I change this code so it's only creating a new archive when a file inside "i" has been changed within the last 7 days? (This script is executed by cron every 7 days)
for i in */
do
if [ `find "$i" -type f -mtime -7 | wc -l` -gt 0 ]
then 7za a -t7z /mnt/HD_a2/encrypted/ul_doc/"${i%/}.7z" -p1234 -mhe "$i"
fi
done
So, Barmar's answer is almost right, but it does not count files correctly. I've looked around at other similar questions and it seems to be a common mistake (note that it is not critical for his solution, but it might confuse other programmers who touch it, because it does not do what most people expect): nobody accounts for the fact that filenames can contain newlines. So here is a slightly better version that gives you the right file count:
for i in */
do
fileCount=$(find "$i" -type f -mtime -8 -exec printf . \;)
if (( ${#fileCount} > 0 )); then
7za a -t7z /mnt/HD_a2/encrypted/ul_doc/"${i%/}.7z" -p1234 -mhe "$i"
fi
done
But what if you have thousands of files? That would build a string that is exactly as long as the number of your files.
So we can use this:
for i in */
do
fileCount=$(find "$i" -type f -mtime -8 -exec printf . \; | wc -c)
if (( fileCount > 0 )); then
7za a -t7z /mnt/HD_a2/encrypted/ul_doc/"${i%/}.7z" -p1234 -mhe "$i"
fi
done
Or this:
for i in */
do
fileCount=$(find "$i" -type f -mtime -8 -exec echo \; | wc -l)
if (( fileCount > 0 )); then
7za a -t7z /mnt/HD_a2/encrypted/ul_doc/"${i%/}.7z" -p1234 -mhe "$i"
fi
done
Hoping that useless string data is not going to be cached.
But we only need to know whether there is AT LEAST one file; we don't have to count all of them! This way we can stop immediately once something is found.
for i in */
do
fileFound=$(find "$i" -type f -mtime -8 | head -n 1)
if [[ $fileFound ]]; then
7za a -t7z /mnt/HD_a2/encrypted/ul_doc/"${i%/}.7z" -p1234 -mhe "$i"
fi
done
That's it. This solution will work MUCH faster because it does not have to look at the remaining files, and it does not even have to check their mtime. Try running this code without head and you'll notice a significant difference: for my home folder it's the difference between less than a second and several hours. (I'm not even sure it would ever finish on my PC; I have millions of small files in my home folder...)
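On GNU or FreeBSD find you can get the same early exit without the pipe by letting find stop itself with -print -quit, as in the earlier answer; a sketch:
for i in */
do
    # -quit stops find at the first match, so no head or pipe is needed
    if [[ -n $(find "$i" -type f -mtime -8 -print -quit) ]]; then
        7za a -t7z /mnt/HD_a2/encrypted/ul_doc/"${i%/}.7z" -p1234 -mhe "$i"
    fi
done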
