CentOS: delete X-days old files using cron - linux

I want to delete .tar and .sql files older than x days using a cron job.
I have tried multiple commands, but none of them worked.
Commands such as:
find /path/to/files -type f -mtime +10 -delete
find /path/to/folder -name '*.sql' -mtime +30 -delete
find "$FILEDIR" -mtime +14 -delete
[[ $FILEDIR == /home/abc/* ]] && find "$FILEDIR" -mtime +14 -delete
The above commands run without errors but don't delete anything.
Need help.

I use for loops and echo statements for debugging purposes and also to make sure I don't delete anything by mistake.
So try this:
for filename in `find /path/to/files -type f -mtime +10`; do echo "rm $filename"; done
That will list what it plans to do. Check it, make sure it looks good, then run it by removing the echo statement:
for filename in `find /path/to/files -type f -mtime +10`; do rm $filename; done
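If filenames may contain spaces, the unquoted loop above will split them. A null-delimited loop is a safer variant (a sketch assuming GNU find's -print0):
find /path/to/files -type f -mtime +10 -print0 | while IFS= read -r -d '' filename; do echo "rm $filename"; done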
If it doesn't work, then you can output the above to a file and run it as a script
for filename in `find /path/to/files -type f -mtime +10`; do echo "rm $filename"; done > myremove.sh
Then run it
sh -x ./myremove.sh
And check the output above for errors.
Once it's bug-free, you can add it to cron as a single command or in a script.
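For example, a crontab entry along these lines would run the cleanup nightly (the path and age are placeholders; adjust them to your setup):
# edit with crontab -e; runs at 02:30 every night
30 2 * * * find /path/to/files -type f -mtime +10 -delete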

Related

Output from script - exec rm

I have a script which deletes files that are older than 2 days. Usually it works properly, but for some reason it didn't work today.
I want a way to get output from the script showing the error explaining why the files were not deleted.
Could you tell me, is there such an option?
script:
#!/bin/bash
#script for cleaning logs from files older than two days
dir_name=/home/albert/scripts/files
file_log=/home/albert/scripts/info.log
{
find ${dir_name} -type f -name '*.log' -mtime +2 -exec rm -v {} \;
} >> ${file_log}
You probably want to add a redirection for standard error too, i.e. in the case you want to send it to the same file:
{
find ${dir_name} -type f -name '*.log' -mtime +2 -exec rm -v {} \;
} >> ${file_log} 2>&1
You could use the find option -delete:
-delete - If the removal failed, an error message is issued.
find "${dir_name}" -type f -name '*.log' -mtime +2 -delete >> ${file_log} 2>&1
Example:
$ find /etc -name passwd -delete
find: cannot delete ‘/etc/pam.d/passwd’: Permission denied
find: cannot delete ‘/etc/passwd’: Permission denied

How to clean up folders efficiently using shell script

I am using a directory structure with various folders. There are new files created daily in some of them.
I have created some programs to clean up the directories, but I would like to use a shell script to make it more efficient.
Therefore I would like to store an "archiving.properties" file in every folder that needs to be cleaned up. The properties file should contain the following variables:
file_pattern="*.xml"
days_to_keep=2
Now my clean up routine should:
find all properties files
delete all files that match the file name pattern (file_pattern) and that are older than the defined number of days (days_to_keep), in the directory where the properties file was found.
So my question is how can I do this in the most efficient way?
find . -type f -name "archiving.properties" -print
find . -type f -name "<file_pattern>" -mtime +<days_to_keep> -delete
Currently I am trying the following in a single folder. It prints out the command correctly, but nothing is deleted.
#!/bin/bash
. archiving.properties
find . -type f -name "*.xml" -mtime +1 -exec rm -rf {} \;
echo " find . -type f -name \"${file_pattern}\" -mtime +${days_to_keep} -exec rm -rf {} \;"
Result is: find . -type f -name "*.xml" -mtime +1 -exec rm -rf {} \;
Thanks for your help in advance.
I got a final result:
echo "start deleting files in " $(pwd) " ... "
#filename of the properties
properties="clean_up.properties"
#find all properties files
for prop in $(find . -type f -name "$properties"); do
    # init variables (defaults, overridden by the properties file)
    file_pattern="*._html"
    days_to_keep=14
    # load the variables from the properties file
    . "$prop"
    # define the folder of the properties file
    folder=${prop%?$properties}
    # remove all files matching the parameters in the folder where the properties were found
    echo ">>> find $folder -type f -name \"${file_pattern}\" -mtime +${days_to_keep} -exec rm -f {} \;"
    find "$folder" -type f -name "${file_pattern}" -mtime +${days_to_keep} -exec rm -f {} \;
done
echo "... done"

Remove files in subdirectories older than 1 day with Linux command

I am honestly nowhere near being a decent bash scripter, but I did a little research and found a command that seems useful:
find /path/to/files* -mtime +1 -exec rm {} \;
The question is: will this line remove directories? I want to remove only files that are images (specifically, in *.jpeg format).
No, rm without the -r flag does not remove directories.
It looks like you want to add some more filters:
-type f to match only files
-name '*.jpeg' to match only files ending with .jpeg
Lastly, instead of -exec rm {} \;, you could use the much simpler -delete.
Putting it together, this looks more appropriate for you:
find /path/to/files* -mtime +1 -type f -name '*.jpeg' -delete
Alternatively, narrow your search results to *.jpeg files and remove them with -exec:
find /path/to/files* -mtime +1 -type f -name "*.jpeg" -exec rm {} \;
It's always better to drop the -exec parameter and do a dry run before deleting:
find /path/to/files* -mtime +1 -type f -name "*.jpeg"
Each matching file will then be passed to the rm command, and nothing more.

Multiple find -exec commands in one bash script doesn't work?

I have a bash script that needs to be run by cron. It works when the script contains only one command line, but fails when there is more than one line.
#!/bin/sh
find /path/to/file1 -name 'abc_*' -type f -mtime +7 -exec rm {} \;
find /path/to/file2 -name 'def*.gz' -type f -mtime +7 -exec rm {} \;
I received a find: missing argument to `-exec' error message. I need to keep only the last 7 days of several different files in several different directories.
Why did I get that error message when all the commands already seem to be correct?
@user1576748
Is there anything that would prevent you from doing this inside one line?
example:
find /path/to/file1 /path/to/file2 \( -name 'abc*' -o -name 'def*.gz' \) -type f -mtime +7 -exec rm {} \;
The above works for me.
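If you would rather keep the two searches separate, GNU find's -delete sidesteps the -exec quoting entirely (a sketch with the same placeholder paths). Also worth checking: a stray carriage return after \; (e.g. from a script edited on Windows) is a common cause of the "missing argument to -exec" error.
#!/bin/sh
find /path/to/file1 -name 'abc_*' -type f -mtime +7 -delete
find /path/to/file2 -name 'def*.gz' -type f -mtime +7 -delete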

Linux find and delete files but redirect file names to be deleted

Is there a way to write the file names to a file before they are deleted, for later reference, to check what has been deleted?
find <PATH> -type f -name "<filePattern>" -mtime +1 -delete
Just add a -print expression to the invocation of find:
find <PATH> -type f -name "<filePattern>" -mtime +1 -delete -print > log
I'm not sure if this prints the name before or after the file is unlinked, but it should not matter. I suspect -delete -print unlinks before it prints, while -print -delete will print before it unlinks.
Like William said, you can use -print. However, instead of -print > log, you can also use the -fprint flag.
You'd want something like:
find <PATH> -type f -name "<filePattern>" -mtime +1 -fprint "<pathToLog>" -delete
For instance, I use this in a script:
find . -type d -name .~tmp~ -fprint /var/log/rsync-index-removal.log -delete
You can use -exec and rm -v:
find <PATH> -type f -name "<filePattern>" -mtime +1 -exec rm -v {} \;
rm -v will report what it is deleting.
With something like this you can execute multiple commands in the exec statement, such as logging to a file, removing the file, and whatever else you need:
find <PATH> -type f -name "<filePattern>" -mtime +1 -exec sh -c "echo {} >>mylog; rm -f {}" \;
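Note that substituting {} inside the sh -c string is fragile when names contain quotes or special characters; passing the name as a positional argument is a safer sketch (mylog is a placeholder):
find <PATH> -type f -name "<filePattern>" -mtime +1 -exec sh -c 'echo "$1" >> mylog; rm -f "$1"' sh {} \;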
From a shell script named removelogs.sh:
Run the command sh removelogs.sh in the terminal.
This is the text of the removelogs.sh file:
cd /var/log;
date >> /var/log/removedlogs.txt;
find . -maxdepth 4 -type f -name \*log.old -delete -print >> /var/log/removedlogs.txt
. - to run at this location !!! so ensure you do not run this in the root folder !!!
-maxdepth 4 - to prevent it getting out of control
-type f - to ensure just files
-name \*log.old - to ensure just your filtered names
-print - to send the result to stdout
-delete - to delete said files
>> - appends to the file; > would create a new file each time
Works for me on CentOS 7.
