How to capture both success and error messages for the Linux "find" command

I'm trying to run an auto-delete script to free up space on a remote server.
The command I'm planning to use is:
find . -atime +30 -mtime +30 -type f -delete
What I also want is to capture which files were successfully deleted and which failed because of access issues. How should I do this? I think the command below might capture only the failures, but I'm not sure.
find . -atime +30 -mtime +30 -type f -delete 2>failed_deletions.txt

Out of the box, find does not print the files it processes. If you want to list them, add a -print or -ls before the -delete.
This obviously prints all the files it processes, including the ones it fails to delete for whatever reason.
Redirecting standard output and standard error to separate files is then straightforward: command >stdout 2>stderr
The final command would then become:
find . -atime +30 -mtime +30 -type f \
-print -delete >success.txt 2>errors.txt
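If you want a quick summary after the run, the two logs can be inspected with ordinary tools; a minimal sketch, assuming the success.txt and errors.txt names used above:
# count how many files were listed (and therefore processed)
wc -l success.txt
# look for deletions that failed, e.g. because of permissions
grep 'Permission denied' errors.txt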

Less performant, but should do what you wanted:
find . -atime +30 -mtime +30 -type f -exec rm -v {} \; >successful.txt 2>failed.txt
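The performance cost here comes from starting one rm process per file. If that matters, the + terminator of -exec hands many file names to a single rm invocation while keeping the verbose output; a sketch of the same idea:
find . -atime +30 -mtime +30 -type f -exec rm -v {} + >successful.txt 2>failed.txt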

Related

How to delete files older than X days and print a list of removed files

Suppose the following call in a script:
find $dir/$CACHE_DIR_SUBPATH -type f -mtime +$RETENTION_DAYS -delete
Is there a neat way to print the file names that are going to be deleted, or is the best option to call find twice, like so:
find $dir/$CACHE_DIR_SUBPATH -type f -mtime +$RETENTION_DAYS
find $dir/$CACHE_DIR_SUBPATH -type f -mtime +$RETENTION_DAYS -delete
You can simply add the -print option to the find command line.
In other words:
find $dir/$CACHE_DIR_SUBPATH -type f -mtime +$RETENTION_DAYS -print -delete
An alternative would be:
find $dir/$CACHE_DIR_SUBPATH -type f -mtime +$RETENTION_DAYS -exec echo "Deleting file '{}'..." \; -exec rm {} \;
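If you also want to keep the printed names for later reference, the -print output can be captured at the same time; a minimal sketch, where deleted_files.log is just an illustrative name:
# show the names on the terminal and keep a copy in an (illustrative) log file
find $dir/$CACHE_DIR_SUBPATH -type f -mtime +$RETENTION_DAYS -print -delete | tee deleted_files.log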

Delete files older than 30 days, but in 1 directory the retention 6 months

I'd like to create a find command which deletes files older than 30 days, but I have one directory where the retention should be 6 months.
How would that be possible?
This one would delete all files in all subdirectories which are older than 30 days, if I'm correct:
/bin/find /root/script/* -type f -ctime +30 -exec rm {} \;
But how can I set a different retention for this directory:
/root/script/owner
You can exclude /root/script/owner from the find output using -path or -regex, combined with '!' to negate the test:
find /root/script -type f -ctime +30 '!' -path '/root/script/owner/*' -exec rm {} \;
or:
find /root/script -type f -ctime +30 '!' -regex '/root/script/owner/.*' -exec rm {} \;
Then run the delete with the longer retention on the special directory:
find /root/script/owner -type f -ctime +180 -exec rm {} \;
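Putting the two rules next to each other in one small script can make the intent clearer; a minimal sketch, where the BASE and OWNER variable names are purely illustrative:
#!/bin/sh
# BASE and OWNER are illustrative names for the paths from the question
BASE=/root/script
OWNER="$BASE/owner"
# general rule: 30 days everywhere except the owner subtree
find "$BASE" -type f -ctime +30 ! -path "$OWNER/*" -exec rm {} \;
# special rule: roughly 6 months for the owner directory only
find "$OWNER" -type f -ctime +180 -exec rm {} \;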
You can combine multiple operators so that they restrict what is included. You actually already do this because you have a -type and -ctime, joined with an implicit AND.
The test you need to add is -regex, and you can do something like:
/bin/find /root/script/* -type f ! -regex '/root/script/owner/.*' -ctime +30 -exec rm {} \;
This should exclude files in that particular tree since the ! -regex will be false for them. The basic idea is that only those that pass all the conditions will be subject to further operations.
In this case, any non-regular files will be excluded. Of the others, any that don't match the regex will be excluded. Of those remaining ones, we'll throw away any that don't match the 30-day requirement. Whatever's left will be actioned by the rm.
The special directory, of course, is handled with the longer retention:
/bin/find /root/script/owner -type f -ctime +186 -exec rm {} \;

Remove files in subdirectories older than 1 day with Linux command

I am honestly nowhere near being a decent bash scripter, but I did a little research and found a command that seems useful:
find /path/to/files* -mtime +1 -exec rm {} \;
The question is whether this line will remove directories, because I only want to remove files that are images (specifically in *.jpeg format).
No, rm without the -r flag does not remove directories.
It looks like you want to add some more filters:
-type f to match only files
-name '*.jpeg' to match only files ending with .jpeg
Lastly, instead of -exec rm {} \;, you could use the much simpler -delete.
Putting it together, this looks more appropriate for you:
find /path/to/files* -mtime +1 -type f -name '*.jpeg' -delete
Then narrow your search results to *.jpeg files:
find /path/to/files* -mtime +1 -type f -name "*.jpeg" -exec rm {} \;
It's always better to drop the -exec part and do a dry run before deleting:
find /path/to/files* -mtime +1 -type f -name "*.jpeg"
Each line of this output is exactly what will be passed to the rm command, and nothing more.
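If your image files may also carry upper-case extensions such as .JPEG, GNU find's -iname test matches names case-insensitively; a sketch:
find /path/to/files* -mtime +1 -type f -iname '*.jpeg' -delete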

Multiple find -exec commands in one bash script doesn't work?

I have a bash script that needs to be run by cron. It works when the script contains only one command line, but fails when it has more than one line.
#!/bin/sh
find /path/to/file1 -name 'abc_*' -type f -mtime +7 -exec rm {} \;
find /path/to/file2 -name 'def*.gz' -type f -mtime +7 -exec rm {} \;
I received a find: missing argument to `-exec' error message. I need to keep only the last 7 days of several different files in several different directories.
Why did I get that error message when all the commands already seem to be correct?
Is there anything that would prevent you from doing this inside one line?
example:
find /path/to/file1 /path/to/file2 \( -name 'abc*' -o -name 'def*.gz' \) -type f -mtime +7 -exec rm {} \;
The parentheses group the two -name tests so that -o applies only to them; without them, the implicit AND binds tighter than -o and files matching the first pattern would never reach the -exec.
The above works for me.
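To double-check which files the combined expression matches before letting it delete anything, you could swap the -exec action for -print first; a sketch:
find /path/to/file1 /path/to/file2 \( -name 'abc*' -o -name 'def*.gz' \) -type f -mtime +7 -print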

Linux find and delete files but redirect file names to be deleted

Is there a way to write the file names to a file before they are deleted, for later reference, to check what has been deleted?
find <PATH> -type f -name "<filePattern>" -mtime +1 -delete
Just add a -print expression to the invocation of find:
find <PATH> -type f -name "<filePattern>" -mtime +1 -delete -print > log
I'm not sure if this prints the name before or after the file is unlinked, but it should not matter. I suspect -delete -print unlinks before it prints, while -print -delete will print before it unlinks.
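One consequence of that ordering: -delete evaluates to true only when the removal succeeds, so with -delete -print the log should end up containing only the files that were actually removed, while -print -delete logs every match whether or not the unlink succeeds; a sketch of the former:
find <PATH> -type f -name "<filePattern>" -mtime +1 -delete -print > log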
Like William said, you can use -print. However, instead of -print > log, you can also use the -fprint flag.
You'd want something like:
find <PATH> -type f -name "<filePattern>" -mtime +1 -fprint "<pathToLog>" -delete
For instance, I use this in a script:
find . -type d -name .~tmp~ -fprint /var/log/rsync-index-removal.log -delete
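Tying this back to the error-capturing question at the top of the page, -fprint combines naturally with a stderr redirect, so the matched names and any deletion errors land in separate files; a minimal sketch, where deleted.log and failed.log are just illustrative names:
find <PATH> -type f -name "<filePattern>" -mtime +1 -fprint deleted.log -delete 2> failed.log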
You can use -exec and rm -v:
find <PATH> -type f -name "<filePattern>" -mtime +1 -exec rm -v {} \;
rm -v will report what it is deleting.
With something like this you can run multiple commands in the -exec action, such as logging to a file, removing the file, and whatever else you need:
find <PATH> -type f -name "<filePattern>" -mtime +1 -exec sh -c 'echo "$1" >> mylog; rm -f "$1"' sh {} \;
Passing the file name to sh -c as a positional parameter, rather than embedding {} inside the command string, keeps names containing spaces or quotes from breaking the command.
From a shell script named removelogs.sh, run with sh removelogs.sh in a terminal. This is the content of the removelogs.sh file:
cd /var/log;
date >> /var/log/removedlogs.txt;
find . -maxdepth 4 -type f -name \*log.old -delete -print >> /var/log/removedlogs.txt
. - run from the current location, so make sure you do not run this in the root folder
-maxdepth - to prevent it from getting out of control
-type - to ensure only files are matched
-name - to ensure only your filtered names are matched
-print - to send the result to stdout
-delete - to delete said files
>> - appends to the file instead of overwriting it; a single > would create a new file each time
Works for me on CentOS 7.
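If the script is meant to run unattended, as in the auto-delete scenario at the top of the page, it could be scheduled via cron; a sketch of a crontab entry, where the script path and the 03:00 daily schedule are purely illustrative:
0 3 * * * /bin/sh /path/to/removelogs.sh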
