Remove Files older than 3 years - linux

I need to remove any file in the directory that is older than 3 years. It is very important that I keep the newest files and delete the old ones.
I have searched and found this.
find /path/to/files* -mtime +365 -exec rm {} \;
Can I just multiply the number?
find /path/to/files* -mtime +1095 -exec rm {} \;
Is there a way to add a switch that will print each file name to the screen as it removes it, so I can make sure it is doing what I expect?
I have also found this:
find /rec -mtime +365 -print0 | xargs -0 rm -f
Is there a major difference between the two? Is one better than the other? What I have read says that xargs is faster. Would I be able to multiply the mtime number out to a 2nd or 3rd year?
And finally, would I be able to place the code as it is into a cron job that runs daily?
Thank you!

Can I just multiply the number?
find /path/to/files -mtime +1095 -exec rm {} \;
Yes. And to "echo" before you remove
find /path/to/files -mtime +1095 -print
Then run the version with -exec rm {} \; to remove the files (when you are ready).
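To answer the "print the file name as it removes it" part, here is a minimal sketch: find evaluates its actions left to right, so placing -print before -exec prints each name and then removes that file in the same pass:
find /path/to/files -mtime +1095 -print -exec rm {} \;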

find /path/to/files* -mtime +1095 -exec rm {} \;
That should work fine. You can do a dry run by simply listing the files that are found by the command:
find /path/to/files* -mtime +1095 -exec ls {} \;
To be safe, though, I would also add a -type test to ensure that other things don't get deleted:
find /path/to/files* -type f -mtime +1095 -exec rm {} \;
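As for the cron part of the question, a hedged sketch of a daily crontab entry (added via crontab -e; the 2:00 am run time and the path are assumptions, adjust as needed):
0 2 * * * find /path/to/files -type f -mtime +1095 -exec rm {} \;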

To answer the second part of your question.
Yes, there is a major difference between using -exec and xargs.
-exec starts a new rm process for every file found. This creates a lot of overhead and can seriously slow down a system if you delete a lot of files.
xargs creates only as many rm processes as needed, because it builds command lines containing as many files as possible, so only a few rm processes are created.
But both are better than -delete, because -delete is unsafe.
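For reference, a hedged sketch of both batching approaches: the + terminator to -exec groups many files into each rm invocation, much like xargs, and -print0 with xargs -0 keeps names containing spaces or newlines intact:
find /path/to/files -type f -mtime +1095 -exec rm {} +
find /path/to/files -type f -mtime +1095 -print0 | xargs -0 rm -f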


I'm trying to work out a command which deletes sql files older than 15 days.
The find part is working but not the rm.
rm -f | find -L /usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups -type f \( -name '*.sql' \) -mtime +15
It kicks out a list of exactly the files I want deleted but is not deleting them. The paths are correct.
usage: rm [-f | -i] [-dIPRrvW] file ...
unlink file
/usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups/20120601.backup.sql
...
/usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups/20120610.backup.sql
What am I doing wrong?
You are actually piping rm's output to the input of find. What you want is to use the output of find as arguments to rm:
find -type f -name '*.sql' -mtime +15 | xargs rm
xargs is the command that "converts" its standard input into arguments of another program, or, as they more accurately put it on the man page,
build and execute command lines from standard input
Note that if file names can contain whitespace characters, you should correct for that:
find -type f -name '*.sql' -mtime +15 -print0 | xargs -0 rm
But actually, find has a shortcut for this: the -delete option:
find -type f -name '*.sql' -mtime +15 -delete
Please be aware of the following warnings in man find:
Warnings: Don't forget that the find command line is evaluated
as an expression, so putting -delete first will make find try to
delete everything below the starting points you specified. When
testing a find command line that you later intend to use with
-delete, you should explicitly specify -depth in order to avoid
later surprises. Because -delete implies -depth, you cannot
usefully use -prune and -delete together.
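Following that advice, a hedged way to test first is to run with -depth and -print, then swap -print for -delete once the listed files look right (path taken from the question):
find /usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups -depth -type f -name '*.sql' -mtime +15 -print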
P.S. Note that piping directly to rm isn't an option, because rm doesn't expect filenames on standard input. What you are currently doing is piping them backwards.
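For illustration (a hedged aside, not from the original answer): rm ignores its standard input entirely, so piping names into it just leaves rm with no operands, which is why the usage message above appeared:
echo some_file.sql | rm    # rm never reads stdin; it complains about missing operands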
find /usr/www/bar/htdocs -mtime +15 -exec rm {} \;
Will select files in /usr/www/bar/htdocs older than 15 days and remove them.
Another, simpler method is to use the locate command and pipe the result to xargs.
For example,
locate file | xargs rm
Use xargs to pass the arguments, with the options -rd '\n' so that names containing spaces are not split (input is only split on newlines):
"${command}" | xargs -rd '\n' rm
Include --force if you want to also remove read-only files.
Assuming you aren't in the directory containing the *.sql backup files:
find /usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups/*.sql -mtime +15 -exec rm -v {} \;
The -v option above is handy: it verbosely outputs which files are being deleted as they are removed.
I like to list the files that will be deleted first to be sure. E.g:
find /usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups/*.sql -mtime +15 -exec ls -lrth {} \;

Remove files in subdirectories older than 1 day with Linux command

I am honestly nowhere near being a decent bash scripter, but I did a little research and found a command that seems to be useful:
find /path/to/files* -mtime +1 -exec rm {} \;
The question is whether this line will remove directories, because I only want to remove files that are images (actually in *.jpeg format).
No, rm without the -r flag does not remove directories.
It looks like you want to add some more filters:
-type f to match only files
-name '*.jpeg' to match only files ending with .jpeg
Lastly, instead of -exec rm {} \;, you could use the much simpler -delete.
Putting it together, this looks more appropriate for you:
find /path/to/files* -mtime +1 -type f -name '*.jpeg' -delete
Then narrow your search results to *.jpeg files:
find /path/to/files* -mtime +1 -type f -name "*.jpeg" -exec rm {} \;
It's always better to remove the -exec parameter and do a dry run before deleting:
find /path/to/files* -mtime +1 -type f -name "*.jpeg"
Each file listed is exactly what will be passed to the rm command, and nothing more.
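If the dry-run output looks right, a hedged whitespace-safe variant of the actual delete (the same -print0 / xargs -0 pattern used elsewhere on this page):
find /path/to/files* -mtime +1 -type f -name "*.jpeg" -print0 | xargs -0 rm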

find command in bash script resulting in "No such file or directory" error only for directories?

UPDATE 2014-03-21
So I realized I wasn't being as efficient as I could be, since all the disks that I needed to "scrub" were under /media and named disk1, disk2, disk3, etc. Here's the final script:
DIRTY_DIR="/media/disk*"
find $DIRTY_DIR -depth -type d -name .AppleDouble -exec rm -rf {} \;
find $DIRTY_DIR -depth -type d -name .AppleDB -exec rm -rf {} \;
find $DIRTY_DIR -depth -type d -name .AppleDesktop -exec rm -rf {} \;
find $DIRTY_DIR -type f -name ".*DS_Store" -exec rm -f {} \;
find $DIRTY_DIR -type f -name ".Thumbs.db" -exec rm -f {} \; # I know, I know, this is a Windows file.
Next will probably be to clean up the code even more, and to add features like logging and reporting results (through e-mail or otherwise), excluding system files and directories, and allowing people to customize the list of files/directories.
Thanks for all the help!
UPDATE
Before I incorporated the helpful suggestions provided by everyone, I performed some tests, the results of which were very interesting (see below).
As a test, I ran this command:
root@doi:~# find /media/disk3 -type d -name .AppleDouble -exec echo rm -rf {} \;
The results (which is what I expected):
rm -rf /media/disk3/Videos/Chorus/.AppleDouble
However, when I ran the actual command (without echo):
root@doi:~# find /media/disk3 -type d -name .AppleDouble -exec rm -rf {} \;
I received the same "error" output:
find: `/media/disk3/Videos/Chorus/.AppleDouble': No such file or directory
I put "error" in quotes because obviously the folder was removed, as verified by immediately running:
root@doi:~# find /media/disk3 -type d -name .AppleDouble -exec rm -rf {} \;
root@doi:~#
It seems like the find command stored the original results, acted on them by deleting the directory, but then tried to delete (or descend into) it again? Or is the -f option of rm, which is supposed to ignore nonexistent files and arguments, being ignored? I note that when I ran tests with the rm command alone, without find, everything worked as expected: directly running rm -rf non_existent_directory returned no errors even though "non_existent_directory" was not there, and directly running rm -r non_existent_directory produced the expected:
rm: cannot remove 'non_existent_directory': No such file or directory
Should I use the -delete option instead of the -exec rm ... option? I had wanted to make the script as broadly applicable as possible, for systems whose find doesn't have the -delete option.
Lastly, I presume it doesn't matter that /media/disk1, /media/disk2, ... are combined in an AUFS filesystem under /media/storage, since the find command operates on the individual disks themselves?
Thanks for all the help so far, guys. I'll publish the script when I'm done.
ORIGINAL POST
I'm writing a bash script to delete a few OS X remnants on my Lubuntu file shares. However, when executing this:
...
BASE_DIR="/media/disk" # I have 4 disks: disk1, disk2, ...
COUNTER=1
while [ $COUNTER -lt 5 ]; do # Iterate through disk1, disk2, ...
DIRTY_DIR=${BASE_DIR}$COUNTER # Look under the current disk counter /media/disk1, /media/disk2, ...
find $DIRTY_DIR -name \.AppleDouble -exec rm -rf {} \; # Delete all .AppleDouble directories
find $DIRTY_DIR -name ".*DS_Store" -exec rm -rf {} \; # Delete all .DS_Store and ._.DS_Store files
COUNTER=$(($COUNTER+1))
done
...
I see the following output:
find: /media/disk1/Pictures/.AppleDouble: No such file or directory
Before I added the -exec rm ... portion, the script found the /media/disk1/Pictures/.AppleDouble directory. The script works properly for removing DS_Store files, but what am I missing in the find command for directories?
I'm afraid to mess around too much with the -exec portion, as I don't want to obliterate directories by mistake.
tl;dr - Pass -prune if you're deleting directories using find.
For anyone else who stumbles on this question: running an example like this
find /media/disk3 -type d -name .AppleDouble -exec rm -rf {} \;
results in an error like
find: `/media/disk3/Videos/Chorus/.AppleDouble': No such file or directory
When finding and deleting directories with find, you'll often encounter this error because find records the directory so that it can descend into its subdirectories, -exec then deletes it, and find subsequently tries to traverse subdirectories that no longer exist.
You can pass either -maxdepth 0 or -prune to prevent this issue, like so:
find /media/disk3 -type d -name .AppleDouble -prune -exec rm -rf {} \;
Now it deletes the directories without any errors. Hurray! :)
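Applied to the cleanup script in the update above, a hedged sketch of the same fix. Note that -depth is dropped here, since -prune has no effect when -depth is in use, and the file-matching lines need no -prune at all:
DIRTY_DIR="/media/disk*"
find $DIRTY_DIR -type d -name .AppleDouble -prune -exec rm -rf {} \;  # same pattern for .AppleDB and .AppleDesktop
find $DIRTY_DIR -type f -name ".*DS_Store" -exec rm -f {} \;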
You don't need to escape the dot in a shell glob, since it is not a regex. So use .AppleDouble instead of \.AppleDouble:
find $DIRTY_DIR -name .AppleDouble -exec rm -rf '{}' \;
PS: I don't see anywhere $COUNTER being incremented in your script.

Bash-Performing the same command on several directories

I want to create a script that will delete any files older than 7 days in a specified list of directories, but I'm wondering what would be the best way to go about it.
I want to perform the following command on all directories specified:
find DIRECTORY_PATH -type f -mtime +7 -exec rm {} \;
Maybe an array holding the list of directories, then looping through each element of the array and performing the find command on it?
Any help/advice would be appreciated.
You can store all the directories in a file, say dirs.txt, and loop through it:
while IFS= read -r dir    # -r keeps backslashes intact; IFS= preserves leading/trailing whitespace
do
find "$dir" -type f -mtime +7 -exec rm {} \;
done < dirs.txt
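Since the question mentions an array, here is a minimal sketch of that approach as well (the directory names are just placeholders):
dirs=(/path/to/dir1 /path/to/dir2 /path/to/dir3)   # placeholder paths
for dir in "${dirs[@]}"; do
    find "$dir" -type f -mtime +7 -exec rm {} \;
done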

delete old files in a directory

Okay, maybe this sounds simple, but it has been a bit challenging for me.
I have a directory called backups and it contains backup files plus other files.
Backup files:
../backups/backup-2013_03_03.zip
../backups/backup-2013_03_05.zip
../backups/backup-2013_01_01.zip
../backups/backup-2013_08_16.zip
../backups/backup-2013_02_28.zip
../backups/backup-2013_01_21.zip
../backups/backup-2013_03_29.zip
../backups/backup-2013_04_05.zip
I'm trying to delete backup files older than 90 days.
find /var/tmp/stuff -mtime +90 -print | xargs /bin/rm
seems to work, but I'm not able to limit the search to backup files only, i.e. files whose names start with "backup".
I have tried adding an "-iname backup" option to the find command, thinking it would do the trick, but it doesn't seem to work.
Any ideas?
Thank you
You can pipe through grep before calling rm. Something like:
find /var/tmp/stuff -mtime +90 -print | grep 'backup-' | xargs /bin/rm
While the find utility has all kinds of options to do this single-handedly, including the deleting, as noted in other answers, I can never remember any but the most basic options.
find "stuff" | grep "some_other_stuff" | xargs "do_stuff"
seems much easier to remember for me.
The parameter to -iname matches against the whole filename (not a substring), so you need a trailing wildcard:
find /var/tmp/stuff -mtime +90 -iname "backup*" -print | xargs /bin/rm
You could also use find's -exec argument, but personally I find the syntax quite arcane. I prefer xargs.
find /var/tmp/stuff -mtime +90 -iname "backup*" -exec /bin/rm '{}' \;
Or, as damienfrancois points out, GNU find can take a -delete argument. This is the best solution because a) it is shorter and b) it is more efficient, since the deletion happens within the find process itself: -exec ... \; spawns one new rm process per file, and xargs spawns a new rm process per batch of files (source: GNU manual). However, as wildplasser points out, it can also be dangerous: -delete will remove directories by default. To only delete files, use -type f.
find /var/tmp/stuff -type f -mtime +90 -iname "backup*" -delete
You could use find's -exec option along with -iname. Since you want to delete only files, you also need to specify -type f:
find /var/tmp/stuff -type f -iname 'backup*' -mtime +90 -exec rm {} +
If you prefer xargs, like me:
find /var/tmp/stuff -type f -iname 'backup*' -mtime +90 -print0 | xargs -0 rm
Note: it's recommended to use find -print0 with xargs -0 to avoid problems with unusual file names.
