Write a script that deletes all the regular files (not the directories) with a .js extension that are present in the current directory and all its sub [duplicate] - linux

I'm trying to work out a command which deletes sql files older than 15 days.
The find part is working but not the rm.
rm -f | find -L /usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups -type f \( -name '*.sql' \) -mtime +15
It kicks out a list of exactly the files I want deleted but is not deleting them. The paths are correct.
usage: rm [-f | -i] [-dIPRrvW] file ...
unlink file
/usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups/20120601.backup.sql
...
/usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups/20120610.backup.sql
What am I doing wrong?

You are actually piping rm's output to the input of find. What you want is to use the output of find as arguments to rm:
find -type f -name '*.sql' -mtime +15 | xargs rm
xargs is the command that "converts" its standard input into arguments of another program, or, as they more accurately put it on the man page,
build and execute command lines from standard input
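A minimal illustration of that man-page line, using echo as a harmless stand-in for rm: xargs reads whitespace-separated tokens from standard input and appends them as arguments to the given command.

```shell
# xargs collects the stdin tokens and passes them as arguments to echo
out=$(printf 'one\ntwo\nthree\n' | xargs echo)
echo "$out"    # one two three
```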
Note that if file names can contain whitespace characters, you should account for that:
find -type f -name '*.sql' -mtime +15 -print0 | xargs -0 rm
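A quick self-contained demonstration of why the NUL-delimited form matters, using a throwaway mktemp directory (the -mtime test is omitted so it runs immediately):

```shell
tmp=$(mktemp -d)
touch "$tmp/a b.sql"
# Naive pipe: xargs splits on the space, so rm receives two bogus names
find "$tmp" -name '*.sql' | xargs rm -f 2>/dev/null
[ -e "$tmp/a b.sql" ] && naive_failed=yes || naive_failed=no
# NUL-delimited pipe: the name containing the space survives intact
find "$tmp" -name '*.sql' -print0 | xargs -0 rm -f
[ -e "$tmp/a b.sql" ] && nul_worked=no || nul_worked=yes
rm -rf "$tmp"
```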
But actually, find has a shortcut for this: the -delete option:
find -type f -name '*.sql' -mtime +15 -delete
Please be aware of the following warnings in man find:
Warnings: Don't forget that the find command line is evaluated
as an expression, so putting -delete first will make find try to
delete everything below the starting points you specified. When
testing a find command line that you later intend to use with
-delete, you should explicitly specify -depth in order to avoid
later surprises. Because -delete implies -depth, you cannot
usefully use -prune and -delete together.
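A safe pattern that follows this warning: dry-run with -print (and an explicit -depth) first, then swap in -delete. A self-contained sketch using a throwaway directory, with the -mtime test omitted so it runs immediately:

```shell
tmp=$(mktemp -d)
touch "$tmp/old.sql"
# Dry run: -print lists what WOULD match; nothing is deleted yet
find "$tmp" -depth -type f -name '*.sql' -print
[ -e "$tmp/old.sql" ] && still_there=yes
# The list looks right, so swap -print for -delete
find "$tmp" -depth -type f -name '*.sql' -delete
[ -e "$tmp/old.sql" ] || deleted=yes
rm -rf "$tmp"
```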
P.S. Note that piping directly to rm isn't an option, because rm doesn't expect filenames on standard input. What you are currently doing is piping them backwards.

find /usr/www/bar/htdocs -mtime +15 -exec rm {} \;
This will select the files in /usr/www/bar/htdocs older than 15 days and remove them. Note that without -type f -name '*.sql' it matches everything under that tree (directories included), not just the SQL backups.

Another, simpler method is to use the locate command and then pipe the result to xargs.
For example:
locate file | xargs rm
Be aware that locate matches against a prebuilt database (refreshed by updatedb), so its results can be stale and can point anywhere on the system; review the list before piping it to rm.

Use xargs to pass the arguments, with the options -rd '\n' so that file names containing spaces are handled correctly:
"${command}" | xargs -rd '\n' rm
Here "${command}" stands for whatever command produces the list of file names, one per line. -d '\n' makes xargs split its input on newlines only (a GNU extension), and -r skips running rm altogether when the input is empty. Include --force if you also want to remove read-only files.
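A quick self-contained check of both behaviors, using printf as a stand-in for whatever "${command}" produces (note that -d and -r are GNU xargs extensions):

```shell
tmp=$(mktemp -d)
touch "$tmp/with space.txt"
# -d '\n' splits on newlines only, so the space in the name survives
printf '%s\n' "$tmp/with space.txt" | xargs -rd '\n' rm
[ -e "$tmp/with space.txt" ] || removed=yes
# -r: on empty input rm is never invoked, so no "missing operand" error
printf '' | xargs -rd '\n' rm && empty_ok=yes
rm -rf "$tmp"
```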

Assuming you aren't in the directory containing the *.sql backup files:
find /usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups/*.sql -mtime +15 -exec rm -v {} \;
The -v option above is handy: it verbosely reports each file as it is deleted.
I like to list the files that will be deleted first to be sure. E.g:
find /usr/www2/bar/htdocs/foo/rsync/httpdocs/db_backups/*.sql -mtime +15 -exec ls -lrth {} \;

Related

Statement that compress files older than X and after it removes old ones

I'm trying to write a bash script that compresses files older than X and, after compressing, removes the uncompressed version. I tried something like this, but it doesn't work.
find /home/randomcat -mtime +11 -exec gzip {}\ | -exec rm;
By default, gzip will remove the uncompressed file (since it replaces it with the compressed variant). And you don't want it to run on anything other than a plain file (not on directories or devices, nor on symbolic links).
So you want at least
find /home/randomcat -mtime +11 -type f -exec gzip {} \;
You could even want find(1) to avoid files with several hard links. And you might also want it to ask you before running the command. Then you could try:
find /home/randomcat -mtime +11 -type f -links 1 -ok gzip {} \;
The find command with -exec or -ok wants a semicolon (or a + sign), and you need to escape that semicolon ; from your shell. You could use ';' instead of \; to quote it...
If you use a + the find command will group several arguments (to a single gzip process), so will run less processes (but they would last longer). So you could try
find /home/randomcat -mtime +11 -type f -links 1 -exec gzip -v {} +
You may want to read more about globbing and how a shell works.
BTW, you don't need any command pipeline (as suggested by the wrong use of | in your question).
You could even consider using GNU parallel to run things in parallel, or feed some shell (with background jobs) with e.g.
find /home/randomcat -mtime +11 -type f -links 1 \
-exec printf "gzip %s &\n" {} \; | bash -x
but in practice you won't speed up your processing by much.
find /home/randomcat -mtime +11 -exec gzip {} +
This command converts the matched files to gzip format in place rather than creating compressed copies next to them. Say you have three files older than X, named a, b, and c.
After running the find /home/randomcat -mtime +11 -exec gzip {} + command,
you will see a.gz, b.gz and c.gz instead of a, b and c in the /home/randomcat directory.
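The same effect can be reproduced in a throwaway directory (the -mtime test is dropped here so the example runs immediately):

```shell
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b" "$tmp/c"
# gzip replaces each file in place with its .gz counterpart
find "$tmp" -type f -exec gzip {} +
result=$(ls "$tmp" | xargs)
echo "$result"    # a.gz b.gz c.gz
rm -rf "$tmp"
```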
find /location/location -type f -ctime +15 -exec mv {} /location/backup_location \;
This will find all the files whose status changed more than 15 days ago (-ctime) and move them to the backup folder.

Linux find all files in sub directories and move them

I have a Linux system where some users put files into a directory via ftp. In this directory there are sub-directories which the users can create. Now I need a script that searches for all files in those sub-directories and moves them into a single directory (for backup). The problem: the sub-directories shouldn't be removed.
The directory for the users is /files/media/documents/,
and the files have to be moved into the directory /files/dump/. I don't care about files directly in /files/media/documents/; they are already handled by another script.
I already tried this script:
for dir in /files/media/documents/
do
find "$dir/" -iname '*' -print0 | xargs -0 mv -t /files/dump/
done
Instead of iterating, you could just use find. The man page documents a "-type" option, so to move only files you could do:
find "/files/media/documents/" -type f -print0 | xargs -0 mv -t /files/dump/
You don't want the files directly in /files/media/documents/, only those in its sub-directories? Then add "-mindepth 2" (note that -mindepth 1 would only exclude the starting point itself, which -type f already rules out here):
find "/files/media/documents/" -mindepth 2 -type f -print0 | xargs -0 mv -t /files/dump/
Alternatively you could use "-exec" to skip the second command (xargs):
find "/files/media/documents/" -mindepth 2 -type f -exec mv {} /files/dump/ \;
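A self-contained check with throwaway paths (mv -t is GNU coreutils) that the files land in the dump directory while the sub-directories themselves survive:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/documents/sub" "$tmp/dump"
touch "$tmp/documents/sub/report.txt"
# Move only regular files; directories are never passed to mv
find "$tmp/documents" -type f -print0 | xargs -0 mv -t "$tmp/dump/"
[ -d "$tmp/documents/sub" ] && dir_kept=yes     # sub-directory survives
[ -e "$tmp/dump/report.txt" ] && file_moved=yes # file landed in dump
rm -rf "$tmp"
```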

cronjob to remove files older than N days with special characters

I'm trying to create a job to delete files on a linux box older than X days. Pretty straightforward with:
find /path/to/files -mtime +X -exec rm {}\;
Problem is, all my files have special characters because they are pictures from a webcam - most contain parentheses, so the above command fails with "no such file or directory".
Have you tried this:
find /path/to/files -mtime +X -exec rm '{}' \;
Or perhaps (note that this form breaks on file names containing whitespace):
rm $(find /path/to/files -mtime +X);
Or even this method using xargs instead of -exec:
find /path/to/files -mtime +X | xargs rm -f;
Another twist on xargs is to use -print0, which helps differentiate spaces within file names from the separators between entries in the list, by using the ASCII NUL character as the file separator:
find /path/to/files -mtime +X -print0 | xargs -0 rm -f;
Or as man find explains under -print0:
This primary always evaluates to true. It prints the pathname of
the current file to standard output, followed by an ASCII NUL
character (character code 0).
I would also recommend adding the -maxdepth and -type flags to better control what the script does. So I would use this for a dry-run test:
find /path/to/files -maxdepth 1 -type f -mtime +1 -exec echo '{}' \;
The -maxdepth flag controls how many directories down the find will execute and -type will limit the search to files (aka: f) so the script is focused on files only. This will simply echo the results. Then when you are comfortable with it, change the echo to rm.
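A disposable-directory sketch of that dry-run advice, showing -maxdepth 1 keeping find out of the sub-directories (the -mtime test is omitted so the example runs immediately):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/nested"
touch "$tmp/top.jpg" "$tmp/nested/deep.jpg"
# echo instead of rm: a harmless dry run limited to the top level
found=$(find "$tmp" -maxdepth 1 -type f -exec echo '{}' \;)
echo "$found"    # only top.jpg is listed; deep.jpg is never reached
rm -rf "$tmp"
```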
Does
find /path/to/files -mtime +X -print | tr '()' '?' | xargs rm -f
work?

delete old files in a directory

okay maybe this sounds simple, but it has been a bit challenging to me
I have a directory called backups and it has (backup files + other files)
backups files:
../backups/backup-2013_03_03.zip
../backups/backup-2013_03_05.zip
../backups/backup-2013_01_01.zip
../backups/backup-2013_08_16.zip
../backups/backup-2013_02_28.zip
../backups/backup-2013_01_21.zip
../backups/backup-2013_03_29.zip
../backups/backup-2013_04_05.zip
I'm trying to delete backup files older than 90 days.
find /var/tmp/stuff -mtime +90 -print | xargs /bin/rm
seems to work, but I'm not able to limit the search to backup files only ("files whose names start with backup").
I have tried adding the "-iname backup" option to the find command, thinking it would do the trick, but it doesn't seem to work.
Any ideas?
Thank you
You can pipe through grep before calling rm. Something like:
find /var/tmp/stuff -mtime +90 -print | grep 'backup-' | xargs /bin/rm
While the find utility has all kinds of options to do this single-handedly, including the deleting as noted in other answers, I can never remember any but the most basic options.
find "stuff" | grep "some_other_stuff" | xargs "do_stuff"
seems much easier to remember for me.
The parameter to -iname matches against the full filename, so you need a trailing wildcard:
find /var/tmp/stuff -mtime +90 -iname "backup*" -print | xargs /bin/rm
You could also use find's -exec argument, but personally I find the syntax quite arcane. I prefer xargs.
find /var/tmp/stuff -mtime +90 -iname "backup*" -exec /bin/rm '{}' \;
Or, as damienfrancois points out, GNU find can take a -delete argument. This is the best solution because a) it is shorter and b) it is more efficient, since the deletion happens within the find process itself: with -exec rm {} \;, find spawns one rm process per file, while xargs batches many names into each rm invocation but still forks extra processes. Source: GNU manual. However, as wildplasser points out, it can also be dangerous - -delete will remove directories by default. To only delete files, use -type f.
find /var/tmp/stuff -type f -mtime +90 -iname "backup*" -delete
You could use -exec option of find along with -iname. Since you want to delete only files, you would need to specify -type f
find /var/tmp/stuff -type f -iname 'backup*' -mtime +90 -exec rm {} +
If you prefer xargs, as I do:
find /var/tmp/stuff -type f -iname 'backup*' -mtime +90 -print0 | xargs -0 rm
Note: it's recommended to pair find -print0 with xargs -0 to avoid problems with whitespace and other unusual characters in file names.

Linux command for removing all ~ files

What command can I use in Linux to check if there is a file in a given directory (or its subdirectories) that has a ~ at the end of the file's name?
For example, if I'm at a directory called t which contains many subdirectories, etc, I would like to remove all files that end with a ~.
Watch out for filenames with spaces in them!
find ./ -name "*~" -type f -print0 | xargs -0 rm
with GNU find
find /path -type f -name "*~" -exec rm {} +
or
find /path -type f -name "*~" -delete
find ./ -name '*~' -print0 | xargs -0 rm -f
Here find will search the directory ./ and all sub directories, filtering for filenames that match the glob '*~' and printing them (with proper quoting courtesy of alberge). The results are passed to xargs to be appended to rm -f and the resulting string run in a shell. You can use multiple paths, and there are many other filters available (just read man find).
You can use a find, grep, rm combination, something like
find | grep "~" | xargs rm -f
Probably others have better ideas :)