To list the first X files in a directory, I can do:
$ ls -U | head -40000
How would I then delete these 40,000 files? For example, something like:
$ "rm -rf" (ls -U | head -40000)
The tool you need for this is xargs. It converts its standard input into arguments to a command that you specify. Note that by default xargs splits the input on blanks as well as newlines, so each whitespace-delimited token becomes a separate argument; this is why filenames containing spaces need the special handling discussed below.
Thus, something like this would work (see the note below, though; ls output shouldn't normally be parsed this way):
ls -U | head -40000 | xargs rm -rf
Before trying this, I would recommend starting with a small head count and using xargs echo to print the filenames being passed, so you understand what you'll be deleting.
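For example, a quick dry run over a small batch first (the count of 100 here is just illustrative):
ls -U | head -100 | xargs echo rm -rf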
Be aware that filenames containing unusual characters (spaces, newlines) can be a problem here. If you are on a modern GNU system, you may wish to use the options of these commands that separate each element with null characters; since a filename cannot contain a null byte, that safely handles all possible names. (I was originally unaware of a simple way to take the top X items when they are zero-separated, but newer GNU coreutils can do it; see the sketch below.)
So, for example, you can use this to delete all files in a directory (-mindepth 1 stops find from emitting "." itself):
find . -mindepth 1 -maxdepth 1 -print0 | xargs -0 rm -rf
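As it turns out, taking the top X zero-separated items is simple on newer systems: GNU head gained a -z option in coreutils 8.25. A sketch, assuming GNU find and coreutils 8.25 or newer:
find . -mindepth 1 -maxdepth 1 -print0 | head -zn 40000 | xargs -0 rm -rf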
Use a bash array and slice it. If the number and size of arguments is likely to get close to the system's limits, you can still use xargs to split up the remainder.
files=( * )
printf '%s\0' "${files[@]:0:40000}" | xargs -0 rm
What about using awk as the filter?
find "$FOLDER" -maxdepth 1 -mindepth 1 -print0 \
| awk -v limit=40000 'NR<=limit;NR>limit{exit}' RS="\0" ORS="\0" \
| xargs -0 rm -rf
It will reliably remove at most 40,000 files (or folders); reliably meaning regardless of which characters the filenames may contain. (Reading NUL-separated records via RS="\0" relies on the awk implementation accepting a NUL record separator, as GNU awk does.)
Btw, to get the number of files in a directory reliably you can do:
find FOLDER -mindepth 1 -maxdepth 1 -printf '.' | wc -c
I ended up doing this since my folders were named with sequential numbers. This should also work for alphabetical folders:
ls -r releases/ | sed '1,3d' | xargs -I {} rm -rf releases/{}
Details:
list all the items in the releases/ folder in reverse order
slice off the first 3 items (which are the newest, given numeric/alphabetical naming)
for each remaining item, rm it
In your case, you could replace ls -r with ls -U; note, though, that the sed '1,Nd' form keeps the first N entries and deletes the rest, which is the opposite of removing the first 40,000 (for that, the head approach above fits better).
Related
I'm trying to delete the 10 largest regular files from the given directory, but it doesn't work for files whose names contain whitespace characters.
My code (it works if the filenames don't contain whitespace characters):
find mydir -type f -exec du -ahb {} + | sort -n -r | cut -f2 | head -n 10 | xargs rm -i
I also tried this, but it gives an error message:
find mydir -type f -exec du -ahb {} + -print 0 | sort -n -r | cut -f2 | head -n 10 | xargs -0 rm -i
The following should work, at least with GNU coreutils 8.25 and newer:
find mydir -type f -exec du -0b {} + | sort -znr | cut -zf2 | head -zn 10 | xargs -0pn 1 rm
I made sure every command handles and outputs NUL-separated (\0) records rather than linefeed-separated records:
du outputs NUL-separated records with -0
sort, cut and head handle and output NUL-separated records with -z
xargs handles NUL-separated records with -0
Additionally, I removed rm's interactive mode and asked xargs to prompt instead (-p), because xargs didn't provide a prompt to rm when invoking it. I had to limit the number of parameters given to rm at a time to 1 for this to work (xargs' -n 1 option). There might be a way to keep rm's -i and give it an interface to your prompt, but I don't know how.
Last point: I removed du's --human-readable mode because it would often have made the sort fail, and it didn't serve any purpose since the file sizes are never displayed to a human.
I have a directory with a large number of files. I am attempting to search for text located in at least one of the files. The text is likely located in one of the more recent files. What is the command to do this? I thought it would look something like ls -t | head -5 | grep abaaba.
For example, if ls -t | head -5 returns file1, file2, file3, file4, file5, I need to know which of those files contains abaaba.
It's not really clear what you are trying to do, but I assume efficiency is your main goal. I would use something like:
ls -t | while read -r f; do grep -lF abaaba "$f" && break;done
This prints only the first file containing the string and then stops the search. If you want to see the actual matching lines, use -H instead of -l. And if you have a regex rather than a plain string, drop -F, although that will make grep run slower.
ls -t | while read -r f; do grep -H abaaba "$f" && break;done
Of course, if you want the search to continue, just drop the "&& break":
ls -t | while read -r f; do grep -HF abaaba "$f";done
If you have some idea of the time frame, it's a good idea to try find:
find . -maxdepth 1 -type f -mtime -2 -exec grep -HF abaaba {} \;
You can raise the number after -mtime to cover more than the last 2 days.
If you're just doing this interactively, and you know you don't have spaces in your filenames, then you can do:
grep abaaba $(ls -t | head -5) # DO NOT USE THIS IN A SCRIPT
If you're writing this in an alias or for repeated future use, do it the "proper" way, which takes more typing but doesn't break on spaces and other odd things in filenames, as sketched below.
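One NUL-safe version of the above, sketched here under the assumption of GNU find and coreutils 8.25 or newer (for -z support in sort, head and cut):
find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -zrn | head -zn 5 | cut -zd' ' -f2- | xargs -0 grep -H abaaba
The %T@ timestamp is prepended for sorting and then stripped by cut before the names reach grep.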
If you have spaces but not newlines in the filenames, you can also do (note the semicolon: the IFS assignment must take effect in the subshell itself, not merely in grep's environment):
(IFS=$'\n'; grep abaaba $(ls -t | head -5) )
I want to find the files not containing a specific string (in a directory and its sub-directories) and remove those files. How can I do this?
The following will work:
find . -type f -print0 | xargs --null grep -Z -L 'my string' | xargs --null rm
This first uses find to print the names of all the files in the current directory and any subdirectories. These names are printed with a null terminator rather than the usual newline separator (try piping the output to od -c to see the effect of the -print0 argument).
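For instance:
find . -maxdepth 1 -print0 | od -c | head -n 3
shows \0 between the names where you would otherwise see \n.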
Then the --null parameter to xargs tells it to accept null-terminated inputs. xargs will then call grep on a list of filenames.
The -Z argument to grep works like the -print0 argument to find, so grep will print out its results null-terminated (which is why the final call to xargs needs a --null option too). The -L argument to grep causes grep to print the filenames of those files on its command line (that xargs has added) which don't match the regular expression:
my string
If you want simple matching without regular-expression magic, add the -F option. If you want more powerful regular expressions, give the -E argument. It's a good habit to use single quotes rather than double quotes, as this protects you against any shell magic being applied to the string (such as variable substitution).
Finally you call xargs again to get rid of all the files that you've found with the previous calls.
The problem with calling grep directly from the find command with the -exec ... \; form is that grep then gets invoked once per file, rather than once for a whole batch of files as xargs does; batching is much faster if you have lots of files. (find's -exec ... + form batches too; see the sketch below.) Also, don't be tempted to do stuff like:
rm $(some command that produces lots of filenames)
It's always better to pass the list to xargs, which knows the maximum command-line limits and will call rm multiple times, each time with as many arguments as fit.
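For completeness, find's -exec ... + form also batches arguments the way xargs does, so an equivalent one-liner (GNU grep assumed) would be:
find . -type f -exec grep -ZL 'my string' {} + | xargs --null rm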
Note that this solution would have been simpler without the need to cope with filenames containing whitespace and newlines.
Alternatively
grep -r -L -Z 'my string' . | xargs --null rm
will work too (and is shorter). The -r argument to grep causes it to read all files in the directory and recursively descend into any subdirectories. Use the find ... approach if you want to apply other tests to the files as well (such as age or permissions).
Note that any of the single-letter options, with a single-dash introducer, can be grouped together (for instance as -rLZ). But note also that find does not follow the same convention and has multi-letter options introduced by a single dash. This is for historical reasons and has never been fixed because it would break too many scripts.
GNU grep and bash.
grep -rLZ "$str" . | while IFS= read -rd '' x; do rm "$x"; done
Use a find-based solution if portability is needed; this one is slightly faster.
EDIT: This is how you SHOULD NOT do it! The reason is given here. Thanks to @ormaaj for pointing it out!
find . -type f | grep -v "exclude string" | xargs rm
Note: the grep pattern will match against the full file path from the current directory (see the find . -type f output)
One possibility is
find . -type f '!' -exec grep -q "my string" {} \; -exec echo rm {} \;
You can remove the echo if the output of this preview looks correct.
The equivalent with -delete is
find . -type f '!' -exec grep -q "my string" {} \; -delete
but then you don't get the nice preview option.
To remove files not containing a specific string:
Bash:
To use extended globs, enable the extglob shell option as follows:
shopt -s extglob
And just remove all files that don't have "fix" in their names:
rm !(*fix*)
And to remove all files whose names contain neither "fix" nor "class":
rm !(*fix*|*class*)
Zsh:
To use extended globs, enable zsh's extended_glob option as follows:
setopt extended_glob
Remove all files that don't have the string in their names, in this example "fix":
rm -- ^*fix*
And to remove all files whose names contain neither "fix" nor "class":
rm -- ^(*fix*|*class*)
You can use the same idea for extensions; you only need to change the glob pattern: *.zip, *.doc, etc.
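For example, in bash with extglob enabled (the extensions here are only placeholders):
rm -- !(*.zip|*.doc)
This removes everything except .zip and .doc files.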
Here are the sources:
https://www.tecmint.com/delete-all-files-in-directory-except-one-few-file-extensions/
https://codeday.me/es/qa/20190819/1296122.html
I can think of a few ways to approach this. Here's one: find and grep to generate a list of files with no match, and then xargs rm them.
find yourdir -type f -exec grep -F -L 'yourstring' '{}' + | xargs -d '\n' rm
This assumes GNU tools (grep -L and xargs -d are non-portable) and of course no filenames with newlines in them. It has the advantage of not running grep and rm once per file, so it'll be reasonably fast. I recommend testing it with "echo" in place of "rm" just to make sure it picks the right files before you unleash the destruction.
This worked for me; you can remove the -f test if you're okay with it deleting directories.
myString="keepThis"
# note: this loop word-splits find's output, so it breaks on names with spaces
for x in $(find ./); do
    if [[ -f $x && ! $x =~ $myString ]]; then
        rm "$x"
    fi
done
Another solution (although not as fast). The top solution didn't work in my case because the string I needed to use in place of 'my string' has special characters.
find -type f ! -name "*my string*" -exec rm {} \; -print
It's hard to explain in the title.
I have a bash script that runs daily to backup one folder into a zip file. The zip files are named worldYYYYMMDD.zip with YYYYMMDD being the date of backup. What I want to do is delete all but the 5 most recent backups. Sorting the files alphabetically will list the oldest ones first, so I basically need to delete all but the last 5 files when sorted in alphabetical order.
The following line should do the trick.
ls world*.zip | head -n -5 | xargs -r rm
ls world*.zip: list the matching files; ls sorts alphabetically by default, so the oldest come first
head -n -5: print every line except the last 5, i.e. drop the 5 newest from the delete list
xargs -r rm: remove each given file. -r: don't run rm if the input is empty
How about this:
find /your/directory -name 'world*.zip' -mtime +5 | xargs rm
Test it first. This removes all world*.zip files older than 5 days, so the logic differs from yours: it keys on age rather than keeping a fixed count.
I can't test it right now because I don't have a Linux machine, but I think it should be:
rm `ls world*.zip | head -n -5`
ls | grep '\.zip$' | sort | head -n -5 | while read -r file; do rm "$file"; done
sort sorts the files, oldest first given the date-stamped names
head -n -5 returns all but the 5 most recent
the while loop does the deleting
ls world*.zip | sort -r | tail -n +6 | xargs rm
sort -r will sort in reversed order, so the newest will be at the top
tail -n +6 will output everything starting with the 6th line, skipping the 5 newest
xargs rm will remove the files; xargs passes its stdin as parameters to rm.
I need to calculate a summary MD5 checksum for all files of a particular type (*.py for example) placed under a directory and all sub-directories.
What is the best way to do that?
The proposed solutions are very nice, but this is not exactly what I need. I'm looking for a single summary checksum which will uniquely identify the directory as a whole, including the content of all its subdirectories.
Create a tar archive file on the fly and pipe that to md5sum:
tar c dir | md5sum
This produces a single MD5 hash value that should be unique to your file and sub-directory setup. No files are created on disk.
find /path/to/dir/ -type f -name "*.py" -exec md5sum {} + | awk '{print $1}' | sort | md5sum
The find command lists all the files that end in .py.
The MD5 hash value is computed for each .py file. AWK is used to pick off the MD5 hash values (ignoring the filenames, which may not be unique).
The MD5 hash values are sorted. The MD5 hash value of this sorted list is then returned.
I've tested this by copying a test directory:
rsync -a ~/pybin/ ~/pybin2/
I renamed some of the files in ~/pybin2.
The find...md5sum command returns the same output for both directories.
2bcf49a4d19ef9abd284311108d626f1 -
To take into account the file layout (paths), so the checksum changes if a file is renamed or moved, the command can be simplified:
find /path/to/dir/ -type f -name "*.py" -exec md5sum {} + | md5sum
On macOS with md5:
find /path/to/dir/ -type f -name "*.py" -exec md5 {} + | md5
ire_and_curses's suggestion of using tar c <dir> has some issues:
tar processes directory entries in the order in which they are stored in the filesystem, and there is no way to change this order. This can effectively yield completely different results if you have the "same" directory in different places, and classic tar cannot sort its input files into a particular order (though see the sketch after this list for newer GNU tar).
I usually care about whether the group and owner IDs are the same, not necessarily whether the string representations of group/owner match. This is in line with what, for example, rsync -a --delete does: it synchronizes virtually everything (minus xattrs and ACLs), but it syncs owner and group based on their IDs, not their string representations. So if you synced to a different system that doesn't necessarily have the same users/groups, you should add the --numeric-owner flag to tar.
tar will include the name of the directory you're checking itself in the archive; just something to be aware of.
As long as there is no fix for the first problem (or unless you're sure it does not affect you), I would not use this approach.
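For what it's worth, newer GNU tar (1.28 and later) can impose a deterministic member order itself, and older tars can be fed a pre-sorted NUL-separated file list instead; both lines below are untested sketches:
tar --sort=name -c dir | md5sum
find dir -print0 | LC_ALL=C sort -z | tar -c --null --no-recursion -T - | md5sum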
The proposed find-based solutions are also no good because they only include files, not directories, which becomes an issue if the checksum should also reflect empty directories.
Finally, most suggested solutions don't sort consistently, because the collation might be different across systems.
This is the solution I came up with:
dir=<mydir>; (find "$dir" -type f -exec md5sum {} +; find "$dir" -type d) | LC_ALL=C sort | md5sum
Notes about this solution:
The LC_ALL=C is to ensure reliable sorting order across systems
This doesn't differentiate between a directory "named\nwithanewline" and two directories "named" and "withanewline", but the chance of that occurring seems very unlikely. One usually fixes this with a -print0 flag for find, but since there's other stuff going on here, I can only see solutions that would make the command more complicated than it's worth.
PS: one of my systems uses a limited busybox find which supports neither the -exec nor the -print0 flags, and it also appends '/' to directory names, while findutils find doesn't seem to, so for this machine I need to run:
dir=<mydir>; (find "$dir" -type f | while IFS= read -r f; do md5sum "$f"; done; find "$dir" -type d | sed 's#/$##') | LC_ALL=C sort | md5sum
Luckily, I have no files/directories with newlines in their names, so this is not an issue on that system.
If you only care about files and not empty directories, this works nicely:
find /path -type f | sort -u | xargs cat | md5sum
A solution which worked best for me:
find "$path" -type f -print0 | sort -z | xargs -r0 md5sum | md5sum
Reason why it worked best for me:
handles file names containing spaces
Ignores filesystem meta-data
Detects if file has been renamed
Issues with other answers:
Filesystem meta-data is not ignored for:
tar c "$path" | md5sum
Does not handle file names containing spaces, nor detect if a file has been renamed:
find /path -type f | sort -u | xargs cat | md5sum
For the sake of completeness, there's md5deep(1); it's not directly applicable due to the *.py filter requirement, but it should do fine together with find(1), as sketched below.
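A possible pairing, with the exact invocation being a sketch rather than a tested recipe:
find /path/to/dir -type f -name '*.py' -print0 | xargs -0 md5deep | awk '{print $1}' | sort | md5sum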
If you want one MD5 hash value spanning the whole directory, I would do something like
cat *.py | md5sum
Checksum all files, including both content and their filenames
grep -ar -e . /your/dir | md5sum | cut -c-32
Same as above, but only including *.py files
grep -ar -e . --include="*.py" /your/dir | md5sum | cut -c-32
You can also follow symlinks if you want
grep -aR -e . /your/dir | md5sum | cut -c-32
Other options you could consider using with grep
-s, --no-messages suppress error messages
-D, --devices=ACTION how to handle devices, FIFOs and sockets;
-Z, --null print 0 byte after FILE name
-U, --binary do not strip CR characters at EOL (MSDOS/Windows)
GNU find
find /path -type f -name "*.py" -exec md5sum {} +
Technically you only need to run ls -lR *.py | md5sum. Unless you are worried about someone modifying the files and touching them back to their original dates without changing the files' sizes, the output from ls should tell you if a file has changed. My unix-foo is weak, so you might need some more command-line parameters to get the create time and modification time printed (see the sketch below). ls will also tell you if permissions on the files have changed (and I'm sure there are switches to turn that off if you don't care about that).
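Something along these lines, with GNU ls (the extra switch is a guess at what you would want included):
ls -lR --full-time *.py | md5sum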
Using md5deep:
md5deep -r FOLDER | awk '{print $1}' | sort | md5sum
I want to add that if you are trying to do this for files/directories in a Git repository to track if they have changed, then this is the best approach:
git log -1 --format=format:%H --full-diff <file_or_dir_name>
And if it's not a Git directory/repository, then the answer by ire_and_curses is probably the best bet:
tar c <dir_name> | md5sum
However, please note that the tar command will change the output hash if you run it on a different OS or with a different tar implementation. If you want to be immune to that, this is the best approach, even though it doesn't look very elegant at first sight:
find <dir_name> -type f -print0 | sort -z | xargs -0 md5sum | md5sum | awk '{ print $1 }'
md5sum worked fine for me, but I had issues with sort and sorting file names. So instead I sorted by md5sum result. I also needed to exclude some files in order to create comparable results.
find . -type f -print0 \
| xargs -r0 md5sum \
| grep -v ".env" \
| grep -v "vendor/autoload.php" \
| grep -v "vendor/composer/" \
| sort -d \
| md5sum
I had the same problem, so I came up with this script. It simply lists the MD5 hash values of the files in the directory, and if it finds a subdirectory it runs again from there; for this to happen, the script has to be able to run through the current directory or through a subdirectory passed as argument $1.
#!/bin/bash
if [ -z "$1" ] ; then
    # loop over the current dir
    ls | while read -r line; do
        ecriv="$(pwd)/$line"
        if [ -f "$ecriv" ] ; then
            md5sum "$ecriv"
        elif [ -d "$ecriv" ] ; then
            bash "$0" "$line"   # call this script again for the subdirectory
        fi
    done
else # if a directory is specified in argument $1
    ls "$1" | while read -r line; do
        ecriv="$(pwd)/$1/$line"
        if [ -f "$ecriv" ] ; then
            md5sum "$ecriv"
        elif [ -d "$ecriv" ] ; then
            bash "$0" "$1/$line"   # recurse with the path relative to the starting dir
        fi
    done
fi
If you want true independence from the filesystem attributes and from bit-level differences between some tar versions, you could use cpio; note that cpio reads the file list from standard input in copy-out mode:
find theDirname | LC_ALL=C sort | cpio -o | md5sum
There are two more solutions:
Create:
du -csxb /path | md5sum > file
ls -alR -I dev -I run -I sys -I tmp -I proc /path | md5sum > /tmp/file
Check:
du -csxb /path | md5sum -c file
ls -alR -I dev -I run -I sys -I tmp -I proc /path | md5sum -c /tmp/file