I have a set of images like these:
12345-image-1-medium.jpg 12345-image-2-medium.png 12345-image-3-large.jpg
What pattern should I write to select these images and delete them?
I also have these images that I don't want to select:
12345-image-profile-small.jpg 12345-image-profile-medium.jpg 12345-image-profile-large.png
I have tried this regex, but it did not work:
1234-image-[0-9]+-small.*
I think bash does not support regex the way JavaScript, Go, Python, or Java do.
for pic in 12345-image-[0-9]*.{jpg,png}; do rm "$pic"; done
For more information on wildcards, take a look here. (Note that a plain 12345*.{jpg,png} would also match the profile images, so the glob pins down the digit after -image-.)
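Before actually deleting anything, you can preview what the glob matches by handing it to echo instead of rm:
echo 12345-image-[0-9]*.{jpg,png}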
So long as you do NOT have filenames with an embedded '\n' character, the following find and grep will do:
find . -type f | grep '^.*/[[:digit:]]\{1,5\}-image-[[:digit:]]\{1,5\}'
It will find all files below the current directory and match names that begin with 1 to 5 digits, followed by "-image-", followed by another 1 to 5 digits. In your case, with the following files:
$ ls -1
123-image-99999-small.jpg
12345-image-1-medium.jpg
12345-image-2-medium.png
12345-image-3-large.jpg
12345-image-profile-large.png
12345-image-profile-medium.jpg
12345-image-profile-small.jpg
The files you asked about are matched, along with 123-image-99999-small.jpg, e.g.
$ find . -type f | grep '^.*/[[:digit:]]\{1,5\}-image-[[:digit:]]\{1,5\}'
./123-image-99999-small.jpg
./12345-image-3-large.jpg
./12345-image-2-medium.png
./12345-image-1-medium.jpg
You can use the above in a command substitution to remove the files (again assuming no whitespace in the names, since the substitution is unquoted), e.g.
$ rm $(find . -type f | grep '^.*/[[:digit:]]\{1,5\}-image-[[:digit:]]\{1,5\}')
The remaining files are:
$ ls -1
12345-image-profile-large.png
12345-image-profile-medium.jpg
12345-image-profile-small.jpg
If Your find Supports -regextype
If your find supports the -regextype option, which lets you specify which regular-expression syntax to use, you can choose -regextype grep for grep syntax and use something similar to the above to remove the files with the -execdir option, e.g.
$ find . -type f -regextype grep -regex '^.*/[[:digit:]]\+-image-[[:digit:]]\+.*$' -execdir rm '{}' +
I do not know whether this is supported by BSD or Solaris, etc., so check before turning it loose in a script. Also note that [[:digit:]]\+ tests for one or more digits and is not limited to the 5 digits shown in your question.
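On BSD find (including macOS) there is no -regextype, but the -E flag switches -regex to extended syntax, so a rough sketch of the equivalent (untested; check your man page) would be:
$ find -E . -type f -regex '.*/[0-9]+-image-[0-9]+.*' -delete
Here -delete avoids spawning rm entirely; as always, run it with -print first to see what would be removed.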
OK, I solved it with this pattern:
12345-image-*[0-9]-*
e.g.:
rm -rf 12345-image-*[0-9]-*
It matches all file names that start with 12345-image-, followed by a number, then a - symbol and anything after that.
As I found out, this is bash globbing, not regex.
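If you enable bash's extglob option, you can even match "one or more digits" exactly, much like the regex first attempted; a minimal sketch:
shopt -s extglob
rm 12345-image-+([0-9])-*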
I also found this app really useful.
I would like to delete all files in a given folder that don't match the pattern ^transactions_[0-9]+.
Let's say I have these files in the folder file_list:
transactions_010116.csv
transactions_020116.csv
transactions_check_010116.csv
transactions_check_020116.csv
I would like to delete transactions_check_010116.csv and transactions_check_020116.csv, and leave the first two as they are, using ^transactions_[0-9]+.
I've been trying to use find, something like below, but this expression deletes everything in the folder, not just the files that don't match the pattern:
find /my_file_location -type f ! -regex '^transactions_[0-9]+' -delete
What I'm trying to do here is use a regex to find all files in the folder that don't match transactions_[0-9]+ and delete them.
Depending on your implementation, you may have to use the -E option to allow the use of full regexes. Another problem is that -regex matches against the full path, starting with the directory you passed.
So the correct command should be (note that -regex has to match the entire path, so the pattern must also allow for the .csv suffix):
find -E /my_file_location ! -regex '.*/transactions_[0-9]+.*' -type f -delete
But you should first run the same command with -print in place of -delete, to be sure...
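For example, a dry run that only lists what would be deleted:
$ find -E /my_file_location ! -regex '.*/transactions_[0-9]+.*' -type f -print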
grep has a -v option to select everything not matching the provided regex. Note that find prints paths with a leading ./ and that basic grep treats + as a literal, so anchor accordingly and use -E (this also assumes no whitespace in the names):
find . -type f | grep -Ev '^\./transactions_[0-9]+' | xargs rm -f
I'm trying to count the total lines in the files within a directory. To do this I am trying to use a combination of find and wc. However, when I run find . -exec wc -l {}\;, I receive the error find: missing argument to -exec. I can't see any apparent issues; any ideas?
You simply need a space between {} and \;
find . -exec wc -l {} \;
Note that if there are any sub-directories below the current location, wc will generate an error message for each of them, looking something like this:
wc: ./subdir: Is a directory
To avoid that problem, you may want to tell find to restrict the search to files:
find . -type f -exec wc -l {} \;
Another note: using the -exec option is a good idea. Too many times, people pipe commands together thinking they'll get the same result; for instance, here it would be:
find . -type f | xargs wc -l
The problem with piping commands in such a manner is that it breaks if any file name has spaces in it. For instance, if a file were named "a b", wc would receive "a" and then "b" separately, and you would get two error messages: a: no such file and b: no such file.
Unless you know for a fact that your file names never contain spaces (or other non-printable characters), if you do need to pipe commands together, you need to tell all the tools you are piping together to use the NUL character (\0) as a separator instead of a space. So the previous command would become:
find . -type f -print0 | xargs -0 wc -l
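As an aside, if what you actually want is one grand total rather than a count per file, you can concatenate everything first; a sketch using the same NUL-safe pipeline:
find . -type f -print0 | xargs -0 cat | wc -l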
With version 4.0 or later of bash, you don't need your find command at all:
shopt -s globstar
wc -l **/*
There's no simple way to skip directories, which, as Gui Rava pointed out, you might want to do, unless you can tell files and directories apart by name alone. For example, maybe directories never have . in their name, while all the files have at least one extension:
wc -l **/*.*
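If your files can't be told apart from directories by name, a globstar loop that tests each match explicitly is another sketch that works:
shopt -s globstar
for f in **/*; do [[ -f $f ]] && wc -l "$f"; done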
These commands, run over SSH, work for changing text in several files in a directory:
replace "old-string" "new-String" -- *.ext
replace "old-string" "new-String" -- *
replace "old-string" "new-String" -- filename
However, these won't touch subdirectories... does anybody know the command to include ALL subdirectories?
I think sed is better for this. Your first two examples can be rewritten:
find . -type f | xargs sed -i 's/old-string/new-string/g'
find . -type f -name '*.ext' | xargs sed -i 's/old-string/new-string/g'
You can also pipe the results of find to your replace command, if that is better for you.
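Note that piping find into xargs this way still breaks on file names with spaces; a NUL-separated sketch of the same idea (GNU sed shown; BSD/macOS sed needs a suffix argument after -i, e.g. -i ''):
find . -type f -name '*.ext' -print0 | xargs -0 sed -i 's/old-string/new-string/g'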
I understand that using something like [^a]* will output all the files that do not start with "a".
If I want to echo files that contain at least 5 characters that do not start with "abc" (but can contain "abc" in the middle of the filename), how should I go about doing so?
I have
echo [^abc]?????*
but the output also omits files like "123abc", which I don't quite understand.
You don't indicate which OS your question applies to, but first note that [^abc] is a character class: it matches any single character other than a, b, or c, so it also excludes names starting with b or c; it cannot exclude the three-character prefix "abc". One way to determine the set of matching files on Mac OS X or Linux would be:
find . -maxdepth 1 -type f -name "?????*" | egrep -v '^\./abc'
Note that this will list only files in the current directory. If you want to include files in subdirectories, you'll need to remove the maxdepth argument.
Also note that these commands are case-sensitive; you'll need to use -iname (for find) and -i (for egrep) to make them case-insensitive.
EDIT:
If you really need to use the echo command, the following will work:
echo $(find . -maxdepth 1 -type f -name "?????*" | egrep -v '^\./abc')
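If you'd rather stay entirely within the shell, a simple loop expresses the same condition directly; a minimal sketch:
for f in ?????*; do
  [[ -f $f && $f != abc* ]] && echo "$f"
done
Here ?????* requires at least five characters in the name, and the != abc* test drops names beginning with "abc".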
I have had to do this several times, usually when trying to find in which files a variable or a function is used.
I remember using xargs with grep in the past to do this, but I am wondering if there are any easier ways.
grep -r REGEX .
Replace . with whatever directory you want to search from.
The portable method* of doing this is
find . -type f -print0 | xargs -0 grep pattern
-print0 tells find to use the ASCII NUL character as the separator, and -0 tells xargs the same thing. If you don't use them, you will get errors on files and directories that contain spaces in their names.
* as opposed to grep -r, grep -R, or grep --recursive which only work on some machines.
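If your xargs lacks -0, the POSIX -exec ... {} + form lets find batch the arguments itself, which sidesteps the word-splitting problem entirely:
find . -type f -exec grep pattern {} +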
This is one of the cases for which I've started using ack (http://petdance.com/ack/) in lieu of grep. From the site, you can get instructions to install it as a Perl CPAN component, or you can get a self-contained version that can be installed without dealing with dependencies.
Besides the fact that it defaults to recursive searching, it allows you to use Perl-strength regular expressions, use regexes to choose which files to search, and so on. It has an impressive list of options. I recommend visiting the site and checking it out. I've found it extremely easy to use, and there are tips for integrating it with vi(m), emacs, and even TextMate if you use that.
If you're looking for a fixed-string match rather than a regex, use
fgrep -r pattern .
which can be faster than grep because it skips regex interpretation.
More about the subject here: http://www.mkssoftware.com/docs/man1/grep.1.asp
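For example, to search for a snippet full of regex metacharacters, taken literally (a hypothetical string):
fgrep -r 'items[i]+1' .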
grep -r works if you're using GNU grep, which comes with most Linux distros.
On most other UNIXes it's not installed by default, so try this instead:
find . -type f | xargs grep regex
If you use the zsh shell you can use
grep REGEX **/*
or
grep REGEX **/*.java
This can run out of steam if there are too many matching files, since they must all fit on one command line.
The canonical way though is to use find with exec.
find . -name '*.java' -exec grep REGEX {} \;
or
find . -type f -exec grep REGEX {} \;
The -type f bit restricts the search to regular files, so grep isn't run on directories.
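If your find supports terminating -exec with + instead of \; (POSIX requires it), find batches many file names per grep invocation, much like xargs, and avoids forking grep once per file:
find . -type f -exec grep REGEX {} +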
I suggest changing the answer to:
grep REGEX -r .
The -r switch doesn't indicate a regular expression; it tells grep to recurse into the directory provided.
This is a great way to find an exact expression recursively with one or more file types:
find . \( -name '*.java' -o -name '*.xml' \) | xargs egrep REGEX
where
-o -name '*.<filetype>'
is repeated inside the parentheses \( \) for however many more file types you want to add to your recursive search.
An alias looks like this in bash (the '\'' sequences are how single quotes are embedded inside a single-quoted string):
alias fnd='find . \( -name '\''*.java'\'' -o -name '\''*.xml'\'' \) | xargs egrep'