How do I open up multiple files with find from Cygwin?

This is a follow-up to https://unix.stackexchange.com/questions/4382/how-to-open-multiple-files-from-find-output.
I've already set aliases for both IrfanView and Notepad++ (as notepad), but neither of them works.
Here is one of my sample commands:
find . -name '*.txt' -exec notepad {} +

Use a for loop:
for file in $(find . -name "*.txt" -print)
do
    notepad "$file" &
done
Note that the unquoted $(find ...) word-splits on whitespace, so this only works for file names without spaces.
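Also note that find's -exec runs the program directly rather than through your shell, so shell aliases are never expanded there; pointing -exec at the real executable works. A minimal sketch, assuming a typical (hypothetical) Notepad++ install path:
find . -name '*.txt' -exec '/cygdrive/c/Program Files/Notepad++/notepad++.exe' {} +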

Related

Linux bash command --backup=numbered. Put the number BEFORE the file extension

Using a one-line bash command with Git Bash on Windows, using find and cp, I am backing up a bunch of script files that exist in multiple subdirectories. I am currently backing them up to a single directory. As you can imagine, naming conflicts arise. This is easy enough to avoid with the --backup=numbered option, which creates a numbered copy of the file. The problem is that it puts the number AFTER the file extension, naming the file like this: example.js.~2~. What I want is to preserve the file extension by naming the file like this: example2.js. Is there any way to do this?
Another option would be to prepend the directory name (from the directory that it is being copied from) to the file that is being copied instead of adding a number. I would accept either of these as a solution.
Here is what I have so far:
find . -path "*node_modules*" -prune -o -type f \( -name '*.js' -or -name '*.js.map' -or -name '*.ts' -or -name '*.json' \) -printf "%h\n" -exec cp {} --backup=numbered "/c/test/" \;
Any help would be appreciated! Thank you!
What about:
#!/bin/bash
# your find command here
FILES=$(find . -type f .....)
# loop through the files and build a new filename from the path,
# with slashes replaced by underscores
for FILE in $FILES; do
    NEW_FILENAME=$(printf "%s" "$FILE" | sed 's|/|_|g')
    cp "$FILE" "/c/test/${NEW_FILENAME}"
done
From your question, I am unsure whether a one-liner is mandatory...
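If the per-file sed subshell bothers you, bash parameter expansion can do the same slash-to-underscore substitution in-process (a sketch reusing the FILES variable from above):
for FILE in $FILES; do
    # ${FILE//\//_} replaces every / in the path with _
    cp "$FILE" "/c/test/${FILE//\//_}"
done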

What the best way to delete files by extension?

I am looking for the best way to delete files from a directory by extension.
I am planning to do it by date, but for now I am testing how it works.
This:
dir=/tmp/backup/
mask="jpeg jpg png gif bmp pdf"
for i in $mask; do
    find "$dir" -name "*.$i" -type f -delete
done
Or this?
find "$dir" \( -name "*.jpeg" -o -name "*.jpg" -o -name "*.png" \
-o -name "*.gif" -o -name "*.bmp" -o -name "*.pdf" \) -type f -delete
I want to do this with minimal machine and operating-system resources. Maybe you know other ways to do it. I will be deleting files that are a year old, and the operation could cause lag. Thanks.
You can just use:
# to ensure it doesn't return *.jpg if there is no .jpg file
shopt -s nullglob
# list all files matching the extensions
echo *.{jpeg,jpg,png,gif,bmp,pdf}
When you are satisfied with the output, just replace echo with rm.
However, if you want to make use of a variable, store all the extensions in it and use it with find like this:
mask="jpeg jpg png gif bmp pdf"
find . -type f -regextype posix-extended -regex ".*\.(${mask// /|})"
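Once the listing looks right, the same expression can delete directly, and since the stated goal is removing year-old files, an age test fits into the same command. A sketch, assuming GNU find (-mtime +365 selects files modified more than 365 days ago):
find "$dir" -type f -regextype posix-extended -regex ".*\.(${mask// /|})" -mtime +365 -delete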

Recursively find files with a specific extension

I'm trying to find files with specific extensions.
For example, I want to find all .pdf and .jpg files that are named Robert.
I know I can do this command:
$ find . -name '*.h' -o -name '*.cpp'
but I need to specify the name of the file itself besides the extensions.
I just want to see if there's a way to avoid writing the file name over and over again.
Thank you!
My preference:
find . \( -name '*.jpg' -o -name '*.png' \) -print | grep Robert
Using find's -regex argument:
find . -regex '.*/Robert\.\(h\|cpp\)$'
Or just using -name:
find . -name 'Robert.*' -a \( -name '*.cpp' -o -name '*.h' \)
find -name "*Robert*" \( -name "*.pdf" -o -name "*.jpg" \)
The -o represents an OR condition, and you can add as many as you wish within the parentheses. So this says to find all files containing the word "Robert" anywhere in their names and whose names end in either ".pdf" or ".jpg".
As an alternative to using -regex option on find, since the question is labeled bash, you can use the brace expansion mechanism:
eval find . -false "-o -name Robert".{jpg,pdf}
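For reference, after brace expansion and eval, the command the shell actually runs is:
find . -false -o -name Robert.jpg -o -name Robert.pdf
The -false seed is there so the first -o has a left-hand operand.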
This Q&A shows how to use find with regular expressions: How to use regex with find command?
The pattern could be something like
'.*/Robert\.\(h\|cpp\)$'
As a script you can use:
find "${2:-.}" -iregex ".*${1:-Robert}\.\(h\|cpp\)$" -print
save it as findcc
chmod 755 findcc
and use it as
findcc [name] [search_directory]
e.g.
findcc # default name 'Robert' and directory .
findcc Joe # default directory '.'
findcc Joe /somewhere # no defaults
Note that you can't use
findcc /some/where # e.g. without the name...
Also, as an alternative, you can use
find "$1" -print | grep "${@:2}"
and
findcc directory grep_options
like
findcc . -P '/Robert\.(h|cpp)$'
Using bash globbing (if find is not a must)
ls Robert.{pdf,jpg}
Recursively with ls (add -a to include hidden files):
ftype="jpg"
ls -1R *.${ftype} 2> /dev/null
For finding the files in system using the files database:
locate -e --regex "\.(h|cpp)$"
Make sure the locate package is installed, i.e. mlocate.

Exclude list of files from find

If I have a list of filenames in a text file that I want to exclude when I run find, how can I do that? For example, I want to do something like:
find /dir -name "*.gz" -exclude_from skip_files
and get all the .gz files in /dir except for the files listed in skip_files. But find has no -exclude_from flag. How can I skip all the files in skip_files?
I don't think find has an option like this, but you can build the command using printf and your exclude list:
find /dir -name "*.gz" $(printf "! -name %s " $(cat skip_files))
Which is the same as doing:
find /dir -name "*.gz" ! -name first_skip ! -name second_skip .... etc
Alternatively, you can pipe from find into grep:
find /dir -name "*.gz" | grep -vFf skip_files
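If skip_files holds full paths exactly as find prints them, adding -x guards against an entry filtering out files whose paths merely contain it as a substring (grep -x matches whole lines only):
find /dir -name "*.gz" | grep -vxFf skip_files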
This is what I usually do to remove some files from the result (in this case I looked for all text files but wasn't interested in a bunch of Valgrind memcheck reports we have here and there):
find . -type f -name '*.txt' ! -name '*mem*.txt'
It seems to be working.
You can try something like:
find /dir \( -name "*.gz" ! -name skip_file1 ! -name skip_file2 ... \)
find /var/www/test/ -type f \( -iname "*.*" ! -iname "*.php" ! -iname "*.jpg" ! -iname "*.png" \)
The above command lists all files except those with .php, .jpg and .png extensions. This command works for me in PuTTY.
Josh Jolly's grep solution works, but has O(N^2) complexity, making it too slow for long lists. If the lists are sorted first (O(N log N) complexity), you can use comm, which has O(N) complexity:
find /dir -name '*.gz' |sort >everything_sorted
sort skip_files >skip_files_sorted
comm -23 everything_sorted skip_files_sorted | xargs . . . etc
See man comm for details.
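The same idea as a one-liner with bash process substitution (a sketch; echo stands in for whatever you run on the surviving files, and -r is GNU xargs' no-run-if-empty flag):
comm -23 <(find /dir -name '*.gz' | sort) <(sort skip_files) | xargs -r echo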
This solution will go through all files (not exactly excluding from the find command), but will produce an output skipping files from a list of exclusions.
I found that useful while running a time-consuming command (find /dir -exec md5sum {} \;).
You can create a shell script to handle the skipping logic and run commands on the files found (make it executable with chmod, replace echo with other commands):
$ cat skip_file.sh
#!/bin/bash
# skip the file if it appears (as an exact line) in files_to_skip.txt
found=$(grep -Fx "$1" files_to_skip.txt)
if [ -z "$found" ]; then
    # run your command
    echo "$1"
fi
Create a file with the list of files to skip named files_to_skip.txt (on the dir you are running from).
Then use find using it:
find /dir -name "*.gz" -exec ./skip_file.sh {} \;
This should work:
find * -name "*.gz" $(printf "! -path %s " $(<skip_files.txt))
Working out
Assuming skip_files has a filename on each line, you can get the list of filenames via $(<skip_files.txt). E.g. echo $(<skip_files.txt) should print them all out.
For each filename you want to have a ! -path filename expression. To build this, use $(printf "! -path %s " $(<skip_files.txt))
Then, put it together with a filter on -name "*.gz"

Use "find" to list all c/h/cc files but exclude symlinks

I am trying to get a listing of the .c/.h/.cc files in a directory, but I want symlinks to be excluded.
I used the following command to list the files, which works fine.
find . -name *.c -o -name *.h -o -name *.cc
I tried adding the option -type f to prune symlinks from the listing, but no luck. Basically I want find to give me a listing of (.c, .h, or .cc) files that are regular files.
My goal is to save the list of files into cscope.files and run cscope on it, and currently cscope complains about symlinks.
Thank you.
This definitely works for me. Are you sure your shell isn't expanding the * in your command line? Or perhaps you didn't apply -type f to all your items:
find . -type f -and \( -name "*.c" -o -name "*.h" -o -name "*.cc" \)
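To finish the asker's cscope goal (a sketch): write the list to cscope.files, which cscope reads by default, then build the cross-reference with -b:
find . -type f \( -name '*.c' -o -name '*.h' -o -name '*.cc' \) > cscope.files
cscope -b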
