Selecting a file from a list [duplicate] - linux

This question already has answers here:
How to loop through file names returned by find?
(17 answers)
How can I select files one by one from a list and work on them? This is my code:
list=$( find $path \( -name "*.c" -or -name "*.cpp" -or -name "*.cxx" -or -name "*.cc" \) )
for file in "$list"
do
    # work on "$file" here
done

You can pass as many commands to find as you like:
find $path \( -name "*.c" -or -name "*.cpp" -or -name "*.cxx" -or -name "*.cc" \) \
-exec bash -c "echo cmd1 '{}'; echo cmd2 '{}'; echo etc." \;
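Note that embedding '{}' inside the quoted command string breaks on file names that contain single quotes and is a mild injection risk. A safer variant of the same idea passes the file name to bash -c as a positional parameter (a sketch, same placeholder echo commands as above):
find "$path" \( -name "*.c" -or -name "*.cpp" -or -name "*.cxx" -or -name "*.cc" \) \
    -exec bash -c 'echo cmd1 "$1"; echo cmd2 "$1"' _ {} \;
Here the underscore fills $0 of the inline script and each found file arrives as $1.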
An alternative is to write a function:
function safe_copy() {
    echo "$1"
}
export -f safe_copy
find $path \( -name "*.c" -or -name "*.cpp" -or -name "*.cxx" -or -name "*.cc" \) \
-exec bash -c "safe_copy '{}'" \;
You can also write a separate script and call it with -exec instead of a function.
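For example, with a hypothetical executable helper script ./process_file.sh that takes one file name argument, the call could look like this:
find "$path" \( -name "*.c" -or -name "*.cpp" -or -name "*.cxx" -or -name "*.cc" \) \
    -exec ./process_file.sh {} \;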

You can use -print0 as a find argument to print a null-delimited stream of file names, then read that stream in a loop with read -d '':
find "$path" \( -name '*.c' -or -name '*.cpp' -or -name '*.cxx' -or -name '*.cc' \) -print0 |
while IFS= read -r -d '' file
do
    # work on "$file" here
done
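If the loop needs to set variables that should still be visible after it finishes, a small sketch using process substitution instead of a pipe (bash-specific) keeps the loop in the current shell:
count=0
while IFS= read -r -d '' file
do
    count=$((count + 1))    # runs in the current shell, so $count survives the loop
done < <(find "$path" \( -name '*.c' -or -name '*.cpp' -or -name '*.cxx' -or -name '*.cc' \) -print0)
echo "$count matching files"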

Related

For loop won't repeat itself

I have this block and the for loop doesn't repeat even if the path has more than 2 files. It executes only once and that's all. What's the problem? How can I make it run for all files in the list?
list=$(find $path -type f \( -name "*.c" -or -name "*.cpp" -or -name "*.cxx" -or -name "*.cc" \))
for file in "$list";do
#commands
done
You can avoid the use of find entirely here (assuming the only files with those extensions are regular files, not directories etc.) via bash's extended globbing:
shopt -s extglob globstar
for file in "$path"/**/*.@(c|cpp|cxx|cc); do
    # commands
done
Putting $list in quotes makes it a single word, so the loop body runs only once. If you take out the quotes, it still won't work properly when a file name contains whitespace, since such names are split into multiple words. Instead of assigning to a variable, pipe the output of find into a while read loop:
find "$path" -type f \( -name "*.c" -or -name "*.cpp" -or -name "*.cxx" -or -name "*.cc" \) | while IFS= read -r file
do
    # commands
done
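Because the pipe runs the while loop in a subshell, another option (assuming bash 4.4+ for mapfile -d) is to collect the results into an array first and loop over that in the current shell:
mapfile -t -d '' files < <(find "$path" -type f \( -name "*.c" -or -name "*.cpp" -or -name "*.cxx" -or -name "*.cc" \) -print0)
for file in "${files[@]}"; do
    printf 'found: %s\n' "$file"    # placeholder: work on each file here
done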

Linux Find command- exclude find based on file name

I feel like this is a ridiculously easy question but I cannot find a simple regex answer for this. Basically, I am trying to use find to get a list of all files in my system with some exclusions. One of these exclusions is any file that ends in .Foo.cs, or any file named FooInfo.cs. I have successfully excluded a couple directories from my search, but cannot seem to exclude these two files. I've tried using -name, but would -name even work for this? Below is my expression. Thanks.
find . ! -name 'FooInfo.cs' ! -name '*.Foo.cs' -type d \( -name Foo -o -name 2Foo -o -name 2_Foo \) -prune -o -type f ! -size 0 \( -name "*.java" -o -name "*.cs" -o -name "*.cpp" -o -name "*.cxx" -o -name "*.cc" -o -name "*.c" -o -name "*.h" -o -name "*.scala" -o -name "*.css" -o -name "*.html" -o -name "*.bat" -o -name "*.js" \) -exec realpath {} \; | xargs grep -L CUSTOMERINFO | sed -e 's/$/\r/g' >> ../output.txt
So I'm not sure why, but I ended up fixing this by changing the order of what I'm excluding. Instead of excluding at the very beginning, the following worked (moving the ! -name '*.Foo.cs' and ! -name 'FooInfo.cs' to right after -type f).
I'm assuming this worked because they are files, so they must be flagged with -type f. But please comment and correct below if you know why.
find . -type d \( -name Foo -o -name 2Foo -o -name 2_Foo \) -prune -o -type f ! -size 0 ! -name 'FooInfo.cs' ! -name '*.Foo.cs' \( -name "*.java" -o -name "*.cs" -o -name "*.cpp" -o -name "*.cxx" -o -name "*.cc" -o -name "*.c" -o -name "*.h" -o -name "*.scala" -o -name "*.css" -o -name "*.html" -o -name "*.bat" -o -name "*.js" \) -exec realpath {} \; | xargs grep -L CUSTOMERINFO | sed -e 's/$/\r/g' >> ../output.txt
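The likely explanation is operator precedence: find joins adjacent tests with an implied -a, which binds more tightly than -o, so tests written before the -o only affect the prune branch and never reach the -type f branch that does the printing. Moving them after the -o puts them in the file branch. A schematic sketch with the grouping written out explicitly (the long list of source-file patterns is abbreviated here):
find . \( -type d \( -name Foo -o -name 2Foo -o -name 2_Foo \) -prune \) \
    -o \( -type f ! -size 0 ! -name 'FooInfo.cs' ! -name '*.Foo.cs' \( -name "*.java" -o -name "*.cs" \) -exec realpath {} \; \)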

How to remove certain pattern files except another certain pattern files from a list?

I have many files named chr1_gene_*.raw. I would like to keep some of them, so I use the following command.
find . -maxdepth 1 -type f -name "*.raw" -not -name "chr1_gene_448.raw" -not -name "chr1_gene_1914.raw" -not -name "chr1_gene_2456.raw" -not -name "chr1_gene_1554.raw" -not -name "chr1_gene_2024.raw" -not -name "chr1_gene_35.raw" -not -name "chr1_gene_509.raw" -not -name "chr1_gene_1952.raw" -not -name "chr1_gene_575.raw" -not -name "chr1_gene_2249.raw" -not -name "chr1_gene_272.raw" -not -name "chr1_gene_2158.raw" -exec rm -rf {} \;
Sometimes there are too many files I want to keep, and I do not want to type "-not -name" that many times. Is there a way to put a list in "-not -name"?
You may achieve this using a helper script, say notnamescript.sh:
#!/bin/bash
while read -r line
do
    echo "-not -name $line"
done < notnamelist
Put all the names you want to keep in a file called notnamelist, one per line; remember there should be no trailing empty lines. Then:
find . -maxdepth 1 -type f -name "*.raw" $( ./notnamescript.sh ) -exec rm -rf {} \;
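An alternative sketch that avoids the helper script builds the arguments in a bash array, which also keeps file names with spaces intact because the array is expanded quoted (the keep-list file name notnamelist is reused from above):
args=()
while IFS= read -r name; do
    args+=(-not -name "$name")    # one -not -name pair per line of the keep list
done < notnamelist
find . -maxdepth 1 -type f -name "*.raw" "${args[@]}" -exec rm -f {} \;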

'Sed' not working on result of 'find' with multiple parameters

I'm trying to do a find and replace function, finding files which match a criteria then find/replace text within them.
Find statement (works fine and returns a list of files):
find / -type f -name "*.properties" -o -name "*.xml" -not \( -path '/tmp/*' -o -path '/var/tmp/*' \)
Sed find/replace:
sed -i 's/find/replace/g' {} \;
Putting together:
find / -type f -name "*.properties" -o -name "*.xml" -not \( -path '/tmp/*' -o -path '/var/tmp/*' \) -exec sed -i 's/10\.32\.19\.156/10.32.19.165/g' {} \;
However this does not seem to work. Removing some 'find' parameters causes it to work, for example this works:
find / -type f -name "*.properties" -exec sed -i 's/10\.32\.19\.156/10.32.19.165/g' {} \;
How can I get sed to work with the extended 'find' parameters?
Currently these two 'find' statements return exactly the same result in a test folder with only 2 files:
find /var/tmp/ipreplace/ -type f -name "*.properties"
find /var/tmp/ipreplace/ -type f -name "*.properties" -o -name "*.xml" -not \( -path '/tmp/*' -o -path '/var/tmp/*' \)
I guess the use of the -path parameter in your find command is wrong. Also note that find joins tests with an implied -and, which binds more tightly than -o, so without parentheses around the two -name tests the -exec only applies to the *.xml branch.
Try the following:
find / -not \( -path '/tmp' -prune \) -not \( -path '/var/tmp' -prune \) -type f \( -name "*.properties" -o -name "*.xml" \) -exec sed -i 's/10\.32\.19\.156/10.32.19.165/g' {} \;
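If many files match, a variant using -exec ... + hands sed a batch of files per invocation instead of starting one process per file (a sketch; GNU find and sed -i assumed, with the same prune paths as in the question):
find / \( -path '/tmp' -o -path '/var/tmp' \) -prune -o \
    -type f \( -name "*.properties" -o -name "*.xml" \) \
    -exec sed -i 's/10\.32\.19\.156/10.32.19.165/g' {} +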

Recursively find and delete (*.xml / *.txt / *.csv) files older than x days in Linux

From what I gather through a quick search on the web, you can recursively remove any files older than x days like this:
find /path/to/the/files -mtime +7 -exec rm {} \;
How do you amend this to only delete *.xml / *.csv / *.txt files (not case sensitive) recursively and leave other files/folders alone?
This is what I came up with and I am not sure if this is the correct way to go about this:
find /path/to/the/files -type f \( -name "*.xml" -or -name "*.XML" -or -name "*.csv" -or -name "*.CSV" -or -name "*.txt" -or -name "*.TXT" \) -mtime +7 -exec rm {} \;
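GNU and BSD find also provide -iname for case-insensitive matching, which shortens the pattern list; a sketch intended to be equivalent to the command above:
find /path/to/the/files -type f \( -iname "*.xml" -o -iname "*.csv" -o -iname "*.txt" \) -mtime +7 -exec rm {} \;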
