Iterate over a list of files with spaces - linux

I want to iterate over a list of files. This list is the result of a find command, so I came up with:
getlist() {
for f in $(find . -iname "foo*")
do
echo "File found: $f"
# do something useful
done
}
It's fine except if a file has spaces in its name:
$ ls
foo_bar_baz.txt
foo bar baz.txt
$ getlist
File found: foo_bar_baz.txt
File found: foo
File found: bar
File found: baz.txt
What can I do to avoid the split on spaces?

You could replace the word-based iteration with a line-based one:
find . -iname "foo*" | while IFS= read -r f
do
# ... loop body
done

There are several workable ways to accomplish this.
If you wanted to stick closely to your original version it could be done this way:
getlist() {
IFS=$'\n'
for file in $(find . -iname 'foo*') ; do
printf 'File found: %s\n' "$file"
done
}
This will still fail if file names have literal newlines in them, but spaces will not break it.
However, messing with IFS isn't necessary. Here's my preferred way to do this:
getlist() {
while IFS= read -d $'\0' -r file ; do
printf 'File found: %s\n' "$file"
done < <(find . -iname 'foo*' -print0)
}
If you find the < <(command) syntax unfamiliar you should read about process substitution. The advantage of this over for file in $(find ...) is that files with spaces, newlines and other characters are correctly handled. This works because find with -print0 will use a null (aka \0) as the terminator for each file name and, unlike newline, null is not a legal character in a file name.
The advantage to this over the nearly-equivalent version
getlist() {
find . -iname 'foo*' -print0 | while read -d $'\0' -r file ; do
printf 'File found: %s\n' "$file"
done
}
is that any variable assignment in the body of the while loop is preserved. That is, if you pipe to while as above, the body of the while runs in a subshell, which may not be what you want.
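The difference is easy to see with a counter. This sketch (throwaway directory, made-up file names) increments a variable in both loop styles:

```shell
#!/usr/bin/env bash
# Sketch: a variable set in the loop body survives the process-substitution
# form but is lost in the piped form (the pipe runs the loop in a subshell).
dir=$(mktemp -d)                     # throwaway directory
touch "$dir/foo one" "$dir/foo two"  # two files with spaces in their names

count=0
while IFS= read -r -d '' f; do
  count=$((count + 1))
done < <(find "$dir" -iname 'foo*' -print0)
first=$count                         # counter kept: 2

count=0
find "$dir" -iname 'foo*' -print0 | while IFS= read -r -d '' f; do
  count=$((count + 1))               # increments only inside the subshell
done                                 # outer count is still 0 here

echo "process substitution: $first, pipe: $count"
rm -rf "$dir"
```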
The advantage of the process substitution version over find ... -print0 | xargs -0 is minimal: The xargs version is fine if all you need is to print a line or perform a single operation on the file, but if you need to perform multiple steps the loop version is easier.
EDIT: Here's a nice test script so you can get an idea of the difference between different attempts at solving this problem
#!/usr/bin/env bash
dir=/tmp/getlist.test/
mkdir -p "$dir"
cd "$dir"
touch 'file not starting foo' foo foobar barfoo 'foo with spaces'\
'foo with'$'\n'newline 'foo with trailing whitespace '
# while with process substitution, null terminated, empty IFS
getlist0() {
while IFS= read -d $'\0' -r file ; do
printf 'File found: '"'%s'"'\n' "$file"
done < <(find . -iname 'foo*' -print0)
}
# while with process substitution, null terminated, default IFS
getlist1() {
while read -d $'\0' -r file ; do
printf 'File found: '"'%s'"'\n' "$file"
done < <(find . -iname 'foo*' -print0)
}
# pipe to while, newline terminated
getlist2() {
find . -iname 'foo*' | while read -r file ; do
printf 'File found: '"'%s'"'\n' "$file"
done
}
# pipe to while, null terminated
getlist3() {
find . -iname 'foo*' -print0 | while read -d $'\0' -r file ; do
printf 'File found: '"'%s'"'\n' "$file"
done
}
# for loop over subshell results, newline terminated, default IFS
getlist4() {
for file in "$(find . -iname 'foo*')" ; do
printf 'File found: '"'%s'"'\n' "$file"
done
}
# for loop over subshell results, newline terminated, newline IFS
getlist5() {
IFS=$'\n'
for file in $(find . -iname 'foo*') ; do
printf 'File found: '"'%s'"'\n' "$file"
done
}
# see how they run
for n in {0..5} ; do
printf '\n\ngetlist%d:\n' $n
eval getlist$n
done
rm -rf "$dir"

There is also a very simple solution: rely on bash globbing
$ mkdir test
$ cd test
$ touch "stupid file1"
$ touch "stupid file2"
$ touch "stupid file 3"
$ ls
stupid file 3 stupid file1 stupid file2
$ for file in *; do echo "file: '${file}'"; done
file: 'stupid file 3'
file: 'stupid file1'
file: 'stupid file2'
Note that I am not sure this behavior is the default, but I don't see any special setting in my shopt, so I would say it should be "safe" (tested on OS X and Ubuntu).

find . -iname "foo*" -print0 | xargs -L1 -0 echo "File found:"

find . -name "fo*" -print0 | xargs -0 ls -l
See man xargs.

Since you aren't doing any other type of filtering with find, you can use the following as of bash 4.0:
shopt -s globstar
getlist() {
for f in **/foo*
do
echo "File found: $f"
# do something useful
done
}
The **/ will match zero or more directories, so the full pattern will match foo* in the current directory or any subdirectories.
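A quick way to convince yourself, in a scratch directory with made-up names (nullglob added here so an empty match expands to nothing rather than the literal pattern):

```shell
#!/usr/bin/env bash
# Sketch: **/foo* matches foo* in the current directory and in subdirectories.
shopt -s globstar nullglob    # nullglob: unmatched globs expand to nothing
dir=$(mktemp -d)              # scratch directory with illustrative names
mkdir -p "$dir/sub"
touch "$dir/foo1" "$dir/sub/foo 2" "$dir/bar"

olddir=$PWD
cd "$dir"
matches=()
for f in **/foo*; do
  matches+=("$f")
done
printf 'File found: %s\n' "${matches[@]}"
cd "$olddir"
rm -rf "$dir"
```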

I really like for loops and array iteration, so I figure I will add this answer to the mix...
I also liked marchelbling's stupid file example. :)
$ mkdir test
$ cd test
$ touch "stupid file1"
$ touch "stupid file2"
$ touch "stupid file 3"
Inside the test directory:
readarray -t arr <<< "`ls -A1`"
This adds each file listing line into a bash array named arr with any trailing newline removed.
Let's say we want to give these files better names...
for i in "${!arr[@]}"
do
newname=`echo "${arr[$i]}" | sed 's/stupid/smarter/; s/ */_/g'`;
mv "${arr[$i]}" "$newname"
done
${!arr[@]} expands to 0 1 2, so "${arr[$i]}" is the ith element of the array. The quotes around the variables are important to preserve the spaces.
The result is three renamed files:
$ ls -1
smarter_file1
smarter_file2
smarter_file_3

find has an -exec argument that loops over the find results and executes an arbitrary command. For example:
find . -iname "foo*" -exec echo "File found: {}" \;
Here {} represents the found file. find substitutes the file name and passes it as a single argument to the command, with no shell re-splitting, so spaces in the file name are handled correctly.
In many cases you can replace that last \; (which starts a new command) with a \+, which will put multiple files in the one command (not necessarily all of them at once though, see man find for more details).
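The difference between \; and \+ is visible if you count invocations. This sketch (scratch directory, illustrative names) uses echo as the command, so each invocation produces one output line:

```shell
#!/usr/bin/env bash
# Sketch: -exec ... \; runs the command once per file,
# while -exec ... + batches many files into one invocation.
dir=$(mktemp -d)
touch "$dir/foo1" "$dir/foo 2" "$dir/foo3"

# one echo per file -> three output lines
per_file=$(find "$dir" -iname 'foo*' -exec echo {} \; | wc -l)

# all three names passed to a single echo -> one output line
batched=$(find "$dir" -iname 'foo*' -exec echo {} + | wc -l)

echo "per-file lines: $per_file, batched lines: $batched"
rm -rf "$dir"
```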

I recently had to deal with a similar case, and I built a FILES array to iterate over the filenames:
eval FILES=($(find . -iname "foo*" -printf '"%p" '))
The idea here is to surround each filename with double quotes, separate them with spaces and use the result to initialize the FILES array.
The use of eval is necessary to evaluate the double quotes in the find output correctly for the array initialization.
To iterate over the files, just do:
for f in "${FILES[@]}"; do
# Do something with $f
done

In some cases, for example if you just need to copy or move a list of files, you could pipe that list to awk as well.
Note the escaped double quotes \" around the field $0 (one line of the list = one file); they keep file names with spaces intact in the generated shell command.
find . -iname "foo*" | awk '{print "mv \""$0"\" ./MyDir2" | "sh" }'

Ok - my first post on Stack Overflow!
Though my problems with this have always been in csh, not bash, the solution I present will, I'm sure, work in both. The issue is with the shell's interpretation of the ls output. We can remove ls from the problem by simply using the shell expansion of the * wildcard, but this gives a "no match" error if there are no files in the current (or specified) folder. To get around this, we simply extend the expansion to include dot-files, thus: * .* ; this will always yield results, since the files . and .. are always present. So in csh we can use this construct ...
foreach file (* .*)
echo $file
end
if you want to filter out the standard dot-files then that is easy enough ...
foreach file (* .*)
if ("$file" == .) continue
if ("$file" == ..) continue
echo $file
end
The code in the first post on this thread would be written thus:-
getlist() {
for f in * .*
do
echo "File found: $f"
# do something useful
done
}
Hope this helps!

Another solution for the job...
The goal was to:
select/filter filenames recursively in directories
handle each name (whatever spaces are in the path...)
#!/bin/bash -e
## Trick in order to handle files with spaces in their path...
OLD_IFS=${IFS}
IFS=$'\n'
files=($(find "${INPUT_DIR}" -type f -name "*.md"))
for filename in "${files[@]}"
do
# do your stuff
# ....
done
IFS=${OLD_IFS}


How can I find and rename multiple files

I have multiple files in multiple directories and I have to rename these files from lowercase to uppercase; the file extension may vary and needs to be in lowercase (it should be renamed too for files with extensions in uppercase).
NB: I have the rename version from util-linux on CentOS Linux 7.
I tried this:
find /mydir -depth | xargs -n 1 rename -v 's/(.*)\/([^\/]*)/$1\/\U$2/' {} \;
find /mydir -depth | xargs -n 1 rename -v 's/(.*)\/([^\/]*)/$2\/\L$2/' {} \;
but it's not working: it changes nothing and I have no output.
I tried another solution:
for SRC in `find my_root_dir -depth`
do
DST=`dirname "${SRC}"`/`basename "${SRC}" | tr '[A-Z]' '[a-z]'`
if [ "${SRC}" != "${DST}" ]
then
[ ! -e "${DST}" ] && mv -T "${SRC}" "${DST}" || echo "${SRC} was not renamed"
fi
done
this one partially works, but it transforms the file extensions to uppercase too.
Any suggestions on how to keep/transform the extensions in lowercase?
Thank you!
Possible solution with Perl rename:
find /mydir -depth -type f -exec rename -v 's/(.*\/)?([^.]*)/$1\U$2/' {} +
The commands in the question have several problems.
You seem to confuse the syntax of find's -exec action and xargs.
find /mydir -depth -type f -exec rename -v 'substitution_command' {} \;
find /mydir -depth -type f| xargs -n 1 rename -v 'substitution_command'
The xargs version has problems in case a file name contains a space.
If you replace \; with +, multiple file names are passed to one invocation of rename.
The substitution command is only supported by the Perl version of the rename command. You might have to install this version. See Get the Perl rename utility instead of the built-in rename
The substitution did not work in my test. I successfully used
rename -v 's/(.*\/)?([^.]*)/$1\U$2/' file ...
The first group (.*\/)? optionally matches a sequence of characters with a trailing /. This is used to copy the directory unchanged.
The second group ([^.]*) matches a sequence of characters except ..
This is the file name part before the first dot (if any) which will be converted to uppercase. In case the file name has more than one extension, all will remain unchanged, e.g.
Path/To/Foo.Bar.Baz -> Path/To/FOO.Bar.Baz
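For comparison, the same transformation can be sketched with plain bash 4+ parameter expansion (the path below is hypothetical): ${stem^^} uppercases the part before the first dot while the extensions are carried over unchanged.

```shell
#!/usr/bin/env bash
# Sketch of the same rename logic with bash 4+ case expansion.
f="Path/To/Foo.Bar.Baz"   # hypothetical path
dir=${f%/*}               # directory part: "Path/To"
name=${f##*/}             # file name: "Foo.Bar.Baz"
stem=${name%%.*}          # before the first dot: "Foo"
rest=${name#"$stem"}      # extensions, kept unchanged: ".Bar.Baz"
result="$dir/${stem^^}$rest"
echo "$result"            # -> Path/To/FOO.Bar.Baz
```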
rename-independent solution (using find together with mv)
You can rename all files in a directory with a following command:
for i in $( ls | grep [A-Z] ); do mv -i $i `echo $i | tr 'A-Z' 'a-z'`; done
The first part (for i in $( ls | grep [A-Z] );) picks out all file names containing uppercase characters and loops until all files are "scanned".
The second part (the backquoted echo $i | tr 'A-Z' 'a-z') turns all uppercase characters into lowercase ones.
Perl-based rename dependent solution
rename -f 'y/A-Z/a-z/' *
This command changes uppercase characters to the lowercase ones. -f option allows overwriting of existing files, but it is not necessary.
suggesting a trick with awk that will generate all required mv commands:
awk '{f=$0;split($NF,a,".");$NF=tolower(a[1])"."toupper(a[2]);print "mv "f" "$0}' FS=/ OFS=/ <<< $(find . -type f)
Inspect the result, and run all mv commands together:
bash <<< $(awk '{f=$0;split($NF,a,".");$NF=tolower(a[1])"."toupper(a[2]);print "mv "f" "$0}' FS=/ OFS=/ <<< $(find . -type f))
Explanation of the awk script (script.awk):
BEGIN { # preprocessing configuration
FS="/"; # set awk field separator to /
OFS="/"; # set awk output field separator to /
}
{ # for each line in input list
filePath = $0; # save the whole filePath in variable
# fileName is contained in last field $NF
# split fileName by "." to head: splitedFileNameArr[1] and tail: splitedFileNameArr[2]
split($NF,splitedFileNameArr,".");
# recreate fileName from lowercase(head) "." uppercase(tail)
$NF = tolower(splitedFileNameArr[1]) "." toupper(splitedFileNameArr[2]);
# generate a "mv" command from original filePath and regenerated fileName
print "mv "filePath" "$0;
}
Testing:
mkdir {a1,B2}/{A1,b2} -p; touch {a1,B2}/{A1,b2}/{A,b}{b,C}.{c,D}{d,C}
find . -type f
./a1/A1/Ab.cC
./a1/A1/Ab.cd
./a1/A1/Ab.DC
./a1/A1/Ab.Dd
./B2/b2/AC.DC
./B2/b2/AC.Dd
.....
./B2/b2/bC.cd
./B2/b2/bC.DC
./B2/b2/bC.Dd
awk -f script.awk <<< $(find . -type f)
.....
mv ./a1/b2/Ab.cd ./a1/b2/ab.CD
mv ./a1/b2/Ab.DC ./a1/b2/ab.DC
mv ./a1/b2/Ab.Dd ./a1/b2/ab.DD
mv ./B2/A1/bC.Dd ./B2/A1/bc.DD
.....
mv ./B2/b2/bC.DC ./B2/b2/bc.DC
mv ./B2/b2/bC.Dd ./B2/b2/bc.DD
bash <<< $(awk -f script.awk <<< $(find . -type f))
find . -type f

ignore files with db.zip and copy remaining files in a folder in linux

I want to ignore files with _db.zip in a folder and copy the remaining zip files to another folder in Linux.
I have tried as below:
for filename in *;
do
extension="${filename#*.}"  # ====> gives 33_adc_db.zip, but here I want db.zip
if [ "$extension" != .zip ]; then
echo ""
fi
done
Please help me on this as soon as possible.
In one line:
# full path to source dirs
l_src=~/src
# full path to target dirs
l_tgt=~/tgt
find $l_src -type f ! -regex ".+_db\.zip" | xargs -I "{}" mv {} $l_tgt
each part in detail
-type f -- match regular files only
! -regex ".+_db\.zip" -- exclude paths matching "_db.zip"; ".+" means any sequence of characters, and "\." is a literal dot rather than "any character"
xargs -I "{}" -- read stdin line by line and substitute each line for the "{}" placeholder
try this for better understanding: find $l_src -type f ! -regex ".+_db\.zip" | xargs -I "{}" echo "mv {} $l_tgt" ; here we just echo the commands
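A small sketch of the negated -regex filter in a scratch directory with illustrative names (note that GNU find's -regex matches the whole path, not just the basename):

```shell
#!/usr/bin/env bash
# Sketch: ! -regex excludes paths matching the pattern; everything else is kept.
dir=$(mktemp -d)
touch "$dir/33_adc_db.zip" "$dir/data.zip" "$dir/more.zip"
kept=$(find "$dir" -type f ! -regex ".+_db\.zip" | wc -l)
echo "files kept: $kept"      # data.zip and more.zip survive the filter
rm -rf "$dir"
```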
I see you are trying to get the extension of the filename, but in order to use a variable, you need to preceed it with a dollar-sign:
Instead of:
extension="${filename#*.}"
Try:
extension="${$filename#*.}"
Is it going better?
Edit: you might also add an extra space before closing the bracket in your if-clause:
if [ "$extension" != .zip ]; then
I created a few files using touch: touch a.zip b.zip c.x.zip
And run this simplified bash script:
#!/bin/bash
for filename in *;
do
extension="${filename##*.}"
echo "${filename}"
echo "${extension}"
if [ ${extension} != ".zip" ]; then
echo "hello"
fi
done
To get
a.zip
zip
hello
b.zip
zip
hello
c.x.zip
zip
# no hello for c!

Copy files with date/time range in filename

I have a bash script, which contains the following lines:
for ((iTime=starttime;iTime<=endtime;iTime++))
do
find . -name "*${iTime}*" -exec cp --parents \{\} ${dst} \;
done
I have a structure with a few folders including subfolders and many files at the bottom of the tree. These files are labeled with date and time info in the filename, like "filename_2021063015300000_suffix". The time is in format yyyymmddhhmmss and two digits for 1/10 and 1/100 seconds. I have a lot of files, which means, that my approach is very slow. The files have a time distance of a few minutes, so only a couple of files (e.g. 10 per subfolder out of >10000) should be copied.
How can I find all the files in the time range and copy them with one find-and-copy command? Maybe get a list of all the files to copy with one find command and then copy that list of file paths? But how can I do that?
If your time span is reasonably limited, just inline the acceptable file names into the single find command.
find . \( -false $(for ((iTime=starttime;iTime<=endtime;iTime++)); do printf ' %s' -o -name "*$iTime*"; done) \) -exec cp --parents \{\} ${dst} \;
The initial -false predicate inside the parentheses is just to simplify the following predicates so that they can all start with -o -name.
This could end up with an "argument list too long" error if your list of times is long, though. Perhaps a more robust solution is to pass the time resolution into the command.
find . -type f -exec bash -c '
for f; do
for ((iTime=starttime;iTime<=endtime;iTime++)); do
if [[ $f == *"$iTime"* ]]; then
cp --parents "$f" "$0"
break
fi
done' "$dst" {} +
The script inside -exec could probably be more elegant; if your file names have reasonably regular format, maybe just extract the timestamp and compare it numerically to check whether it's in range. Perhaps also notice how we abuse the $0 parameter after bash -c '...' to pass in the value of $dst.
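The $0 trick is easy to see in isolation: in bash -c, the first argument after the script string becomes $0 and the rest become the positional parameters (the values below are made up).

```shell
#!/usr/bin/env bash
# Sketch: arguments after `bash -c '...'` fill $0, $1, $2, ...
out=$(bash -c 'echo "dst=$0 first=$1 second=$2"' /new/dir a.txt b.txt)
echo "$out"   # -> dst=/new/dir first=a.txt second=b.txt
```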
Lose the find. I created -
filename_2020063015300000_suffix
filename_2021053015300000_suffix
filename_2021063015300000_suffix
filename_2022063015300000_suffix
foo/filename_2021053015312345_suffix
bar/baz/filename_2021053015310101_suffix
So if I execute
starttime=2021000000000000
endtime=2022000000000000
shopt -s globstar
for f in **/*_[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]_*; do # for all these
ts=${f//[^0-9]/} # trim to date
(( ts >= starttime )) || continue # skip too old
(( ts <= endtime )) || continue # skip too new
echo "$f" # list matches
done | xargs -I{} echo cp {} /new/dir/ # pass to xargs
I get
cp bar/baz/filename_2021053015310101_suffix /new/dir/
cp filename_2021053015300000_suffix /new/dir/
cp filename_2021063015300000_suffix /new/dir/
cp foo/filename_2021053015312345_suffix /new/dir/
There are ways to simplify that glob. If you use extglob you can make it shorter, and check more carefully with a regex - for example,
shopt -s globstar extglob
for f in **/*_+([0-9])_*; do
[[ "$f" =~ _[0-9]{16}_ ]] || continue;
It starts looking complicated and hard to maintain for the next guy, though.
Try these, replace the dst, starttime, endtime in your case, both work for me on Ubuntu16.04.
find . -type f -regextype sed -regex "[^_]*_[0-9]\{16\}_[^_]*" -exec bash -c 'dt=$(echo "$0" | grep -oP "\d{16}"); [ "$dt" -gt "$2" ] && [ "$dt" -lt "$3" ] && cp -p "$0" "$1"' {} 'dst/' 'starttime' 'endtime' \;
$0 is the filename containing the datetime, $1 is the dst directory path, $2 is starttime, $3 is endtime
Or
find . -type f -regextype sed -regex "[^_]*_[0-9]\{16\}_[^_]*" | awk -v dst='/tmp/test_find/' '{if (0 == system("[ $(echo \"" $0 "\"" " | grep -oP \"" "(?<=_)\\d+(?=_)\") -gt starttime ] && [ $(echo \"" $0 "\"" " | grep -oP \"" "(?<=_)\\d+(?=_)\") -lt endtime ]")) {system("cp -p " $0 " " dst)}}'
Both of them first use find with a sed-style regex to match file names containing a pattern like _2021063015300000_ (16 digits; you say the format yyyymmddhhmmss has only 14, but the extra two are the 1/10 and 1/100 second digits).
The first then uses -exec bash -c "..." to extract the datetime from the filename, compare it with the start and end times, and run cp.
The second uses awk to get the datetime, compare it with the start and end times via the system command, and finally execute cp to the dst directory, also via system.
PS: this pattern depends on the filename having the datetime between two _ characters.

How to delete files similar to items in a string

I want to delete files in the current folder with the following pattern.
0_something.sql.tar
I have a string provided which contains numbers
number_string="0,1,2,3,4"
How can I delete any files not included in the number_string while also keeping to the x_x.sql.tar pattern?
For example, I have these files:
0_something.sql.tar
2_something.sql.tar
4_something.sql.tar
15_something.sql.tar
Based on this logic and the numbers in the number string, I should only remove 15_something.sql.tar because:
it follows the x_x.sql.tar pattern
its number is not in the number string
This might help you out:
s="0,1,2,3,4"
s=",${s},"
for f in *.sql.tar; do
n="${f%_*}"
[ "${n//[0-9]}" ] && continue
[ "$s" == "${s/,${n},/}" ] && echo rm -- "$f"
done
Remove the echo if this answer pleases you
What this is doing is the following:
convert your number_string s into a string which is fully comma-separated and
also starts and ends with a comma (s=",0,1,2,3,4,"). This allows us to search for entries like ,5,
loop over all files matched by the glob *.sql.tar
n="${f%_*}": Extract the substring before the last underscore
[ "${n//[0-9]}" ] && continue: check that the substring is an integer; if not, skip the file and move on to the next one
substitute the number (wrapped in commas) out of the number string; if the string does not change, the number is not in the list, which implies we should not keep the file
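The comma-wrapping membership test can be tried in isolation with this small sketch (the helper name in_list is made up):

```shell
#!/usr/bin/env bash
# Sketch of the comma-wrapped membership test used in the loop above.
s="0,1,2,3,4"
s=",${s},"                                  # ",0,1,2,3,4," so we can search ",n,"
in_list() { [ "$s" != "${s/,${1},/}" ]; }   # true if removing ",n," changed s
in_list 3  && r3=in  || r3=out
in_list 15 && r15=in || r15=out
echo "3 is $r3, 15 is $r15"
```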
# Get the unmatched numbers from the second stream
# ie. files to be removed
join -v2 -o2.2 <(
# output sorted numbers on separate lines
sort <<<${number_string//,/$'\n'}
) <(
# find all files named in such a way
# and print filename, tab and path separated by newlines
find . -name '[0-9]*_something.sql.tar' -printf "%f\t%p\n" |
# extract numbers from filenames only
sed 's/\([0-9]*\)[^\t]*/\1/' |
# sort for join
sort
) |
# pass the input to xargs
# remove echo to really remove files
xargs -d '\n' echo rm
Tested on repl
$IFS can help here.
( IFS=,; for n in $number_string; do echo rm $n\_something.sql.tar; done; )
The parens run the command in a subshell so the reassignment of IFS is scoped.
Setting it to a comma lets the command parser split the string into discrete numbers for you and loop over them.
If that gives you the right list of commands you want to execute, just take out the echo. :)
UPDATE
OH! I see that now. Sorry, my bad, lol...
Well then, let's try a totally different approach. :)
Extended Globbing is likely what you need.
shopt -s extglob # turn extended globbing on
echo rm !(${number_string//,/\|})_something.sql.tar
That'll show you the command that would be executed. If you're satisfied, take the echo off. :)
This skips the need for a brute-force loop.
Explanation -
Once extglob is on, !(...) means "anything that does NOT match any of these patterns."
${number_string//,/\|} replaces all commas in the string with pipe separators, creating a match pattern for the extended glob.
Thus, !(${number_string//,/\|}) means anything NOT matching one of those patterns; !(${number_string//,/\|})_something.sql.tar then means "anything that starts with something NOT one of these patterns, followed by this string."
I created these:
$: printf "%s\n" *_something.sql.tar
0_something.sql.tar
1_something.sql.tar
2_something.sql.tar
3_something.sql.tar
4_something.sql.tar
5_something.sql.tar
6_something.sql.tar
7_something.sql.tar
8_something.sql.tar
9_something.sql.tar
then after setting extglob and using the above value for $number_string, I get this:
$: echo !(${number_string//,/\|})_something.sql.tar
5_something.sql.tar 6_something.sql.tar 7_something.sql.tar 8_something.sql.tar 9_something.sql.tar
Be careful about quoting, though. You can quote it to see the pattern itself, but then it matches nothing.
$: echo "!(${number_string//,/\|})_something.sql.tar"
!(0|1|2|3|4)_something.sql.tar
if you prefer the loop...
for f in *_something.sql.tar                      # iterating over all these
do case ",${number_string}," in                   # the allowed numbers, wrapped in commas
   *",${f%_something.sql.tar},"*) continue ;;     # skip files whose number is in the list
   *) rm "$f" ;;                                  # delete nonmatches
   esac
done
Write a script to do the matching, and remove those names that do not match. For example:
$ rm -rf foo
$ mkdir foo
$ cd foo
$ touch {2,4,6,8}.tar
$ echo "$number_string" | tr , \\n | sed 's/$/.tar/' > match-list
$ find . -type f -exec sh -c 'echo $1 | grep -f match-list -v -q' _ {} \; -print
./6
./8
./match-list
Replace -print with -delete to actually unlink the names. Note that this will cause problems since match-list will probably get deleted midway through and no longer exist for future matches, so you'll want to modify it a bit. Perhaps:
find . -type f -not -name match-list -name '*.tar' -exec sh -c 'echo $1 | grep -f match-list -v -q' _ {} \; -delete
In this case, there's no need to explicitly exclude 'match-list' since it will not match the -name '*.tar' primitive, but is included here for completeness.
I have borrowed from some previous answers, but credit is given and the resulting script is nice
$ ls -l
total 4
-rwxr-xr-x 1 boffi boffi 355 Jul 27 10:58 rm_tars_except
$ cat rm_tars_except
#!/usr/bin/env bash
dont_rm="$1"
# https://stackoverflow.com/a/10586169/2749397
IFS=',' read -r -a dont_rm_a <<< "$dont_rm"
for tarfile in ?.tar ; do
digit=$( basename "$tarfile" .tar )
# https://stackoverflow.com/a/15394738/2749397
[[ " ${dont_rm_a[@]} " =~ " ${digit} " ]] && \
echo "# Keep $tarfile" || \
echo "rm $tarfile"
done
$ touch 1.tar 3.tar 5.tar 7.tar
$ ./rm_tars_except 3,5
rm 1.tar
# Keep 3.tar
# Keep 5.tar
rm 7.tar
$ ./rm_tars_except 3,5 | sh
$ ls -l
total 4
-rw-r--r-- 1 boffi boffi 0 Jul 27 11:00 3.tar
-rw-r--r-- 1 boffi boffi 0 Jul 27 11:00 5.tar
-rwxr-xr-x 1 boffi boffi 355 Jul 27 10:58 rm_tars_except
$
If we can remove the restrictions on the "keep info" presented in a comma separated string then the script can be significantly simplified
#!/usr/bin/env bash
for tarfile in ?.tar ; do
digit=$( basename "$tarfile" .tar )
# https://stackoverflow.com/a/15394738/2749397
[[ " ${@} " =~ " ${digit} " ]] && \
echo "# Keep $tarfile" || \
echo "rm $tarfile"
done
that, of course, should be called like this ./rm_tars_except 3 5 | sh
find . -type f -name '*_something.sql.tar' | grep -v "<the series, anchored and joined with \|>" | xargs rm -f
example:-
find . -type f -name '*_something.sql.tar' | grep -v "/\(0\|1\|2\|3\|4\)_" | xargs rm -f
Note the -v: we keep (exclude from deletion) names whose number is in the series, and the surrounding / and _ anchors prevent 1 from matching 15.

how to make 'for in' not to split a string [duplicate]

This question already has answers here:
How to perform a for-each loop over all the files under a specified path?
(4 answers)
Closed 7 years ago.
I'm trying to rename several files. So I need those file names first.
I'm using:
for FILE in $(find . -type f -name "*.flv" -exec basename {} \; ); do
echo "$FILE"
done
When I try just the find command, it returns the file names correctly, but when I use the for loop, I was expecting FILE to contain the entire name of a single file; instead it receives the split-up words of the filename.
So how can I get the entire name, not just separated words of it?
There are several ways to get that to work. The simplest is to use find's exec fully:
find . -type f -name "*.flv" -exec bash -c 'f=$(basename "$1"); printf "%s\n" "$f"' _ {} \;
In other words, you can put complex scripts in the -exec clause if you like.
As a second choice, consider this loop:
find . -name '*.flv' -print0 | while IFS= read -d '' -r file
do
f=$(basename "$file")
printf "%s\n" "$f"
done
Using a for loop over the result of command substitution without double quotes causes the result to be split on space, tab and newline by default (that's IFS's default value).
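The splitting is easy to observe on a small string (the sample below is made up):

```shell
#!/usr/bin/env bash
# Sketch: an unquoted expansion is split on IFS (space, tab, newline);
# a quoted one stays a single word.
s=$'one two\nthree'   # made-up sample containing a space and a newline
unquoted=0
for w in $s; do unquoted=$((unquoted + 1)); done    # 3 words
quoted=0
for w in "$s"; do quoted=$((quoted + 1)); done      # 1 word
echo "unquoted=$unquoted quoted=$quoted"
```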
POSIXly, you don't need anything other than find and an inline-script:
$ find . -type f -name "*.flv" -exec sh -c '
for f do
printf "%s\n" "${f##*/}"
done
' sh {} +
With GNU find, you don't need the inline-script:
$ find . -type f -name "*.flv" -printf '%f\n'
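The ${f##*/} expansion used above strips the longest prefix matching */, i.e. the directory part, which is exactly what basename prints. A quick sketch with a made-up path:

```shell
#!/usr/bin/env bash
# Sketch: ${f##*/} removes everything up to the last "/", like basename(1).
f="./videos/clip one.flv"     # made-up path
b1=${f##*/}
b2=$(basename "$f")
echo "$b1"
```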
Looking at the title of the question: avoiding splitting a string using for in:
Do not use the IFS field separators in the loop:
:~> a="sdad asd asda ad
> fdvbdsvf
> dfvsdfv
> 4"
:~> for s in $a; do
echo "== $s ==";
done
== sdad ==
== asd ==
== asda ==
== ad ==
== fdvbdsvf ==
== dfvsdfv ==
== 4 ==
:~> (IFS=; for s in $a; do
echo "== $s ==";
done)
== sdad asd asda ad
fdvbdsvf
dfvsdfv
4 ==
I used round brackets for the last command, so that the changed value of IFS is limited to that subprocess.
Instead of using find, use the rename command, which is designed to rename multiple files.
For example:
rename 's/foo/bar/' **/*.flv
which would replace foo in the filename with bar in all *.flv files recursively. If your shell supports the globstar option (such as Bash 4.x or zsh), make sure it is enabled with shopt -s globstar.
Or if you're using find with a loop, you can use:
-print0 when piping to external programs such as xargs (with -0),
use -exec cmd to run command directly on the file ({})
use -execdir cmd to execute command in the directory where the file is present
