Remove special characters from "find" output - linux

I am trying to list all the subfolders within a folder:
find . -type d -maxdepth 1 -mindepth 1 2>/dev/null | while read dir
do
echo $dir
done
However, what I get printed out is
./dir1
./dir2
while I would need only
dir1
dir2
Complete use case:
later, I would like to create a new file with the name of the folder, e.g.:
find . -type d -maxdepth 1 -mindepth 1 2>/dev/null | while read dir
do
echo 'MOVING TO'$dir
cd $dir
#SUMMARYLOG=$dir_log_merged # HERE IS WHERE THE ./ IS PROBLEMATIC
# QUESTION EDITED
SUMMARYLOG=${dir}_log_merged # HERE IS WHERE THE ./ IS PROBLEMATIC
echo -e "\n""\n"'SUMMARY LOGS TO '$SUMMARYLOG
touch $SUMMARYLOG
pwd
find . -size +0c -type f -name '*.err' | xargs -I % sh -c 'echo % >> {$SUMMARYLOG}; cat % >> "{$SUMMARYLOG}"; echo -e "\n" >> "{$SUMMARYLOG}"'
cat $SUMMARYLOG
cd ..
done
Basically, I would like to merge a set of .err files in each of the subfolders and create one file with the subfolder name.
I cannot create my $SUMMARYLOG, so I think the problem is in the find output ./dir...

Instead of find acrobatics, you could use a glob and parameter expansion:
for d in */; do echo "${d%/}"; done
where the "${d%/}" removes the trailing slash from each directory name.
If you have hidden directories, you have to add a second glob as */ ignores them:
for d in */ .[!.]*/; do echo "${d%/}"; done
where .[!.]*/ is a glob for "begins with . and is followed by anything but another .", to exclude . and ...
Apart from that, if you have $dir, you can't use $dir_log_merged to append _log_merged to it, as Bash will look for a variable called dir_log_merged. You have to use ${dir}_log_merged instead.
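A quick demonstration of the difference:
dir=./dir1
echo "$dir_log_merged"    # prints an empty line: bash looks up a variable named dir_log_merged
echo "${dir}_log_merged"  # prints ./dir1_log_merged: the braces delimit the variable name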
Another set of problems is in your xargs command that starts with
sh -c 'echo % >> {$SUMMARYLOG};
Single quotes prevent variables from expanding
SUMMARYLOG would be invisible in the subshell; you'd have to export it first
{$SUMMARYLOG} expands to the contents of $SUMMARYLOG (empty string, in your case), then surrounds that with {}, which is why you see the {} file being created
You can't use % like this within the sh -c command. You have to use it as an argument to sh -c and then refer to it like this:
sh -c 'echo "$1"' _ %
with _ as a dummy argument that becomes $0 within the sh -c command.
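Putting those fixes together, the inner find/xargs line could look like this (a sketch combining the points above; note the export and the quoted positional parameter):
export SUMMARYLOG
find . -size +0c -type f -name '*.err' | xargs -I % sh -c 'echo "$1" >> "$SUMMARYLOG"; cat "$1" >> "$SUMMARYLOG"; echo >> "$SUMMARYLOG"' _ %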
And finally, I would solve your task as follows:
for f in */*.err; do
! [[ -s $f ]] && continue # Skip empty files
{
echo "${f##*/}" # Basename of file
cat "$f" # File contents
echo # Empty line
} >> "${f%/*}/${f%/*}_log_merged" # Dirname plus new filename
done

Related

How can I find and rename multiple files

I have multiple files in multiple directories and I have to rename these files from lowercase to uppercase; the file extension may vary and needs to be lowercase (it should be renamed too for files whose extensions are in uppercase).
NB: I have the rename version from util-linux on CentOS Linux 7.
I tried this:
find /mydir -depth | xargs -n 1 rename -v 's/(.*)\/([^\/]*)/$1\/\U$2/' {} \;
find /mydir -depth | xargs -n 1 rename -v 's/(.*)\/([^\/]*)/$2\/\L$2/' {} \;
but it's not working: it changes nothing and I get no output.
I tried another solution:
for SRC in `find my_root_dir -depth`
do
DST=`dirname "${SRC}"`/`basename "${SRC}" | tr '[a-z]' '[A-Z]'`
if [ "${SRC}" != "${DST}" ]
then
[ ! -e "${DST}" ] && mv -T "${SRC}" "${DST}" || echo "${SRC} was not renamed"
fi
done
this one partially works but transforms the file extensions to uppercase too.
Any suggestions on how to keep/transform the extensions to lowercase ?
Thank you!
Possible solution with Perl rename:
find /mydir -depth -type f -exec rename -v 's/(.*\/)?([^.]*)/$1\U$2/' {} +
The commands in the question have several problems.
You seem to confuse the syntax of find's -exec action and xargs.
find /mydir -depth -type f -exec rename -v 'substitution_command' {} \;
find /mydir -depth -type f | xargs -n 1 rename -v 'substitution_command'
The xargs version has problems in case a file name contains a space.
If you replace \; with +, multiple file names are passed to one invocation of rename.
The substitution command is only supported by the Perl version of the rename command. You might have to install this version. See Get the Perl rename utility instead of the built-in rename
The substitution did not work in my test. I successfully used
rename -v 's/(.*\/)?([^.]*)/$1\U$2/' file ...
The first group (.*\/)? optionally matches a sequence of characters with a trailing /. This is used to copy the directory unchanged.
The second group ([^.]*) matches a sequence of characters except ..
This is the file name part before the first dot (if any) which will be converted to uppercase. In case the file name has more than one extension, all will remain unchanged, e.g.
Path/To/Foo.Bar.Baz -> Path/To/FOO.Bar.Baz
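If you want to preview the effect without renaming anything, the Perl rename also accepts -n (no action, just print what would be renamed):
rename -n 's/(.*\/)?([^.]*)/$1\U$2/' Path/To/Foo.Bar.Baz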
rename-independent solution (using ls together with mv)
You can rename all files in a directory with the following command:
for i in $(ls | grep '[A-Z]'); do mv -i "$i" "$(echo "$i" | tr 'A-Z' 'a-z')"; done
The first part (for i in $(ls | grep '[A-Z]');) selects all file names containing uppercase characters and loops over them.
The second part (the command substitution around tr) converts those uppercase characters to lowercase ones.
Perl-based rename dependent solution
rename -f 'y/A-Z/a-z/' *
This command changes uppercase characters to lowercase ones. The -f option allows overwriting of existing files, but it is not necessary.
Here is a trick with awk that generates all the required mv commands:
awk '{f=$0;split($NF,a,".");$NF=tolower(a[1])"."toupper(a[2]);print "mv "f" "$0}' FS=/ OFS=/ <<< $(find . -type f)
Inspect the result, and run all mv commands together:
bash <<< $(awk '{f=$0;split($NF,a,".");$NF=tolower(a[1])"."toupper(a[2]);print "mv "f" "$0}' FS=/ OFS=/ <<< $(find . -type f))
Explanation of the awk script (script.awk):
BEGIN { # preprocessing configuration
FS="/"; # set awk field separtor to /
OFS="/"; # set awk output field separtor to /
}
{ # for each line in input list
filePath = $0; # save the whole filePath in variable
# fileName is contained in last field $NF
# split fileName by "." to head: splitedFileNameArr[1] and tail: splitedFileNameArr[2]
split($NF,splitedFileNameArr,".");
# recreate fileName from lowercase(head) "." uppercase(tail)
$NF = tolower(splitedFileNameArr[1]) "." toupper(splitedFileNameArr[2]);
# generate a "mv" command from original filePath and regenerated fileName
print "mv "filePath" "$0;
}
Testing:
mkdir {a1,B2}/{A1,b2} -p; touch {a1,B2}/{A1,b2}/{A,b}{b,C}.{c,D}{d,C}
find . -type f
./a1/A1/Ab.cC
./a1/A1/Ab.cd
./a1/A1/Ab.DC
./a1/A1/Ab.Dd
./B2/b2/AC.DC
./B2/b2/AC.Dd
.....
./B2/b2/bC.cd
./B2/b2/bC.DC
./B2/b2/bC.Dd
awk -f script.awk <<< $(find . -type f)
.....
mv ./a1/b2/Ab.cd ./a1/b2/ab.CD
mv ./a1/b2/Ab.DC ./a1/b2/ab.DC
mv ./a1/b2/Ab.Dd ./a1/b2/ab.DD
mv ./B2/A1/bC.Dd ./B2/A1/bc.DD
.....
mv ./B2/b2/bC.DC ./B2/b2/bc.DC
mv ./B2/b2/bC.Dd ./B2/b2/bc.DD
bash <<< $(awk -f script.awk <<< $(find . -type f))
find . -type f

ignore files with db.zip and copy remaining files to a folder in linux

I want to ignore files ending in _db.zip in a folder and copy the remaining zip files to another folder in Linux.
I have tried as below:
for filename in *;
do
extension="${filename#*.}" ====> giving output as 33_adc_db.zip
where here i want output as db.zip
if [ "$extension" != .zip]; then
echo ""
fi
Please help me on this as soon as possible.
In one line:
# full path to source dirs
l_src=~/src
# full path to target dirs
l_tgt=~/tgt
find $l_src -type f ! -regex ".+_db\.zip" | xargs -I "{}" mv {} $l_tgt
Each part in detail:
-type f -- match files only
! -regex ".+_db\.zip" -- exclude names matching _db.zip; in the regex, ".+" matches any characters and "\." is a literal dot rather than the regex any-character
xargs -I "{}" -- read standard input line by line and substitute each line for the placeholder {}
For a better understanding, try this first, which just echoes the commands:
find $l_src -type f ! -regex ".+_db\.zip" | xargs -I "{}" echo "mv {} $l_tgt"
I see you are trying to get the extension of the filename, but ${filename#*.} strips only the shortest leading match, i.e. everything up to the first dot. To strip everything up to the last dot, use the greedy form with a double #:
Instead of:
extension="${filename#*.}"
Try:
extension="${filename##*.}"
Is it going better?
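For reference, here is how the different expansions behave on the example name (this name contains only one dot, so # and ## happen to agree):
filename=33_adc_db.zip
echo "${filename#*.}"   # zip - strip the shortest prefix ending in a dot
echo "${filename##*.}"  # zip - strip the longest prefix ending in a dot
echo "${filename##*_}"  # db.zip - strip through the last underscore, the output the question actually asks for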
Edit: you might also add an extra space before closing the bracket in your if-clause:
if [ "$extension" != .zip ]; then
I created a few files using touch: touch a.zip b.zip bash c.x.zip
And run this simplified bash script:
#!/bin/bash
for filename in *;
do
extension="${filename##*.}"
echo "${filename}"
echo "${extension}"
if [ "${extension}" != ".zip" ]; then
echo "hello"
fi
done
To get
a.zip
zip
hello
b.zip
zip
hello
c.x.zip
zip
# no hello for c!

How to delete files similar to items in a string

I want to delete files in the current folder with the following pattern.
0_something.sql.tar
I have a string provided which contains numbers
number_string="0,1,2,3,4"
How can I delete any files not included in the number_string while also keeping to the x_x.sql.tar pattern?
For example, I have these files:
0_something.sql.tar
2_something.sql.tar
4_something.sql.tar
15_something.sql.tar
Based on this logic, and the numbers in the number string, I should only remove 15_something.sql.tar because:
it follows the pattern x_something.sql.tar
its number (15) is not in the number string
This might help you out:
s="0,1,2,3,4"
s=",${s},"
for f in *.sql.tar; do
n="${f%_*}"
[ "${n//[0-9]}" ] && continue
[ "$s" == "${s/,${n},/}" ] && echo rm -- "$f"
done
Remove the echo if this answer pleases you
What this is doing is the following:
convert your number_string s into a string which is fully comma-separated and
also starts and ends with a comma (s=",0,1,2,3,4,"). This allows us to search for entries like ,5,
loop over all files matched by the glob *.sql.tar
n="${f%_*}": Extract the substring before the first underscore `
[ "{n//[0-9]}" ] && continue: validate if the substring is an integer, if not, skip the file and move to the next one.
delete the number (with its surrounding commas) from the number string; if the string does not change, the number is not in the list and the file should be removed
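The membership test can be seen in isolation with toy values:
s=",0,1,2,3,4,"
n=15
[ "$s" == "${s/,${n},/}" ] && echo "would remove"   # prints: ,15, does not occur in s
n=2
[ "$s" == "${s/,${n},/}" ] && echo "would remove"   # prints nothing: deleting ,2, changes s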
# Get the unmatched numbers from the second stream
# ie. files to be removed
join -v2 -o2.2 <(
# output sorted numbers on separate lines
sort <<<${number_string//,/$'\n'}
) <(
# find all files named in such a way
# and print filename, tab and path separated by newlines
find . -name '[0-9]*_something.sql.tar' -printf "%f\t%p\n" |
# extract numbers from filenames only
sed 's/\([0-9]*\)[^\t]*/\1/' |
# sort for join
sort
) |
# pass the input to xargs
# remove echo to really remove files
xargs -d '\n' echo rm
Tested on repl
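If join -v is unfamiliar, here is the core of the approach on toy data; join -v2 prints the lines of the second (sorted) input that have no match in the first:
join -v2 <(printf '%s\n' 0 1 2 3 4) <(printf '%s\n' 15 2 4 | sort)
# prints 15, the only number with no partner in the keep-list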
$IFS can help here.
( IFS=,; for n in $number_string; do echo rm $n\_something.sql.tar; done; )
The parens run the command in a subshell so the reassignment of IFS is scoped.
Setting it to a comma lets the command parser split the string into discrete numbers for you and loop over them.
If that gives you the right list of commands you want to execute, just take out the echo. :)
UPDATE
OH! I see that now. Sorry, my bad, lol...
Well then, let's try a totally different approach. :)
Extended Globbing is likely what you need.
shopt -s extglob # turn extended globbing on
echo rm !(${number_string//,/\|})_something.sql.tar
That'll show you the command that would be executed. If you're satisfied, take the echo off. :)
This skips the need for a brute-force loop.
Explanation -
Once extglob is on, !(...) means "anything that does NOT match any of these patterns."
${number_string//,/\|} replaces all commas in the string with pipe separators, creating a match pattern for the extended glob.
Thus, !(${number_string//,/\|}) means anything NOT matching one of those patterns; !(${number_string//,/\|})_something.sql.tar then means "anything that starts with something NOT one of these patterns, followed by this string."
I created these:
$: printf "%s\n" *_something.sql.tar
0_something.sql.tar
1_something.sql.tar
2_something.sql.tar
3_something.sql.tar
4_something.sql.tar
5_something.sql.tar
6_something.sql.tar
7_something.sql.tar
8_something.sql.tar
9_something.sql.tar
then after setting extglob and using the above value for $number_string, I get this:
$: echo !(${number_string//,/\|})_something.sql.tar
5_something.sql.tar 6_something.sql.tar 7_something.sql.tar 8_something.sql.tar 9_something.sql.tar
Be careful about quoting, though. You can quote it to see the pattern itself, but then it matches nothing.
$: echo "!(${number_string//,/\|})_something.sql.tar"
!(0|1|2|3|4)_something.sql.tar
if you prefer the loop...
for f in *_something.sql.tar # iterating over all these
do case ",${f%_something.sql.tar}," in # for each, with suffix removed
",$number_string,") continue ;; # skip matches
*) rm "$f" ;; # delete nonmatches
esac
done
Write a script to do the matching, and remove those names that do not match. For example:
$ rm -rf foo
$ mkdir foo
$ cd foo
$ touch {2,4,6,8}.tar
$ echo "$number_string" | tr , \\n | sed 's/$/.tar/' > match-list
$ find . -type f -exec sh -c 'echo $1 | grep -f match-list -v -q' _ {} \; -print
./6.tar
./8.tar
./match-list
Replace -print with -delete to actually unlink the names. Note that this will cause problems since match-list will probably get deleted midway through and no longer exist for future matches, so you'll want to modify it a bit. Perhaps:
find . -type f -not -name match-list -name '*.tar' -exec sh -c 'echo $1 | grep -f match-list -v -q' _ {} \; -delete
In this case, there's no need to explicitly exclude 'match-list' since it will not match the -name '*.tar' primitive, but is included here for completeness.
I have sacked some previous answers, but credit is given and the resulting script is nice
$ ls -l
total 4
-rwxr-xr-x 1 boffi boffi 355 Jul 27 10:58 rm_tars_except
$ cat rm_tars_except
#!/usr/bin/env bash
dont_rm="$1"
# https://stackoverflow.com/a/10586169/2749397
IFS=',' read -r -a dont_rm_a <<< "$dont_rm"
for tarfile in ?.tar ; do
digit=$( basename "$tarfile" .tar )
# https://stackoverflow.com/a/15394738/2749397
[[ " ${dont_rm_a[#]} " =~ " ${digit} " ]] && \
echo "# Keep $tarfile" || \
echo "rm $tarfile"
done
$ touch 1.tar 3.tar 5.tar 7.tar
$ ./rm_tars_except 3,5
rm 1.tar
# Keep 3.tar
# Keep 5.tar
rm 7.tar
$ ./rm_tars_except 3,5 | sh
$ ls -l
total 4
-rw-r--r-- 1 boffi boffi 0 Jul 27 11:00 3.tar
-rw-r--r-- 1 boffi boffi 0 Jul 27 11:00 5.tar
-rwxr-xr-x 1 boffi boffi 355 Jul 27 10:58 rm_tars_except
$
If we can drop the requirement that the "keep" info be presented as a comma-separated string, the script can be simplified significantly:
#!/usr/bin/env bash
for tarfile in ?.tar ; do
digit=$( basename "$tarfile" .tar )
# https://stackoverflow.com/a/15394738/2749397
[[ " ${#} " =~ " ${digit} " ]] && \
echo "# Keep $tarfile" || \
echo "rm $tarfile"
done
that, of course, should be called like this: ./rm_tars_except 3 5 | sh
find . -type f -name '*_something.sql.tar' | grep -v "<the numbers to keep, joined with \| and anchored>" | xargs rm -f
example:-
find . -type f -name '*_something.sql.tar' | grep -v "^\./\(0\|1\|2\|3\|4\)_" | xargs rm -f

assigning files in a directory to sub-directories

I have 1000s of files in a directory and I want to be able to divide them into sub-directories, with each sub-directory containing a specific number of files. I don't care which files go into which directories, just as long as each contains a specific number. All the file names have a common ending (e.g. .txt) but what goes before varies.
Anyone know an easy way to do this?
Assuming you only have files ending in *.txt, no hidden files and no directories:
#!/bin/bash
shopt -s nullglob
maxf=42
files=( *.txt )
for ((i=0;maxf*i<${#files[@]};++i)); do
s=subdir$i
mkdir -p "$s"
mv -t "$s" -- "${files[#]:i*maxf:maxf}"
done
This will create directories subdirX with X an integer starting from 0, and will put 42 files in each directory.
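The batching relies on the array slice ${files[@]:offset:length}, which quietly returns fewer elements for the final partial batch:
files=(a b c d e)
echo "${files[@]:2:2}"   # c d
echo "${files[@]:4:2}"   # e - only one element left, and that is fine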
You can tweak the thing to have padded zeroes for X:
#!/bin/bash
shopt -s nullglob
files=( *.txt )
maxf=42
((l=${#files[@]}/maxf))
p=${#l}
for ((i=0;maxf*i<${#files[#]};++i)); do
printf -v s "subdir%0${p}d" "$i"
mkdir -p "$s"
mv -t "$s" -- "${files[#]:i*maxf:maxf}"
done
max_per_subdir=1000
start=1
while [ -e $(printf %03d $start) ]; do
start=$((start + 1))
done
find -maxdepth 1 -type f ! -name '.*' -name '*.txt' -print0 \
| xargs -0 -n $max_per_subdir echo \
| while read -a files; do
subdir=$(printf %03d $start)
mkdir $subdir || exit 1
mv "${files[#]}" $subdir/ || exit 1
start=$((start + 1))
done
How about
find . -name '*.txt' -print0 | xargs -0 -n 100 | xargs -I{} bash -c 'd=$(md5sum <<< "{}" | cut -d" " -f1); mkdir -p "$d" && cp {} "$d"'
This will create several directories, each containing up to 100 files. The name of each created directory is an md5 hash of the filenames it contains.

Iterate over a list of files with spaces

I want to iterate over a list of files. This list is the result of a find command, so I came up with:
getlist() {
for f in $(find . -iname "foo*")
do
echo "File found: $f"
# do something useful
done
}
It's fine except if a file has spaces in its name:
$ ls
foo_bar_baz.txt
foo bar baz.txt
$ getlist
File found: foo_bar_baz.txt
File found: foo
File found: bar
File found: baz.txt
What can I do to avoid the split on spaces?
You could replace the word-based iteration with a line-based one:
find . -iname "foo*" | while read f
do
# ... loop body
done
There are several workable ways to accomplish this.
If you wanted to stick closely to your original version it could be done this way:
getlist() {
IFS=$'\n'
for file in $(find . -iname 'foo*') ; do
printf 'File found: %s\n' "$file"
done
}
This will still fail if file names have literal newlines in them, but spaces will not break it.
However, messing with IFS isn't necessary. Here's my preferred way to do this:
getlist() {
while IFS= read -d $'\0' -r file ; do
printf 'File found: %s\n' "$file"
done < <(find . -iname 'foo*' -print0)
}
If you find the < <(command) syntax unfamiliar you should read about process substitution. The advantage of this over for file in $(find ...) is that files with spaces, newlines and other characters are correctly handled. This works because find with -print0 will use a null (aka \0) as the terminator for each file name and, unlike newline, null is not a legal character in a file name.
The advantage to this over the nearly-equivalent version
getlist() {
find . -iname 'foo*' -print0 | while read -d $'\0' -r file ; do
printf 'File found: %s\n' "$file"
done
}
is that any variable assignment in the body of the while loop is preserved. That is, if you pipe to while as above, the body of the while runs in a subshell, which may not be what you want.
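A toy demonstration of that difference:
count=0
find . -iname 'foo*' -print0 | while read -d $'\0' -r file ; do ((++count)); done
echo "$count"   # still 0: the loop body ran in a subshell
count=0
while read -d $'\0' -r file ; do ((++count)); done < <(find . -iname 'foo*' -print0)
echo "$count"   # the actual count: assignments made in the loop survive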
The advantage of the process substitution version over find ... -print0 | xargs -0 is minimal: The xargs version is fine if all you need is to print a line or perform a single operation on the file, but if you need to perform multiple steps the loop version is easier.
EDIT: Here's a nice test script so you can get an idea of the difference between different attempts at solving this problem
#!/usr/bin/env bash
dir=/tmp/getlist.test/
mkdir -p "$dir"
cd "$dir"
touch 'file not starting foo' foo foobar barfoo 'foo with spaces'\
'foo with'$'\n'newline 'foo with trailing whitespace '
# while with process substitution, null terminated, empty IFS
getlist0() {
while IFS= read -d $'\0' -r file ; do
printf 'File found: '"'%s'"'\n' "$file"
done < <(find . -iname 'foo*' -print0)
}
# while with process substitution, null terminated, default IFS
getlist1() {
while read -d $'\0' -r file ; do
printf 'File found: '"'%s'"'\n' "$file"
done < <(find . -iname 'foo*' -print0)
}
# pipe to while, newline terminated
getlist2() {
find . -iname 'foo*' | while read -r file ; do
printf 'File found: '"'%s'"'\n' "$file"
done
}
# pipe to while, null terminated
getlist3() {
find . -iname 'foo*' -print0 | while read -d $'\0' -r file ; do
printf 'File found: '"'%s'"'\n' "$file"
done
}
# for loop over subshell results, newline terminated, default IFS
getlist4() {
for file in "$(find . -iname 'foo*')" ; do
printf 'File found: '"'%s'"'\n' "$file"
done
}
# for loop over subshell results, newline terminated, newline IFS
getlist5() {
IFS=$'\n'
for file in $(find . -iname 'foo*') ; do
printf 'File found: '"'%s'"'\n' "$file"
done
}
# see how they run
for n in {0..5} ; do
printf '\n\ngetlist%d:\n' $n
eval getlist$n
done
rm -rf "$dir"
There is also a very simple solution: rely on bash globbing
$ mkdir test
$ cd test
$ touch "stupid file1"
$ touch "stupid file2"
$ touch "stupid file 3"
$ ls
stupid file 3 stupid file1 stupid file2
$ for file in *; do echo "file: '${file}'"; done
file: 'stupid file 3'
file: 'stupid file1'
file: 'stupid file2'
Note that I am not sure this behavior is the default one but I don't see any special setting in my shopt so I would go and say that it should be "safe" (tested on osx and ubuntu).
find . -iname "foo*" -print0 | xargs -L1 -0 echo "File found:"
find . -name "fo*" -print0 | xargs -0 ls -l
See man xargs.
Since you aren't doing any other type of filtering with find, you can use the following as of bash 4.0:
shopt -s globstar
getlist() {
for f in **/foo*
do
echo "File found: $f"
# do something useful
done
}
The **/ will match zero or more directories, so the full pattern will match foo* in the current directory or any subdirectories.
I really like for loops and array iteration, so I figure I will add this answer to the mix...
I also liked marchelbling's stupid file example. :)
$ mkdir test
$ cd test
$ touch "stupid file1"
$ touch "stupid file2"
$ touch "stupid file 3"
Inside the test directory:
readarray -t arr <<< "`ls -A1`"
This adds each file listing line into a bash array named arr with any trailing newline removed.
Let's say we want to give these files better names...
for i in ${!arr[@]}
do
newname=`echo "${arr[$i]}" | sed 's/stupid/smarter/; s/ */_/g'`;
mv "${arr[$i]}" "$newname"
done
${!arr[@]} expands to 0 1 2 so "${arr[$i]}" is the ith element of the array. The quotes around the variables are important to preserve the spaces.
The result is three renamed files:
$ ls -1
smarter_file1
smarter_file2
smarter_file_3
find has an -exec argument that loops over the find results and executes an arbitrary command. For example:
find . -iname "foo*" -exec echo "File found: {}" \;
Here {} represents the found file. find passes each name to the command as a single argument, so file names containing spaces are handled correctly (the quotes around {} are not needed for that, and do no harm).
In many cases you can replace that last \; (which starts a new command) with a \+, which will put multiple files in the one command (not necessarily all of them at once though, see man find for more details).
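For example, printf reuses its format string for each argument, so the + form prints all matches with a single invocation:
find . -iname "foo*" -exec printf 'File found: %s\n' {} +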
I recently had to deal with a similar case, and I built a FILES array to iterate over the filenames:
eval FILES=($(find . -iname "foo*" -printf '"%p" '))
The idea here is to surround each filename with double quotes, separate them with spaces and use the result to initialize the FILES array.
The use of eval is necessary to evaluate the double quotes in the find output correctly for the array initialization.
To iterate over the files, just do:
for f in "${FILES[#]}"; do
# Do something with $f
done
In some cases, if you just need to copy or move a list of files, you could pipe that list to awk as well.
Note the escaped quotes \" \" around the field $0 (in short: your files, one input line = one file), so that paths containing spaces survive in the generated command.
find . -iname "foo*" | awk '{print "mv \""$0"\" ./MyDir2" | "sh" }'
Ok - my first post on Stack Overflow!
Though my problems with this have always been in csh, not bash, the solution I present will, I'm sure, work in both. The issue is with the shell's interpretation of the ls output. We can remove ls from the problem by simply using the shell expansion of the * wildcard - but this gives a "no match" error if there are no files in the current (or specified) folder. To get around this we simply extend the expansion to include dot-files, thus: * .* - this will always yield results, since the files . and .. will always be present. So in csh we can use this construct ...
foreach file (* .*)
echo $file
end
if you want to filter out the standard dot-files then that is easy enough ...
foreach file (* .*)
if ("$file" == .) continue
if ("file" == ..) continue
echo $file
end
The code in the first post on this thread would be written thus:-
getlist() {
for f in * .*
do
echo "File found: $f"
# do something useful
done
}
Hope this helps!
Another solution for the job...
The goal was:
select/filter filenames recursively in directories
handle each name (whatever spaces are in the path...)
#!/bin/bash -e
## Trick to handle files with spaces in their paths...
OLD_IFS=${IFS}
IFS=$'\n'
files=($(find ${INPUT_DIR} -type f -name "*.md"))
for filename in "${files[@]}"
do
# do your stuff
# ....
done
IFS=${OLD_IFS}
