This question already has answers here:
How can I store the "find" command results as an array in Bash
(8 answers)
Closed 1 year ago.
I am trying to save the results of the find command into an array so that I can then run some awk commands on the files.
My actual code is:
files_arr=( "$(find "$1" -type f \( -name "\'*[[:space:]]*\'" -o -name "*" \) -perm -a=r -print)" )
This code should find all files, with or without spaces in their names (and that are readable), and store them in the array.
The PROBLEM is: when I have a directory named 'not easy', and inside this directory are the files 'file one' and 'file two', what I get is: not easy/file one
What I want to get is: 'not easy'/'file one'. I was thinking about using sed to add quotes, but it would add quotes even to a simple one-word file name that doesn't need them.
Thank you for your advice.
Try this out (note that the single quotes inside your -name pattern would be matched literally, and since you want files both with and without spaces, the -name tests can be dropped entirely):
mapfile -d '' files_arr < <(find . -type f -perm -a=r -print0)
declare -p files_arr # To see what's in the array
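Once the array is filled this way, entries with spaces stay intact. A minimal sketch of the awk step you mentioned, with a placeholder awk program:
for f in "${files_arr[@]}"; do
    awk '{ print FILENAME ": " $0 }' "$f"    # placeholder: prefix each line with its file name
done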
This question already has answers here:
Moving multiple files having spaces in name (Linux)
(3 answers)
Exactly how do backslashes work within backticks?
(2 answers)
Closed 2 years ago.
I have the following script that renames files based on a specific file name string. The script was working fine until I had to apply it to a directory that contains a folder name with a space.
Script:
for file in `find /home/splunkLogs/Captin/PPM\ Images/PXT -type f -name '*.jpg'`; do mv -v "$file" "${file/-0.jpg/_Page_1.jpg}"; done
You'll notice the folder name "PPM Images", which has a space. I added a backslash so the path would be readable, but I get the error "mv: cannot stat /home/splunkLogs/Captin/PPM: No such file or directory". I also tried putting the folder name in quotes in the path and received the same error. Can anyone guide me to a solution for handling file names with spaces with the mv command?
So do not read lines using for. Read https://mywiki.wooledge.org/BashFAQ/001.
find /home/splunkLogs/Captin/PPM\ Images/PXT -type f -name '*.jpg' |
while IFS= read -r file; do
    mv -v "$file" "${file/-0.jpg/_Page_1.jpg}"
done
or better:
find /home/splunkLogs/Captin/PPM\ Images/PXT -type f -name '*.jpg' -print0 |
while IFS= read -r -d '' file; do
    mv -v "$file" "${file/-0.jpg/_Page_1.jpg}"
done
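The ${file/-0.jpg/_Page_1.jpg} expansion replaces the first occurrence of -0.jpg in the path; a quick illustration with a made-up file name:
file='/home/splunkLogs/Captin/PPM Images/PXT/scan-0.jpg'
echo "${file/-0.jpg/_Page_1.jpg}"    # /home/splunkLogs/Captin/PPM Images/PXT/scan_Page_1.jpg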
Do not use backticks `...`. Using $(...) instead is greatly preferred.
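A small illustration of why (the path is made up):
# $(...) nests without any extra escaping:
outer=$(basename "$(dirname "/some/path/file.txt")")    # -> path
# The backtick form needs escaped inner backticks and is much harder to read:
outer=`basename \`dirname /some/path/file.txt\``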
This question already has answers here:
How to loop through file names returned by find?
(17 answers)
Closed 4 years ago.
I want to loop through jpg files, using find to locate them.
The result is the full path name including ./ in front. I want to replace the ./ with ./newsubdir/ so I can use that as the output file name in a process, creating a modified copy of the original in newsubdir with the same folder structure.
This is what I tried:
#!/bin/bash
find . -type f -name '*jpg'
for file do
    echo ${file:1}
done
However, the substring extraction didn't seem to work at all. Is there a reason for that, or a different way to do this? I'm very new to Bash.
I was going for something like this as an end result, trying to square a bunch of pictures but keep the folder structure:
#!/bin/bash
find . -type f -name '*jpg'
for file do
    convert '$file[2048x2048]' -gravity center -extent 2048x2048 "./newsubdir${file:1}"
done
You were close! Sticking a little closer to the original code (and thus avoiding starting more shells than necessary):
#!/bin/bash
find . -type f -name '*.jpg' -exec bash -c '
    for file do
        convert "$file[2048x2048]" -gravity center -extent 2048x2048 "./newsubdir${file:1}"
    done
' _ {} +
...or, using your original shell and avoiding -exec entirely:
#!/bin/bash
while IFS= read -r -d '' file; do
    convert "$file[2048x2048]" -gravity center -extent 2048x2048 "./newsubdir${file:1}"
done < <(find . -type f -name '*.jpg' -print0)
These patterns and more are covered on the UsingFind wiki page (https://mywiki.wooledge.org/UsingFind).
find . -type f -name '*jpg' -exec bash -c '
    file=$1
    convert "${file}[2048x2048]" -gravity center -extent 2048x2048 "./newsubdir/${file:1}"
' _ {} \;
Frankly, I think you're much better off writing a script to do the conversion and just calling it with find . -type f -name '*.jpg' -exec script {} \;. Doing that will help to avoid inevitable quoting problems.
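A sketch of that suggestion, assuming a helper named square-one.sh (the script name and the mkdir -p step are mine, not from the answer; mkdir -p matters because convert will not create missing output directories):
#!/bin/bash
# square-one.sh -- hypothetical helper: square one image into ./newsubdir, keeping the folder structure
file=$1
out="./newsubdir${file:1}"       # turns ./sub/pic.jpg into ./newsubdir/sub/pic.jpg
mkdir -p "$(dirname "$out")"     # create the target subfolder if needed
convert "$file[2048x2048]" -gravity center -extent 2048x2048 "$out"
You would then call it as find . -type f -name '*.jpg' -exec ./square-one.sh {} \; and all the quoting stays inside the script.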
I want to find files whose names start with show and that were created in a particular month. I have tried the following:
for i in `find /home/data -type d -name "$MONTH"`;
do find $i -type f -name "show*" -printf "%h\n"|uniq >tempfile1;
done;
but I get this error:
-bash: /home/data/testdata/2017/Apr/25: Is a directory
How can I fix that?
If you run that small bit through ShellCheck, several issues become apparent:
Instead of looping over the output of find, use -exec.
You should enclose $i in quotes to prevent globbing and word splitting; that may be the cause of your issue.
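Putting both suggestions together might look like this (a sketch under the same assumptions as your loop; like your original, each matching directory overwrites tempfile1, so switch > to >> if you want to collect everything):
find /home/data -type d -name "$MONTH" -exec sh -c '
    find "$1" -type f -name "show*" -printf "%h\n" | uniq > tempfile1
' _ {} \;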
This question already has answers here:
How to insert strings containing slashes with sed? [duplicate]
(11 answers)
Closed 5 years ago.
I need a quick command (Linux or Windows) to replace every \\ with a /, and all my attempts with sed failed because of the /.
(I already tried find . -name '*.*' -exec sed -i 's/\\///g' {} \;, but I think it failed because of the /.)
find . -name '*.*' -type f -exec sed -i 's:\\\\:/:g' {} \;
You need to escape each backslash, and using a colon or comma as the separator is generally recommended when the replacement involves forward slashes. However, escaping the forward slash works too:
find . -name '*.*' -type f -exec sed -i 's/\\\\/\//g' {} \;
As pointed out in the comments, the OS module is probably what you really need to look at.
Edit: thanks to @tripleee for reminding me of the -type f test, which limits the matches to files rather than including the current directory.
Also, I copied the syntax *.* from the OP, but in general it isn't helpful: * alone is usually what you want, since files aren't guaranteed to have a dot in their name. Assuming you are happy to include files without a dot, the simplest thing to do here is to have no -name at all:
find . -type f -exec sed -i 's:\\\\:/:g' {} \;
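To sanity-check the expression on a single made-up sample string before running it over a whole tree:
$ printf 'C:\\\\Users\\\\me\\\\notes.txt\n'
C:\\Users\\me\\notes.txt
$ printf 'C:\\\\Users\\\\me\\\\notes.txt\n' | sed 's:\\\\:/:g'
C:/Users/me/notes.txt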
How can I search for files in directories that contain spaces in names, using find?
I use this script:
#!/bin/bash
for i in `find "/tmp/1/" -iname "*.txt" | sed 's/[0-9A-Za-z]*\.txt//g'`
do
    for j in `ls "$i" | grep sh | sed 's/\.txt//g'`
    do
        find "/tmp/2/" -iname "$j.sh" -exec cp {} "$i" \;
    done
done
but files and directories with spaces in their names are not processed.
This will grab all the files that have spaces in their names:
$ ls
more space  nospace  stillnospace  this is space
$ find -type f -name "* *"
./this is space
./more space
I don't know how to achieve your goal, but given your actual solution, the problem is not really with find but with the for loops, since spaces are taken as delimiters between items.
find has a useful option for those cases:
from man find:
-print0
True; print the full file name on the standard output, followed by a null character
(instead of the newline character that -print uses). This allows file names
that contain newlines or other types of white space to be correctly interpreted
by programs that process the find output. This option corresponds to the -0
option of xargs.
As the man page says, this pairs with the -0 option of xargs. Several other standard tools have an equivalent option. You probably have to rewrite your pipeline around those tools in order to cleanly process file names containing spaces.
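For instance, a generic pairing of the two (made-up paths, not your full pipeline):
# Copy every .txt file under /tmp/1 into /tmp/backup, names with spaces included:
find /tmp/1 -type f -name '*.txt' -print0 | xargs -0 -I{} cp {} /tmp/backup/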
In addition, see bash "for in" looping on null delimited string variable to learn how to use for loop with 0-terminated arguments.
Do it like this
find . -type f -name "* *"
Instead of ., you can specify the path where you want find to look for files matching your criteria.
Your first for loop is:
for i in `find "/tmp/1" -iname "*.txt" | sed 's/[0-9A-Za-z]*\.txt//g'`
If I understand it correctly, it is looking for all text files in the /tmp/1 directory and then attempting to remove the file name with the sed command, right? This would cause a single directory with multiple .txt files to be processed by the inner for loop more than once. Is that what you want?
Instead of using sed to get rid of the file name, you can use dirname. Also, later on, you use sed to get rid of the extension; you can use basename for that.
for i in `find "/tmp/1" -iname "*.txt"`; do
    path=$(dirname "$i")
    for j in `ls "$path" | grep POD`; do
        file=$(basename "$j" .txt)
        # Do whatever you want with the file
    done
done
This doesn't solve the problem of having a single directory processed multiple times, but if it is an issue for you, you can use the for loop above to store the file name in an array instead and then remove duplicates with sort and uniq.
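One way to do that deduplication (a sketch; it assumes no file names contain newlines):
dirs=()
while IFS= read -r i; do
    dirs+=("$(dirname "$i")")        # collect the directory of each .txt file
done < <(find /tmp/1 -iname '*.txt')
# Remove duplicates with sort -u and load the result back into the array:
mapfile -t dirs < <(printf '%s\n' "${dirs[@]}" | sort -u)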
Use a while read loop with the null-delimited pathname output from find:
#!/bin/bash
while IFS= read -rd '' i; do
    while IFS= read -rd '' j; do
        # echo prints the cp commands for a dry run; remove it to actually copy
        find "/tmp/2/" -iname "$j.sh" -exec echo cp '{}' "$i" \;
    done < <(exec find "$i" -maxdepth 1 -mindepth 1 -name '*POD*' -not -name '*.txt' -printf '%f\0')
done < <(exec find /tmp/1 -iname '*.txt' -not -iname '[0-9A-Za-z]*.txt' -print0)
Never use for i in $(find ...) or similar, as it'll fail for file names containing white space, as you saw.
Use find ... | while IFS= read -r i instead.
It's hard to say without sample input and expected output but something like this might be what you need:
find "/tmp/1/" -iname "*.txt" |
while IFS= read -r i
do
i="${i%%[0-9A-Za-z]*\.txt}"
for j in "$i"/*sh*
do
j="${j%%\.txt}"
find "/tmp/2/" -iname "$j.sh" -exec cp {} "$i" \;
done
done
The above will still fail for file names that contain newlines. If you have that situation and can't fix the file names, then look into the -print0 option for find, and piping its output to xargs -0.