How to cp files with spaces in the filename when files are provided by find - linux

I would like to ensure that all files found by find with a given criteria are properly copied to the required location.
$from = '/some/path/to/the/files'
$ext = 'custom_file_extension'
$dest = '/new/destination/for/the/files/with/given/extension'
cp 'find $from -name "*.$ext"' $dest
The problem here is that when a file found with the proper extension contains a space, cp cannot copy it properly.

You don't do that. You can't splat filenames with spaces that way.
You can either use one of the techniques from http://mywiki.wooledge.org/BashFAQ/001 to read the output from find line-by-line or into an array, or you can use find -exec to do the copy work.
Something like this:
from='/some/path/to/the/files'
ext='custom_file_extension'
dest='/new/destination/for/the/files/with/given/extension'
find "$from" -name "*.$ext" -exec cp -t "$dest" {} +
Using -exec command + here means that find will run only as many cp commands as it needs, based on command-line length limits. Using -exec command \; instead would run one cp per file found (but is more portable to older systems).
See comment from gniourf_gniourf about the use of -t in that cp command to make -exec command + work correctly.
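For completeness, here is a minimal sketch of the read-into-an-array route from BashFAQ/001 mentioned above (this assumes bash for arrays and process substitution; the guard avoids calling cp when nothing was found):
# Collect NUL-delimited find results into an array, then copy in one call.
files=()
while IFS= read -r -d '' f; do
files+=("$f")
done < <(find "$from" -name "*.$ext" -print0)
# Only run cp if the array is non-empty.
(( ${#files[@]} )) && cp -- "${files[@]}" "$dest"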

Use -exec:
find "$from" -name "*.$ext" -exec cp {} "$dest" \;

You need to copy the files one by one:
for file in "$from"/*."$ext"; do
cp "$file" "$dest"
done
I just use a glob here, and it's enough and complete. I think find may introduce problems if the file name contains unusual characters.
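If subdirectories should be covered too, here is a sketch of the same glob idea using bash's globstar (off by default) and nullglob, so an empty match doesn't run cp on a literal pattern:
shopt -s globstar nullglob  # ** recurses into subdirectories; nullglob drops patterns that match nothing
for file in "$from"/**/*."$ext"; do
cp "$file" "$dest"
done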

The solution for this sort of problem is xargs -0 and the -print0 flag for find.
-print0 instructs find to print the results with a NUL character termination, instead of a newline, while -0 for xargs tells it expect input in that format.
Finally, the -J option for xargs allows one to put the arguments in the right place for a copy.
find "$from" -name "*.$ext" -print0 | xargs -0 -J % cp % "$dest"
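Note that -J is a BSD xargs option (it is what macOS ships); GNU xargs does not have it. With GNU coreutils, a common equivalent is to let cp -t name the destination up front so argument order no longer matters:
# GNU variant: -t takes the target directory, so xargs can append the file names at the end.
find "$from" -name "*.$ext" -print0 | xargs -0 cp -t "$dest"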

It's better to use -exec argument of find command to do this:
find . -type f -name "*.ext" -exec cp {} ./destination_dir \;
I've checked this case with files containing spaces and it works for me. Also, don't forget to specify -type f if you want to find only files, not directories.

Related

Bash find is showing the files but returning no such file or directory

I have a bash script I cannot get working. I am a dead-set beginner in bash; this is actually the first script I've ever used. I'm trying to get omxplayer to play a list of files in a directory. When the script runs, I get feedback showing the file, then an error that there is no such file or directory. Please help me?
#!/bin/sh
find /media/pi/88DC-E668/MP3/ -name "*.mp3" -exec PLAY={} \;; omxplayer "$PLAY";
This is the echo:
find: `PLAY=/media/pi/88DC-E668/MP3/Dance.mp3': No such file or directory
find: `PLAY=/media/pi/88DC-E668/MP3/Whitemary.mp3': No such file or directory
find: `PLAY=/media/pi/88DC-E668/MP3/Limo.mp3': No such file or directory
find: `PLAY=/media/pi/88DC-E668/MP3/Silo.mp3': No such file or directory
File "" not found.
Easy way:
find /media/pi/88DC-E668/MP3 -name \*.mp3 -exec omxplayer {} \;
or
while IFS= read -r -d '' mp3
do
omxplayer "$mp3"
done < <(find /media/pi/88DC-E668/MP3 -name \*.mp3 -print0)
or
find /media/pi/88DC-E668/MP3 -name \*.mp3 -print0 | xargs -0 -n1 omxplayer
You can omit the -n1 if omxplayer can handle multiple filenames. In that case the 1st could be written as:
find /media/pi/88DC-E668/MP3 -name \*.mp3 -exec omxplayer {} +
but the simplest probably will be
shopt -s globstar # needed for ** to recurse; globstar is off by default in bash
for mp3 in /media/pi/88DC-E668/MP3/{,**/}*.mp3
do
omxplayer "$mp3"
done
EDIT: I stand corrected, but won't delete the answer, as you can also learn from the mistakes of others. See the comment and rather use this answer :)
So please don't do it like this, as this is a typical "happy path" solution - meaning: it works if you know what you're doing and you know your paths (e.g. that they don't contain spaces). I keep forgetting that many people don't know yet that spaces in paths are evil.
Just use xargs to pass what you found to your player like this:
#!/bin/sh
find /media/pi/88DC-E668/MP3/ -name "*.mp3" | xargs omxplayer
The -exec foo part means run the command foo for each path found.
In your case, -exec PLAY={}, the {} part is replaced with the path name, ending up with something like -exec PLAY=/media/pi/88DC-E668/MP3/Dance.mp3, and so find tries to run the command PLAY=/media/pi/88DC-E668/MP3/Dance.mp3, which fails because there isn't actually any such program to execute.
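If you do want a command run per file straight from find (the thing the PLAY= attempt was reaching for), a minimal sketch is to have find start a small shell for each file:
# $1 inside the quoted script is the path find found; the trailing "sh" fills $0.
find /media/pi/88DC-E668/MP3/ -name "*.mp3" -exec sh -c 'omxplayer "$1"' sh {} \;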
xargs is the usual way to do what you're trying to do, as described in another comment already.
You could also do:
find /media/pi/88DC-E668/MP3/ -name \*.mp3 |
while read f; do
omxplayer "$f"
done

How can I search for files in directories that contain spaces in names, using "find"?

How can I search for files in directories that contain spaces in names, using find?
I use this script:
#!/bin/bash
for i in `find "/tmp/1/" -iname "*.txt" | sed 's/[0-9A-Za-z]*\.txt//g'`
do
for j in `ls "$i" | grep sh | sed 's/\.txt//g'`
do
find "/tmp/2/" -iname "$j.sh" -exec cp {} "$i" \;
done
done
but files and directories that contain spaces in their names are not processed.
This will grab all the files that have spaces in them
$ls
more space nospace stillnospace this is space
$find -type f -name "* *"
./this is space
./more space
I don't know how to achieve your goal. But given your actual solution, the problem is not really with find but with the for loops, since spaces are taken as delimiters between items.
find has a useful option for those cases:
from man find:
-print0
True; print the full file name on the standard output, followed by a null character
(instead of the newline character that -print uses). This allows file names
that contain newlines or other types of white space to be correctly interpreted
by programs that process the find output. This option corresponds to the -0
option of xargs.
As the man page says, this will match the -0 option of xargs. Several other standard tools have an equivalent option. You probably have to rewrite your complex pipeline around those tools in order to cleanly process file names containing spaces.
In addition, see bash "for in" looping on null delimited string variable to learn how to use a for loop with null-terminated arguments.
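As a sketch of that -print0 pairing with the question's /tmp/1 path (printf is just a stand-in for the real processing):
# NUL-delimited output survives spaces and even newlines in names.
find /tmp/1 -iname '*.txt' -print0 |
while IFS= read -r -d '' f; do
printf 'found: %s\n' "$f"
done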
Do it like this
find . -type f -name "* *"
Instead of . you can specify your path, where you want to find files with your criteria
Your first for loop is:
for i in `find "/tmp/1" -iname "*.txt" | sed 's/[0-9A-Za-z]*\.txt//g'`
If I understand it correctly, it is looking for all text files in the /tmp/1 directory, and then attempting to remove the file name with the sed command, right? This would cause a single directory with multiple .txt files to be processed by the inner for loop more than once. Is that what you want?
Instead of using sed to get rid of the filename, you can use dirname instead. Also, later on, you use sed to get rid of the extension. You can use basename for that.
for i in `find "/tmp/1" -iname "*.txt"` ; do
path=$(dirname "$i")
for j in `ls "$path" | grep POD` ; do
file=$(basename "$j" .txt)
# Do whatever you want with the file
done
done
This doesn't solve the problem of having a single directory processed multiple times, but if it is an issue for you, you can use the for loop above to store the file name in an array instead and then remove duplicates with sort and uniq.
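For example, a hedged sketch of collecting each parent directory only once (this assumes GNU find for -printf and bash 4+ for mapfile; the echo is a placeholder for the inner loop):
# %h prints the directory part of each match; sort -u removes duplicates.
mapfile -t dirs < <(find /tmp/1 -iname '*.txt' -printf '%h\n' | sort -u)
for path in "${dirs[@]}"; do
echo "would process: $path"
done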
Use while read loop with null-delimited pathname output from find:
#!/bin/bash
while IFS= read -rd '' i; do
while IFS= read -rd '' j; do
find "/tmp/2/" -iname "$j.sh" -exec echo cp '{}' "$i" \;
done < <(exec find "$i" -maxdepth 1 -mindepth 1 -name '*POD*' -not -name '*.txt' -printf '%f\0')
done < <(exec find /tmp/1 -iname '*.txt' -not -iname '[0-9A-Za-z]*.txt' -print0)
Never use for i in $(find ...) or similar, as it'll fail for file names containing white space, as you saw.
Use find ... | while IFS= read -r i instead.
It's hard to say without sample input and expected output but something like this might be what you need:
find "/tmp/1/" -iname "*.txt" |
while IFS= read -r i
do
i="${i%%[0-9A-Za-z]*\.txt}"
for j in "$i"/*sh*
do
j="${j%%\.txt}"
find "/tmp/2/" -iname "$j.sh" -exec cp {} "$i" \;
done
done
The above will still fail for file names that contains newlines. If you have that situation and can't fix the file names then look into the -print0 option for find, and piping it to xargs -0.

Recursively prepend text to file names

I want to prepend text to the name of every file of a certain type - in this case .txt files - located in the current directory or a sub-directory.
I have tried:
find -L . -type f -name "*.txt" -exec mv "{}" "PrependedTextHere{}" \;
The problem with this is dealing with the ./ part of the path that comes with the {} reference.
Any help or alternative approaches appreciated.
You can do something like this
find -L . -type f -name "*.txt" -exec bash -c 'echo "$0" "${0%/*}/PrependedTextHere${0##*/}"' {} \;
Where
bash -c '...' executes the command
$0 is the first argument passed in, in this case {} -- the full filename
${0%/*} removes everything including and after the last / in the filename
${0##*/} removes everything before and including the last / in the filename
Replace the echo with a mv once you're satisfied it's working.
Are you just trying to move the files to a new file name that has Prepend before it?
for F in *.txt; do mv "$F" Prepend"$F"; done
Or do you want it to handle subdirectories and prepend between the directory and file name:
dir1/PrependA.txt
dir2/PrependB.txt
Here's a quick shot at it. Let me know if it helps.
for file in $(find -L . -type f -name "*.txt")
do
parent=$(echo $file | sed "s=\(.*/\).*=\1=")
name=$(echo $file | sed "s=.*/\(.*\)=\1=")
mv "$file" "${parent}PrependedTextHere${name}"
done
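Since this whole thread is about names with spaces, here is a sketch of the same rename done without word-splitting (echo left in as a dry run):
# find hands the paths to sh untouched; dirname/basename split the path safely.
find -L . -type f -name '*.txt' -exec sh -c '
for f in "$@"; do
echo mv "$f" "$(dirname "$f")/PrependedTextHere$(basename "$f")"
done
' sh {} +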
This ought to work, as long as file names do not contain newline character(s). If they might, make find use -print0 and read with a null delimiter instead (IFS itself cannot hold a NUL in the shell).
#!/bin/sh
IFS='
'
for I in $(find -L . -name '*.txt' -print); do
echo mv "$I" "${I%/*}/prepend-${I##*/}"
done
P.S. Remove the echo to make the script effective; it's there to avoid accidental breakage for people who randomly copy-paste stuff from here into their shell.

How to use find to make copies of file with prefix in CSH?

I am trying to make copies of certain file and let them have a prefix.
In order to do it I thought of using find. For our use, let's call them kuku files, and I want them to have a "foo" prefix:
find . -maxdepth 1 -name "kuku*" -exec cp '{}' foo_'{}' \;
but it doesn't work because find always starts the results with ./ so I get a lot of error messages saying "cp: cannot create regular file `foo_./kuku...`: No such file or directory".
The problem is solvable by using foreach f (`ls`) and then using grep and the status variable, but it is cumbersome and I want to learn a better solution (and improve my knowledge of the find command along the way...).
update foreach solution (which I don't like and want your help in finding a replacement):
foreach f (`ls`)
echo $f | grep -lq kuku
if (! $status) then
cp $f foo_$f
endif
end
but this is UGLY! (end of update)
as the header says, I'm using csh - not because I love it, just because that's what we use at work...
update
Trying to use basename as a solution, because find -exec basename '{}' \; removes the ./ prefix, but I failed using basename inside the find with backticks (`), meaning that
find -name "kuku*" -exec cp '{}' foo_`basename '{}'` \;
simply doesn't work.
Here you go. I have tested it on my Linux box:
find . -name "kuku*" -exec sh -c 'cp {} foo_`basename {}`' \;
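A variant keeping the question's -maxdepth 1, with echo as a dry run (this works fine when launched from csh, since find starts sh itself):
# basename strips the leading ./ so the foo_ prefix lands on the bare name.
find . -maxdepth 1 -name "kuku*" -exec sh -c 'echo cp "$1" "foo_$(basename "$1")"' sh {} \;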

Loop over file names from `find`?

If I run this command:
sudo find . -name *.mp3
then I can get a listing of lots of mp3 files.
Now I want to do something with each mp3 file in a loop. For example, I could create a while loop, and inside assign the first file name to the variable file. Then I could do something with that file. Next I could assign the second file name to the variable file and do with that, etc.
How can I realize this using a linux shell command? Any help is appreciated, thanks!
For this, use the read builtin:
sudo find . -name '*.mp3' |
while read filename
do
echo "$filename" # ... or any other command using $filename
done
Provided that your filenames don't use the newline (\n) character, this should work fine.
My favourites are
find . -name '*.mp3' -exec cmd {} \;
or
find . -name '*.mp3' -print0 | xargs -0 cmd
While Loop
As others have pointed out, you can frequently use a while read loop to read filenames line by line; it has the drawback of not allowing line ends in filenames (who uses those?).
xargs vs. -exec cmd {} +
Summarizing the comments saying that -exec...+ is better, I prefer xargs because it is more versatile:
works with other commands than just find
allows 'batching' (grouping) in command lines, say xargs -n 10 (ten at a time)
allows parallelizing, say xargs -P4 (max 4 concurrent processes running at a time); see the sketch after this list
does privilege separation (such as in the OP's case, where he uses sudo find: using -exec would run all commands as the root user, whereas with xargs that isn't necessary):
sudo find -name '*.mp3' -print0 | sudo xargs -0 require_root.sh
sudo find -name '*.mp3' -print0 | xargs -0 nonroot.sh
in general, pipes are just more versatile (you can add logging, sorting, remoting, caching, checking, parallelizing, etc.)
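For instance, a sketch combining the batching and parallel flags mentioned above (cmd stands for whatever you want to run, as in the earlier examples in this answer):
# Ten files per invocation, at most four invocations running at once.
find . -name '*.mp3' -print0 | xargs -0 -n 10 -P 4 cmd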
How about using the -exec option to find?
find . -name '*.mp3' -exec mpg123 '{}' \;
That will call the command mpg123 for every file found, i.e. it will play all the files, in the order they are found.
for file in $(sudo find . -name *.mp3);
do
# do something with file
done
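As the earlier answers explain, that loop word-splits on spaces; a space-safe sketch of the same idea:
# Read NUL-delimited names so spaces (and newlines) survive.
sudo find . -name '*.mp3' -print0 |
while IFS= read -r -d '' file; do
echo "processing: $file"   # do something with "$file"
done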
