How to handle spaces in MV command [duplicate] - linux

I have the following script that renames files based on a specific file name string. The script was working fine until I had to apply it to a directory that contains a folder name with a space.
Script:
for file in `find /home/splunkLogs/Captin/PPM\ Images/PXT -type f -name '*.jpg'`; do mv -v "$file" "${file/-0.jpg/_Page_1.jpg}"; done
You'll notice the folder name "PPM Images", which contains a space. I added a backslash so the path would be readable, but I get the error "mv: cannot stat /home/splunkLogs/Captin/PPM: No such file or directory". I also tried putting the folder name in quotes in the path and received the same error. Can anyone guide me with a solution for handling filename spaces with the mv command?

Do not read lines using for; see https://mywiki.wooledge.org/BashFAQ/001 for the details.
find /home/splunkLogs/Captin/PPM\ Images/PXT -type f -name '*.jpg' |
while IFS= read -r file; do
    mv -v "$file" "${file/-0.jpg/_Page_1.jpg}"
done
or better:
find /home/splunkLogs/Captin/PPM\ Images/PXT -type f -name '*.jpg' -print0 |
while IFS= read -r -d '' file; do
    mv -v "$file" "${file/-0.jpg/_Page_1.jpg}"
done
Also, do not use backticks (`); using $(...) instead is greatly preferred.
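An alternative that avoids both the pipe and the word-splitting problem is to let find start the shell itself. A minimal sketch, assuming bash is available (the ${f/...} substitution is a bashism):
find /home/splunkLogs/Captin/PPM\ Images/PXT -type f -name '*-0.jpg' \
    -exec bash -c 'for f; do mv -v "$f" "${f/-0.jpg/_Page_1.jpg}"; done' _ {} +
Here find passes the matched file names as arguments rather than through a text stream, so spaces, newlines, and other special characters never get a chance to be mangled.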

Related

How to read out a file line by line and for every line do a search with find and copy the search result to destination?

I hope you can help me with the following problem:
The Situation
I need to find files in various folders and copy them to another folder. The files and folders can contain white spaces and umlauts.
The filenames contain an ID and a string like:
"2022-01-11-02 super important file"
The filenames I need to find are collected in a textfile named ids.txt. This file only contains the IDs but not the whole filename as a string.
What I want to achieve:
I want to read out ids.txt line by line.
For every line in ids.txt I want to do a find search and cp the result to the destination.
So far I tried:
for n in $(cat ids.txt); do find /home/alex/testzone/ -name "$n" -exec cp {} /home/alex/testzone/output \; ; done
while read -r ids; do find /home/alex/testzone -name "$ids" -exec cp {} /home/alex/testzone/output \; ; done < ids.txt
The output folder remains empty. Not using -exec also gives no search results.
I was thinking that -name "$ids" is the root cause here. My files contain the ID plus a string, so I should search for names containing the ID plus a variable string (a * wildcard).
As the argument for -name I also tried "$ids *", "$ids"" *", and so on, with no luck.
Is there an argument that I can use in conjunction with find instead of using the star in the -name argument?
Do you have a solution to automate this process in a bash script: read the ids.txt file, search for the filenames, and copy them to the specified folder?
In the end I would like to create a bash script that takes ids.txt and the search-folder and the output-folder as arguments like:
my-id-search.sh /home/alex/testzone/ids.txt /home/alex/testzone/ /home/alex/testzone/output
EDIT:
This is some example content of the ids.txt file where only ids are listed (not the whole filename):
2022-01-11-01
2022-01-11-02
2020-12-01-62
EDIT II:
Going on with the solution from tripleee:
#!/bin/bash
grep . $1 | while read -r id; do
    echo "The search term is: $id"; echo
    find /home/alex/testzone -name "$id*" -exec cp {} /home/alex/testzone/ausgabe \;
done
In case my ids.txt file contains empty lines, the -name "$id*" becomes -name "*", which in turn finds and copies all files.
Trying to prevent the empty lines from being read does not seem to work. They should be filtered by the grep . $1 | expression. What am I doing wrong?
If your destination folder is always the same, the quickest and absolutely most elegant solution is to run a single find command to look for all of the files.
sed 's/.*/-o\n-name\n&*/' ids.txt |
xargs sh -c 'find /home/alex/testzone -false "$@" -exec cp -t /home/alex/testzone/output {} +' _
The -false predicate is a bit of a hack to allow the list of actual predicates to start with -o (as in "or").
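To make the -false/-o trick concrete: for the three sample IDs above, the pipeline builds the equivalent of the following single command (patterns quoted here only for readability):
find /home/alex/testzone -false \
    -o -name '2022-01-11-01*' \
    -o -name '2022-01-11-02*' \
    -o -name '2020-12-01-62*' \
    -exec cp -t /home/alex/testzone/output {} +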
This could fail if ids.txt is too large to fit into a single xargs invocation, or if your sed does not understand \n to mean a literal newline.
(Here's a fix for the latter case:
xargs printf '-o\n-name\n%s*\n' <ids.txt |
...
Still, the inherent problem with using xargs find like this is that xargs could split the list between -o and -name, or between -name and the actual file name pattern, if it needs to run more than one find command to process all the arguments.
A slightly hackish solution to that is to ensure that each pair is a single string, and then separately split them back out again:
xargs printf '-o_-name_%s*\n' <ids.txt |
xargs bash -c 'arr=("$@"); find /home/alex/testzone -false ${arr[@]/-o_-name_/-o -name } -exec cp {} "$0" \;' /home/alex/testzone/ausgabe
where we temporarily hold the arguments in an array in which each file name pattern and its flags form a single item, and then split the flags back out into separate tokens. This still won't work correctly if the file names you operate on contain literal shell metacharacters like * etc.)
A more mundane solution fixes your while read attempt by adding the missing wildcard in the -name argument. (I also took the liberty to rename the variable, since read will only read one argument at a time, so the variable name should be singular.)
while read -r id; do
    find /home/alex/testzone -name "$id*" -exec cp {} /home/alex/testzone/output \;
done < ids.txt
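If the empty lines from EDIT II remain a concern, a guard inside the loop is a simple extra safeguard; a minimal sketch:
while read -r id; do
    [ -n "$id" ] || continue   # skip empty lines outright
    find /home/alex/testzone -name "$id*" -exec cp {} /home/alex/testzone/output \;
done < ids.txt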
Please try the following bash script copier.sh
#!/bin/bash
IFS=$'\n'          # make newlines the only separator
set -f             # disable globbing
file="files.txt"   # name of the file containing the filenames
finish="finish"    # destination directory
while read -r n; do (
    du -a | awk '{for(i=2;i<=NF;++i)printf $i" " ; print " "}' | grep "$n" | sed 's/ *$//g' | xargs -I '{}' cp '{}' "$finish"
);
done < "$file"
which recursively copies all the files named in files.txt from . and its subdirectories to ./finish.
This new version works even if there are spaces in the directory names or file names.

Linux FIND searching files with names within single quotes [duplicate]

I am trying to save the results of the find command into an array so that I can then run some awk commands on them.
My actual code is:
files_arr=( "$(find "$1" -type f \( -name "\'*[[:space:]]*\'" -o -name "*" \) -perm -a=r -print)" )
This code should find all files, with or without spaces in their names (and readable, too), and return them in the array.
The PROBLEM is: when I have a directory named 'not easy', and inside this directory are the files 'file one' and 'file two', what I get is: not easy/file one
What I want to get is: 'not easy'/'file one'. I was thinking about using SED to add quotes, but it would add quotes even to a simple one-word filename that doesn't need them.
Thank you for your advice.
Try this out:
mapfile -d '' files_arr < <(find . -type f -name "'*[[:space:]]*'" -perm -a=r -print0)
declare -p files_arr # To see what's in the array
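Once the array is populated, quote the expansion wherever you use it; a brief usage sketch that prints each entry on its own line, ready to be piped into awk:
for f in "${files_arr[@]}"; do
    printf '%s\n' "$f"   # each entry survives intact, spaces and all
done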

Copying a type of file, in specific directories, to another directory

I have a .txt file that contains a list of directories. I want to make a script that goes through this .txt file and copies anything of a certain file type in the listed directories to another directory.
I've never done this with directories, only files.
How can I edit this simple script to read a directory list, look for .csv files, and copy them to another directory?
cat filenames.list | \
while read FILENAME
do
    find . -name "$FILENAME" -exec cp '{}' new_dir \;
done
for DIRNAME in $(cat dirname.list); do find "$DIRNAME" -type f -name "*.csv" -exec cp {} dest \; ; done
Sorry, in my first answer I didn't understand what you were asking for.
The first line of code simply takes each entry in your directory list as a path and searches it for files ending with the ".csv" extension, then copies them to the destination you want.
But you could do it with less code:
for DIRNAME in $(cat dirname.list); do cp "$DIRNAME"/*.csv dest ; done
Despite the filename of the list, filenames.list, let me assume the file contains the list of directory names, not filenames. Then would you please try:
while IFS= read -r dir; do
    find "$dir" -type f -name "*.mp3" -exec cp -p -- {} new_dir \;
done < filenames.list
The find command searches in "$dir" for files which have the extension .mp3, then copies them to new_dir.
The script above does not handle duplicate filenames. If you want to keep the original directory tree and/or need a countermeasure for duplicate filenames, please let me know.
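For what it's worth, one possible countermeasure against duplicate filenames, assuming GNU cp, is to keep numbered backups instead of silently overwriting:
while IFS= read -r dir; do
    find "$dir" -type f -name "*.mp3" -exec cp -p --backup=numbered -- {} new_dir \;
done < filenames.list
With this, a second file named a.mp3 lands in new_dir as a.mp3.~1~ rather than replacing the first.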
Using find inside a while loop works, but find will run once per line of the file. An alternative is to save the list in an array; that way find can search all the directories in the list in a single invocation.
If you have bash 4+ you can use mapfile.
mapfile -t directories < filenames.list
If you're stuck on bash 3:
directories=()
while IFS= read -r line; do
    directories+=("$line")
done < filenames.list
Now if you're just after one file type, like files ending in *.csv:
find "${directories[@]}" -type f -name '*.csv' -exec sh -c 'cp -v -- "$@" /newdirectory' _ {} +
If you have multiple file types to match and multiple target directories to copy the files to:
while IFS= read -r -d '' file; do
    case $file in
        *.csv) cp -v -- "$file" /foodirectory;;  ##: csv file, copy to foodirectory
        *.mp3) cp -v -- "$file" /bardirectory;;  ##: mp3 file, copy to bardirectory
        *.avi) cp -v -- "$file" /bazdirectory;;  ##: avi file, copy to bazdirectory
    esac
done < <(find "${directories[@]}" -type f -print0)
find's -print0 works with read's -d '' when dealing with file names containing white space and newlines; see How can I find and deal with file names containing newlines, spaces or both?
The -- is there so that if you have a problematic filename that starts with a dash, cp will not interpret it as an option.
Given find's ability to process multiple folders, and assuming the goal is to 'flatten' all csv files into a single destination, consider the following.
Note that it assumes folder names do not contain special characters (including spaces, tabs, newlines, etc.).
As a side benefit, it will minimize the number of cp calls, making the process efficient across a large number of files/folders.
find $(<filenames.list) -name '*.csv' | xargs cp -t DESTINATION/
For the more complex case, where folder names/file names can be anything (including spaces, '*', etc.), consider using the NUL separator (-print0 and -0):
xargs -I{} -t find '{}' -name '*.csv' -print0 < filenames.list | xargs -0 -I{} -t cp -t new/ '{}'
This will fork multiple finds (one per directory) and multiple cps (one per file).
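A middle ground, sketched under the same one-directory-per-line assumption, keeps NUL separation end to end while still letting xargs batch the copies (GNU cp's -t again assumed):
while IFS= read -r dir; do
    find "$dir" -type f -name '*.csv' -print0   # NUL-separated matches per directory
done < filenames.list |
xargs -0 cp -t DESTINATION/
Only one cp is forked per batch of files, and names with spaces or newlines pass through untouched.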

Remove Brackets from all the filenames in Linux

I have a one-liner that removes the brackets from filenames:
$ find . -name "*.mkv" -exec rename 's/[\)\(]//g' {} \;
I have managed to make a statement that removes all the () from the filenames in a directory, but whenever I run it on a directory like, for example, amazing.(2018), it fails with an error that the directory cannot be found.
Please provide an alternative; I need this to work, and I want it to be recursive.
Better call sed:
find . -type f -iname "*.mkv" |
while IFS= read -r line; do
    mv "$line" "$(printf %s "$line" | sed -re 's/(\[|\])//g')"
done
input:
'1[a].mkv' '2[a].mkv' '3[a].mkv'
output:
1a.mkv 2a.mkv 3a.mkv
Ensure that you run find with the -depth parameter, so that leaves are renamed first, i.e., a directory name is changed only after all files and directories inside it have been renamed. Otherwise, if you first change the name of a directory, the files and directories inside it are no longer accessible (they are not in the list that find built at the beginning).
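Putting the two answers together, a minimal sketch for the original parentheses problem, assuming GNU find and the perl-based rename: -depth renames contents before their parent directory, and -execdir renames each entry inside its own directory so already-renamed parents never invalidate the path.
find . -depth -name '*[()]*' -execdir rename 's/[()]//g' {} \;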

Rename all files within a folder prefixed with foldername [duplicate]

I have a bunch of directories each having multiple files.
dir1
|- part1.txt
|- part2.txt . . .
dir2
|- part1.txt
|- part2.txt . . .
I want to rename the internal files (part1.txt and so on) to something like dir1_part1.txt. How can this be done in Ubuntu?
That question explains how a suffix or prefix can be added or removed. But how do I add the directory name as a prefix?
There is a tool called perl-rename, sometimes called rename, not to be confused with rename from util-linux. This tool takes a perl expression and renames accordingly:
perl-rename 's~/~_~' dir1/* dir2/*
The above will rename and move all files in dir1 and dir2 to the following:
dir1/file1 -> dir1_file1
dir1/file2 -> dir1_file2
dir1/file3 -> dir1_file3
You can play with the regex online
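If you want to preview a substitution before committing to it, the perl-based rename accepts -n for a dry run; a quick sketch:
perl-rename -n 's~/~_~' dir1/* dir2/*
This prints the planned renames without touching any files.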
A simple bash script using find and parameter expansion to derive the directory name.
#!/bin/bash
find . -name "*.csv" -type f -printf '%f\n' |
while read -r x; do
    mv -v "$x" "${PWD##*/}_$x"
done
To handle files with special characters:
To handle file names that contain newlines, other kinds of white space, or other special characters, I am adopting -print0 from find and reading with the empty delimiter ''.
I am using parameter expansion again, to strip the leading ./ from the find output.
#!/bin/bash
find . -name "*.csv" -type f -print0 |
while IFS= read -r -d '' x; do
    x="${x:2}"
    mv -v "$x" "${PWD##*/}_$x"
done
A working example:
[dude@csv_folder]$ ls *.csv
1.csv 2.csv 3.csv 4.csv
[dude@csv_folder]$ ./myScript.sh
`1.csv' -> `csv_folder_1.csv'
`2.csv' -> `csv_folder_2.csv'
`3.csv' -> `csv_folder_3.csv'
`4.csv' -> `csv_folder_4.csv'
Okay, here is a simple example.
$ mkdir dir{1..5}
$ touch dir{1..5}/part{1..5}.txt
# the two commands above create the files for testing
# then do the rename action
$ for f in dir*/*; do mv -v "$f" "$(dirname "$f")/$(dirname "$f")_$(basename "$f")"; done
