cp command failing in Linux

I am facing a copy command problem while executing a shell script on RHEL 5.
The executed command is:
cp -fp /fir1/dir2/*/bin/file1 `find . -name file1 -print`
The error is:
cp: Target ./6e0476aec9667638c87da1b17b6ccf46/file1 must be a directory
Could you please offer some ideas as to why it is failing?
Thanks
Robert.

When cp is called with more than two filenames as arguments, it treats the last one as a target directory, and copies all the files named in the other arguments into that target directory. So, for example,
cp file1 file2 dir3
will create dir3/file1 and dir3/file2. It seems that in your case, the pattern /fir1/dir2/*/bin/file1 matches more than one filename, so cp treats the output of find as a target directory, which it isn't, and fails. (The same thing happens if find itself prints more than one path; in either case only the very last argument is taken as the target.)
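If the intent is to overwrite every file1 found under the current directory with a single source copy, a minimal sketch of a workaround is to loop over the find results instead of passing them all to one cp call (the source path below is hypothetical and stands in for the single file the glob was meant to match):
src=/fir1/dir2/someapp/bin/file1      # hypothetical single source path instead of the glob
find . -name file1 -print | while IFS= read -r target; do
    cp -fp "$src" "$target"           # copy the source over each match, one at a time
done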

You can't copy many files to one location unless that location is a directory.
cp should be used like this: cp sourcefile destinationfile, or cp source1 source2 destinationdir.

As the others said, you cannot copy multiple files to one file using cp. On the other hand, if you want to append the content of multiple files together into one destination file, you can use cat.
For instance:
cat file1 file2 file3 > destinationfile

It is hard to answer without knowing what you are trying to achieve.
If, for example, you want to copy all files named "file1" within a directory structure to a target place /tmp, building the same directory structure there, this command will do the trick:
cd /dir1/dir2
find . -name file1 | cpio -pvd /tmp

You cannot copy multiple files to a file, only to a directory, i.e.
cp file1 file2 file3 file4
is not possible; you need
cp file1 file2 file3 dir1

Related

Bash script to sort files into sub folders based on extension

I have the following structure:
FolderA
    Sub1
    Sub2
    filexx.csv
    filexx.doc
FolderB
    Sub1
    Sub2
    fileyy.csv
    fileyy.doc
I want to write a script that will move the .csv files into the folder sub1 for each parent directory (Folder A, Folder B and so on) giving me the following structure:
FolderA
    Sub1
        filexx.csv
    Sub2
        filexx.doc
FolderB
    Sub1
        fileyy.csv
    Sub2
        fileyy.doc
This is what I have so far, but I get the error mv: cannot stat *.csv: No such file or directory
for f in */*/*.csv; do
    mv -v "$f" */*/Sub1;
done
for f in */*/*.doc; do
    mv -v "$f" */*/Sub2;
done
I am new to bash scripting, so please forgive me if I have made a very obvious mistake. I know I can do this in Python as well, but it will be lengthier, which is why I would like a solution using Linux commands.
find . -name "*.csv" -type f -execdir mv '{}' Sub1/ \;
Using find, search for all files with the extension .csv; when one is found, execute a move command from within the directory containing it, moving the file to the directory Sub1.
find . -name "*.doc" -type f -execdir mv '{}' Sub2/ \;
Follow the same principle for files with the extension .doc, but this time move the files to Sub2.
I believe you are getting this error because no file matched your wildcard. When that happens, the for loop gives $f the value of the wildcard itself, so you are basically trying to move a file literally named *.csv, which does not exist.
To prevent this behavior, you can add shopt -s nullglob at the top of your script. With it, if no file is found, your script won't enter the loop at all.
My advice is to make sure you run your script from the correct location when using wildcards like this. But maybe what you meant by writing */*/*.csv is to recursively match all the csv files; if that's what you intended, this is not the right way to do it.
To recursively match all csv/doc/etc. files using native bash, you can add shopt -s globstar to the top of your script and use **/*.csv as the wildcard:
#!/bin/bash
shopt -s globstar nullglob
for f in **/*.csv; do
    mv "$f" Destination/  # Note that $f is quoted to handle whitespace in filenames
done
You could also use the find(1) utility to achieve that, but if you're planning to do more processing on the files than just moving them, a for loop might be cleaner, as you won't have to inline everything in the same command.
Side note: "Linux commands", as you say, are actually not Linux commands; they are part of the GNU utilities (https://www.gnu.org/gnu/linux-and-gnu.en.html).
If the csv files you want to move are in the top-level directories (from the point of view of the current directory), but not in their subdirectories, then simply:
#!/bin/bash
for dir in */; do
    mv -v "$dir"*.csv "${dir}Sub1/"
    mv -v "$dir"*.doc "${dir}Sub2/"
done
If you want to move the files in all subdirectories similarly, then:
shopt -s globstar
for file in **/*.csv; do
    mv -v "$file" "${file%/*}/Sub1/"
done
for file in **/*.doc; do
    mv -v "$file" "${file%/*}/Sub2/"
done
Note that the directories Sub1 and Sub2 are relative to the directory where the csv and doc files reside.

Copy files from multiple folders, including the folder name, in Linux

I have multiple subfolders, e.g.:
ls ./
F1 F2 F5 F8 F12 ...
Each folder contains a file "file.txt".
How can I copy all file.txt files to the main folder, with the folder name included in the copied file's name?
cp ./F1/file.txt ./file_1.txt
cp ./F2/file.txt ./file_2.txt
...
Perl one-liner
First go to the main folder, then:
find . | perl -a -F/ -lne 'qx(cp -r "$F[1]" T/ )'
Note: do not worry about any log output on the screen, if there is any.
T/ is your target directory.
The main folder is where all your files exist. If all your files are in the folder Music, for example, cd to Music and then run the Perl one-liner.
declare -a dirs
i=1
for d in */
do
    dirs[i++]="${d%/}"
done
echo "There are ${#dirs[@]} dirs in the current path"
for ((i=1; i<=${#dirs[@]}; i++))
do
    echo "Copying file.txt from ${dirs[i]} dir..."
    cp "./${dirs[i]}/file.txt" "./file_$i.txt"
done
Save it as a script file, fileTxtCopy.sh, for instance. Then place it in the parent dir and give it executable permission: sudo chmod +x fileTxtCopy.sh.
Run it as a script and you should have all your file.txt files copied in the parent dir.
This copies the file.txt file from each folder inside the current directory to the current directory, appending the number contained in the folder name to the name of the copied file.
for i in *; do a=$(grep -o "[0-9]*" <<< "$i"); cp "$i/file.txt" "file_$a.txt"; done
Not the most robust approach, though.
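A slightly more defensive pure-bash variant is sketched below; it assumes folder names such as F1, F2, F12 that contain the number, and skips folders without a file.txt:
for d in */; do
    [ -f "${d}file.txt" ] || continue     # skip folders that have no file.txt
    n=${d//[!0-9]/}                       # keep only the digits from the folder name
    cp "${d}file.txt" "./file_${n}.txt"
done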

Linux find and copy files with same name to destination folder do not overwrite

I want to find and copy all *.jpg files in one folder, including its subfolders, to another folder.
I use
find /tempL/6453/ -name "*.jpg" | xargs -I '{}' cp {} /tempL/;
but it overwrites files with the same name.
For example, in /tempL/6453/ there are test (1).jpg, test (2).jpg and a folder 1; /tempL/6453/1/ also has files with the same names test (1).jpg and test (2).jpg.
If I use the above command, there are only two files test (1).jpg and test (2).jpg in /tempL/; it cannot copy all the files to /tempL/.
What I want is to copy all the files to /tempL/, and when there is a clashing file name, just rename it. How can I do that?
1) If you only want to avoid overwriting, cp --backup will create a backup of each existing file; with the --suffix option of cp you can also specify the suffix to be used for backups.
2) The --parents option of cp will keep the directory tree, i.e. files in folder 1 will be copied to a newly created 1 folder.
3) If you want to customize the renaming, you cannot use the cp command alone; write a script for it and call it to process the results of find.
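For the simple "do not overwrite" case, a minimal sketch combining find with GNU cp's --backup option could look like this (assuming GNU cp; clashing names in /tempL/ are kept as numbered backups such as test (1).jpg.~1~ instead of being overwritten):
find /tempL/6453/ -name "*.jpg" -exec cp --backup=numbered -t /tempL/ {} +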
Install "GNU parallel" and use:
find /tempL/6453/ -name "*.jpg" | parallel 'cp {} ./dest-dir/`stat -c%i {}`_{/}'
{/} gets the filename without the full path.
I think the same approach should be possible with xargs, but learning about parallel was amazing for me; it offers many elegant solutions.
I recommend putting echo before cp in order to test your command.
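To follow that advice, a dry run of the command above only prints the cp commands that would be executed:
find /tempL/6453/ -name "*.jpg" | parallel 'echo cp {} ./dest-dir/`stat -c%i {}`_{/}'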

Move files in bulk and create links in their place in the directory in Linux

I am trying to move hundreds of files from one directory to another, but create a soft link in the old directory while doing so. Is there a single-line command that can do that?
/dir1
    file1.txt
    file2.txt
    .
    .
    .
    file100.txt
Move these to /dir2 and create soft links to them in dir1.
I am currently doing that separately, but was hoping to find a single-line command if possible.
cd dir1
mv *.txt /dir2
ln -s /dir2/*.txt .
I tried using find but that didn't work either.
There's no single line command. It's quite trivial to do with shell scripting. For example, in tcsh:
% cd dir1
% foreach FILETOMOVE ( file*.txt )
    echo mv -iv $FILETOMOVE /dir2
    echo ln -s /dir2/$FILETOMOVE .
end
(Remove the echo's once you're sure you've got it right.)
Bash is similar, with slightly different syntax.
This is slightly more complicated if the filenames or paths include spaces, but still quite simple. (:q in tcsh, using "", etc.)
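For comparison, here is a minimal bash sketch of the same loop, with the echo's left in for a dry run as in the tcsh version:
cd dir1
for f in file*.txt; do
    echo mv -iv "$f" /dir2/
    echo ln -s "/dir2/$f" .
done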

Using wildcard character [xyz] in cp command

I want to copy a file named TEST to a bunch of folders named 1/ 2/ ... 9/
I was trying to use
cp -v TEST ./[1-9]/
# which gives the result:
TEST '->' ./9/fractionofanions
cp: omitting directory './1'
.
.
cp: omitting directory './8'
Can anyone explain why it only copied to folder 9 in the first place, and also any workaround to do what I need? Thanks in advance.
cp can copy multiple files to a directory, but not files to multiple directories. In this instance, you are attempting to copy TEST and directories 1-8 to directory 9/; see man cp for more information.
However, you can use the following to copy a file into multiple directories, using find as a helper:
find [1-9] -exec cp file.txt {} \;
As you can verify in man cp, there can only be one target directory specified for cp. You can use a loop, though:
for target in ./[1-9]/ ; do
    cp -v TEST "$target"
done
