How do I copy multiple files at once in Linux, with the source and destination locations of these files being the same directory?

I have some files located in one directory, /home/john.
I want to copy all the files with the *.text extension from this directory and save them as *.text.bkup, again in the same directory, i.e. /home/john.
Is there a single command with which I can do that?
Also, extending the same idea, is it possible to copy all the files with multiple extensions (e.g. *.text and *.doc) as *.text.bkup and *.doc.bkup respectively (again in the same directory)?

This is best accomplished with a shell loop:
~/tmp$ touch one.text two.text three.doc four.doc
~/tmp$ for FILE in *.text *.doc; do cp "${FILE}" "${FILE}.bkup"; done
~/tmp$ ls -1
four.doc
four.doc.bkup
one.text
one.text.bkup
three.doc
three.doc.bkup
two.text
two.text.bkup
What happens in the code above: the shell expands the *.text and *.doc patterns into the list of matching file names, then loops through the values one by one, assigning the variable FILE to each. The code block between the "do" and the "done" is executed for every value of FILE, effectively copying each file to filename.bkup.
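One caveat: if a pattern matches nothing, the shell passes it through literally and cp fails trying to copy a file named *.text. A sketch of a guard, assuming you are in bash (nullglob is a bash option, not plain POSIX sh):
~/tmp$ shopt -s nullglob
~/tmp$ for FILE in *.text *.doc; do cp "${FILE}" "${FILE}.bkup"; done
~/tmp$ shopt -u nullglob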

You can achieve this easily with find:
find /home/john -iname '*.text' -type f -exec cp {} {}.bkup \;
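To cover both extensions from the question in one pass, a hedged variant (this relies on GNU find, which substitutes {} even when it is embedded in a larger argument such as {}.bkup; POSIX find only guarantees the standalone form):
find /home/john -maxdepth 1 -type f \( -iname '*.text' -o -iname '*.doc' \) -exec cp {} {}.bkup \;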

No, there is no single/simple command to achieve this with standard tools, but you can write a short script to do it for you:
for file in *.text
do
cp -i "${file}" "${file}.bkup"
done
With the -i option, cp prompts you to confirm each overwrite.

I use a somewhat roundabout way to achieve this. It involves a Perl script and needs a few additional steps.
Step 1:
Copy the names of all the text files into a text file.
find -maxdepth 1 -type f -name '*.txt' > file_name1.txt
Step 2:
Make a duplicate of the copied file.
cp file_name1.txt file_name2.txt
Now open file_name2.txt in the vi editor and do a simple string substitution:
:%s/\.txt$/.txt.bkup/
Step 3: Merge the source and destination file names into a single file separated by a comma.
paste -d, file_name1.txt file_name2.txt > file_name.txt
Step 4: Run the Perl script below to achieve the desired results.
open(FILE1, "<", "file_name.txt") or die "file doesn't exist"; # opens the file that has source and destination names separated beforehand by commas
chomp(@F1_CONTENTS = <FILE1>); # copies the lines of the file into an array, stripping newlines
close FILE1;
foreach $f1 (@F1_CONTENTS)
{
    @file_name = split(/,/, $f1); # separates each line on the comma
    print "cp $file_name[0] $file_name[1]\n";
    system("cp $file_name[0] $file_name[1]"); # performs the actual copy here
}
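Incidentally, once file_name.txt from Step 3 exists, the same pairwise copy can be done with a plain shell loop; a minimal sketch, assuming comma-separated source,destination pairs and no commas in the file names:
while IFS=, read -r src dst
do
    cp "$src" "$dst"
done < file_name.txt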

Related

Is there a grep command that allows me to grep multiple folders and copy them using a text file containing the file names

So I have a text file containing the names of ~1000 folder names, and a directory with around ~30,000 folders. What I need to do is to find a bash command that will read the text file for the folder names, and grep those folders from the directory and copy them to a new destination. Is this at all possible?
I am new to coding, my apologies if this isn't worded well.
You can use a bash script like this one:
fileList=$(cat nameFIle)
srcDir="/home/ex/src"
destDir="/home/ex/dest"
for name in $fileList
do
    cp -r "${srcDir}/${name}" "${destDir}/"
done
Definitely possible, and you don't even need grep, assuming your text file has one folder name per line:
cp -r `cat filenames.txt` path_to_copy_location/
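Note that the backtick expansion word-splits on whitespace, so this breaks on folder names containing spaces. A more defensive sketch that reads one name per line:
while IFS= read -r name
do
    cp -r "$name" path_to_copy_location/
done < filenames.txt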
I would write:
xargs cp -t /destination/directory < file.of.dirnames
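If the names may contain spaces, a hedged variant using GNU extensions (xargs -d and cp -t) splits strictly on newlines:
xargs -d '\n' cp -rt /destination/directory < file.of.dirnames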

Splitting a large directory into smaller ones in Linux

I have a large directory named application_pdf which contains 93k files. My use case is to split the directory into 3 smaller subdirectories (at a different location than the original large directory) containing around 30k files each.
Can this be done directly from the command line?
Thanks!
Using bash:
x=("path/to/dir1" "path/to/dir2" "path/to/dir3")  # the three target directories
c=0
for f in *
do
    mv "$f" "${x[c]}"   # move the next file into the current target
    c=$(( (c+1)%3 ))    # advance the index round-robin: 0,1,2,0,1,...
done
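The target directories must already exist; as a usage note, you could create all three up front with mkdir -p path/to/dir{1,2,3} (those paths are placeholders).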
If you have the rename command from Perl, you could try it like this:
rename --dry-run -pe 'my @d=("dirA","dirB","dirC"); $_=$d[$N%3] . "/$_"' *.pdf
In case you are not that familiar with the syntax:
-p says to create output directories, à la mkdir -p
-e says to execute the following Perl snippet
$d[$N%3] selects one of the directories in array @d as a function of the serially incremented counter $N provided to the snippet by rename
The output value is passed back to rename by setting $_
Remove the --dry-run if it looks good. Please run on a small directory with a copy of 8-10 files first, and make a backup before trying on all your 93k files.
Test
touch {0,1,2,3,4,5,6}.pdf
rename --dry-run -pe 'my @d=("dirA","dirB","dirC"); $_=$d[$N%3] . "/$_"' *.pdf
'0.pdf' would be renamed to 'dirB/0.pdf'
'1.pdf' would be renamed to 'dirC/1.pdf'
'2.pdf' would be renamed to 'dirA/2.pdf'
'3.pdf' would be renamed to 'dirB/3.pdf'
'4.pdf' would be renamed to 'dirC/4.pdf'
'5.pdf' would be renamed to 'dirA/5.pdf'
'6.pdf' would be renamed to 'dirB/6.pdf'
More for my own reference, but if you don't have the Perl rename command, you could do it just in Perl:
perl -e 'use File::Copy qw(move); my @d=("dirA","dirB","dirC"); my $N=0; my @files = glob("*.pdf"); foreach $f (@files){my $t=$d[$N++%3] . "/$f"; print "Moving $f to $t\n"; move $f,$t}'
Something like this might work:
for x in $(ls -1 originPath/*.pdf | head -30000); do
    mv "$x" destinationPath/
done
(Since ls prints the originPath/ prefix, $x is already the full source path. Note this breaks on file names containing whitespace, because it parses ls output.)
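A whitespace-safe alternative is a bash array slice; a sketch, assuming the 30000 expanded paths fit within the system's argument-length limit:
files=(originPath/*.pdf)
mv "${files[@]:0:30000}" destinationPath/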

Bash Script to replicate files

I have 25 files in a directory. I need to amass 25000 files for testing purposes. I thought I could just replicate these files over and over until I get 25000 files. I could manually copy-paste 1000 times, but that seemed tedious, so I thought I could write a script to do it for me. I tried
cp * .
as a trial, but I got an error saying the source and destination file are the same. If I were to automate it, how would I do it so that each of the 1000 copies gets a unique name?
As discussed in the comments, you can do something like this:
for file in *
do
    filename="${file%.*}"    # get everything up to the last dot
    extension="${file##*.}"  # get the extension (text after the last dot)
    for i in {00001..10000}
    do
        cp "$file" "${filename}${i}.${extension}"
    done
done
The trick for i in {00001..10000} is used to loop from 1 to 10000 with the numbers zero-padded.
${filename}${i}.${extension} is the same as $filename$i.$extension but makes it clearer what is a variable name and what is literal text. This way, you can also do ${filename}_${i}.${extension} to get files like a_23.txt, etc.
In case your current files match a specific pattern, you can always do for file in a* (if they are all in the a + something format).
If you want to keep the extension of the files, you can use this, assuming you want to copy all .txt files:
#!/bin/bash
for f in *.txt
do
for i in {1..10000}
do
cp "$f" "${f%.*}_${i}.${f##*.}"
done
done
You could try this:
for file in *; do for i in {1..1000}; do cp "$file" "$file-$i"; done; done
It will append a number to the name of each existing file.
The next script
for file in *.*
do
    eval $(sed 's/\(.*\)\.\([^\.]*\)$/base="\1";ext="\2";/' <<< "$file")
    for n in {1..1000}
    do
        echo cp "$file" "$base-$n.$ext"
    done
done
will:
take all files with an extension (*.*)
extract the base name and extension (with sed)
in a loop of 1000 iterations, copy the original file to file-number.extension
This is a dry run; remove the echo once you are satisfied with the output.
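If you would rather avoid eval, the same base/extension split can be done with plain bash parameter expansion, as in the first answer above:
base="${file%.*}"
ext="${file##*.}"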

How to open all files in a directory in Bourne shell script?

How can I use the relative path or absolute path as a single command line argument in a shell script?
For example, suppose my shell script is on my Desktop and I want to loop through all the text files in a folder that is somewhere in the file system.
I tried sh myshscript.sh /home/user/Desktop, but this doesn't seem feasible. And how would I avoid problems with directory names and file names that contain whitespace?
myshscript.sh contains:
for i in `ls`
do
cat $i
done
Superficially, you might write:
cd "${1:-.}" || exit 1
for file in *
do
cat "$file"
done
except you don't really need the for loop in this case:
cd "${1:-.}" || exit 1
cat *
would do the job. And you could avoid the cd operation with:
cat "${1:-.}"/*
which lists (cats) all the files in the given directory, even if the directory or the file names contains spaces, newlines or other difficult to manage characters. You can use any appropriate glob pattern in place of * — if you want files ending .txt, then use *.txt as the pattern, for example.
This breaks down if you might have so many files that the argument list is too long. In that case, you probably need to use find:
find "${1:-.}" -type f -maxdepth 1 -exec cat {} +
(Note that -maxdepth is a GNU find extension.)
Avoid using ls to generate lists of file names, especially if the script has to be robust in the face of spaces, newlines etc in the names.
Use a glob instead of ls, and quote the loop variable:
for i in "$1"/*.txt
do
cat "$i"
done
PS: ShellCheck automatically points this out.

Copy text from multiple files, same names to different path in bash (linux)

I need help copying content from various files to others (same name and format, different path).
For example, $HOME/initial/baby.desktop has text which I need to write into $HOME/scripts/baby.desktop. This is very simple for a single file, but I have 2500 files in $HOME/initial/ and the same number in $HOME/scripts/ with corresponding names (same names and format). I want to append the content of each file in path A to the end of the file with the same name in path B, without erasing the existing content of the file in path B.
For example, content from $HOME/initial/*.desktop appended into $HOME/scripts/*.desktop. I tried the following, but it doesn't work:
cd $HOME/initial/
for i in $( ls *.desktop ); do egrep "Icon" $i >> $HOME/scripts/$i; done
Firstly, I would back up $HOME/initial and $HOME/scripts, because there is plenty of scope for misunderstanding your question. Like this:
cd $HOME
tar -cvf initial.tar initial
tar -cvf scripts.tar scripts
That will put all the files in $HOME/initial into a single tarfile called initial.tar and all the files in $HOME/scripts into a single tarfile called scripts.tar.
Now for your question... in general, if you want to put the contents of FileB onto the end of FileA, the command is
cat FileB >> FileA
Note the DOUBLE ">>" which means "append" rather than single ">" which means overwrite.
So, I think you want to do this:
cd $HOME/initial
cat baby.desktop >> $HOME/scripts/baby.desktop
where baby.desktop is the name of any file you choose to test with. I would check that has worked and then, if you are happy with it, run the same command inside a loop:
cd $HOME/initial
for SOURCE in *.desktop
do
    DESTINATION="$HOME/scripts/$SOURCE"
    echo Appending "$SOURCE" to "$DESTINATION"
    #cat "$SOURCE" >> "$DESTINATION"
done
When the output looks correct, remove the "#" at the start of the penultimate line and run it again.
I solved it. If anyone wants to learn how, it is very simple:
Using sed
I needed only the matching line (e.g. Icon=/usr/share/some_picture.png) from $HOME/initial/example.desktop copied into the file with the same name and format, $HOME/scripts/example.desktop, and I had a lot of .desktop files (2500 files):
cd $HOME/initial
STRING_LINE=`grep -l -R "Icon=" *.desktop`
for i in $STRING_LINE; do sed -ne '/Icon=/ p' $i >> $HOME/scripts/$i ; done
_________
If you instead need to append each whole file to the one with the same name and format:
Using cat
cd $HOME/initial
STRING_LINE=`grep -l -R "Icon=" *.desktop`
for i in $STRING_LINE; do cat $i >> $HOME/scripts/$i ; done
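Both loops word-split the grep -l output, so they break on .desktop names containing spaces. A more defensive sketch that iterates the glob directly:
cd "$HOME/initial"
for i in *.desktop; do
    grep -q "Icon=" "$i" && sed -n '/Icon=/p' "$i" >> "$HOME/scripts/$i"
done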
