Is there a one-line command/script to copy one file to many files on Linux?
cp file1 file2 file3
copies the first two files into the third. Is there a way to copy the first file into the rest?
Does
cp file1 file2 ; cp file1 file3
count as a "one-line command/script"? How about
for file in file2 file3 ; do cp file1 "$file" ; done
?
Or, for a slightly looser sense of "copy":
tee <file1 file2 file3 >/dev/null
Just for fun, if you need a big list of files:
tee <sourcefile.jpg targetfiles{01-50}.jpg >/dev/null
But there's a little typo. Should be:
tee <sourcefile.jpg targetfiles{01..50}.jpg >/dev/null
And as mentioned above, that doesn't copy permissions.
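If you need the permissions too, GNU chmod can copy them from the source afterwards. A minimal sketch, reusing the file names from the comment above and assuming GNU coreutils:
tee <sourcefile.jpg targetfiles{01..50}.jpg >/dev/null
# copy the source file's permission bits onto every target (GNU chmod --reference)
chmod --reference=sourcefile.jpg targetfiles{01..50}.jpg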
You can improve/simplify the for approach (from @ruakh's answer) by using ranges from bash brace expansion:
for f in file{1..10}; do cp file "$f"; done
This copies file into file1, file2, ..., file10.
Resource to check:
http://wiki.bash-hackers.org/syntax/expansion/brace#ranges
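For instance, you can preview what a range expands to before running the copy (zero-padded ranges need bash 4+):
echo file{1..5}     # file1 file2 file3 file4 file5
echo file{01..10}   # file01 file02 ... file10, zero-padded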
for FILE in file2 file3; do cp file1 "$FILE"; done
You can use shift:
file=$1
shift
for dest in "$@" ; do
cp -r "$file" "$dest"
done
cat file1 | tee file2 | tee file3 | tee file4 | tee file5 >/dev/null
(no loops used)
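Note that a single tee invocation already accepts several file arguments, so the chain above can be shortened to one process (same caveat as before: permissions are not copied):
tee file2 file3 file4 file5 <file1 >/dev/null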
To copy the contents of one file (fileA.txt) to many files (fileB.txt, fileC.txt, fileD.txt) on Linux, use the following combination of the cat and tee commands:
cat fileA.txt | tee fileB.txt fileC.txt fileD.txt >/dev/null
This works for any file extension; only the file names change, everything else stays the same.
Use something like the following. It works in zsh:
cat file > firstCopy > secondCopy > thirdCopy
or, for numbered file names:
cat file > {1..100}
It's good for small files; for larger files you should use one of the cp scripts mentioned earlier.
I'd recommend creating a general-purpose script and a function (empty-files), based on the script, to empty any number of target files.
Name the script copy-from-one-to-many and put it in your PATH.
#!/bin/bash -e
# _ _____
# | |___ /_ __
# | | |_ \ \/ / Lex Sheehan (l3x)
# | |___) > < https://github.com/l3x
# |_|____/_/\_\
#
# Copy the contents of one file to many other files.
source=$1
shift
for dest in "$@"; do
cp "$source" "$dest"
done
exit
NOTES
The shift above removes the first element (the source file path) from the list of arguments ("$@").
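A quick way to see the effect of shift on "$@" is a throwaway sketch that fakes the positional parameters with set:
set -- source.txt dest1.txt dest2.txt
echo "$@"   # source.txt dest1.txt dest2.txt
shift
echo "$@"   # dest1.txt dest2.txt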
Examples of how to empty many files:
Create file1, file2, file3, file4 and file5 with content:
for f in file{1..5}; do echo $f > "$f"; done
Empty many files:
copy-from-one-to-many /dev/null file1 file2 file3 file4 file5
Empty many files more easily:
# Create files with content again
for f in file{1..5}; do echo $f > "$f"; done
copy-from-one-to-many /dev/null file{1..5}
Create an empty-files function based on copy-from-one-to-many:
function empty-files()
{
copy-from-one-to-many /dev/null "$@"
}
Example usage
# Create files with content again
for f in file{1..5}; do echo $f > "$f"; done
# Show contents of one of the files
echo -e "file3:\n $(cat file3)"
empty-files file{1..5}
# Show that the selected file no longer has contents
echo -e "file3:\n $(cat file3)"
Don't just steal code. Improve it; Document it with examples and share it. - l3x
Here's a version that will preface each cp command with sudo:
#!/bin/bash -e
# Filename: copy-from-one-to-many
# _ _____
# | |___ /_ __
# | | |_ \ \/ / Lex Sheehan (l3x)
# | |___) > < https://github.com/l3x
# |_|____/_/\_\
#
# Copy the contents of one file to many other files.
# Pass --sudo if you want each cp to be performed with sudo
# Ex: copy-from-one-to-many $(mktemp) /tmp/a /tmp/b /tmp/c --sudo
if [[ "$*" == *--sudo* ]]; then
maybe_use_sudo=sudo
fi
source=$1
shift
for dest in "$@"; do
if [ "$dest" != '--sudo' ]; then
$maybe_use_sudo cp "$source" "$dest"
fi
done
exit
You can use standard scripting commands for that instead:
Bash:
for i in file2 file3 ; do cp file1 "$i" ; done
The simplest/quickest solution I can think of is a for loop:
for target in file2 file3; do cp file1 "$target"; done
A dirty hack would be the following (I strongly advise against it, and it only works in bash anyway):
eval 'cp file1 '{file2,file3}';'
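You can see what the hack expands to by swapping eval for echo:
echo 'cp file1 '{file2,file3}';'
# prints: cp file1 file2; cp file1 file3;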
Go with the fastest cp operations
seq 1 10 | xargs -P 0 -I xxx cp file file-xxx
It means:
seq 1 10: count from 1 to 10
|: pipe the numbers to xargs
-P 0: run in parallel, as many processes as needed
-I xxx: placeholder name for each input xargs receives
cp file file-xxx: copy file to file-1, file-2, etc.
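If you want to check what xargs would run before doing it for real, replace cp with echo as a dry run:
seq 1 10 | xargs -I xxx echo cp file file-xxx
# prints: cp file file-1 ... cp file file-10, one command per line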
And if the file names are different, here is another solution.
First have the list of files which are going to be created. e.g.
one
two
three
four
five
Second, save this list to disk and read it with xargs just like before, but without seq:
xargs -P 0 -I xxx cp file xxx < list
which means 5 copy operations in parallel:
cp file one
cp file two
cp file three
cp file four
cp file five
And for xargs, here is what happens behind the scenes (5 forks):
3833 pts/0 Ss 0:00 bash
15954 pts/0 0:00 \_ xargs -P 0 -I xxx cp file xxx < list
15955 pts/0 0:00 \_ cp file one
15956 pts/0 0:00 \_ cp file two
15957 pts/0 0:00 \_ cp file three
15958 pts/0 0:00 \_ cp file four
15959 pts/0 0:00 \_ cp file five
I don't know how correct this is, but I have used something like this:
echo ./file1.txt ./file2.txt ./file3.txt | xargs -n 1 cp file.txt
Here echo ./file1.txt ... supplies the destinations, and xargs feeds them to cp one at a time; hence the command xargs -n 1. And lastly cp file.txt, which is self-explanatory, I think :)
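A variant that is a bit more robust with long lists uses printf to print one destination per line (same hypothetical file names as above):
printf '%s\n' ./file1.txt ./file2.txt ./file3.txt | xargs -n 1 cp file.txt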
Related
I have files in a directory such as
FILE1.docx.txt
FILE2.docx.txt
FILE3.docx.txt
FILE4.docx.txt
FILE5.docx.txt
And I would like to remove .docx from all of them, so that the final output is
FILE1.txt
FILE2.txt
FILE3.txt
FILE4.txt
FILE5.txt
How do I do this?
With Parameter Expansion and mv
for f in *.docx.txt; do
echo mv -vn "$f" "${f%%.*}.${f##*.}"
done
The one-liner
for f in *.docx.txt; do echo mv -vn "$f" "${f%%.*}.${f##*.}"; done
If the output looks correct, remove the echo to actually rename the files.
This should work in any POSIX-compliant shell, without a script file.
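To see what the two expansions produce on one of the example names (works in any POSIX shell):
f=FILE1.docx.txt
echo "${f%%.*}"   # FILE1 (everything before the first dot)
echo "${f##*.}"   # txt (everything after the last dot)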
With bash, enable the nullglob shell option so the glob *.docx.txt will not expand as literal *.docx.txt if there are no files ending with .docx.txt
#!/usr/bin/env bash
shopt -s nullglob
for f in *.docx.txt; do
echo mv -vn "$f" "${f%%.*}.${f##*.}"
done
UPDATE: Thanks to @Léa Gris for suggesting nullglob, changing the glob to *.docx.txt, and adding -n to mv. Although -n and -v are not defined by POSIX (see https://pubs.opengroup.org/onlinepubs/9699919799/utilities/mv.html), they are supported by both GNU and BSD mv.
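You can observe the nullglob difference in an empty directory (bash-specific sketch):
shopt -u nullglob; echo *.docx.txt   # prints the literal pattern *.docx.txt
shopt -s nullglob; echo *.docx.txt   # prints an empty line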
Just run this Python script in the same folder that contains the files:
import os
# rename NAME.docx.txt to NAME.txt in the current directory
for file in os.listdir(os.getcwd()):
    aux = file.split('.')
    if len(aux) == 3:
        os.rename(file, aux[0] + '.' + aux[2])
You can make use of sed and bash like this:
for i in *.docx.txt
do
mv "$i" "$(echo "$i" | sed 's/\.docx//')"
done
I have a problem with a bash script I'm working on in GNU nano. This is my task:
Generate 100 files, each containing one random number (shuf -i1-1000 -n1). Then scan the files and write the numbers in ascending order to a file named "output.txt".
My code:
#!/bin/bash
mkdir files
find /etc/ -name "*.txt"|xargs du -h >output.txt
for x in {1..100}
do
shuf -i 1-1000 -n 1 > files/$x.txt
done
for x in {1..100}
do
input=$(cat files/$x.txt)
done
I wanted to ask how to sort the numbers that are in the files and write them all to the output.txt file.
Thanks
Use sort to sort the numbers.
#! /bin/bash
mkdir files
shuf -i1-1000 -n100 | for i in {1..100} ; do
read n
echo $n > files/$i.txt
done
sort -n files/*.txt > files/output.txt
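One caveat: files/output.txt itself matches the glob files/*.txt, so a second run would feed the previous output back into sort. Writing the result outside the files directory, or naming the inputs explicitly, avoids that:
sort -n files/{1..100}.txt > output.txt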
I have more than 500 MP4 files on my server 1. I want to send half of them to server 2 and half to server 3, but I don't know how to do this.
Is there a way to select files by alphabet, or maybe by date, or something else?
For example, videos that start with
a,c,e*.mp4
would be sent to server 2, and videos that start with
b,d,f*.mp4
would be sent to server 3.
Or is there any other way you think is better?
rsync -avzP /home/user/public_html/domain.com/ ip:/home/user2/public_html/domain.com/
1) use find to make a list of all the files
find /opt/mymp3folder -print > /tmp/foo
2) find the count of lines and split the list in two
wc -l /tmp/foo
387
split -l 200 /tmp/foo
mv xaa xaa.txt
and then rsync like this
rsync -avzP -e ssh `cat xaa.txt` root@0.0.0.0:/var/www/
I think it is better to split the files by size than by count (I assume your mp4 files come in several sizes).
#!/bin/bash
FOLDER=$1
TMP_FILE=$(mktemp)
find $FOLDER -type f -exec stat -c "%s;%n" {} \; | sort -t ';' -k 2 | awk 'BEGIN{ sum=0; FS=";"} { sum += $1; print sum";"$1";"$2 }' > $TMP_FILE
TOTAL_SIZE=$(tail -n 1 $TMP_FILE | cut -f 1 -d ';')
HALF_SIZE=$(echo $TOTAL_SIZE / 2 | bc)
echo $TOTAL_SIZE $HALF_SIZE
# split part
IFS=';'
while read A B C ; do
[ "$A" -lt "$HALF_SIZE" ] && echo "$C" >> list_files_1.txt || echo "$C" >> list_files_2.txt
done < $TMP_FILE
# rsync each list to its server here (see below)
rm $TMP_FILE
After execution you have list_files_1.txt and list_files_2.txt, each containing half of the files by total size.
You can send these files to each server using rsync:
rsync -avzP $(cat list_files_1.txt) ip:/home/user2/public_html/domain.com/
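For the second half, rsync's --files-from option is a safer alternative to $(cat ...), since it does not word-split the file names. A sketch, assuming the list contains paths relative to the current directory (use / as the source argument if find produced absolute paths):
rsync -avzP --files-from=list_files_2.txt . ip:/home/user2/public_html/domain.com/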
1) use find to make a list of all the files
find /opt/mymp3folder -print > /tmp/foo
2) find the count of lines and split the list in two
cd /tmp
wc -l /tmp/foo
387
split -l 200 /tmp/foo
3) split by default makes a set of files called xaa xab xac etc. So use xaa to copy to one server and xab to copy to the other
rsync -av --files-from=/tmp/xaa . server1:/opt/newmp3folder/
rsync -av --files-from=/tmp/xab . server2:/opt/newmp3folder/
The '.' in the above is the "source" path and allows the use of relative paths in the files-from list. You either need to be in the same directory the find command was run from and use '.', or set the source to an absolute path.
Obviously, if you wanted to do this on a regular basis, you would want to script it properly.
I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first command with and without the quotes, and I've tried -r as well. I've read through the bash documentation and can't find a way to prevent this duplication, or even see why I'm getting it. Any thoughts on how to get around this?
As a separate but related question: I'd like to launch Auto.sh inside each directory so that its output is dumped into that directory, without having to place Auto.sh in each folder. That would probably be much more efficient than what I'm currently doing, and it would probably also fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] |awk '{print $3}' > out.txt
while read LINE; do
echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
index=$((index+1))
done <out.txt
Preferably I would like it to dump only the javap output for the application it's currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump into the top folder without executing a modified Auto.sh in the top directory of each application.
Ok, so to fix the multiple find:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening is that it was counting instances of the letter w in Auto.sh, which occurred 5 times in the file.
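You can see the difference between the two grep modes directly: -o prints every individual match, while -l prints each matching file once:
grep -R -o --include=Auto.sh '\w' . | wc -l   # counts every matched character
grep -R -l --include=Auto.sh '\w' . | wc -l   # counts matching files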
However, the overall fix that doesn't require placing Auto.sh in every directory is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd "$MAIN_DIR"
ls -d */ > DirectoryList.txt
while read LINE; do
cd "$LINE"
mkdir ProjectOutputs
bash /home/mainuser/Auto.sh
cd "$MAIN_DIR"
done <DirectoryList.txt
That calls this Auto.sh code:
index=1
grep -R -o --include=*.class '\w' | wc -l
grep -R -o --include=*.class '\w' | awk '{print $3}' > ProjectOutputs.txt
while read LINE; do
echo 'Path '$LINE > 'ProjectOutputs/ClassOut'$index'.txt'
javap -c $LINE >> 'ProjectOutputs/ClassOut'$index'.txt'
index=$((index+1))
done <ProjectOutputs.txt
Thanks again for everyone's help!
I have a file "atest.txt" that has some text.
I want to print this text to the files "asdasd.txt asgfaya.txt asdjfusfdgh.txt asyeiuyhavujh.txt".
These files do not exist on my server.
I'm running Debian. What can I do?
Use the tee(1) command, which duplicates its standard input to standard output and any files specified on the command line. E.g.
printf "Hello\nthis is a test\nthank you\n"
| tee test1.txt test2.txt $OTHER_FILES >/dev/null
Using your example:
cat atest.txt |
tee asdasd.txt asgfaya.txt asdjfusfdgh.txt asyeiuyhavujh.txt >/dev/null
From your bash prompt:
for f in test1.txt test2.txt test3.txt; do echo -e "hello\nworld" >> "$f"; done
If the text lives in atest.txt then do:
for f in test1.txt test2.txt test3.txt; do cat atest.txt >> "$f"; done
Isn't it simply:
cp atest.txt asdasd.txt
cp atest.txt asgfaya.txt
cp atest.txt asdjfusfdgh.txt
cp atest.txt asyeiuyhavujh.txt
?
In bash you can write
#!/bin/bash
TEXT="hello\nthis is a test\nthank you"
for i in $(seq 1 "$1"); do echo -e "$TEXT" > "text$i.txt"; done
EDIT (in response to the question change)
If you can't determine the names of the target files programmatically, you can use this script instead:
#!/bin/bash
ORIGIN=$1;
shift
for i in $(seq $#); do cp "$ORIGIN" "$1"; shift; done
You can use it this way:
script_name origin_file dest_file1 second_dest_file 'third file' ...
If you are wondering why there are double quotes in the cp command, it is to cope with file names containing spaces.
If anyone would like to write the same thing to all files in a directory:
printf 'your_text' | tee *
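Note that tee truncates each file by default; pass -a if you want to append to the existing contents instead:
printf 'your_text' | tee -a *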