copy files to multiple directories at once [duplicate] - linux

This question already has answers here:
Linux commands to copy one file to many files
(13 answers)
Closed 2 years ago.
mkdir dir{0..99}
echo hello > file
I want to copy file into every directory, dir0 through dir99.
Currently, the best solution I have come up with is:
for i in {0..99}; do cp file dir$i; done
but there must be a more elegant way to do this.
Is there a way to cp a file to multiple directories using a command similar to below?
cp file dir*
cp file dir{0..99}

You can use xargs to call cp 100 times:
echo dir{0..99} | xargs -n 1 cp file
See the xargs man page for the details.
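An alternative worth mentioning (a sketch, not from the original answers, assuming GNU coreutils): tee can write its stdin to many files at once, so a single process can populate all the directories:

```shell
# Set up a throwaway demo tree
tmp=$(mktemp -d)
cd "$tmp"
mkdir dir{0..99}
echo hello > file

# tee duplicates stdin into every listed path in one invocation,
# avoiding 100 separate cp processes
tee dir{0..99}/file < file > /dev/null
```

The `> /dev/null` matters: tee also echoes its input to stdout, which you usually do not want here.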

How to get deleted files into a log file. Bash script [duplicate]

This question already has answers here:
Linux find and delete files but redirect file names to be deleted
(5 answers)
Closed 1 year ago.
So I'm using the script
find /path/to/files/* -mtime +60 -exec rm {} \;
How can I collect the names of the deleted files and write them to a log file in a Bash script?
You could do something like:
find /path/... -print ... | tee -a <log.file>
The -print will print out all the hits, and the tee will append that to some log.file.
Side note: the * at the end of your /path/to/files/* seems superfluous.
Side note2: if you just want to delete the files, find has a built-in -delete.
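A self-contained sketch of that pipeline (the paths, the 60-day threshold, and the log name are illustrative; GNU find and touch assumed):

```shell
tmp=$(mktemp -d)
touch -d '90 days ago' "$tmp/old.txt"   # pretend this backup is 90 days old
touch "$tmp/new.txt"                    # fresh file that must survive

# -print emits each match before -exec rm removes it;
# tee -a shows the names and appends them to the log
find "$tmp" -type f -name '*.txt' -mtime +60 -print -exec rm {} \; | tee -a "$tmp/deleted.log"
```

Order matters on the find command line: putting -print before -exec guarantees the name is logged even if the rm were to fail.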

How do I concatenate file contents into a single file in bash? [duplicate]

This question already has answers here:
Concatenating multiple text files into a single file in Bash
(12 answers)
Closed 4 years ago.
I have files in different directories under a parent directory, something like this:
Parent Dir
Dir 1 - File 1
Dir 2 - File 2
I want an output file that concatenates the contents of File 1 and File 2. How do I do it in Bash?
Converting my comment to an answer. You can use the following find + xargs pipeline:
cd /parent/dir
find home -name "*.orc" -print0 | xargs -0 cat > test.orc
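A minimal, self-contained version of the same idea (directory and file names are made up for the demo):

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir dir1 dir2
echo 'File 1 contents' > dir1/file1.txt
echo 'File 2 contents' > dir2/file2.txt

# -print0 / xargs -0 keeps filenames with spaces or newlines intact;
# sort -z makes the concatenation order deterministic (GNU coreutils)
find . -name '*.txt' -print0 | sort -z | xargs -0 cat > combined.out
```

Note that the output file must not match the find pattern, or it could end up being concatenated into itself; here combined.out deliberately does not end in .txt.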

shell command to extract the part of filename having characters? [duplicate]

This question already has answers here:
Extract filename and extension in Bash
(38 answers)
In Bash, how to strip out all numbers in the file names in a directory while leaving the file extension intact
(1 answer)
Closed 5 years ago.
I have a file named multi_extension123.txt. Before copying this file into the directory, I need to remove files like
multi_extension1234.txt
multi_extension12345.txt
if present in the directory, and then copy the earlier one there. Can anyone give a solution with a shell script?
Note: only the trailing numbers vary; the extension must stay intact.
I have tried this
$ filename= $file1
$ echo "${filename%[0-9].*}"
find . -type f maxdepth 0 mindepth 0 -name "'$filename'[0-9]*.txt" -exec rm -f {} \;
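A sketch of one possible fix, assuming the goal is to delete every file that shares the base name but has different trailing digits. Two bugs in the attempt above: the assignment must have no space around `=`, and find's maxdepth/mindepth options need their leading dashes:

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch multi_extension123.txt multi_extension1234.txt multi_extension12345.txt

file1=multi_extension123.txt          # no space around '='
base="${file1%%[0-9]*}"               # strip from the first digit -> multi_extension

# remove every base+digits+.txt variant except the file we want to keep
find . -maxdepth 1 -type f -name "${base}[0-9]*.txt" ! -name "$file1" -exec rm -f {} \;
```

The `! -name "$file1"` test is what protects the original file; without it the pattern matches multi_extension123.txt as well.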

Unix backup script [duplicate]

This question already has answers here:
How do I set a variable to the output of a command in Bash?
(15 answers)
Closed 6 years ago.
I am trying to learn scripting in Ubuntu.
I need to back up files created by a specific user in a folder where other users store their files. The backup needs to be compressed into a tar file with the file tree intact.
Edit: How do I find the files created by one user and then compress them into a tar file with all the directories and subdirectories intact?
FILENAME=user_archive.tar
DESDIR=/home/user
FILES=find /shared -type d -user user * tar -rvf $DESDIR/$FILENAME
tar -jcvf $DESDIR/$FILENAME
As suggested by @Cyrus, running the script through shellcheck reports the following error:
To assign the output of a command, use var=$(cmd)
After correcting that and a few other errors, here is a working script:
FILENAME=user_archive.tar
DESDIR=/home/user
FILES=$(find /shared -type d -user user)
tar -jcvf $DESDIR/$FILENAME $FILES
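Two caveats for real use: the unquoted $FILES breaks on paths containing spaces, and -type d selects directories rather than the user's files. A more robust sketch (GNU tar assumed; the -user test is dropped here so the demo runs for any user, but a real script would add -user USERNAME to the find):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/shared/docs"
echo data > "$tmp/shared/docs/report.txt"

FILENAME=user_archive.tar.bz2
DESDIR=$tmp

# -print0 with tar's --null -T - streams the file list safely,
# and tar stores the paths so the directory tree stays intact
find "$tmp/shared" -type f -print0 |
  tar -cjf "$DESDIR/$FILENAME" --null -T -
```

Feeding the list through -T - also avoids the argument-length limits you can hit when expanding a large $FILES variable on the command line.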

Removing files in a sub directory based on modification date [duplicate]

This question already has answers here:
bash script to remove directories based on modified file date
(3 answers)
Closed 8 years ago.
Hi, so I'm trying to remove old backup files from a subdirectory when the number of files exceeds the maximum, and I found this command to do that:
ls -t | sed -e '1,10d' | xargs -d '\n' rm
And my changes are as follows:
ls -t subdirectory | sed -e '1,$f' | xargs -d '\n' rm
Obviously, when I try running the script it gives me an error saying unknown commands: f
My only concern is that I'm passing the maximum number of files allowed as an argument, so I'm storing it in f, but I'm not sure how to use that variable in the command above instead of hard-coding a specific number.
Can anyone give me any pointers? And is there anything else I'm doing wrong?
Thanks!
The title of your question says "based on modification date", so why not simply use find with its -mtime option?
find subdirectory -type f -mtime +5 -exec rm -v {} \;
This will delete all files older than 5 days. (Note that GNU find takes a plain number for -mtime, not a suffixed form like +5d.)
The problem is that the file list you are passing to xargs does not contain the needed path information to delete the files. When called from the current directory, no path is needed, but if you call it with subdirectory, you need to then rm subdirectory/file from the current directory. Try it:
ls -t subdirectory # returns files with no path info
What you need to do is change to the subdirectory, call the removal script, then change back. In one line it could be done with:
pushd subdirectory &>/dev/null; ls -t | sed -e "1,${f}d" | xargs -d '\n' rm; popd
Note the double quotes around the sed expression: single quotes prevent ${f} from expanding, which is exactly the "unknown commands: f" error you saw, and the trailing d (the sed delete command) must stay after the expanded number.
Rather than doing it this way, you are probably better off writing a slightly longer, more flexible script that builds the list of files with the find command, to ensure the path information is retained.
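Putting the pieces together, a sketch that keeps the f newest files in a directory and removes the rest (timestamps are faked with GNU touch -d; ls -t is fragile for filenames containing newlines, which is why find-based scripts are preferable in general):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/backups"
for i in 1 2 3 4 5; do
  touch -d "$i days ago" "$tmp/backups/backup$i"   # backup1 is newest
done

f=2   # maximum number of files to keep, e.g. taken from "$1"
cd "$tmp/backups"
# double quotes let ${f} expand; "1,${f}d" drops the f newest names,
# so only the older files reach rm; -r skips rm when nothing is left
ls -t | sed -e "1,${f}d" | xargs -d '\n' -r rm --
```

With f=2 this leaves backup1 and backup2 in place and removes backup3 through backup5.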