Unix backup script [duplicate] - linux

This question already has answers here:
How do I set a variable to the output of a command in Bash?
(15 answers)
Closed 6 years ago.
I am trying to learn scripting in Ubuntu.
I need to back up files created by a specific user in a folder where other users store their files. The backup needs to be compressed into a tar file with the file tree intact.
Edit: How do I find the files created by a user and then compress them into a tar file with all the directories and subdirectories intact?
FILENAME=user_archive.tar
DESDIR=/home/user
FILES=find /shared -type d -user user * tar -rvf $DESDIR/$FILENAME
tar -jcvf $DESDIR/$FILENAME

As suggested by @Cyrus, running the script through shellcheck reports the following error:
To assign the output of a command, use var=$(cmd)
After fixing that and the remaining errors, here is a working script:
FILENAME=user_archive.tar
DESDIR=/home/user
FILES=$(find /shared -type f -user user)
tar -jcvf "$DESDIR/$FILENAME" $FILES
Note that -type f matches the regular files the question asks about (the original -type d matched only directories), and $FILES is deliberately left unquoted so that the filenames are split into separate arguments.
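The word-splitting approach above breaks if any found path contains spaces. A more defensive sketch (assuming GNU find and tar) passes the names NUL-delimited, with no intermediate variable:

```shell
# Whitespace-safe variant: -print0 emits NUL-delimited paths and
# tar's --null -T - reads that list from stdin.
FILENAME=user_archive.tar.bz2
DESDIR=/home/user
find /shared -type f -user user -print0 |
  tar -cjvf "$DESDIR/$FILENAME" --null -T -
```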

Related

How to get deleted files into a log file. Bash script [duplicate]

This question already has answers here:
Linux find and delete files but redirect file names to be deleted
(5 answers)
Closed 1 year ago.
So I'm using the script
find /path/to/files/* -mtime +60 -exec rm {} \;
How can I collect the names of the deleted files and write them to a log file in a Bash script?
You could do something like:
find /path/... -print ... | tee -a <log.file>
The -print will print out all the hits, and tee will append them to some log.file.
Side note: the * at the end of your /path/to/files/* seems superfluous.
Side note 2: if you just want to delete the files, find has a built-in -delete action.
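Putting the side notes together, a minimal sketch (GNU find assumed; -type f keeps directories out of the deletion):

```shell
# Log each name as it is deleted: -print emits the path, -delete removes
# the file, and tee appends the printed names to the log file.
find /path/to/files -type f -mtime +60 -print -delete | tee -a deleted.log
```

Because -delete implies -depth, a directory's contents are always processed before the directory itself.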

A script that deletes all the regular files (not the directories) with a .js extension that are in the current directory and its subfolders [duplicate]

This question already has answers here:
How to loop through a directory recursively to delete files with certain extensions
(16 answers)
Closed 2 years ago.
Write a script that deletes all the regular files (not the directories) with a .js extension that are present in the current directory and all its subfolders.
The answer should contain only one command after the shebang line. I've tried the following:
#!/bin/bash
rm -R *.js
… and:
#!/bin/bash
rm -f *.js
find . -type f -name "*.js" -delete
Find all regular files with the .js extension in the current directory and its subdirectories and delete them (-type f excludes any directory that happens to match the pattern).
The best way to achieve this remains the find command:
find . -type f -name '*.js' -exec rm -f {} \;
If, however, you want to stick to rm alone, that is possible only if you know exactly how many directory levels lie under the place you're working in, since each glob matches one depth. For instance, for .js files exactly two levels down:
rm -f */*/*.js
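With bash 4 or later there is a middle ground: the globstar option gives a glob that matches at any depth. A sketch (note that rm without -r will refuse, with an error, any directory that happens to match the pattern, so only regular files are removed):

```shell
# globstar makes ** match any number of directory levels (including zero),
# so **/*.js covers .js files in the current directory and all subfolders.
# nullglob avoids passing the literal pattern to rm when nothing matches.
shopt -s globstar nullglob
rm -f -- **/*.js
```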

copy files to multiple directories at once [duplicate]

This question already has answers here:
Linux commands to copy one file to many files
(13 answers)
Closed 2 years ago.
mkdir dir{0..99}
echo hello > file
I want to copy file into every directory, dir0 to dir99.
Currently, the best solution I came up with is:
for i in {0..99}; do cp file dir$i; done
but there must be much more elegant ways to do this.
Is there a way to cp a file to multiple directories using a command similar to below?
cp file dir*
cp file dir{0..99}
You can use xargs to call cp 100 times:
echo dir{0..99} | xargs -n 1 cp file
See man xargs for details.
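An alternative sketch that avoids spawning 100 cp processes: tee writes its stdin to every path given as an argument, so one process creates all the copies at once.

```shell
# tee duplicates the source file's contents into dir0/file .. dir99/file;
# the copy tee also sends to stdout is discarded into /dev/null.
tee dir{0..99}/file < file > /dev/null
```

Unlike cp, tee copies only the contents; ownership and permission bits of the source file are not preserved.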

Unix tar returns The parameter list is too long [duplicate]

This question already has answers here:
Argument list too long error for rm, cp, mv commands
(31 answers)
Closed 3 years ago.
When I try to tar all the files in a folder using the following command:
tar cvf mailpdfs.tar *.pdf
the shell complains:
ksh: /usr/bin/tar: 0403-027 The parameter list is too long.
How do I deal with this? My folder contains 25000 PDF files, each 2 MB in size. How can I archive them quickly?
You can copy/move all the pdf files to a new folder and then tar the new folder:
mv *.pdf newfolder
tar cvf mailpdfs.tar newfolder
(Beware: mv *.pdf expands to the same oversized argument list, so it can fail with the same error.)
Referenced from unix.com
The tar option -T is what you need
-T, --files-from=FILE
get names to extract or create from FILE
You are exceeding the argument-length limit when ksh execs tar, so generate the list of files like this instead:
ls | grep '\.pdf$' >files.txt
Then use that file with tar
tar cvf mailpdfs.tar -T files.txt
Finally, you can do away with creating a temporary file to hold the filenames by getting tar to read them from stdin (by giving the -T option the special filename -).
So we end up with this
ls | grep '\.pdf$' | tar cvf mailpdfs.tar -T -
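Parsing ls output breaks on unusual filenames. A sketch of a safer pipeline (GNU find and tar assumed) passes NUL-delimited names instead:

```shell
# -print0 and --null delimit names with NUL bytes, so spaces or newlines
# in filenames cannot split or corrupt the list tar reads on stdin.
find . -maxdepth 1 -name '*.pdf' -print0 | tar cvf mailpdfs.tar --null -T -
```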

shell command to extract the part of filename having characters? [duplicate]

This question already has answers here:
Extract filename and extension in Bash
(38 answers)
In Bash, how to strip out all numbers in the file names in a directory while leaving the file extension intact
(1 answer)
Closed 5 years ago.
I have a file named multi_extension123.txt. Before copying this file into the directory, I need to remove files like
multi_extension1234.txt
multi_extension12345.txt
if they are present in the directory, and then copy the earlier file into it. Can anyone give a solution with a shell script?
Note: I need to match only the trailing digits and the extension.
I have tried this
$ filename= $file1
$ echo "${filename%[0-9].*}"
find . -type f maxdepth 0 mindepth 0 -name "'$filename'[0-9]*.txt" -exec rm -f {} \;
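A hedged sketch of what the attempt seems to be aiming for (the source location of the incoming file is hypothetical): strip the trailing digits and extension to get the base name, then delete any numbered .txt variants of that base in the current directory only.

```shell
file1=multi_extension123.txt
# %%[0-9]* removes the longest suffix starting with a digit -> multi_extension
base=${file1%%[0-9]*}
# -maxdepth 1 limits the match to the current directory, as the attempt intended
find . -maxdepth 1 -type f -name "${base}[0-9]*.txt" -exec rm -f {} +
# cp "/some/source/dir/$file1" .   # then copy the new file in (hypothetical path)
```

Note that the original attempt had three bugs: the space in filename= $file1 runs $file1 as a command, maxdepth/mindepth are missing their leading dashes, and the single quotes inside "'$filename'" become literal characters in the pattern.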
