Create an empty tar file and then store its name in a variable - linux

I am writing a shell script in which a tar file named with today's date and time will be created.
tar -zvcf "log_grabber$(date '+%y-%m-%d_%H%M').tar.gz" --files-from /dev/null
Now I want to add more files to this tar file after running a find command. How can I get the name of the tar file and use it in the -exec part of the find command?
find . -type f -name 'local*' -newermt "$user_date" -exec tar -rvf <variable tar file> {} \;
Any help would be very much appreciated.

Instead of
tar -zvcf "log_grabber$(date '+%y-%m-%d_%H%M').tar.gz" --files-from /dev/null
Create a variable with the name first and use that. One catch: append mode (-r) does not work on a compressed archive, so build the archive uncompressed and gzip it once everything has been added:
name="log_grabber$(date '+%y-%m-%d_%H%M').tar"
tar -cvf "$name" --files-from /dev/null
And then:
find . -type f -name 'local*' -newermt "$user_date" -exec tar -rvf "$name" {} +
gzip "$name"
Note that I changed \; to + so that tar gets multiple files in one invocation, rather than one tar invocation per file.
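The difference between \; and + is easy to see by counting invocations with echo instead of tar (a throwaway sketch using a hypothetical temporary directory):

```shell
# Each matching file triggers its own invocation with \; ; they are batched with +
mkdir -p /tmp/exec-demo && cd /tmp/exec-demo
touch a.log b.log c.log
find . -name '*.log' -exec echo run {} \; | wc -l   # one line per file: 3
find . -name '*.log' -exec echo run {} +  | wc -l   # one batched line: 1
```

With thousands of files this means one tar process instead of thousands, which is substantially faster.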

Related

Linux project: bash script to archive and remove files

I've been set a mini project to write a bash script to archive and remove files that are older than 'x' number of days. The files will be archived in the /nfs/archive directory and they need to be compressed (tar) or removed... e.g. '/test.sh 15' would remove files older than 15 days. Moreover, I also need to add some validation checks before removing files...
My code so far:
> #!/bin/bash
>
> #ProjectEssentials:
>
> # TAR: allows you to back up files
> # cronjob: schedule tasks
> # command: find . -mtime +('x') -exec rm {} \; this will remove files older than 'x' number of days
>
> find /Users/alimohamed/downloads/nfs/CAMERA -type f -name '*.mov' -mtime +10 -exec mv {} /Users/limohamed/downloads/nfs/archive/ \;
>
> # TAR: This will allow for the compression
>
> tar -cvzf doc.tar.gz /Users/alimohamed/downloads/nfs/archive/
>
> # Backup before removing files 'cp filename{,.bak}'? find /Users/alimohamed/downloads/nfs/CAMERA -type f name '*.mov' -mtime +30 -exec rm {} \;
Any help would be much appreciated!
Modified script to fix a few typos. Note that the backup file name includes a YYYY-MM-DD date, to allow for multiple backups (limited to one backup per day). Using TOP makes the script generic, so it works on any account.
TOP=~/downloads/nfs
X=15 # Number of days
# Move old files (>= X days) to the archive, via a work folder
mkdir -p "$TOP/work"
find "$TOP/CAMERA" -type f -name '*.mov' -mtime +"$X" -exec mv {} "$TOP/work" \;
# Create the daily backup (note the YYYY-MM-DD in the file name) from the work folder
tar -cvzf "$TOP/archive/doc.$(date +%Y-%m-%d).tar.gz" -C "$TOP/work" .
# Remove all files that were backed up, if needed
find "$TOP/work" -type f -name '*.mov' -exec rm {} \;
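One detail worth noting: the -C "$TOP/work" . part makes tar change into the work folder before archiving, so the backup stores relative paths instead of the full home-directory prefix. A minimal sketch showing the effect (hypothetical /tmp paths, not part of the script above):

```shell
# Build a tiny layout, archive it with -C, then list the archive contents
mkdir -p /tmp/ctest/work
echo demo > /tmp/ctest/work/a.mov
tar -czf /tmp/ctest/doc.tar.gz -C /tmp/ctest/work .
tar -tzf /tmp/ctest/doc.tar.gz    # entries are ./ and ./a.mov, with no /tmp prefix
```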

Tar search results in .sh file

I have to tar a list of files, without their paths, that is the result of a find, via sh (for crontab use).
Each command works fine in Ubuntu's interactive shell, but not in the .sh script.
I tried with :
#!/bin/sh
tar -zcvf /destination/one-$(date +"%Y%m%d").tgz < find /myfolder/ -iname 'one*' -printf '%f\n'
And also with
#!/bin/sh
find /myfolder/ -iname 'one*' -print0 | tar -czvf /destination/one-$(date +"%Y%m%d").tar.gz --null -T -
But both failed. Can someone help? Any alternatives?
Additional scenario info:
/myfolder/ contains:
one1.log
one2.log
one3.log
two1.log
two2.log
I want one.tgz containing one1.log, one2.log, one3.log
I think you are looking to pass the filenames to tar on stdin:
find . -name \*.png -print0 | tar -cv --null -T- -f tarball.tar
In my case:
find /myfolder/ -iname "one*" -print0 | tar -czv --null -T- -f /destination/one-$(date +"%Y%m%d").tar.gz
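One caveat: since find emits full paths, the entries will be stored in the archive as /myfolder/one1.log rather than as bare file names, which is not quite what the question asked for. If you really need path-free entries, one option (a sketch, not from the original answer) is to run find from inside the folder so it prints relative names:

```shell
# Running find from inside the folder makes the stored names relative (./one1.log)
cd /myfolder && find . -maxdepth 1 -iname 'one*' -print0 \
  | tar -czv --null -T- -f /destination/one-$(date +"%Y%m%d").tar.gz
```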

script to tar up multiple log files separately

I'm on a RedHat Linux 6 machine, running Elasticsearch and Logstash. I have a bunch of log files that were rotated daily from back in June till August. I am trying to figure out the best way to tar them up to save some disk space, without manually tarring up each one. I'm a bit of a newbie at scripting, so I was wondering if someone could help me out? The files have the name elasticsearch-cluster.log.datestamp. Ideally they would all be in their individual tar files, so that it'd be easier to go back and take a look at that particular day's logs if needed.
You could use a loop:
for file in elasticsearch-cluster.log.*
do
    tar zcvf "$file".tar.gz "$file"
done
Or if you prefer a one-liner (this is recursive):
find . -name 'elasticsearch-cluster.log.*' -print0 | xargs -0 -I {} tar zcvf {}.tar.gz {}
or, as @chepner mentions, with the -exec option:
find . -name 'elasticsearch-cluster.log.*' -exec tar zcvf {}.tar.gz {} \;
or if want to exclude already zipped files:
find . -name 'elasticsearch-cluster.log.*' -not -name '*.tar.gz' -exec tar zcvf {}.tar.gz {} \;
If you don't mind all the files being in a single tar.gz file, you can do:
tar zcvf backups.tar.gz elasticsearch-cluster.log.*
All these commands leave the original files in place. After you validate the tar.gz files, you can delete the originals manually.
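That validation step can itself be scripted; here is one possible sketch (not from the original answer) that removes an original only after its archive lists cleanly:

```shell
# For every archive produced above, verify it with tar -t before deleting the source
for f in elasticsearch-cluster.log.*.tar.gz; do
    orig="${f%.tar.gz}"
    # tar -tzf exits non-zero if the archive is unreadable or corrupt
    if tar -tzf "$f" > /dev/null 2>&1; then
        rm -- "$orig"
    fi
done
```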

How to find all tar files in various sub-folders, then extract them in the same folder they were found?

I have lots of sub-folders, with only some containing a tar file. i.e.:
folder1/
folder2/this-is-a.tar
folder3/
folder4/this-is-another.tar
I can find which dirs have the tar by simply doing ls */*.tar.
What I want to achieve is somehow find all .tar files, then extract them in the same directory they are found, then delete the .tars.
I've tried ls */*.tar | xargs -n1 tar xvf but that extracts the tars in the directory I'm in, not the directory the tars were found in.
Any help would be greatly appreciated.
for i in */*.tar ; do pushd "$(dirname "$i")" ; tar xf "$(basename "$i")" && rm "$(basename "$i")" ; popd ; done
Edit: this is probably a better way:
find . -type f -iname "*.tar" -print0 -execdir tar xf {} \; -delete
for file in */*.tar; do
    (cd "$(dirname "$file")" && tar xvf "$(basename "$file")") && unlink "$file"
done

Find and tar for each file in Linux

I have a list of files with different modification times: 1_raw, 2_raw, 3_raw... I want to find files that were modified more than 10 days ago and zip them to release disk space. However, the command:
find . -mtime +10 | xargs tar -cvzf backup.tar.gz
will create a new file backup.tar.gz
What I want is to create a tarball for each file, so that I can easily unzip each of them when needed. After the command, my files should become: 1_raw.tar.gz, 2_raw.tar.gz, 3_raw.tar.gz...
Is there anyway to do this? Thanks!
Something like this is what you are after:
find . -mtime +10 -type f -print0 | while IFS= read -r -d '' file; do
    tar -cvzf "${file}.tar.gz" "$file"
done
The -type f was added so that it doesn't also process directories, just files.
This adds a compressed archive of each file that was modified more than 10 days ago, in all subdirectories, and places the compressed archive next to its respective unarchived version (in the same folder). I assume this is what you wanted.
If you don't need to handle whitespace in the paths, you could simply do:
for f in $(find . -mtime +10 -type f) ; do
    tar -cvzf "${f}.tar.gz" "$f"
done
Simply try this:
$ find . -mtime +10 | xargs -I {} tar czvf {}.tar.gz {}
Here, {} indicates replace-str
-I replace-str
Replace occurrences of replace-str in the initial-arguments with names read from standard input. Also, unquoted blanks do not terminate input items; instead the separator is the newline character. Implies -x and -L 1.
https://linux.die.net/man/1/xargs
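Note that, as the man page excerpt says, this reads newline-separated names, so it breaks on file names containing newlines (and plain xargs without -I breaks on spaces too). If that matters, the same one-liner works with null-delimited output (a sketch combining the pieces above):

```shell
# -print0 / -0 keep names with spaces or newlines intact
find . -mtime +10 -type f -print0 | xargs -0 -I {} tar czvf {}.tar.gz {}
```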
