How to extract archive from this script (using tar) - linux

I have absolutely no idea how to unpack the created archive, so I am including the complete script. A Debian-based distribution named Univention uses this to back up several files into a tar archive.
The real archive is built inside a function. The main part where the actual tar file is created is:
cat "$TMPDIR/freeinfo.txt" >> "$TMPDIR/Installinfo.txt" 2>/dev/null
echo >$TMPDIR/endtag.txt
echo "%%%%OXBACKUP_${DATE}_HEADER_ENDTAG" >> "$TMPDIR/endtag.txt"
BACKUPINFO="$BACKUPINFO endtag.txt"
cat 2>/dev/null << EOF > "$TMPDIR/Installinfo.sh"
BACKUPHOSTNAME="$hostname"
BACKUPDOMAINNAME="$domainname"
BACKUPBASEDN="$ldap_base"
BACKUPTIMEZONE="$(cat /etc/timezone)"
BACKUPLANG="$(echo $locale_default)"
BACKUPSAMBADOM="$windows_domain"
BACKUPSAMBAINSTALLED="$SAMBAINSTALLED"
BACKUPOXINTEGRATIONVERSION="$INTEGRATIONVERSION"
BACKUPSECLEVEL="$(univention-config-registry get version/security-patchlevel)"
BACKUPVERSION=2
SECRETFILES="$SECRETFILES"
OTHERFILES="$OTHERFILES"
OXCONFIG="$OXCONFIG"
CRONTABS="$CRONTABS"
CERTFILES="$CERTFILES"
EOF
pstatus=()
#
# the actual backup to stdout
#
sync ; sync ; sync
RETVAL=$(
(tar cO $BACKUPINFO 2>/dev/null
tar cO $SECRETFILES 2>/dev/null
tar cO $OTHERFILES 2>/dev/null
tar cO $OXCONFIG 2>/dev/null
tar cO $CRONTABS 2>/dev/null
tar cO $CERTFILES 2>/dev/null
[ -f $EXTRAFILES ] && tar --no-recursion -T $EXTRAFILES -cO 2>/dev/null
tar --no-recursion --null -T dirlist_mailandfilestore -cO 2>/dev/null
tar --null -T filelist_mailandfilestore -cO 2>/dev/null
tar --no-recursion --null -T dirlist_shares -cO 2>/dev/null
tar --null -T filelist_shares -cO 2>/dev/null
) |
#help us out with smbclient, perl, scp until we get a working curl
case "$BACKUPPROTOCOL" in
##stripped protocol specific stuff ... (*) is the way to go!
(*) dd 2>>${LOGFILE}_${DATE} > ${BACKUPPATH:-$DEFAULTBACKUPPATH}/backup_$DATE && echo "201"
chmod 640 "${BACKUPPATH}/backup_$DATE" >/dev/null 2>&1
chown root:www-data "${BACKUPPATH}/backup_$DATE" >/dev/null 2>&1
if [[ x"$BACKUPPATH" != x && "$BACKUPPATH" != "$DEFAULTBACKUPPATH" ]] ; then
# temporary permissions fix
ln -sf "${BACKUPPATH}/backup_$DATE" "$DEFAULTBACKUPPATH/"
fi
;;
esac
)
The archive is 54 GB on the system, but tar xvf extracts only the first level of the archive. Sorry, it is hard to explain: all in all I only get 40 MB out of this 54 GB, and all the directories that should be in the archive are missing.
The use of
$( (tar ...
tar ... ) | dd > foo )
is also totally unknown to me. What does this script do?
I think I found a solution myself (I updated the script a little bit). The script generates a tag which marks the end of the first archive, so I used
grep -A1 -a -b "HEADER_ENDTAG" backup.tar
The value was 41247795, then:
dd skip=41247795 if=../../backup of=test
It looks like I can now extract the "real" archive. Is there another way to jump to this byte offset automatically, i.e. without running grep by hand?
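As an aside, when the file is just several tar streams written back to back (as the script above produces), GNU tar's -i/--ignore-zeros option is worth trying; it reads past the end-of-archive blocks that separate concatenated sections, which may avoid the offset arithmetic entirely. A minimal sketch, assuming GNU tar and the file name used above:
# list everything across all concatenated sections
tar -tvif backup
# extract everything; -i / --ignore-zeros keeps reading past the
# zeroed end-of-archive blocks between the sections
tar -xvif backup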

Your script appears to concatenate several tar files together into a single large file.
To extract a single section, I use a shell function / script like this:
File tarsection:
#!/bin/sh
tar_section() {
    local x=1
    while [ "$x" -lt "$1" ]; do
        tar t > /dev/null || echo "Error in section $x" >&2
        x=$(( x + 1 ))
    done
    shift
    tar f - "$@"
}
tarfile="$1"
shift
tar_section "$@" < "$tarfile"
Then you can do (for example, for part 3 of the big file):
tarsection YOUR_54GB_BACKUP_FILE 3 -t | less
cd ...extractlocation
tarsection YOUR_54GB_BACKUP_FILE 3 -x

Related

Determine how to uncompress an archive based on the file command

I'm new to bash scripting, and I want to write a script called unpack, used something like this:
unpack [-r] [-v] file [file...]
-v - verbose
-r - recursive - will traverse contents of folders recursively, performing unpack on each.
I need to determine what kind of compression was used and perform unpacking for those compression types.
Assume that file names and extensions may be meaningless; the only way to know which method to use is the file command.
I have 4 unpacking options: gunzip, bunzip2, unzip, and uncompress,
so I wrote a function called execute_unpacking:
execute_unpacking(){
for FILE in "$@"
do
local FILE_TYPE=$(file "${FILE}")
# How to get the compression type of the file?
case "${FILE_TYPE}" in
*bzip2) bunzip2 ${RECURSIVE} "${FILE}" ;;
*gzip) gunzip ${RECURSIVE} "${FILE}" ;;
*Zip) unzip ${RECURSIVE} "${FILE}" ;;
*compress) uncompress ${RECURSIVE} "${FILE}" ;;
*) echo "${FILE} cannot be extracted" ;;
esac
done
}
So based on $(file "${FILE}") I need to check for Zip, bzip2, compress, or gzip.
Is this the correct way to do it? (I don't want to use external tools like dtrx.)
EDIT:
For example, if I have 4 files:
$(file -i archive) => archive: text/plain; charset=us-ascii
$(file -i archive.bz2) => archive.bz2: application/x-bzip2; charset=binary
$(file -i archive.gz) => archive.gz: application/x-gzip; charset=binary
$(file -i archive.cmpr) => archive.cmpr: application/x-compress; charset=binary
So I need to assign one of the four values gzip, compress, bzip2, or txt to the FILE_TYPE variable and then match those patterns accordingly inside my case statement.
What I would do:
#!/bin/bash
set -v
case $1 in
*.tar) tar xvf "$1";;
*.tgz) tar zxvf "$1";;
*.tar.gz) tar zxvf "$1";;
*.xz) tar xJvf "$1";;
*.gz) gunzip "$1";;
*.zip) unzip "$1";;
*.rar) unrar x "$1";;
*tar.bz2) tar xjvf "$1";;
*.bz2) bzip2 -d "$1";;
*) echo >&2 "unknow $1"
exit 1
;;
esac
Could be enhanced using file -i:
case $(file -i "$1") in
*/x-bzip2*) bzip2 -d "$1";;
*/gzip*) gunzip "$1";;
*/zip*) unzip "$1";;
*/x-xz*) tar xJvf "$1";;
*) echo "File $1 cannot be extracted";;
esac
With file -i as Gilles Quenot suggested:
for file; do
file_type=$(file -i "$file")
case "$file_type" in
*application/x-bzip2*) echo "bzip2 file found";;
*application/gzip*) echo "gzip file found";;
*application/zip*) echo "zip file found";;
*application/x-xz*) echo "xz file found";;
*application/x-compress*) echo "compressed file found";;
?) echo "${file} cannot be extarcted";;
esac
done
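A slightly tidier variant is to match on the bare MIME type instead of the whole file -i line. This is only a sketch: it assumes file's -b and --mime-type options (which print just the MIME type), and the unpack_one name is illustrative.
unpack_one() {
    local mime
    mime=$(file -b --mime-type "$1")
    case "$mime" in
        application/x-bzip2)                  bunzip2 "$1" ;;
        application/gzip|application/x-gzip)  gunzip "$1" ;;
        application/zip)                      unzip "$1" ;;
        application/x-compress)               uncompress "$1" ;;
        application/x-xz)                     unxz "$1" ;;
        *) echo "$1: unsupported type $mime" >&2; return 1 ;;
    esac
}
Called as, for example, unpack_one archive.bz2.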

tar command does not produce the .tar.gz file

I am trying to iterate in a loop, tar a couple of directories on each iteration, and then compare the md5 sums of both. I noticed that my first tar statement produces the tar file one level above the actual path of the directory, i.e. the statement:
tar -czvf ${folder_name}.tar.gz /tmp/psk1/hadoop_validation$ENV/${folder_name}
produces the ${folder_name}.tar.gz in /tmp/psk1/ rather than /tmp/psk1/hadoop_validation$ENV/
and the second tar statement:
tar -czvf ${folder_name}.tar.gz ${edge_base_dir}/wlossf$ENV/app/${folder_name}
doesn't produce the tar file at all. I can't find it even one level above the actual path.
hdfs dfs -ls /haas/wlf/wlossf$ENV/app | while read rec; do
echo $rec
folder_path=`echo ${rec} | awk -F ' ' '{print $8}'`
folder_name=`echo ${folder_path} | awk -F '/' '{print $6}'`
if [ ! -z ${folder_name} ] && [ ! -z ${folder_path} ]; then
hdfs dfs -get ${folder_path} /tmp/psk1/hadoop_validation$ENV/
if [ $? -eq 0 ]; then
echo "Hadoop to local copy job Successful"
else
echo "Hadoop to local copy job Failed"
fi
tar -czvf ${folder_name}.tar.gz /tmp/psk1/hadoop_validation$ENV/${folder_name}
hadoop_md5=$(md5sum /tmp/psk1/hadoop_validation$ENV/${folder_name}.tar.gz)
tar -czvf ${folder_name}.tar.gz ${edge_base_dir}/wlossf$ENV/app/${folder_name}
edge_md5=$(md5sum ${edge_base_dir}/wlossf$ENV/app/${folder_name}.tar.gz)
if [ ${hadoop_md5} == ${edge_md5} ]; then
echo "${folder_name} is good"
else
echo "${folder_name} is bad"
fi
fi
echo ${folder_name}
echo ${folder_path}
done
What am I missing here? Any help would be appreciated.
Thank you.
As mouviciel said in the comments, tar by default creates the file in the current working directory.
Simply prefix the tar.gz file with the folder and it will create it where you want it:
tar -czvf /tmp/psk1/hadoop_validation$ENV/${folder_name}.tar.gz /tmp/psk1/hadoop_validation$ENV/${folder_name}
Note that as you will be creating the tar inside the same folder that you are archiving, you'll get a file changed as we read it warning as part of the output. Nothing to worry about.
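If you would rather not store the full /tmp/... path inside the archive, a small sketch using tar's -C option does the same thing with relative member names (paths taken from the question):
# -C switches directory before the member name is resolved,
# so the archive contains only ${folder_name}/...
tar -czvf /tmp/psk1/hadoop_validation$ENV/${folder_name}.tar.gz \
    -C /tmp/psk1/hadoop_validation$ENV "${folder_name}"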

Automation script to download and extract decompressed files using bash scripting

I created a script here. The script is all about checking the disk space: if there is enough space, it downloads the needed file and then extracts it 5 times, and it must not extract the last tar.gz file. But when I executed the script I ran into a problem: the extract part only extracts once, even though I want it done 5 times.
#!/bin/bash
output=$(df -h)
file=$(awk -F = '{print $2}' config.txt)
filename=$(echo "$bundle" | awk -F / '{print $11}')
diskspace=$(df -h /var/ | sed '1d' | awk '{print $5}' | cut -d'%' -f1)
allowed=10
address=$(awk -F = '{print $2}' newconfig.txt)
if [ "${diskspace}" -gt "${allowed}" ]; then
# cd "/var/"
wget $bundle
else
echo "Not enough space to download the bundle"
echo $output
exit
fi
i=0
for tarfile in *.tar.gz
do
$(($i++))
[ $i = 5 ] && break
tar -xf "$tarfile"
done
For config.txt:
https://www.google.com.ph/files.tar.gz
files.tar.gz is composed of 5 nested archives:
files1.tar.gz -> files2.tar.gz -> files3.tar.gz -> files4.tar.gz -> files5.tar.gz
What should I do to extract it five times, so that files5.tar.gz shows up but is not itself extracted?
Help is greatly appreciated.
Since I couldn't figure out the exact code you want,
I'll give example code using a while loop instead:
i=0
tar xf files.tar.gz
while [ $i -lt 4 ];do
((i++))
tar xf files${i}.tar.gz
done
Or use a for loop to do the same thing:
for i in "" {1..4};do
tar xf files${i}.tar.gz
done
Both of these execute the following commands:
tar xf files.tar.gz
tar xf files1.tar.gz
tar xf files2.tar.gz
tar xf files3.tar.gz
tar xf files4.tar.gz
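A slightly more generic sketch discovers the next archive name instead of hard-coding it. It assumes each archive contains exactly one nested *.tar.gz; the files*.tar.gz names come from the question.
archive=files.tar.gz
for _ in 1 2 3 4 5; do                # five extractions; files5.tar.gz stays packed
    next=$(tar -tzf "$archive" | grep '\.tar\.gz$' | head -n 1)
    tar -xzf "$archive"
    [ -n "$next" ] || break           # nothing nested left, stop early
    archive=$next
done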

grep from tar.gz without extracting [faster one]

I am trying to grep a pattern in a dozen .tar.gz files, but it is very slow.
I am using:
tar -ztf file.tar.gz | while read FILENAME
do
if tar -zxf file.tar.gz "$FILENAME" -O | grep "string" > /dev/null
then
echo "$FILENAME contains string"
fi
done
If you have zgrep you can use
zgrep -a string file.tar.gz
You can use the --to-command option to pipe files to an arbitrary script. Using this you can process the archive in a single pass (and without a temporary file). See also this question, and the manual.
Armed with the above information, you could try something like:
$ tar xf file.tar.gz --to-command "awk '/bar/ { print ENVIRON[\"TAR_FILENAME\"]; exit }'"
bfe2/.bferc
bfe2/CHANGELOG
bfe2/README.bferc
I know this question is 4 years old, but I have a couple of different options:
Option 1: Using tar --to-command grep
The following line will look in example.tgz for PATTERN. This is similar to #Jester's example, but I couldn't get his pattern matching to work.
tar xzf example.tgz --to-command 'grep --label="$TAR_FILENAME" -H PATTERN ; true'
Option 2: Using tar -tzf
The second option is using tar -tzf to list the files, then go through them with grep. You can create a function to use it over and over:
targrep () {
for i in $(tar -tzf "$1"); do
results=$(tar -Oxzf "$1" "$i" | grep --label="$i" -H "$2")
echo "$results"
done
}
Usage:
targrep example.tar.gz "pattern"
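For what it's worth, a variant of that function (still a sketch, assuming GNU tar and GNU grep's --label option) reads the member list line by line, so archive member names containing spaces are not split:
targrep() {
    # iterate over members one per line instead of word-splitting $(tar -tzf ...)
    tar -tzf "$1" | while IFS= read -r member; do
        tar -Oxzf "$1" "$member" | grep --label="$member" -H "$2"
    done
}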
Both of the options below work well.
$ zgrep -ai 'CDF_FEED' FeedService.log.1.05-31-2019-150003.tar.gz | more
2019-05-30 19:20:14.568 ERROR 281 --- [http-nio-8007-exec-360] DrupalFeedService : CDF_FEED_SERVICE::CLASSIFICATION_ERROR:408: Classification failed even after maximum retries for url : abcd.html
$ zcat FeedService.log.1.05-31-2019-150003.tar.gz | grep -ai 'CDF_FEED'
2019-05-30 19:20:14.568 ERROR 281 --- [http-nio-8007-exec-360] DrupalFeedService : CDF_FEED_SERVICE::CLASSIFICATION_ERROR:408: Classification failed even after maximum retries for url : abcd.html
If this is really slow, I suspect you're dealing with a large archive file. It's going to uncompress it once to extract the file list, and then uncompress it N times--where N is the number of files in the archive--for the grep. In addition to all the uncompressing, it's going to have to scan a fair bit into the archive each time to extract each file. One of tar's biggest drawbacks is that there is no table of contents at the beginning. There's no efficient way to get information about all the files in the archive and only read that portion of the file. It essentially has to read all of the file up to the thing you're extracting every time; it can't just jump to a filename's location right away.
The easiest thing you can do to speed this up would be to uncompress the file first (gunzip file.tar.gz) and then work on the .tar file. That might help enough by itself. It's still going to loop through the entire archive N times, though.
If you really want this to be efficient, your only option is to completely extract everything in the archive before processing it. Since your problem is speed, I suspect this is a giant file that you don't want to extract first, but if you can, this will speed things up a lot:
tar zxf file.tar.gz
for f in hopefullySomeSubdir/*; do
grep -l "string" $f
done
Note that grep -l prints the name of any matching file, quits after the first match, and is silent if there's no match. That alone will speed up the grepping portion of your command, so even if you don't have the space to extract the entire archive, grep -l will help. If the files are huge, it will help a lot.
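A compact sketch of the same extract-then-grep idea (assuming GNU grep's -r and -l options), using a throwaway directory that gets cleaned up afterwards:
tmp=$(mktemp -d)
tar -zxf file.tar.gz -C "$tmp"
grep -rl "string" "$tmp"    # print only the names of matching files
rm -rf "$tmp"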
For starters, you could start more than one process:
tar -ztf file.tar.gz | while read FILENAME
do
(if tar -zxf file.tar.gz "$FILENAME" -O | grep -l "string"
then
echo "$FILENAME contains string"
fi) &
done
The ( ... ) & creates a new detached (read: the parent shell does not wait for the child)
process.
After that, you should optimize the extracting of your archive. The read is no problem,
as the OS should have cached the file access already. However, tar needs to unpack
the archive every time the loop runs, which can be slow. Unpacking the archive once
and iterating over the result may help here:
tempPath=$(mktemp -d) &&
tar -zxf file.tar.gz -C "$tempPath" &&
find $tempPath -type f | while read FILENAME
do
(if grep -l "string" "$FILENAME"
then
echo "$FILENAME contains string"
fi) &
done && rm -r $tempPath
find is used here to get a list of files in tar's target directory; we iterate over that list, searching each file for the string.
Edit: Use grep -l to speed up things, as Jim pointed out. From man grep:
-l, --files-with-matches
Suppress normal output; instead print the name of each input file from which output would
normally have been printed. The scanning will stop on the first match. (-l is specified
by POSIX.)
I am trying to grep a pattern in a dozen .tar.gz files, but it is very slow. I am using:
tar -ztf file.tar.gz | while read FILENAME
do
if tar -zxf file.tar.gz "$FILENAME" -O | grep "string" > /dev/null
then
echo "$FILENAME contains string"
fi
done
That's actually very easy with ugrep option -z:
-z, --decompress
Decompress files to search, when compressed. Archives (.cpio,
.pax, .tar, and .zip) and compressed archives (e.g. .taz, .tgz,
.tpz, .tbz, .tbz2, .tb2, .tz2, .tlz, and .txz) are searched and
matching pathnames of files in archives are output in braces. If
-g, -O, -M, or -t is specified, searches files within archives
whose name matches globs, matches file name extensions, matches
file signature magic bytes, or matches file types, respectively.
Supported compression formats: gzip (.gz), compress (.Z), zip,
bzip2 (requires suffix .bz, .bz2, .bzip2, .tbz, .tbz2, .tb2, .tz2),
lzma and xz (requires suffix .lzma, .tlz, .xz, .txz).
Which requires just one command to search file.tar.gz as follows:
ugrep -z "string" file.tar.gz
This greps each of the archived files to display matches. Archived filenames are shown in braces to distinguish them from ordinary filenames. For example:
$ ugrep -z "Hello" archive.tgz
{Hello.bat}:echo "Hello World!"
Binary file archive.tgz{Hello.class} matches
{Hello.java}:public class Hello // prints a Hello World! greeting
{Hello.java}: { System.out.println("Hello World!");
{Hello.pdf}:(Hello)
{Hello.sh}:echo "Hello World!"
{Hello.txt}:Hello
If you just want the file names, use option -l (--files-with-matches) and customize the filename output with option --format="%z%~" to get rid of the braces:
$ ugrep -z Hello -l --format="%z%~" archive.tgz
Hello.bat
Hello.class
Hello.java
Hello.pdf
Hello.sh
Hello.txt
All of the code above was really helpful, but none of it quite answered my own need: grep all *.tar.gz files in the current directory to find a pattern that is specified as an argument in a reusable script to output:
The name of both the archive file and the extracted file
The line number where the pattern was found
The contents of the matching line
It's what I was really hoping that zgrep could do for me and it just can't.
Here's my solution:
pattern=$1
for f in *.tar.gz; do
echo "$f:"
tar -xzf "$f" --to-command 'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true";
done
You can also replace the tar line with the following if you'd like to test that all variables are expanding properly with a basic echo statement:
tar -xzf "$f" --to-command 'echo "f:`basename $TAR_FILENAME` s:'"$pattern\""
Let me explain what's going on. Hopefully, the for loop and the echo of the archive filename in question is obvious.
tar -xzf: x extract, z filter through gzip, f based on the following archive file...
"$f": The archive file provided by the for loop (such as what you'd get by doing an ls) in double-quotes to allow the variable to expand and ensure that the script is not broken by any file names with spaces, etc.
--to-command: Pass the output of the tar command to another command rather than actually extracting files to the filesystem. Everything after this specifies what the command is (grep) and what arguments we're passing to that command.
Let's break that part down by itself, since it's the "secret sauce" here.
'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true"
First, we use a single-quote to start this chunk so that the executed sub-command (basename $TAR_FILENAME) is not immediately expanded/resolved. More on that in a moment.
grep: The command to be run on the (not actually) extracted files
--label=: The label to prepend the results, the value of which is enclosed in double-quotes because we do want to have the grep command resolve the $TAR_FILENAME environment variable passed in by the tar command.
basename $TAR_FILENAME: Runs as a command (surrounded by backticks) and removes directory path and outputs only the name of the file
-Hin: H Display filename (provided by the label), i Case insensitive search, n Display line number of match
Then we "end" the first part of the command string with a single quote and start up the next part with a double quote so that the $pattern, passed in as the first argument, can be resolved.
Realizing which quotes I needed to use where was the part that tripped me up the longest. Hopefully, this all makes sense to you and helps someone else out. Also, I hope I can find this in a year when I need it again (and I've forgotten about the script I made for it already!)
And it's been a couple of weeks since I wrote the above and it's still super useful... but it wasn't quite good enough, as files have piled up and searching for things has gotten messier. I needed a way to limit what I look at by the date of the file (only looking at more recent files). So here's that code. Hopefully it's fairly self-explanatory.
if [ -z "$1" ]; then
echo "Look within all tar.gz files for a string pattern, optionally only in recent files"
echo "Usage: targrep <string to search for> [start date]"
fi
pattern=$1
startdatein=$2
startdate=$(date -d "$startdatein" +%s)
for f in *.tar.gz; do
filedate=$(date -r "$f" +%s)
if [[ -z "$startdatein" ]] || [[ $filedate -ge $startdate ]]; then
echo "$f:"
tar -xzf "$f" --to-command 'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true"
fi
done
And I can't stop tweaking this thing. I added an argument to filter by the name of the output files in the tar file. Wildcards work, too.
Usage:
targrep.sh [-d <start date>] [-f <filename to include>] <string to search for>
Example:
targrep.sh -d "1/1/2019" -f "*vehicle_models.csv" ford
while getopts "d:f:" opt; do
case $opt in
d) startdatein=$OPTARG;;
f) targetfile=$OPTARG;;
esac
done
shift "$((OPTIND-1))" # Discard options and bring forward remaining arguments
pattern=$1
echo "Searching for: $pattern"
if [[ -n $targetfile ]]; then
echo "in filenames: $targetfile"
fi
startdate=$(date -d "$startdatein" +%s)
for f in *.tar.gz; do
filedate=$(date -r "$f" +%s)
if [[ -z "$startdatein" ]] || [[ $filedate -ge $startdate ]]; then
echo "$f:"
if [[ -z "$targetfile" ]]; then
tar -xzf "$f" --to-command 'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true"
else
tar -xzf "$f" --no-anchored "$targetfile" --to-command 'grep --label="`basename $TAR_FILENAME`" -Hin '"$pattern ; true"
fi
fi
done
zgrep works fine for me, but only if all the files inside are plain text.
It seems that nothing works if the tgz file contains gzip files.
You can mount the TAR archive with ratarmount and then simply search for the pattern in the mounted view:
pip install --user ratarmount
ratarmount large-archive.tar mountpoint
grep -r '<pattern>' mountpoint/
This is much faster than iterating over each file and piping it to grep separately, especially for compressed TARs. Here are benchmark results in seconds for a 55 MiB uncompressed and 42 MiB compressed TAR archive containing 40 files:
Compression    Ratarmount      Bash loop over tar -O
none           0.31 +- 0.01     0.55 +- 0.02
gzip           1.1  +- 0.1     13.5  +- 0.1
bzip2          1.2  +- 0.1     97.8  +- 0.2
Of course, these results are highly dependent on the archive size and how many files the archive contains. These test examples are pretty small because I didn't want to wait too long. But, they already exemplify the problem well enough. The more files there are, the longer it takes for tar -O to jump to the correct file. And for compressed archives, it will be quadratically slower the larger the archive size is because everything before the requested file has to be decompressed and each file is requested separately. Both of these problems are solved by ratarmount.
This is the code for benchmarking:
function checkFilesWithRatarmount()
{
local pattern=$1
local archive=$2
ratarmount "$archive" "$archive.mountpoint"
'grep' -r -l "$pattern" "$archive.mountpoint/"
}
function checkEachFileViaStdOut()
{
local pattern=$1
local archive=$2
tar --list --file "$archive" | while read -r file; do
if tar -x --file "$archive" -O -- "$file" | grep -q "$pattern"; then
echo "Found pattern in: $file"
fi
done
}
function createSampleTar()
{
for i in $( seq 40 ); do
head -c $(( 1024 * 1024 )) /dev/urandom | base64 > $i.dat
done
tar -czf "$1" [0-9]*.dat
}
createSampleTar myarchive.tar.gz
time checkEachFileViaStdOut ABCD myarchive.tar.gz
time checkFilesWithRatarmount ABCD myarchive.tar.gz
sleep 0.5s
fusermount -u myarchive.tar.gz.mountpoint
In my case the tarballs have a lot of tiny files and I want to know what archived file inside the tarball matches. zgrep is fast (less than one second) but doesn't provide the info I want, and tar --to-command grep is much, much slower (many minutes) [1].
So I went the other direction and had zgrep tell me the byte offsets of the matches in the tarball and put that together with the list of offsets in the tarball of all archived files to find the matching archived files.
#!/bin/bash
set -e
set -o pipefail
function tar_offsets() {
# Get the byte offsets of all the files in a given tarball
# based on https://stackoverflow.com/a/49865044/60422
[ $# -eq 1 ]
tar -tvf "$1" -R | awk '
BEGIN{
getline;
f=$8;
s=$5;
}
{
offset = int($2) * 512 - and((s+511), compl(512)+1)
print offset,s,f;
f=$8;
s=$5;
}'
}
function tar_byte_offsets_to_files() {
[ $# -eq 1 ]
# Convert the search results of a tarball with byte offsets
# to search results with archived file name and offset, using
# the provided tar_offsets output (single pass, suitable for
# process substitution)
offsets_file="$1"
prev_offset=0
prev_offset_filename=""
IFS=' ' read -r last_offset last_len last_offset_filename < "$offsets_file"
while IFS=':' read -r search_result_offset match_text
do
while [ $last_offset -lt $search_result_offset ]; do
prev_offset=$last_offset
prev_offset_filename="$last_offset_filename"
IFS=' ' read -r last_offset last_len last_offset_filename < "$offsets_file"
# offsets increasing safeguard
[ $prev_offset -le $last_offset ]
done
# now last offset is the first file strictly after search result offset so prev offset is
# the one at or before it, and must be the one it is in
result_file_offset=$(( $search_result_offset - $prev_offset ))
echo "$prev_offset_filename:$result_file_offset:$match_text"
done
}
# Putting it together e.g.
zgrep -a --byte-offset "your search here" some.tgz | tar_byte_offsets_to_files <(tar_offsets some.tgz)
[1] I'm running this in Git for Windows' minimal MSYS2 fork unixy environment, so it's possible that the launch overhead of grep is much, much higher there than on any kind of real Unix machine and would make tar --to-command grep good enough; benchmark solutions for your own needs and platform situation before selecting.

Update bash script, file check, how?

#!/bin/sh
LOCAL=/var/local
TMP=/var/tmp
URL=http://um10.eset.com/eset_upd
USER=""
PASSWD=""
WGET="wget --user=$USER --password=$PASSWD -t 15 -T 15 -N -nH -nd -q"
UPDATEFILE="update.ver"
cd $LOCAL
CMD="$WGET $URL/$UPDATEFILE"
eval "$CMD" || exit 1;
if [ -n "`file $UPDATEFILE|grep -i rar`" ]; then
(
cd $TMP
rm -f $TMP/$UPDATEFILE
unrar x $LOCAL/$UPDATEFILE ./
)
UPDATEFILE=$TMP/$UPDATEFILE
URL=`echo $URL|sed -e s:/eset_upd::`
fi
TMPFILE=$TMP/nod32tmpfile
grep file=/ $UPDATEFILE|tr -d \\r > $TMPFILE
FILELIST=`cut -c 6- $TMPFILE`
rm -f $TMPFILE
echo "Downloading updates..."
for FILE in $FILELIST; do
CMD="$WGET \"$URL$FILE\""
eval "$CMD"
done
cp $UPDATEFILE $LOCAL/update.ver
perl -i -pe 's/\/download\/\S+\/(\S+\.nup)/\1/g' $LOCAL/update.ver
echo "Done."
So I have this code to download definitions for my antivirus. The only problem is that it downloads all the files every time I run the script. Is it possible to implement some sort of file check? For example:
"if the file is present and has the same file size, skip it"
The -nc argument to wget will not re-fetch files that already exist. It is, however, not compatible with the -N switch. So you'll have to change your WGET line to:
WGET="wget --user=$USER --password=$PASSWD -t 15 -T 15 -nH -nd -q -nc"
