I want to update an existing tar file with newer files.
In the GNU tar manual, I read:
4.2.3 Updating an Archive
In the previous section, you learned how to use ‘--append’ to add a
file to an existing archive. A related operation is ‘--update’ (‘-u’).
The ‘--update’ operation updates a tar archive by comparing the date
of the specified archive members against the date of the file with the
same name. If the file has been modified more recently than the
archive member, then the newer version of the file is added to the
archive (as with ‘--append’).
However,
When I run my tar update command, the files are appended even though their modification dates are exactly the same. I want to ONLY append where modification dates of files to be tarred are newer than those already in the tar...
tar -uf ./tarfile.tar /localdirectory/ >/dev/null 2>&1
Currently, every time I update, the tar doubles in size...
The update you describe implies that the file within the archive is replaced. If the new copy were smaller than what's in the archive, it could be rewritten in place. If the new copy were larger, however, tar would have to zero out the existing archive entry and append. Such updates would leave runs of '\0's or other unused bytes, so any normal user would want those sections removed, which would mean "moving up" the bytes of the archive contents towards the start of the file (think C's memmove).
Such an in-place move operation, however, would involve seek-read-seek-write cycles and is costly, especially in the context of tapes, the devices tar was originally designed for, whose seek performance is not comparable to that of hard disks. You'd wear out the tape rather quickly with such move operations. And of course, WORM devices don't support such a move operation either.
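A minimal demonstration of this append-only behaviour (file names here are made up): update a member and the archive simply grows a second copy, with the later one winning on extraction.
echo one > file.txt
tar -cf demo.tar file.txt
echo two > file.txt          # same name, newer mtime
tar -uf demo.tar file.txt
tar -tvf demo.tar            # file.txt is now listed twice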
If you do not want to use the "-P" switch, tar -u ... works correctly if the current directory is the parent of the directory you are going to update and the path to that directory on the tar command line is not an absolute path.
For example:
We want to update the directory /home/blabla/Dir. We do it like this:
cd /home/blabla
tar -u -f tarfile.tar Dir
In general, the update must be made from the same place as the creation, so that the paths agree.
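Before updating, you can list the archive to confirm what member paths it actually stores, so the paths you pass to tar -u agree with them:
tar -tf tarfile.tar | head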
It is also possible:
cd /home/blabla/Dir
tar -u -f /path/to/tarfile.tar .
You may simply create (instead of update) the archive each time:
tar -cvpf tarfile.tar *
This will solve the problem of your archive doubling in size each time. But of course, it is generating the whole archive every time.
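If you do recreate each time, a dated file name (the naming scheme here is just an illustration) gives you crude versioning as a side benefit:
tar -cvpf "tarfile-$(date +%F).tar" *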
By default tar strips the leading / from member names, but it does this after deciding what needs to be updated.
Therefore, if you are archiving an absolute path, you either need to cd / and use relative paths:
cd /
tar -uf "$OLDPWD/tarfile.tar" localdirectory/ >/dev/null 2>&1
or add the -P/--absolute-names option, both when creating and when updating:
tar -cPf tarfile.tar /localdirectory/ >/dev/null 2>&1
tar -uPf tarfile.tar /localdirectory/ >/dev/null 2>&1
However, the updated items will still be appended. A tar (tape archive) file cannot be modified except by appending.
Warning! When speaking about "dates" here, it means any date, and that includes the access time.
Should your files have been accessed in any such way (a simple ls -l is enough), then tar is right to do what it does!
You need to find another way to do what you want: perhaps use a sentinel file and check whether its modification date is older than that of the files you wish to append.
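A sketch of that sentinel idea, assuming GNU tar and a made-up sentinel name (.last-backup): append only files modified since the last run, then refresh the sentinel.
tar -cf tarfile.tar /localdirectory/ && touch .last-backup   # initial archive plus sentinel
# on each subsequent run, append only files modified since the sentinel:
find /localdirectory -type f -newer .last-backup -print0 |
    tar --null -T - -rf tarfile.tar
touch .last-backup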
I need to copy a file from multiple locations to the backup directory while retaining its directory structure. For example, I have a file "a.txt" at the following locations /a/b/a.txt /a/c/a.txt a/d/a.txt a/e/a.txt, and I now need to copy this file from multiple locations to the backup directory /tmp/backup. The end result should be:
When I list /tmp/backup/a, it should contain /b/a.txt /c/a.txt /d/a.txt & /e/a.txt.
For this, I had used the command: echo /a/*/a.txt | xargs -I {} -n 1 sudo cp --parent -vp {} /tmp/backup. This is throwing the error "cp: cannot stat '/a/b/a.txt /a/c/a.txt a/d/a.txt a/e/a.txt': No such file or directory"
The -I option is taking the complete input from echo as one value instead of individual values (the way -n 1 would). If someone could help debug this issue, that would be very helpful, rather than suggesting an alternative command.
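For what it's worth, the immediate cause is that echo prints all four paths on a single line and -I consumes input line by line, so {} expands to the whole line as one argument. Printing one path per line avoids that (using GNU cp's --parents spelling):
printf '%s\n' /a/*/a.txt | xargs -I {} sudo cp --parents -vp {} /tmp/backup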
Use rsync with the --relative (-R) option to keep (parts of) the source paths.
I've used a wildcard for the source to match your example command rather than the explicit list of directories mentioned in your question.
rsync -avR /a/*/a.txt /tmp/backup/
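rsync -R also lets you choose where the preserved part of the path starts: a /./ in the source path marks the cut point. If, say, you wanted to drop the leading /a:
rsync -avR /a/./*/a.txt /tmp/backup/   # yields /tmp/backup/b/a.txt etc.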
Do the backups need to be exactly the same as the originals? In most cases, I'd prefer a little compression. [tar](https://man7.org/linux/man-pages/man1/tar.1.html) does a great job of bundling things including the directory structure.
tar cvzf /path/to/backup/tarball.tgz /source/path/
tar can't update compressed archives, so you can skip the compression
tar uf /path/to/backup/tarball.tar /source/path/
This gives you versioning of a sort, as it only updates changed files but keeps both the before and after versions.
If you have time and cycles and still want the compression, you can decompress before and recompress after.
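That decompress/update/recompress cycle might look like this (gunzip knows that .tgz unpacks to .tar; note the suffix ends up as .tar.gz afterwards):
gunzip /path/to/backup/tarball.tgz                 # leaves tarball.tar
tar uf /path/to/backup/tarball.tar /source/path/
gzip /path/to/backup/tarball.tar                   # leaves tarball.tar.gz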
I used the following:
gzip -9 -c -r <some_directory> > directory.gz
How do I decompress this directory?
I have tried
gunzip directory.gz
but I am just left with a single file and not a directory structure.
As others have already mentioned, gzip is a file compression tool and not an archival tool. It cannot work with directories. When you run it with -r, it will find all files in a directory hierarchy and compress them, i.e. replacing path/to/file with path/to/file.gz. When you pass -c the gzip output is written to stdout instead of creating files. You have effectively created one big file which contains several gzip-compressed files.
Now, you could look for the gzip file header/magic number, which is 1f8b and then reconstruct your files manually.
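If you do want to try that by hand, GNU grep can at least list the byte offset of each gzip header in the blob (a rough sketch; it finds candidate offsets only, the carving is still up to you):
grep -aboP '\x1f\x8b\x08' directory.gz | cut -d: -f1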
The sensible thing to do now is to create backups (if you haven't already). Backups always help (especially with problems such as yours). Create a backup of your directory.gz file now. Then read on.
Fortunately, there's an easier way than manually reconstructing all files: using binwalk, a forensics utility which can be used to extract files from within other files. I tried it with a test file, which was created the same way as yours. Running binwalk -e file.gz will create a folder with all extracted files. It even manages to reconstruct the original file names. The hierarchy of the directories is probably lost. But at least you have your file contents and their names back. Good luck!
Remember: backups are essential.
(For completeness' sake: What you probably intended to run: tar czf directory.tar.gz directory and then tar xf directory.tar.gz)
gzip will compress one or more files, though it is not meant to function like an archive utility. The posted command line would yield N compressed file images concatenated to stdout, redirected to the named output file; unfortunately, things like file names and any directories would not be recorded. A pair like this should work:
(create)
tar -czvf dir.tar.gz <some-dir>
(extract)
tar -xzvf dir.tar.gz
With PHP I am using exec("tar -xf archive.tar -C /home/user/target/folder") to extract the contents of a specific archive (archive.tar) into the target directory (/home/user/target/folder), so that all existing contents of the target directory will be overwritten by the new ones that are contained in the archive.
It works fine and all files in the target directory are being overwritten after extract, but there is one directory in the archive that I would like to omit (from extracting and thus overwriting the existing one in the target folder)...
For example, the archive.tar contains:
folderA/
folderB/
folderC/
folderD/
fileA.php
fileB.php
fileC.xml
How could I extract (and overwrite) all except (for example) folderC/? In other words, I want folderC and its contents to remain intact in the user's directory and not be overwritten by the one contained in the tar archive.
Any suggestions?
(Tar on the hosting server is GNU version 1.23.)
You can use '--exclude' to omit a folder:
tar -xf archive.tar -C /home/user/target/folder --exclude="folderC"
There is the --exclude PATTERN option in the tar tool.
Check: tar on linuxcommand.org
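Since --exclude matches member names as they are stored in the archive, it can help to check first how they are stored (folderC/ vs ./folderC/):
tar -tf archive.tar | head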
To be on the safe side, you could remove all write permissions from the folder. For example:
$ chmod 000 folderC/
And then do a normal tar extract (as a regular user). You'll get some error messages on the console, but your folder will remain untouched. At the end of the tar run, restore the folder's original permissions. For example:
$ chmod 775 folderC/
Of course the '--exclude' tar option is the right solution to this particular problem, but if you are not completely 100% sure about a command's syntax and you're handling critical data, my solution puts you on the safe side :-).
Write --exclude='./folder' at the beginning of the tar command.
In your case that is,
exec("tar -x --exclude='./C' -f archive.tar -C /home/user/target/folder")
How can I avoid the following types of errors, and force the operation to occur?
/bin/cp: cannot overwrite directory `./foo' with non-directory
/bin/cp: cannot overwrite non-directory `./usr/share/doc' with directory `/usr/share/doc'
To back up a partition, I want to copy a new version of a directory onto an old version, to make them identical, with this command:
/bin/cp -xau --remove-destination / .
The destination directory is in a ZFS filesystem that is being regularly snapshotted. Relatively few files (or directories) change.
That is why I don't want to just delete the whole destination directory -- that will make the snapshot needlessly large.
This is in Linux.
Thanks.
The following seems to work better (but not perfectly):
tar cf /mytmp/root.tar --one-file-system /
cd /backup/root
tar xvf /mytmp/root.tar --keep-newer-files
Note that this works better when /mytmp is not in the root partition. :-)
The output from the 2nd tar command was 700K lines. I captured it inside an Emacs buffer and searched it for error messages. There were two problems with tar not wanting to replace directories with links, which I handled manually.
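A lighter-weight way to capture and search that output, as a sketch:
cd /backup/root
tar xvf /mytmp/root.tar --keep-newer-files 2>&1 | tee /mytmp/extract.log
grep -iE 'cannot|error' /mytmp/extract.log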
I am trying to do a grep and then a sed to search for specific strings inside files, which are inside multiple tars, all inside one master tar archive. Right now, I modify the files by
First extracting the master tar archive.
Then extracting all the tars inside it.
Then doing a recursive grep and then sed to replace a specific string in files.
Finally packaging everything again into tar archives, and all the archives inside the master archive.
Pretty tedious. How do I do this automatically using shell scripting?
There isn't really any option except automating the steps you outline, for the reasons demonstrated by the caveats in the answer by Kimvais.
tar modify operations
The tar command has some options to modify existing tar files. They are, however, not appropriate for your scenario for multiple reasons, one of them being that it is the nested tarballs that need editing rather than the master tarball. So, you will have to do the work longhand.
Assumptions
Are all the archives in the master archive extracted into the current directory or into a named/created sub-directory? That is, when you run tar -tf master.tar.gz, do you see:
subdir-1.23/tarball1.tar
subdir-1.23/tarball2.tar
...
or do you see:
tarball1.tar
tarball2.tar
(Note that nested tars should not themselves be gzipped if they are to be embedded in a bigger compressed tarball.)
master_repackager
Assuming you have the subdirectory notation, then you can do:
for master in "$@"
do
    tmp=$(pwd)/xyz.$$
    trap "rm -fr $tmp; exit 1" 0 1 2 3 13 15
    cat $master |
    (
        mkdir $tmp
        cd $tmp
        tar -xf -
        cd *              # There is only one directory in the newly created one!
        process_tarballs *
        cd ..
        tar -czf - *      # There is only one directory down here
    ) > new.$master
    rm -fr $tmp
    trap 0
done
If you're working in a malicious environment, use something other than xyz.$$ for the directory name. However, this sort of repackaging is usually not done in a malicious environment, and the chosen name based on the process ID is sufficient to give everything a unique name. The use of tar -f - for input and output allows you to switch directories but still handle relative pathnames on the command line. There are likely other ways to handle that if you want. I also used cat to feed the input to the sub-shell so that the top-to-bottom flow is clear; technically, I could improve things by using ) > new.$master < $master at the end, but that hides some crucial information multiple lines later.
The trap commands make sure that (a) if the script is interrupted (signals HUP, INT, QUIT, PIPE or TERM), the temporary directory is removed and the exit status is 1 (not success) and (b) once the subdirectory is removed, the process can exit with a zero status.
You might need to check whether new.$master exists before overwriting it. You might need to check that the extract operation actually extracted stuff. You might need to check whether the sub-tarball processing actually worked. If the master tarball extracts into multiple sub-directories, you need to convert the 'cd *' line into some loop that iterates over the sub-directories it creates.
All these issues can be skipped if you know enough about the contents and nothing goes wrong.
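For instance, the first of those checks could be a guard at the top of the loop body (a sketch, adjust to taste):
if [ -e "new.$master" ]
then
    echo "new.$master already exists; skipping" >&2
    continue
fi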
process_tarballs
The second script is process_tarballs; it processes each of the tarballs on its command line in turn, extracting the file, making the substitutions, repackaging the result, etc. One advantage of using two scripts is that you can test the tarball processing separately from the bigger task of dealing with a tarball containing multiple tarballs. Again, life will be much easier if each of the sub-tarballs extracts into its own sub-directory; if any of them extracts into the current directory, make sure you create a new sub-directory for it.
for tarball in "$@"
do
    # Extract $tarball into a sub-directory
    tar -xf $tarball
    # Locate the appropriate sub-directory.
    (
        cd $subdirectory
        find . -type f -print0 | xargs -0 sed -i 's/name/alternative-name/g'
    )
    mv $tarball old.$tarball
    tar -cf $tarball $subdirectory
    rm -f old.$tarball
done
You should add traps to clean up here, too, so the script can be run in isolation from the master script above and still not leave any intermediate directories around. In the context of the outer script, you might not need to be so careful to preserve the old tarball before the new one is created (so rm -f $tarball instead of the move-and-remove commands), but treated in its own right, the script should be careful not to damage anything.
Summary
What you're attempting is not trivial.
For debuggability, the job is split into two scripts that can be tested independently.
Handling the corner cases is much easier when you know what is really in the files.
You can probably sed the actual tar stream, as tar itself does not do compression.
e.g.
zcat archive.tar.gz | sed -e 's/foo/bar/g' | gzip > archive2.tar.gz
However, beware that this will also replace foo with bar in file names, user names and group names, and it ONLY works if foo and bar are of equal length.
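Also, if foo happens to occur inside a member name, that member's header checksum will no longer match after the substitution, so it is worth checking that the rewritten archive still lists cleanly:
tar -tzf archive2.tar.gz > /dev/null && echo "archive still reads OK"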