How do I zip files without compression using batch command - zip

I am writing a batch file in order to make an .ear file. Here is my code snippet:
winrar.exe a -afzip -m5 -ed -pTest -r -ep1 %earName%.ear %extractDest%*
This command compresses the included files, which I don't want. How can I just add the files in "extractDest" to the .ear file without compressing them?

You can use the command-line zip from Info-ZIP as follows:
zip -rp0 earname.ear source_dir
-0 means store without compressing (-r recurses into directories, -p preserves relative paths).
Unlike winrar, Info-ZIP is open source (no prompts to purchase), and its zip and unzip are true command-line utilities, and very small ones at that.
If you insist on using winrar, use option -m0 (instead of -m5), which selects compression level 0 (store).
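Applied to the snippet from the question (earName and extractDest are the question's own batch variables), the corrected call would look like this sketch:
rem -m0 stores the files without compression; all other switches unchanged
winrar.exe a -afzip -m0 -ed -pTest -r -ep1 %earName%.ear %extractDest%*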

How to use zip to generate a brand-new archive rather than just refreshing files in an existing archive and adding to it, on Linux?

Here (on Linux), there is an existing archive named A.zip, which includes File1 and File2:
A.zip:
File1
File2
If I run the command zip A.zip File1 File3, the archive A.zip becomes:
A.zip:
File1
File2
File3
However, what I really want is a brand-new archive A.zip, like:
A.zip:
File1
File3
I know it can be done by running rm A.zip and then zip A.zip File1 File3, but that is not elegant; in a shell script A.zip may not exist yet, and removing a non-existent file is not elegant either.
Are there any options to get this done?
Use the -FS option to make this work:
zip -FSr A.zip File1 File3
OPTIONS
-FS
Synchronize the contents of an archive with the files on the OS. Normally when an archive is updated, new files are added and changed files are updated, but files that no longer exist on the OS are not deleted from the archive. This option enables a new mode that checks entries in the archive against the file system. If the file time and file size of the entry match those of the OS file, the entry is copied from the old archive instead of being read from the file system and compressed. If the OS file has changed, the entry is read and compressed as usual. If the entry in the archive does not match a file on the OS, the entry is deleted. Enabling this option should create archives that are the same as new archives, but since existing entries are copied instead of compressed, updating an existing archive with -FS can be much faster than creating a new archive. Also consider using -u for updating an archive.
For this option to work, the archive should be updated from the same directory it was created in so the relative paths match. If few files are being copied from the old archive, it may be faster to create a new archive instead.
Note that the timezone environment variable TZ should be set according to the local timezone in order for this option to work correctly. A change in timezone since the original archive was created could result in no times matching and recompression of all files.
This option deletes files from the archive. If you need to preserve the original archive, make a copy of it first, or use the --out option to output the updated archive to a new file. Even though it may be slower, creating a new archive with a new archive name is safer, avoids mismatches between archive and OS paths, and is preferred.
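For instance (my own illustration, not part of the man-page text; --out is documented in the same man page), this would keep A.zip untouched and write the synchronized result to a new file:
zip -FSr A.zip File1 File3 --out A_new.zip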
-r
Travel the directory structure recursively; for example:
zip -r foo.zip foo
or more concisely
zip -r foo foo
In this case, all the files and directories in foo are saved in a zip archive named foo.zip, including files with names starting with ".", since the recursion does not use the shell's filename substitution mechanism. If you wish to include only a specific subset of the files in directory foo and its subdirectories, use the -i option to specify the pattern of files to be included. You should not use -r with the name ".*", since that matches ".." which will attempt to zip up the parent directory (probably not what was intended).
Multiple source directories are allowed as in
zip -r foo foo1 foo2
which first zips up foo1 and then foo2, going down each directory.
Note that while wildcards to -r are typically resolved while recursing down directories in the file system, any -R, -x, and -i wildcards are applied to internal archive pathnames once the directories are scanned. To have wildcards apply to files in subdirectories when recursing on Unix and similar systems where the shell does wildcard substitution, either escape all wildcards or put all arguments with wildcards in quotes. This lets zip see the wildcards and match files in subdirectories using them as it recurses.
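As a concrete illustration of that last paragraph, quoting the -i pattern stops the shell from expanding it, so zip itself matches .c files in every subdirectory as it recurses:
zip -r sources.zip foo -i "*.c"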

Bash Scripting with xargs to BACK UP files

I need to copy a file from multiple locations to the BACK UP directory while retaining its directory structure. For example, I have a file "a.txt" at the following locations: /a/b/a.txt /a/c/a.txt a/d/a.txt a/e/a.txt. I now need to copy this file from those locations to the backup directory /tmp/backup. The end result should be:
when I list /tmp/backup/a --> it should contain /b/a.txt /c/a.txt /d/a.txt & /e/a.txt.
For this, I used the command echo /a/*/a.txt | xargs -I {} -n 1 sudo cp --parent -vp {} /tmp/backup. It throws the error "cp: cannot stat '/a/b/a.txt /a/c/a.txt a/d/a.txt a/e/a.txt': No such file or directory".
The -I option is taking the complete input from echo as a single value instead of individual values (the way -n 1 would). If someone can help debug this issue, that would be very helpful, rather than suggesting an alternative command.
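For the record, the immediate bug: echo prints all the paths on a single line, and with -I xargs treats each whole input line as one replacement value (-I also overrides -n 1), so the entire line is substituted for {} at once. Feeding one path per line fixes it; a minimal sketch (note that GNU cp's long option is spelled --parents):
# one path per line, so xargs -I substitutes each path separately
printf '%s\n' /a/*/a.txt | xargs -I {} sudo cp --parents -vp {} /tmp/backup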
Use rsync with the --relative (-R) option to keep (parts of) the source paths.
I've used a wildcard for the source to match your example command rather than the explicit list of directories mentioned in your question.
rsync -avR /a/*/a.txt /tmp/backup/
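If you want the preserved part of the path to start below /a, rsync's documented "/./" anchor marks the split point explicitly; this variant would create /tmp/backup/b/a.txt, /tmp/backup/c/a.txt, and so on:
rsync -avR /a/./*/a.txt /tmp/backup/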
Do the backups need to be exactly the same as the originals? In most cases, I'd prefer a little compression. [tar](https://man7.org/linux/man-pages/man1/tar.1.html) does a great job of bundling things, including the directory structure.
tar cvzf /path/to/backup/tarball.tgz /source/path/
tar can't update compressed archives, so if you want updates you can skip the compression:
tar uf /path/to/backup/tarball.tar /source/path/
This gives you versioning of a sort: -u only appends files that have changed, so the archive keeps both the before and after versions.
If you have time and cycles and still want the compression, you can decompress before and recompress after.
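A sketch of that cycle with the paths used above (gunzip recognizes the .tgz suffix and produces a .tar; recompressing yields tarball.tar.gz):
gunzip /path/to/backup/tarball.tgz                  # -> tarball.tar
tar uf /path/to/backup/tarball.tar /source/path/    # append changed files
gzip /path/to/backup/tarball.tar                    # -> tarball.tar.gz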

GZip an entire directory

I used the following:
gzip -9 -c -r <some_directory> > directory.gz
How do I decompress this directory?
I have tried
gunzip directory.gz
but I am just left with a single file and not a directory structure.
As others have already mentioned, gzip is a file compression tool and not an archival tool. It cannot work with directories. When you run it with -r, it will find all files in a directory hierarchy and compress them, i.e. replacing path/to/file with path/to/file.gz. When you pass -c the gzip output is written to stdout instead of creating files. You have effectively created one big file which contains several gzip-compressed files.
Now, you could look for the gzip file header/magic number, which is 1f 8b, and then reconstruct your files manually.
The sensible thing to do now is to create backups (if you haven't already). Backups always help (especially with problems such as yours). Create a backup of your directory.gz file now. Then read on.
Fortunately, there's an easier way than manually reconstructing all files: using binwalk, a forensics utility which can be used to extract files from within other files. I tried it with a test file, which was created the same way as yours. Running binwalk -e file.gz will create a folder with all extracted files. It even manages to reconstruct the original file names. The hierarchy of the directories is probably lost. But at least you have your file contents and their names back. Good luck!
Remember: backups are essential.
(For completeness' sake, what you probably intended to run was tar czf directory.tar.gz directory, and later tar xf directory.tar.gz.)
gzip will compress one or more files, though it is not meant to function as an archive utility. The posted command line yields N compressed file images concatenated to stdout, redirected to the named output file; unfortunately the directory structure is not recorded (each gzip member header stores at most a bare file name, which is how binwalk above can recover the names). A pair like this should work:
(create)
tar -czvf dir.tar.gz <some-dir>
(extract)
tar -xzvf dir.tar.gz
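And to verify the result without extracting, tar's standard listing mode works:
(list)
tar -tzvf dir.tar.gz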

Use of temporary files and memory when using tar to backup very _large_ files with compression

When backing up one or more _very_large_ files using tar with compression (-j or -z), how does GNU tar manage the use of temporary files and memory?
Does it backup and compress the files block by block, file by file, or some other way?
Is there a difference between the way the following two commands use temporary files and memory?
tar -czf data.tar.gz ./data/*
tar -cf - ./data/* | gzip > data.tar.gz
Thank you.
No temporary files are used by either command. tar works completely in a streaming fashion. Packaging and compressing are completely separated from each other, and when the -z or -j option (or similar) is used, the compression is likewise done through a pipe.
For each file tar puts into an archive, it writes a file-info header which contains the file's path, its owner, permissions, etc., and also its size. The size needs to be known up front (that's why putting the output of a stream into a tar archive isn't easy without using a temp file). After this header, the plain contents of the file follow. Since the size is known and already part of the file info ahead, the end of the file is unambiguous, so the next file in the archive can follow directly. In this process no temporary files are needed for anything.
This stream of bytes is handed to whichever of the implemented compression algorithms is selected, and these also do not create any temporary files. Here I'm going out on a limb a bit because I don't know all compression algorithms by heart, but all the ones I have ever come across do not create temporary files.
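One way to check this yourself (my own suggestion, GNU/Linux only) is to trace the file-open syscalls of tar and the gzip child it spawns, then look for anything under /tmp:
# -f follows the child gzip process that tar -z forks
strace -f -e trace=openat tar -czf data.tar.gz ./data 2>&1 | grep -i /tmp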

How to get the list of files (ls command) from bz2 archive?

What is the Unix bash command to get the list of files (like ls) from archive file of type .bz2 (without unzipping the archive)?
First, bzip2, gzip, etc. compress only one file, so you probably have a compressed tar file. To list the files you need a command like:
tar tjvf file.bz2
This decompresses the archive on the fly (j) and lists the contents of the tar file (t) verbosely (v), without unpacking it to disk.
Note that bzip2 compresses each file individually, and a simple .bz2 file always contains a single file whose name is the archive's with the ".bz2" part stripped off. When using bzip2 to compress a file, there is no option to specify a different name: the original name is used and .bz2 appended. So there are no files, only one file. If that file is a tar archive, it can contain many files, and the whole contents of the .tar.bz2 file can be listed with "tar tf file.tar.bz2" without unpacking the archive.
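Equivalently, you can decompress to a pipe and have tar read the archive from standard input:
bzcat file.tar.bz2 | tar tvf -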
