Archive Manager plus Nautilus is a very useful combination for any work with archives.
If you install the p7zip-full package, Archive Manager can also work with 7z archives.
But Archive Manager uses the default settings for compression, which is far from optimal.
A classic example, using the javadoc:
Download it from http://www.oracle.com/technetwork/java/javase/downloads/index.html
unzip jdk-6u23-docs.zip
mv docs javadoc
7z a -t7z -m0=lzma -ms=on javadoc.7z javadoc
du -chb javadoc.7z
24791075 javadoc.7z
But from man 7z and from LzmaLib.h we know that the best compression is achieved with -mx=9 -mfb=273 -md=64m.
Let's try:
7z a -t7z -m0=lzma -mx=9 -mfb=273 -md=64m -ms=on javadoc.7z javadoc
du -chb javadoc.7z
21308619 javadoc.7z
That is a real improvement!
Question:
How do I make Archive Manager use a custom 7z command by default?
You'll get a faster answer at Super User for questions like this one.
Looking at the program, I discovered that it was File Roller and that the compression parameters were in an XML file. The manual mentioned nothing about configuring the compression level. Finally, I found this information with Google (at the bottom of the page):
Veikk0 wrote on 24 Jul 10 at 20:17:
In my opinion this should get more attention. Creating archives can be frustrating and difficult at the moment, mostly because to change the compression level you have to:
1) Open gconf-editor (Alt+F2 or from a terminal).
2) Navigate to /apps/file-roller/general.
3) Manually edit the key called compression_level to very_fast, fast, normal or maximum.
4) Create your archive with file-roller.
5) Repeat if you want to create another archive with a different compression level.
Furthermore, there's a bug for this: Bug 450019 - compression level
On Trisquel 6.0/Ubuntu 12.04, it's dconf-editor, and the schema is org.gnome.FileRoller.General.
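On newer systems the same setting can be changed from the command line with gsettings; a minimal sketch, assuming the dconf key keeps the old GConf key's name under GSettings' usual underscore-to-hyphen convention:
gsettings set org.gnome.FileRoller.General compression-level maximum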
The best compression with 7-zip can be achieved with
7zr a -mx=9 OUTPUT.7z INPUT
which produces slightly smaller files than the "maximum" compression level of File Roller, because File Roller uses the -m0=lzma2 parameter, which is no longer beneficial as of 7-zip version 9.20.
It is useful to have a .deb file to supply to users of one's programs.
The official way of doing this, via an upstream program, gpg authentication, quilt and debhelper, is far too involved, because it is geared towards the chain of trust needed for acceptance into a distribution.
How do you resolve the error " is not a Debian binary archive" if you restrict yourself to a simple .deb archive?
A .deb archive is an archive created by ar on Linux.
This archive contains the following files, in this order:
debian-binary
control.tar.xz
data.tar.xz
debian-binary contains "2.0".
control.tar.xz contains control and md5sums.
data.tar.xz contains your program's files in a directory tree that mirrors the Linux directory tree.
The error mentioned is a show stopper, because it gives no clue about what is wrong.
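Based on the structure above, a minimal sketch of assembling such an archive by hand (the package name and payload path are hypothetical, and the control and md5sums files are assumed to be already written):
printf '2.0\n' > debian-binary
tar -cJf control.tar.xz ./control ./md5sums
tar -cJf data.tar.xz ./usr
# member order matters: debian-binary must come first
ar r mypackage.deb debian-binary control.tar.xz data.tar.xz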
After you have arranged those three files in order, you can proceed:
lintian <file>.deb
The messages that come from lintian are much more helpful.
I need to list the contents of archives and then uncompress the selected one, but I don't want to uncompress the archives just to know what's in them. I'd like to list and uncompress at least zip and rar, but (if possible) I don't want to be limited to only these two.
Can you advise good npm modules or other projects to achieve this?
Here's what I came up with:
zip
I found node-zip can only unzip files, but not list archive content.
rar
The best solution seems node-rar, but I can't install it on Windows.
node-uncompress: This does what it says: it's a "Command-line wrapper for uncompressing various file types." So there is again no way to list archive content.
Currently I am trying to get node-uncompress to list files, hoping it will never have to run cross-platform.
Solution:
I am now using 7-Zip with the node module node-7z instead of trying to get every archive format working on its own. The corresponding site is: https://www.npmjs.com/package/node-7z
This library uses the OS-independent archiver 7-Zip. On Windows, 7za is used. "7za.exe (a = alone) is a standalone version of 7-Zip". I've tested it on Windows and Ubuntu and it works great.
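For reference, a minimal sketch of listing and extracting with node-7z (archive and directory names are hypothetical; check the exact function names and event payloads against the module page above for the version you install):
const Seven = require('node-7z')
// list the archive's contents without extracting anything
const list = Seven.list('archive.zip')
list.on('data', entry => console.log(entry))   // one event per archived file
list.on('error', err => console.error(err))
// extract everything into ./output
const extract = Seven.extractFull('archive.zip', 'output')
extract.on('end', () => console.log('done'))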
Update:
On Windows: somehow I just got it working by adding 7za to the Path variable, not by adding 7za.exe to "the same directory of your package.json file" like the description says.
Update 2:
On Windows, the 7za referred to in the node-7z description cannot handle .rar archives, so I'm using the regular 7-Zip instead of 7za.exe. I just renamed the command-line 7z.exe to 7za.exe and added the 7-Zip folder to the Path variable.
I created a bunch of zip files on my computer (Mac OS X) using a command like this:
zip -r bigdirectory.zip bigdirectory
Then, I saved these zip files somewhere and deleted the original directories.
Now, when I try to extract the zip files, I get this kind of error:
$ unzip -l bigdirectory.zip
Archive: bigdirectory.zip
warning [bigdirectory.zip]: 5162376229 extra bytes at beginning or within zipfile
(attempting to process anyway)
error [bigdirectory.zip]: start of central directory not found;
zipfile corrupt.
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
I have since discovered that this could be because zip can't handle files over a certain size, maybe 4 GB. At least, I read that somewhere.
But why would the zip command let me create these files? The zip file in question is 9457464293 bytes and it let me make many more like this with absolutely no errors.
So clearly it can create these files.
I really hope my files aren't lost. I've learned my lesson and in the future I will check my archives before deleting the original files, and I'll probably also use another file format like tar/gzip.
For now though, what can I do? I really need my files.
Update
Some people have suggested that my unzip tool does not support big enough files (which is weird, because I used the built-in OS X zip and unzip). At any rate, I installed a new unzip from Homebrew, and lo and behold, I now get a different error:
$ unzip -t bigdirectory.zip
testing: bigdirectory/1.JPG OK
testing: bigdirectory/2.JPG OK
testing: bigdirectory/3.JPG OK
testing: bigdirectory/4.JPG OK
:
:
file #289: bad zipfile offset (local header sig): 4294967295
(attempting to re-compensate)
file #289: bad zipfile offset (local header sig): 4294967295
file #290: bad zipfile offset (local header sig): 9457343448
file #291: bad zipfile offset (local header sig): 9457343448
file #292: bad zipfile offset (local header sig): 9457343448
file #293: bad zipfile offset (local header sig): 9457343448
:
:
This is really worrisome, because I need these files back. And there were definitely no errors upon creation of this zip file using the system zip tool. In fact, I made several of these at the same time, and they are now all exhibiting the same problem.
If the file really is corrupt, how do I fix it?
Or, if it is not corrupt, how do I extract it?
unzip versions below 6 seemingly fail; use
jar -xf <zipfile>
if you have Java installed, or try yet another unzip, before you write the file off.
See: https://serverfault.com/questions/235139/how-to-unzip-files-bigger-than-4gb
Try 7z x
I had the same issue with unzip on Linux for a .zip file larger than 4 GB, compounded with an "only DEFLATED entries can have EXT descriptor" error.
The command 7z x resolved all my issues.
Be careful, though: 7z x will extract all files with paths rooted in the current directory. The -o option allows you to specify an output directory.
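For example (the archive name is taken from the question; the output directory is hypothetical, and note that 7z expects no space between -o and the path):
7z x bigdirectory.zip -orecovered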
I had a similar problem backing up a 12GB directory before performing a hard disk format. Funnily enough I used the same command as you.
I read around and found suggestions to run:
zip -F
and
zip -FF
to try to fix the file.
Unfortunately these did not work and I still received errors.
After looking around some more, I found the ditto command and it worked perfectly against my original (untouched) zip file:
ditto -x -k original-file.zip dst-directory
-x to extract an archive
-k Specifies it to be a PKZip archive instead of the default CPIO
After using this command, I successfully extracted all of the files.
The built-in macOS Archive Utility (the default used when you select something in Finder and choose File -> Compress "<item>") also creates "corrupt" archives when a file in the archive is over 4 gigabytes in size, when the archive itself is over 4 gigabytes, or when you try to compress more than 65536 files into a single zip. This happens because it doesn't use the Zip64 extension format.
This is mentioned at https://apple.stackexchange.com/questions/221020/large-zip-files-created-in-os-x-cannot-be-opened-in-windows and is well covered in "Apple Archive Utility (and ditto) and very large ZIP archives", a 2009 blog post for the now-defunct Springy utility. You can also see that the 7-Zip folks are aware of the issue of the Apple tools creating corrupt zips.
But why would the zip command let me create these files?
Strictly speaking, the original zip format only supports archives up to 2^32 bytes (4 GiB) that do not contain files which were originally larger than 4 GiB, and there must be fewer than 65535 files. Because the command-line version of the Info-ZIP tools shipped with OS X up to version 10.11 (El Capitan) was no newer than 5.52, it could only produce non-conformant archives if you forced it to exceed the original zip format's limits. Info-ZIP 6.0 and above know how to make Zip64 archives, and that standard has much higher limits. The Info-ZIP 6.0 command-line tools started shipping with macOS 10.12 (Sierra). In 2014, when the question was originally asked, the newest OS X was 10.10 (Yosemite).
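To see which Info-ZIP versions you actually have, both tools print a version banner when run without an archive (a quick check, not tied to any particular file):
zip -v | head         # the banner includes the zip version
unzip -v | head -n 1  # the first line shows the unzip version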
As stated above, even in macOS 10.15 (Catalina) the GUI Archive Utility still creates such "corrupt" zips.
If the file really is corrupt, how do I fix it?
It's corrupt in the sense that it's non-conformant and will cause a lot of conformant tools to choke. You could extract it (see below) and then compress it again with a tool that knows how to make Zip64 files.
Or, if it is not corrupt, how do I extract it?
Technically, all of the data from the compressed files is still in the archive, but the headers that allow fast listing of the zip's contents are broken. Such zips can be a struggle to work with when using other tools (even testing such a zip with the command-line unzip tool on the same version of macOS can report issues like "invalid compressed data to inflate" / "bad zipfile offset (local header sig)").
To get at the files of such zips, you need to use a program that will quietly extract whatever was compressed, without checking for conformance or trying to check/list the files first. Examples of tools that can do this are:
macOS Archive Utility GUI tool
macOS command line tool ditto
7-zip
Java's jar tool
Info-ZIP-based tools won't be able to work with or repair such zip files once you've made such a problem zip file.
You can use
zip -FF corrupted.zip --out fixed.zip
Replace corrupted.zip with the name of your zip with issues.
Replace fixed.zip with the name of the new, fixed .zip file.
I faced exactly the same issue when I tried to unzip zip files of huge sizes (~7 GB). I was damn sure that there was no error while copying the zip files to the server (I double-checked it with rsync).
Depending on your situation, the solution is:
1) If you're doing this on a local machine, right-click on the zip file and choose Extract Here; this will work for .zip files of any size.
2) If your zip files are on a remote server, first mount the server filesystem locally using sftp (sftp://username@server.url.address.com). After that, just navigate to the directory and do the same thing as in (1), i.e. right-click on the zip file and extract it.
Might not be the best solution but that's one way of doing it.
I am working on a web site. A number of third-party JavaScript libraries use mixed case in their files and folders.
I am working on a Windows system.
When I'm ready to upload from my local Windows XAMPP environment to my Linux hosting, I use 7-Zip to create a zip file of my site. I use 7-Zip's -xr! switch to skip certain directories, like my .git repository.
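The packing command looks something like this (the archive and folder names here are hypothetical; in a Unix shell the exclusion switch would need quoting):
7z a -tzip site.zip mysite -xr!.git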
I FTP the resulting .zip file to my server and use the server's "unzip" function to explode it. All my files are there, but they have all been changed to lowercase!
This kills the website, as the third-party libraries that use mixed case are no longer found.
I've tried unzip -C but that did not seem to do anything.
I have also looked in the archive prior to uploading, and on Windows all the file name cases are preserved.
I tried using GnuWin32's Windows tar, but the --exclude option is not letting me skip the .git directories.
I need some help in the form of:
How to use unzip on Linux such that it preserves case (googled until hairless, but no love found...)
How to use tar on windows such that it excludes particular directories
How to use something else to achieve my goal. I honestly don't care what it is... I'm downloading Cygwin right now to see if it'll help at all. I may end up installing Linux in a VirtualBox just to run tar-gz from a virtual machine actually running Linux, but I would REALLY rather avoid that hassle every time I want to pack up a pretty simple archive.
Zip works fine for packing, but unpacking is not kosher.
Use tar's --exclude-vcs option:
--exclude-vcs
exclude version control system directories
Example:
tar --exclude-vcs -czf foo.tar.gz foo
or for a *.tar.bz2 archive
tar --exclude-vcs -cjf foo.tar.bz2 foo
Try unzip -U file.zip; this might work if you have an old version of unzip. Otherwise, post the output of unzip -v and unzip -l file.zip.
I'm able to extract files from an RPM file, but how do I "rebuild" it, for example with cpio2rpm?
I have extracted RPM file using following command.
rpm2cpio theFileName.rpm | cpio -idmv
I have to modify a few web application files like *.php, *.html or *.js. These files don't require any source recompilation, so I would like to replace or change them without rebuilding the RPM, and I need to do this for multiple platforms, like Red Hat Linux and SUSE, and multiple architectures, like 32- and 64-bit OSes.
I expect to make these changes on only one system, without rebuilding the RPM, and without any target-system architecture dependency (like i386 or 64-bit).
I am not looking for something like rpmbuild --rebuild the.src.rpm, since I don't have the source. I need to rebuild the binary .rpm file (not the source .rpm).
I want to do this without the source, in a platform- and architecture-independent way, and without using a spec file if possible.
Could anybody please suggest a solution or any free tools?
Thank you to everyone who spends time reading and replying to my thread.
You can use rpmrebuild to modify an actual rpm file (it doesn't need to be installed).
Most of the examples for this use complicated inline edit commands to modify known files in particular ways, but you can use a normal editor. I used this to fix a shell script in an rpm file that I didn't have the source for. Call the command as
rpmrebuild -ep theFileName.rpm
This puts you in an editor with the spec file for the RPM. The name of the file will be something like ~/.tmp/rpmrebuild.12839/work/spec.2. If you look in, in this example, ~/.tmp/rpmrebuild.12839/work, you will find all of the files used to make the RPM (in my case, the file was in root/usr/sbin within that directory). So go to another window, cd to that directory, and edit any files you need to change.
When you have finished editing files, go back to the editor window with the spec file, make any changes you need there (I didn't have any, since I wasn't adding or deleting files), save the file, and say "y" to the "Do you want to continue" question. It will then build a new RPM file and tell you where it has put it (in my case, in ~/rpmbuild/RPMS/x86_64/).
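Condensed, the workflow described above looks roughly like this (the numbered work directory varies per run, and the edited path is just the example from this answer):
rpmrebuild -ep theFileName.rpm
# meanwhile, in a second terminal, while the spec editor is still open:
cd ~/.tmp/rpmrebuild.*/work
vi root/usr/sbin/some-script   # edit payload files in place
# back in the spec editor: save the spec, then answer "y" to build the new RPM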
You can repackage an installed RPM (including modified files) using rpmrebuild. http://rpmrebuild.sourceforge.net/
Obviously your binaries (if any) would have to be platform/architecture-independent to work on all the OS flavors you're hoping for, but since it sounds like they're just web files, that shouldn't be a problem.
In principle you can pack anything you want into an RPM file. Just treat what you have as "source" and write a spec file that puts the data where the compiled binaries would normally go.
Where RPM is concerned, I consider "source" to be "what I have" and "binary" to be "what I need to run". Not very exact terminology, but it helps when working with RPMs.
Your spec file looks like any other spec file as far as the parameters etc. are concerned, but the code part is different:
[...]
%prep
# Here you either have nothing to do or you already unpack the cpio and possibly modify it.
# %build can be omitted
%install
[ "${buildroot}" != "/" ] && [ -d ${buildroot} ] && rm -rf ${buildroot};
# Here you can either unpack the cpio or copy the data unpacked in %prep.
# Be careful to put it into %{buildroot} or $RPM_BUILD_ROOT.
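To also satisfy the architecture-independence requirement, a sketch of how the spec might continue (the web paths are hypothetical; adding BuildArch: noarch in the spec header is the standard way to get a single RPM that installs on any architecture):
mkdir -p %{buildroot}/var/www/mywebapp
cp -r webapp/* %{buildroot}/var/www/mywebapp/

%files
/var/www/mywebapp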