I am attempting to take several game pak files and create an installer with them. I learned about DiskSpanning and that it is required for larger file sizes. However, I just want to compile my files into a single installation file. What I got instead was an application installer and 9 .bin files of 2 GB each.
Is it possible to have all of this contained in a single installer?
[Setup]
Compression=lzma
SolidCompression=yes
WizardStyle=modern
DiskSpanning=yes
SlicesPerDisk=1
DiskSliceSize=max
It is not possible.
The documentation says:
Note that it is required that you set this directive to yes if the compressed size of your installation exceeds 2,100,000,000 bytes, even if you don't intend to place the installation onto multiple disks.
Roughly 2 GB is also the maximum size you can get per slice file. For DiskSliceSize, the documentation says:
Valid values: 262144 through 2100000000, or max
If you're worried that someone might miss copying some of the files, you might try a self-extracting ZIP archive, e.g. using 7zip SFX Builder.
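The plain 7-Zip command line can also produce one (a sketch; the archive and folder names are placeholders, and 7z must be on your PATH):
7z a -sfx game_installer.exe ".\game_paks\*"
The result is a single .exe that unpacks itself, though without Inno Setup's wizard pages, registry entries, or uninstaller.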
I have a project packaged using electron-builder with NSIS target, producing as artifacts a 40 MB .exe file and a .exe.blockmap file (which is gzipped JSON, I know). The problem is that even if something as simple as version number changes, blockmaps start differing vastly (for example only 1756 chunks of 2032 matched) and each update ends up downloading multiple megabytes.
I understand it may not be easy to make a detailed file-by-file map of an NSIS .exe containing app-64.7z containing app.asar finally containing files, but does electron-builder even try? Can it be overridden to use some binary-diff as basis for the block splitting, to ensure minimum differences between consecutive versions? I can't seem to find any documentation on app-builder.exe's routines for blockmap creation.
The largest part of the solution was the introduction of the shortVersion and shortVersionWindows configuration strings in electron-updater v22.3.1 (20 Jan 2020), which let the 50 MB application avoid being re-stamped with the full version string at every single version update.
Also, space-padding all -bundle.js files (if present) to a fixed length helps the file-length fields inside app.asar stay the same, again limiting the scope of changes in the final file.
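As a rough illustration of the padding idea (a bash sketch, not from the original answer; the dist/ path, the 4 MB target size, and GNU stat are assumptions - pick a target larger than any real bundle):
# pad each bundle with trailing spaces up to a fixed size so its
# length field inside app.asar never changes between builds
for f in dist/*-bundle.js; do
  size=$(stat -c%s "$f")
  pad=$((4194304 - size))
  [ "$pad" -gt 0 ] && printf '%*s' "$pad" '' >> "$f"
done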
When I run Inno Setup on a large set of files (>2 GB), it takes a long time to run. I believe it is spending its time on compression, which should be CPU-bound, but it is only using a couple of CPUs. Is there a way to spread this across (many) more cores?
Specifically, I'm working with this boost-release repository, which has an Inno Setup script that includes:
[Setup]
....
Compression=lzma2/ultra64
....
[Files]
Source: "boost_1.69.0/*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs ignoreversion
....
Calling Compil32.exe boost_installer.iss takes approximately 25 minutes on a machine with 16 cores and 32 GB of RAM (an Azure F16s v2).
The set of files is approximately 2.5 GB, with 2 GB of that being a set of about 300 compiled libraries. The remaining 500 MB is 60,000 source files.
So to get to the bottom of this, I created a test project that went through all sorts of permutations of various Inno Setup configuration options.
The ones I found useful (and gave me a 40% improvement in speed!) are:
; compress all files into a single continuous stream
SolidCompression=yes
; run the compressor in a separate process from the compiler
LZMAUseSeparateProcess=yes
; let the LZMA2 compressor work on up to 6 blocks in parallel
LZMANumBlockThreads=6
Without SolidCompression, LZMANumBlockThreads doesn't have much impact. But together, the task behaved like a typically parallelizable problem, where more threads gave faster results (up to a point).
If you find this interesting, I'd recommend the write-up I did on it; it has a lot of data to back it up.
Try setting the LZMANumBlockThreads directive (the default value is 1):
When compressing a large amount of data, the LZMA2 compressor has the ability to divide the data into "blocks" and compress two or more of these blocks in parallel through the use of additional threads (provided sufficient processor power is available). This directive specifies the number of threads to use -- that is, the maximum number of blocks that the LZMA2 compressor may compress in parallel.
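Applied to the boost script above, that might look like this (a sketch; the thread count is an assumption - match it to your machine's core count):
[Setup]
Compression=lzma2/ultra64
SolidCompression=yes
; up to 8 blocks compressed in parallel
LZMANumBlockThreads=8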
Compression=zip
SolidCompression=yes
Use zip compression for roughly 2x faster builds; the trade-off is weaker compression, so a larger installer. (The LZMAUseSeparateProcess and LZMANumBlockThreads directives only affect the LZMA/LZMA2 compressors, so they do nothing with Compression=zip and are omitted here.)
Speed tested. Approved. Use zip.
I am making a J2ME application that consists almost entirely of text files.
Size: 3 MB
The problem is that when I run it on my mobile, it takes about 10 seconds to start, and I do nothing on startup. I have another app of 7 MB, but it runs without any delay!
Jar file links:
mine: http://s1.picofile.com/file/7252355799/mine.jar.html
correct one: http://s1.picofile.com/file/7252346448/correctone.jar.html
Install both of them and run them. Mine takes a few seconds to show up, but the other one appears immediately.
You need to take into account that JAR is a compressed file format.
To use a jar file's contents, the device first has to decompress it. How long decompression takes depends very much on the jar's contents; because of that, jar file size may not be directly related to startup delay.
You'd better use a zip tool (most, if not all, such tools can handle the jar format) to inspect the contents of the jar files you work with - this might give you a better indication of what to expect at startup.
For example, I can easily imagine your "7 MB" jar file containing just a handful of JPEG images with a total size of, well, about the same 7 MB - and decompressing very quickly.
As for the "3 MB of text files" - if these decompress to something like a few hundred files with a total size of 50 MB, then I would not be surprised if they take a long time to unpack at device startup.
What I am trying to do is this:
I get these zip files from clients, which are 1.5 GB in general. They all include pictures only. I need to turn them into 100 MB files to actually upload them to my server. The problem is that if I split my 1.5 GB zip file, I have to re-attach all of the pieces before I can use any one of them.
When I break the 1.5 GB zip file into 100 MB pieces, I need each 100 MB piece to act as a separate, new file, so the server will unzip it and upload the pictures into the database. I have looked into this problem, but most of the threads are about how to split a zip file. That is partially what I want to do, and I can do it now, but I also need those smaller pieces to be able to unzip on their own. Is it possible to break a zip file into smaller pieces that act as new, stand-alone zip files?
Thanks.
I have the same question. I think unzip in the Linux shell cannot handle a zip file larger than 1 GB, and I need to unzip them unattended on a headless NAS. What I do for now is unzip everything on the desktop HD, select files until they almost reach 1 GB, archive and delete them, then select the next set of files until I reach 1 GB again.
Your question is not clear, but I will try to answer it based upon my understanding of your dilemma.
Questions
Why does the file size need to be limited?
Is it the transfer to the server that is the constraining factor?
Is application (on the server) unable to process files over a certain size?
Can the process be altered so that image file fragments can be recombined on the server before processing?
What operating systems are in use on the client and the server?
Do you have shell access to the server?
A few options
Use ImageMagick to reduce the files so they fit within the file size constraints
Alternatively, split each file and recombine it on the server. On Linux/Mac, this is relatively straightforward to do:
split -b 1m my_large_image.jpg (you need the -b option because split's default is to split by lines, which does not work for binary files)
Compress each file into its own zip
Upload to the server
Unzip
Concatenate the fragments back into an image file:
cat xaa xab xac xad (etc) > my_large_image.jpg
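If the server really does need stand-alone archives, the pieces of a split zip will never be independently extractable - but you can repack instead of splitting. A rough bash sketch (the pictures/ directory, the 100 MB limit, and GNU stat are assumptions):
# pack images into separate, independently unzippable archives,
# each holding at most ~100 MB of input files
limit=$((100 * 1024 * 1024))
i=1; size=0; batch=()
for f in pictures/*; do
  s=$(stat -c%s "$f")
  if [ ${#batch[@]} -gt 0 ] && [ $((size + s)) -gt "$limit" ]; then
    zip "part_$i.zip" "${batch[@]}"
    i=$((i + 1)); size=0; batch=()
  fi
  batch+=("$f"); size=$((size + s))
done
[ ${#batch[@]} -gt 0 ] && zip "part_$i.zip" "${batch[@]}"
Each part_N.zip is then an ordinary zip file the server can unzip on its own.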
I am working to reduce the build time of a large Visual C++ 2008 application. One of the worst bottlenecks appears to be the generation of the PDB file: during the linking stage, mspdbsrv.exe quickly consumes available RAM, and the build machine begins to page constantly.
My current theory is that our PDB files are simply too large. However, I've been unable to find any information on what the "normal" size of a PDB file is. I've taken some rough measurements of one of the DLLs in our application, as follows:
CPP files: 34.5 MB, 900k lines
Header files: 21 MB, 400k lines
Compiled DLL: 33 MB (compiled for debug, not release)
PDB: 187 MB
So, the PDB file is roughly 570% the size of the DLL. Can someone with experience with large Visual C++ applications tell me whether these ratios and sizes make sense? Or is there a sign here that we are doing something wrong?
(The largest PDB file in our application is currently 271 MB, for a 47.5 MB DLL. Source code size is harder to measure for that one, though.)
Thanks!
Yes, .pdb files can be very large - even of the sizes you mention. Since a .pdb file contains data to map source lines to machine code and you compile a lot of code, there's a lot of data in the .pdb file, and you likely can't do anything about that directly.
One thing you could try is to split your program into smaller parts - DLLs. Each DLL will have its own independent .pdb. However, I seriously doubt it will decrease the build time.
Do you really need full debug information at all times? You can create a configuration with less debug info in it.
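As a hedged illustration of what "less debug info" can mean on the MSVC command line (these are standard cl.exe flags; the file names are placeholders, and whether this relieves your link-time paging is untested):
rem full debug info, routed through the PDB server (mspdbsrv.exe)
cl /c /Zi hot_code.cpp
rem /Z7 embeds the debug info in each .obj instead, bypassing mspdbsrv during compilation
cl /c /Z7 big_module.cpp
rem no debug info at all for code you rarely step through
cl /c cold_code.cpp
The linker's /DEBUG pass still merges everything into the final .pdb, but reducing what the compiler emits shrinks that job.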
But as sharptooth already said, it is time to refactor and split your program into smaller, more maintainable parts. That will do more than just reduce build time.