How is an electron-builder NSIS block map generated? Can it be controlled? - node.js

I have a project packaged using electron-builder with the NSIS target, producing as artifacts a 40 MB .exe file and a .exe.blockmap file (which is gzipped JSON, I know). The problem is that even when something as simple as the version number changes, the blockmaps differ vastly (for example, only 1756 of 2032 chunks matched) and each update ends up downloading multiple megabytes.
I understand it may not be easy to make a detailed file-by-file map of an NSIS .exe containing app-64.7z, which contains app.asar, which finally contains the files, but does electron-builder even try? Can it be overridden to use some binary diff as the basis for the block splitting, to minimize differences between consecutive versions? I can't seem to find any documentation on app-builder.exe's routines for blockmap creation.

The largest part of the solution was the introduction of the shortVersion and shortVersionWindows configuration strings in electron-updater v22.3.1 (20 Jan 2020), which let the 50 MB application avoid being re-stamped with the full version string on every single update.
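These are plain configuration fields; below is a minimal sketch of what I mean in electron-builder.yml form. The placement under the win section and the "2.0" value are assumptions on my part — check the electron-builder docs for the release you are on before copying this.

```yaml
# electron-builder.yml – sketch only; exact key placement is an assumption,
# verify against the docs for your electron-builder version.
appId: com.example.app          # hypothetical app id
win:
  target: nsis
  # Abbreviated version stamped into the .exe resources instead of the full
  # x.y.z string, so a version bump touches fewer bytes of the binary.
  shortVersion: "2.0"           # assumed value – use your own major.minor
  shortVersionWindows: "2.0"
```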
Also, space-padding all -bundle.js files (if present) helps keep the file-length fields in app.asar the same, again limiting the scope of changes in the final file.
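The padding step itself can be as dumb as the sketch below — a hypothetical helper of my own, not something electron-builder provides. It assumes the bundles live in ./dist and pads each one with trailing spaces up to a fixed ceiling, so every -bundle.js always has the same byte length between releases:

```js
// pad-bundles.js – hypothetical pre-packaging step, assuming bundles live in ./dist.
// Pads each *-bundle.js with trailing spaces up to a fixed ceiling so the
// file-length fields recorded inside app.asar stay identical between releases.
const fs = require('fs');
const path = require('path');

const TARGET_SIZE = 4 * 1024 * 1024; // assumed ceiling, comfortably above any real bundle
const dir = path.join(__dirname, 'dist');

for (const name of fs.readdirSync(dir)) {
  if (!name.endsWith('-bundle.js')) continue;
  const file = path.join(dir, name);
  const size = fs.statSync(file).size;
  if (size > TARGET_SIZE) {
    throw new Error(`${name} is ${size} bytes, raise TARGET_SIZE`);
  }
  fs.appendFileSync(file, ' '.repeat(TARGET_SIZE - size)); // trailing whitespace is harmless in JS
}
```

Run it right before electron-builder packs the app directory into app.asar.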

Related

Should I use git-lfs for package info files?

As a developer working with several languages, I notice that in most modern languages, dependency metadata files can change a lot.
For instance, in NodeJS (which in my opinion is the worst when it comes to package management), a change of dependencies or of the npm (respectively yarn) version can lead to huge changes in package-lock.json (respectively yarn.lock), sometimes with tens of thousands of modified lines.
In Go, for instance, this would be go.sum, which can also see significant changes (of smaller magnitude than Node, of course) when modifying dependencies or occasionally when running go mod tidy.
Would it be more efficient to track these dependency files with git-lfs? Is there a reason not to do it?
Even though they are text files, I know it is advised to push SVG files with git-lfs, because they are mostly generated files and their diffs have no reason to be small when they are regenerated after a change.
Are there studies about which languages and what project size/age make git-lfs worthwhile?
Would it be more efficient to track these dependency files with git-lfs?
Git does a pretty good job of compressing text files, so initially you probably wouldn't see much gain. If the file is heavily modified often, then over time the total clonable repo size would grow more slowly with Git LFS, but the savings may be negligible as a percentage of the total repo size. The primary use case for LFS is largish binary files that change often.
Is there a reason not to do it?
If you aren't already using Git LFS, I wouldn't recommend starting for this reason. Also, AFAIK there isn't native built-in support for diffing versions of files stored in LFS, though workarounds exist. If you often find yourself diffing the files you are considering moving into LFS, the nominal storage size gain may not be worth it.
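For completeness, routing a lockfile through LFS is only a .gitattributes entry — shown here purely to illustrate what the question asks about; as argued above, it's usually not worth doing for text lockfiles:

```gitattributes
# .gitattributes – this line is what `git lfs track "package-lock.json"` writes for you
package-lock.json filter=lfs diff=lfs merge=lfs -text
```

Remember that LFS also needs `git lfs install` once per clone, which is another bit of friction for every contributor.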

Enormous appimage created by appimage-builder

I'm packaging an application I have written into an AppImage so that it can be delivered to Linux users.
One of the key features of the GUI toolkit I'm using is that it is small and lightweight, allowing me to compile a build, statically linked to the GUI library, of around 6 MB.
However, after building the AppImage, where I do what the instructions say and exercise all the functionality (which basically only means using file-browser dialogues to load files), it generates an absolutely enormous AppImage of around 200 MB!
I know that AppImages are supposed to be a "little bit" big, but this is completely mad as a proposed solution for portability when the natively compiled binary, including a statically linked GUI toolkit, is only 6 MB.
However, I'm not at all convinced that I need all of that 200 MB. A piece of software very similar to mine, but which additionally uses Qt (pretty bloated in comparison), is only about 30 MB. I actually suspect that appimage-builder is doing something very wrong: I think it is listing the files in the directory I browse with the file-browser dialogue as dependencies (they are big files). I have no other real explanation. But if so, how do I stop it doing that?
Why is mine so big? What can I do about it?
For the record, I am using this method for building my AppImage:
Building my binary separately
Running appimage-builder --generate and completing the form
Running appimage-builder --recipe AppImageBuilder.yml --skip-tests
Edit: Removing the obviously unneeded files that were being packaged has reduced the size of the AppImage to just 140 MB, but this is still almost 5 times bigger than equivalent AppImages I've seen. Are there some tricks/options I'm not aware of?
I got started with AppImage in the past few days and faced the same problem.
In short: check your app's dependencies by every means available, and configure the recipe to include only the concrete dependencies while avoiding the inclusion of any theme/icon/etc. packages that are not really used :)
My case is a small app, written in Dart (with Flutter). The built project itself weighs about 22 MB (du -sh . in the output directory). My host OS is Linux Mint (Cinnamon).
The first time I ran appimage-builder --generate, it generated a "recipe" with 17 packages to be installed and a bunch of libraries to be copied from /lib/x86_64-linux-gnu/. When I generated an AppImage from this recipe, the result was about 105 MB, which is extremely large in my opinion for a small app.
My first experiment was to clean up the included-files section, as I guessed "all necessary" libraries would be installed from apt. I referred to a few configs from the network where only a few libraries were marked for inclusion and there was an exclude section containing some DE-related files (themes, fonts, icons, etc.).
Using that I got a result of about 50 MB (which is still quite large).
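For a concrete starting point, the relevant parts of my recipe looked roughly like the sketch below. The package names are placeholders from my setup, not a universal list, and the apt/files layout follows the recipe that appimage-builder --generate produced for me:

```yaml
# AppImageBuilder.yml (fragment) – rough sketch, adapt the package names to your app
AppDir:
  apt:
    include:
      - libgtk-3-0          # example: only the libraries the app really links against
    exclude:
      - adwaita-icon-theme  # theme/icon/font packages pulled in transitively
      - humanity-icon-theme
      - fonts-dejavu-core
  files:
    exclude:
      - usr/share/man       # docs and locales are dead weight inside an AppImage
      - usr/share/doc/*
      - usr/share/locale/*
```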
My next experiments came from this issue: https://github.com/AppImageCrafters/appimage-builder/issues/130#issuecomment-843288012
In short: after generating an AppImage, a .bundle.yml file appears inside the AppDir folder listing the deployed libraries, and the advice is to try excluding things from it. It may be good enough advice, but it takes too long to check, for each package/library, whether excluding it breaks the resulting AppImage, even with appimage-builder's official tests (Docker containers). I got more broken results than any sane size reduction.
My next experiment was to reduce the dependencies installed from the package manager and use files from the host system instead. I deleted the AppDir and appimage-builder-cache folders and regenerated the recipe. Next, I commented out all packages to be installed from the package manager and left only the included files. The result was a failure because one package was still needed, but after adding it I got an AppImage of 36 MB. That sounds much better than the starting 105 MB or even the previous result of 50 MB.
Here I got a small "boost": the Flutter project is built into AOT binaries, without a runtime. So I checked the output of ldd for my app and mapped the list of required libraries to the list of library files detected by appimage-builder. In the end, some of them were correct, some were not found in the ldd output, and some were in the ldd output but not detected by appimage-builder. I added everything undetected and removed everything unused. My final result is 26 MB and it passed all appimage-builder tests (run in Docker images of Fedora, CentOS, Debian, Ubuntu and Arch).
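The check itself is nothing fancy; something along these lines (the paths are from my setup and purely illustrative) gives a sorted list of sonames to compare against what appimage-builder deployed:

```sh
# list the shared libraries the AOT binary actually needs
ldd AppDir/usr/bin/myapp | awk '{print $1}' | sort
# compare with what ended up inside the AppDir
find AppDir -name '*.so*' -printf '%f\n' | sort
```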
I understand this is awkward for continuous building, because it requires always re-checking the used libraries and adapting the config whenever something changes, but for infrequent builds I guess it strikes some kind of balance between good and bad.

How does the unpacked size affect the minified size of an npm package?

I'm currently trying to reduce the bundle size of an npm package I have created, and I managed to get to an unpacked size of about 210 kB.
https://www.npmjs.com/package/@agile-ts/core/v/0.0.12 <- 210 kB
https://www.npmjs.com/package/@agile-ts/core/v/0.0.11 <- 304 kB (with comments)
One change I made was to remove all the comments with the help of the tsconfig file, which reduced my unpacked size by about 100 kB, BUT the minified size stayed the same (57 kB)?
https://bundlephobia.com/result?p=@agile-ts/core@0.0.12 <- 57 kB
https://bundlephobia.com/result?p=@agile-ts/core@0.0.11 <- 57 kB (with comments)
So I was wondering how the unpacked size affects the minified size. Are comments already removed in the minified size? I found no answer to this question on the internet.
Another package I found has an unpacked size of about 325 kB
https://www.npmjs.com/package/@hookstate/core <- 325 kB
but a minified size of 16.7 kB.
https://bundlephobia.com/result?p=@hookstate/core@3.0.6 <- 16.7 kB
-> It is about 30% larger in the unpacked size but 70% smaller in the minified size?
The only difference I found is that the package I just mentioned consists of 10 files, while my package consists of 66 files.
So it's smaller than my package… but then it should be smaller in the unpacked size too.
In case you have any idea how to reduce the package size, feel free to contribute ^^ or to give me some advice:
https://github.com/agile-ts/agile/issues/106
Thank you ;D
What should matter is NOT how much space the package takes on disk, but how much space it takes in the final application after all bundling and minification is applied. This process includes renaming variables, removing comments, tree shaking and removing unused/unreferenced code. There are tools that let you inspect the final application size and the size of its dependencies; they differ depending on which bundler you use.
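A quick way to convince yourself of this is to minify a small file with and without its comments and compare the output. In the sketch below, the terser invocation is just one example of a minifier, and the file name and function are made up:

```js
// add.js – the JSDoc block below inflates the unpacked size on disk…
/**
 * Adds two numbers.
 * @param {number} a
 * @param {number} b
 */
export function add(a, b) {
  return a + b;
}
// …but `npx terser add.js --compress --mangle --module` emits something like
//   export function add(n,r){return n+r}
// whether the comment is there or not, so the minified size does not change.
```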
It is a very bad idea to remove comments from source code just to shrink your package download size, and the same goes for removing other development-support files like TypeScript definitions.

How do I limit log4j files based on size (only)?

I would like to configure log4j to write only files up to a maximum specified size, e.g. 5MB. When the log file hits 5MB I want it to start writing to a new file. I'd like to put some kind of meaningful timestamp into the logfile name to distinguish one file from the next.
I do not need it to rename or manipulate the old files in any way when a new one is written (compression would be a boon, but is not a must).
I absolutely do not want it to start deleting old files after a certain period of time or number of rolled files.
Timestamped chunks of N MB log files seem like the most basic, simplest strategy I can imagine. It's so basic that I almost refuse to believe it's not provided out of the box, yet I have been incapable of figuring out how to do this!
In particular, I have tried all incarnations of DailyRollingFileAppender, RollingFileAppender and the 'Extras' RollingFileAppender. All of these delete old files once the backup count is exceeded. I don't want this, I want all my log files!
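For reference, this is the kind of stock log4j 1.x configuration that runs into the problem — a sketch assuming a log4j.properties setup, where MaxBackupIndex is exactly the knob that forces old files to be deleted:

```properties
# log4j.properties – standard RollingFileAppender (sketch)
log4j.rootLogger=INFO, R
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=app.log
log4j.appender.R.MaxFileSize=5MB
# Once this many backups exist, the oldest one is deleted on the next rollover –
# which is precisely the behaviour I don't want.
log4j.appender.R.MaxBackupIndex=10
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
```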
TimeBasedRollingPolicy from Extras does not fit because it doesn't have a max file size option and you cannot associate a secondary SizeBasedTriggeringPolicy (as it already implements TriggeringPolicy).
Is there an out of the box solution - preferably one that's in maven central?
I gave up trying to get this to work out of the box. It turns out that uk.org.simonsite.log4j.appender.TimeAndSizeRollingAppender is, as the young folk apparently say, the bomb.
I tried getting in touch some time back to get the author to stick this into maven central, but no luck so far.

Bash PATH length restrictions

Inside a particular quarantine is all of the stuff one needs to run an application (bin, share, lib, etc.). Ideally, the quarantine has no leaks, which means it's not relying on any code outside of itself on the system. A quarantine can be defined as a set of executables (and some environment settings needed to make them run).
I think it will be beneficial to separate the built packages enough such that upgrading to a newer version of the quarantine won't require rebuilding the whole thing. I'll be able to update just a few packages, and then the new quarantine can use some of old parts and some of the new parts.
One issue I'm wondering about is the environment variables I'll be setting up to use a particular quarantine.
Is there a hard limit on how big PATH can be? (either in number of characters, or in the number of directories it contains) Does path length affect performance?
There's a hard limit; it's something like 32 MB.
Yes, you can easily make PATH long enough to affect performance. The number of entries is the primary limiting factor, followed by the number of / characters (this should not show up unless the path depth exceeds some outrageous number like 30).
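A rough way to see where you stand (plain shell, nothing assumed beyond $PATH itself): count the entries and the total length, then time a burst of lookups for a command that doesn't exist, since a miss forces a walk over every directory in PATH:

```sh
echo "$PATH" | tr ':' '\n' | wc -l          # number of directories in PATH
printf '%s' "$PATH" | wc -c                 # total length in characters
# misses have to scan every entry, so this magnifies the per-lookup cost
time bash -c 'for i in $(seq 1 1000); do command -v no-such-command >/dev/null; done'
```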
