Prevent NSIS from appending archive? - nsis

I had a bad day when a virus infected all my *.exe files, including many of my NSIS installers, stripping them down to just the stub (+ the virus code).
I wonder if there is an option to pack the archive inside the stub instead? That way, even if the installer got infected, the whole archive (and hence the installer exe) could at least be recovered by AV software.
This would also mean that tampering with the file invalidates the PE as well. So no /NCRC then.
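Until something like that exists, one workaround is simply to record checksums of the installers right after each build, so tampering can at least be detected later. A minimal sketch, assuming the built installers sit in a dist/ directory (the path is a placeholder):

# Record checksums of the freshly built installers...
sha256sum dist/*.exe > dist/SHA256SUMS

# ...and later verify that none of them has been modified or stripped.
sha256sum --check dist/SHA256SUMS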

Even if the compressed data were stored in a PE section or resource, a virus could still strip it out when infecting the file. While the chance of the data surviving would probably be better than with the current implementation, it would also complicate the compilation phase for makensis.
It is open source and you are free to submit a patch :)

Related

Enormous appimage created by appimage-builder

I'm packaging an application I have written into an AppImage so that it can be delivered to Linux users.
One of the key features of the GUI toolkit I'm using is that it is small and lightweight, allowing me to compile a build that is statically linked against the GUI library and comes in at around 6 MB.
However, after building the AppImage - where I do what the instructions say: use all the functionality (which in my case basically means only using the file browser dialogues to load files) - it generates an absolutely enormous AppImage of around 200 MB!
I know that AppImages are supposed to be a "little bit" big, but this is completely mad as a proposed solution for portability when the natively compiled binary, including the statically linked GUI toolkit, is only 6 MB.
However, I'm not convinced at all that I need all of that 200 MB. A very similar piece of software to mine, but one that additionally uses Qt (which is pretty bloated in comparison), is only about 30 MB. I actually suspect that appimage-builder is doing something very wrong - I think it is listing the files in the directory I browse to with the file browser dialogue as dependencies (they are big files). I have no other real explanation. But if so, how do I stop it doing that?
Why is mine so big? What can I do about it?
For the record, I am using this method for building my AppImage:
Building my binary separately
Running appimage-builder --generate and completing the form
Running appimage-builder --recipe AppImageBuilder.yml --skip-tests
Edit: Removing the obviously unneeded files that were being packaged has reduced the size of the AppImage to just 140 MB, but this is still almost 5 times bigger than equivalent AppImages I've seen. Are there some tricks/options I'm not aware of?
I got started with AppImage just in the last few days and ran into the same problem.
In short: check the dependencies of your app in every way you can, configure the recipe to include only the concrete dependencies, and avoid including any theme/icon/etc. packages that are not really used :)
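For instance, to see which bundled files are actually eating the space before deciding what to exclude, you can list the biggest files inside the generated AppDir (a sketch, assuming the AppDir/ directory produced by appimage-builder is still on disk):

# List the 20 largest files that ended up inside the AppDir, biggest first.
du -ah AppDir/ | sort -rh | head -n 20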
My case is a small app written in Dart (with Flutter). The built project itself weighs about 22 MB (du -sh . in the output directory). My host OS is Linux Mint (Cinnamon).
The first time I ran appimage-builder --generate, it generated a "recipe" with 17 packages to be installed and a bunch of libraries to be copied from /lib/x86_64-linux-gnu/. When I generated an AppImage from this recipe, the result was about 105 MB, which is extremely large in my opinion for a small app.
My first experiment was to clean up the included files section, since I assumed all the necessary libraries would be installed from apt. I referred to a few configs from around the net which included only a few libraries and had an exclude section containing some DE-related files (themes, fonts, icons, etc.).
Using that, I got a result of about 50 MB (which is still quite large).
My next experiments followed the suggestions in this issue - https://github.com/AppImageCrafters/appimage-builder/issues/130#issuecomment-843288012
In short: after generating an AppImage, a .bundle.yml file appears inside the AppDir folder listing the deployed libraries, and the advice is to try excluding things from that list. It may be good enough advice, but it takes far too long to check, for each package/library, whether removing it breaks the resulting AppImage, even with the official appimage-builder tests (docker containers). I hit more broken results than any sane size reduction.
After that, I tried reducing the dependencies installed from the package manager and using files from the host system instead. I deleted the AppDir and appimage-builder-cache folders and regenerated the recipe. Then I commented out all the packages to be installed from the package manager and left only the included files. The first result was a failure because one package was still needed, but after adding it back I got an AppImage of 36 MB. That sounds much better than the starting 105 MB, or even the previous result of 50 MB.
Here I got a small "boost": the Flutter project is built into AOT binaries, without a runtime. So I checked the output of ldd for my app and mapped the list of required libraries to the list of library files detected by appimage-builder. Some of them matched, some were not found in the ldd output at all, and some were in the ldd output but had not been detected by appimage-builder. I added everything that was undetected and removed everything unused. My final result is 26 MB, and it passes all the appimage-builder tests (run in docker images of Fedora, CentOS, Debian, Ubuntu and Arch).
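A rough sketch of that comparison, assuming the app binary lives at AppDir/usr/bin/myapp and the bundled libraries end up under AppDir/lib/ (both paths are placeholders for whatever your recipe actually uses):

# Libraries the AOT binary actually links against, by file name only.
ldd AppDir/usr/bin/myapp | awk '/=>/ { print $1 }' | sort > needed.txt

# Libraries that appimage-builder actually bundled.
find AppDir/lib -name '*.so*' -printf '%f\n' | sort > bundled.txt

# Lines starting with '<' are needed but missing from the bundle,
# lines starting with '>' were bundled but never referenced by ldd.
# (Not everything in needed.txt should be bundled, e.g. libc is normally excluded.)
diff needed.txt bundled.txt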
I understand this is not great for continuous building, because you always have to re-check which libraries are used and adapt the config whenever something changes, but for infrequent builds I think it strikes a reasonable balance.

Fast kernel recompile

I'm trying to automate the process of recompiling an upgraded kernel (I mean a version upgrade).
What I do:
Back up the object files (*.o) with rsync
Remove the directory and make mrproper
Extract new source and patch
Restore object files with rsync
But I found this doesn't really work: since skipping already-compiled files must rely on some kind of hash (or similar), my procedure seems to throw that information away and everything gets rebuilt anyway.
Question: which files do I need to keep, or do they not exist?
BTW: I already know about ccache, but it breaks with even a small config change.
You're doing it wrong™ :-)
Keep the kernel tree as-is and simply patch it using the appropriate incremental patch. For example, for 3.x, you find these patches here:
https://www.kernel.org/pub/linux/kernel/v3.x/incr/
If you currently have 3.18.11 built and want to upgrade to 3.18.12, download the 3.18.11-12 patch:
https://www.kernel.org/pub/linux/kernel/v3.x/incr/patch-3.18.11-12.xz
(or the .gz file, if you don't have the xz utilities installed.)
and apply it. Then "make oldconfig" and "make". Whatever needs to be rebuilt will be rebuilt.
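Put together, the workflow looks roughly like this (a sketch, assuming the already-built tree lives in linux-3.18.11/ and the patch has been downloaded next to it):

cd linux-3.18.11
# Apply the incremental patch on top of the already-built tree.
xz -dc ../patch-3.18.11-12.xz | patch -p1
# Carry the existing .config forward, answering prompts only for new options.
make oldconfig
# Only the objects affected by the patch are rebuilt.
make -j"$(nproc)"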
However, it's actually best not to rely on the object file dependency mechanism. Who knows whether something might end up not being rebuilt even though it should be, due to a bug. So I'd recommend starting clean every time with a "make clean" before applying the patch, even though it will rebuild everything.
Are you really in such big need to save build time? If yes, it might be a better idea to configure the kernel ("make menuconfig") and disable all the functionality you don't need (like device drivers for hardware you don't have, file systems you don't care about, networking features you won't use, etc.). Such a kernel, optimized for my needs, only takes about 3 or 4 minutes to build (normally, the full kernel with everything enabled would need over half an hour - or even more these days; it's been a very long time since I've built non-optimized kernels).
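As a sketch of that approach (which options you can safely drop obviously depends on your hardware):

# Trim the configuration down to what you actually use...
make menuconfig
# ...then build with all available cores and see how long it takes.
time make -j"$(nproc)"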
Some more info on kernel patches:
https://www.kernel.org/doc/Documentation/applying-patches.txt
The incremental patch is a good way to go, since it updates the time stamps properly.
(GNU) Make uses time stamps to decide what needs rebuilding, so just preserve the time stamps to avoid unnecessary rebuilds.
If we need rsync, we should use it with the -t option so that modification times are kept.
Also, for a release that doesn't have an incremental patch, we can create one manually by diffing the patched trees.
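For example (a sketch; the directory names are placeholders): preserving timestamps when backing up the objects, and rolling an incremental patch by hand when none is published:

# Back up only the object files, preserving modification times (-a implies -t).
rsync -am --include='*/' --include='*.o' --exclude='*' linux-3.18.11/ objbackup/

# Later, restore them into the freshly extracted and patched tree.
rsync -am --include='*/' --include='*.o' --exclude='*' objbackup/ linux-3.18.12/

# When no official incremental patch exists, make one from two patched trees.
diff -urN linux-old/ linux-new/ > my-incremental.patch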

.lib files and decompiling

I have a .exe which is compiled from a combination of .for (fortran), and .c source files.
It does not run on anything later than Win98, due to an error with the graphics server:
“access violation error in user32.dll at 0x7e4467a9”
Unless there is some other way around the above error (?), I assume I have to recompile the .exe from source using a more modern graphics server. I have all the files to do this bar one .lib file!
Is it possible to pull any info on the missing lib file out of the current .exe I have?
It is possible to disassemble the .exe, but I don't think I'd gain much from this?
You probably can't "cut" the lib file out of an executable. Even if you could somehow get the code out of it, standard compilers and linkers wouldn't know how to link against it, since it won't have the linking information they need (that information is not included in the resulting binary).
However, if your problem is that your program works on Win98 but doesn't run on NT-based systems (XP, Vista, Win7), I think it would be easier to find out what incompatibility is crashing the program. You mentioned that the access violation occurs in user32.dll. Start your program inside a debugger and take a look at which function the crash occurs in. Make sure you have your PDB symbols loaded (so you can see the names of internal non-public functions). Trace down which Win32 API is being called and what its parameters are. Try to figure out what should be at the memory that cannot be accessed.
Beyond that, without any more information it's impossible to help you further.
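As a starting point before firing up a debugger, you can at least see which user32 entry points the old binary imports. A sketch using the dumpbin tool from a Visual Studio developer command prompt (myprog.exe is a placeholder name):

rem Dump the import table; the user32.dll block lists every USER32
rem function the executable (and the graphics server code linked into it) calls.
dumpbin /imports myprog.exe > imports.txt

rem Check the target machine and subsystem the binary was built for.
dumpbin /headers myprog.exe | findstr /i "machine subsystem"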
Once a library (your .lib) has been statically linked into an image file (your exe) by the linker, it cannot be separated or differentiated from your own code, and thus you cannot retrieve the lib by decompiling the exe.

Uploading & extracting archive (zip, rar, targz, tarbz) automatically - security issue?

I'd like to create following functionality for my web-based application:
user uploads an archive file (zip/rar/tar.gz/tar.bz etc) (content - several image files)
archive is automatically extracted after upload
images are shown in the HTML list (whatever)
Are there any security issues involved in the extraction process? E.g. the possibility of executing malicious code contained within the uploaded files (or a well-prepared archive file), or anything else?
Aside from the possibility of exploiting the system with things like buffer overflows if the extraction isn't implemented carefully, there can be issues if you blindly extract a well-crafted compressed file containing a large file with redundant patterns inside (a zip bomb). The compressed version is very small, but when you extract it, it'll take up the whole disk, causing denial of service and possibly crashing the system.
Also, if you are not careful enough, the client might hand you a zip file with server-side executable content (.php, .asp, .aspx, ...) inside and then request those files over HTTP, which, if the server is not configured properly, can result in arbitrary code execution on the server.
In addition to Medrdad's answer: hosting user-supplied content is a bit tricky. If you are hosting a zip file, then it can also be used to store Java class files (the format is shared), and therefore the "same origin policy" can be broken. (There was the GIFAR attack, where a zip was attached to the end of another file, but that no longer works with the Java Plug-in/Web Start.) Image files should at the very least be checked to confirm they actually are image files. Obviously there is a problem with web browsers having buffer overflow vulnerabilities, so your site could now be used to attack your visitors (this may make you unpopular). You may find some client-side software using, say, regexes to parse data, so data in the middle of an image file can end up being executed. Zip files may have naughty file names (for instance, directory traversal with ../ and strange characters).
What to do (not necessarily an exhaustive list):
Host user supplied files on a completely different domain.
The domain with user files should use different IP addresses.
If possible decode and re-encode the data.
There's another Stack Overflow question on zip bombs - I suggest decompressing using ZipInputStream and stopping if it gets too big (see the sketch after this list).
Where native code touches user data, do it in a chroot gaol.
White list characters or entirely replace file names.
Potentially you could use an IDS of some description to scan for suspicious data (I really don't know how much this gets done - make sure your IDS isn't written in C!).
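A minimal shell sketch of the "stop if it gets too big" idea, assuming the archive is a zip handled with the standard unzip tool; the archive path and size limit are placeholders, and since the declared sizes in the zip directory can themselves be forged, a hard limit during the actual extraction is still advisable:

#!/bin/sh
ARCHIVE=upload.zip          # placeholder: the uploaded file
MAX_BYTES=104857600         # placeholder: 100 MB ceiling

# 'unzip -l' ends with a summary line whose first field is the total
# uncompressed size in bytes.
TOTAL=$(unzip -l "$ARCHIVE" | awk 'END { print $1 }')

if [ "$TOTAL" -gt "$MAX_BYTES" ]; then
    echo "refusing to extract: $TOTAL bytes declared, limit is $MAX_BYTES" >&2
    exit 1
fi

# Extract into its own directory, never into the web root.
mkdir -p extracted && unzip -q "$ARCHIVE" -d extracted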

Fast, free tool to unarchive any compression format?

I've been using (Ubuntu's) file-roller to decompress a range of file types, e.g. .gz, .zip, .rar, .tar.gz, etc. It's nice because it provides a simple, uniform interface for decompressing files into particular folders. However, it's pretty slow, apparently because it pops open a GUI window to tell you it's uncompressing the file.
So I am wondering if anyone can recommend a tool that will uncompress multiple compression formats, and has a uniform interface?
7-zip can uncompress a wide variety of formats, including 7z, ZIP, GZIP, BZIP2, TAR, ARJ, CAB, CHM, CPIO, DEB, DMG, HFS, ISO, LZH, LZMA, MSI, NSIS, RAR, RPM, UDF, WIM, XAR and Z.
If using 7zip as a developer, don't forget you can easily embed it in your own applications. Scroll down to "How can I add support for 7z archives to my application?" in that link. Great stuff, gotta love 7zip. If you want an app to build on with a uniform interface, 7zip is it. Not to mention it's a SourceForge project, so you can take a look around if you like.
File-roller is simply a front-end to these file formats. It sits on top and parses the output from the compression programs. I doubt you will get any noticeable performance advantages by replacing it.
You could just go into the terminal, bypass the GUI and write, for example:
unrar x -r mybig.archive.rar
tar xvfz mybig.archive.tar.gz
unzip mybig.archive.zip
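If you want the uniform interface without the GUI, a small wrapper function along these lines does the dispatching by extension (a sketch; extend the case arms with whatever formats you need, and note that unrar, unzip and p7zip have to be installed):

# extract <archive> - pick the right tool based on the file extension.
extract() {
    case "$1" in
        *.tar.gz|*.tgz)   tar xzf "$1" ;;
        *.tar.bz2|*.tbz2) tar xjf "$1" ;;
        *.tar.xz)         tar xJf "$1" ;;
        *.zip)            unzip "$1" ;;
        *.rar)            unrar x "$1" ;;
        *.7z)             7z x "$1" ;;
        *)                echo "don't know how to extract '$1'" >&2 ;;
    esac
}

# Usage:
#   extract mybig.archive.rar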
Update: Ran a test (1.4G rar archive)
unrar (non-free): 1m25.207s
file-roller: 1m39.311s
7z-rar: 1m17.084s
unrar-free: failed
rar (shareware): 1m29.109s
14 extra seconds for a full front-end - I think that's acceptable. 7z-rar is the fastest, without a front-end.
Windows: Universal Extractor is an application designed to extract virtually any type of archive available on today's market: RAR, ZIP, 7Z, EXE, TAR, NRG, ISO, DLL, you name it; this program is able to process all of them at incredible speed.
There's no other purpose to this program than extracting the contents of archives. As such, you cannot rely on it to create archives. Also, the number of files it can process simultaneously is restricted to one, so batch decompression is not possible.
7zip can do the simple jobs, but not like Universal Extractor.
Ubuntu: p7zip-rar or p7zip-full or Archive Manager
