Generating Haskell source files in cabal packaging

I have a package where most of the .hs files are generated from some spec files by a Haskell processor (stored in a subdirectory).
At present this preprocessor is run manually, and so does not work particularly well with version control or when used as part of a larger stack.
I would like Cabal to run this processor, for example from Setup.hs, so that the package can be built using cabal build or by stack as part of a larger project without needing separate manual steps.
The input files are not 1-1 with the output .hs files: instead, a pile of .csv files are read in, and then a differently arranged pile of .hs files are output. I think this means I can't use cabal's preprocessor support.
For my present development purposes I have bodged a call to stack build && stack exec gen-exe at the start of Setup.hs. This sometimes runs too often (for example, it runs on register at the end of a build) and sometimes not often enough (for example, it does not run when one of the input .csv files changes, so a manual clean is necessary).
Is this achievable in cabal, and if so, what do I need to put where so that my build commands run often enough but not too often?
(This is an inherited package... please don't blame me)
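For reference, the rough shape such a hook-based Setup.hs could take: a minimal sketch, assuming build-type: Custom in the .cabal file, a custom-setup stanza with setup-depends on base, Cabal, and process, and the gen-exe generator available on the PATH. The preBuild hook fires before each build but not on register, which fixes the "too often" half; running only when a .csv actually changed would still have to be handled inside gen-exe itself (or by comparing timestamps in the hook).

    -- Setup.hs: a minimal sketch, not a drop-in solution.
    import Distribution.Simple
    import System.Process (callProcess)

    main :: IO ()
    main = defaultMainWithHooks simpleUserHooks
      { preBuild = \args flags -> do
          -- Regenerate the .hs files before GHC sees them; preBuild runs
          -- on build but not on register. Skipping work when the .csv
          -- inputs are unchanged is left to gen-exe.
          callProcess "gen-exe" []
          preBuild simpleUserHooks args flags
      }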

Enormous appimage created by appimage-builder

I'm packaging an application I have written into an AppImage so that it can be delivered to Linux users.
One of the key features of the GUI toolkit I'm using is that it is small and lightweight, allowing me to compile a build statically linked against the GUI library that comes to around 6 MB.
However, after building the AppImage, where I do what the instructions say and exercise all the functionality (which basically amounts to using file browser dialogues to load files), it generates an absolutely enormous AppImage of around 200 MB!
I know that AppImages are supposed to be a "little bit" big, but this is completely mad as a proposed solution for portability when the natively compiled binary, including the statically linked GUI toolkit, is only 6 MB.
Still, I'm not convinced at all that I need all of that 200 MB. A very similar piece of software to mine, which additionally uses Qt (pretty bloated in comparison), is only about 30 MB. I actually suspect that appimage-builder is doing something very wrong: I think it is listing the files in the directory I explore with the file browser dialogue as dependencies (they are big files). I have no other real explanation. But if so, how do I stop it doing that?
Why is mine so big? What can I do about it?
For the record, I am using this method for building my AppImage:
1. Building my binary separately
2. Running appimage-builder --generate and completing the form
3. Running appimage-builder --recipe AppImageBuilder.yml --skip-tests
Edit: Removing the obviously unneeded files that were being packaged has reduced the size of the AppImage to just 140 MB, but this is still almost 5 times bigger than equivalent AppImages I've seen. Are there some tricks/options I'm not aware of?
I got started with AppImage in the last few days and ran into the same problem.
In short: check your app's dependencies by any means available, and configure the recipe to include only the concrete dependencies, avoiding any theme/icon/etc. packages which are not really used :)
My case is a small app written in Dart (with Flutter). The built project itself weighs about 22 MB (du -sh . in the output directory). My host OS is Linux Mint (Cinnamon).
The first time I ran appimage-builder --generate, it generated a "recipe" with 17 packages to be installed and a bunch of libraries to be copied from /lib/x86_64-linux-gnu/. When I generated an AppImage from this recipe, the result was about 105 MB, which is extremely large in my opinion for a small app.
My first experiment was to clean up the included-files section, since I assumed "all necessary" libraries would be installed from apt. I referred to a few configs from around the web which marked only a few libraries for inclusion and had an exclude section containing some DE-related files (themes, fonts, icons, etc.), along the lines of the sketch below.
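For illustration, such an exclude section in an AppImageBuilder.yml recipe looks roughly like this (the AppDir/files/exclude keys follow appimage-builder's recipe format; the globs are examples, not my actual list):

    AppDir:
      files:
        exclude:
          # documentation and theme/icon packs pulled in as dependencies
          - usr/share/man
          - usr/share/doc/*/README.*
          - usr/share/doc/*/changelog*
          - usr/share/icons/*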
Using that I got a result of about 50 MB (which is still quite large).
My next experiments followed the advice in this issue: https://github.com/AppImageCrafters/appimage-builder/issues/130#issuecomment-843288012
In short: after generating an AppImage, a .bundle.yml file appears inside the AppDir folder, listing the deployed libraries. The advice is to try excluding entries from that list. It may well be good advice, but it takes too long to check, for each package or library, whether removing it breaks the resulting AppImage, even with appimage-builder's official tests (docker containers). I got more broken results than any sane size reduction.
My next experiment was to reduce the dependencies installed from the package manager and use files from the host system instead. I deleted the AppDir and appimage-builder-cache folders and regenerated the recipe. Then I commented out all the packages to be installed from the package manager and left only the included files. The result was a failure because one package was needed, but after adding it back I got an AppImage of 36 MB. That sounds much better than the starting 105 MB, or even the previous result of 50 MB.
Here I got a small "boost": a Flutter project is built into AOT binaries, without a runtime. So I checked the output of ldd for my app, then mapped the list of required libraries against the list of library files detected by appimage-builder. Some of them matched, some were not found in the ldd output, and some were in the ldd output but had not been detected by appimage-builder. I added everything undetected and removed everything unused. My final result is 26 MB, and it passed all the appimage-builder tests (running in docker images of fedora, centos, debian, ubuntu and arch).
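The cross-check itself is a couple of shell commands. A rough sketch (the binary path is just an example of where a Flutter Linux release build lands, and the comparison is approximate: transitive dependencies of other libraries also show up as "bundled but not directly needed", so nothing should be deleted blindly):

    # Sonames the binary links against directly:
    ldd build/linux/x64/release/bundle/my_app | awk '/=>/ { print $1 }' \
        | sort -u > needed.txt
    # Library files appimage-builder actually deployed:
    find AppDir -name 'lib*.so*' -printf '%f\n' | sort -u > bundled.txt
    comm -23 needed.txt bundled.txt   # needed but missing: add these
    comm -13 needed.txt bundled.txt   # bundled extras: candidates to drop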
I understand this is bad for continuous building, because it requires always re-checking the used libraries and adapting the config whenever something changes, but for infrequent enough builds I guess it strikes some kind of balance between good and bad.

Generate cached metadata with scons

I have a build process with a set of input files that I want to build slightly differently based on slow-to-extract metadata found in the input files.
The simple approach would be to get this information from the files every time I run scons and build conditionally based on it, but that means I need to rescan the files every time I start a build, significantly slowing down the build.
I have two potential approaches I'm looking to explore:
1. A two-stage build, where I first run one scons file to extract the metadata into sidecar files (sketched below). These sidecar files get picked up by a second scons project that generates the right build targets and operations based on the sidecar files.
2. (Ab)using a custom scanner for the input files to generate sidecar files with the metadata in them, and enabling implicit_cache to make sure scanning only happens when the input files change.
What would be the most correct and scons-idiomatic way of accomplishing what I'm looking to do?
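For what it's worth, stage one of the first approach is straightforward in SCons, because the sidecar files are ordinary targets that only get rebuilt when their sources change. A minimal sketch (extract_meta, the inputs/*.dat pattern, and the JSON sidecar format are all invented for illustration):

    # Stage-1 SConstruct: extract metadata into sidecar files.
    import json, os

    env = Environment()

    def extract_meta(target, source, env):
        # Stand-in for the slow extraction. It only runs when the input
        # file changes, because the .meta sidecar is a normal SCons
        # target with the input file as its source.
        size = os.path.getsize(str(source[0]))
        with open(str(target[0]), 'w') as f:
            json.dump({'variant': 'big' if size > 4096 else 'small'}, f)

    for src in Glob('inputs/*.dat'):
        env.Command(str(src) + '.meta', src, extract_meta)

The second stage can then read the sidecars at SConscript-parse time and declare the real targets; keeping the two stages as separate invocations sidesteps the chicken-and-egg problem of choosing targets based on files generated in the same run.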

Is there a way to process the Linux source tree using the config file

I am wondering if there is a way to use my .config file for building the Linux kernel and create another source tree with only the code which will actually be used in the build. For instance, if I choose not to include a certain driver, the code for that driver will be left out and all #ifdef CONFIG_<driver> sections will be removed from the resulting code.
I would like to run the C preprocessor over the entire kernel source tree, without running any of the actual compilation steps, so that I can examine the full codebase of the kernel running on my system without the extra code in the Linux project that will go unused when I compile my kernel.
I was wondering if there is a specific tool or sequence of make commands that would allow me to run a program like unifdef on the entire source tree using what is in my .config file.
Any help would be appreciated, thank you!
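I don't know of a make target that preprocesses the whole tree this way (the kernel ships a copy of unifdef under scripts/, but only uses it for make headers_install). As a rough sketch of the manual route, the boolean options in .config can be turned into -D/-U flags for unifdef; this handles only =y and "is not set" lines, so tristate (=m) and valued options, plus anything coming from generated headers, still need manual care:

    # Derive unifdef flags from .config (boolean options only):
    ARGS=$(sed -n \
        -e 's/^\(CONFIG_[A-Z0-9_]*\)=y$/-D\1/p' \
        -e 's/^# \(CONFIG_[A-Z0-9_]*\) is not set$/-U\1/p' .config)
    # Rewrite every C file in place (-m); run this on a throwaway copy!
    find . -name '*.[ch]' -exec unifdef -m $ARGS {} +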

Remove package sources from cabal directory

Having installed a few packages with a large number of dependencies via cabal install, I now have several hundred megabytes of source files in my ~/.cabal/packages/hackage.haskell.org directory. As I'm trying to work on a small SSD, space is at a premium for me. Can I safely remove these, or will doing so cause failures later on?
Removing ~/.cabal/packages/hackage.haskell.org won't cause any failure, but cabal-install will redownload the huge 00-index.tar the next time you try to compile something, and this single file is 80+% of the size of the folder. It's the index of the whole Haskell universe, now around 200 MB, and it will hopefully keep growing without bound in the future.
Compiled libraries and executables won't be affected, so if you are not going to build anything more it's fine to remove the whole folder.
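To see the split for yourself, and to reclaim the space (paths assume the default cabal layout):

    # The index dominates the directory:
    du -sh ~/.cabal/packages/hackage.haskell.org/00-index.tar
    du -sh ~/.cabal/packages/hackage.haskell.org
    # Safe to delete; cabal re-downloads the index on the next build:
    rm -rf ~/.cabal/packages/hackage.haskell.org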

Track origin of file deletion during make

I have a CMake project I am working on, and when I run "make", a rebuild of one of the targets is triggered regardless of whether I touch any of the files in the system. I have discovered that this is because one of the files on which the target depends is somehow getting deleted when I run "make", so the system registers that it needs to be rebuilt. I tried setting "chattr +i" on this file to make it immutable, and indeed, with this flag set, the rebuild of the target is not triggered, though there are also no errors from a deletion attempt. Still, I think I can be sure that this is the culprit.
So now my question: how can I figure out what script or makefile is actually deleting this file? It is a big project with quite a few scripts and sub-makefiles which potentially run at different times during the build, so I am having a hard time discovering it manually. Is there some nice trick I can pull from the filesystem side to help? I tried "strace make", but parsing the output I can't find the filename appearing anywhere. I am no strace expert though, so perhaps the file is getting deleted via some identifier rather than the filename? Or perhaps I am not tracing spawned processes?
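Both guesses are plausible: plain strace does not follow the child processes make spawns, so the recipe doing the deletion is never traced, and a file removed via unlinkat is shown relative to a directory file descriptor rather than by full path. A sketch of two ways to catch the culprit (yourfile is a placeholder):

    # Follow children (-f), resolve fd arguments to paths (-y), and log
    # only deletion/rename syscalls:
    strace -f -y -e trace=unlink,unlinkat,rename,renameat2 \
           -o /tmp/make-trace.log make
    grep -n yourfile /tmp/make-trace.log

    # Or watch the file itself (needs inotify-tools); it won't name the
    # deleting process, but the timing narrows down which rule ran:
    inotifywait -m -e delete_self,move_self,attrib path/to/yourfile &
    make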
