How to package a Haskell application?

I have written a piece of code that I would like to build into a binary and distribute to other folks without having them go through the rigmarole of setting up the Haskell Platform and cabal. Is there a way to statically link the binary in a cabal build?

Just run cabal build; Haskell libraries are linked statically by default (only system C libraries such as libc are still linked dynamically).
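If you want to check what the resulting binary still picks up dynamically, or to push it all the way to a fully static executable, here is a rough sketch. The executable name myapp and the paths are placeholders, and the -optl-static recipe is a common but somewhat fragile approach on GNU/Linux, not something the answer above prescribes:

    cabal build
    ldd dist/build/myapp/myapp     # path differs with newer cabal (under dist-newstyle/)

and, in the .cabal file, to ask the linker for a fully static link:

    executable myapp
      main-is:        Main.hs
      build-depends:  base
      ghc-options:    -optl-static -optl-pthread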

Related

Is it possible to use my own Python interpreter with Conda's OpenCV python library?

I work on a Python project that in one place calls Julia code and in another uses OpenCV.
Unfortunately, pyJulia prefers the Python interpreter to be dynamically linked against libpython. (I know I can build a custom Julia system image, but I fear the build delays when I want to test a development version of my Julia code from Python.)
What has worked so far is using Spack instead of Conda. Python built by Spack has a shared libpython, and Spack's repository does include a recent opencv.
Unfortunately, contrary to Conda, Spack is designed around a paradigm of compiling everything rather than downloading binaries. The installation time of opencv is well over an hour, which is barely acceptable for a one-off install in the development environment, but is dismayingly long when building a Docker image.
So I have a thought: maybe it is possible to integrate my own Python with the rest of the Conda ecosystem?
This isn't a full solution, but Spack does support binary packages, as well as GitLab build pipelines to build them in parallel and keep them updated. What it does not have (yet) is a public binary mirror, so that you could install these things very quickly from pre-existing builds. That's in the works.
So, if you like the Spack approach, you can set up your own binary caches and automated builds for your dev environment.
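For what it's worth, a rough sketch of such a private cache, with placeholder paths and a single package; the exact buildcache flags vary between Spack releases, so treat this as an outline and check spack buildcache --help for your version:

    spack gpg create "My Builds" "me@example.com"            # key used to sign binaries
    spack mirror add mycache /shared/spack-mirror            # register the mirror
    spack buildcache create -d /shared/spack-mirror opencv   # publish an already-installed spec
    spack install --cache-only opencv                        # later installs reuse the binary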
I am not sure what the solution would be with Conda. You could make your own conda-forge packages, but I think if you deviate from the standard ones, you may end up reimplementing a lot of packages to support your use case. On the other hand, they may accept patches to make your particular configuration work.

Is it possible to build native gdb for Linux-ARM on Linux-x86-64?

I'm trying to build a native ARM GDB for an ARM board to use. Since it lacks a lot of the tools GDB needs for compilation, I'm trying to build it on my x86 machine.
./configure --host=arm-linux-gnueabi --target=arm-linux-gnueabi && make
However, halfway through the build process it complains that the "termcap library" is missing. I think it means it couldn't find an ARM version of the library for it to use. So, is there a possible workaround, or should I not bother with this approach and think of another way?
You should be able to do this for your "ARM board". You need to compile GDB's library dependencies (e.g. the termcap library) and install them where the cross compiler can find them before you can build GDB. Without a more specific description of the cross-compilation toolchain and board, it's hard to give advice that will be more helpful to you.
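For example, a rough sketch of that workflow, using ncurses as the termcap provider and an arbitrary local prefix in place of your toolchain's sysroot (versions and paths are placeholders):

    # build a termcap implementation (ncurses) for the ARM target
    cd ncurses-<version>
    ./configure --host=arm-linux-gnueabi --prefix=$HOME/arm-prefix
    make && make install

    # point GDB's configure at those headers and libraries
    cd ../gdb-<version>
    ./configure --host=arm-linux-gnueabi --target=arm-linux-gnueabi \
        CPPFLAGS=-I$HOME/arm-prefix/include LDFLAGS=-L$HOME/arm-prefix/lib
    make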
I cross build GDB for several Linux targets for my ELLCC cross development tool project (http://ellcc.org). You do need a few libraries built for the target to do a build. In addition to the standard C library, I used libedit, zlib, expat, and ncurses.

Why use cabal instead of make

As far as I understand, cabal is the preferred way of building Haskell projects. Coming from a Unix C/C++ background, I am used to make.
So what does cabal offer that I will not get from make?
Cabal will do more than just build your project: it can also manage your dependencies in a sandboxed environment (as of 1.18), upload your package to Hackage, and build libraries and executables with a lot less setup than it would take with make. It's more similar to pip/distutils/virtualenv than to a plain build system.
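To give a sense of the "less setup" point, a minimal .cabal file for a hypothetical executable looks roughly like this; cabal works out dependency resolution, compile order, and packaging from it, where a Makefile would spell those steps out by hand:

    name:                myapp
    version:             0.1.0.0
    build-type:          Simple
    cabal-version:       >=1.10

    executable myapp
      main-is:             Main.hs
      build-depends:       base >=4 && <5
      default-language:    Haskell2010

With that in place, cabal build compiles it, cabal sdist packages it, and cabal upload publishes it to Hackage.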

How to build a self-contained library with cabal?

I have a library which depends on some other libraries and, of course, the Haskell runtime. It exports a C API.
I want to build it in such a way that it is fully self-contained and the user isn't bothered with installing Haskell, cabal, and all the dependencies.
it is fully self-contained and the user isn't bothered with installing Haskell, cabal, and all the dependencies
Then you must distribute your library with all its dependencies -- the Haskell compiler, runtime, C libraries, Cabal, dependent libraries. This is a non-trivial task -- you're rolling your own Haskell Platform.
You could modify the HP source and generate installers. They would be in effect standalone installers for your library.
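One way to see concretely what would have to ship with the library is to build it as a shared object against dynamic Haskell libraries and list what it links to. A sketch only: MyLib.hs is a placeholder and is assumed to contain the foreign export declarations and RTS initialisation for the C API.

    ghc -O2 -dynamic -shared -fPIC -o libmylib.so MyLib.hs
    ldd libmylib.so    # lists libHSrts-ghc<ver>.so, libHSbase-*.so, ... -- everything you would have to bundle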

Reasons not to enable shared library support in Cabal

I'm looking to install Hubris for a Ruby-to-Haskell bridge.
Recent install instructions say that I need to enable shared library support in Cabal. Are there reasons why I might not want to do that?
One reason is that when you build binaries using shared Haskell libraries, these are affected by any future breakage of your locally installed Haskell packages. In other words, when you upgrade a library, you will have to either keep the old .so files around or rebuild the program. This is the main reason why Debian is not yet providing -dyn packages for any library besides the set of boot packages.
(The fact that cabal-install does not uninstall stuff helps a bit here, I guess. Nevertheless, I prefer not to worry that doing something with cabal-install or in a .cabal file might break existing programs.)
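To make the breakage concrete, here is a sketch with a placeholder package: a binary built against shared Haskell libraries links to version-specific libHS*.so files, so reinstalling or upgrading any of those packages leaves it pointing at libraries that no longer exist.

    cabal install --enable-shared --enable-executable-dynamic somepackage
    ldd ~/.cabal/bin/someprog    # shows version-specific libHS*.so names; upgrading any of them breaks this binary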
