How to build a self-contained library with cabal? - haskell

I have a library which depends on some other libraries and, of course, the Haskell runtime. It exports a C API.
I want to build it in such a way that it is fully self-contained and the user isn't bothered with installing Haskell, Cabal and all the dependencies.

it is fully self-contained and the user isn't bothered with installing Haskell, Cabal and all the dependencies
Then you must distribute your library with all its dependencies -- the Haskell compiler, runtime, C libraries, Cabal, dependent libraries. This is a non-trivial task -- you're rolling your own Haskell Platform.
You could modify the Haskell Platform source and generate installers; they would, in effect, be standalone installers for your library.
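For the C API side of this, the usual starting point is GHC's foreign export mechanism. Below is a minimal sketch, not taken from the question: the module name MyLib, the function fib and the library name libmylib.so are all made up, and the exact linking flags (and whether the Haskell dependencies really end up inside the .so) depend on how your GHC and its libraries were built, which is precisely the packaging problem described above.

```haskell
-- MyLib.hs -- hypothetical module exposing one function to C callers
{-# LANGUAGE ForeignFunctionInterface #-}
module MyLib where

import Foreign.C.Types (CInt)

-- Visible from C as:  int fib(int);
foreign export ccall fib :: CInt -> CInt

fib :: CInt -> CInt
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)

-- A build line along these lines (flags vary by GHC version and platform):
--   ghc -shared -fPIC -no-hs-main MyLib.hs -o libmylib.so
-- C callers must still initialise the runtime with hs_init()/hs_exit().
```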

Related

How to prevent multiple builds of a library in a workspace, when binaries enable different features?

I have a Cargo workspace with a library and multiple binaries. The binaries enable different features in the library, and I would like to build the library only once (because it's big and building it takes a lot of time).
However, if I invoke cargo run/cargo test/clippy in the directory of one of the packages, the library is rebuilt with only the features enabled by the package I'm building.
Is it possible to tell Cargo to apply feature unification as if the whole workspace were being built, when building only one package?
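Feature unification in Cargo applies only to the packages selected by a single invocation, so the behaviour described above is expected. A hedged sketch of the common workaround, at the cost of compiling every member, is to select the whole workspace explicitly:

```sh
# Run from the workspace root; --workspace selects every member, so Cargo
# computes one unified feature set for the shared library instead of
# recomputing it per package.
cargo build --workspace
cargo test --workspace
cargo clippy --workspace
```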

How to package a Haskell application?

I have written a piece of code that I would like to build into a binary and distribute to other folks without having them go through the rigmarole of setting up the Haskell Platform and Cabal. Is there a way to statically link the binary in a cabal build?
Just run cabal build; Haskell libraries are linked statically by default, so the resulting binary only needs the usual system C libraries at run time.
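One way to sanity-check the result is to look at what the produced binary still loads at run time. A sketch, assuming a Linux host, cabal-install 3.4 or newer for list-bin, and a placeholder executable name myapp:

```sh
cabal build
# Haskell dependencies are linked in statically; ldd should list only
# system libraries such as libc, libm and libgmp.
ldd "$(cabal list-bin myapp)"
```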

Why use cabal instead of make

As far as I understand, Cabal is the preferred way of building Haskell projects. Coming from a Unix C/C++ background, I am used to make.
So what does cabal offer that I will not get from make?
Cabal does more than just build your project: it can also manage your dependencies in a sandbox environment (as of 1.18), upload your package to Hackage, and build libraries and executables with a lot less setup than it would take with make. It's more similar to pip/distutils/virtualenv than to a plain build system.
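For comparison with a Makefile, this is roughly all the build description a small executable needs. A hypothetical package; every name and version bound here is illustrative:

```cabal
-- myapp.cabal
cabal-version:      2.4
name:               myapp
version:            0.1.0.0
build-type:         Simple

executable myapp
  main-is:          Main.hs
  hs-source-dirs:   app
  build-depends:    base >=4.14 && <5
  default-language: Haskell2010
```

cabal build then resolves the base dependency, compiles app/Main.hs and tracks rebuild dependencies itself, which is the bookkeeping a Makefile would make you spell out by hand.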

Is /nodefaultlib:msvcr100 the proper approach to handling msvcr100.dll vs msvcr100d.dll defaultlib issue

For a cross-platform software project that builds on Linux and Windows we have distinct ways to handle third-party libraries. On Linux we build and link against the versions distributed with CentOS/RHEL, which means we link against release builds. On Windows we maintain our own third-party library "packages" and build two versions of every library: a release version that links against msvcr100 and msvcp100, and a debug version that links against msvcr100d and msvcp100d.
My question is simply whether it is necessary to build the debug versions of the third-party dependencies on Windows, or whether we can simply use /nodefaultlib:msvcr100 when building debug builds of our own software.
A follow-up question: where can I learn about good practices in this regard? I've read the MSDN pages about the MSVC runtime, but there is very little there in terms of recommendations.
EDIT:
Let me rephrase the question more concisely: with VS2010, what is the problem with using /nodefaultlib:msvcr100 to link an executable built with /MDd against libraries that are compiled with /MD?
My motivation is to avoid having to build both release and debug versions of the third-party libraries I use. I also want my debug build to run faster.
From the documentation for /MD, /MT, /LD (Use Run-Time Library):
/MD: Causes your application to use the multithread- and DLL-specific version of the run-time library. Defines _MT and _DLL and causes the compiler to place the library name MSVCRT.lib into the .obj file.
Applications compiled with this option are statically linked to MSVCRT.lib. This library provides a layer of code that allows the linker to resolve external references. The actual working code is contained in MSVCR100.DLL, which must be available at run time to applications linked with MSVCRT.lib.
/MDd: Defines _DEBUG, _MT, and _DLL and causes your application to use the debug multithread- and DLL-specific version of the run-time library. It also causes the compiler to place the library name MSVCRTD.lib into the .obj file.
So there is no documented difference in the generated code other than _DEBUG being defined.
You only use the Debug build of the CRT to debug your app. It contains lots of asserts to help you catch mistakes in your code. You never ship the debug build of your project, always the Release build; nor can you, since the license forbids shipping msvcr100d.dll. So building your project correctly automatically avoids the dependency on the debug version of the CRT.
The /nodefaultlib linker option was intended to allow linking your program with a custom CRT implementation. That's quite rare, but some programmers care a lot about building small programs, and the standard CRT isn't exactly small.
Some programmers use /nodefaultlib as a hack around a link problem, induced when they link code that was built with Debug configuration settings against code built with Release configuration settings, or when they link code with incompatible CRT choices, /MD vs /MT. This can work, with no guarantee, but of course it only sweeps the real problem under the floor mat.
So no, it is not the proper choice; fixing the core problem should be your goal. Ensure that all your .obj and .lib files are built with the same compiler options and you won't have this problem. If that means you have to pester a library owner for a proper build, then pester first; hack around it only when you've discovered that you don't want a dependency on that .lib anymore but don't yet have the time to find an alternative.
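The underlying issue is that modules built with /MD and /MDd each get their own copy of CRT state (their own heap, FILE table, errno). A hypothetical sketch of the classic failure mode when the two are mixed; the function make_buffer and the module split are invented for illustration:

```cpp
// file: thirdparty.cpp -- built into thirdparty.dll with /MD (release CRT)
#include <cstdlib>

extern "C" __declspec(dllexport) char* make_buffer(std::size_t n) {
    return static_cast<char*>(std::malloc(n));   // allocated on msvcr100's heap
}
```

```cpp
// file: main.cpp -- the application, built with /MDd (debug CRT)
#include <cstdlib>

extern "C" __declspec(dllimport) char* make_buffer(std::size_t n);

int main() {
    char* p = make_buffer(64);
    std::free(p);   // freed on msvcr100d's heap: heap corruption or a debug-CRT assert
    return 0;
}
```

Keeping every .obj and .lib on the same CRT, as the answer recommends, avoids this class of bug entirely.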

Reasons not to enable shared library support in Cabal

I'm looking to install Hubris for a Ruby-to-Haskell bridge.
Recent install instructions say that I need to enable shared library support in Cabal. Are there reasons why I might not want to do that?
One reason is that when you build binaries using shared Haskell libraries, these are affected by any future breakage of your locally installed Haskell packages. In other words, when you upgrade a library, you will have to either keep the old .so files around or rebuild the program. This is the main reason why Debian is not yet providing -dyn packages for any library besides the set of boot packages.
(The fact that cabal-install does not uninstall stuff helps here a bit, I guess. But nevertheless I prefer not to worry that doing something with cabal-install or in a .cabal file might break existing programs.)
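For reference, enabling it amounts to a couple of configure flags; a sketch, with myprog standing in for whatever executable you build afterwards:

```sh
# Build Haskell libraries as shared objects and link executables against them.
cabal configure --enable-shared --enable-executable-dynamic
cabal build
# The resulting myprog binary now loads libHS*.so files at run time, so
# upgrading or removing those packages later can break it -- the concern
# described above.
```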
