Building with c2hs and cabal - haskell

I have a problem where cabal will not do dependency resolution on .chs files; that is, if A.chs depends on B.chs (or really B.chi), cabal will not figure this out and call c2hs on the files in the correct order. I know that gtk2hs
uses a custom build script, but it is rather complicated and specialized for
gtk2hs. Is there an easier way of manually or automatically doing .chs dependency
resolution (for instance, by listing the files in the correct order)?

As it turns out, cabal processes exposed-modules in the order they are listed, although perhaps only when there are no other dependencies to consider, or maybe this behavior is specific to .chs files. To resolve dependencies manually, one can simply list the modules in the correct order in the .cabal file.
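For example, a minimal sketch: suppose, as in the question, that A.chs depends on B.chs. Listing B before A makes cabal run c2hs on B first, so B.chi exists by the time A.chs is processed (build-tools is the legacy field for declaring the c2hs dependency; the rest of the stanza is illustrative):

library
  exposed-modules:
    -- dependency order: B first, so B.chi is generated before A.chs is processed
    B
    A
  build-tools: c2hs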

Related

How does the workflow with Haskell Stack work?

I don't get the point of Stack.
I used to write my Haskell code in my favourite environment, run or compile it using GHC(i), and install packages using Cabal when necessary. Now, apparently, that is no longer the way to go, but I don't understand how to work with Stack. So far, I have only understood that I need to write stack exec ghci instead of ghci to start a REPL.
Apart from that, the docs always talk about 'projects', for which I have to write some yaml files. But I probably don't have any project; I just want to launch a GHCi REPL and experiment a bit with my ideas. At the moment, this fails because I am unable to get the packages that I want to work with installed.
How is one meant to work with Stack? Is there any explanation of its use cases? Where do I find my use case among them?
Edit. My confusion comes from the fact that I want to work with some software (IHaskell) whose installation guide explains installation via stack. Assume I already have GHCi installed and maintain its package base, e.g. using Cabal. How would I have to set up stack.yaml to make stack use my global GHC for that project?
First, notice that stack uses its own package base, independent from cabal's. AFAIK they can't be shared; hence, if you run stack build, it will download packages (including the compiler) into its own package database.
Nevertheless, stack does allow you to use a system compiler (though not other system libraries). To do so, your stack.yaml must contain the following two lines:
resolver: lts-XX.XX   # keep reading below
system-ghc: true
The available versions of Stackage snapshots can be found at https://www.stackage.org/. Each snapshot works with a specific version of the compiler; be sure to use a snapshot whose compiler version matches the GHC on your system. If your system GHC happens to be newer than any LTS snapshot, you can set allow-newer: true in stack.yaml.
Now, if getting a separate package database from stack feels wrong to you, note that you can build the project with cabal too, since at the end of the day stack works from a cabal file. It probably won't work out of the box, though; you may need to modify the cabal file so that it matches exactly the package versions of the snapshot you are using.
In summary:
You can use your system-wide GHC.
You cannot share libraries installed with cabal.
You can use cabal to build the project, probably after modifying the ihaskell.cabal file to match the versions in the Stackage snapshot.
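Putting the above together, a minimal stack.yaml along these lines might look as follows (the resolver version is left as a placeholder, as above):

resolver: lts-XX.XX    # pick a snapshot whose GHC matches your system compiler
system-ghc: true       # use the GHC found on your PATH instead of a stack-managed one
# allow-newer: true    # uncomment only if your system GHC is newer than any LTS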

What is Cabal Hell?

I am a little bit confused while reading about Cabal Hell, as the term is overloaded. I guess originally Cabal Hell referred to the diamond dependency problem, which was solved by restricting every build plan to contain only a single version of any package (two different versions of a package cannot coexist in one build plan), as explained in this answer.
However, the term is also used in various other contexts, such as destructive re-installations, incorrect package dependency bounds (lower/upper version bounds), inconsistent environments, and so on (or, seemingly, any other error reported by Cabal).
Among these, I am particularly confused about 1) destructive re-installations and 2) inconsistent environments. What do they mean, and how does cabal new-build solve these problems (is it just sandboxing, like cabal sandbox)? And what role does ghc-pkg play here?
Any references or a simple example where these problems could be reproduced would be very appreciated.
Regarding "destructive re-installations": If I am not wrong, GHC has a package manager of itself (ghc-pkg), and the packages are installed as dynamically linkable libraries i.e: base depends on ghc-prim, so if ghc-prim is removed it will break base, am I right? And since GHC only allows one instance of a package with the same version, cabal install might register a newer build of the same (package, version) such that it breaks the dependents of the unregistered package. If the above understanding regarding "destructive re-installations" are correct; how does cabal new-build help here?
The only meaningful use of the term is the one given in the linked answer. Related are the follow-on problems from having lots of different packages in the global database, which can make encountering diamond dependencies more common, requiring destructive reinstalls to resolve, etc.
The other usages of the term are not helpful and just mean "problems somehow involving cabal."
That said, let me answer your other questions.
1) ghc-pkg is not a package manager, but rather a tool for managing ghc package databases. It is used by cabal to register packages into databases, and can be used by end-users to inspect the contents of the databases. Think of it as part of the underlying substrate provided by ghc, not a competing tool.
2) new-build eliminates and replaces the standard notion of a package db entirely. Instead of a db consisting of packages and versions, with at most one entry per (package, version) pair, a db consists of potentially many copies of a package at any given version, each with potentially different versions of its dependencies, all managed in part by hash-addressing, so that each is marked by a unique "fingerprint". This is called the store. When you new-build, cabal calculates a build plan from scratch, irrespective of any previously installed dependencies. If a particular fingerprint (consisting of a package, its version, the versions of all its dependencies, certain flags, etc.) already exists in the store, it is reused; if it does not, it is built and added.
As such, the only "diamond dependencies" that can occur are the truly insoluble ones, not the ones occasioned by having fixed some portion of the dependency tree too early (due to already-installed deps).
tl;dr: you write "since GHC only allows one instance of a package with the same version", but new-build partially lifts this restriction within the store, which lets the solver produce better, more reproducible plans more often.
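To see the store in action, a hedged sketch (the path shown is the historical default location of the new-build store and may differ between cabal versions):

cabal new-build                       # plans and builds into the store, not a global package db
ls ~/.cabal/store/ghc-<version>/      # one entry per fingerprint: package, version, deps, flags
ghc-pkg --package-db ~/.cabal/store/ghc-<version>/package.db list   # inspect the store's db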

stack.yaml file & .cabal file differences?

I have recently started using stack for Haskell. When specifying external dependencies for your project, sometimes you place them in the .cabal file, while other times you place them in the .yaml file.
Am I right in thinking that when you put a package in the .cabal file, stack only looks for it in the Stackage repository, whereas when you place it in your .yaml file, stack also searches the Hackage server if it cannot find the package in any of the snapshots?
All of the dependencies for your project go into the .cabal file. You are correct, though, that sometimes you also list packages in the stack.yaml file, which can be understandably confusing. Why is that?
Well, the .cabal file always expresses your dependencies upon packages, but the stack.yaml file effectively configures where those packages come from. Usually, when using stack, packages come from Stackage based on the resolver you specify in the stack.yaml file. However, Stackage does not include all the packages in Hackage, and it is not intended to—when you need packages that live outside of Stackage, you have to specify them in the stack.yaml file.
Why is this? Well, the resolver automatically couples two important pieces of information together: package names and package versions. Stackage resolvers provide a (weak) guarantee that all of the packages within a single resolver will work together, so when a package comes from a resolver, there is no need to manually pick which version you want. Instead, Stackage will decide for you.
When pulling packages from Hackage, you do not have this luxury, so you need to specify packages and their versions using extra-deps. For example, you might have something like this:
extra-deps:
- crypto-pubkey-openssh-0.2.7
- data-bword-0.1
- data-dword-0.3
This entry determines specifically which versions of which packages should be pulled from Hackage rather than Stackage.
When building an application, this might seem a little redundant—you can specify version constraints in the .cabal file, too, so why duplicate them in the stack.yaml file? However, when building a library, the distinction is a little more significant: the .cabal file expresses the actual version constraints of your library (if any), but the stack.yaml file specifies precisely which versions to actually install when developing locally.
In this sense, the stack.yaml file serves a purpose similar to the Gemfile.lock or npm-shrinkwrap.json files of other package managers, though the responsibilities are not nearly as clear-cut with stack (in part due to historical reasons around how Haskell’s package system works and some of the problems it’s had in the past).
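To make the division of labor concrete, here is a hedged sketch reusing one of the packages from the extra-deps example above (the version bounds are illustrative). The .cabal file expresses the ranges your code claims to support:

build-depends: base,
               crypto-pubkey-openssh >=0.2 && <0.3

while stack.yaml pins the exact version that gets built locally:

extra-deps:
- crypto-pubkey-openssh-0.2.7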

How to use two different compilers for different targets in a .cabal file?

When I run cabal build it uses some Haskell compiler to build the executables and/or test-suites in my .cabal file.
Can I control which compiler is used for the different targets? Ideally, I would like to have separate build targets that use ghc and ghcjs in the same .cabal file. It seems to me that someone might want to use ghc and hugs, or two versions of ghc, in the same project. Is this currently possible?
Also, how does cabal decide which compiler to use when running cabal build? I saw there is a compiler option in my ~/.cabal/config file, but uncommenting it and changing it from ghc to ghcjs did not seem to change what cabal build does.
The compiler to use is determined during the configure step (or during an install step's implicit configure step, which does not share configuration options with a previous explicit configure step). It is chosen by the entity building the package and cannot be dictated by the person writing the package. Probably what happened to you is that a previous cabal build implicitly invoked the configure step and chose a compiler; later builds keep that earlier choice of compiler rather than picking up the one set in your global configuration file. You can countermand this by simply running cabal configure again manually.
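For example, assuming a ghcjs executable is on your PATH, you can choose the compiler explicitly at configure time with the --with-compiler flag (short form -w):

cabal configure --with-compiler=ghcjs   # or: cabal configure -w ghcjs
cabal build                             # builds with the compiler chosen above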
It is possible to cause a build to fail with the wrong implementation, e.g.
library
  if impl(ghc)
    buildable: False
will prevent cabal from trying to build the package using GHC. However, this isn't really useful for building separate parts of a package with separate compilers, as cabal will refuse to install a package unless it can build the whole thing with a single compiler.
Probably the best way forward is to make separate packages for things that should be built by separate compilers.
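Under that approach, each package is simply configured with its own compiler; a sketch, with hypothetical package directories:

(cd backend  && cabal configure -w ghc   && cabal build)   # hypothetical GHC-only package
(cd frontend && cabal configure -w ghcjs && cabal build)   # hypothetical GHCJS-only package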

Generate cabal file with dependencies on foreign libs

Is it possible to automatically generate a cabal file for a given Haskell project that creates appropriate build-depends entries for all the libraries the project uses?
Yes! In fact, the 'cabal init' command does this in the HEAD version of cabal-install. It's true that it's not possible to get it exactly right in all cases, but it just makes the best guesses it can and then lets you fix the generated build-depends list as necessary.
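Assuming a cabal-install new enough to do the guessing described above, the workflow is simply (run from the project root; review the guessed build-depends afterwards):

cabal init    # inspects the sources, makes its best guess at build-depends, writes the .cabal file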
No, because some modules are provided by more than one package, and it isn't practical (or even possible, really) for cabal to decide which one you want to use.
You can search for which package provides a given module, or just run cabal install several times until you've covered all the deps.
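For the search step, ghc-pkg can report which installed packages expose a given module; a sketch (Data.Map is just an example, and only already-installed packages are covered):

ghc-pkg find-module Data.Map    # lists installed packages exposing Data.Map (e.g. containers)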
