How to use two different compilers for different targets in a .cabal file? - haskell

When I run cabal build it uses some Haskell compiler to build the executables and/or test-suites in my .cabal file.
Can I control which compiler is used for the different targets? Ideally, I would like to have separate build targets that use ghc and ghcjs in the same .cabal file. It seems to me that someone might also want to use ghc and hugs, or two versions of ghc, in the same project. Is this currently possible?
Also, how does cabal decide which compiler to use when running cabal build? I saw there is a compiler option in my ~/.cabal/config file, but uncommenting it and changing it from ghc to ghcjs did not seem to change what cabal build does.

The compiler to use is determined during the configure step (or during an install step's implicit configure step, which does not share configuration options with a previous configure step). It is determined by the entity building the package and cannot be influenced by the person writing the package. What probably happened to you is that a previous cabal build implicitly invoked the configure step and chose a compiler; future builds will keep that previous choice over the one set in your global configuration file. You can override it by manually running cabal configure again.
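For example, you can pick the compiler explicitly at configure time with the --with-compiler flag (a sketch; recent cabal-install versions also accept the shorthand --ghcjs):

```shell
# Re-run the configure step and choose the compiler explicitly;
# subsequent `cabal build` runs will reuse this choice.
cabal configure --with-compiler=ghcjs

# Switch back to GHC the same way:
cabal configure --with-compiler=ghc
```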
It is possible to cause a build to fail with the wrong implementation, e.g.
library
  if impl(ghc)
    buildable: False
will prevent cabal from trying to build the package using GHC. However, this isn't really useful for building separate parts of a package with separate compilers, as cabal will refuse to install a package unless it can build the whole thing with a single compiler.
Probably the best way forward is to make separate packages for things that should be built by separate compilers.
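A minimal sketch of that multi-package layout (all package and directory names here are hypothetical):

```
my-project/
  my-project-core/        -- shared code, builds with either compiler
    my-project-core.cabal
  my-project-server/      -- configure with: cabal configure --with-compiler=ghc
    my-project-server.cabal
  my-project-client/      -- configure with: cabal configure --with-compiler=ghcjs
    my-project-client.cabal
```

Each package gets its own configure step, so each can record a different compiler choice.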

Related

How does the workflow with Haskell Stack work?

I don't get the point about Stack.
I used to write my Haskell code in my favourite environment, run or compile it with GHC(i), and, if necessary, install packages using Cabal. Now that apparently is not the way to go any more, but I don't understand how to work with Stack. So far, I have only understood that I need to write stack exec ghci instead of ghci to start a REPL.
Apart from that, the docs always talk about 'projects' for which I have to write some YAML files. But I probably don't have any project -- I just want to launch a GHCi REPL and experiment a bit with my ideas. At the moment, this fails with the inability to get the packages that I want to work with installed.
How is one meant to work with Stack? Is there any explanation of its use cases? Where do I find my use case among them?
Edit: My confusion comes from the fact that I want to work with some software (IHaskell) whose installation guide explains the installation via stack. Assume I already have a GHC(i) installed whose package base I maintain, e.g., using Cabal. How would I have to set up stack.yaml to make stack use my global GHC for that project?
First, notice that stack uses its own package database, independent from cabal's. AFAIK they can't be shared... hence, if you run stack build it will download packages (including the compiler) into its own package database.
Nevertheless, stack allows you to use a system-wide compiler (but not system-installed libraries). To do so, your stack.yaml must contain the following two lines:
resolver: lts-XX.XX   # keep reading below
system-ghc: True
The available Stackage snapshots are listed at https://www.stackage.org/. Each snapshot works with a specific version of the compiler, so be sure to use a snapshot with the same compiler version you have on your system. If your system ghc happens to be newer than any LTS snapshot, you can set allow-newer: true in stack.yaml.
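Putting those pieces together, a stack.yaml along these lines (the snapshot version is left as a placeholder you must fill in yourself):

```yaml
resolver: lts-XX.XX   # pick a snapshot whose GHC matches your system GHC
system-ghc: true
# allow-newer: true   # uncomment only if your system GHC is newer than any LTS
```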
Now, if getting a separate database from stack feels wrong to you, notice that you can build the project with cabal too, since stack at the end of the day spits out a cabal file. It probably won't work out of the box, but you can modify the cabal file to match exactly the versions of the packages in the snapshot you are using.
In summary:
You can use your system-wide ghc.
You cannot share libraries installed with cabal.
You can use cabal to build the project, probably after modifying the ihaskell.cabal file to match the versions in the Stackage snapshot.

Building with c2hs and cabal

I have a problem where cabal will not do dependency resolution on .chs files,
that is, if A.chs depends on B.chs (or really B.chi) then cabal will not
figure it out and call c2hs on the files in the correct order. I know that gtk2hs
uses a custom buildscript, however it is rather complicated and specialized for
gtk2hs. Is there an easier way of manually/automatically doing .chs dependency
resolution (by, for instance, listing out the files in the correct order)?
As it turns out, cabal will process exposed-modules in the order they are listed (I guess only if there are no dependencies to consider, or maybe this behaviour is specific to .chs files). To resolve dependencies manually, one can simply order the modules correctly in the cabal file.
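For the A.chs/B.chs example above, that means listing the dependency first (a sketch, assuming both modules live in a single library):

```
library
  exposed-modules:
    B    -- B.chs is processed first, so B.chi exists when A needs it
    A
  build-tools: c2hs
```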

Profile Haskell without installing profiling libraries for all dependencies

I wish to profile my program written in Haskell.
On compilation, I am told that I do not have profiling libraries for certain dependencies (e.g., criterion) installed and cabal aborts.
I have no interest in profiling parts of those dependencies; code called from main doesn't even use them.
How can I profile my application without installing profiling libraries I don't need and without removing all those dependencies?
A good way to circumvent having to compile everything with profiling is to use a cabal sandbox. It lets you set up a sandbox for one application only, so you won't have to re-install your entire ~/.cabal prefix. You'll need a recent version of Cabal, so run cabal update && cabal install cabal-install first.
Once you have initialised a sandbox, create a file cabal.config containing the necessary directives (in your case library-profiling: True; executable-profiling: True may also be handy).
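A sketch of the whole sequence (flag spellings may vary slightly between cabal-install versions):

```shell
cabal sandbox init
cat > cabal.config <<EOF
library-profiling: True
executable-profiling: True
EOF
# Install dependencies into the sandbox with profiling enabled,
# then configure and build the package itself.
cabal install --only-dependencies
cabal configure --enable-library-profiling --enable-executable-profiling
cabal build
```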
A side-effect of this is that you can test your code with dependencies that need not be installed globally, for example, experimental versions, or outdated versions.
EDIT: By the way, I don't think you need to have profiling enabled for criterion to work. In any case, it works for me without profiling enabled. Just write a Main module that contains main = defaultMain benchmarks, where benchmarks has type [Benchmark], i.e. a list of benchmarks that you've written.
You then compile that file (say we call it benchmarks.hs) with ghc --make -o bench benchmarks.hs, and run the program ./bench with the appropriate arguments (consult the criterion documentation for details). A good default is ./bench -o benchmarks.html, which will generate a nifty report.
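A minimal benchmarks.hs along those lines (a sketch; the function being benchmarked, fib, is just a stand-in for your own code):

```haskell
import Criterion.Main (Benchmark, bench, defaultMain, whnf)

-- A toy function to benchmark.
fib :: Int -> Integer
fib n = fibs !! n
  where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

benchmarks :: [Benchmark]
benchmarks =
  [ bench "fib 20" (whnf fib 20)
  , bench "fib 30" (whnf fib 30)
  ]

main :: IO ()
main = defaultMain benchmarks
```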
I had the same problem this week, and although I had recompiled everything by hand, I was instructed in the IRC channel to do the following:
Go to your cabal config file (typically ~/.cabal/config).
Edit the line that enables library profiling (and while you are at it, enable documentation).
Run cabal install world.
As mentioned in the question you refer to in your comment, a good way to solve this problem in the future is to enable profiling in the cabal configuration. This way all libraries are installed with profiling support. This might not be a satisfying solution but I guess many are opting for it.
If you are only interested in getting an impression of your program's memory usage, you can generate a heap profile using -hT. More precisely, you have to compile the program with -rtsopts to enable RTS options, then execute it with +RTS -hT. The runtime generates a file with the extension .hp, which you can convert into a PostScript heap-profile graph using hp2ps. This should work without any profiling support, but note that I am too lazy to verify it, as I have installed all libraries with profiling support ; )
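The -hT workflow sketched as commands (the program name is hypothetical):

```shell
ghc --make -rtsopts myprog.hs   # enable RTS options at compile time
./myprog +RTS -hT -RTS          # run with basic heap profiling; writes myprog.hp
hp2ps -c myprog.hp              # convert to a colour PostScript graph, myprog.ps
```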

Is it possible to compile "only a file" in a cabal project?

In JVM-based programs, you can recompile a single file to a .class file and run the program again, without necessarily recompiling all the files.
Is this possible in Haskell? Is it necessary to compile and link all the files in the project? If yes, why?
What if there is no binary, you are only installing a library?
For GHC, you can change and recompile a single module without having to recompile the modules that depend on it, provided the exposed interface doesn't change. GHC's --make mode (the default as of ghc-7.*) checks whether recompilation is necessary and recompiles only those modules for which it can't determine that it's unnecessary.
If you have a cabal package and you run cabal build after changing one module, you can see from the compiler output that it generally doesn't recompile all modules in the package, only the changed module and [maybe] the ones depending on it.
If you build an executable, that of course has to be relinked, but many of the old object files can be reused.
If you build a library, the library archive of course has to be rebuilt, but many of the old object files can be reused.
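You can watch GHC's recompilation checker at work directly (module names here are hypothetical):

```shell
ghc --make Main.hs    # compiles Main and every module it imports
touch Utils.hs        # mark one module as changed without altering its interface
ghc --make Main.hs    # only Utils is recompiled; the other .o files are reused
```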

Generate cabal file with dependencies on foreign libs

Is it possible to automatically generate cabal file for a given haskell project, that will create appropriate Build-depends dependencies for all the libs that the project uses?
Yes! In fact, the cabal init command does this in the HEAD version of cabal-install. It's true that it can't get it exactly right in all cases, but it makes the best guesses it can and then lets you fix the generated build-depends list as necessary.
No, because some modules are provided by more than one package and it isn't practical (or even possible, really) for cabal to decide which one you want to use.
You can search for which package provides a given module, or just run cabal install several times until you've covered all the deps.
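One way to look up which installed package exposes a given module is ghc-pkg, which ships with GHC (a sketch; the module name is just an example):

```shell
ghc-pkg find-module Data.Map
# lists the installed packages exposing Data.Map, e.g. some version of containers
```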