How to get cabal and Nix to work together - haskell

As far as I understand, Nix is an alternative to cabal sandbox.
I finally managed to install Nix, but I still don't understand how it can replace a sandbox.
I understand you don't need cabal when using Nix and the wrapped version of GHC; however, if you want
to publish a package you'll need to package it with cabal at some point. Therefore, you need to be able to write and test your cabal configuration within Nix. How do you do that?
Ideally, I would like an environment similar to a cabal sandbox but "contained" within Nix. Is that possible? In fact, what I would really like is the equivalent of nested sandboxes, as I usually work on projects made of multiple packages.
Update about my current workflow
At the moment I work on 2 or 3 independent projects (P1, P2, P3), each composed of 2 or 3 cabal packages; let's say for P1: L11 and L12 (libraries)
and E11 (an executable). E11 depends on L12, which depends on L11. I mainly split the executables from the libraries because they are private and kept in a private git repo.
In theory, each project could have its own sandbox (shared between its packages). I tried that (a common sandbox for L11, L12 and E11), but it quickly gets annoying: if you modify L11, you can't rebuild it because E11 depends on it, so I have to uninstall E11 first to recompile L11.
It might not be exactly that, but I run into a similar problem.
This would be fine if I only occasionally modified L11, but in practice I change it more often than E11.
Since the shared sandbox doesn't work for me, I went back to one sandbox per package. That works, but it is less than ideal.
The main problem is that if I modify L11, I need to compile it twice (once in L11, and then again in E11). Also, each time I start a new sandbox, as everybody knows, I have to wait a while for every package to be downloaded and recompiled.
So by using Nix, I'm hoping to be able to set up separate cabal "environments" per project, which would solve all the issues above.
Hope this is clearer.

I do all my development using Nix and cabal these days, and I can happily say that they work in harmony very well. My current workflow is very new, in that it relies on features in nixpkgs that have only just reached the master branch. As such, the first thing you'll need to do is clone nixpkgs from Github:
cd ~
git clone git://github.com/nixos/nixpkgs
(In the future this won't be necessary, but right now it is).
Single Project Usage
Now that we have a nixpkgs clone, we can start using the haskellng package set. haskellng is a rewrite of how we package things in Nix, and is of interest to us for being more predictable (package names match Hackage package names) and more configurable. First, we'll install the cabal2nix tool, which can automate some things for us, and we'll also install cabal-install to provide the cabal executable:
nix-env -f ~/nixpkgs -i -A haskellngPackages.cabal2nix -A haskellngPackages.cabal-install
From this point, it's all pretty much clear sailing.
If you're starting a new project, you can just call cabal init in a new directory, as you would normally. When you're ready to build, you can turn this .cabal file into a development environment:
cabal init
# answer the questions
cabal2nix --shell my-project.cabal > shell.nix
This gives you a shell.nix file, which can be used with nix-shell. You don't need to use this very often though - the only time you'll usually use it is with cabal configure:
nix-shell -I ~ --command 'cabal configure'
cabal configure caches absolute paths to everything, so now when you want to build you just use cabal build as normal:
cabal build
Whenever your .cabal file changes you'll need to regenerate shell.nix - just run the command above, and then cabal configure afterwards.
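For example, after editing my-project.cabal, the full refresh is just those two commands again:
cabal2nix --shell my-project.cabal > shell.nix
nix-shell -I ~ --command 'cabal configure'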
Multiple Project Usage
The approach scales nicely to multiple projects, but it requires a little more manual work to "glue" everything together. To demonstrate how this works, let's consider my socket-io library. This library depends on engine-io, and I usually develop both at the same time.
The first step to Nix-ifying this project is to generate default.nix expressions alongside each individual .cabal file:
cabal2nix engine-io/engine-io.cabal > engine-io/default.nix
cabal2nix socket-io/socket-io.cabal > socket-io/default.nix
These default.nix expressions are functions, so we can't do much right now. To call the functions, we write our own shell.nix file that explains how to combine everything. For engine-io/shell.nix, we don't have to do anything particularly clever:
with (import <nixpkgs> {}).pkgs;
(haskellngPackages.callPackage ./. {}).env
For socket-io, we need to depend on engine-io:
with (import <nixpkgs> {}).pkgs;
let modifiedHaskellPackages = haskellngPackages.override {
      overrides = self: super: {
        engine-io = self.callPackage ../engine-io {};
        socket-io = self.callPackage ./. {};
      };
    };
in modifiedHaskellPackages.socket-io.env
Now we have a shell.nix for each project, so we can use cabal configure as before.
The key observation here is that whenever engine-io changes, we need to reconfigure socket-io to detect these changes. This is as simple as running
cd socket-io; nix-shell -I ~ --command 'cabal configure'
Nix will notice that ../engine-io has changed, and rebuild it before running cabal configure.

Related

Why run cabal configure?

In the Cabal User Guide it says that Cabal is often compared with autoconf and automake, since the command-line interface for actually configuring and building packages follows the same steps:
./configure --prefix=...
make
make install
compared to
cabal configure --prefix=...
cabal build
cabal install
My understanding is that ./configure (the script produced by autoconf) adapts the make process to the environment in which it will run and also checks dependencies, so ./configure always has an "input" to conform to. But if cabal configure is not given any arguments, what does it do, and why is it necessary before running cabal build?
The cabal configure step does at least two things I know of:
Check that the package description parses OK.
Check that all required dependencies are already installed (and report an error if not).
Basically it's running the constraint solver to decide exactly which packages you're going to build against. (E.g., if you have several versions of ByteString installed, which version are you going to use? Well it might depend on which version the packages you depend on are expecting...)
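For example, you can also pin that choice yourself at configure time; the version range here is purely illustrative:
cabal configure --constraint='bytestring ==0.10.*'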
Also I believe it's possible to supply options at configure time which change exactly which features of the package get built (but I don't have experience with this).
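That would look something like this, assuming the package's .cabal file declares a flag named examples (a hypothetical name):
cabal configure --flags="examples"     # enable the flag
cabal configure --flags="-examples"    # disable it again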
I think originally you had to call configure before you could call build, but I believe now the cabal command-line tool does that step for you automatically in many cases. (E.g., cabal run now seems to automatically reconfigure if the package description file is newer than the configuration DB.)

cabal sandbox with stackage

I want to point my global cabal config to use stackage LTS only.
Does cabal sandbox provide any value in that case?
As I understand it, there should be no cabal hell anymore, as all projects will use a predetermined set of packages that are guaranteed to build together.
Is there any way to prebuild all stackage LTS packages to speed up all future project builds?
Why Sandboxes?
I think there are still benefits to using sandboxes:
Not every package is in Stackage; if you end up using a library or depending on something that is not part of Stackage, you have no guarantee that it will work with the rest of your packages.
Sandboxes have other uses besides preventing cabal hell. Their other main use is being able to add local directories as sources of packages. For example, let's say you have checked out two packages on your local disk, ~/code/a and ~/code/b, and that b depends on a. If you want to check that b works with some changes you've made to a, you can add your local a checkout as a source to b's cabal sandbox.
cd ~/code/b
cabal sandbox add-source ~/code/a
cabal build
Pre-build LTS Packages
If you are set on pre-building all of your packages you can use the following to install all the packages listed in a cabal.config file.
cat cabal.config | sed -rn 's/^.* ([^ ]+) ==.*/\1/gp' | xargs cabal install
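If you don't have a cabal.config yet, Stackage publishes one per snapshot; assuming the URL pattern from Stackage's own setup instructions, the current LTS one can be fetched with:
wget https://www.stackage.org/lts/cabal.config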

Are there any Haskell specific tools that can show source code from imported modules?

How can I browse Haskell source code, preferably without an internet connection? Right now I click through Hackage search results, click the source link and search the source page. There are two problems:
I'm using the current version as a proxy for what I have installed locally
This does not work well recursively (more clicks and searches for the next definition)
Usually IDEs let you download the sources for any library and open the definition in a new editor tab. I prefer reading code to documentation: fewer surprises along the way, and I can learn something from it.
So, how can I set up recursive source searches using Haskell tools, or standard GNU tools if necessary? All I know right now is that I can generate ctags for vim, but where does cabal store sources?
This is the opinionated workflow I follow to render the documentation with the source link enabled.
$ cd <package-name>
$ cabal sandbox init
$ cabal install --only-dependencies --enable-documentation --haddock-hyperlink-source
$ cabal configure --enable-documentation --haddock-hyperlink-source
$ cabal haddock --hyperlink-source
$ firefox dist/doc/html/<package-name>/index.html
The Source link should be enabled for all packages, including the dependencies, as long as they are installed in the sandbox.
In the particular case of Arch Linux, the distro I use, I try to avoid installing Haskell system packages through pacman because, by default, the documentation is not built with the source link enabled. In Arch Linux you can use ABS and modify the PKGBUILD with the parameters described above. I'm pretty sure something similar could be done in other distros, but have no idea about Windows or Mac OS X.
It's also worth mentioning that you don't need to type those parameters every time you run cabal. You can enable them by default in your .cabal/config
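As a minimal sketch, assuming the field names of a standard cabal-install config (check your own .cabal/config for the exact stanza and its commented-out defaults), the relevant entries are:
documentation: True
haddock
  hyperlink-source: True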
This should work without a sandbox, but if you are dealing with more than one Haskell project I strongly recommend using sandboxes.

How to build src from a cygport?

I have a question about the structure of the source code from a cygport package.
Here is the contents of a Cygports source file:
the actual source bundle for the project (tar.gz, tar.bz2, etc.)
any number of *.patch files
a .cygport file
I am trying to build gedit-3.4.2 from cygports repository.
How does the .cygport file help me run the proper options in the ./configure ?
For instance, in gedit, if I don't specify --disable-spell it won't proceed due to an error. How do I get the list of ./configure options that were used to build the project when the cygport was built?
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
Here is the contents of gedit-3.4.2-1.cygport:
inherit python gnome2
DESCRIPTION="GNOME text editor"
PATCH_URI="3.4.2-cygwin.patch"
DEPEND="gnome-common gtk-doc
girepository(Gtk-3.0)
pkgconfig(enchant)
pkgconfig(gtksourceview-3.0)
pkgconfig(libpeas-gtk-1.0)"
PKG_NAMES="${PN} ${PN}-devel"
PKG_HINTS="setup devel"
gedit_CONTENTS="--exclude=gtk-doc --exclude=libgedit* etc/ usr/bin/ usr/lib/gedit/ ${PYTHON_SITELIB#/} usr/share/"
gedit_devel_CONTENTS="usr/include/ usr/lib/gedit/libgedit* usr/lib/pkgconfig/ usr/share/gtk-doc/"
DIFF_EXCLUDES="*.desktop.in *.schemas.in *-marshal.h"
CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python"
KEEP_LA_FILES="none"
EDIT: Someone from the Cygwin Ports mailing list said:
"The configure options are
--libexecdir=/usr/lib --enable-python
Which is from CYGCONF_ARGS."
Here is the contents of a Cygports source file:
You'd do better to think of it as a Cygwin package source file.
cygport is simply a tool for automating the creation of Cygwin binary and source packages. It is the primary tool available, but unlike with some other packaging systems, there's really nothing forcing you to use it. It is quite possible to build a Cygwin package entirely by hand, since it is really nothing more than a tarball that Cygwin's setup.exe can blindly unpack into the Cygwin root directory (typically c:\cygwin) with the expectation that this will put the package's files in sensible locations.
Before cygport existed, people did build their own ad hoc packaging systems. Many Cygwin package maintainers still use these tools they created. (Yours truly included; two of my three packages use cygport, but the third still uses a custom build system.)
Ultimately, you want to read the cygport manual, in /usr/share/doc/cygport/manual.html.
(Yes, I know, "RTFM" answers are frowned on here. But, as one who currently maintains two cygport based packages in the official Cygwin package repository, please believe me when I tell you that the manual is still the single best resource available on this topic.)
How does the .cygport file help me run the proper options in the ./configure ?
As you found out through other resources, you'd first need to edit the CYGCONF_ARGS value in the .cygport file.
The simplest possible step after that is cygport gedit-3.4.2-1.cygport all. That attempts to rebuild all the binary packages in a single step. It also builds a new source package containing updated .cygport and patch files.
If something breaks in the all build process, it is usually faster to switch to using the sub-commands contained by all instead of completely restarting the process. The all step just runs prep, compile, install, package, and finish for you, in that order. For instance, if all fails during the compilation step, there's probably no need to repeat the prep step.
(It is exceptionally uncommon for cygport or a sane build system to wreck the build tree, forcing you to re-run prep. Far more commonly, you end up needing to re-do prep when you manually wreck the build tree while trying to get a new package to build for the first time and need to start over.)
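For instance, after fixing whatever broke the compile step, you can resume by running the remaining sub-commands one at a time (same invocation form as above):
cygport gedit-3.4.2-1.cygport compile
cygport gedit-3.4.2-1.cygport install
cygport gedit-3.4.2-1.cygport package
cygport gedit-3.4.2-1.cygport finish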
For instance, in gedit, if I don't specify --disable-spell it won't proceed due to an error.
You can probably fix that by installing the libaspell-devel package from the official Cygwin package repository with setup.exe.
Personally, I wouldn't disable any feature unless it meant installing unofficial packages, such as those from the Cygwin Ports project.[*] It is nice to have Cygwin Ports repository, but because it contains so many packages, installing one can end up creating an "install the world" situation: package A depends on packages B, C and D, and C depends on E, F, G, H, and G depends on I, J, K, and... Dependency hierarchies within the Cygwin package repo tend to be flatter and narrower than those in the Cygports repo.
Is there some way we can use the cygport executable to build the cygport and change the prefix too?
You have guessed that you just add --prefix=/my/private/program/tree to CYGCONF_ARGS, I trust.
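In this case, the edited line in gedit-3.4.2-1.cygport would read, for example:
CYGCONF_ARGS="--libexecdir=/usr/lib --enable-python --prefix=/my/private/program/tree"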
[*] If you are feeling confused about "Cygwin Ports" and cygport, the naming similarity is no coincidence. cygport is a tool created by Yaakov Selkowitz for himself when creating the Cygwin Ports package repository. Later, it became popular enough among other Cygwin package maintainers that it pushed out most of the competing build systems.

Run Happstack app without cabal

I'm trying out Happstack. I installed Happstack and created a project: happstack new project web. A new folder 'web' was created with the project guestbook under it. So now I want to run it. The only way I have found to do it is to run cabal install. But I want to run my app without installing it with cabal! Executing run.sh gives the error: Could not find module 'Paths_guestbook'. How can I do that?
Edit:
In general, is there a way to run a Happstack app without a rebuild, like in Snap?
In general, you can always build Cabal projects without installing simply by doing:
$ cabal configure
$ cabal build
The resulting executable will usually be called dist/build/<project>/<project>.
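Assuming the package and executable are both named guestbook (as the Paths_guestbook error suggests; this is an assumption), that would be:
$ cabal configure
$ cabal build
$ ./dist/build/guestbook/guestbook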
The specific error you're getting is because the code must be built with Cabal to get the Paths_guestbook module, which will contain information about the location of data files used by it. (It may be the case that it's unable to find these data files if you run the executable without installing it; in that case, you'll need a more elaborate solution, such as cabal-dev.)
(I'm not a Happstack user, so I don't know if there's an official way to accomplish this, but this should work for basically any Cabal-based project in general. The repository shows that run.sh was last modified in 2009, so I suspect it has simply bit-rotten. It doesn't do anything special, though, so cabal build should work just fine.)
SHORT VERSION:
The run.sh seems to be missing an include parameter. Modify it to look like this:
#!/bin/sh
runghc -isrc -isrc-interactive-only src/Main.hs
I have updated the run.sh in darcs to include this change.
LONG VERSION:
Normally that flag is not needed for Happstack applications. You can usually just do runhaskell Main.hs. But in that particular example the Main.hs explicitly imports:
import Paths_guestbook (version)
which is used in the versionInfo function so that the server can report its own version number. The version number in src-interactive-only is hardcoded, though, and will generally be out of date, so it is only correct if you actually build with cabal.
The Paths_guestbook module is normally created automatically when cabal build is run. So, another fix would be to change the run.sh to:
#!/bin/sh
runghc -isrc -idist/build/autogen src/Main.hs
And run cabal configure && cabal build once. After that you will be able to use run.sh (until you do a cabal clean).
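In other words, once per fresh checkout (and again after each cabal clean):
cabal configure
cabal build
./run.sh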
Another option would be to set a CPP flag in the .cabal file, and only import Paths_guestbook when the application is being built via cabal.
For example in the happstack.com source code:
http://patch-tag.com/r/stepcut/happstackDotCom/snapshot/current/content/pretty/Main.hs
In line 40 (or so) you will see an #ifdef __CABAL__. happstack.com needs to know where to find static content such as .css files. When doing runhaskell Main.hs in the local directory, it will look for the files in a sub-directory of the local directory. If you do cabal install, it will instead look wherever cabal installs the data files. Or, you can override the default location with command-line arguments (which is what the Debian packaging for that app does).
Unfortunately, the happstack new project command is somewhat bitrotten because the author became a parent and has not had time to work on it in a long time. It will likely be removed from the upcoming Happstack release in order to reduce confusion.
In order to be truly useful, I think the command needs to prompt for a bunch of values and then generate a new project from a set of templates. Similar to how 'cabal init' works. But currently, no one has volunteered the time to make that happen.
To see changes to your source appear automatically without restarting the server, you can use the happstack-plugins library. There is a screencast of it here:
http://happstack.blogspot.com/2010/10/recompile-your-haskell-based-templates.html
