I am trying to build an internal Haskell project on NixOS using cabal2nix. It wraps (and thus depends on) a foreign library which, on Ubuntu, one would build by wgetting the source and then running make && make install && ldconfig. After that, when cabal goes to build the program, it is apparently able to find the appropriate header files (which live in /usr/local/include/ta-lib or /usr/include/ta-lib).
On Nix, the process as I understand it is to set up a .nix file specifying how to fetch and build the source, and then Nix sets up the isolated build environments. When I do this, the foreign library is fetched and built appropriately.
When Nix runs the configure step, it looks alright:
configureFlags: --verbose --prefix=/nix/store/fwpw03bd0c2m5yb7v2wc7g6f0qj912ra-talib-0.1.0.0 --libdir=$prefix/lib/$compiler --libsubdir=$pkgid --with-gcc=gcc --package-db=/tmp/nix-build-talib-0.1.0.0.drv-0/package.conf.d --ghc-option=-optl=-Wl,-rpath=/nix/store/fwpw03bd0c2m5yb7v2wc7g6f0qj912ra-talib-0.1.0.0/lib/ghc-7.10.2/talib-0.1.0.0 --enable-split-objs --disable-library-profiling --disable-executable-profiling --enable-shared --enable-library-vanilla --enable-executable-dynamic --enable-tests --extra-include-dirs=/nix/store/gvglncjgd5yif9bc03qalmp2mrjp524n-ta-lib-0.4.0/include --extra-lib-dirs=/nix/store/gvglncjgd5yif9bc03qalmp2mrjp524n-ta-lib-0.4.0/lib
--extra-include-dirs and --extra-lib-dirs are set to the correct paths in the Nix store. However, when it goes to build, it complains with:
Setup: Missing dependency on a foreign library:
* Missing C library: ta_lib
Unfortunately I don't understand how cabal determines whether the foreign library is present. I read here (Haskell how to resolve cabal error: Missing dependencies on foreign libraries?) that cabal will try to build and link a stub C program that includes each header it finds and links against the library. So, somehow, it is not finding the correct library.
What is wrong? Does this have to do with the step in Ubuntu of running ldconfig?
The problem is that ta_lib depends on the system math library m, but that library isn't linked by default. You can check that by creating a stub C program
echo "int main() { return 0; }" >test.c
and trying to link that with ta_lib:
$ nix-shell -p ta_lib --run "gcc test.c -lta_lib"
/nix/store/ghinzmxfm2s41nz8y873jlywwmcbw38l-ta-lib-0.4.0/lib/libta_lib.so: undefined reference to `sinh'
/nix/store/ghinzmxfm2s41nz8y873jlywwmcbw38l-ta-lib-0.4.0/lib/libta_lib.so: undefined reference to `sincos'
[...]
collect2: error: ld returned 1 exit status
Now, when Cabal tries to determine whether the library is available, it will attempt to link it into a stub test program, but that attempt will fail because of all those undefined symbols. Hence, Cabal complains that the library cannot be linked (even though its paths are configured and set up correctly).
To remedy that issue, add the m library to the extra-libraries attribute in your project's Cabal file, like so:
extra-libraries: ta_lib, m
That should make the Cabal configure phase succeed.
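For context, here is a sketch of how the library stanza might look with that change; the module name and version bounds below are assumptions for illustration, not taken from the actual project:

library
  -- module name and bounds below are hypothetical
  exposed-modules:  Data.TALib
  build-depends:    base >= 4 && < 5
  extra-libraries:  ta_lib, m
  default-language: Haskell2010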
Related
I recently upgraded to Cabal 3.2 (and GHC 8.10) and I am running into some major issues that make some of my projects non-buildable...
Thorough description of the problem
Here is a minimal (not) working configuration that fails every time:
I start off with a clean Cabal configuration (by deleting ~/.cabal); the reason for that will appear later in the post. I run cabal update to recreate the .cabal directory and to ensure Cabal is working.
I create a project (let's call it test1) using cabal init. This is a library project with one exposed module (conveniently named Test1) that exports some dummy function foo. I run cabal build, then cabal install --lib; everything runs smoothly, so far so good.
Just to be sure, I leave the project directory and fire up GHCi. I type in :m Test1 to load the module I created earlier, and it works! I can type in foo ... and see my function executed. Also, I list the content of ~/.cabal/store/ghc-8.10.xxx and see that the test1-xxx directory is there.
I then create a new project, test2, still using cabal init. This time, I configure it to be an executable, and I add test1 as a dependency (using the build-depends field). But this time when I run cabal build, I run into an issue:
~/projects/haskell/test2> cabal build
Resolving dependencies...
cabal: Could not resolve dependencies:
[__0] trying: test2-0.1.0.0 (user goal)
[__1] unknown package: test1 (dependency of test2)
[__1] fail (backjumping, conflict set: test1, test2)
After searching the rest of the dependency tree exhaustively, these were the
goals I've had most trouble fulfilling: test2, test1
It seems to me like package test1 cannot be found; however, I can access it from GHCi (and GHC for that matter) and it is present in ~/.cabal/store...
But unfortunately there is more.
I create a third project, test3. This is a library, and it depends on nothing other than base (so in particular it does not depend on test1). The lib exposes one module, Test3, with one exported function, bar. I run cabal build, no problem here. But when I want to install test3 with cabal install --lib I run into some errors:
~/projects/haskell/test3> cabal install --lib
Wrote tarball sdist to
/home/<user>/projects/haskell/test3/dist-newstyle/sdist/test3-0.1.0.0.tar.gz
Resolving dependencies...
cabal: Could not resolve dependencies:
[__0] unknown package: test1 (user goal)
[__0] fail (backjumping, conflict set: test1)
After searching the rest of the dependency tree exhaustively, these were the
goals I've had most trouble fulfilling: test1
It seems that it cannot find test1, although it has been installed correctly; maybe this is a remnant of the failed build of test2, though...
Just to be sure, I fire up GHCi and type in :m Test3, but GHCi tells me that it cannot find module Test3 (it even suggests this is a typo and I meant Test1), showing that test3 indeed did not get installed, although it built successfully...
Okay there is one more quirk to this whole situation: I create once again a new project with cabal init, called test4, which is an executable that (again) depends on nothing else than base. I keep the default Main.hs (that just prints "Hello, Haskell!"). I run cabal build: no problem. Then I run cabal install and... No problem either? I run test4 in a random location and it fires up the executable, printing "Hello, Haskell!" in the terminal...
And there is one last thing: I go to some random location and I run cabal install xxx --lib where xxx is a library package available on Hackage (for example xml) and:
~> cabal install xml --lib
Resolving dependencies...
cabal: Could not resolve dependencies:
[__0] unknown package: test1 (user goal)
[__0] fail (backjumping, conflict set: test1)
After searching the rest of the dependency tree exhaustively, these were the
goals I've had most trouble fulfilling: test1
This is the reason why I need to nuke .cabal regularly... Right now I seem to be in some kind of stale state where I cannot install any library anymore.
Technical configuration and notes
I am running Cabal 3.2.0.0 and GHC 8.10.0.20200123. I installed them from the hvr/ghc PPA, and I made sure there are no other versions of those tools anywhere on my computer.
Just as a note, I am running Ubuntu 18.04.4 LTS (with XFCE, so Xubuntu to be exact). Everything else seems to be up to date.
Last thing, regarding the *.cabal files I use for building: they are pretty much the ones generated by cabal init, except I switch executable xxx for library in the case of libraries, and I simply add an exposed-modules field for exposing modules (so Test1 for test1 and Test3 for test3, respectively). I also use build-depends in test2 to make the project depend on test1. Apart from that, they are pretty much left untouched.
Notes and thoughts
I must confess that I am new to Cabal 3; until last week I was using Cabal 1 (because I never bothered to update it; yes I know this is bad). With Cabal 1 I did not have any problem whatsoever, and I was perfectly able to install a package from local sources and depend on it in other projects...
I feel like I am doing something wrong; maybe I am not using the correct Cabal commands? I saw somewhere something about cabal new-build and cabal new-install, but they do not seem to do anything more than cabal build and cabal install, at least in my case. I also wanted to investigate sandboxes, but it seems those have disappeared since Cabal 2.
There is also a slight possibility this is a Cabal bug, but I don't find any relevant issue on the bug tracker that may be related to my problem...
What do you think about this? What am I doing wrong? Do you see any alternative or possible fix?
Thanks a lot!
GHC environment files
A GHC installation comes with a certain number of packages out of the box. base is one of them, but there are others, for example text. If you install GHC alone (no cabal or stack) and open ghci, it should let you import Data.Text without problems.
What if you want GHC or ghci to be aware of other compiled packages present in your filesystem? You can point GHC to additional package databases using command-line flags, but there's also the concept of package environment files.
Environments are plain text files that contain a list of package-related GHC flags. There might be a global environment at ~/.ghc/$ARCH-$OS-$GHCVER/environments/default, and there might also exist local environments which only affect GHC and ghci commands invoked inside the same folder. The exact rules for search are described in the GHC User Guide.
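For illustration, such a file is just a list of directives, one per line; the store path and package ids below are made up, and the <hash> is a placeholder for the unit id hash cabal generates:

clear-package-db
global-package-db
package-db /home/<user>/.cabal/store/ghc-8.10.1/package.db
package-id base-4.14.0.0
package-id test1-0.1.0.0-<hash>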
What does cabal install --lib actually do?
By default, it modifies the global environment file, so that GHC and ghci can now find that library. That's why point 3) worked. The actual compiled binaries of the library still reside in the cabal store though.
We can also create local environment files. For example cabal install sop-core --lib --package-env . will create the environment file .ghc.environment.xxx in the current folder, and the library will be available to ghc and ghci when they are invoked there.
Why isn't test1 available for test2?
Modern cabal makes a distinction between local packages and external packages.
Local packages are the set of packages you are developing together in a project, being edited, recompiled and changed repeatedly. They are built "inplace" and not seen outside the project. They can depend on each other.
External packages are dependencies from build-depends: whose source code is downloaded from a package repository and which, when compiled, are put in the cabal store so that other Cabal projects can make use of them without recompiling.
The list of local packages and other project-level configuration details are specified in a cabal.project file. But you don't need one if you work on a single isolated package; the default list of packages is simply ./*.cabal.
cabal wants to completely control the build environment of local packages, and will ignore the global environment file. In your case, you'll have to make test1 and test2 local packages in the same project (likely the best option) or publish test1 and treat it as an external package.
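For example, assuming test1 and test2 sit side by side under a common parent directory, a cabal.project at that parent could look like this (a sketch):

-- cabal.project: makes both packages local to one project
packages:
  test1/
  test2/

Running cabal build test2 from that parent directory will then build test1 inplace and let test2 depend on it directly.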
Note that "cabal project" is a concept relevant only during development. Packages are published independently, there are no "projects" in Hackage or other repositories, just packages.
What if I want to treat test1 as external without publishing it to Hackage?
You will have to set up a local package repository, basically a non-public Hackage.
You can tell Cabal about additional package repositories in the Cabal configuration file, that is, the file that configures cabal itself. Its location is given in the last line of cabal --help.
But how to set up the repository? The hackage-repo-tool can help with that.
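For illustration, once such a repository exists, you would declare it with a stanza along these lines; the name, URL and keys are placeholders, so consult the hackage-repo-tool and cabal documentation for the exact values:

repository my-local-repo
  url: http://localhost:8080/
  secure: True
  root-keys: <root-key-ids>
  key-threshold: 3

After adding the stanza, run cabal update so the new repository's index is fetched.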
Why did test3 fail? Why did further library installs fail?
That's weird; I have no idea why that happens. Did you by any chance delete the ~/.cabal folder between steps 3) and 5)? What happens if you delete the global GHC environment file and try again?
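That is, something like the following; the path is an assumption based on the default location mentioned above, so adjust the architecture and GHC version to match your setup:

rm ~/.ghc/x86_64-linux-8.10.0.20200123/environments/default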
I have a Haskell library that I am developing using Stack. As I am developing the library, I like to write small test/experimentation programs that use the library. I keep a collection of these test programs for myself in a directory locally. These test modules are very quick and informal, and not appropriate to include as unit tests in the committed library code. Typically, most of them aren't even maintained and won't compile against the latest version of the library, but I keep them around in case I want to update them later. When I'm working on a test program, I want it to build against my working copy of the library, with any changes that I've made to the library locally.
How should I set up my Stack build environment for this situation? Here are some options I've tried, and the problems with each option.
Two Cabal packages, one Stack configuration. The stack.yaml file lists both packages and defines the build environment for both at once.
Problem: The stack.yaml file needs to be included as part of the committed library source code, so that other developers can build the library from source reproducibly. I don't want the public stack.yaml file for my library to include build information for my local test projects.
Problem: As far as I know, to make this work I need to have a .cabal file that lists all the executables and modules for my test programs. This is annoying to update whenever I want to throw together a quick experimental script, and will fail to build any of the test programs if I have even a single module that doesn't compile. I can't have a .cabal file with no sections, because Cabal gives "No executables, libraries, tests, or benchmarks found. Nothing to do.", and because this offers nowhere to list build-depends.
Create a Cabal sandbox for the test programs. Use cabal sandbox add-source to add the local library as a package. See also this answer.
Problem: Using Cabal sandboxes instead of Stack reintroduces a lot of the dependency problems that Stack is supposed to fix, such as using the system-global GHC instead of the GHC defined by the resolver.
Have a separate stack.yaml for the test programs. Add the library under packages as location: 'C:\Path\To\Local\Library' and set extra-dep: true for that dependency. (See here for more info on this feature.) Don't put any other Cabal packages under packages in the stack.yaml for the test programs. Use stack runghc to invoke test programs within the scope of their stack.yaml.
Problem: I just can't get this one to work. Running stack build inside the test program directory gives "Error parsing targets: The project contains no local packages (packages not marked with 'extra-dep')". Running stack runghc acts as if no dependencies are present at all. I don't want to add a Cabal package for the test programs because this has the same problem as option 1 with needing to construct an explicit .cabal file describing the modules to build.
Problem: Stack build configuration info that I want to be identical between the library and the test programs has to be copied manually. For example, if I change the resolver in my library's stack.yaml, I also need to change it in the stack.yaml for my test programs separately.
Have a directory inside my working copy of the library that contains all of my test programs. Use stack runghc to invoke test programs in the context of the library.
Problem: I'd like the directory with my test programs to be outside of the directory containing my library source code and build configuration, so that I don't have to tell the version control for my library to ignore my test code, and can have my own local version control just for the test programs.
Problem: Only works with a single local library dependency. If my test programs need to depend on local working copies of two different libraries with their own stack.yaml files, I'm out of luck.
Add a symbolic link inside my working copy of the library to a separate directory that contains all of my test programs. Navigate through the symlink and use stack runghc to invoke test programs in the context of the library.
Problem: Super awkward to use, especially since I'm on Windows and Windows has terrible symlink support.
Problem: Still need to tell my version control system to ignore the symlink.
Problem: Still only works with a single library dependency.
If only one local library is involved, I use option 4. You can put your tests outside the directory of your library, and either invoke stack from the directory of your library, or use --stack-yaml path/to/library/stack.yaml.
Otherwise, I use option 3, creating a separate stack project without setting extra-dep.
...
packages:
- 'path/to/package1'
- 'path/to/package2'
...
I can't think of a good workaround for the issue of configuration duplication. There would otherwise be conflicts if multiple packages specified different resolvers/package versions.
Edit: Actually, a stub library works better, so I've edited to add that.
I think the way to get #3 to work is -- under your scratch program directory -- (1) add . under packages in stack.yaml alongside the location/extra-dep: true package:
packages:
- '.'
- location: ../mylib
extra-dep: true
(2) create an executable clause in scratch.cabal that points to a stub main program (i.e., a "Hello World" program that compiles but need not do anything) which depends on your library (a minimal Stub.hs is sketched after step (3)):
executable main
hs-source-dirs: src
main-is: Stub.hs
build-depends: base
, mylib
default-language: Haskell2010
or a library clause with no exposed modules, which again depends on your mylib library:
library
hs-source-dirs: src
build-depends: base >= 4.7 && < 5
, mylib
default-language: Haskell2010
and (3) run stack build in the scratch directory. This should build and register mylib, and now stack runghc Prog1.hs should work fine for running programs that depend on mylib modules.
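For reference, the stub main program from step (2) can be as minimal as this; it only needs to compile:

-- src/Stub.hs: compiles but does nothing
module Main where

main :: IO ()
main = return ()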
If you use the executable approach, the stub program is compiled as a side effect but otherwise ignored. If you use the library approach, it looks like the stub library isn't even built. You then also have the option of actually building a scratch library, by adding some exposed modules of shared code for your test programs to use if that's convenient, so the stub library might be best.
None of this solves the problem of keeping stack.yaml info like the resolver version in sync, but it seems to address all the problems you list in 1, 2, 4, and 5. In particular, it should work fine for test programs that depend on multiple local libraries you're developing.
I have a package named commands. I want to install it into its own sandbox, e.g. .cabal-sandbox/x86_64-osx-ghc-7.8.3-packages.conf.d/commands-0.0.0-f3f84f48f42ac74a69ee5fd73512bfd0.conf. Currently, there is just one .hi interface file for one module, Commands; I don't know how it got there.
I tried cabal install commands, by the logic of "that's how the other packages got there I think", but it fails with unknown package.
I also tried stuff with ghc-pkg, like ghc-pkg update commands -f .cabal-sandbox/x86_64-osx-ghc-7.8.3-packages.conf.d, but I'm not using it right. Ideally, I'd like to do this with cabal.
The last thing I tried was ghc -idist/build/, but it complained about the package names in the interface files being different, commands versus main ("... differs from name found in the interface file ..."). And if I faked the executable's package with ghc -package-name commands-0.0.0, the linker complained that it couldn't find the symbol _ZCMain_main_closure, because every executable needs the function main in the module Main in the package main.
I'm sure there's a better way of doing this.
I followed online examples for my cabal file:
$ cat commands.cabal
name: commands
library
exposed-modules: Commands.Types, Commands.Bits
...
The minimal failing code example is just:
$ cat Main.hs
import Commands.Types
main = return ()
in the root project directory.
Context: I need to build my executable with make (not cabal) because it links to foreign code (Objective-C via language-c-inline); my makefile: https://github.com/sboosali/Haskell-DragonNaturallySpeaking/blob/master/Makefile. Thus, I have to compile the script explicitly. I don't know how to compile the executable with cabal, but I want cabal to build, test, and manage my library.
By putting my package into the sandbox, I will be able to import its modules from the script, by compiling with cabal exec -- ghc. I will also be able to include the script with extra-source-files at least, and know it will work.
Here's what I'd try:
First unregister any previous commands library. Try
ghc-pkg --global unregister commands
Install the new commands library into your sandbox.
From your sandbox directory, try this:
cabal --enable-shared --disable-documentation --prefix=./ install /path/to/your/library/source
Note the prefix specification.
I nuked it (rm -fr .cabal-sandbox/), reinstalled everything (cabal install --only-dependencies), added the package itself as a source (cabal sandbox add-source .), and installed it (cabal install commands). And then make worked. I don't know why...
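Spelled out, that sequence was roughly the following; the cabal sandbox init step is an assumption on my part, since a sandbox has to exist again after the deletion:

# recreate the sandbox from scratch
rm -fr .cabal-sandbox/
cabal sandbox init
# reinstall the dependencies into the fresh sandbox
cabal install --only-dependencies
# register the package's own source and install it
cabal sandbox add-source .
cabal install commands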
(My problem is about distributing binaries without haskell-platform, ghc, cabal, ...)
I need to deploy a well-formed cabal Haskell application (a Yesod scaffolded site), but I have disk space restrictions.
GHC alone is about 1 GB; storing all the cabal source code, packages, and so on requires even more disk space.
Obviously, haskell-platform, ghc, and the rest are about development (not deployment).
In my specific case I can successfully run
cabal clean && cabal configure && cabal build
and then launch the result successfully (something like)
./dist/build/MyEntryPoint/MyEntryPoint arg arg arg
But what about dependencies? How do I move them to the production environment (together with my "dist" build)?
Can I install the binary dependencies without cabal? How?
Thank you very much!
By default, ghc statically links the Haskell libraries, so the resulting binary is independent of the Haskell ecosystem. If your program does not need any data files, just copy the binary out from ./dist/build/MyEntryPoint/MyEntryPoint to the host.
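A quick way to verify this (assuming a Linux host) is to list the binary's dynamic dependencies; only system C libraries such as libc, libm and libgmp should show up, with no per-package Haskell libraries:

ldd ./dist/build/MyEntryPoint/MyEntryPoint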
If you also have data files (e.g templates, images, static html pages) that are referenced by the binary using the data path finding logic of Cabal, you can use Setup copy as follows (using happy as an example):
/tmp/happy-1.18.10 $ ./Setup configure
Warning: defaultUserHooks in Setup script is deprecated.
Configuring happy-1.18.10...
/tmp/happy-1.18.10 $ ./Setup build
Building happy-1.18.10...
Preprocessing executable 'happy' for happy-1.18.10...
[ 1 of 18] Compiling NameSet ( src/NameSet.hs, dist/build/happy/happy-tmp/NameSet.o )
[..]
[18 of 18] Compiling Main ( src/Main.lhs, dist/build/happy/happy-tmp/Main.o )
Linking dist/build/happy/happy ...
/tmp/happy-1.18.10 $ ./Setup copy --destdir=/tmp/to_be_deployed/
Installing executable(s) in /tmp/to_be_deployed/usr/local/bin
/tmp/happy-1.18.10 $ find /tmp/to_be_deployed
/tmp/to_be_deployed
/tmp/to_be_deployed/usr
/tmp/to_be_deployed/usr/local
/tmp/to_be_deployed/usr/local/bin
/tmp/to_be_deployed/usr/local/bin/happy
/tmp/to_be_deployed/usr/local/share
/tmp/to_be_deployed/usr/local/share/doc
/tmp/to_be_deployed/usr/local/share/doc/happy-1.18.10
/tmp/to_be_deployed/usr/local/share/doc/happy-1.18.10/LICENSE
/tmp/to_be_deployed/usr/local/share/happy-1.18.10
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/GLR_Lib-ghc-debug
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/GLR_Lib-ghc
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/GLR_Lib
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/GLR_Base
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-coerce-debug
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-ghc-debug
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-debug
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-coerce
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays-ghc
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-arrays
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-coerce
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate-ghc
/tmp/to_be_deployed/usr/local/share/happy-1.18.10/HappyTemplate
/tmp/happy-1.18.10 $ rsync -rva /tmp/to_be_deployed/ production.host:/
[..]
If you do not want to install into /usr/local then pass the desired prefix to Setup configure.
This works well if the target host is otherwise similar (same versions of C libraries such as gmp and ffi installed). If you also need to statically link some C library, see the question that hammar has linked in his comment.
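As a rough sketch of that last case: one commonly suggested approach is to pass static linker flags through GHC at configure time, assuming static (.a) versions of the C libraries are installed on the build machine:

cabal configure --ghc-options='-optl-static -optl-pthread'
cabal build

This is not guaranteed to work out of the box for every library, so treat it as a starting point rather than a recipe.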
This is the output from cabal install codec-image-devil:
Resolving dependencies...
Configuring Codec-Image-DevIL-0.2.3...
cabal: Missing dependency on a foreign library:
* Missing C library: IL
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
cabal: Error: some packages failed to install:
Codec-Image-DevIL-0.2.3 failed during the configure step. The exception was:
ExitFailure 1
I tried --extra-include-dirs and --extra-lib-dirs, but they didn't work, so I edited the .cabal inside Codec-Image-DevIL-0.2.3.tar.gz. I don't know if I'm even supposed to change that, but it worked for pthread.
I added these two lines:
include-dirs: C:\Users\Rumbold\Documents\libs\IL\include, C:\Users\Rumbold\Documents\libs\pthread\include, .
extra-lib-dirs: C:\Users\Rumbold\Documents\libs\IL\lib, C:\Users\Rumbold\Documents\libs\pthread\lib, .
They are indented so they are in the Library section. I don't know if I got the format for lists right; it's just something I stumbled upon while googling. The libs and header files are all in the correct place, I think.
Any clue how I can get it to work?
Edit_1:
I got it to work with --extra-include-dirs and --extra-lib-dirs, so I don't need to edit the .cabal anymore. But IL still doesn't work. Is there a way to find out which files it's looking for?
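For reference, the invocation was along these lines, using the same paths as in my .cabal edit above (^ is the cmd.exe line continuation):

cabal install codec-image-devil ^
  --extra-include-dirs="C:\Users\Rumbold\Documents\libs\IL\include" ^
  --extra-include-dirs="C:\Users\Rumbold\Documents\libs\pthread\include" ^
  --extra-lib-dirs="C:\Users\Rumbold\Documents\libs\IL\lib" ^
  --extra-lib-dirs="C:\Users\Rumbold\Documents\libs\pthread\lib"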
Edit_2:
Alright, it works. I had to rename DevIL.lib and DevIL.dll to libIL.lib and libIL.dll. (Not sure if I had to do both, but that's what I did; I also kept them under their old names.)
Edit_3:
Getting lots of errors like:
"cabal\Codec-Image-DevIL-0.2.3\ghc-6.12.3/libHSCodec-Image-DevIL-0.2.3.a(DevIL.o):fake:(.text+0x2379):
undefined reference to `ilGetInteger#4'"