Linking rather than building in stack/cabal - Haskell

question:
What do I have to do in the .cabal file in order to cause the libraries to link rather than build?
background:
I am trying to get coverage details from the command stack test --coverage.
When I run this build I get the error message:
Error: The coverage report for xmonad's test-suite "properties" did not consider any code. One possible cause of this is if your test-suite builds the library code (see stack issue #1008). It may also indicate a bug in stack or the hpc program. Please report this issue if you think your coverage report should have meaningful results.
Only one tix file found in /home/paul/temp/xmonad_coverage/.stack-work/install/x86_64-linux/09c83ca90bc1875ad3d1b5ea4a2a0c369c6367f3ad989533e627c073ee9962e0/8.0.1/hpc/, so not generating a unified coverage report.
On the stack documentation site, https://docs.haskellstack.org/en/stable/coverage/, it says that in order to run coverage I must have:
These test-suites link against your library, rather than building the library directly. Coverage information is only given for libraries, ignoring the modules which get compiled directly into your executable. A common case where this doesn't happen is when your test-suite and library both have something like hs-source-dirs: src/. In this case, when building your test-suite you may also be compiling your library, instead of just linking against it.
When I look in my .cabal file, for the library there is
hs-source-dirs: src
and for the test-suite there is
hs-source-dirs: tests
I don't understand the purpose of these settings, or whether they are causing the library to be built rather than linked against.
Could this be the reason that stack test --coverage is failing, or am I looking in the wrong place?
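For reference, a layout where the test-suite links against the library (rather than rebuilding it) keeps the source directories separate and pulls the library in through build-depends. A rough sketch (module and file names are only illustrative):
library
  hs-source-dirs:      src
  exposed-modules:     XMonad
  build-depends:       base

test-suite properties
  type:                exitcode-stdio-1.0
  hs-source-dirs:      tests
  main-is:             Properties.hs
  build-depends:       base
                     , xmonad
The important part is that the test-suite's hs-source-dirs does not overlap with the library's, so the library modules are linked in via the xmonad dependency instead of being recompiled into the test binary.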

It turns out that I could fix this by doing the following:
stack clean; stack build; stack test --coverage --ghc-options "-fforce-recomp"
as described by Michael Sloan at https://github.com/commercialhaskell/stack/issues/1305
his explanation for a similar problem was:
Looks like what's happening is that the library isn't getting rebuilt, despite
reconfiguring with --ghc-options -fhpc and building the package. As a result, the .tix
file generated by the test only includes coverage info for the test itself

Related

Building a Specific Library with Stack?

I have a Haskell stack project whose cabal file is divided as follows:
library
  exposed-modules:
      Godot.Api
      Godot.Api.Auto
      -- ...

library generate
  exposed-modules:
      Generate
      Spec
      Types
      Types.Internal
      -- ...
When I run stack build it seems to only build the first library, but what I want stack to do is build just library generate. How do I do this? The following doesn't seem to work:
stack build project-name:library:generate # doesn't seem to work
stack build project-name:lib:generate # doesn't seem to work
Unfortunately, you can't write multiple libraries in one cabal file.
So you have to create one cabal file per library (usually one directory per cabal file).
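For example, the resulting layout might look like this (directory names are placeholders matching the snippet below):
your-project/
  stack.yaml
  your-main-library/
    your-main-library.cabal
  generate/
    generate.cabal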
Then, list them in the stack.yaml:
packages:
- your-main-library
- generate
Then, run stack build <the-library-to-build> to build a specific library:
stack build generate
FYI. Here's a project which contains several libraries: https://github.com/iij-ii/direct-hs

What is the preferred way to write quick Haskell test programs that depend on Stack libraries in local directories?

I have a Haskell library that I am developing using Stack. As I am developing the library, I like to write small test/experimentation programs that use the library. I keep a collection of these test programs for myself in a directory locally. These test modules are very quick and informal, and not appropriate to include as unit tests in the committed library code. Typically, most of them aren't even maintained and won't compile against the latest version of the library, but I keep them around in case I want to update them later. When I'm working on a test program, I want it to build against my working copy of the library, with any changes that I've made to the library locally.
How should I set up my Stack build environment for this situation? Here are some options I've tried, and the problems with each option.
Option 1: Two Cabal packages, one Stack configuration. The stack.yaml file lists both packages and defines the build environment for both at once.
Problem: The stack.yaml file needs to be included as part of the committed library source code, so that other developers can build the library from source reproducibly. I don't want the public stack.yaml file for my library to include build information for my local test projects.
Problem: As far as I know, to make this work I need to have a .cabal file that lists all the executables and modules for my test programs. This is annoying to update whenever I want to throw together a quick experimental script, and it will fail to build any of the test programs if I have even a single module that doesn't compile. I can't have a .cabal file with no sections, because Cabal gives "No executables, libraries, tests, or benchmarks found. Nothing to do.", and because this offers nowhere to list build-depends.
Option 2: Create a Cabal sandbox for the test programs. Use cabal sandbox add-source to add the local library as a package. See also this answer.
Problem: Using Cabal sandboxes instead of Stack reintroduces a lot of the dependency problems that Stack is supposed to fix, such as using the system-global GHC instead of the GHC defined by the resolver.
Option 3: Have a separate stack.yaml for the test programs. Add the library under packages as location: 'C:\Path\To\Local\Library' and set extra-dep: true for that dependency (see here for more info on this feature; a sketch of such a stack.yaml follows this list). Don't put any other Cabal packages under packages in the stack.yaml for the test programs. Use stack runghc to invoke test programs within the scope of their stack.yaml.
Problem: I just can't get this one to work. Running stack build inside the test program directory gives "Error parsing targets: The project contains no local packages (packages not marked with 'extra-dep')". Running stack runghc acts as if no dependencies are present at all. I don't want to add a Cabal package for the test programs because this has the same problem as option 1 with needing to construct an explicit .cabal file describing the modules to build.
Problem: Stack build configuration info that I want to be identical between the library and the test programs has to be copied manually. For example, if I change the resolver in my library's stack.yaml, I also need to change it in the stack.yaml for my test programs separately.
Option 4: Have a directory inside my working copy of the library that contains all of my test programs. Use stack runghc to invoke test programs in the context of the library.
Problem: I'd like the directory with my test programs to be outside of the directory containing my library source code and build configuration, so that I don't have to tell the version control for my library to ignore my test code, and can have my own local version control just for the test programs.
Problem: Only works with a single local library dependency. If my test programs need to depend on local working copies of two different libraries with their own stack.yaml files, I'm out of luck.
Option 5: Add a symbolic link inside my working copy of the library to a separate directory that contains all of my test programs. Navigate through the symlink and use stack runghc to invoke test programs in the context of the library.
Problem: Super awkward to use, especially since I'm on Windows and Windows has terrible symlink support.
Problem: Still need to tell my version control system to ignore the symlink.
Problem: Still only works with a single library dependency.
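For reference, the kind of stack.yaml I tried for option 3 looks roughly like this (the library path is just a placeholder):
packages:
- location: 'C:\Path\To\Local\Library'
  extra-dep: true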
If only one local library is involved, I use option 4. You can put your tests outside the directory of your library, and either invoke stack from the directory of your library or use --stack-yaml path/to/library/stack.yaml.
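For example, if the test program Prog1.hs lives outside the library and the library's configuration is at ../mylib/stack.yaml (paths are illustrative), that would be:
stack --stack-yaml ../mylib/stack.yaml runghc Prog1.hs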
Otherwise, I use option 3, creating a separate stack project without setting extra-dep.
...
packages:
- 'path/to/package1'
- 'path/to/package2'
...
I can't think of a good workaround for the issue of configuration duplication. There would otherwise be conflicts if multiple packages specified different resolvers/package versions.
Edit: Actually a stub library works better, so edited to add.
I think the way to get option 3 to work is, under your scratch program directory: (1) add . under packages in stack.yaml alongside the location/extra-dep: true package:
packages:
- '.'
- location: ../mylib
  extra-dep: true
(2) create an executable clause in scratch.cabal that points to a stub main program (i.e., a "Hello World" program that compiles but need not do anything) which depends on your library:
executable main
  hs-source-dirs:      src
  main-is:             Stub.hs
  build-depends:       base
                     , mylib
  default-language:    Haskell2010
or a library clause with no exposed modules, again that depends on your mylib library:
library
  hs-source-dirs:      src
  build-depends:       base >= 4.7 && < 5
                     , mylib
  default-language:    Haskell2010
and (3) run stack build in the scratch directory. This should build and register mylib, and now stack runghc Prog1.hs should work fine for running programs that depend on mylib modules.
If you use the executable approach, the stub program is compiled as a side effect but otherwise ignored. If you use the library approach, it looks like the stub library isn't even built, and you then have the option of actually building a scratch library by adding some exposed modules of shared code for your test programs to use, if that's convenient. So the stub library approach might be best.
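For instance, turning the stub into a small scratch library could look like this (TestHelpers is a hypothetical module of shared code):
library
  hs-source-dirs:      src
  exposed-modules:     TestHelpers
  build-depends:       base >= 4.7 && < 5
                     , mylib
  default-language:    Haskell2010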
None of this solves the problem of keeping stack.yaml info like the resolver version in sync, but it seems to address all the problems you list in 1, 2, 4, and 5. In particular, it should work fine for test programs that depend on multiple local libraries you're developing.

Haskell: Using `cabal test` for integration tests with code coverage

Setup: a .cabal file with a library, an executable that depends on the library, and a test-suite. The test-suite calls integration tests written in Python. However, that gives me empty coverage reports from cabal test. I think that's because Python calls the executable, not the test-suite?
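For context, the setup described above looks roughly like this (package, module, and file names are hypothetical):
library
  hs-source-dirs:      src
  exposed-modules:     MyLib
  build-depends:       base

executable myapp
  hs-source-dirs:      app
  main-is:             Main.hs
  build-depends:       base
                     , mylib

test-suite integration
  type:                exitcode-stdio-1.0
  hs-source-dirs:      test
  main-is:             Main.hs
  -- the test driver shells out to the Python integration tests,
  -- which in turn run the myapp executable
  build-depends:       base
                     , process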
Questions
What is the difference between "Package coverage report" and the "Test coverage report"?
Should I call the test-suite binary from python, where the test-suite has a flag such that it behaves like the executable?

Generate coverage report with stack

I want to generate a code coverage report using Stack. I run a command that
amounts to the following (omitting options passed to the test suite via --test-arguments):
$ stack test --coverage
This performs the testing and then outputs the following:
Error: The coverage report for myproject's test-suite "tests" did not
consider any code. One possible cause of this is if your test-suite builds
the library code (see stack issue #1008). It may also indicate a bug in
stack or the hpc program. Please report this issue if you think your
coverage report should have meaningful results.
I think it should (this creates an empty report). GHC options are identical for
all components of my package. There is no need for the test suite to rebuild the
library. After all, if Cabal can generate the report, Stack should be able
to do it given the same Cabal config, or am I mistaken?
I've opened an issue on the Stack GitHub repo as suggested.
After a while I decided to create a good old sandbox and generate the report
using Cabal instead (I really need to see the report, you know). It worked
previously, but now I get:
$ cabal sandbox init
… <everything OK>
$ cabal update
… <everything OK>
$ cabal install --only-dependencies --enable-tests
… <everything OK>
$ cabal configure --enable-tests --enable-coverage
… <everything OK>
$ cabal build
… <everything OK>
$ cabal test
Running 2 test suites...
Test suite tests: RUNNING...
Test suite tests: PASS
Test suite logged to: dist/test/myproject-0.1.0-tests.log
hpc: can not find HUnit_DDLSMCRs3jyLBDbJPCH01j/Test.HUnit.Lang in ["./.hpc","./dist/hpc/vanilla/mix/myproject-0.1.0","./dist/hpc/vanilla/mix/tests"]
What? I've never seen this, although I generated many reports
before. Someone up there just decided that I won't get that report today,
it seems.
Do you know how to generate coverage report using Stack? Has anyone
succeeded at this?
In my case I was still getting this error. Running:
stack clean
stack test --coverage
solved the problem, as reported here.
Recent changes upstream fixed it. Should be resolved for users of 0.1.7.0 and later.

Generating documentation for my own code with Haddock and stack

I have annotated my code in Haddock style and would like to generate browse-able documentation. Since I am also using stack, I want to integrate the documentation generation into the workflow. However, I have not yet been able to generate anything useful.
I can run
stack haddock
and it will generate documentation in the style I want (to be found deep inside ~/.stack/), but it only seems to generate documentation for the packages I depend on, rather than for my own code.
When I run
stack haddock --help
I get the impression that I can use the additional argument --haddock to generate documentation for my own project, and --no-haddock-deps to leave out the documentation for my dependencies. However, when I run
stack haddock --haddock --no-haddock-deps
nothing seems to happen. If I run stack clean first, it will recompile all my code, but no output seemingly related to documentation is generated.
As an intermediate solution I have also tried running Haddock by itself, i.e.
haddock my-source.hs
but then I get an error that it cannot find a module the file depends on (which is installed locally by stack). This gives me the impression that documentation generation will have to go through stack somehow. I have looked for, but not really found any explanations related to configuring my .cabal and stack.yaml files for documentation.
TL;DR
How can I use stack and Haddock to generate documentation for the code in my own package?
According to this ticket on the stack issue tracker, Stack can currently only build documentation for libraries, but not executables.
Cabal can be configured to work with the stack databases with this command:
cabal configure --package-db=clear --package-db=global --package-db=$(stack path --snapshot-pkg-db) --package-db=$(stack path --local-pkg-db)
after which you can run cabal haddock --executables to generate the documentation.
By the way, stack haddock is only a shortcut for stack build --haddock, so there is no need to write stack haddock --haddock.
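For instance, these two invocations should be equivalent:
stack haddock --no-haddock-deps
stack build --haddock --no-haddock-deps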
https://www.reddit.com/r/haskell/comments/5ugm9s/how_to_generate_haddock_docs_for_nonlibrary_code/ddtwqzc/
The following solution only works when individual files are specified:
stack exec -- haddock --html src/Example.hs src/Main.hs --hyperlinked-source --odir=dist/docs
