Haskell: Using `cabal test` for integration tests with code coverage

Setup: a .cabal file with a library, an executable that depends on the library, and a test-suite. The test-suite calls integration tests written in Python. However, that gives me empty coverage reports from cabal test. I think that's because Python calls the executable, not the test-suite?
Questions
What is the difference between "Package coverage report" and the "Test coverage report"?
Should I call the test-suite binary from Python, where the test-suite has a flag such that it behaves like the executable?
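One way to realize the idea in the second question is to give the test-suite a mode switch, so the same hpc-instrumented binary can act either as the application or as the test driver, and all exercised code lands in one .tix file. A minimal sketch of the idea only; the --as-app flag, the script path, and the module layout are hypothetical, and the real application logic is assumed to live in the library so coverage is attributed there:

-- tests/Main.hs: one entry point that either behaves like the
-- application or drives the Python integration tests.
module Main (main) where

import System.Environment (getArgs)
import System.Exit (exitWith)
import System.Process (rawSystem)

main :: IO ()
main = do
  args <- getArgs
  case args of
    -- act like the executable when the Python script calls us back
    ("--as-app" : rest) -> appMain rest
    -- otherwise run the integration tests and propagate their exit code
    _ -> rawSystem "python" ["tests/integration.py"] >>= exitWith

-- Stub standing in for the library's entry point in this sketch;
-- in a real package this would be imported from the library.
appMain :: [String] -> IO ()
appMain _ = putStrLn "application logic goes here"

The Python script would then invoke this same test-suite binary with --as-app instead of the plain executable, so the resulting coverage data should include the library code it exercises.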

Related

Entry point for test suite with multiple files in Cabal project?

New to Cabal, so sorry if this is obvious, but I obviously want to have more than one file in my Cabal project's test suite, yet the .cabal file is insisting on being given an entry point. What do I put for this?
For example, say I have two modules in my library and want to test each in its own test file. One is no more important than the other, so how do I go about making one of the files the entry point?
You could make two test suites.
test-suite A
  type:    exitcode-stdio-1.0
  main-is: test-module-A.hs

test-suite B
  type:    exitcode-stdio-1.0
  main-is: test-module-B.hs
Or you could make a single suite that imports both test modules.
test-suite both
  type:          exitcode-stdio-1.0
  main-is:       test-both.hs
  other-modules: TestA, TestB
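The combined suite then needs a small driver that runs both modules. A sketch, assuming each test module exports an IO action named run (adjust to whatever your test framework actually provides):

-- test-both.hs: single entry point that runs both test modules
module Main (main) where

import qualified TestA
import qualified TestB

main :: IO ()
main = do
  TestA.run  -- assumed: TestA exports run :: IO ()
  TestB.run  -- likewise for TestB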

linking rather than building in stack/cabal

Question:
What do I have to do in the .cabal file in order to cause the libraries to link rather than build?
Background:
I am trying to get coverage details from the command stack test --coverage
When I run this build, I get the error message:
Error: The coverage report for xmonad's test-suite "properties" did not consider any code. One possible cause of this is if your test-suite builds the library code (see stack issue #1008). It may also indicate a bug in stack or the hpc program. Please report this issue if you think your coverage report should have meaningful results.
Only one tix file found in /home/paul/temp/xmonad_coverage/.stack-work/install/x86_64-linux/09c83ca90bc1875ad3d1b5ea4a2a0c369c6367f3ad989533e627c073ee9962e0/8.0.1/hpc/, so not generating a unified coverage report.
On the Stack documentation site, https://docs.haskellstack.org/en/stable/coverage/, it says that in order to run coverage I must have:
These test-suites link against your library, rather than building the library directly. Coverage information is only given for libraries, ignoring the modules which get compiled directly into your executable. A common case where this doesn't happen is when your test-suite and library both have something like hs-source-dirs: src/. In this case, when building your test-suite you may also be compiling your library, instead of just linking against it.
When I look in my .cabal file, for the library there is
hs-source-dirs: src
and for the test-suite there is
hs-source-dirs: tests
I don't understand the purpose of these, or whether they are causing the library to be built rather than linked against.
Could this be the reason that stack test --coverage is failing? Or am I looking in the wrong place?
Turns out that I could fix this by doing the following:
stack clean; stack build; stack test --coverage --ghc-options "-fforce-recomp"
as described by Michael Sloan at https://github.com/commercialhaskell/stack/issues/1305
His explanation for a similar problem was:
Looks like what's happening is that the library isn't getting rebuilt, despite reconfiguring with --ghc-options -fhpc and building the package. As a result, the .tix file generated by the test only includes coverage info for the test itself.
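For reference, a layout that satisfies the "link, don't build" requirement quoted above keeps the source directories disjoint and has the test-suite depend on the library by package name. A generic sketch; the package, module, and file names here are hypothetical:

library
  hs-source-dirs:   src
  exposed-modules:  MyLib
  build-depends:    base
  default-language: Haskell2010

test-suite properties
  type:             exitcode-stdio-1.0
  hs-source-dirs:   tests           -- disjoint from src, so nothing is recompiled
  main-is:          Properties.hs
  build-depends:    base, mypackage -- link against the library by package name
  default-language: Haskell2010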

What is the preferred way to write quick Haskell test programs that depend on Stack libraries in local directories?

I have a Haskell library that I am developing using Stack. As I am developing the library, I like to write small test/experimentation programs that use the library. I keep a collection of these test programs for myself in a directory locally. These test modules are very quick and informal, and not appropriate to include as unit tests in the committed library code. Typically, most of them aren't even maintained and won't compile against the latest version of the library, but I keep them around in case I want to update them later. When I'm working on a test program, I want it to build against my working copy of the library, with any changes that I've made to the library locally.
How should I set up my Stack build environment for this situation? Here are some options I've tried, and the problems with each option.
Option 1: Two Cabal packages, one Stack configuration. The stack.yaml file lists both packages and defines the build environment for both at once.
Problem: The stack.yaml file needs to be included as part of the committed library source code, so that other developers can build the library from source reproducibly. I don't want the public stack.yaml file for my library to include build information for my local test projects.
Problem: As far as I know, to make this work I need to have a .cabal file that lists all the executables and modules for my test programs. This is annoying to update whenever I want to throw together a quick experimental script, and will fail to build any of the test programs if I have even a single module that doesn't compile. I can't have a .cabal file with no sections, because Cabal gives "No executables, libraries, tests, or benchmarks found. Nothing to do.", and because this offers nowhere to list build-depends.
Option 2: Create a Cabal sandbox for the test programs. Use cabal sandbox add-source to add the local library as a package. See also this answer.
Problem: Using Cabal sandboxes instead of Stack reintroduces a lot of the dependency problems that Stack is supposed to fix, such as using the system-global GHC instead of the GHC defined by the resolver.
Option 3: Have a separate stack.yaml for the test programs. Add the library under packages as location: 'C:\Path\To\Local\Library' and set extra-dep: true for that dependency. (See here for more info on this feature.) Don't put any other Cabal packages under packages in the stack.yaml for the test programs. Use stack runghc to invoke test programs within the scope of their stack.yaml.
Problem: I just can't get this one to work. Running stack build inside the test program directory gives "Error parsing targets: The project contains no local packages (packages not marked with 'extra-dep')". Running stack runghc acts as if no dependencies are present at all. I don't want to add a Cabal package for the test programs because this has the same problem as option 1 with needing to construct an explicit .cabal file describing the modules to build.
Problem: Stack build configuration info that I want to be identical between the library and the test programs has to be copied manually. For example, if I change the resolver in my library's stack.yaml, I also need to change it in the stack.yaml for my test programs separately.
Option 4: Have a directory inside my working copy of the library that contains all of my test programs. Use stack runghc to invoke test programs in the context of the library.
Problem: I'd like the directory with my test programs to be outside of the directory containing my library source code and build configuration, so that I don't have to tell the version control for my library to ignore my test code, and can have my own local version control just for the test programs.
Problem: Only works with a single local library dependency. If my test programs need to depend on local working copies of two different libraries with their own stack.yaml files, I'm out of luck.
Option 5: Add a symbolic link inside my working copy of the library to a separate directory that contains all of my test programs. Navigate through the symlink and use stack runghc to invoke test programs in the context of the library.
Problem: Super awkward to use, especially since I'm on Windows and Windows has terrible symlink support.
Problem: Still need to tell my version control system to ignore the symlink.
Problem: Still only works with a single library dependency.
If only one local library is involved, I use option 4. You can put your tests outside the directory of your library, and either invoke stack from the directory of your library or use --stack-yaml path/to/library/stack.yaml.
Otherwise, I use option 3, creating a separate stack project without setting extra-dep.
...
packages:
- 'path/to/package1'
- 'path/to/package2'
...
I can't think of a good workaround for the issue of configuration duplication. There would otherwise be conflicts if multiple packages specified different resolvers/package versions.
Edit: Actually a stub library works better, so edited to add.
I think the way to get option 3 to work is as follows. Under your scratch program directory, (1) add '.' under packages in stack.yaml, alongside the location/extra-dep: true package:
packages:
- '.'
- location: ../mylib
  extra-dep: true
(2) create an executable stanza in scratch.cabal that points to a stub main program (i.e., a "Hello World" program that compiles but need not do anything) which depends on your library:
executable main
  hs-source-dirs:   src
  main-is:          Stub.hs
  build-depends:    base
                  , mylib
  default-language: Haskell2010
or a library stanza with no exposed modules, which again depends on your mylib library:
library
  hs-source-dirs:   src
  build-depends:    base >= 4.7 && < 5
                  , mylib
  default-language: Haskell2010
and (3) run stack build in the scratch directory. This should build and register mylib, and now stack runghc Prog1.hs should work fine for running programs that depend on mylib modules.
If you use the executable approach, the stub program is compiled as a side effect but otherwise ignored. If you use the library approach, it looks like the stub library isn't even built, and you then have the option of turning it into a real scratch library by adding exposed modules of shared code for your test programs to use, if that's convenient; so the stub library might be the better choice.
None of this solves the problem of keeping stack.yaml info like the resolver version in sync, but it seems to address all the problems you list in 1, 2, 4, and 5. In particular, it should work fine for test programs that depend on multiple local libraries you're developing.
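For completeness, a scratch program under this setup is just a bare .hs file; it needs no entry in scratch.cabal. A sketch, assuming mylib exposes a module MyLib with a function f (both names hypothetical):

-- Prog1.hs: run with `stack runghc Prog1.hs` from the scratch
-- directory, after `stack build` has built and registered mylib
import MyLib (f)  -- hypothetical export; assume f :: Int -> Int

main :: IO ()
main = print (f 42)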

Generate coverage report with stack

I want to generate a code coverage report using Stack. I run a command that amounts to (omitting options passed to the test suite via --test-arguments):
$ stack test --coverage
This performs the testing and then outputs the following:
Error: The coverage report for myproject's test-suite "tests" did not consider any code. One possible cause of this is if your test-suite builds the library code (see stack issue #1008). It may also indicate a bug in stack or the hpc program. Please report this issue if you think your coverage report should have meaningful results.
I think it should (this creates an empty report). GHC options are identical for all components of my package. There is no need for the test suite to rebuild the library. After all, if Cabal can generate the report, Stack should be able to do it given the same Cabal config, or am I mistaken?
I've opened an issue on the Stack GitHub repo as suggested.
After a while I decided to create a good old sandbox and generate the report using Cabal instead (I really need to see the report, you know). It worked previously, but now I get:
$ cabal sandbox init
… <everything OK>
$ cabal update
… <everything OK>
$ cabal install --only-dependencies --enable-tests
… <everything OK>
$ cabal configure --enable-tests --enable-coverage
… <everything OK>
$ cabal build
… <everything OK>
$ cabal test
Running 2 test suites...
Test suite tests: RUNNING...
Test suite tests: PASS
Test suite logged to: dist/test/myproject-0.1.0-tests.log
hpc: can not find HUnit_DDLSMCRs3jyLBDbJPCH01j/Test.HUnit.Lang in ["./.hpc","./dist/hpc/vanilla/mix/myproject-0.1.0","./dist/hpc/vanilla/mix/tests"]
What? I've never seen this, although I've generated many reports before. Someone up there just decided that I won't get that report today, it seems.
Do you know how to generate a coverage report using Stack? Has anyone succeeded at this?
In my case I was still getting this error. Running:
stack clean
stack test --coverage
solved the problem, as reported here.
Recent changes upstream fixed it. It should be resolved for users of 0.1.7.0 and later.

Why does "cabal sdist" not include all "files needed to build"?

According to the wiki entry,
It packages up the files needed to build the project
I have a simple executables-only .cabal project, which basically contains
Executable myprog
  hs-source-dirs: src
  main-is:        MyMain.hs
and is made up of some additional .hs files below src/ beyond src/MyMain.hs. E.g., src/Utils.hs and a few others.
cabal build has no problems building myprog, and compiles the required additional .hs files below src/, but cabal sdist does not include them, thus creating a dysfunctional source tarball. What am I doing wrong? How do I tell cabal to include all source files below hs-source-dirs?
As a side note, with GNU Autotools there was a make distcheck target, which would first build a source tarball and then try to build the project from the newly generated tarball, thus ensuring everything is OK. Is there something similar for cabal, to make sure my source tarball is sound?
You should list the other Haskell files in the .cabal file, inside the Executable stanza.
other-modules: Utils AFewOthers
The distribution only includes source files that are listed in your .cabal file. Cabal has no other way to detect which source files are in your package. You could still build because cabal build calls ghc --make, and ghc will find and compile all the source files it needs.
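Putting the answer together, the stanza from the question becomes:

Executable myprog
  hs-source-dirs: src
  main-is:        MyMain.hs
  other-modules:  Utils
                  AFewOthers

As for the make distcheck side note: there is no single built-in equivalent, but you can approximate it by building from the freshly generated tarball (cabal check also catches some common packaging mistakes). A rough sketch; the tarball path and version number are illustrative and vary by Cabal version:

$ cabal sdist                      # writes something like myprog-0.1.tar.gz
$ tar -xzf dist/myprog-0.1.tar.gz  # unpack the candidate release
$ cd myprog-0.1
$ cabal build                      # fails if sdist omitted needed files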
