I've made a very simple project with a failing test suite: https://github.com/k-bx/noruntests-play
Now when I run stack --test --no-run-tests build, I would expect it to build the project but not run the tests. Instead, it runs the tests:
➜ noruntests-play git:(master) stack --test --no-run-tests build
noruntests-play-0.1.0.0: test (suite: test)
test: error
CallStack (from HasCallStack):
error, called at tests/Tests.hs:4:8 in main:Main
Test suite failure for package noruntests-play-0.1.0.0
test: exited with: ExitFailure 1
Logs printed to console
What am I doing wrong here? Thank you!
You should put the build command before --test, like this:
$ stack build --test --no-run-tests
I'm not sure whether it's a bug or a feature. You can open an issue here if you're interested in feedback from the developers. Personally, it seems strange to me to pass --test before build: in some reasonable sense --test is a sub-part of build, and sub-options are usually specified to the right of the main command.
Also, there's a shorter version of what you want (because build --test is the same as test):
$ stack test --no-run-tests
When I ran the test suite using cabal test, I got the following message:
Running 1 test suites...
Test suite tests: RUNNING...
Test suite tests: PASS
Test suite logged to: my-lib-tests.log
But when I looked at the log file, the content was:
Test suite tests: RUNNING...
*** Failed! Falsified (after 1 test):
[]
Test suite tests: PASS
Test suite logged to: my-lib-tests.log
Why did I get a pass message when the tests clearly failed?
cabal test works under the assumption that a failing test suite will exit with a non-zero error code.
quickCheck prints a counterexample but returns normally, so the test executable still exits with a zero code.
To make the test executable fail when a counterexample is found, you can wrap QuickCheck tests using quickCheckResult and isSuccess.
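For instance, a minimal test executable along these lines (the property and module layout here are made up, not taken from the question) exits with a non-zero code when QuickCheck finds a counterexample, which is exactly what cabal test needs to report a failure:

module Main (main) where

import Control.Monad (unless)
import System.Exit (exitFailure)
import Test.QuickCheck (quickCheckResult)
import Test.QuickCheck.Test (isSuccess)

-- A hypothetical property standing in for the real tests.
prop_reverseReverse :: [Int] -> Bool
prop_reverseReverse xs = reverse (reverse xs) == xs

main :: IO ()
main = do
  result <- quickCheckResult prop_reverseReverse
  -- Exit non-zero if QuickCheck found a counterexample, so that
  -- cabal test (and stack test) report the suite as failed.
  unless (isSuccess result) exitFailure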
There are test frameworks that do this for you, with a lot of useful functionality on top (such as command-line arguments for selecting which tests to run), for example tasty with tasty-quickcheck.
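As a rough sketch (the group and property names are invented), a tasty-based suite looks like this; tasty's defaultMain takes care of exiting with a failing code when any test fails:

module Main (main) where

import Test.Tasty (defaultMain, testGroup)
import Test.Tasty.QuickCheck (testProperty)

main :: IO ()
main = defaultMain $ testGroup "properties"
  [ -- A made-up property; defaultMain handles the exit code.
    testProperty "reverse is an involution" $
      \xs -> reverse (reverse xs) == (xs :: [Int])
  ]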
I'm trying to separate the stages of building my package and its dependencies into a sequence of stack build invocations, but I've become a bit disoriented by all the combinations of flags that seem relevant. My goal is to separately:
Build package dependencies only
Build testing dependencies only
Build the package, only
Run tests (with everything already built)
So far I have:
stack --resolver XXX build --no-run-tests --no-run-benchmarks --only-dependencies
stack --resolver XXX build --no-run-tests --no-run-benchmarks
stack --resolver XXX build --no-run-tests --no-run-benchmarks --test --bench
stack --resolver XXX build --test --no-run-benchmarks
This isn't quite the right order and doesn't feel quite right. I'd also ideally like to have additional steps for documentation:
Build package dependencies only
Build testing dependencies only
Build Haddock dependencies only
Build the package, only
Run tests (with everything already built)
Build Haddock, with doctests
What sequence of commands and combinations of flags would accomplish these steps?
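For what it's worth, the non-documentation steps can be approximated with a sequence like the following (a sketch only, not verified against every stack version; the Haddock steps are less clear-cut and are left out):
# 1. Build package dependencies only
stack --resolver XXX build --only-dependencies
# 2. Additionally build test (and benchmark) dependencies only
stack --resolver XXX build --test --bench --only-dependencies
# 3. Build the package itself (library and executables, no test suites yet)
stack --resolver XXX build
# 4. Build and run the tests, with all dependencies already in place
stack --resolver XXX build --test --no-run-benchmarks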
I'm trying to test my Haskell package against several Stackage resolvers on Travis, but the --resolver setting I pass via an environment variable is being ignored.
For example, if I specify
env:
- ARGS="--resolver lts-4.0"
in my .travis.yml, I still seem to be using a different resolver (the one from my stack.yaml?) and GHC, as shown by lines like
Installing library in
/home/travis/build/orome/crypto-enigma-hs/.stack-work/install/x86_64-linux/lts-9.1/8.0.2/lib/x86_64-linux-ghc-8.0.2/crypto-enigma-0.0.2.9-6Cs7XSzJkwSDxsEMnLKb0X
in the corresponding build log, which indicate that a different resolver (lts-9.1) and a different GHC (8.0.2) are being used.
How should my build (stack.yaml, .travis.yml, etc.) be configured to ensure that the resolvers (and corresponding GHC) I specify are used to perform my Travis builds and tests?
With env you just define environment variables. You still have to use them. stack on its own does not respect the ARGS variable, so use it in your script, e.g.
install:
# Build dependencies
- stack $ARGS --no-terminal --install-ghc test --only-dependencies
script:
# Build the package, its tests, and its docs and run the tests
- stack $ARGS --no-terminal --install-ghc test --haddock --no-haddock-deps
You should probably use a better name, for example RESOLVER:
env:
- RESOLVER=lts-4.0
- RESOLVER=lts-6.0
- RESOLVER=lts-8.0
install:
# Build dependencies
- stack --resolver $RESOLVER --no-terminal --install-ghc test --only-dependencies
script:
# Build the package, its tests, and its docs and run the tests
- stack --resolver $RESOLVER --no-terminal --install-ghc test --haddock --no-haddock-deps
Also keep in mind that it's usually a better idea to use multiple stack.yaml files, each holding the configuration for one specific LTS variant, as sketched below.
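For example, each job can point stack at its own configuration file via the STACK_YAML environment variable, which stack honours automatically (a sketch; the stack-lts-*.yaml file names are hypothetical, each pinning one resolver):
env:
  - STACK_YAML=stack-lts-4.yaml
  - STACK_YAML=stack-lts-6.yaml
  - STACK_YAML=stack.yaml
install:
  # stack reads STACK_YAML, so no --resolver or --stack-yaml flag is needed
  - stack --no-terminal --install-ghc test --only-dependencies
script:
  - stack --no-terminal --install-ghc test --haddock --no-haddock-deps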
For more information, see stack's Travis documentation and Travis' environment variables documentation.
I created a very simple project with stack. It contains an executable, a library, and test targets in the associated cabal file. When I load the code into ghci via stack ghci, I can't access the tests there, even though they are in a separate module. Is there a way to make them available?
Try stack ghci (your project name):(the test suite name). Then you should be able to enter main and your tests will run.
Example:
If your .cabal project file had the following values:
name: ExampleProject
...
test-suite Example-test
Then the command to run would be stack ghci ExampleProject:Example-test
To watch the test and src directories so they are updated when you reload with :r, run:
stack ghci --ghci-options -isrc --ghci-options -itest ExampleProject:Example-test
I want to generate a code coverage report using Stack. I run a command that amounts to the following (omitting options passed to the test suite via --test-arguments):
$ stack test --coverage
This performs the testing and then outputs the following:
Error: The coverage report for myproject's test-suite "tests" did not
consider any code. One possible cause of this is if your test-suite builds
the library code (see stack issue #1008). It may also indicate a bug in
stack or the hpc program. Please report this issue if you think your
coverage report should have meaningful results.
I think it should (this creates an empty report). GHC options are identical for all components of my package, and there is no need for the test suite to rebuild the library. After all, if Cabal can generate the report, Stack should be able to do it given the same Cabal config, or am I mistaken?
I've opened an issue on the Stack GitHub repo, as suggested.
After a while I decided to create a good old sandbox and generate the report using Cabal instead (I really need to see the report, you know). It worked previously, but now I get:
$ cabal sandbox init
… <everything OK>
$ cabal update
… <everything OK>
$ cabal install --only-dependencies --enable-tests
… <everything OK>
$ cabal configure --enable-tests --enable-coverage
… <everything OK>
$ cabal build
… <everything OK>
$ cabal test
Running 2 test suites...
Test suite tests: RUNNING...
Test suite tests: PASS
Test suite logged to: dist/test/myproject-0.1.0-tests.log
hpc: can not find HUnit_DDLSMCRs3jyLBDbJPCH01j/Test.HUnit.Lang in ["./.hpc","./dist/hpc/vanilla/mix/myproject-0.1.0","./dist/hpc/vanilla/mix/tests"]
What? I've never seen this, although I've generated many reports before. Someone up there just decided that I won't get that report today, it seems.
Do you know how to generate a coverage report using Stack? Has anyone succeeded at this?
In my case I was still getting this error. Running:
stack clean
stack test --coverage
solved the problem, as reported here.
Recent changes upstream fixed it; this should be resolved for users of Stack 0.1.7.0 and later.