How to run a group of tests sequentially?

I have Tasty tests that look like this:
myTests :: [TestTree]
myTests =
  [ testCase "1" $ do ... assertBool ...
  , testCase "2" $ do ... assertBool ...
  , testCase "3" $ do ... assertBool ...
  , testCase "4" $ do ... assertBool ...
  ...
  ]
But the problem is that some of them (let's say 3 and 4) share a system resource which must be used in a "mutually exclusive" way (actually it's a file, but it could be something else). As I understand it, the issue is that tests 1, 2, 3, 4, ... run in parallel.
So I want to run tests 3 and 4 sequentially, and the others in parallel. I am thinking about something like this:
par = [...]
seq = someMagic [...]
myTests = par <> seq
What would the someMagic function be? I found localOption (NumThreads 1) $ [...], but it takes a TestTree as its second argument, not a list of TestTree! How can I do something like this?
EDIT-1: I just found another idea: to use testGroup to "fold" them into one tree.
EDIT-2: I just tried testGroup, but I get the same problem with the busy resource.
EDIT-3: Another idea is to use after to declare a dependency.

It's possible to declare an explicit order (a dependency) between tests with after (and similar functions).
[ ...
, testCase "testMyFunc: must fail" $ do
    ...
, after AllFinish "$NF ~ /testMyFunc/" $ testCase "anotherFunc: must work" $ do
    ...
  ...
]
The pattern expression used here is described in Tasty's README. So, even when tests run in parallel, some of them can be ordered (run sequentially).
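For example, here is a minimal self-contained sketch of this approach; the test names and assertions are placeholders rather than the asker's real tests. Test "4" only starts once every test whose name matches /3/ has finished, while "1" and "2" stay free to run in parallel:

import Test.Tasty
import Test.Tasty.HUnit

myTests :: [TestTree]
myTests =
  [ testCase "1" $ assertBool "independent" True
  , testCase "2" $ assertBool "independent" True
  , testCase "3" $ assertBool "uses the shared file" True
    -- "4" uses the same file, so it waits until "3" has finished:
  , after AllFinish "$NF ~ /3/" $
      testCase "4" $ assertBool "uses the shared file" True
  ]

main :: IO ()
main = defaultMain (testGroup "all tests" myTests)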

Related

Custom QuickCheck failed message

test1 = hspec $ do
  describe "blabla" $ do
    it "should be equl" $ verbose $
      \input -> ...
In the above code, when a test fails, it prints the failing input. But I'm actually interested in another value that can be calculated from the input. Can I ask QuickCheck to print that other value?
Somehow I've never seen it advertised, but you can use hspec's expectations inside QuickCheck properties. Here is an example:
describe "blabla" $ do
it "should be equl" $ verbose $ \input ->
round input `shouldBe` floor (input :: Double)
The property above is clearly not true, so it should fail. Since we are interested not only in the input but also in the value computed from it, shouldBe gives us just that:
3) blabla should be equl
Falsifiable (after 2 tests and 4 shrinks):
0.6
expected: 0
but got: 1
Naturally, due to verbose, only the input is printed for passing tests, while the computed value (e.g. round input) is printed only for a failing test case, which seems to be exactly what you were looking for.
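If you would rather stay within plain QuickCheck, counterexample attaches extra text (such as a computed value) to the failure report. Here is a small sketch using the same round/floor property; the property and message names are mine, not from the question:

import Test.QuickCheck

-- counterexample's message is printed only when the property fails.
prop_round_floor :: Double -> Property
prop_round_floor input =
  counterexample ("round gave: " ++ show (round input :: Integer)) $
    (round input :: Integer) == floor input

main :: IO ()
main = quickCheck prop_round_floor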

How to debug diverging test using QuickCheck

I have some parsing code using Megaparsec, and I've written a simple property to test it: it generates a random expression tree, pretty-prints it, and then checks that the result parses back to the original tree.
Unfortunately, there seems to be a bug: if I run the tests without any limits, I see the GHC process allocating more and more memory until either I kill it or the OOM killer gets there first.
Not a problem, I thought... but I can't for the life of me figure out what's causing the divergence. The property itself looks like this (I've ripped out the proper testing and the shrinking code to try to minimise the code that actually runs):
prop_parse_expr :: Property
prop_parse_expr =
  forAll arbitrary $ \pe ->
    let str = prettyParExpr 0 pe in
    counterexample ("Rendered to: " ++ show str) $
    trace ("STARTING TEST: " ++ show str) $
    case parse (expr <* eof) "" str of
      Left _  -> trace "NOPE" False
      Right _ -> trace "GOOD" True
If I compile with profiling (using stack test --profile), I can run the resulting binary with RTS options. Aha, I thought, and ran with -xc, expecting a helpful stack trace when I sent a SIGINT to the stuck job. It seems not. Running with
./long/path/to/foo-test -j1 --test-seed 1 +RTS -xc
I see this output:
STARTING TEST: "0"
GOOD
STARTING TEST: "(x [( !0 )]) "
STARTING TEST: "({ 2 {( !0 )}} ) "
STARTING TEST: "{ 2{ ( x[0? {( 0) ,( x ) } :((0 )? (x ):0) -: ( -(^( x )) ) ]), 0**( x )} } "
STARTING TEST: "| (0? (x[({ 1{ (0)? x : ( 0 ) }} ) ]) :(~&( 0) ?( x):( (x ) ^( x ) )))"
STARTING TEST: "(0 )"
STARTING TEST: "0"
^C*** Exception (reporting due to +RTS -xc): (THUNK_STATIC), stack trace:
Test.Framework.Improving.runImprovingIO,
called from Test.Framework.Providers.QuickCheck2.runProperty,
called from Test.Framework.Providers.QuickCheck2.runTest,
called from Test.Framework.Runners.Core.runSimpleTest,
called from Test.Framework.Runners.Core.runTestTree.go,
called from Test.Framework.Runners.Core.runTestTree,
called from Test.Framework.Runners.Core.runTests',
called from Test.Framework.Runners.Core.runTests,
called from Test.Framework.Runners.Console.defaultMainWithOpts,
called from Test.Framework.Runners.Console.defaultMainWithArgs,
called from Test.Framework.Runners.Console.defaultMain,
called from Main.main
<snip: 2 more identical backtraces>
*** Exception (reporting due to +RTS -xc): (THUNK_STATIC), stack trace:
Test.Framework.Runners.Console.Utilities.hideCursorDuring,
called from Test.Framework.Runners.Console.Run.showRunTestsTop,
called from Test.Framework.Improving.runImprovingIO,
called from Test.Framework.Providers.QuickCheck2.runProperty,
called from Test.Framework.Providers.QuickCheck2.runTest,
called from Test.Framework.Runners.Core.runSimpleTest,
called from Test.Framework.Runners.Core.runTestTree.go,
called from Test.Framework.Runners.Core.runTestTree,
called from Test.Framework.Runners.Core.runTests',
called from Test.Framework.Runners.Core.runTests,
called from Test.Framework.Runners.Console.defaultMainWithOpts,
called from Test.Framework.Runners.Console.defaultMainWithArgs,
called from Test.Framework.Runners.Console.defaultMain,
called from Main.main
Can anyone tell me:
Why I see multiple STARTING TEST lines without GOOD or NOPE between them, despite the -j1?
How I get an actual stack trace that shows where the test is allocating all its memory?
Thanks for any ideas!
For anyone who finds this question: the problem with my code was that my Arbitrary instance for expressions didn't constrain sizes properly, so it sometimes tried to build enormous trees. See the "Generating Recursive Data Types" section of the QuickCheck manual for what I should have been doing!
I found that running commands like:
./long/path/to/foo-test -o3 +RTS -xc
helped me figure out what was going on. Strangely, the backtrace still shows several threads of execution. I don't really understand why, but at least I could then see that I was spending time in my "makeAnExpr" function. The trick is to tune the timeout (3 seconds above) so that it doesn't kill a test until it's well and truly stuck, but does stop it before it starts eating all your RAM!
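The fix pointed at above (the "Generating Recursive Data Types" section of the QuickCheck manual) is the standard sized-generator pattern. Here is a minimal sketch with a made-up Expr type, since the question's real AST isn't shown:

import Test.QuickCheck

-- Hypothetical expression type standing in for the question's AST.
data Expr = Lit Int | Var String | Neg Expr | Add Expr Expr
  deriving (Show)

instance Arbitrary Expr where
  arbitrary = sized genExpr
    where
      -- The size parameter bounds the tree, so generated expressions
      -- cannot grow without limit.
      genExpr 0 = oneof [Lit <$> arbitrary, Var <$> elements ["x", "y"]]
      genExpr n = oneof
        [ Lit <$> arbitrary
        , Var <$> elements ["x", "y"]
        , Neg <$> genExpr (n - 1)
        , Add <$> genExpr (n `div` 2) <*> genExpr (n `div` 2)
        ]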

Is it possible to skip tests in HSpec test suite?

In most programming languages it is easy to skip a test under some circumstances. Is there a proper way to do that in a Haskell HSpec-based test suite?
"Changing it to xit marks the corresponding spec item as pending."
source:
http://hackage.haskell.org/package/hspec-2.7.8/docs/Test-Hspec.html
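For example, a minimal self-contained sketch using xit; the descriptions and test bodies are placeholders:

import Test.Hspec

main :: IO ()
main = hspec $ do
  describe "database" $ do
    xit "talks to the server" $ do      -- reported as pending, never run
      1 + 1 `shouldBe` (2 :: Int)
    it "parses a config file" $ do      -- this item still runs
      words "a b" `shouldBe` ["a", "b"]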
If I understand correctly, the core of HSpec doesn't have a special solution for this case. But you can skip tests silently, without reporting them as skipped. For example:
main = hspec $ do
  ...
  when isDatabaseServerRunning $ do
    describe "database" $ do
      it "test some one" $ do
        ...
Another solution may be to use a function like mapSpecItem_ to change the result of a test to Pending, with a message explaining why it was skipped. For example:
skip :: String -> Bool -> SpecWith a -> SpecWith a
skip _ False = id
skip why True = mapSpecItem_ updateItem
  where
    updateItem x = x { itemExample = \_ _ _ -> return . Pending . Just $ why }

How can QuickCheck test all properties for each sample

...instead of generating 100 new random samples for each property?
My test suite contains the Template Haskell hack explained here [1] to
test all functions named prop_*. Running the test program prints
=== prop_foo from tests/lala.lhs:20 ===
+++ OK, passed 100 tests.
=== prop_bar from tests/lala.lhs:28 ===
+++ OK, passed 100 tests.
and it looks like it goes through 100 random samples for each of the
properties.
The problem is: generating the samples is quite expensive, checking the
properties is not. So I'd like a way to pass each random sample to each
of the prop_* functions instead of creating (#properties * 100) new samples.
Is there anything like that built in? Actually, I think I'd need a
replacement for the splice
$(forAllProperties)
in
main :: IO ()
main = do
  args <- parseArgs <$> getArgs
  s <- $(forAllProperties) $ quickCheckWithResult args
  s ? return () $ exitFailure
  where
    parseArgs as =
      null as ? stdArgs $ stdArgs{ maxSuccess = read $ head as }
[1] Simple haskell unit testing, and
QuickCheck exit status on failures, and cabal integration
In this post you can see how to group tests: Stack Overflow post.
That user provides a very simple example of using Test.Tasty.QuickCheck.
Using testProperty and testGroup you can pass each random sample to each property.
In the following link you can check this package on Hackage: Test.Tasty.QuickCheck.
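A different way to get the effect asked for, not covered by the answer above: if the properties share an input type, they can be combined into a single property with QuickCheck's conjoin, so one generated sample feeds every check. A small sketch with made-up properties:

import Test.QuickCheck

-- Imagine these checks are cheap while the [Int] samples are expensive.
prop_foo, prop_bar :: [Int] -> Bool
prop_foo xs = length (reverse xs) == length xs
prop_bar xs = reverse (reverse xs) == xs

-- One sample per test case, checked against every property:
prop_all :: [Int] -> Property
prop_all xs = conjoin
  [ counterexample "prop_foo" (prop_foo xs)
  , counterexample "prop_bar" (prop_bar xs)
  ]

main :: IO ()
main = quickCheck prop_all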

Named documentation chunk with Haddock

I'm trying to document my sources and I have a problem with named chunks.
I would like to document a large number of patterns (more than 50), keeping each explanation near the code while gathering all the explanations in the module header.
I tried the following:
module Haddock
(
-- A test for Haddock named chunk
--
-- $doc1
--
-- $doc2
--
-- $doc3
)
where
-- $doc1
-- First series of explanations
myFunction 0 = 1
myFunction 1 = 1
-- $doc2
-- Second series of explanations
myFunction 2 = 3
-- $doc3
-- Some examples
--
-- > myFunction a
--
myFunction a = a * a
but only the first chunk shows up in the generated documentation.
Is there something wrong with my Haddock markup, or is it a bug?
