How can I write property tests for Template Haskell splices?

I always like to be able to write property tests for my code. When that code is Template Haskell, I'm mostly interested in the behavior of the generated code. Intuitively, I'd like to be able to write things like
import Test.QuickCheck
import Language.Haskell.TH.Syntax (Lift, liftTyped)
checkLift :: (Eq a, Lift a, Show a) => a -> Property
checkLift a = $$(liftTyped a) === a
prop_myLiftWorks :: MyType -> Property
prop_myLiftWorks = checkLift
Unfortunately, the Template Haskell stage restriction makes this exact approach totally impossible. The generated splices have to be compiled separately to run. Is there some easy way to spin up a GHC instance in the test suite (or do something with the ghc package or haskell-language-server?) and throw splices at it?
Ideally, I'd want to be able to work with declaration splices as well as expression splices. For example, to write property tests for deriving-compat, I'd want to generate arbitrary datatype declarations of various sorts and then compare Template Haskell-derived instances to GHC-derived ones.
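One concrete route for the "spin up a GHC instance" idea, at least for expression splices, is the hint library (an interpreter built on the ghc package): pretty-print what the splice would produce with runQ and pprint, hand that source text to an in-process interpreter, and compare. The following is only a sketch under assumptions of mine (the Int instantiation, the helper name, and using untyped lift rather than liftTyped), not an established recipe:

import Language.Haskell.Interpreter (as, interpret, runInterpreter, setImports)
import Language.Haskell.TH (pprint)
import Language.Haskell.TH.Syntax (lift, runQ)

-- Sketch: render the expression that 'lift x' would splice in, evaluate
-- that source text with hint, and check it round-trips. This works in IO
-- because 'lift' does not need 'reify'; Int is just a concrete stand-in.
checkLiftViaHint :: Int -> IO Bool
checkLiftViaHint x = do
  src <- pprint <$> runQ (lift x)
  result <- runInterpreter $ do
    setImports ["Prelude"]
    interpret src (as :: Int)
  pure (either (const False) (== x) result)

This only covers expression splices; declaration splices would need the interpreter to load a whole generated module instead.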

Related

How to collect values spread throughout a Haskell codebase

I have a web application written in Haskell (using ghcjs on the client side and ghc on the server side) and I need a way to collect the CSS values which are spread throughout the modules. Currently I use a technique involving a CssStyle class and template haskell. When a module needs to export some CSS it creates a CssStyle instance for some type (the type has no significance except that it must be unique.) In the top level all the CssStyle instances are retrieved using the reifyInstances function from template haskell.
This approach has at least two drawbacks: You have to create meaningless types to attach the instances to, and you have to be sure all the instances are imported in the place where you scan and turn into real CSS. Can anyone think of a more beautiful way to collect data embedded in Haskell code?
================
Quelklef has requested some source code demonstrating the current solution:
{-# LANGUAGE AllowAmbiguousTypes, OverloadedStrings, MultiParamTypeClasses, TemplateHaskell, LambdaCase, FunctionalDependencies, TypeApplications #-}
import Clay
import Control.Lens hiding ((&))
import Data.Proxy
import Language.Haskell.TH
class CssStyle a where cssStyle :: Css
-- | Collect all the in scope instances of CssStyle and turn them into
-- pairs that can be used to build scss files. Result expression type
-- is [(FilePath, Css)].
reifyCss :: Q Exp
reifyCss = do
  insts <- reifyInstances ''CssStyle [VarT (mkName "a")]
  listE (concatMap (\case
           InstanceD _ _cxt (AppT _cls typ@(ConT tname)) _decs ->
             [ [|($(litE (stringL (show tname))), $(appTypeE [|cssStyle|] (pure typ)))|] ]
           _ -> []) insts)
data T1 = T1
instance CssStyle T1 where cssStyle = byClass "c1" & flexDirection row
data T2 = T2
instance CssStyle T2 where cssStyle = byClass "c2" & flexDirection column
-- Need to run this in the interpreter because of template haskell stage restriction:
--
-- > fmap (over _2 (renderWith compact [])) ($reifyCss :: [(String, Css)])
-- [("Main.T2",".c2{flex-direction:column}"),("Main.T1",".c1{flex-direction:row}")]
The point here is that any CssStyle instance from any module imported here will appear in this list, not only those defined locally.
Hmm...
I do not officially recommend your current approach. It makes use of typeclasses in a highly unorthodox way, so it's unlikely to act exactly as you'd like. As you've already noted, in order for it to work you need to make sure that all CssStyle instances are in scope, which is pretty arcane behaviour. Also, the current approach does not compose well, by which I mean that all of your CSS-related computation happens in the global context.
Unfortunately, I don't know of any canonical way to do what you want at compile-time.
However, I do have one idea. Most programs run in top-level "industrial" monads, and I'm assuming your program does as well. You could wrap your industrial monad with a new applicative (not monad) F. The role of this applicative is to allow subprograms to propagate their CSS needs to callers. Concretely, there would be a function style :: Css -> F () which acts akin to how tell acts in a writer monad. There would also be affordances to embed actions from your industrial monad into F. Then each module which has its own CSS exports its API wrapped F; doing this tracks the CSS requirements. There would be a function compileCss :: F a -> Css which builds the composite CSS style and does not execute any effectful operations embedded within F. Additionally, there would be a function execute :: F a -> IO a which executes the actions embedded in the F a value. Then main could make use of compileCss to emit the CSS, and make use of execute to separately run the program.
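Here is a rough sketch of what F could look like, assuming the industrial monad is simply IO; every name below is illustrative rather than taken from an existing library:

import Clay (Css)

-- 'F' pairs the CSS a subprogram needs with the IO it wants to run.
data F a = F Css (IO a)

instance Functor F where
  fmap f (F css io) = F css (fmap f io)

instance Applicative F where
  pure x = F (pure ()) (pure x)
  F cssF runF <*> F cssX runX = F (cssF >> cssX) (runF <*> runX)

-- Record a CSS requirement, like 'tell' in a writer.
style :: Css -> F ()
style css = F css (pure ())

-- Embed an effectful action that contributes no CSS.
embed :: IO a -> F a
embed io = F (pure ()) io

-- Build the composite stylesheet without running any effects.
compileCss :: F a -> Css
compileCss (F css _) = css

-- Run the embedded effects, discarding the CSS.
execute :: F a -> IO a
execute (F _ io) = io

main would then call compileCss once to emit the stylesheet, and execute to actually run the program.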
I admit this is somewhat awkward... wrapping all your existing code in F will be annoying at best. However, I do think it is at least correct, insofar as it tracks effects.
Perhaps the proper answer is to use an existing component-based web framework, which allows you to define your component markup and styling in the same place? Some of them support emitting to static HTML.

How to introspect a Haskell file to get the types of its definitions

I have many files that must be processed automatically. Each file holds the response of one student to an exercise which asks the student to give definitions for some functions given a type for each function.
My idea is to have a Haskell script that loads each student file and verifies that each function has the expected type.
A constraint is that the student files are not defined as modules.
How can I do this?
My best alternative so far is to spawn a GHCi process that will read stdin from a "test file" with GHCi commands, for example:
:load student1.hs
:t g
... and so on ...
then parse the returned output from GHCi to find the types of the functions in the student file.
Is there another clean way to load an arbitrary Haskell file and introspect its code?
Thanks
Haskell does not save type information at runtime. In Haskell, types are used for pre-runtime type checking at the static analysis phase and are later erased. You can read more about Haskell's type system here.
Is there a reason you want to know the type of a function at runtime? Maybe we can help with the problem itself :)
Edit based on your 2nd edit:
I don't have a good solution for you, but here is one idea that might work:
Run a script that, for each student module, will do the following (a driver sketch follows this list):
1. Take the name of the module and produce a file Test.hs:
   module Test where
   import [module-name]
   test :: a -> b -> [(b,a)]
   test = g
2. Run ghc -fno-code Test.hs.
3. Check that the output does not contain type errors.
4. Write the results into a log file.
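A small driver for that procedure might look like the following sketch; the module-name handling and the hard-coded expected type are assumptions carried over from above:

import System.Exit (ExitCode (..))
import System.Process (readProcessWithExitCode)

-- Sketch: generate Test.hs for one student module, ask GHC to type-check
-- it without producing code, and report whether it succeeded.
checkStudent :: String -> IO Bool
checkStudent moduleName = do
  writeFile "Test.hs" $ unlines
    [ "module Test where"
    , "import " ++ moduleName
    , "test :: a -> b -> [(b,a)]"
    , "test = g"
    ]
  (code, _out, _err) <- readProcessWithExitCode "ghc" ["-fno-code", "Test.hs"] ""
  pure (code == ExitSuccess)

Appending the result (plus the captured compiler output) to a log file is then a simple mapM over the module names.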
I think if you have a dynamically determined number of .hs files, which you need to load, parse and introspect, you could/should use the GHC API instead.
See for example:
Using GHC API to compile Haskell sources to CORE and CORE to binary
https://mail.haskell.org/pipermail/haskell-cafe/2009-April/060705.html
These might not be something you can use directly — and I haven't done anything like this myself so far either — but these should get you started.
See also:
https://wiki.haskell.org/GHC/As_a_library
https://hackage.haskell.org/package/hint
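Using hint specifically, a first sketch might look like this; the assumption that a file without a module header loads as module Main is mine and worth double-checking against your GHC version:

import Language.Haskell.Interpreter
  (InterpreterError, loadModules, runInterpreter, setTopLevelModules, typeOf)

-- Sketch: load a student's file and ask hint for the inferred type of one
-- of its definitions, returned as a String.
typeOfStudentFn :: FilePath -> String -> IO (Either InterpreterError String)
typeOfStudentFn file name = runInterpreter $ do
  loadModules [file]
  setTopLevelModules ["Main"]
  typeOf name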
The closest Haskell feature to that is Data.Typeable.typeOf. Here's a GHCi session:
> import Data.Typeable
> typeOf (undefined :: Int -> Char)
Int -> Char
> typeOf (undefined :: Int -> [Char])
Int -> [Char]
> typeOf (undefined :: Int -> Maybe [Char])
Int -> Maybe [Char]
> :t typeOf
typeOf :: Typeable a => a -> TypeRep
Under the hood, the Typeable a constraint forces Haskell to retain some type tags until runtime, so that they can be retrieved by typeOf. Normally, no such tags exist at runtime. The TypeRep type above is the type for such tags.
That being said, having such information is almost never needed in Haskell. If you are using typeOf to implement something, you are likely doing it wrong.
If you are using that to defer type checks to run time, when they could have been performed at compile time, e.g. using a Dynamic-like type for everything, then you are definitely doing it wrong.
If the function is supposed to be exported with a specific name, I think probably the easiest way would be to just write a test script that calls the functions and checks they return the right results. If the test script doesn't compile, the student's submission is incorrect.
The alternative is to use either the GHC API (kinda hard), or play with Template Haskell (simpler, but still not that simple).
Yet another possibility is to load the student's code into GHCi and use the :browse command to dump out everything that's exported. You can then grep for the term you're interested in. That should be quite easy to automate.
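Driving GHCi like that from a harness is easy to sketch; the -ignore-dot-ghci flag and the naive word filter below are my own choices and may need refining:

import System.Process (readProcess)

-- Sketch: start a fresh GHCi on the student's file, send ':browse' on
-- stdin, and keep only the output lines that mention the wanted name.
browseFor :: FilePath -> String -> IO [String]
browseFor file name = do
  out <- readProcess "ghci" ["-ignore-dot-ghci", file] ":browse\n:quit\n"
  pure [l | l <- lines out, name `elem` words l]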
There's a catch, however: foo :: x -> x and foo :: a -> a are the same type, even though textually they don't match at all. You might contemplate trying to normalise the variable names, but it's worse: foo :: Int -> Int and foo :: Num x => x -> x don't look remotely the same, yet one type is an instance of the other.
...which I guess means I'm saying that my answer is bad? :-(

How can I automate testing a Template Haskell function?

I defined a function using Template Haskell which generates a function definition given some type. The type is basically
makeFunc :: Name -> Q [Dec]
Right now, I use the -ddump-splices switch with GHC to see the generated splices. How can I automate this to verify that different types yield the expected splices?
A basic approach might be to just redirect the generated splice to a file and then compare that, but the generated code may well be different since it involves various identifiers constructed via newName.
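One low-tech way to automate the comparison, rather than parsing -ddump-splices output, is to dump the splice yourself from a test module; here is a sketch, where the MakeFunc module, the splices/ path, and the Int argument are all illustrative assumptions (and names produced by newName will still carry their uniques, so the stored expectation may need normalising):

{-# LANGUAGE TemplateHaskell #-}
module MakeFuncGolden where

import Language.Haskell.TH (pprint, runIO)
import MakeFunc (makeFunc)  -- hypothetical module defining makeFunc

-- Splice the generated declarations as usual, but also write their
-- pretty-printed form to a file so it can be diffed against a previously
-- recorded version (the splices/ directory must already exist).
$(do decs <- makeFunc ''Int
     _ <- runIO (writeFile "splices/Int.actual" (unlines (map pprint decs)))
     pure decs)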

Safely serialize Text to Bytestring

The text package does not provide an instance of Binary to encode/decode text. I have read about and understand the reasoning behind this (namely, Text should be encoding-agnostic). However, I need a Binary instance of Text. I've found that there is a package called text-binary that does this. However, the instance is as follows:
instance Binary T.Text where
  put = put . T.encodeUtf8
  get = T.decodeUtf8 <$> get
Pretty good, except that decodeUtf8 is a partial function, so I'd rather be using decodeUtf8' and passing the failure through the Get monad. I can't figure out how to fail correctly with the Get monad. Just from looking around in Data.Binary.Get, I see this:
data Decoder a = Fail !B.ByteString {-# UNPACK #-} !ByteOffset String
               | Partial (Maybe B.ByteString -> Decoder a)
               | Done !B.ByteString {-# UNPACK #-} !ByteOffset a
Which seems to indicate that there is a way to do what I want. I just can't see how the library authors intend for it to be used. I appreciate any insights that a more learned mind than my own has to offer.
Well, though we tend to disregard it, the Monad class still has that yucky fail method:
get = do
  utf8'd <- get
  case T.decodeUtf8' utf8'd of
    Right t -> return t
    Left _  -> fail "No parse for UTF-8 Text"
I'm not sure if it should still be considered "correct" to use fail, but it seems to be the obvious thing for a case like this. I suppose even if it's removed from the Monad class, there'll be some other class MonadPlus m => MonadFail m where fail :: String -> m a which Get will be an instance of.
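For what it's worth, with the instance above in scope the failure then surfaces as an ordinary Left from decodeOrFail instead of an exception; a small usage sketch:

import qualified Data.Binary as Bin
import qualified Data.ByteString.Lazy as BL
import qualified Data.Text as T

-- Sketch: decoding arbitrary bytes as Text reports bad UTF-8 as a Left
-- (assuming the Binary T.Text instance above is in scope).
decodeText :: BL.ByteString -> Either String T.Text
decodeText bytes = case Bin.decodeOrFail bytes of
  Left  (_rest, _offset, err) -> Left err
  Right (_rest, _offset, t)   -> Right t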

Alpha conversion on a Haskell expression

Given a Haskell expression, I'd like to perform alpha conversion, i.e. rename some of the non-free variables.
I've started implementing my own function for this, which works on a haskell-src-exts Exp tree, however it turns out to be surprisingly nontrivial, so I can't help wondering - is there an established easy-to-use library solution for this kind of source conversion? Ideally, it should integrate with haskell-src-exts.
This is one of the problems where the "Scrap Your Boilerplate" style generic libraries shine!
The one I'm most familiar with is the uniplate package, but I don't actually have it installed at the moment, so I'll use the (very similar) functionality found in the lens package. The idea here is that it uses Data.Data.Data (which is the best qualified name ever) and related classes to perform generic operations in a polymorphic way.
Here's the simplest possible example:
alphaConvert :: Module -> Module
alphaConvert = template %~ changeName
changeName :: Name -> Name
changeName (Ident n) = Ident $ n ++ "_conv"
changeName n = n
The (%~) operator is from lens and just means to apply the function changeName to everything selected by the generic traversal template. So what this does is find every alphanumeric identifier and append _conv to it. Running this program on its own source produces this:
module AlphaConv where
import Language.Haskell.Exts
import Control.Lens
import Control.Lens.Plated
import Data.Data.Lens
instance Plated_conv Module_conv
main_conv
  = do ParseOk_conv md_conv <- parseFile_conv "AlphaConv.hs"
       putStrLn_conv $ prettyPrint_conv md_conv
       let md'_conv = alphaConvert_conv md_conv
       putStrLn_conv $ prettyPrint_conv md'_conv
alphaConvert_conv :: Module_conv -> Module_conv
alphaConvert_conv = template_conv %~ changeName_conv
changeName_conv :: Name_conv -> Name_conv
changeName_conv (Ident_conv n_conv)
  = Ident_conv $ n_conv ++ "_conv"
changeName_conv n_conv = n_conv
Not terribly useful since it doesn't distinguish between identifiers bound locally and those defined in an outside scope (such as being imported), but it demonstrates the basic idea.
lens may seem a bit intimidating (it has a lot more functionality than just this); you may find uniplate or another library more approachable.
The way you'd approach your actual problem would be a multi-part transformation that first selects the subexpressions you want to alpha-convert inside of, then uses a transformation on those to modify the names you want changed.
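Continuing the example above, one sketch of that two-step shape on the same pre-annotated haskell-src-exts AST might be the following; the _renamed suffix is arbitrary, only simple variable patterns are handled, and shadowing is still ignored just like in the simple version:

import Control.Lens ((%~), (&))
import Control.Lens.Plated (Plated (..), transform)
import Data.Data.Lens (template, uniplate)
import Language.Haskell.Exts

instance Plated Exp where plate = uniplate

-- Sketch: walk every expression bottom-up; at each lambda, rename only
-- the variables bound by simple 'PVar' patterns, in both the patterns
-- and the body.
alphaInLambdas :: Exp -> Exp
alphaInLambdas = transform go
  where
    go (Lambda loc pats body) =
      let bound = [n | PVar n <- pats]
          freshen n@(Ident s) | n `elem` bound = Ident (s ++ "_renamed")
          freshen n = n
      in Lambda loc (pats & template %~ freshen) (body & template %~ freshen)
    go e = e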

Resources