Changing a single record field to be strict leads to worse performance

I have a program that uses haskell-src-exts, and to improve performance I decided to make some record fields strict. This resulted in much worse performance.
Here's the complete module that I'm changing:
{-# LANGUAGE DeriveDataTypeable, BangPatterns #-}
module Cortex.Hackage.HaskellSrcExts.Language.Haskell.Exts.SrcSpan(
SrcSpan, srcSpan, srcSpanFilename, srcSpanStartLine,
srcSpanStartColumn, srcSpanEndLine, srcSpanEndColumn,
) where
import Control.DeepSeq
import Data.Data
data SrcSpan = SrcSpanX
    { srcSpanFilename    :: String
    , srcSpanStartLine   :: Int
    , srcSpanStartColumn :: Int
    , srcSpanEndLine     :: Int
    , srcSpanEndColumn   :: Int
    }
    deriving (Eq, Ord, Show, Typeable, Data)
srcSpan :: String -> Int -> Int -> Int -> Int -> SrcSpan
srcSpan fn !sl !sc !el !ec = SrcSpanX fn sl sc el ec
instance NFData SrcSpan where
    rnf (SrcSpanX x1 x2 x3 x4 x5) = rnf x1
Note that the only way to construct a SrcSpan is by using the srcSpan function which is strict in all the Ints.
With this code my program (sorry, I can't share it) runs in 163s.
Now change a single line, e.g.,
, srcSpanStartLine :: !Int
I.e., the srcSpanStartLine field is now marked as strict. My program now takes 198s to run. So making that one field strict increases the running time by about 20%.
How is this possible? The code for the srcSpan function should be the same regardless since it is already strict. The code for the srcSpanStartLine selector should be a bit simpler since it no longer has to evaluate.
I've experimented with -funbox-strict-fields and -funbox-small-strict-fields, both on and off. It doesn't make any noticeable difference. I'm using GHC 7.8.3.
Has anyone seen something similar? Any bright ideas what might cause it?

With some more investigation I can answer my own question. The short answer is uniplate.
Slightly longer answer. In one place I used uniplate to get the children of a Pat (haskell-src-exts type for patterns). The call looked like children p and the type of this instance of children was Pat SrcSpanInfo -> [Pat SrcSpanInfo]. So it's doing no recursion, just returning the immediate children of a node.
Uniplate uses two very different methods depending on whether there are strict fields in the type you're operating on. Without strict fields it is reasonably fast; with strict fields it switches to using gfoldl and is incredibly slow. And even though my use of uniplate didn't directly involve the strict field, everything slowed down.
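For reference, the call in question was essentially of this shape (a minimal sketch, not the actual program; imports assume a recent haskell-src-exts where the annotated AST lives in Language.Haskell.Exts.Syntax):
import Data.Generics.Uniplate.Data (children)
import Language.Haskell.Exts.Syntax (Pat)
import Language.Haskell.Exts.SrcLoc (SrcSpanInfo)

-- Return only the immediate sub-patterns of a pattern, no recursion.
immediateChildren :: Pat SrcSpanInfo -> [Pat SrcSpanInfo]
immediateChildren p = children p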
Conclusion: Beware uniplate if you have a strict field anywhere in sight!

Related

Moving from static configuration to dynamic configuration

I am working on a Haskell project where the settings currently live in a file called Setting.hs, so they are checked at compile time and can be accessed globally.
However, since that is a bit too static, I was considering reading the configuration at runtime. The codebase is huge, and it seems it would take considerable effort to pass the settings around, e.g. as an argument through the whole program flow, since they may be accessed arbitrarily from anywhere.
Are there any design patterns, libraries or even GHC extensions that can help here without refactoring the whole codebase?
Thanks for the hints! I came up with a minimal example which shows how I will go about it with the reflection package:
{-# LANGUAGE Rank2Types, FlexibleContexts, UndecidableInstances #-}
import Data.Reflection

data GlobalConfig = MkGlobalConfig
    { getVal1 :: Int
    , getVal2 :: Double
    , getVal3 :: String
    }

main :: IO ()
main = do
    let config = MkGlobalConfig 1 2.0 "test"
    -- initialize the program flow via 'give'
    print $ give config (doSomething 2)
    -- this works too, the type is properly inferred
    print $ give config (3 + 3)
    -- and this as well
    print $ give config (addInt 7 3)

-- We need the Given constraint, because we call 'somethingElse', which finally
-- calls 'given' to retrieve the configuration. So it has to be propagated up
-- the program flow.
doSomething :: (Given GlobalConfig) => Int -> Int
doSomething = somethingElse "abc"

-- since we call 'given' inside the function to retrieve the configuration,
-- we need the Given constraint
somethingElse :: (Given GlobalConfig) => String -> Int -> Int
somethingElse str x
    | str == "something"      = x + getVal1 given
    | getVal3 given == "test" = 0 + getVal1 given
    | otherwise               = round (fromIntegral x * getVal2 given)

-- no need for a Given constraint here, since this does not use 'given'
-- or any other function that would
addInt :: Int -> Int -> Int
addInt = (+)
The Given class is a bit easier to work with and perfectly suitable for a global configuration model. All functions that do not make use of given (which gets the value) don't seem to need the class constraint. That means I only have to change functions that actually access the global configuration.
That's what I was looking for.
What you are asking for, if it were possible, would break referential transparency, at least for pure functions (a pure function's result can depend on a global constant, but surely not on the contents of a config file read at runtime?).
Usually people avoid that kind of situation by passing the configuration around implicitly via a Monad. Alternatively (if you are happy to refactor your code a bit) you can use the ImplicitParams extension, which in theory was designed to solve exactly this kind of problem but in practice doesn't work all that well.
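A minimal sketch of the monadic approach, using ReaderT from mtl (Config, App, and the field names here are made up for illustration):
import Control.Monad.Reader

data Config = Config { verbose :: Bool, outputDir :: FilePath }

-- The whole program runs in this monad and can 'ask' for the configuration.
type App = ReaderT Config IO

step :: App ()
step = do
    cfg <- ask
    liftIO $ putStrLn ("writing to " ++ outputDir cfg)

main :: IO ()
main = do
    let cfg = Config True "/tmp"   -- in a real program, parsed from a file at startup
    runReaderT step cfg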
However, if you really need it, you can use unsafePerformIO and an IORef to get a top-level piece of mutable state, which is dirty and generally frowned upon. You need top-level mutable state here because you have to be able to fill in ("mutate") your initial config when you load it.
Then you get something like this:
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

myGlobalVar :: IORef Int
{-# NOINLINE myGlobalVar #-}
myGlobalVar = unsafePerformIO (newIORef 17)
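The rest of the program can then read and update it with the usual IORef operations (getConfig and setConfig are made-up names, just a sketch):
-- Sketch: accessing the global IORef defined above.
getConfig :: IO Int
getConfig = readIORef myGlobalVar

setConfig :: Int -> IO ()
setConfig = writeIORef myGlobalVar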

Which dictionary does GHC choose when more than one is in scope?

Consider the following example:
import Data.Constraint
class Bar a where
    bar :: a -> a

foo :: (Bar a) => Dict (Bar a) -> a -> a
foo Dict = bar
GHC has two choices for the dictionary to use when selecting a Bar instance in foo: it could use the dictionary from the Bar a constraint on foo, or it could use the runtime Dict to get a dictionary. See this question for an example where the dictionaries correspond to different instances.
Which dictionary does GHC use, and why is it the "correct" choice?
It just picks one. This isn't the correct choice; it's a pretty well-known wart. You can cause crashes this way, so it's a pretty bad state of affairs. Here is a short example using nothing but GADTs that demonstrates that it is possible to have two different instances in scope at once:
-- file Class.hs
{-# LANGUAGE GADTs #-}
module Class where

data Dict a where
    Dict :: C a => Dict a

class C a where
    test :: a -> Bool

-- file A.hs
module A where

import Class

instance C Int where
    test _ = True

v :: Dict Int
v = Dict

-- file B.hs
module B where

import Class

instance C Int where
    test _ = False

f :: Dict Int -> Bool
f Dict = test (0 :: Int)

-- file Main.hs
import A
import B

main = print (f v)
You will find that Main.hs compiles just fine, and even runs. It prints True on my machine with GHC 7.10.1, but that's not a stable outcome. Turning this into a crash is left to the reader.
GHC just picks one, and this is the correct choice. Any two dictionaries for the same constraint are supposed to be equal.
OverlappingInstances and IncoherentInstances are basically equivalent in destructive power; they both give up instance coherence by design (the property that any two equal constraints in your program are satisfied by the same dictionary). OverlappingInstances gives you a little more ability to work out which instance will be used on a case-by-case basis, but this isn't that useful once you start passing around Dicts as first-class values and so on. I would only consider using OverlappingInstances when I consider the overlapping instances extensionally equivalent (e.g., a more efficient but otherwise equal implementation for a specific type like Int), but even then, if I care enough about performance to write that specialized implementation, isn't it a performance bug if it doesn't get used when it could be?
In short, if you use OverlappingInstances, you give up the right to ask the question of which dictionary will be selected here.
Now it's true that you can break instance coherence without OverlappingInstances. In fact you can do it without orphans and without any extensions other than FlexibleInstances (arguably the problem is that the definition of "orphan" is wrong when FlexibleInstances is enabled). This is a very long-standing GHC bug, which hasn't been fixed in part because (a) it actually can't cause crashes directly as far as anybody seems to know, and (b) there might be a lot of programs that actually rely on having multiple instances for the same constraint in separate parts of the program, and that might be hard to avoid.
Getting back to the main topic: in principle it's important that GHC can select any dictionary it has available to satisfy a constraint, because even though they are supposed to be equal, GHC might have more static information about some of them than others. Your example is a little too simple to be illustrative, but imagine you passed an argument to bar. In general GHC knows nothing about the dictionary passed in via Dict, so it has to treat the call as a call to an unknown function; but if you called foo at a specific type T for which there was a Bar T instance in scope, then GHC would know that the bar from the Bar a constraint dictionary was T's bar, could generate a call to a known function, and could potentially inline T's bar and do further optimizations as a result.
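To make that concrete, here is a sketch (T, its Bar instance, and callFoo are made-up names; Bar, foo, and Dict are as in the question above):
newtype T = T Int

instance Bar T where
    bar (T n) = T (n + 1)

-- At this call site GHC solves the Bar T constraint itself, so it statically
-- knows which 'bar' that dictionary contains and can inline it. The dictionary
-- hidden inside the Dict argument stays opaque to the optimizer.
callFoo :: Dict (Bar T) -> T -> T
callFoo d t = foo d t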
In practice, GHC is currently not this smart: it just uses the innermost dictionary available. It would probably already be an improvement to always use the outermost dictionary. But cases like this, where multiple dictionaries are available, are not very common, so we don't have good benchmarks to test on.
Here's a test:
{-# LANGUAGE FlexibleInstances, OverlappingInstances, IncoherentInstances #-}
import Data.Constraint

class C a where foo :: a -> String

instance C [a]  where foo _ = "[a]"
instance C [()] where foo _ = "[()]"

aDict :: Dict (C [a])
aDict = Dict

bDict :: Dict (C [()])
bDict = Dict

bar1 :: String
bar1 = case (bDict, aDict :: Dict (C [()])) of
    (Dict, Dict) -> foo [()] -- output: "[a]"

bar2 :: String
bar2 = case (aDict :: Dict (C [()]), bDict) of
    (Dict, Dict) -> foo [()] -- output: "[()]"
GHC above happens to use the "last" dictionary which was brought into scope. I wouldn't rely on this, though.
If you limit yourself to overlapping instances only, then you won't be able to bring into scope two different dictionaries for the same type (as far as I can see), and everything should be fine, since the choice of dictionary becomes immaterial.
However, incoherent instances are another beast, since they allow you to commit to a generic instance and then use it at a type which has a more specific instance. This makes it very hard to understand which instance will be used.
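A self-contained sketch of that "commit to the generic instance" behaviour (not part of the original test program):
{-# LANGUAGE FlexibleInstances, IncoherentInstances #-}

class C a where foo :: a -> String

instance C [a]  where foo _ = "[a]"
instance C [()] where foo _ = "[()]"

-- With IncoherentInstances, GHC may discharge the C [b] constraint arising
-- here with the generic C [a] instance instead of propagating it outward.
bar :: [b] -> String
bar = foo

main :: IO ()
main = putStrLn (bar [()])  -- prints "[a]" even though C [()] is more specific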
In short, incoherent instances are evil.
Update: I ran some further tests. Using only overlapping instances and an orphan instance in a separate module you can still obtain two different dictionaries for the same type. So, we need even more caveats. :-(

Can Haskell inline functions passed as an argument?

Let's say I pass a small function f to map. Can Haskell inline f with map to produce a small imperative loop? If so, how does Haskell keep track of what function f really is? Can the same be done with Arrow combinators?
If f is passed in as an argument, then no, probably not. If f is the name of a top-level function or a local function, then probably yes.
foobar f = ... map f ...
-- Probably not inlined.
foobar = ... map (\ x -> ...) ...
-- Probably inlined.
That said, I gather that most of the performance difference between inlined and non-inlined code comes not from the inlining itself, but rather from the additional optimisations the inlining enables.
The only way to be "sure" about these things is to actually write the code, actually compile it, and have a look at the Core that gets generated. And the only way to know if it makes a difference (positive or negative) is to actually benchmark the thing.
The definition of the Haskell language does not mandate a Haskell implementation to inline code, or to perform any kind of optimization. Any implementation is free to apply any optimization it may deem appropriate.
That being said, Haskell is nowadays usually compiled with GHC, which does optimize Haskell code. For inlining, GHC uses heuristics to decide whether something should be inlined or not. The general advice is to turn optimization on with -O2 and check the output of the compiler. You can see the produced Core with -ddump-simpl, or the assembly code with -ddump-asm. Some other flags can be useful as well.
If you then see that GHC is not inlining stuff you would like to, you can provide a hint to the compiler with {-# INLINE foo #-} and related pragmas.
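For example, a small helper you want inlined at its call sites can be annotated like this (made-up names, just to show where the pragma goes):
module Example (sumSquares) where

-- Small, frequently used helper: ask GHC to inline it at call sites.
square :: Int -> Int
square x = x * x
{-# INLINE square #-}

sumSquares :: [Int] -> Int
sumSquares = sum . map square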
Be wary of mindlessly applying optimizations, though. Programmers often spend their time optimizing parts of the program that have a negligible impact on overall performance. To avoid this, it is strongly recommended to profile your code first, so that you know where your program actually spends its time.
Here is an example where GHC does inline a function passed as an argument:
import qualified Data.Vector.Unboxed as U
import qualified Data.Vector as V
plus :: Int -> Int -> Int
plus = (+)
sumVect :: V.Vector Int -> Int
sumVect = V.foldl1 plus
plus is passed as the argument of foldl1, which results in summing a vector of integers. In the Core, plus is inlined and optimized to the unboxed GHC.Prim.+# :: Int# -> Int# -> Int#:
letrec {
  $s$wfoldlM_loop_s759
    :: GHC.Prim.Int# -> GHC.Prim.Int# -> GHC.Prim.Int#
  $s$wfoldlM_loop_s759 =
    \ (sc_s758 :: GHC.Prim.Int#) (sc1_s757 :: GHC.Prim.Int#) ->
      case GHC.Prim.tagToEnum# @ Bool (GHC.Prim.>=# sc_s758 ww1_s748)
      of _ {
        False ->
          case GHC.Prim.indexArray#
                 @ Int ww2_s749 (GHC.Prim.+# ww_s747 sc_s758)
          of _ { (# ipv1_X72o #) ->
          case ipv1_X72o of _ { GHC.Types.I# y_a5Kg ->
          $s$wfoldlM_loop_s759
            (GHC.Prim.+# sc_s758 1#) (GHC.Prim.+# sc1_s757 y_a5Kg)
          }
          };
        True -> sc1_s757
      }; }
That's the GHC.Prim.+# sc1_s757 y_a5Kg. You can add simple arithmetic inside the plus function and watch this Core expression expand accordingly.

Couldn't match kind `*' against `#'

What the heck is going on here:
"Couldn't match kind `*' against `#'"
I was trying the following in GHCi using TemplateHaskell (ghci -XTemplateHaskell)
$(reify ''Show >>= dataToExpQ (const Nothing))
I was hoping to get an Exp out of this (which does have an instance of Show). I am doing this to insert information about Haskell types into an application, such that it is available as actual data, not as a string.
My goal is the following:
info :: Info
info = $(reify ''Show >>= dataToExpQ (const Nothing))
I really don't understand that error message, what is '#' anyway? If there is #, is there also # -> # or * -> #? Is it something that relates to kinds like kinds relate to types (though I would not know what that could be)?
Okay, so I do understand now that GHC has a hierarchy of kinds and that `#' is a special kind for unboxed types. All well and good, but why does this error pop up? Maybe unboxed types do not play well with generics?
I'm not fully sure this makes sense to me yet, since I would consider unboxed types an optimization performed by the compiler. I also thought that if an instance of Data exists, it needs to be there for all types that could possibly be included in the data structure.
Upon further investigation I believe that Names pose the problem. Is there a way to circumvent them in dataToExpQ? How do I use that argument anyway?
You're right, it is the Names that cause the problem. More specifically, the problem is that the NameFlavour data type has unboxed integers in some of its fields.
There's a Haddock note on the Data NameFlavour instance that raises some red flags. And if you click through to the source, you'll see that the gfoldl definition essentially treats the unboxed integers like boxed integers. (There's really not much else it could do…) This ultimately causes the error you're seeing because dataToExpQ — having been tricked by the deceptive Data NameFlavour instance — builds an Exp term that applies NameU to an (Int :: *) when NameU actually expects an (unboxed) (Int# :: #).
So the problem is that the Data instance for NameFlavour disobeys the invariant assumed by dataToExpQ. But not to worry! This scenario falls squarely under the reason that dataToExpQ takes an argument: the argument lets us provide special treatment for troublesome types. Below, I do this in order to correctly reify the NameFlavour constructors that have unboxed integer fields.
There may be solutions out there for this, but I'm not aware of them, so I rolled up the following. It requires a separate module because of the TH staging restriction.
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE MagicHash #-}
module Stage0 where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax
import GHC.Types (Int(I#))
import GHC.Prim (Int#)

unboxed :: Int# -> Q Exp
unboxed i = litE $ intPrimL $ toInteger $ I# i -- TH does support unboxed literals

nameFlavorToQExp :: NameFlavour -> Maybe (Q Exp)
nameFlavorToQExp n = case n of
    NameU i -> Just [| NameU $(unboxed i) |]
    NameL i -> Just [| NameL $(unboxed i) |]
    _       -> Nothing
And then the following compiles for me.
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH
import Language.Haskell.TH.Quote
import Generics.SYB
import Stage0
info :: Info
info = $(reify ''Show >>= dataToExpQ (mkQ Nothing nameFlavorToQExp))
CAVEAT PROGRAMMER The unboxed integers we're bending over backwards for here correspond to "uniques" that GHC uses internally. They are not necessarily expected to be serialized. Depending on how you're using the resulting Info value, this may cause explosions.
Also note when reifying Show, you're also reifying every instance of Show that's in scope.
There's a lot of them — this generates a pretty big syntax term.
As the documentation says, these instances do not include the method definitions.
HTH.

Python-"is"-like equality operator for Haskell/GHC

Is there a GHC-specific "unsafe" extension to ask whether two Haskell references point to the same location?
I'm aware this can break referential transparency if not used properly. But there should be little harm (unless I'm missing something) if it is used very carefully, as a means of optimization that short-cuts recursive (or expensive) data traversals, for example when implementing an optimized Eq instance:
instance Eq ComplexTree where
    a == b = (a `unsafeSameRef` b) || (a `deepCompare` b)
provided deepCompare is guaranteed to be True whenever unsafeSameRef says True (but not necessarily the other way around).
EDIT/PS: Thanks to the answer pointing to System.Mem.StableName, I was able to also find the paper Stretching the storage manager: weak pointers and stable names in Haskell which happens to have addressed this very problem already over 10 years ago...
GHC's System.Mem.StableName solves exactly this problem.
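A minimal sketch of the unsafeSameRef from the question built on stable names (not from the original answer; equal stable names imply the same object, while unequal ones prove nothing):
import System.Mem.StableName (makeStableName)
import System.IO.Unsafe (unsafePerformIO)

-- Equal stable names imply the two arguments are the same heap object;
-- unequal stable names tell you nothing.
unsafeSameRef :: a -> a -> Bool
unsafeSameRef x y = unsafePerformIO $ do
    sx <- makeStableName x
    sy <- makeStableName y
    return (sx == sy)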
There's a pitfall to be aware of:
Pointer equality can change strictness. I.e., you might get pointer equality saying True when in fact the real equality test would have looped because of, e.g., a circular structure. So pointer equality ruins the semantics (but you knew that).
I think StablePointers might be of help here
http://www.haskell.org/ghc/docs/6.12.2/html/libraries/base-4.2.0.1/Foreign-StablePtr.html
Perhaps this is the kind of solution you are looking for:
import Foreign.StablePtr (newStablePtr, freeStablePtr)
import System.IO.Unsafe (unsafePerformIO)

unsafeSameRef :: a -> a -> Bool
unsafeSameRef x y = unsafePerformIO $ do
    a <- newStablePtr x
    b <- newStablePtr y
    let z = a == b
    freeStablePtr a
    freeStablePtr b
    return z
There's unpackClosure# in GHC.Prim, with the following type:
unpackClosure# :: a -> (# Addr#,Array# b,ByteArray# #)
Using that you could whip up something like:
{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Prim

eq a b = case unpackClosure# a of
    (# a1, a2, a3 #) -> case unpackClosure# b of
        (# b1, b2, b3 #) -> eqAddr# a1 b1
And in the same package, there's the interestingly named reallyUnsafePtrEquality# of type
reallyUnsafePtrEquality# :: a -> a -> Int#
But I'm not sure what the return value of that one is - going by the name it will lead to much gnashing of teeth.
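For reference (not part of the original answer): the primop returns 1# when both arguments are currently the same heap object, so it can be wrapped like this:
{-# LANGUAGE MagicHash #-}
import GHC.Exts (reallyUnsafePtrEquality#, isTrue#)

-- True only if the two arguments are currently the same heap object;
-- evaluation can change the answer, hence the scary name.
sameRef :: a -> a -> Bool
sameRef x y = isTrue# (reallyUnsafePtrEquality# x y)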
