Dynamic typing over containers of a finite domain of basic types - Haskell

I have a problem with writing a simple function without repeating myself too much; below is a simplified example. The real program I am trying to write is a port of an in-memory database for a BI server from Python. In reality there are more distinct types (around 8) and much more logic, most of which is expressible as functions operating on polymorphic types such as Vector a, but some logic must still deal with the different types of values.
Wrapping each value separately (using a [(Int, WrappedValue)] type) is not an option for efficiency reasons - in the real code I am using unboxed vectors.
type Vector a = [(Int, a)] -- always sorted by fst

data WrappedVector = -- in fact there are 8 of them
    FloatVector (Vector Float)
  | IntVector (Vector Int)
  deriving (Eq, Show)

query :: [WrappedVector] -> [WrappedVector] -- equal length
query vectors = map (filterIndexW commonIndices) vectors
  where
    commonIndices = intersection [mapFstW vector | vector <- vectors]

intersection :: [[Int]] -> [Int]
intersection = head -- dummy impl. (intersection of sorted vectors)

filterIndex :: Eq a => [Int] -> Vector a -> Vector a
filterIndex indices vector = -- sample inefficient implementation
    filter (\(idx, _) -> idx `elem` indices) vector

mapFst :: Vector a -> [Int]
mapFst = map fst

-- ideally I would stop here, but I must repeat the following for all
-- possible types and kinds of wrapped containers:

filterIndexW :: [Int] -> WrappedVector -> WrappedVector
filterIndexW indices vw = case vw of
    FloatVector v -> FloatVector $ filterIndex indices v
    IntVector v   -> IntVector   $ filterIndex indices v

mapFstW :: WrappedVector -> [Int]
mapFstW vw = case vw of
    FloatVector v -> map fst v
    IntVector v   -> map fst v

-- sample usage of query
main = putStrLn $ show $ query [FloatVector [(1, 12), (2, -2)],
                                IntVector [(2, 17), (3, -10)]]
How can I express such code without the wrapping and unwrapping seen in the mapFstW and filterIndexW functions?

If you're willing to work with a few compiler extensions, ExistentialQuantification solves your problem nicely.
{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE StandaloneDeriving #-}

module VectorTest where

type PrimVector a = [(Int, a)]

data Vector = forall a . Show a => Vector (PrimVector a)

deriving instance Show Vector

query :: [Vector] -> [Vector] -- equal length
query vectors = map (filterIndex commonIndices) vectors
  where
    commonIndices = intersection [mapFst vector | vector <- vectors]

intersection :: [[Int]] -> [Int]
intersection = head -- dummy impl. (intersection of sorted vectors)

filterIndex :: [Int] -> Vector -> Vector
filterIndex indices (Vector vector) = -- sample inefficient implementation
    Vector $ filter (\(idx, _) -> idx `elem` indices) vector

mapFst :: Vector -> [Int]
mapFst (Vector l) = map fst l

-- sample usage of query
main = putStrLn $ show $ query [Vector [(1, 12), (2, -2)],
                                Vector [(2, 17), (3, -10)]]
The StandaloneDeriving requirement can be removed if you write a manual Show instance for Vector, e.g.
instance Show Vector where
    show (Vector v) = show v

The standard option for wrapping a single type without a performance hit is to use a newtype:
{-# LANGUAGE GeneralizedNewtypeDeriving #-} -- so we can derive Num
newtype MyInt = My Int deriving (Eq,Ord,Show,Num)
newtype AType a = An a deriving (Show, Eq)
This creates a difference only at the type level - the runtime representation is identical, because the wrapper is compiled away. You can even specify that values are unboxed. BUT... this doesn't help you here, because you're wrapping multiple types.
The real problem is that you're trying to represent a dynamically typed solution in a statically typed language. There is necessarily a performance hit for dynamic typing, which is hidden from you in a dynamic language but made explicit here in the tagging.
You have two solutions:
Accept that dynamic typing involves additional runtime checks over static typing, and live with the ugly.
Reject the need for dynamic typing, accepting that polymorphic typing tidies up all the code and moves the type checking to compile time and data acquisition.
I feel that 2 is by far the best solution, and you should give up trying to list and program all the types you want to use, instead programming to use any type. It's neat, clear and efficient. You check validity and handle it once, then stop worrying.
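To make option 2 concrete, here is a minimal sketch (illustrative, not from the original post): the shared logic is written once, polymorphically, and each column keeps its own concrete element type, so no wrapping or case analysis is needed:

filterIndexP :: [Int] -> [(Int, a)] -> [(Int, a)]
filterIndexP indices = filter (\(idx, _) -> idx `elem` indices)

-- each column stays at its own concrete type; no unwrapping anywhere:
floatCol :: [(Int, Float)]
floatCol = filterIndexP [2] [(1, 12), (2, -2)]

intCol :: [(Int, Int)]
intCol = filterIndexP [2] [(2, 17), (3, -10)]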

Related

Heterogeneous sized vectors where the types "work elsewhere"

Suppose I have a function that works on a vector with size known at compile-time (these are provided by the vector-sized package):
{-# LANGUAGE DataKinds, GADTs #-}
module Test where
import Data.Vector.Sized
-- Processes vectors known at compile time to have size 4.
processVector :: Vector 4 Int -> String
processVector = undefined
Fine, but what if I don't want to process a vector of ints, but a vector of vectors?
-- Same thing but has subvectors of size 3.
processVector2 :: Vector 4 (Vector 3 Int) -> String
processVector2 = undefined
Fine, but here each subvector is of a fixed size. I want a function where the subvectors can each be of a different size but still known at compile time.
We can do this with existential quantification:
data InnerVector = forall n. InnerVector (Vector n Int)
processVector3 :: Vector 4 InnerVector -> String
processVector3 = undefined
Fine, but what if I want to return not a String but a vector of the same dimensions?
processVector4 :: Vector 4 InnerVector -> Vector 4 InnerVector
processVector4 = undefined
This does not work because the second vector might have differently sized subvectors from the input subvectors! I want them known to be the same at compile time. (So the subvectors at index 0 have the same size, the subvectors at index 1 have the same size, and so on.)
Is this possible to achieve? If not, do you know of (or can you create) a data structure that makes this possible?
I am avoiding tuples because:
My vectors will have size over 100.
Vectors make general processing easy (using 0-based indexes), so my processing functions continue to work even if I add more items to my vector.
I do indeed only want values of one type within the inner vectors (Int in the example).
By using existential quantification you effectively hide the sizes of the inner vectors. But if you want to write code with types that convey that you are preserving those sizes, you don't want them hidden. Instead you want your types to be loud and clear about them.
So, let's define some types that broadcast these inner sizes. Essentially, you need your "vector-of-vectors" type to be a type of heterogeneous lists that restricts the elements of these lists to be vectors. For sure, there are some libraries out there that can help you put together such a type, but here we'll roll our own. Just because it's more fun to do so.
Let's start with enabling some language extensions and then writing some types for the inner vectors and their sizes:
{-# LANGUAGE DataKinds, GADTs, InstanceSigs, KindSignatures, TypeOperators #-}

data Nat = Zero | Succ Nat

data Vector :: Nat -> * -> * where
    VNil  :: Vector Zero a
    VCons :: a -> Vector n a -> Vector (Succ n) a

instance Functor (Vector n) where
    fmap f VNil         = VNil
    fmap f (VCons x xs) = VCons (f x) (fmap f xs)
Next up is our type of "jagged" matrices (i.e., a vector of variable-size vectors). As said, this is just a specific type of heterogeneous lists:
data JaggedMatrix :: [Nat] -> * -> * where
    MNil  :: JaggedMatrix '[] a
    MCons :: Vector n a -> JaggedMatrix ns a -> JaggedMatrix (n : ns) a
There, that's it. The type of jagged matrices is indexed by a list that contains the sizes of the inner vectors. The outer dimension is not explicated in the type, but can simply be derived from the length of the inner-dimensions list.
Let's put it to work and write a dimension-preserving function. Here's an obvious one:
instance Functor (JaggedMatrix ns) where
    fmap :: (a -> b) -> JaggedMatrix ns a -> JaggedMatrix ns b
    fmap f MNil           = MNil
    fmap f (MCons xs xss) = MCons (fmap f xs) (fmap f xss)
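To see the inner sizes being tracked, here is a small usage sketch (illustrative names, building on the definitions above):

type Two = Succ (Succ Zero)
type One = Succ Zero

-- a jagged matrix with rows of sizes 2 and 1
example :: JaggedMatrix '[Two, One] Int
example = MCons (VCons 1 (VCons 2 VNil)) (MCons (VCons 3 VNil) MNil)

-- fmap preserves the row sizes, and the type says so
doubled :: JaggedMatrix '[Two, One] Int
doubled = fmap (* 2) example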

Efficient way to do n-ary branch / tabulated functions?

I'm trying to get some basic information on the performance characteristics of branches in SBV.
Let's suppose I have an SInt16 and a very sparse lookup table Map Int16 a. I can implement the lookup with nested ite:
sCase :: (Mergeable a) => SInt16 -> a -> Map Int16 a -> a
sCase x def = go . toList
  where
    go []          = def
    go ((k,v):kvs) = ite (x .== literal k) v (go kvs)
However, this means the generated tree will be very deep.
Does that matter?
If yes, is it better to instead generate a balanced tree of branches, effectively mirroring the Map's structure? Or is there some other scheme that would give even better performance?
If there are fewer than 256 entries in the map, would it change anything to "compress" it so that sCase works on an SInt8 and a Map Int8 a?
Is there some built-in SBV combinator for this use case that works better than iterated ite?
EDIT: It turns out that it matters a lot what my a is, so let me add some more detail to that. I am currently using sCase to branch in a stateful computation modeled as an RWS r w s a, with the following instances:
instance forall a. Mergeable a => Mergeable (Identity a) where
    symbolicMerge force cond thn els =
        Identity $ symbolicMerge force cond (runIdentity thn) (runIdentity els)

instance (Mergeable s, Mergeable w, Mergeable a, forall a. Mergeable a => Mergeable (m a)) => Mergeable (RWST r w s m a) where
    symbolicMerge force cond thn els = Lazy.RWST $
        symbolicMerge force cond (runRWST thn) (runRWST els)
So stripping away all the newtypes, I'd like to branch into something of type r -> s -> (a, s, w) s.t. Mergeable s, Mergeable w and Mergeable a.
Symbolic look-ups are expensive
Symbolic array lookup will be expensive regardless of what data structure you use. It boils down to the fact that there's no information available to the symbolic execution engine to cut down the state space, so it ends up doing more or less what you coded yourself.
SMTLib Arrays
However, the best solution in these cases is to actually use SMT's support for arrays: http://smtlib.cs.uiowa.edu/theories-ArraysEx.shtml
SMTLib arrays are different from what you'd consider an array in a regular programming language: they do not have bounds. In that sense, an SMTLib array is more of a map from inputs to outputs, spanning the entire domain. (i.e., they are equivalent to functions.) But SMT has custom theories to deal with arrays, and thus it can handle problems involving arrays much more efficiently. (On the down-side, there's no notion of index-out-of-bounds or somehow controlling the range of elements you can access. You can code those up yourself on top of the abstraction though, leaving it up to you to decide how you want to handle such invalid accesses.)
If you are interested in learning more about how SMT solvers deal with arrays, the classic reference is: http://theory.stanford.edu/~arbrad/papers/arrays.pdf
Arrays in SBV
SBV supports arrays, through the SymArray class: https://hackage.haskell.org/package/sbv-8.7/docs/Data-SBV.html#t:SymArray
The SFunArray type actually does not use SMTLib arrays. This was designed to support solvers that didn't understand Arrays, such as ABC: https://hackage.haskell.org/package/sbv-8.7/docs/Data-SBV.html#t:SFunArray
The SArray type fully supports SMTLib arrays: https://hackage.haskell.org/package/sbv-8.7/docs/Data-SBV.html#t:SArray
There are some differences between these types, and the above links describe them. However, for most purposes, you can use them interchangeably.
Converting a Haskell map to an SBV array
Going back to your original question, I'd be tempted to use an SArray to model such a lookup. I'd code it as:
{-# LANGUAGE ScopedTypeVariables #-}
import Data.SBV
import qualified Data.Map as M
import Data.Int
-- Fill an SBV array from a map
mapToSArray :: (SymArray array, SymVal a, SymVal b) => M.Map a (SBV b) -> array a b -> array a b
mapToSArray m a = foldl (\arr (k, v) -> writeArray arr (literal k) v) a (M.toList m)
And use it as:
g :: Symbolic SBool
g = do let def = 0

       -- get a symbolic array, initialized with def
       arr <- newArray "myArray" (Just def)

       let m :: M.Map Int16 SInt16
           m = M.fromList [(5, 2), (10, 5)]

       -- Fill the array from the map
       let arr' :: SArray Int16 Int16 = mapToSArray m arr

       -- A simple problem:
       idx1 <- free "idx1"
       idx2 <- free "idx2"
       pure $ 2 * readArray arr' idx1 + 1 .== readArray arr' idx2
When I run this, I get:
*Main> sat g
Satisfiable. Model:
  idx1 =  5 :: Int16
  idx2 = 10 :: Int16
You can run it as satWith z3{verbose=True} g to see the SMTLib output it generates, which avoids costly lookups by simply delegating those tasks to the backend solver.
Efficiency
The question of whether this will be "efficient" really depends on how many elements your map has that you're constructing the array from. The larger the number of elements and the trickier the constraints, the less efficient it will be. In particular, if you ever write to an index that is symbolic, I'd expect slow-downs in solving time. If they're all constants, it should be relatively performant. As is usual in symbolic programming, it's really hard to predict any performance without seeing the actual problem and experimenting with it.
Arrays in the query context
The function newArray works in the symbolic context. If you're in a query context, instead use freshArray: https://hackage.haskell.org/package/sbv-8.7/docs/Data-SBV-Control.html#v:freshArray
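For example, a minimal sketch of the query-mode analogue of g above (assuming the sbv-8.x API in the links above; gQ and the constraint are illustrative):

{-# LANGUAGE ScopedTypeVariables #-}
import Data.SBV
import Data.SBV.Control
import Data.Int

gQ :: IO ()
gQ = runSMT $ query $ do
       -- like newArray, but usable inside a Query block
       (arr :: SArray Int16 Int16) <- freshArray "myArray" (Just 0)
       idx <- freshVar "idx"
       constrain $ readArray arr idx .== 0
       cs <- checkSat
       case cs of
         Sat -> io . print =<< getValue idx
         _   -> io $ putStrLn "no model"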

What's a better way of managing large Haskell records?

Replacing field names with letters, I have cases like this:
data Foo = Foo { a :: Maybe ...
               , b :: [...]
               , c :: Maybe ...
               , ... for a lot more fields ...
               } deriving (Show, Eq, Ord)

instance Writer Foo where
    write x = maybeWrite a ++
              listWrite b ++
              maybeWrite c ++
              ... for a lot more fields ...

parser = permute (Foo
         <$?> (Nothing, Just `liftM` aParser)
         <|?> ([], bParser)
         <|?> (Nothing, Just `liftM` cParser)
         ... for a lot more fields ...

-- this is particularly hideous
foldl1 merge [foo1, foo2, ...]

merge (Foo a b c ...seriously a lot more...)
      (Foo a' b' c' ...) =
      Foo (max a a') (b ++ b') (max c c') ...
What techniques would allow me to better manage this growth?
In a perfect world a, b, and c would all be the same type so I could keep them in a list, but they can be many different types. I'm particularly interested in any way to fold the records without needing the massive patterns.
I'm using this large record to hold the different types resulting from permutation parsing the vCard format.
Update
I've implemented both the generics and the foldl approaches suggested below. They both work, and they both reduce three large field lists to one.
Datatype-generic programming techniques can be used to transform all the fields of a record in some "uniform" sort of way.
Perhaps all the fields in the record implement some typeclass that we want to use (the typical example is Show). Or perhaps we have another record of "similar" shape that contains functions, and we want to apply each function to the corresponding field of the original record.
For these kinds of uses, the generics-sop library is a good option. It expands the default Generics functionality of GHC with extra type-level machinery that provides analogues of functions like sequence or ap, but which work over all the fields of a record.
Using generics-sop, I tried to create a slightly less verbose version of your merge function. Some preliminary imports:
{-# language TypeOperators #-}
{-# language DeriveGeneric #-}
{-# language TypeFamilies #-}
{-# language DataKinds #-}
import Control.Applicative (liftA2)
import qualified GHC.Generics as GHC
import Generics.SOP
A helper function that lifts a binary operation to a form useable by the functions of generics-sop:
fn_2' :: (a -> a -> a) -> (I -.-> (I -.-> I)) a -- I is simply an Identity functor
fn_2' = fn_2 . liftA2
A general merge function that takes a vector of operators and works on any single-constructor record that derives Generic:
merge :: (Generic a, Code a ~ '[ xs ]) => NP (I -.-> (I -.-> I)) xs -> a -> a -> a
merge funcs reg1 reg2 =
    case (from reg1, from reg2) of
        (SOP (Z np1), SOP (Z np2)) ->
            let npResult = funcs `hap` np1 `hap` np2
            in  to (SOP (Z npResult))
Code is a type family that returns a type-level list of lists describing the structure of a datatype. The outer list is for constructors, the inner lists contain the types of the fields for each constructor.
The Code a ~ '[ xs ] part of the constraint says "the datatype can only have one constructor" by requiring the outer list to have exactly one element.
The SOP (Z _) pattern matches extract the (heterogeneous) vector of field values from the record's generic representation. SOP stands for "sum-of-products".
A concrete example:
data Person = Person
    { name :: String
    , age  :: Int
    } deriving (Show, GHC.Generic)

instance Generic Person -- this Generic is from generics-sop

mergePerson :: Person -> Person -> Person
mergePerson = merge (fn_2' (++) :* fn_2' (+) :* Nil)
The Nil and :* constructors are used to build the vector of operators (the type is called NP, from n-ary product). If the vector doesn't match the number of fields in the record, the program won't compile.
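A quick usage sketch (illustrative values): the fields are merged position by position, (++) on the name and (+) on the age:

demo :: Person
demo = mergePerson (Person "ann " 40) (Person "bob" 2)
-- demo == Person {name = "ann bob", age = 42}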
Update. Given that the types in your record are highly uniform, an alternative way of creating the vector of operations is to define instances of an auxiliary typeclass for each field type, and then use the hcpure function:
class Mergeable a where
    mergeFunc :: a -> a -> a

instance Mergeable String where -- needs FlexibleInstances, as String is [Char]
    mergeFunc = (++)

instance Mergeable Int where
    mergeFunc = (+)

mergePerson :: Person -> Person -> Person
mergePerson = merge (hcpure (Proxy :: Proxy Mergeable) (fn_2' mergeFunc))
The hcliftA2 function (that combines hcpure, fn_2 and hap) could be used to simplify things further.
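For instance, a sketch of a merge that builds the vector of operations on the fly with hcliftA2 (illustrative, reusing the Person and Mergeable definitions above; may need RankNTypes in addition to the extensions already enabled):

mergePerson' :: Person -> Person -> Person
mergePerson' p1 p2 =
    case (from p1, from p2) of
        (SOP (Z np1), SOP (Z np2)) ->
            to (SOP (Z (hcliftA2 (Proxy :: Proxy Mergeable)
                                 (\(I x) (I y) -> I (mergeFunc x y))
                                 np1 np2)))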
Some suggestions:
(1) You can use the RecordWildCards extension to automatically unpack a record into variables. It doesn't help if you need to unpack two records of the same type, but it's useful to keep in mind. Oliver Charles has a nice blog post on it: (link)
(2) It appears your example application is performing a fold over the records.
Have a look at Gabriel Gonzalez's foldl package. There is also a blog post: (link)
Here is an example of how you might use it with a record like:
data Foo = Foo { _a :: Int, _b :: String }
The following code computes the maximum of the _a fields and the concatenation of the _b fields.
import qualified Control.Foldl as L
import Data.Profunctor

data Foo = Foo { _a :: Int, _b :: String }
    deriving (Show)

fold_a :: L.Fold Foo Int
fold_a = lmap _a (L.Fold max 0 id)

fold_b :: L.Fold Foo String
fold_b = lmap _b (L.Fold (++) "" id)

fold_foos :: L.Fold Foo Foo
fold_foos = Foo <$> fold_a <*> fold_b

theFoos = [ Foo 1 "a", Foo 3 "b", Foo 2 "c" ]

test = L.fold fold_foos theFoos
Note the use of the Profunctor function lmap to extract out
the fields we want to fold over. The expression:
L.Fold max 0 id
is a fold over a list of Ints (or any Num instance), and therefore:
lmap _a (L.Fold max 0 id)
is the same fold but over a list of Foo records where we use _a
to produce the Ints.
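For the sample data above, test evaluates to Foo {_a = 3, _b = "abc"}: the maximum of the _a fields and the concatenation of the _b fields.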

Pattern matching on a private data constructor

I'm writing a simple ADT for a grid axis. In my application a grid may be either regular (with a constant step between coordinates) or irregular (otherwise). Of course, a regular grid is just a special case of an irregular one, but it may be worth differentiating between them in some situations (for example, to perform some optimizations). So, I declare my ADT as the following:
data GridAxis = RegularAxis (Float, Float) Float -- (min, max) delta
              | IrregularAxis [Float]            -- [xs]
But I don't want users to create malformed axes with max < min or with an unordered xs list. So, I add "smarter" construction functions which perform some basic checks:
regularAxis :: (Float, Float) -> Float -> GridAxis
regularAxis (a, b) dx = RegularAxis (min a b, max a b) (abs dx)
irregularAxis :: [Float] -> GridAxis
irregularAxis xs = IrregularAxis (sort xs)
I don't want users to create grids directly, so I don't add the GridAxis data constructors to the module export list:
module GridAxis (
GridAxis,
regularAxis,
irregularAxis,
) where
But it turned out that having done this, I cannot use pattern matching on GridAxis anymore. Trying to use it
import qualified GridAxis as GA

test :: GA.GridAxis -> Bool
test axis = case axis of
    GA.RegularAxis   -> True
    GA.IrregularAxis -> False
gives the following compiler error:
src/Physics/ImplicitEMC.hs:7:15:
Not in scope: data constructor `GA.RegularAxis'
src/Physics/ImplicitEMC.hs:8:15:
Not in scope: data constructor `GA.IrregularAxis'
Is there a way to work around this?
You can define constructor pattern synonyms. This lets you use the same name for smart construction and "dumb" pattern matching.
{-# LANGUAGE PatternSynonyms #-}

module GridAxis (GridAxis, pattern RegularAxis, pattern IrregularAxis) where

import Data.List

data GridAxis = RegularAxis_ (Float, Float) Float -- (min, max) delta
              | IrregularAxis_ [Float]            -- [xs]

-- The line with "<-" defines the matching behavior
-- The line with "=" defines the constructor behavior
pattern RegularAxis minmax delta <- RegularAxis_ minmax delta where
    RegularAxis (a, b) dx = RegularAxis_ (min a b, max a b) (abs dx)

pattern IrregularAxis xs <- IrregularAxis_ xs where
    IrregularAxis xs = IrregularAxis_ (sort xs)
Now you can do:
module Foo where

import GridAxis

foo :: GridAxis -> a
foo (RegularAxis (a, b) d) = ...
foo (IrregularAxis xs)     = ...
And also use RegularAxis and IrregularAxis as smart constructors.
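For instance (illustrative values), construction goes through the normalizing logic while matching sees the stored representation:

axis :: GridAxis
axis = RegularAxis (5, 1) (-0.5)   -- stored as RegularAxis_ (1, 5) 0.5

bounds :: GridAxis -> (Float, Float)
bounds (RegularAxis minmax _) = minmax             -- (1.0, 5.0) for axis
bounds (IrregularAxis xs)     = (head xs, last xs) -- partial; sketch only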
This looks like a use case for pattern synonyms.
Basically, you don't export the real constructor, but only a "smart" one:
{-# LANGUAGE PatternSynonyms #-}

module M (T(), pattern SmartCons, smartCons) where

data T = RealCons Int

-- the users will construct T using this
smartCons :: Int -> T
smartCons n = if even n then RealCons n else error "wrong!"

-- ... and destruct T using this
pattern SmartCons n <- RealCons n
Another module importing M can then use
case someTvalue of
    SmartCons n -> use n
and e.g.
let value = smartCons 42 in ...
but cannot use RealCons directly.
If you prefer to stay in basic Haskell, without extensions, you can use a "view type"
module M (T(), smartCons, Tview(..), toView) where

data T = RealCons Int

-- the users will construct T using this
smartCons :: Int -> T
smartCons n = if even n then RealCons n else error "wrong!"

-- ... and destruct T using this
data Tview = Tview Int

toView :: T -> Tview
toView (RealCons n) = Tview n
Here, users have full access to the view type, which can be constructed and destructed freely, but they have only a restricted smart constructor for the actual type T. Destructing the actual type T is possible by moving to the view type:
case toView someTvalue of
    Tview n -> use n
For nested patterns, things become more cumbersome, unless you enable other extensions such as ViewPatterns.
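For example, a minimal ViewPatterns sketch on top of the view type above (illustrative):

{-# LANGUAGE ViewPatterns #-}
import M

useT :: T -> Int
useT (toView -> Tview n) = n + 1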

Pattern matching on length using this GADT:

I've defined the following GADT:
data Vector v where
    Zero    :: Num a => Vector a
    Scalar  :: Num a => a -> Vector a
    Vector  :: Num a => [a] -> Vector [a]
    TVector :: Num a => [a] -> Vector [a]
If it's not obvious, I'm trying to implement a simple vector space. All vector spaces need vector addition, so I want to implement this by making Vector an instance of Num. In a vector space, it doesn't make sense to add vectors of different lengths, and this is something I would like to enforce. One way I thought to do it would be using guards:
instance Num (Vector v) where
    (Vector a) + (Vector b)
        | length a == length b = Vector $ zipWith (+) a b
        | otherwise            = error "Only add vectors with the same length."
There is nothing really wrong with this approach, but I feel like there has to be a way to do this with pattern matching. Perhaps one way to do it would be to define a new data type VectorLength, which would look something like this:
data Length l where
    AnyLength   :: Nat a => Length a
    FixedLength :: Nat a -> Length a
Then, a length component could be added to the Vector data type, something like this:
data Vector (Length l) v where
    Zero :: Num a => Vector AnyLength a
    -- ...
    Vector :: Num a => [a] -> Vector (length [a]) [a]
I know this isn't correct syntax, but this is the general idea I'm playing with. Finally, you could define addition to be
instance Num (Vector v) where
    (Vector l a) + (Vector l b) = Vector $ zipWith (+) a b
Is such a thing possible, or is there any other way to use pattern matching for this purpose?
What you're looking for is something (in this instance confusingly) named a Vector as well. Generally, these are used in dependently typed languages where you'd write something like
data Vec (n :: Natural) a where
    Nil  :: Vec 0 a
    Cons :: a -> Vec n a -> Vec (n + 1) a
But that's far from valid Haskell (or really any language). Some very recent extensions to GHC are beginning to enable this kind of expression but they're not there yet.
You might be interested in fixed-vector, which provides the best approximation of fixed-size vectors available in relatively stable GHC. It uses a number of tricks involving type families and continuations to create classes of fixed-size vectors.
Just to add to the example in the other answer - this nearly works already in GHC 7.6:
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

import GHC.TypeLits

data Vector (n :: Nat) a where
    Nil  :: Vector 0 a
    Cons :: a -> Vector n a -> Vector (n + 1) a
That code compiles fine, it just doesn't work quite the way you'd hope. Let's check it out in ghci:
*Main> :t Nil
Nil :: Vector 0 a
Good so far...
*Main> :t Cons "foo" Nil
Cons "foo" Nil :: Vector (0 + 1) [Char]
Well, that's a little odd... Why does it say (0 + 1) instead of 1?
*Main> :t Cons "foo" Nil :: Vector 1 String

<interactive>:1:1:
    Couldn't match type `0 + 1' with `1'
    Expected type: Vector 1 String
      Actual type: Vector (0 + 1) String
    In the return type of a call of `Cons'
    In the expression: Cons "foo" Nil :: Vector 1 String
Uh. Oops. That'd be why it says (0 + 1) instead of 1. It doesn't know that those are the same. This will be fixed (at least this case will) in GHC 7.8, which is due out... In a couple months, I think?
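In the meantime, a unary Nat encoding already gives the compile-time length check the question was after, because GHC can see that Succ is injective. A self-contained sketch (independent of GHC.TypeLits):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Zero | Succ Nat

data Vec (n :: Nat) a where
    VNil  :: Vec 'Zero a
    VCons :: a -> Vec n a -> Vec ('Succ n) a

-- adding vectors of different lengths is rejected at compile time
vadd :: Num a => Vec n a -> Vec n a -> Vec n a
vadd VNil         VNil         = VNil
vadd (VCons x xs) (VCons y ys) = VCons (x + y) (vadd xs ys)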
