Accessing a common field in sum type variants - haskell

Suppose I have a sum type (or several, in fact), that I know by design all have a common field:
data T1 a
  = C1 String a
  | C2 Int a
  | C3 Bool a
data T2 a
  = C4 Int Int a
  | C5 [String] a
Is there a way to access the a field without having to pattern match on all variants across all types?
(I ask in the context of defining ASTs & having a neat way of accessing node-specific information)

A Minor Technical Detail
At the boring super-technical level, no. There is no way to access the fields of a constructor without pattern matching. Pattern matching is the primitive operation that causes the constructor to be evaluated. Before that, the fields don't even necessarily exist (thanks to non-strict evaluation).
Some Options That Might Be Useful
But you probably didn't mean that low-level question. You probably want a way to work with these data types without constantly writing pattern matches. And that can be done. It's just a matter of writing some functions. Which functions, though? ...that can be interesting.
You can write simple accessor functions:
t1ToA :: T1 a -> a
t1ToA (C1 _ x) = x
t1ToA (C2 _ x) = x
t1ToA (C3 _ x) = x
t2ToA :: T2 a -> a
t2ToA (C4 _ _ x) = x
t2ToA (C5 _ x) = x
Don't automatically reject this approach. Sure, it's a bit hungry on namespace because you need a different function name for each type. On the other hand, it's really good for readability and type inference. There's nothing magical anywhere. You might write some matching setter and modifier functions as well.
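For example, matching setters and modifiers for T1 could look like this (a minimal sketch; the names t1SetA and t1ModifyA are just illustrative):
t1SetA :: a -> T1 a -> T1 a
t1SetA y (C1 x _) = C1 x y
t1SetA y (C2 x _) = C2 x y
t1SetA y (C3 x _) = C3 x y
t1ModifyA :: (a -> a) -> T1 a -> T1 a
t1ModifyA g (C1 x y) = C1 x (g y)
t1ModifyA g (C2 x y) = C2 x (g y)
t1ModifyA g (C3 x y) = C3 x (g y)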
If you find that's getting to be too namespace hungry when you have various set and modify functions added in, you could use the van Laarhoven trick:
t1A :: Functor f => (a -> f a) -> T1 a -> f (T1 a)
t1A g (C1 x y) = C1 x <$> g y
t1A g (C2 x y) = C2 x <$> g y
t1A g (C3 x y) = C3 x <$> g y
t2A :: Functor f => (a -> f a) -> T2 a -> f (T2 a)
t2A g (C4 x y z) = C4 x y <$> g z
t2A g (C5 x y) = C5 x <$> g y
This representation lets you do reading and updating from the same type, though it is awkward without some helper functions. This is the representation used by libraries like lens, which provide you a huge number of those helper functions. But maybe you don't want to worry about learning how to work with this representation. I'm going to assume this isn't really what you're looking for and not even go into the details of how those helper functions work. But at a high level, they make clever use of specific types for f like Identity and Const a.
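To give a flavour of that idea, here is a minimal sketch using Const and Identity from base (the helper names viewVL and overVL are made up and are not the lens API):
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))
viewVL :: ((a -> Const a a) -> s -> Const a s) -> s -> a
viewVL l = getConst . l Const
overVL :: ((a -> Identity a) -> s -> Identity s) -> (a -> a) -> s -> s
overVL l g = runIdentity . l (Identity . g)
-- e.g. viewVL t1A (C1 "x" 5)      gives 5
--      overVL t2A (+1) (C5 [] 7)  gives C5 [] 8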
An option if you are willing to give up some type inference in order to reduce namespace use is to go for some sort of ad-hoc class:
class ToA f where
  toA :: f a -> a
instance ToA T1 where
  toA (C1 _ x) = x
  toA (C2 _ x) = x
  toA (C3 _ x) = x
instance ToA T2 where
  toA (C4 _ _ x) = x
  toA (C5 _ x) = x
You could choose to combine this with the van Laarhoven encoding, for what it's worth. That would minimize the amount of namespace you grab, but it would require some additional helpers to be used comfortably.
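A sketch of that combination (the class name HasA is made up):
class HasA t where
  aLens :: Functor f => (a -> f a) -> t a -> f (t a)
instance HasA T1 where
  aLens = t1A
instance HasA T2 where
  aLens = t2A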
There are a few other options you might be able to work with, such as the less ad-hoc tools GHC provides. Data and Generic are classes for which GHC already gives you a lot of the machinery, but they tend to be quite complex to pick up the first time around.
But Maybe There's a Better Solution
There's one last option that is actually the one I would recommend in most cases. Refactor your data types so the shared values aren't duplicated.
data WithA t a = WithA t a
data T1
  = C1 String
  | C2 Int
  | C3 Bool
And so on. Or however you might choose to refactor it. The important part is that the shared field is lifted out of the sum type and is just always present. I think this often ends up working the best. It usually communicates what you mean better, too. When you have three constructors that each have a field of the same type, it's not immediately obvious that that field should be seen as interchangeable between the constructors, even when the datatype is polymorphic over that field's type. But if it's a single field outside of the multiple constructors, it is immediately obvious that it's always the same thing. Don't underestimate the communication value that provides for all future maintainers of the code.
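With that shape, a single pair of accessors covers every such type (a small sketch; getA and setA are illustrative names):
getA :: WithA t a -> a
getA (WithA _ a) = a
setA :: a -> WithA t a -> WithA t a
setA a (WithA t _) = WithA t a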

Carl already mentioned a few alternatives. For completeness, let me add that using records allows one to use a common field name for all constructors, which in turn makes it possible to get/set the common field.
data T a
  = K1 { foo :: a, oth1 :: Int }
  | K2 { foo :: a, oth2 :: String }
get :: T a -> a
get = foo
set :: T a -> a -> T a
set t x = t{foo = x}
I am not a huge fan of this approach since other fields like oth1, oth2 become partial functions. I would rather refactor the type as Carl showed (the WithA example).
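To see the partiality concretely (a GHCi sketch; the exact error wording depends on the GHC version):
> oth1 (K2 { foo = 'a', oth2 = "hi" })
*** Exception: No match in record selector oth1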

Related

Is there a way to bind the suppressed type variable of an existential data type during pattern matching?

Using GADTs, I have defined a depth-indexed tree data type (2–3 tree). The depth is there to statically ensure that the trees are balanced.
{-# LANGUAGE DataKinds, GADTs, KindSignatures, StandaloneDeriving #-}
import Data.Kind (Type)
-- Natural numbers
data Nat = Z | S Nat
-- Depth-indexed 2-3 tree
data DT :: Nat -> Type -> Type where
  -- Pattern of node names: N{#subtrees}_{#containedValues}
  N0_0 :: DT Z a
  N2_1 :: DT n a -> a -> DT n a -> DT (S n) a
  N3_2 :: DT n a -> a -> DT n a -> a -> DT n a -> DT (S n) a
deriving instance Eq a => Eq (DT n a)
Now, some operations (e.g. insertion) might or might not change the depth of the tree. So I want to hide it from the type signature. I do this using existential data types.
-- 2-3 tree
data T :: Type -> Type where
  T :: {unT :: DT n a} -> T a
insert :: a -> T a -> T a
insert x (T dt) = case dt of
  N0_0 -> T $ N2_1 N0_0 x N0_0
  {- ... -}
So far so good. My problem is:
I don't see how I can now define Eq on T.
instance Eq a => Eq (T a) where
  (T x) == (T y) = _what
Obviously, I would like to do something like this:
(T {n = nx} x) == (T {n = ny} y)
  | nx == ny = x == y
  | otherwise = False
I don't know how, or whether, I can bind the type variables in the pattern match. And I'm not sure how to compare them once I have them, either.
(I suspect Data.Type.Equality is for this, but I haven't seen any example of it in use.)
So, is there a way to implement the Eq (T a) instance, or is there some other approach that is recommended in this case?
You should write a depth-independent equality operator, which is able to compare two trees even if they have different depths n and m.
dtEq :: Eq a => DT n a -> DT m a -> Bool
dtEq N0_0 N0_0 = True
dtEq (N2_1 l1 x1 r1) (N2_1 l2 x2 r2) =
  dtEq l1 l2 && x1 == x2 && dtEq r1 r2
dtEq (N3_2 a1 x1 b1 y1 c1) (N3_2 a2 x2 b2 y2 c2) =
  dtEq a1 a2 && x1 == x2 && dtEq b1 b2 && y1 == y2 && dtEq c1 c2
dtEq _ _ = False
Then, for your existential type:
instance Eq a => Eq (T a) where
  (T x) == (T y) = dtEq x y
Even though in the last line the depths are unknown (because of the existential), that doesn't matter for dtEq, since it accepts any two depths.
Minor side note: dtEq exploits polymorphic recursion, in that recursive calls can use a different depth from the one in the original call. Haskell allows polymorphic recursion, as long as an explicit type signature is provided. (We need one anyway, since we are using GADTs.)
You could use Data.Coerce.coerce to compare the contents of the trees: as long as you label the depth parameter as phantom, it should be willing to give you coerce :: DT n a -> DT m a.
But this doesn't really solve the problem, of course: you want to know if their types are the same. Well, maybe there is some solution with Typeable, but it doesn't sound like much fun. Absent Typeable, it seems impossible to me, because you want two contradictory things.
First, you want that trees of different depths should be separate types, not intermixable at all. This means everyone who handles them has to know what type they are.
Second, you want that you can give such a tree to someone without telling them how deep it is, have them munge it around arbitrarily, and then give it back to you. How can they do that, if you require type knowledge to operate on them?
Existentials do not "suppress" type information: they throw it away. Like all type information, it is gone at runtime; and you've made it invisible at compile time too.
I'm also not sure your problem is just with Eq: how will you even implement functions like insert? It's easy for N0_0, because that is known to have type DT Z a, but for the other cases I don't see how you will construct a DT (S n) a to wrap in your T when you can't know what n was.

What is the Maybe type and how does it work?

I am just starting to program in Haskell, and I came across the following definition:
calculate :: Float -> Float -> Maybe Float
Maybe a is an ordinary data type defined as:
data Maybe a = Just a | Nothing
There are thus two possibilities: either you wrap a value of type a in Just (like Just 3), or you use Nothing in case the query has no answer.
It is meant as a way to give non-total functions a well-defined output.
For instance, say you want to define sqrt. The square root is only defined for non-negative numbers, so you can define sqrt as:
sqrt x | x >= 0    = Just $ ...
       | otherwise = Nothing
with ... a way to calculate the square root for x.
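Concretely, such a function could look like this (safeSqrt is a made-up name, to avoid clashing with the Prelude's sqrt):
safeSqrt :: Float -> Maybe Float
safeSqrt x | x >= 0    = Just (sqrt x)
           | otherwise = Nothing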
Some people compare Nothing with the "null pointer" you find in most programming languages. By default, you don't get a null pointer for data types you define (and if you add one yourself, all these "nulls" look different); by using Maybe you get a generic "null" in the form of Nothing.
It can thus be useful to use Maybe to denote that it is possible no output can be calculated. You could of course also error on values less than 0:
sqrt x | x >= 0    = Just $ ...
       | otherwise = error "The value must be larger or equal to 0"
But errors usually are not mentioned in the type signature, nor does the compiler complain if you don't take them into account. The Haskell community also tends to favour total functions: it is better to at least try to return a value (e.g. Nothing) for every possible input.
If you later want to use the result of a Maybe a, you for instance need to write:
succMaybe :: Maybe Int -> Maybe Int
succMaybe (Just x) = Just (x+1)
succMaybe _ = Nothing
But by writing Just for the first case, you somehow warn yourself that it is possible that Nothing can occur. You can also get rid of the Maybe by introducing a "default" value:
justOrDefault :: a -> Maybe a -> a
justOrDefault _ (Just x) = x
justOrDefault d _ = d
The builtin maybe function (note the lowercase) combines the two previous functions:
maybe :: b -> (a -> b) -> Maybe a -> b
maybe _ f (Just x) = f x
maybe z _ Nothing = z
So you specify a default value of type b together with a function (a -> b). In case the Maybe a is a Just x, the function is applied to x and the result returned; in case the input is Nothing, the default value is used.
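For example:
> maybe 0 (+1) (Just 41)
42
> maybe 0 (+1) Nothing
0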
Working with Maybe a values can be tedious, because you always need to take the Nothing case into account; to simplify this you can use the Maybe monad.
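A small sketch of what that looks like, chaining the hypothetical safeSqrt from above with do-notation:
sumOfRoots :: Float -> Float -> Maybe Float
sumOfRoots a b = do      -- a Nothing from either step propagates automatically
  x <- safeSqrt a
  y <- safeSqrt b
  return (x + y)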
Tom Schrijvers also shows that Maybe is the successor function in type algebra: you add one extra value to your type (Either is addition and (,) is the type-algebraic equivalent of multiplication).

Why context is not considered when selecting typeclass instance in Haskell?

I understand that when having
instance (Foo a) => Bar a
instance (Xyy a) => Bar a
GHC doesn't consider the contexts, and the instances are reported as duplicate.
What I find counterintuitive is that (I guess) after selecting an instance, GHC still needs to check whether the context matches, and if not, discard the instance. So why not reverse the order: discard the instances with non-matching contexts first, and proceed with the remaining set?
Would this be intractable in some way? I see how it could cause more constraint-resolution work up front, but just as there are UndecidableInstances / IncoherentInstances, couldn't there be a ConsiderInstanceContexts for when "I know what I am doing"?
This breaks the open-world assumption. Assume:
class B1 a
class B2 a
class T a
If we allow constraints to disambiguate instances, we may write
instance B1 a => T a
instance B2 a => T a
And may write
instance B1 Int
Now, if I have
f :: T a => a
Then f :: Int works. But the open-world assumption says that, once something works, adding more instances cannot break it. Our new system doesn't obey that:
instance B2 Int
will make f :: Int ambiguous. Which implementation of T should be used?
Another way to state this is that you've broken coherence. For typeclasses to be coherent means that there is only one way to satisfy a given constraint. In normal Haskell, a constraint c has only one implementation. Even with overlapping instances, coherence generally holds true. The idea is that instance T a and instance {-# OVERLAPPING #-} T Int do not break coherence, because GHC can't be tricked into using the former instance in a place where the latter would do. (You can trick it with orphans, but you shouldn't.) Coherence, at least to me, seems somewhat desirable. Typeclass usage is "hidden", in some sense, and it makes sense to enforce that it be unambiguous. You can also break coherence with IncoherentInstances and/or unsafeCoerce, but, y'know.
In a category theoretic way, the category Constraint is thin: there is at most one instance/arrow from one Constraint to another. We first construct two arrows a : () => B1 Int and b : () => B2 Int, and then we break thinness by adding new arrows x_Int : B1 Int => T Int, y_Int : B2 Int => T Int such that x_Int . a and y_Int . b are both arrows () => T Int that are not identical. Diamond problem, anyone?
This does not answer your question as to why this is the case. Note, however, that you can always define a newtype wrapper to disambiguate between the two instances:
newtype FooWrapper a = FooWrapper a
newtype XyyWrapper a = XyyWrapper a
instance (Foo a) => Bar (FooWrapper a)
instance (Xyy a) => Bar (XyyWrapper a)
This has the added advantage that by passing around either a FooWrapper or a XyyWrapper you explicitly control which of the two instances you'd like to use if your a happens to satisfy both.
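A self-contained sketch of how the choice becomes explicit (the class methods, including bar, are made up, since the question leaves the classes abstract; the wrappers are repeated so the snippet stands alone):
class Foo a where fooLabel :: a -> String
class Xyy a where xyyLabel :: a -> String
class Bar a where bar :: a -> String
newtype FooWrapper a = FooWrapper a
newtype XyyWrapper a = XyyWrapper a
instance Foo a => Bar (FooWrapper a) where
  bar (FooWrapper x) = "via Foo: " ++ fooLabel x
instance Xyy a => Bar (XyyWrapper a) where
  bar (XyyWrapper x) = "via Xyy: " ++ xyyLabel x
-- bar (FooWrapper x) resolves through the Foo-based instance,
-- bar (XyyWrapper x) resolves through the Xyy-based instance.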
Classes are a bit weird. The original idea (which still pretty much works) is a sort of syntactic sugar around what would otherwise be data statements. For example you can imagine:
data Num a = Num {plus :: a -> a -> a, ... , fromInt :: Integer -> a}
numInteger :: Num Integer
numInteger = Num (+) ... id
then you can write functions which have e.g. type:
test :: Num x -> x -> x -> x -> x
test lib a b c = a + b * (abs (c + b))
  where (+) = plus lib
        (*) = times lib
        abs = absoluteValue lib
So the idea is "we're going to automatically derive all of this library code." The question is, how do we find the library that we want? It's easy if we have a library of type Num Integer, but how do we extend it to "constrained instances" based on functions of type:
fooLib :: Foo x -> Bar x
xyyLib :: Xyy x -> Bar x
The present solution in Haskell is to do a type-pattern-match on the output types of those functions and propagate the inputs to the resulting declaration. But when there are two outputs of the same type, we would need a combinator which merges these into:
eitherLib :: Either (Foo x) (Xyy x) -> Bar x
and basically the problem is that there is no good constraint-combinator of this kind right now. That's your objection.
Well, that's true, but there are ways to achieve something morally similar in practice. Suppose we define some functions with types:
data F
data X
foobar'lib :: Foo x -> Bar' x F
xyybar'lib :: Xyy x -> Bar' x X
bar'barlib :: Bar' x y -> Bar x
Clearly the y is a sort of "phantom type" threaded through all of this, but it remains powerful because, given that we want a Bar x, we will propagate the need for a Bar' x y, and given the need for a Bar' x y we will generate either a Bar' x F or a Bar' x X. So with phantom types and multi-parameter type classes, we get the result we want.
More info: https://www.haskell.org/haskellwiki/GHC/AdvancedOverlap
Adding backtracking would make instance resolution require exponential time, in the worst case.
Essentially, instances become logical statements of the form
P(x) => R(f(x)) /\ Q(x) => R(f(x))
which is equivalent to
(P(x) \/ Q(x)) => R(f(x))
Computationally, the cost of this check is (in the worst case)
c_R(n) = c_P(n-1) + c_Q(n-1)
assuming P and Q have similar costs
c_R(n) = 2 * c_PQ(n-1)
which leads to exponential growth.
To avoid this issue, it is important to have fast ways to choose a branch, i.e. to have clauses of the form
((fastP(x) /\ P(x)) \/ (fastQ(x) /\ Q(x))) => R(f(x))
where fastP and fastQ are computable in constant time, and are incompatible so that at most one branch needs to be visited.
Haskell decided that this "fast check" is head compatibility (hence disregarding contexts). It could use other fast checks, of course -- it's a design decision.

Using a lens twice

I'm struggling with using the lens library for a particular problem. I'm trying to pass
an updated data structure
a lens focussed on part of that updated structure
to another function, g. I pass both the lens and the data structure because g needs some shared information from the data structure as well as a piece of information. (If it helps, the data structure contains information on a joint probability distribution, but g only works on either marginal and needs to know which marginal I'm looking at. The only difference between the two marginals is their mean with the rest of their definition being shared in the data structure).
My first attempt looked like this
f :: Functor f => Params -> ((Double -> f Double) -> Params -> f Params) -> a
f p l = g (l %~ upd $ p) l
  where upd = ...
g p x = go p p^.x
but that fails during compilation because f gets inferred as being Identity for the update and Const Double for the getter.
What's the best way to accomplish what I want to do? I can imagine being able to do one of the following:
make a copy of the lens so that the type inference can be different in each case
rather than passing the updated structure and the lens, I pass the original structure and a lens which returns a modified value (if I only want to update the part of the structure that the lens looks at).
making a better design choice for my functions/data structure
something completely different
Thanks for any help!
András Kovács' answer shows how to achieve this with RankNTypes. If you wish to avoid RankNTypes, then you can use ALens and cloneLens:
f :: a -> ALens' a Int -> (Int, a)
f a l = (newvalue, a & cloneLens l .~ newvalue)
  where oldvalue = a^.cloneLens l
        newvalue = if oldvalue == 0 then 0 else oldvalue - 1
Control.Lens.Loupe provides operators and functions that work on ALens instead of Lens.
Note that in many cases, you should also be able to use <<%~, which is like %~ but also returns the old value, or <%~, which returns the new value:
f :: a -> LensLike' ((,) Int) a Int -> (Int, a)
f a l = a & l <%~ g
  where g oldvalue = if oldvalue == 0 then 0 else oldvalue - 1
This has the advantage that it can also work with Isos or sometimes also with Traversals (when the target type is a Monoid).
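For comparison, a variant of the same function using <<%~ to get back the value the field had before the modification:
fOld :: a -> LensLike' ((,) Int) a Int -> (Int, a)
fOld a l = a & l <<%~ g
  where g oldvalue = if oldvalue == 0 then 0 else oldvalue - 1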
You want your type signature to look like this:
f :: Params -> Lens Params Params Double Double -> ...
-- alternatively, instead of the long Lens form you can write
-- Lens' Params Double
This is not equivalent to what you wrote out in the signature, because the functor parameter is quantified inside Lens:
type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)
The correct signature translates to:
f :: Params -> (forall f. Functor f => (Double -> f Double) -> Params -> f Params) -> ...
This prevents the compiler from unifying the different f parameters of different lens usages, i. e. you can use the lens polymorphically. Note that you need the RankNTypes or Rank2Types GHC extension in order to be able to write out the signature.
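A minimal self-contained sketch of the corrected version (Params and its fields here are placeholders for the asker's real data structure):
{-# LANGUAGE RankNTypes #-}
import Control.Lens
data Params = Params { _mean1 :: Double, _mean2 :: Double } deriving Show
mean1 :: Lens' Params Double
mean1 g (Params m1 m2) = (\m1' -> Params m1' m2) <$> g m1
f :: Params -> Lens' Params Double -> (Double, Params)
f p l = let p' = p & l %~ (+ 1)   -- first use: update through the lens
        in  (p' ^. l, p')         -- second use: read back through the same lens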
Benno gave the best general-purpose answer.
There are two other options, however, which I offer here for completeness.
1.)
There are several Loupe combinators in Lens.
http://hackage.haskell.org/package/lens-4.1.2/docs/Control-Lens-Loupe.html
They all have names that involve #.
^# and #%= both take ALens which is a lens instantiated at a particular concrete choice of functor.
This can be useful if you need to pass around lists of lenses, or if you really really need multiple passes.
2.)
Another option, and my preferred tactic, is to figure out how to do both operations at the same time.
Here you are modifying, but want the value you just set. Well, <%~ instead of %~ can give you exactly that.
Now you only instantiate the lens at one choice of functor, and your code gets faster.

How can a function be "transparently augmented" in Haskell?

Situation
I have function f, which I want to augment with function g, resulting in function named h.
Definitions
By "augment", in the general case, I mean: transform either input (one or more arguments) or output (return value) of function f.
By "augment", in the specific case, (specific to my current situation) I mean: transform only the output (return value) of function f while leaving all the arguments intact.
By "transparent", in the context of "augmentation", (both the general case and the specific case) I mean: To couple g's implementation as loosely to f's implementation as possible.
Specific case
In my current situation, this is what I need to do:
h a b c = g $ f a b c
I am interested in rewriting it to something like this:
h = g . f -- Doesn't type-check.
Because from the perspective of h and g, it doesn't matter what arguments f takes; they only care about the return value, hence it would be tight coupling to mention the arguments in any way. For instance, if f's argument count changes in the future, h will also need to be changed.
So far
I asked lambdabot on the #haskell IRC channel: #pl h a b c = g $ f a b c to which I got the response:
h = ((g .) .) . f
Which is still not good enough since the number of (.)'s is dependent on the number of f's arguments.
General case
I haven't done much research in this direction, but erisco on #haskell pointed me towards http://matt.immute.net/content/pointless-fun which hints to me that a solution for the general case could be possible.
So far
Using the functions defined by Luke Palmer in the above article this seems to be an equivalent of what we have discussed so far:
h = f $. id ~> id ~> id ~> g
However, it seems that this method sadly also suffers from being dependent on the number of arguments of f if we want to transform the return value of f -- just as the previous methods.
Working example
In JavaScript, for instance, it is possible to achieve transparent augmentation like this:
function h () { return g(f.apply(this, arguments)) }
Question
How can a function be "transparently augmented" in Haskell?
I am mainly interested in the specific case, but it would be also nice to know how to handle the general case.
You can sort-of do it, but since there is no way to specify a behavior for everything that isn't a function, you'll need a lot of trivial instances for all the other types you care about.
{-# LANGUAGE TypeFamilies, DefaultSignatures #-}
class Augment a where
  type Result a
  type Result a = a
  type Augmented a r
  type Augmented a r = r
  augment :: (Result a -> r) -> a -> Augmented a r
  default augment :: (a -> r) -> a -> r
  augment g x = g x
instance Augment b => Augment (a -> b) where
  type Result (a -> b) = Result b
  type Augmented (a -> b) r = a -> Augmented b r
  augment g f x = augment g (f x)
instance Augment Bool
instance Augment Char
instance Augment Integer
instance Augment [a]
-- and so on for every result type of every function you want to augment...
Example:
> let g n x ys = replicate n x ++ ys
> g 2 'a' "bc"
"aabc"
> let g' = augment length g
> g' 2 'a' "bc"
4
> :t g
g :: Int -> a -> [a] -> [a]
> :t g'
g' :: Int -> a -> [a] -> Int
Well, technically, with just enough IncoherentInstances you can do pretty much anything:
{-# LANGUAGE MultiParamTypeClasses, TypeFamilies,
             FlexibleInstances, UndecidableInstances, IncoherentInstances #-}
class Augment a b f h where
  augment :: (a -> b) -> f -> h
instance (a ~ c, h ~ b) => Augment a b c h where
  augment = ($)
instance (Augment a b d h', h ~ (c -> h')) => Augment a b (c -> d) h where
  augment g f = augment g . f
-- Usage
t1 = augment not not
r1 = t1 True
t2 = augment (+1) (+)
r2 = t2 2 3
t3 = augment (+1) foldr
r3 = t3 (+) 0 [2,3]
The problem is that the real return value of something like a -> b -> c isn't c, but b -> c. What you want requires some kind of test that tells you whether a type is a function type. You could enumerate the types you are interested in, but that's not so nice. I think HList solves this problem somehow; look at the paper. I managed to understand a bit of the solution with overlapping instances, but the rest goes a bit over my head, I'm afraid.
JavaScript works because its arguments are a sequence, or a list, so there is really just one argument. In that sense it is the same as an uncurried version of the function, with a tuple representing the collection of arguments.
In a strongly typed language you need a lot more information to do that "transparently" for a function type. Dependent types, for example, can express this idea, but they require the functions to be of specific types, not an arbitrary function type.
I think I saw a workaround in Haskell that can do this too, but again, it works only for specific types which capture the arity of the function, not for any function.
