Examining the binding structure in a free monad AST

Take this simple base functor and other machinery for a free monad with binding terms:
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free
data ProgF r =
    FooF (Double -> r)
  | BarF Double (Int -> r)
  | EndF
  deriving Functor
type Program = Free ProgF
foo = liftF (FooF id)
bar a = liftF (BarF a id)
And here's a simple program
prog :: Program Int
prog = do
  a <- foo
  bar a
It has the following (hand-crafted) AST:
prog =
  Free (FooF (\p0 ->
    Free (BarF p0 (\p1 ->
      Pure p1))))
What I'd like to be able to do is reason about bound terms in the following way:
- look at the Pure term in the AST
- note the bound variables that occur there
- annotate the corresponding binding nodes in the AST
Annotating a free monad AST directly via a cofree comonad seems to be impossible without doing some kind of pairing, but you could imagine getting to something like the following annotated AST (via, say, Fix) in which nodes binding variables that appear in Pure are annotated with Just True:
annotatedProg =
  Just False :< FooF (\p0 ->
    Just True :< BarF p0 (\p1 ->
      Nothing :< EndF))
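Reading :< as the constructor of Cofree from Control.Comonad.Cofree (an assumption about the encoding sketched here, not something stated in the question), the annotated tree would be typed along these lines:
import Control.Comonad.Cofree (Cofree (..))

-- the annotation at each node records whether the variable bound there
-- shows up under Pure
type AnnotatedProgram = Cofree ProgF (Maybe Bool)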
So: is there a way to inspect the bindings in a program like this in such an ad-hoc way? I.e., without introducing a distinct variable type à la this question, for example.
I suspect that this might be impossible to do. Options like data-reify are attractive but it seems to be extremely difficult or impossible to make ProgF an instance of the requisite typeclasses (Foldable, Traversable, MuRef).
Is that intuition correct, or is there some means to do this that I haven't considered? Note that I'm happy to entertain any gruesomely unsafe or dynamic means.

I'm satisfied that this is not possible to do by any 'sane' ad-hoc method, for much the same reason that it's not possible to examine the binding structure of e.g. \a -> \b -> \c -> b + a.
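To make the obstacle concrete: the continuations inside ProgF are ordinary Haskell functions, so the only way to get past a binder is to apply it to some value, which is exactly what an interpreter does. A minimal sketch (runProg and the dummy arguments are my own, not part of the question):
runProg :: Program a -> a
runProg (Pure a)          = a
runProg (Free (FooF k))   = runProg (k 0)         -- must invent a Double to continue
runProg (Free (BarF d k)) = runProg (k (round d)) -- likewise an Int
runProg (Free EndF)       = error "EndF: no continuation"
Nothing short of this kind of application (or an interpretation into some other domain) lets you observe what a continuation does with its argument.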

Related

Expressing infinite kinds

When expressing infinite types in Haskell:
f x = x x -- This doesn't type check
One can use a newtype to do it:
newtype Inf = Inf { runInf :: Inf -> * }
f x = x (Inf x)
Is there a newtype equivalent for kinds that allows one to express infinite kinds?
I already found that I can use type families to get something similar:
{-# LANGUAGE TypeFamilies #-}
data Inf (x :: * -> *) = Inf
type family RunInf x :: * -> *
type instance RunInf (Inf x) = x
But I'm not satisfied with this solution - unlike the types equivalent, Inf doesn't create a new kind (Inf x has the kind *), so there's less kind safety.
Is there a more elegant solution to this problem?
Responding to:
Like recursion-schemes, I want a way to construct ASTs, except I want my ASTs to be able to refer to each other - that is a term can contain a type (for a lambda parameter), a type can contain a row-type in it and vice-versa. I'd like the ASTs to be defined with an external fix-point (so one can have "pure" expressions or ones annotated with types after type inference), but I also want these fix-points to be able to contain other types of fix-points (just like terms can contain terms of different types). I don't see how Fix helps me there
I have a method, which maybe is near what you are asking for, that I have been experimenting with. It seems to be quite powerful, though the abstractions around this construction need some development. The key is that there is a kind Label which indicates from where the recursion will continue.
{-# LANGUAGE DataKinds #-}
import Data.Kind (Type)
data Label = Label ((Label -> Type) -> Type)
type L = 'Label
L is just a shortcut to construct labels.
Open-fixed-point definitions are of kind (Label -> Type) -> Type, that is, they take a "label interpretation (type) function" and give back a type. I called these "shape functors", and abstractly refer to them with the letter h. The simplest shape functor is one that does not recurse:
newtype LiteralF a f = Literal a -- does not depend on the interpretation f
type Literal a = L (LiteralF a)
Now we can make a little expression grammar as an example:
data Expr f
  = Lit (f (Literal Integer))
  | Let (f (L Defn)) (f (L Expr))
  | Var (f (Literal String))
  | Add (f (L Expr)) (f (L Expr))

data Defn f
  = Defn (f (Literal String)) (f (L Expr))
Notice how we pass labels to f, which is in turn responsible for closing off the recursion. If we just want a normal expression tree, we can use Tree:
data Tree :: Label -> Type where
  Tree :: h Tree -> Tree (L h)
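For instance, the expression 1 + 2 could be written as follows (a sketch, assuming the definitions above together with GADTs, DataKinds and KindSignatures):
onePlusTwo :: Tree (L Expr)
onePlusTwo =
  Tree (Add (Tree (Lit (Tree (Literal 1))))
            (Tree (Lit (Tree (Literal 2)))))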
Then a Tree (L Expr) is isomorphic to the normal expression tree you would expect. But this also allows us to, e.g., annotate the tree with a label-dependent annotation at each level of the tree:
data Annotated :: (Label -> Type) -> Label -> Type where
  Annotated :: ann (L h) -> h (Annotated ann) -> Annotated ann (L h)
In the repo ann is indexed by a shape functor rather than a label, but this seems cleaner to me now. There are a lot of little decisions like this to be made, and I have yet to find the most convenient pattern. The best abstractions to use around shape functors still need exploration and development.
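As a tiny illustration of the kind of annotation this enables (the names below are mine, not part of the construction above):
-- annotate every node with, say, its pretty-printed source text
newtype SrcText (l :: Label) = SrcText String

type AnnotatedExpr = Annotated SrcText (L Expr)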
There are many other possibilities, many of which I have not explored. If you end up using it I would love to hear about your use case.
With DataKinds, we can use a regular newtype:
import Data.Kind (Type)
newtype Inf = Inf (Inf -> Type)
And promote it (with ') to create new kinds to represent loops:
{-# LANGUAGE DataKinds #-}
type F x = x ('Inf x)
For a type to unpack its 'Inf argument we need a type-family:
{-# LANGUAGE TypeFamilies #-}
type family RunInf (x :: Inf) :: Inf -> Type
type instance RunInf ('Inf x) = x
Now we can express infinite kinds with a new kind for them, so no kind-safety is lost.
Thanks to @luqui for pointing out the DataKinds part in his answer!
I think you're looking for Fix which is defined as
data Fix f = Fix (f (Fix f))
Fix gives you the fixpoint of the type f. I'm not sure what you're trying to do, but such infinite types are generally used with recursion schemes (reusable patterns of recursion; see the recursion-schemes package by Edward Kmett) or with free monads, which among other things allow you to construct ASTs in a monadic style.
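For example, here is the humble list rebuilt as the fixpoint of a base functor (a sketch using the Fix defined above):
data ListF a r = Nil | Cons a r

type List a = Fix (ListF a)

-- the list [1, 2] as a fixed point of ListF
example :: List Int
example = Fix (Cons 1 (Fix (Cons 2 (Fix Nil))))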

Set specific properties for data in Haskell

Let us say I want to make a ADT as follows in Haskell:
data Properties = Property String [String]
  deriving (Show, Eq)
I want to know if it is possible to give the second list a bounded and enumerated property? Basically the first element of the list will be the minBound and the last element will be the maxBound. I am trying,
data Properties a = Property String [a]
  deriving (Show, Eq)

instance Bounded (Properties a) where
  minBound a = head a
  maxBound a = (head . reverse) a
But not having much luck.
Well no, you can't do quite what you're asking, but maybe you'll find inspiration in this other neat trick.
{-# language ScopedTypeVariables, FlexibleContexts, UndecidableInstances #-}
import Data.Reflection -- from the reflection package
import qualified Data.List.NonEmpty as NE
import Data.List.NonEmpty (NonEmpty (..))
import Data.Proxy
-- Just the plain string part
newtype Pstring p = P String deriving Eq
-- Those properties you're interested in. It will
-- only be possible to produce bounds if there's at
-- least one property, so NonEmpty makes more sense
-- than [].
type Props = NonEmpty String
-- This is just to make a Show instance that does
-- what you seem to want easier to write. It's not really
-- necessary.
data Properties = Property String [String] deriving Show
Now we get to the key part, where we use reflection to produce class instances that can depend on run-time values. Roughly speaking, you can think of
Reifies x t => ...
as being a class-level version of
\(x :: t) -> ...
Because it operates at the class level, you can use it to parametrize instances. Since Reifies x t binds a type variable x, rather than a term variable, you need to use reflect to actually get the value back. If you happen to have a value on hand whose type ends in p, then you can just apply reflect to that value. Otherwise, you can always magic up a Proxy :: Proxy p to do the job.
-- If some Props are "in the air" tied to the type p,
-- then we can show them along with the string.
instance Reifies p Props => Show (Pstring p) where
  showsPrec k p@(P str) =
    showsPrec k $ Property str (NE.toList $ reflect p)
-- If some Props are "in the air" tied to the type p,
-- then we can give Pstring p a Bounded instance.
instance Reifies p Props => Bounded (Pstring p) where
  minBound = P $ NE.head (reflect (Proxy :: Proxy p))
  maxBound = P $ NE.last (reflect (Proxy :: Proxy p))
Now we need to have a way to actually bind types that can be passed to the type-level lambdas. This is done using the reify function. So let's throw some Props into the air and then let the butterfly nets get them back.
main :: IO ()
main = reify ("Hi" :| ["how", "are", "you"]) $
  \(_ :: Proxy p) -> do
    print (minBound :: Pstring p)
    print (maxBound :: Pstring p)
dfeuer@squirrel:~/src> ./WeirdBounded
Property "Hi" ["Hi","how","are","you"]
Property "you" ["Hi","how","are","you"]
You can think of reify x $ \(p :: Proxy p) -> ... as binding a type p to the value x; you can then pass the type p where you like by constraining things to have types involving p.
If you're just doing a couple of things, all this machinery is way more than necessary. Where it gets nice is when you're performing lots of operations with values that have phantom types carrying extra information. In many cases, you can avoid most of the explicit applications of reflect and the explicit proxy handling, because type inference just takes care of it all for you. For a good example of this technique in action, see the hyperloglog package. Configuration information for the HyperLogLog data structure is carried in a type parameter; this guarantees, at compile time, that only similarly configured structures are merged with each other.

Parsing to Free Monads

Say I have the following free monad:
data ExampleF a
  = Foo Int a
  | Bar String (Int -> a)
  deriving Functor
type Example = Free ExampleF -- this is the free monad I want to discuss
I know how I can work with this monad, eg. I could write some nice helpers:
foo :: Int -> Example ()
foo i = liftF $ Foo i ()
bar :: String -> Example Int
bar s = liftF $ Bar s id
So I can write programs in haskell like:
fooThenBar :: Example Int
fooThenBar = do
  foo 10
  bar "nice"
I know how to print it, interpret it, etc. But what about parsing it?
Would it be possible to write a parser that could parse arbitrary
programs like:
foo 12
bar nice
foo 11
foo 42
So I can store them, serialize them, use them in cli programs etc.
The problem I keep running into is that the type of the program depends on which program is being parsed. If the program ends with a foo it's of type Example (); if it ends with a bar it's of type Example Int.
I do not feel like writing parsers for every possible permutation (it's simple here because there are only two possibilities, but imagine we add Baz Int (String -> a), Doo (Int -> a), Moz Int a, Foz String a, ...). This gets tedious and error-prone.
Perhaps I'm solving the wrong problem?
Boilerplate
To run the above examples, you need to add this to the beginning of the file:
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.Free
import Text.ParserCombinators.Parsec
Note: I put up a gist containing this code.
Not every Example value can be represented on the page without reimplementing some portion of Haskell. For example, return putStrLn has a type of Example (String -> IO ()), but I don't think it makes sense to attempt to parse that sort of Example value out of a file.
So let's restrict ourselves to parsing the examples you've given, which consist only of calls to foo and bar sequenced with >> (that is, no variable bindings and no arbitrary computations)*. The Backus-Naur form for our grammar looks approximately like this:
<program> ::= "" | <expr> "\n" <program>
<expr> ::= "foo " <integer> | "bar " <string>
It's straightforward enough to parse our two types of expression...
type Parser = Parsec String ()
int :: Parser Int
int = fmap read (many1 digit)
parseFoo :: Parser (Example ())
parseFoo = string "foo " *> fmap foo int
parseBar :: Parser (Example Int)
parseBar = string "bar " *> fmap bar (many1 alphaNum)
... but how can we give a type to the composition of these two parsers?
parseExpr :: Parser (Example ???)
parseExpr = parseFoo <|> parseBar
parseFoo and parseBar have different types, so we can't compose them with <|> :: Alternative f => f a -> f a -> f a. Moreover, there's no way to know ahead of time which type the program we're given will be: as you point out, the type of the parsed program depends on the value of the input string. "Types depending on values" is called dependent types; Haskell doesn't feature a proper dependent type system, but it comes close enough for us to have a stab at making this example work.
Let's start by forcing the expressions on either side of <|> to have the same type. This involves erasing Example's type parameter using existential quantification.†
data Ex a = forall i. Wrap (a i)
parseExpr :: Parser (Ex Example)
parseExpr = fmap Wrap parseFoo <|> fmap Wrap parseBar
This typechecks, but the parser now returns an Example containing a value of an unknown type. A value of unknown type is of course useless - but we do know something about Example's parameter: it must be either () or Int because those are the return types of parseFoo and parseBar. Programming is about getting knowledge out of your brain and onto the page, so we're going to wrap up the Example value with a bit of GADT evidence which, when unwrapped, will tell you whether a was Int or ().
data Ty a where
  IntTy :: Ty Int
  UnitTy :: Ty ()

data (a :*: b) i = a i :&: b i
type Sig a b = Ex (a :*: b)
pattern Sig x y = Wrap (x :&: y)

parseExpr :: Parser (Sig Ty Example)
parseExpr = fmap (\x -> Sig UnitTy x) parseFoo <|>
            fmap (\x -> Sig IntTy x) parseBar
Ty is (something like) a runtime "singleton" representative of Example's type parameter. When you pattern match on IntTy, you learn that a ~ Int; when you pattern match on UnitTy you learn that a ~ (). (Information can be made to flow the other way, from types to values, using classes.) :*:, the functor product, pairs up two type constructors ensuring that their parameters are equal; thus, pattern matching on the Ty tells you about its accompanying Example.
Sig is therefore called a dependent pair or sigma type - the type of the second component of the pair depends on the value of the first. This is a common technique: when you erase a type parameter by existential quantification, it usually pays to make it recoverable by bundling up a runtime representative of that parameter.
Note that this use of Sig is equivalent to Either (Example Int) (Example ()) - a sigma type is a sum, after all - but this version scales better when you're summing over a large (or possibly infinite) set.
Now it's easy to build our expression parser into a program parser. We just have to repeatedly apply the expression parser, and then manipulate the dependent pairs in the list.
parseProgram :: Parser (Sig Ty Example)
parseProgram = fmap (foldr1 combine) $ parseExpr `sepBy1` (char '\n')
  where combine (Sig _ val) (Sig ty acc) = Sig ty (val >> acc)
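As a hedged end-to-end sketch (main and the sample input are my additions, reusing parse from Parsec and the parsers defined above):
main :: IO ()
main =
  case parse parseProgram "<example>" "foo 12\nbar nice\nfoo 11\nfoo 42" of
    Left err             -> print err
    Right (Sig IntTy  _) -> putStrLn "parsed a program of type Example Int"
    Right (Sig UnitTy _) -> putStrLn "parsed a program of type Example ()"
Pattern matching on the Ty singleton is what recovers the erased type index, so each branch knows the concrete type of the program it received.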
The code I've shown you is not exemplary. It doesn't separate the concerns of parsing and typechecking. In production code I would modularise this design by first parsing the data into an untyped syntax tree - a separate data type which doesn't enforce the typing invariant - then transform that into a typed version by type-checking it. The dependent pair technique would still be necessary to give a type to the output of the type-checker, but it wouldn't be tangled up in the parser.
*If binding is not a requirement, have you thought about using a free applicative to represent your data?
†Ex and :*: are reusable bits of machinery which I lifted from the Hasochism paper
So, I worry that this is the same sort of premature abstraction that you see in object-oriented languages, getting in the way of things. For example, I am not 100% sure that you are using the structure of the free monad: your helpers simply seem to use id and () in a rather boring way, and in fact I'm not sure whether your Int -> x is ever anything other than either Pure :: Int -> Free ExampleF Int or const (something :: Free ExampleF Int).
The free monad for a functor F can basically be described as a tree whose data is stored in leaves and whose branching factor is controlled by the recursion in each constructor of the functor F. So for example Free Identity has no branching, hence only one leaf, and thus has the same structure as the monad:
data MonoidalFree m x = MF m x deriving (Functor)

instance Monoid m => Monad (MonoidalFree m) where
  return x = MF mempty x
  MF m x >>= my_x = case my_x x of MF n y -> MF (mappend m n) y
In fact Free Identity is isomorphic to MonoidalFree (Sum Integer); the difference is just that instead of MF (Sum 3) "Hello" you see Free . Identity . Free . Identity . Free . Identity $ Pure "Hello" as the means of tracking this integer. On the other hand, if you have data E x = L x | R x deriving (Functor), then you get a sort of "path" of Ls and Rs before you hit this one leaf, and Free E is going to be isomorphic to MonoidalFree [Bool].
The reason I'm going through this is that when you combine Free with an Integer -> x functor, you get an infinitely branching tree, and when I'm looking through your code to figure out how you're actually using this tree, all I see is that you use the id function with it. As far as I can tell, that restricts the recursion to either have the form Free (Bar "string" Pure) or else Free (Bar "string" (const subExpression)), in which case the system would seem to reduce completely to the MonoidalFree [Either Int String] monad.
(At this point I should pause to ask: Is that correct as far as you know? Was this what was intended?)
Anyway. Aside from my problems with your premature abstraction, the specific problem that you're citing with your monad (you can't tell the difference between () and Int) has a bunch of really complicated solutions, but also one really easy one. The really easy solution is to yield a value of type Example (Either () Int): if you have a () you can fmap Left onto it, and if you have an Int you can fmap Right onto it.
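Reusing the parseFoo and parseBar combinators from the other answer, that easy solution might be sketched as (the name parseExprEither is mine):
parseExprEither :: Parser (Example (Either () Int))
parseExprEither = fmap (fmap Left) parseFoo <|> fmap (fmap Right) parseBar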
Without a much better understanding of how you're using this thing over TCP/IP we can't recommend a better structure for you than the generic free monads that you seem to be finding -- in particular we'd need to know how you're planning on using the infinite-branching of Int -> x options in practice.

Use of 'unsafeCoerce'

In Haskell, there is a function called unsafeCoerce, that turns anything into any other type of thing. What exactly is this used for? Like, why would you want to transform things into each other in such an "unsafe" way?
Provide an example of a way that unsafeCoerce is actually used. A link to Hackage would help. Example code in someone's question would not.
unsafeCoerce lets you convince the type system of whatever property you like. It's thus only "safe" exactly when you can be completely certain that the property you're declaring is true. So, for instance:
unsafeCoerce True :: Int
is a violation and can lead to wonky, bad runtime behavior.
unsafeCoerce (3 :: Int) :: Int
is (obviously) fine and will not lead to runtime misbehavior.
So what's a non-trivial use of unsafeCoerce? Let's say we've got a typeclass-bound existential type
module MyClass ( SomethingMyClass (..), intSomething ) where
class MyClass x where {}
instance MyClass Int where {}
data SomethingMyClass = forall a. MyClass a => SomethingMyClass a
Let's also say, as noted here, that the typeclass MyClass is not exported and thus nobody else can ever create instances of it. Indeed, Int is the only thing that instantiates it and the only thing that ever will.
Now when we pattern match to destruct a value of SomethingMyClass we'll be able to pull a "something" out from inside
foo :: SomethingMyClass -> ...
foo (SomethingMyClass a) =
  -- here we have a value `a` with type `exists a . MyClass a => a`
  --
  -- this is totally useless since `MyClass` doesn't even have any
  -- methods for us to use!
  ...
Now, at this point, as the comment suggests, the value we've pulled out has no type information—it's been "forgotten" by the existential context. It could be absolutely anything which instantiates MyClass.
Of course, in this very particular situation we know that the only thing implementing MyClass is Int. So our value a must actually have type Int. We could never convince the typechecker that this is true, but due to an outside proof we know that it is.
Therefore, we can (very carefully)
intSomething :: SomethingMyClass -> Int
intSomething (SomethingMyClass a) = unsafeCoerce a -- shudder!
Now, hopefully I've suggested that this is a terrible, dangerous idea, but it also may give a taste of what kind of information we can take advantage of in order to know things that the typechecker cannot.
In non-pathological situations, this is rare. Even rarer is a situation where using something we know and the typechecker doesn't isn't itself pathological. In the above example, we must be completely certain that nobody ever extends our MyClass module to instantiate more types to MyClass otherwise our use of unsafeCoerce becomes instantly unsafe.
> instance MyClass Bool where {}
> intSomething (SomethingMyClass True)
6917529027658597398
Looks like our compiler internals are leaking!
A more common example where this sort of behavior might be valuable is when using newtype wrappers. It's a fairly common idea that we might wrap a type in a newtype wrapper in order to specialize its instance definitions.
For example, Int does not have a Monoid definition because there are two natural monoids over Ints: sums and products. Instead, we use newtype wrappers to be more explicit.
newtype Sum a = Sum { getSum :: a }
instance Num a => Monoid (Sum a) where
  mempty = Sum 0
  mappend (Sum a) (Sum b) = Sum (a+b)
Now, normally the compiler is pretty smart and recognizes that it can eliminate all of those Sum constructors in order to produce more efficient code. Sadly, there are times when it cannot, especially in highly polymorphic situations.
If you (a) know that some type a is actually just a newtype-wrapped b and (b) know that the compiler is incapable of deducing this itself, then you might want to do
unsafeCoerce (x :: a) :: b
for a slight efficiency gain. This, for instance, occurs frequently in lens and is expressed in the Data.Profunctor.Unsafe module of profunctors, a dependency of lens.
But let me again suggest that you really need to know what's going on before using unsafeCoerce like this is anything but highly unsafe.
One final thing to compare is the "typesafe cast" available in Data.Typeable. This function looks a bit like unsafeCoerce, but with much more ceremony.
unsafeCoerce :: a -> b
cast :: (Typeable a, Typeable b) => a -> Maybe b
You might think of cast as being implemented using unsafeCoerce and a function typeOf :: Typeable a => a -> TypeRep, where TypeRep values are unforgeable, runtime tokens which reflect the type of a value. Then we have
cast :: (Typeable a, Typeable b) => a -> Maybe b
cast a = if (typeOf a == typeOf b) then Just b else Nothing
  where b = unsafeCoerce a
Thus, cast is able to ensure that the types of a and b really are the same at runtime, and it can decide to return Nothing if they are not. As an example:
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable (Typeable, cast)
data A = A deriving (Show, Typeable)
data B = B deriving (Show, Typeable)
data Forget = forall a . Typeable a => Forget a
getAnA :: Forget -> Maybe A
getAnA (Forget something) = cast something
which we can run as follows
> getAnA (Forget A)
Just A
> getAnA (Forget B)
Nothing
So if we compare this usage of cast with unsafeCoerce we see that it can achieve some of the same functionality. In particular, it allows us to rediscover information that may have been forgotten by ExistentialQuantification. However, cast manually checks the types at runtime to ensure that they are truly the same and thus cannot be used unsafely. To do this, it demands that both the source and target types allow for runtime reflection of their types via the Typeable class.
The only time I ever felt compelled to use unsafeCoerce was on finite natural numbers.
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, StandaloneDeriving #-}
data Nat = Z | S Nat deriving (Eq, Show)
data Fin (n :: Nat) :: * where
  FZ :: Fin (S n)
  FS :: Fin n -> Fin (S n)
deriving instance Show (Fin n)
Fin n is a singly linked data structure that is statically ensured to be smaller than the type-level natural number n by which it is parametrized.
-- OK, 1 < 2
validFin :: Fin (S (S Z))
validFin = FS FZ
-- type error, 2 < 2 is false
invalidFin :: Fin (S (S Z))
invalidFin = FS (FS FZ)
Fin can be used to safely index into various data structures. It's pretty standard in dependently typed languages, though not in Haskell.
Sometimes we want to convert a value of Fin n to Fin m where m is greater than n.
relaxFin :: Fin n -> Fin (S n)
relaxFin FZ = FZ
relaxFin (FS n) = FS (relaxFin n)
relaxFin is a no-op by definition, but traversing the value is still required for the types to check out. So we might just use unsafeCoerce instead of relaxFin. More pronounced gains in speed can result from coercing larger data structures that contain Fin-s (for example, you could have lambda terms with Fin-s as bound variables).
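The unsafe shortcut being described is roughly:
import Unsafe.Coerce (unsafeCoerce)

-- same behaviour as relaxFin, but without traversing the value
relaxFin' :: Fin n -> Fin (S n)
relaxFin' = unsafeCoerce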
This is an admittedly exotic example, but I find it interesting in the sense that it's pretty safe: I can't really think of ways for external libraries or safe user code to mess this up. I might be wrong though and I'd be eager to hear about potential safety issues.
There is no use of unsafeCoerce I can really recommend, but I can see that in some cases such a thing might be useful.
The first use that springs to mind is the implementation of the Typeable-related routines. In particular cast :: (Typeable a, Typeable b) => a -> Maybe b achieves a type-safe behaviour, so it is safe to use, yet it has to play dirty tricks in its implementation.
Maybe unsafeCoerce can find some use when importing FFI subroutines, to force types to match. After all, FFI already allows you to import impure C functions as pure ones, so it is intrinsically unsafe. Note that "unsafe" does not mean impossible to use, but just "putting the burden of proof on the programmer".
Finally, pretend that sortBy did not exist. Consider then this example:
import Data.List (sort)

-- Like Int, but using the opposite ordering
newtype Rev = Rev { unRev :: Int } deriving Eq

instance Ord Rev where compare (Rev x) (Rev y) = compare y x
sortDescending :: [Int] -> [Int]
sortDescending = map unRev . sort . map Rev
The code above works, but feels silly IMHO. We perform two maps using functions such as Rev and unRev which we know to be no-ops at runtime. So we just scan the list twice for no reason other than convincing the compiler to use the right Ord instance.
The performance impact of these maps should be small since we also sort the list. Yet it is tempting to rewrite map Rev as unsafeCoerce :: [Int]->[Rev] and save some time.
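That rewrite would look roughly like this (a sketch; it is only defensible because Rev is a newtype over Int with the same runtime representation):
import Unsafe.Coerce (unsafeCoerce)

sortDescending' :: [Int] -> [Int]
sortDescending' = (unsafeCoerce :: [Rev] -> [Int]) . sort . (unsafeCoerce :: [Int] -> [Rev])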
Note that having a coercing function
castNewtype :: IsNewtype t1 t2 => f t2 -> f t1
where the constraint means that t1 is a newtype for t2 would help, but it would be quite dangerous. Consider
castNewtype :: Data.Set Int -> Data.Set Rev
The above would cause the data structure invariant to break, since we are changing the ordering underneath! Since Data.Set is implemented as a binary search tree, it would cause quite a large damage.

Are there "type-level combinators"? Will they exist in some future?

Much of what makes Haskell really nice to use, in my opinion, are combinators such as (.), flip, ($), (<*>), and so on. It feels almost like I can create new syntax when I need to.
Some time ago I was doing something where it would be tremendously convenient if I could "flip" a type constructor. Suppose I have some type constructor:
m :: * -> * -> *
and that I have a class MyClass that needs a type with a type constructor with kind * -> *. Naturally I would choose to code the type in such a way that I can do:
instance MyClass (m a)
But suppose I can't change that code, and suppose that what really fits into MyClass is something like
type w b = m b a
instance MyClass w where
...
and then I'd have to activate XTypeSynonymInstances. Is there some way to create a "type level combinator" Flip such that I can just do:
instance MyClass (Flip m a) where
...
?? Or other type-level generalisations of common operators we use in Haskell? Is this even useful, or am I just rambling?
Edit:
I could do something like:
newtype Flip m a b = Flip (m b a)
newtype Dot m w a = Dot (m (w a))
...
But then I'd have to use the data constructors Flip, Dot, ... around for pattern matching and etc. Is it worth it?
Your question makes sense, but the answer is: no, it's not currently possible.
The problem is that (in GHC Haskell's type system) you can't have lambdas at the type level. For anything you might try that looks like it could emulate or achieve the effect of a type level lambda, you will discover that it doesn't work. (I know, because I did.)
What you can do is declare your Flip newtypes, and then write instances of the classes you want for them, painfully with the wrapping and the unwrapping (by the way: use record syntax), and then clients of the classes can use the newtypes in type signatures and not have to worry about the details.
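To make the "painfully with the wrapping and the unwrapping" part concrete, an instance for such a newtype might look like this (a sketch; Flip is the asker's newtype, here with record syntax):
{-# LANGUAGE FlexibleInstances #-}

newtype Flip m a b = Flip { runFlip :: m b a }

instance Functor (Flip Either a) where
  fmap f (Flip (Left b))  = Flip (Left (f b))
  fmap _ (Flip (Right a)) = Flip (Right a)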
I'm not a type theorist and I don't know the details of why exactly we can't have type level lambdas. I think it was something to do with type inference becoming impossible, but again, I don't really know.
You can do the following, but I don't think it's actually very useful, since you still can't really partially apply it:
{-# LANGUAGE TypeFamilies, FlexibleInstances #-}
module Main where
class TFlip a where
  type FlipT a

instance TFlip (f a b) where
  type FlipT (f a b) = f b a
-- *Main> :t (undefined :: FlipT (Either String Int))
-- (undefined :: FlipT (Either String Int)) :: Either Int [Char]
Also see this previous discussion: Lambda for type expressions in Haskell?
I'm writing an answer here just to clarify things and to describe what has become possible in recent years. Haskell has gained a lot of features, and you can now write some operators at the type level. Using $ you can write something like this:
foo :: Int -> Either String $ Maybe $ Maybe Int
to avoid parentheses, instead of the good old
foo :: Int -> Either String (Maybe (Maybe Int))
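That relies on a type-level $ being in scope; one is provided by packages such as type-operators, or you can define it yourself. A minimal sketch (the definition and foo's body are mine):
{-# LANGUAGE TypeOperators #-}

type f $ a = f a
infixr 0 $

foo :: Int -> Either String $ Maybe $ Maybe Int
foo = Right . Just . Just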

Resources