Haskell: Type safety with logically different Boolean values

Let's say I've got the following code:
type IsTall = Bool
type IsAlive = Bool
is_short_alive_person is_tall is_alive = (not is_tall) && is_alive
Say, later on, I've got the following:
a :: IsAlive
a = False
b :: IsTall
b = True
And call the following, getting the two arguments around the wrong way:
is_short_alive_person a b
Unfortunately, this compiles successfully, and at runtime tall dead people are found instead of short alive people.
I would like the above example not to compile.
My first attempt was:
newtype IsAlive = IsAlive Bool
newtype IsTall = IsTall Bool
But then I can't do something like:
switch_height :: IsTall -> IsTall
switch_height h = not h
As not is not defined on IsTalls, only Bools.
I could explicitly extract the Bools all the time, but that largely defeats the purpose.
Basically, I want IsTalls to interact with other IsTalls, just like they're Bools, except they won't interact with Bools and IsAlives without an explicit cast.
What's the best way to achieve this?
P.S. I think I've achieved this with numbers in GHC by doing:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype UserID = UserID Int deriving (Eq, Ord, Num)
newtype GroupID = GroupID Int deriving (Eq, Ord, Num)
(i.e. UserIDs and GroupIDs shouldn't interact)
but I can't seem to do this with Bools (deriving Bool doesn't work). I'm not even sure the above is the best approach anyway.

If you change your data type slightly, you can make it an instance of Functor and then use fmap to do operations on the underlying Boolean:
import Control.Applicative
newtype IsAliveBase a = IsAlive a
newtype IsTallBase a = IsTall a
type IsAlive = IsAliveBase Bool
type IsTall = IsTallBase Bool
instance Functor IsAliveBase where
fmap f (IsAlive b) = IsAlive (f b)
instance Functor IsTallBase where
fmap f (IsTall b) = IsTall (f b)
switch_height :: IsTall -> IsTall
switch_height h = not <$> h -- or fmap not h
EDIT: For operations like (&&), you can make it an instance of Applicative:
instance Applicative IsAliveBase where
pure = IsAlive
(IsAlive f) <*> (IsAlive x) = IsAlive (f x)
and then you can apply (&&) using liftA2. Example:
*Main> let h = IsAlive True
*Main> liftA2 (&&) h h
IsAlive True
You can read more about this at http://en.wikibooks.org/wiki/Haskell/Applicative_Functors
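To round this off, here's a hedged sketch of my own (not part of the answer) that gives IsTallBase the analogous Applicative instance and names the lifted connectives, so IsTall values only combine with IsTall values and IsAlive only with IsAlive:

instance Applicative IsTallBase where
  pure = IsTall
  IsTall f <*> IsTall x = IsTall (f x)

-- Lifted connectives: usable at IsTall -> IsTall -> IsTall and
-- IsAlive -> IsAlive -> IsAlive, but never mixing the two.
andB, orB :: Applicative f => f Bool -> f Bool -> f Bool
andB = liftA2 (&&)
orB  = liftA2 (||)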

Your options are either to define algebraic data types like
data Height = Tall | Short
data Wiggliness = Alive | Dead
or to define new operators, e.g., &&&, |||, complement and to overload them on the type of your choice. But even with overloading you won't be able to use them with if.
I'm not sure that Boolean operations on height make sense anyway. How do you justify a conclusion that "tall and short equals short" but "tall or short equals tall"?
I suggest you look for different names for your connectives, which you can then overload.
P.S. Haskell is always getting new features, so the best I can say is that if you can overload if, I'm not aware of it. To say about Haskell that "such-and-such can't be done" is always dangerous...
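For instance, here's a hedged sketch of the algebraic-data-type route applied to the original function (the Height and Wiggliness names come from this answer; the rest is my own illustration):

data Height     = Tall  | Short deriving (Eq, Show)
data Wiggliness = Alive | Dead  deriving (Eq, Show)

-- Swapping the arguments is now a type error, and no boolean
-- connectives are needed: pattern matching does the job.
is_short_alive_person :: Height -> Wiggliness -> Bool
is_short_alive_person Short Alive = True
is_short_alive_person _     _     = False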

You can get some way towards this, using newtypes and a class if you import the Prelude hiding the boolean functions you want to use with your IsTall and IsAlive values. You redefine the boolean functions as methods in the class, for which you then make instances for all 3 of the Bool, IsTall, and IsAlive types. If you use GeneralizedNewtypeDeriving you can even get the IsTall and IsAlive instances without having to write the wrapping/unwrapping boilerplate by hand.
Here's an example script I actually tried out in ghci:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Prelude hiding ((&&), (||), not)
import qualified Prelude
class Boolish a where
(&&) :: a -> a -> a
(||) :: a -> a -> a
not :: a -> a
instance Boolish Bool where
(&&) = (Prelude.&&)
(||) = (Prelude.||)
not = Prelude.not
newtype IsTall = IsTall Bool
deriving (Eq, Ord, Show, Boolish)
newtype IsAlive = IsAlive Bool
deriving (Eq, Ord, Show, Boolish)
You can now &&, ||, and not values of any of the three types, but not together. And they're separate types, so your function signatures can now restrict which of the 3 they want to accept.
Higher order functions defined in other modules will work fine with this, as in:
*Main> map not [IsTall True, IsTall False]
[IsTall False,IsTall True]
But you won't be able to pass an IsTall to any other function defined elsewhere that expects a Bool, because the other module will still be using the Prelude version of the boolean functions. Language constructs like if ... then ... else ... will still be a problem too (although a comment by hammar on Norman Ramsey's answer says you can fix this with another GHC extension). I'd probably add a toBool method to that class to uniformly convert back to regular Bools, which would help mitigate such problems.
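That toBool method might look like this, i.e. extending the class and the Bool instance above (a sketch of my own, not part of the original answer):

class Boolish a where
  (&&)   :: a -> a -> a
  (||)   :: a -> a -> a
  not    :: a -> a
  toBool :: a -> Bool          -- explicit escape hatch back to Bool

instance Boolish Bool where
  (&&)   = (Prelude.&&)
  (||)   = (Prelude.||)
  not    = Prelude.not
  toBool = id

-- The GeneralizedNewtypeDeriving clauses above still derive the IsTall
-- and IsAlive instances unchanged, so e.g.
--   if toBool isTall then ... else ...
-- works without unwrapping by hand.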

Related

Set specific properties for data in Haskell

Let us say I want to make a ADT as follows in Haskell:
data Properties = Property String [String]
deriving (Show,Eq)
I want to know if it is possible to give the second list a bounded and enumerated property. Basically, the first element of the list will be the minBound and the last element will be the maxBound. I am trying:
data Properties a = Property String [a]
deriving (Show, Eq)
instance Bounded (Properties a) where
minBound a = head a
maxBound a = (head . reverse) a
But not having much luck.
Well no, you can't do quite what you're asking, but maybe you'll find inspiration in this other neat trick.
{-# language ScopedTypeVariables, FlexibleContexts, UndecidableInstances #-}
import Data.Reflection -- from the reflection package
import qualified Data.List.NonEmpty as NE
import Data.List.NonEmpty (NonEmpty (..))
import Data.Proxy
-- Just the plain string part
newtype Pstring p = P String deriving Eq
-- Those properties you're interested in. It will
-- only be possible to produce bounds if there's at
-- least one property, so NonEmpty makes more sense
-- than [].
type Props = NonEmpty String
-- This is just to make a Show instance that does
-- what you seem to want easier to write. It's not really
-- necessary.
data Properties = Property String [String] deriving Show
Now we get to the key part, where we use reflection to produce class instances that can depend on run-time values. Roughly speaking, you can think of
Reifies x t => ...
as being a class-level version of
\(x :: t) -> ...
Because it operates at the class level, you can use it to parametrize instances. Since Reifies x t binds a type variable x, rather than a term variable, you need to use reflect to actually get the value back. If you happen to have a value on hand whose type ends in p, then you can just apply reflect to that value. Otherwise, you can always magic up a Proxy :: Proxy p to do the job.
-- If some Props are "in the air" tied to the type p,
-- then we can show them along with the string.
instance Reifies p Props => Show (Pstring p) where
  showsPrec k p@(P str) =
    showsPrec k $ Property str (NE.toList $ reflect p)
-- If some Props are "in the air" tied to the type p,
-- then we can give Pstring p a Bounded instance.
instance Reifies p Props => Bounded (Pstring p) where
minBound = P $ NE.head (reflect (Proxy :: Proxy p))
maxBound = P $ NE.last (reflect (Proxy :: Proxy p))
Now we need to have a way to actually bind types that can be passed to the type-level lambdas. This is done using the reify function. So let's throw some Props into the air and then let the butterfly nets get them back.
main :: IO ()
main = reify ("Hi" :| ["how", "are", "you"]) $
\(_ :: Proxy p) -> do
print (minBound :: Pstring p)
print (maxBound :: Pstring p)
dfeuer@squirrel:~/src> ./WeirdBounded
Property "Hi" ["Hi","how","are","you"]
Property "you" ["Hi","how","are","you"]
You can think of reify x $ \(p :: Proxy p) -> ... as binding a type p to the value x; you can then pass the type p where you like by constraining things to have types involving p.
If you're just doing a couple of things, all this machinery is way more than necessary. Where it gets nice is when you're performing lots of operations with values that have phantom types carrying extra information. In many cases, you can avoid most of the explicit applications of reflect and the explicit proxy handling, because type inference just takes care of it all for you. For a good example of this technique in action, see the hyperloglog package. Configuration information for the HyperLogLog data structure is carried in a type parameter; this guarantees, at compile time, that only similarly configured structures are merged with each other.

How can I read the metadata of a type at runtime?

I'd like to write a program that prints out some metadata of a Haskell type. Although I know this isn't valid code, the idea is something like:
data Person = Person { name :: String, age :: Int }
metadata :: Type -> String
metadata t = ???
metadata Person -- returns "Person (name,age)"
The important restriction being I don't have an instance of Person, just the type.
I've started looking into Generics & Typeable/Data, but without an instance I'm not sure they'll do what I need. Can anyone point me in the right direction?
Reflection in Haskell works using the Typeable class, which is defined in Data.Typeable and includes the typeOf* method to get a run-time representation of a value's type.
ghci> :m +Data.Typeable
ghci> :t typeOf 'a'
typeOf 'a' :: TypeRep
ghci> typeOf 'a' -- We could use any value of type Char and get the same result
Char -- the `Show` instance of `TypeRep` just returns the name of the type
If you want Typeable to work for your own types, you can have the compiler generate an instance for you with the DeriveDataTypeable extension.
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable
data Person = Person { name :: String, age :: Int } deriving Typeable
You can also write your own instance, but really, no one has the time for that. (Apparently you can't write your own instance anyway; see the comments.)
You can now use typeOf to grab a run-time representation of your type. We can query information about the type constructor (abbreviated to TyCon) and its type arguments:
-- (undefined :: Person) stands for "some value of type Person".
-- If you have a real Person you can use that too.
-- typeOf does not use the value, only the type
-- (which is known at compile-time; typeOf is dispatched using the normal instance selection rules)
ghci> typeOf (undefined :: Person)
Person
ghci> tyConName $ typeRepTyCon $ typeOf (undefined :: Person)
"Person"
ghci> tyConModule $ typeRepTyCon $ typeOf (undefined :: Person)
"Main"
Data.Typeable also provides a type-safe cast operation which allows you to branch on a value's runtime type, somewhat like C#'s as operator.
f :: Typeable a => a -> String
f x = case (cast x :: Maybe Int) of
Just i -> "I can treat i as an int in this branch " ++ show (i * i)
Nothing -> case (cast x :: Maybe Bool) of
Just b -> "I can treat b as a bool in this branch " ++ if b then "yes" else "no"
Nothing -> "x was of some type other than Int or Bool"
ghci> f True
"I can treat b as a bool in this branch yes"
ghci> f (3 :: Int)
"I can treat i as an int in this branch 9"
Incidentally, a nicer way to write f is to use a GADT enumerating the set of types you expect your function to be called with. This allows us to lose the Maybe (f can never fail!), does a better job of documenting our assumptions, and gives compile-time feedback when we need to change the set of admissible argument types for f. (You can write a class to make Admissible implicit if you like.)
data Admissible a where
AdInt :: Admissible Int
AdBool :: Admissible Bool
f :: Admissible a -> a -> String
f AdInt i = "I can treat i as an int in this branch " ++ show (i * i)
f AdBool b = "I can treat b as a bool in this branch " ++ if b then "yes" else "no"
In reality I probably wouldn't do either of these - I'd just stick f in a class and define instances for Int and Bool.
If you want run-time information about the right-hand side of a type definition, you need to use the entertainingly-named Data.Data, which defines a subclass of Typeable called Data.** GHC can derive Data for you too, with the same extension:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable
import Data.Data
data Person = Person { name :: String, age :: Int } deriving (Typeable, Data)
Now we can grab a run-time representation of the values of a type, not just the type itself:
ghci> dataTypeOf (undefined :: Person)
DataType {tycon = "Main.Person", datarep = AlgRep [Person]}
ghci> dataTypeConstrs $ dataTypeOf (undefined :: Person)
[Person] -- Person only defines one constructor, called Person
ghci> constrFields $ head $ dataTypeConstrs $ dataTypeOf (undefined :: Person)
["name","age"]
Data.Data is the API for generic programming; if you ever hear people talking about "Scrap Your Boilerplate", this (along with Data.Generics, which builds on Data.Data) is what they mean. For example, you can write a function which converts record types to JSON using reflection on the type's fields.
toJSON :: Data a => a -> String
-- Implementation omitted because it is boring.
-- But you only have to write the boring code once,
-- and it'll be able to serialise any instance of `Data`.
-- It's a good exercise to try to write this function yourself!
* In recent versions of GHC, this API has changed somewhat. Consult the docs.
** Yes, the fully-qualified name of that class is Data.Data.Data.

Why constraints on data are a bad thing?

I know this question has been asked and answered lots of times but I still don't really understand why putting constraints on a data type is a bad thing.
For example, let's take Data.Map k a. All of the useful functions involving a Map need an Ord k constraint. So there is an implicit constraint on the definition of Data.Map. Why is it better to keep it implicit instead of letting the compiler and programmers know that Data.Map needs an orderable key?
Also, specifying a concrete type in a type declaration is common, and one can see it as a way of "super"-constraining a data type.
For example, I can write
data User = User { name :: String }
and that's acceptable. However is that not a constrained version of
data User' s = User' { name :: s }
After all, 99% of the functions I'll write for the User type won't need a String, and the few that do would probably only need s to be an instance of IsString and Show.
So, why is the lax version of User considered bad:
data (IsString s, Show s, ...) => User'' s = User'' { name :: s }
while both User and User' are considered good?
I'm asking this, because lots of the time, I feel I'm unnecessarily narrowing my data (or even function) definitions, just to not have to propagate constraints.
Update
As far as I understand, data type constraints only apply to the constructor and don't propagate. So my question is then, why do data type constraints not work as expected (and propagate)? It's an extension anyway, so why not have a new extension doing data properly, if it was considered useful by the community?
TL;DR:
Use GADTs to provide implicit data contexts.
Don't use any kind of data constraint if you could do with Functor instances etc.
Map's too old to change to a GADT anyway.
Scroll to the bottom if you want to see the User implementation with GADTs
Let's use a case study of a Bag where all we care about is how many times something is in it (like an unordered sequence). We nearly always need an Eq constraint to do anything useful with it.
I'll use the inefficient list implementation so as not to muddy the waters over the Data.Map issue.
GADTs - the solution to the data constraint "problem"
The easy way to do what you're after is to use a GADT:
Notice below how the Eq constraint not only forces you to use types with an Eq instance when making GADTBags, it provides that instance implicitly wherever the GADTBag constructor appears. That's why count doesn't need an Eq context, whereas countV2 does - it doesn't use the constructor:
{-# LANGUAGE GADTs #-}
data GADTBag a where
GADTBag :: Eq a => [a] -> GADTBag a
unGADTBag (GADTBag xs) = xs
instance Show a => Show (GADTBag a) where
showsPrec i (GADTBag xs) = showParen (i>9) (("GADTBag " ++ show xs) ++)
count :: a -> GADTBag a -> Int -- no Eq here
count a (GADTBag xs) = length.filter (==a) $ xs -- but == here
countV2 a = length.filter (==a).unGADTBag
size :: GADTBag a -> Int
size (GADTBag xs) = length xs
ghci> count 'l' (GADTBag "Hello")
2
ghci> :t countV2
countV2 :: Eq a => a -> GADTBag a -> Int
Now we didn't need the Eq constraint when we found the total size of the bag, but it didn't clutter up our definition anyway. (We could have used size = length . unGADTBag just as well.)
Now lets make a functor:
instance Functor GADTBag where
fmap f (GADTBag xs) = GADTBag (map f xs)
oops!
DataConstraints_so.lhs:49:30:
Could not deduce (Eq b) arising from a use of `GADTBag'
from the context (Eq a)
That's unfixable (with the standard Functor class) because I can't restrict the type of fmap, but need to for the new list.
Data Constraint version
Can we do as you asked? Well, yes, except that you have to keep repeating the Eq constraint wherever you use the constructor:
{-# LANGUAGE DatatypeContexts #-}
data Eq a => EqBag a = EqBag {unEqBag :: [a]}
deriving Show
count' a (EqBag xs) = length.filter (==a) $ xs
size' (EqBag xs) = length xs -- Note: doesn't use (==) at all
Let's go to ghci to find out some less pretty things:
ghci> :so DataConstraints
DataConstraints_so.lhs:1:19: Warning:
-XDatatypeContexts is deprecated: It was widely considered a misfeature,
and has been removed from the Haskell language.
[1 of 1] Compiling Main ( DataConstraints_so.lhs, interpreted )
Ok, modules loaded: Main.
ghci> :t count
count :: a -> GADTBag a -> Int
ghci> :t count'
count' :: Eq a => a -> EqBag a -> Int
ghci> :t size
size :: GADTBag a -> Int
ghci> :t size'
size' :: Eq a => EqBag a -> Int
ghci>
So our EqBag count' function requires an Eq constraint, which I think is perfectly reasonable, but our size' function also requires one, which is less pretty. This is because the type of the EqBag constructor is EqBag :: Eq a => [a] -> EqBag a, and this constraint must be added every time.
We can't make a functor here either:
instance Functor EqBag where
fmap f (EqBag xs) = EqBag (map f xs)
for exactly the same reason as with the GADTBag
Constraintless bags
data ListBag a = ListBag {unListBag :: [a]}
deriving Show
count'' a = length . filter (==a) . unListBag
size'' = length . unListBag
instance Functor ListBag where
fmap f (ListBag xs) = ListBag (map f xs)
Now the types of count'' and size'' are exactly as we expect, and we can use standard constructor classes like Functor:
ghci> :t count''
count'' :: Eq a => a -> ListBag a -> Int
ghci> :t size''
size'' :: ListBag a -> Int
ghci> fmap (Data.Char.ord) (ListBag "hello")
ListBag {unListBag = [104,101,108,108,111]}
ghci>
Comparison and conclusions
The GADTs version automagically propagates the Eq constraint everywhere the constructor is used. The type checker can rely on there being an Eq instance, because you can't use the constructor for a non-Eq type.
The DatatypeContexts version forces the programmer to manually propagate the Eq constraint, which is fine by me if you want it, but is deprecated because it doesn't give you anything more than the GADT one does and was seen by many as pointless and annoying.
The unconstrained version is good because it doesn't prevent you from making Functor, Monad etc instances. The constraints are written exactly when they're needed, no more or less. Data.Map uses the unconstrained version partly because unconstrained is generally seen as most flexible, but also partly because it predates GADTs by some margin, and there needs to be a compelling reason to potentially break existing code.
What about your excellent User example?
I think that's a great example of a one-purpose data type that benefits from a constraint on the type, and I'd advise you to use a GADT to implement it.
(That said, sometimes I have a one-purpose data type and end up making it unconstrainedly polymorphic just because I love to use Functor (and Applicative), and would rather use fmap than mapBag because I feel it's clearer.)
{-# LANGUAGE GADTs #-}
import Data.String
data User s where
User :: (IsString s, Show s) => s -> User s
name :: User s -> s
name (User s) = s
instance Show (User s) where -- cool, no Show context
showsPrec i (User s) = showParen (i>9) (("User " ++ show s) ++)
instance (IsString s, Show s) => IsString (User s) where
fromString = User . fromString
Notice that since fromString does construct a value of type User s, we need the context explicitly. After all, we composed with the constructor User :: (IsString s, Show s) => s -> User s. The User constructor removes the need for an explicit context when we pattern match (destruct), because it already enforced the constraint when we used it as a constructor.
We didn't need the Show context in the Show instance because we used (User s) on the left hand side in a pattern match.
Constraints
The problem is that constraints are not a property of the data type, but of the algorithm/function that operates on them. Different functions might need different and unique constraints.
A Box example
As an example, let's assume we want to create a container called Box which contains only 2 values.
data Box a = Box a a
We want it to:
be showable
allow the sorting of the two elements via sort
Does it make sense to apply the constraint of both Ord and Show on the data type? No, because the data type in itself could be only shown or only sorted, and therefore the constraints are related to its use, not its definition.
instance (Show a) => Show (Box a) where
show (Box a b) = concat ["'", show a, ", ", show b, "'"]
instance (Ord a) => Ord (Box a) where
compare (Box a b) (Box c d) =
let ca = compare a c
cb = compare b d
in if ca /= EQ then ca else cb
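(One wrinkle: Ord has Eq as a superclass, so for the snippet above to compile you also need an Eq instance; a minimal sketch, which is my addition rather than part of the answer:)

instance (Eq a) => Eq (Box a) where
  Box a b == Box c d = a == c && b == d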
The Data.Map case
Data.Map's Ord constraint on the key type is really needed only when we have more than one element in the container. Otherwise the container is usable even without an Ord key. For example, this algorithm:
transf :: Map NonOrd Int -> Map NonOrd Int
transf x =
if Map.null x
then Map.singleton NonOrdA 1
else x
works just fine without the Ord constraint and always produces a non-empty map.
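(For the snippet to compile, NonOrd just needs to be some key type without an Ord instance; a hypothetical stand-in might be:)

import qualified Data.Map as Map
import Data.Map (Map)

-- Deliberately has no Ord instance.
data NonOrd = NonOrdA | NonOrdB deriving (Eq, Show)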
Using DataTypeContexts reduces the number of programs you can write. If most of those illegal programs are nonsense, you might say it's worth the runtime cost associated with ghc passing in a type class dictionary that isn't used. For example, if we had
data Ord k => MapDTC k a
then @jefffrey's transf is rejected. But we should probably have transf _ = return (NonOrdA, 1) instead.
In some sense the context is documentation that says "every Map must have ordered keys". If you look at all of the functions in Data.Map you'll reach a similar conclusion: "every useful Map has ordered keys". You can create maps with unordered keys using
mapKeysMonotonic :: (k1 -> k2) -> Map k1 a -> Map k2 a
singleton :: k2 -> a -> Map k2 a
but the moment you try to do anything useful with them, you'll wind up with a No instance for Ord k2 error somewhat later.

Use of 'unsafeCoerce'

In Haskell, there is a function called unsafeCoerce that turns anything into any other type of thing. What exactly is this used for? Like, why would you want to transform things into each other in such an "unsafe" way?
Provide an example of a way that unsafeCoerce is actually used. A link to Hackage would help. Example code in someone's question would not.
unsafeCoerce lets you convince the type system of whatever property you like. It's thus only "safe" exactly when you can be completely certain that the property you're declaring is true. So, for instance:
unsafeCoerce True :: Int
is a violation and can lead to wonky, bad runtime behavior.
unsafeCoerce (3 :: Int) :: Int
is (obviously) fine and will not lead to runtime misbehavior.
So what's a non-trivial use of unsafeCoerce? Let's say we've got a typeclass-bound existential type
module MyClass ( SomethingMyClass (..), intSomething ) where
class MyClass x where {}
instance MyClass Int where {}
data SomethingMyClass = forall a. MyClass a => SomethingMyClass a
Let's also say, as noted here, that the typeclass MyClass is not exported and thus nobody else can ever create instances of it. Indeed, Int is the only thing that instantiates it and the only thing that ever will.
Now when we pattern match to destruct a value of SomethingMyClass we'll be able to pull a "something" out from inside
foo :: SomethingMyClass -> ...
foo (SomethingMyClass a) =
-- here we have a value `a` with type `exists a . MyClass a => a`
--
-- this is totally useless since `MyClass` doesn't even have any
-- methods for us to use!
...
Now, at this point, as the comment suggests, the value we've pulled out has no type information—it's been "forgotten" by the existential context. It could be absolutely anything which instantiates MyClass.
Of course, in this very particular situation we know that the only thing implementing MyClass is Int. So our value a must actually have type Int. We could never convince the typechecker that this is true, but due to an outside proof we know that it is.
Therefore, we can (very carefully)
intSomething :: SomethingMyClass -> Int
intSomething (SomethingMyClass a) = unsafeCoerce a -- shudder!
Now, hopefully I've suggested that this is a terrible, dangerous idea, but it also may give a taste of what kind of information we can take advantage of in order to know things that the typechecker cannot.
In non-pathological situations, this is rare. Even rarer is a situation where using something we know and the typechecker doesn't isn't itself pathological. In the above example, we must be completely certain that nobody ever extends our MyClass module to instantiate more types to MyClass otherwise our use of unsafeCoerce becomes instantly unsafe.
> instance MyClass Bool where {}
> intSomething (SomethingMyClass True)
6917529027658597398
Looks like our compiler internals are leaking!
A more common example where this sort of behavior might be valuable is when using newtype wrappers. It's a fairly common idea that we might wrap a type in a newtype wrapper in order to specialize its instance definitions.
For example, Int does not have a Monoid definition because there are two natural monoids over Ints: sums and products. Instead, we use newtype wrappers to be more explicit.
newtype Sum a = Sum { getSum :: a }
instance Num a => Monoid (Sum a) where
mempty = Sum 0
mappend (Sum a) (Sum b) = Sum (a+b)
Now, normally the compiler is pretty smart and recognizes that it can eliminate all of those Sum constructors in order to produce more efficient code. Sadly, there are times when it cannot, especially in highly polymorphic situations.
If you (a) know that some type a is actually just a newtype-wrapped b and (b) know that the compiler is incapable of deducing this itself, then you might want to do
unsafeCoerce (x :: a) :: b
for a slight efficiency gain. This, for instance, occurs frequently in lens and is expressed in the Data.Profunctor.Unsafe module of profunctors, a dependency of lens.
But let me again suggest that you really need to know what's going on before deciding that using unsafeCoerce like this is anything but highly unsafe.
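As a hedged illustration of the newtype case (using the Sum wrapper defined above; in modern GHC you would normally reach for the safer Data.Coerce.coerce, which checks representational equality, rather than unsafeCoerce):

import Unsafe.Coerce (unsafeCoerce)

-- mconcat . map Sum walks the list just to wrap every element;
-- the coercion reuses the list as-is. This is only justified because
-- Sum a has exactly the same runtime representation as a.
sumInts :: [Int] -> Int
sumInts = getSum . mconcat . (unsafeCoerce :: [Int] -> [Sum Int])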
One final thing to compare is the "typesafe cast" available in Data.Typeable. This function looks a bit like unsafeCoerce, but with much more ceremony.
unsafeCoerce :: a -> b
cast :: (Typeable a, Typeable b) => a -> Maybe b
You might think of cast as being implemented using unsafeCoerce and a function typeOf :: Typeable a => a -> TypeRep, where TypeRep values are unforgeable, runtime tokens which reflect the type of a value. Then we have
cast :: (Typeable a, Typeable b) => a -> Maybe b
cast a = if (typeOf a == typeOf b) then Just b else Nothing
where b = unsafeCoerce a
Thus, cast is able to ensure that the types of a and b really are the same at runtime, and it can decide to return Nothing if they are not. As an example:
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE ExistentialQuantification #-}
data A = A deriving (Show, Typeable)
data B = B deriving (Show, Typeable)
data Forget = forall a . Typeable a => Forget a
getAnA :: Forget -> Maybe A
getAnA (Forget something) = cast something
which we can run as follows
> getAnA (Forget A)
Just A
> getAnA (Forget B)
Nothing
So if we compare this usage of cast with unsafeCoerce we see that it can achieve some of the same functionality. In particular, it allows us to rediscover information that may have been forgotten by ExistentialQuantification. However, cast manually checks the types at runtime to ensure that they are truly the same and thus cannot be used unsafely. To do this, it demands that both the source and target types allow for runtime reflection of their types via the Typeable class.
The only time I ever felt compelled to use unsafeCoerce was on finite natural numbers.
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, StandaloneDeriving #-}
data Nat = Z | S Nat deriving (Eq, Show)
data Fin (n :: Nat) :: * where
FZ :: Fin (S n)
FS :: Fin n -> Fin (S n)
deriving instance Show (Fin n)
Fin n is a singly linked data structure that is statically ensured to be smaller than the n type level natural number by which it is parametrized.
-- OK, 1 < 2
validFin :: Fin (S (S Z))
validFin = FS FZ
-- type error, 2 < 2 is false
invalidFin :: Fin (S (S Z))
invalidFin = FS (FS FZ)
Fin can be used to safely index into various data structures. It's pretty standard in dependently typed languages, though not in Haskell.
Sometimes we want to convert a value of Fin n to Fin m where m is greater than n.
relaxFin :: Fin n -> Fin (S n)
relaxFin FZ = FZ
relaxFin (FS n) = FS (relaxFin n)
relaxFin is a no-op by definition, but traversing the value is still required for the types to check out. So we might just use unsafeCoerce instead of relaxFin. More pronounced gains in speed can result from coercing larger data structures that contain Fin-s (for example, you could have lambda terms with Fin-s as bound variables).
This is an admittedly exotic example, but I find it interesting in the sense that it's pretty safe: I can't really think of ways for external libraries or safe user code to mess this up. I might be wrong though and I'd be eager to hear about potential safety issues.
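Concretely, the unsafeCoerce replacement mentioned above would look like this (a sketch of my own, justified only because the runtime representation of Fin n does not actually depend on n):

import Unsafe.Coerce (unsafeCoerce)

-- O(1), unlike relaxFin, which rebuilds the whole value.
relaxFin' :: Fin n -> Fin (S n)
relaxFin' = unsafeCoerce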
There is no use of unsafeCoerce I can really recommend, but I can see that in some cases such a thing might be useful.
The first use that springs to mind is the implementation of the Typeable-related routines. In particular cast :: (Typeable a, Typeable b) => a -> Maybe b achieves a type-safe behaviour, so it is safe to use, yet it has to play dirty tricks in its implementation.
Maybe unsafeCoerce can find some use when importing FFI subroutines, to force types to match. After all, FFI already allows you to import impure C functions as pure ones, so it is intrinsically unsafe. Note that "unsafe" does not mean impossible to use, but just "putting the burden of proof on the programmer".
Finally, pretend that sortBy did not exist. Consider then this example:
-- Like Int, but using the opposite ordering
newtype Rev = Rev { unRev :: Int } deriving Eq   -- Eq is needed as Ord's superclass
instance Ord Rev where compare (Rev x) (Rev y) = compare y x
sortDescending :: [Int] -> [Int]
sortDescending = map unRev . sort . map Rev
The code above works, but feels silly IMHO. We perform two maps using functions such as Rev and unRev, which we know to be no-ops at runtime. So we just scan the list twice for no reason but that of convincing the compiler to use the right Ord instance.
The performance impact of these maps should be small since we also sort the list. Yet it is tempting to rewrite map Rev as unsafeCoerce :: [Int]->[Rev] and save some time.
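The hedged rewrite would be something like the following (safe only because Rev is a newtype over Int, so both coercions are representation-preserving):

import Data.List (sort)
import Unsafe.Coerce (unsafeCoerce)

-- Sorts descending without allocating the rewrapped lists that
-- map Rev and map unRev would build.
sortDescending' :: [Int] -> [Int]
sortDescending' = (unsafeCoerce :: [Rev] -> [Int])
                . sort
                . (unsafeCoerce :: [Int] -> [Rev])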
Note that having a coercing function
castNewtype :: IsNewtype t1 t2 => f t2 -> f t1
where the constraint means that t1 is a newtype for t2 would help, but it would be quite dangerous. Consider
castNewtype :: Data.Set Int -> Data.Set Rev
The above would cause the data structure invariant to break, since we are changing the ordering underneath! Since Data.Set is implemented as a binary search tree, it could do quite a lot of damage.

Difficulties with 'Either e` instance of `Failure` from "failure" package

This is more or less a simplified version of the issue I'm trying to understand in my own code. I'm working with functions that are polymorphic in the Failure class from the failure package.
{-# LANGUAGE FlexibleContexts #-}
import Data.Maybe
import Data.Either
import Control.Failure
data FookError = FookError deriving Show
fook :: (Failure FookError m)=> Int -> m Int
fook = undefined
fooks :: (Failure FookError m)=> Int -> m Int
-- DOES NOT TYPE-CHECK:
--fooks n = return $ head $ rights $ map fook [1..]
-- OKAY:
fooks n = return $ head $ catMaybes $ map fook [1..]
You can see in the above code that when I treat the return type of fook as Maybe Int, the module compiles fine; however, treating it as Either FookError Int fails.
What is going on here?
This is because, in the non-working definition of fooks, fook's type is ambiguous. When you use catMaybes, it disambiguates as Maybe Int, but when you use rights, it can be Either e Int for any e, and the compiler doesn't necessarily know which. Sure, by default the only Failure instance for Either is instance Failure e (Either e), but there's nothing stopping you defining, e.g. instance Failure String (Either Int).
If you explicitly specify the type by defining fooks as
fooks n =
return $ head $ rights $ map (fook :: Int -> Either FookError Int) [1..]
then it works fine.
However, I suspect that you're not doing what you really want here; fooks never actually uses the failure capability of the underlying monad; indeed, even if there are no non-failing results, the monadic action still succeeds and returns a value. The value just so happens to be an error, but that's still probably not what you want :)
If you want fooks to try a bunch of individual fooks in turn, and return the first one that succeeds, then something like:
fooks :: (Failure FookError m, MonadPlus m) => Int -> m Int
fooks n = foldr mplus (failure FookError) $ map fook [1..]
should do the trick. The plain Failure class by itself offers no way to recover from errors, so you need to require MonadPlus too. Note that the failure FookError here will never actually be used, since [1..] is infinite, but presumably you're planning to change the definition; say to one that actually uses n :)
Unfortunately, that isn't all! Either e doesn't have a MonadPlus instance, presumably because there's no reasonable value of mzero (although another potential problem is that mplus (Left e) (Left e') could be either Left e or Left e').
Thankfully, it's easy to define an instance for our specific type:
instance MonadPlus (Either FookError) where
  mzero = failure FookError
  mplus a@(Right _) _ = a
  mplus (Left _) a = a
You'll need {-# LANGUAGE FlexibleInstances #-} at the top of your file to do this.
