Overriding fromInteger in Haskell

So I like Haskell, but am dissatisfied with the Num class.
So I want to make my own typeclass hierarchy for algebraic types.
The problem is that even if I import Prelude hiding Num and everything associated with it, the only way to make the literal 1 have type t is still to make t an instance of Num.
I would love to make my own fromInteger class and leave Num out of the picture entirely, like this:
import Prelude hiding (everything having to do with Num)
import qualified Prelude (everything having to do with Num)
class (Eq fi) => FromInteger fi where
  fromInteger :: Integer -> fi
foo :: (FromInteger fi) => fi -> String
foo 1 = "that was a one"
foo 0 = "that was a zero"
foo n = "that was neither zero nor one"
and then I would implement fromInteger appropriately for brand new types and never have to say anything about Num.
Is there a way to tell the parser to use a different fromInteger method?
Thanks!

You are looking for GHC's RebindableSyntax extension.
Enable it by putting
{-# LANGUAGE RebindableSyntax #-}
at the top of your source file.
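For example, here is a minimal sketch (my own, not from the thread) of the question's FromInteger class in action under RebindableSyntax; the Integer instance exists only so that main has something concrete to call:
{-# LANGUAGE RebindableSyntax #-}
module Main where

-- RebindableSyntax implies NoImplicitPrelude, so Prelude is imported
-- explicitly, hiding only the name we want to rebind.
import Prelude hiding (fromInteger)

-- An integer literal n now desugars to fromInteger (n :: Integer),
-- for whatever fromInteger is in scope, i.e. this class method.
class Eq fi => FromInteger fi where
  fromInteger :: Integer -> fi

instance FromInteger Integer where
  fromInteger = id

foo :: FromInteger fi => fi -> String
foo 1 = "that was a one"
foo 0 = "that was a zero"
foo _ = "that was neither zero nor one"

main :: IO ()
main = putStrLn (foo (1 :: Integer))
Literal patterns such as foo 1 also use the in-scope fromInteger together with (==), which is why the Eq superclass is enough for the pattern matches to type check.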

Related

Could you write a type function to invert a constraint?

Is it possible to write a type function that would take a constraint like Show and return one that constrains the RHS to types that are not an instance of Show?
The signature would be something like
type family Invert (c :: * -> Constraint) :: * -> Constraint
No. It is a design principle of the language that you are never allowed to do this. The rule is that if a program is valid, adding more instances should not break it; this is the open-world assumption. Your desired constraint is a pretty direct violation:
data A = A
f :: Invert Show a => a -> [a]
f x = [x]
test :: [A]
test = f A
Would work, but adding
instance Show A
would break it. Therefore, the original program should never have been valid in the first place, and therefore Invert cannot exist.
As HTNW answered, it is in general not supposed to be possible to assert that a type is not an instance of a class. However, for a concrete type it is certainly possible to assert that it is never supposed to have an instance of some class. An ad-hoc way would be this:
{-# LANGUAGE ConstraintKinds, KindSignatures, AllowAmbiguousTypes,
             MultiParamTypeClasses, FlexibleInstances #-}
import GHC.Exts (Constraint)

class Non (c :: * -> Constraint) (t :: *) where
  nonAbsurd :: c t => r
But this is unsafe – the only way to write an instance is, like,
instance Non Show (String->Bool) where
  nonAbsurd = undefined
but then somebody else could come up with a bogus instance Show (String->Bool) and would then be able to use your nonAbsurd for proving the moon is made out of green cheese.
A better option to make an instance impossible is to “block” it: write that instance yourself “pre-emptively”, but in such a way that it's a type error to actually invoke it.
import Data.Constraint.Trivial -- from trivial-constraint

instance Impossible0 => Show (String->Bool) where
  show = nope
Now if anybody tries to add that instance, or tries to use it, they'll get a clear compiler error.
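If you would rather not depend on trivial-constraint, roughly the same blocking trick can be sketched with GHC's custom type errors (GHC 8.0 or later); the wording of the error message here is mine:
{-# LANGUAGE DataKinds, FlexibleContexts, FlexibleInstances, UndecidableInstances #-}
import GHC.TypeLits (ErrorMessage(..), TypeError)

-- The instance exists, so nobody else can define a competing one,
-- but any attempt to actually use it is rejected with the message below.
instance TypeError ('Text "Show is deliberately unavailable for String -> Bool")
      => Show (String -> Bool) where
  show = error "unreachable"

main :: IO ()
main = putStrLn "fine, as long as the blocked instance is never needed"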

TypeLits or Singletons: Promoting an `Integer` to `KnownNat` (`Nat`) at Runtime

I've found two ways to promote an Integer to a Nat (or KnownNat, I don't get the distinction yet) at runtime: either using TypeLits and Proxy (Data.Proxy and GHC.TypeLits), or Singletons (Data.Singletons). In the code below you can see how each of the two approaches is used:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE NoImplicitPrelude #-}
module Main where
import Prelude hiding (replicate)
import Data.Proxy (Proxy(Proxy))
import Data.Monoid ((<>))
import Data.Singletons (SomeSing(..), toSing)
import GHC.TypeLits
import Data.Singletons.TypeLits
import Data.Vector.Sized (Vector, replicate)
main :: IO ()
main = playingWithTypes 8
playingWithTypes :: Integer -> IO ()
playingWithTypes nn = do
  case someNatVal nn of
    Just (SomeNat (proxy :: Proxy n)) -> do
      -- let (num :: Integer) = natVal proxy
      -- putStrLn $ "Some num: " <> show num
      putStrLn $ "Some vector: " <> show (replicate 5 :: Vector n Int)
    Nothing -> putStrLn "There's no number, the integer was not a natural number"
  case (toSing nn :: SomeSing Nat) of
    SomeSing (SNat :: Sing n) -> do
      -- let (num :: Integer) = natVal (Proxy :: Proxy n)
      -- putStrLn $ "Some num: " <> show num
      putStrLn $ "Some vector: " <> show (replicate 5 :: Vector n Int)
The documentation for TypeLits indicates that it isn't meant to be used directly by developers, while the singletons approach doesn't handle the case in which the given Integer is not a natural number (i.e., playingWithTypes 8 runs without errors, but playingWithTypes (-2) fails when we try to create a singleton from the non-natural number).
So, what is the "standard" way to promote an Integer to a Nat? And which is the better approach for promoting: TypeLits with Proxy, or Singletons?
Nat (or KnownNat, I don't get the distinction yet)
Nat is the kind of type-level natural numbers. It has no term-level inhabitants. The idea is that GHC promotes any natural number into the type-level, and gives it kind Nat.
KnownNat is a constraint, on something of kind Nat, whose implementation witnesses how to convert the thing of kind Nat to a term-level Integer. GHC automagically creates instances of KnownNat for all type-level inhabitants of the kind Nat¹.
That said, even if every n :: Nat (read: type n of kind Nat) has a KnownNat instance on it¹, you still need to write out the constraint.
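To make that concrete, a tiny sketch of my own: the literal 5 used in a type position has kind Nat, and its automatically solved KnownNat instance is exactly what natVal consumes:
{-# LANGUAGE DataKinds #-}
import Data.Proxy (Proxy(Proxy))
import GHC.TypeLits (natVal)

-- KnownNat 5 is solved by GHC, so natVal can bring the type-level 5
-- back down to an ordinary Integer.
five :: Integer
five = natVal (Proxy :: Proxy 5)   -- evaluates to 5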
I've found two ways to promote an Integer to a Nat
Have you really? At the end of the day, Nat in today's GHC is simply magical. singletons taps into that same magic. Under the hood, it uses someNatVal.
So, what is the "standard" way to promote an Integer to a Nat? Or what is the best approach to promote, using GHC.TypeLits and Proxy, or singletons?
There is no standard way. My take is: use singletons when you can afford its dependency footprint and GHC.TypeLits otherwise. The advantage of singletons is that the SingI type class makes it convenient to do induction based analysis while still also relying on GHC's special Nat type.
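If you take the GHC.TypeLits-only route, the non-natural case the question worries about is already covered, because someNatVal returns Nothing for anything negative. A minimal sketch of my own:
import GHC.TypeLits (SomeNat(SomeNat), natVal, someNatVal)

-- someNatVal :: Integer -> Maybe SomeNat rejects non-natural inputs,
-- so no partial promotion function is needed.
describe :: Integer -> String
describe n = case someNatVal n of
  Just (SomeNat p) -> "promoted to a type-level " ++ show (natVal p)
  Nothing          -> "not a natural number"
Here describe 8 reports the promoted value, while describe (-2) simply takes the Nothing branch instead of crashing.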
¹ As pointed out in the comments, not every inhabitant of the Nat kind has a KnownNat instance. For example, Any Nat :: Nat, where Any is the one from GHC.Exts. Only the inhabitants 0, 1, 2, ... have KnownNat instances.

Hide a constructor but not the type on import

I've got an internal module I'd like to provide an external API for
module Positive.Internal where

newtype Positive a = Positive { getPositive :: a }
  deriving (Eq, Ord)

-- smart constructor
toPositive :: (Num a, Ord a) => a -> Maybe (Positive a)
toPositive a | a <= 0    = Nothing
             | otherwise = Just $ Positive a
-- ...
I want to hide the dumb constructor and replace it with a unidirectional pattern, so users can still pattern match on values; they just have to use the smart constructor to create new values.
Since I want the pattern and the dumb constructor to use the same name, I need to hide the dumb constructor to prevent namespace clashes.
However, since the dumb constructor and the type share a name, it's a little tricky to import everything BUT the dumb constructor.
Currently I'm doing this, which works ok:
{-# LANGUAGE PatternSynonyms #-}
module Positive
  ( module Positive.Internal, pattern Positive
  ) where

import Positive.Internal (Positive())
import Positive.Internal hiding (Positive)
import qualified Positive.Internal as Internal

pattern Positive :: a -> Positive a
pattern Positive a <- Internal.Positive a
I could simplify my imports by just using the qualified import, but I'm curious.
Is there a way to, in a single import statement, import all of Positive.Internal except the dumb constructor?
I tried hiding (Positive(Positive)), but that hid both the type and the dumb constructor. I've poked about the wiki, but I haven't noticed any way to differentiate between constructors and types in hiding lists.
Correct me if I am wrong, but I am almost certain this is what you are looking for:
{-# LANGUAGE PatternSynonyms #-}
module Positive
  ( module Positive.Internal, pattern Positive, foo
  ) where

import Positive.Internal hiding (pattern Positive)
import qualified Positive.Internal as Internal (pattern Positive)

pattern Positive :: a -> Positive a
pattern Positive a <- Internal.Positive a

foo :: Positive Int
foo = Internal.Positive 5
The internal module stays the same as it is defined so far. And for the sake of example:
module Negative where
import Positive
bar :: Maybe Int
bar = getPositive <$> toPositive 6
Let's double check in GHCi:
Prelude> :load Negative
[1 of 3] Compiling Positive.Internal ( Positive/Internal.hs, interpreted )
[2 of 3] Compiling Positive ( Positive.hs, interpreted )
[3 of 3] Compiling Negative ( Negative.hs, interpreted )
Ok, modules loaded: Negative, Positive, Positive.Internal.
*Negative> bar
Just 6
*Negative> getPositive foo
5
*Negative> :i Positive
newtype Positive a = Positive.Internal.Positive {getPositive :: a}
  -- Defined at Positive/Internal.hs:3:1
instance [safe] Ord a => Ord (Positive a)
  -- Defined at Positive/Internal.hs:4:17
instance [safe] Eq a => Eq (Positive a)
  -- Defined at Positive/Internal.hs:4:13
*Negative> :t Positive
<interactive>:1:1: error:
    • non-bidirectional pattern synonym ‘Positive’ used in an expression
    • In the expression: Positive

Associated Parameter Restriction using Functional Dependency

The function f below, for a given type 'a', takes a parameter of type 'c'. For different types 'a', 'c' is restricted in different ways. Concretely, when 'a' is any Integral type, 'c' should be allowed to be any 'Real' type. When 'a' is Float, 'c' can ONLY be Float.
One attempt is:
{-# LANGUAGE
    MultiParamTypeClasses,
    FlexibleInstances,
    FunctionalDependencies,
    UndecidableInstances #-}

class AllowedParamType a c | a -> c

class Foo a where
  f :: (AllowedParamType a c) => c -> a

fIntegral :: (Integral a, Real c) => c -> a
fIntegral = error "implementation elided"

instance (Integral i, AllowedParamType i d, Real d) => Foo i where
  f = fIntegral
For some reason, GHC 7.4.1 complains that it "could not deduce (Real c) arising from a use of fIntegral". It seems to me that the functional dependency should allow this deduction. In the instance, a is unified with i, so by the functional dependency, d should be unified with c, which in the instance is declared to be 'Real'. What am I missing here?
Functional dependencies aside, will this approach be expressive enough to enforce the restrictions above, or is there a better way? We are only working with a few different values for 'a', so there will be instances like:
instance (Integral i, Real c) => AllowedParamType i c
instance AllowedParamType Float Float
Thanks
A possibly better way is to use constraint kinds and type families (GHC extensions; requires GHC 7.4, I think). This allows you to specify the constraint as part of the class instance.
{-# LANGUAGE ConstraintKinds, TypeFamilies, FlexibleInstances, UndecidableInstances #-}
import GHC.Exts (Constraint)

class Foo a where
  type ParamConstraint a b :: Constraint
  f :: ParamConstraint a b => b -> a

instance Integral i => Foo i where
  type ParamConstraint i b = Real b
  f = fIntegral
EDIT: Upon further experimentation, there are some subtleties that mean that this doesn't work as expected, specifically, type ParamConstraint i b = Real b is too general. I don't know a solution (or if one exists) right now.
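For the concrete Float-only restriction from the question, though, the associated-constraint idea does work; here is a self-contained sketch of my own (not part of the original answer):
{-# LANGUAGE ConstraintKinds, TypeFamilies #-}
import GHC.Exts (Constraint)

class Foo a where
  type ParamConstraint a b :: Constraint
  f :: ParamConstraint a b => b -> a

-- The Float case from the question: the parameter may ONLY be Float.
instance Foo Float where
  type ParamConstraint Float b = b ~ Float
  f = id

main :: IO ()
main = print (f (1.5 :: Float) :: Float)
A per-type instance like this pins the constraint down completely, which sidesteps the "too general" issue the edit mentions for the fully generic Integral instance head.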
OK, this one's been nagging at me. Given the wide variety of instances, let's go the whole hog and get rid of any relationship between the source and target type other than the presence of an instance:
{-# LANGUAGE FlexibleInstances, TypeSynonymInstances, MultiParamTypeClasses #-}
class Foo a b where f :: a -> b
Now we can match up pairs of types with an f between them however we like, for example:
instance Foo Int Int where f = (+1)
instance Foo Int Integer where f = toInteger.((7::Int) -)
instance Foo Integer Int where f = fromInteger.(^ (2::Integer))
instance Foo Integer Integer where f = (*100)
instance Foo Char Char where f = id
instance Foo Char String where f = (:[]) -- requires TypeSynonymInstances
instance (Foo a b,Functor f) => Foo (f a) (f b) where f = fmap f -- requires FlexibleInstances
instance Foo Float Int where f = round
instance Foo Integer Char where f n = head $ show n
This does mean a lot of explicit type annotation to avoid No instance for... and Ambiguous type error messages.
For example, you can't do main = print (f 6), but you can do main = print (f (6::Int)::Int)
You could list all of the instances with the standard types that you want, which could lead to an awful lot of repetition, or you could light the blue touchpaper and do:
instance Integral i => Foo Double i where f = round -- requires FlexibleInstances
instance Real r => Foo Integer r where f = fromInteger -- requires FlexibleInstances
Beware: this does not mean "Hey, if you've got an integral type i,
you can have an instance Foo Double i for free using this handy round function",
it means: "every time you have any type i, it's definitely an instance
Foo Double i. By the way, I'm using round for this, so unless your type i is Integral,
we're going to fall out." That's a big issue for the Foo Integer Char instance, for example.
This can easily break your other instances, so if you now type f (5::Integer) :: Integer you get
Overlapping instances for Foo Integer Integer
  arising from a use of `f'
Matching instances:
  instance Foo Integer Integer
  instance Real r => Foo Integer r
You can change your pragmas to include OverlappingInstances:
{-# LANGUAGE OverlappingInstances, FlexibleInstances, TypeSynonymInstances, MultiParamTypeClasses #-}
So now f (5::Integer) :: Integer returns 500, so clearly it's using the more specific Foo Integer Integer instance.
I think this sort of approach might work for you, defining many instances by hand, carefully considering when to go completely wild
making instances out of standard type classes. (Alternatively, there aren't all that many standard types, and as we all know, notMany choose 2 = notIntractablyMany, so you could just list them all.)
Here's a suggestion to solve a more general problem, not yours specifically (I need more detail first - I promise to check back later). I'm writing it in case other people are searching for a solution to a similar problem to yours; I certainly was in the past, before I discovered SO. SO is especially great when it helps you try a radically new approach.
I used to have the work habit:
Introduce a multi-parameter type class (Types hanging out all over the place, so...)
Introduce functional dependencies (Should tidy it up but then I end up needing...)
Add FlexibleInstances (Alarm bells start ringing. There's a reason the compiler has this off by default...)
Add UndecidableInstances (GHC is telling you you're on your own, because it's not convinced it's up to the challenge you're setting it.)
Everything blows up. Refactor somehow.
Then I discovered the joys of type families (functional programming for types (hooray) - multi-parameter type classes are (a bit like) logic programming for types). My workflow changed to:
Introduce a type class including an associated type, i.e. replace
class MyProblematicClass a b | a -> b where
  thing :: a -> b
  thang :: b -> a -> b
with
class MyJustWorksClass a where
  type Thing a :: *  -- Thing a is a type (*), not a type constructor (* -> *)
  thing :: a -> Thing a
  thang :: Thing a -> a -> Thing a
Nervously add FlexibleInstances. Nothing goes wrong at all.
Sometimes fix things by using constraints like (MyJustWorksClass j,j~a)=> instead of (MyJustWorksClass a)=> or (Show t,t ~ Thing a,...)=> instead of (Show (Thing a),...) => to help ghc out. (~ essentially means 'is the same type as')
Nervously add FlexibleContexts. Nothing goes wrong at all.
Everything works.
The reason "Nothing goes wrong at all" is that ghc calculates the type Thing a using my type function Thang rather than trying to deduce it using a merely a bunch of assertions that there's a function there and it ought to be able to work it out.
Give it a go! Read Fun with Type Functions before reading the manual!
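To see the "just works" shape end to end, here is a small self-contained example of my own; the Int/String instance is made up purely for illustration:
{-# LANGUAGE TypeFamilies #-}

class MyJustWorksClass a where
  type Thing a :: *
  thing :: a -> Thing a
  thang :: Thing a -> a -> Thing a

-- Arbitrary choice for the example: for Int, Thing Int is String.
instance MyJustWorksClass Int where
  type Thing Int = String
  thing n = show n
  thang s n = s ++ show n

main :: IO ()
main = putStrLn (thang (thing (6 :: Int)) 7)   -- prints "67"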

Is a scoped type statement possible?

Is it possible to do the following:
foo = bar
  where
    type A = (Some, Huge, Type, Sig)
    meh :: A -> (A, A) -> A
I only need to use this custom type inside the where clause, so it does not make sense to define it globally.
This isn't possible. Why not just define it above the function? You don't have to export it from the module (just use an explicit export list).
By the way, if you really do have a type that big, it's probably a sign that you should factor it into smaller parts, especially if you have a lot of tuples as your example suggests; data-types would be more appropriate.
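For instance, a minimal sketch of that suggestion (the module name and concrete types are invented here): the synonym sits at the top level but stays private, because only foo appears in the export list:
module Meh (foo) where

-- stands in for (Some, Huge, Type, Sig)
type A = (Int, Int, Int, Int)

meh :: A -> (A, A) -> A
meh x _ = x

foo :: A -> A
foo x = meh x (x, x)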
Actually, there's one, slightly ridiculous, way to approximate this:
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE ScopedTypeVariables #-}
foo :: forall abbrv. (abbrv ~ (Some, Huge, Type, Sig))
    => abbrv -> abbrv
foo x = meh x (x, x)
  where meh :: abbrv -> (abbrv, abbrv) -> abbrv
        meh x y = {- ... -}
I can't really recommend enabling two language extensions just for the sake of abbreviating types in signatures, though if you're already using them (or GADTs instead of type families) I suppose it doesn't really hurt anything.
Silliness aside, you should consider refactoring your types in cases like this, as ehird suggests.

Resources