Is there a semigroup/monoid in the context of a monad? - haskell

I'm giving my tensor operations a notion of sharing, using a monadic context Shared (implemented as State Nat), so
(+) : Tensor F64 -> Tensor F64 -> Tensor F64
becomes
(+) : Tensor F64 -> Tensor F64 -> Shared $ Tensor F64
If I do this, (+) can't be used in my semigroup. Is there a more general notion of a semigroup (and monoid) that allows for context such as this, so
Semigroup (Tensor F64) where
  (<+>) = (+)
becomes e.g.
SemigroupM Shared (Tensor F64) where
  (<+>) = (+)
and is it implemented in the Idris stdlib?
Tagged Haskell because, and correct me if I'm wrong, the question is essentially the same there.
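For reference, a minimal Haskell sketch of what such an interface could look like; this SemigroupM class is hypothetical, not something from base or the Idris stdlib:
{-# LANGUAGE MultiParamTypeClasses #-}
-- Hypothetical class: a semigroup whose combining operation runs in a monadic context m.
class Monad m => SemigroupM m a where
  (<+>) :: a -> a -> m a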

Related

Code unexpectedly accepted by GHC/GHCi

I don't understand why this code should pass type-checking:
foo :: (Maybe a, Maybe b)
foo = let x = Nothing in (x,x)
Since each component is bound to the same variable x, I would expect the most general type for this expression to be (Maybe a, Maybe a). I get the same result if I use a where instead of a let. Am I missing something?
Briefly put, the type of x gets generalized by let. This is a key step in the Hindley-Milner type inference algorithm.
Concretely, let x = Nothing initially assigns x the type Maybe t, where t is a fresh type variable. Then, the type gets generalized, universally quantifying all its type variables (technically: except those in use elsewhere, but here we only have t). This causes x :: forall t. Maybe t. Note that this is exactly the same type as Nothing :: forall t. Maybe t.
Hence, each time we use x in our code, that refers to a potentially different type Maybe t, much like Nothing. Using (x, x) gets the same type as (Nothing, Nothing) for this reason.
Lambdas, instead, do not feature the same generalization step. By comparison, (\x -> (x, x)) Nothing "only" has type forall t. (Maybe t, Maybe t), where both components are forced to be of the same type. Here x is again assigned type Maybe t, with t fresh, but it is not generalized. Then (x, x) is assigned type (Maybe t, Maybe t). Only at the top level do we generalize, adding the forall t, but at that point it is too late to obtain a heterogeneous pair.
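A small sketch that makes the difference tangible (my own example, not from the answer): the let-bound version accepts a heterogeneous annotation, while the lambda-bound one does not.
-- Accepted: x is generalized by let, so each use can pick a different type.
foo :: (Maybe Int, Maybe Char)
foo = let x = Nothing in (x, x)
-- Rejected: the lambda-bound x is not generalized, so both components must share a type.
-- bar :: (Maybe Int, Maybe Char)
-- bar = (\x -> (x, x)) Nothing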

What's the difference between parametric polymorphism and higher-kinded types?

I am pretty sure they are not the same. However, I am bogged down by the
common notion that "Rust does not support" higher-kinded types (HKT), but
instead offers parametric polymorphism. I tried to get my head around that and understand the difference between these, but only got more and more entangled.
To my understanding, there are higher-kinded types in Rust, at least the basics. Using the "*" notation, an HKT has a kind such as * -> *.
For example, Maybe is of kind * -> * and could be implemented like this in Haskell.
data Maybe a = Just a | Nothing
Here,
- Maybe is a type constructor and needs to be applied to a concrete type to become a concrete type of kind *.
- Just a and Nothing are data constructors.
In textbooks about Haskell, this is often used as an example for a higher-kinded type. However, in Rust it can simply be implemented as an enum, which after all is a sum type:
enum Maybe<T> {
    Just(T),
    Nothing,
}
Where is the difference? To my understanding this is a
perfectly fine example of a higher-kinded type.
1. If in Haskell this is used as a textbook example of HKTs, why is it said that Rust doesn't have HKT? Doesn't the Maybe enum qualify as an HKT?
2. Should it rather be said that Rust doesn't fully support HKT?
3. What's the fundamental difference between HKT and parametric polymorphism?
This confusion continues when looking at functions: I can write a parametric function that takes a Maybe, and to my understanding an HKT, as a function argument.
fn do_something<T>(input: Maybe<T>) {
    // implementation
}
Again, in Haskell that would be something like
do_something :: Maybe a -> ()
do_something _ = ()
which leads to the fourth question:
4. Where exactly does the support for higher-kinded types end? What's the minimal example that makes Rust's type system fail to express HKT?
Related Questions:
I went through a lot of questions related to the topic (including links they have to blogposts, etc.) but I could not find an answer to my main questions (1 and 2).
- In Haskell, are "higher-kinded types" *really* types? Or do they merely denote collections of *concrete* types and nothing more?
- Generic struct over a generic type without type parameter
- Higher Kinded Types in Scala
- What types of problems helps "higher-kinded polymorphism" solve better?
- Abstract Data Types vs. Parametric Polymorphism in Haskell
Update
Thank you for the many good answers which are all very detailed and helped a lot. I decided to accept Andreas Rossberg's answer since his explanation helped me the most to get on the right track. Especially the part about terminology.
I was really locked in the cycle of thinking that everything of kind * -> * ... -> * is higher-kinded. The explanation that stressed the difference between * -> * -> * and (* -> *) -> * was crucial for me.
Some terminology:
- The kind * is sometimes called ground. You can think of it as 0th order.
- Any kind of the form * -> * -> ... -> * with at least one arrow is first-order.
- A higher-order kind is one that has a "nested arrow on the left", e.g., (* -> *) -> *.
The order essentially is the depth of left-side nesting of arrows, e.g., (* -> *) -> * is second-order, ((* -> *) -> *) -> * is third-order, etc. (FWIW, the same notion applies to types themselves: a second-order function is one whose type has e.g. the form (A -> B) -> C.)
Types of non-ground kind (order > 0) are also called type constructors (and some literature only refers to types of ground kind as "types"). A higher-kinded type (constructor) is one whose kind is higher-order (order > 1).
Consequently, a higher-kinded type is one that takes an argument of non-ground kind. That would require type variables of non-ground kind, which are not supported in many languages. Examples in Haskell:
type Ground = Int
type FirstOrder a = Maybe a -- a is ground
type SecondOrder c = c Int -- c is a first-order constructor
type ThirdOrder c = c Maybe -- c is second-order
The latter two are higher-kinded.
Likewise, higher-kinded polymorphism describes the presence of (parametrically) polymorphic values that abstract over types that are not ground. Again, few languages support that. Example:
f : forall c. c Int -> c Int -- c is a constructor
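A concrete Haskell rendering of that example, as a sketch (the function body is my own addition; the point is that c ranges over type constructors of kind * -> *, not over ground types):
{-# LANGUAGE ExplicitForAll #-}
f :: forall c. c Int -> c Int
f x = x
-- usable at many different constructors:
--   f (Just 3)  :: Maybe Int
--   f [1, 2, 3] :: [Int]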
The statement that Rust supports parametric polymorphism "instead" of higher-kinded types does not make sense. Both are different dimensions of parameterisation that complement each other. And when you combine both you have higher-kinded polymorphism.
A simple example of what Rust can't do is something like Haskell's Functor class.
class Functor f where
    fmap :: (a -> b) -> f a -> f b

-- a couple of examples:

instance Functor Maybe where
    -- fmap :: (a -> b) -> Maybe a -> Maybe b
    fmap _ Nothing = Nothing
    fmap f (Just x) = Just (f x)

instance Functor [] where
    -- fmap :: (a -> b) -> [a] -> [b]
    fmap _ [] = []
    fmap f (x:xs) = f x : fmap f xs
Note that the instances are defined on the type constructor, Maybe or [], instead of the fully-applied type Maybe a or [a].
This isn't just a parlor trick. It has a strong interaction with parametric polymorphism. Since the type variables a and b in the type of fmap are not constrained by the class definition, instances of Functor cannot change their behavior based on them. This is an incredibly strong property for reasoning about code from types, and it is where a lot of the strength of Haskell's type system comes from.
It has one other property: you can write code that's abstract in higher-kinded type variables. Here are a couple of examples:
focusFirst :: Functor f => (a -> f b) -> (a, c) -> f (b, c)
focusFirst f (a, c) = fmap (\x -> (x, c)) (f a)
focusSecond :: Functor f => (a -> f b) -> (c, a) -> f (c, b)
focusSecond f (c, a) = fmap (\x -> (c, x)) (f a)
I admit, those types are beginning to look like abstract nonsense. But they turn out to be really practical when you have a couple helpers that take advantage of the higher-kinded abstraction.
newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
    -- fmap :: (a -> b) -> Identity a -> Identity b
    fmap f (Identity x) = Identity (f x)

newtype Const c b = Const { getConst :: c }

instance Functor (Const c) where
    -- fmap :: (a -> b) -> Const c a -> Const c b
    fmap _ (Const c) = Const c

set :: ((a -> Identity b) -> s -> Identity t) -> b -> s -> t
set f b s = runIdentity (f (\_ -> Identity b) s)

get :: ((a -> Const a b) -> s -> Const a t) -> s -> a
get f s = getConst (f (\x -> Const x) s)
(If I made any mistakes in there, can someone just fix them? I'm reimplementing the most basic starting point of lens from memory without a compiler.)
The functions focusFirst and focusSecond can be passed as the first argument to either get or set, because the type variable f in their types can be unified with the more concrete types in get and set. Being able to abstract over the higher-kinded type variable f allows functions of a particular shape to be used both as setters and as getters in arbitrary data types. This is one of the two core insights that led to the lens library. It couldn't exist without this kind of abstraction.
(For what it's worth, the other key insight is that defining lenses as a function like that allows composition of lenses to be simple function composition.)
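To make that concrete, here's a usage sketch, assuming the definitions above compile as written (the expected results are in comments, written from memory rather than verified):
main :: IO ()
main = do
  print (set focusFirst 5 (1 :: Int, "hi"))  -- (5,"hi"): focusFirst used as a setter
  print (get focusFirst (1 :: Int, "hi"))    -- 1: the same focusFirst used as a getter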
So no, there's more to it than just being able to accept a type variable. The important part is being able to use type variables that correspond to type constructors, rather than some concrete (if unknown) type.
To sum it up: a higher-kinded type is just a type-level higher-order function.
But take a minute and consider monad transformers:
newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }
-- the type constructor StateT has kind * -> (* -> *) -> * -> *
Here,
- s is the desired type of the state
- m is a functor, another monad that StateT will wrap
- a is the type of the result returned by a computation of type StateT s m a
What is the higher-kinded type?
m :: (* -> *)
Because it takes a type of kind * and returns a type of kind *.
It's like a function on types, that is, a type constructor of kind
* -> *
In languages like Java, you can't do
class ClassExample<T, a> {
    T<a> function();
}
In Haskell, T would have kind * -> *, but a Java type (i.e. class) cannot have a type parameter of that kind, i.e. a higher-kinded type parameter.
Also, if you don't know, in basic Haskell an expression must have a type whose kind is *, that is, a "concrete type"; no expression can have a type of any other kind, such as * -> *.
For instance, you can't create an expression of type Maybe. It has to be a type constructor applied to an argument, like Maybe Int or Maybe String; in other words, a fully applied type constructor.
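A quick sketch of that distinction as it shows up in GHCi:
--   ghci> :kind Maybe
--   Maybe :: * -> *
--   ghci> :kind Maybe Int
--   Maybe Int :: *
-- Only the fully applied form can be the type of an expression:
x :: Maybe Int
x = Just 3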
Parametric polymorphism just refers to the property that the function cannot make use of any particular feature of a type (or kind) in its definition; it is a complete blackbox. The standard example is length :: [a] -> Int, which only works with the structure of the list, not the particular values stored in the list.
The standard example of HKT is the Functor class, where fmap :: (a -> b) -> f a -> f b. Unlike length, where a has kind *, f has kind * -> *. fmap also exhibits parametric polymorphism, because fmap cannot make use of any property of either a or b in its definition.
fmap exhibits ad hoc polymorphism as well, because the definition can be tailored to the specific type constructor f for which it is defined. That is, there are separate definitions of fmap for f ~ [], f ~ Maybe, etc. The difference is that f is "declared" as part of the typeclass definition, rather than just being part of the definition of fmap. (Indeed, typeclasses were added to support some degree of ad hoc polymorphism. Without type classes, only parametric polymorphism exists. You can write a function that supports one concrete type or any concrete type, but not some smaller collection in between.)
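A small sketch of that last point (my own example): without classes a function is either tied to one concrete type or fully parametric; a class constraint carves out the in-between collection of types.
lengthInts :: [Int] -> Int              -- exactly one concrete element type
lengthInts = length
lengthAny :: [a] -> Int                 -- any element type, treated as a black box
lengthAny = length
countEqual :: Eq a => a -> [a] -> Int   -- only the types that have an Eq instance
countEqual y = length . filter (== y)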

Why is a function type required to be "wrapped" for the type checker to be satisfied?

The following program type-checks:
{-# LANGUAGE RankNTypes #-}
import Numeric.AD (grad)
newtype Fun = Fun (forall a. Num a => [a] -> a)
test1 [u, v] = (v - (u * u * u))
test2 [u, v] = ((u * u) + (v * v) - 1)
main = print $ fmap (\(Fun f) -> grad f [1,1]) [Fun test1, Fun test2]
But this program fails:
main = print $ fmap (\f -> grad f [1,1]) [test1, test2]
With the type error:
Grad.hs:13:33: error:
• Couldn't match type ‘Integer’
with ‘Numeric.AD.Internal.Reverse.Reverse s Integer’
Expected type: [Numeric.AD.Internal.Reverse.Reverse s Integer]
-> Numeric.AD.Internal.Reverse.Reverse s Integer
Actual type: [Integer] -> Integer
• In the first argument of ‘grad’, namely ‘f’
In the expression: grad f [1, 1]
In the first argument of ‘fmap’, namely ‘(\ f -> grad f [1, 1])’
Intuitively, the latter program looks correct. After all, the
following, seemingly equivalent program does work:
main = print $ [grad test1 [1,1], grad test2 [1,1]]
It looks like a limitation in GHC's type system. I would like to know
what causes the failure, why this limitation exists, and any possible
workarounds besides wrapping the function (per Fun above).
(Note: this is not caused by the monomorphism restriction; compiling
with NoMonomorphismRestriction does not help.)
This is an issue with GHC's type system. And it really is GHC's type system, by the way: the original type system for Haskell/ML-like languages doesn't support higher-rank polymorphism, let alone the impredicative polymorphism we're using here.
The issue is that in order to type-check this we need to support foralls at any position in a type, not only bunched all the way at the front (the normal restriction, which is what allows for type inference). Once you leave this area, type inference becomes undecidable in general (for rank-n polymorphism and beyond). In our case, the type of [test1, test2] would need to be [forall a. Num a => [a] -> a], which is a problem considering that it doesn't fit into the scheme discussed above. It would require us to use impredicative polymorphism, so called because the list's element type variable ranges over types with foralls in them, and so it could be replaced with the very type in which it's being used.
So there are going to be some cases that misbehave simply because the problem is not fully solvable. GHC does have some support for rank-n polymorphism and a bit of support for impredicative polymorphism, but it's generally better to just use newtype wrappers to get reliable behavior. To the best of my knowledge, GHC also discourages relying on this feature precisely because it's so hard to figure out exactly what the type inference algorithm will handle.
In summary, the math says that there will be flaky cases, and newtype wrappers are the best, if somewhat dissatisfying, way to cope with them.
The type inference algorithm will not infer higher-rank types (those with a forall to the left of an ->). If I remember correctly, inference for them becomes undecidable. Anyway, consider this code:
foo f = (f True, f 'a')
what should its type be? We could have
foo :: (forall a. a -> a) -> (Bool, Char)
but we could also have
foo :: (forall a. a -> Int) -> (Int, Int)
or, for any type constructor F :: * -> *
foo :: (forall a. a -> F a) -> (F Bool, F Char)
Here, as far as I can see, we cannot find a principal type: a type which is the most general type we can assign to foo.
If a principal type does not exist, the type inference machinery can only pick a suboptimal type for foo, which can cause type errors later on. This is bad. Instead, GHC relies on a Hindley-Milner-style type inference engine, which has been greatly extended to cover more advanced Haskell types. This mechanism, unlike plain Hindley-Milner, will assign f a polymorphic type provided the user explicitly requires that, e.g. by giving foo a signature.
Using a wrapper newtype like Fun also instructs GHC in a similar way, providing the polymorphic type for f.
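For the foo example above, a minimal sketch of both routes (an explicit higher-rank signature, or a newtype wrapper):
{-# LANGUAGE RankNTypes #-}
foo :: (forall a. a -> a) -> (Bool, Char)
foo f = (f True, f 'a')

newtype Poly = Poly (forall a. a -> a)

foo' :: Poly -> (Bool, Char)
foo' (Poly f) = (f True, f 'a')

main :: IO ()
main = print (foo id, foo' (Poly id))  -- ((True,'a'),(True,'a'))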

What are the reasons that protocols and multimethods in Clojure are less powerful for polymorphism than typeclasses in Haskell?

More broadly, this question is about various approaches to the expression problem. The idea is that your program is a combination of a datatype and operations over it, and we want to be able to add new cases and new operations without recompiling the old code.
Now Haskell has some really awesome approaches to the expression problem with type classes. In particular, we can do:
class Eq a where
    (==) :: a -> a -> Bool
    (/=) :: a -> a -> Bool

member :: (Eq a) => a -> [a] -> Bool
member y []     = False
member y (x:xs) = (x == y) || member y xs
Now in Clojure there are multimethods - so you can do:
(defmulti area :Shape)
(defn rect [wd ht] {:Shape :Rect :wd wd :ht ht})
(defn circle [radius] {:Shape :Circle :radius radius})
(defmethod area :Rect [r]
  (* (:wd r) (:ht r)))
(defmethod area :Circle [c]
  (* (. Math PI) (* (:radius c) (:radius c))))
(defmethod area :default [x] :oops)
(def r (rect 4 13))
(def c (circle 12))
(area r)
-> 52
(area c)
-> 452.3893421169302
(area {})
-> :oops
Also in Clojure you have protocols - with which you can do:
(defprotocol P
  (foo [x])
  (bar-me [x] [x y]))

(deftype Foo [a b c]
  P
  (foo [x] a)
  (bar-me [x] b)
  (bar-me [x y] (+ c y)))

(bar-me (Foo. 1 2 3) 42)
=> 45

(foo
  (let [x 42]
    (reify P
      (foo [this] 17)
      (bar-me [this] x)
      (bar-me [this y] x))))
=> 17
Now this individual makes the claim:
But, there are protocols and multi-methods. These are very powerful, but not as powerful as Haskell's typeclasses. You can introduce something like a typeclass by specifying your contract in a protocol. This only dispatches on the first argument, whereas Haskell can dispatch on the entire signature, including return value. Multi-methods are more powerful than protocols, but not as powerful as Haskell's dispatch.
My question is: What are the reasons that protocols and multimethods in Clojure are less powerful for polymorphism than typeclasses in Haskell?
Well the obvious one is that protocols can only dispatch on the first and only the first argument. This means they're roughly equivalent to
class Foo a where
    bar :: a -> ...
    quux :: a -> ...
    ...
where a must be the first argument. Haskell's type classes let a appear anywhere in the function's signature. So protocols are easily less expressive than type classes.
Next is multimethods. Multimethods, if I'm not mistaken, allow dispatch based on a function of all the arguments. This looks more expressive in some ways than Haskell, since you can dispatch arguments of the same type differently. However, this can actually be done in Haskell, generally by wrapping the argument in a newtype for dispatching.
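A sketch of that newtype-dispatch idea (the names here are made up for illustration): the same underlying Double gets different instance behaviour depending on the wrapper.
newtype Celsius    = Celsius Double
newtype Fahrenheit = Fahrenheit Double

class Describe a where
  describe :: a -> String

instance Describe Celsius where
  describe (Celsius t) = show t ++ " C"

instance Describe Fahrenheit where
  describe (Fahrenheit t) = show t ++ " F"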
A few things that can't be done with multimethods to my knowledge:
1. Dispatch on return type
2. Store values polymorphic over all types of a type class, forall a. Foo a => a
To see how 1. comes into play, consider Monoid: it has a value mempty :: Monoid m => m. It's not a function, and simulating this is impossible in Clojure since we don't have any type information telling us which method we're expected to choose.
For 2., consider read :: Read a => String -> a. In Haskell we can actually create a list with the type [forall a. Read a => a]; we've essentially deferred the computation, and we can now run and rerun elements of the list to attempt to read them as different values.
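A tiny sketch of return-type dispatch with read: the same string selects a different Read instance depending on the type demanded by the context.
n :: Int
n = read "42"      -- uses the Int instance
d :: Double
d = read "42"      -- uses the Double instance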
Typeclasses also have static types, so there's some checking to make sure you're not going to end up "stuck" without an instance to call. But Clojure is dynamically typed, so I'll chalk this up to a difference in style between the two languages rather than a particular advantage one way or the other. There is also, of course, the advantage that typeclasses have a lot less overhead than multimethods, since the witness record can generally be inlined and everything is resolved statically.
The most fundamental difference is that with type classes, dispatch is on types not on values. No value is needed to perform it. That allows much more general cases. The most obvious example is dispatch on (part of) the result type of a function. Consider e.g. Haskell's Read class:
class Read a where
    readsPrec :: Int -> String -> [(a, String)]
    ...
Such dispatch is clearly impossible with multi-methods, which have to dispatch on their arguments.
See also my more extensive comparison with plain OO.

Why can't I map a function that multiplies by a Fractional onto a list of Nums?

I want to make a list of numbers every 0.1 from -150 to 150.
To do this, I created a list, and then tried to map a Fractional multiplication lambda onto it, like so:
let indices = [-1500,-1499..1500]
let grid = map (\x -> 0.1 *x) indices
This makes ghci spit out an error.
On the other hand, both of these work fine:
let a = 0.1*2
and
let grid = map (\x -> 2 *x) indices
What's going on here? Why does multiplication of a Num by a Fractional only fail when applied to a list with map?
EDIT:
The error I get is:
No instance for (Fractional Integer)
arising from the literal `0.1'
Possible fix: add an instance declaration for (Fractional Integer)
In the first argument of `(*)', namely `0.1'
In the expression: 0.1 * x
In the first argument of `map', namely `(\ x -> 0.1 * x)'
You've discovered the "dreaded monomorphism restriction". Basically, GHC will infer the type of indices to be a monotype like [Integer] instead of the polymorphic (Enum a, Num a) => [a]. You can either provide an annotation like indices :: [Float], or rework your definitions to avoid the restriction.
For example (not a suggestion), if you make indices a function, let indices a = [-1500, -1499..1500], the inferred type is now (Enum t, Num t) => a -> [t]. The a parameter is unused but defeats the restriction, and you can then do map f (indices whatever). There is much more information on the Haskell Wiki page about the Monomorphism Restriction.
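A small sketch of the annotation approach in a compiled module (using Double rather than Float, but either works):
indices :: [Double]   -- the explicit signature defeats the restriction
indices = [-1500,-1499..1500]

grid :: [Double]
grid = map (\x -> 0.1 * x) indices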
This is defaulting.
Your indices variable, rather than being polymorphic over the Num typeclass as you might expect, is defaulting to Integer, at which point you can't multiply it by 0.1, since 0.1 will resolve to some Fractional type.
You could force indices to be polymorphic with an explicit type signature:
let indices :: (Enum a, Num a) => [a]; indices = [-1500,-1499..1500]
although in practice you don't often want explicitly polymorphic lists in that way.
There is a page about the monomorphism restriction on the Haskell wiki, although it's not particularly succinct: http://www.haskell.org/haskellwiki/Monomorphism_restriction
let grid = map (\x -> 0.1 * (fromInteger x)) indices
-- grid == [-150.0,-149.9,-149.8,-149.70000000000002,-149.6,-149.5 ...]
The following code worked for me:
let indices = [-1500,-1499..1500]
let grid = map (\x -> x / 10) indices
It doesn't like the 0.1.
For a full explanation, see the section "Monomorphic trouble" at this link.

Resources