Apparently it's a bad idea to put a typeclass constraint on a data declaration [src], [src].
I haven't personally come across a desire to constrain the types within data types I've created, but it's not obvious to me why the language designers "decided it was a bad idea to allow". Why is that?
Because it was misleading and worked completely backwards from what would actually be useful.
In particular, it didn't actually constrain the types within the data type in the way you're probably expecting. What it did do was put a class constraint on the data constructor itself, which meant that you needed to satisfy the instance when constructing a value... but that was all.
So, for instance, you couldn't simply define a binary search tree with an Ord constraint and then know that any tree has sortable elements; the lookup and insert functions would still need an Ord constraint themselves. All you'd prevent would be constructing an empty tree that "contains" values of some non-ordered type. As far as pattern matching was concerned, there was no constraint on the contained type at all.
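For illustration, here's a minimal sketch of the old feature (a hypothetical Tree type; datatype contexts are deprecated, and modern GHC only accepts them with the DatatypeContexts pragma):

{-# LANGUAGE DatatypeContexts #-} -- deprecated; shown only for illustration

-- The context constrains the constructors and nothing else.
data Ord a => Tree a = Leaf | Node (Tree a) a (Tree a)

-- Pattern matching gives no Ord instance back, so insert still
-- needs its own constraint for the comparisons.
insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x (Node l y r)
  | x <= y    = Node (insert x l) y r
  | otherwise = Node l y (insert x r)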
On the other hand, the people working on Haskell didn't think that the sensible version (the one people tended to assume data type contexts provided) was a bad idea at all! In fact, class constraints on a data type declared with GADT syntax (generalized algebraic data types, enabled in GHC with the GADTs language pragma) do work in the obvious way: you need the constraint to construct a value, the instance in question gets stored in the GADT itself, and pattern matching on the GADT constructor lets you use the instance it captured, so you don't need a constraint to work with values.
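For example, a minimal sketch of the same tree in GADT syntax (again a hypothetical type, not from any library):

{-# LANGUAGE GADTs #-}

-- Node captures an Ord dictionary at construction time.
data Tree a where
  Leaf :: Tree a
  Node :: Ord a => Tree a -> a -> Tree a -> Tree a

-- No Ord constraint in the signature: matching on Node brings
-- the stored instance back into scope for the comparisons.
member :: a -> Tree a -> Bool
member _ Leaf = False
member x (Node l y r)
  | x < y     = member x l
  | x > y     = member x r
  | otherwise = True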
It's not actually a bad idea to add a typeclass constraint on a data type - it can be very useful, and it doesn't break your other code. The badness is all about the fact that people often expect the constraint on the data type to excuse them from putting a constraint on the functions that use it, but that's not the case. (You could argue that an implicit constraint can cause problems.)

Putting a constraint on a datatype actually puts it on all the constructors that mention the constrained type. Just as with an ordinary function with a constraint, if you use the constructor, you must add the constraint. I think that's healthy and above board.

It does ensure you can't put data in your data type unless you can do certain things with it. It's useful. You won't be committing a programming faux pas by using one, and it's not bad practice; it's just not as lovely as they wanted.
The "bad idea to allow" is probably because GADTs is really what they would like.
If GADTs had been around first, they wouldn't have done this.
I don't think it's such a bad thing to have both. If you want a state-manipulating function, you can use a permanently explicit parameter you pass around, or you can use a monad and make it implicit. If you want a constraint on data, you can use a permanently explicit one on a data declaration or an implicit one with a GADT. GADTs and monads are more sophisticated, but it doesn't make explicit parameters or data type constraints wrong.
To clarify my question, let me rephrase it in a more or less equivalent way:
Why is there a concept of superclass/class inheritance in Haskell?
What are the historical reasons that led to that design choice?
Why would it be so bad, for example, to have a base library with no class hierarchy, just typeclasses independent from each other?
Here I'll share some random thoughts that made me want to ask this question. My intuitions might be inaccurate, as they are based on my current understanding of Haskell, which is not perfect, but here they are...
It is not obvious to me why type class inheritance exists in Haskell. I find it a bit weird, as it creates asymmetry in concepts.
Often in mathematics, concepts can be defined from different viewpoints, I don't necessarily want to favor an order of how they ought to be defined. OK there is some order in which one should prove things, but once theorems and structures are there, I'd rather see them as available independent tools.
Moreover one perhaps not so good thing I see with class inheritance is this: I think a class instance will silently pick a corresponding superclass instance, which was probably implemented to be the most natural one for that type. Let's consider a Monad viewed as a subclass of Functor. Maybe there could be more than one way to define a Functor on some type that also happens to be a Monad. But saying that a Monad is a Functor implicitly makes the choice of one particular Functor for that Monad. Someday, you might forget that actually you wanted some other Functor.
Perhaps this example is not the best fit, but I have the feeling this sort of situation might generalize and possibly be dangerous if your class is a child of many. Current Haskell inheritance sounds like it makes default choices about parents implicitly.
If instead you have a design without hierarchy, I feel you would always have to be explicit about all the properties required, which would perhaps mean a bit less risk, more clarity, and more symmetry. So far, what I'm seeing is that the cost of such a design would be more constraints to write in instance definitions, and newtype wrappers for each meaningful conversion from one set of concepts to another. I am not sure, but perhaps that could have been acceptable. Unfortunately, I think Haskell's auto-deriving mechanism for newtypes doesn't work very well; I would appreciate it if the language were somehow smarter about newtype wrapping/unwrapping and required less verbosity.
I'm not sure, but now that I think about it, perhaps an alternative to newtype wrappers could be specific imports of modules containing specific variations of instances.
Another alternative I thought about while writing this is that maybe one could weaken the meaning of class (P x) => C x: instead of it being a requirement that an instance of C selects an instance of P, we could take it to loosely mean that, for example, the C class also contains P's methods, but no instance of P is automatically selected and no other relationship with P exists. So we could keep some sort of weaker hierarchy that might be more flexible.
Thanks if you have some clarifications on this topic, and/or corrections of my possible misunderstandings.
Maybe you're tired of hearing from me, but here goes...
I think superclasses were introduced as a relatively minor and unimportant feature of type classes. In Wadler and Blott, 1988, they are briefly discussed in Section 6 where the example class Eq a => Num a is given. There, the only rationale offered is that it's annoying to have to write (Eq a, Num a) => ... in a function type when it should be "obvious" that data types that can be added, multiplied, and negated ought to be testable for equality as well. The superclass relationship allows "a convenient abbreviation".
(The unimportance of this feature is underscored by the fact that this example is so terrible. Modern Haskell doesn't have class Eq a => Num a because the logical justification for all Nums also being Eqs is so weak. The example class Eq a => Ord a would have been a lot more convincing.)
So, the base library implemented without any superclasses would look more or less the same. There would just be more logically superfluous constraints on function type signatures in both library and user code, and instead of fielding this question, I'd be fielding a beginner question about why:
leq :: (Ord a) => a -> a -> Bool
leq x y = x < y || x == y
doesn't type check.
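In that hypothetical superclass-free base, the fix would be to write out the logically superfluous constraint by hand (this version compiles today too; the Eq constraint is merely redundant):

leq :: (Eq a, Ord a) => a -> a -> Bool
leq x y = x < y || x == y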
To your point about superclasses forcing a particular hierarchy, you're missing your target.
This kind of "forcing" is actually a fundamental feature of type classes. Type classes are "opinionated by design", and in a given Haskell program (where "program" includes all the libraries, include base used by the program), there can be only one instance of a particular type class for a particular type. This property is referred to as coherence. (Even though there is a language extension IncohorentInstances, it is considered very dangerous and should only be used when all possible instances of a particular type class for a particular type are functionally equivalent.)
This design decision comes with certain costs, but it also comes with a number of benefits. Edward Kmett talks about this in detail in this video, starting at around 14:25. In particular, he compares Haskell's coherent-by-design type classes with Scala's incoherent-by-design implicits and contrasts the increased power that comes with the Scala approach with the reusability (and refactoring benefits) of "dumb data types" that comes with the Haskell approach.
So, there's enough room in the design space for both coherent type classes and incoherent implicits, and Haskell's approach isn't necessarily the right one.
BUT, since Haskell has chosen coherent type classes, there's no "cost" to having a specific hierarchy:
class Functor a => Monad a
because, for a particular type, like [] or MyNewMonadDataType, there can only be one Monad and one Functor instance anyway. The superclass relationship introduces a requirement that any type with a Monad instance must have a Functor instance, but it doesn't restrict the choice of Functor instance, because you never had a choice in the first place. Or rather, your choice was between having zero Functor [] instances and exactly one.
Note that this is separate from the question of whether or not there's only one reasonable Functor instance for a Monad type. In principle, we could define a law-violating data type with incompatible Functor and Monad instances. We'd still be restricted to using that one Functor MyType instance and that one Monad MyType instance throughout our program, whether or not Functor was a superclass of Monad.
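To make the zero-or-one point concrete, here is a minimal sketch with a hypothetical Wrap type (in modern GHC the actual chain is Functor => Applicative => Monad):

newtype Wrap a = Wrap a

-- These are the only Functor/Applicative instances any program
-- may give Wrap; coherence forbids alternatives.
instance Functor Wrap where
  fmap f (Wrap x) = Wrap (f x)

instance Applicative Wrap where
  pure = Wrap
  Wrap f <*> Wrap x = Wrap (f x)

-- Deleting the instances above would make this one fail to
-- compile: the superclass requires them, but it never has to
-- choose among alternatives, because there are none.
instance Monad Wrap where
  Wrap x >>= f = f x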
In Haskell, it's possible to add constraints to a type parameter.
For example:
foo :: Functor f => f a
The question: is it possible to negate a constraint?
I want to say that f can be anything except Functor for example.
UPD:
So it comes from the idea of how to map over the bottom (most deeply nested) Functor.
Let's say I have Functor a, where a may or may not itself be a Functor b, and the same rule applies to b.
Reasons why this is not possible: (basically all the same reason, just different aspects of it)
There is an open-world assumption about type classes. It isn't possible to prove that a type is not an instance of a class because even if during compilation of a module, the instance isn't there, that doesn't mean somebody doesn't define it in a module “further down the road”. This could in principle be in a separate package, such that the compiler can't possibly know whether or not the instance exists.
(Such orphan instances are generally quite frowned upon, but there are use cases for them and the language doesn't attempt to prevent this.)
Membership in a class is an intuitionistic property, meaning that you shouldn't think of it as a classical boolean value "instance or not instance"; rather, if you can prove that a type is an instance, this gives you certain features for the type (specified by the class methods). If you can't prove that the type is an instance, this doesn't mean there is no instance, but perhaps just that you're not smart enough to prove it. (Read: "perhaps nobody is smart enough".) This ties back to the first point: the compiler not yet having the instance available is one case of "not being smart enough".
A class isn't supposed to be used for dispatching on whether or not a type is in it, but for enabling certain polymorphic functions, even if they require ad-hoc conditions on the types. That's what class methods do, and they can come from a class instance, but how could they come from a "not-in-class instance"?
Now, all that said, there is a way you can kind of fake this: with an overlapping instance. Don't do it, it's a bad idea, but... that's the closest you can get.
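For completeness, here is roughly what that fake looks like, using a hypothetical class name. Note that it answers "is one of the instances I listed in scope?", not a true negation, and it silently misreports any Functor you forgot to list:

{-# LANGUAGE FlexibleInstances #-}

-- Don't actually do this; it's the closest approximation,
-- not a real negative constraint.
class MaybeFunctor f where
  hasFunctor :: f a -> Bool

-- Catch-all: anything not covered below reports False...
instance {-# OVERLAPPABLE #-} MaybeFunctor f where
  hasFunctor _ = False

-- ...and known Functors must be listed one by one.
instance MaybeFunctor [] where
  hasFunctor _ = True

instance MaybeFunctor Maybe where
  hasFunctor _ = True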
I am learning Haskell and trying to understand the Monoid typeclass.
At the moment, I am reading the haskellbook and it says the following about the pattern (monoid):
One of the finer points of the Haskell community has been its propensity for recognizing abstract patterns in code which have well-defined, lawful representations in mathematics.
What does the author mean by abstract patterns?
Abstract in this sense is the opposite of concrete. This is probably one of the key things to understand about Haskell.
What is a concrete thing? Well, most values in Haskell are concrete. For example 'a' :: Char. The letter 'a' is a Char value, and it's a concrete value. Char is a concrete type. But in 1 :: Num a => a, the number 1 is actually a value of any type, so long as that type has the set of functions that the Num typeclass sets out as mandatory. This is an abstract value! We can have abstract values, abstract types, and therefore abstract functions. When the program is compiled, the Haskell compiler will pick a particular concrete type that supports all of our requirements.
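A small sketch of that choice in action (names are just for illustration):

c :: Char
c = 'a'          -- concrete: always a Char

n :: Num a => a
n = 1            -- abstract: one definition, many possible types

asInt :: Int
asInt = n        -- here the compiler picks Int

asDouble :: Double
asDouble = n     -- here it picks Double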
Haskell, at its core, is a very simple, small, but incredibly flexible language. It's very similar to an expression of maths, actually. This makes it very powerful. Why? Because most things that would be built-in language constructs in other languages are not directly built into Haskell, but defined in terms of this simple core.
One of the core pieces is the function, which, it turns out, most of computation is expressible in terms of. Because so much of Haskell is just defined in terms of this small simple core, it means we can extend it to almost anywhere we can imagine.
Typeclasses are probably the best example of this. Monoid and Num are examples of typeclasses. These are constructs that allow programmers to use an abstraction like a function across a great many types, while only having to define it once. Typeclasses let us use the same function names across a whole range of types if we can define those functions for those types. Why is that important or useful? Well, if we can recognise a pattern across, for example, all numbers, and we have a mechanism for talking about all numbers in the language itself, then we can write functions that work with all numbers at once. This is an abstract pattern.

You'll notice some Haskellers are quite interested in a branch of mathematics called Category Theory. This branch is pretty much the mathematical definition of abstract patterns. Contrast this ability to encode such things with other languages, where the patterns the community notices are often far less rigorous, have to be manually written out, and go without any respect for their mathematical nature. The beauty of following the mathematics is the extremely large body of stuff we get for free by aligning our language more closely with mathematics.
This is a good explanation of these basics including typeclasses in a book that I helped author: http://www.happylearnhaskelltutorial.com/1/output_other_things.html
Because functions are written in a very general way (because Haskell puts hardly any limits on our ability to express things generally), we can write functions that use types which express such things as "any type, so long as it's a Monoid". These are called type constraints, as above.
Generally, abstractions are very useful because we can, for example, write one single function to operate on an entire range of types, which means we can often find functions that do exactly what we want on our types if we just make them instances of specific typeclasses. The Ord typeclass is a great example of this. Making a type we define ourselves an instance of Ord gives us a whole bunch of sorting and comparing functions for free.
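For instance, a minimal sketch with a made-up type:

import Data.List (sort)

-- Deriving Ord is all it takes to get comparison and sorting
-- for free on our own type.
data Size = Small | Medium | Large
  deriving (Show, Eq, Ord)

main :: IO ()
main = print (sort [Large, Small, Medium])  -- [Small,Medium,Large]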
This is, in my opinion, one of the most exciting parts of Haskell, because while most other languages also allow you to be very general, they mostly take an extreme dip in how expressive you can be with that generality, and are therefore less powerful. (This is because they are less precise in what they talk about, because their types are less well "defined".)
This is how we're able to reason about the "possible values" of a function, and it's not limited to Haskell. The more information we encode at the type level, the more toward the specificity end of the spectrum of expressivity we veer. For example, to take a classic case, the function const :: a -> b -> a. This function requires that a and b can be of absolutely any type at all, including the same type if we like. From that, because the second parameter can be a different type than the first, we can work out that it really only has one possible functionality. It can't return an Int, unless we give it an Int as its first value, because that's not any type, right? So therefore we know the only value it can return is the first value! The functionality is defined right there in the type! If that's not mindblowing, then I don't know what is. :)
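And indeed, the only total, parametric implementation of that signature is the one the Prelude ships:

-- Knowing nothing about a or b, all const can do is hand back
-- its first argument.
const :: a -> b -> a
const x _ = x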
As we move to dependent types (that is, a type system where types are first class, which means also that ordinary values can be encoded in the type system), we can get closer and closer to having the type system specify specifically what the constraints of possible functionality are. However, the kicker is, it doesn't necessarily speak about the implementation of the functionality unless we want it to, because we're in control of how abstract it is, but while maintaining expressivity and much precision. That's pretty fascinating, and amazingly powerful.
Much math can be expressed in the language that underpins Haskell, the lambda calculus.
Suppose I have the following class:
class P a where
  nameOf :: a -> String
I would like to declare that all instances of this class are automatically instances of Show. My first attempt would be the following:
instance P a => Show a where
  show = nameOf
My first attempt to go this way yesterday resulted in a rabbit warren of language extensions: I was first told to switch on flexible instances, then undecidable instances, then overlapping instances, and finally I got an error about overlapping instance declarations. I gave up and returned to repeating the code. However, this fundamentally seems like a very simple demand, and one that should be easily satisfied.
So, two questions:
Is there a trivially easy way to do this that I've just missed?
Why do I get an overlapping instances problem? I can see why I might need UndecidableInstances, since I seem to be violating the Paterson condition, but there are no overlapping instances around here: there are no instances of P, even. Why does the typechecker believe there are multiple instances for Show Double (as seems to be the case in this toy example)?
You get the overlapping instances error because instance selection looks only at the instance head, ignoring the context, and the head Show a matches every type, so the compiler can't decide which instance to use. If you have an instance of P for Double, then there you go: you get two instances of Show for Double, your general one and the one already declared in Haskell's base library. How this error is triggered is correctly stated by @augustss in the comments to your question. For more info see the specs.
As you already know, there is no way to achieve what you're trying to do without UndecidableInstances. When you enable that flag, you must understand that you're taking over the compiler's responsibility to ensure that no conflicting instances arise. This means that, of course, there mustn't be any other instances of Show produced in your library. It also means that your library mustn't export the P class, which removes the possibility of users of the library declaring conflicting instances.
If your case somehow conflicts with what was said above, it's a reliable sign that there must be something wrong with it. And in fact there is...
What you're trying to achieve is incorrect in the first place. You are missing several important points about the Show typeclass that distinguish it from constructs like the toString method of popular OO languages:
From Show's haddock:
The result of show is a syntactically correct Haskell expression containing only constants, given the fixity declarations in force at the point where the type is declared. It contains only the constructor names defined in the data type, parentheses, and spaces. When labelled constructor fields are used, braces, commas, field names, and equal signs are also used.
In other words, declaring an instance of Show that does not produce a valid Haskell expression is incorrect per se.
Given the above, it just doesn't make sense to declare a custom instance of Show when the type allows you to simply derive it.
When a type does not allow deriving it (e.g., a GADT), generally you'll still have to stick to type-specific instances to produce correct results.
So, if you need a custom representation function, you shouldn't use Show for that. Just declare a custom class, e.g.:
class Repr a where
  repr :: a -> String
and approach the instance declarations responsibly.
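For example, a hypothetical instance, keeping the derived Show for its intended, Haskell-expression-producing purpose:

data Color = Red | Green | Blue
  deriving Show  -- show Red == "Red", a valid Haskell expression

instance Repr Color where
  repr Red   = "the color red"
  repr Green = "the color green"
  repr Blue  = "the color blue"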
Reading "Real world Haskell" i found some intresting question about data types:
This pattern matching and positional data access make it look like you have very tight coupling between data and code that operates on it (try adding something to Book, or worse, change the type of an existing part). This is usually a very bad thing in imperative (particularly OO) languages... is it not seen as a problem in Haskell?
source at RWH comments
And really, writing some Haskell programs I found that when I make a small change to a data type's structure, it affects almost all the functions that use that data type. Maybe there are some good practices for data type usage. How can I minimize code coupling?
What you are describing is commonly known as the expression problem -- http://en.wikipedia.org/wiki/Expression_Problem.
There is a definite trade-off to be made: Haskell code in general, and algebraic data types in particular, tend to fall into the category of hard to change the type but easy to add functions over the type. This optimizes for (up-front) well-designed, complete data types.
All that said, there are a number of things that you can do to reduce the coupling.
Define good library functions. By defining a complete set of combinators and higher-order functions that are useful for interacting with your data type, you will reduce coupling. It is often said that whenever you think of pattern matching there is a better solution using higher-order functions. If you look out for these situations you will be in a better spot.
Expose your data structures as more abstract types. This means implementing all appropriate type classes. This will assist with defining library functions, as you will get a bunch for free with any of the type classes you implement; for examples, look at operations over Functor or Monad.
Hide (as much as possible) any type constructors. Constructors expose implementation detail and will encourage coupling. Hint: this ties in with defining a good API for interacting with your type; consumers of your type should rarely, if ever, have to use the type constructors (see the sketch below).
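Here's a minimal sketch of that style with a hypothetical Stack module: the constructor stays private, so the representation can change without touching consumers:

module Stack
  ( Stack  -- exported abstractly: the constructor stays hidden
  , empty
  , push
  , pop
  ) where

-- Only this module knows the representation is a list.
newtype Stack a = Stack [a]

empty :: Stack a
empty = Stack []

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)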
The Haskell community seems particularly good at this; if you look at many of the libraries on Hackage you will find really good examples of implementing type classes and exposing good library functions.
In addition to what's been said:
One interesting approach is the "scrap your boilerplate" style of defining functions over data types, which makes use of generic functions (as opposed to explicit pattern matching) to define functions over the constructors of a data type. Looking at the "scrap your boilerplate" papers, you will see examples of functions which can cope with changes to the structure of a data type.
A second method, as Hibberd pointed out, is to use folds, maps, unfolds, and other recursion combinators, to define your functions. When you write functions using higher order functions, oftentimes small changes to the underlying data type can be dealt with in the instance declarations for Functor, Foldable, and so on.
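As a sketch of that second method, defining Foldable once for a hypothetical tree means the many consumers written against Foldable survive changes to the tree's shape; only the instance needs updating:

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- One place to update if the structure changes; sum, length,
-- elem, toList and friends all keep working through it.
instance Foldable Tree where
  foldr _ z Leaf         = z
  foldr f z (Node l x r) = foldr f (f x (foldr f z r)) l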
First, I'd like to mention that in my view, there are two kinds of couplings:
One that makes your code cease to compile when you change one and forget to change the other
One that makes your code buggy when you change one and forget to change the other
While both are problematic, the former is significantly less of a headache, and that seems to be the one you're talking about.
I think the main problem you're mentioning is due to over-using positional arguments. Haskell almost forces you to have positional arguments in your ordinary functions, but you can avoid them in your type products (records).
Just use records instead of multiple anonymous fields inside data constructors, and then you can pattern-match any field you want out of it, by name.
bad (Blah _ y) = ...
good (Blah{y = y}) = ...
Avoid over-using tuples, especially those beyond 2-tuples, and liberally create records/newtypes around things to avoid positional meaning.
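A short sketch of the payoff, using a hypothetical record: adding a field to Blah later leaves the function untouched, because it names only the field it uses:

data Blah = Blah
  { x :: Int
  , y :: String
  , z :: Bool  -- adding more fields later won't break 'good'
  }

good :: Blah -> String
good Blah{y = theY} = theY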