I've been doing some light reading on functional programming concepts and ideas. So far, so good, I've read about three main concepts: algebraic structures, type classes, and algebraic data types. I have a fairly good understanding of what algebraic data types are. I think sum types and product types are fairly straightforward. For example, I can imagine creating an algebraic data type like a Card type which is a product type consisting of two enum types, Suit (with four values and symbols) and Rank (with 13 values and symbols).
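In Haskell, such a type might be sketched like this (a minimal sketch; the constructor names are just illustrative):

data Suit = Clubs | Diamonds | Hearts | Spades
  deriving (Show, Eq)

data Rank = Two | Three | Four | Five | Six | Seven
          | Eight | Nine | Ten | Jack | Queen | King | Ace
  deriving (Show, Eq)

-- A product type: a Card holds one Suit AND one Rank,
-- giving 4 * 13 = 52 possible values.
data Card = Card Suit Rank
  deriving (Show, Eq)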
However, I'm still hung up on trying to understand precisely what algebraic structures and type classes are. I just have a surface-level picture in my head but can't quite completely wrap my head around, for instance, the different types of algebraic structures like functors, monoids, monads, etc. How exactly are these different? How can they be used in a programming setting? How are type classes different from regular classes? Can anyone at least point me in the direction of a good book on abstract algebra and functional programming? Someone recommended I learn Haskell but do I really need to learn Haskell in order to understand functional programming?
"algebraic structure" is a concept that goes well beyond programming, it belongs to mathematics.
Imagine the unfathomably deep sea of all possible mathematical objects. Numbers of every stripe (the naturals, the reals, p-adic numbers...) are there, but also things like sequences of letters, graphs, trees, symmetries of geometrical figures, and all well-defined transformations and mappings between them. And much else.
We can try to "throw a net" into this sea and retain only some of those entities, by specifying conditions. Like "collections of things, for which there is an operation that combines two of those things into a third thing of the same type, and for which the operation is associative". We can give those conditions their own name, like, say, "semigroup". (Because we are talking about highly abstract stuff, choosing a descriptive name is difficult.)
That leaves out many inhabitants of the mathematical "sea", but the description still fits a lot of them! Many collections of things are semigroups. The natural numbers with the multiplication operation for example, but also non-empty lists of letters with concatenation, or the symmetries of a square with composition.
You can expand your description with extra conditions. Like "a semigroup, and there's also an element such that combining it with any other element gives the other element, unchanged". That restricts the number of mathematical entities that fit the description, because you are demanding more of them. Some valid semigroups will lack that "neutral element". But a lot of mathematical entities will still satisfy the expanded description. If you aren't careful, you can declare conditions so restrictive that no possible mathematical entity can actually fit them! At other times, you can be so precise that only one entity fits them.
Working purely with these descriptions of mathematical entities, using only the general properties we require of them, we can obtain unexpected results about them, non-obvious at first sight, results that will apply to all entities which fit the description. Think of these discoveries as the mathematical equivalent of "code reuse". For example, if we know that some collection of things is a semigroup, then we can calculate exponentials using binary exponentiation instead of tediously combining a thing with itself n times. But that only works because of the associative property of the semigroup operation.
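Here is a minimal Haskell sketch of that idea (pow is a made-up name, and the result is only correct when the supplied operation is associative):

-- Combine x with itself n times (n >= 1) in O(log n) combinations,
-- which is only valid because op is assumed to be associative.
pow :: (a -> a -> a) -> a -> Integer -> a
pow op x n
  | n == 1    = x
  | even n    = let h = pow op x (n `div` 2) in h `op` h
  | otherwise = x `op` pow op x (n - 1)

-- pow (*) 2 10    == 1024      (numbers with multiplication)
-- pow (++) "ab" 3 == "ababab"  (non-empty lists with concatenation)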
You’ve asked quite a few questions here, but I can try to answer them as best I can:
… different types of algebraic structures like functors, monoids, monads, etc. How exactly are these different? How can they be used in a programming setting?
This is a very common question when learning Haskell. I won’t write yet another answer here — and a complete answer is fairly long anyway — but a simple Google search gives some very good answers: e.g. I can recommend 1 2 3
How are type classes different from regular classes?
(By ‘regular classes’ I assume you mean classes as found in OOP.)
This is another common question. Basically, the two have almost nothing in common except the name. A class in OOP is a combination of fields and methods. Classes are used by creating instances of that class; each instance can store data in its fields, and manipulate that data using its methods.
By contrast, a type class is simply a collection of functions (often also called methods, though there’s pretty much no connection). You can declare an instance of a type class for a data type (again, no connection) by defining each method of the class for that type, after which you may use the methods with that type. For instance, the Eq class looks like this:
class Eq a where
    (==) :: a -> a -> Bool
    (/=) :: a -> a -> Bool
And you can define an instance of that class for, say, Bool, by implementing each function:
instance Eq Bool where
    True  == True  = True
    False == False = True
    _     == _     = False

    p /= q = not (p == q)
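With that instance in scope, the methods can be used at type Bool:

ghci> True == False
False
ghci> True /= False
True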
Can anyone at least point me in the direction of a good book on abstract algebra and functional programming?
I must admit that I can’t help with this (and it’s off-topic for Stack Overflow anyway).
Someone recommended I learn Haskell but do I really need to learn Haskell in order to understand functional programming?
No, you don’t: you can learn functional programming from any functional language, including Lisp (particularly Scheme dialects), OCaml, F#, Elm, Scala, etc. Haskell happens to be a particularly ‘pure’ functional programming language, and I would recommend it as well, but if you just want to learn and understand functional programming then any one of those will do.
I am learning Haskell and trying to understand the Monoid typeclass.
At the moment, I am reading the haskellbook and it says the following about the pattern (monoid):
"One of the finer points of the Haskell community has been its propensity for recognizing abstract patterns in code which have well-defined, lawful representations in mathematics."
What does the author mean by abstract patterns?
Abstract in this sense is the opposite of concrete. This is probably one of the key things to understand about Haskell.
What is a concrete thing? Well, most values in Haskell are concrete. For example 'a' :: Char. The letter 'a' is a Char value, and it's a concrete value. Char is a concrete type. But in 1 :: Num a => a, the number 1 is actually a value of any type, so long as that type has the set of functions that the Num typeclass sets out as mandatory. This is an abstract value! We can have abstract values, abstract types, and therefore abstract functions. When the program is compiled, the Haskell compiler will pick a particular concrete type that supports all of our requirements.
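For example (a minimal sketch): the same abstract literal 1 becomes a different concrete type depending on what each context requires:

x :: Int
x = 1        -- here the compiler picks the concrete type Int

y :: Double
y = 1        -- the very same literal becomes the Double 1.0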
Haskell, at its core, has a very simple, small but incredibly flexible language. It's very similar to an expression of maths, actually. This makes it very powerful. Why? Because most things that would be built-in language constructs in other languages are not directly built into Haskell, but defined in terms of this simple core.
One of the core pieces is the function, which, it turns out, most of computation is expressible in terms of. Because so much of Haskell is just defined in terms of this small simple core, it means we can extend it to almost anywhere we can imagine.
Typeclasses are probably the best example of this. Monoid and Num are examples of typeclasses. These are constructs that allow programmers to use an abstraction like a function across a great many types while only having to define it once. Typeclasses let us use the same function names across a whole range of types if we can define those functions for those types. Why is that important or useful? Well, if we can recognise a pattern across, for example, all numbers, and we have a mechanism for talking about all numbers in the language itself, then we can write functions that work with all numbers at once. This is an abstract pattern.

You'll notice some Haskellers are quite interested in a branch of mathematics called Category Theory. This branch is pretty much the mathematical definition of abstract patterns. Contrast this with other languages, where the patterns the community notices are often far less rigorous, have to be manually written out each time, and carry no respect for their mathematical nature. The beauty of following the mathematics is the extremely large body of stuff we get for free by aligning our language closer with mathematics.
This is a good explanation of these basics including typeclasses in a book that I helped author: http://www.happylearnhaskelltutorial.com/1/output_other_things.html
Because functions are written in a very general way (because Haskell puts hardly any limits on our ability to express things generally), we can write functions that use types which express such things as "any type, so long as it's a Monoid". These are called type constraints, as above.
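For example, here's a minimal sketch of such a constraint (combineAll is a hypothetical name, and assuming a recent GHC where (<>) is in the Prelude; the standard library already provides this function as mconcat):

-- "any type, so long as it's a Monoid", expressed as a type constraint
combineAll :: Monoid m => [m] -> m
combineAll = foldr (<>) mempty

-- combineAll ["ab", "cd", "ef"] == "abcdef"
-- combineAll [[1,2], [3]]       == [1,2,3]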
Generally, abstractions are very useful because we can, for example, write one single function to operate on an entire range of types, which means we can often find functions that do exactly what we want on our types if we just make them instances of specific typeclasses. The Ord typeclass is a great example of this. Making a type we define ourselves an instance of Ord gives us a whole bunch of sorting and comparing functions for free.
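A small sketch of that (the Priority type is made up for illustration):

import Data.List (sort)

-- deriving Ord defines all the comparison operators for us...
data Priority = Low | Medium | High
  deriving (Show, Eq, Ord)

-- ...and with them, every Ord-constrained library function:
-- sort [High, Low, Medium] == [Low, Medium, High]
-- maximum [Low, High]      == High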
This is, in my opinion, one of the most exciting parts about Haskell, because while most other languages also allow you to be very general, they mostly take an extreme dip in how expressive you can be with that generality, and so are less powerful. (This is because they are less precise in what they talk about; their types are less well defined.)
This is how we're able to reason about the "possible values" of a function, and it's not limited to Haskell. The more information we encode at the type level, the further toward the specificity end of the spectrum of expressivity we veer. For example, take a classic case: the function const :: a -> b -> a. This function requires that a and b can be of absolutely any type at all, including the same type if we like. From that, because the second parameter can be a different type than the first, we can work out that it really only has one possible functionality. It can't return an Int unless we give it an Int as its first value, because that's not any type, right? So we know the only value it can return is the first value! The functionality is defined right there in the type! If that's not mindblowing, then I don't know what is. :)
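Indeed, the Prelude's definition is exactly the one the type forces:

const :: a -> b -> a
const x _ = x    -- the only total implementation this type permits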
As we move to dependent types (that is, a type system where types are first class, which means ordinary values can also be encoded in the type system), we can get closer and closer to having the type system specify exactly what the constraints on possible functionality are. The kicker is that it doesn't necessarily say anything about the implementation of that functionality unless we want it to, because we're in control of how abstract it is, while maintaining expressivity and precision. That's pretty fascinating, and amazingly powerful.
Much math can be expressed in the language that underpins Haskell, the lambda calculus.
At first glance, there are obvious distinctions between the two kinds of "class". However, I believe there are more similarities:
Both have different kinds of constructors.
Both define a group of operations that could be applied to a particular type of data, in other words, they both define an Interface.
I can see that "class" is much more concise in Haskell, and it's also more efficient. But I have a feeling that, theoretically, "class" and "abstract class" are identical.
What's your opinion?
Er, not really, no.
For one thing, Haskell's type classes don't have constructors; data types do.
Also, a type class instance isn't really attached to the type it's defined for, it's more of a separate entity. You can import instances and data definitions separately, and usually it doesn't really make sense to think about "what class(es) does this piece of data belong to". Nor do functions in a type class have any special access to the data type an instance is defined for.
What a type class actually defines is a collection of identifiers that can be shared to do conceptually equivalent things (in some sense) to different data types, on an explicit per-type basis. This is why it's called ad-hoc polymorphism, in contrast to the standard parametric polymorphism that you get from regular type variables.
It's much, much closer to "overloaded functions" in some languages, where different functions are given the same name, and dispatch is done based on argument types (for some reason, other languages don't typically allow overloading based on return type, though this poses no problem for type classes).
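A small illustration of return-type dispatch with a type class (using Bounded from the Prelude):

-- minBound :: Bounded a => a   -- dispatch happens on the result type
smallestInt :: Int
smallestInt = minBound    -- -9223372036854775808 on a 64-bit GHC

smallestChar :: Char
smallestChar = minBound   -- '\NUL'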
Apart from the implementation differences, one major conceptual difference concerns when the classes / type classes are declared.

If you create a new class, MyClass, in e.g. Java or C#, you need to specify all the interfaces it provides at the time you develop the class. Now, if you bundle your code up into a library and provide it to a third party, they are limited by the interfaces you decided the class should have. If they want additional interfaces, they'd have to create a derived class, TheirDerivedClass. Unfortunately, your library might make copies of MyClass without knowledge of the derived class, and might return new instances through its interfaces that they'd then have to wrap. So, to really add new interfaces to the class, they'd have to add a whole new layer on top of your library. Not elegant, nor really practical either.
With type classes, you specify interfaces that a type provides separate from the type definition. If a third party library now contained YourType, I can just instantiate YourType to belong to the new interfaces (that you did not provide when you created the type) within my own code.
Thus, type classes allow the user of the type to be in control of what interfaces the type adheres to, while with 'normal' classes, the developer of the class is in control (and has to have the crystal ball needed to see all the possible things for what the user might want to use the class).
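A rough Haskell sketch of that (Pretty is an interface we invent; YourType stands in for a type defined in someone else's library):

-- our own new interface, defined in our code
class Pretty a where
  pretty :: a -> String

-- a stand-in for a type the library author shipped without Pretty
data YourType = YourType Int

-- we can add the instance ourselves, without touching the library
instance Pretty YourType where
  pretty (YourType n) = "YourType " ++ show n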
From: http://www.haskell.org/tutorial/classes.html
Before going on to further examples of the use of type classes, it is worth pointing out two other views of Haskell's type classes. The first is by analogy with object-oriented programming (OOP). In the following general statement about OOP, simply substituting type class for class, and type for object, yields a valid summary of Haskell's type class mechanism:
"Classes capture common sets of operations. A particular object may be an instance of a class, and will have a method corresponding to each operation. Classes may be arranged hierarchically, forming notions of superclasses and sub classes, and permitting inheritance of operations/methods. A default method may also be associated with an operation."
In contrast to OOP, it should be clear that types are not objects, and in particular there is no notion of an object's or type's internal mutable state. An advantage over some OOP languages is that methods in Haskell are completely type-safe: any attempt to apply a method to a value whose type is not in the required class will be detected at compile time instead of at runtime. In other words, methods are not "looked up" at runtime but are simply passed as higher-order functions.
This slideshow may help you understand the similarities and differences between OO abstract classes and Haskell type classes: Classes, Jim, But Not As We Know Them.
What are the similarities and the differences between Haskell's TypeClasses and Go's Interfaces? What are the relative merits / demerits of the two approaches?
It looks like Go interfaces resemble single-parameter type classes (constructor classes) in Haskell only in superficial ways:
Methods are associated with an interface type
Objects (particular types) may have implementations of that interface
It is unclear to me whether Go in any way supports bounded polymorphism via interfaces, which is the primary purpose of type classes. That is, in Haskell, the interface methods may be used at different types,
class I a where
    put :: a -> IO ()
    get :: IO a

instance I Int where
    ...

instance I Double where
    ...
So my question is whether Go supports this kind of bounded polymorphism. If not, they're not really like type classes at all, and they're not really comparable.
Haskell's type classes allow powerful reuse of code via "generics" -- higher-kinded polymorphism -- and a good reference for cross-language support for such forms of generic programming is this paper.
Ad-hoc (or bounded) polymorphism via type classes is well described here. This is the primary purpose of type classes in Haskell, and one not addressed by Go interfaces, meaning they're not really very similar at all. Interfaces are strictly less powerful: a kind of zeroth-order type class.
I will add to Don Stewart's excellent answer that one of the surprising consequences of Haskell's type classes is that you can use logic programming at compile time to generate arbitrarily many instances of a class. (Haskell's type-class system includes what is effectively a cut-free subset of Prolog, very similar to Datalog.) This system is exploited to great effect in the QuickCheck library. Or for a very simple example, you can see how to define a version of Boolean complement (not) that works on predicates of arbitrary arity. I suspect this ability was an unintended consequence of the type-class system, but it has proven incredibly powerful.
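Here's a rough sketch of that arity-generic complement (the class and instance names are illustrative):

class Complementable p where
  complement :: p -> p

-- base case: the complement of a Bool is ordinary negation
instance Complementable Bool where
  complement = not

-- inductive case: complement a function by complementing its result
instance Complementable r => Complementable (a -> r) where
  complement f = complement . f

-- complement (<) 3 4 == False   (i.e. not (3 < 4))
-- complement even 2  == False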
Go has nothing like it.
In Haskell, typeclass instantiation is explicit (i.e. you have to say instance Foo Bar for Bar to be an instance of Foo), while in Go implementing an interface is implicit (i.e. when you define a type that defines the right methods, it automatically implements the corresponding interface without having to say something like implement InterfaceName).
An interface can only describe methods where the instance of the interface is the receiver. In a typeclass, the instantiating type can appear at any argument position or as the return type of a function (i.e. you can say: if Bar is an instance of the typeclass Foo, there must be a function named baz which takes an Int and returns a Bar; you can't say that with interfaces).
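In Haskell that constraint is easy to write down (a sketch using the names from the text):

newtype Bar = Bar Int

class Foo a where
  baz :: Int -> a    -- the instantiating type in return position

instance Foo Bar where
  baz = Bar

-- A Go interface has no way to say "returns the implementing type".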
Very superficial similarities; Go's interfaces are more like structural subtyping in OCaml.
C++ Concepts (that didn't make it into C++0x) are like Haskell type classes. There were also "axioms" which aren't present in Haskell at all. They let you formalize things like the monad laws.
UML is a standard aimed at the modeling of software which will be written in OO languages, and goes hand in hand with Java. Still, could it possibly be used to model software meant to be written in the functional programming paradigm? Which diagrams would be rendered useful given the embedded visual elements?
Is there a modeling language aimed at functional programming, more specifically Haskell? What tools for putting together diagrams would you recommend?
Edited by OP Sept 02, 2009:
What I'm looking for is the most visual, lightest representation of what goes on in the code: easy-to-follow diagrams and visual models, not necessarily aimed at other programmers. I'll be developing a game in Haskell very soon, but because this project is my graduation work I need to introduce some sort of formalization of the proposed solution. I was wondering if there is an equivalent to the UML+Java standard, but for Haskell.
Should I just stick to storyboards, written descriptions, non-formalized diagrams (some shallow flow-chart-like images), non-formalized use case descriptions?
Edited by jcolebrand June 21, 2012:
Note that the asker originally wanted a visual metaphor, and now that we've had three years, we're looking for more/better tools. None of the original answers really addressed the concept of a "visual metaphor design tool", so... that's what the new bounty is looking to provide for.
I believe the modeling language for Haskell is called "math". It's often taught in schools.
Yes, there are widely used modeling/specification languages/techniques for Haskell.
They're not visual.
In Haskell, types give a partial specification.
Sometimes, this specification fully determines the meaning/outcome while leaving open various implementation choices.
Going beyond Haskell to languages with dependent types, as in Agda & Coq (among others), types are much more often useful as a complete specification.
Where types aren't enough, add formal specifications, which often take a simple functional form.
(Hence, I believe, the answers that the modeling language of choice for Haskell is Haskell itself or "math".)
In such a form, you give a functional definition that is optimized for clarity and simplicity, and not at all for efficiency.
The definition might even involve uncomputable operations such as function equality over infinite domains.
Then, step by step, you transform the specification into the form of an efficiently computable functional program.
Every step preserves the semantics (denotation), and so the final form ("implementation") is guaranteed to be semantically equivalent to the original form ("specification").
You'll see this process referred to by various names, including "program transformation", "program derivation", and "program calculation".
The steps in a typical derivation are mostly applications of "equational reasoning", augmented with a few applications of mathematical induction (and co-induction).
Being able to perform such simple and useful reasoning was a primary motivation for functional programming languages in the first place, and they owe their validity to the "denotative" nature of "genuinely functional programming".
(The terms "denotative" and "genuinely functional" are from Peter Landin's seminal paper The Next 700 Programming Languages.)
Thus the rallying cry for pure functional programming used to be "good for equational reasoning", though I don't hear that description nearly as often these days.
In Haskell, denotative corresponds to types other than IO and types that rely on IO (such as STM).
While the denotative/non-IO types are good for correct equational reasoning, the IO/non-denotative types are designed to be bad for incorrect equational reasoning.
A specific version of derivation-from-specification that I use as often as possible in my Haskell work is what I call "semantic type class morphisms" (TCMs).
The idea there is to give a semantics/interpretation for a data type, and then use the TCM principle to determine (often uniquely) the meaning of most or all of the type's functionality via type class instances.
For instance, I say that the meaning of an Image type is as a function from 2D space.
The TCM principle then tells me the meaning of the Monoid, Functor, Applicative, Monad, Contrafunctor, and Comonad instances, as corresponding to those instances for functions.
That's a lot of useful functionality on images with very succinct and compelling specifications!
(The specification is the semantic function plus a list of standard type classes for which the semantic TCM principle must hold.)
And yet I have tremendous freedom of how to represent images, and the semantic TCM principle eliminates abstraction leaks.
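A rough sketch of how that looks (the representation and names are illustrative, not the paper's actual code):

type Point = (Double, Double)

-- meaning: an image assigns a value to every point of 2D space
newtype Image a = Image { sample :: Point -> a }

-- the TCM principle: this instance must mean the same as the
-- Functor instance for functions, i.e. composition
instance Functor Image where
  fmap f (Image g) = Image (f . g)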
If you're curious to see some examples of this principle in action, check out the paper Denotational design with type class morphisms.
We use theorem provers to do formal modelling (with verification), such as Isabelle or Coq. Sometimes we use domain specific languages (e.g. Cryptol) to do the high level design, before deriving the "low level" Haskell implementation.
Often we just use Haskell as the modelling language, and derive the actual implementation via rewriting.
QuickCheck properties also play a part in the design document, along with type and module decompositions.
Yes, Haskell.
I get the impression that programmers using functional languages don't feel the need to simplify their language of choice away when thinking about their design, which is one (rather glib) way of viewing what UML does for you.
I have watched a few video interviews, and read some interviews, with the likes of Erik Meijer and Simon Peyton Jones. It seems that, when it comes to modelling and understanding one's problem domain, they use type signatures, especially function signatures.
Sequence diagrams (UML) could be related to the composition of functions.
A static class diagram (UML) could be related to type signatures.
In Haskell, you model by types.
Just begin by writing your function, class, and data signatures without any implementation and try to make the types fit. The next step is QuickCheck.
E.g. to model a sort:
class Ord a where
    compare :: a -> a -> Ordering

sort :: Ord a => [a] -> [a]
sort = undefined
then tests
prop_preservesLength l = (length l) == (length $ sort l)
...
and finally the implementation ...
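Putting it all together, a minimal end-to-end sketch might look like this (insertion sort is just one implementation choice, named sort' here to avoid clashing with the stub above; requires the QuickCheck package):

import Data.List (insert)
import Test.QuickCheck

sort' :: Ord a => [a] -> [a]
sort' = foldr insert []    -- a simple insertion sort

-- the property from above, plus an ordering check
prop_preservesLength :: [Int] -> Bool
prop_preservesLength l = length l == length (sort' l)

prop_ordered :: [Int] -> Bool
prop_ordered l = and (zipWith (<=) s (drop 1 s))
  where s = sort' l

main :: IO ()
main = do
  quickCheck prop_preservesLength
  quickCheck prop_ordered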
Although I can't recommend it for use (it appears to no longer be available for download), the HOPS system visualizes term graphs, which are often a convenient representation of functional programs.
It may also be considered a design tool, as it supports documenting programs as well as constructing them; I believe it can also step through the rewriting of the terms if you want it to, so you can see them unfold.
Unfortunately, I believe it is no longer actively developed though.
I realize I'm late to the party, but I'll still give my answer in case someone would find it useful.
I think I'd go for systemic methodologies like SADT/IDEF0.
https://en.wikipedia.org/wiki/Function_model
https://en.wikipedia.org/wiki/Structured_Analysis_and_Design_Technique
Such diagrams can be made with the Dia program, which is available on Linux, Windows, and macOS.
You can use a data-flow process network model, as described in Realtime Signal Processing: Dataflow, Visual, and Functional Programming by Hideki John Reekie.
For example, for code like this (Haskell):
fact n | n == 0    = 1
       | otherwise = n * fact (n - 1)
The visual representation would be the corresponding dataflow diagram.
What would be the point in modelling Haskell with Maths? I thought the whole point of Haskell was that it related so closely to Maths that Mathematicians could pick it up and run with it. Why would you translate a language into itself?
When working with another functional programming language (F#), I used diagrams on a whiteboard describing the large blocks, and then modelled the system in an OO way using UML, using classes. There was a slight mismatch in the building blocks in F# (I split the classes up into data structures and functions that acted upon them). But for understanding from a business perspective, it worked a treat. I should add that the problem was business/web oriented, and I don't know how well the technique would work for something a bit more financial. I think I would probably capture the functions as objects without state, and they should fit in nicely.
It all depends on the domain you're working in.
I use USL - Universal Systems Language. I'm learning Erlang and I think it's a perfect fit.
Too bad the documentation is very limited and nobody uses it.
More information here.