I'm wondering if there's a way to emulate Haskell's typeclasses in Common Lisp.
Generic functions allow overloading, and it's possible to define types using deftype (which could be defined by membership in some list of instances, for example).
But I can't dispatch on a type. Is there a way to make a class a subclass (and a subtype) of some other class after its definition (e.g., making the cons class a subclass of a sequence class, without redefining cons)?
Thanks.
Type classes in Haskell are a means to statically look up implementations of "interfaces" in the form of dictionaries (similar to how vtables are used in e.g. C++, but (almost) fully statically, unlike C++, which dispatches dynamically at runtime). Common Lisp, however, is a dynamically typed language, so such static lookup would make no sense. However, you can implement your own lookup of "type class" implementations (instances) at runtime; that design is not too hard to imagine in a language as expressive as Common Lisp.
P.S. Python's Zope has an adaptation mechanism with very similar characteristics, if you feel like referring to an existing solution in a dynamic setting.
You cannot modify the class hierarchy in the way you envision, but you can achieve pretty much the same effect.
Suppose that your definition of a sequence is that it has a method for the function sequence-length.
(defclass sequence ...)
(defmethod sequence-length ((s sequence)) ...)
Then you can easily extend your sequence-length method to conses:
(defmethod sequence-length ((s cons))
  (length s))
Did that create a new class that includes cons? Not really. You can express the type of things that have a sequence-length method by saying (or sequence cons), but that's not really useful.
Related
In Java and C# we have interfaces. What is the equivalent in a language like Haskell, and what is the concept called in functional programming?
There are things like typeclasses, as the other answers say, but even more than that, there's one pervasive interface: the function. In many, many places where an object-oriented program would need some custom interface, a similar functional program can just use a function. E.g., map f xs in Haskell uses f, where an object-oriented program might use a Strategy or whatever to accomplish the same task.
Haskell typeclasses fulfill some of the same roles as interfaces in object oriented languages.
data and newtype in Haskell are approximately equal to class in Java.
class in Haskell is approximately equal to interface in Java.
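As a minimal sketch of that correspondence (all the names here are invented for illustration): what would be a Java interface becomes a type class, and each concrete class becomes a data or newtype declaration with an instance:

```haskell
-- Hypothetical 'Shape' interface as a type class; Circle and Rectangle
-- play the role of concrete classes implementing it.
class Shape a where
  area :: a -> Double

newtype Circle   = Circle Double            -- radius
data Rectangle   = Rectangle Double Double  -- width, height

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Rectangle where
  area (Rectangle w h) = w * h

main :: IO ()
main = print (area (Rectangle 3 4))  -- prints 12.0
```

Note that, unlike in Java, the instance declarations live apart from the data declarations; that separation matters in the answers below.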
I have been through various papers/articles/blogs and whatnot about monads. People talk about them in various contexts, like category theory (what in the world is that?), etc. After going through all this and trying to really understand and write monadic code, I came to the understanding that monads are just syntactic sugar (probably the most glorified of them all), whether it is do notation in Haskell, Computation Expressions in F#, or even LINQ's SelectMany operator (remember, LINQ syntax is also syntactic sugar in C#/VB).
My question is: if anyone believes monads are more than syntactic sugar (over nested method calls), then please enlighten me with "practicality" rather than "theoretical concepts".
Thanks all.
UPDATE:
After going through all the answers, I came to the conclusion that the implementation of the monad concept in a particular language is driven by syntactic sugar, BUT the monad concept itself is not tied to syntactic sugar; it is a very general, abstract concept. Thanks everybody for the answers making the difference clear between the concept itself and the ways it is implemented in languages.
Monads aren't syntactic sugar; Haskell has some sugar for dealing with monads, but you can use them without the sugar and operators. So Haskell doesn't really 'support' monads any more than loads of other languages do; it just makes them easier to use and implement. A monad isn't a programming construct or a language feature as such; it's an abstract way of thinking about certain types of objects which, when expressed as Haskell types, provide a nice way of thinking about the transfer of state, and this lets Haskell (or indeed any language, when thought of functionally) do its thing.
do notation, computation expressions and similar language constructs are of course syntactic sugar. This is readily apparent as those constructs are usually defined in terms of what they desugar to. A monad is simply a type that supports certain operations. In Haskell Monad is a typeclass which defines those operations.
So to answer your question: Monad is not a syntactic sugar, it's a type class, however the do notation is syntactic sugar (and of course entirely optional - you can use monads just fine without do notation).
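To make the "entirely optional" point concrete, here is the same hypothetical Maybe computation written twice, once with do notation and once with the (>>=) calls it desugars to:

```haskell
import Text.Read (readMaybe)

-- With do notation (the sugar).
addStrings :: String -> String -> Maybe Int
addStrings s t = do
  x <- readMaybe s
  y <- readMaybe t
  return (x + y)

-- The same computation, desugared into explicit binds.
addStrings' :: String -> String -> Maybe Int
addStrings' s t =
  readMaybe s >>= \x ->
  readMaybe t >>= \y ->
  return (x + y)

main :: IO ()
main = print (addStrings "2" "3", addStrings' "2" "oops")
-- prints (Just 5,Nothing)
```

Both versions use only the Monad type class; the do block is purely notation.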
By definition, Monads aren't syntactic sugar. They are a triple of operations (return/unit, map, and join) over a universe of values (lists, sets, option types, stateful functions, continuations, etc.) that obey a small number of laws. As used in programming, these operations are expressed as functions. In some cases, such as Haskell, these functions can be expressed polymorphically over all monads, through the use of typeclasses. In other cases, these functions have to be given a different name or namespace for each monad. In some cases, such as Haskell, there is a layer of syntactic sugar to make programming with these functions more transparent.
So monads aren't about nested function calls per se, and certainly aren't about sugar for them. They are about the three functions themselves, the types of values they operate over, and the laws these functions obey.
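As a sketch of how the triple hangs together: the familiar (>>=) operator is recoverable from map (fmap) and join alone, so the sugar-free operations really are the whole story.

```haskell
import Control.Monad (join)

-- (>>=) defined in terms of fmap and join, for any monad.
bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin m f = join (fmap f m)

main :: IO ()
main = print (bindViaJoin [1, 2, 3] (\x -> [x, x * 10]))
-- prints [1,10,2,20,3,30]
```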
Monads are syntactic sugar in the same sense that classes and method call syntax are syntactic sugar. It is useful and practical, if a bit verbose, to apply object-oriented principles to a language such as C. Like OO (and many other language features) monads are an idea, a way of thinking about organizing your programs.
Monadic code can let you write the shape of code while deferring certain decisions to later. A Log monad, which could be a variant of Writer could be used to write code for a library that supports logging but let the consuming application decide where the logging goes, if anywhere. You can do this without syntactic sugar at all, or you can leverage it if the language you're working in supports it.
Of course there are other ways to get this feature but this is just one, hopefully "practical" example.
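A hand-rolled sketch of such a Log monad (the names here are invented for illustration; a real program would more likely use Writer from the mtl package): library code records messages, and the caller decides what to do with the log.

```haskell
-- A value paired with accumulated log messages.
newtype Log a = Log (a, [String])

instance Functor Log where
  fmap f (Log (a, w)) = Log (f a, w)

instance Applicative Log where
  pure a = Log (a, [])
  Log (f, w1) <*> Log (a, w2) = Log (f a, w1 ++ w2)

instance Monad Log where
  Log (a, w1) >>= f = let Log (b, w2) = f a in Log (b, w1 ++ w2)

logMsg :: String -> Log ()
logMsg s = Log ((), [s])

-- Library code just logs; it does not decide where the log goes.
step :: Int -> Log Int
step x = do
  logMsg ("doubling " ++ show x)
  return (x * 2)

main :: IO ()
main = let Log (r, msgs) = step 10 >>= step
       in do print r            -- prints 40
             mapM_ putStrLn msgs
```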
No,
you can think of a Monad (or any other type class in Haskell) more in terms of a pattern.
You see a pattern and you handle it every time the same way, so that you can generalize over it.
In this case it's the pattern of values with added information (or, if you like, data inside some kind of bag; but this picture does not hold for every monad) and a way to chain those together nicely.
The syntactic sugar is just some nice little way to compose the binds ;)
It's an extension to the thing ;)
For the practical concepts: just look at async workflows, or the IO monad; that should be practical enough ;)
I would first call it a pattern, in the sense that m a -> (a -> m b) -> m b (with a reasonable behavior) is convenient for many different problems / type constructors.
Actually, it is so convenient that it deserves some syntactic sugar in the language. That's the do notation in Haskell, from clauses in C#, for comprehensions in Scala. The syntactic sugar requires only adherence to a naming pattern when implementing (SelectMany in C#, flatMap in Scala). Those languages do that without Monad being a type in their libraries (in Scala, one may be written). Note that C# does the same for the iterator pattern: while there is an interface IEnumerable, foreach is translated to calls to GetEnumerator/MoveNext/Current based on the names of the methods, irrespective of the types. Only when the translation is done is it checked that everything is defined and well typed.
But in Haskell (and this may be done in Scala or OCaml too, not in C#, and I believe it is not possible in F# either), Monad is more than a design pattern plus syntactic sugar based on a naming pattern. It's an actual API, a software component, whatever.
Consider the iterator pattern in (statically typed) imperative languages. You may just implement MoveNext/Current (or hasNext/next) in classes where this is appropriate. And if there is some syntactic sugar for it, as in C#, that's already quite useful. But if you make it an interface, you can immediately do much more. You can have computations that work on any iterator. You can have utility methods on iterators (find, filter, chain, nest, ...) making them more powerful.
When Monad is a type rather than just a pattern, you can do the same. You can have utility functions that make working with monads more powerful (in Control.Monad), and you can have computations where the type of monad to use is a parameter (see this old article from Wadler showing how an interpreter can be parameterized by the monad type, and what various instances do). To have a monad type (type class), you need some kind of higher-order type; that is, you need to be able to parameterize over a type constructor rather than a simple data type.
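As a small illustration of utility code that works for any monad, here is a hypothetical helper built on Control.Monad's foldM; the caller picks the monad, and thereby the failure behavior:

```haskell
import Control.Monad (foldM)

-- Divide an initial value by a list of divisors, failing on zero.
-- What "failing" means depends on the monad the caller chooses:
-- Nothing for Maybe, an exception for IO, and so on.
safeDivAll :: MonadFail m => Int -> [Int] -> m Int
safeDivAll = foldM step
  where
    step _   0 = fail "division by zero"
    step acc d = return (acc `div` d)

main :: IO ()
main = do
  print (safeDivAll 100 [2, 5] :: Maybe Int)  -- Just 10
  print (safeDivAll 100 [2, 0] :: Maybe Int)  -- Nothing
```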
At first glance, there are obvious distinctions between the two kinds of "class". However, I believe there are more similarities:
Both have different kinds of constructors.
Both define a group of operations that could be applied to a particular type of data, in other words, they both define an Interface.
I can see that "class" is much more concise in Haskell and it's also more efficient. But, I have a feeling that, theoretically, "class" and "abstract class" are identical.
What's your opinion?
Er, not really, no.
For one thing, Haskell's type classes don't have constructors; data types do.
Also, a type class instance isn't really attached to the type it's defined for, it's more of a separate entity. You can import instances and data definitions separately, and usually it doesn't really make sense to think about "what class(es) does this piece of data belong to". Nor do functions in a type class have any special access to the data type an instance is defined for.
What a type class actually defines is a collection of identifiers that can be shared to do conceptually equivalent things (in some sense) to different data types, on an explicit per-type basis. This is why it's called ad-hoc polymorphism, in contrast to the standard parametric polymorphism that you get from regular type variables.
It's much, much closer to "overloaded functions" in some languages, where different functions are given the same name, and dispatch is done based on argument types (for some reason, other languages don't typically allow overloading based on return type, though this poses no problem for type classes).
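A sketch of such return-type dispatch, with a made-up class: no argument mentions the class type, so the instance chosen depends only on the type the context demands.

```haskell
-- 'defaultValue' is overloaded purely on its return type.
class HasDefault a where
  defaultValue :: a

instance HasDefault Int where
  defaultValue = 0

instance HasDefault Bool where
  defaultValue = False

main :: IO ()
main = print (defaultValue :: Int, defaultValue :: Bool)
-- prints (0,False)
```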
Apart from the implementation differences, one major conceptual difference is regarding when the classes / type classes as declared.
If you create a new class, MyClass, in e.g. Java or C#, you need to specify all the interfaces it provides at the time you develop the class. Now, if you bundle your code up into a library and provide it to a third party, they are limited by the interfaces you decided the class should have. If they want additional interfaces, they'd have to create a derived class, TheirDerivedClass. Unfortunately, your library might make copies of MyClass without knowledge of the derived class, and might return new instances through its interfaces that they'd then have to wrap. So, to really add new interfaces to the class, they'd have to add a whole new layer on top of your library. Not elegant, nor really practical either.
With type classes, you specify the interfaces that a type provides separately from the type definition. If a third-party library now contained YourType, I could just declare instances making YourType belong to new interfaces (that you did not provide when you created the type) within my own code.
Thus, type classes allow the user of the type to be in control of what interfaces the type adheres to, while with 'normal' classes, the developer of the class is in control (and has to have the crystal ball needed to see all the possible things for what the user might want to use the class).
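A sketch of this after-the-fact instantiation, with invented names: think of Bool and lists as types from someone else's library, and Pretty as the new interface declared entirely in user code.

```haskell
-- A "new interface" the original author of Bool never anticipated.
class Pretty a where
  pretty :: a -> String

-- Instances declared after the fact, with no access to the types'
-- definitions and no derived classes.
instance Pretty Bool where
  pretty True  = "yes"
  pretty False = "no"

instance Pretty a => Pretty [a] where
  pretty xs = unwords (map pretty xs)

main :: IO ()
main = putStrLn (pretty [True, False])
-- prints "yes no"
```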
From: http://www.haskell.org/tutorial/classes.html
Before going on to further examples of the use of type classes, it is worth pointing out two other views of Haskell's type classes. The first is by analogy with object-oriented programming (OOP). In the following general statement about OOP, simply substituting type class for class, and type for object, yields a valid summary of Haskell's type class mechanism:
"Classes capture common sets of operations. A particular object may be an instance of a class, and will have a method corresponding to each operation. Classes may be arranged hierarchically, forming notions of superclasses and sub classes, and permitting inheritance of operations/methods. A default method may also be associated with an operation."
In contrast to OOP, it should be clear that types are not objects, and in particular there is no notion of an object's or type's internal mutable state. An advantage over some OOP languages is that methods in Haskell are completely type-safe: any attempt to apply a method to a value whose type is not in the required class will be detected at compile time instead of at runtime. In other words, methods are not "looked up" at runtime but are simply passed as higher-order functions.
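The "passed as higher-order functions" remark can be made explicit by hand. This sketch (names invented) shows what the compiler effectively does with a class: the dictionary is just a record of functions, passed as an ordinary argument.

```haskell
-- An explicit dictionary for an Eq-like class.
data EqDict a = EqDict { eq :: a -> a -> Bool }

-- A "class method" consumer takes the dictionary as a plain argument;
-- nothing is looked up at runtime.
elemBy :: EqDict a -> a -> [a] -> Bool
elemBy d x = any (eq d x)

intEq :: EqDict Int
intEq = EqDict (==)

main :: IO ()
main = print (elemBy intEq 3 [1, 2, 3])
-- prints True
```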
This slideshow may help you understand the similarities and differences between OO abstract classes and Haskell type classes: Classes, Jim, But Not As We Know Them.
What are the similarities and the differences between Haskell's TypeClasses and Go's Interfaces? What are the relative merits / demerits of the two approaches?
It looks like Go interfaces resemble single-parameter type classes (constructor classes) in Haskell only in superficial ways:
Methods are associated with an interface type
Objects (particular types) may have implementations of that interface
It is unclear to me whether Go in any way supports bounded polymorphism via interfaces, which is the primary purpose of type classes. That is, in Haskell, the interface methods may be used at different types,
class I a where
    put :: a -> IO ()
    get :: IO a

instance I Int where
    ...

instance I Double where
    ...
So my question is whether Go supports this kind of bounded polymorphism. If not, Go interfaces aren't really like type classes at all, and the two aren't really comparable.
Haskell's type classes allow powerful reuse of code via "generics" -- higher-kinded polymorphism -- and a good reference for cross-language support for such forms of generic programming is this paper.
Ad hoc, or bounded polymorphism, via type classes, is well described here. This is the primary purpose of type classes in Haskell, and one not addressed via Go interfaces, meaning they're not really very similar at all. Interfaces are strictly less powerful - a kind of zeroth-order type class.
I will add to Don Stewart's excellent answer that one of the surprising consequences of Haskell's type classes is that you can use logic programming at compile time to generate arbitrarily many instances of a class. (Haskell's type-class system includes what is effectively a cut-free subset of Prolog, very similar to Datalog.) This system is exploited to great effect in the QuickCheck library. Or for a very simple example, you can see how to define a version of Boolean complement (not) that works on predicates of arbitrary arity. I suspect this ability was an unintended consequence of the type-class system, but it has proven incredibly powerful.
Go has nothing like it.
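For reference, the arbitrary-arity complement mentioned above can be sketched with a recursive instance (names invented here): instance resolution peels off one function argument at a time until it reaches Bool.

```haskell
class Complementable a where
  complement :: a -> a

-- Base case: complement of a Bool is ordinary 'not'.
instance Complementable Bool where
  complement = not

-- Recursive case: complement a function by complementing its result.
-- Resolution recurses on the result type, one argument at a time.
instance Complementable r => Complementable (a -> r) where
  complement f = complement . f

main :: IO ()
main = print (complement even (3 :: Int), complement (<) (2 :: Int) 1)
-- prints (True,True)
```

The instance search that makes this work is exactly the Prolog-like resolution described above.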
In Haskell, typeclass instantiation is explicit (i.e., you have to say instance Foo Bar for Bar to be an instance of Foo), while in Go implementing an interface is implicit (i.e., when you define a type with the right methods, it automatically implements the corresponding interface, without having to say something like implements InterfaceName).
An interface can only describe methods where the instance of the interface is the receiver. In a typeclass, the instantiating type can appear at any argument position or as the return type of a function (i.e., you can say: if Foo is an instance of the typeclass Bar, there must be a function named baz which takes an Int and returns a Foo; you can't say that with interfaces).
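That baz example can be written directly in Haskell (Bar, baz and Stars are invented names for illustration); the class type appears only in the return position, which a Go interface, where the type must be the receiver, cannot express.

```haskell
class Bar a where
  baz :: Int -> a   -- 'a' occurs only as the return type

newtype Stars = Stars String deriving (Show, Eq)

instance Bar Stars where
  baz n = Stars (replicate n '*')

main :: IO ()
main = print (baz 3 :: Stars)
-- prints Stars "***"
```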
Very superficial similarities, Go's interfaces are more like structural sub-typing in OCaml.
C++ Concepts (that didn't make it into C++0x) are like Haskell type classes. There were also "axioms" which aren't present in Haskell at all. They let you formalize things like the monad laws.
If I have an object, how can I determine its type? (Is there an OCaml equivalent to Java's instanceof operator?)
OCaml has structural typing for objects rather than nominative typing as in Java. So the type of an object is basically determined (and only determined) by its methods. Objects in OCaml can be created directly, without going through something like a class.
You can write functions which require that their argument objects have certain methods (and that those methods have certain types); for example, the following function takes as its argument any object with a method bar:
let foo x = x#bar
There's a discussion of "Matching Objects With Patterns" on Lambda the Ultimate (the paper uses Scala as the language, so it won't answer your question). A more relevant OCaml mailing list thread indicates that there's no RTTI/safe downcasting for objects.
For algebraic (non object) types you obviously have:
match expr with
| Type1 x -> x
| Type2 (x, y) -> y

This is called (pattern) matching.
Someone did write an extension that allows down/up-casting OCaml objects.
In short, you have to encode your own RTTI mechanism. OCaml provides no RTTI or up/down casting (the latter in part because inheritance and subtyping are orthogonal in OCaml rather than unified as in Java).
You could do something with strings or polymorphic variants to encode type information in your classes and objects. I believe that LablGTK does some of this, and provides a utility library to support object tagging and up/down casting.
Somewhat off-topic, but the OPA language (which draws heavily on some aspects of OCaml) allows the equivalent of pattern matching on objects, so it's quite feasible.