First of all, I want to clarify that I tried to find a solution to my problem by googling, but I didn't succeed.
I need a way to compare two expressions. The problem is that these expressions are not comparable. I'm coming from Erlang, where I can do:
case exp1 of
    exp2 -> ...
where exp1 and exp2 are bound. But Haskell doesn't allow me to do this. However, in Haskell I could compare using ==. Unfortunately, their type is not a member of the class Eq. Of course, both expressions are unknown until runtime, so I can't write a static pattern in the source code.
How can I compare these two expressions without having to define my own comparison function? I suppose that pattern matching could be used here in some way (as in Erlang), but I don't know how.
Edit
I think that explaining my goal could help to understand the problem.
I'm modifying an Abstract Syntax Tree (AST). I am going to apply a set of rules that will modify this AST, but I want to store the modifications in a list. The elements of this list should be tuples of the original piece of the AST and its modification. The last step is then, for each tuple, to search for a piece of the AST that is exactly the same and substitute it with the second element of the tuple. So, I will need something like this:
change ((old, new) : t) piece_of_ast =
    case piece_of_ast of
        old -> new
        _   -> piece_of_ast
I hope this explanation clarifies my problem.
Thanks in advance
It's probably an oversight in the library (but maybe I'm missing a subtle reason why Eq is a bad idea!) and I would contact the maintainer to get the needed Eq instances added in.
But for the record and the meantime, here's what you can do if the type you want to compare for equality doesn't have an instance for Eq, but does have one for Data - as is the case in your question.
The Data.Generics.Twins module (in the syb package) offers a generic version of equality, with type
geq :: Data a => a -> a -> Bool
As the documentation states, this is 'Generic equality: an alternative to "deriving Eq" '. It works by comparing the toplevel constructors and if they are the same continues on to the subterms.
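For instance, here is a minimal sketch of the substitution function from the question, written with geq; Expr here is just a stand-in for whatever Data-deriving AST type you actually have:

{-# LANGUAGE DeriveDataTypeable #-}

import Data.Data (Data, Typeable)
import Data.Generics.Twins (geq)

-- Expr is a placeholder for the real AST type from the question
data Expr = Lit Int | Add Expr Expr
    deriving (Show, Data, Typeable)

-- the question's change function, using geq in place of (==)
change :: Data ast => [(ast, ast)] -> ast -> ast
change [] piece = piece
change ((old, new) : rest) piece
    | piece `geq` old = new
    | otherwise       = change rest piece

-- change [(Lit 1, Lit 2)] (Lit 1)  ==>  Lit 2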
Since the Data class inherits from Typeable, you could even write a function like
import Data.Data (Data)
import Data.Generics.Twins (geq)
import Data.Typeable (cast)

veryGenericEq :: (Data a, Data b) => a -> b -> Bool
veryGenericEq a b = case cast a of
    Nothing -> False
    Just a' -> geq a' b
but I'm not sure this is a good idea - it certainly is unhaskelly, smashing all types into one big happy universe :-)
If you don't have a Data instance either, but the data type is simple enough that comparing for equality is 100% straightforward then StandaloneDeriving is the way to go, as #ChristianConkle indicates. To do this you need to add a {-# LANGUAGE StandaloneDeriving #-} pragma at the top of your file and add a number of clauses
deriving instance Eq a => Eq (CStatement a)
one for each type CStatement uses that doesn't have an Eq instance, like CAttribute. GHC will complain about each one you need, so you don't have to trawl through the source.
This will create a bunch of so-called 'orphan instances.' Normally, an instance like instance C T where will be defined in either the module that defines C or the module that defines T. Your instances are 'orphans' because they're separated from their 'parents.' Orphan instances can be bad because you might start using a new library which also has those instances defined; which instance should the compiler use? There's a little note on the Haskell wiki about this issue. If you're not publishing a library for others to use, it's fine; it's your problem and you can deal with it. It's also fine for testing; if you can implement Eq, then the library maintainer can probably include deriving Eq in the library itself, solving your problem.
I'm not familiar with Erlang, but Haskell does not assume that all expressions can be compared for equality. Consider, for instance, undefined == undefined or undefined == let x = x in x.
Equality testing with (==) is an operation on values. Values of some types are simply not comparable. Consider, for instance, two values of type IO String: return "s" and getLine. Are they equal? What if you run the program and type "s"?
On the other hand, consider this:
f :: IO Bool
f = do
    x <- return "s" -- note that using return like this is pointless
    y <- getLine
    return (x == y) -- both x and y have type String
It's not clear what you're looking for. Pattern matching may be the answer, but if you're using a library type that's not an instance of Eq, the likely answer is that comparing two values is actually impossible (or that the library author has for some reason decided to impose that restriction).
What types, specifically, are you trying to compare?
Edit: As a commenter mentioned, you can also compare using Data. I don't know if that is easier for you in this situation, but to me it is unidiomatic; a hack.
You also asked why Haskell can't do "this sort of thing" automatically. There are a few reasons. In part it's historical; in part it's that, with deriving, Eq satisfies most needs.
But there's also an important principle in play: the public interface of a type ideally represents what you can actually do with values of that type. Your specific type is a bad example, because it exposes all its constructors, and it really looks like there should be an Eq instance for it. But there are many other libraries that do not expose the implementation of their types, and instead require you to use provided functions (and class instances). Haskell allows that abstraction; a library author can rely on the fact that a consumer can't muck about with implementation details (at least without using unsafe tools like unsafeCoerce).
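To make that principle concrete, here is a hypothetical module in that spirit (names invented for illustration): the representation stays hidden, so the author gets to decide whether equality even makes sense for it:

module IntSet (IntSet, empty, insert, member) where

-- representation not exported, and no Eq instance provided: two different
-- lists can represent the same set, so (==) on the raw representation
-- would be wrong
newtype IntSet = IntSet [Int]

empty :: IntSet
empty = IntSet []

insert :: Int -> IntSet -> IntSet
insert x (IntSet xs) = IntSet (x : xs)

member :: Int -> IntSet -> Bool
member x (IntSet xs) = x `elem` xs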
Related
I am trying to understand type classes in Haskell.
I pose the question using a simple example:
Let's consider two datatypes, A and B.
Let's also assume that a1 and a2 are of type A and that b1 and b2 are of type B.
Now let's assume that we want to define functions to check if a1 equals a2 and whether b1 is the same as b2.
For this we have two options:
Option 1:
Define two equality functions using separate names, like this:
eqA :: A -> A -> Bool
eqA = ...
eqB :: B -> B -> Bool
eqB = ...
Now if we want to check whether a1 equals a2, we write eqA a1 a2; similarly for B: eqB b1 b2.
Option 2:
We make A and B instances of the Eq typeclass.
In this case we can use the == sign for checking for equality as follows.
a1==a2
b1==b2
So, now the question: as far as I understand, the sole purpose of the existence of type classes is that we can reuse the function name == for the equivalence-checking function, as shown in Option 2. So if we use type classes, we do not have to use different function names for essentially the same purpose (i.e. checking for equality).
Is that correct? Is name reuse the only advantage that type classes give? Or is there something more to type classes ?
In other words, given any program using type classes, I could rewrite that program to an isomorphic program that does not use type classes by giving separate names (i.e. eqA and eqB) to the overloaded function names (i.e. == in Option 2) that are defined in the type class instance declarations.
In the simple example this transformation would replace Option 2 with Option 1.
Is such a mechanic transformation always possible?
Is that not what the compiler is doing behind the scenes?
Thank you for reading.
EDIT:
Thanks for all the illuminating answers!
There is a very good article regarding type classes which shows their desugaring process: https://www.fpcomplete.com/user/jfischoff/instances-and-dictionaries
So, yes, you could do without them, but you would need a different function for each type, as you did in your question, or you would need to pass the instance dictionary explicitly as described in the article; the type class does it implicitly, so you don't need to bother. In many cases, you would end up with a LOT of functions with different names!
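For the record, here is a minimal sketch of that dictionary-passing desugaring (the names are illustrative, not GHC's actual internals):

-- the Eq class becomes a record of its methods:
data EqDict a = EqDict { eq :: a -> a -> Bool }

-- each instance becomes a value of that record type:
eqInt :: EqDict Int
eqInt = EqDict (==)

-- a constraint `Eq a =>` becomes an explicit dictionary argument:
elemBy :: EqDict a -> a -> [a] -> Bool
elemBy d x = any (eq d x)

-- usage: elemBy eqInt 3 [1, 2, 3]  ==>  True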
Nevertheless, your question also points to the fact that type classes are often overused when other patterns would solve the problem just as well:
http://lukepalmer.wordpress.com/2010/01/24/haskell-antipattern-existential-typeclass/
Type classes are syntax sugar, so there is nothing that you cannot do without them. It is just a question of how hard it would be to do without them.
My favorite example is QuickCheck, which uses type classes to recurse over function arguments. (Yes, you can do that.) QuickCheck takes a function with an arbitrary number of arguments, and auto-generates random test data for each argument, and checks that the function always returns true. It uses type classes to figure out how to "randomly" generate data of each individual type. And it uses clever type class tricks to support functions with any number of arguments.
You don't need this; you could have a test1 function, and then a test2 function, and then a test3 function, etc. But it's nice to be able to have just "one function" which automatically figures out how many arguments there are. Indeed, without type classes you'd end up having to write something like
test_Int
test_Double
test_Char
test_Int_Int
test_Int_Double
test_Int_Char
test_Double_Int
test_Double_Double
test_Double_Char
...
...for every possible combination of inputs. And that would just be really annoying. (And good luck trying to extend it within your own code.)
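To see how the recursion over arguments works, here is a hand-rolled miniature of the trick. Real QuickCheck's classes are much richer (a Gen monad, shrinking, reporting), so treat this only as a sketch of the idea:

import System.Random (randomRIO)

class Arbitrary a where
    arbitrary :: IO a   -- real QuickCheck uses a Gen monad, not IO

instance Arbitrary Int where
    arbitrary = randomRIO (0, 100)

class Testable t where
    test :: t -> IO Bool

-- base case: a Bool is already a finished test result
instance Testable Bool where
    test = return

-- inductive case: generate one argument, apply it, recurse on the rest
instance (Arbitrary a, Testable t) => Testable (a -> t) where
    test f = do
        x <- arbitrary
        test (f x)

-- usage: test (\x y -> x + y == y + (x :: Int))  -- any number of arguments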
Really, type classes make your code more high-level and declarative. They make it easier to see what the code is trying to do. (Most of the time.) There are really advanced cases where I start to think maybe the design isn't quite right, but for the vast majority of stuff, type classes work really well.
For the most part the answer is that type classes give you name re-use. Sometimes the amount of name re-use you get is more than you might think.
A type class is somewhat similar to an interface in object-oriented languages. Occasionally someone might ask, "is the only purpose of an interface to enforce contracts in APIs?" We might see this code in Java:
public <A> void foo(List<A> list) {
    list.clear();
}

List<String> list  = new ArrayList<>();
List<String> list2 = new LinkedList<>();
foo(list);
foo(list2);
Both ArrayList and LinkedList implement the List interface, so we can write a general function foo that operates on any List. This is similar to your Eq example. In Haskell we have many higher-order functions like this. For example, foldl can be implemented by more than just lists, and we can use foldl on anything that implements Foldable from Data.Foldable.
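For instance (a small sketch):

import Data.Foldable (foldl')

-- the same fold works on any Foldable container, not just lists:
sumList :: Int
sumList = foldl' (+) 0 [1, 2, 3]    -- 6

sumMaybe :: Int
sumMaybe = foldl' (+) 0 (Just 6)    -- 6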
In OO you can use generics to do some really interesting abstractions and inheritance (e.g. in Java things like Foo<A extends Foo<A>>). In Haskell we can use Type Classes to do different types of abstractions.
Let's take the Monad class, which requires you to implement "return". Maybe implements return something like this:
instance Monad Maybe where
    return a = Just a
Whilst List implements return like this:
instance Monad [] where
    return a = [a]
Return, as it works in Haskell, is not possible in a language like Java. In Java, if you want to create a new instance of something in a container (say a new List<String>), you have to write new ArrayList<String>(). You can't create an instance of something merely because it fulfills a contract. In Haskell, by contrast, we can write code that would not be possible in Java:
foobar :: (Monad m) => (m a -> m b) -> a -> m b
foobar g a = g $ return a
Now I can pass anything to foobar: a function Maybe a -> Maybe b, or [a] -> [b], and foobar will still accept it.
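For example (a usage sketch, reusing foobar from above):

ex1 :: Maybe Int
ex1 = foobar (fmap (+ 1)) 41    -- Just 42

ex2 :: [Int]
ex2 = foobar (map (* 2)) 21     -- [42]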
Now of course this could all be broken down and functions could be written one by one. Just like Java could ignore interfaces and implement everything from scratch on each class of List. But just like the inheritance that's possible in OO once you factor in generics, in Haskell we can do some interesting things in Type Classes using language extensions. For example, check out http://www.haskell.org/haskellwiki/Functional_dependencies_vs._type_families
I was revisiting a piece of code I wrote to do combinatorial search a few months ago, and noticed that there was an alternative, simpler way to do something that I'd previously achieved with a type class.
Specifically, I previously had a type class for search problems, which have states of type s, actions (operations on states) of type a, an initial state, a way of getting a list of (action, state) pairs, and a way of testing whether a state is a solution or not:
class Problem p s a where
    initial   :: p s a -> s
    successor :: p s a -> s -> [(a,s)]
    goaltest  :: p s a -> s -> Bool
This is somewhat unsatisfactory, as it requires the MultiParamTypeClasses extension, and generally needs FlexibleInstances and possibly TypeSynonymInstances when you want to make instances of this class. It also clutters up your function signatures, e.g.
pathToSolution :: Problem p s a => p s a -> [(a,s)]
I noticed today that I can get rid of the class entirely and use a type instead, along the following lines:
data Problem s a = Problem
    { initial   :: s
    , successor :: s -> [(a,s)]
    , goaltest  :: s -> Bool
    }
This doesn't require any extensions, and the function signatures look nicer:
pathToSolution :: Problem s a -> [(a,s)]
and, most importantly, I found that after refactoring my code to use this abstraction instead of a type class, I was left with 15-20% fewer lines than I had previously.
The biggest win was in code that created abstractions using the type class - previously I had to create new data structures that wrapped the old ones in a complicated way, and then make them into instances of the Problem class (which required more language extensions) - lots of lines of code to do something relatively simple. After the refactor, I just had a couple of functions that did exactly what I wanted.
I'm now looking through the rest of the code, trying to spot instances where I can replace type classes with types, and make more wins.
My question is: in what situations will this refactoring not work? In what cases is it actually better to use a type class rather than a data type, and how can you recognise those situations ahead of time, so that you don't have to go through a costly refactoring?
Consider a situation where both the type and class exist in the same program. The type can be an instance of the class, but that's rather trivial. More interesting is that you can write a function fromProblemClass :: (CProblem p s a) => p s a -> TProblem s a.
The refactoring you performed is roughly equivalent to manually inlining fromProblemClass everywhere you construct something used as a CProblem instance, and making every function that accepts a CProblem instance instead accept TProblem.
Since the only interesting parts of this refactoring are the definition of TProblem and the implementation of fromProblemClass, if you can write a similar type and function for any other class, you can likewise refactor it to eliminate the class entirely.
When does this work?
Think about the implementation of fromProblemClass. You'll essentially be partially applying each function of the class to a value of the instance type, and in the process eliminating any reference to the p parameter (which is what the type replaces).
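A sketch of what that looks like, with hypothetical field names for the TProblem record (the names are illustrative):

fromProblemClass :: CProblem p s a => p s a -> TProblem s a
fromProblemClass p = TProblem
    { tInitial   = initial p
    , tSuccessor = successor p
    , tGoaltest  = goaltest p
    }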
Any situation where refactoring away a type class is straightforward is going to follow a similar pattern.
When is this counterproductive?
Imagine a simplified version of Show, with only the show function defined. This permits the same refactoring, applying show and replacing each instance with... a String. Clearly we've lost something here--namely, the ability to work with the original types and convert them to a String at various points. The value of Show is that it's defined on a wide variety of unrelated types.
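A minimal sketch of that simplified Show and its "refactoring":

class Show' a where
    show' :: a -> String

-- the "refactored type" is just the result of the one method:
type Shown = String

fromShow' :: Show' a => a -> Shown
fromShow' = show'
-- after converting, the original value is gone: we can no longer apply any
-- function specific to its type, only String operations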
As a rule of thumb, if there are many different functions specific to the types which are instances of the class, and these are often used in the same code as the class functions, delaying the conversion is useful. If there's a sharp dividing line between code that treats the types individually and code that uses the class, conversion functions might be more appropriate with a type class being a minor syntactic convenience. If the types are used almost exclusively through the class functions, the type class is probably completely superfluous.
When is this impossible?
Incidentally, the refactoring here is similar to the difference between a class and an interface in OO languages; similarly, the type classes where this refactoring is impossible are those which can't be expressed directly at all in many OO languages.
More to the point, some examples of things you can't translate easily, if at all, in this manner:
The class's type parameter appearing only in covariant position, such as the result type of a function or as a non-function value. Notable offenders here are mempty for Monoid and return for Monad (a sketch follows this list).
The class's type parameter appearing more than once in a function's type may not make this truly impossible but it complicates matters quite severely. Notable offenders here include Eq, Ord, and basically every numeric class.
Non-trivial use of higher kinds, the specifics of which I'm not sure how to pin down, but (>>=) for Monad is a notable offender here. On the other hand, the p parameter in your class is not an issue.
Non-trivial use of multi-parameter type classes, which I'm also uncertain how to pin down and gets horrendously complicated in practice anyway, being comparable to multiple dispatch in OO languages. Again, your class doesn't have an issue here.
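To make the covariant-position point concrete, here is a hedged sketch: the Problem refactoring worked by partially applying every method to an instance value, but for a class like Monoid there is no such value to apply mempty to:

-- the analogous record for Monoid:
data MonoidRec a = MonoidRec
    { memptyR  :: a
    , mappendR :: a -> a -> a
    }

-- fromProblemClass projected an instance *value*; here there is no value of
-- the class's type to project from, since mempty only *produces* one. The
-- record must instead be built per type, by hand:
intSumMonoid :: MonoidRec Int
intSumMonoid = MonoidRec { memptyR = 0, mappendR = (+) }
-- which is just reintroducing the instance dictionary manually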
Note that, given the above, this refactoring is not even possible for many of the standard type classes, and would be counterproductive for the few exceptions. This is not a coincidence. :]
What do you give up by applying this refactoring?
You give up the ability to distinguish between the original types. This sounds obvious, but it's potentially significant--if there are any situations where you really need to control which of the original class instance types was used, applying this refactoring loses some degree of type safety, which you can only recover by jumping through the same sort of hoops used elsewhere to ensure invariants at run-time.
Conversely, if there are situations where you really need to make the various instance types interchangeable--the convoluted wrapping you mentioned being a classic symptom of this--you gain a great deal by throwing away the original types. This is most often the case where you don't actually care much about the original data itself, but rather about how it lets you operate on other data; thus using records of functions directly is more natural than an extra layer of indirection.
As noted above, this relates closely to OOP and the type of problems it's best suited to, as well as representing the "other side" of the Expression Problem from what's typical in ML-style languages.
Your refactoring is closely related to this blog entry by Luke Palmer: "Haskell Antipattern: Existential Typeclass".
I think we can prove that your refactoring will always work. Why? Intuitively, because if some type Foo contains enough information so that we can make it into an instance of your Problem class, we can always write a Foo -> Problem function that "projects" Foo's relevant information into a Problem that contains exactly the information needed.
A bit more formally, we can sketch a proof that your refactoring always works. First, to set the stage, the following code defines a translation of a Problem class instance into a concrete CanonicalProblem type:
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Problem p s a where
    initial   :: p s a -> s
    successor :: p s a -> s -> [(a,s)]
    goaltest  :: p s a -> s -> Bool

data CanonicalProblem s a = CanonicalProblem
    { initial'   :: s
    , successor' :: s -> [(a,s)]
    , goaltest'  :: s -> Bool
    }

instance Problem CanonicalProblem s a where
    initial   = initial'
    successor = successor'
    goaltest  = goaltest'

canonicalize :: Problem p s a => p s a -> CanonicalProblem s a
canonicalize p = CanonicalProblem
    { initial'   = initial p
    , successor' = successor p
    , goaltest'  = goaltest p
    }
Now we want to prove the following:
For any type Foo such that instance Problem Foo s a, it is possible to write a canonicalizeFoo :: Foo s a -> CanonicalProblem s a function that produces the same result as canonicalize when applied to any Foo s a.
It's possible to rewrite any function that uses the Problem class into an equivalent function that uses CanonicalProblem instead. E.g., if you have solve :: Problem p s a => p s a -> r, you can write a canonicalSolve :: CanonicalProblem s a -> r that's equivalent to solve . canonicalize
I'll just sketch proofs. In the case of (1), suppose you have a type Foo with this Problem instance:
instance Problem Foo s a where
    initial   = initialFoo
    successor = successorFoo
    goaltest  = goaltestFoo
Then given x :: Foo s a you can trivially prove the following by substitution:
-- definition of canonicalize
canonicalize :: Problem p s a => p s a -> CanonicalProblem s a
canonicalize x = CanonicalProblem
    { initial'   = initial x
    , successor' = successor x
    , goaltest'  = goaltest x
    }

-- specialize to the Problem instance for Foo s a
canonicalize :: Foo s a -> CanonicalProblem s a
canonicalize x = CanonicalProblem
    { initial'   = initialFoo x
    , successor' = successorFoo x
    , goaltest'  = goaltestFoo x
    }
And the latter can be used directly to define our desired canonicalizeFoo function.
In the case of (2), for any function solve :: Problem p s a => p s a -> r (or similar types that involve Problem constraints), and for any type Foo such that instance Problem Foo s a:
Define canonicalSolve :: CanonicalProblem s a -> r' by taking the definition of solve and substituting all occurrences of Problem methods with their CanonicalProblem instance definitions.
Prove that for any x :: Foo s a, solve x is equivalent to canonicalSolve (canonicalize x).
Concrete proofs of (2) require concrete definitions of solve or related functions. A general proof could go either of these two ways:
Induction over all types that have Problem p s a constraints.
Prove that all Problem functions can be written in terms of a small subset of the functions, prove that this subset has CanonicalProblem equivalents, and that the various ways of using them together preserves the equivalence.
If you are from an OOP background, you can think of typeclasses as interfaces in Java. They are generally used when you want to provide the same interface to different data types, usually with a type-specific implementation for each.
In your case there is no benefit to using a typeclass; it would only overcomplicate your code.
For more information, you can always refer to the HaskellWiki for a better understanding:
http://www.haskell.org/haskellwiki/OOP_vs_type_classes
The general rule of thumb is: If you are doubtful whether you need type classes or not, then you probably don't need them.
What it says in the title. If I write a type signature, is it possible to algorithmically generate an expression which has that type signature?
It seems plausible that it might be possible to do this. We already know that if the type is a special case of a library function's type signature, Hoogle can find that function algorithmically. On the other hand, many simple problems relating to general expressions are actually unsolvable (e.g., it is impossible to know whether two functions do the same thing), so it's hardly implausible that this is one of them.
It's probably bad form to ask several questions all at once, but I'd like to know:
Can it be done?
If so, how?
If not, are there any restricted situations where it becomes possible?
It's quite possible for two distinct expressions to have the same type signature. Can you compute all of them? Or even some of them?
Does anybody have working code which does this stuff for real?
Djinn does this for a restricted subset of Haskell types, corresponding to a first-order logic. It can't manage recursive types or types that require recursion to implement, though; so, for instance, it can't write a term of type (a -> a) -> a (the type of fix), which corresponds to the proposition "if a implies a, then a", which is clearly false; you can use it to prove anything. Indeed, this is why fix gives rise to ⊥.
If you do allow fix, then writing a program to give a term of any type is trivial; the program would simply print fix id for every type.
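For reference, fix itself, and why admitting it makes inhabitation trivial:

fix :: (a -> a) -> a
fix f = f (fix f)

-- `fix id` type-checks at every type, but evaluating it never terminates;
-- it is exactly ⊥:
anything :: a
anything = fix id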
Djinn is mostly a toy, but it can do some fun things, like deriving the correct Monad instances for Reader and Cont given the types of return and (>>=). You can try it out by installing the djinn package, or using lambdabot, which integrates it as the #djinn command.
Oleg at okmij.org has an implementation of this. There is a short introduction here but the literate Haskell source contains the details and the description of the process. (I'm not sure how this corresponds to Djinn in power, but it is another example.)
There are cases where there is no unique function:
fst', snd' :: (a, a) -> a
fst' (a,_) = a
snd' (_,b) = b
Not only this; there are cases where there are an infinite number of functions:
list0, list1, list2 :: [a] -> a
list0 l = l !! 0
list1 l = l !! 1
list2 l = l !! 2
-- etc.
-- Or
mkList0, mkList1, mkList2 :: a -> [a]
mkList0 _ = []
mkList1 a = [a]
mkList2 a = [a,a]
-- etc.
(If you only want total functions, then consider [a] as restricted to infinite lists for list0, list1 etc, i.e. data List a = Cons a (List a))
In fact, if you have recursive types, any types involving these correspond to an infinite number of functions. However, at least in the case above, there is a countable number of functions, so it is possible to create an (infinite) list containing all of them. But, I think the type [a] -> [a] corresponds to an uncountably infinite number of functions (again restrict [a] to infinite lists) so you can't even enumerate them all!
(Summary: there are types that correspond to a finite, countably infinite and uncountably infinite number of functions.)
This is impossible in general (especially for languages like Haskell, which does not even have the strong normalization property), and only possible in some (very) special cases (and for more restricted languages), such as when the codomain type has only one constructor (for example, a function f :: forall a. a -> () can be determined uniquely). In order to reduce the set of possible definitions for a given signature to a singleton set with just one definition, you need to impose more restrictions, in the form of additional properties (it is still difficult to imagine how this could be helpful without giving an example of use).
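For completeness, the unique total function of that example type:

f :: a -> ()   -- i.e. forall a. a -> ()
f _ = ()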
From the (n-)categorical point of view, types correspond to objects, terms correspond to arrows (constructors also correspond to arrows), and function definitions correspond to 2-arrows. The question is analogous to asking whether one can construct a 2-category with the required properties by specifying only a set of objects. That's impossible, since you need either an explicit construction for arrows and 2-arrows (i.e., writing terms and definitions) or a deductive system which allows the necessary structure to be deduced from a certain set of properties (which still need to be defined explicitly).
There is also an interesting question: given an ADT (i.e., a subcategory of Hask), is it possible to automatically derive instances for Typeable, Data (yes, using SYB), Traversable, Foldable, Functor, Pointed, Applicative, Monad, etc.? In this case, we have the necessary signatures as well as additional properties (for example, the monad laws; these properties cannot be expressed in Haskell itself, but they can be expressed in a language with dependent types). There are some interesting constructions:
http://ulissesaraujo.wordpress.com/2007/12/19/catamorphisms-in-haskell
which shows what can be done for the list ADT.
The question is actually rather deep and I'm not sure of the answer, if you're asking about the full glory of Haskell types including type families, GADT's, etc.
What you're asking is whether a program can automatically prove that an arbitrary type is inhabited (contains a value) by exhibiting such a value. A principle called the Curry-Howard Correspondence says that types can be interpreted as mathematical propositions, and the type is inhabited if the proposition is constructively provable. So you're asking if there is a program that can prove a certain class of propositions to be theorems. In a language like Agda, the type system is powerful enough to express arbitrary mathematical propositions, and proving arbitrary ones is undecidable by Gödel's incompleteness theorem. On the other hand, if you drop down to (say) pure Hindley-Milner, you get a much weaker and (I think) decidable system. With Haskell 98, I'm not sure, because type classes are supposed to be able to be equivalent to GADT's.
With GADT's, I don't know if it's decidable or not, though maybe some more knowledgeable folks here would know right away. For example it might be possible to encode the halting problem for a given Turing machine as a GADT, so there is a value of that type iff the machine halts. In that case, inhabitability is clearly undecidable. But, maybe such an encoding isn't quite possible, even with type families. I'm not currently fluent enough in this subject for it to be obvious to me either way, though as I said, maybe someone else here knows the answer.
(Update:) Oh a much simpler interpretation of your question occurs to me: you may be asking if every Haskell type is inhabited. The answer is obviously not. Consider the polymorphic type
a -> b
There is no function with that signature (not counting something like unsafeCoerce, which makes the type system inconsistent).
I have a list of values (or functions) of any type. I have another list of functions of any type. The user at runtime will choose one from the first list, and another from the second list. I have a mechanism to ensure that the two items are type compatible (value or output from first is compatible with input of second).
I need some way to call the function with the value (or compose the functions). If the second function has concrete types, unsafeCoerce works fine. But if it's of the form:
polyFunc :: MyTypeclass a => a -> IO ()
polyFunc x = print . show $ typeclassFunc x
Then unsafeCoerce doesn't work since it can't resolve to a concrete type.
Is there any way to do what I'm trying to do?
Here's an example of what the lists might look like. However... I'm not limited to this; if there is some other way to represent these that will solve the problem, I would like to know. A critical thing to consider is that the list can change at runtime, so I do not know at compile time all the possible types that might be involved.
{-# LANGUAGE ExistentialQuantification #-}

data Wrapper = forall a. Wrapper a

firstList :: [Wrapper]
firstList = [Wrapper "blue", Wrapper 5, Wrapper valueOfMyTypeclass]

data OtherWrapper = forall a. OtherWrapper (a -> IO ())

secondList :: [OtherWrapper]
secondList = [OtherWrapper print, OtherWrapper polyFunc]
Note: As for why I want to do such a crazy thing:
I'm generating code and typechecking it with hint. But that happens at runtime. The problem is that hint is slow at actually executing things, and high performance for this is critical. Also, at least in certain cases, I do not want to generate code and run it through ghc at runtime (though we have done some of that, too). So... I'm trying to find somewhere in the middle: dynamically hook things together without having to generate code and compile, but run at compiled speed instead of interpreted.
Edit: Okay, so now that I see what's going on a bit more, here's a very general approach -- don't use polymorphic functions directly at all! Instead, use functions of type Dynamic -> IO ()! They can then use "typecase"-style dispatch directly to choose which monomorphic function to invoke -- i.e. just switch on the TypeRep. You do have to encode this dispatch directly for each polymorphic function you're wrapping. However, you can automate this with some Template Haskell if it becomes enough of a hassle.
Essentially, just as Dynamic embeds a dynamically typed language in a statically typed language, rather than overloading Haskell's polymorphism you now extend that to embed dynamic polymorphism in a statically typed language.
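Here is a minimal sketch of that typecase dispatch using Data.Dynamic; the set of monomorphic branches is illustrative, and in your setting each wrapped polymorphic function would get its own dispatcher (possibly generated by Template Haskell):

import Data.Dynamic (Dynamic, fromDynamic, toDyn)

printDyn :: Dynamic -> IO ()
printDyn d
    | Just n <- (fromDynamic d :: Maybe Int)    = print n
    | Just s <- (fromDynamic d :: Maybe String) = putStrLn s
    | otherwise = putStrLn "printDyn: no monomorphic case for this type"

main :: IO ()
main = mapM_ printDyn [toDyn (5 :: Int), toDyn "blue"]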
--
Old answer: More code would be helpful. But as far as I can tell, this is the read/show problem: you have a function that produces a polymorphic result and a function that takes a polymorphic input, and the issue is that you need to pick the intermediate value so that it satisfies both constraints. If you have a mechanism to do so, then the usual tricks will work, since you have settled the open question whose answer the compiler can't know.
I'm not sure that I completely understand your question. But since you have a value and a function with compatible types, you could combine them into a single value. Then the compiler could prove that the types do match.
{-# LANGUAGE ExistentialQuantification #-}

data Vault = forall a . Vault (a -> IO ()) a

runVault :: Vault -> IO ()
runVault (Vault f x) = f x
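A usage sketch: the pairing is checked by the compiler at construction time, so runVault cannot go wrong later:

vaults :: [Vault]
vaults = [Vault print (5 :: Int), Vault putStrLn "blue"]

main :: IO ()
main = mapM_ runVault vaults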
Firstly, this question isn't 100% specific to Haskell; feel free to comment on the general design of typeclasses, interfaces and types.
I'm reading LYAH (creating types and typeclasses). The following is the passage that I'd like more information on:
data (Ord k) => Map k v = ...
However, it's a very strong convention in Haskell to never add typeclass constraints in data declarations. Why? Well, because we don't benefit a lot, but we end up writing more class constraints, even when we don't need them. If we put or don't put the Ord k constraint in the data declaration for Map k v, we're going to have to put the constraint into functions that assume the keys in a map can be ordered. But if we don't put the constraint in the data declaration, we don't have to put (Ord k) => in the type declarations of functions that don't care whether the keys can be ordered or not. An example of such a function is toList, that just takes a mapping and converts it to an associative list. Its type signature is toList :: Map k a -> [(k, a)]. If Map k v had a type constraint in its data declaration, the type for toList would have to be toList :: (Ord k) => Map k a -> [(k, a)], even though the function doesn't do any comparing of keys by order.
This at first seems logical -- but isn't there an upside to having the typeclass attached to the type? If the typeclass is the behavior of the type, then why should the behavior be defined by the use of the type (through functions) and not by the type itself? I assume there is some meta-programming that could make use of it, and it is certainly nice, descriptive code documentation. Conversely, would this be a good idea in other languages? Would it be ideal to specify the interface the object should conform to on the method, such that if the method isn't used by the caller, the object doesn't have to conform to the interface? Moreover, why can't Haskell infer that a function using type Foo has to pull in the typeclass constraints identified in type Foo's declaration? Is there a pragma to enable this?
The first time I read it, it conjured a "that's a hack (or workaround)" response. On second read, with some thought, it sounded clever. On third read, drawing a comparison to the OO world, it sounded like a hack again.
So here I am.
Perhaps Map k v wasn't the best example to illustrate the point. Given the definition of Map, even though there are some functions that won't need the (Ord k) constraint, there is no possible way to construct a Map without it.
One often finds that a type is quite usable with the subset of functions that work without some particular constraint, even when you envisioned the constraint as an obvious aspect of your original design. In such cases, having left the constraint off the type declaration leaves it more flexible.
For example, Data.List contains plenty of functions that require (Eq a), but of course lists are perfectly useful without that constraint.
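For instance (a small sketch):

import Data.List (nub, sort)

-- no constraint needed here:
lengths :: [[a]] -> [Int]
lengths = map length

-- only the functions that really compare elements carry constraints
-- (nub :: Eq a => [a] -> [a], sort :: Ord a => [a] -> [a]):
deduped :: [Int]
deduped = sort (nub [2, 1, 1])    -- [1, 2]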
The short answer is: Haskell does this because that's how the language spec is written.
The long answer involves quoting from the GHC documentation language extensions section:
Any data type that can be declared in standard Haskell-98 syntax can also be declared using GADT-style syntax. The choice is largely stylistic, but GADT-style declarations differ in one important respect: they treat class constraints on the data constructors differently. Specifically, if the constructor is given a type-class context, that context is made available by pattern matching. For example:
data Set a where
    MkSet :: Eq a => [a] -> Set a
(...)
All this behaviour contrasts with Haskell 98's peculiar treatment of contexts on a data type declaration (Section 4.2.1 of the Haskell 98 Report). In Haskell 98 the definition
data Eq a => Set' a = MkSet' [a]
gives MkSet' the same type as MkSet above. But instead of making available an (Eq a) constraint, pattern-matching on MkSet' requires an (Eq a) constraint! GHC faithfully implements this behaviour, odd though it is. But for GADT-style declarations, GHC's behaviour is much more useful, as well as much more intuitive.
The main reason to avoid typeclass constraints in data declarations is that they accomplish absolutely nothing; in fact, I believe GHC refers to such a class context as the "stupid context". The reason for this is that the class dictionary is not carried around with the values of the data type, so you have to add it to every function operating on the values anyway.
As a way of "forcing" the typeclass constraint on functions operating on the data type, it also doesn't really accomplish anything; functions should generally be as polymorphic as possible, so why force the constraint onto things that don't need it?
At this point, you might think that it should be possible to change the semantics of ADTs in order to carry the dictionary around with the values. In fact, it seems like this is the whole point of GADTs; for example, you can do:
{-# LANGUAGE GADTs #-}
data Foo a where { Foo :: (Eq a) => a -> Foo a }

eqfoo :: Foo t -> Foo t -> Bool
eqfoo (Foo a) (Foo b) = a == b
Notice that the type of eqfoo does not need the Eq constraint, as it is "carried" by the Foo data type itself.
I would like to point out that if you are worried that one could construct an object that requires constraints for its operations but not for its creation, you can always artificially put the constraint on a mkFoo construction function to enforce the use of the typeclass by people who use the code. The idea also extends to non-mkFoo functions that operate on Foo. Then, when defining the module, don't export anything that doesn't enforce the constraints.
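A hypothetical sketch of that idea (module and names invented for illustration):

module Foo (Foo, mkFoo, useFoo) where

data Foo a = MkFoo a   -- the real constructor is not exported

-- the constraint is enforced artificially at construction time:
mkFoo :: Eq a => a -> Foo a
mkFoo = MkFoo

-- and on every exported operation, even though none of them needs it:
useFoo :: Eq a => Foo a -> a
useFoo (MkFoo a) = a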
Though I have to admit, I don't see any use for this.