Performance varies dramatically if a function is moved between modules - haskell

If I move a function from where it's used into a separate module, I've noticed the performance of the program drops significantly.
calc = sum . nub . map third . filter isProd . concat . map parts . permutations
  where
    third (_,_,b) = fromDigits b
    isProd (a,b,p) = fromDigits a * fromDigits b == fromDigits p
    -- All possibilities have digits: A x AAAA or AA x AAA
    parts (a:b:c:d:e:rest) = [ ([a],   [b,c,d,e], rest)
                             , ([a,b], [c,d,e],   rest) ]
in another module:
fromDigits :: Integral a => [a] -> a
fromDigits = foldl1' (\a b -> 10 * a + b)
This runs in 0.1 seconds when fromDigits is in the same module, but 0.4 seconds when I move it to another module.
I assume this is because GHC can't inline the function if it's in a different module, but I feel like it should be able to, since they are in the same package.
I'm not sure what the compiler settings are, but it's built with Leksah/cabal defaults. I'm fairly sure that's with -O2 as a minimum.

For the type-class polymorphic fromDigits, you get a function that is, due to the dictionary lookups for (+), (*) and fromInteger, too large to have its unfolding automatically exposed. That means it can't be specialised at the call sites and the dictionary lookups can't be eliminated to possibly inline addition and multiplication (which might enable further optimisation).
When it is defined in the same module as it is used in, with optimisations, GHC creates a specialised version for the type it's used at, if that is known. Then the dictionary lookups can be eliminated and the (+) and (*) operations can be inlined (if the type they're used at has operations suitable for inlining).
But that depends on the type being known. So if you have the polymorphic calc and fromDigits in one module, but use it only in some other module, you are again in the position that only the generic version is available, but since its unfolding is not exposed, it can't be specialised or otherwise optimised at the call site.
One solution is to make the unfolding of the function exposed in the interface file, so it can be properly optimised where it is used, when the necessary data (in particular the type) is available. You can expose the function's unfolding in the interface file by adding an {-# INLINE #-}, or, as of GHC 7, an {-# INLINABLE #-} pragma to the function. That makes the almost unchanged source code available when compiling the calling code, so the function can be properly optimised with more information available.
The downside to this is code-bloat, you get a copy of the optimised code at every call site (for INLINABLE it's not so extreme, you get at least one copy per calling module, that's usually not too bad).
An alternative solution is to generate specialised versions in the defining module by adding {-# SPECIALISE #-} pragmas (the US spelling SPECIALIZE is also accepted) to let GHC create optimised versions for the important types (Int, Integer, Word, ...). That also creates rewrite rules, so that uses at the specialised-for types get rewritten to use the specialised version (when compiling with optimisations).
The downside to this is that some optimisations that would be possible when the code is inlined aren't.
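For illustration, here is how those pragmas might look on the fromDigits above (a sketch; the module name Digits is made up for the example):
module Digits (fromDigits) where

import Data.List (foldl1')

fromDigits :: Integral a => [a] -> a
fromDigits = foldl1' (\a b -> 10 * a + b)
-- Option 1: expose the unfolding, so each call site can specialise/inline it:
{-# INLINABLE fromDigits #-}
-- Option 2: generate monomorphic copies (plus rewrite rules) here instead:
{-# SPECIALISE fromDigits :: [Int] -> Int #-}
{-# SPECIALISE fromDigits :: [Integer] -> Integer #-}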

You can tell GHC to inline a function at the call site using the inline function: http://www.haskell.org/ghc/docs/7.0.4/html/libraries/ghc-prim-0.2.0.0/GHC-Prim.html#v%3Ainline. You may want to use it in conjunction with the INLINABLE pragma: http://www.haskell.org/ghc/docs/7.0.4/html/users_guide/pragmas.html#inlinable-pragma
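A small sketch of that combination (assuming fromDigits lives in the hypothetical Digits module above and carries an INLINABLE pragma):
import GHC.Exts (inline)
import Digits (fromDigits)

-- inline :: a -> a; it forces its argument to be inlined at this call site.
sumOfNumbers :: [[Int]] -> Int
sumOfNumbers = sum . map (inline fromDigits)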

Related

How can I add constraints to a type in a data type in haskell? [duplicate]

In many articles about Haskell, people say it lets you perform some checks at compile time instead of at run time. So, I want to implement the simplest possible check: allow a function to be called only on integers greater than zero. How can I do it?
module Positive (toPositive, getPositive, Positive) where

newtype Positive = Positive { unPositive :: Int }

toPositive :: Int -> Maybe Positive
toPositive n = if n <= 0 then Nothing else Just (Positive n)

-- We can't export unPositive, because unPositive can be used
-- to update the field. Trivially renaming it to getPositive
-- ensures that getPositive can only be used to access the field.
getPositive :: Positive -> Int
getPositive = unPositive
The above module doesn't export the constructor, so the only way to build a value of type Positive is to supply toPositive with a positive integer, which you can then unwrap using getPositive to access the actual value.
You can then write a function that only accepts positive integers using:
positiveInputsOnly :: Positive -> ...
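For instance, a hypothetical caller would look like this (half is an illustrative consumer):
import Positive (Positive, getPositive, toPositive)

half :: Positive -> Int
half p = getPositive p `div` 2

main :: IO ()
main = case toPositive 10 of
  Just p  -> print (half p)   -- the n > 0 check happened exactly once, above
  Nothing -> putStrLn "not positive"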
Haskell can perform some checks at compile time that other languages perform at runtime. Your question seems to imply you are hoping for arbitrary checks to be lifted to compile time, which isn't possible without a large potential for proof obligations (which could mean you, the programmer, would need to prove the property is true for all uses).
In the below, I don't feel like I'm saying anything more than what pigworker touched on while mentioning the very cool sounding Inch tool. Hopefully the additional words on each topic will clarify some of the solution space for you.
What People Mean (when speaking of Haskell's static guarantees)
Typically when I hear people talk about the static guarantees provided by Haskell they are talking about the Hindley Milner style static type checking. This means one type can not be confused for another - any such misuse is caught at compile time (ex: let x = "5" in x + 1 is invalid). Obviously, this only scratches the surface and we can discuss some more aspects of static checking in Haskell.
Smart Constructors: Check once at runtime, ensure safety via types
Gabriel's solution is to have a type, Positive, that can only be positive. Building positive values still requires a check at runtime but once you have a positive there are no checks required by consuming functions - the static (compile time) type checking can be leveraged from here.
This is a good solution for many, many problems. I recommended the same thing when discussing golden numbers. Nevertheless, I don't think this is what you are fishing for.
Exact Representations
dflemstr commented that you can use a type, Word, which is unable to represent negative numbers (a slightly different issue than representing positives). In this manner you really don't need to use a guarded constructor (as above) because there is no inhabitant of the type that violates your invariant.
A more common example of using proper representations is non-empty lists. If you want a type that can never be empty then you could just make a non-empty list type:
data NonEmptyList a = Single a | Cons a (NonEmptyList a)
This is in contrast to the traditional list definition using Nil instead of Single a.
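Because emptiness is simply unrepresentable, functions that are partial on ordinary lists become total here; a small illustrative sketch:
safeHead :: NonEmptyList a -> a
safeHead (Single a) = a
safeHead (Cons a _) = a

toList :: NonEmptyList a -> [a]
toList (Single a)  = [a]
toList (Cons a as) = a : toList as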
Going back to the positive example, you could use a form of Peano numbers:
data NonNegative = One | S NonNegative
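Arithmetic is then defined structurally; for example, addition (a sketch; note that as written the type starts at One, so it really represents the strictly positive numbers):
plus :: NonNegative -> NonNegative -> NonNegative
plus One   n = S n
plus (S m) n = S (plus m n)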
Or use GADTs to build unsigned binary numbers (you can add Num and other instances, allowing functions like +):
{-# LANGUAGE GADTs #-}

data Zero
data NonZero

data Binary a where
  I :: Binary a -> Binary NonZero
  O :: Binary a -> Binary a
  Z :: Binary Zero
  N :: Binary NonZero

instance Show (Binary a) where
  show (I x) = "1" ++ show x
  show (O x) = "0" ++ show x
  show Z     = "0"
  show N     = "1"
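A sketch of how such a type is used (five, toInt and divideBy are illustrative, not part of the answer):
-- "101", i.e. five; the type records that it is non-zero.
five :: Binary NonZero
five = I (O N)

-- Fold over the bits, most significant (outermost constructor) first.
toInt :: Binary a -> Int
toInt = go 0
  where
    go :: Int -> Binary b -> Int
    go acc (I x) = go (2 * acc + 1) x
    go acc (O x) = go (2 * acc) x
    go acc Z     = 2 * acc
    go acc N     = 2 * acc + 1

-- A consumer that statically rejects a zero divisor:
divideBy :: Int -> Binary NonZero -> Int
divideBy n b = n `div` toInt b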
External Proofs
While not part of the Haskell universe, it is possible to generate Haskell using alternate systems (such as Coq) that allow richer properties to be stated and proven. In this manner the Haskell code can simply omit checks like x > 0 but the fact that x will always be greater than 0 will be a static guarantee (again: the safety is not due to Haskell).
From what pigworker said, I would classify Inch in this category. Haskell has not grown sufficiently to perform your desired tasks, but tools to generate Haskell (in this case, very thin layers over Haskell) continue to make progress.
Research on More Descriptive Static Properties
The research community that works with Haskell is wonderful. While too immature for general use, people have developed tools to do things like statically check function partiality and contracts. If you look around you'll find it's a rich field.
I would be failing in my duty as his supervisor if I failed to plug Adam Gundry's Inch preprocessor, which manages integer constraints for Haskell.
Smart constructors and abstraction barriers are all very well, but they push too much testing to run time and don't allow for the possibility that you might actually know what you're doing in a way that checks out statically, with no need for Maybe padding. (A pedant writes. The author of another answer appears to suggest that 0 is positive, which some might consider contentious. Of course, the truth is that we have uses for a variety of lower bounds, 0 and 1 both occurring often. We also have some use for upper bounds.)
In the tradition of Xi's DML, Adam's preprocessor adds an extra layer of precision on top of what Haskell natively offers but the resulting code erases to Haskell as is. It would be great if what he's done could be better integrated with GHC, in coordination with the work on type level natural numbers that Iavor Diatchki has been doing. We're keen to figure out what's possible.
To return to the general point, Haskell is currently not sufficiently dependently typed to allow the construction of subtypes by comprehension (e.g., elements of Integer greater than 0), but you can often refactor the types to a more indexed version which admits static constraint. Currently, the singleton type construction is the cleanest of the available unpleasant ways to achieve this. You'd need a kind of "static" integers, then inhabitants of kind Integer -> * capture properties of particular integers such as "having a dynamic representation" (that's the singleton construction, giving each static thing a unique dynamic counterpart) but also more specific things like "being positive".
Inch represents an imagining of what it would be like if you didn't need to bother with the singleton construction in order to work with some reasonably well behaved subsets of the integers. Dependently typed programming is often possible in Haskell, but is currently more complicated than necessary. The appropriate sentiment toward this situation is embarrassment, and I for one feel it most keenly.
I know that this was answered a long time ago and I already provided an answer of my own, but I wanted to draw attention to a new solution that became available in the interim: Liquid Haskell, which you can read an introduction to here.
In this case, you can specify that a given value must be positive by writing:
{-@ myValue :: {v: Int | v > 0} @-}
myValue = 5
Similarly, you can specify that a function f requires only positive arguments like this:
{-@ f :: {v: Int | v > 0} -> Int @-}
Liquid Haskell will verify at compile-time that the given constraints are satisfied.
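For instance, a definition that violates its refinement is rejected at compile time (an illustrative sketch):
{-@ bad :: {v: Int | v > 0} @-}
bad :: Int
bad = 0   -- rejected: Liquid Haskell cannot prove 0 > 0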
This, or rather the similar desire for a type of natural numbers (including 0), is a common complaint about Haskell's numeric class hierarchy, which makes it impossible to provide a really clean solution to this.
Why? Look at the definition of Num:
class (Eq a, Show a) => Num a where
  (+) :: a -> a -> a
  (*) :: a -> a -> a
  (-) :: a -> a -> a
  negate :: a -> a
  abs :: a -> a
  signum :: a -> a
  fromInteger :: Integer -> a
Unless you revert to using error (which is a bad practice), there is no way you can provide definitions for (-), negate and fromInteger.
Type-level natural numbers are planned for GHC 7.6.1: https://ghc.haskell.org/trac/ghc/ticket/4385
Using this feature it will be trivial to write a "natural number" type, with performance you could never achieve with, say, a manually written Peano number type.
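For reference, a rough sketch of what that can look like with the GHC.TypeLits API as it eventually shipped (AtLeast and atLeast are illustrative names, not a standard library):
{-# LANGUAGE DataKinds, KindSignatures #-}

import GHC.TypeLits (Nat, KnownNat, natVal)
import Data.Proxy (Proxy (..))

-- The lower bound is recorded in the type; the value is a plain Integer.
newtype AtLeast (n :: Nat) = AtLeast Integer deriving Show

-- Smart constructor: one runtime check against the type-level bound.
atLeast :: KnownNat n => Proxy n -> Integer -> Maybe (AtLeast n)
atLeast p i
  | i >= natVal p = Just (AtLeast i)
  | otherwise     = Nothing

positive :: Integer -> Maybe (AtLeast 1)
positive = atLeast (Proxy :: Proxy 1)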

Specializing Imported Function in GHC Haskell

I'm working on a project right now where I'm dealing with the Prim typeclass, and I need to ensure that a particular function I've written is specialized. That is, I need to make sure that when I call it, I get a specialized version of the function in which the Prim dictionaries get inlined into the specialized definition instead of being passed at runtime.
Fortunately, this is a pretty well-understood thing in GHC. You can just write:
{-# SPECIALIZE foo :: ByteArray Int -> Int #-}
foo :: Prim a => ByteArray a -> Int
foo = ...
And in my code, this approach is working fine. But, since typeclasses are open, there can be Prim instances that I don't know about yet when the library is being written. This brings me to the problem at hand.
The GHC user guide's documentation of SPECIALIZE provides two ways to use it. The first is putting SPECIALIZE at the site of the definition, as I did in the example above. The second is putting the SPECIALIZE pragma in another module where the function is imported. For reference, the example the user manual provides is:
module Map( lookup, blah blah ) where
  lookup :: Ord key => [(key,a)] -> key -> Maybe a
  lookup = ...
  {-# INLINABLE lookup #-}

module Client where
  import Map( lookup )

  data T = T1 | T2 deriving( Eq, Ord )

  {-# SPECIALISE lookup :: [(T,a)] -> T -> Maybe a #-}
The problem I'm having is that this is not working in my code. The project is on github, and the relevant lines are:
bench/Main.hs line 24
src/BTree/Compact.hs line 149
To run the benchmark, run these commands:
git submodule init && git submodule update
cabal new-build bench && ./dist-newstyle/build/btree-0.1.0.0/build/bench/bench
When I run the benchmark as is, there is a part of the output that reads:
Off-heap tree, Amount of time taken to build:
0.293197796
If I uncomment line 151 of BTree.Compact, that part of the benchmark runs fifty times faster:
Off-heap tree, Amount of time taken to build:
5.626834e-2
It's worth pointing out that the function in question, modifyWithM, is enormous. Its implementation is over 100 lines, but I do not think this should make a difference. The docs claim:
... mark the definition of f as INLINABLE, so that GHC guarantees to expose an unfolding regardless of how big it is.
So, my understanding is that, if specializing at the definition site works, it should always be possible to instead specialize at the call site. I would appreciate any insights from people who understand this machinery better than I do, and I'm happy to provide more information if something is unclear. Thanks.
EDIT: I've realized that in the git commit I linked to in this post, there is a problem with the benchmark code. It repeatedly inserts the same value. However, even after fixing this, the specialization problem is still happening.

reuse/memoization of global polymorphic (class) values in Haskell

I'm concerned with if and when a polymorphic "global" class value is shared/memoized, particularly across module boundaries. I have read this and this, but they don't quite seem to reflect my situation, and I'm seeing some different behavior from what one might expect from the answers.
Consider a class that exposes a value that can be expensive to compute:
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}
module A where

import Debug.Trace

class Costly a where
  costly :: a

instance Num i => Costly i where
  -- an expensive (but non-recursive) computation
  costly = trace "costly!" $ (repeat 1) !! 10000000

foo :: Int
foo = costly + 1

costlyInt :: Int
costlyInt = costly
And a separate module:
module B where

import A

bar :: Int
bar = costly + 2

main :: IO ()
main = do
  print foo
  print bar
  print costlyInt
  print costlyInt
Running main yields two separate evaluations of costly (as indicated by the trace): one for foo, and one for bar. I know that costlyInt just returns the (evaluated) costly from foo, because if I remove print foo from main, then the first print costlyInt triggers the costly evaluation instead. (I can also cause costlyInt to perform a separate evaluation no matter what, by generalizing the type of foo to Num a => a.)
I think I know why this behavior happens: the instance of Costly is effectively a function that takes a Num dictionary and generates a Costly dictionary. So when compiling bar and resolving the reference to costly, ghc generates a fresh Costly dictionary, which has an expensive thunk in it. Question 1: am I correct about this?
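(For intuition, the dictionary translation described in the previous paragraph can be modelled by hand; this is an illustrative model, not GHC's actual Core:)
import Debug.Trace (trace)

-- A hand-rolled stand-in for the Costly dictionary.
newtype CostlyDict a = CostlyDict { costlyD :: a }

-- `instance Num i => Costly i` compiles to (roughly) a function from a
-- Num dictionary to a Costly dictionary, so each use site that cannot
-- see a shared dictionary rebuilds it, expensive thunk and all.
dCostly :: Num i => CostlyDict i
dCostly = CostlyDict (trace "costly!" $ repeat 1 !! 10000000)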
There are a few ways to cause just one evaluation of costly, including:
Put everything in one module.
Remove the Num i instance constraint and just define a Costly Int instance.
Unfortunately, the analogs of these solutions are not feasible in my program -- I have several modules that use the class value in its polymorphic form, and only in the top-level source file are concrete types finally used.
There are also changes that don't reduce the number of evaluations, such as:
Using INLINE, INLINABLE, or NOINLINE on the costly definition in the instance. (I didn't expect this to work, but hey, worth a shot.)
Using a SPECIALIZE instance Costly Int pragma in the instance definition.
The latter is surprising to me -- I'd expected it to be essentially equivalent to the second item above that did work. That is, I thought it would generate a special Costly Int dictionary, which all of foo, bar, and costlyInt would share. My question 2: what am I missing here?
My final question: is there any relatively simple and foolproof way to get what I want, i.e., all references to costly of a particular concrete type being shared across modules? From what I've seen so far, I suspect the answer is no, but I'm still holding out hope.
Controlling sharing is tricky in GHC. There are many optimizations that GHC does which can affect sharing (such as inlining, floating things out, etc).
In this case, to answer the question why the SPECIALIZE pragma did not achieve the intended effect, let's look at the Core of the B module, in particular of the bar function:
Rec {
bar_xs
bar_xs = : x1_r3lO bar_xs
end Rec }

bar1 = $w!! bar_xs 10000000
-- ^^^ this repeats the computation. bar_xs is just repeat 1

bar =
  case trace $fCostlyi2 bar1 of _ { I# x_aDm -> I# (+# x_aDm 2) }
  -- ^^^ this is just the "costly!" string
That didn't work as we wanted. Instead of reusing costly, GHC decided to just inline the costly function.
So we have to prevent GHC from inlining costly, or the computation will be duplicated. How do we do that? You might think adding a {-# NOINLINE costly #-} pragma would be enough, but unfortunately specialization and NOINLINE don't seem to work well together:
A.hs:13:3: Warning:
Ignoring useless SPECIALISE pragma for NOINLINE function: ‘$ccostly’
But there is a trick to convince GHC to do what we want: we can write costly in the following way:
instance Num i => Costly i where
-- an expensive (but non-recursive) computation
costly = memo where
memo :: i
memo = trace "costly!" $ (repeat 1) !! 10000000
{-# NOINLINE memo #-}
{-# SPECIALIZE instance Costly Int #-}
-- (this might require -XScopedTypeVariables)
This allows us to specialize costly while simultaneously avoiding the inlining of our computation.

GHC code generation for type class function calls

In Haskell to define an instance of a type class you need to supply a dictionary of functions required by the type class. I.e. to define an instance of Bounded, you need to supply a definition for minBound and maxBound.
For the purpose of this question, let's call this dictionary the vtbl for the type class instance. Let me know if this is a poor analogy.
My question centers around what kind of code generation can I expect from GHC when I call a type class function. In such cases I see three possibilities:
the vtbl lookup to find the implementation function is done at run time
the vtbl lookup is done at compile time and a direct call to the implementation function is emitted in the generated code
the vtbl lookup is done at compile time and the implementation function is inlined at the call site
I'd like to understand when each of these occur - or if there are other possibilities.
Also, does it matter if the type class was defined in a separately compiled module as opposed to being part of the "main" compilation unit?
In a runnable program it seems that Haskell knows the types of all the functions and expressions in the program. Therefore, when I call a type class function the compiler should know what the vtbl is and exactly which implementation function to call. I would expect the compiler to at least generate a direct call to implementation function. Is this true?
(I say "runnable program" here to distinguish it from compiling a module which you don't run.)
As with all good questions, the answer is "it depends". The rule of thumb is that there's a runtime cost to any typeclass-polymorphic code. However, library authors have a lot of flexibility in eliminating this cost with GHC's rewrite rules, and in particular there is a {-# SPECIALIZE #-} pragma that can automatically create monomorphic versions of polymorphic functions and use them whenever the polymorphic function can be inferred to be used at the monomorphic type. (The price for doing this is library and executable size, I think.)
You can answer your question for any particular code segment using ghc's -ddump-simpl flag. For example, here's a short Haskell file:
vDouble :: Double
vDouble = 3
vInt = length [2..5]
main = print (vDouble + realToFrac vInt)
Without optimizations, you can see that GHC does the dictionary lookup at runtime:
Main.main :: GHC.Types.IO ()
[GblId]
Main.main =
  System.IO.print
    @ GHC.Types.Double
    GHC.Float.$fShowDouble
    (GHC.Num.+
       @ GHC.Types.Double
       GHC.Float.$fNumDouble
       (GHC.Types.D# 3.0)
       (GHC.Real.realToFrac
          @ GHC.Types.Int
          @ GHC.Types.Double
          GHC.Real.$fRealInt
          GHC.Float.$fFractionalDouble
          (GHC.List.length
             @ GHC.Integer.Type.Integer
             (GHC.Enum.enumFromTo
                @ GHC.Integer.Type.Integer
                GHC.Enum.$fEnumInteger
                (__integer 2)
                (__integer 5)))))
...the relevant bit being realToFrac @ Int @ Double. At -O2, on the other hand, you can see it did the dictionary lookup statically and inlined the implementation, the result being a single call to int2Double#:
Main.main2 =
  case GHC.List.$wlen @ GHC.Integer.Type.Integer Main.main3 0
  of ww_a1Oq { __DEFAULT ->
  GHC.Float.$w$sshowSignedFloat
    GHC.Float.$fShowDouble_$sshowFloat
    GHC.Show.shows26
    (GHC.Prim.+## 3.0 (GHC.Prim.int2Double# ww_a1Oq))
    (GHC.Types.[] @ GHC.Types.Char)
  }
It's also possible for a library author to choose to rewrite the polymorphic function to a call to a monomorphic one but not inline the implementation of the monomorphic one; this means that all of the possibilities you proposed (and more) are possible.
If the compiler can "tell", at compile-time, what actual type you're using, then the method lookup happens at compile-time. Otherwise it happens at run-time. If lookup happens at compile-time, the method code may be inlined, depending on how large the method is. (This goes for regular functions too: If the compiler knows which function you're calling, it will inline it if that function is "small enough".)
Consider, for example, (sum [1 .. 10]) :: Integer. Here the compiler statically knows that the list is a list of Integer values, so it can inline the + function for Integer. On the other hand, if you do something like
foo :: Num x => [x] -> x
foo xs = sum xs - head xs
then, when you call sum, the compiler doesn't know what type you're using. (It depends on what type is given to foo), so it can't do any compile-time lookup.
On the other hand, using the {-# SPECIALIZE #-} pragma, you can do something like
{-# SPECIALIZE foo :: [Int] -> Int #-}
What this does is tell the compiler to compile a special version of foo where the input is a list of Int values. This obviously means that for that version, the compiler can do all the method lookups at compile-time (and almost certainly inline them all). Now there are two versions of foo - one which works for any type and does run-time type lookups, and one that works only for Int, but is [probably] much faster.
When you call the foo function, the compiler has to decide which version to call. If the compiler can "tell", at compile-time, that you want the Int version, it will do that. If it can't "tell" what type you're going to use, it'll use the slower any-type version.
Note that you can have multiple specialisations of a single function. For example, you could do
{-# SPECIALIZE foo :: [Int] -> Int #-}
{-# SPECIALIZE foo :: [Double] -> Double #-}
{-# SPECIALIZE foo :: [Complex Double] -> Complex Double #-}
Now, whenever the compiler can tell that you're using one of these types, it'll use the version hard-coded for that type. But if the compiler can't tell what type you're using, it'll never use the specialised versions, and always the polymorphic one. (That might mean that you need to specialise the function(s) that call foo, for example.)
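A sketch of that last point, reusing foo from above (bar is an illustrative caller):
-- Without its own pragma, bar calls the polymorphic foo even at Int.
bar :: Num x => [x] -> x
bar xs = foo xs + 1
{-# SPECIALIZE bar :: [Int] -> Int #-}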
If you crawl around the compiler's Core output, you can probably figure out exactly what it did in any particular circumstance. You will probably go stark raving mad though...
As other answers describe, any of these can happen in different situations. For any specific function call, the only way to be sure is to look at the generated core. That said, there are some cases where you can get a good idea of what will happen.
Using a type class method at a monomorphic type.
When a type class method is called in a situation where the type is entirely known at compile time, GHC will perform the lookup at compile time. For example
isFive :: Int -> Bool
isFive i = i == 5
Here the compiler knows that it needs Int's Eq dictionary, so it emits code to call the function statically. Whether or not that call is inlined depends upon GHC's usual inlining rules, and whether or not an INLINE pragma applies to the class method definition.
Exposing a polymorphic function
If a polymorphic function is exposed from a compiled module, then the base case is that the lookup needs to be performed at runtime.
module Foo (isFiveP) where
isFiveP :: (Eq a, Num a) => a -> Bool
isFiveP i = i == 5
What GHC actually does is transform this into a function of the form (more or less)
isFiveP_ eqDict numDict i = (eq_op eqDict) i (fromIntegral_fn numDict 5)
so the function lookups would need to be performed at runtime.
That's the base case, anyway. What actually happens is that GHC can be quite aggressive about cross-module inlining. isFiveP is small enough that it would be inlined into the call site. If the type can be determined at the call site, then the dictionary lookups will all be performed at compile time. Even if a polymorphic function isn't directly inlined at the call site, the dictionary lookups may still be performed at compile time due to GHC's usual function transformations, if the code ever gets to a form where the function (with class dictionary parameters) can be applied to a statically-known dictionary.

In Haskell, is there some way to forcefully coerce a polymorphic call?

I have a list of values (or functions) of any type. I have another list of functions of any type. The user at runtime will choose one from the first list, and another from the second list. I have a mechanism to ensure that the two items are type compatible (value or output from first is compatible with input of second).
I need some way to call the function with the value (or compose the functions). If the second function has concrete types, unsafeCoerce works fine. But if it's of the form:
polyFunc :: MyTypeclass a => a -> IO ()
polyFunc x = print . show . typeclassFunc x
Then unsafeCoerce doesn't work since it can't resolve to a concrete type.
Is there any way to do what I'm trying to do?
Here's an example of what the lists might look like. However, I'm not limited to this; if there is some other way to represent these that will solve the problem, I would like to know. A critical thing to consider is that the list can change at runtime, so I do not know at compile time all the possible types that might be involved.
data Wrapper = forall a. Wrapper a

firstList :: [Wrapper]
firstList = [Wrapper "blue", Wrapper 5, Wrapper valueOfMyTypeclass]

data OtherWrapper = forall a. OtherWrapper (a -> IO ())

secondList :: [OtherWrapper]
secondList = [OtherWrapper print, OtherWrapper polyFunc]
Note: As for why I want to do such a crazy thing:
I'm generating code and typechecking it with hint. But that happens at runtime. The problem is that hint is slow at actually executing things and high performance for this is critical. Also, at least in certain cases, I do not want to generate code and run it through ghc at runtime (though we have done some of that, too). So... I'm trying to find somewhere in the middle: dynamically hook things together without having to generate code and compile, but run it at compiled speed instead of interpreted.
Edit: Okay, so now that I see what's going on a bit more, here's a very general approach -- don't use polymorphic functions directly at all! Instead, use functions of type Dynamic -> IO ()! Then, they can use "typecase"-style dispatch directly to choose which monomorphic function to invoke -- i.e. just switching on the TypeRep. You do have to encode this dispatch directly for each polymorphic function you're wrapping. However, you can automate this with some template Haskell if it becomes enough of a hassle.
Essentially, rather than overloading Haskell's polymorphism, just as Dynamic embeds a dynamically typed language in a statically typed language, you now extend that to embed dynamic polymorphism in a statically typed language.
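A minimal sketch of that typecase dispatch, using Data.Dynamic (the two monomorphic branches are illustrative; a real wrapper would enumerate whichever types it supports):
{-# LANGUAGE ScopedTypeVariables #-}

import Data.Dynamic (Dynamic, fromDynamic, toDyn)

printDyn :: Dynamic -> IO ()
printDyn d
  | Just (i :: Int)    <- fromDynamic d = print i
  | Just (s :: String) <- fromDynamic d = print s
  | otherwise                           = putStrLn "printDyn: unsupported type"

main :: IO ()
main = mapM_ printDyn [toDyn (5 :: Int), toDyn "blue"]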
--
Old answer: More code would be helpful. But, as far as I can tell, this is the read/show problem. I.e. You have a function that produces a polymorphic result, and a function that takes a polymorphic input. The issue is that you need to pick what the intermediate value is, such that it satisfies both constraints. If you have a mechanism to do so, then the usual tricks will work, making sure you satisfy that open question which the compiler can't know the answer to.
I'm not sure that I completely understand your question. But since you have a value and a function with compatible types, you could combine them into a single value. Then the compiler can prove that the types match.
{-# LANGUAGE ExistentialQuantification #-}

data Vault = forall a . Vault (a -> IO ()) a

runVault :: Vault -> IO ()
runVault (Vault f x) = f x
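A possible use, pairing each function with a matching argument while the types are still in scope (the values are illustrative):
vaults :: [Vault]
vaults = [Vault print (5 :: Int), Vault putStrLn "blue"]

main :: IO ()
main = mapM_ runVault vaults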
