Memoizing arguments independently - Haskell

I have a simulation with lots of calls to functions of the type F = A -> B -> C -> D, where A..D are concrete types.
The objects of type A have a medium lifetime. (It is the genome of codegolf's ratrace.)
The most expensive computation arises from parameter A. I can easily memoize like this:
f1 :: F
f1 a = let expensive = trace "expensive computation!" $ expensiveComputation a
       in \b c -> expensive
and hold some pre-processed expensive values via partial application:
preProz :: [B -> C -> D]
preProz = [f1 [], f1 [False], f2 []]
To my delight, the traces indicate that preProz <*> [[],[[]]] <*> [1,2] does not recompute the expensive values.
Now I found out that some of my Fs would benefit from pre-processing B, too. This pre-processing is independent of A and, in fact, memoizing like this has no benefit
f2 a = let expensive = trace "expensive computation!" $ expensiveComputation a
       in \b -> let dear = trace "expensive computation!" $ expensiveComputation b
                in expensive + dear
because dear is recomputed even if the bs are equal.
What I need is something like:
(B -> e) -> A -> e -> C -> D
where e should be memoized. The type of e is sort-of-existential here.
But this forces me to recompute all the A values for every B, which is just as bad, and I cannot save the es, which are private to the function.
How can I memoize along 2 parameters independently?

You need a function that memoizes both a and b together:
f12 a b = let ...
          in \c -> ...
When you want to memoize a but not b, you use f1 a and when you want to memoize both you use f12 a b.
It would of course be nice to share some implementation between f1 and f12. However, you can do that only by having private functions that take the precomputed results in place of the original values:
f1 a = privateA (precomputeA a)
f12 a b = privateAB (precomputeA a) (precomputeB b)
privateA a' b = privateAB a' (precomputeB b)
privateAB a' b' c = ...
If the precomputation of b depends on the precomputation of a, then:
f1 a = privateA (precomputeA a)
f12 a b = let a' = precomputeA a in privateAB a' (precomputeB a' b)
privateA a' b = privateAB a' (precomputeB a' b)
privateAB a' b' c = ...
I've purposely not used function composition and eta-reduction, to make things clearer. I've also left out any strictness annotations that you might want to use to control times of evaluation.
Perhaps memoizing isn't quite the right term here. You mean something like "partial application with some precomputation as well."
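Putting the pieces together, here is a minimal runnable sketch of this pattern (the concrete types and cost functions are invented for illustration; trace fires only when a precomputation actually runs):
import Debug.Trace (trace)

-- Concrete stand-ins for A..D
type A = [Bool]
type B = Int
type C = Int
type D = Int

precomputeA :: A -> Int
precomputeA a = trace "expensive A!" (length (filter id a))

precomputeB :: B -> Int
precomputeB b = trace "expensive B!" (b * b)

-- The shared worker takes the precomputed results instead of the raw values
privateAB :: Int -> Int -> C -> D
privateAB a' b' c = a' + b' + c

-- Memoizes only the A part; B is reprocessed on every call
f1 :: A -> B -> C -> D
f1 a = let a' = precomputeA a
       in \b c -> privateAB a' (precomputeB b) c

-- Memoizes both A and B via partial application
f12 :: A -> B -> C -> D
f12 a b = let a' = precomputeA a
              b' = precomputeB b
          in \c -> privateAB a' b' c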

Related

Unpacking tuples using "Either" in Haskell

I'm trying to get a grip on how tuples work in Haskell.
I came across this type constructor leftRight :: (Either a b -> c) -> (a -> c, b -> c) and I'm struggling to see what it does.
So we have (Either a b -> c) which means that either a is applied to c or b is applied to c. But the output is a tuple?
Does that mean that it splits the function so that the left side of the tuple takes the argument given at a and the right side takes the argument given at b?
How would this look in code?
"I came across this type constructor..."
It's not a type constructor - it's a function type declaration.
The -> separates out the parameter types. The final one is the return type, and the previous ones are the input types.
Hence leftRight :: (Either a b -> c) -> (a -> c, b -> c) takes one input and returns one output.
Input function: (Either a b -> c)
Output function pair: (a -> c, b -> c)
The parentheses contain the functions.
The first function takes an Either type as its input (the Left value is conventionally the error type, a, and the Right value is the OK type, b; it helps me to think of the Latin, sinister for left and dexter for right, but your mileage may vary) and returns something of type c.
The output comes as a tuple of two separate functions: one is a -> c, and one is b -> c.
A concrete version: type a is a String to contain my error message, type b is an Int, and type c is another String.
leftRight :: (Either String Int -> String) -> (String -> String, Int -> String)
So we have Either a b -> c which means that either a is applied to c or b is applied to c
Wrong, or at least badly worded. Nothing is “applied to c” here.
What it actually means is: the function can accept an a-value or a b-value, and in either case produces a c-value.
This is equivalent to having both a function that accepts only a-values and gives c-results, and one that accepts only b-values and gives c-results. The tuple groups both of these functions together.
It might help to look at an example to illustrate:
fryPancake :: Either Butter Margarine -> Pancake
[Assume we've already taken care of the flour, eggs etc. in the batter]
From this you can obtain both
fryPancakeWithButter :: Butter -> Pancake
fryPancakeWithButter b = fryPancake (Left b)
fryPancakeWithMargarine :: Margarine -> Pancake
fryPancakeWithMargarine m = fryPancake (Right m)
Now you just group both of these together:
fryPancake' :: (Butter -> Pancake, Margarine -> Pancake)
fryPancake' = (fryPancakeWithButter, fryPancakeWithMargarine)
...which is the same as
fryPancake' = leftRight fryPancake
The input to leftRight is a function and the output is two functions. The tuple is barely relevant, except that it groups the two functions together into a single output, so leftRight can return both of them at once.
One function's type is a -> c and the other one's type is b -> c. Presumably, the first function wraps the a in Left and then calls the original function, and the second one wraps the b in Right and then calls the original function:
leftRight :: (Either a b -> c) -> (a -> c, b -> c)
leftRight original = (leftFunc, rightFunc)
  where
    leftFunc aVal  = original (Left aVal)
    rightFunc bVal = original (Right bVal)
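For example, reusing the concrete String/Int signature from earlier (a hypothetical GHCi session, assuming the definition above):
ghci> let fs = leftRight (either id show) :: (String -> String, Int -> String)
ghci> fst fs "oops"
"oops"
ghci> snd fs 42
"42"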
The key is to use function composition. A function of type a -> c can be created from functions of type a -> Either a b and Either a b -> c. You have Left :: a -> Either a b and you have the argument f :: Either a b -> c passed to leftRight.
The same argument lets you construct a function of type b -> c similarly. This gives you
leftRight :: (Either a b -> c) -> (a -> c, b -> c)
leftRight f = let g1 = ...
                  g2 = ...
              in (g1, g2)
I leave the definitions of g1 and g2 as an exercise.
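For reference, one way those blanks can be filled in, following the composition recipe just described (a sketch; other spellings are possible):
leftRight :: (Either a b -> c) -> (a -> c, b -> c)
leftRight f = let g1 = f . Left
                  g2 = f . Right
              in (g1, g2)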

Understanding this type in Haskell?

I'm having some trouble understanding, how this type declaration works.
The type is: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
So, to me I interpret this as a function that takes a function and that function takes another function which outputs a value d.
So, this is how I make my function:
Example :: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
Example f g h x = f (g (h x))
I'd really appreciate it, if you guys could help me clarify. Thank you!
I think that you already know the theory behind the type you're writing, so I'll try to inject some intuitive way to read it (at least I hope so, your question is not totally clear to me).
When you read something like (a -> b) inside a type, that's a function, as you said. For example (Int -> Bool) is a function.
Let's make an example:
even :: Int -> Bool -- A more generic version of that is in the Prelude
even n = n `rem` 2 == 0
filter :: (Int -> Bool) -> [Int] -> [Int] -- And of that, too
filter _ [] = []
filter f (x:xs)
  | f x       = x : filter f xs
  | otherwise = filter f xs
filteredEven :: [Int]
filteredEven = filter even [1..5] -- it gives [2, 4]
In this example we have a "higher-order function", a function that gets another function and uses it in some way.
In a function like the one you're defining you simply use 3 functions (and another parameter). But the type tells you more.
Each function you declare in the type accepts a value returned from the previous one. So a possible solution is the one you have already shown. But the types are generic. There is no total function that returns a value of an arbitrary type out of nowhere (total meaning that it always terminates with a value other than bottom whenever all its arguments are total and not bottom, so it doesn't crash or return undefined, for example). So, if you want a total function, you have to have a way to produce the requested values from the context of the function (its parameters).
In the example before, using the names used by you, you have to return a value of type d. You only have one way to produce a value of that type: the h function. But to use the h function you need a value of type c. You only have the g function for that. But to use g you need a value of type b. Fortunately you have the function f, which, in exchange for a value of type a, returns the value needed. We have a value of type a (and no other way to obtain values of these types), so the function can be written. We can't alter the obtained values in any way (calling the functions multiple times doesn't help, because of purity and the fact that we only have one way to produce each value), so this is the only way to construct the function, if we want it to be total:
Example :: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
Example f g h x = h (g (f x))
We can write the function in many other ways, but the result will always be the same (as long as Example, f, g and h are total and x is not bottom). So the type expresses the function really well, because we can understand how the function works just by looking at the type!
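To make that concrete, here is one possible instantiation; show, length and even are chosen purely for illustration (note also that a real Haskell function name must start with a lower-case letter, hence example rather than Example):
example :: (a -> b) -> (b -> c) -> (c -> d) -> a -> d
example f g h x = h (g (f x))

-- f = show, g = length, h = even; illustrative choices only
check :: Bool
check = example show length even 12345   -- length (show 12345) == 5, even 5 == False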

How can a function be "transparently augmented" in Haskell?

Situation
I have a function f, which I want to augment with a function g, resulting in a function named h.
Definitions
By "augment", in the general case, I mean: transform either input (one or more arguments) or output (return value) of function f.
By "augment", in the specific case, (specific to my current situation) I mean: transform only the output (return value) of function f while leaving all the arguments intact.
By "transparent", in the context of "augmentation", (both the general case and the specific case) I mean: To couple g's implementation as loosely to f's implementation as possible.
Specific case
In my current situation, this is what I need to do:
h a b c = g $ f a b c
I am interested in rewriting it to something like this:
h = g . f -- Doesn't type-check.
Because from the perspective of h and g, it doesn't matter what arguments f takes; they only care about the return value, hence it would be tight coupling to mention the arguments in any way. For instance, if f's argument count changes in the future, h will also need to be changed.
So far
I asked lambdabot on the #haskell IRC channel: #pl h a b c = g $ f a b c to which I got the response:
h = ((g .) .) . f
Which is still not good enough since the number of (.)'s is dependent on the number of f's arguments.
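To be fair, for a fixed arity the expansion does behave like the explicit version; a small check with made-up f and g:
f :: Int -> Int -> Int -> [Int]
f a b c = [a, b, c]

g :: [Int] -> Int
g = sum

h :: Int -> Int -> Int -> Int
h = ((g .) .) . f   -- h 1 2 3 == g (f 1 2 3) == 6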
General case
I haven't done much research in this direction, but erisco on #haskell pointed me towards http://matt.immute.net/content/pointless-fun which hints to me that a solution for the general case could be possible.
So far
Using the functions defined by Luke Palmer in the above article this seems to be an equivalent of what we have discussed so far:
h = f $. id ~> id ~> id ~> g
However, it seems that this method sadly also suffers from being dependent on the number of arguments of f if we want to transform the return value of f -- just as the previous methods.
Working example
In JavaScript, for instance, it is possible to achieve transparent augmentation like this:
function h () { return g(f.apply(this, arguments)) }
Question
How can a function be "transparently augmented" in Haskell?
I am mainly interested in the specific case, but it would be also nice to know how to handle the general case.
You can sort-of do it, but since there is no way to specify a behavior for everything that isn't a function, you'll need a lot of trivial instances for all the other types you care about.
{-# LANGUAGE TypeFamilies, DefaultSignatures #-}
class Augment a where
  type Result a
  type Result a = a

  type Augmented a r
  type Augmented a r = r

  augment :: (Result a -> r) -> a -> Augmented a r
  default augment :: (a -> r) -> a -> r
  augment g x = g x

instance Augment b => Augment (a -> b) where
  type Result (a -> b) = Result b
  type Augmented (a -> b) r = a -> Augmented b r
  augment g f x = augment g (f x)

instance Augment Bool
instance Augment Char
instance Augment Integer
instance Augment [a]
-- and so on for every result type of every function you want to augment...
Example:
> let g n x ys = replicate n x ++ ys
> g 2 'a' "bc"
"aabc"
> let g' = augment length g
> g' 2 'a' "bc"
4
> :t g
g :: Int -> a -> [a] -> [a]
> :t g'
g' :: Int -> a -> [a] -> Int
Well, technically, with just enough IncoherentInstances you can do pretty much anything:
{-# LANGUAGE MultiParamTypeClasses, TypeFamilies,
             FlexibleInstances, UndecidableInstances, IncoherentInstances #-}

class Augment a b f h where
  augment :: (a -> b) -> f -> h

instance (a ~ c, h ~ b) => Augment a b c h where
  augment = ($)

instance (Augment a b d h', h ~ (c -> h')) => Augment a b (c -> d) h where
  augment g f = augment g . f
-- Usage
t1 = augment not not
r1 = t1 True
t2 = augment (+1) (+)
r2 = t2 2 3
t3 = augment (+1) foldr
r3 = t3 (+) 0 [2,3]
The problem is that the real return value of something like a -> b -> c isn't c, but b -> c. What you want requires some kind of test that tells you whether a type is a function type. You could enumerate the types you are interested in, but that's not so nice. I think HList solves this problem somehow; look at the paper. I managed to understand a bit of the solution with overlapping instances, but the rest goes a bit over my head, I'm afraid.
JavaScript works because its arguments are a sequence, or a list, so there is really just one argument. In that sense it is the same as an uncurried version of the function, with a tuple representing the collection of arguments.
In a strongly typed language you need a lot more information to do that "transparently" for a function type - for example, dependent types can express this idea, but require the functions to be of specific types, not an arbitrary function type.
I think I saw a workaround in Haskell that can do this, too, but, again, that works only for specific types, which capture the arity of the function, not any function.

How do I extract information from inner parameters in Haskell?

In most programming languages that support mutable variables, one can easily implement something like this Java example:
interface Accepter<T> {
    void accept(T t);
}

<T> T getFromDoubleAccepter(Accepter<Accepter<T>> acc) {
    final List<T> l = new ArrayList<T>();
    acc.accept(new Accepter<T>() {
        @Override
        public void accept(T t) {
            l.add(t);
        }
    });
    return l.get(0); //Not being called? Exception!
}
Just for those who do not understand Java: the above code receives something that can be given a function taking one parameter, and it is supposed to grab that parameter as the final result.
This is not like callCC: there is no control flow alternation. Only the inner function's parameter is concerned.
I think the equivalent type signature in Haskell should be
getFromDoubleAccepter :: (forall b. (a -> b) -> b) -> a
So, if someone can give you a function (a -> b) -> b for a type b of your choice, they MUST already have an a in hand. So your job is to give them a "callback", then keep whatever they send you in mind, and once they return to you, return that value to your caller.
But I have no idea how to implement this. There are several possible solutions I can think of. Although I don't know how each of them would work, I can rate and order them by expected difficulty:
1. Cont or ContT monad. This I consider to be easiest.
2. RWS monad or similar.
3. Any other monads. Pure monads like Maybe I consider harder.
4. Use only standard pure functional features like lazy evaluation, pattern matching, the fixed point combinator, etc. This I consider the hardest (or even impossible).
I would like to see answers using any of the above techniques (and prefer harder ways).
Note: There should not be any modification of the type signature, and the solution should do the same thing that the Java code does.
UPDATE
Once I saw somebody comment getFromDoubleAccepter f = f id, I realized that I had got something wrong. Basically I used forall just to make the game easier, but it looks like this twist makes it too easy. Actually, the above type signature forces the caller to pass back whatever we gave them, so if we choose a as b then that implementation gives the same expected result, but it is just... not what I intended.
Actually what came up to my mind is a type signature like:
getFromDoubleAccepter :: ((a -> ()) -> ()) -> a
And this time it is harder.
Another comment writer asks for reasoning. Let's look at a similar function
getFunctionFromAccepter :: (((a -> b) -> b) -> b) -> a -> b
This one has a naive solution:
getFunctionFromAccepter f = \a -> f $ \x -> x a
But in the following test code it fails on the third:
exeMain = do
  print $ getFunctionFromAccepter (\f -> f (\x -> 10)) "Example 1" -- 10
  print $ getFunctionFromAccepter (\f -> 20) "Example 2" -- 20
  print $ getFunctionFromAccepter (\f -> 10 + f (\x -> 30)) "Example 3" -- 40, should be 30
In the failing case, we pass a function that returns 30, and we expect to get that function back. However, the final result turns out to be 40, so it fails. Is there any way to implement just the thing I wanted?
If this can be done in Haskell there are a lot of interesting consequences. For example, tuples (or other "algebraic" types) can be defined as functions as well, since we can say something like type (a,b) = (a->b->())->() and implement fst and snd in terms of this. And this is the way I used in a couple of other languages that do not have native "tuple" support but do have "closures".
The type of accept is void accept(T) so the equivalent Haskell type is t -> IO () (since every function in Java is essentially IO). Thus getFromDoubleAccepter can be directly translated as
import Data.IORef

type Accepter t = t -> IO ()

getFromDoubleAccepter :: Accepter (Accepter a) -> IO a
getFromDoubleAccepter acc = do
  l <- newIORef $ error "Not called"
  acc $ writeIORef l
  readIORef l
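A quick check in a hypothetical GHCi session (42 is an arbitrary test value):
ghci> getFromDoubleAccepter (\accept -> accept 42)
42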
If you want an idiomatic, non-IO solution in Haskell, you need to be more specific about what your actual end goal is besides trying to imitate some Java-pattern.
EDIT: regarding the update
getFromDoubleAccepter :: ((a -> ()) -> ()) -> a
I'm sorry, but this signature is in no way equal to the Java version. What you are saying is that for any a, given a function that takes a function that takes an a but doesn't return anything or do any kind of side effects, you want to somehow conjure up a value of type a. The only implementation that satisfies the given signature is essentially:
getFromDoubleAccepter :: ((a -> ()) -> ()) -> a
getFromDoubleAccepter f = getFromDoubleAccepter f
First, I'll transliterate as much as I can. I'm going to lift these computations to a monad because accept returns void (read () in Haskell-land), which is useless unless there is some effect.
type Accepter m t = t -> m ()

getFromDoubleAccepter :: (MonadSomething m) => Accepter m (Accepter m t) -> m t
getFromDoubleAccepter acc = do
  l <- {- new mutable list -}
  acc $ \t -> add l t
  return (head l)
Of course, we can't make a mutable list like that, so we'll have to use some intuitive sparks here. When an action just adds an element to some accumulator, I think of the Writer monad. So maybe that line should be:
acc $ \t -> tell [t]
Since you are simply returning the head of the list at the end, which doesn't have any effects, I think the signature should become:
getFromDoubleAccepter :: Accepter M (Accepter M t) -> t
where M is an appropriate monad. It needs to be able to write [t]s, so that gives us:
type M t = Writer [t]
getFromDoubleAccepter :: Accepter (M t) (Accepter (M t) t) -> t
And now the type of this function informs us how to write the rest of it:
getFromDoubleAccepter acc =
  head . execWriter . acc $ \t -> tell [t]
We can check that it does something...
ghci> getFromDoubleAccepter $ \acc -> acc 42
42
So that seems right, I guess. I'm still a bit unclear on what this code is supposed to mean.
The explicit M t in the type signature is a bit aesthetically bothersome to me. If I knew what problem I was solving I would look at that carefully. If you mean that the argument can be a sequence of commands, but otherwise has no computational features available, then you could specialize the type signature to:
getFromDoubleAccepter :: (forall m. (Monad m) => Accepter m (Accepter m t)) -> t
which still works with our example. Of course, this is all a bit silly. Consider
   forall m. (Monad m) => Accepter m (Accepter m t)
 = forall m. (Monad m) => (t -> m ()) -> m ()
The only thing a function with this type can do is call its argument with various ts in order and then return (). The information in such a function is completely characterized[1] by those ts, so we could just as easily have used
getFromDoubleAccepter :: [t] -> t
getFromDoubleAccepter = head
[1] As long as I'm going on about nothing, I might as well say that that is not quite accurate in the face of infinity. The computation
crazy :: Integer -> Accepter m (Accepter m Integer)
crazy n acc = crazy (n+1) >> acc n
can be used to form the infinite sequence
... >> acc 3 >> acc 2 >> acc 1 >> acc 0
which has no first element. If we tried to interpret this as a list, we would get an infinite loop when trying to find the first element. However this computation has more information than an infinite loop -- if instead of a list, we used the Last monoid to interpret it, we would be able to extract 0 off the end. So really
forall m. (Monad m) => Accepter m (Accepter m t)
is isomorphic to something slightly more general than a list; specifically a free monoid.
Thanks to the above answers, I finally concluded that in Haskell we can do some things differently from other languages.
Actually, the motivation of this post is to translate the famous "single axiom classical logic reduction system". I have implemented this in some other languages. It should be no problem to implement the
Axiom: (a|(b|c)) | ((d|(d|d)) | ((e|b) | ((a|e) | (a|e))))
However, since the reduction rule looks like
Rule: a|(b|c), a |-- c
It is necessary to extract the inner parameter as the final result. In other languages, this is done by using side effects like mutable slots. However, in Haskell we do not have mutable slots, and involving IO would be ugly, so I kept looking for solutions.
At first glance (as shown in my question), getFromDoubleAccepter f = f id seems like nonsense, but I realised that it actually works in this case! For example:
rule :: (forall r.a -> (b -> c -> r) -> r) -> a -> c
rule abc a = abc a $ flip const
The trick is still the same: since the quantification over r hides it from the caller, and it is up to the callee to pick a type for it, we can instantiate r to c, so we simply apply the given function to get the result. On the other hand, the given function has to use our input to produce the final answer, so it effectively limits the implementation to exactly what we want!
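As a small sanity check of rule, here is a made-up instantiation (pairUp is invented purely for illustration):
pairUp :: Int -> (String -> Bool -> r) -> r
pairUp n k = k (show n) (even n)

-- rule picks r = Bool and hands pairUp a continuation that keeps only the second component
check :: Bool
check = rule pairUp 4   -- even 4 == True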
Putting them together, let's see what we can do with it:
newtype I r a b = I { runI :: a -> b -> r }
rule :: (forall r. I r a (I r b c)) -> a -> c
rule (I abc) a = abc a (I (\b c -> c))
axiom :: I r0 (I r1 a (I r2 b c))
              (I r0 (I r3 d (I r3 d d))
                    (I r4 (I r2 e b) (I r4 (I r1 a e) (I r1 a e))))
axiom = let
    a1 (I eb) e = I $ \b c -> eb e b
    a2 = I $ \d (I dd) -> dd d d
    a3 (I abc) eb = I $ \a e -> abc a (a1 eb e)
    a4 abc = I $ \eb aeae -> runI a2 (a3 abc eb) aeae
  in I $ \abc (I dddebaeae) -> dddebaeae a2 (a4 abc)
Here I use a naming convention to track the type signatures: a variable name is combined from the "effective" type variables (that is, the ones that are not result types; all the r* type variables are result types).
I won't repeat the proof presented in the cited essay, but I want to show something. In the above definition of axiom we use some let-bound variables to construct the result. Not surprisingly, those variables themselves can be extracted by using rule and axiom. Let's see how:
--Equal to a4
t4 :: I r0 a (I r1 b c) -> I r2 (I r1 d b) (I r2 (I r0 a d) (I r0 a d))
t4 abc = rule axiom abc
--Equal to a3
t3 :: I r0 a (I r1 b c) -> I r1 d b -> I r0 a d
t3 abc eb = rule (t4 abc) eb
--Equal to a2
t2 :: I r a (I r a a)
t2 = rule (t3 axiom (t3 (t4 axiom) axiom)) axiom
--Equal to a1
t1 :: I r a b -> a -> I r b c
t1 ab a = rule (t3 t2 (t3 (t3 t2 t2) ab)) a
One thing left to be proved is that we can prove all tautologies using only t1 to t4. I feel that this is the case but have not yet proved it.
Compared to other languages, the Haskell solution seems more effective and expressive.

Mapping over Either's Left

Somewhere in my app I receive an Either ParserError MyParseResult from Parsec. Downstream, this result gets some further parsing done using other libs. During that second phase of parsing some kind of error may also occur, which I would like to pass along as a Left String, but for that I need to convert the result from Parsec to a String too. To achieve that I need a function which will allow me to map over a Left with a show function.
The mapping function I'm thinking of looks something like this:
mapLeft :: (a -> b) -> Either a c -> Either b c
mapLeft f (Left x) = Left $ f x
mapLeft _ (Right x) = Right x
But I was quite surprised not to find anything matching on hackage db. So now I'm having doubts whether I'm using a correct approach to my problem.
Why isn't there such a function in standard lib? What is wrong with my approach?
We have such a function in the standard libraries,
Control.Arrow.left :: a b c -> a (Either b d) (Either c d)
is the generalisation to arbitrary Arrows. Substitute (->) for a and apply it infix, to get the specialisation
left :: (b -> c) -> Either b d -> Either c d
There is nothing wrong with your approach in principle, it's a sensible way to handle the situation.
Another option is to use the Bifunctor instance of Either. Then you have
first :: (a -> b) -> Either a c -> Either b c
(Also Bifunctor can be used to map over the first part of (a,b).)
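Applied to the question's use case, that might look like the following sketch (ParserError here is a stand-in for Parsec's actual error type):
import Data.Bifunctor (first)

-- Stand-in for the real error type, just for this sketch
newtype ParserError = ParserError String deriving Show

-- Normalise the error side to String so it lines up with later Left String errors
toStringError :: Either ParserError a -> Either String a
toStringError = first show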
This can be done easily with lens:
import Control.Lens
over _Left (+1) $ Left 10    -- Left 11
over _Left (+1) $ Right 10   -- Right 10
over _Right (+1) $ Right 10  -- Right 11
Another simple option is mapLeft in Data.Either.Combinators:
mapLeft :: (a -> c) -> Either a b -> Either c b

Resources