Situation
I have a function f, which I want to augment with a function g, resulting in a function named h.
Definitions
By "augment", in the general case, I mean: transform either input (one or more arguments) or output (return value) of function f.
By "augment", in the specific case, (specific to my current situation) I mean: transform only the output (return value) of function f while leaving all the arguments intact.
By "transparent", in the context of "augmentation", (both the general case and the specific case) I mean: To couple g's implementation as loosely to f's implementation as possible.
Specific case
In my current situation, this is what I need to do:
h a b c = g $ f a b c
I am interested in rewriting it to something like this:
h = g . f -- Doesn't type-check.
Because from the perspective of h and g, it doesn't matter what arguments f takes; they only care about the return value, so mentioning the arguments in any way would be tight coupling. For instance, if f's argument count changes in the future, h will also need to be changed.
So far
I asked lambdabot on the #haskell IRC channel: #pl h a b c = g $ f a b c to which I got the response:
h = ((g .) .) . f
Which is still not good enough, since the number of (.)'s depends on the number of f's arguments.
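To make the dependence concrete, here is the pattern spelled out for one, two and three arguments (f1/f2/f3, g, h1/h2/h3 are throwaway names purely for illustration):
f1 :: Int -> Int
f1 a = a

f2 :: Int -> Int -> Int
f2 a b = a + b

f3 :: Int -> Int -> Int -> Int
f3 a b c = a + b + c

g :: Int -> String
g = show

h1 :: Int -> String
h1 = g . f1              -- one argument, one (.)

h2 :: Int -> Int -> String
h2 = (g .) . f2          -- two arguments, two (.)'s

h3 :: Int -> Int -> Int -> String
h3 = ((g .) .) . f3      -- three arguments, three (.)'s
One extra (.) section appears for every extra argument of f.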
General case
I haven't done much research in this direction, but erisco on #haskell pointed me towards http://matt.immute.net/content/pointless-fun which hints to me that a solution for the general case could be possible.
So far
Using the functions defined by Luke Palmer in the above article, this seems to be an equivalent of what we have discussed so far:
h = f $. id ~> id ~> id ~> g
However, this method sadly also depends on the number of f's arguments if we want to transform the return value of f -- just like the previous methods.
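For reference, the two combinators used above look roughly like this (my reconstruction of the article's definitions; the exact fixities are an assumption, chosen so that the expression above parses):
infixl 0 $.
infixr 1 ~>

($.) :: a -> (a -> b) -> b
x $. f = f x

(~>) :: (a' -> a) -> (b -> b') -> (a -> b) -> (a' -> b')
(f ~> g) h = g . h . f
Each ~> section handles one argument position, which is exactly why one id per argument of f is still needed.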
Working example
In JavaScript, for instance, it is possible to achieve transparent augmentation like this:
function h () { return g(f.apply(this, arguments)) }
Question
How can a function be "transparently augmented" in Haskell?
I am mainly interested in the specific case, but it would also be nice to know how to handle the general case.
You can sort-of do it, but since there is no way to specify a behavior for everything that isn't a function, you'll need a lot of trivial instances for all the other types you care about.
{-# LANGUAGE TypeFamilies, DefaultSignatures #-}
class Augment a where
type Result a
type Result a = a
type Augmented a r
type Augmented a r = r
augment :: (Result a -> r) -> a -> Augmented a r
default augment :: (a -> r) -> a -> r
augment g x = g x
instance Augment b => Augment (a -> b) where
type Result (a -> b) = Result b
type Augmented (a -> b) r = a -> Augmented b r
augment g f x = augment g (f x)
instance Augment Bool
instance Augment Char
instance Augment Integer
instance Augment [a]
-- and so on for every result type of every function you want to augment...
Example:
> let g n x ys = replicate n x ++ ys
> g 2 'a' "bc"
"aabc"
> let g' = augment length g
> g' 2 'a' "bc"
4
> :t g
g :: Int -> a -> [a] -> [a]
> :t g'
g' :: Int -> a -> [a] -> Int
Well, technically, with just enough IncoherentInstances you can do pretty much anything:
{-# LANGUAGE MultiParamTypeClasses, TypeFamilies,
FlexibleInstances, UndecidableInstances, IncoherentInstances #-}
class Augment a b f h where
augment :: (a -> b) -> f -> h
instance (a ~ c, h ~ b) => Augment a b c h where
augment = ($)
instance (Augment a b d h', h ~ (c -> h')) => Augment a b (c -> d) h where
augment g f = augment g . f
-- Usage
t1 = augment not not
r1 = t1 True
t2 = augment (+1) (+)
r2 = t2 2 3
t3 = augment (+1) foldr
r3 = t3 (+) 0 [2,3]
The problem is that the real return value of something like a -> b -> c isn't
c, but b -> c. What you want requires some kind of test that tells you whether a type is
a function type. You could enumerate the types you are interested in, but that's not so
nice. I think HList solves this problem somehow; look at the paper. I managed to understand a bit of the solution with overlapping instances, but the rest goes a bit over my head, I'm afraid.
JavaScript works because its arguments are a sequence, or a list, so there is really just one argument. In that sense it is the same as an uncurried version of the function, with a tuple representing the collection of arguments.
In a strongly typed language you need a lot more information to do that "transparently" for a function type - for example, dependent types can express this idea, but they require the functions to be of specific types, not an arbitrary function type.
I think I saw a workaround in Haskell that can do this, too, but, again, that works only for specific types, which capture the arity of the function, not any function.
Related
(Disclaimer: I'm not 100% sure how codatatypes work, especially when not referring to terminal algebras).
Consider the "category of types", something like Hask but with whatever adjustments fit the discussion. Within such a category, it is said that (1) initial algebras define datatypes, and (2) terminal algebras define codatatypes.
I'm struggling to convince myself of (2).
Consider the functor T(t) = 1 + a * t. I agree that the initial T-algebra is well-defined and indeed defines [a], the list of a. By definition, the initial T-algebra is a type X together with a function f :: 1+a*X -> X, such that for any other type Y and function g :: 1+a*Y -> Y, there is exactly one function m :: X -> Y such that m . f = g . T(m) (where . denotes the function composition operator as in Haskell). With f interpreted as the list constructor(s), g the initial value and the step function, and T(m) the recursion operation, the equation essentially asserts the unique existence of the function m given any initial value and any step function defined in g, which necessitates an underlying well-behaved fold together with the underlying type, the list of a.
For example, g :: Unit + (a, Nat) -> Nat could be () -> 0 | (_,n) -> n+1, in which case m defines the length function, or g could be () -> 0 | (_,n) -> 0, and then m defines a constant zero function. An important fact here is that, for whatever g, m can always be uniquely defined, just as fold does not impose any constraint on its arguments and always produces a unique well-defined result.
This does not seem to hold for terminal algebras.
Consider the same functor T defined above. The definition of the terminal T-algebra is the same as the initial one, except that m is now of type X -> Y and the equation now becomes m . g = f . T(m). It is said that this should define a potentially infinite list.
I agree that this is sometimes true. For example, when g :: Unit + (Unit, Int) -> Int is defined as () -> 0 | (_,n) -> n+1 like before, m then behaves such that m(0) = () and m(n+1) = Cons () m(n). For non-negative n, m(n) should be a finite list of units. For any negative n, m(n) should be of infinite length. It can be verified that the equation above holds for such g and m.
With any of the two following modified definition of g, however, I don't see any well-defined m anymore.
First, when g is again () -> 0 | (_,n) -> n+1 but is of type g :: Unit + (Bool, Int) -> Int, m must satisfy m(g((b,i))) = Cons b m(g(i)), which means that the result depends on b. But this is impossible, because m(g((b,i))) is really just m(i+1), which makes no mention of b whatsoever, so the equation is not well-defined.
Second, when g is again of type g :: Unit + (Unit, Int) -> Int but is defined as the constant zero function g _ = 0, m must satisfy that m(g(())) = Nil and m(g(((),i))) = Cons () m(g(i)), which are contradictory because their left hand sides are the same, both being m(0), while the right hand sides are never the same.
In summary, there are T-algebras that have no morphism into the supposed terminal T-algebra, which implies that the terminal T-algebra does not exist. The theoretical modeling of the codatatype Stream (or infinite list), if any, cannot be based on the nonexistent terminal algebra of the functor T(t) = 1 + a * t.
Many thanks for any hint at a flaw in the story above.
(2) terminal algebras define codatatypes.
This is not right, codatatypes are terminal coalgebras. For your T functor, a coalgebra is a type x together with f :: x -> T x. A T-coalgebra morphism between (x1, f1) and (x2, f2) is a g :: x1 -> x2 such that fmap g . f1 = f2 . g. Using this definition, the terminal T-coalgebra defines the possibly infinite lists (so-called "colists"), and the terminality is witnessed by the unfold function:
unfold :: (x -> Unit + (a, x)) -> x -> Colist a
Note though that a terminal T-algebra does exist: it is simply the Unit type together with the constant function T Unit -> Unit (and this works as a terminal algebra for any T). But this is not very interesting for writing programs.
it is said that (1) the initial algebras define datatypes, and (2) terminal algebras define codatatypes.
On the second point, it is actually said that terminal coalgebras define codatatypes.
A datatype t is defined by its constructors and a fold.
Constructors can be modelled by an algebra F t -> t (for example, the Peano constructors O : Nat and S : Nat -> Nat are collected as a single function in : Unit + Nat -> Nat).
The fold then gives the catamorphism fold f : t -> x for any algebra f : F x -> x (for nats, fold : ((Unit + x) -> x) -> Nat -> x).
A codatatype t is defined by its destructors and an unfold.
Destructors can be modelled by a coalgebra t -> F t (for example, streams have two destructors head : Stream a -> a and tail : Stream a -> Stream a, and they are collected as a single function out : Stream a -> a * Stream a).
The unfold then gives the anamorphism unfold f : x -> t for any coalgebra f : x -> F x (for streams, unfold : (x -> a * x) -> x -> Stream a).
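Here is a hedged Haskell sketch of the stream case (headS, tailS, out and unfoldS are my renamings, to avoid clashing with the Prelude):
data Stream a = Cons a (Stream a)

-- the two destructors
headS :: Stream a -> a
headS (Cons a _) = a

tailS :: Stream a -> Stream a
tailS (Cons _ as) = as

-- the destructors collected as a single coalgebra, out : Stream a -> a * Stream a
out :: Stream a -> (a, Stream a)
out (Cons a as) = (a, as)

-- the anamorphism: any coalgebra f :: x -> (a, x) induces a map into Stream a
unfoldS :: (x -> (a, x)) -> x -> Stream a
unfoldS f x = let (a, x') = f x in Cons a (unfoldS f x')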
(Disclaimer: I'm not 100% sure how codatatypes work, especially when not referring to terminal algebras).
A codata type, or coinductive data type, is just one defined by its eliminations rather than its introductions.
It seems that sometimes terminal algebra is used (very confusingly) to refer to a final coalgebra, which is what actually defines a codata type.
Consider the same functor T defined above. The definition of the terminal T-algebra is the same as the initial one, except that m is now of type X -> Y and the equation now becomes m . g = f . T(m). It is said that this should define a potentially infinite list.
So I think this is where you’ve gone wrong: “m ∘ g = f ∘ T(m)” should be reversed, and read “T(m) ∘ f = g ∘ m”. That is, the final coalgebra is defined by a carrier set S and a map g : S → T(S) such that for any other coalgebra (R, f : R → T(R)) there is a unique map m : R → S such that T(m) ∘ f = g ∘ m.
m is uniquely defined recursively by the map that returns Left () whenever f maps to Left (), and Right (x, m xs) whenever f maps to Right (x, xs), i.e. it’s the assignment of the coalgebra to its unique morphism to the final coalgebra, and denotes the unique anamorphism/unfold of this type, which should be easy to convince yourself is in fact a possibly-empty & possibly-infinite stream.
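In Haskell terms, reading Unit + (a, x) as Either () (a, x), the unique morphism described above is the usual anamorphism (Colist and ana are my names for this sketch):
data Colist a = Nil | Cons a (Colist a)

-- Left () becomes Nil and Right (x, xs) becomes Cons, exactly as described above
ana :: (s -> Either () (a, s)) -> s -> Colist a
ana f s = case f s of
  Left ()       -> Nil
  Right (a, s') -> Cons a (ana f s')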
Is there a way in Haskell to get all function arguments as a list?
Let's suppose we have the following program, where we want to add the two smaller numbers and then subtract the largest. Suppose we can't change the function definition of foo :: Int -> Int -> Int -> Int. Is there a way to get all function arguments as a list, other than constructing a new list and adding all arguments as elements of said list? More importantly, is there a general way of doing this independent of the number of arguments?
Example:
module Foo where
import Data.List
foo :: Int -> Int -> Int -> Int
foo a b c = result!!0 + result!!1 - result!!2 where result = sort ([a, b, c])
is there a general way of doing this independent of the number of arguments?
Not really; at least it's not worth it. First off, this entire idea isn't very useful because lists are homogeneous: all elements must have the same type, so it only works for the rather unusual special case of functions which only take arguments of a single type.
Even then, the problem is that “number of arguments” isn't really a sensible concept in Haskell, because as Willem Van Onsem commented, all functions really only have one argument (further arguments are actually only given to the result of the first application, which has again function type).
That said, at least for a single argument- and final-result type, it is quite easy to pack any number of arguments into a list:
{-# LANGUAGE FlexibleInstances #-}
class UsingList f where
usingList :: ([Int] -> Int) -> f
instance UsingList Int where
usingList f = f []
instance UsingList r => UsingList (Int -> r) where
usingList f a = usingList (f . (a:))
foo :: Int -> Int -> Int -> Int
foo = usingList $ (\[α,β,γ] -> α + β - γ) . sort
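In GHCi this should behave just like the original foo: sort gives [1,2,3], so the result is 1 + 2 - 3 (expected output sketched, not copied from a session):
ghci> foo 3 1 2
0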
It's also possible to make this work for any type of the arguments, using type families or a multi-param type class. What's not so simple though is to write it once and for all with a variable type for the final result. The reason being, that would also have to handle a function as the type of the final result. But then, that could also be interpreted as "we still need to add one more argument to the list"!
With all respect, I would disagree with @leftaroundabout's answer above. Something being
unusual is not a reason to shun it as unworthy.
It is correct that you would not be able to define a polymorphic variadic list constructor
without type annotations. However, we're not usually dealing with Haskell 98, where type
annotations were never required. With Dependent Haskell just around the corner, some
familiarity with non-trivial type annotations is becoming vital.
So, let's take a shot at this, disregarding worthiness considerations.
One way to define a function that does not seem to admit a single type is to make it a method of a
suitably constructed class. Many a trick involving type classes was devised by cunning
Haskellers, starting at least as early as 15 years ago. Even if we don't understand their
type wizardry in all its depth, we may still try our hand with a similar approach.
Let us first try to obtain a method for summing any number of Integers. That means repeatedly
applying a function like (+), with a uniform type such as a -> a -> a. Here's one way to do
it:
class Eval a where
eval :: Integer -> a
instance (Eval a) => Eval (Integer -> a) where
eval i = \y -> eval (i + y)
instance Eval Integer where
eval i = i
And this is the extract from repl:
λ eval 1 2 3 :: Integer
6
Notice that we can't do without explicit type annotation, because the very idea of our approach is
that an expression eval x1 ... xn may either be a function that waits for yet another argument,
or a final value.
One generalization now is to actually make a list of values. The science tells us that
we may derive any monoid from a list. Indeed, insofar as sum is a monoid, we may turn the arguments into
a list, then sum it and obtain the same result as above.
Here's how we can go about turning arguments of our method to a list:
class Eval a where
eval2 :: [Integer] -> Integer -> a
instance (Eval a) => Eval (Integer -> a) where
eval2 is i = \j -> eval2 (i:is) j
instance Eval [Integer] where
eval2 is i = i:is
This is how it would work:
λ eval2 [] 1 2 3 4 5 :: [Integer]
[5,4,3,2,1]
Unfortunately, we have to make eval2 binary, rather than unary, because it now has to compose two
different things: a (possibly empty) list of values and the next value to put in. Notice how it's
similar to the usual foldr:
λ foldr (:) [] [1,2,3,4,5]
[1,2,3,4,5]
The next generalization we'd like to have is allowing arbitrary types inside the list. It's a bit
tricky, as we have to make Eval a 2-parameter type class:
class Eval a i where
eval2 :: [i] -> i -> a
instance (Eval a i) => Eval (i -> a) i where
eval2 is i = \j -> eval2 (i:is) j
instance Eval [i] i where
eval2 is i = i:is
It works as before with Integers, but it can also carry any other type, even a function:
(I'm sorry for the messy example. I had to show a function somehow.)
λ ($ 10) <$> (eval2 [] (+1) (subtract 2) (*3) (^4) :: [Integer -> Integer])
[10000,30,8,11]
So far so good: we can convert any number of arguments into a list. However, it will be hard to
compose this function with the one that would do useful work with the resulting list, because
composition only admits unary functions − with some trickery, binary ones, but never variadic ones. It seems we'll have to define our own way to compose functions. Here's how I see it:
class Ap a i r where
apply :: ([i] -> r) -> [i] -> i -> a
apply', ($...) :: ([i] -> r) -> i -> a
($...) = apply'
instance Ap a i r => Ap (i -> a) i r where
apply f xs x = \y -> apply f (x:xs) y
apply' f x = \y -> apply f [x] y
instance Ap r i r where
apply f xs x = f $ x:xs
apply' f x = f [x]
Now we can write our desired function as an application of a list-admitting function to any number
of arguments:
foo' :: (Num r, Ord r, Ap a r r) => r -> a
foo' = (g $...)
where f = (\result -> (result !! 0) + (result !! 1) - (result !! 2))
g = f . sort
You'll still have to type annotate it at every call site, like this:
λ foo' 4 5 10 :: Integer
-1
− But so far, that's the best I can do.
The more I study Haskell, the more I am certain that nothing is impossible.
I'm struggling with using the lens library for a particular problem. I'm trying to pass
an updated data structure
a lens focussed on part of that updated structure
to another function, g. I pass both the lens and the data structure because g needs some shared information from the data structure as well as the particular piece the lens focuses on. (If it helps, the data structure contains information on a joint probability distribution, but g only works on either marginal and needs to know which marginal I'm looking at. The only difference between the two marginals is their mean, with the rest of their definition being shared in the data structure.)
My first attempt looked like this
f :: Functor f => Params -> ((Double -> f Double) -> Params -> f Params) -> a
f p l = g (l %~ upd $ p) l
where upd = ...
g p x = go p p^.x
but that fails during compilation because f gets inferred as being Identity for the update and Const Double for the getter.
What's the best way to accomplish what I want to do? I can imagine being able to do one of the following:
make a copy of the lens so that the type inference can be different in each case
rather than passing the updated structure and the lens, I pass the original structure and a lens which returns a modified value (if I only want to update the part of the structure that the lens looks at).
making a better design choice for my functions/data structure
something completely different
Thanks for any help!
András Kovács' answer shows how to achieve this with RankNTypes. If you wish to avoid RankNTypes, then you can use ALens and cloneLens:
f :: a -> ALens' a Int -> (Int, a)
f a l = (newvalue, a & cloneLens l .~ newvalue)
where oldvalue = a^.cloneLens l
newvalue = if oldvalue == 0 then 0 else oldvalue - 1
Control.Lens.Loupe provides operators and functions that work on ALens instead of Lens.
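For instance, with the stock _1 lens from Control.Lens, a session should look roughly like this (a hedged sketch, not a verbatim transcript):
ghci> f (3 :: Int, "x") _1
(2,(2,"x"))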
Note that in many cases, you should also be able to use <<%~, which is like %~ but also returns the old value, or <%~, which returns the new value:
f :: a -> LensLike' ((,) Int) a Int -> (Int, a)
f a l = a & l <%~ g
where g oldvalue = if oldvalue == 0 then 0 else oldvalue - 1
This has the advantage that it can also work with Isos or sometimes also with Traversals (when the target type is a Monoid).
You want your type signature to look like this:
f :: Params -> Lens Params Params Double Double -> ...
-- alternatively, instead of the long Lens form you can write
-- Lens' Params Double
This is not equivalent to what you wrote out in the signature, because the functor parameter is quantified inside Lens:
type Lens s t a b = forall f. Functor f => (a -> f b) -> (s -> f t)
The correct signature translates to:
f :: Params -> (forall f. Functor f => (Double -> f Double) -> Params -> f Params) -> ...
This prevents the compiler from unifying the different f parameters of different lens usages, i. e. you can use the lens polymorphically. Note that you need the RankNTypes or Rank2Types GHC extension in order to be able to write out the signature.
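Here is a small self-contained sketch along those lines (Params, mu1 and the (+ 1) update are made up for illustration and stand in for the question's Params, upd and go):
{-# LANGUAGE RankNTypes #-}
import Control.Lens

data Params = Params { pMu1, pMu2 :: Double } deriving Show

-- a hand-written lens onto pMu1
mu1 :: Lens' Params Double
mu1 k (Params a b) = (\a' -> Params a' b) <$> k a

-- the lens argument is polymorphic in the functor, so f can use it both as a
-- setter (instantiated at Identity) and as a getter (instantiated at Const)
f :: Params -> Lens' Params Double -> Double
f p l = g (p & l %~ (+ 1)) l
  where g p' l' = p' ^. l'

ghci> f (Params 0 5) mu1
1.0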
Benno gave the best general purpose answer.
There are two other options, however, which I offer here for completeness.
1.)
There are several Loupe combinators in Lens.
http://hackage.haskell.org/package/lens-4.1.2/docs/Control-Lens-Loupe.html
They all have names that involve #.
^# and #%= both take ALens which is a lens instantiated at a particular concrete choice of functor.
This can be useful if you need to pass around lists of lenses, or if you really really need multiple passes.
2.)
Another option, and my preferred tactic, is to figure out how to do both operations at the same time.
Here you are modifying, but want the value you just set. Well, you can get that by using <%~ instead of %~.
Now you only instantiate the lens at one choice of functor and your code gets faster.
In most of programming languages that support mutable variables, one can easily implement something like this Java example:
interface Accepter<T> {
void accept(T t);
}
<T> T getFromDoubleAccepter(Accepter<Accepter<T>> acc){
final List<T> l = new ArrayList<T>();
acc.accept(new Accepter<T>(){
@Override
public void accept(T t) {
l.add(t);
}
});
return l.get(0); //Not being called? Exception!
}
Just for those who do not understand Java: the above code receives something that can be provided with a function that takes one parameter, and it is supposed to grab this parameter as the final result.
This is not like callCC: there is no control flow alternation. Only the inner function's parameter is concerned.
I think the equivalent type signature in Haskell should be
getFromDoubleAccepter :: (forall b. (a -> b) -> b) -> a
So, if someone can give you a function (a -> b) -> b for a type of your choice, they MUST already have an a in hand. So your job is to give them a "callback", then keep whatever they send you in mind, and once they return to you, return that value to your caller.
But I have no idea how to implement this. There are several possible solutions I can think of. Although I don't know how each of them would work, I can rate and order them by prospected difficulties:
Cont or ContT monad. This I consider to be easiest.
RWS monad or similar.
Any other monads. Pure monads like Maybe I consider harder.
Use only standard pure functional features like lazy evaluation, pattern-matching, the fixed-point combinator, etc. This I consider the hardest (or even impossible).
I would like to see answers using any of the above techniques (and prefer harder ways).
Note: There should not be any modification of the type signature, and the solution should do the same thing that the Java code does.
UPDATE
Once I saw somebody comment getFromDoubleAccepter f = f id, I realized that I had done something wrong. Basically I used forall just to make the game easier, but it looks like this twist makes it too easy. Actually, the above type signature forces the caller to pass back whatever we gave them, so if we choose a as b then that implementation gives the same expected result, but it is just... not expected.
Actually what came up to my mind is a type signature like:
getFromDoubleAccepter :: ((a -> ()) -> ()) -> a
And this time it is harder.
Another comment writer asks for reasoning. Let's look at a similar function
getFunctionFromAccepter :: (((a -> b) -> b) -> b) -> a -> b
This one has a naive solution:
getFunctionFromAccepter f = \a -> f $ \x -> x a
But in the following test code it fails on the third:
exeMain = do
print $ getFunctionFromAccepter (\f -> f (\x -> 10)) "Example 1" -- 10
print $ getFunctionFromAccepter (\f -> 20) "Example 2" -- 20
print $ getFunctionFromAccepter (\f -> 10 + f (\x -> 30)) "Example 3" --40, should be 30
In the failing case, we pass a function that returns 30, and we expect to get that function back. However, the final result is 40 instead, so it fails. Is there any way to implement just the thing I wanted?
If this can be done in Haskell, there are a lot of interesting consequences. For example, tuples (or other "algebraic" types) can be defined as functions as well, since we can say something like type (a,b) = (a->b->())->() and implement fst and snd in terms of this. And this is the way I used in a couple of other languages that do not have native "tuple" support but feature "closures".
The type of accept is void accept(T) so the equivalent Haskell type is t -> IO () (since every function in Java is essentially IO). Thus getFromDoubleAccepter can be directly translated as
import Data.IORef
type Accepter t = t -> IO ()
getFromDoubleAccepter :: Accepter (Accepter a) -> IO a
getFromDoubleAccepter acc = do
l <- newIORef $ error "Not called"
acc $ writeIORef l
readIORef l
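For example, I would expect a session to go like this (a sketch, not a verbatim transcript):
ghci> getFromDoubleAccepter (\acc -> acc (42 :: Int))
42
And if the callback is never invoked, the "Not called" error surfaces when the result is forced.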
If you want an idiomatic, non-IO solution in Haskell, you need to be more specific about what your actual end goal is besides trying to imitate some Java-pattern.
EDIT: regarding the update
getFromDoubleAccepter :: ((a -> ()) -> ()) -> a
I'm sorry, but this signature is in no way equal to the Java version. What you are saying is that for any a, given a function that takes a function that takes an a but doesn't return anything or do any kind of side effects, you want to somehow conjure up a value of type a. The only implementation that satisfies the given signature is essentially:
getFromDoubleAccepter :: ((a -> ()) -> ()) -> a
getFromDoubleAccepter f = getFromDoubleAccepter f
First, I'll transliterate as much as I can. I'm going to lift these computations to a monad because accept returns void (read () in Haskell-land), which is useless unless there is some effect.
type Accepter m t = t -> m ()
getFromDoubleAccepter :: (MonadSomething m) => Accepter m (Accepter m t) -> m t
getFromDoubleAccepter acc = do
l <- {- new mutable list -}
acc $ \t -> add l t
return (head l)
Of course, we can't make a mutable list like that, so we'll have to use some intuitive sparks here. When an action just adds an element to some accumulator, I think of the Writer monad. So maybe that line should be:
acc $ \t -> tell [t]
Since you are simply returning the head of the list at the end, which doesn't have any effects, I think the signature should become:
getFromDoubleAccepter :: Accepter M (Accepter M t) -> t
where M is an appropriate monad. It needs to be able to write [t]s, so that gives us:
type M t = Writer [t]
getFromDoubleAccepter :: Accepter (M t) (Accepter (M t) t) -> t
And now the type of this function informs us how to write the rest of it:
getFromDoubleAccepter acc =
head . execWriter . acc $ \t -> tell [t]
We can check that it does something...
ghci> getFromDoubleAccepter $ \acc -> acc 42
42
So that seems right, I guess. I'm still a bit unclear on what this code is supposed to mean.
The explicit M t in the type signature is a bit aesthetically bothersome to me. If I knew what problem I was solving I would look at that carefully. If you mean that the argument can be a sequence of commands, but otherwise has no computational features available, then you could specialize the type signature to:
getFromDoubleAccepter :: (forall m. (Monad m) => Accepter m (Accepter m t)) -> t
which still works with our example. Of course, this is all a bit silly. Consider
forall m. (Monad m) => Accepter m (Accepter m t))
= forall m. (Monad m) => (t -> m ()) -> m ()
The only thing a function with this type can do is call its argument with various ts in order and then return (). The information in such a function is completely characterized[1] by those ts, so we could just as easily have used
getFromDoubleAccepter :: [t] -> t
getFromDoubleAccepter = head
[1] As long as I'm going on about nothing, I might as well say that that is not quite accurate in the face of infinity. The computation
crazy :: Integer -> Accepter m (Accepter m Integer)
crazy n acc = crazy (n+1) >> acc n
can be used to form the infinite sequence
... >> acc 3 >> acc 2 >> acc 1 >> acc 0
which has no first element. If we tried to interpret this as a list, we would get an infinite loop when trying to find the first element. However this computation has more information than an infinite loop -- if instead of a list, we used the Last monoid to interpret it, we would be able to extract 0 off the end. So really
forall m. (Monad m) => Accepter m (Accepter m t)
is isomorphic to something slightly more general than a list; specifically a free monoid.
Thanks to the above answers, I finally concluded that in Haskell we can do some things differently than in other languages.
Actually, the motivation of this post is to translate the famous "single axiom classical logic reduction system". I have implemented this in some other languages. It should be no problem to implement the
Axiom: (a|(b|c)) | ((d|(d|d)) | ((e|b) | ((a|e) | (a|e))))
However, since the reduction rule looks like
Rule: a|(b|c), a |-- c
It is necessary to extract the inner parameter as the final result. In other languages, this is done by using side effects like mutable slots. However, in Haskell we do not have mutable slots, and involving IO would be ugly, so I kept looking for solutions.
At first glance (as shown in my question), the getFromDoubleAccepter f = f id seems nonsense, but I realised that it actually works in this case! For example:
rule :: (forall r.a -> (b -> c -> r) -> r) -> a -> c
rule abc a = abc a $ flip const
The trick is still the same: since the quantification hides r from the caller, and it is up to the callee to pick a type for it, we can specify c to be r, so we simply apply the given function to get the result. On the other hand, the given function has to use our input to produce the final answer, so it effectively limits the implementation to exactly what we want!
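A tiny sanity check (succCPS is a throwaway example of mine):
succCPS :: Int -> (Int -> Int -> r) -> r
succCPS x k = k x (x + 1)

ghci> rule succCPS 5
6
Here rule hands succCPS the continuation flip const, so only the inner parameter x + 1 comes back.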
Putting them together, let's see what we can do with it:
newtype I r a b = I { runI :: a -> b -> r }
rule :: (forall r. I r a (I r b c)) -> a -> c
rule (I abc) a = abc a (I (\b c -> c))
axiom :: I r0 (I r1 a (I r2 b c))
(I r0 (I r3 d (I r3 d d))
(I r4 (I r2 e b) (I r4 (I r1 a e) (I r1 a e))))
axiom = let
a1 (I eb) e = I $ \b c -> eb e b
a2 = I $ \d (I dd) -> dd d d
a3 (I abc) eb = I $ \a e -> abc a (a1 eb e)
a4 abc = I $ \eb aeae -> runI a2 (a3 abc eb) aeae
in I $ \abc (I dddebaeae) -> dddebaeae a2 (a4 abc)
Here I use a naming convention to trace the type signatures: a variable name is combined from the "effective" type variables (meaning those that are not result types - i.e. excluding all the r* type variables).
I won't repeat the proof presented in the cited essay, but I want to show something. In the above definition of axiom we use some let-bound variables to construct the result. Not surprisingly, those variables themselves can be extracted by using rule and axiom. Let's see how:
--Equal to a4
t4 :: I r0 a (I r1 b c) -> I r2 (I r1 d b) (I r2 (I r0 a d) (I r0 a d))
t4 abc = rule axiom abc
--Equal to a3
t3 :: I r0 a (I r1 b c) -> I r1 d b -> I r0 a d
t3 abc eb = rule (t4 abc) eb
--Equal to a2
t2 :: I r a (I r a a)
t2 = rule (t3 axiom (t3 (t4 axiom) axiom)) axiom
--Equal to a1
t1 :: I r a b -> a -> I r b c
t1 ab a = rule (t3 t2 (t3 (t3 t2 t2) ab)) a
One thing left to be proved is that we can prove all tautologies using only t1 to t4. I feel it is the case but have not yet proved it.
Compared to other languages, the Haskell solution seems more effective and expressive.
I'm using Haskell to implement a pattern involving functions that return a value and themselves (or a function of the same type). Right now I've implemented this like so:
newtype R a = R (a , a -> R a)
-- some toy functions to demonstrate
alpha :: String -> R String
alpha str
| str == reverse str = R (str , omega)
| otherwise = R (reverse str , alpha)
omega :: String -> R String
omega (s:t:r)
| s == t = R (s:t:r , alpha)
| otherwise = R (s:s:t:r , omega)
The driving force for these types of functions is a function called cascade:
cascade :: (a -> R a) -> [a] -> [a]
cascade _ [] = []
cascade f (l:ls) = el : cascade g ls where
R (el , g) = f l
Which takes a seed function and a list, and returns a list created by applying the seed function to the first element of the list, applying the function returned by that to the second element of the list, and so on and so forth.
This works--however, in the process of using this for slightly more useful things, I noticed that a lot of the time the basic units were functions that only rarely returned a function other than themselves; and explicitly declaring a function to return itself was becoming somewhat tedious. I'd rather be able to use something like a Monad's return function; however, I have no idea what bind would do for functions of these types, especially since I never intended these to be linked with anything other than the function they return in the first place.
Trying to shoehorn this into a Monad started worrying me about whether or not what I was doing was useful, so, in short, what I want to know is:
Is what I'm doing a Bad Thing? if not,
Has what I'm doing been done before/am I reinventing the wheel here? if not,
Is there an elegant way to do this, or have I already reached this and am being greedy by wanting some kind of return analogue?
(Incidentally, besides 'functions that return themselves' or 'recursive data structure (of functions)', I'm not quite sure what this kind of pattern is called, which has made trying to do effective research on it difficult--if anyone could give me a name for this pattern (if it indeed has one), that alone would be very helpful)
As a high-level consideration, I'd say that your type represents a stateful stream transformer. What's a bit confusing here is that your type is defined as
newtype R a = R (a , a -> R a)
instead of
newtype R a = R (a -> (R a, a))
which would be a bit more natural in the streaming context because you typically don't "produce" something if you haven't received anything yet. Your functions would then have simpler types too:
alpha, omega :: R String
cascade :: R a -> [a] -> [a]
If we try to generalize this idea of a stream transformer, we soon realize that the case where we transform a list of as into a list of as is just a special case. With the proper infrastructure in place we could just as well produce a list of bs. So we try to generalize the type R:
newtype R a b = R (a -> (R a b, b))
I've seen this kind of structure being called a Circuit, which happens to be a full-blown arrow. Arrows are a generalization of the concept of functions and are an even more powerful construct than monads. I can't pretend to understand the category-theoretical background, but it's definitely interesting to play with them. For example, the trivial transformation is just Cat.id:
import Control.Category
import qualified Control.Category as Cat
import Control.Arrow
import Prelude hiding ((.), id)
import qualified Data.List as L
-- ... Definition of Circuit and instances
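-- A hedged guess at the elided definition above (the names Circuit and
-- unCircuit are fixed by how they are used below; this is one way to write it):
newtype Circuit a b = Circuit { unCircuit :: a -> (Circuit a b, b) }

instance Category Circuit where
  id = Circuit $ \x -> (id, x)
  Circuit g . Circuit f = Circuit $ \x ->
    let (f', y) = f x
        (g', z) = g y
    in  (g' . f', z)

instance Arrow Circuit where
  arr f = Circuit $ \x -> (arr f, f x)
  first (Circuit f) = Circuit $ \(x, y) ->
    let (f', x') = f x in (first f', (x', y))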
cascade :: Circuit a b -> [a] -> [b]
cascade cir = snd . L.mapAccumL unCircuit cir
--
ghci> cascade (Cat.id) [1,2,3,4]
[1,2,3,4]
We can also simulate state by parameterizing the circuit we return as the continuation:
countingCircuit :: (a -> b) -> Circuit a (Int, b)
countingCircuit f = cir 0
where cir i = Circuit $ \x -> (cir (i+1), (i, f x))
--
ghci> cascade (countingCircuit (+5)) [10,3,2,11]
[(0,15),(1,8),(2,7),(3,16)]
And the fact that our circuit type is a category gives us a nice way to compose circuits:
ghci> cascade (countingCircuit (+5) . arr (*2)) [10,3,2,11]
[(0,25),(1,11),(2,9),(3,27)]
It looks like what you have is a simplified version of a stream. That is to
say, a representation of an infinite stream of values. I don't think you can
easily define this as a monad, because you use the same type for your seed as
for your elements, which makes defining fmap difficult (it seems that you
would need to invert the function provided to fmap so as to be able to
recover the seed). You can make this a monad by making the seed type
independent of the element type like so
{-# LANGUAGE ExistentialQuantification #-}
data Stream a = forall s. Stream a s (s -> Stream a)
This will allow you to define a Functor and Monad instance as follows
unfold :: (b -> (a, b)) -> b -> Stream a
unfold f b = Stream a b' (unfold f)
where (a, b') = f b
shead :: Stream a -> a
shead (Stream a _ _) = a
stail :: Stream a -> Stream a
stail (Stream _ b f) = f b
diag :: Stream (Stream a) -> Stream a
diag = unfold f
where f str = (shead $ shead str, stail $ fmap stail str)
sjoin :: Stream (Stream a) -> Stream a
sjoin = diag
instance Functor Stream where
fmap f (Stream a b g) = Stream (f a) b (fmap f . g)
instance Monad Stream where
return = unfold (\x -> (x, x))
xs >>= f = diag $ fmap f xs
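(On GHC 7.10 and later, Monad requires Applicative, so a minimal instance is also needed; this addition is mine, assuming ap from Control.Monad:)
import Control.Monad (ap)  -- goes at the top of the module

instance Applicative Stream where
  pure  = unfold (\x -> (x, x))
  (<*>) = ap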
Note that this only obeys the Monad laws when viewed as a set, as it does not
preserve element ordering.
This explanation
of the stream monad uses infinite lists, which works just as well in Haskell
since they can be generated in a lazy fashion. If you check out the
documentation for the Stream type in the vector library, you will
find a more complicated version, so that it can be used in efficient stream fusion.
I don't have much to add, except to note that your cascade function can be written as a left fold (and hence also as a right fold, though I haven't done the transformation.)
cascade f = reverse . fst . foldl func ([], f)
where
func (rs,g) s = let R (r,h) = g s in (r:rs,h)