On control flow structures in Haskell (multiple if-then-else) - haskell

I want to translate the following procedural program to Haskell [written in pseudocode]:
f(x) {
  if(c1(x)) {
    if(c2(x)) {
      return a(x);
    }
    else if (c3(x)) {
      if(c4(x)) {
        return b(x);
      }
    }
  }
  return d(x);
}
I have written the following implementation:
f x =
  if (c1 x) then
    if (c2 x) then
      a x
    else if (c3 x) then
      if (c4 x) then
        b x
      else d x
    else d x
  else d x
Unfortunately it contains (else d x) three times.
Is there a better way to implement the function? (i.e., to return (d x) if none of the conditions is met?)
I understand that we could combine conditions c1 and c2 into (c1 x) && (c2 x) to reduce the number of ifs, but my conditions c1, c2, c3, c4 are very long, and if I combine them I get a condition that takes more than one line.

Easiest, most apparent solution
If you're using GHC, you can turn on
{-# LANGUAGE MultiWayIf #-}
and your entire thing becomes
f x = if | c1 x && c2 x         -> a x
         | c1 x && c3 x && c4 x -> b x
         | otherwise            -> d x
Slightly more advanced and flexible solution
However, you don't always want to blindly replicate imperative code in Haskell. Often, it's useful to think of your code as data instead. What you are really doing is setting up a list of requirements that x must satisfy, and then if x satisfies those requirements you take some action on x.
We can represent this with actual lists of functions in Haskell. It would look something like
decisions :: [([a -> Bool], a -> b)]
decisions = [([c1, c2],     a)
            ,([c1, c3, c4], b)
            ,([],           d)]
Here, we should read this as, "if x satisfies both c1 and c2, take action a on x" and so on. Then we can define f as
-- find comes from Data.List, fromMaybe from Data.Maybe
f x = let maybeMatch = find (all ($ x) . fst) decisions
          match      = fromMaybe (error "no match!") maybeMatch
          result     = snd match
      in  result x
This works by walking through the list of requirements and finding the first entry whose requirements x satisfies (maybeMatch). It pulls that out of the Maybe (you might want some better error handling there!), then it chooses the corresponding function (result), and then it runs x through that.
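To see it run end to end, here is a tiny instantiation with placeholder conditions and actions (all of the following definitions are mine, purely for illustration):
c1, c2, c3, c4 :: Int -> Bool
c1 = (> 0); c2 = even; c3 = odd; c4 = (> 10)

a, b, d :: Int -> String
a = const "a"; b = const "b"; d = const "d"

-- With the decisions list and f from above:
-- f 4  == "a"   (positive and even)
-- f 11 == "b"   (positive, odd and > 10)
-- f 3  == "d"   (falls through to the default)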
Very advanced and flexible solution
If you have a really complex tree of decisions, you might not want to represent it with a flat list. This is where actual data trees come in handy. You can create a tree of the functions you want, and then search that tree until you hit a leaf node. That tree might in this example look something like
+-> c1 +-> c2 -> a
|      |
|      +-> c3 -> c4 -> b
|
+-> d
In other words, if x satisfies c1, it's gonna see if it satisfies c2 too, and if it does, it takes action a on x. If it doesn't, it goes on to the next branch with c3, and so on, until it reaches an action (or has walked through the entire tree).
But first you're going to need a data type to tell the difference between a requirement (c1, c2 etc.) and an action (a, b etc.)
data Decision a b = Requirement (a -> Bool)
                  | Action (a -> b)
Then you build a tree of decisions as
-- Node comes from Data.Tree
decisions =
    Node (Requirement (const True))
        [Node (Requirement c1)
            [Node (Requirement c2)
                [Node (Action a) []]
            ,Node (Requirement c3)
                [Node (Requirement c4)
                    [Node (Action b) []]]]
        ,Node (Action d) []]
This looks more complicated than it is, so you should probably invent a neater way of expressing decision trees. If you define the functions
iff = Node . Requirement
action = flip Node [] . Action
you can write the tree as
decisions =
    iff (const True) [
        iff (c1) [
            iff (c2) [
                action a
            ],
            iff (c3) [
                iff (c4) [
                    action b
                ]
            ]
        ],
        action d
    ]
and suddenly it's very similar to the imperative code you started with, despite the fact that it's valid Haskell code that's just building a data structure! Haskell is powerful for defining custom little "languages inside the language" like this.
Then you need to search through the tree for the first action you can reach.
-- asum comes from Data.Foldable, Tree from Data.Tree
decide :: a -> Tree (Decision a b) -> Maybe b
decide x (Node (Action f) _) = Just (f x)
decide x (Node (Requirement p) subtree)
    | p x       = asum $ map (decide x) subtree
    | otherwise = Nothing
This uses a little bit of Maybe magic (asum, which returns the first Just in the list) to stop at the first successful hit. This in turn means it will not compute the conditions of any branch in vain (which is efficient, and important if the computations are expensive), and it should handle infinite decision trees just fine.
You can make decide even more general, taking full advantage of the Alternative class, but I've chosen to specialise it for Maybe so as to not write a book about this. Making it even more general might allow you to have fancy monadic decisions too, which would be very cool!
But, lastly, as a very simple example of this in action – take the Collatz conjecture. If you give me a number, and ask me what the next number should be, I can build a decision tree to find out. The tree may look like this:
collatz =
    iff (> 0) [
        iff (not . even) [
            action (\n -> 3*n + 1)
        ],
        action (`div` 2)
    ]
so the number has to be bigger than 0, and then if it's odd you multiply by three and add one, otherwise you halve it. Test runs show that
λ> decide 3 collatz
Just 10
λ> decide 10 collatz
Just 5
λ> decide (-4) collatz
Nothing
You can probably imagine much more interesting decision trees.
Edit like a year later: The generalisation to Alternative is actually very simple, and fairly interesting. The decide function gets the new look
decide :: Alternative f => a -> Tree (Decision a b) -> f b
decide x (Node (Action f) _) = pure (f x)
decide x (Node (Requirement p) subtree)
    | p x       = asum $ map (decide x) subtree
    | otherwise = empty
(that's a total of only three changes, for those keeping count.) What this gives you is the opportunity to assemble "all" actions the input satisfies, by using the Alternative instance of lists instead of Maybe. This reveals an "error" in our collatz tree – if we look carefully at it, we see it says that all odd and positive integers n turn to 3*n + 1, but it also says that all positive numbers turn to n/2. There is no additional requirement that says the number has to be even.
In other words, the (`div` 2) action is only under the (>0) requirement and nothing else. This is technically incorrect, but it happens to work if we just get the first result (which is basically what using the Maybe Alternative instance does). If we list all results, we also get an incorrect one.
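For instance, running the list version of decide on the tree above (my own test run, assuming the definitions so far):
λ> decide 3 collatz :: [Integer]
[10,1]
The second result is the incorrect 3 `div` 2 = 1, coming from the unguarded (`div` 2) action.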
When is getting multiple results interesting? Maybe we're writing the decision tree for an AI, and we want to humanise the behaviour by first getting all the valid decisions, and then picking one of them at random. Or ranking them based on how good they are in the circumstances, or something else.

You could use guards and a where clause:
f x | cb && c2 x         = a x
    | cb && c3 x && c4 x = b x
    | otherwise          = d x
  where cb = c1 x

If you're just worried about writing them out, then that's what where blocks are for:
f x =
  case () of
    () | c1 && c2       -> a x
       | c1 && c3 && c4 -> b x
       | otherwise      -> d x
  where
    c1 = ...
    c2 = ...
    c3 = ...
    c4 = ...
Note that I'm using the case trick to introduce a new place for guard statements. (Guards on the function definition itself would also work here, since the where clause scopes over all guards of an equation.) You could use if just the same, but guards have nice pass-through semantics.

There's another pattern you can use: I wouldn't use it in your specific example but there are very similar situations where I have used it.
f x = case (c1 x, c2 x, c3 x, c4 x) of
        (True, True , _   , _   ) -> a x
        (True, False, True, True) -> b x
        _                         -> d x
Only the bare minimum evaluation required to choose which path to take will actually be evaluated: it won't actually evaluate c2 x unless c1 x is True.
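You can see this laziness directly (a quick check of my own): the wildcard positions are never forced, so even undefined is fine there:
λ> case (True, True, undefined, undefined) of (True, True, _, _) -> "a"
"a"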

Related

Haskell backtracking

Two friends P1 and P2 send the SAME message M to a mutual friend, say P3.
However, due to some network damage, P3 receives only one character at a time, without knowing whether the character received belongs to P1 or P2.
Furthermore, P3 might receive X characters from P1, then Y characters from P2, or vice versa, but whatever the order, P3 will receive ALL characters that both P1 and P2 sent.
Given the sequence S of characters that P3 received, help him determine the initial message M, which consists only of 0s and 1s.
Note that there might be more than one solution to the problem; getting just one is fine.
Examples :
1) S = [0,1,0,0,1,0] then M = "010"
2) S = [0,0,1,1,0,0,1,1,0,0] then M = "01010" or M = "00110"
To clarify the order and the ownership of each character:
Say M = "cat"; then S might be:
1) [c1,c2,a2,t2,a1,t1]
2) [c1,a1,t1,c2,a2,t2]
3) [c1,c2,a1,a2,t2,t1]
Where xi stands for : Character x belongs to person i.
Given that P1 and P2 send the same message:
There is a fixed number of 0s that each of P1 and P2 sends
There is also a fixed number of 1s that each of P1 and P2 sends
The length of S will obviously be an even number (twice the length of M)
At first I implemented the predicate in Prolog, using a's (for 0) and b's (for 1), where backtracking is fairly easy, and I applied a constraint that prunes my search tree so that my approach is not a brute-force one.
Prolog code:
countCharacters([],A,B,A,B).
countCharacters([C|T],A,B,X,Y) :-    % Count a's per person and b's per person
    (C == a -> A1 is A + 1, countCharacters(T,A1,B,X,Y)
    ;          B1 is B + 1, countCharacters(T,A,B1,X,Y)).

countCharacters(L,A,B) :-
    countCharacters(L,0,0,X,Y),
    A is X / 2,
    B is Y / 2.

rightOrder([],_) :- !.
rightOrder(_,[]) :- !.
rightOrder([C1|_],[C2|_]) :- C1 \= C2, !, false.
rightOrder([C|T1],[C|T2]) :-    % Constraint that checks if two lists have the same order
    rightOrder(T1,T2).

determine([],M1,M2,_,_,_,_,M1) :- M1 == M2, !.
determine(L,M1,M2,A1,B1,A2,B2,X) :-
    A1 == 0,
    B1 == 0,
    append(M2,L,NM2),
    rightOrder(M1,NM2),
    determine([],M1,NM2,A1,B1,A2,B2,X).
determine([a|T],M1,M2,A1,B1,A2,B2,X) :-
    A1 > 0,
    NA1 is A1 - 1,
    append(M1,[a],NM1),
    determine(T,NM1,M2,NA1,B1,A2,B2,X).
determine([b|T],M1,M2,A1,B1,A2,B2,X) :-
    B1 > 0,
    NB1 is B1 - 1,
    append(M1,[b],NM1),
    determine(T,NM1,M2,A1,NB1,A2,B2,X).
determine([a|T],M1,M2,A1,B1,A2,B2,X) :-
    A2 > 0,
    NA2 is A2 - 1,
    append(M2,[a],NM2),
    rightOrder(M1,NM2),
    determine(T,M1,NM2,A1,B1,NA2,B2,X).
determine([b|T],M1,M2,A1,B1,A2,B2,X) :-
    B2 > 0,
    NB2 is B2 - 1,
    append(M2,[b],NM2),
    rightOrder(M1,NM2),
    determine(T,M1,NM2,A1,B1,A2,NB2,X).

determine(L,M) :-
    countCharacters(L,AS,BS),
    determine(L,[],[],AS,BS,AS,BS,M).
The code above is not that optimized, as I've been studying Prolog for just a few weeks now. However, I need some help or insight on how to implement the same predicate in Haskell, as I have no clue how to backtrack.
If you need more clarifications let me know.
An inefficient way to do this in Haskell would be with the list monad, which simulates nondeterminism.
One way to arrive at a solution is to consider the problem from the opposite direction: how would you generate the possible ways the message could have been interleaved? Essentially for every element in the output, there will have been a choice between taking it from one sender or the other, or all the remaining elements will come from the same sender if one has run out of elements. Expressed literally:
-- Compute all the possible interleavings of a list with itself.
interleavings :: [a] -> [[a]]
interleavings xs0 = go xs0 xs0
  where
    -- If the first list has run out,
    -- return the remainder of the second.
    go [] rs = pure rs
    -- And vice versa.
    go ls [] = pure ls
    -- If both lists are nonempty:
    go ls@(l : ls') rs@(r : rs') = do
      -- Toss a coin;
      choice <- [False, True]
      case choice of
        -- If tails, take an element from the left sender
        -- and prepend it to all possible remaining interleavings.
        False -> fmap (l :) (go ls' rs)
        -- If heads, take from the right sender.
        True -> fmap (r :) (go ls rs')
Note that this generates many duplicate entries, since it doesn’t backtrack or prune:
> interleavings "10"
["1010","1100","1100","1100","1100","1010"]
However, it does point the way to the start of a solution. You want to run the above process in reverse: given an interleaving, generate a series of choices and assume that each element came from the assumed list, keeping track of the deinterleaved lists. If they’re equal at the end, then they represent a valid deinterleaving:
import Control.Monad (guard)

-- The possible deinterleavings of a list
-- whose elements can be compared for equality.
deinterleavings :: (Eq a) => [a] -> [[a]]
-- Begin searching assuming no elements have been sent by either sender.
deinterleavings xs0 = go [] [] xs0
  where
    -- If there is an element remaining:
    go ls rs (x : xs) = do
      -- Toss a coin;
      choice <- [False, True]
      case choice of
        -- If tails, assume it came from the left sender and proceed.
        -- (Note that this accumulates in reverse, adding to the head.)
        False -> go (x : ls) rs xs
        -- If heads, assume the right sender.
        True -> go ls (x : rs) xs
    -- If there are no elements remaining:
    go ls rs [] = do
      -- Require that the accumulated messages be identical.
      guard (ls == rs)
      -- Return the (de-reversed) message.
      pure (reverse ls)
Again this is extremely inefficient:
> deinterleavings "0011001100"
["00110","00110","01100","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01100","01100","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01010","01100","00110","00110"]
But I hope it illustrates the general structure of a solution that you can improve upon.
Consider how you could introduce guards earlier, or accumulate elements differently to prune the search; or use a different monad that does backtracking like Logic; or maintain a stateful set of results with State (or even IO) so that you can check during the computation which results you’ve already seen. Also consider how you could approach the problem from another angle entirely, based on the fact that the interleaved message contains the same string twice as subsequences, since there are standard efficient memoised algorithms for the “longest common subsequence” and “longest repeating subsequence”.
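As one concrete illustration of introducing guards earlier (a sketch of mine, not part of the answer above): after each assumed assignment, require that one accumulated message is a prefix of the other, which prunes hopeless branches immediately:
import Control.Monad (guard)
import Data.List (isPrefixOf)

deinterleavings' :: Eq a => [a] -> [[a]]
deinterleavings' = go [] []
  where
    -- The two messages-so-far must agree on their common prefix,
    -- or they can never end up equal.
    compatible ls rs =
      let ls' = reverse ls
          rs' = reverse rs
      in ls' `isPrefixOf` rs' || rs' `isPrefixOf` ls'
    go ls rs (x : xs) = do
      -- Assume x came from the left or from the right sender,
      -- keeping only assignments that remain consistent.
      (ls', rs') <- [(x : ls, rs), (ls, x : rs)]
      guard (compatible ls' rs')
      go ls' rs' xs
    go ls rs [] = do
      guard (ls == rs)
      pure (reverse ls)
The repeated reverse makes each check linear in the accumulated length; a more serious version would track the shared prefix incrementally.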

Doing greatest common divisor function with Haskell using until

I need help programming a function in Haskell to calculate the greatest common divisor. The problem is that I need to use the until function, which I haven't managed to make work. I tried the following code (which didn't work):
mygcd a b = until (==0) (`mod` (a b)) b
Thanks for your help!
I would try something like this (pseudocode)
mygcd a b = finalize (until endState nextState (a,b))
  where
    finalize (x,y) = ...
    endState (x,y) = some condition here
    nextState (x,y) = (x',y') computed in some way using mod
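For reference, here is one concrete way to fill in that skeleton (my guesses for the elided parts, not the answerer's definitions), using Euclid's algorithm on the pair (x,y):
mygcd :: Integer -> Integer -> Integer
mygcd a b = finalize (until endState nextState (a, b))
  where
    finalize  (x, _) = x              -- the answer is the first component
    endState  (_, y) = y == 0         -- stop when the second component hits 0
    nextState (x, y) = (y, x `mod` y) -- one step of Euclid's algorithm
For example, mygcd 12 18 steps through (12,18), (18,12), (12,6), (6,0) and returns 6.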
mygcd a b = until (==0) (`mod` (a b)) b
> :t until
until :: (a -> Bool) -> (a -> a) -> a -> a
So we have our predicate (==0), our function (`mod` (a b)), and starting value b. My first issue with this, however: it won't terminate until the predicate holds true, so it can never produce anything but 0. What does that tell us? We also see that a must be of type Integral a => a -> a, since it's applied to b to get something we can use in b `mod` x. And repeating that won't yield a different result, so the expression either produces 0 immediately, or after one mod, or never finishes.
I think, to get something useful out of until, you need a more complicated predicate than an equality check. Perhaps a question like "what's the lowest multiple of 13 over 100", or you might make the value a tuple and check one part of it. until looks a bit to me like:
until p f v = head (dropWhile (not . p) (iterate f v))
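That equivalence makes the earlier suggestion easy to try out; for instance (my example), the lowest multiple of 13 over 100:
> until (> 100) (+ 13) 13
104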

Haskell pattern-matching idiom

I'm making a calculator on abstract integers and I'm doing an awful lot of pattern matching. I can write
add Zero x = x
add (P x) y = next $ add (prev $ P x) y
add (N x) y = prev $ add (next $ N x) y
or
add Zero x = x
add x y = case x of
            P _ -> next $ add (prev x) y
            _   -> prev $ add (next x) y
While the first way is shorter, something in the second way appeals to me more.
Which is the preferred way to do this?
Use as-patterns.
add Zero y = y
add x@(P _) y = next $ add (prev x) y
add x@(N _) y = prev $ add (next x) y
I'd also consider abstracting out the common structure of your two recursive branches by noting that you just swap the roles of the prev and next functions depending on whether x is positive or negative:
add Zero x = x
add x y = f $ add (g x) y
  where (f, g) = case x of
                   P _ -> (next, prev)
                   N _ -> (prev, next)
About this style:
add Zero x = x
add x y = case x of
            P _ -> next $ add (prev x) y
            _   -> prev $ add (next x) y
On the positive side, it avoids some repetition, which is good.
On the negative side, the case looks non-exhaustive at first sight. Indeed, to convince oneself that the pattern match is really exhaustive, we have to reason about the possible values for the x in case x of, and see that at runtime it cannot be Zero, because that case was handled above. This requires far more mental effort than the first snippet, which is obviously exhaustive.
Worse, when turning on warnings, as we should always do, GHC complains since it is not convinced that the case is exhaustive.
Personally, I wish the designers of Haskell had forbidden non-exhaustive matches entirely. I'd use a -Werror-on-non-exhaustive-matches if there were one. I would like to be forced to write e.g.
case something of
  A -> ...
  B -> ...
  _ -> error "the impossible happened"
than having the last branch being silently inserted by the compiler for me.
Consider using the math-style definition of integers as congruence classes of pairs of naturals under the equivalence relation:
{((a,b), (c,d)) | b+c == d+a}
The intuition is that the pair of naturals (a,b) represents b-a. As mentioned in the Wikipedia article, this often reduces the number of special cases compared to the "0/positive/negative" definition. In particular, the addition operation you ask about implementing becomes a one-liner:
-- both Int and Integer are taken
data Int' = Int Nat Nat

instance Num Int' where
  -- b-a + d-c = (b+d)-(a+c)
  Int a b + Int c d = Int (a + c) (b + d)
It's kind of fun to work through the different operations with this representation. For example, Eq can be implemented with the equation given above, and Ord is similar:
instance Eq Int' where
  -- b-a == d-c  <=>  b+c == d+a
  Int a b == Int c d = b+c == d+a

instance Ord Int' where
  -- compare (b-a) (d-c) = compare (b+c) (d+a)
  compare (Int a b) (Int c d) = compare (b+c) (d+a)
On occasion, it can be handy to normalize these things. Just like fractions can be reduced by multiplying the numerator and denominator by the same number until they're relatively prime, these things can be reduced by adding or subtracting the same number to both parts until (at least) one of them is zero.
normalize (Int (S a) (S b)) = normalize (Int a b)
normalize v = v
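If you want to actually run these snippets, here is one minimal way to fill in the missing Nat pieces (my stand-ins; the answer assumes a Nat type with arithmetic already exists):
data Nat = Z | S Nat deriving (Eq, Show)

-- Peano addition, standing in for + on Nat
addNat :: Nat -> Nat -> Nat
addNat Z     n = n
addNat (S m) n = S (addNat m n)

data Int' = Int Nat Nat deriving Show

-- b-a == d-c  <=>  b+c == d+a, written with addNat
instance Eq Int' where
  Int a b == Int c d = addNat b c == addNat d a

-- cancel matching successors until one component is zero
normalize :: Int' -> Int'
normalize (Int (S a) (S b)) = normalize (Int a b)
normalize v                 = v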

Dynamic List Comprehension in Haskell

Suppose I have a list comprehension that returns a list of sequences, where the elements chosen depend on each other (see example below). Is there a way to (conveniently) program the number of elements and their associated conditions based on an earlier computation? For example, return type [[a,b,c]] or [[a,b,c,d,e]] depending on another value in the program? Also, are there other/better ways than a list comprehension to formulate the same idea?
(I thought it possible, although cumbersome and limited, to write out a larger list comprehension to start with and trim it, by adding to s a parameter and helper functions that could make one or more of the elements a value that could easily be filtered later, with the associated conditions True by default.)
s = [[a, b, c, d] | a <- list, someCondition a,
                    b <- list, b /= a, not (someCondition b),
                    otherCondition a b,
                    c <- list, c /= a, c /= b, not (someCondition c),
                    otherCondition b c,
                    d <- list, d /= a, d /= b, d /= c,
                    someCondition d, someCondition (last d),
                    otherCondition c d]
The question is incredibly difficult to understand.
Is there a way to (conveniently) program the number of elements and their associated conditions based on an earlier computation?
The problem is "program" is not really an understandable verb in this sentence, because a human programs a computer, or programs a VCR, but you can't "program a number". So I don't understand what you are trying to say here.
But I can give you code review, and maybe through code review I can understand the question you are asking.
Unsolicited code review
It sounds like you are trying to solve a maze by eliminating dead ends, maybe.
What your code actually does is:
Generate a list of cells that are not dead ends or adjacent to dead ends, called filtered
Generate sequences of adjacent cells from step 1, called sequences
Concatenate four such adjacent sequences into a route.
Major problem: this only works if a correct route is exactly eight tiles long! Try to solve this maze:
[E]-[ ]-[ ]-[ ]
              |
[ ]-[ ]-[ ]-[ ]
 |
[ ]-[ ]-[ ]-[ ]
              |
[ ]-[ ]-[ ]-[ ]
 |
[ ]-[ ]-[ ]-[E]
So, working backwards from the code review, it sounds like your question is:
How do I generate a list if I don't know how long it is beforehand?
Solutions
You can solve a maze with a search (DFS, BFS, A*).
import Control.Monad

-- | Maze cells are identified by integers
type Cell = Int

-- | A maze is a map from cells to adjacent cells
type Maze = Cell -> [Cell]

maze :: Maze
maze = ([ [1],     [0,2,5],    [1,3],    [2],
          [5],     [4,6,1,9],  [5,7],    [6,11],
          [12],    [5,13],     [9],      [7,15],
          [8,16],  [14,9,17],  [13,15],  [14,11],
          [12,17], [13,16,18], [17,19],  [18] ] !!)

-- | Find paths from the given start to the end
solve :: Maze -> Cell -> Cell -> [[Cell]]
solve maze start = solve' [] where
  solve' path end =
    let path' = end : path
    in if start == end
         then return path'
         else do neighbor <- maze end
                 guard (neighbor `notElem` path)
                 solve' path' neighbor
The function solve works by depth-first search. Rather than putting everything in a single list comprehension, it works recursively.
In order to find a path from start to end, if start /= end,
Look at all cells adjacent to the end, neighbor <- maze end,
Make sure that we're not backtracking over a cell, guard (neighbor `notElem` path),
Try to find a path from start to neighbor.
Don't try to understand the whole function at once, just understand the bit about recursion.
Summary
If you want to find the route from cell 0 to cell 19, recurse: we know that cells 18 and 19 are connected (because they are directly connected), so we can instead try to solve the problem of finding a route from cell 0 to cell 18.
This is recursion.
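If you run it on the maze above, the first path found (assuming I've traced the neighbor order correctly) is:
λ> head (solve maze 0 19)
[0,1,5,6,7,11,15,14,13,17,18,19]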
Footnotes
The guard,
someCondition a == True
Is equivalent to,
someCondition a
And therefore also equivalent to,
(someCondition a == True) == True
Or,
(someCondition a == (True == True)) == (True == (True == True))
Or,
someCondition a == (someCondition a == someCondition a)
The first one, someCondition a, is fine.
Footnote about do notation
The do notation in the above example is equivalent to list comprehension,
do neighbor <- maze end
   guard (neighbor `notElem` path)
   solve' path' neighbor
The equivalent code in list comprehension syntax is,
[result | neighbor <- maze end,
          neighbor `notElem` path,
          result <- solve' path' neighbor]
Is there a way to (conveniently) program the number of elements and their associated conditions based on an earlier computation? For example, return type [[a,b,c]] or [[a,b,c,d,e]] depending on another value in the program?
I suppose you want to encode the length of the list (or vector) statically in the type signature. The length of standard lists cannot be checked at the type level.
One approach to do that is to use phantom types, and introduce dummy data types which will encode different sizes:
-- UArray and listArray come from Data.Array.Unboxed
newtype Vector d = Vector { vecArray :: UArray Int Float }

-- using the EmptyDataDecls extension too
data D1
data D2
data D3
Now you can create vectors of different length which will have distinct types:
vector2d :: Float -> Float -> Vector D2
vector2d x y = Vector $ listArray (1,2) [x,y]

vector3d :: Float -> Float -> Float -> Vector D3
vector3d x y z = Vector $ listArray (1,3) [x,y,z]
If the length of the output depends on the length of the input, then consider using type-level arithmetics to parametrize the output.
You can find more by googling for "Haskell statically sized vectors".
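For comparison, here is roughly how the same idea looks with GADTs and DataKinds (my sketch; the answer itself predates these being common):
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Peano naturals, promoted to the type level by DataKinds
data N = Z | S N

-- A length-indexed vector: the length is part of the type
data Vec (n :: N) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- head is now total: the type rules out empty vectors
safeHead :: Vec ('S n) a -> a
safeHead (Cons x _) = x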
A simpler solution is to use tuples, which are fixed length. If your function can produce either a 3-tuple or a 5-tuple, wrap them with an Either data type: Either (a,b,c) (a,b,c,d,e).
Looks like you're trying to solve some logic puzzle by unique selection from a finite domain. Consult these:
Euler 43 - is there a monad to help write this list comprehension?
Splitting list into a list of possible tuples
The way this helps us is, we carry our domain around while we're making picks from it; and the next pick is made from the narrowed domain containing what's left after the previous pick, so a chain is naturally formed. E.g.
p43 = sum [ fromDigits [v0,v1,v2,v3,v4,v5,v6,v7,v8,v9]
          | (dom5,v5) <- one_of [0,5] [0..9]   -- [0..9] is the
          , (dom6,v6) <- pick_any dom5         -- initial domain
          , (dom7,v7) <- pick_any dom6
          , rem (100*v5 + 10*v6 + v7) 11 == 0
          ....
....
-- all possibilities of picking one elt from a domain
pick_any :: [a] -> [([a], a)]
pick_any [] = []
pick_any (x:xs) = (xs,x) : [ (x:dom,y) | (dom,y) <- pick_any xs ]

-- all possibilities of picking one of the provided elts from a domain
-- (assume unique domains, i.e. no repetitions)
one_of :: (Eq a) => [a] -> [a] -> [([a], a)]
one_of ns xs = [ (ys,y) | let choices = pick_any xs, n <- ns,
                          (ys,y) <- take 1 $ filter ((==n).snd) choices ]
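To make the "domain carried along" idea concrete, a quick run (my example); each pair is (remaining domain, pick):
λ> pick_any [1,2,3]
[([2,3],1),([1,3],2),([1,2],3)]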
You can trivially check the number of elements in your answer as part of your list comprehension:
s = [answer | a <- .... , let answer=[....] , length answer==4 ]
or just create different answers based on a condition,
s = [answer | a <- .... , let answer=if condition then [a,b,c] else [a]]
You have Data.List.subsequences
You can write your list comprehension in monadic form (see guards in Monad Comprehensions):
(Explanation: the monad must be an instance of MonadPlus, which supports failure. guard False makes the monad fail, evaluating to mzero; subsequent results are appended with mplus = (++) for the list monad.)
import Control.Monad (forM_, guard, mzero)
import qualified Data.List as List

myDomain = [1..9] -- or whatever

-- propertyA, propertyB, propertyC are assumed to be defined elsewhere
validCombinations :: [a] -> [[a]]
validCombinations domainList = do
    combi <- List.subsequences domainList
    case combi of
      [a,b] -> do
        guard (propertyA a && propertyB b)
        return combi
      [a,b,c] -> do
        guard (propertyA a && propertyB b && propertyC c)
        return combi
      _ -> mzero   -- i.e. [], since guard False would have the wrong type here

main = do
    forM_ (validCombinations myDomain) print
Update again, obtaining elements recursively, saving combinations and checks
import Control.Monad

validCombinations :: Eq a => Int -> Int -> [a] -> [a -> Bool] -> [a] -> [[a]]
validCombinations indx size domainList propList accum = do
    elt <- domainList             -- try all domain elements
    let prop = propList !! indx
    guard $ prop elt              -- some property
    guard $ elt `notElem` accum   -- not repeated
    {-
    case accum of
      prevElt : _ -> guard $ some_combined_check_with_previous elt prevElt
      _           -> guard True
    -}
    if size > 1
      then do
        -- append recursively subsequent positions
        other <- validCombinations (indx+1) (size-1) domainList propList (elt : accum)
        return $ elt : other
      else return [elt]
myDomain = [1..3] :: [Int]
myProps = repeat (> 1)

main = do
    forM_ (validCombinations 0 size myDomain myProps []) print
  where
    size = 2
Result for size 2, with a non-trivial result:
[2,3]
[3,2]

CPS in curried languages

How does CPS in curried languages like lambda calculus or OCaml even make sense? Technically, all functions have one argument. So say we have a CPS version of addition in one such language:
cps-add k n m = k ((+) n m)
And we call it like
(cps-add random-continuation 1 2)
This is then the same as:
(((cps-add random-continuation) 1) 2)
I already see two calls there that aren't tail calls, and in reality a complexly nested expression: (cps-add random-continuation) returns a value, namely a function that consumes a number, which in turn returns a function that consumes another number and then delivers the sum of both to that random-continuation. But we can't work around this value-returning by simply translating into CPS again, because we can only give each function one argument. We need at least two, to make room for the continuation and the 'actual' argument.
Or am I missing something completely?
Since you've tagged this with Haskell, I'll answer in that regard: In Haskell, the equivalent of doing a CPS transform is working in the Cont monad, which transforms a value x into a higher-order function that takes one argument and applies it to x.
So, to start with, here's 1 + 2 in regular Haskell: (1 + 2). And here it is in the continuation monad:
contAdd x y = do x' <- x
                 y' <- y
                 return $ x' + y'
...not terribly informative. To see what's going on, let's disassemble the monad. First, removing the do notation:
contAdd x y = x >>= (\x' -> y >>= (\y' -> return $ x' + y'))
The return function lifts a value into the monad, and in this case is implemented as \x k -> k x, or using an infix operator section as \x -> ($ x).
contAdd x y = x >>= (\x' -> y >>= (\y' -> ($ x' + y')))
The (>>=) operator (read "bind") chains together computations in the monad, and in this case is implemented as \m f k -> m (\x -> f x k). Changing the bind function to prefix form and substituting in the lambda, plus some renaming for clarity:
contAdd x y = (\m1 f1 k1 -> m1 (\a1 -> f1 a1 k1)) x (\x' -> (\m2 f2 k2 -> m2 (\a2 -> f2 a2 k2)) y (\y' -> ($ x' + y')))
Reducing some function applications:
contAdd x y = (\k1 -> x (\a1 -> (\x' -> (\k2 -> y (\a2 -> (\y' -> ($ x' + y')) a2 k2))) a1 k1))
contAdd x y = (\k1 -> x (\a1 -> y (\a2 -> ($ a1 + a2) k1)))
And a bit of final rearranging and renaming:
contAdd x y = \k -> x (\x' -> y (\y' -> k $ x' + y'))
In other words: The arguments to the function have been changed from numbers, into functions that take a number and return the final result of the entire expression, just as you'd expect.
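To convince yourself the final form really computes the sum, here is a small runnable check (my addition, specialised to Int, with id as the final continuation):
-- CPS-lifted constants: a value becomes "give me a continuation
-- and I'll feed it the value".
one, two :: (Int -> r) -> r
one k = k 1
two k = k 2

contAdd :: ((Int -> r) -> r) -> ((Int -> r) -> r) -> (Int -> r) -> r
contAdd x y = \k -> x (\x' -> y (\y' -> k (x' + y')))

main :: IO ()
main = print (contAdd one two id)   -- prints 3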
Edit: A commenter points out that contAdd itself still takes two arguments in curried style. This is sensible because it doesn't use the continuation directly, but not necessary. To do otherwise, you'd need to first break the function apart between the arguments:
contAdd x = x >>= (\x' -> return (\y -> y >>= (\y' -> return $ x' + y')))
And then use it like this:
foo = do f <- contAdd (return 1)
         r <- f (return 2)
         return r
Note that this is really no different from the earlier version; it's simply packaging the result of each partial application as taking a continuation, not just the final result. Since functions are first-class values, there's no significant difference between a CPS expression holding a number vs. one holding a function.
Keep in mind that I'm writing things out in very verbose fashion here to make explicit all the steps where something is in continuation-passing style.
Addendum: You may notice that the final expression looks very similar to the de-sugared version of the monadic expression. This is not a coincidence, as the inward-nesting nature of monadic expressions that lets them change the structure of the computation based on previous values is closely related to continuation-passing style; in both cases, you have in some sense reified a notion of causality.
Short answer: of course it makes sense. You can apply a CPS transform directly; you will just have lots of cruft, because each argument will have, as you noticed, its own attached continuation.
In your example, I will consider that there is a +(x,y) uncurried primitive, and that you're asking what the translation is of
let add x y = +(x,y)
(This add faithfully represents OCaml's (+) operator)
add is syntactically equivalent to
let add = fun x -> (fun y -> +(x, y))
So you apply a CPS transform¹ and get
let add_cps = fun x kx -> kx (fun y ky -> ky +(x,y))
If you want translated code that looks more like something you could have willingly written, you can devise a finer transformation that treats known-arity functions as non-curried functions, and treats all parameters as a whole (as you have in non-curried languages, and as functional compilers already do for obvious performance reasons).
¹: I wrote "a CPS transform" because there is no "one true CPS translation". Different translations have been devised, producing more or less continuation-related garbage. The formal CPS translations are usually defined directly on lambda-calculus, so I suppose you're having a less formal, more hand-made CPS transform in mind.
The good properties of CPS (as a style that programs respect, and not a specific transformation into this style) are that the order of evaluation is completely explicit, and that all calls are tail calls. As long as you respect those, you're relatively free in what you can do. Handling curried functions specifically is thus perfectly fine.
Remark: your (cps-add k 1 2) version can also be considered tail-recursive if you assume the compiler detects and optimizes that cps-add actually always takes 3 arguments, and doesn't build intermediate closures. That may seem far-fetched, but it's the exact same assumption we use when reasoning about tail calls in non-CPS programs in those languages.
Yes, technically all functions can be decomposed into functions of one argument. However, when you want to use CPS, the only thing you are saying is that at a certain point of the computation, you run the continuation.
Using your example, let's have a look. To make things a little easier, let's deconstruct cps-add into its normal form, where it is a function taking only one argument.
(cps-add k) -> n -> m = k ((+) n m)
Note at this point that the continuation, k, is not being evaluated (Could this be the point of confusion for you?).
Here we have a function, cps-add k, that receives a function as an argument and then returns a function that takes another argument, n.
((cps-add k) n) -> m = k ((+) n m)
Now we have a function that takes an argument, m.
So I suppose what I am trying to point out is that currying does not get in the way of CPS-style programming. Hope that helps in some way.
