What is the connection between primitive recursion and catamorphisms? - haskell

Using the following catamorphism for natural numbers I can implement various arithmetic algorithms without having to deal with recursion:
cataNat :: b -> (b -> b) -> Natural -> b
cataNat zero succ = go
  where
    go n = if n <= 0 then zero else succ (go (n - 1))
fib :: Natural -> Natural
fib = fst . cataNat (0, 1) (\(a, b) -> (b, a + b))
cataNat looks similar to primitive recursion to me. At least each application of it seems guaranteed to terminate, no matter which combination of zero and succ is provided. With each iteration the overall problem is reduced to the next smaller/simpler problem instance. So even if it is technically not primitive recursion it seems to be equally expressive. If this is true it would mean that a catamorphism is not enough to express general recursion. We would probably need a hylomorphism for that. Is my reasoning correct, that is, does the equivalence hold for any type of catamorphism, not just for natural numbers?

Primitive recursion corresponds directly to a paramorphism.
You're correct that a catamorphism has equivalent theoretical power to a paramorphism, but they can be different in important ways in operational terms. For an example, let's go to lists instead of Nats.
cata :: b -> (a -> b -> b) -> [a] -> b
cata = flip foldr -- I'm lazy, but this argument order makes a bit more sense for this example
para :: b -> (a -> [a] -> b -> b) -> [a] -> b
para z _ [] = z
para z f (x:xs) = f x xs (para z f xs)
-- Removes the first element from the list which is equal to the other argument
delete1 :: Eq a => a -> [a] -> [a]
delete1 x xs = cata (const []) (\el k found -> if not found && el == x then k True else el : k found) xs False
-- Removes the first element from the list which is equal to the other argument
delete2 :: Eq a => a -> [a] -> [a]
delete2 x xs = para [] (\el raw processed -> if el == x then raw else el : processed) xs
Look at how awkward delete1 is, compared to delete2. Not only do you have to contort your logic by making the result of cata a function, but there's a very real operational cost, too. You have to traverse everything in the list after finding a matching element, and re-create all the (:) constructors. That can have a noticeable cost in efficiency. In comparison, delete2, when it finds the target element, can just use the existing tail of the list for the remainder, without even looking at it. Of course, most uses of foldr (real world, not this example) don't produce a function and don't want access to the unprocessed tail of the list. For them, the catamorphism is going to be slightly more efficient simply because of passing around less data.
So in terms of theoretical power, they're equivalent. In operational terms, each has a use, though catamorphisms are a lot more common.
For some expansion of the idea in more general terms, see the recursion-schemes library. It uses a rather different-looking formulation of the idea so that it can abstract over data types with different shapes, instead of needing a different type for cata/para for each data type they can be applied to. But it really is just an alternate way of packing up the same ideas, and other kinds of morphisms are covered as well, including much more niche (or even possibly useless) ones.
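To make the correspondence concrete for the naturals in the question: a paramorphism gives the step function access to the predecessor as well as the recursive result, which is exactly the shape of primitive recursion. A minimal sketch (paraNat and factorial are illustrative names, not from the question or the recursion-schemes library):

import Numeric.Natural (Natural)

paraNat :: b -> (Natural -> b -> b) -> Natural -> b
paraNat zero succ = go
  where
    go n = if n <= 0 then zero else succ (n - 1) (go (n - 1))

-- Factorial is the textbook primitive-recursive function: the step needs
-- the predecessor (n - 1) alongside the recursive result, so it falls out
-- directly from paraNat.
factorial :: Natural -> Natural
factorial = paraNat 1 (\p acc -> (p + 1) * acc)

With cataNat alone, the predecessor has to be reconstructed by threading a counter through a pair (the same trick the fib example uses), e.g. snd . cataNat (0, 1) (\(n, f) -> (n + 1, (n + 1) * f)).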

How do I fix ‘Eq a’ has kind ‘GHC.Prim.Constraint’ error in haskell?

I am writing a small function in Haskell to check if a list is a palindrome by comparing it with its reverse.
checkPalindrome :: [Eq a] -> Bool
checkPalindrome l = (l == reverse l)
  where
    reverse :: [a] -> [a]
    reverse xs
      | null xs = []
      | otherwise = (last xs) : reverse newxs
      where
        before = (length xs) - 1
        newxs = take before xs
I understand that I should use [Eq a] in the function definition because I use the equality operator later on, but I get this error when I compile:
Expected kind ‘*’, but ‘Eq a’ has kind ‘GHC.Prim.Constraint’
In the type signature for ‘checkPalindrome’:
checkPalindrome :: [Eq a] -> Bool
P.S. Feel free to correct me if I am doing something wrong with my indentation; I'm very new to the language.
Unless Haskell adopted a new syntax, your type signature should be:
checkPalindrome :: Eq a => [a] -> Bool
Declare the constraint on the left hand side of a fat-arrow, then use it on the right hand side.
Unlike OO languages, Haskell makes a quite fundamental distinction between
Constraints – typeclasses like Eq.
Types – concrete types like Bool or lists of some type.
In OO languages, both of these would be represented by classes†, but a Haskell type class is completely different. You never have “values of class C”, only “types of class C”. (These concrete types may then contain values, but the classes don't.)
This distinction may seem pedantic, but it's actually very useful. What you wrote, [Eq a] -> Bool, would supposedly mean: each element of the list must be comparable... but comparable to what? You could have elements of different types in the list; how do you know that these elements are comparable to each other? In Haskell, that's no issue, because whenever the function is used you first settle on one type a. This type must be in the Eq class. The list then must have all elements of that same type a. This way you ensure that each element of the list is comparable to all of the others, not just, like, comparable to itself. Hence the signature
checkPalindrome :: Eq a => [a] -> Bool
This is the usual distinction on the syntax level: constraints must always‡ be written on the left of an => (implication arrow).
The constraints before the => are “implicit arguments”: you don't explicitly “pass Eq a to the function” when you call it, instead you just pass the stuff after the =>, i.e. in your example a list of some concrete type. The compiler will then look at the type and automatically look up its Eq typeclass instance (or raise a compile-time error if the type does not have such an instance). Hence,
GHCi, version 7.10.2: http://www.haskell.org/ghc/ :? for help
Prelude> let palin :: Eq a => [a] -> Bool; palin l = l==reverse l
Prelude> palin [1,2,3,2,1]
True
Prelude> palin [1,2,3,4,5]
False
Prelude> palin [sin, cos, tan]
<interactive>:5:1:
    No instance for (Eq (a0 -> a0))
      (maybe you haven't applied enough arguments to a function?)
      arising from a use of ‘palin’
    In the expression: palin [sin, cos, tan]
    In an equation for ‘it’: it = palin [sin, cos, tan]
...because functions can't be equality-compared.
†Constraints may in OO also be interfaces / abstract base classes, which aren't “quite proper classes” but are still in many ways treated the same way as OO value-classes. Most modern OO languages now also support Haskell-style parametric polymorphism in addition to “element-wise”/covariant/existential polymorphism, but they require somewhat awkward extends trait-mechanisms because this was only implemented as an afterthought.
‡There are also functions which have “constraints in the arguments”, but that's a more advanced concept called rank-n polymorphism.
This is really an extended comment. Aside from your little type error, your function has another problem: it's extremely inefficient. The main problem is your definition of reverse.
reverse :: [a] -> [a]
reverse xs
  | null xs = []
  | otherwise = (last xs) : reverse newxs
  where
    before = (length xs) - 1
    newxs = take before xs
last is O(n), where n is the length of the list. length is also O(n), where n is the length of the list. And take is O(k), where k is the length of the result. So your reverse will end up taking O(n^2) time. One fix is to just use the standard reverse function instead of writing your own. Another is to build up the result recursively, accumulating the result as you go:
reverse :: [a] -> [a]
reverse xs0 = go [] xs0
  where
    go acc [] = acc
    go acc (x : xs) = go (x : acc) xs
This version is O(n).
There's another source of inefficiency in your implementation:
checkPalindrome l = (l == reverse l)
This isn't nearly as bad, but let's look at what it does. Suppose we have the string "abcdefedcba". Then we test whether "abcdefedcba" == "abcdefedcba". By the time we've checked half the list, we already know the answer. So we'd like to stop there! There are several ways to accomplish this. The simplest efficient one is probably to calculate the length of the list as part of the process of reversing it so we know how much we'll need to check:
reverseCount :: [a] -> (Int, [a])
reverseCount xs0 = go 0 [] xs0 where
  go len acc [] = (len, acc)
  go len acc (x : xs) = len `seq`
    go (len + 1) (x : acc) xs
Don't worry about the len `seq` bit too much; that's just a bit of defensive programming to make sure laziness doesn't make things inefficient; it's probably not even necessary if optimizations are enabled. Now you can write a version of == that only looks at the first n elements of the lists:
eqTo :: Eq a => Int -> [a] -> [a] -> Bool
eqTo 0 _ _ = True
eqTo _ [] [] = True
eqTo n (x : xs) (y : ys) =
  x == y && eqTo (n - 1) xs ys
eqTo _ _ _ = False
So now
isPalindrome xs = eqTo ((len + 1) `quot` 2) xs rev_xs
  where
    (len, rev_xs) = reverseCount xs
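A quick sanity check in GHCi (session written out by hand, assuming the definitions above are in scope):

Prelude> isPalindrome "abcdefedcba"
True
Prelude> isPalindrome "abcdefedcb"
False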
Here's another way, that's more efficient and arguably more elegant, but a bit tricky. We don't actually need to reverse the whole list; we only need to reverse half of it. This saves memory allocation. We can use a tortoise and hare trick:
splitReverse ::
  [a] ->
  ( [a]      -- the first half, reversed
  , Maybe a  -- the middle element
  , [a] )    -- the second half, in order
splitReverse xs0 = go [] xs0 xs0 where
  go front rear [] = (front, Nothing, rear)
  go front (r : rs) [_] = (front, Just r, rs)
  go front (r : rs) (_ : _ : xs) =
    go (r : front) rs xs
Now
isPalindrome xs = front == rear
  where
    (front, _, rear) = splitReverse xs
Now for some numbers, using the test case
somePalindrome :: [Int]
somePalindrome = [1..10000] ++ [10000,9999..1]
Your original implementation takes 7.523s (2.316 mutator; 5.204 GC) and allocates 11 gigabytes to build the test list and check if it's a palindrome. My counting implementation takes less than 0.01s and allocates 2.3 megabytes. My tortoise and hare implementation takes less than 0.01s and allocates 1.7 megabytes.
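(For reference, numbers like these can be reproduced with a tiny harness along the following lines; the module layout and flags here are a guess at the setup, not taken from the answer. Compile with ghc -O2 and run with +RTS -s, which prints total allocation, mutator time and GC time.)

main :: IO ()
main = print (isPalindrome somePalindrome)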

Directly generating specific subsets of a powerset?

Haskell's expressiveness enables us to rather easily define a powerset function:
import Control.Monad (filterM)
powerset :: [a] -> [[a]]
powerset = filterM (const [True, False])
To be able to perform my task it is crucial for said powerset to be sorted by a specific function, so my implementation kind of looks like this:
import Data.List (sortBy)
import Data.Ord (comparing)
powersetBy :: Ord b => ([a] -> b) -> [a] -> [[a]]
powersetBy f = sortBy (comparing f) . powerset
Now my question is whether there is a way to only generate a subset of the powerset given a specific start and endpoint, where f(start) < f(end) and |start| < |end|. For example, my parameter is a list of integers ([1,2,3,4,5]) and they are sorted by their sum. Now I want to extract only the subsets in a given range, let's say 3 to 7. One way to achieve this would be to filter the powerset to only include my range, but this seems (and is) inefficient when dealing with larger sets:
badFunction :: Ord b => b -> b -> ([a] -> b) -> [a] -> [[a]]
badFunction start end f = filter (\x -> f x >= start && f x <= end) . powersetBy f
badFunction 3 7 sum [1,2,3,4,5] produces [[1,2],[3],[1,3],[4],[1,4],[2,3],[5],[1,2,3],[1,5],[2,4],[1,2,4],[2,5],[3,4]].
Now my question is whether there is a way to generate this list directly, without having to generate all 2^n subsets first, since it will improve performance drastically by not having to check all elements but rather generating them "on the fly".
If you want to allow for completely general ordering-functions, then there can't be a way around checking all elements of the powerset. (After all, how would you know there isn't a special clause built in that gives, say, the particular set [6,8,34,42] a completely different ranking from its neighbours?)
However, you could make the algorithm already drastically faster by
Only sorting after filtering: sorting is O (n · log n), so you want to keep n low here; for the O (n) filtering step it matters less. (And anyway, the number of elements doesn't change through sorting.)
Apply the ordering-function only once to each subset.
So
import Control.Arrow ((&&&))
lessBadFunction :: Ord b => (b,b) -> ([a]->b) -> [a] -> [[a]]
lessBadFunction (start,end) f
    = map snd . sortBy (comparing fst)
    . filter (\(k,_) -> k>=start && k<=end)
    . map (f &&& id)
    . powerset
Basically, let's face it, powersets of anything but a very small basis are infeasible. The particular application “sum in a certain range” is pretty much a packing problem; there are quite efficient ways to do that kind of thing, but you'll have to give up the idea of perfect generality and of quantification over general subsets.
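To illustrate what giving up that generality buys: for the concrete “sum within [lo, hi]” cost, and assuming the input list contains only non-negative numbers, a direct generator can prune every branch whose partial sum already exceeds the upper bound, so the rest of the powerset is never materialised. The name subsetsInRange and this particular formulation are only a sketch, not a reference implementation:

-- Only valid for non-negative inputs: once the running sum exceeds hi,
-- no extension of the current subset can come back into range.
subsetsInRange :: Int -> Int -> [Int] -> [[Int]]
subsetsInRange lo hi = go 0
  where
    go acc _ | acc > hi = []                -- prune the whole branch
    go acc []           = [[] | acc >= lo]  -- keep the subset iff its sum reached lo
    go acc (x:xs)       =
      map (x:) (go (acc + x) xs)  -- subsets that include x
        ++ go acc xs              -- subsets that skip x

subsetsInRange 3 7 [1,2,3,4,5] yields the same thirteen subsets as badFunction 3 7 sum [1,2,3,4,5], only in a different order, so a final sortBy (comparing sum) is still needed if the ordering matters.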
Since your problem is essentially a constraint satisfaction problem, using an external SMT solver might be the better alternative here; assuming you can afford the extra IO in the type and the need for such a solver to be installed. The SBV library allows construction of such problems. Here's one encoding:
import Data.SBV
-- c is the cost type
-- e is the element type
pick :: (Num e, SymWord e, SymWord c) => c -> c -> ([SBV e] -> SBV c) -> [e] -> IO [[e]]
pick begin end cost xs = do
    solutions <- allSat constraints
    return $ map extract $ extractModels solutions
  where
    extract ts = [x | (t, x) <- zip ts xs, t]
    constraints = do
      tags <- mapM (const free_) xs
      let tagged    = zip tags xs
          finalCost = cost [ite t (literal x) 0 | (t, x) <- tagged]
      solve [finalCost .>= literal begin, finalCost .<= literal end]
test :: IO [[Integer]]
test = pick 3 7 sum [1,2,3,4,5]
We get:
Main> test
[[1,2],[1,3],[1,2,3],[1,4],[1,2,4],[1,5],[2,5],[2,3],[2,4],[3,4],[3],[4],[5]]
For large lists, this technique will beat out generating all subsets and filtering, assuming the cost function generates reasonable constraints. (Addition will typically be OK; if you have multiplications, the backend solver will have a harder time.)
(As a side note, you should never use filterM (const [True, False]) to generate power-sets to start with! While that expression is cute and fun, it is extremely inefficient!)
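If you want a ready-made library alternative, Data.List.subsequences enumerates the same subsets, just in a different order (a small sketch, with powerset' as an illustrative name):

import Data.List (subsequences)

powerset' :: [a] -> [[a]]
powerset' = subsequences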

List based on right Kan extension

In “Kan Extensions for Program Optimisation” by Ralf Hinze there is a definition of a List type based on the right Kan extension of the forgetful functor from the category of monoids along itself (section 7.4). The paper gives the Haskell implementation as follows:
newtype List a = Abstr {
    apply :: forall z . (Monoid z) => (a -> z) -> z
  }
I was able to define usual nil and cons constructors:
nil :: List a
nil = Abstr (\f -> mempty)
cons :: a -> List a -> List a
cons x (Abstr app) = Abstr (\f -> mappend (f x) (app f))
With the following instance of Monoid class for Maybe functor, I managed to define head function:
instance Monoid (Maybe a) where
  mempty = Nothing
  mappend Nothing m = m
  mappend (Just a) m = Just a
head :: List a -> Maybe a
head (Abstr app) = app Just
Question: How can one define tail function?
Here is a rather principled solution to implementing head and tail in one go (full gist):
First of all, we know how to append lists (it will be useful later on):
append :: List a -> List a -> List a
append (Abstr xs) (Abstr ys) = Abstr (\ f -> xs f <> ys f)
Then we introduce a new type Split which we will use to detect whether a List is empty or not (and get, in the case it's non empty, a head and a tail):
newtype Split a = Split { outSplit :: Maybe (a, List a) }
This new type forms a monoid: indeed we know how to append two lists.
instance Monoid (Split a) where
  mempty = Split Nothing
  mappend (Split Nothing) (Split nns) = Split nns
  mappend (Split mms) (Split Nothing) = Split mms
  mappend (Split (Just (m, ms))) (Split (Just (n, ns))) =
    Split $ Just (m, append ms (cons n ns))
Which means that we can get a function from List a to Split a using List a's apply:
split :: List a -> Split a
split xs = apply xs $ \ a -> Split $ Just (a, nil)
head and tail can finally be trivially derived from split:
head :: List a -> Maybe a
head = fmap fst . outSplit . split
tail :: List a -> Maybe (List a)
tail = fmap snd . outSplit . split
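For intuition about the representation, it can help to round-trip with ordinary lists; toList and fromList below are illustrative helpers (not from the paper or the answer), using the ordinary list monoid for toList:

toList :: List a -> [a]
toList (Abstr app) = app (\x -> [x])

fromList :: [a] -> List a
fromList = foldr cons nil

-- e.g. head (fromList "abc")                == Just 'a'
--      fmap toList (tail (fromList "abc"))  == Just "bc"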
This implementation of lists as free monoids is provided in the package fmlist, which notes some interesting properties of it (unlike most implementations of lists, which are right-biased, this one is truly unbiased; you can make an arbitrary tree, and although of course the monoid laws force you to see it as flattened, you can still observe some differences in the infinite case. This is almost a Haskell quirk -- usually, free monoids contain only finite sequences, but laziness lets you build infinite ones). It also has an implementation of tail, so that's sort of an answer to your question (but see below).
With these sorts of representations (not just this particular one, but also e.g. forall r. (a -> r -> r) -> r -> r lists), there are usually some operations (e.g. appending) that become easier, and some (e.g. zip and tail) that become more difficult. This is discussed a bit in various places, e.g. How to take the tail of a functional stream.
Looking more closely at fmlist, though, its solution is pretty unsatisfactory: It just converts the nice balanced tree that you give it to a right-biased list using foldr, which allows it to do regular list operations, but loses the monoidal structure. The tail of a "middle-infinite" list is no longer "middle-infinite", it's just right-infinite like a regular list.
It should be possible to come up with a clever Monoid instance to compute the tail while disturbing the rest of the structure as little as possible, but an obvious one doesn't come to mind off-hand. I can think of a non-clever "brute force" solution, though: Cheat and reify the "list" into a tree using an invalid Monoid instance, inspect the tree, and then fold it back up so the end result is valid. Here's what it would look like with my nonfree package and fmlist:
nail :: FM.FMList a -> FM.FMList a
nail (FM.FM k) = FM.FM $ \f -> foldMap f (nail' (k N))
nail' :: N a -> N a
nail' NEmpty = error "nail' NEmpty"
nail' (N x) = NEmpty
nail' (NAppend l r) =
  case normalize l of
    NEmpty -> nail' r
    N x -> r
    l' -> NAppend (nail' l') r
-- Normalize a tree so that the left side of a root NAppend isn't an empty
-- subtree of any shape. If the tree is infinite in a particular way, this
-- won't terminate, so in that sense taking the tail of a list can make it
-- slightly worse (but you were already in pretty bad shape as far as
-- operations on the left side are concerned, and this is a pathological case
-- anyway).
normalize :: N a -> N a
normalize (NAppend l r) =
  case normalize l of
    NEmpty -> normalize r
    l' -> NAppend l' r
normalize n = n

Inverting a fold

Suppose for a minute that we think the following is a good idea:
data Fold x y = Fold {start :: y, step :: x -> y -> y}
fold :: Fold x y -> [x] -> y
Under this scheme, functions such as length or sum can be implemented by calling fold with the appropriate Fold object as argument.
Now, suppose you want to do clever optimisation tricks. In particular, suppose you want to write
unFold :: ([x] -> y) -> Fold x y
It should be relatively easy to set up a RULES pragma such that fold . unFold = id. But the interesting question is... can we actually implement unFold?
Obviously you can use RULES to apply arbitrary code transformations, whether or not they preserve the original meaning of the code. But can you really write an unFold implementation which actually does what its type signature suggests?
No, it's not possible. Proof: let
f :: [()] -> Bool
f [] = False
f [()] = False
f _ = True
First we must, for f' = unFold f, have start f' = False, because when folding over the empty list we directly get the start value. Then we must require step f' () False = False to achieve fold f' [()] = False. But when now evaluating fold f' [(),()], we would again only get a call step f' () False, which we had to define as False, leading to fold f' [(),()] ≡ False, whereas f [(),()] ≡ True. So there exists no unFold f that fulfills fold $ unFold f ≡ f. □
You can, but you need to make a slight modification to Fold in order to pull it off.
All functions on lists can be expressed as a fold, but sometimes to accomplish this, extra bookkeeping is needed. Suppose we add an additional type parameter to your Fold type, which passes along this additional contextual information.
data Fold a c r = Fold { _start :: (c, r), _step :: a -> (c,r) -> (c,r) }
Now we can implement fold like so
fold :: Fold a c r -> [a] -> r
fold (Fold start step) = snd . foldr step start
Now what happens when we try to go the other way?
unFold :: ([a] -> r) -> Fold a c r
Where does the c come from? Functions are opaque values, so it's hard to know how to inspect a function and know which contextual information it relies on. So, let's cheat a little. We're going to have the "contextual information" be the entire list, so then when we get to the leftmost element, we can just apply the function to the original list, ignoring the prior cumulative results.
unFold :: ([a] -> r) -> Fold a [a] r
unFold f = Fold { _start = ([], f [])
                , _step  = \a (c, _r) -> let c' = a:c in (c', f c') }
Now, sadly, this does not necessarily compose with fold, because it requires that c must be [a]. Let's fix that by hiding c with existential quantification.
{-# LANGUAGE ExistentialQuantification #-}
data Fold a r = forall c. Fold
  { _start :: (c,r)
  , _step :: a -> (c,r) -> (c,r) }
fold :: Fold a r -> [a] -> r
fold (Fold start step) = snd . foldr step start
unFold :: ([a] -> r) -> Fold a r
unFold f = Fold start step where
  start = ([], f [])
  step a (c, _r) = let c' = a:c in (c', f c')
Now, it should always be true that fold . unFold = id. And, given a relaxed notion of equality for the Fold data type, you could also say that unFold . fold = id. You can even provide a smart constructor that acts like the old Fold constructor:
makeFold :: r -> (a -> r -> r) -> Fold a r
makeFold start step = Fold start' step' where
  start' = ((), start)
  step' a ((), r) = ((), step a r)
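A quick, hand-written GHCi check that both directions behave as advertised (fold . unFold recovers the original function, and makeFold still gives ordinary folds):

Prelude> fold (unFold reverse) [1,2,3]
[3,2,1]
Prelude> fold (makeFold 0 (+)) [1,2,3]
6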
tl;dr:
Conclusion 1: you can't
What you asked for originally isn't possible, at least not by any version of what you wanted I can come up with. (See below.)
If you change your data type to allow me to store intermediate calculations, I think I'll be fine, but even then, the function unFold would be rather inefficient, which seems to run counter to your clever optimisation tricks agenda!
Conclusion 2: I don't think it achieves what you want, even if you work around it by changing the types
Any optimisation of the list algorithm would be subject to the problem that you've calculated the step function using the original unoptimised function, and quite probably in a complicated way.
Since there's no equality on functions, optimising step to something efficient isn't possible. I think you need a human to do unFold, not a compiler.
Anyway, back to the original question:
Could fold . unFold = id ?
No. Suppose we have
isSingleton :: [a] -> Bool
isSingleton [x] = True
isSingleton _ = False
then if we had unFold :: ([x] -> y) -> Fold x y, and foldSingleton were the same as unFold isSingleton, it would need to have
foldSingleton = Fold {start = False , step = ???}
Where step takes an element of the list and updates the result.
Now, since isSingleton "a" == True, we need
step False = True
and because isSingleton "ab" == False, we need
step True = False
so step = not would do so far, but also isSingleton "abc" == False so we also need
step False = False
Since there are functions ([x] -> y) that cannot be represented by a value of type Fold x y, there cannot exist a function unFold :: ([x] -> y) -> Fold x y such that fold . unFold = id, because id is a total function.
Edit:
It turns out you're not convinced by this, because you only expected unFold to work on functions that had a representation as a fold, so maybe you meant unFold.fold = id.
Could unFold . fold = id ?
No.
Even if you just want unFold to work on functions ([x] -> y) that can be obtained using fold :: Fold x y -> ([x] -> y), I don't think it's possible. Let's address the question by assuming now we have defined
combine :: X -> Y -> Y
initial :: Y
folded :: [X] -> Y
folded = fold $ Fold initial combine
Recovering the value initial is trivial: initial = folded [].
Recovery of the original combine is not, because there's no way to go from a function that gives you some values of Y to one which combines arbitrary values of Y.
For an example, if we had X = Y = Int and I defined
combine x y | y < 0     = -10
            | otherwise = y + 1
initial = 0
then since combine just adds one to y every time you use it on non-negative y, and the initial value is 0, folded is indistinguishable from length in terms of its output. Notice that since folded xs is never negative, it's also impossible to define a function unFold :: ([x] -> y) -> Fold x y that ever recovers our combine function. This boils down to the fact that fold is not injective; it carries different values of type Fold x y to the same value of type [x] -> y.
Thus I've proved two things: if unFold :: ([x] -> y) -> Fold x y then both fold.unFold /= id and now also unFold.fold /= id
I bet you're not convinced by this either, because you don't really care whether you got Fold 0 (\_ y -> y+1) or Fold 0 combine back from unFold folded, seeing as they have the same value when refolded! Let's narrow the goalposts one more time. Perhaps you want unFold to work whenever the function is obtainable via fold, and you're happy for it not to give you inconsistent answers as long as when you fold the result again, you get the same function. I can summarise that with this next question:
Could fold . unFold . fold = fold ?
i.e. Could you define unFold so that fold.unFold is the identity on the set of functions obtainable via fold?
I'm really convinced this isn't possible, because it's not a tractable problem to calculate the step function without retaining extra information about intermediate values on sublists.
Suppose we had
unFold f = Fold {start = f [], step = recoverstep f}
we need
recoverstep f x1 initial == f [x1]
so if there's an Eq instance for x (ring the alarm bells!), then recoverstep must have the same effect as
recoverstep f x1 y | y == initial = f [x1]
also we need
recoverstep f x2 (f [x1]) == f [x1,x2]
so if there's an Eq instance for x, then recoverstep must have the same effect as
recoverstep f x2 y | y == (f [x1]) = f [x1,x2]
but there's a massive problem here: the variable x1 is free in the right hand side of this equation. This means that logically, we can't tell what value the step function should have on an x unless we already know what values it has been used on. We would need to store the values of f [x1], f [x1,x2] etc. in the Fold data type to make it work, and this is the clincher as to why we can't define unFold. If you change the data type Fold to allow us to store information about intermediate lists, I can see it would work, but as it stands it's impossible to recover the context.
Similar to Dan's answer, but using a slightly different approach. Instead of pairing the accumulator with partial results which will be thrown away at the end, we add a "post-processing" function which will convert from the accumulator type to the final result.
The same "cheat" for unFold just does all the work in the post-processing step:
{-# LANGUAGE ExistentialQuantification #-}
data Fold a r = forall c. Fold
  { _start :: c
  , _step :: a -> c -> c
  , _result :: c -> r }
fold :: Fold a r -> [a] -> r
fold (Fold start step result) = result . foldr step start
unFold :: ([a] -> r) -> Fold a r
unFold f = Fold [] (:) f
makeFold :: r -> (a -> r -> r) -> Fold a r
makeFold start step = Fold start step id
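It is worth spelling out why fold . unFold = id holds on the nose here: fold (unFold f) = f . foldr (:) [], and foldr (:) [] is the identity on lists, so the composite is just f. A hand-written GHCi check:

Prelude> fold (unFold length) "hello"
5
Prelude> fold (makeFold 0 (\_ n -> n + 1)) "hello"
5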

How can I iterate over a string without recursion?

isTogether' :: String -> Bool
isTogether' (x:xs) = isTogether (head xs) (head (tail xs))
For the above code, I want to go through every character in the string. I am not allowed to use recursion.
isTogether' (x:xs) = isTogether (head xs) (head (tail xs))
If I've got it right, you are interested in getting consecutive char pairs from some string. So, for example, for abcd you need to test (a,b), (b,c), (c,d) with some (Char,Char) -> Bool or Char -> Char -> Bool function.
Zip could be helpful here:
> let x = "abcd"
> let pairs = zip x (tail x)
it :: [(Char, Char)]
And for some f :: Char -> Char -> Bool function we can get uncurry f :: (Char, Char) -> Bool.
And then it's easy to get [Bool] value of results with map (uncurry f) pairs :: [Bool].
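Putting the pieces together with a hypothetical isTogether (the question never shows its definition, so the predicate below is invented purely for illustration):

-- Invented predicate: are two characters adjacent in the alphabet?
isTogether :: Char -> Char -> Bool
isTogether a b = succ a == b

-- True iff every consecutive pair of characters satisfies isTogether.
isTogether' :: String -> Bool
isTogether' s = and (map (uncurry isTogether) (zip s (tail s)))

Whether you combine the [Bool] with and, or, or something else depends on what isTogether' is actually supposed to mean.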
In Haskell, a String is just a list of characters ([Char]). Thus, all of the normal higher-order list functions like map work on strings. So you can use whichever higher-order function is most applicable to your problem.
Note that these functions themselves are defined recursively; in fact, there is no way to go through the entire list in Haskell without either recursing explicitly or using a function that directly or indirectly recurses.
To do this without recursion, you will need to use a higher order function or a list comprehension. I don't understand what you're trying to accomplish so I can only give generic advice. You probably will want one of these:
map :: (a -> b) -> [a] -> [b]
Map converts a list of one type into another. Using map lets you perform the same action on every element of the list, given a function that operates on the kinds of things you have in the list.
filter :: (a -> Bool) -> [a] -> [a]
Filter takes a list and a predicate, and gives you a new list with only the elements that satisfy the predicate. Just with these two tools, you can do some pretty interesting things:
import Data.Char
map toUpper (filter isLower "A quick test") -- => "QUICKTEST"
Then you have folds of various sorts. A fold is really a generic higher order function for doing recursion on some type, so using it takes a bit of getting used to, but you can accomplish pretty much any recursive function on a list with a fold instead. The basic type of foldr looks like this:
foldr :: (a -> b -> b) -> b -> [a] -> b
It takes three arguments: an inductive step, a base case and a value you want to fold. Or, in less mathematical terms, you could think of it as taking an initial state, a function to take the next item and the previous state to produce the next state, and the list of values. It then returns the final state it arrived at. You can do some pretty surprising things with fold, but let's say you want to detect if a list has a run of two or more of the same item. This would be hard to express with map and filter (impossible?), but it's easy with recursion:
hasTwins :: (Eq a) => [a] -> Bool
hasTwins (x:y:xs) | x == y = True
hasTwins (x:y:xs) | otherwise = hasTwins (y:xs)
hasTwins _ = False
Well, you can express this with a fold like so:
hasTwins :: (Eq a) => [a] -> Bool
hasTwins = snd . foldr step (Nothing, False)
  where
    step x (prev, seenTwins) = (Just x, prev == Just x || seenTwins)
So my "state" in this fold is the previously visited element (wrapped in Maybe, since the element at the far end has no neighbour to be compared against) and whether we've already seen a pair of identical values. The function has no explicit recursion, but my step function passes the current x value along to the next invocation through the state as the previous value. But you don't have to be happy with the last state you have; this function takes the second value out of the state and returns that as the overall return value—which is the boolean whether or not we've seen two identical values next to each other.
