Is it always safe to use unsafeCoerce for valid equalities? - haskell

I have a witness type for type-level lists,
data List xs where
  Nil :: List '[]
  Cons :: proxy x -> List xs -> List (x ': xs)
as well as the following utilities.
-- Type level append
type family xs ++ ys where
  '[] ++ ys = ys
  (x ': xs) ++ ys = x ': (xs ++ ys)
-- Value level append
append :: List xs -> List ys -> List (xs ++ ys)
append Nil ys = ys
append (Cons x xs) ys = Cons x (append xs ys)
-- Proof of associativity of (++)
assoc :: List xs -> proxy ys -> proxy' zs -> ((xs ++ ys) ++ zs) :~: (xs ++ (ys ++ zs))
assoc Nil _ _ = Refl
assoc (Cons _ xs) ys zs = case assoc xs ys zs of Refl -> Refl
Now, I have two different but equivalent definitions of a type-level reverse function,
-- The first version, O(n)
type Reverse xs = Rev '[] xs
type family Rev acc xs where
  Rev acc '[] = acc
  Rev acc (x ': xs) = Rev (x ': acc) xs
-- The second version, O(n²)
type family Reverse' xs where
  Reverse' '[] = '[]
  Reverse' (x ': xs) = Reverse' xs ++ '[x]
The first is more efficient, but the second is easier to use when proving things to the compiler, so it would be nice to have a proof of equivalence. In order to do this, I need a proof of Rev acc xs :~: Reverse' xs ++ acc. This is what I came up with:
revAppend :: List acc -> List xs -> Rev acc xs :~: Reverse' xs ++ acc
revAppend _ Nil = Refl
revAppend acc (Cons x xs) =
  case (revAppend (Cons x acc) xs, assoc (reverse' xs) (Cons x Nil) acc) of
    (Refl, Refl) -> Refl
reverse' :: List xs -> List (Reverse' xs)
reverse' Nil = Nil
reverse' (Cons x xs) = append (reverse' xs) (Cons x Nil)
Unfortunately, revAppend is O(n³), which completely defeats the purpose of this exercise. However, we can bypass all this and get O(1) by using unsafeCoerce:
revAppend :: Rev acc xs :~: Reverse' xs ++ acc
revAppend = unsafeCoerce Refl
Is this safe? What about the general case? For example, if I have two type families F :: k -> * and G :: k -> *, and I know that they are equivalent, is it safe to define the following?
equal :: F a :~: G a
equal = unsafeCoerce Refl
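As a toy illustration, here is a pair of stand-in families (hypothetical F and G, chosen so they genuinely agree at every argument) together with castWith from Data.Type.Equality consuming the proof:

```haskell
{-# LANGUAGE TypeFamilies, TypeOperators #-}
import Data.Type.Equality ((:~:)(..), castWith)
import Unsafe.Coerce (unsafeCoerce)

-- Two stand-ins for F and G that really do agree at every argument.
-- (Here plain Refl would typecheck too, since both reduce to [a];
-- unsafeCoerce is only interesting when GHC can't see the equality.)
type family F a where
  F a = [a]

type family G a where
  G a = [a]

-- Justified only because F a and G a are the same type for every a.
equal :: F a :~: G a
equal = unsafeCoerce Refl

-- castWith uses the "proof" to convert between the two.
example :: G Int
example = castWith equal ([1, 2, 3] :: F Int)
```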

It would be very nice if GHC used a termination checker on expressions e :: T where T has only one constructor K with no arguments (e.g. :~:, ()). When the check succeeds, GHC could rewrite e as K, skipping the computation completely. You would have to rule out FFI, unsafePerformIO, trace, ... but it seems feasible. If this were implemented, it would solve the posted question very nicely, allowing one to actually write proofs with zero runtime cost.
Failing that, you can use unsafeCoerce in the meantime, as you propose. If you are really, really sure that the two types are the same, you can use it safely. The typical example is implementing Data.Typeable. Of course, misusing unsafeCoerce on genuinely different types leads to unpredictable effects, hopefully a crash.
You could even write your own "safer" variant of unsafeCoerce:
unsafeButNotSoMuchCoerce :: (a :~: b) -> a -> b
#ifdef CHECK_TYPEEQ
unsafeButNotSoMuchCoerce Refl = id
#else
unsafeButNotSoMuchCoerce _ = unsafeCoerce
#endif
If CHECK_TYPEEQ is defined, the check leads to slower code. If it is undefined, the check is skipped and the coercion has zero cost. In the latter case it is still unsafe, because you can pass bottom as the first argument: the program will not loop, but will instead perform the wrong coercion. This way you can test your program in the safe but slow mode, and then switch to the unsafe mode and pray that your "proofs" always terminated.
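To make the bottom danger concrete, here is a sketch with the unchecked branch hard-coded (no CPP): a bottom "proof" typechecks, and because the first argument is never forced, the coercion happens regardless.

```haskell
{-# LANGUAGE TypeOperators #-}
import Data.Type.Equality ((:~:)(..))
import Unsafe.Coerce (unsafeCoerce)

-- The unchecked branch of the variant above, without the CPP.
unsafeButNotSoMuchCoerce :: (a :~: b) -> a -> b
unsafeButNotSoMuchCoerce _ = unsafeCoerce

-- With a genuine Refl this is perfectly fine:
ok :: Int
ok = unsafeButNotSoMuchCoerce (Refl :: Int :~: Int) 42

-- But a bottom "proof" typechecks too; in the unchecked mode it is
-- never evaluated, so the (wrong) coercion would happen silently.
bogus :: Int :~: Char
bogus = error "never forced in the unchecked mode"
```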


Getting the last element of a heterogeneous list

I'm trying to define a function (hLast) that returns the last element of a heterogenous list:
type family HLastR xs where
  HLastR '[x] = x
  HLastR (x ': xs) = HLastR xs

class HLast xs where
  hLast :: HList xs -> HLastR xs

instance HLast '[x] where
  hLast (x `HCons` HNil) = x

instance (HLast xs, HLastR xs ~ HLastR (x ': xs)) => HLast (x ': xs) where
  hLast (x `HCons` xs) = hLast xs
With this code, GHC always complains that there are overlapping instances for HLast. I don't understand how the instances could ever overlap: An HList of more than one element only matches the second instance and singletons should only match the first instance since the second instance requires an HLast instance for the tail (HNil in the singleton case, for which there is no instance).
Adding an {-# OVERLAPPING #-} directive to the first instance solves the problem, but this feels like an overly hacky solution for such a simple problem. Is there a better way to do this? Thanks!
Assuming HList is defined as a GADT, you don't need a type class in the first place.
data HList xs where
  HCons :: x -> HList xs -> HList (x ': xs)
  HNil :: HList '[]

type family HLastR xs where
  HLastR '[x] = x
  HLastR (x ': xs) = HLastR xs

hlast :: HList xs -> HLastR xs
hlast (HCons x HNil) = x
hlast (HCons _ xs@(HCons _ _)) = hlast xs
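For the record, a self-contained copy with a sample call (GADTs, DataKinds, TypeFamilies and TypeOperators assumed), showing that hlast returns the last element at its precise type:

```haskell
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, TypeOperators #-}

data HList xs where
  HCons :: x -> HList xs -> HList (x ': xs)
  HNil  :: HList '[]

type family HLastR xs where
  HLastR '[x]      = x
  HLastR (x ': xs) = HLastR xs

hlast :: HList xs -> HLastR xs
hlast (HCons x HNil)           = x
hlast (HCons _ xs@(HCons _ _)) = hlast xs

-- The list has type HList '[Bool, Int, Char], so the result is a Char.
sample :: Char
sample = hlast (HCons True (HCons (1 :: Int) (HCons 'z' HNil)))
```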

Couldn't match expected type `[a1]' with actual type `([a1], [a1])'

I am new to coding with Haskell and am stuck on this code that my professor wanted us to write. I am supposed to deal a single list into a pair of lists like so:
deal [1,2,3,4,5,6,7] = ([1,3,5,7], [2,4,6])
but I am getting this error on my 'xs' and also 'ys'
* Couldn't match expected type `[a1]'
with actual type `([a1], [a1])'
* In the expression: deal xs
In an equation for `xs': xs = deal xs
In an equation for `deal':
deal (x : y : xs : ys)
= (x : xs, y : ys)
where
xs = deal xs
ys = deal ys
* Relevant bindings include xs :: [a1] (bound at lab2.hs:16:17)
|
| xs = deal xs
| ^^^^^^^
Here is my code:
deal :: [a] -> ([a],[a])
deal [] = ([], [])
deal [x] = ([x], [])
deal (x:y:xs:ys) = (x:xs,y:ys)
  where
    xs = deal xs
    ys = deal ys
This is logical: given the signature, deal xs and deal ys return a 2-tuple of lists, whereas xs has type [a]. Note that by reusing the same names, you have made the definitions recursive (xs = deal xs), which will not work. Shadowing names like this is not a good idea; if you turn on warnings, the compiler will normally warn about it.
You probably want to call deal on the rest of the list, and then retrieve the two lists that you use as tails:
deal :: [a] -> ([a],[a])
deal [] = ([], [])
deal [x] = ([x], [])
deal (x:y:rest) = (x:xs, y:ys)
  where (xs, ys) = deal rest
or we can make use of (***) :: Arrow a => a b c -> a b' c' -> a (b, b') (c, c'), which for ordinary functions specializes to (b -> c) -> (b' -> c') -> (b, b') -> (c, c'):
import Control.Arrow((***))
deal :: [a] -> ([a],[a])
deal [] = ([], [])
deal [x] = ([x], [])
deal (x:y:rest) = ((x:) *** (y:)) (deal rest)
an alternative is to each time swap the tuple, and append to the other side:
import Control.Arrow(first)
import Data.Tuple(swap)
deal :: [a] -> ([a],[a])
deal [] = ([], [])
deal (x:xs) = first (x:) (swap (deal xs))
we can thus define this as a foldr pattern:
import Control.Arrow(first)
import Data.Tuple(swap)
deal :: Foldable f => f a -> ([a],[a])
deal = foldr ((. swap) . first . (:)) ([], [])
This gives us the expected result:
Prelude> deal [1,2,3,4,5,6,7]
([1,3,5,7],[2,4,6])

Explicit Purely-Functional Data-Structure For Difference Lists

In Haskell, difference lists, in the sense of
[a] representation of a list with an efficient concatenation operation
seem to be implemented in terms of function composition.
Functions and (dynamic) function compositions, though, must also be represented somehow in the computer's memory using data structures, which raises the question of how dlists could be implemented in Haskell without using function compositions, but, rather, through some basic purely-functional node-based data structures. How could this be done with the same performance guarantees as those of function composition?
(++)'s notoriously bad asymptotics come about when you use it in a left-associative manner - that is, when (++)'s left argument is the result of another call to (++). Right-associative expressions run efficiently, though.
To put it more concretely: evaluating a left-nested append of m lists like
((ws ++ xs) ++ ys) ++ zs -- m = 3 in this example
to WHNF requires you to force m thunks, because (++) is strict in its left argument.
case (
  case (
    case ws of { [] -> xs ; (w:ws) -> w:(ws ++ xs) }
  ) of { [] -> ys ; (x:xs) -> x:(xs ++ ys) }
) of { [] -> zs ; (y:ys) -> y:(ys ++ zs) }
In general, to fully evaluate n elements of such a list, this'll require forcing that stack of m thunks n times, for O(m*n) complexity. When the entire list is built from appends of singleton lists (i.e. (([w] ++ [x]) ++ [y]) ++ [z]), m = n, so the cost is O(n²).
Evaluating a right-nested append like
ws ++ (xs ++ (ys ++ zs))
to WHNF is much easier (O(1)):
case ws of
  [] -> xs ++ (ys ++ zs)
  (w:ws) -> w:(ws ++ (xs ++ (ys ++ zs)))
Evaluating n elements just requires evaluating n thunks, which is about as good as you can expect to do.
Difference lists work by paying a small (O(m)) up-front cost to automatically re-associate calls to (++).
newtype DList a = DList ([a] -> [a])

fromList xs = DList (xs ++)
toList (DList f) = f []

instance Monoid (DList a) where
  mempty = fromList []
  DList f `mappend` DList g = DList (f . g)
Now a left-nested expression,
toList (((fromList ws <> fromList xs) <> fromList ys) <> fromList zs)
gets evaluated in a right-associative manner:
((((ws ++) . (xs ++)) . (ys ++)) . (zs ++)) []
-- expand innermost (.)
(((\l0 -> ws ++ (xs ++ l0)) . (ys ++)) . (zs ++)) []
-- expand innermost (.)
((\l1 -> (\l0 -> ws ++ (xs ++ l0)) (ys ++ l1)) . (zs ++)) []
-- beta reduce
((\l1 -> ws ++ (xs ++ (ys ++ l1))) . (zs ++)) []
-- expand innermost (.)
(\l2 -> (\l1 -> ws ++ (xs ++ (ys ++ l1))) (zs ++ l2)) []
-- beta reduce
(\l2 -> ws ++ (xs ++ (ys ++ (zs ++ l2)))) []
-- beta reduce
ws ++ (xs ++ (ys ++ (zs ++ [])))
You perform O(m) steps to evaluate the composed function, then O(n) steps to evaluate the resulting expression, for a total complexity of O(m+n), which is asymptotically better than O(m*n). When the list is made up entirely of appends of singleton lists, m = n and you get O(2n) ~ O(n), which is asymptotically better than O(n²).
This trick works for any Monoid.
newtype RMonoid m = RMonoid (m -> m) -- "right-associative monoid"

toRM m = RMonoid (m <>)
fromRM (RMonoid f) = f mempty

instance Monoid m => Monoid (RMonoid m) where
  mempty = toRM mempty
  RMonoid f `mappend` RMonoid g = RMonoid (f . g)
See also, for example, the Codensity monad, which applies this idea to monadic expressions built using (>>=) (rather than monoidal expressions built using (<>)).
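Under a modern base (where Semigroup is a superclass of Monoid), the same trick might be written and checked like this (same names as above; mempty = RMonoid id is an equivalent choice to toRM mempty that avoids a constraint on the instance):

```haskell
newtype RMonoid m = RMonoid (m -> m) -- "right-associative monoid"

toRM :: Semigroup m => m -> RMonoid m
toRM m = RMonoid (m <>)

fromRM :: Monoid m => RMonoid m -> m
fromRM (RMonoid f) = f mempty

instance Semigroup (RMonoid m) where
  RMonoid f <> RMonoid g = RMonoid (f . g)

instance Monoid (RMonoid m) where
  mempty = RMonoid id

-- A left-nested (<>) on RMonoid still concatenates in the right order,
-- because the compositions re-associate the appends to the right.
demo :: String
demo = fromRM ((toRM "ab" <> toRM "cd") <> toRM "ef")
```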
I hope I have convinced you that (++) only causes a problem when used in a left-associative fashion. Now, you can easily write a list-like data structure for which append is strict in its right argument, and left-associative appends are therefore not a problem.
data Snoc a = Nil | Snoc (Snoc a) a

xs +++ Nil = xs
xs +++ (Snoc ys y) = Snoc (xs +++ ys) y
We recover O(1) WHNF of left-nested appends,
((ws +++ xs) +++ ys) +++ zs
case zs of
  Nil -> (ws +++ xs) +++ ys
  Snoc zs z -> Snoc (((ws +++ xs) +++ ys) +++ zs) z
but at the expense of slow right-nested appends.
ws +++ (xs +++ (ys +++ zs))
case (
  case (
    case zs of { Nil -> ys ; (Snoc zs z) -> Snoc (ys +++ zs) z }
  ) of { Nil -> xs ; (Snoc ys y) -> Snoc (xs +++ ys) y }
) of { Nil -> ws ; (Snoc xs x) -> Snoc (ws +++ xs) x }
Then, of course, you end up writing a new type of difference list which reassociates appends to the left!
newtype LMonoid m = LMonoid (m -> m) -- "left-associative monoid"

toLM m = LMonoid (<> m)
fromLM (LMonoid f) = f mempty

instance Monoid m => Monoid (LMonoid m) where
  mempty = toLM mempty
  LMonoid f `mappend` LMonoid g = LMonoid (g . f)
Carl hit it in his comment. We can write
data TList a = Nil | Single a | Node !(TList a) (TList a)

singleton :: a -> TList a
singleton = Single

instance Monoid (TList a) where
  mempty = Nil
  mappend = Node
We could get toList by just deriving Foldable, but let's write it out instead to see just what's going on.
instance Foldable TList where
  foldMap _ Nil = mempty
  foldMap f (Single a) = f a
  foldMap f (Node t u) = foldMap f t <> foldMap f u

  toList as0 = go as0 [] where
    go Nil k = k
    go (Single a) k = a : k
    go (Node l r) k = go l (go r k)
toList is O(n), where n is the total number of internal nodes (i.e., the total number of mappend operations used to form the TList). This should be pretty clear: each Node is inspected exactly once. mempty, mappend, and singleton are each obviously O(1).
This is exactly the same as for a DList:
newtype DList a = DList ([a] -> [a])

singletonD :: a -> DList a
singletonD a = DList (a:)

instance Monoid (DList a) where
  mempty = DList id
  mappend (DList f) (DList g) = DList (f . g)

instance Foldable DList where
  foldr c n xs = foldr c n (toList xs)
  toList (DList f) = f []
Why, operationally, is this the same? Because, as you indicate in your question, the functions are represented in memory as trees. And they're represented as trees that look a lot like TLists! singletonD x produces a closure containing a (boring) (:) and an exciting x. When applied, it does O(1) work. mempty just produces the id function, which when applied does O(1) work. mappend as bs produces a closure that, when applied, does O(1) work of its own, plus O(length as + length bs) work in its children.
The shapes of the trees produced for TList and DList are actually the same. You should be able to convince yourself that they also have identical asymptotic performance when used incrementally: in each case, the program has to walk down the left spine of the tree to get to the first list element.
Both DList and TList are equally okay when built up and then used only once. They're equally lousy when built once and converted to lists multiple times.
As Will Ness showed with a similar type, the explicit tree representation is better if you want to add support for deconstructing the representation, as you can actually get your hands on the structure. TList can support a reasonably efficient uncons operation (that improves the structure as it works). To get efficient unsnoc as well, you'll need to use a fancier representation (a catenable deque). This implementation also has potentially wretched cache performance. You can switch to a cache-oblivious data structure, but it is practically guaranteed to be complicated.
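A sketch of what such an uncons might look like (my own, not from the answer; the local go rotates left-nested Nodes to the right as it descends, which is what "improves the structure as it works" means):

```haskell
data TList a = Nil | Single a | Node !(TList a) (TList a)

-- Find the leftmost element, rotating the Node spine rightward on the
-- way, so that repeated uncons over the whole list stays efficient.
uncons :: TList a -> Maybe (a, TList a)
uncons Nil        = Nothing
uncons (Single a) = Just (a, Nil)
uncons (Node t u) = go t u
  where
    go Nil r         = uncons r
    go (Single a) r  = Just (a, r)
    go (Node l r) r' = go l (Node r r')
```

For example, unconsing Node (Node (Single 1) (Single 2)) (Single 3) rotates the left-nested spine and returns 1 together with the right-nested remainder Node (Single 2) (Single 3).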
As shown in this answer, the trick is the rearrangement of the (.) tree into the ($) list on access.
We can emulate this with
data Dlist a = List [a]
             | Append (Dlist a) (Dlist a)
which will juggle the Append nodes, rotating the tree to the right so that the leftmost node is pushed up and becomes the uppermost-left, on the first access; after that, the next tail(**) operation becomes O(1) (*):
let x = (List [1..10] `Append` List [11..20]) `Append` List [21..30]
and tail x(**) is to produce
List [2..10] `Append` (List [11..20] `Append` List [21..30])
tail(**) should be trivial to implement. Of course it will only pattern-match the leftmost List node's list (with (x:xs)) when that node is finally discovered, and won't touch the contents of anything else inside the Append nodes as they are juggled. Laziness is thus naturally preserved.
(**) 2020 edit: what this actually means is to have one uncons :: Dlist a -> (a, Dlist a) operation producing the head and the new, rotated tail, simultaneously, so that uncons on the new tail is O(1).(*)
(*) edit: O(1) in case the leftmost List node's list isn't empty. Overall, taking into account the possible nested-on-the-left Append nodes that will have to be rearranged as they come to the fore after the first leftmost List node is exhausted, the access to all n elements of the result will be O(n+m), where m is the number of empty lists.
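Concretely, the rotation described above might be coded like this (a sketch, one of several possibilities; only the leftmost List node's list is ever pattern-matched, so laziness is preserved):

```haskell
data Dlist a = List [a]
             | Append (Dlist a) (Dlist a)

-- Rotate Append nodes to the right until the leftmost List surfaces,
-- then split off its head; empty List nodes are discarded on the way.
uncons :: Dlist a -> Maybe (a, Dlist a)
uncons (List [])     = Nothing
uncons (List (x:xs)) = Just (x, List xs)
uncons (Append l r)  = case l of
  List []      -> uncons r
  List (x:xs)  -> Just (x, Append (List xs) r)
  Append ll lr -> uncons (Append ll (Append lr r))
```

On the example above, uncons ((List [1..10] `Append` List [11..20]) `Append` List [21..30]) rotates the tree to the right and returns 1 together with List [2..10] `Append` (List [11..20] `Append` List [21..30]), so uncons on that tail is O(1).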
update: A historical note: this is actually quite similar (if not exactly the same) to the efficient tree-fringe enumeration problem, dealt with in 1977 by John McCarthy, whose gopher function did exactly the same kind of node rearrangement (tree rotations to the right) as the tail(**) proposed here would.

interleaving two strings, preserving order: functional style

In this question, the author brings up an interesting programming question: given two strings, find the possible 'interleaved' permutations of them that preserve the order of the original strings.
I generalized the problem to n strings instead of 2 in OP's case, and came up with:
-- charCandidate is a function that finds the possible characters from the given strings.
-- input : list of strings
-- output : a list of tuples whose first value holds a character
-- and whose second value holds the rest of the strings with that character removed
-- i.e. ["ab", "cd"] -> [('a', ["b", "cd"])] ..
charCandidate xs = charCandidate' xs []

charCandidate' :: [String] -> [String] -> [(Char, [String])]
charCandidate' [] _ = []
charCandidate' ([]:xs) prev =
  charCandidate' xs prev
charCandidate' (x@(c:rest):xs) prev =
  (c, prev ++ [rest] ++ xs) : charCandidate' xs (x:prev)
interleavings :: [String] -> [String]
interleavings xs = interleavings' xs []

-- interleavings' repeatedly applies the charCandidate function, consuming
-- the tuples and building permutations.
-- It stops looping when charCandidate produces no more tuples.
interleavings' :: [String] -> String -> [String]
interleavings' xs prev =
  let candidates = charCandidate xs
  in case candidates of
       [] -> [prev]
       _  -> concat . map (\(char, ys) -> interleavings' ys (prev ++ [char])) $ candidates
-- test case
input :: [String]
input = ["ab", "cd"]
-- interleavings input == ["abcd","acbd","acdb","cabd","cadb","cdab"]
It works; however, I'm quite concerned about the code:
- it is ugly. No point-free style!
- explicit recursion, and an additional function argument prev to preserve state
- using tuples as an intermediate form
How can I rewrite the above program to be more "haskellic", concise, readable and more conforming to "functional programming"?
I think I would write it this way. The main idea is to treat creating an interleaving as a nondeterministic process which chooses one of the input strings to start the interleaving and recurses.
Before we start, it will help to have a utility function that I have used countless times. It gives a convenient way to choose an element from a list and know which element it was. This is a bit like your charCandidate', except that it operates on a single list at a time (and is consequently more widely applicable).
zippers :: [a] -> [([a], a, [a])]
zippers = go [] where
  go xs [] = []
  go xs (y:ys) = (xs, y, ys) : go (y:xs) ys
With that in hand, it is easy to make some non-deterministic choices using the list monad. Notionally, our interleavings function should probably have a type like [NonEmpty a] -> [[a]] which promises that each incoming string has at least one character in it, but the syntactic overhead of NonEmpty is too annoying for a simple exercise like this, so we'll just give wrong answers when this precondition is violated. You could also consider making this a helper function and filtering out empty lists from your top-level function before running this.
interleavings :: [[a]] -> [[a]]
interleavings [] = [[]]
interleavings xss = do
  (xssL, h:xs, xssR) <- zippers xss
  t <- interleavings ([xs | not (null xs)] ++ xssL ++ xssR)
  return (h:t)
You can see it go in ghci:
> interleavings ["abc", "123"]
["abc123","ab123c","ab12c3","ab1c23","a123bc","a12bc3","a12b3c","a1bc23","a1b23c","a1b2c3","123abc","12abc3","12ab3c","12a3bc","1abc23","1ab23c","1ab2c3","1a23bc","1a2bc3","1a2b3c"]
> interleavings ["a", "b", "c"]
["abc","acb","bac","bca","cba","cab"]
> permutations "abc" -- just for fun, to compare
["abc","bac","cba","bca","cab","acb"]
This is the fastest implementation I've come up with so far. It interleaves a list of lists pairwise.
interleavings :: [[a]] -> [[a]]
interleavings = foldr (concatMap . interleave2) [[]]
This horribly ugly mess is the best way I could find to interleave two lists. It's intended to be asymptotically optimal (which I believe it is); it's not very pretty. The constant factors could be improved by using a special-purpose queue (such as the one used in Data.List to implement inits) rather than sequences, but I don't feel like including that much boilerplate.
{-# LANGUAGE BangPatterns #-}
import Data.Monoid
import Data.Foldable (toList)
import Data.Sequence (Seq, (|>))

interleave2 :: [a] -> [a] -> [[a]]
interleave2 xs ys = interleave2' mempty xs ys []

interleave2' :: Seq a -> [a] -> [a] -> [[a]] -> [[a]]
interleave2' !prefix xs ys rest =
  (toList prefix ++ xs ++ ys)
  : interleave2'' prefix xs ys rest

interleave2'' :: Seq a -> [a] -> [a] -> [[a]] -> [[a]]
interleave2'' !prefix [] _ = id
interleave2'' !prefix _ [] = id
interleave2'' !prefix xs@(x : xs') ys@(y : ys') =
  interleave2' (prefix |> y) xs ys' .
  interleave2'' (prefix |> x) xs' ys
Using foldr over interleave2
interleave :: [[a]] -> [[a]]
interleave = foldr ((concat .) . map . iL2) [[]] where
  iL2 [] ys = [ys]
  iL2 xs [] = [xs]
  iL2 (x:xs) (y:ys) = map (x:) (iL2 xs (y:ys)) ++ map (y:) (iL2 (x:xs) ys)
Another approach would be to use the list monad:
interleavings xs ys = interl xs ys ++ interl ys xs where
  interl [] ys = [ys]
  interl xs [] = [xs]
  interl xs ys = do
    i <- [1..(length xs)]
    let (h, t) = splitAt i xs
    map (h ++) (interl ys t)
So the recursive part will alternate between the two lists, taking all from 1 to N elements from each list in turns and then produce all possible combinations of that. Fun use of the list monad.
Edit: Fixed bug causing duplicates
Edit: Answer to dfeuer. It turned out tricky to do code in the comment field. An example of solutions that do not use length could look something like:
interleavings xs ys = interl xs ys ++ interl ys xs where
  interl [] ys = [ys]
  interl xs [] = [xs]
  interl xs ys = splits xs >>= \(h, t) -> map (h ++) (interl ys t)
  splits [] = []
  splits (x:xs) = ([x], xs) : map (\(h, t) -> (x:h, t)) (splits xs)
The splits function feels a bit awkward. It could be replaced by use of takeWhile or break in combination with splitAt, but that solution ended up a bit awkward as well. Do you have any suggestions?
(I got rid of the do notation just to make it slightly shorter)
Combining the best ideas from the existing answers and adding some of my own:
import Control.Monad
interleave [] ys = return ys
interleave xs [] = return xs
interleave (x : xs) (y : ys) =
  fmap (x :) (interleave xs (y : ys)) `mplus` fmap (y :) (interleave (x : xs) ys)

interleavings :: MonadPlus m => [[a]] -> m [a]
interleavings = foldM interleave []
This is not the fastest implementation you can get, but it scores well on generality and simplicity.
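Instantiated at the list monad (and adding the MonadPlus signature that the last line implies for interleave), this reproduces the test case from the question:

```haskell
import Control.Monad (MonadPlus, foldM, mplus)

interleave :: MonadPlus m => [a] -> [a] -> m [a]
interleave [] ys = return ys
interleave xs [] = return xs
interleave (x : xs) (y : ys) =
  fmap (x :) (interleave xs (y : ys)) `mplus` fmap (y :) (interleave (x : xs) ys)

interleavings :: MonadPlus m => [[a]] -> m [a]
interleavings = foldM interleave []

-- At m ~ [], mplus is (++), so this agrees with the question's example:
--   interleavings ["ab", "cd"] == ["abcd","acbd","acdb","cabd","cadb","cdab"]
```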

Maybe and recursion on list

I'm trying to implement the reverse function with Maybe. I don't know how to return Just when pattern matching with recursion. For example, ghci> myReverse [1,2,3] needs to return Just [3,2,1]. Here is my code:
myReverse :: [a] -> Maybe [a]
myReverse [] = Nothing
myReverse [x] = Just [x]
myReverse (x:xs) = myReverse xs ++ [x] -- here's my problem.
I thought that myReverse (x:xs) = Just $ myReverse xs ++ [x] would work, but it doesn't, and I don't know how to fix it. What I would like to know is how to do it, and why.
Thanks for your help!
myReverse returns a Maybe [a], which can't be directly appended to anything because it is not a list. In other words, the value of myReverse xs will be either Nothing or Just <some list>. You need to pattern match on the result.
myReverse (x:xs) =
  case myReverse xs of
    Just list -> ...
    Nothing -> ...
And of course, you need to decide what needs to be done in each of these cases, depending on what you want myReverse to do.
Also keep in mind that not every function needs to be recursive, so you can call the regular reverse from myReverse if you need it.
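For instance, delegating to the Prelude's reverse leaves only the Maybe logic to write (a minimal sketch, assuming the empty list should map to Nothing as in the original attempt):

```haskell
-- A non-recursive version: reverse does the work,
-- and we only decide between Nothing and Just.
myReverse :: [a] -> Maybe [a]
myReverse [] = Nothing
myReverse xs = Just (reverse xs)
```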
As [a] is a Monoid, defined by
instance Monoid [a] where
  mempty = []
  mappend = (++)
Then Maybe [a] is also a Monoid,
instance Monoid a => Monoid (Maybe a) where
  mempty = Nothing
  Nothing `mappend` m = m
  m `mappend` Nothing = m
  Just m1 `mappend` Just m2 = Just (m1 `mappend` m2)
Note the type constraint in the instance declaration, which requires a to be a Monoid; otherwise Maybe a won't be one.
We can then use mappend, (<>), to chain our recursive call, provided we turn the head of the list into a singleton.
import Data.Monoid ((<>))
myReverse :: [a] -> Maybe [a]
myReverse [] = Nothing
myReverse (x:xs) = myReverse xs <> Just [x]
Last note: the previous fold solution can be improved too.
>>> let mrev = foldl' (\x y -> Just [y] <> x ) Nothing
>>> mrev []
Nothing
>>> mrev "hello"
Just "olleh"
Previous fold answer
Knowing that reverse can be defined using a fold as follows,
>>> foldl' (flip (:)) [] [1..5]
[5,4,3,2,1]
This can be rewritten as,
>>> foldl' (\x y -> y:x) [] [1..5]
[5,4,3,2,1]
To adapt it to the Maybe type, we apply the following transformations:
The seed [] becomes (Just []).
The anonymous function must now be applied inside the Just; we use fmap to do that.
This leads us to,
>>> foldl' (\x y -> fmap (y:) x) (Just []) [1..5]
Just [5,4,3,2,1]
Finally,
mreverse xs | null xs   = Nothing
            | otherwise = foldl' (\x y -> fmap (y:) x) (Just []) xs
I thought of something along the lines of luqui's, except applying the Maybe at the end:
myReverse :: [a] -> Maybe [a]
myReverse ys
  | null (myReverse' ys) = Nothing
  | otherwise = Just (myReverse' ys)
  where
    myReverse' [] = []
    myReverse' (x:xs) = myReverse' xs ++ [x]
Or, if you will,
myReverse ys | null (reverse ys) = Nothing
             | otherwise = Just (reverse ys)
