Extract a real or default value from a Monad with fmap or <$> or the likes -- how? [duplicate] - haskell

Is there a built-in function with signature :: (Monad m) => m a -> a ?
Hoogle tells that there is no such function.
Can you explain why?

A monad only supplies two functions:
return :: Monad m => a -> m a
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Both of these return something of type m a, so there is no way to combine these in any way to get a function of type Monad m => m a -> a. To do that, you'll need more than these two functions, so you need to know more about m than that it's a monad.
For example, the Identity monad has runIdentity :: Identity a -> a, and several monads have similar functions, but there is no way to provide it generically. In fact, the inability to "escape" from the monad is essential for monads like IO.
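For instance, here is a minimal sketch (the names fromIdentity and withDefault are mine, just for illustration) of what those monad-specific escape hatches look like; note that fromMaybe has to be handed a default value, which is exactly the extra information a generic extraction function could not invent:
import Data.Functor.Identity (Identity (..))
import Data.Maybe (fromMaybe)

-- Identity is trivial enough that extraction is always possible.
fromIdentity :: Int
fromIdentity = runIdentity (return 5)   -- 5

-- Maybe only allows extraction if you supply a fallback for Nothing.
withDefault :: Int
withDefault = fromMaybe 0 (Just 5)      -- 5; fromMaybe 0 Nothing would give 0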

There is probably a better answer than this, but one way to see why you cannot have a type (Monad m) => m a -> a is to consider a null monad:
data Null a = Null
instance Monad Null where  -- (with modern GHC you would also need Functor and Applicative instances for Null)
  return a = Null
  ma >>= f = Null
Now (Monad m) => m a -> a would have to work at Null a -> a, i.e. getting something out of nothing. You can't do that.

This doesn't exist because Monad is a pattern for composition, not a pattern for decomposition. You can always put more pieces together with the interface it defines. It doesn't say a thing about taking anything apart.
Asking why you can't take something out is like asking why Java's Iterator interface doesn't contain a method for adding elements to what it's iterating over. It's just not what the Iterator interface is for.
And your arguments about specific types having a kind of extract function follow in the exact same way. Some particular implementation of Iterator might have an add function. But since it's not what Iterators are for, the presence of that method on some particular instance is irrelevant.
And the presence of fromJust is just as irrelevant. It's not part of the behavior Monad is intended to describe. Others have given lots of examples of types where there is no value for extract to work on. But those types still support the intended semantics of Monad. This is important. It means that Monad is a more general interface than you are giving it credit for.

Suppose there was such a function:
extract :: Monad m => m a -> a
Now you could write a "function" like this:
appendLine :: String -> String
appendLine str = str ++ extract getLine
Unless the extract function was guaranteed never to terminate, this would violate referential transparency, because the result of appendLine "foo" would (a) depend on something other than "foo", and (b) evaluate to different values when evaluated in different contexts.
Or in simpler words, if there was an actually useful extract operation Haskell would not be purely functional.

Is there a built-in function with signature :: (Monad m) => m a -> a ?
If Hoogle says there isn't...then there probably isn't, assuming your definition of "built in" is "in the base libraries".
Hoogle tells that there is no such function. Can you explain why?
That's easy, because Hoogle didn't find any function in the base libraries that matches that type signature!
More seriously, I suppose you were asking for the monadic explanation. The issues are safety and meaning. (See also my previous thoughts on magicMonadUnwrap :: Monad m => m a -> a)
Suppose I tell you I have a value which has the type [Int]. Since we know that [] is a monad, this is similar to telling you I have a value which has the type Monad m => m Int. So let's suppose you want to get the Int out of that [Int]. Well, which Int do you want? The first one? The last one? What if the value I told you about is actually an empty list? In that case, there isn't even an Int to give you! So for lists, it is unsafe to try and extract a single value willy-nilly like that. Even when it is safe (a non-empty list), you need a list-specific function (for example, head) to clarify what you mean by desiring f :: [Int] -> Int. Hopefully you can intuit from here that the meaning of Monad m => m a -> a is simply not well defined. It could hold multiple meanings for the same monad, or it could mean absolutely nothing at all for some monads, and sometimes, it's just simply not safe.
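To make the ambiguity concrete, here is a small sketch of several list-specific functions that all answer "give me an Int from a [Int]" in different, equally defensible ways (firstOne, lastOne and safeFirst are made-up names for the illustration):
import Data.Maybe (fromMaybe)

firstOne :: [Int] -> Int
firstOne = head                 -- partial: crashes on []

lastOne :: [Int] -> Int
lastOne = last                  -- partial, and a different choice of element

safeFirst :: [Int] -> Int
safeFirst = fromMaybe 0 . safeHead   -- total, but needs an arbitrary default
  where
    safeHead (x:_) = Just x
    safeHead []    = Nothing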

Because it may make no sense (actually, does make no sense in many instances).
For example, I might define a Parser Monad like this:
data Parser a = Parser (String -> [(a, String)])
Now there is absolutely no sensible default way to get a String out of a Parser String. Actually, there is no way at all to get a String out of this with just the Monad.
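The only way "out" is to actually run the parser on some input, and that input is extra information the Monad interface knows nothing about. A minimal sketch (runParser and item are hypothetical names, just to illustrate):
data Parser a = Parser (String -> [(a, String)])

runParser :: Parser a -> String -> [(a, String)]
runParser (Parser p) input = p input

-- A tiny example parser that consumes one character.
item :: Parser Char
item = Parser $ \s -> case s of
  (c:cs) -> [(c, cs)]
  []     -> []
-- runParser item "abc" == [('a', "bc")]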

There is a useful extract function, and some related functions, in the comonad package: http://hackage.haskell.org/package/comonad-5.0.4/docs/Control-Comonad.html
It's only defined for some functors/monads, and it doesn't necessarily give you the whole answer but rather an answer. One could imagine subclasses of Comonad that give you intermediate stages of picking the answer, where you could control it; probably related to possible subclasses of Traversable. I don't know whether such things are defined anywhere.
The reason Hoogle doesn't list this function at all appears to be that the comonad package isn't indexed; otherwise I think extract would show up in the results (with a note about the different constraint) for those monads that also have a Comonad instance. Perhaps that is because the Hoogle parser is incomplete and fails on some lines of code.
My alternative answers:
You can perform a (possibly recursive) case analysis, if you've imported the type's constructors.
You can move the code that would have used the extracted value into the monad itself, using monad >>= \a -> return (...your code uses a here...) as an alternative code structure; as long as you can eventually convert the monad to an IO () that prints your outputs, you're done (see the sketch below). This doesn't look like extraction, but maths isn't the same as the real world.
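A minimal sketch of that second suggestion, staying inside Maybe and only leaving the monad at the very edge of the program, in main:
main :: IO ()
main = do
  let result = Just 2 >>= \a -> return (a * 10)   -- keep working inside Maybe
  print result                                    -- prints: Just 20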

Well, technically there is unsafePerformIO for the IO monad.
But, as the name itself suggests, this function is evil and you should only use it if you really know what you are doing (and if you have to ask whether you know or not, then you don't).
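For completeness, this is what it looks like; treat it as a demonstration of why the function is dangerous rather than a recipe:
import System.IO.Unsafe (unsafePerformIO)

-- Looks like a pure String, but secretly performs IO when it is first forced.
-- The compiler is free to share, duplicate or reorder this, so don't do it.
sneakyLine :: String
sneakyLine = unsafePerformIO getLine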

Related

How does return statement work in Haskell? [duplicate]

Consider these functions
f1 :: Maybe Int
f1 = return 1
f2 :: [Int]
f2 = return 1
Both have the same statement return 1, but the results are different: f1 gives the value Just 1 and f2 gives the value [1].
Looks like Haskell invokes two different versions of return based on return type. I like to know more about this kind of function invocation. Is there a name for this feature in programming languages?
This is a long meandering answer!
As you've probably seen from the comments and Thomas's excellent (but very technical) answer, you've asked a very hard question. Well done!
Rather than try to explain the technical answer I've tried to give you a broad overview of what Haskell does behind the scenes without diving into technical detail. Hopefully it will help you to get a big picture view of what's going on.
return is an example of type inference.
Most modern languages have some notion of polymorphism. For example var x = 1 + 1 will set x equal to 2. In a statically typed language 2 will usually be an int. If you say var y = 1.0 + 1.0 then y will be a float. The operator + (which is just a function with special syntax) works for both types.
Most imperative languages, especially object-oriented languages, can only do type inference one way. Every variable has a fixed type. When you call a function it looks at the types of the arguments and chooses a version of that function that fits the types (or complains if it can't find one).
When you assign the result of a function to a variable the variable already has a type and if it doesn't agree with the type of the return value you get an error.
So in an imperative language the "flow" of type deduction follows time in your program: deduce the type of a variable, do something with it, and deduce the type of the result. In a dynamically typed language (such as Python or JavaScript) the type of a variable is not assigned until the value of the variable is computed (which is why there don't seem to be types). In a statically typed language the types are worked out ahead of time (by the compiler), but the logic is the same: the compiler works out what the types of variables are going to be, but it does so by following the logic of the program in the same way the program runs.
In Haskell the type inference also follows the logic of the program. Being Haskell, it does so in a very mathematically pure way (called System F). The language of types (that is, the rules by which types are deduced) is similar to Haskell itself.
Now remember Haskell is a lazy language. It doesn't work out the value of anything until it needs it. That's why it makes sense in Haskell to have infinite data structures. It never occurs to Haskell that a data structure is infinite because it doesn't bother to work it out until it needs to.
Now all that lazy magic happens at the type level too. In the same way that Haskell doesn't work out what the value of an expression is until it really needs to, Haskell doesn't work out what the type of an expression is until it really needs to.
Consider this function
func (x : y : rest) = (x,y) : func rest
func _ = []
If you ask Haskell for the type of this function it has a look at the definition, sees [] and : and deduces that it's working with lists. But it never needs to look at the types of x and y; it just knows that they have to be the same because they end up in the same list. So it deduces the type of the function as [a] -> [(a, a)], where a is a type that it hasn't bothered to work out yet.
So far no magic. But it's useful to notice the difference between this idea and how it would be done in an OO language. Haskell doesn't convert the arguments to Object, do its thing and then convert back. Haskell just hasn't been asked explicitly what the type of the list is. So it doesn't care.
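You can check the deduced type in ghci (assuming func is in scope; ghci may pick a different letter for the type variable):
:type func
func :: [a] -> [(a, a)]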
Now try typing the following into ghci
maxBound - length ""
maxBound : "Hello"
Now what just happened!? maxBound must be a Char because I put it on the front of a string, and it must be an integer because I subtracted 0 (the length of an empty string) from it and got a number. Plus the two values are clearly very different.
So what is the type of maxBound? Let's ask ghci!
:type maxBound
maxBound :: Bounded a => a
AAargh! What does that mean? Basically it means that it hasn't bothered to work out exactly what a is, but it has to be Bounded. If you type :info Bounded you get three useful lines
class Bounded a where
  minBound :: a
  maxBound :: a
and a lot of less useful lines
So if a is Bounded there are values minBound and maxBound of type a.
In fact, under the hood the Bounded constraint is just a value: its "type" is a record with fields minBound and maxBound. Because it's a value, Haskell doesn't look at it until it really needs to.
So I appear to have meandered somewhere in the region of the answer to your question. Before we move on to return (which, as you may have noticed from the comments, is a wonderfully complex beast), let's look at read.
ghci again
read "42" + 7
read "'H'" : "ello"
length (read "[1,2,3]")
and hopefully you won't be too surprised to find that there are definitions
read :: Read a => String -> a
class Read a where
  read :: String -> a   -- (simplified; the real Read class is a little bigger)
so Read a is just a record containing a single value, which is a function String -> a. It's very tempting to assume that there is one read function which looks at a string, works out what type is contained in the string and returns that type. But it does the opposite. It completely ignores the string until its result is needed. When the value is needed, Haskell first works out what type it's expecting; once it's done that it goes and gets the appropriate version of the read function and combines it with the string.
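A quick sketch of that order of events: the same string literal, but the expected result type picks which read gets used (readAsInt and readAsDouble are made-up names):
readAsInt :: Int
readAsInt = read "42"       -- 42

readAsDouble :: Double
readAsDouble = read "42"    -- 42.0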
now consider something slightly more complex
readList :: Read a => [String] -> [a]
readList strs = map read strs
under the hood readList actually takes two arguments
readList' :: Read a -> [String] -> [a]   -- pseudo-Haskell: the Read a dictionary is an explicit argument
readList' {read = f} strs = map f strs   -- pseudo-Haskell: pattern matching the read field out of the dictionary
Again, as Haskell is lazy, it only bothers looking at the arguments when it needs to find out the return value; at that point it knows what a is, so the compiler can go and find the right version of read. Until then it doesn't care.
Hopefully that's given you a bit of an idea of what's happening and why Haskell can "overload" on the return type. But it's important to remember it's not overloading in the conventional sense. Every function has only one definition. It's just that one of the arguments is a bag of functions. readList' doesn't ever know what types it's dealing with. It just knows it gets a function String -> a and some Strings; to do the application it just passes the arguments to map. map in turn doesn't even know it gets strings. When you get deeper into Haskell it becomes very important that functions don't know very much about the types they're dealing with.
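If it helps, here is a hand-rolled sketch of that "bag of functions" idea, with the dictionary written as an ordinary record (ReadDict, readFn and intReader are made-up names; GHC generates the real dictionaries for you):
newtype ReadDict a = ReadDict { readFn :: String -> a }

readListWith :: ReadDict a -> [String] -> [a]
readListWith dict strs = map (readFn dict) strs

intReader :: ReadDict Int
intReader = ReadDict read

-- readListWith intReader ["1", "2", "3"] == [1, 2, 3]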
Now let's look at return.
Remember how I said that the type system in Haskell was very similar to Haskell itself. Remember that in Haskell functions are just ordinary values.
Does this mean I can have a type that takes a type as an argument and returns another type? Of course it does!
You've seen some type functions already: Maybe takes a type a and returns another type whose values are either Just an a or Nothing. [] takes a type a and returns a list of as. Type functions in Haskell are usually containers. For example I could define a type function BinaryTree which stores a load of a's in a tree-like structure. There are of course lots of much stranger ones.
So, if these type functions are similar to ordinary types I can have a typeclass that contains type functions. One such typeclass is Monad
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
so here m is some type function. If I want to define Monad for m I need to define return and the scary-looking operator below it (which is called bind).
As others have pointed out the return is a really misleading name for a fairly boring function. The team that designed Haskell have since realised their mistake and they're genuinely sorry about it. return is just an ordinary function that takes an argument and returns a Monad with that type in it. (You never asked what a Monad actually is so I'm not going to tell you)
Let's define Monad for m = Maybe!
First I need to define return. What should return x be? Remember I'm only allowed to define the function once, so I can't look at x because I don't know what type it is. I could always return Nothing, but that seems a waste of a perfectly good function. Let's define return x = Just x because that's literally the only other thing I can do.
What about the scary bind thing? What can we say about x >>= f? Well, x is a Maybe a for some unknown type a, and f is a function that takes an a and returns a Maybe b. Somehow I need to combine these to get a Maybe b.
So I need to define Nothing >>= f. I can't call f because it needs an argument of type a and I don't have a value of type a; I don't even know what a is. I've only got one choice, which is to define
Nothing >>= f = Nothing
What about Just x >>= f? Well, I know x is of type a and f takes an a as an argument, so I can set y = f x and deduce that y is of type Maybe b. That's exactly the type I'm trying to produce, so ...
Just x >>= f = f x
So I've got a Monad! What if m is List? Well, I can follow a similar sort of logic and define
return x = [x]
[] >>= f = []
(x : xs) >>= f = f x ++ (xs >>= f)
Hooray another Monad! It's a nice exercise to go through the steps and convince yourself that there's no other sensible way of defining this.
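If you want to compile instances like these yourself, modern GHC also asks for Functor and Applicative instances (Applicative is a superclass of Monad), and you'll want a fresh type so it doesn't clash with the Prelude's Maybe. A sketch using a made-up Option type:
data Option a = None | Some a deriving Show

instance Functor Option where
  fmap _ None     = None
  fmap f (Some x) = Some (f x)

instance Applicative Option where
  pure = Some
  None   <*> _ = None
  Some f <*> x = fmap f x

instance Monad Option where
  return = pure
  None   >>= _ = None
  Some x >>= f = f x

-- Some 2 >>= (\x -> Some (x * 10)) == Some 20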
So what happens when I call return 1?
Nothing!
Haskell's lazy, remember. The thunk return 1 ("thunk" is the technical term) just sits there until someone needs the value. As soon as Haskell needs the value it knows what type the value should be; in particular it can deduce that m is List. Now that it knows that, Haskell can find the instance of Monad for List, and as soon as it does it has access to the correct version of return.
So finally Haskell is ready to call return, which in this case returns [1]!
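You can watch the same thing in ghci by asking for the same expression at two different types (the line after each expression is ghci's output):
return 1 :: Maybe Int
Just 1
return 1 :: [Int]
[1]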
The return function is from the Monad class:
class Applicative m => Monad (m :: * -> *) where
  ...
  return :: a -> m a
So return takes any value of type a and results in a value of type m a. The monad, m, as you've observed is polymorphic using the Haskell type class Monad for ad hoc polymorphism.
At this point you probably realize return is not a good, intuitive name. It's not even a built-in function or a statement like in many other languages. In fact a better-named and identically-operating function exists - pure. In almost all cases return = pure.
That is, the function return is the same as the function pure (from the Applicative class) - I often think to myself "this monadic value is purely the underlying a" and I try to use pure instead of return if there isn't already a convention in the codebase.
You can use return (or pure) for any type that is an instance of Monad. This includes the Maybe monad to get a value of type Maybe a:
instance Monad Maybe where
  ...
  return = pure -- which is from Applicative
  ...

instance Applicative Maybe where
  pure = Just
Or for the list monad to get a value of [a]:
instance Applicative [] where
  {-# INLINE pure #-}
  pure x = [x]
Or, as a more complex example, Aeson's parse monad to get a value of type Parser a:
instance Applicative Parser where
  pure a = Parser $ \_path _kf ks -> ks a

Solving linear equations - Math.LinearEquationSolver returns IO(Maybe[Rational])

I am writing a program to solve certain mathematical problems, and Haskell is the language I've written it in so far (for various reasons). At one point, I need to solve a system of linear equations, and then use the result for something else. I can give more details if needed, but didn't want to go crazy at first.
The easiest way I could find of solving linear equations was to use the Math.LinearEquationSolver module from the linearEqSolver package on hackage. Everything works fine, except that all of the methods (e.g. solveRationalLinearEqs) have a return type of IO (Maybe [Rational]). I want to be able to feed the solution into a method which accepts [Rational].
I know that the whole point of IO is that you can't just take stuff out of it and put it back in, but it has been enough years since I last wrote Haskell that I've forgotten most of what I used to know about IO.
Is there an easy explanation/example of what I should do? Is the simplest solution to use some other module/find some other way of solving the system of equations?
Edit: I have tried using the HMatrix method linearSolveLS but this returns a list of type [Double] (and is also nowhere near accurate enough for what I need, even if I did settle for a non-fractional type), whereas I would really prefer the return to be of type [Rational] (as in LinearEquationSolver).
The most idiomatic way to do this is to use >>= to combine the IO action that produces your result with the rest of your program.
(>>=) :: Monad m => m a -> (a -> m b) -> m b
(>>=) :: IO (Maybe [Rational]) -> ((Maybe [Rational]) -> IO a) -> IO a
You would use it like this:
(linearEqSolver arg1 arg2 arg3 ... argn) >>= \maybeResult -> case maybeResult of
  Just resultList -> (... :: IO a)
  Nothing         -> (... :: IO a)
Alternatively, if the rest of your code doesn't need IO, you can use fmap, or its infix synonym <$> to map a pure function over the result of linearEqSolver.
theRestOfYourCode :: Maybe [Rational] -> a
(theRestOfYourCode <$> (linearEqSolver arg1 arg2 ... argn)) :: IO a
Note: Most of these type signatures are just for clarity, and can be inferred.
You could also use the Monad instance for Maybe in the same way, but pattern matching is clearer in this case, since it is hard to mentally parse expressions that use multiple Monad instances in general.
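Putting it together for the asker's concrete case, here is a hedged sketch. It assumes solveRationalLinearEqs takes a solver backend, a coefficient matrix and a right-hand-side vector (check the linearEqSolver docs for your version); useSolution is a made-up stand-in for whatever code actually wants the [Rational]:
import Math.LinearEquationSolver

-- Placeholder for the part of the program that wants a plain [Rational].
useSolution :: [Rational] -> Rational
useSolution = sum

main :: IO ()
main = do
  -- Z3 names the SMT backend; depending on the package version, the Solver
  -- type may need to be imported from Data.SBV instead.
  result <- solveRationalLinearEqs Z3 [[1, 2], [3, 4]] [5, 6]
  case result of
    Just xs -> print (useSolution xs)
    Nothing -> putStrLn "No solution found."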

Can I declare a NULL value in Haskell?

Just curious: it seems that when declaring a name, we always specify some valid value, like let a = 3. The question is, imperative languages including C/Java always have a "null" keyword. Does Haskell have a similar thing? When could a function object be null?
There is a “null” value that you can use for variables of any type. It's called ⊥ (pronounced "bottom"). We don't need a keyword to produce bottom values; actually ⊥ is the value of any computation which doesn't terminate. For instance,
bottom = let x = x in x -- or simply `bottom = bottom`
will infinitely loop. It's obviously not a good idea to do this deliberately, however you can use undefined as a “standard bottom value”. It's perhaps the closest thing Haskell has to Java's null keyword.
But you definitely shouldn't/can't use this for most of the applications where Java programmers would grab for null.
Since everything in Haskell is immutable, a value that's undefined will always stay undefined. It's not possible to use this as a “hold on a second, I'll define it later” indication†.
It's not possible to check whether a value is bottom or not. For rather deep theoretical reasons, in fact. So you can't use this for values that may or may not be defined.
And you know what? It's really good that Haskell doesn't allow this! In Java, you constantly need to be wary that values might be null. In Haskell, if a value is bottom then something is plain broken, but this will never be part of intended behaviour / something you might need to check for. If for some value it's intended that it might not be defined, then you must always make this explicit by wrapping the type in a Maybe. By doing this, you make sure that anybody trying to use the value must first check whether it's there. Not possible to forget this and run into a null-reference exception at runtime!
And because Haskell is so good at handling variant types, checking the contents of a Maybe-wrapped value is really not too cumbersome. You can just do it explicitly with pattern matching,
quun :: Int -> String
quun i = case computationWhichMayFail i of
  Just j  -> show j
  Nothing -> "blearg, failed"
computationWhichMayFail :: Int -> Maybe Int
or you can use the fact that Maybe is a functor. Indeed it is an instance of almost every specific functor class: Functor, Applicative, Alternative, Foldable, Traversable, Monad, MonadPlus. It also lifts semigroups to monoids.
Dᴏɴ'ᴛ Pᴀɴɪᴄ now,
you don't need to know what the heck these things are. But when you've learned what they do, you will be able to write very concise code that automagically handles missing values always in the right way, with zero risk of missing a check.
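As a tiny taste of that concise style (describe is a made-up example name), the Functor instance alone already lets you transform a possibly-missing value without ever unwrapping it by hand:
import Data.Maybe (fromMaybe)

describe :: Maybe Int -> String
describe m = fromMaybe "no value" (fmap (\n -> "got " ++ show n) m)

-- describe (Just 3) == "got 3"
-- describe Nothing  == "no value"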
†Because Haskell is lazy, you generally don't need to defer any calculations to be done later. The compiler will automatically see to it that the computation is done when it's necessary, and no sooner.
There is no null in Haskell. What you want is the Maybe monad.
data Maybe a
  = Just a
  | Nothing
Nothing refers to classic null and Just contains a value.
You can then pattern match against it:
foo Nothing = Nothing
foo (Just a) = Just (a * 10)
Or with case syntax:
let m = Just 10
in case m of
     Just v  -> print v
     Nothing -> putStrLn "Sorry, there's no value. :("
Or use the superior functionality provided by the typeclass instances for Functor, Applicative, Alternative, Monad, MonadPlus and Foldable.
This could then look like this:
foo :: Maybe Int -> Maybe Int -> Maybe Int
foo x y = do
  a <- x
  b <- y
  return $ a + b
You can even use the more general signature:
foo :: (Monad m, Num a) => m a -> m a -> m a
Which makes this function work for ANY data type that is capable of the functionality provided by Monad. So you can use foo with (Num a) => Maybe a, (Num a) => [a], (Num a) => Either e a and so on.
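For instance, evaluating the very same foo at a few of those types in ghci (the output is shown under each expression):
foo (Just 1) (Just 2)
Just 3
foo [1, 2] [10]
[11,12]
foo (Right 1) (Right 2) :: Either String Int
Right 3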
Haskell does not have "null". This is a design feature. It completely prevents any possibility of your code crashing due to a null-pointer exception.
If you look at code written in an imperative language, 99% of the code expects stuff to never be null, and will malfunction catastrophically if you give it null. But then 1% of the code does expect nulls, and uses this feature to specify optional arguments or whatever. But you can't easily tell, by looking at the code, which parts are expecting nulls as legal arguments, and which parts aren't. Hopefully it's documented — but don't hold your breath!
In Haskell, there is no null. If that argument is declared as Customer, then there must be an actual, real Customer there. You can't just pass in a null (intentionally or by mistake). So the 99% of the code that is expecting a real Customer will always work.
But what about the other 1%? Well, for that we have Maybe. But it's an explicit thing; you have to explicitly say "this value is optional". And you have to explicitly check when you use it. You cannot "forget" to check; it won't compile.
So yes, there is no "null", but there is Maybe which is kinda similar, but safer.
Not in Haskell (or in many other FP languages). If you have some expression of some type T, its evaluation will give a value of type T, with the following exceptions:
infinite recursion may make the program "loop forever" and fail to return anything
let f n = f (n+1) in f 0
runtime errors can abort the program early, e.g.:
division by zero, square root of negative, and other numerical errors
head [], fromJust Nothing, and other partial functions used on invalid inputs
explicit calls to undefined, error "message", or other exception-throwing primitives
Note that even if the above cases might be regarded as "special" values called "bottoms" (the name comes from domain theory), you can not test against these values at runtime, in general. So, these are not at all the same thing as Java's null. More precisely, you can't write things like
-- assume f :: Int -> Int
if (f 5) is a division-by-zero or infinite recursion
then 12
else 4
Some exceptional values can be caught in the IO monad, but forget about that -- exceptions in Haskell are not idiomatic, and roughly only used for IO errors.
If you want an exceptional value which can be tested at run-time, use the Maybe a type, as #bash0r already suggested. This type is similar to Scala's Option[A] or Java's not-so-much-used Optional<A>.
The value of having both a type T and a type Maybe T is being able to precisely identify which functions always succeed and which ones can fail. In Haskell the following is frowned upon, for instance:
-- Finds a value in a list. Returns -1 if not present.
findIndex :: Eq a => [a] -> a -> Int
Instead this is preferred:
-- Finds a value in a list. Returns Nothing if not present.
findIndex :: Eq a => [a] -> a -> Maybe Int
The result of the latter is less convenient than that of the former, since the Int must be unwrapped at every call. This is good: in this way each user of the function is prevented from simply "ignoring" the not-present case and writing buggy code.
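A minimal sketch of that second, preferred style in action (this particular findIndex is just an illustration, not the one from Data.List):
findIndex :: Eq a => [a] -> a -> Maybe Int
findIndex xs x = lookup x (zip xs [0 ..])

report :: String
report = case findIndex "haskell" 'k' of
  Just i  -> "found at index " ++ show i   -- "found at index 3"
  Nothing -> "not present"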

Unlike a Functor, a Monad can change shape?

I've always enjoyed the following intuitive explanation of a monad's power relative to a functor: a monad can change shape; a functor cannot.
For example: length $ fmap f [1,2,3] always equals 3.
With a monad, however, length $ [1,2,3] >>= g will often not equal 3. For example, if g is defined as:
g :: (Num a) => a -> [a]
g x = if x==2 then [] else [x]
then [1,2,3] >>= g is equal to [1,3].
The thing that troubles me slightly is the type signature of g. It seems impossible to define a function which changes the shape of the input with a generic monadic type such as:
h :: (Monad m, Num a) => a -> m a
The MonadPlus or MonadZero type classes have relevant zero elements, to use instead of [], but now we have something more than a monad.
Am I correct? If so, is there a way to express this subtlety to a newcomer to Haskell? I'd like to make my beloved "monads can change shape" phrase just a touch more honest, if need be.
I've always enjoyed the following intuitive explanation of a monad's power relative to a functor: a monad can change shape; a functor cannot.
You're missing a bit of subtlety here, by the way. For the sake of terminology, I'll divide a Functor in the Haskell sense into three parts: The parametric component determined by the type parameter and operated on by fmap, the unchanging parts such as the tuple constructor in State, and the "shape" as anything else, such as choices between constructors (e.g., Nothing vs. Just) or parts involving other type parameters (e.g., the environment in Reader).
A Functor alone is limited to mapping functions over the parametric portion, of course.
A Monad can create new "shapes" based on the values of the parametric portion, which allows much more than just changing shapes. Duplicating every element in a list or dropping the first five elements would change the shape, but filtering a list requires inspecting the elements.
This is essentially how Applicative fits between them--it allows you to combine the shapes and parametric values of two Functors independently, without letting the latter influence the former.
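A small sketch of that Applicative point: the shape of the result below (a list of six pairs) is fully determined by the shapes of the two inputs (three elements and two elements); the values themselves can't shrink or grow it:
pairs :: [(Int, Char)]
pairs = (,) <$> [1, 2, 3] <*> ['a', 'b']
-- [(1,'a'),(1,'b'),(2,'a'),(2,'b'),(3,'a'),(3,'b')]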
Am I correct? If so, is there a way to express this subtlety to a newcomer to Haskell? I'd like to make my beloved "monads can change shape" phrase just a touch more honest, if need be.
Perhaps the subtlety you're looking for here is that you're not really "changing" anything. Nothing in a Monad lets you explicitly mess with the shape. What it lets you do is create new shapes based on each parametric value, and have those new shapes recombined into a new composite shape.
Thus, you'll always be limited by the available ways to create shapes. With a completely generic Monad all you have is return, which by definition creates whatever shape is necessary such that (>>= return) is the identity function. The definition of a Monad tells you what you can do, given certain kinds of functions; it doesn't provide those functions for you.
Monad's operations can "change the shape" of values to the extent that the >>= function replaces leaf nodes in the "tree" that is the original value with a new substructure derived from the node's value (for a suitably general notion of "tree" - in the list case, the "tree" is associative).
In your list example what is happening is that each number (leaf) is being replaced by the new list that results when g is applied to that number. The overall structure of the original list still can be seen if you know what you're looking for; the results of g are still there in order, they've just been smashed together so you can't tell where one ends and the next begins unless you already know.
A more enlightening point of view may be to consider fmap and join instead of >>=. Together with return, either way gives an equivalent definition of a monad. In the fmap/join view, though, what is happening here is more clear. Continuing with your list example, first g is fmapped over the list yielding [[1],[],[3]]. Then that list is joined, which for list is just concat.
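Spelled out in ghci, using the g from the question (the output is shown under each expression); at the list type, join is just concat:
fmap g [1, 2, 3]
[[1],[],[3]]
concat (fmap g [1, 2, 3])
[1,3]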
Just because the monad pattern includes some particular instances that allow shape changes doesn't mean every instance can have shape changes. For example, there is only one "shape" available in the Identity monad:
newtype Identity a = Identity a

instance Monad Identity where
  return = Identity
  Identity a >>= f = f a
In fact, it's not clear to me that very many monads have meaningful "shape"s: for example, what does shape mean in the State, Reader, Writer, ST, STM, or IO monads?
The key combinator for monads is (>>=). Knowing that it composes two monadic values and reading its type signature, the power of monads becomes more apparent:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
The future action can depend entirely on the outcome of the first action, because it is a function of its result. This power comes at a price though: Functions in Haskell are entirely opaque, so there is no way for you to get any information about a composed action without actually running it. As a side note, this is where arrows come in.
A function with a signature like h indeed cannot do many interesting things beyond performing some arithmetic on its argument. So, you have the correct intuition there.
However, it might help to look at commonly used libraries for functions with similar signatures. You'll find that the most generic ones, as you'd expect, perform generic monad operations like return, liftM, or join. Also, when you use liftM or fmap to lift an ordinary function into a monadic function, you typically wind up with a similarly generic signature, and this is quite convenient for integrating pure functions with monadic code.
In order to use the structure that a particular monad offers, you inevitably need to use some knowledge about the specific monad you're in to build new and interesting computations in that monad. Consider the state monad, (s -> (a, s)). Without knowing that type, we can't write get = \s -> (s, s), but without being able to access the state, there's not much point to being in the monad.
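A sketch of that last point, with the State shape written out by hand (this is a simplified stand-alone State, not the one from transformers):
newtype State s a = State { runState :: s -> (a, s) }

-- get only makes sense because we know the concrete shape s -> (a, s).
get :: State s s
get = State $ \s -> (s, s)

-- runState get 42 == (42, 42)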
The simplest type of a function satisfying the requirement I can imagine is this:
enigma :: Monad m => m () -> m ()
One can implement it in one of the following ways:
enigma1 m = m -- not changing the shape
enigma2 _ = return () -- changing the shape
This was a very simple change -- enigma2 just discards the shape and replaces it with the trivial one. Another kind of generic change is combining two shapes together:
foo :: Monad m => m () -> m () -> m ()
foo a b = a >> b
The result of foo can have shape different from both a and b.
A third obvious change of shape, requiring the full power of the monad, is a
join :: Monad m => m (m a) -> m a
join x = x >>= id
The shape of join x is usually not the same as of x itself.
Combining those primitive changes of shape, one can derive non-trivial things like sequence, foldM and the like.
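A few of these shape changes evaluated in ghci (output shown under each expression; join lives in Control.Monad):
import Control.Monad (join)
join [[1, 2], [], [3]]
[1,2,3]
sequence [Just 1, Just 2]
Just [1,2]
sequence [Just 1, Nothing]
Nothing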
Does
h :: (Monad m, Num a) => a -> m a   -- note: with modern GHC this needs (MonadFail m, Eq a, Num a)
h 0 = fail "Failed."
h a = return a
suit your needs? For example,
> [0,1,2,3] >>= h
[1,2,3]
This isn't a full answer, but I have a few things to say about your question that don't really fit into a comment.
Firstly, Monad and Functor are typeclasses; they classify types. So it is odd to say that "a monad can change shape; a functor cannot." I believe what you are trying to talk about is a "Monadic value" or perhaps a "monadic action": a value whose type is m a for some Monad m of kind * -> * and some other type of kind *. I'm not entirely sure what to call Functor f :: f a, I suppose I'd call it a "value in a functor", though that's not the best description of, say, IO String (IO is a functor).
Secondly, note that all Monads are necessarily Functors (fmap = liftM), so I'd say the difference you observe is between fmap and >>=, or even between f and g, rather than between Monad and Functor.
