I'm a math PhD student minoring in CS and currently taking a class in Haskell. We just learned about liftM.
The concepts seem similar but I haven't been able to figure out exactly how liftM can be thought of as a lift in the category-theoretical sense (I know very little category theory and was introduced to lifts in a Topology class).
Given the lack of activity -- and the lack of an obvious connection -- I think it's safe to say that liftM was not named because of its connection to topological and category-theoretic lifts.
Instead, I think the term "lift" has come to generically mean any transformation from one domain of reasoning to another, and it is this sense of "lift" that was the historical reason for the name liftM. Specifically: liftM transforms a pure function, lifting it into the domain of a specific monad.
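For concreteness, here is a small sketch of that lifting (the function names are my own, not from the question): liftM takes an ordinary pure function and produces one that operates on values in a chosen monad, here Maybe.

import Control.Monad (liftM)

-- liftM lifts a pure function (a -> b) to a monadic one (m a -> m b).
halve :: Int -> Int          -- an ordinary, pure function
halve = (`div` 2)

halveM :: Maybe Int -> Maybe Int
halveM = liftM halve

-- halveM (Just 10) == Just 5
-- halveM Nothing   == Nothing
-- (For any Monad, liftM behaves like fmap.)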
pure / impure: appears when we talk about the difference between Haskell and the Lisp family.
safe / unsafe: appears when we name functions like unsafePerformIO, unsafeCoerce.
referential transparency / referential opacity: appears when we emphasize the benefit of purely functional programming.
The differences between these terms are very subtle. I've found posts discussing each of them individually, but I'm hoping for a clear comparison between them, and I can't find such a post here yet.
I've always been fond of Amr Sabry's 1998 paper that explored a similar question with the rigor it deserved: https://www.cs.indiana.edu/~sabry/papers/purelyFunctional.ps
A sample quote:
A language is purely functional if (i) it includes every simply typed
lambda-calculus term, and (ii) its call-by-name, call-by-need, and
call-by-value implementations are equivalent modulo divergence and
errors.
While this question can generate a lot of "opinion"-based answers (which I am carefully avoiding!), reading through Amr's paper can put you in the right mindset for thinking about this question, regardless of whether you end up agreeing with him or not.
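To make the "referential transparency" part of the question concrete, here is a small hedged sketch (my own example, not from Sabry's paper) of how unsafePerformIO lets an effect hide behind a pure-looking type:

import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A global mutable counter hidden behind a pure-looking interface.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

next :: () -> Int
next () = unsafePerformIO $ do
  modifyIORef counter (+ 1)
  readIORef counter

-- next has a pure type, but two calls to next () can return different values,
-- so replacing "next ()" by a previously computed result changes the program:
-- referential transparency is lost, which is why the function is "unsafe".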
I understand applicative vs monad style of programming, but most articles discuss this distinction with "simple" monads like Maybe.
But what about monads like Reader, Writer, and State? Are there practical examples of using them in an applicative way?
Every time you use the foo <$> bar <*> baz idiom with monadic functions bar and baz, you are using a Monad in an applicative way. This is not a very deep use of Applicative, but rather a convenient way of writing a bit of code, and it is independent of the particular Monad – hence you will find this style with Reader, Writer and State as well.
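To make that concrete, here is a short sketch using Reader and State (the Config type and field names are my own, and it assumes the mtl package):

import Control.Monad.Reader
import Control.Monad.State

-- Reader example: both reads depend only on the shared environment,
-- not on each other, so the combination is naturally applicative.
data Config = Config { host :: String, port :: Int }

connString :: Reader Config String
connString = (\h p -> h ++ ":" ++ show p) <$> asks host <*> asks port
-- runReader connString (Config "localhost" 8080) == "localhost:8080"

-- State example: two counter draws sequenced applicatively; the state is
-- threaded through, but neither result is inspected to decide what runs next.
fresh :: State Int Int
fresh = state (\n -> (n, n + 1))

pairOfFresh :: State Int (Int, Int)
pairOfFresh = (,) <$> fresh <*> fresh
-- evalState pairOfFresh 0 == (0, 1)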
I am learning the semantics of Haskell and came across this question:
I have tried it but am still unable to arrive at the answer. It would be great if someone could explain how to prove this one. Thank you.
Just a sketch: since p_n(s), for fixed n, is a morphism Ninf -> N, that is, from the set of integers into the integers, the proof can be simplified, using this relation, into a proof of transitivity over the integers.
[1,0,0 .. ] -> [2,0,0 ..] -> [3,0,0 ..] -> ...
I am sure you can find an even more interesting one.
I'm looking for an example of a weakly normalising lambda term.
Am I right in saying that the following:
(λa.b)((λx.xx)(λx.xx))
Reduces to:
b
or:
doesn't terminate (if you try to reduce (λx.xx)(λx.xx))
I wasn't sure if the first reduction is correct, so I just need some clarification. Thanks.
If you keep evaluating the right-hand term first, it will never reach a normal form, so the term is not strongly normalizing. If you evaluate the left-hand term first, it immediately reaches a normal form, so the term is normalizable; this demonstrates that the term is weakly (but not strongly) normalizing. Note that this does not contradict confluence: the non-terminating reduction only ever rewrites the argument, and every term along that path can still be reduced to b.
Note that you're more likely to want to talk about how a rewriting system is normalizing than a particular term. This term is thus a counterexample to the strong normalization property of untyped lambda calculus, but does not provide positive evidence that ULC is weakly normalizing (and it isn't).
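If it helps to experiment, here is a minimal sketch of a normal-order reducer that finds the normal form b of that term (my own code; it uses naive substitution, which is fine for this particular closed example but not capture-safe in general):

-- A minimal untyped lambda calculus with named variables.
data Term = Var String | Lam String Term | App Term Term
  deriving (Show, Eq)

-- Naive substitution; adequate here because the example has no shadowing.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   | x == y    = s
                    | otherwise = Var y
subst x s (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One step of normal-order (leftmost-outermost) reduction, if possible.
stepNormal :: Term -> Maybe Term
stepNormal (App (Lam x b) a) = Just (subst x a b)
stepNormal (App f a) =
  case stepNormal f of
    Just f' -> Just (App f' a)
    Nothing -> App f <$> stepNormal a
stepNormal (Lam x b) = Lam x <$> stepNormal b
stepNormal (Var _)   = Nothing

-- Reduce with a step limit, so non-termination shows up as Nothing.
normalize :: Int -> Term -> Maybe Term
normalize 0 _ = Nothing
normalize n t = case stepNormal t of
                  Nothing -> Just t
                  Just t' -> normalize (n - 1) t'

omega, example :: Term
omega   = App (Lam "x" (App (Var "x") (Var "x"))) (Lam "x" (App (Var "x") (Var "x")))
example = App (Lam "a" (Var "b")) omega

main :: IO ()
main = print (normalize 100 example)  -- Just (Var "b"): normal order finds the normal form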
Why is the function for lifting a value into a functor named pure in Control.Applicative?
Think of pure as an adjective.
foo <*> pure 4 = foo applied to the pure value 4.
(As for the exact reason why it's called pure, probably only McBride and Paterson will know.)
It's a little like fromInteger. Its argument is always a pure value or function that will be lifted into the functor. Perhaps it should have been called fromPure, but you know how Haskell people love to shorten names (e.g. fst and snd instead of first and second...).
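A small illustration of pure as "the lifted form of a plain value", with the surrounding context picking the functor:

justFive :: Maybe Int
justFive = pure 5                 -- Just 5

fives :: [Int]
fives = pure 5                    -- [5]

applied :: Maybe Int
applied = pure (+ 1) <*> pure 4   -- Just 5: a pure function applied to a pure value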