How to implement optimal beta reduction in Lévy's sense? [closed] - haskell

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
In 1990, John Lamping published a paper proposing an optimal implementation of the untyped lambda calculus. Since that paper is now 25 years old, I wonder how much we have advanced since then. Thus, my question is: what is a simple description of Lamping's optimal lambda-calculus evaluation algorithm (or, if we have made improvements since, of the improved algorithm), preferably explained briefly in Haskell-ish pseudocode?
Update: as I've learned more since I asked, I believe a valid answer could simply be pseudocode for an unbloated algorithm that 1. maps pure untyped lambda terms to interaction nets; 2. reduces those nets; and 3. maps back from nets to lambda terms, such that the whole process normalizes the initial lambda term optimally.
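Not an answer to the optimality question itself, but for reference, here is a plain normal-order normalizer that any optimal (Lamping-style) evaluator must agree with on normal forms. This is a minimal sketch of my own using de Bruijn indices; none of the names below come from Lamping's paper:

```haskell
-- Plain (non-optimal) normal-order normalizer for the pure untyped
-- lambda calculus, using de Bruijn indices. It serves only as the
-- reference semantics for steps 1-3 in the question: an optimal
-- sharing-graph evaluator must produce the same normal forms.

data Term = Var Int | Lam Term | App Term Term
  deriving (Show, Eq)

-- Shift free variables >= c by d.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Substitute term s for variable j in t.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- Beta-reduce (\x. body) arg = body[x := arg].
beta :: Term -> Term -> Term
beta body arg = shift (-1) 0 (subst 0 (shift 1 0 arg) body)

-- Weak head normal form: reduce the head redex only.
whnf :: Term -> Term
whnf (App f a) = case whnf f of
  Lam body -> whnf (beta body a)
  f'       -> App f' a
whnf t = t

-- Full leftmost-outermost (normal-order) normalization.
normalize :: Term -> Term
normalize t = case whnf t of
  Lam body -> Lam (normalize body)
  App f a  -> App (normalize f) (normalize a)
  v        -> v
```

Term-level substitution like this duplicates redexes, which is exactly the work an optimal evaluator avoids by sharing them in the interaction net.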

Related

What is the relationship between safety, purity, and referential transparency in functional programming? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
pure / impure: comes up when we talk about the difference between Haskell and the Lisp family.
safe / unsafe: comes up when we name functions like unsafePerformIO and unsafeCoerce.
referential transparency / referential opacity: comes up when we emphasize the benefits of purely functional programming.
The differences between these terms are very subtle. I can find posts discussing them individually, but I'm still hoping for a clear comparison between them, and I can't find such a post here yet.
I've always been fond of Amr Sabry's 1998 paper that explored a similar question with the rigor it deserved: https://www.cs.indiana.edu/~sabry/papers/purelyFunctional.ps
A sample quote:
A language is purely functional if (i) it includes every simply typed
lambda-calculus term, and (ii) its call-by-name, call-by-need, and
call-by-value implementations are equivalent modulo divergence and
errors.
While this question can generate a lot of opinion-based answers (which I am carefully avoiding!), reading through Amr's paper can put you in the right mindset for thinking about this question, regardless of whether you end up agreeing with him or not.
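As a concrete illustration of the referential-transparency side of Sabry's criterion (a toy example of my own, not from the paper): in a pure fragment, naming a subexpression and reusing it must be indistinguishable from writing the subexpression out twice, under any of the three evaluation strategies:

```haskell
-- A pure function: no effects, result depends only on the argument.
square :: Int -> Int
square n = n * n

-- Evaluate once, use the name twice.
shared :: Int
shared = let x = square 5 in x + x

-- Write the expression out twice.
duplicated :: Int
duplicated = square 5 + square 5

-- Referential transparency demands shared == duplicated, and purity in
-- Sabry's sense demands this hold under call-by-name, call-by-need,
-- and call-by-value alike. An unsafePerformIO counter inside `square`
-- would break the equation, which is why such functions are "unsafe".
```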

Which is better for parsing, the left-corner parsing algorithm or the CYK parsing algorithm, and why? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
Which is better for parsing, the left-corner parsing algorithm or the CYK parsing algorithm, and why?
Generally speaking, CYK computes a maximum-likelihood parse tree. It won't give you the best results on its own, because it ignores contextual information when assigning probabilities. You need to modify it to consider more context, or integrate it into something else: for example, a left-corner parser can use a CYK procedure inside. So the answer to your question is that LC is more powerful than CYK, though it's computationally more expensive. Have a look at Mark Johnson's paper.
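For concreteness, here is a minimal CYK recognizer for a grammar in Chomsky normal form. This is a non-probabilistic sketch of my own (the probabilistic variant discussed above would additionally track a best probability per nonterminal in each chart cell), and the toy grammar in the comment is hypothetical:

```haskell
import Data.List (nub)

-- CYK recognizer for a CNF grammar: unary rules A -> terminal and
-- binary rules A -> B C. Nonterminals are plain strings here.
type NT = String

cyk :: [(NT, Char)] -> [(NT, (NT, NT))] -> String -> NT -> Bool
cyk unary binary w start = start `elem` cell 0 (length w)
  where
    -- cell i j = all nonterminals deriving the substring w[i .. j-1].
    -- A real implementation memoizes this chart (dynamic programming);
    -- this naive recursive version recomputes cells for clarity.
    cell i j
      | j == i + 1 = [a | (a, c) <- unary, c == w !! i]
      | otherwise  = nub
          [ a
          | k <- [i + 1 .. j - 1]          -- split point
          , (a, (b, c)) <- binary
          , b `elem` cell i k
          , c `elem` cell k j
          ]
```

With a toy grammar for a^n b^n (S -> A T | A B, T -> S B, A -> 'a', B -> 'b'), `cyk` accepts "ab" and "aabb" and rejects "ba".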

Programming language features [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Which features need to be present in a programming language so that it can express any sequential computation which a computer can execute today? And what about Haskell specifically?
Haskell is Turing complete.
My current beliefs put high weight on the outcome that any sound and complete description of "feature sets that guarantee Turing completeness" is either infinite or includes a non-terminating algorithm, so I believe it is not reasonable to expect an answer to your other question.
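One standard data point (well-known material, not a complete characterization of Turing-complete feature sets): first-class functions plus general recursion already suffice. In Haskell this can be made explicit by building recursion from a fixed-point combinator rather than recursive syntax:

```haskell
-- General recursion via an explicit fixed-point combinator. Given
-- first-class functions and this one operator, no built-in looping
-- or recursive-definition syntax is needed to express any computable
-- sequential function (the factorial below never names itself).
fix' :: (a -> a) -> a
fix' f = f (fix' f)

factorial :: Integer -> Integer
factorial = fix' (\rec n -> if n <= 1 then 1 else n * rec (n - 1))
```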

How important is Haskell in 2013? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I'm learning Haskell in order to gain knowledge of functional programming to apply to Java 8. Is Haskell a marketable skill?
Haskell is used "in the real world," but in terms of "Am I likely to get a job using this?" it's on the very low end. Almost any other language you can likely name has more jobs that require it.
But in terms of learning, Haskell is a great language. It really helps you think about your programs differently. And having a good mind for application architecture is a very marketable skill.

How would you implement a functional programming language? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
In the functional paradigm, a function is a primary 'control structure'. For example, the + operator is also treated as a function, and you can pass it around like any other 'object'. I was wondering: if I had to implement a toy functional language, would I implement simple functions as true functions (i.e., translate + into a callable routine), or translate them into normal instructions placed 'inline' in the translated code? With the second strategy, would I still be able to pass them around and apply them partially, like in Haskell? What are your thoughts on implementing/translating functions as a central idea in a functional language?
I can recommend the PJL book. I wrote a compiler with its help (in 1989-90, in Prolog), and the book is a very good introduction to the subject.
It might be dated (written in 1987, 30 years ago), but it still covers the basics very well. It is, however, completely focused on lazy languages like Haskell. At the time, Haskell did not exist, and the book uses LML and Miranda, predecessor languages, but those languages are very close to Haskell.
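On the sub-question in the post: even a compiler that inlines primitive operators at direct call sites can fall back to building a closure whenever the operator is used as a value, so passing + around and partially applying it still works. A minimal Haskell illustration of the behavior to preserve (the function names here are my own):

```haskell
-- `+` is an ordinary function value: it can be passed to higher-order
-- functions like any other argument.
applyBoth :: (Int -> Int -> Int) -> Int -> Int -> Int
applyBoth f x y = f x y

-- Partial application of `+` yields a new function (a closure that
-- captures the supplied operand).
addFive :: Int -> Int
addFive = (+) 5
```

A common implementation strategy is exactly this hybrid: inline the operator when both operands are syntactically present, and materialize a closure only when the operator escapes as a value.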