Possible optimizations in Haskell that are not yet implemented in GHC? [closed]

So, purely functional languages open up their own class of optimization opportunities thanks to the clear separation between pure and impure code. I have seen several features that are somewhat simpler to implement in Haskell, such as Nested Data Parallelism or Stream Fusion.
My question is: what other improvements/optimizations are more or less unique to Haskell in terms of feasibility/simplicity but not yet implemented? (I mostly care about GHC, but would also love to hear about others.)

One optimization I'd love to see in GHC is supercompilation. That seems unlikely in GHC's near future, though, because it's a whole-program optimization, and GHC is very focused on module-at-a-time compilation.
Basically, supercompilation is executing as much of a program as possible at compile time. It naturally subsumes inlining, deforestation, specialization, and any number of other techniques. Early experimental results have been promising, but it's a very expensive process. It's hard to see it being a practical optimization, but the concept is ridiculously awesome.
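As a concrete illustration, here is a hand-written sketch of the kind of result a supercompiler could produce. The names are mine and this is not the output of any actual GHC pass:

    -- The nominal definition: two traversals of the list.
    mapMap :: [Int] -> [Int]
    mapMap xs = map (+1) (map (*2) xs)

    -- Driving the definition of map through both calls at compile time
    -- fuses them into a single traversal, subsuming deforestation:
    mapMapSC :: [Int] -> [Int]
    mapMapSC []     = []
    mapMapSC (x:xs) = (x * 2 + 1) : mapMapSC xs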

Another issue that SPJ notes in his paper on modular supercompilation is combining supercompilation with unboxing. The opportunities for unboxing in a supercompiled program are significantly reduced, which can make it slower than the unsupercompiled program after GHC's strictness analyser and unboxer have run. See http://research.microsoft.com/en-us/um/people/simonpj/papers/supercompilation/

Another powerful but also "not yet ready for production use" technique is the worker/wrapper transformation in its general form (GHC already applies a limited version, driven by strictness analysis, to unbox function arguments and results).
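A hand-written sketch of what the strictness-analysis-driven worker/wrapper split looks like; this mimics what GHC generates internally rather than being its actual output:

    {-# LANGUAGE MagicHash #-}
    import GHC.Exts (Int (..), Int#, (+#))

    countUp :: Int -> Int
    countUp n = n + 1

    -- Worker: operates directly on the unboxed Int#; all the real
    -- work happens here, with no allocation.
    wCountUp :: Int# -> Int#
    wCountUp n# = n# +# 1#

    -- Wrapper: unboxes the argument, calls the worker, reboxes the
    -- result. GHC inlines wrappers at call sites, so the boxing
    -- often cancels out entirely.
    countUp' :: Int -> Int
    countUp' (I# n#) = I# (wCountUp n#)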

Related

What is the relationship between safe & pure & referential transparency in functional programming? [closed]

pure / impure: comes up when we talk about the difference between Haskell and the Lisp family.
safe / unsafe: comes up when we name functions like unsafePerformIO and unsafeCoerce.
referential transparency / referential opacity: comes up when we emphasize the benefits of purely functional programming.
The differences between these terms are very subtle. I have found posts discussing each of them individually, but I'm still hoping for a clear side-by-side comparison, and I can't find such a post here yet.
I've always been fond of Amr Sabry's 1998 paper that explored a similar question with the rigor it deserved: https://www.cs.indiana.edu/~sabry/papers/purelyFunctional.ps
A sample quote:
A language is purely functional if (i) it includes every simply typed
lambda-calculus term, and (ii) its call-by-name, call-by-need, and
call-by-value implementations are equivalent modulo divergence and
errors.
While this question can generate a lot of "opinion"-based answers (which I am carefully avoiding!), reading through Amr's paper can put you in the right mindset for thinking about this question, regardless of whether you end up agreeing with him or not.
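To connect the terminology to code, here is a minimal sketch (my own example, not from the paper) of how the "unsafe" escape hatches break referential transparency:

    import Data.IORef (IORef, newIORef, modifyIORef, readIORef)
    import System.IO.Unsafe (unsafePerformIO)

    -- Looks like pure values, but they observe and mutate hidden state.
    counter :: IORef Int
    counter = unsafePerformIO (newIORef 0)
    {-# NOINLINE counter #-}

    next :: Int
    next = unsafePerformIO $ do
      modifyIORef counter (+1)
      readIORef counter

    -- Depending on sharing and optimization level this may print
    -- (1,1), (1,2), or (2,2). A referentially transparent expression
    -- could only ever denote one value.
    main :: IO ()
    main = print (next, next)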

What exactly does it mean for a programming language to be simple? [closed]

What factors are important? How do you know if a given programming language is "simple" or "simpler" than another language?
I'm not sure if this is a fair question to ask, since different languages serve different purposes and it might not really be comparing apples to apples.
However, with that said, memory management would come to mind. One can argue that Java is a "simpler" language than C++, since it has a garbage collector that can deal with some of the complexities around memory management, instead of forcing you to do it yourself.
From my perspective, these are the points that define the complexity of a language:
Variation of syntax from common pseudocode and constructs
Ease of modelling real-life entities as structures like objects
Methods of structure enforcement at compile time
Memory management strategy (allocation/deallocation)
Code reusability
Ease of managing code headers and directives
Built-in libraries
Relative installation package sizes
Data exchange capabilities, e.g. over a network or via files
Process handling, such as thread management
Relative brevity of the code
Speed of compilation
Developer community size and documentation
Open-source implementations
Platform dependence
And many more could be added to this list.

What would be the consequences of semi-asynchronous exception handling in GHC? [closed]

I'm worried that the approach to asynchronous exceptions in GHC might be a net loss for many applications.
While the paper explains the design in great detail, it is too complex for the average programmer to tell whether this approach provides any benefit in their daily work.
Section 2 lists four reasons for the current approach (speculative computation, timeouts, user interrupts and resource exhaustion). In my opinion, three are about the ability to cancel computations, and one is about the ability to recover from resource exhaustion, which I find questionable (is there any publicly available code that demonstrates this?).
In particular, as mentioned in the paper, Java deprecated Thread.stop() because an aborted computation would leave behind undefined state. Aren't IO actions in GHC subject to the same problem? Add laziness, and the API becomes much more complex in comparison, for no clear benefit to most applications.
To summarize: if GHC used the same approach as Java (safe points, interrupt polling), what would the consequences be for the ecosystem?
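For context, here is a minimal sketch of what the current design buys you: timeout cancels an arbitrary IO action without that action cooperating, while bracket still guarantees cleanup, with no interrupt-flag polling as in Java:

    import Control.Concurrent (threadDelay)
    import Control.Exception (bracket)
    import System.Timeout (timeout)

    main :: IO ()
    main = do
      -- Give the action one second (timeout takes microseconds).
      r <- timeout 1000000 $
             bracket (putStrLn "acquire")          -- acquire a resource
                     (\_ -> putStrLn "release")    -- runs even when interrupted
                     (\_ -> do threadDelay 5000000 -- "work" that takes too long
                               pure "done")
      print r  -- prints Nothing: the action was cancelled, yet "release" ran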

Learning functional programming after other programming paradigms [closed]

I have taught myself C, Python, Java and a few other procedural or object-oriented languages to an intermediate level from resources on the internet (thanks, SO :D) and in books. When I tried to learn Haskell, I couldn't wrap my head around what the code actually did.
Is there a better functional language for someone coming from a background in procedural or object oriented programming to learn? Are there any resources meant for people in my situation?
Thanks!
It probably varies from person to person (and this question is bound to get closed over that), but the way I see it: there isn't a stepping stone you need to reach before Haskell is within reach.
So I'd say it's not necessarily the language that put you off, but your learning resources. For the only truly gentle introduction, I recommend LYAH (Learn You a Haskell). It keeps things at a reasonable difficulty and has some genuinely entertaining moments every now and then.
However, if you still want to almost-soften your transition, you can check out F#, which isn't a purely functional language but will give you a good taste of FP, and it will feel familiar because you'd still be living in an OO world.
You can also check out basically any other functional language and it will give you some of the mindset (Scala, ML, etc.).
Keep in mind that I say "almost-soften", because Haskell is very different (especially because of purity), and it gives you a very logical and mathematical mindset that, for me, has been unlike any other language I've learned. It's incredible. It went far beyond learning different syntax; it's a way of thinking about things, and I can always find myself learning more. A truly amazing part is that (since it's so logical, mathematical, reasonable, etc.) the new ways of thinking I acquire with Haskell stay with me both when I use other languages and even in my personal daily life.
That being said, the only thing truly horrible with Haskell is that it ruined me for other languages. I used to like C#... :(

How can I get at the cleverest optimizations that GHC makes? [closed]

Because I can see it coming: this is a different question from What optimizations can GHC be expected to perform reliably?, because I'm not asking for the most reliable optimizations, just the most clever/powerful ones.
I'm specifically looking for non-intuitive optimizations that GHC makes that can have a serious impact on performance and that demonstrate the power of compiler optimizations enabled by lazy evaluation or purity, along with direct explanations of how to get at them.
The best answers will have:
An explanation of the optimization and why it is so clever or powerful
Why the optimization improves performance
How GHC recognizes when it can use this optimization
What the optimization actually transforms the code into
Why this optimization requires lazy evaluation or purity
Stream fusion is probably the biggest one. It turns something like sum . map (+1) . filter (>5), which nominally allocates two new lists, into a simple loop that operates in constant space.
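Here is a hand-written sketch of the kind of loop fusion produces. It is illustrative only: GHC's actual output is Core, and for plain lists the mechanism is foldr/build fusion rather than stream fusion proper:

    {-# LANGUAGE BangPatterns #-}

    -- The nominal pipeline: conceptually builds two intermediate lists.
    pipeline :: [Int] -> Int
    pipeline = sum . map (+1) . filter (>5)

    -- After fusion, GHC produces code equivalent to this single
    -- strict accumulator loop; no intermediate list is allocated.
    pipelineFused :: [Int] -> Int
    pipelineFused = go 0
      where
        go !acc []     = acc
        go !acc (x:xs)
          | x > 5      = go (acc + x + 1) xs
          | otherwise  = go acc xs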

Resources