Need good language for math- and logic-heavy research programming - programming-languages

Can anyone name a language with all the following properties:
Has algebraic data types
Has good support for linear algebra
Is fast (faster than Python, at least)
Has at least some functional programming ability (I don't need monads)
Has been heard of, is not dead, and can interface on a C calling level

Scala
According to Wikipedia it has algebraic data types, and it is fast. Scala is both functional and object-oriented. It's a young language with a growing user base, but still to some extent compatible with Java.
There is a Scala library Scalala for linear algebra:
A high performance numeric linear algebra library for Scala, with rich Matlab-like operators on vectors and matrices; a library of numerical routines

I have my own favorite pet languages, and this isn't one of them, but it sounds to me like R is probably what you are looking for. It seems to be the hot new language these days for people doing heavy math.
As for the "faster than Python" part, that's tough to say. In general, languages don't really have a speed; language implementations do. So the only way to tell really is to compare your time-constraining algorithm on each implementation you can get.

I'd say C and C++. And they work well with:
Matlab
Maple

Related

Are there other HOL programming languages besides Caledon that are based on Haskell?

There are programming languages and theorem provers based on higher-order logic (HOL). Examples include Twelf, Lambda Prolog, and Isabelle. Twelf is both a programming language and a theorem prover, while Isabelle is mainly a theorem prover, though code extraction is available for it.
I am looking for a HOL programming language based on Haskell. The reason is that I like, for instance, Lambda Prolog very much, but it is not meant as a practical programming language: it lacks a standard library, and interfacing with external libraries doesn't seem trivial. So if you need some functionality, like writing a parser for a text file, you can't reach for the many existing Haskell libraries, and since there is no standard library you start from scratch.
Today I came across the Caledon programming language, which seems to have been implemented as a master's thesis. From the GitHub page:
Caledon is a dependently typed, polymorphic, higher order logic
programming language.
This is interesting, since it is written in Haskell, so it should be easy to extend and to interface with existing Haskell libraries. But the project seems to be at an early stage; I am not sure whether input/output (IO) is implemented. Since I only learned about Caledon today, I think I might have missed some further projects. (BTW, I am not interested in standard logic programming languages like Prolog.)
Are there programming languages based on higher order logic besides Caledon that are implemented in haskell?
(I am asking for "implemented in Haskell" because it is rather easy to connect programming languages that are implemented in, or can be extracted to, Haskell. For example, the Agda programming language can compile to Haskell code, and Haskell libraries can be used conveniently from it; it is extremely easy once you know how. Many other programming languages (e.g., ATS) I believe only provide the lowest common denominator, a C-based foreign function interface (FFI), and in my eyes it is quite cumbersome to connect two high-level programming languages via their respective C-based FFIs. Hence the seemingly arbitrary requirement that "it should be implemented in Haskell". As a side note, some users have downvoted me in the past for describing Agda as a programming language, but it is one; consider the Curry-Howard correspondence.)
"Haskabelle is a converter from Haskell source files to Isabelle/HOL theories implemented in Haskell itself."
Haskabelle
Strange Statement: Haskell' is a higher-order logic programming language based on Haskell. Type inference in Haskell with multiparameter type classes, type families, undecidable instances and whatnot actually forms a higher-order logic programming language (a small sketch follows the list below). This probably doesn't help you very much because:
The spec is literally constantly changing (I've had a few packages lose compatibility because they were based on hacks that got "fixed")
The type system itself doesn't have IO (yet?)
It can't really call other Haskell libraries from type inference
It's not very fast.
The logic programming semantics aren't exactly clear or stable.
It doesn't permit you to unify with lambdas or other type classes, although it does permit unification with functions.
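To make the claim above concrete, here is a minimal sketch of my own (not from the Caledon project) of type-class instance resolution behaving like a logic program: the Add class below is essentially the Prolog predicate add/3 written as a multiparameter type class with a functional dependency, and GHC "runs" it during type inference.
    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                 FlexibleInstances, UndecidableInstances #-}

    -- Peano naturals at the type level.
    data Zero
    data Succ n

    -- 'Add a b c' plays the role of a logic-programming predicate:
    -- instance selection searches for a derivation of a + b = c.
    class Add a b c | a b -> c
    instance Add Zero b b
    instance Add a b c => Add (Succ a) b (Succ c)

    -- A value-level stub so we can ask GHC to solve the "query".
    add :: Add a b c => a -> b -> c
    add _ _ = undefined

    -- ghci> :type add (undefined :: Succ Zero) (undefined :: Succ (Succ Zero))
    -- ... :: Succ (Succ (Succ Zero))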
Sadly, I know of extraordinarily few full HOL languages, let alone ones implemented in Haskell - it turns out higher-order unification is a huge pain to implement.
Short answer: I don't know. Long answer: the chances are small that you will find a purely academic language with thousands of libraries and tools for it. If you for some reason need that specific language for some specific problem, then use it ONLY for that problem, not for parsing files, calculating taxes, or launching rockets. Create a library and link it with other programs, or even better: create a microservice or connect the programs in some other way (e.g. standard input/output) that doesn't require much effort. Always use the best tool for the job.

How are J/K/APL classified in terms of common paradigms?

I've just started learning J, which is very interesting, but I was wondering what kind of language it is exactly, in relation to common paradigms and classifications. For example, you could say that F# is a strongly typed, mainly functional (it supports OO and procedural programming, but it's considered to be "functional") language which belongs to the ML family. For J, however, I couldn't find much on how to classify it "conventionally", or anything on Stack Overflow confirming that it's a functional programming language. Wikipedia says that it "is a very terse array programming language", "supports function-level programming", and "is not a Von Neumann programming language", none of which helps much.
I have a couple of questions:
What main paradigm (procedural, OO, functional, logical) do J/K/APL fall under? If their paradigm is only "array programming", what paradigm does that fall under or is most similar to?
What well-known programming languages are J/K/APL most similar to? For example, I'd guess that they're like Lisp, since they operate on arrays (lists) and have a minimal, comma-free syntax.
I'm just trying to categorize these languages in my head according to what I already know. Thank you.
J is related to the kind of functional programming (sometimes known as function-level programming, as opposed to value-level) advocated by John Backus, who is probably better known as the inventor of Fortran but spent the latter part of his career trying to move programming away from the style used by Fortran.
In his Turing Award Lecture entitled "Can Programming be Liberated from the von Neumann Style?" Backus presented an overview of his ideas, which he experimented with in his FP and FL languages. Those were inspired largely by Ken Iverson's APL, and Iverson in turn borrowed from Backus' languages when he developed J.
The key idea behind these, which was expanded and developed as a paradigm as the language family grew, was the formation of an algebra of programming. Using these algebraic tools, a programmer could derive or calculate a correct program to solve a problem by combining standard higher-order functions according to well-known mathematical rules. From the abstract of the lecture above:
Associated with the functional style of programming is an algebra of programs whose variables range over programs and whose operations are combining forms. This algebra can be used to transform programs and to solve equations whose "unknowns" are programs in much the same way one transforms equations in high school algebra. These transformations are given by algebraic laws and are carried out in the same language in which programs are written. Combining forms are chosen not only for their programming power but also for the power of their associated algebraic laws. General theorems of the algebra give the detailed behavior and termination conditions for large classes of programs.
Although there doesn't seem to be a lot of recent research that uses the notation of this family of languages, the key ideas have seen somewhat recent development in other functional languages. Richard Bird and Lambert Meertens worked on a related formalism for algebraic manipulation and derivation of programs and the data they operate on; this is known as the Bird-Meertens Formalism or, informally, Squiggol.
Bird and Philip Wadler later wrote a text, "Introduction to Functional Programming", which introduced students to the more well-known "value-level" functional programming based on the lambda calculus. It originally used the Miranda programming language, a forerunner to Haskell, but in its 2nd edition was translated to use Haskell. I mention this because Erik Meijer, Maarten Fokkinga, and Ross Paterson used the example functions in the text as the motivating examples in their well-known paper "Functional Programming with Bananas, Lenses, Envelopes, and Barbed Wire", where they extend the Bird-Meertens Formalism, give it a basis in Category Theory, and show how the solution to each of the examples can be calculated based on the laws of the formalism and the recursion schemes captured by its higher-order functions.
These ideas took root in Haskell, and there you can find a style of programming known as "point-free" that bears a strong resemblance to FP programming. It eschews named variables and creates a new function purely by composition of other functions. Most of the language is not optimized for this use, so many functions are awkward to create in point-free style using only the standard libraries, but alternate libraries that provide more Squiggol-style combinator functions are available.
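As a tiny illustration (ordinary Haskell, my own contrived example), here is the same function written pointfully and point-free; the second version is built purely by composing library functions, with no named argument.
    -- Sum of squares, with an explicit argument...
    sumSquares :: [Int] -> Int
    sumSquares xs = sum (map (\x -> x * x) xs)

    -- ...and point-free, as a composition of other functions.
    sumSquares' :: [Int] -> Int
    sumSquares' = sum . map (^ 2)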
The algebraic style of constructing programs, thanks to its mathematical nature, is amenable to high-level analysis and transformation by sophisticated compilers. This has led to developments in "Fusion" or "Deforestation" optimizations in functional language compilers that allow algorithms expressed in this style to be reduced to highly efficient machine code loops without all the garbage data and redundant loops that a more naive translation would require. Earlier this year, a framework for generalized stream fusion was presented that showed very practical results from this style of programming.
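For a rough sense of what fusion buys you (a sketch; the exact optimizations depend on the GHC version and libraries involved): a pipeline like the one below conceptually builds two intermediate lists, but after fusion GHC can compile it to a single accumulating loop from 1 to n with no intermediate lists allocated.
    sumOfEvenSquares :: Int -> Int
    sumOfEvenSquares n = sum (map (\x -> x * x) (filter even [1 .. n]))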
Hopefully this background gives you a better idea of what the Wikipedia article was talking about and how the underlying paradigm works.
While other tags could also apply (as with other languages), J is certainly a functional language. It has most of the major attributes associated with functional languages, such as functions being 'first-class citizens', currying, higher-order functions, etc. Furthermore, if it means anything to you, I have read articles where the language creators themselves described the language as 'functional.'
You could also say that it is an array programming language, as all functions operate on arrays vs. single elements.
I think the short answer you are looking for is that J is a functional array programming language. You could also throw other descriptors out there, such as non-statically typed, etc.
As to your numbered questions:
Functional and array programming.
As far as array programming goes, they are not similar to any other well-known languages; rather, they are in their own category of 'array programming languages.' As far as the functional aspects go, they'd be in the functional category.

Machine learning in OCaml or Haskell?

I'm hoping to use either Haskell or OCaml on a new project because R is too slow. I need to be able to use support vector machines, ideally separating out each execution to run in parallel. I want to use a functional language, and I have the feeling that these two are the best as far as performance and elegance are concerned (I like Clojure, but it wasn't as fast in a short test). I am leaning towards OCaml because there appears to be more support for integration with other languages, so it could be a better fit in the long run (e.g. OCaml-R).
Does anyone know of a good tutorial for this kind of analysis, or a code example, in either Haskell or OCaml?
Hal Daume has written several major machine learning algorithms during his Ph.D. (he is now an assistant professor and a rising star in the machine learning community).
On his web page there are an SVM, a simple decision tree, and a logistic regression, all in OCaml. By reading this code, you can get a feeling for how machine learning models are implemented in OCaml.
Another good example of basic machine learning models is the Owl library for scientific and numeric computing in OCaml.
I'd also like to mention F#, a new .NET language similar to OCaml. Here's a factor graph model written in F# for analyzing chess play data. This research also has a NIPS publication.
FP is suitable for implementing machine learning and data mining models, but what you gain here is mostly NOT performance. It is true that FP supports parallel computing better than imperative languages like C# or Java, but implementing a parallel SVM or decision tree has very little to do with the language: parallel is parallel. The numerical optimizations behind machine learning and data mining are usually imperative; writing them purely functionally is usually hard and less efficient. Making these sophisticated algorithms parallel is a very hard task at the algorithm level, not the language level. If you want to run 100 SVMs in parallel, FP helps. But I don't see much difficulty in running 100 libsvm instances in parallel in C++ either, not to mention that single-threaded libsvm is more efficient than a not-well-tested Haskell SVM package.
Then what do FP languages, like F#, OCaml, Haskell, give?
Easy to test your code. FP languages usually have a top-level interpreter (a REPL), so you can test your functions on the fly.
Few mutable states. Passing the same parameters to a function always gives the same result, which makes debugging easier (see the small sketch after this list).
Code is succinct. Type inference, pattern matching, closures, etc. You focus more on the domain logic and less on the language itself, so when you write code your mind is mainly on the programming logic.
Writing code in FPs is fun.
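As a tiny, contrived illustration of the second and third points (plain Haskell): the function below has no hidden state, so the same call always returns the same value, and you can poke at it interactively in GHCi.
    -- Pure and succinct: pattern matching, no mutation.
    describe :: Maybe Int -> String
    describe Nothing  = "no value"
    describe (Just n) = "got " ++ show n

    -- ghci> describe (Just 3)
    -- "got 3"
    -- ghci> describe (Just 3)   -- same input, same output, every time
    -- "got 3"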
The only problem I can see is that OCaml doesn't really support multicore parallelism, while GHC has excellent support and performance. If you're looking to use multiple threads of execution, on multiple calls, GHC Haskell will be a lot easier.
Secondly, the Haskell FFI is more powerful (that is, it does more with less code) than OCaml's, and more libraries are available (via Hackage: http://hackage.haskell.org ), so I don't think foreign interfaces will be a deciding factor.
As far as multi-language integration goes, combining C and Haskell is remarkably easy, and I say this as someone who is (unlike dons) not really much of an expert on either. Any other language that integrates well with C shouldn't be much trickier; you can always fall back to a thin interface layer in C if nothing else. For better or worse, C is still the lingua franca of programming, so Haskell is more than acceptable for most cases.
...but. You say you're motivated by performance issues, and want to use "a functional language". From this I infer you're not previously familiar with the languages you ask about. Among Haskell's defining features are that it, by default, uses non-strict evaluation and immutable data structures--which are both incredibly useful in many ways, but it also means that optimizing Haskell for performance is often dramatically different from other languages, and well-honed instincts may lead you astray in baffling ways. You may want to browse performance-related topics on the Haskell wiki to get a feel for the issues.
Which isn't to say that you can't do what you want in Haskell--you certainly can. Both laziness and immutability can in fact be exploited for performance benefits (Chris Okasaki's thesis provides some nice examples). But be aware that there'll be a bit of a learning curve when it comes to dealing with performance.
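As a small taste of the laziness point (a standard textbook sketch of my own, not an optimized sieve): the definition below describes a conceptually infinite list, but only the prefix you actually demand is ever computed.
    -- Trial-division "sieve" over an infinite list; laziness means we
    -- only pay for the primes we actually take.
    primes :: [Int]
    primes = sieve [2 ..]
      where
        sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

    firstTenPrimes :: [Int]
    firstTenPrimes = take 10 primes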
Both Haskell and OCaml provide the lovely benefits of using an ML-family language, but for most programmers, OCaml is likely to offer a gentler learning curve and better immediate results.
It's hard to give a definitive answer on this. Haskell has the advantages that Don mentioned, along with a more powerful type system and cleaner syntax. OCaml will be easier to learn if you're coming from almost any other language (this is because Haskell is about as functional as functional languages get), and working with mutable random-access structures can be a little clunky in Haskell. You will also likely find the performance characteristics of your OCaml code more intuitive than Haskell's because of Haskell's lazy evaluation.
Really, I would recommend you evaluate both if you have the time. Here are some relevant Haskell resources:
http://hackage.haskell.org/package/hslibsvm
http://hackage.haskell.org/package/HSvm
Real World Haskell: this is a great freely available book for Haskell
Learn You a Haskell: this tutorial is just plain fun to read
Oh, if you look further into Haskell be sure to sign up for the Haskell Beginners and Haskell Cafe lists. The community is friendly and eager to help out newcomers (is my bias showing?).
If speed is your prime concern then go for C. Haskell is pretty good performance-wise, but you are never going to get as fast as C. To my knowledge, the only functional language that has bettered C in a benchmark is Stalin Scheme, but that is very old and nobody really knows how it works.
I've written genetic programming libraries where performance was key, and I wrote them in a functional style in C. The functional style allowed me to easily parallelise the code using OMP, and it scales linearly up to 8 cores within a single process. You certainly can't do that in OCaml, although Haskell is improving all the time with regards to concurrency and parallelism.
The downside of using C was that it took me months to finally find all the bugs and stop the core dumps, which was extremely challenging because of the concurrency. Haskell would probably have caught 90% of those bugs on the first compilation.
So, speed at any cost? Looking back, I wish I'd used Haskell, as I could stand it being 2-3 times slower if it saved me over a month of development time.
While dons is correct that multicore parallelism at the thread level is better supported in Haskell, it sounds like you could live with process-level parallelism (from your phrase "ideally separating out each execution to run in parallel"), which is supported quite well in OCaml. Keith pointed out that Haskell has a more powerful type system, but it can also be said that OCaml has a more powerful module system than Haskell.
As others have pointed out, OCaml's learning curve will be lower than Haskell's; you'll likely be more productive more quickly in OCaml. That said, learning OCaml is a great stepping-stone towards learning Haskell because many of the underlying concepts are very similar, so you could always migrate to Haskell later and find a lot of things familiar there. And as you pointed out, there is an OCaml-R bridge.
For examples of Haskell and OCaml in machine learning, see the material on Hal Daume's and Lloyd Allison's homepages. IMO it is much more straightforward to achieve C++-like performance in OCaml than in Haskell. Though, as already said, Haskell has a much nicer community (packages, tools and support), syntax and features (e.g. FFI, probability monads via type classes), and parallel programming support.
Having revamped OCaml-R, I have a few comments to make on integrating OCaml and R. It might be worthwhile to use OCaml to call R code: it works, but it is not yet exactly straightforward, so using it to pilot R is the worthwhile part. Integrating R functionality much more thoroughly is still cumbersome: for example, much remains to be done to export R's type system and data to OCaml in a seamless way (you will have work to do). Moreover, the interaction of R's GC and OCaml's GC is a delicate point: you free n values in O(n^2) time, which isn't nice (to solve this, you either need a more flexible R API, as far as I understand it, or to implement a GC in the binding itself, as a big R array, for proper interaction between the GCs).
In a nutshell, I'd go for the "pilot R from OCaml" approach.
Contributions on the GC interaction layer and on mapping R datatypes to OCaml are most welcome.
You may want to take a look at this : http://www.haskell.org/pipermail/haskell-cafe/2010-May/077243.html
Late answer, but a machine learning library in Haskell is available here: https://github.com/mikeizbicki/HLearn
This library implements various ML algorithms that are designed to have much faster cross-validation than the usual implementations. It is based on the paper Algebraic classifiers: a generic approach to fast cross-validation, online training, and parallel training. The authors claim a 400x speed-up compared to the same task in Weka.
For Haskell, consider checking out Hasktorch (which I managed to use for my AI thesis). For OCaml, there seem to be TensorFlow bindings.

Why purely functional languages instead of "impure" functional languages?

What's the advantage, for someone who is not a theoretical computer scientist, of learning a purely functional language like Haskell or Erlang instead of an "impure" language with a strong functional component, like Python or version 2 of the D programming language? My arguments are as follows:
No paradigm is perfect. Languages that are single-paradigm, regardless of what paradigm that is, are very restrictive.
Python or D will ease you into functional programming while you get real work done. Haskell or Erlang will force you to learn functional style all at once before you can actually do anything of interest in them.
Edit: By "impure" functional languages, what I really mean is strongly multiparadigm languages w/ a functional flavor, not strongly functional languages with some non-functional abilities.
It all depends on what you are trying to achieve. If your goal is to write production code in a functional language, a 'pure' functional language can make it more difficult.
But if you are trying to learn new concepts, a 'pure' language gives you the benefit of a guard against sliding away from functional concepts. Once you have a clear understanding of the differences you can go to a mixed environment like F#, but before that it is all too easy to slip into the OOP way of doing things and, because of this, miss the advantages of the functional approach.
In some other thread I offered the opinion that a better way of learning F# is to start with, say, Haskell (and was voted down for it), but if you learn F# just to do OOP, then what's the point? Stay with C#.
You almost answered your own question:
Haskell or Erlang will force you to learn functional style all at once before you can actually do anything of interest in them.
Actually, depending on what you mean by 'of interest', you can be productive in Haskell in a week.
The main reason for anyone to learn Haskell (language theorists already know it, and other kinds of theorists are too busy proving theorems to be bothered with programming) is that learning Haskell will change the way you think about programming. Especially the type system, list comprehensions (stolen for Python, the highest form of praise), and programming with pattern matching. You will be able to use many of your new thoughts in all the programming you do. And Haskell will force you to think new thoughts in a way that other languages won't.
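For a flavour of the two features just mentioned (ordinary Haskell, my own examples): a list comprehension that reads almost like set-builder notation, and a function defined by pattern matching instead of explicit null checks.
    -- Pythagorean triples up to n, via a list comprehension.
    triples :: Int -> [(Int, Int, Int)]
    triples n =
      [ (a, b, c) | c <- [1 .. n], b <- [1 .. c], a <- [1 .. b]
                  , a * a + b * b == c * c ]

    -- Pattern matching on the structure of the input.
    firstWord :: String -> String
    firstWord s = case words s of
      []      -> ""
      (w : _) -> w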
Erlang is an honorable language but has nothing comparable to Haskell's type system.
If you like Paul Graham you can read more about this line of reasoning in his essay Beating the Averages, especially the part about the "Blub Paradox". Just substitute Haskell for Lisp :-)
You learn very little if your "new" language is just a slight permutation of what you already know. It's like asking, "Why learn Chinese when I can just get a dialect coach to teach me to speak with a Scottish brogue?" I guess it's fine if you enjoy speaking with a brogue, but you're not really expanding your expertise very much.
Learning a functional language teaches you a new way of looking at things. Impure or mixed-paradigm languages like OCaml are good as well, but it can be tempting to use the impure elements as a crutch to avoid having to look at the problem in a new way. No, functional languages are not a magic bullet, but they do have a lot of interesting benefits, and you're robbing yourself of those benefits if you learn a language that has a "functional components" but doesn't really work like a real functional language.
For example, in a pure functional language like Haskell, state is very carefully isolated from the rest of your program. This makes all sorts of optimizations trivial that are very hard in other languages. For example, state is the enemy of parallel processing. In Haskell, you can just look at a function's type and be 100% confident that it won't create any side effects.
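A minimal sketch of what "looking at the type" means here (my own example): the first function's type guarantees it cannot do I/O or touch mutable state, while any effect the second one performs is forced to show up as IO.
    import Data.IORef (IORef, modifyIORef', readIORef)

    -- No side effects are possible here; the type says so.
    scaleAll :: Double -> [Double] -> [Double]
    scaleAll k = map (k *)

    -- Mutation is allowed only because IO appears in the type.
    bumpCounter :: IORef Int -> IO Int
    bumpCounter ref = do
      modifyIORef' ref (+ 1)
      readIORef ref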
This way of thinking in functions that work with immutable data structures is something that a pure functional language can teach you. I'm not saying pure functional languages are "the best," but they have their benefits. It's another tool in your belt. And you won't get that tool by sticking with what's familiar.
Is Erlang purely functional? I honestly thought it wasn't.
I'm not an expert on this, so take my words with a grain of salt.
Side-effects prevent (or complicate) lots of optimizations, including parallelism. So, in theory, going purely functional should mean better optimization for faster code, although I'm not sure this is true in practice. Even if it isn't, it might be someday... it will be fun if a purely functional language comes along that makes using up all the cores easy peasy.
Additionally, in a way, programming without side-effects makes for easier to understand programs.
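To make the parallelism point concrete, here is a hedged sketch using Control.Parallel.Strategies (assuming the parallel package is installed): because work is pure, evaluating the list elements in parallel cannot change the result.
    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- A pure, CPU-bound function.
    work :: Int -> Int
    work n = sum [i * i `mod` 7919 | i <- [1 .. n]]

    -- Evaluate the results in parallel; purity guarantees the answer
    -- is the same as with an ordinary map.
    results :: [Int]
    results = parMap rdeepseq work [200000, 400000, 600000]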
Because only pure code is declarative at the level of operational semantics. Pure functional programming (i.e. eliminating accidental dependencies) is required for parallelism.
Also, without the immutability restriction, you won't be forced to think about how to make algorithms as fast as they can be in imperative code, or how to model interaction with the real world without state spaghetti, using functional reactive programming.
Thus you won't be maximizing your skills for writing code that is maximally composable and extensible.

What are some pros/cons to various functional languages?

I know of several functional languages - F#, Lisp and its dialects, R, and more. However, as I've never used any of them (although the three I mentioned are on my "to-learn" list), I was wondering about the pros/cons of the various functional languages out there. Are there significant pros/cons, both in learning the language and in any real-world applications of said language?
Haskell is "extreme" (lazy, pure), has active users, lots of documentation, and makes runnable applications.
SML is "less extreme" (strict, impure), has active users, formal specification, many implementations (SML/NJ, Mlton, Moscow ML, etc.). Implementations vary on how applications are deployed wrt the runtimes.
OCaml is ML with attitude. It has object orientation, active users, documentation, add-ons, and makes runnable applications.
Erlang is concurrent, strict, pure (mostly), and supports distributed apps. It needs a runtime installed separately, so deployment is different from the languages that make runnable binaries.
F# is similar to OCaml with Microsoft backing and .NET libraries.
Scala runs on the JVM and can be used as a functional language with advanced features, or as simply a souped-up Java, or both. The flexibility is cited as a drawback for learning a functional language because it's easy to slip back into imperative Java ways. Of course it is also an advantage if you want to use existing JVM libraries.
I'm not sure if your question is to functional languages in general, or differences between them. For general info on why functional:
http://paulspontifications.blogspot.com/2007/08/no-silver-bullet-and-functional.html
Why Functional Programming Matters
As far as differences between functional languages:
Distinctive traits of the functional languages
The awesome thing about functional languages is that they base themselves on the lambda calculus and other mathematics. This results in being able to use similar algorithms and ideas across languages more easily.
As far as which one you should learn: Pick one that will have a comfortable environment for you. For example, if you're using .NET and Visual Studio, F# is an excellent fit. (Actually, the VS integration makes F# a strong contender, period.) The book "How to Design Programs" (full text, free, online) with PLT Scheme is also a good choice.
I'm biased, but F# looks to have the biggest "real-world" potential. This is mainly because of the nice IDE/.NET integration, allowing you to fully tap .NET and OO while keeping a lot of functional power (and extending it in some ways too). Scala might be a possible contender, but it's more of an OO language that has some functional features; hence Scala won't be as big a productivity gain.
Edit: Just to note JavaScript and Ruby, before someone comments on them :). Ruby is something else you could take a look at if you're doing that type of web dev, as it has a lot of functional concepts in it, although they're not as polished as in other languages.
The biggest downside is that once you see the power you can have, you won't be happy using lesser languages. This becomes a problem if you're forced to deal with people who haven't yet understood.
One final note, the only "con" is that "it's so complicated". This isn't actually true -- functional languages are often simpler -- but if you have years of C or whatnot in your brain, it can be a significant hurdle to "get" the functional concept. After it clicks, it should be relatively smooth sailing.
Lisp has a gentle learning curve. You can learn the basics in an hour, though of course it takes longer to learn idioms etc. On the down side, there are many dialects of Lisp, and it's difficult to interact with mainstream environments like Java or .NET.
I would not recommend R unless you need to do statistics. It's a strange language, and not exactly functional. You can do functional programming in R, but most people don't.
If you're familiar with the Microsoft tool stack, F# might be easy to get into. And it has a huge, well-tested library behind it, i.e. the CLR.
You can use a functional programming style in any language, though some make it easier than others. As far as that goes, you might try Python.
ML family (SML/OCaml/F#):
Pros:
Fairly simple
Have effective implementations (on the level with Java/C#)
Easily predictable resource consumption (compared to lazy languages)
Readable syntax
Strong module system
(For F#): large .Net library available
Has mutable variables
Cons:
Sometimes too simple (no typeclasses => problems with overloading)
(Except F#): standard libraries are missing some useful things
Has mutable variables :)
Cannot have infinite data structures (not lazy language)
I haven't mentioned features common to most statically typed functional languages: type inference, parametric polymorphism, higher-order functions, algebraic data types & pattern matching.
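For completeness, a small sketch of those shared features, written here in Haskell but essentially the same modulo syntax in OCaml or SML: an algebraic data type, a parametrically polymorphic higher-order fold over it, and pattern matching.
    -- An algebraic data type for binary trees.
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- A higher-order, parametrically polymorphic fold.
    foldTree :: b -> (b -> a -> b -> b) -> Tree a -> b
    foldTree leaf node = go
      where
        go Leaf         = leaf
        go (Node l x r) = node (go l) x (go r)

    -- The signatures above could be omitted; type inference recovers them.
    size :: Tree a -> Int
    size = foldTree 0 (\l _ r -> l + 1 + r)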
I learnt Haskell at university as a pure functional language, and I can say that it's really powerful, but I also couldn't find a practical use for it.
However, I found this: Haskell in practice. Check it out, it's amazing.
The characteristics of functional paradigms sometimes are pros, and sometimes cons, depending on the situation / context.
Some of them are (a small sketch touching several of these follows the list):
High level
Lambda functions
Lazy evaluation
Higher-order functions
Recursion
Type inference
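A small, purely illustrative Haskell sketch of my own touching several of these at once: a lazy infinite list, higher-order functions, a lambda, recursion through the list's own definition, and types that inference would supply if the annotations were dropped.
    -- The classic lazily defined Fibonacci stream.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- First Fibonacci number above a threshold, via higher-order
    -- functions and a lambda.
    firstAbove :: Integer -> Integer
    firstAbove limit = head (dropWhile (\x -> x <= limit) fibs)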
Quoting Wikipedia:
Efficiency issues
Functional programming languages have been perceived as less efficient in their use of CPU and memory than imperative languages such as C and Pascal.[26] However, for programs that perform intensive numerical computations, functional languages such as OCaml and Clean are similar in speed to C. For programs that handle large matrices and multidimensional databases, array functional languages (such as J and K) were designed with speed optimization in mind.
Purely functional languages have a reputation for being slower than imperative languages. However, immutability of data can, in many cases, lead to execution efficiency in allowing the compiler to make assumptions that are unsafe in an imperative language, vastly increasing opportunities for inlining.
Lazy evaluation may also speed up the program, even asymptotically, whereas it may slow it down at most by a constant factor (however, it may introduce memory leaks when used improperly).

Resources