Is it possible to turn off lazy evaluation in Haskell?
Is there a specific compiler flag or library to facilitate this?
I wanted to try something new with an old program I had written a while back, to see if I could improve performance.
There are a number of ways to turn a lazy thing strict (a small sketch follows the list). You can:
Explicitly insert a spurious pattern match.
Use seq or its close relative ($!).
Use BangPatterns.
Use strictness annotations on your types.
More information here.
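As a small, hedged sketch of a couple of those options (the function names here are made up for illustration), here is a strictly accumulating sum written once with seq and once with a bang pattern:

{-# LANGUAGE BangPatterns #-}

-- Using seq: force the accumulator before the recursive call
sumSeq :: [Int] -> Int
sumSeq = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x in acc' `seq` go acc' xs

-- Using BangPatterns: the same idea written as a pattern annotation
sumBang :: [Int] -> Int
sumBang = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs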
You can't turn off laziness, because the I/O system of Haskell depends on it. Without lazy evaluation this program would run into a busy loop without ever outputting anything:
main = forever (putStrLn "Hello!")
This is because forever c is an infinite program. With lazy evaluation the program is calculated only as far as necessary to run the next instruction. If you turn off laziness, every function becomes strict, including (>>), which basically makes the forever function diverge:
forever c = let cs = c >> cs in cs
However, you can add strictness annotations to constructors and patterns. When a function is strict, its argument is forced as part of the evaluation of the result independent of whether the argument is needed or not. This resembles eager evaluation.
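For example (a minimal illustrative sketch, not from the answer above), a strictness annotation on constructor fields forces those fields whenever a value is constructed:

-- Both coordinates are evaluated when a Point is built, so no thunks
-- accumulate inside the structure.
data Point = Point !Double !Double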
In addition to what Daniel Wagner listed, you may want to take a look at a similar question: Is there a Haskell compiler or preprocessor that uses strict evaluation?
Answers include the DDC compiler, which attempts to make a strict version of Haskell in which laziness must be requested explicitly;
a GHC plugin described in The Monad.Reader issue 12;
"Using NFData and rnf everywhere" - solrize;
and more.
The predominant suggestion, however, is to use profiling tools and learn how to optimize Haskell as it is, since most would consider it a different language with non-strict evaluation turned off.
There's a variant of Haskell called pH (http://csg.csail.mit.edu/projects/languages/ph.shtml)
which uses eager evaluation while still providing non-strict semantics. The Haskell Report is careful to say that it's a non-strict language. Laziness is the obvious way to describe and, apparently, to implement non-strictness.
So, if your question is "Can we use a different evaluation system while maintaining non-strict semantics", you could look at pH. If your question is "Is there a version of Haskell which shares the surface syntax but is strict by default", I think it's covered by other answers.
You can enable the Strict pragma in a module, which will cause everything to be strict by default.
https://ghc.haskell.org/trac/ghc/wiki/StrictPragma
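A minimal sketch of what this looks like (module and function names are illustrative):

{-# LANGUAGE Strict #-}
-- With this pragma (available in GHC 8.0 and later), bindings defined in this
-- module are strict by default; code from other modules is unaffected.
module StrictByDefault where

sumTo :: Int -> Int -> Int
sumTo acc 0 = acc
sumTo acc n = sumTo (acc + n) (n - 1)  -- acc is forced each step, no thunk build-up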
The simple answer is no. The more complex answer is that the computational model upon which Haskell builds up and evaluates functions works in a lazy manner. As you will read in other answers, there are ways to force evaluation of some functions earlier than normal, and it is occasionally advantageous to do so. But there is a large portion of valid Haskell which has no normal form. This includes the IO functions and a large amount of the standard Prelude.
Conclusion: there is no more a way to turn off lazy evaluation in Haskell than there is a way to turn off pointer arithmetic in C or to turn off OO in Ruby or Java. I suspect this is much farther than you thought this question would take you. (There's no --strict mode.) But if you really want to see just how deep the rabbit hole goes, "Implementing Lazy Functional Languages on Stock Hardware: The Spineless Tagless G-machine" by Simon Peyton Jones is an adventure worth having.
The strict-identity package has a strict version of the Identity monad.
You can find it here:
https://hackage.haskell.org/package/strict-identity
The usage would look something like this:
foo = runStrictIdentity $! do
    x <- f a b
    y <- g x
    return $! x + y
Each time return or bind (>>=) is used, the two parts are evaluated using seq, giving a reasonable guarantee of strictness provided your data structure isn't too deep. This works for, e.g., numbers and basic structures.
Haskell has been called a "pure functional language."
What does "pure" mean in this context? What consequences does this have for a programmer?
In a pure functional language, you can't do anything that has a side effect.
A side effect would mean that evaluating an expression changes some internal state that would later cause evaluating the same expression to have a different result. In a pure functional language you can evaluate the same expression as often as you want with the same arguments, and it would always return the same value, because there is no state to change.
For example, a pure functional language cannot have an assignment operator or do input/output, although for practical purposes, even pure functional languages often call impure libraries to do I/O.
"Pure" and "functional" are two separate concepts, although one is not very useful without the other.
A pure expression is idempotent: it can be evaluated any number of times, with identical results each time. This means the expression cannot have any observable side-effects. For example, if a function mutated its arguments, set a variable somewhere, or changed behavior based on something other than its input alone, then that function call is not pure.
A functional programming language is one in which functions are first-class. In other words, you can manipulate functions with exactly the same ease with which you can manipulate all other first-class values. For example, being able to use a "function returning bool" as a "data structure representing a set" would be easy in a functional programming language.
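As a small sketch of that "function returning Bool as a set" idea (the names Set, member and union here are illustrative, not part of any library):

type Set a = a -> Bool

member :: a -> Set a -> Bool
member x s = s x

union :: Set a -> Set a -> Set a
union s t = \x -> s x || t x

evens :: Set Int
evens = even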
Programming in functional programming languages is often done in a mostly-pure style, and it is difficult to be strictly pure without higher-order function manipulation enabled by functional programming languages.
Haskell is a functional programming language, in which (almost) all expressions are pure; thus, Haskell is a purely functional programming language.
A pure function is one which has no side effects — it takes a value in and gives a value back. There's no global state that functions modify. A pure functional language is one which forces functions to be pure. Purity has a number of interesting consequences, such as the fact that evaluation can be lazy — since a function call has no purpose but to return a value, then you don't need to actually execute the function if you aren't going to use its value. Thanks to this, things like recursive functions on infinite lists are common in Haskell.
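A standard illustration of that last point (not from the answer itself) is the lazily defined, infinite list of Fibonacci numbers, of which only the demanded prefix is ever computed:

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

firstTen :: [Integer]
firstTen = take 10 fibs   -- [0,1,1,2,3,5,8,13,21,34]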
Another consequence is that it doesn't matter in what order functions are evaluated — since they can't affect each other, you can do them in any order that's convenient. This means that some of the problems posed by parallel programming simply don't exist, since there isn't a "wrong" or "right" order for functions to execute.
Strictly speaking, a pure functional language is a functional language (i.e. a language where functions are first-class values) where expressions have no side effects. The term “purely functional language” is synonymous.
By this definition, Haskell is not a pure functional language. Any language in which you can write programs that display their result, read and write files, have a GUI, and so on, is not purely functional. Thus no general purpose programming language is purely functional (but there are useful domain-specific purely functional languages: they can typically be seen as embedded languages in some way).
There is a useful relaxed sense in which languages like Haskell and Erlang can be considered purely functional, but languages like ML and Scheme cannot. A language can be considered purely functional if there is a reasonably large, useful and well-characterised subset where side effects are impossible. For example, in Haskell, all programs whose type is not built from IO or other effect-denoting monad are side-effect-free. In Erlang, all programs that don't use IO-inducing libraries or concurrency features are side-effect-free (this is more of a stretch than the Haskell case). Conversely, in ML or Scheme, a side effect can be buried in any function.
From this perspective, the purely functional subset of Haskell can be seen as the embedded language to deal with the behavior inside each monad (of course this is an odd perspective, as almost all the computation happens in this “embedded” subset), and the purely functional subset of Erlang can be seen as the embedded language to deal with local behavior.
Graham Hutton has a slightly different, and quite interesting, perspective on the topic of purely functional languages:
Sometimes, the term “purely functional” is also used in a broader sense to mean languages that might incorporate computational effects, but without altering the notion of ‘function’ (as evidenced by the fact that the essential properties of functions are preserved.) Typically, the evaluation of an expression can yield a ‘task’, which is then executed separately to cause computational effects. The evaluation and execution phases are separated in such a way that the evaluation phase does not compromise the standard properties of expressions and functions. The input/output mechanisms of Haskell, for example, are of this kind.
I.e. in Haskell, a function has the type a -> b and can't have side effects. An expression of type IO (a -> b) can have side effects, but it's not a function. Thus in Haskell functions must be pure, hence Haskell is purely functional.
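A minimal sketch of that distinction (the names are illustrative):

double :: Int -> Int      -- a function in the Haskell sense: pure, no effects
double x = 2 * x

greeting :: IO String     -- not a function: an I/O task that yields a String when executed
greeting = getLine

greet :: String -> IO ()  -- a pure function that returns an I/O task
greet name = putStrLn ("Hello, " ++ name)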
As there cannot be any side effects in pure functional code, testing gets much easier as there is no external state to check or verify. Also, because of this, extending code may become easier.
I lost count of the number of times I had trouble with non-obvious side effects when extending/fixing (or trying to fix) code.
As others have mentioned, the term "pure" in "pure functional programming language" refers to the lack of observable side-effects. For me, this leads straight to the question:
What is a side-effect?
I have seen side-effects explained both as
something that a function does other than simply compute its result
something that can affect the result of a function other than the inputs to the function.
If the first definition is the correct one, then any function that does I/O (e.g. writing to a file) cannot be said to be a "pure" function. Since Haskell programs can call functions which cause I/O to be performed, it would seem that Haskell is not a pure functional programming language (as it is claimed to be) according to this definition.
For this and other reasons, I think the second definition is the more useful one. According to the second definition, Haskell can still claim to be a completely pure functional programming language because functions that cause I/O to be performed compute results based only on function inputs. How Haskell reconciles these seemingly conflicting requirements is quite interesting, I think, but I'll resist the temptation to stray any further from answering the actual question.
Amr Sabry wrote a paper about what a pure functional language is. Haskell is by this definition considered pure, if we ignore things like unsafePerformIO. Using this definition also makes ML and Erlang impure. There are subsets of most languages that qualify as pure, but personally I don't think it's very useful to talk about C being a pure language.
Higher-orderness is orthogonal to purity, you can design a pure first-order functional language.
I'm looking to learn functional programming with either Haskell or F#.
Are there any programming habits (good or bad) that could form as a result Haskell's lazy evaluation? I like the idea of Haskell's functional programming purity for the purposes of understanding functional programming. I'm just a bit worried about two things:
I may misinterpret lazy-evaluation-based features as being part of the "functional paradigm".
I may develop thought patterns that work in a lazy world but not in a normal order/eager evaluation world.
There are habits that you get into when programming in a lazy language that don't work in a strict language. Some of these seem so natural to Haskell programmers that they don't think of them as lazy evaluation. A couple of examples off the top of my head:
f x y = if x > y then .. a .. b .. else c
  where
    a = expensive
    b = expensive
    c = expensive
here we define a bunch of subexpressions in a where clause, with complete disregard for which of them will ever be evaluated. It doesn't matter: the compiler will ensure that no unnecessary work is performed at runtime. Non-strict semantics means that the compiler is able to do this. Whenever I write in a strict language I trip over this a lot.
Another example that springs to mind is "numbering things":
pairs = zip xs [1..]
here we just want to associate each element in a list with its index, and zipping with the infinite list [1..] is the natural way to do it in Haskell. How do you write this without an infinite list? Well, the fold isn't too readable
pairs = foldr (\x xs -> \n -> (x,n) : xs (n+1)) (const []) xs 1
or you could write it with explicit recursion (too verbose, doesn't fuse). There are several other ways to write it, none of which are as simple and clear as the zip.
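For comparison, the explicit-recursion version alluded to above might look like this (a sketch; the helper name go is mine):

pairs :: [a] -> [(a, Int)]
pairs = go 1
  where
    go _ []     = []
    go n (x:xs) = (x, n) : go (n + 1) xs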
I'm sure there are many more. Laziness is surprisingly useful, when you get used to it.
You'll certainly learn about evaluation strategies. Non-strict evaluation strategies can be very powerful for particular kinds of programming problems, and once you're exposed to them, you may be frustrated that you can't use them in some language setting.
I may develop thought patterns that work in a lazy world but not in a normal order/eager evaluation world.
Right. You'll be a more rounded programmer. Abstractions that provide "delaying" mechanisms are fairly common now, so you'd be a worse programmer not to know them.
I may misinterpret lazy-evaluation-based features as being part of the "functional paradigm".
Lazy evaluation is an important part of the functional paradigm. It's not a requirement - you can program functionally with eager evaluation - but it's a tool that naturally fits functional programming.
You see people explicitly implement/invoke it (notably in the form of lazy sequences) in languages that don't make it the default; and while mixing it with imperative code requires caution, pure functional code allows safe use of laziness. And since laziness makes many constructs cleaner and more natural, it's a great fit!
(Disclaimer: no Haskell or F# experience)
To expand on Beni's answer: if we ignore operational aspects in terms of efficiency (and stick with a purely functional world for the moment), every terminating expression under eager evaluation is also terminating under non-strict evaluation, and the values of both (their denotations) coincide.
This is to say that lazy evaluation is strictly more expressive than eager evaluation. By allowing you to write more correct and useful expressions, it expands your "vocabulary" and ability to think functionally.
Here's one example of why:
A language can be lazy by default with optional eagerness, or eager by default with optional laziness, but in fact it has been shown (cf. Okasaki, for example) that there are certain purely functional data structures which can only achieve certain orders of performance if implemented in a language that provides laziness, either optionally or by default.
Now when you do want to worry about efficiency, then the difference does matter, and sometimes you will want to be strict and sometimes you won't.
But worrying about strictness is a good thing, because very often the cleanest thing to do (and not only in a lazy-by-default language) is to use a thoughtful mix of lazy and eager evaluation, and thinking along these lines will be a good thing no matter which language you wind up using in the future.
Edit: Inspired by Simon's post, one additional point: many problems are most naturally thought about as traversals of infinite structures rather than basically recursive or iterative. (Although such traversals themselves will generally involve some sort of recursive call.) Even for finite structures, very often you only want to explore a small portion of a potentially large tree. Generally speaking, non-strict evaluation allows you to stop mixing up the operational issue of what the processor actually bothers to figure out with the semantic issue of the most natural way to represent the actual structure you're using.
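A small sketch of that idea (type and function names are illustrative): a conceptually unbounded tree of successor states is defined once, and laziness ensures that only the pruned portion is ever built.

data Tree a = Node a [Tree a]

-- build the whole (possibly infinite) tree of successors
unfoldTree :: (a -> [a]) -> a -> Tree a
unfoldTree next x = Node x (map (unfoldTree next) (next x))

-- prune to a finite depth; the rest of the tree is never constructed
prune :: Int -> Tree a -> Tree a
prune 0 (Node x _)  = Node x []
prune d (Node x ts) = Node x (map (prune (d - 1)) ts)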
Recently, I found myself doing Haskell-style programming in Python. I took over a monolithic function that extracted/computed/generated values and put them in a file sink, all in one step.
I thought this was bad for understanding, reuse and testing. My plan was to separate value generation from value processing. In Haskell I would have generated a (lazy) list of those computed values in a pure function and would have done the post-processing in another (side-effect bearing) function.
Knowing that non-lazy lists in Python can be expensive if they tend to get big, I thought about the closest Python solution. To me that was to use a generator for the value-generation step.
The Python code got much better thanks to my lazy (pun intended) mindset.
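For concreteness, the Haskell-side separation described above might look roughly like this (a sketch; the names and the particular transformation are made up for illustration):

computeValues :: [Int] -> [Int]            -- pure: generate/transform values lazily
computeValues = map (\x -> x * x + 1)

writeValues :: FilePath -> [Int] -> IO ()  -- impure: the file sink
writeValues path = writeFile path . unlines . map show

main :: IO ()
main = writeValues "out.txt" (computeValues [1 .. 1000])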
I'd expect bad habits.
I saw one of my coworkers try to use (hand-coded) lazy evaluation in our .NET project. Unfortunately, lazy evaluation hid a bug in which remote invocations were attempted before the start of main had executed, and thus outside the try/catch meant to handle the "Hey, I can't connect to the internet" case.
Basically, the lazy wrapper hid the fact that something really expensive was lurking behind a property read, which made it look like a good idea to do inside the type initializer.
Contextual information missing.
Laziness (or more specifically, the assumed availability of purity and equational reasoning) is sometimes quite useful for specific problem domains, but not necessarily better in general. If you're talking about general-purpose language settings, relying on lazy evaluation rules by default is considered harmful.
Analysis
Any language that has functional combinations (or, more generally, applicable-term combinations, i.e. function-call expressions, function-like macro invocations, FEXPRs, etc.) enforces rules on evaluation, which imply an order among the different parts of the subcomputation therein. For convenience and simplicity of specification, a language usually specifies these rules in a flavor paired with a reduction strategy:
Strict evaluation, or applicative-order reduction, which evaluates all subexpressions first, before the remaining subcomputation of the whole combination.
Non-strict evaluation, or normal-order reduction, which does not necessarily evaluate every subexpression first.
The remaining subcomputation finally determines the result of the whole evaluation of the expression. (For program-defined constructs, this usually implies the substitution of the evaluated argument into something like a function body, and the subsequent evaluation of the result.)
Lazy evaluation, or the call-by-need strategy, is a typical concrete instance of the non-strict evaluation kind. To make it practically usable, subexpression evaluations are required to be pure (side-effect-free), so the reductions implementing the strategy can have the Church-Rosser property whatever the order of subexpression evaluation is actually adopted.
One significant merit of such a design is the availability of equational reasoning: users can encode equalities between expressions in the program, and an optimizing implementation of the language can perform transformations based directly on such constructs.
However, there are many serious problems behind such a design.
Equational reasoning is not as important in practice as it seems at first glance.
The encoding is not a separate feature; it places specific requirements on other features to carry the encoding. For a pure language, it is even harder to encode these things elsewhere, so there is pressure to make the type system more expressive, hence more complicated typing and typechecking.
Whether or not the compiler uses the equational reasoning directly encoded in the program is an implementation detail. Promoting its importance is more a matter of style.
Syntactic equations are not powerful enough to encode semantic conditions like the cases of "unspecified behavior" in ISO C. It still needs additional primitives to express the non-determinism of such semantic equivalence classes to make optimization techniques based on that equivalence possible.
It is computationally inefficient at the very basic level by default, and not easily fixable by the programmer.
There is no systematic way to reduce the cost of equations which the programmer knows are not required.
One significant cost comes from the clash between lazily evaluated combinations and proper tail calls over those combinations.
The unpredictable abuse of thunks to memoize lazily evaluated expressions also causes trouble for the utilization of machine resources (e.g. registers and cache memory).
Purely functional languages like Haskell may declare that referential transparency is a Good Thing™. However, this claim is faulty in certain contexts.
There are semantic gaps in the terminology itself. Purity is not the only aspect of referential transparency; moreover, there are other kinds of such properties not readily provided by the evaluation strategy.
In general, referential transparency should not be a goal of programming. Instead, it is one optional way to implement composable program components. Composability is essentially about the expected invariants on the interfaces of the components. There are many ways to keep composability without the aid of any kind of referential transparency. Should the guarantee be enforced by the language rules? It depends. At the very least, it should not depend entirely on the language designers' point of view.
The lack of impure evaluation requires more syntactic noise to encode many constructs that are simply expressible with mutable state cells in traditional impure languages. The workarounds for these practical problems make the solutions more convoluted and harder for humans to reason about.
For example, I/O operations are side-effectful and thus not directly expressible in Haskell expressions under the usual non-strict evaluation rules; otherwise the order of effects would be non-deterministic.
To overcome this shortcoming, indirect conventional constructs like the IO monad have been proposed to simulate the traditional imperative style. Such monadic constructs are in essence "indirect" in a sense similar to continuation-passing style, which is considerably low-level and difficult to read. Even though monads can be more "powerful" than continuations in expressiveness, they are not naturally more powerful than higher-level alternatives (like algebraic effect systems) when the lazy evaluation strategy is not enforced by default.
Besides the intuition problem above, the necessity of monadic constructs is often difficult (if not impossible) to prove formally. As a result, they are very easily abused (just like the design patterns of "OOP" languages derived from Simula). The related syntactic sugar, notably the famous do-notation, was abused for decades before this was widely recognized in the Haskell community.
Simulating strict constructs in languages like Haskell usually requires monadic constructs, while simulating non-strict constructs in strict languages is considerably simpler and easier to implement efficiently. For instance, there is SRFI-45.
The lazy evaluation strategy does not deal with many other non-strict constructs well.
For example, seq has to be compiler magic in GHC. It is not easily expressible using other Haskell constructs without massive changes to the core Haskell language rules.
Although traditional strict languages also do not allow user programs to simulate the enforcement of such ordering easily, so sequential constructs are primitive there too (examples: the C-like ; is primitive; the derivation of Scheme's begin relies on the primitive lambda, which in turn implies an implicit evaluation order on expressions), such ordering can be implemented by reusing the applicative-order rules without additional ad-hoc primitives, as in the derivation of the $sequence operator in the Kernel language.
Concerns about specific questions
Lazy evaluation is not a must for the "functional paradigm", though as mentioned above, purely functional languages are likely to have the lazy evaluation strategy by default. The common property is the availability of first-class functions. Impure languages like Lisp and the ML family, which use eager evaluation by default, are still considered "functional". Also note that the popularity of the "functional paradigm" came after the introduction of function-level programming. The latter is quite different, but still somewhat similar to "functional programming" in its treatment of first-classness.
As mentioned above, the ways to simulate laziness in eager languages are well-known. Additionally, for pure programs, there may be no non-trivial semantic difference between call-by-need and normal-order reduction. Finding something that really only works in a lazy world is actually not easy. (Do you want to implement the language?) Just go ahead.
Conclusion
Pay attention to the problem domain. Lazy evaluation may work well in specific scenarios. However, making it the default is likely a bad idea in general, because users (whether they use the language to program, or derive a new dialect based on it) will rarely be able to ignore all of the problems it causes.
Well, try to think of something that would work if lazily evaluated, that wouldn't if eagerly evaluated. The most common category of these would be lazy logical operator evaluation used to hide a "side effect". I'll use C#-ish language to explain, but functional languages would have similar analogs.
Take the simple C# lambda:
(a,b) => a==0 || ++b < 20
In a lazy-evaluated language, if a==0, the expression ++b < 20 is not evaluated (because the entire expression evaluates to true either way), which means that b is not incremented. In both imperative and functional languages, this behavior (and similar behavior of the AND operator) can be used to "hide" logic containing side effects that should not be executed:
(a,b) => a==0 && save(b)
"a" in this case may be the number of validation errors. If there were validation errors, the first half fails and the second half is not evaluated. If there were no validation errors, the second half is evaluated (which would include the side effect of trying to save b) and the result (apparently true or false) is returned to be evaluated. If either side evaluates to false, the lambda returns false indicating that b was not successfully saved. If this were evaluated "eagerly", we would try to save regardless of the value of "a", which would probably be bad if a nonzero "a" indicated that we shouldn't.
Side effects in functional languages are generally considered a no-no. However, there are few non-trivial programs that do not require at least one side effect; there's generally no other way to make a functional algorithm integrate with non-functional code, or with peripherals like a data store, display, network channel, etc.
OK, for those who have never encountered the term, a quine is a "self-replicating" computer program. To be more specific, one which - upon execution - produces a copy of its own source code as its only output.
Quines can, of course, be developed in many programming languages (but not all); but some languages are obviously more suited to producing quines than others (to clearly understand the somewhat subjective-sounding "more suited", look at the Haskell example vs. the C example on the Wiki page; I also provide my more objective definition below).
The question I have is, from programming language perspective, what language features (either theoretical design ones or syntax sugar) make the language more suitable/helpful for writing quines?
My definition of "more suitable" is "quines are easier to write" and "are shorter/more readable/less obfuscated". But you're welcome to add more criteria that are at least somewhat objective.
Please note that this question explicitly excludes degenerate cases, like a language which is designed to contain "print_a_quine" primitive.
I am not entirely sure, so correct me if any of you knows better.
I agree with both other answers, and will go further by explaining that a quine is this:
Y g
where Y is a Y fixed-point combinator (or any other fixed-point combinator), which means in lambda calculus:
Y g = g(Y g)
Now it is quite apparent that we need the code to be data and g to be a function which will print its argument.
So to summarize: to construct such quines we need functions, a printing function, a fixed-point combinator and a call-by-name evaluation strategy.
The smallest language that satisfies these conditions is, AFAIK, Zot from the Iota and Jot family.
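As a concrete illustration of the code-as-data ingredient (though without an explicit fixed-point combinator), the well-known short Haskell quine looks like this:

main = putStrLn (s ++ show s) where s = "main = putStrLn (s ++ show s) where s = "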
Languages like the Io Programming Language and others allow treating code as data. In tree-walking systems, this typically allows the language implementer to expose the abstract syntax tree as a first-class citizen. In the case of Io, this is what it does. Being object-oriented, it models the AST around Message objects, and a special sentinel is created to represent the currently executing message; this sentinel is called thisMessage. thisMessage is a full Message like any other, and responds to the print message, which prints it to the screen. As a result, the shortest quine I've ever been able to produce in any language has come from Io and looks like this:
thisMessage print
Anyway, I just couldn't help sharing this with you on this subject. The above certainly makes writing quines easy, but not doing it this way doesn't preclude easily creating a quine.
I'm not sure if this is a useful answer from a practical point of view, but there is some relevant theory in computability theory. In particular, fixed points and Kleene's recursion theorem can be used for writing quines. Apparently, the theory can be used for writing a quine in LISP (as the Wikipedia page shows).
This question is of course inspired by Monads in Haskell.
Wrapping my head around continuation-passing style has helped my JavaScript coding a lot.
I would say First-class functions.
In computer science, a programming language is said to support first-class functions (or function literals) if it treats functions as first-class objects. Specifically, this means that the language supports constructing new functions during the execution of a program, storing them in data structures, passing them as arguments to other functions, and returning them as the values of other functions. This concept doesn't cover any means external to the language and program (metaprogramming), such as invoking a compiler or an eval function to create a new function.
Do you want to measure the usefulness in connection with functional-programming itself or programming in general?
In general, the positive experience of functional programming doesn't result from particular techniques but from the way it changes your thinking -
Holding immutable data
Formulating declaratively (recursion, pattern-matching)
Treating functions as data
So I'd say that functional programming is the answer to your question itself.
But to give a more specific answer too, I'd vote for functional abstraction mechanisms like
monads
arrows
continuation-passing-style
zippers
higher-order-functions
generics + typeclasses.
As already said, they seem very abstract at first sight, but once you have understood them, they are extremely cool and valuable techniques for writing concise, error-safe and, last but not least, highly reusable code.
Compare the following (Pseudocode):
// Concrete
def sumList(Data : List[Int]) = ...
// Generic
def sumGeneric[C : Collection[T], T : Num](Data : C) = ...
The latter might be somewhat unintuitive compared with the first definition, but it allows you to work with any collection and numeric type in general!
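In Haskell terms the same contrast might look roughly like this (a sketch; the generic version just relies on the standard Foldable and Num classes):

-- Concrete: works only for lists of Int
sumList :: [Int] -> Int
sumList = foldr (+) 0

-- Generic: works for any Foldable collection of any numeric type
sumGeneric :: (Foldable t, Num a) => t a -> a
sumGeneric = foldr (+) 0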
All in all, many modern (mainstream) languages have discovered these benefits and introduced functional features like lambda functions or LINQ. Understanding these techniques will also improve how you write code in those languages.
One from the "advanced" department: Programming with phantom types (sometimes also called indexed types). It's admittedly not a "standard" technique in functional programming but not entirely esoteric either, and it's something to keep your brain busy for a while (you asked for something difficult, right? ;)).
In a nutshell, it is about parameterizing types to encode and statically enforce certain properties at compile time. One of the standard examples is a vector-append function that statically ensures that, given two vectors of length N and M, the result is a vector of length N+M; otherwise you get a compile-time error. Yes, there are more interesting applications.
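A hedged sketch of that length-indexed vector example in Haskell (this version uses GADTs and type-level naturals rather than plain phantom types; all names are illustrative):

{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

-- The result length N+M is checked at compile time.
append :: Vec n a -> Vec m a -> Vec (Add n m) a
append VNil         ys = ys
append (VCons x xs) ys = VCons x (append xs ys)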
These techniques are not quite as useful in C++ as they are in a proper functional programming language, but so far I've managed to sneak some of this stuff into all of my recent projects at work to a varying degree, most recently in a C++ EDSL context where it worked out really well. You don't necessarily have to encode fancy stuff; learning this helped me catch the situations where a few type tags can reduce the verbosity of an EDSL or allow a cleaner syntax, for example.
Admittedly, the usefulness is somewhat restricted by language support and what you're trying to achieve.
Some starters:
Generic and Indexed Type (slides with some brief applications overview)
Fun with Phantom Types
The Kennedy and Russo paper mentioned in the slides is Generalized Algebraic Data Types
and Object Oriented Programming and puts some of this stuff into the context of C#/Java.
Chapter 3 of David Abrahams' book C++ Template Metaprogramming is available online as a sample chapter and uses these techniques in C++ for dimensional analysis.
A practical FP project using phantom types is HaskellDB.
I would say that Structural typing in OCaml is particularly rewarding.
Recursion. Difficult to wrap your head around at times.
The concept of higher-order functions, lambda functions and the power of generic algorithms that are easy to combine were very beneficial for me. I'm always excited when I see what I can do with a fold in Haskell.
Likewise, my programming in C# has changed a lot (for the better, I hope) since I got into functional programming (Haskell specifically).