Apparently it is harder to add macros to Haskell than Scheme.
I'm confused about what exactly macros are in functional programming. From what I understand, in C a macro is code that alters the source code at preprocessing time, rather than altering the running program itself.
Is it something to do with the fact that Haskell is pure, so it is much harder to alter functional code?
Has it something to do with lazy evaluation?
It certainly has nothing to do with Haskell being pure or lazy, on the contrary.
The main reason why macros aren't as common in Haskell is that it's a statically typed, compiled language, so changing code at runtime isn't possible anyway. Also, its syntax isn't nearly as simple as a Lisp's, so writing compile-time macros is always a bit more cumbersome. Compile-time metaprogramming does exist, though, in the form of Template Haskell:
http://hackage.haskell.org/package/template-haskell-2.12.0.0/docs/Language-Haskell-TH.html#t:Exp
Haskell's lazy evaluation and the abstraction capabilities of its strong type system, however, mean that you simply don't need macros in lots of situations where Scheme programmers would employ them. Basically, you can abstract over the notion of computation itself, which is a lot like macros (just much cooler, if you ask me...).
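For example (a minimal sketch with made-up names): because Haskell is lazy, a custom control construct is just an ordinary function, whereas in a strict Lisp you'd need a macro to stop the untaken branch from being evaluated.

-- A custom conditional as a plain function; laziness means the
-- branch that isn't chosen is never evaluated.
myIf :: Bool -> a -> a -> a
myIf True  thenBranch _ = thenBranch
myIf False _ elseBranch = elseBranch

main :: IO ()
main = putStrLn (myIf (1 < 2) "taken" (error "never evaluated"))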
This should help you. Lisp languages' syntax is a lot simpler than that of other languages, and the parser is directly accessible from the language.
Clojure is a Lisp that is both lazy and able to deal with pure code, but it still has Lisp's powerful macros.
In addition to the prior answers, I would say that both Haskell and Scheme are largely academic languages, meaning specifically that work on them is driven by researchers and their particular interests. For these languages, then, those interested in macros are likely to gravitate toward the languages (Scheme/Racket/Clojure) whose uniform syntax (everything is parenthesized) makes macro definitions dramatically easier.
Related
I have read some sources where Haskell's paradigm is described as both functional and imperative. The main source where this is said is Wikipedia. How is a functional and imperative paradigm possible at the same time, or is this a mistake?
Functional languages generally use a declarative methodology when describing their programs, and Haskell is definitely one of these languages; Haskell programs are described as a set of declarations of values (including functions, because functions are first class values).
However, in order for a computer to execute a program, it has to follow a sequence of instructions. To that end, Haskell has a special syntax (called "do syntax") for describing what can end up as sequencing when the program is executed. Each line of this syntax looks like the next "statement" in a normal imperative language, and it kind of behaves that way, too. Really, though, it's just syntactic sugar for ordinary Haskell expressions of function application.
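A minimal sketch of what that sugar stands for (the names are just for illustration); the two definitions below mean exactly the same thing:

-- Written with do syntax
greet :: IO ()
greet = do
  name <- getLine
  putStrLn ("Hello, " ++ name)

-- The same program as the plain expression the do block desugars to
greet' :: IO ()
greet' = getLine >>= \name -> putStrLn ("Hello, " ++ name)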
Because it's still written in Haskell, though, and Haskell values and expressions are for the most part typesafe, Haskell is often called "the best imperative language".
As long as changing the order of the atomic parts of a program can change its output, you can safely smack "imperative" on the language. Almost any high-level language is "declarative" in some way, and there are languages that support "declarative programming" better than Haskell does.
In C, for example, enums and even the switch statement can be "declarative" if you use them in a certain way. Much further in the "declarative" direction is Prolog, where you can write programs in a declarative style; but again, you can also use Prolog to write an imperative program.
Declarative, but there are some caveats to that.
Alonzo Church created the lambda calculus, which Haskell is based on, and Alan Turing created the Turing machine. The lambda calculus is purely functions; a Turing machine changes the state of a program. Sound familiar?
Except it has been shown that these two are equivalent: you can build a Turing machine from the lambda calculus and vice versa. Which means everything inside Haskell is really just functions, but from them you can build an imperative style of programming.
So yeah, declarative. You don't tell Haskell how to do things, you tell it what a thing is and then build from it.
How come most Lisps and Schemes are dynamically typed?
Does static typing not mix with some of their common features?
Typing and s-expressions can be made to work together; see Typed Scheme.
Partly it is a historical coincidence that s-expression languages are dynamically typed. These languages tend to rely more heavily on macros, and the ease of parsing and pattern-matching on s-expressions makes macro processing much easier. Most research on sophisticated macros happens in s-expression languages.
Typed Hygienic Macros are hard.
When Lisp was invented in the years from 1958 to 1960, it introduced a lot of features both as a language and an implementation (garbage collection, a self-hosting compiler, ...). Some features were inherited (with some improvements) from other languages (list processing, ...). The language implemented computation with functions. The s-expressions were more an implementation detail (at that time) than a language feature. A type system was not part of the language. Using the language in an interactive way was also an early implementation feature.
The useful type systems for functional languages had not yet been invented at that time. Even today it is relatively difficult to use statically typed languages in an interactive way. There are many implementations of statically typed languages which also provide some interactive interface, but mostly they don't offer the same level of support for interactive use as a typical Lisp system. Programming in an interactive Lisp system means that many things can be changed on the fly, and it could be problematic if type changes had to be propagated through whole programs and data in such an interactive Lisp system. Note that some Schemers have a different view about these things; R6RS is mostly a batch language, generally not that much in the spirit of Lisp...
The functional languages with static type systems that were invented later also got a non-s-expression syntax and did not offer support for macros or related features. Later, some of these languages/implementations used a preprocessor for syntactic extensions.
Static typing is lexical: it means that all information about types can be inferred from reading the source code, without evaluating any expressions or computing anything (conditionals being the most important case here). A statically typed language is designed so that this can happen; a better term would be "lexically typed", as in, the compiler can prove from reading the source alone that no type errors will occur.
In the case of Lisp, this is awkwardly different, because Lisp's source code itself is not static. Lisp is homoiconic: it uses data as code and can, to some extent, dynamically edit its own running source.
Lisp was the first dynamically typed language, and probably for this reason, program code itself is no longer lexical in Lisp.
Edit: a far more powerful reason is that in the case of static typing you'd have to type lists. You can either have extremely complex types for each list, which account for all of its elements, or demand that every element has the same type and type it as a list of that. The former option produces hell with lists of lists. The latter option demands that source code only contain the same type for each datum, which means you couldn't even build expressions, as a list is in any case a different type than an integer.
So I dare say that it is completely and utterly infeasible to realize.
One of the main characteristics of functional programming is the use of side-effectless functions. However, this can be done in an imperative language also. The same is true for recursion and lambda functions (for example C++0x). Therefore I wonder whether imperative programming languages are a superset of functional ones.
I can't really say whether they are subsets of one another. What I can tell you, though, is that (except for really esoteric languages) they are all Turing-complete, which means that in the end they're all equally powerful, but not necessarily equally expressive.
Generally speaking, no; functional programming is a subset of declarative programming (which includes logic programming languages, like Prolog). Many imperative languages borrow elements from functional programming languages, but simply having lambdas or referentially-transparent functions does not make an imperative language functional; functional programming is about more than just these elements.
It is possible to implement a certain programming paradigm in a language which doesn't support that paradigm natively. For example, it's possible to write object-oriented code in C even though it was not designed for this purpose.
Functional programming is a well-developed programming paradigm of its own and is best learnt through languages like Haskell, Lisp, etc. After you have learnt them well, even though you don't use these languages regularly, you may start using those principles in the day-to-day language you use on a regular basis.
Some people may like to Google for Object oriented programming in C
A paradigm is a way of doing things, and there are two main programming paradigms: imperative and declarative. The fact that some languages allow mixing both paradigms doesn't mean that one is included in the other, but that those languages are multi-paradigm.
To clarify it a little bit more, let me continue with your analogy: if Lisp and OCaml (for example) are considered functional languages, and both of them allow imperative style... then should imperative be considered a subset of functional?
Most imperative languages don't have functions as first-class values, whereas most functional ones do. (As does C++, via boost::function.)
By first-class, I mean a value/variable can be of any type: an int, a bool, a function from int -> bool. It usually also includes closures or bound values, where you have the same function but some arguments are already filled in.
Those two are what functional programming is mostly about, IMHO.
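A small Haskell sketch of both points (the names are made up): a variable whose value is a function, and a partially applied function where one argument is already filled in.

addTax :: Double -> Double -> Double
addTax rate price = price * (1 + rate)

withVat :: Double -> Double   -- a variable holding a function
withVat = addTax 0.2          -- the rate argument is already filled in

main :: IO ()
main = print (withVat 100)    -- 120.0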
I think it might be helpful to draw a distinction between paradigm and language.
To me, paradigms represent "ways of thinking" (concepts and abstractions such as functions, objects, recursion), whereas languages offer "ways of doing" (syntax, variables, evaluations).
All true programming languages are equivalent in the sense that they are Turing-complete and able, in theory, to compute any Turing-computable function as well as simulate or be simulated by a universal Turing machine.
The interesting thing is how difficult it is to accomplish certain tasks in certain languages or paradigms, how appropriate the tool is to the task. Even Conway's Game of Life is Turing-complete, but that does not make me want to program with it.
Many languages support a number of paradigms. C++ was designed as an object-oriented extension for C, but it is possible to write purely procedural code in it.
Some languages borrow/acquire features from other languages or paradigms over time (just look at the evolution of Java).
A few languages, like Common Lisp, are impressively multi-paradigm languages. It is possible to write code that is functional, object oriented or procedural in Lisp. Arguably, aspect-orientation is already part of the common lisp object system, and therefore "nothing special". In Lisp, it is easy to extend the language itself to do whatever you need it to do, thus it is sometimes called the "programmable programming language". (I'll point out here that Lisp describes a family of languages of which Common Lisp is only one dialect).
I think it doesn't matter which of the terms, declarative, imperative, functional or procedural, is a subset of which. What matters more is to understand the tools and languages you're working with, and how those are different from other tools. Even more important is to understand the different ways of thinking that the paradigms represent, since those are your thought-tools. As with most other things in life, the more you understand, the more effective you become.
One way to look at it (not saying it's the right way 'cos I'm not a lang designer or theorist by any means) is that if the language is essentially converted to something else then that 'something else' must be the superset of the source. So bytecode is necessarily a superset of Java. .NET IL is a superset of C# and of F#. The functional constructs in C# (i.e. LINQ) are thus a subset of the imperative constructs of IL.
Since machine language is imperative, you could take the position that, therefore, all languages are imperative, because they are just abstractions useful for humans that are then boiled away by the compiler to procedural, imperative machine code.
Pattern matching, like
f :: [Int] -> Int
f []     = 0
f (x:xs) = 1 + f xs
is, for instance, one thing that is not available in imperative languages.
Also, constructs like curried functions:
add2 :: Int -> Int
add2 = (2 +)
are not available in most imperative languages.
Yes, functional programming is a subset of imperative programming, but...
Yes, because there is nothing in functional programming that you can't do in imperative programming (syntax differences notwithstanding). You can "do" functional programming in an imperative language.
But...
The things you can't do are the key features of functional programming.
By limiting what you can do, you make certain mistakes impossible and you enable features (such as program analysis, simpler concurrency, simpler testing, etc.).
Other benefits of functional programming are more subjective.
You often hear these arguments.
State allows side effects. Side effects are bad. Functional programming has no state. You can't have side effects without state.
This is a dubious benefit.
First, only unintended side effects are bad. Proper programming practices, such as limiting modification of state to privileged code, alleviate side effect issues.
Second, functional programming only has no internal state. If the program has IO (accessing files, network, hardware) you have external state, thus the potential of side effects.
Functional programs are easier to debug.
In some ways yes, like knowing the explicit path to an exception.
But having state to examine is a debugging benefit that functional programming does not have.
Functional programs are easier to understand.
This is only true if you are fluent in functional programming and not fluent in imperative programming.
I for one, being more fluent in imperative programming, find this argument to be false.
This question is of course inspired by Monads in Haskell.
Wrapping my head around continuation-passing style has helped my JavaScript coding a lot.
I would say First-class functions.
In computer science, a programming language is said to support first-class functions (or function literals) if it treats functions as first-class objects. Specifically, this means that the language supports constructing new functions during the execution of a program, storing them in data structures, passing them as arguments to other functions, and returning them as the values of other functions. This concept doesn't cover any means external to the language and program (metaprogramming), such as invoking a compiler or an eval function to create a new function.
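To make the definition concrete, here is a minimal Haskell sketch (hypothetical names): functions stored in a data structure, looked up and returned as values, and then applied.

import Data.Map (Map)
import qualified Data.Map as Map

-- Functions stored in a data structure
ops :: Map String (Int -> Int -> Int)
ops = Map.fromList [("add", (+)), ("mul", (*))]

-- A function is returned from the lookup and then applied to the arguments
apply :: String -> Int -> Int -> Maybe Int
apply name x y = fmap (\f -> f x y) (Map.lookup name ops)

main :: IO ()
main = print (apply "mul" 6 7)   -- Just 42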
Do you want to measure the usefulness in connection with functional-programming itself or programming in general?
In general, the positive experience of functional programming doesn't result from particular techniques but from the way it changes your thinking -
Holding immutable data
Formulating declaratively (recursion, pattern-matching)
Treating functions as data
So I'd say that functional programming is the answer to your question itself.
But to give a more specific answer too, I'd vote for functional abstraction mechanisms like
monads
arrows
continuation-passing-style
zippers
higher-order-functions
generics + typeclasses.
As already said, they are very abstract things at first view, but once you have understood them, they are extremely cool and valuable techniques for writing concise, error-safe and, last but not least, highly reusable code.
Compare the following (Pseudocode):
// Concrete
def sumList(Data : List[Int]) = ...
// Generic
def sumGeneric[C : Collection[T], T : Num](Data : C) = ...
The latter might be somewhat unintuitive compared with the first definition, but it allows you to work with any collection and numeric type in general!
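In Haskell the same idea might look like this (a sketch; the standard Foldable and Num classes play the role of the Collection and Num constraints above):

-- Concrete: only lists of Int
sumList :: [Int] -> Int
sumList = sum

-- Generic: any foldable container of any numeric type
sumGeneric :: (Foldable t, Num a) => t a -> a
sumGeneric = foldr (+) 0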
All in all, many modern (mainstream) languages have discovered such benefits and introduced very functional features like lambda functions or LINQ. Having understood these techniques will also improve writing code in those languages.
One from the "advanced" department: programming with phantom types (sometimes also called indexed types). It's admittedly not a "standard" technique in functional programming but not entirely esoteric either, and it's something to keep your brain busy for a while (you asked for something difficult, right? ;)).
In a nutshell, it is about parameterizing types to encode and statically enforce certain properties at compile time. One of the standard examples is the vector append function that statically ensures that, given two vectors of length N and M, it returns a vector of length N+M, or otherwise you get a compile-time error. Yes, there are more interesting applications.
These techniques are not quite as useful in C++ as they are in a proper functional programming language, but so far I've managed to sneak some of this stuff into all of my recent projects at work to a varying degree, most recently in a C++ EDSL context where it worked out really well. You don't necessarily have to encode fancy stuff; learning this helped me catch the situations where a few type tags can reduce the verbosity of an EDSL or allow a cleaner syntax, for example.
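As a small taste, here is a minimal phantom-type sketch in Haskell (made-up names, and much simpler than the vector example): the unit parameter exists only at the type level and stops you from mixing incompatible quantities.

-- Empty types used only as type-level tags
data Meters
data Feet

-- The 'unit' parameter is a phantom: no value of it is ever stored
newtype Distance unit = Distance Double

addDist :: Distance u -> Distance u -> Distance u
addDist (Distance a) (Distance b) = Distance (a + b)

runway :: Distance Meters
runway = Distance 3000

approach :: Distance Feet
approach = Distance 1500

total :: Distance Meters
total = addDist runway runway
-- addDist runway approach        -- rejected at compile time: Meters /= Feet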
Admittedly, the usefulness is somewhat restricted by language support and what you're trying to achieve.
Some starters:
Generic and Indexed Type (slides with some brief applications overview)
Fun with Phantom Types
The Kennedy and Russo paper mentioned in the slides is Generalized Algebraic Data Types and Object Oriented Programming, which puts some of this stuff into the context of C#/Java.
Chapter 3 in David Abrahams's book C++ Template Metaprogramming is available online as a sample chapter and uses these techniques in C++ for dimensional analysis.
A practical FP project using phantom types is HaskellDB.
I would say that Structural typing in OCaml is particularly rewarding.
Recursion. Difficult to wrap your head around it at times.
The concept of higher-order functions, lambda functions and the power of generic algorithms that are easy to combine were very beneficial for me. I'm always excited when I see what I can do with a fold in Haskell.
Likewise, my programming in C# has changed a lot (for the better, I hope) since I got into functional programming (Haskell specifically).
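A tiny example of the kind of thing meant here (illustrative names): one fold computing two summaries of a list in a single pass.

stats :: [Double] -> (Double, Int)
stats = foldr (\x (total, count) -> (total + x, count + 1)) (0, 0)

main :: IO ()
main = print (stats [1, 2, 3, 4])   -- (10.0,4)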
Which techniques or paradigms normally associated with functional languages can productively be used in imperative languages as well?
e.g.:
Recursion can be problematic in languages without tail-call optimization, limiting its use to a narrow set of cases, so that's of limited usefulness
Map and filter have found their way into non-functional languages, even though they have a functional sort of feel to them
I happen to really like not having to worry about state in functional languages. If I were particularly stubborn I might write C programs without modifying variables, only encapsulating my state in variables passed to functions and in values returned from functions.
Even though functions aren't first-class values, I can wrap one in an object in Java, say, and pass that into another method. Like functional programming, just less fun.
So, for veterans of functional programming, when you program in imperative languages, what ideas from FP have you applied successfully?
Pretty nearly all of them?
If you understand functional languages, you can write imperative programs that are "informed" by a functional style. That will lead you away from side effects, and toward programs in which reading the program text at any particular point is sufficient to let you really know what the meaning of the program is at that point.
Back at the Dawn of Time we used to worry about "coupling" and "cohesion". Learning an FP will lead you to write systems with optimal (minimal) coupling, and high cohesion.
Here are things that get in the way of doing FP in a non-FP language:
If the language doesn't support lambdas/closures, and doesn't have any syntactic sugar to hack around their absence easily, you are dead in the water. You don't call map/filter without closures.
If the language is statically-typed and doesn't support generics, you are dead in the water. All the good FP stuff uses genericity.
If the language doesn't support tail-recursion, you are hindered. You can write implementations of e.g. 'map' iteratively; also often your data may not be too large and recursion will be ok.
If the language does not support algebraic data types and pattern-matching, you will be mildly hindered. It's just annoying not to have them once you've tasted them.
If the language cannot express type classes, well, oh well... you'll get by, but darn if that's not just the awesomest feature ever; unfortunately Haskell is the only remotely popular language with good support.
Not having first-class functions really puts a damper on writing functional programs, but there are a few things that you can do that don't require them. The first is to eschew mutable state - try to have most or all of your classes return new objects that represent the modified state instead of making the change internally. As an example, if you were writing a linked list with an add operation, you would want to return the new linked list from add as opposed to modifying the object.
While this may make your programs less efficient (due to the increased number of objects being created and destroyed) you will gain the ability to more easily debug the program because the state and operation of the objects becomes more predictable, not to mention the ability to nest function calls more deeply because they have state inputs and outputs.
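For comparison, here is the same "return the new state" idea as it looks in Haskell, where data is persistent by default (a sketch with made-up names):

data Stack a = Empty | Node a (Stack a)

-- Returns a new stack; the old one is untouched and its structure is shared
push :: a -> Stack a -> Stack a
push = Node

-- Returns the removed value together with the new stack
pop :: Stack a -> Maybe (a, Stack a)
pop Empty         = Nothing
pop (Node x rest) = Just (x, rest)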
I've successfully used higher-order functions a lot, especially the kind that are passed in rather than the kind that are returned. The kind that are returned can be a bit tedious but can be simulated.
All sorts of applicative data structures and recursive functions work well in imperative languages.
The things I miss the most:
Almost no imperative languages guarantee to optimize every tail call.
I know of no imperative language that supports case analysis by pattern matching.