I have been reading many articles trying to understand the difference between functional and logic programming, but the only deduction I have been able to make so far is that logic programming defines programs through mathematical expressions. Yet that does not seem to be something specific to logic programming.
I would really appreciate some light being shed on the difference between functional and logic programming.
I wouldn't say that logic programming defines programs through mathematical expressions; that sounds more like functional programming. Logic programming uses logic expressions (well, ultimately logic is math).
In my opinion, the major difference between functional and logic programming is the "building blocks": functional programming uses functions, while logic programming uses predicates. A predicate is not a function; it does not have a return value. Depending on the values of its arguments it may be true or false; if some arguments are unbound, it will try to find values that would make the predicate true.
Prolog in particular uses a special kind of logic clause, called Horn clauses, which belong to first-order logic; HiLog uses clauses of higher-order logic.
When you write a Prolog predicate you are defining Horn clauses:
foo :- bar1, bar2, bar3. means that foo is true if bar1, bar2, and bar3 are all true.
Note that I did not say "if and only if"; you can have multiple clauses for one predicate:
foo :-
    bar1.
foo :-
    bar2.
means that foo is true if bar1 is true or if bar2 is true.
Some say that logic programming is a superset of functional programming, since each function could be expressed as a predicate:
foo(X, Y) -> X + Y.
could be written as
foo(X, Y, ReturnValue) :-
    ReturnValue is X + Y.
but I think that such statements are a bit misleading.
Another difference between logic and functional programming is backtracking. In functional programming, once you enter the body of a function you cannot fail and move to the next definition. For example, you can write
abs(X) ->
    if X > 0 -> X; true -> -X end.
or even use guards:
abs(X) when X > 0 -> X;
abs(X) when X =< 0 -> -X.
but you cannot write
abs(X) ->
    X > 0,
    X;
abs(X) ->
    -X.
On the other hand, in Prolog you could write
abs(X, R) :-
    X > 0,
    R is X.
abs(X, R) :-
    R is -X.
If you then call abs(-3, R), Prolog will try the first clause and fail when execution reaches the -3 > 0 goal, but you won't get an error; Prolog will try the second clause and return R = 3.
I do not think it is impossible for a functional language to implement something similar (though I haven't used such a language).
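For what it's worth, here is a minimal Haskell sketch (not from the answer above; absClauses is a made-up name) that emulates this clause-by-clause fall-through using Maybe and the Alternative operator (<|>): a "clause" may fail partway through and control falls to the next one.

import Control.Applicative ((<|>))
import Control.Monad (guard)

absClauses :: Int -> Maybe Int
absClauses x = clause1 <|> clause2
  where
    clause1 = do
      guard (x > 0)            -- "fails" here for non-positive x, like the X > 0 goal in Prolog
      return x
    clause2 = return (negate x) -- tried only if clause1 fails

main :: IO ()
main = print (absClauses (-3))  -- Just 3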
All in all, although both paradigms are considered declarative, they are quite different; so different that comparing them feels like comparing the functional and imperative styles. I would suggest trying a bit of logic programming; it should be a mind-boggling experience. However, you should try to understand the philosophy and not simply write programs; Prolog allows you to write in a functional or even imperative style (with monstrous results).
In a nutshell:
In functional programming, your program is a set of function definitions. The return value for each function is evaluated as a mathematical expression, possibly making use of passed arguments and other defined functions. For example, you can define a factorial function, which returns a factorial of a given number:
factorial 0 = 1                      -- the factorial of 0 is 1
factorial n = n * factorial (n - 1)  -- the factorial of n is n times the factorial of n - 1
In logic programming, your program is a set of predicates. Predicates are usually defined as sets of clauses, where each clause can be defined using mathematical expressions, other defined predicates, and propositional calculus. For example, you can define a 'factorial' predicate, which holds whenever its second argument is the factorial of the first:
factorial(0, 1).       % it is true that the factorial of 0 is 1
factorial(X, Y) :-     % it is true that the factorial of X is Y, when all of the following are true:
    X1 is X - 1,       %   there is an X1, equal to X - 1,
    factorial(X1, Z),  %   and it is true that the factorial of X1 is Z,
    Y is Z * X.        %   and Y is Z * X
Both styles allow using mathematical expressions in the programs.
First, there are a lot of commonalities between functional and logic programming. That is, a lot of notions developed in one community can also be used in the other. Both paradigms started with rather crude implementations and strive towards purity.
But you want to know the differences.
So I will take Haskell on the one side and Prolog with constraints on the other. Practically all current Prolog systems offer constraints of some sort, like B, Ciao, ECLiPSe, GNU, IF, Scryer, SICStus, SWI, YAP, XSB. For the sake of the argument, I will use a very simple constraint dif/2 meaning inequality, which was present even in the very first Prolog implementation - so I will not use anything more advanced than that.
What functional programming is lacking
The most fundamental difference revolves around the notion of a variable. In functional programming, a variable denotes a concrete value. This value need not be entirely defined, but only those parts that are defined can be used in computations. Consider in Haskell:
> let v = iterate (tail) [1..3]
> v
[[1,2,3],[2,3],[3],[],*** Exception: Prelude.tail: empty list
After the 4th element, the value is undefined. Nevertheless, you can use the first 4 elements safely:
> take 4 v
[[1,2,3],[2,3],[3],[]]
Note that the syntax of functional programs is cleverly restricted so that a variable can never be left undefined.
In logic programming, a variable does not need to refer to a concrete value. So, if we want a list of 3 elements, we might say:
?- length(Xs,3).
Xs = [_A,_B,_C].
In this answer, the elements of the list are variables. All possible instances of these variables are valid solutions, like Xs = [1,2,3]. Now, let's say that the first element should be different to the remaining elements:
?- length(Xs,3), Xs = [X|Ys], maplist(dif(X), Ys).
Xs = [X,_A,_B], Ys = [_A,_B], dif(X,_B), dif(X,_A).
Later on, we might demand that the elements in Xs are all equal. Before I write it out, I will try it alone:
?- maplist(=(_),Xs).
Xs = []
; Xs = [_A]
; Xs = [_A,_A]
; Xs = [_A,_A,_A]
; Xs = [_A,_A,_A,_A]
; ... .
See how the answers always contain the same variable? Now, I can combine both queries:
?- length(Xs,3), Xs = [X|Ys], maplist(dif(X), Ys), maplist(=(_),Xs).
false.
So what we have shown here is that there is no 3-element list where the first element is different to the other elements and all the elements are equal.
This generality has permitted the development of several constraint languages, which are offered as libraries for Prolog systems; the most prominent are CLP(FD) and CHR.
There is no straightforward way to get similar functionality in functional programming. You can emulate things, but the emulation isn't quite the same.
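As a hedged illustration of that gap (not from the answer above), here is a rough Haskell emulation of the combined query over a small finite domain; unlike the Prolog version, it can only enumerate and test candidates rather than answer symbolically with residual constraints.

-- Enumerate candidate 3-element lists over a tiny domain and filter them.
solutions :: [[Int]]
solutions =
  [ [x, a, b]
  | x <- dom, a <- dom, b <- dom
  , x /= a, x /= b     -- plays the role of dif(X, _A), dif(X, _B)
  , x == a, a == b     -- "all elements are equal"
  ]
  where
    dom = [0 .. 3]

main :: IO ()
main = print solutions  -- [] : no such list exists over this domain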
What logic programming is lacking
But there are many things that are lacking in logic programming that make functional programming so interesting. In particular:
Higher-order programming: functional programming has a very long tradition here and has developed a rich set of idioms. For Prolog, the first proposals date back to the early 1980s, but it is still not very common. At least ISO Prolog now has the homologue of apply, called call/2, call/3, ...
Lambdas: Again, it is possible to extend logic programming in that direction, the most prominent system is Lambda Prolog. More recently, lambdas have been developed also for ISO Prolog.
Type systems: There have been attempts, like Mercury, but it has not caught on that much. And there is no system with functionality comparable to type classes.
Purity: Haskell is entirely pure, a function Integer -> Integer is a function. No fine print lurking around. And still you can perform side effects. Comparable approaches are very slowly evolving.
There are many areas where functional and logic programming more or less overlap: for example, backtracking and laziness; list comprehensions; lazy evaluation and freeze/2, when/2, block; DCGs and monads. I will leave discussing these issues to others...
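As one concrete point of overlap, here is a small sketch (a standard example, not from the answer) of backtracking-style search written as a Haskell list comprehension, which plays a role similar to Prolog's member/2 and conjunction: each generator is a choice point and each guard prunes the search.

pythagorean :: [(Int, Int, Int)]
pythagorean =
  [ (a, b, c)
  | c <- [1 .. 20]          -- choice point
  , b <- [1 .. c]           -- choice point
  , a <- [1 .. b]           -- choice point
  , a * a + b * b == c * c  -- "goal" that prunes failed branches
  ]

main :: IO ()
main = print pythagorean  -- [(3,4,5),(6,8,10),(5,12,13),(9,12,15),(8,15,17),(12,16,20)]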
Logic programming and functional programming use different "metaphors" for computation. This often affects how you think about producing a solution, and sometimes means that different algorithms come naturally to a functional programmer than a logic programmer.
Both are based on mathematical foundations that provide more benefits for "pure" code; code that doesn't operate with side effects. There are languages for both paradigms that enforce purity, as well as languages that allow unconstrained side effects, but culturally the programmers for such languages tend to still value purity.
I'm going to consider append, a fairly basic operation in both logical and functional programming, for appending a list on to the end of another list.
In functional programming, we might consider append to be something like this:
append [] ys = ys
append (x:xs) ys = x : append xs ys
While in logic programming, we might consider append to be something like this:
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
These implement the same algorithm, and even work basically the same way, but they "mean" something very different.
The functional append defines the list that results from appending ys onto the end of xs. We think of append as a function from two lists to another list, and the runtime system is designed to calculate the result of the function when we invoke it on two lists.
The logical append defines a relationship between three lists, which is true if the third list is the elements of the first list followed by the elements of the second list. We think of append as a predicate that is either true or false for any 3 given lists, and the runtime system is designed to find values that will make this predicate true when we invoke it with some arguments bound to specific lists and some left unbound.
The thing that makes logical append different is you can use it to compute the list that results from appending one list onto another, but you can also use it to compute the list you'd need to append onto the end of another to get a third list (or whether no such list exists), or to compute the list to which you need to append another to get a third list, or to give you two possible lists that can be appended together to get a given third (and to explore all possible ways of doing this).
While equivalent in that you can do anything you can do in one in the other, they lead to different ways of thinking about your programming task. To implement something in functional programming, you think about how to produce your result from the results of other function calls (which you may also have to implement). To implement something in logic programming, you think about what relationships between your arguments (some of which are input and some of which are output, and not necessarily the same ones from call to call) will imply the desired relationship.
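To make that contrast concrete, here is a hedged Haskell sketch: each "mode" of Prolog's append/3 corresponds to a separate function on the functional side (appendL, suffixAfter, and splits are illustrative names, not standard library functions, apart from the stripPrefix and splitAt calls they wrap).

import Data.List (stripPrefix)

-- The one direction the functional append covers: xs and ys in, result out.
appendL :: [a] -> [a] -> [a]
appendL []     ys = ys
appendL (x:xs) ys = x : appendL xs ys

-- "Given Xs and Zs, which Ys satisfies append(Xs, Ys, Zs)?" needs a different function.
suffixAfter :: Eq a => [a] -> [a] -> Maybe [a]
suffixAfter = stripPrefix

-- "All ways to split Zs into Xs ++ Ys" (the generative use of append/3) needs yet another.
splits :: [a] -> [([a], [a])]
splits zs = [ splitAt n zs | n <- [0 .. length zs] ]

main :: IO ()
main = do
  print (appendL [1, 2] [3, 4 :: Int])           -- [1,2,3,4]
  print (suffixAfter [1, 2] [1, 2, 3, 4 :: Int]) -- Just [3,4]
  print (splits "abc")                           -- [("","abc"),("a","bc"),("ab","c"),("abc","")]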
Prolog, being a logic language, gives you backtracking for free, and that is quite noticeable.
To elaborate (and I should say that I'm in no way an expert in either paradigm), it looks to me like logic programming is way better when it comes to solving things, because that's precisely what the language does (which appears clearly when backtracking is needed, for example).
I think the difference is this:
imperative programming = modelling actions
functional programming = modelling reasoning
logic programming = modelling knowledge
Choose what fits your mind best.
functional programming:
when 6PM, light on.
logic programming:
when dark, light on.
Related
I was wondering how smart/lazy Haskell is. Can I always be sure that Haskell will only do what is necessary to generate a certain output?
No.
Haskell specifies a denotational semantics for its core lambda-like calculus, and you can rely on that semantics. Additionally, there is a metatheory proof that a particular reduction order -- known colloquially as "lazy evaluation" -- realizes that semantics; and so many people use that as their mental model of how Haskell programs behave.
There are two broad categories of ways that a Haskell program may end up evaluating more than necessary:
The Haskell implementation may choose to evaluate more. GHC uses lazy evaluation in most places, but I believe it will use other evaluation orders for efficiency in some cases. You could also look at the Eager Haskell project, which is attempting to use another implementation strategy. In principle, an implementation would be within its rights to choose to speculatively fork some computations to another thread (and then throw away the results if they weren't needed). And so on and so forth -- there are many possible variations on this theme.
The denotational semantics specified may demand more evaluation than "necessary". For example, one that occasionally trips up beginners:
primes :: [Int]
primes = 2 : filter prime [3,5..]
prime :: Int -> Bool
prime x = and [x `mod` p /= 0 | p <- primes, p < x]
When checking whether 3 should be in the list primes, it is in fact not necessary to check any of the elements of primes past 2, because the sequence is strictly monotonically increasing. But Haskell is not (does not try to be) smart enough to notice that; it will go straight on trying to check the rest of the primes and end up in an infinite loop instead of giving the list of primes.
An even smaller example: you could think that x && False is always False, but x will typically be evaluated anyway, because the semantics says this should be an infinite loop if x is. (Contrast False && x, which typically does not result in evaluating x.)
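A tiny sketch of that point (runnable, though the second line deliberately throws):

main :: IO ()
main = do
  print (False && undefined)  -- prints False: (&&) never looks at its second argument here
  print (undefined && False)  -- throws: (&&) must evaluate its first argument to pattern-match on it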
That said, when you say "complex structure", one thing that comes to mind is: does Haskell do the laziness thing even with custom data types that I define? That is, do complex structures like hash maps and balanced trees and k-d trees and so forth get treated lazily? The answer there is yes, at least for GHC; there is nothing fundamentally special about any of the types in the Prelude except IO. Lists, booleans, Maybes, and so forth are lazy not because the compiler knows special things about them, but simply as a consequence of the lazy evaluation reduction rules specified by Haskell and the way their types are declared.
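For instance, a small sketch with a hypothetical user-defined Tree type: only the parts of the structure that are actually demanded get evaluated, exactly as with built-in lists.

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Walks only the left spine; right subtrees are never demanded.
leftmost :: Tree a -> Maybe a
leftmost Leaf            = Nothing
leftmost (Node Leaf x _) = Just x
leftmost (Node l _ _)    = leftmost l

main :: IO ()
main = print (leftmost (Node (Node Leaf 1 undefined) (2 :: Int) undefined))  -- Just 1; no undefined is ever forced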
Of course, there are ways to opt-out of laziness. Some of the structures you will see on Hackage do that in various ways; but don't worry, usually this will be declared in their documentation.
Because Prolog uses chronological backtracking (from the Prolog Wikipedia page) even after an answer is found (in this example, where there can only be one solution), would this justify saying that Prolog uses eager evaluation?
mother_child(trude, sally).
father_child(tom, sally).
father_child(tom, erica).
father_child(mike, tom).
sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y).
parent_child(X, Y) :- father_child(X, Y).
parent_child(X, Y) :- mother_child(X, Y).
With the following output:
?- sibling(sally, erica).
true ;
false.
To summarize the discussion with @WillNess below: yes, Prolog is strict. However, Prolog's execution model and semantics are substantially different from those of the languages that are usually labelled strict or non-strict. For more about this, see below.
I'm not sure the question really applies to Prolog, because it doesn't really have the kind of implicit evaluation ordering that other languages have. Where this really comes into play in a language like Haskell, you might have an expression like:
f (g x) (h y)
In a strict language like ML, there is a defined evaluation order: g x will be evaluated, then h y, and f (g x) (h y) last. In a language like Haskell, g x and h y will only be evaluated as required ("non-strict" is more accurate than "lazy"). But in Prolog,
f(g(X), h(Y))
does not have the same meaning, because it isn't using a function notation. The query would be broken down into three parts, g(X, A), h(Y, B), and f(A,B,C), and those constituents can be placed in any order. The evaluation strategy is strict in the sense that what comes earlier in a sequence will be evaluated before what comes next, but it is non-strict in the sense that there is no requirement that variables be instantiated to ground terms before evaluation can proceed. Unification is perfectly content to complete without having given you values for every variable. I am bringing this up because you have to break down a complex, nested expression in another language into several expressions in Prolog.
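Going back to the Haskell example f (g x) (h y), here is a minimal sketch (with made-up definitions of f, g, and h) of what "evaluated only as required" means: an argument that is never used is never evaluated.

f :: Int -> Int -> Int
f a _ = a + 1                 -- the second argument is never used

g :: Int -> Int
g x = x * 2

h :: Int -> Int
h _ = error "h was evaluated"

main :: IO ()
main = print (f (g 3) (h 0))  -- prints 7; (h 0) is never forced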
Backtracking has nothing to do with it, as far as I can tell. I don't think backtracking to the nearest choice point and resuming from there precludes a non-strict evaluation method, it just happens that Prolog's is strict.
That Prolog pauses after giving each of the several correct answers to a problem has nothing to do with laziness; it is a part of its user interaction protocol. Each answer is calculated eagerly.
Sometimes there will be only one answer, but Prolog doesn't know that in advance, so it waits for us to press ; to continue the search, in the hope of finding another solution. Sometimes it is able to deduce this in advance and will just stop right away, but only sometimes.
update:
Prolog does no evaluation on its own. All terms are unevaluated, as if "quoted" in Lisp.
Prolog will unfold your predicate definitions as written and is perfectly happy to keep your data structures full of unevaluated uninstantiated holes, if so entailed by your predicate definitions.
Haskell itself does not need any values; the user does, when requesting output.
Similarly, Prolog produces solutions one-by-one, as per the user requests.
Prolog can even be seen as lazier than Haskell: in Haskell all arithmetic is strict, i.e. immediate, whereas in Prolog you have to request arithmetic evaluation explicitly, with is/2.
So perhaps the question is ill-posed. Prolog's operational model is just too different. There are no "results" nor "functions", for one; but viewed from another angle, everything is a result, and predicates are "multi"-functions.
As it stands, the question is not correct in what it states. Chronological backtracking does not mean that Prolog will necessarily backtrack "in an example where there can be only one solution".
Consider this:
foo(a, 1).
foo(b, 2).
foo(c, 3).
?- foo(b, X).
X = 2.
?- foo(X, 2).
X = b.
So this is an example that does have only one solution and Prolog recognizes that, and does not attempt to backtrack. There are cases in which you can implement a solution to a problem in a way that Prolog will not recognize that there is only one logical solution, but this is due to the implementation and is not inherent to Prolog's execution model.
You should read up on Prolog's execution model. From the Wikipedia article which you seem to cite, "Operationally, Prolog's execution strategy can be thought of as a generalization of function calls in other languages, one difference being that multiple clause heads can match a given call. In that case, [emphasis mine] the system creates a choice-point, unifies the goal with the clause head of the first alternative, and continues with the goals of that first alternative." Read Sterling and Shapiro's "The Art of Prolog" for a far more complete discussion of the subject.
From Wikipedia I got:
In eager evaluation, an expression is evaluated as soon as it is bound to a variable.
Then I think there are two levels: at the user level (our predicates), Prolog is not eager.
But it is at the 'system' level, because variables are implemented as efficiently as possible.
Indeed, attributed variables are implemented to be lazy, and are rather 'orthogonal' to ordinary 'logic' Prolog variables.
I'm not very good at understanding the difference between the imperative and declarative programming paradigms. I have read about Haskell being a declarative language. To an extent I would say yes, but there is something that bothers me with respect to the definition of imperative.
When I have a data structure and use Haskell's functions to transform it, I have actually just said WHAT to transform. So I give the data structure as an argument to a function and am happy with the result.
But what if there is no function that actually satisfies my needs?
I would start writing my own function, which expects the data structure as an argument. After that I would start writing how the data structure should be processed. Since I can only call native Haskell functions, I'm still within the declarative paradigm, right? But what about when I start using an "if statement"? Wouldn't that end the declarative nature, since I'm about to tell the program HOW to do stuff from that point on?
Perhaps this is a matter of perspective. The way I see it, there is nothing imperative about defining things in terms of other things because we can always replace something with its definition (as long as the definition is pure). That is, if we have f x = x + 1, then any place we see f z we can replace with z + 1. So pure functions shouldn't really be considered instructions; they should be considered definitions.
A lot of Haskell code is considered declarative for this reason. We simply define things as (pure) functions of other things.
It is possible to write imperative-style code in Haskell as well. Sometimes we really do want to say something like "do A, then do B, then do C". This adds a new dimension to the simple world of function application: we need a notion of 'happens before'. Haskell has embraced the Monad concept to describe computations that have an order of evaluation. This turns out to be very handy because it can encapsulate effects such as changing state.
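For example, a tiny sketch of that "do A, then B, then C" style using the IO monad, where the do notation imposes an order of evaluation:

main :: IO ()
main = do
  putStrLn "What is your name?"  -- A: happens first
  name <- getLine                -- B: then read a line
  putStrLn ("Hello, " ++ name)   -- C: then use the result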
There is no if statement in Haskell, only an if expression, if a then b else c, which is really equivalent to ternary expressions like a ? b : c in C-like languages or b if a else c in Python. You cannot leave out the else, and it can be used anywhere an arbitrary expression can be used. It is further constrained that a must have type Bool, and b and c must have the same type. It is syntactic sugar for
case a of
True -> b
False -> c
Don't get too hung up on the label "declarative"; it's just a style of programming that the language supports. Consider, for example, a typical declarative definition of the factorial function:
fact 0 = 1
fact n = n * fact (n - 1)
This is also just syntactic sugar for something you would see as more imperative:
fact n = case n of
    0 -> 1
    _ -> n * fact (n - 1)
Frankly, the term "declarative programming" is more of a marketing term than anything else. The informal definition that it means "specifying what to do, not how to do it" is vague and open to interpretation and definitely far from a black-and-white boundary. In practice, it seems to be applied to anything that broadly falls into the categories of functional programming, logic programming or the use of Domain-Specific Languages (DSLs).
Consequently (and I realise this probably doesn't qualify as an answer to your question, but still :)), I recommend you do not waste your time wondering whether something is still declarative or not. The terms imperative vs. functional vs. logic programming are already a bit more meaningful, so perhaps it's more useful to reflect on those.
Declarative languages are constructed from expressions whereas imperative languages are constructed from statements.
The usual explanation is what to do versus how to do it. You've already found the confusion in this: if you take it that way, declarative means using definitions (by name) and imperative means writing definitions. What is a declarative language, then? One which only names definitions? If so, then the only Haskell program you could write is precisely main!
There is a declarative way and an imperative way to have branching, and it follows directly from the definition. A declarative branch is an expression and an imperative branch is a statement. Haskell has case … of … (an expression) and C has if (…) {…} else {…} (a statement).
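For instance, a small Haskell sketch (describe is a made-up name) showing a branch used as an expression, i.e. as something with a value that can sit inside a larger expression:

describe :: Int -> String
describe n = "the number is " ++ (case compare n 0 of
                                    LT -> "negative"
                                    EQ -> "zero"
                                    GT -> "positive")

main :: IO ()
main = mapM_ (putStrLn . describe) [-1, 0, 1]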
What is the difference between expressions and statements? Expressions have a value whereas statements have effects. What is the difference between values and effects?
For expressions, there is a function μ which maps any expression e to its value μ(e). This is also called the semantics, or meaning, and is ideally a well-defined mathematical object. This way of defining values is called denotational semantics. There are other methods.
For statements, there is a state P immediately before a statement S and a state Q immediately after. The effect of S is the delta from P to Q. This way of defining effects is called Hoare logic. There are other methods.
You can't use an "if statement" in Haskell, because there is none. You can use an "if-expression"
if c then a else b
but this is just syntactic sugar for something like
let f True a b = a; f False a b = b in f c a b
which is again fully declarative.
For example, a referentially transparent function with no free variables:
g op x y = x `op` y
And now a function with the free (from the point of view of f) variables op and x:
x = 1
op = (+)
f y = x `op` y
f is also referentially transparent. But is it a pure function?
If it's not a pure function, what is the name for a function that is referentially transparent, but makes use of 1 or more variables bound in an enclosing scope?
Motivation for this question:
It's not clear to me from Wikipedia's article:
The result value need not depend on all (or any) of the argument values. However, it must depend on nothing other than the argument values.
(emphasis mine)
nor from Google searches whether a pure function can depend on free (in the sense of being bound in an enclosing scope, and not being bound in the scope of the function) variables.
Also, this book says:
If functions without free variables are pure, are closures impure?

The function function (y) { return x } is interesting. It contains a free variable, x. A free variable is one that is not bound within the function. Up to now, we've only seen one way to "bind" a variable, namely by passing in an argument with the same name. Since the function function (y) { return x } doesn't have an argument named x, the variable x isn't bound in this function, which makes it "free."

Now that we know that variables used in a function are either bound or free, we can bifurcate functions into those with free variables and those without:

Functions containing no free variables are called pure functions.
Functions containing one or more free variables are called closures.
So what is the definition of a "pure function"?
To the best of my understanding, "purity" is defined at the level of semantics, while "referentially transparent" can take meaning both syntactically and embedded in lambda calculus substitution rules. Defining either one also leads to a bit of a challenge, in that we need a robust notion of equality of programs, which is itself hard to pin down. Finally, it's important to note that the idea of a free variable is entirely syntactic: once you've gone to a value domain you can no longer have expressions with free variables; they must be bound, or else that's a syntax error.
But let's dive in and see if this becomes more clear.
Quinian Referential Transparency
We can define referential transparency very broadly as a property of a syntactic context. Per the original definition, this would be built from a sentence like
New York is an American city.
of which we've poked a hole
_ is an American city.
Such a holey-sentence, a "context", is said to be referentially transparent if, given two sentence fragments which both "refer" to the same thing, filling the context with either of those two does not change its meaning.
To be clear, two fragments with the same reference we can pick would be "New York" and "The Big Apple". Injecting those fragments we write
New York is an American city.
The Big Apple is an American city.
suggesting that
_ is an American city.
is referentially transparent. To demonstrate the quintessential counterexample, we might write
"The Big Apple" is an apple-themed epithet referring to New York.
and consider the context
"_" is an apple-themed epithet referring to New York.
and now when we inject the two referentially identical phrases we get one valid and one invalid sentence
"The Big Apple" is an apple-themed epithet referring to New York.
"New York" is an apple-themed epithet referring to New York.
In other words, quotations break referential transparency. We can see how this occurs by causing the sentence to refer to a syntactic construct instead of purely the meaning of that construct. This notion will return later.
Syntax v Semantics
There's something confusing going on in that this definition of referential transparency above applies directly to English sentences of which we build contexts by literally stripping words out. While we can do that in a programming language and consider whether such a context is referentially transparent, we also might recognize that this idea of "substitution" is critical to the very notion of a computer language.
So, let's be clear: there are two kinds of referential transparency we can consider over lambda calculus—the syntactic one and the semantic one. The syntactic one requires we define "contexts" as holes in the literal words written in a programming language. That lets us consider holes like
let x = 3 in _
and fill it in with things like "x". We'll leave the analysis of that replacement for later. At the semantic level we use lambda terms to denote contexts
\x -> x + 3 -- similar to the context "_ + 3"
and are restricted to filling in the hole not with syntax fragments but instead only valid semantic values, the action of that being performed by application
(\x -> x + 3) 5
==>
5 + 3
==>
8
So, when someone refers to referential transparency in Haskell it's important to figure out what kind of referential transparency they're referring to.
Which kind is being referred to in this question? Since it's about the notion of an expression containing a free variable, I'm going to suggest that it's syntactic. There are two major thrusts to my reasoning here. Firstly, converting a syntax to a semantics requires that the syntax be valid. In the case of Haskell this means both syntactic validity and a successful type check. However, we'll note that a program fragment like
x + 3
is actually a syntax error since x is simply unknown, unbound leaving us unable to consider the semantics of it as a Haskell program. Secondly, the very notion of a variable such as one that can be let-bound (and consider the difference between "variable" as it refers to a "slot" such as an IORef) is entirely a syntactic construct—there's no way to even talk about them from inside the semantics of a Haskell program.
So let's refine the question to be:
Can an expression containing free variables be (syntactically) referentially transparent?
and the answer is, uninterestingly, no. Referential transparency is a property of "contexts", not expressions. So let's explore the notion of free variables in contexts instead.
Free variable contexts
How can a context meaningfully have a free variable? It could be beside the hole
E1 ... x ... _ ... E2
and so long as we cannot insert something into that syntactic hole which "reaches over" and affects x syntactically then we're fine. So, for instance, if we fill that hole with something like
E1 ... x ... let x = 3 in E ... E2
then we haven't "captured" the x and thus can perhaps consider that syntactic hole to be referentially transparent. However, we're being nice to our syntax. Let's consider a more dangerous example
do x <- foo
let x = 3
_
return x
Now we see that the hole we've provided in some sense has dominion over the later phrase "return x". In fact, if we inject a fragment like "let x = 4" then it indeed changes the meaning of the whole. In that sense, the syntax here is not referentially transparent.
Another interesting interaction between referential transparency and free variables is the notion of an assigning context like
let x = 3 in _
where, from an outside perspective, both phrases "x" and "y" reference the same thing, some named variable, but
let x = 3 in x ==/== let x = 3 in y
Progression from thorniness around equality and context
Now, hopefully the previous section explained a few ways for referential transparency to break under various kinds of syntactic contexts. It's worth asking harder questions about what kinds of contexts are valid and what kinds of expressions are equivalent. For instance, we might desugar our do notation in a previous example and end up noticing that we weren't working with a genuine context, but instead sort of a higher-order context
foo >>= \x -> (let x = 3 in ____(return x)_____)
Is this a valid notion of context? It depends a lot on what kind of meaning we're giving the program. The notion of desugaring the syntax already implies that the syntax must be well-defined enough to allow for such desugaring.
As a general rule, we must be very careful with defining both contexts and notions of equality. Further, the more meaning we demand the fragments of our language to take on the greater the ways they can be equal and the fewer the valid contexts we can build.
Ultimately, this leads us all the way to what I called "semantic referential transparency" earlier where we can only substitute proper values into a proper, closed lambda expression and we take the resulting equality to be "equality as programs".
What this ends up meaning is that as we impute more and more meaning on our language, as we begin to accept fewer and fewer things as valid, we get stronger and stronger guarantees about referential transparency.
Purity
And so this finally leads to the notion of a pure function. My understanding here is (even) less complete, but it's worth noting that purity, as a concept, does not much exist until we've moved to a very rich semantic space—that of Haskell semantics as a category over lifted Complete Partial Orders.
If that doesn't make much sense, then just imagine purity is a concept that only exists when talking about Haskell values as functions and equality of programs. In particular, we examine the collection of Haskell functions
trivial :: a -> ()
trivial x = x `seq` ()
where we have a trivial function for every choice of a. We'll notate the specific choice using an underscore
trivial_Int :: Int -> ()
trivial_Int x = x `seq` ()
Now we can define a (very strictly) pure function to be a function f :: a -> b such that
trivial_b . f = trivial_a
In other words, if we throw out the result of computing our function, the b, then we may as well have never computed it in the first place.
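As a hedged illustration (not from the answer), we can at least spot-check that equation at a few points, reusing trivial_Int from above; the functions inc and loopy are made-up examples, and the possibly diverging one shows why non-termination fails this very strict test, which connects to the remark about non-termination below.

trivial_Int :: Int -> ()
trivial_Int x = x `seq` ()

inc :: Int -> Int
inc x = x + 1

loopy :: Int -> Int
loopy x = if x > 0 then x else loopy x  -- diverges on non-positive input

main :: IO ()
main = do
  print ((trivial_Int . inc) 5, trivial_Int 5)  -- ((),()) : the two sides agree here
  print (trivial_Int 0)                         -- ()
  -- print ((trivial_Int . loopy) 0)  -- would hang, so trivial_Int . loopy /= trivial_Int:
  --                                     loopy fails this (very strict) purity test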
Again, there's no notion of purity without having Haskell values and no notion of Haskell values when your expressions contain free variables (since it's a syntax error).
So what's the answer?
Ultimately, the answer is that you can't talk about purity around free variables and you can break referential transparency in lots of ways whenever you are talking about syntax. At some point as you convert your syntactic representation to its semantic denotation you must forget the notion and names of free variables in order to have them represent the reduction semantics of lambda terms and by this point we've begun to have referential transparency.
Finally, purity is something even more stringent than referential transparency having to do with even the reduction characteristics of your (referentially transparent) lambda terms.
By the definition of purity given above, most of Haskell isn't pure itself as Haskell may represent non-termination. Many feel that this is a better definition of purity, however, as non-termination can be considered a side effect of computation instead of a meaningful resultant value.
The Wikipedia definition is incomplete, insofar as a pure function may use constants to compute its answer.
When we look at
increment n = 1+n
this is obvious. Perhaps it was not mentioned because it is that obvious.
Now the trick in Haskell is that not only are top-level values and functions constants; inside a closure, the variables(!) that are closed over are constants too:
add x = (\y -> x+y)
Here x stands for the value we applied add to; we call it a variable not because it could change within the right-hand side of add, but because it can be different each time we apply add. And yet, from the point of view of the lambda, x is a constant.
It follows that free variables always name constant values at the point where they are used and hence do not impact purity.
Short answer: YES, f is pure.
In Haskell, map can be defined with foldr. Would you agree that map is functional? If so, does it matter that it uses the global function foldr, which wasn't supplied to map as an argument?
In such a definition of map, foldr is a free variable. There's no doubt about it. It makes no difference whether it's a function or something else that evaluates to a value; it's the same thing.
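For concreteness, a minimal sketch of such a definition (myMap is a made-up name, used to avoid clashing with the Prelude's map):

-- foldr appears as a free variable of myMap, yet myMap is still pure.
myMap :: (a -> b) -> [a] -> [b]
myMap f = foldr (\x acc -> f x : acc) []

main :: IO ()
main = print (myMap (+ 1) [1, 2, 3 :: Int])  -- [2,3,4]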
Free variables, like the functions foldl and +, are essential for functional languages to exist. Without them you wouldn't have abstraction, and such languages would be worse off than Fortran.
I've read the Wikipedia article on concatenative languages, and I am now more confused than I was when I started. :-)
What is a concatenative language in stupid people terms?
In normal programming languages, you have variables which can be defined freely and you call methods using these variables as arguments. These are simple to understand but somewhat limited. Often, it is hard to reuse an existing method because you simply can't map the existing variables onto the parameters the method needs, or method A calls another method B and A would be perfect for you if you could only replace the call to B with a call to C.
Concatenative languages use a fixed data structure to hold values (usually a stack or a list). There are no variables. This means that many methods and functions have the same "API": they work on whatever someone else left on the stack. In addition, code itself is treated as "data", i.e. it is common to write code which can modify itself or which accepts other code as a "parameter" (i.e. as an element on the stack).
These attributes make such languages perfect for chaining existing code together to create something new. Reuse is built in. You can write a function which accepts a list and a piece of code and calls the code for each item in the list. This will now work on any kind of data as long as it behaves like a list: results from a database, a row of pixels from an image, characters in a string, etc.
The biggest problem is that you have no hint what's going on. There are only a couple of data types (list, string, number), so everything gets mapped to that. When you get a piece of data, you usually don't care what it is or where it comes from. But that makes it hard to follow data through the code to see what is happening to it.
I believe it takes a certain mindset to use these languages successfully. They are not for everyone.
[EDIT] Forth has some penetration but not that much. You can find PostScript in any modern laser printer. So they are niche languages.
At a functional level, they are on par with LISP, C-like languages, and SQL: all of them are Turing-complete, so you can compute anything. It's just a matter of how much code you have to write. Some things are simpler in LISP, some are simpler in C, some are simpler in query languages. The question of which is "better" is futile unless you have a context.
First I'm going to make a rebuttal to Norman Ramsey's assertion that there is no theory.
Theory of Concatenative Languages
A concatenative language is a functional programming language, where the default operation (what happens when two terms are side by side) is function composition instead of function application. It is as simple as that.
So for example in the SKI Combinator Calculus (one of the simplest functional languages) two terms side by side are equivalent to applying the first term to the second term. For example: S K K is equivalent to S(K)(K).
In a concatenative language S K K would be equivalent to S . K . K in Haskell.
So what's the big deal
A pure concatenative language has the interesting property that the order of evaluation of terms does not matter. In a concatenative language (S K) K is the same as S (K K). This does not apply to the SKI Calculus or any other functional programming language based on function application.
One reason this observation is interesting is that it reveals opportunities for parallelization in the evaluation of code expressed in terms of function composition instead of application.
Now for the real world
The semantics of stack-based languages which support higher-order functions can be explained using a concatenative calculus. You simply map each term (command/expression/sub-program) to be a function that takes a function as input and returns a function as output. The entire program is effectively a single stack transformation function.
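Here is a hedged Haskell sketch of that mapping, modelling each word as a stack-to-stack function and a program as the composition of its words (push, add, and mul are made-up names; only the idea of "program = composed stack transformers" is taken from the paragraph above):

type Stack = [Int]

push :: Int -> Stack -> Stack
push n s = n : s

add :: Stack -> Stack
add (x : y : s) = (x + y) : s
add _           = error "add: stack underflow"

mul :: Stack -> Stack
mul (x : y : s) = (x * y) : s
mul _           = error "mul: stack underflow"

-- The Forth-style program "1 2 + 3 *" read right-to-left as a composition.
program :: Stack -> Stack
program = mul . push 3 . add . push 2 . push 1

main :: IO ()
main = print (program [])  -- [9]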
The reality is that things are always distorted in the real world (e.g. FORTH has a global dictionary, PostScript does weird things where the evaluation order matters). Most practical programming languages don't adhere perfectly to a theoretical model.
Final Words
I don't think a typical programmer or 8 year old should ever worry about what a concatenative language is. I also don't find it particularly useful to pigeon-hole programming languages as being type X or type Y.
After reading http://concatenative.org/wiki/view/Concatenative%20language and drawing on what little I remember of fiddling around with Forth as a teenager, I believe that the key thing about concatenative programming has to do with:
viewing data in terms of values on a specific data stack
and functions manipulating stuff in terms of popping/pushing values on the same data stack
Check out these quotes from the above webpage:
There are two terms that get thrown around, stack language and concatenative language. Both define similar but not equal classes of languages. For the most part though, they are identical.

Most languages in widespread use today are applicative languages: the central construct in the language is some form of function call, where a function is applied to a set of parameters, where each parameter is itself the result of a function call, the name of a variable, or a constant. In stack languages, a function call is made by simply writing the name of the function; the parameters are implicit, and they have to already be on the stack when the call is made. The result of the function call (if any) is then left on the stack after the function returns, for the next function to consume, and so on. Because functions are invoked simply by mentioning their name without any additional syntax, Forth and Factor refer to functions as "words", because in the syntax they really are just words.
This is in contrast to applicative languages that apply their functions directly to specific variables.
Example: adding two numbers.
Applicative language:
int foo(int a, int b)
{
    return a + b;
}

int c = 4;
int d = 3;
int g = foo(c, d);
Concatenative language (I made it up, supposed to be similar to Forth... ;) )
push 4
push 3
+
pop
While I don't think concatenative language = stack language, as the authors point out above, it seems similar.
I reckon the main idea is 1. We can create new programs simply by joining other programs together.
Also, 2. Any random chunk of the program is a valid function (or sub-program).
Good old pure RPN Forth has those properties, excluding any random non-RPN syntax.
In the program 1 2 + 3 *, the sub-program + 3 * takes 2 args, and gives 1 result. The sub-program 2 takes 0 args and returns 1 result. Any chunk is a function, and that is nice!
You can create new functions by lumping two or more others together, optionally with a little glue. It will work best if the types match!
These ideas are really good, we value simplicity.
It is not limited to RPN Forth-style serial languages, nor to imperative or functional programming. The two ideas also work for a graphical language, where program units might be, for example, functions, procedures, relations, or processes.
In a network of communicating processes, every sub-network can act like a process.
In a graph of mathematical relations, every sub-graph is a valid relation.
These structures are 'concatenative', we can break them apart in any way (draw circles), and join them together in many ways (draw lines).
Well, that's how I see it. I'm sure I've missed many other good ideas from the concatenative camp. While I'm keen on graphical programming, I'm new to this focus on concatenation.
My pragmatic (and subjective) definition of concatenative programming (now you can skip reading the rest of this answer):
-> function composition in extreme ways (with Reverse Polish notation (RPN) syntax):
( Forth code )
: fib
dup 2 <= if
drop 1
else
dup 1 - recurse
swap 2 - recurse +
then ;
-> everything is a function, or at least, can be a function:
( Forth code )
: 1 1 ; \ define a function 1 to push the literal number 1 on stack
-> arguments are passed implicitly to functions (OK, this sounds more like a definition of tacit programming), but this in Forth:
a b c
may be in Lisp:
(c a b)
(c (b a))
(c (b (a)))
so, it's easy to generate ambiguous code...
you can write definitions that push the xt (execution token) on the stack and define a small alias for 'execute':
( Forth code )
: <- execute ; \ apply function
so, you'll get:
a b c <- \ Lisp: (c a b)
a b <- c <- \ Lisp: (c (b a))
a <- b <- c <- \ Lisp: (c (b (a)))
To your simple question, here's a subjective and argumentative answer.
I looked at the article and several related web pages. The web pages say themselves that there isn't a real theory, so it's no wonder that people are having a hard time coming up with a precise and understandable definition. I would say that at present, it is not useful to classify languages as "concatenative" or "not concatenative".
To me it looks like a term that gives Manfred von Thun a place to hang his hat but may not be useful for other programmers.
While PostScript and Forth are worth studying, I don't see anything terribly new or interesting in Manfred von Thun's Joy programming language. Indeed, if you read Chris Okasaki's paper on Techniques for Embedding Postfix Languages in Haskell you can try out all this stuff in a setting that, relative to Joy, is totally mainstream.
So my answer is there's no simple explanation because there's no mature theory underlying the idea of a concatenative language. (As Einstein and Feynman said, if you can't explain your idea to a college freshman, you don't really understand it.) I'll go further and say although studying some of these languages, like Forth and PostScript, is an excellent use of time, trying to figure out exactly what people mean when they say "concatenative" is probably a waste of your time.
You can't explain a language; just get one (Factor, preferably) and try some tutorials on it. Tutorials are better than Stack Overflow answers.