I'm a beginner to Haskell and I've been following the e-book Get Programming with Haskell.
I'm learning about closures with lambda functions, but I fail to see the difference in the following code:
genIfEven :: Integral p => p -> (p -> p) -> p
genIfEven x = (\f -> isEven f x)
genIfEven2 :: Integral p => (p -> p) -> p -> p
genIfEven2 f x = isEven f x
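(For these snippets to compile, some helper like the following must be in scope; this is only a guess at its shape based on the type signatures, not necessarily the book's definition.)
-- Assumed helper (not shown above): apply f to x only when x is even.
isEven :: Integral p => (p -> p) -> p -> p
isEven f x = if even x then f x else x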
It would be great if anyone could explain what the precise difference here is.
At a basic level¹, there isn't really a difference between "normal" functions and ones created with lambda syntax, so what made you think there was a difference to ask about? (In the particular example you've shown, the functions take their parameters in a different order, but are otherwise the same; either of them could be defined with lambda syntax or "normal" syntax.)
Functions are first class values in Haskell. Which means you can pass them to other functions, return them as results, store and retrieve them in data structures, etc, etc. Just like you can with numbers, or strings, or any other value.
Just like with numbers, strings, etc, it's helpful to have syntax for denoting a function value, because you might want to make a simple one right in the middle of other code. It would be pretty horrible if you, say, needed to pass x + 1 to some function and you couldn't just write the literal 1 for the number one, you had to instead go elsewhere in the file and add a one = 1 binding so that you could come back and write x + one. In exactly the same way, you might need to pass to some other function a function for adding 1; it would be annoying to go elsewhere in the file and add a separate definition plusOne x = x + 1, so lambda syntax gives us a way of writing "function literals"²: \x -> x + 1.
Considering "normal" function definition syntax, like this:
incrementAllBy _ [] = []
incrementAllBy n (x:xs) = (x + n) : incrementAllBy n xs
Here we don't have any bit of source code that just represents the function value that incrementAllBy is a name for. The function is implied in this syntax, spread over (possibly) multiple "rules" that say what value our function returns given that it is applied to arguments of a certain form. This syntax also fundamentally forces us to bind the function to a name. All of that is in contrast to lambda syntax which just directly represents the function itself, with no bundled case analysis or name.
However they are both merely different ways of writing down functions. Once defined, there is no difference between the functions, whichever syntax you used to express them.
You mentioned that you were learning about closures. It's not really clear how that relates to the question, but I can guess that it's causing some confusion.
I'm going to say something slightly controversial here: you don't need to learn about closures.³
A closure is what is involved in making something like incrementAllBy n xs = map (\x -> x + n) xs work. The function created here \x -> x + n depends on n, which is a parameter so it can be different every time incrementAllBy is called, and there can be multiple such calls running at the same time. So this \x -> x + n function can't end up as just a chunk of compiled code at a particular address in the program's binary, the way top-level functions are. The structure in memory that is passed to map has to either store a copy of n or store a reference to it. Such a structure is called a "closure", and is said to have "closed over" n, or "captured" it.
In Haskell, you don't need to know any of that. I view the expression \x -> x + n as simply creating a new function value, depending on the value n (and also the value +, which is a first-class value too!) that happens to be in scope. I don't think you need to think about this any differently than you would think about the expression x + n creating a new numeric value depending on a local n. Closures in Haskell only matter when you're trying to understand how the language is implemented, not when you're programming in Haskell.
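A minimal sketch of that view (addN is my own example, not from the book): the lambda below mentions the locally bound n, and the result is simply a function value you can pass around like any other value.
addN :: Int -> (Int -> Int)
addN n = \x -> x + n                      -- the returned function uses the local n

main :: IO ()
main = print (map (addN 10) [1, 2, 3])    -- prints [11,12,13]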
Closures do matter in imperative languages. There the question of whether (the equivalent of) \x -> x + n stores a reference to n or a copy of n (and when the copy is taken, and how deeply) is critical to understanding how code using this function works, because n isn't just a way of referring to a value, it's a variable that has (potentially) different values over time.
But in Haskell I don't really think we should teach beginners about the term or concept of closures. It vastly over-complicates "you can make new functions out of existing values, even ones that are only locally in scope".
So if you have been given these two functions as examples to try and illustrate the concept of closures, and it isn't making sense to you what difference this "closure" makes, you can probably ignore the whole issue and move on to something more important.
¹ Sometimes which choice of "equivalent" syntax you use to write your code does affect operational behaviour, like performance. Usually (but not always) the effect is negligible. As a beginner, I highly recommend ignoring such issues for now, so I haven't mentioned them. It's much easier to learn the principles involved in reasoning about how your code might be executed once you've got a thorough understanding of what all the language elements are supposed to mean.
It can sometimes also affect the way GHC infers types: mostly not because of the lambda as such, but because binding a function name without syntactic parameters, as in plusOne = \x -> x + 1, can trip up the monomorphism restriction. That's another topic covered in many Stack Overflow questions, so I won't address it here.
² In this case you could also use an operator section to write an even simpler function literal as (+1).
³ Now I'm going to teach you about closures so I can explain why you don't need to know about closures. :P
There is no difference whatsoever, except one: lambda expressions don't need a name. For that reason, they are sometimes called "anonymous functions" in other languages.
If you plan to use a function often, you'll want to give it a name, if you only need it once, a lambda will usually do, as you can define it at the site where you use it.
You can, of course, name an anonymous function after it is born:
genIfEven2 = \f x -> isEven f x
That would be completely equivalent to your definition.
I've read somewhere lately that pattern matching happens during run-time and not compile-time. (I am looking for the source, but can't find it at the moment.) Is it true? And if so, do guards in functions have the same performance?
Reading this was surprising to me because I used to think GHC was able to optimize some (probably not all) pattern match decisions during compile time. Does this happen at all?
For example:
f 1 = 3
f 2 = 4
vs
f' a | a == 1 = 3
     | a == 2 = 4
Do f and f' compile to the same number of instructions (e.g. in Core and/or lower)?
Is the situation any different if I pattern match on a constructor instead of a value? E.g. if GHC sees that a function is always invoked with a particular constructor at a given call site, does it optimize that call in a way that eliminates the run-time check? And if so, can you give me an example showing what the optimization produces?
In summary
What is good to know about these two approaches in terms of performance?
When is one preferable performance-wise?
Never mind patterns vs. guards, you might as well ask about if vs. case.
Pattern matching is preferable to equality checks. Equality checking is not really a natural thing to do in Haskell. Boolean blindness is one problem, but apart from that a full equality check is often simply not feasible – e.g. infinite lists will never compare equal!
How much more efficient direct pattern matching is depends on the type. In the case of numbers, don't expect much difference, since those patterns are implemented with equality checks under the hood.
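To make that contrast concrete, here is a small sketch (the function is mine, not from your question): matching on a data constructor like Nothing/Just is decided by a single tag check, with no (==) and no Eq instance involved, whereas the literal patterns in f compile to essentially the same equality tests that f' spells out explicitly.
-- Constructor patterns: dispatch on the constructor tag, no (==) needed.
fromMaybe' :: a -> Maybe a -> a
fromMaybe' def Nothing  = def
fromMaybe' _   (Just x) = x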
I generally prefer patterns – because they're just nicer and can be more efficient. Equality checks will be either just as expensive, or possibly more expensive, and are just un-idiomatic. Only use boolean evaluation when you have to, otherwise stick to patterns (which can be in guards too)!
Recently I have been trying to learn a functional programming language and I chose Haskell.
Now I am reading Learn You a Haskell, and here is a description that seems to capture Haskell's philosophy, which I am not sure I understand exactly: you do computations in Haskell by declaring what something is instead of declaring how you get it.
Suppose I want to get the sum of a list.
In a "declaring how you get it" way:
get the total sum by adding all the elements, so the code will be like this (not Haskell, but Python):
sum = 0
for i in l:
    sum += i
print sum
In a "what something is" way:
the total sum is the first element plus the sum of the rest of the elements, so the code will be like this:
sum' :: (Num a) => [a] -> a
sum' [] = 0
sum' (x:xs) = x + sum' xs
But I am not sure whether I get it or not. Can someone help? Thanks.
Imperative and functional are two different ways to approach problem solving.
Imperative (Python) gives you actions which you need to use to get what you want. For example, you may tell the computer "knead the dough. Then put it in the oven. Turn the oven on. Bake for 10 minutes.".
Functional (Haskell, Clojure) gives you solutions. You'd be more likely to tell the computer "I have flour, eggs, and water. I need bread". The computer happens to know dough, but it doesn't know bread, so you tell it "bread is dough that has been baked". The computer, knowing what baking is, knows now how to make bread. You sit at the table for 10 minutes while the computer does the work for you. Then you enjoy delicious bread fresh from the oven.
You can see a similar difference in how engineers and mathematicians work. The engineer is imperative, looking at the problem and giving workers a blueprint to solve it. The mathematician defines the problem (solve for x) and the solution (x = -----) and may use any number of tried and true solutions to smaller problems (2x - 1 = ----- => 2x = ----- + 1) until he finally finds the desired solution.
It is not a coincidence that functional languages are used largely by people in universities, not because it is difficult to learn, but because there are not many mathematical thinkers outside of universities. In your quotation, they tried to define this difference in thought process by cleverly using how and what. I personally believe that everybody understands words by turning them into things they already understand, so I'd imagine my bread metaphor should clarify the difference for you.
EDIT: It is worth noting that when you imperatively command the computer, you don't know if you'll have bread at the end (maybe you cooked it too long and it's burnt, or you didn't add enough flour). This is not a problem in functional languages where you know exactly what each solution gives you. There is no need for trial and error in a functional language because everything you do will be correct (though not always useful, like accidentally solving for t instead of x).
The part missing from the other explanations is the following.
The imperative example shows you step by step how to compute the sum. At no stage can you convince yourself that it is indeed the sum of the elements of a list. For example, there is no way of knowing why sum = 0 at first, whether it should be 0 at all, whether you loop over the right indices, or what sum += i gives you.
sum = 0          # why? it may become clear if you consider what happens in the loop,
                 # but not on its own
for i in l:
    sum += i     # what do we get? it will become clear only after the loop ends;
                 # at no step of the iteration do you have *the sum of the list*,
                 # so the step on its own is not meaningful
The declarative example is very different in this respect. In this particular case you start by declaring that the sum of an empty list is 0. This is already part of the answer to what the sum is. Then you add a statement about non-empty lists: the sum of a non-empty list is the sum of the tail with the head element added to it. This is the declaration of what the sum is. You can demonstrate inductively that it finds the solution for any list.
Note this proof part. In this case it is obvious. In more complex algorithms it is not obvious, so the proof of correctness is a substantial part - and remember that the imperative case only makes sense as a whole.
Another way to compute the sum, where, hopefully, the declarativeness and provability become clearer:
sum []  = 0                  -- the sum of the empty list is 0
sum [x] = x                  -- the sum of the list with 1 element is that element
sum xs  = sum $ p xs         -- the sum of any other list is
  where                      -- the sum of the list reduced with p
    p (x:y:xs) = x+y : p xs  -- p reduces the list by replacing a pair of elements
                             -- with their sum
    p xs       = xs          -- if there was no pair of elements, leave the list as is
Here we can convince ourselves that:
1. p makes the list ever shorter, so the computation of the sum will terminate;
2. p produces a list of sums, so by summing ever shorter lists we get a list of just one element;
3. because (+) is associative, the value produced by repeatedly applying p is the same as the sum of all elements in the original list;
4. we can demonstrate that the number of applications of (+) is smaller than in the straightforward implementation.
In other words, the order of adding the elements doesn't matter, so we can sum the elements ([a,b,c,d,e]) in pairs first (a+b, c+d), which gives us a shorter list [a+b,c+d,e], whose sum is the same as the sum of the original list, and which now can be reduced in the same way: [(a+b)+(c+d),e], then [((a+b)+(c+d))+e].
Robert Harper claims in his blog that "declarative" has no meaning. I suppose he is talking about a clear definition there, which I usually think of as narrower than meaning, but the post is still worth checking out, and it hints that you might not find as clear an answer as you would wish.
Still, everybody talks about "declarative", and it feels like when we do we usually talk about the same thing. That is, give a number of people two different APIs/languages/programs and ask them which is the most declarative one, and they will usually pick the same.
The confusing part to me at first was that your declarative sum
sum' [] = 0
sum' (x:xs) = x + sum' xs
can also be seen as an instruction on how to get the result. It's just a different one. It's also worth noting that the function sum in the Prelude isn't actually defined like that, since that particular way of calculating the sum is inefficient. So clearly something is fishy.
So the "what, not how" explanation seems unsatisfactory to me. I think of it instead as declarative being a "how" which in addition has some nice properties. My current intuition about what those properties are is something similar to:
A thing is more declarative if it doesn't mutate any state.
A thing is more declarative if you can do mathy transformations on it and the meaning of the thing sort of remains intact. So, given your declarative sum again, if we knew that + is commutative, there is some justification for thinking that writing it as sum' xs + x should yield the same result (see the sketch after this list).
A declarative thing can be decomposed into smaller things and still have some meaning. For example, x and sum' xs still have the same meaning when taken separately, but trying to do the same with the sum += i of Python doesn't work as well.
A thing is more declarative if it's independent of the flow of time. For example, CSS doesn't describe the styling of a web page at page load; it describes the styling of the web page at any time, even if the page changes.
A thing is more declarative if you don't have to think about program flow.
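Regarding the "mathy transformations" point above, here is a small sketch (the names sumA and sumB are mine, and it assumes the QuickCheck package): because (+) is commutative and associative, reordering the addition doesn't change what the function means, and you can even check that mechanically.
import Test.QuickCheck (quickCheck)

sumA, sumB :: [Int] -> Int
sumA []     = 0
sumA (x:xs) = x + sumA xs        -- add the head first
sumB []     = 0
sumB (x:xs) = sumB xs + x        -- add the head last; same meaning, since (+) commutes and associates

main :: IO ()
main = quickCheck (\xs -> sumA xs == sumB (xs :: [Int]))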
Other people might have different intuitions, or even a definition that I'm not aware of, but hopefully these are somewhat helpful regardless.
This has been a question I've been wondering about for a while. if statements are staples in most programming languages (at least the ones I've worked with), but in Haskell they seem to be quite frowned upon. I understand that for complex situations, Haskell's pattern matching is much cleaner than a bunch of ifs, but is there any real difference?
For a simple example, take a homemade version of sum (yes, I know it could just be foldr (+) 0):
sum :: [Int] -> Int
-- separate all the cases out
sum [] = 0
sum (x:xs) = x + sum xs
-- guards
sum xs
  | null xs = 0
  | otherwise = (head xs) + sum (tail xs)
-- case
sum xs = case xs of
  [] -> 0
  _  -> (head xs) + sum (tail xs)
-- if statement
sum xs = if null xs then 0 else (head xs) + sum (tail xs)
As a second question, which one of these options is considered "best practice" and why? My professor way back when always used the first method whenever possible, and I'm wondering if that's just his personal preference or if there was something behind it.
The problem with your examples is not the if expressions, it's the use of partial functions like head and tail. If you try to call either of these with an empty list, it throws an exception.
> head []
*** Exception: Prelude.head: empty list
> tail []
*** Exception: Prelude.tail: empty list
If you make a mistake when writing code using these functions, the error will not be detected until run time. If you make a mistake with pattern matching, your program will not compile.
For example, let's say you accidentally switched the then and else parts of your function.
-- Compiles, throws error at run time.
sum xs = if null xs then (head xs) + sum (tail xs) else 0
-- Doesn't compile. Also stands out more visually.
sum [] = x + sum xs
sum (x:xs) = 0
Note that your example with guards has the same problem.
I think the Boolean Blindness article answers this question very well. The problem is that boolean values have lost all their semantic meaning as soon as you construct them. That makes them a great source for bugs and also makes the code more difficult to understand.
Your first version, the one preferred by your prof, has the following advantages compared to the others:
no mention of null
list components are named in the pattern, so no mention of head and tail.
I do think that this one is considered "best practice".
What's the big deal? Why would we especially want to avoid head and tail? Well, everybody knows that those functions are not total, so one automatically tries to make sure that all cases are covered. A pattern match on [] not only stands out more than null xs, a series of pattern matches can also be checked by the compiler for completeness. Hence, the idiomatic version with a complete pattern match is easier to grasp (for the trained Haskell reader) and to prove exhaustive by the compiler.
The second version is slightly better than the last one because one sees at once that all cases are handled. Still, in the general case the RHS of the second equation could be longer, and there could be a where clause with a couple of definitions, the last of which could be something like:
where
  ... many definitions here ...
  head xs = ... alternative redefinition of head ...
To be absolutely sure one understands what the RHS does, one has to make sure common names have not been redefined.
The 3rd version is the worst one IMHO: a) The 2nd match fails to deconstruct the list and still uses head and tail. b) The case is slightly more verbose than the equivalent notation with 2 equations.
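For what it's worth, here is a sketch of how the case version could deconstruct the list directly in the pattern, removing head and tail entirely:
sum :: [Int] -> Int
sum xs = case xs of
  []     -> 0
  (y:ys) -> y + sum ys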
In many programming languages, if-statements are fundamental primitives, and things like switch-blocks are just syntax sugar to make deeply-nested if-statements easier to write.
Haskell does it the other way around. Pattern matching is the fundamental primitive, and an if-expression is literally just syntax sugar for pattern matching. Similarly, constructs like null and head are simply user-defined functions, which are all ultimately implemented using pattern matching. So pattern matching is the thing at the bottom of it all. (And therefore potentially more efficient than calling user-defined functions.)
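To illustrate, if/then/else could itself be written as an ordinary function using nothing but pattern matching on Bool (a sketch; myIf is my own name, not a standard function):
-- Behaves like the built-in if/then/else, defined purely by matching
-- on Bool's two constructors.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e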
In many cases - such as the ones you list above - it's simply a matter of style. The compiler can almost certainly optimise things to the point where all versions are roughly equal in performance. But generally [not always!] pattern matching makes it clearer exactly what you're trying to achieve.
(It's annoyingly easy to write an if-expression where you get the two alternatives the wrong way around. You'd think it would be a rare mistake, but it's surprisingly common. With a pattern match, there's little chance of making that specific mistake, although there's still plenty of other things to screw up.)
Each call to null, head and tail entails a pattern match. But the 1st version in your question does just one pattern match, and reuses its results through the named components of the pattern.
Just for that, it is better. But it is also more visually apparent, more readable.
Pattern matching is better than a string of if-then-else statements for (at least) the following reasons:
it is more declarative
it interacts well with sum-types
Pattern matching helps to reduce the amount of "accidental complexity" in your code - that is, code that is really more about implementation details rather than the essential logic of your program.
In most other languages, when the compiler/run-time sees a string of if-then-else statements it has no choice but to test the conditions in exactly the order the programmer specified them. But pattern matching encourages the programmer to focus more on describing what should happen versus how things should be performed. Due to the purity and immutability of values in Haskell, the compiler can consider the collection of patterns as a whole and decide how best to implement them.
An analogy would be C's switch statement. If you dump the assembly code for various switch statements you will see that sometimes the compiler will generate a chain/tree of comparisons and in other cases it will generate a jump table. The programmer uses the same syntax in both cases - the compiler chooses the implementation based on what the comparison values are. If they form a contiguous block of values the jump table method is used, otherwise a comparison tree is used. And this separation of concerns allows the compiler to implement even more strategies in the future if other patterns among the comparison values are detected.
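To tie that back to Haskell and the sum-type point above, here is a small sketch (the Shape type and area function are mine, just for illustration): because the compiler sees the whole set of patterns, it can both pick the dispatch strategy and warn when a constructor is not handled, which a chain of if/else comparisons cannot offer.
{-# OPTIONS_GHC -Wincomplete-patterns #-}

data Shape = Circle Double | Square Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s
area (Rect w h) = w * h        -- delete this equation and GHC warns about a non-exhaustive match

main :: IO ()
main = print (area (Circle 1.0))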
I have been reading many articles trying to understand the difference between functional and logic programming, but the only deduction I have been able to make so far is that logic programming defines programs through mathematical expressions. But that does not seem to be what distinguishes logic programming.
I would really appreciate some light being shed on the difference between functional and logic programming.
I wouldn't say that logic programming defines programs through mathematical expressions; that sounds more like functional programming. Logic programming uses logic expressions (well, ultimately logic is math).
In my opinion, the major difference between functional and logic programming is the "building blocks": functional programming uses functions while logic programming uses predicates. A predicate is not a function; it does not have a return value. Depending on the value of its arguments it may be true or false; if some values are undefined, it will try to find the values that would make the predicate true.
Prolog in particular uses a special form of logic clauses named Horn clauses that belong to first-order logic; HiLog uses clauses of higher-order logic.
When you write a Prolog predicate you are defining a Horn clause:
foo :- bar1, bar2, bar3. means that foo is true if bar1, bar2 and bar3 are true.
Note that I did not say if and only if; you can have multiple clauses for one predicate:
foo :-
    bar1.
foo :-
    bar2.
means that foo is true if bar1 is true or if bar2 is true.
Some say that logic programming is a superset of functional programming since each function could be expressed as a predicate:
foo(x,y) -> x+y.
could be written as
foo(X, Y, ReturnValue) :-
    ReturnValue is X+Y.
but I think that such statements are a bit misleading.
Another difference between logic and functional programming is backtracking. In functional programming, once you enter the body of the function you cannot fail and move on to the next definition. For example, you can write
abs(x) ->
    if x>0 x else -x
or even use guards:
abs(x) x>0 -> x;
abs(x) x=<0 -> -x.
but you cannot write
abs(x) ->
    x>0,
    x;
abs(x) ->
    -x.
on the other hand, in Prolog you could write
abs(X, R) :-
    X > 0,
    R is X.
abs(X, R) :-
    R is -X.
If you then call abs(-3, R), Prolog will try the first clause and fail when the execution reaches the -3 > 0 point, but you won't get an error; Prolog will try the second clause and return R = 3.
I do not think that it is impossible for a functional language to implement something similar (but I haven't used such a language).
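One way to approximate that "fail and fall through to the next clause" behaviour in Haskell is with the Alternative instance of Maybe (a rough sketch of mine, not a claim that this is equivalent to Prolog's backtracking):
import Control.Applicative ((<|>))

absLike :: Int -> Maybe Int
absLike x = clause1 <|> clause2            -- if clause1 "fails", try clause2
  where
    clause1 = if x > 0 then Just x else Nothing
    clause2 = Just (negate x)

main :: IO ()
main = print (absLike (-3))                -- Just 3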
All in all, although both paradigms are considered declarative, they are quite different; so different that comparing them feels like comparing functional and imperative styles. I would suggest trying a bit of logic programming; it should be a mind-boggling experience. However, you should try to understand the philosophy and not simply write programs; Prolog allows you to write in a functional or even imperative style (with monstrous results).
In a nutshell:
In functional programming, your program is a set of function definitions. The return value for each function is evaluated as a mathematical expression, possibly making use of passed arguments and other defined functions. For example, you can define a factorial function, which returns a factorial of a given number:
factorial 0 = 1                      -- the factorial of 0 is 1
factorial n = n * factorial (n - 1)  -- the factorial of n is n times the factorial of n - 1
In logic programming, your program is a set of predicates. Predicates are usually defined as sets of clauses, where each clause can be defined using mathematical expressions, other defined predicates, and propositional calculus. For example, you can define a factorial predicate, which holds whenever the second argument is the factorial of the first:
factorial(0, 1).        % it is true that the factorial of 0 is 1
factorial(X, Y) :-      % it is true that the factorial of X is Y, when all of the following are true:
    X > 0,              % X is positive (without this check the recursion would not terminate),
    X1 is X - 1,        % there is an X1, equal to X - 1,
    factorial(X1, Z),   % and it is true that the factorial of X1 is Z,
    Y is Z * X.         % and Y is Z * X
Both styles allow using mathematical expressions in the programs.
First, there are a lot of commonalities between functional and logic programming. That is, a lot of notions developed in one community can also be used in the other. Both paradigms started with rather crude implementations and strive towards purity.
But you want to know the differences.
So I will take Haskell on the one side and Prolog with constraints on the other. Practically all current Prolog systems offer constraints of some sort, like B, Ciao, ECLiPSe, GNU, IF, Scryer, SICStus, SWI, YAP, XSB. For the sake of the argument, I will use a very simple constraint dif/2 meaning inequality, which was present even in the very first Prolog implementation - so I will not use anything more advanced than that.
What functional programming is lacking
The most fundamental difference revolves around the notion of a variable. In functional programming, a variable denotes a concrete value. This value need not be entirely defined, but only those parts that are defined can be used in computations. Consider in Haskell:
> let v = iterate (tail) [1..3]
> v
[[1,2,3],[2,3],[3],[],*** Exception: Prelude.tail: empty list
After the 4th element, the value is undefined. Nevertheless, you can use the first 4 elements safely:
> take 4 v
[[1,2,3],[2,3],[3],[]]
Note that the syntax in functional programs is cleverly restricted to avoid that a variable is left undefined.
In logic programming, a variable does not need to refer to a concrete value. So, if we want a list of 3 elements, we might say:
?- length(Xs,3).
Xs = [_A,_B,_C].
In this answer, the elements of the list are variables. All possible instances of these variables are valid solutions, like Xs = [1,2,3]. Now, let's say that the first element should be different to the remaining elements:
?- length(Xs,3), Xs = [X|Ys], maplist(dif(X), Ys).
Xs = [X,_A,_B], Ys = [_A,_B], dif(X,_B), dif(X,_A).
Later on, we might demand that the elements in Xs are all equal. Before I write it out, I will try it alone:
?- maplist(=(_),Xs).
Xs = []
; Xs = [_A]
; Xs = [_A,_A]
; Xs = [_A,_A,_A]
; Xs = [_A,_A,_A,_A]
; ... .
See that the answers always contain the same variable? Now I can combine both queries:
?- length(Xs,3), Xs = [X|Ys], maplist(dif(X), Ys), maplist(=(_),Xs).
false.
So what we have shown here is that there is no 3 element list where the first element is different to the other elements and all elements are equal.
This generality has permitted the development of several constraint languages which are offered as libraries for Prolog systems, the most prominent being CLPFD and CHR.
There is no straightforward way to get similar functionality in functional programming. You can emulate things, but the emulation isn't quite the same.
What logic programming is lacking
But there are many things that are lacking in logic programming that make functional programming so interesting. In particular:
Higher-order programming: Functional programming has a very long tradition here and has developed a rich set of idioms. For Prolog, the first proposals date back to the early 1980s, but it is still not very common. At least ISO Prolog now has the homologue of apply, called call/2, call/3, ....
Lambdas: Again, it is possible to extend logic programming in that direction; the most prominent system is Lambda Prolog. More recently, lambdas have also been developed for ISO Prolog.
Type systems: There have been attempts, like Mercury, but it has not caught on that much. And there is no system with functionality comparable to type classes.
Purity: Haskell is entirely pure; a function Integer -> Integer is a function, with no fine print lurking around. And still you can perform side effects. Comparable approaches are evolving very slowly on the logic programming side.
There are many areas where functional and logic programming more or less overlap: for example, backtracking and laziness and list comprehensions; lazy evaluation and freeze/2, when/2, block; DCGs and monads. I will leave discussing these issues to others...
Logic programming and functional programming use different "metaphors" for computation. This often affects how you think about producing a solution, and sometimes means that different algorithms come naturally to a functional programmer than a logic programmer.
Both are based on mathematical foundations that provide more benefits for "pure" code; code that doesn't operate with side effects. There are languages for both paradigms that enforce purity, as well as languages that allow unconstrained side effects, but culturally the programmers for such languages tend to still value purity.
I'm going to consider append, a fairly basic operation in both logical and functional programming, for appending a list on to the end of another list.
In functional programming, we might consider append to be something like this:
append [] ys = ys
append (x:xs) ys = x : append xs ys
While in logic programming, we might consider append to be something like this:
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
These implement the same algorithm, and even work basically the same way, but they "mean" something very different.
The functional append defines the list that results from appending ys onto the end of xs. We think of append as a function from two lists to another list, and the runtime system is designed to calculate the result of the function when we invoke it on two lists.
The logical append defines a relationship between three lists, which is true if the third list is the elements of the first list followed by the elements of the second list. We think of append as a predicate that is either true or false for any 3 given lists, and the runtime system is designed to find values that will make this predicate true when we invoke it with some arguments bound to specific lists and some left unbound.
The thing that makes logical append different is you can use it to compute the list that results from appending one list onto another, but you can also use it to compute the list you'd need to append onto the end of another to get a third list (or whether no such list exists), or to compute the list to which you need to append another to get a third list, or to give you two possible lists that can be appended together to get a given third (and to explore all possible ways of doing this).
While equivalent in that you can do anything you can do in one in the other, they lead to different ways of thinking about your programming task. To implement something in functional programming, you think about how to produce your result from the results of other function calls (which you may also have to implement). To implement something in logic programming, you think about what relationships between your arguments (some of which are input and some of which are output, and not necessarily the same ones from call to call) will imply the desired relationship.
Prolog, being a logic language, gives you backtracking for free, and it's pretty noticeable.
To elaborate (and let me be clear that I'm in no way an expert in either paradigm), it looks to me like logic programming is way better when it comes to solving things, because that's precisely what the language does (which appears clearly when backtracking is needed, for example).
I think the difference is this:
imperative programming = modelling actions
functional programming = modelling reasoning
logic programming = modelling knowledge
choose what fits your mind best
functional programming:
when it's 6 PM, turn the light on.
logic programming:
when it's dark, turn the light on.