Mathematical vs. logical operator precedence - programming-languages

Why is it that in most programming languages the precedence structure of the mathematical operators differs from that of the logical operators?
Meaning: why does x / y * z evaluate to (x / y) * z, so that / and * share the same precedence, while with logical operators x || y && z evaluates to x || (y && z)?
So, is there a logical reason for this distinction (some hardware reason, an optimization technique), or is it just the way programming language creators decided to make them?

It's not about programming. Ever worked with Boolean algebra? AND has precedence over OR there too, and Boolean algebra dates from the 19th century (though I don't know when this convention came to be). The two are also written as * and +, which gives a clue in that regard (but can confuse in others).
Programming language designers just carried these precedence rules over, just like they carried over the precedence of arithmetic operators.

Probably just the way programming language creators decided to make them.
More specifically, it is likely that a programmer would want AND to be evaluated before OR when parentheses are absent.
In other words, || behaves like "addition and subtraction, left to right" while && behaves like "multiplication and division, left to right".
Also remember that ! (NOT) has even higher precedence than && (AND).
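As a quick illustration, here is a minimal check in Python (which spells the operators and/or rather than &&/||); the last loop spells out the Boolean-algebra analogy of AND as * and OR as + on 0/1 values:

x, y, z = False, True, True
assert (x or y and z) == (x or (y and z))  # 'and' binds tighter, like '*'
assert (1 + 2 * 3) == (1 + (2 * 3))        # '*' binds tighter than '+'

# On {0, 1}, OR-as-+ (clamped at 1) and AND-as-* agree with the
# logical operators:
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert min(1, x + y * z) == (x or y and z)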

Related

Pros / Cons of Tacit Programming in J

As a beginner in J I am often confronted with tacit programs which seem quite byzantine compared to the more familiar explicit form.
Now, just because I find interpretation hard does not mean that the tacit form is incorrect or wrong. Very often the tacit form is considerably shorter than the explicit form, and thus easier to take in all at once.
Question to the experts: do these tacit forms convey a better sense of structure, and maybe distil out the underlying computational mechanisms? Are there other benefits?
I'm hoping the answer is yes, and true for some non-trivial examples...
Tacit programming is usually faster and more efficient, because you can tell J exactly what you want to do, instead of making it figure that out as it works through your sentence. But as someone who loves the hell out of tacit programming, I can also say that it encourages you to think about things in the J way.
To spoil the ending and answer your question: yes, tacit programming can and does convey information about structure. Technically, it emphasizes meaning above all else, but many of the operators that feature prominently in the less-trivial expressions you'll encounter (#: & &. ^: to name a few) have very structure-related meanings.
The canonical example of why it pays to write tacit code is the special code for modular exponentiation, along with the assurance that there are many more shortcuts like it:
ts =: 6!:2, 7!:2#] NB. time and space
100 ts '2 (1e6&| # ^) 8888x'
2.3356e_5 16640
100 ts '1e6 | 2 ^ 8888x'
0.00787232 8.496e6
The other major thing you'll hear said is that when J sees an explicit definition, it has to parse and eval it every single time it applies it:
NB. use rank 0 to apply the verb a large number of times
100 ts 'i (4 : ''x + y + x * y'')"0 i=.i.100 100' NB. naive
0.0136254 404096
100 ts 'i (+ + *)"0 i=.i.100 100' NB. tacit
0.00271868 265728
NB. J is spending the time difference reinterpreting the definition each time
100 ts 'i (4 : ''x (+ + *) y'')"0 i=.i.100 100'
0.0136336 273024
But both of these reasons take a backseat to the idea that J has a very distinct style of solving problems. There is no if, there is ^:. There is no looping, there is rank. Likewise, Ken saw beauty in the fact that in calculus, f+g was the pointwise sum of functions—indeed, one defines f+g to be the function where (f+g)(x) = f(x) + g(x)—and since J was already so good at pointwise array addition, why stop there?
Just as a language like Haskell revels in the pleasure of combining higher-order functions together instead of "manually" syncing them up end to end, so does J. Semantically, take a look at the following examples:
h =: 3 : '(f y) + g y' – h is a function that grabs its argument y, plugs it into f and g, and funnels the results into a sum.
h =: f + g – h is the sum of the functions f and g.
(A < B) +. (A = B) – "A is less than B or A is equal to B."
A (< +. =) B – "A is less than or equal to B."
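For readers coming from other languages: h =: f + g is an instance of J's monadic fork (f g h), where (f g h) y means g(f(y), h(y)). A rough sketch of that combinator in Python (the names are illustrative, not any J API):

import operator

def fork(f, g, h):
    # J's monadic fork (f g h): apply f and h to y, combine the results with g
    return lambda y: g(f(y), h(y))

h_sum = fork(lambda y: y + 1, operator.add, lambda y: 2 * y)  # like (f + g)
print(h_sum(3))  # (3 + 1) + (2 * 3) = 10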
It's a lot more algebraic. And I've only talked about trains thus far; there's a lot to be said about the handiness of tools like ^: or &.. The lesson is fairly clear, though: J wants it to be easy to talk about your functions algebraically. If you had to wrap all your actions in a 3 :'' or 4 :''—or worse, name them on a separate line!—every time you wanted to apply them interestingly (like via / or ^: or ;.) you'd probably be very turned off from J.
Sure, I admit you will be hard-pressed to find examples as elegant as these as your expressions get more complex. The tacit style just takes some getting used to. The vocab has to be familiar (if not second nature) to you, and even then sometimes you have the pleasure of slogging through code that is simply inexcusable. This can happen with any language.
Not an expert, but the biggest positive aspects of coding tacitly for me are 1) it makes it a little easier to write programs that write programs, and 2) it is a little easier for me to grasp the J way of approaching problems (which is a big part of why I like to program in J). Explicit code feels more like procedural programming, especially if I am using control words such as if., while. or select.
The challenges are that 1) explicit code sometimes runs faster than tacit code, though this depends on the task and the algorithm, and 2) tacit code is interpreted as it is parsed, which means there are times when explicit code is cleaner because you can leave the code waiting for variable values that are only defined at run time.

Why is associativity a fundamental property of operators but not of precedence levels?

In programming language textbooks, we are always told how each operator in that language has either left or right associativity. It seems that associativity is a fundamental property of any operator, regardless of the number of operands it takes. It also seems to me that we can assign any associativity to any operator regardless of how we assign associativity to the other operators.
But why is it the case? Perhaps an example is better. Suppose I want to design a hypothetical programming language. Is it valid to assign associativity to these operators in this arbitrary way (all having the same precedence):
unary operator:
! right associative
binary operators:
+ left associative
- right associative
* left associative
/ right associative
! + - * / are my 5 operators all having the same precedence.
If yes, how would an expression like 2+2!3+5*6/3-5!3!3-3*2 be parenthesized by my hypothetical parser? And why?
EDIT:
The first example (2+2!3+5*6/3-5!3!3-3*2) is incorrect. Perhaps forget about the unary op; let me put it this way: can we assign different associativities to operators having the same precedence, the way I did above? If yes, how would an example, say 2+3-4*5/3+2, be evaluated? Most programming languages seem to assign the same associativity to all operators having the same precedence, but we always talk about OPERATOR ASSOCIATIVITY as if it were a property of an individual operator, not a property of a precedence level.
Let us remember what associativity means. Take any operator, say #. Its associativity, as we all know, is the rule that disambiguates expressions of the form a # b # c: if # is left associative, it's parsed as (a # b) # c; if it's right associative, a # (b # c). It could also be nonassociative, in which case a # b # c is a syntax error.
What about if we have two different operators, say # and $? If one is of higher precedence than the other, there's nothing more to say and no work for associativity to do; precedence takes care of the disambiguation. However, if they are of equal precedence, we need associativity to help us. There are three easy cases:
If both operators are left associative, a # b $ c means (a # b) $ c.
If both operators are right associative, a # b $ c means a # (b $ c).
If both operators are nonassociative, then a # b $ c is a syntax error.
In the remaining cases, the operators do not agree about associativity. Which operator's choice should win? You could probably devise such associativity-precedence rules, but I think the most natural rule to impose is to declare any such case a syntax error. After all, if two operators are of equal precedence, why should one have associativity-precedence over the other?
Under the natural rule I just gave, your example expression is a syntax error.
Now, we could certainly assign differing associativities to operators of the same precedence. However, this would mean that there are combinations of operators of equal precedence (such as your example!) that are syntax errors. Most language designers seem to prefer to avoid that and assign the same associativity to all operators of equal precedence; that way, all combinations are legal. It's just aesthetics, I think.
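To make this concrete, here is a minimal sketch in Python of a toy single-precedence-level parser (not production code) that records associativity per operator and, following the natural rule above, rejects chains of equal-precedence operators that disagree:

ASSOC = {'+': 'left', '-': 'right', '*': 'left', '/': 'right'}  # one precedence level

def parse(tokens):
    # Parse e.g. ['2', '+', '3', '*', '4'] into a nested (op, lhs, rhs) tree.
    operands, operators = tokens[0::2], tokens[1::2]
    if not operators:
        return operands[0]
    kinds = {ASSOC[op] for op in operators}
    if len(kinds) > 1:
        raise SyntaxError('mixed associativity at equal precedence: %s' % operators)
    if kinds == {'left'}:                 # fold left: ((a op b) op c)
        tree = operands[0]
        for op, rhs in zip(operators, operands[1:]):
            tree = (op, tree, rhs)
        return tree
    tree = operands[-1]                   # fold right: (a op (b op c))
    for op, lhs in zip(reversed(operators), reversed(operands[:-1])):
        tree = (op, lhs, tree)
    return tree

print(parse(['2', '+', '3', '*', '4']))   # ('*', ('+', '2', '3'), '4')
print(parse(['2', '-', '3', '/', '4']))   # ('-', '2', ('/', '3', '4'))
try:
    parse(['2', '+', '3', '-', '4'])      # '+' is left, '-' is right ...
except SyntaxError as e:
    print(e)                              # ... so this is a syntax error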
You have to define associativity somehow, and most languages choose to assign associativity (and precedence) "naturally" -- to match the rules of common mathematics.
There are notable exceptions, however -- APL has strict right-to-left associativity, with all operators at the same precedence level.

Difference between logic programming and functional programming

I have been reading many articles trying to understand the difference between functional and logic programming, but the only deduction I have been able to make so far is that logic programming defines programs through mathematical expressions, and such a thing does not really seem characteristic of logic programming.
I would really appreciate some light being shed on the difference between functional and logic programming.
I wouldn't say that logic programming defines programs through mathematical expressions; that sounds more like functional programming. Logic programming uses logic expressions (well, ultimately logic is math).
In my opinion, the major difference between functional and logic programming is the "building blocks": functional programming uses functions while logic programming uses predicates. A predicate is not a function; it does not have a return value. Depending on the values of its arguments it may be true or false; if some values are undefined, it will try to find the values that would make the predicate true.
Prolog in particular uses a special form of logic clauses named Horn clauses that belong to first-order logic; HiLog uses clauses of higher-order logic.
When you write a Prolog predicate you are defining a Horn clause:
foo :- bar1, bar2, bar3. means that foo is true if bar1, bar2 and bar3 are true.
Note that I did not say if and only if; you can have multiple clauses for one predicate:
foo :-
    bar1.
foo :-
    bar2.
means that foo is true if bar1 is true or if bar2 is true.
Some say that logic programming is a superset of functional programming, since each function could be expressed as a predicate:
foo(X, Y) -> X + Y.
could be written as
foo(X, Y, ReturnValue) :-
    ReturnValue is X + Y.
but I think that such statements are a bit misleading.
Another difference between logic and functional programming is backtracking. In functional programming, once you enter the body of a function you cannot fail and move on to the next definition. For example, you can write
abs(X) ->
    if X > 0 -> X; true -> -X end.
or even use guards:
abs(X) when X > 0 -> X;
abs(X) when X =< 0 -> -X.
but you cannot write
abs(X) ->
    X > 0,
    X;
abs(X) ->
    -X.
On the other hand, in Prolog you could write
abs(X, R) :-
    X > 0,
    R is X.
abs(X, R) :-
    R is -X.
If you then call abs(-3, R), Prolog tries the first clause and fails when execution reaches -3 > 0, but you won't get an error; Prolog tries the second clause and returns R = 3.
I do not think that it is impossible for a functional language to implement something similar (but I haven't used such a language).
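As a sketch of what such an emulation might look like (plain Python, standard library only): Prolog's "fail and fall through to the next clause" can be mimicked with generators, where each clause yields its solutions (possibly none) and the clauses are chained in order:

def abs_clause1(x):
    if x > 0:        # plays the role of the Prolog goal X > 0
        yield x      # succeed with R = X
    # otherwise yield nothing: this clause fails

def abs_clause2(x):
    yield -x         # always succeeds with R = -X

def abs_solutions(x):
    yield from abs_clause1(x)  # try the clauses in order, like Prolog
    yield from abs_clause2(x)

print(next(abs_solutions(-3)))  # 3: clause 1 fails, clause 2 succeeds
print(list(abs_solutions(3)))   # [3, -3]: backtracking into clause 2
                                # yields a second answer, just as in Prolog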
All in all, although both paradigms are considered declarative, they are quite different; so different that comparing them feels like comparing functional and imperative styles. I would suggest to try a bit of logic programming; it should be a mind-boggling experience. However, you should try to understand the philosophy and not simply write programs; Prolog allows you to write in functional or even imperative style (with monstrous results).
In a nutshell:
In functional programming, your program is a set of function definitions. The return value for each function is evaluated as a mathematical expression, possibly making use of passed arguments and other defined functions. For example, you can define a factorial function, which returns a factorial of a given number:
factorial 0 = 1                      -- the factorial of 0 is 1
factorial n = n * factorial (n - 1)  -- the factorial of n is n times the factorial of n - 1
In logic programming, your program is a set of predicates. Predicates are usually defined as sets of clauses, where each clause can be defined using mathematical expressions, other defined predicates, and propositional calculus. For example, you can define a factorial predicate, which holds whenever the second argument is the factorial of the first:
factorial(0, 1).          % it is true that the factorial of 0 is 1
factorial(X, Y) :-        % it is true that the factorial of X is Y, when all of the following are true:
    X > 0,                % X is positive (without this guard, backtracking past the base case recurses forever),
    X1 is X - 1,          % there is an X1, equal to X - 1,
    factorial(X1, Z),     % and it is true that the factorial of X1 is Z,
    Y is Z * X.           % and Y is Z * X
Both styles allow using mathematical expressions in the programs.
First, there are a lot of commonalities between functional and logic programming. That is, a lot of notions developed in one community can also be used in the other. Both paradigms started with rather crude implementations and strive towards purity.
But you want to know the differences.
So I will take Haskell on the one side and Prolog with constraints on the other. Practically all current Prolog systems offer constraints of some sort, like B, Ciao, ECLiPSe, GNU, IF, Scryer, SICStus, SWI, YAP, XSB. For the sake of the argument, I will use a very simple constraint dif/2 meaning inequality, which was present even in the very first Prolog implementation - so I will not use anything more advanced than that.
What functional programming is lacking
The most fundamental difference revolves around the notion of a variable. In functional programming, a variable denotes a concrete value. This value need not be entirely defined, but only those parts that are defined can be used in computations. Consider in Haskell:
> let v = iterate (tail) [1..3]
> v
[[1,2,3],[2,3],[3],[],*** Exception: Prelude.tail: empty list
After the 4th element, the value is undefined. Nevertheless, you can use the first 4 elements safely:
> take 4 v
[[1,2,3],[2,3],[3],[]]
Note that the syntax of functional programs is cleverly restricted so that a variable can never be left undefined.
In logic programming, a variable does not need to refer to a concrete value. So, if we want a list of 3 elements, we might say:
?- length(Xs,3).
Xs = [_A,_B,_C].
In this answer, the elements of the list are variables. All possible instances of these variables are valid solutions, like Xs = [1,2,3]. Now, let's say that the first element should be different from the remaining elements:
?- length(Xs,3), Xs = [X|Ys], maplist(dif(X), Ys).
Xs = [X,_A,_B], Ys = [_A,_B], dif(X,_B), dif(X,_A).
Later on, we might demand that the elements in Xs are all equal. Before I write it out, I will try it alone:
?- maplist(=(_),Xs).
Xs = []
; Xs = [_A]
; Xs = [_A,_A]
; Xs = [_A,_A,_A]
; Xs = [_A,_A,_A,_A]
; ... .
See how the answers always contain the same variable? Now I can combine both queries:
?- length(Xs,3), Xs = [X|Ys], maplist(dif(X), Ys), maplist(=(_),Xs).
false.
So what we have shown here is that there is no 3-element list whose first element is different from the other elements while all the elements are equal.
This generality has permitted the development of several constraint languages, which are offered as libraries for Prolog systems; the most prominent are CLP(FD) and CHR.
There is no straightforward way to get similar functionality in functional programming. You can emulate things, but the emulation isn't quite the same.
What logic programming is lacking
But logic programming lacks many things that make functional programming so interesting. In particular:
Higher-order programming: functional programming has a very long tradition here and has developed a rich set of idioms. For Prolog, the first proposals date back to the early 1980s, but it is still not very common. At least ISO Prolog now has the homologue of apply, called call/2, call/3, ...
Lambdas: again, it is possible to extend logic programming in that direction; the most prominent system is Lambda Prolog. More recently, lambdas have also been developed for ISO Prolog.
Type systems: There have been attempts, like Mercury, but it has not caught on that much. And there is no system with functionality comparable to type classes.
Purity: Haskell is entirely pure, a function Integer -> Integer is a function. No fine print lurking around. And still you can perform side effects. Comparable approaches are very slowly evolving.
There are many areas where functional and logic programming more or less overlap: for example, backtracking and laziness; list comprehensions; lazy evaluation and freeze/2, when/2, block; DCGs and monads. I will leave discussing these issues to others...
Logic programming and functional programming use different "metaphors" for computation. This often affects how you think about producing a solution, and sometimes means that different algorithms come naturally to a functional programmer than a logic programmer.
Both are based on mathematical foundations that provide more benefits for "pure" code; code that doesn't operate with side effects. There are languages for both paradigms that enforce purity, as well as languages that allow unconstrained side effects, but culturally the programmers for such languages tend to still value purity.
I'm going to consider append, a fairly basic operation in both logic and functional programming, for appending one list onto the end of another.
In functional programming, we might consider append to be something like this:
append [] ys = ys
append (x:xs) ys = x : append xs ys
While in logic programming, we might consider append to be something like this:
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :- append(Xs, Ys, Zs).
These implement the same algorithm, and even work basically the same way, but they "mean" something very different.
The functional append defines the list that results from appending ys onto the end of xs. We think of append as a function from two lists to another list, and the runtime system is designed to calculate the result of the function when we invoke it on two lists.
The logical append defines a relationship between three lists, which is true if the third list is the elements of the first list followed by the elements of the second list. We think of append as a predicate that is either true or false for any 3 given lists, and the runtime system is designed to find values that will make this predicate true when we invoke it with some arguments bound to specific lists and some left unbound.
The thing that makes logical append different is you can use it to compute the list that results from appending one list onto another, but you can also use it to compute the list you'd need to append onto the end of another to get a third list (or whether no such list exists), or to compute the list to which you need to append another to get a third list, or to give you two possible lists that can be appended together to get a given third (and to explore all possible ways of doing this).
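As a rough illustration of this "one relation, many modes" idea in Python (the helper names are hypothetical), the single relation append/3 does the work of all of these separate functions at once:

def append_forward(xs, ys):                # append(Xs, Ys, Zs): Xs, Ys bound
    return xs + ys

def subtract_prefix(xs, zs):               # append(Xs, Ys, Zs): Xs, Zs bound
    if zs[:len(xs)] == xs:
        return zs[len(xs):]                # the Ys that makes the relation true
    return None                            # no such list exists: the goal fails

def splits(zs):                            # append(Xs, Ys, Zs): only Zs bound
    for i in range(len(zs) + 1):
        yield zs[:i], zs[i:]               # every (Xs, Ys) pair, via backtracking

print(append_forward([1, 2], [3]))         # [1, 2, 3]
print(subtract_prefix([1, 2], [1, 2, 3]))  # [3]
print(list(splits([1, 2])))                # [([], [1, 2]), ([1], [2]), ([1, 2], [])]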
While equivalent in that you can do anything you can do in one in the other, they lead to different ways of thinking about your programming task. To implement something in functional programming, you think about how to produce your result from the results of other function calls (which you may also have to implement). To implement something in logic programming, you think about what relationships between your arguments (some of which are input and some of which are output, and not necessarily the same ones from call to call) will imply the desired relationship.
Prolog, being a logic language, gives you backtracking for free, and it's pretty noticeable.
To elaborate (and let me be clear that I'm in no way an expert in either paradigm), it looks to me like logic programming is considerably better when it comes to search problems, because searching is precisely what the language does (which shows clearly when backtracking is needed, for example).
I think the difference is this:
imperative programming = modelling actions
functional programming = modelling reasoning
logic programming = modelling knowledge
choose what fits your mind best
functional programming:
when 6PM, light on.
logic programming:
when dark, light on.

Language support for chained comparison operators (x < y < z)

A question was posted about chained comparison operators and how they are interpreted in different languages.
Chaining comparison operators means that (x < y < z) would be interpreted as ((x < y) && (y < z)) instead of as ((x < y) < z).
The comments on that question show that Python, Perl 6, and Mathematica support chaining comparison operators, but what other languages support this feature and why is it not more common?
A quick look at the Python documentation shows that this feature has been supported since at least 1996. Is there a reason more languages have not added this syntax?
A statically typed language would have problems with type conversion, but are there other reasons this is not more common?
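For reference, a quick sketch of the Python semantics: a chain is equivalent to the &&-joined form, except that each middle operand is evaluated at most once:

x, y, z = 1, 2, 3
assert (x < y < z) == ((x < y) and (y < z))

def noisy_y():
    print('evaluating y')
    return 2

_ = x < noisy_y() < z  # prints 'evaluating y' exactly once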
It should be more common, but I suspect it is not, because it makes parsing the language more complex.
Benefits:
Upholds the principle of least surprise
Reads like math is taught
Reduces cognitive load (see previous 2 points)
Drawbacks:
Grammar is more complex for the language
Special case syntactic sugar
As to why not, my guesses are:
Language author(s) didn't think of it
Is on the 'nice to have' list
Was decided that it wasn't useful enough to justify implementing
The benefit is too small to justify complicating the language.
You don't need it that often, and it is easy to get the same effect cleanly with a few characters more.
Scheme (and probably most other Lisp-family languages) supports comparison of multiple values naturally within its grammar:
(< x y z)
This can be considered an ordinary function application of the < function with three arguments. See 6.2.5 Numerical Operations in the specification.
Clojure supports chained comparison too.
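In Python terms, that variadic < amounts to checking every adjacent pair of arguments, i.e. that the arguments are strictly increasing (a sketch):

def lt(*args):
    # true iff args are in strictly increasing order, like Scheme's (< x y z)
    return all(a < b for a, b in zip(args, args[1:]))

assert lt(1, 2, 3)   # like (< 1 2 3)
assert not lt(1, 3, 2)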
Chained comparison has been a feature of BCPL since the late 1960s.
I think Icon is the original language to have this, and in Icon it falls out of the way booleans are handled: failure is a special 'fail' signal, and all other values are treated as true.

Does any language use "=/=" for denoting the not-equal operator

Does any programming language use =/= for not-equal?
Are there any lexical difficulties for scanners to recognize such an operator? Or was it the case historically?
[Note: this is NOT a homework question. I'm just curious.]
Erlang uses it to denote "exactly not equal to".
Also, generally there shouldn't be any difficulty for a scanner in recognizing such a token (proof by example: Erlang ;-))
In Erlang, =/= means "exactly not equal to", as noted by Bytecode Ninja. Erlang's notation is strongly influenced by Prolog, which has the closely related =\= for arithmetic inequality. There are several languages which make defining new operators trivial. Haskell is one such; =/= isn't defined in the Haskell standard, but defining it would be trivial:
(=/=) x y = ....
This could then be used in function call-like syntax:
(=/=) 5 6
Or as an inline operator:
5 =/= 6
The semantics would depend on the implementation, of course.
I think that Common Lisp users could write some kind of reader macro that used that sequence too, but I'm not positive.
Not one of the mainstream ones. One could easily create such a language, however.
(As others have mentioned, Erlang and a few other languages do have it already)
Nope. Unless you have a really weird language, there's nothing special about this operator in terms of lexical analysis.
By the way, Java has:
> (greater than)
>> (signed right shift)
>>= (signed right shift compound assignment)
>>> (unsigned right shift)
>>>= (unsigned right shift compound assignment)
> (closing generic type parameter, nestable)
>>, >>>, >>>>, ...
and they all work just fine.
Related question
What trick does Java use to avoid spaces in >>?
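To make the "nothing special lexically" point concrete, here is a toy maximal-munch scanner sketch in Python; the operator table is a made-up mini-language, not any real lexer:

OPERATORS = ['=/=', '==', '=', '/']  # longest operators listed first

def scan(src):
    tokens, i = [], 0
    while i < len(src):
        if src[i].isspace():
            i += 1
            continue
        for op in OPERATORS:
            if src.startswith(op, i):  # maximal munch: longest match wins
                tokens.append(op)
                i += len(op)
                break
        else:
            tokens.append(src[i])      # anything else: a single-char token
            i += 1
    return tokens

print(scan('a =/= b'))   # ['a', '=/=', 'b']
print(scan('a == b/c'))  # ['a', '==', 'b', '/', 'c']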
Yes, Erlang uses this symbol as one of its representations for "not equal".
Erlang is a language with strong support for concurrency, originally designed within Ericsson and used for writing software for telephone exchanges, but now gaining significant popularity outside.
You may want to check out the Fortress Introductory Slides. Fortress uses =/= for checking inequality. I suppose you are looking for readability in languages; if that's the case, I can tell you that Fortress code can be rendered into very neat-looking TeX.
Project Fortress Old Site (moved to java.net)
None that I know of
Not much harder than any other operator like +=, ??, etc.
However, it's very cumbersome to type such an operator. Even != is simpler.
A Google Code search for =/= doesn't turn up anything obvious, so I would say nothing mainstream.
There wouldn't be any issues with any operator you want; the scanner would simply look for =/= instead of != or <> or whatever your language uses.
There are some really weird languages out there, like Brainfuck:
++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>.
That is the code for "Hello World".
