Why do some languages need semicolons? - programming-languages

I understand that semicolons indicate the end of a line in languages like Java, but why?
I get asked this a lot by other people, and I can't really think of a good way to explain why this works better than just using line breaks or whitespace.

They don't signal end of line, they signal end of statement.
There are some languages that don't require them, but those languages don't allow multiple statements on a single line or a single statement to span multiple lines (without some other marker, like VB's _ continuation character).
Why do some languages allow multiple statements on a line? The philosophy is that whitespace is irrelevant (an end of line character is whitespace). This allows flexibility in how the code is formatted as formatting is not part of the semantic meaning.

First of all, the semicolon is a statement separator, not a line separator. Some languages use the new line character as statement separator, but languages which ignore all whitespace tend to use the semicolon.
Why do languages ignore whitespace?
A language ignores whitespace to let the programmer format the source code however they like. For example, in Java there is no difference between
if (welcome)
    System.out.println("hello world");
and
if (welcome) System.out.println("hello world");
This is not because there is one separate case for each of these in the grammar of the language, but because the whitespace is simply ignored.
Why does a programming language need a statement separator?
This is the core of the question. To understand it, let's consider a small language without any statement separator. It contains the following statement types:
var x = foo()
y[0, 1] = x
bar()
Here, y is a two-dimensional array and x is written to one of the entries of y.
Now let's look at these statements the way the compiler sees them:
var x = foo() y[0, 1] = x bar()
Because there is no statement separator, the compiler has to recognize the end of each statement by itself, to make sense of the input. Is the compiler able to do so? I guess in the above example the compiler can do it.
Now, let's add another type of statement to our language:
[x, y] = ["hello", "world"]
The multi assignment allows the programmer to assign multiple values at once. After this line, the variable x will contain the value "hello" while the variable y contains "world". This might be really handy to allow multiple return values from a function. Now how does this work together with the remaining statement types?
Consider the following sequence of statements:
foo()
[x, y] = [1, 2]
First, we call the method foo. Afterwards, we assign 1 to x and 2 to y. At least this is what we meant to do. Here is what the compiler sees:
foo() [x, y] = [1, 2]
Is the compiler able to recognize each statement? No. There are at least two possible interpretations. The first is the one we intended. Here is the second one:
foo()[x, y] = [1, 2]
What does this mean? First, we call the method foo. This method is supposed to return a two-dimensional array. Now, we write the array [1, 2] at the position [x, y] in the returned array.
The compiler cannot recognize the statements, since there are at least two valid interpretations of the given input. Of course, this should never happen in a real programming language. In the given example it might be easy to resolve, but the point is that it is hard to design a programming language without a statement separator that is not ambiguous. It is hard because the language designer has to consider all possible combinations of statement types to be sure the language is not ambiguous.
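As it happens, Python, which uses the newline as its statement separator, contains almost exactly this case, and the newline is what disambiguates it. A small sketch (foo and the values are made up):

def foo():
    return {}            # made-up function returning something subscriptable

foo()                    # statement 1
[x, y] = [1, 2]          # statement 2: the newline separates them
print(x, y)              # prints: 1 2

# Joined into one line, the same tokens would mean something else entirely:
# foo()[x, y] = [1, 2]   # one statement: assign into the dict foo() returns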
Thus, the statement separator helps the language designer to design the language in the first place, but more importantly it lets the designer easily extend the language in the future, for example by adding new statement types. This is a big thing, since once code is written in your language, you cannot simply change the grammar of existing statement types: that would stop all the existing code from compiling.
TL;DR
Summing it all up, the semicolon was introduced as statement separator in whitespace ignoring languages, because it is easier to design and extend a language which has a statement separator.

Many languages allow you to put in as much spacing as you like. This allows you to have control over how the code looks.
Consider:
String result = "asdfsasdfs"
+ "asdfs"
+ "asdfsdf";
Because you are allowed to insert extra newlines, you can split that statement across several lines without a problem. The language still needs to know when the statement is finished; that is why you need the semicolon.
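For contrast, a newline-terminated language has to make the continuation explicit instead. In Python, for example (my illustration, not from the answer), the same split works inside parentheses:

result = ("asdfsasdfs"
          "asdfs"       # adjacent string literals are concatenated
          "asdfsdf")
print(result)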

Languages do it because the semicolon signifies the end of a statement, not the end of a line, which means you can compress code to make it smaller and take up less space.
Take the C++ code (#include <iostream>):
for(int i = 0; i < 5; ++i){
std::cout << "did you know?" << std::endl;
std::cout << "; signifies **end of statement**" << std::endl;
std::cout << "**not the end of the line**" << std::endl;
}
It could also be written
for(int i = 0; i < 5; ++i){std::cout << "did you know?" << std::endl; std::cout << "; signifies **end of statement**" << std::endl; std::cout << "**not the end of the line**" << std::endl;}

Some programming languages use it to signify the end of a statement, making the language oblivious to whitespace from a statement standpoint. One thing to bear in mind is that if, at compile time, you check for either a newline or a semicolon, you then have to assess several different "situations"; the compiler might get what you wanted to do wrong, and it takes a bit longer to look for those situations than to simply look for a semicolon at the end of the statement. Some higher-level languages try to reduce semicolon use or remove it altogether in order to save a few keystrokes; these languages are oriented more toward the comfort of the programmer and generally come with all sorts of syntactic sugar. One could argue that not using semicolons is itself a kind of syntactic sugar.
Whether a language uses a semicolon should accord with what the language is trying to accomplish. Languages like C and C++ are mostly about performance; Java and C# sit a bit higher in the abstraction sense than C and C++; and then we have things like Scala, Python and Ruby, which are made mostly to make programming more comfortable at the cost of performance (Ruby openly admits this, and it's very pronounced in Python).
So why do some languages "need" semicolons?
Makes compiling easier
The designer of the language thinks it's more consistent
Historical reasons (Java, C# and C++ are also C's children for example)
And one last thing: JavaScript actually inserts the semicolons during parsing (automatic semicolon insertion), so it's not actually semicolon-free.

Short answer:
Because everyone else does it.
No, not everyone. Many popular languages, like Python, Ruby, and Visual Basic, don't use the semicolon to end statements; they use line breaks. Many languages, not "everyone", still use the semicolon for historical reasons rather than rational argument: semicolons had an important role in replacing the punched-card format in the first age of computing, but today they could be discarded entirely.
In fact, there are two popular ways to specify the end of a statement:
Using a semicolon.
Leaving it as is. This makes the compiler read a line break as the end of a statement. When you want to extend a statement over more than one line, you use a special character (like \ in Python) to say that the statement has not finished.
To make code more readable, having to use a special character around statement boundaries should be the exception, not the rule.
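A small sketch of both conventions side by side in Python, where the newline normally ends a statement, \ extends it, and ; optionally separates statements on one line:

total = 1 + \
        2 + \
        3                # one statement spanning three lines
a = 1; b = 2             # two statements on one line
print(total, a, b)       # prints: 6 1 2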

Short answer:
Because everyone else does it.
In theory, a language's statement is whatever the language designer is able to syntactically interpret when parsing your file. So if the language designer did not want semicolons, they could use periods, dashes, spaces, newlines, or whatever else to mark the separation of statements.
Language designers often make the syntax easy to understand so that it can become popular.
Wikipedia: Semicolon Usage in Computer Languages
So if some language designer created a language that used ':-)' to denote the end of a statement, it would 1) be hard to read, and 2) not be popular with people who are already used to using ';'.
echo "Take Care" :-)

Related

Advantage to a certain string comparison order

Looking at some pieces of code around the internet, I've noticed some authors tend to write string comparisons like
if("String"==$variable)
in PHP, or
if("String".equals(variable))
Whereas my preference is:
if(variable.equals("String"))
I realize these are effectively equal: they compare two strings for equality. But I was curious if there was an advantage to one over the other in terms of performance or something else.
Thank you for the help!
One argument for the approach of using an equality function, or of writing if( constant == variable ) rather than if( variable == constant ), is that it prevents you from accidentally making a typo and writing an assignment instead of a comparison, for instance:
if( s = "test" )
Will assign "test" to s, which results in undesired behaviour and can cause a hard-to-find bug. However:
if( "test" = s )
Will in most languages (that I'm aware of) result in some form of warning or compiler error, helping to avoid a bug later on.
With a simple int example, this prevents accidental writes of
if (a=5)
which would be a compile error if written as
if (5=a)
I sure don't know about all languages, but decent C compilers warn you about if (a=b). Perhaps the language your question was written about doesn't have such a feature, so to be able to generate an error in such a case, people reversed the order of the comparison arguments.
Yoda conditions call these some.
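For comparison (my example, not from the thread): Python sidesteps the problem entirely, because assignment is a statement rather than an expression, so the typo cannot even compile:

# if a = 5:              # SyntaxError in Python: assignment is not an expression
#     ...
# The deliberate form (Python 3.8+) uses the walrus operator instead:
if (a := 5) == 5:
    print("assigned and compared on purpose:", a)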
The kind of syntax a language uses has nothing to do with efficiency. It is all about how the comparison algorithm works.
In the examples you mentioned, this:
if("String".equals(variable))
and this:
if(variable.equals("String"))
would be exactly the same, because the literal "String" is itself treated as a String object.
Languages that provide a comparison method for Strings will generally use a fast implementation, so you shouldn't worry about it unless you want to implement the method yourself ;)

A language in which everything compiles

I'm trying to do some research for a new project, and I need to create objects dynamically from random data.
For this to work, I need a language / compiler that doesn't have problems with weird uncompilable code lying around.
Basically, I need the random code to compile (or be interpreted) as much as possible, meaning that the uncompilable parts will be ignored and only the compilable parts will create the objects (which could be run).
Object Oriented-ness is not a must, but is a very strong advantage.
I thought of ASM, but it's very messy, and I'd probably need more readable code.
Thanks!
It sounds like you're doing something very much like genetic programming; even if you aren't, GP has to solve some of the same problems—using randomness to generate valid programs. The approach to this that is typically used is to work with a syntax tree: rather than storing x + y * 3 - 2, you store something like the following:
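        -
       / \
      +   2
     / \
    x   *
       / \
      y   3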
Then, instead of randomly changing the syntax, one can randomly change nodes in the tree instead. And if x should randomly change to, say, +, you can statically know that this means you need to insert two children (or not, depending on how you define +).
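Here is a minimal sketch of that idea in Python, with the tree stored as nested lists (my own representation, not from the answer):

import random

OPERATORS = ['+', '-', '*']

def mutate(node):
    # Leaf: occasionally grow it into a random operator node with two children.
    if not isinstance(node, list):
        if random.random() < 0.3:
            return [random.choice(OPERATORS), node, random.randint(0, 9)]
        return node
    # Internal node: recurse into one randomly chosen child (index 0 is the operator).
    i = random.randrange(1, len(node))
    new = node[:]
    new[i] = mutate(node[i])
    return new

tree = ['-', ['+', 'x', ['*', 'y', 3]], 2]   # x + y * 3 - 2
print(mutate(tree))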
A good choice for a language to work with for this would be any Lisp dialect. In a Lisp, the above program would be written (- (+ x (* y 3)) 2), which is just a linearization of the syntax tree using parentheses to show depth. And in fact, Lisps expose this feature: you can just as easily work with the object '(- (+ x (* y 3)) 2) (note the leading quote). This is a three-element list, whose first element is -, second element is another list, and third element is 2. And, though you might or might not want it for your particular application, there's an eval function, such that (eval '(- (+ x (* y 3)) 2)) will take in the given list, treat it as a Lisp syntax tree/program, and evaluate it. This is what makes Lisps so attractive for doing this sort of work; Lisp syntax is basically a reification of the syntax-tree, and if you operate at the syntax-tree level, you can work on code as though it was a value. Lisp won't help you read /dev/random as a program directly, but with a little interpretation layered on top, you should be able to get what you want.
I should also mention, though I don't know much about it (not that I know much about ordinary genetic programming either), the existence of linear genetic programming. This is sort of like the assembly model that you mentioned: a linear stream of very, very simple instructions. The advantage here would seem to be that if you are working with /dev/random or something like it, the amount of interpretation needed is very small; the disadvantage would be, as you mentioned, the low-level nature of the code.
I'm not sure if this is what you're looking for, but any programming language can be made to function this way. For any programming language P, define the language P_always as follows:
If p is a valid program in P, then p is a valid program in P_always whose meaning is the same as its meaning in P.
If p is not a valid program in P, then p is a valid program in P_always whose meaning is the same as a program that immediately terminates.
For example, I could make the language C++_always so that this program:
#include <iostream>
using namespace std;
int main() {
    cout << "Hello, world!" << endl;
}
would compile and behave just as it does in C++, printing "Hello, world!", while this program:
Hahaha! This isn't legal C++ code!
Would be a legal program that just does absolutely nothing.
To solve your original problem, just take any OOP language like Java, Smalltalk, etc. and construct the appropriate Java_always, Smalltalk_always, etc. language from it. Again, I'm not sure if this is at all what you're looking for, but it could be done very easily.
Alternatively, consider finding a grammar for any OOP language and then using that grammar to produce random syntactically valid programs. You could then filter those programs down by using the P_always language for that language to eliminate programs that are syntactically valid but not semantically valid.
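A rough sketch of the same construction for Python (call it Python_always; the helper name is mine):

def compile_always(source):
    # If the source is a valid program, keep its meaning...
    try:
        return compile(source, "<input>", "exec")
    # ...otherwise give it the meaning of a program that does nothing.
    except SyntaxError:
        return compile("pass", "<noop>", "exec")

exec(compile_always('print("Hello, world!")'))            # runs normally
exec(compile_always("Hahaha! This isn't legal Python!"))  # does nothing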
Divide the ASCII byte values into 9 classes (division modulo 9 would help). Then assign them to Brainfuck codewords (see http://en.wikipedia.org/wiki/Brainfuck). Then interpret the result as Brainfuck.
There you go: any sequence of ASCII characters is a program. Not that it's going to do anything sensible... This approach has a much better chance, compared to templatetypedef's answer, of getting a nontrivial program from a random byte sequence.
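A quick sketch of that mapping in Python (the choice of classes is mine; I treat the 9th class as a no-op, since Brainfuck has only 8 commands, and a real run would still need the brackets to balance):

BF = ['>', '<', '+', '-', '.', ',', '[', ']', '']   # '' = the no-op class

def ascii_to_bf(text):
    return ''.join(BF[ord(ch) % 9] for ch in text)

print(ascii_to_bf("any ASCII text at all becomes a Brainfuck program"))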
Text Editors
You could try feeding random character strings to an editor like Emacs or VI. Many (most?) characters will perform an editing action but some will do nothing (other than beep, perhaps). You would have to ensure that the random code mutator never generates the character sequence that exits the editor. However, this experience would be much like programming a Turing machine -- the code is not too readable.
Mathematica
In Mathematica, undefined symbols and other expressions evaluate to themselves, without error. So, that language might be a viable choice if you can arrange for the random code mutator to always generate well-formed expressions. This would be readily achievable since the basic Mathematica syntax is trivial, making it easy to operate on syntactic units rather than at the character level. It would be even easier if the mutator were written in Mathematica itself since expression-munging is Mathematica's forte. You could define a mini-language of valid operations within a Mathematica package that does not import the system-defined symbols. This would allow you to generate well-formed expressions to your heart's content without fear of generating a dangerous expression, like DeleteFile[FileNames["*.*", "/", Infinity]].
I believe Common Lisp should suit your needs. I always have some code in my SLIME/Emacs session that wouldn't compile. You can always tweak things and redefine functions at run time. It is actually very good for prototyping.
A few years ago it took me quite a while to learn. But nowadays we have quicklisp and everything is so much easier.
Here I describe my development environment:
Install lisp on my linux machine
PS: I want to give an example, where Common Lisp was useful for me:
Up to maybe 2004 I used to write small programs in C (the keep it simple Unix way).
The last 3 years I had to get lots of different hardware running. Motorized stages, scientific cameras, IO cards.
The cameras turned out to be quite annoying. Usually you have to cool them down to -50 degrees Celsius or so, and (in some SDKs) they don't like it when you close them. But this is exactly how my C development cycle worked: write (30s), compile (1s), run (0.1s), repeat.
Eventually I decided to just use Common Lisp. Often it is straightforward to define the foreign function interfaces to talk to the SDKs, and I can do this without ever leaving the running Lisp image. I start the editor in the morning, define the open-device function to talk to the device, and after 3 hours I have enough of the functions implemented to set gain, temperature and region of interest, and to obtain the video.
Then I can often put the SDK manual away and just use the camera.
I used the same interactive programming approach when I have to parse some webpage or some weird XML.

Why do programming languages use commas to separate function parameters?

It seems like all programming languages use commas (,) to separate function parameters.
Why don't they use just spaces instead?
Absolutely not. What about this function call:
function(a, b - c);
How would that look with a space instead of the comma?
function(a b - c);
Does that mean function(a, b - c); or function(a, b, -c);? The use of the comma presumably comes from mathematics, where commas have been used to separate function parameters for centuries.
First of all, your premise is false. There are languages that use space as a separator (Lisp, ML, Haskell, possibly others).
The reason that most languages don't is probably that a) f(x,y) is the notation most people are used to from mathematics and b) using spaces leads to lots of nested parentheses (also called "the lisp effect").
Lisp-like languages use: (f arg1 arg2 arg3) which is essentially what you're asking for.
ML-like languages use juxtaposition to apply curried arguments, so you would write f arg1 arg2 arg3.
Tcl uses space as a separator between words passed to commands. Where it has a composite argument, that has to be bracketed or otherwise quoted. Mind you, even there you will find the use of commas as separators – in expression syntax only – but that's because the notation is in common use outside of programming. Mathematics has written n-ary function applications that way for a very long time; computing (notably Fortran) just borrowed.
You don't have to look further than most of our natural languages to see that the comma is used for separating items in lists. So, using anything other than a comma for enumerating parameters would be unexpected for anyone learning a programming language for the first time.
There's a number of historical reasons already pointed out.
Also, it's because in most languages where , serves as separator, whitespace sequences are largely ignored, or to be more exact, although they may separate tokens, they do not act as tokens themselves. This is more or less true for all languages deriving their syntax from C. A sequence of whitespace is much like the empty word, and having the empty word delimit anything is probably not the best of ideas.
Also, I think it is clearer and easier to read. Why have whitespace characters, which are invisible and essentially serve nothing but formatting, as really meaningful delimiters? It only introduces ambiguity. One example is the one provided by Carl.
A second would be f(a (b + c)). Now is that f(a(b+c)) or f(a, b+c)?
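Python makes that ambiguity concrete: the space-separated form already parses, but as a nested call rather than as two arguments (a toy example with made-up names):

def f(x): return x
def a(x): return x * 10
b, c = 1, 2
print(f(a (b + c)))   # parsed as f(a(b + c)) -> 30, not f(a, b + c)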
The creators of JavaScript had a very useful idea, similar to yours, which yields just the same problems. The idea was, that ENTER could also serve as ;, if the statement was complete. Observe:
function a() {
    return "some really long string or expression or whatsoever";
}
function b() {
    return
        "some really long string or expression or whatsoever";
}
alert(a());//"some really long string or expression or whatsoever"
alert(b());//"undefined" or "null" or whatever, because 'return;' is a valid statement
As a matter of fact, I sometimes tend to use the latter notation in languages that do not have this 'feature'. JavaScript forces a way of formatting my code upon me, because someone had the cool idea of using ENTER instead of ;.
I think there are a number of good reasons why some languages are the way they are. Especially in dynamic languages (such as PHP) there is no compile-time check, so the compiler cannot warn you that the way it resolved an ambiguity like the one above doesn't match the signature of the call you want to make. You'd have a lot of weird runtime errors and a really hard life.
There are languages which allow this, but there are a number of reasons why they do so. First and foremost, because a bunch of very clever people sat down, spent quite some time designing a language, and discovered that its syntax makes the , obsolete most of the time, and thus took the decision to eliminate it.
This may sound a bit wise, but I gather it's for the same reason most earth-planet languages use it (English, French, and those few others ;-). Also, it is intuitive to most.
Haskell doesn't use commas.
Example
multList :: [Int] -> Int -> [Int]
multList (x : xs) y = (x * y) : (multList xs y)
multList [] _ = []
The reason for using commas in C/C++ is that reading a long argument list can be difficult without a visible separator.
Try reading this
void foo(void * ptr point & * big list<pointers<point> > * t)
Commas are useful like spaces are. In Latin, nothing was written with spaces, periods, or lower-case letters.
Try reading this
IAMTHEVERYMODELOFAWHATDOYOUWANTNOTHATSMYBUCKET
it's primarily to help you read things.
This is not true; some languages don't use commas. Functions were maths concepts before they were programming constructs, so some languages keep the old notation. And most of the newer languages have been inspired by C (JavaScript, Java, C#, and PHP too; they share some formal rules, like the comma).
While some languages do use spaces, using a comma avoids ambiguous situations without the need for parentheses. A more interesting question might be why C uses the same character as a separator as is used for the "a then b" operator; the latter question is in some ways more interesting, given that the C character set has at least three other characters that do not appear in any context (dollar sign, commercial-at, and grave), and I know at least one of those (the dollar sign) dates back to the 40-character punchcard set.
It seems like all programming languages use commas (,) to separate function parameters.
In natural languages that include the comma in their script, that character is used to separate things. For instance, if you were to enumerate fruits, you'd write "lemon, orange, strawberry, grape", that is, using the comma.
Hence, using the comma to separate parameters in a function is more natural than using another character (| for instance).
Consider:
someFunction( name, age, location )
vs.
someFunction( name|age|location )
Why don't they use just spaces instead?
That's possible. Lisp does it.
The main reason is that space is already used to separate tokens, and it's easier not to assign it an extra function.
I have programmed in quite a few languages and while the comma does not rule supreme it is certainly in front. The comma is good because it is a visible character so that script can be compressed by removing spaces without breaking things. If you have space then you can have tabs and that can be a pain in the ... There are issues with new-lines and spaces at the end of a line. Give me a comma any day, you can see it and you know what it does. Spaces are for readability (generally) and commas are part of syntax. Mind you there are plenty of exceptions where a space is required or de rigueur. I also like curly brackets.
It is probably tradition. If they used spaces, they could not pass an expression as a parameter, e.g.
f(a-b c)
would be very different from
f(a -b c)
Some languages, like Boo, allow you to specify the type of parameters or leave it out, like so:
def MyFunction(obj1, obj2, title as String, count as Int):
    ...do stuff...
Meaning: obj1 and obj2 can be of any type (inheriting from object), whereas title and count must be of type String and Int respectively. This would be hard to do using spaces as separators.

Can a language have Lisp's powerful macros without the parentheses?

Can a language have Lisp's powerful macros without the parentheses?
Sure; the question is whether the macros are convenient to use and how powerful they are.
Let's first look how Lisp is slightly different.
Lisp syntax is based on data, not text
Lisp has a two-stage syntax.
A) first there is the data syntax for s-expressions
examples:
(mary called tim to tell him the price of the book)
(sin ( x ) + cos ( x ))
s-expressions are atoms, or lists whose elements are themselves atoms or lists.
B) second there is the Lisp language syntax on top of s-expressions.
Not every s-expression is a valid Lisp program.
(3 + 4)
is not a valid Lisp program, because Lisp uses prefix notation.
(+ 3 4)
is a valid Lisp program. The first element is a function - here the function +.
S-expressions are data
The interesting part is now that s-expressions can be read and then Lisp uses the normal data structures (numbers, symbols, lists, strings) to represent them.
Most other programming languages don't have a primitive representation for internalized source - other than strings.
Note that s-expressions here are not representing an AST (Abstract Syntax Tree). It's more like a hierarchical token tree coming out of a lexer phase. A lexer identifies the lexical elements.
The internalized source code now makes it easy to calculate with code, because the usual functions to manipulate lists can be applied.
Simple code manipulation with list functions
Let's look at the invalid Lisp code:
(3 + 4)
The program
(defun convert (code)
  (list (second code) (first code) (third code)))
(convert '(3 + 4)) -> (+ 3 4)
has converted an infix expression into the valid Lisp prefix expression. We can evaluate it then.
(eval (convert '(3 + 4))) -> 7
EVAL evaluates the converted source code. eval takes as input an s-expression, here a list (+ 3 4).
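The same trick is easy to mimic in Python once code is held as plain lists (a loose analogue of the Lisp example; all names are mine):

import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def convert(code):                 # [3, '+', 4] -> ['+', 3, 4]
    left, op, right = code
    return [op, left, right]

def evaluate(expr):                # a tiny evaluator for prefix lists
    if isinstance(expr, list):
        op, a, b = expr
        return OPS[op](evaluate(a), evaluate(b))
    return expr

print(evaluate(convert([3, '+', 4])))   # prints: 7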
How to calculate with code?
Programming languages now have at least three choices to make source calculations possible:
base the source code transformations on string transformations
use a simple primitive data structure, as Lisp does. A more complex variant of this is a syntax based on XML; one could then transform XML expressions. There are other possible external formats combined with internalized data.
use a real syntax description format and represent the source code internalized as a syntax tree using data structures that represent syntactic categories. -> use an AST.
For all these approaches you will find programming languages. Lisp is more or less in camp 2. The consequence: it is theoretically not really satisfying, and it makes it impossible to statically parse source code (if the code transformations are based on arbitrary Lisp functions). The Lisp community has struggled with this for decades (see for example the myriad of approaches that the Scheme community has tried). Fortunately it is relatively easy to use, compared to some of the alternatives, and quite powerful. Variant 1 is less elegant. Variant 3 leads to a lot of complexity in simple AND complex transformations. It usually also means that the expression has already been parsed with respect to a specific language grammar.
Another problem is HOW to transform the code. One approach would be based on transformation rules (like in some Scheme macro variants). Another approach would be a special transformation language (like a template language which can do arbitrary computations). The Lisp approach is to use Lisp itself. That makes it possible to write arbitrary transformations using the full Lisp language. In Lisp there is not a separate parsing stage, but at any time expressions can be read, transformed and evaluated - because these functions are available to the user.
Lisp is kind of a local maximum of simplicity for code transformations.
Other frontend syntax
Also note that the function read reads s-expressions to internal data. In Lisp one could either use a different reader for a different external syntax or reuse the Lisp built-in reader and reprogram it using the read macro mechanism - this mechanism makes it possible to extend or change the s-expression syntax. There are examples for both approaches to provide a different external syntax in Lisp.
For example there are Lisp variants which have a more conventional syntax, where code gets parsed into s-expressions.
Why is the s-expression-based syntax popular among Lisp programmers?
The current Lisp syntax is popular among Lisp programmers for two reasons:
1) the code-is-data idea makes it easy to write all kinds of code transformations based on the internalized data. There is also a relatively direct path from reading code, through manipulating code, to printing code. The usual development tools can be used.
2) the text editor can be programmed in a straightforward way to manipulate s-expressions. That makes basic code and data transformations in the editor relatively easy.
Originally Lisp was intended to have a different, more conventional syntax. There were several later attempts to switch to other syntax variants, but for various reasons they either failed or spawned separate languages.
Absolutely. It's just a couple orders of magnitude more complex, if you have to deal with a complex grammar. As Peter Norvig noted:
Python does have access to the abstract syntax tree of programs, but this is not for the faint of heart. On the plus side, the modules are easy to understand, and with five minutes and five lines of code I was able to get this:
>>> parse("2 + 2")
['eval_input', ['testlist', ['test', ['and_test', ['not_test', ['comparison',
['expr', ['xor_expr', ['and_expr', ['shift_expr', ['arith_expr', ['term',
['factor', ['power', ['atom', [2, '2']]]]], [14, '+'], ['term', ['factor',
['power', ['atom', [2, '2']]]]]]]]]]]]]]], [4, ''], [0, '']]
This was rather a disappointment to me. The Lisp parse of the equivalent expression is (+ 2 2). It seems that only a real expert would want to manipulate Python parse trees, whereas Lisp parse trees are simple for anyone to use. It is still possible to create something similar to macros in Python by concatenating strings, but it is not integrated with the rest of the language, and so in practice is not done.
Since I'm not a super-genius (or even a Peter Norvig), I'll stick with (+ 2 2).
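For what it's worth, the dump Norvig shows appears to come from the old parser module; the modern ast module (Python 3.9+ for the indent argument) is friendlier, though still far from (+ 2 2):

import ast
print(ast.dump(ast.parse("2 + 2", mode="eval"), indent=2))
# Expression(
#   body=BinOp(
#     left=Constant(value=2),
#     op=Add(),
#     right=Constant(value=2)))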
Here's a shorter version of Rainer's answer:
In order to have lisp-style macros, you need a way of representing source-code in data structures. In most languages, the only "source code data structure" is a string, which doesn't have nearly enough structure to allow you to do real macros on. Some languages offer a real data structure, but it's too complex, like Python, so that writing real macros is stupidly complicated and not really worth it.
Lisp's lists and parentheses hit the sweet spot in the middle. Just enough structure to make it easy to handle, but not too much so you drown in complexity. As a bonus, when you nest lists you get a tree, which happens to be precisely the structure that programming languages naturally adopt (nearly all programming languages are first parsed into an "abstract syntax tree", or AST, before being actually interpreted/compiled).
Basically, programming Lisp is writing an AST directly, rather than writing some other language that then gets turned into an AST by the computer. You could possibly forgo the parens, but you'd just need some other way to group things into a list/tree. You probably wouldn't gain much from doing so.
Parentheses are irrelevant to macros. It's just Lisp's way of doing things.
For example, Prolog has a very powerful macro mechanism called "term expansion". Basically, whenever Prolog reads a term T, it tries a special rule term_expansion(T, R). If it is successful, the content of R is interpreted instead of T.
Not to mention the Dylan language, which has a pretty powerful syntactic macro system, which features (among other things) referential transparency, while being an infix (Algol-style) language.
Yes. Parentheses in Lisp are used in the classic way, as a grouping mechanism. Indentation is an alternative way to express groups. E.g. the following structures are equivalent:
A ((B C) D)
and
A
    B C
    D
Have a look at Sweet-expressions. Wheeler makes a very good case that the reason things like infix notation have not worked before is that typical notation also tries to add precedence, which then adds complexity, which causes difficulties in writing macros.
For this reason, he proposes infix syntax like {1 + 2 + 3} and {1 + {2 * 3}} (note the spaces between symbols), which are translated to (+ 1 2 3) and (+ 1 (* 2 3)) respectively. He adds that if someone writes {1 + 2 * 3}, it should become (nfx 1 + 2 * 3), which could be captured if you really want to provide precedence, but would, as a default, be an error.
He also suggests that indentation should be significant, proposes that functions could be called as fn(A B C) as well as (fn A B C), would like data[A] to translate to (bracketaccess data A), and that the entire system should be compatible with s-expressions.
Overall, it's an interesting set of proposals I'd like to experiment with extensively. (But don't tell anyone at comp.lang.lisp: they'll burn you at the stake for your curiosity :-).
Code rewriting in Tcl in a manner recognizably similar to Lisp macros is a common technique. For example, this is (trivial) code that makes it easier to write procedures that always import a certain set of global variables:
proc gproc {name arguments body} {
    set realbody "global foo bar boo;$body"
    uplevel 1 [list proc $name $arguments $realbody]
}
With that, all procedures declared with gproc xyz rather than proc xyz will have access to the foo, bar and boo globals. The whole key is that uplevel takes a command and evaluates it in the caller's context, and list is (among other things) an ideal constructor for substitution-safe code fragments.
Erlang's parse transforms are similar in power to Lisp macros, though they are much trickier to write and use (they are applied to the entire source file, rather than being invoked on demand).
Lisp itself had a brief dalliance with non-parenthesised syntax in the form of M-expressions. It didn't take with the community, though variants of the idea found their way into modern Lisps, so you get Lisp's powerful macros without the parentheses ... in Lisp!
Yes, you can definitely have Lisp macros without all the parentheses.
Take a look at "sweet-expressions", which provides a set of additional abbreviations for traditional s-expressions. They add indentation, a way to do infix, and traditional function calls like f(x), but in a way that is backwards-compatible (you can freely mix well-formatted s-expressions and sweet-expressions), generic, and homoiconic.
Sweet-expressions were developed on http://readable.sourceforge.net and there is a sample implementation.
For Scheme there is a SRFI for sweet-expressions, SRFI-110: http://srfi.schemers.org/srfi-110/
No, it's not necessary. Anything that gives you some sort of access to a parse tree would be enough to allow you to manipulate the macro body the same way it is done in Common Lisp. However, as the manipulation of the AST in Lisp is identical to the manipulation of lists (something that borders on easy in the Lisp family), it's possibly not nearly as natural without having the "parse tree" and "written form" be essentially the same.
I think this has not been mentioned yet.
C++ templates are Turing-complete and perform processing at compile time. There is the well-known expression-template mechanism that allows transformations, not of arbitrary code, but at least of the subset of C++ operators.
So imagine you have 3 vectors of 1000 elements each and you must perform:
(A + B + C)[0]
You can capture this tree in an expression template and arbitrarily manipulate it at compile time.
With this tree, at compile time, you can transform the expression. For example, if that expression means A[0] + B[0] + C[0] for your domain, you could avoid the normal C++ processing, which would be:
Add A and B, adding 1000 elements.
Create a temporary for the result, and add with the 1000 elements of C.
Index the result to get the first element.
And replace it with another, transformed expression-template tree that does:
Capture A[0]
Capture B[0]
Capture C[0]
Add all 3 results together in the result to return with += avoiding temporaries.
It is not better than lisp, I think, but it is still very powerful.
Yes, it is certainly possible. Especially if it is still a Lisp under the bonnet:
http://www.meta-alternative.net/pfront.pdf
http://www.meta-alternative.net/pfdoc.pdf
Boo has a nice "quoted" macro syntax that uses [| |] as delimiters, and has certain substitutions which are actually verified syntactically by the compiler pipeline using $variables. While simple and relatively painless to use, it's much more complicated to implement on the compiler side than s-expressions. Boo's solution may have a few limitations that haven't affected my own code. There's also an alternate syntax that reads more like ordinary OO code, but that falls into the "not for the faint of heart" category like dealing with Ruby or Python parse trees.
Javascript's template strings offer yet another approach to this sort of thing. For instance, Mark S. Miller's quasiParserGenerator implements a grammar syntax for parsers.
Go ahead and check out the Elixir programming language.
Elixir is a functional programming language that feels like Lisp with respect to macros, but it wears Ruby's clothes and runs on top of the Erlang VM.
For those who do not like the parentheses but wish their language had powerful macros, Elixir is a great choice.
You can write macros in R (it has a more Algol-like syntax) that have a notion of delayed expressions, as in Lisp macros. You can call substitute() or quote() to avoid evaluating the delayed expression and instead get the actual expression and traverse its source code, as in Lisp. Even the structure of the expression's source code is Lisp-like: operators are the first item in the list, e.g. input$foo (getting property foo from the list input), viewed as an expression, is written as ['$', 'input', 'foo'], just as in Lisp.
You can check the ebook Metaprogramming in R, which also shows how to create macros in R (not something you would normally do, but it's possible). It's based on the 2001 article Programmer's Niche: Macros in R, which explains how to write Lisp-style macros in R.

What are some examples of where using parentheses in a program lowers readability?

I always thought that parentheses improved readability, but in my textbook there is a statement that the use of parentheses dramatically reduces the readability of a program. Does anyone have any examples?
I can find plenty of counterexamples where the lack of parentheses lowered the readability, but the only example I can think of for what the author may have meant is something like this:
if(((a == null) || (!(a.isSomething()))) && ((b == null) || (!(b.isSomething()))))
{
// do some stuff
}
In the above case, the ( ) around the method calls are unnecessary, and this kind of code may benefit from factoring terms out into variables. With all of those close-parens in the middle of the condition, it's hard to see exactly what is grouped with what.
boolean aIsNotSomething = (a == null) || !a.isSomething(); // parens for readability
boolean bIsNotSomething = (b == null) || !b.isSomething(); // ditto
if(aIsNotSomething && bIsNotSomething)
{
// do some stuff
}
I think the above is more readable, but that's a personal opinion. That may be what the author was talking about.
Some good uses of parens:
to distinguish between order of operation when behavior changes without the parens
to distinguish between order of operation when behavior is unaffected, but someone who doesn't know the binding rules well enough is going to read your code. The good citizen rule.
to indicate that an expression within the parens should be evaluated before used in a larger expression: System.out.println("The answer is " + (a + b));
Possibly confusing use of parens:
in places where it can't possibly have another meaning, like in front of a.isSomething() above. In Java, if a is an Object, !a by itself is an error, so clearly !a.isSomething() must negate the return value of the method call.
to link together a large number of conditions or expressions that would be clearer if broken up. As in the code example up above, breaking the large parenthetical statement into smaller chunks lets the code be stepped through in a debugger more straightforwardly, and if the conditions/values are needed later in the code, you don't end up repeating expressions and doing the work twice. This is subjective, though, and obviously moot if you only use the expressions in one place and your debugger shows you intermediate evaluated expressions anyway.
Apparently, your textbook was written by someone who hates Lisp.
Anyway, it's a matter of taste; there is no single truth for everyone.
I think parentheses are not the best way to improve the readability of your code. You can use new lines to set off, for example, the conditions in an if statement. I don't use parentheses where they are not required.
Well, consider something like this:
Result = (x * y + p * q - 1) % t
and
Result = (((x * y) + (p * q)) - 1) % t
Personally I prefer the former (but that's just me), because the latter makes me think the parentheses are there to change the actual order of operations, when in fact they aren't doing that. Your textbook might also be referring to cases where you can split your calculation into multiple variables. For example, you'll probably have something like this when solving a quadratic ax^2 + bx + c = 0:
x1 = (-b + sqrt(b*b - 4*a*c)) / (2*a)
Which does look kind of ugly. This looks better in my opinion:
SqrtDelta = sqrt(b*b - 4*a*c);
x1 = (-b + SqrtDelta) / (2*a);
And this is just one simple example; when you work with algorithms that involve a lot of computations, things can get really ugly, so splitting the computations into multiple parts will help readability more than parentheses will.
Parentheses reduce readability when they are obviously redundant. The reader expects them to be there for a reason, but there is no reason. Hence, a cognitive hiccough.
What do I mean by "obviously" redundant?
Parentheses are redundant when they can be removed without changing the meaning of the program.
Parentheses that are used to disambiguate infix operators are not "obviously redundant", even when they are redundant, except perhaps in the very special case of the multiplication and addition operators. Reason: many languages have 10–15 levels of precedence, many people work in multiple languages, and nobody can be expected to remember all the rules. It is often better to disambiguate, even if the parentheses are redundant.
All other redundant parentheses are obviously redundant.
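A concrete instance (my example, in Python) of why operator precedence earns that exception:

print(1 << 2 + 3)     # 32: + binds tighter, so this is 1 << (2 + 3)
print((1 << 2) + 3)   # 7: the redundant-looking parens carry real information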
Redundant parentheses are often found in code written by someone who is learning a new language; perhaps uncertainty about the new syntax leads to defensive parenthesizing.
Expunge them!
You asked for examples. Here are three examples I see repeatedly in ML code and Haskell code written by beginners:
Parentheses between if (...) then are always redundant and distracting. They make the author look like a C programmer. Just write if ... then.
Parentheses around a variable are silly, as in print(x). Parentheses are never necessary around a variable; the function application should be written print x.
Parentheses around a function application are redundant if that application is an operand in an infix expression. For example,
(length xs) + 1
should always be written
length xs + 1
Anything taken to an extreme and/or overused can make code unreadable. It wouldn't be too hard to make the same claim about comments: anyone who has ever looked at code with a comment on virtually every line will tell you how difficult it is to read. Or you could put whitespace around every line of code, which would make each individual line easy to read, but normally people want similar, related lines (that don't warrant being broken out into a method) grouped together.
You have to go way over the top with them to really damage readability, but as a matter of personal taste, I have always found:
return (x + 1);
and similar in C and C++ code to be very irritating.
If a method doesn't take parameters, why require an empty () to call method()? I believe in Groovy you don't need to do this.
