I'm writing a Lisp in Haskell (code at GitHub) as a way of learning more about both languages.
The newest feature that I'm adding is macros. Not hygienic macros or anything fancy - just plain vanilla code transformations. My initial implementation had a separate macro environment, distinct from the environment that all other values live in. Between the read and eval functions I interspersed another function, macroExpand, which walked the code tree and performed the appropriate transformations whenever it found a keyword in the macro environment, before the final form was passed on to eval to be evaluated. A nice advantage of this was that macros had the same internal representation as other functions, which reduced some code duplication.
Having two environments seemed clunky though, and it annoyed me that if I wanted to load a file, eval had to have access to the macro environment in case the file contained macro definitions. So I decided to introduce a macro type, store macros in the same environment as functions and variables, and incorporate the macro expansion phase into eval. I was at first at a bit of a loss for how to do it, until I figured that I could just write this code:
eval env (List (function : args)) = do
    func <- eval env function
    case func of
        (Macro {}) -> apply func args >>= eval env
        _          -> mapM (eval env) args >>= apply func
It works as follows:
If you are passed a list containing an initial expression and a bunch of other expressions...
Evaluate the first expression
If it's a macro, then apply it to the arguments, and evaluate the result
If it's not a macro, then evaluate the arguments and apply the function to the result
It's as though macros are exactly the same as functions, except the order of eval/apply is switched.
Is this an accurate description of macros? Am I missing something important by implementing macros in this way? If the answers are "yes" and "no", then why have I never seen macros explained this way before?
The answers are "no" and "yes".
It looks like you've started with a good model of macros, where the macro level and the runtime level live in separate worlds. In fact, this is one of the main points behind Racket's macro system. You can read some brief text about it in the Racket guide, or see the original paper that describes this feature and why it's a good idea to do that. Note that Racket's macro system is a very sophisticated one, and it's hygienic -- but phase separation is a good idea regardless of hygiene. To summarize the main advantage: it makes it possible to always expand code in a reliable way, so you get benefits like separate compilation, and you don't depend on code loading order and similar problems.
Then, you moved to a single environment, which loses that. In most of the Lisp world (e.g., in CL and in Elisp), this is exactly how things are done -- and obviously, you run into the problems that are described above. ("Obvious" since phase separation was designed to avoid these problems; you just happened to make your discoveries in the opposite order from how they happened historically.) In any case, to address some of these problems, there is the eval-when special form, which can specify that some code is evaluated at run time or at macro-expansion time. In Elisp you get that with eval-when-compile, but in CL you get much more hair, with a few other "*-time"s. (CL also has read time, and having that share the same environment as everything else is triple the fun.) Even if it seems like a good idea, you should read around and see how some Lispers lose hair because of this mess.
And in the last step of your description you step even further back in time and discover something that is known as FEXPRs. I won't even put any pointers for that; you can find a ton of texts about it, why some people think that it's a really bad idea, and why some other people think that it's a really good idea. Practically speaking, those two "some"s are "most" and "few" respectively -- though the few remaining FEXPR strongholds can be vocal. To translate all of this: it's explosive stuff... Asking questions about it is a good way to get long flamewars. (As a recent example of a serious discussion you can see the initial discussion period for the R7RS, where FEXPRs came up and led to exactly these kinds of flames.) No matter which side you choose to sit on, one thing is obvious: a language with FEXPRs is extremely different from a language without them. [Coincidentally, working on an implementation in Haskell might affect your view, since you have a place to go to for a sane static world for code, so the temptation of "cute" super-dynamic languages is probably bigger...]
One last note: since you're doing something similar, you should look into a similar project of implementing a Scheme in Haskell -- IIUC, it even has hygienic macros.
Not quite. Actually, you've pretty concisely described the difference between "call by name" and "call by value"; a call-by-value language reduces arguments to values before substitution, a call-by-name language performs substitution first, then reduction.
The key difference is that macros allow you to break referential transparency; in particular, the macro can examine the code, and thus can differentiate between (3 + 4) and 7, in a way that ordinary code can't. That's why macros are both more powerful and also more dangerous; most programmers would be upset if they found that (f 7) produced one result and (f (+ 3 4)) produced a different result.
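To see this concretely, here is a small Haskell sketch; the Expr type and the functions evalE, f and m are invented purely for illustration (they belong to neither the question nor the answer). A call-by-value function only ever receives the number 7, while a macro-style transformer receives the code itself and can therefore tell 7 and (+ 3 4) apart:

data Expr = Num Int | Sym String | List [Expr] deriving Show

-- An ordinary evaluator: both 7 and (+ 3 4) reduce to the same value.
evalE :: Expr -> Int
evalE (Num n)                = n
evalE (List [Sym "+", a, b]) = evalE a + evalE b
evalE e                      = error ("cannot evaluate " ++ show e)

-- Call by value: the argument is reduced before f ever sees it.
f :: Int -> String
f n = "got the value " ++ show n

-- Macro-style: the unevaluated code is passed in and can be inspected.
m :: Expr -> String
m (Num n) = "got a literal " ++ show n
m e       = "got the expression " ++ show e

main :: IO ()
main = do
  putStrLn (f (evalE (Num 7)))                         -- got the value 7
  putStrLn (f (evalE (List [Sym "+", Num 3, Num 4])))  -- got the value 7
  putStrLn (m (Num 7))                                 -- got a literal 7
  putStrLn (m (List [Sym "+", Num 3, Num 4]))          -- got the expression ...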
Background Rambling
What you have there are very late-binding macros. This is a workable approach, but it is inefficient, because repeated executions of the same code will repeatedly expand the macros.
On the positive side, this is friendly for interactive development. If the programmer changes a macro, and then re-invokes some code which uses it, such as a previously defined function, the new macro instantly takes effect. This is an intuitive "do what I mean" behavior.
Under a macro system which expands macros earlier, the programmer has to redefine all of the functions that depend on a macro when that macro changes, otherwise the existing definitions continue to be based on the old macro expansions, oblivious to the new version of the macro.
A reasonable approach is to have this late binding macro system for interpreted code, but a "regular" (for lack of a better word) macro system for compiled code.
Expanding macros does not require a separate environment. It should not, because local macros should be in the same namespace as variables. For instance, in Common Lisp, if we do this: (let (x) (symbol-macrolet ((x 'foo)) ...)), the inner symbol macro shadows the outer lexical variable. The macro expander has to be aware of the variable binding forms. And vice versa! If there is an inner let for the variable x, it shadows an outer symbol-macrolet. The macro expander cannot just blindly substitute all occurrences of x that occur in the body. So in other words, Lisp macro expansion has to be aware of the full lexical environment, in which macros and other kinds of bindings coexist. Of course, during macro expansion, you don't instantiate the environment in the same way: if there is a (let ((x (function))) ...), then (function) is not called and x is not given a value. But the macro expander is aware that there is an x in this environment, and so occurrences of x are not macros.
So when we say one environment, what we really mean is that there are two different manifestations or instantiations of a unified environment: the expansion-time manifestation and then the evaluation-time manifestation. Late-binding macros simplify the implementation by merging these two times into one, as you have done, but it does not have to be that way.
Also note that Lisp macros can accept an &environment parameter. This is needed if the macros need to call macroexpand on some piece of code supplied by the user. Such a recursion back into the macro expander through a macro has to pass the proper environment so the user's code has access to its lexically surrounding macros and gets expanded properly.
Concrete Example
Suppose we have this code:
(symbol-macrolet ((x (+ 2 2)))
  (print x)
  (let ((x 42)
        (y 19))
    (print x)
    (symbol-macrolet ((y (+ 3 3)))
      (print y))))
The effect of this is to print 4, 42 and 6. Let's use the CLISP implementation of Common Lisp, and expand this using CLISP's implementation-specific function called system::expand-form. We cannot use regular, standard macroexpand because that will not recurse into the local macros:
(system::expand-form
  '(symbol-macrolet ((x (+ 2 2)))
     (print x)
     (let ((x 42)
           (y 19))
       (print x)
       (symbol-macrolet ((y (+ 3 3)))
         (print y)))))
-->
(LOCALLY ;; this code was reformatted by hand to fit your screen
  (PRINT (+ 2 2))
  (LET ((X 42) (Y 19))
    (PRINT X)
    (LOCALLY (PRINT (+ 3 3))))) ;
(Now firstly, about these locally forms. Why are they there? Note that they correspond to places where we had a symbol-macrolet. This is probably for the sake of declarations. If the body of a symbol-macrolet form has declarations, they have to be scoped to that body, and locally will do that. If the expansion of symbol-macrolet does not leave behind this locally wrapping, then declarations will have the wrong scope.)
From this macro expansion you can see what the task is. The macro expander has to walk the code and recognize all binding constructs (all special forms, really), not only binding constructs having to do with the macro system.
Notice how one of the instances of (print x) is left alone: the one which is in the scope of the (let ((x ..)) ...). The other became (print (+ 2 2)), in accordance with the symbol macro for x.
Another thing we can learn from this is that macro expansion just substitutes the expansion and removes the symbol-macrolet forms. So the environment that remains is the original one, minus all of the macro material which is scrubbed away in the expansion process. The macro expansion honors all of the lexical bindings, in one big "Grand Unified" environment, but then graciously vaporizes, leaving behind just the code like (print (+ 2 2)) and other vestiges like the (locally ...), with just the non-macro binding constructs resulting in a reduced version of the original environment.
Thus now when the expanded code is evaluated, just the reduced environment's run-time personality comes into play. The let bindings are instantiated and stuffed with initial values, etc. During expansion, none of that was happening; the non-macro bindings just lie there asserting their scope, and hinting at a future existence in the run time.
What you're missing is that this symmetry breaks down when you separate analysis from evaluation, which is what all practical Lisp implementations do. Macro expansion would occur during the analysis phase so eval can be kept simple.
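For what that separation can look like in Haskell, here is a self-contained sketch; the Expr type, the Macros map and the double macro are all invented for illustration and are not the asker's real types. Expansion runs as its own pass over the tree (to a fixed point), and eval never has to know that macros exist:

import qualified Data.Map as M

data Expr = Num Int | Sym String | List [Expr] deriving Show

type Macros = M.Map String ([Expr] -> Expr)

-- Analysis phase: rewrite macro calls until none remain, then recurse into
-- subforms. (A real expander must also treat special forms and binding
-- constructs specially, as discussed above.)
expandAll :: Macros -> Expr -> Expr
expandAll ms (List (Sym s : args))
  | Just transform <- M.lookup s ms = expandAll ms (transform args)
expandAll ms (List xs) = List (map (expandAll ms) xs)
expandAll _  e         = e

-- Evaluation phase: by now only literals and plain functions are left.
eval :: Expr -> Int
eval (Num n)                 = n
eval (List (Sym "+" : args)) = sum (map eval args)
eval (List (Sym "*" : args)) = product (map eval args)
eval e                       = error ("cannot evaluate " ++ show e)

main :: IO ()
main = do
  -- (double x) expands to (+ x x); the expansion happens once, before eval.
  let macros = M.fromList [("double", \[x] -> List [Sym "+", x, x])]
      form   = List [Sym "double", List [Sym "*", Num 3, Num 4]]
  print (eval (expandAll macros form))   -- 24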
I really recommend having some of the Lisp books handy. One recommendation is Christian Queinnec's Lisp in Small Pieces, a book about the implementation of Scheme.
http://pagesperso-systeme.lip6.fr/Christian.Queinnec/WWW/LiSP.html
Chapter 9 is about macros: http://pagesperso-systeme.lip6.fr/Christian.Queinnec/WWW/chap9.html
For what it's worth, the Scheme R5RS section Binding constructs for syntactic keywords has this to say about it:
Let-syntax and letrec-syntax are analogous to let and letrec, but they bind syntactic keywords to macro transformers instead of binding variables to locations that contain values.
See: http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-7.html#%_sec_4.3.1
This seems to imply a separate strategy should be used, at least for the syntax-rules macro system.
You can write some... interesting code in a Scheme that uses separate "places" for macros. It does not make much sense to mix macros and variables of the same name in any "real" code but if you just want to try it out, consider this example from Chicken Scheme:
#;1> let
Error: unbound variable: let
#;1> (define let +)
#;2> (let ((talk "hello!")) (write talk))
"hello!"
#;3> let
#<procedure C_plus>
#;4> (let 1 2)
Error: (let) not a proper list: (let 1 2)
Call history:
<syntax> (let 1 2) <--
#;4> (define a let)
#;5> (a 1 2)
3
Related
I was revisiting the Haskell programming I learnt last year, and suddenly I ran into a little problem.
ghci> let test = [1,2,3,4]
ghci> test = drop 1 test
ghci> test
^CInterrupted.
I do not remember if it is possible.
Thanks!
test on the first line and test on the second line are not, in fact, the same variable. They're two different, separate, unrelated variables that just happen to have the same name.
Further, the concept of "saving in a variable" does not apply to Haskell. In Haskell, variables cannot be interpreted as "memory cells" which can hold values. Haskell's variables are more like mathematical variables - just names that you give to complex expressions for easier reasoning (well, this is a bit of an oversimplification, but it's good enough for now).
Consequently, variables in Haskell are immutable. You cannot change the value of a variable by assignment, like you can in many other languages. This property follows from interpreting the concept of "variable" in the mathematical sense, as described above.
Furthermore, definitions (aka "bindings") in Haskell are recursive. This means that the right side (the body) of a binding may refer to its left side. This is very handy for constructing infinite data structures, for example:
x = 42 : x
An infinite list of 42s
In your example, when you write test = drop 1 test, you're defining a list named test, which is completely unrelated to the list defined on the previous line, and which is equal to itself without the first element. It's only natural that trying to print such a list results in an infinite loop.
The bottom line is: you cannot do what you're trying to do. You cannot create a new binding that shadows an existing binding while at the same time referring to it. Just give it a different name.
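For example, with a fresh name (test2 here is just an arbitrary new name) everything behaves the way you expected:

ghci> let test = [1,2,3,4]
ghci> let test2 = drop 1 test
ghci> test2
[2,3,4]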
I'm implementing something similar to a Spreadsheet engine in Haskell.
There are ETables, which have rows of cells containing expressions in the form of ASTs (e.g. BinOp + 2 2), which can contain references to other cells of ETables.
The main function should convert these ETables into VTables, which contain a fully resolved value in the cell (e.g. the cell BinOp + 2 2 should be resolved to IntValue 4). This is pretty easy when cells have no external references, because you can just build the value bottom up from the expression AST of the cell (e.g. eval (BinOpExpr op l r) = IntValue $ (eval l) op (eval r), minus unboxing and typechecking) all the way to the table (evalTable = (map . map) eval rows)
However, I can't think of a "natural" way of handling this when external references are thrown into the mix. Am I correct to assume that I can't just call eval on the referenced cell and use its value, because Haskell is not smart enough to cache the result and re-use it when that cell is independently evaluated?
The best thing I came up with is using a State [VTable] which is progressively filled, so the caching is explicit (each eval call updates the state with the return value before returning). This should work, however it feels "procedural". Is there a more idiomatic approach available that I'm missing?
Haskell doesn't memoize by default, because that would generally take up too much memory, so you can't just rely on eval doing the right thing. However, the nature of lazy evaluation means that data structures are, in a sense, memoized: each thunk in a large lazy structure is only evaluated once. This means that you can memoize a function by defining a large lazy data structure internally and replacing recursive calls with accesses into the structure—each part of the structure will be evaluated at most once.
I think the most elegant approach to model your spreadsheet would be a large, lazy directed graph with the cells as nodes and references as edges. Then you would need to define the VTable graph in a recursive way such that all recursion goes through the graph itself, which will memoize the result in the way I described above.
There are a couple of handy ways to model a graph. One option would be to use an explicit map with integers as node identifiers—IntMap or even an array of some sort could work. Another option is to use an existing graph library; this will save you some work and ensure you have a nice graph abstraction, but will take some effort up front to understand. I'm a big fan of fgl, the "functional graph library", but it does take a bit of up-front reading and thinking to understand. The performance isn't going to be very different, because fgl is also implemented in terms of IntMap.
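For what it's worth, here is a minimal sketch of the explicit-IntMap option; the Expr type and the example cells are invented for illustration and are not your actual ETable/VTable types. The trick is that the map of results is defined in terms of itself, so each cell is evaluated at most once no matter how many other cells reference it:

import qualified Data.IntMap as IM

data Expr
  = Lit Int
  | Ref Int                        -- reference to another cell, by id
  | BinOp (Int -> Int -> Int) Expr Expr

evalCell :: IM.IntMap Int -> Expr -> Int
evalCell _      (Lit n)       = n
evalCell values (Ref i)       = values IM.! i   -- lazy lookup into the final result
evalCell values (BinOp f l r) = f (evalCell values l) (evalCell values r)

-- Tie the knot: the map of values is defined in terms of itself, so each
-- cell's thunk is forced at most once and shared by every reference to it.
evalSheet :: IM.IntMap Expr -> IM.IntMap Int
evalSheet cells = values
  where values = IM.map (evalCell values) cells

main :: IO ()
main = do
  let cells = IM.fromList
        [ (0, BinOp (+) (Lit 2) (Lit 2))    -- cell 0: 2 + 2
        , (1, BinOp (*) (Ref 0) (Lit 10))   -- cell 1: cell 0 * 10
        , (2, BinOp (+) (Ref 0) (Ref 1))    -- cell 2: cell 0 + cell 1
        ]
  print (IM.toList (evalSheet cells))       -- [(0,4),(1,40),(2,44)]

(This sketch assumes the cell references are acyclic.)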
Tooting my own horn a bit, I've written a couple of blog posts expanding on this answer: one about memoization with lazy structures (with pictures!) and one about the functional graph library. Putting the two ideas together should get you what you want, I believe.
I'm trying to do some research for a new project, and I need to create objects dynamically from random data.
For this to work, I need a language / compiler that doesn't have problems with weird uncompilable code lying around.
Basically, I need the random code to compile (or be interpreted) as much as possible, meaning that the uncompilable parts will be ignored and only the compilable parts will create the objects (which could be run).
Object Oriented-ness is not a must, but is a very strong advantage.
I thought of ASM, but it's very messy, and I'd probably need more readable code.
Thanks!
It sounds like you're doing something very much like genetic programming; even if you aren't, GP has to solve some of the same problems—using randomness to generate valid programs. The approach to this that is typically used is to work with a syntax tree: rather than storing x + y * 3 - 2, you store something like the following:
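      (-)
     /   \
   (+)    2
   /  \
  x   (*)
      /  \
     y    3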
Then, instead of randomly changing the syntax, one can randomly change nodes in the tree instead. And if x should randomly change to, say, +, you can statically know that this means you need to insert two children (or not, depending on how you define +).
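For instance, here is a rough Haskell sketch of node-level mutation; the Expr type and the particular mutation policy are my own, purely to illustrate operating on the tree rather than on program text:

import System.Random (randomRIO)

data Expr = Var String | Lit Int | Add Expr Expr | Sub Expr Expr | Mul Expr Expr
  deriving Show

-- Walk the tree; at each node, with small probability replace the whole node
-- by a freshly generated subtree, otherwise recurse into its children.
mutate :: Expr -> IO Expr
mutate e = do
  p <- randomRIO (0, 9 :: Int)
  if p == 0
    then randomSubtree
    else case e of
      Add l r -> Add <$> mutate l <*> mutate r
      Sub l r -> Sub <$> mutate l <*> mutate r
      Mul l r -> Mul <$> mutate l <*> mutate r
      leaf    -> pure leaf

-- When a node is generated we know statically how many children it needs:
-- none for a leaf, two for Add, Sub or Mul.
randomSubtree :: IO Expr
randomSubtree = do
  k <- randomRIO (0, 4 :: Int)
  case k of
    0 -> Lit <$> randomRIO (0, 9)
    1 -> pure (Var "x")
    2 -> pure (Var "y")
    3 -> Add <$> randomSubtree <*> randomSubtree
    _ -> Mul <$> randomSubtree <*> randomSubtree

main :: IO ()
main = mutate (Sub (Add (Var "x") (Mul (Var "y") (Lit 3))) (Lit 2)) >>= print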
A good choice for a language to work with for this would be any Lisp dialect. In a Lisp, the above program would be written (- (+ x (* y 3)) 2), which is just a linearization of the syntax tree using parentheses to show depth. And in fact, Lisps expose this feature: you can just as easily work with the object '(- (+ x (* y 3)) 2) (note the leading quote). This is a three-element list, whose first element is -, second element is another list, and third element is 2. And, though you might or might not want it for your particular application, there's an eval function, such that (eval '(- (+ x (* y 3)) 2)) will take in the given list, treat it as a Lisp syntax tree/program, and evaluate it. This is what makes Lisps so attractive for doing this sort of work; Lisp syntax is basically a reification of the syntax-tree, and if you operate at the syntax-tree level, you can work on code as though it was a value. Lisp won't help you read /dev/random as a program directly, but with a little interpretation layered on top, you should be able to get what you want.
I should also mention, though I don't know anything about it (not that I know much about ordinary genetic programming), the existence of linear genetic programming. This is sort of like the assembly model that you mentioned—a linear stream of very, very simple instructions. The advantage here would seem to be that if you are working with /dev/random or something like it, the amount of interpretation needed is very small; the disadvantage would be, as you mentioned, the low-level nature of the code.
I'm not sure if this is what you're looking for, but any programming language can be made to function this way. For any programming language P, define the language P_always as follows:
If p is a valid program in P, then p is a valid program in P_always whose meaning is the same as its meaning in P.
If p is not a valid program in P, then p is a valid program in P_always whose meaning is the same as a program that immediately terminates.
For example, I could make the language C++_always so that this program:
#include <iostream>
using namespace std;
int main() {
    cout << "Hello, world!" << endl;
}
would compile and behave as the usual "Hello, world!" program, while this program:
Hahaha! This isn't legal C++ code!
Would be a legal program that just does absolutely nothing.
To solve your original problem, just take any OOP language like Java, Smalltalk, etc. and construct the appropriate Java_always, Smalltalk_always, etc. language from it. Again, I'm not sure if this is at all what you're looking for, but it could be done very easily.
Alternatively, consider finding a grammar for any OOP language and then using that grammar to produce random syntactically valid programs. You could then filter those programs down by using the P_always programming language for that language to eliminate syntactically but not semantically valid programs.
Divide the ASCII byte values into 9 classes (taking each value modulo 9 would help). Then assign them to Brainfuck codewords (see http://en.wikipedia.org/wiki/Brainfuck). Then interpret as Brainfuck.
There you go, any sequence of ASCII characters is a program. Not that it's going to do anything sensible... This approach has a much better chance, compared to templatetypedef's answer, of getting a nontrivial program from a random byte sequence.
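In Haskell, for example, the mapping itself is only a few lines; which class gets which command, and which class becomes the no-op, is an arbitrary choice here, and unmatched brackets would still have to be tolerated or filtered out by the interpreter:

import Data.Char (ord)
import Data.Maybe (mapMaybe)

-- Map each byte, modulo 9, onto one of Brainfuck's eight commands; the ninth
-- class is simply dropped.
byteToCommand :: Char -> Maybe Char
byteToCommand c = case ord c `mod` 9 of
  0 -> Just '>'
  1 -> Just '<'
  2 -> Just '+'
  3 -> Just '-'
  4 -> Just '.'
  5 -> Just ','
  6 -> Just '['
  7 -> Just ']'
  _ -> Nothing

asBrainfuck :: String -> String
asBrainfuck = mapMaybe byteToCommand

main :: IO ()
main = putStrLn (asBrainfuck "any random ASCII text will do")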
Text Editors
You could try feeding random character strings to an editor like Emacs or VI. Many (most?) characters will perform an editing action but some will do nothing (other than beep, perhaps). You would have to ensure that the random code mutator never generates the character sequence that exits the editor. However, this experience would be much like programming a Turing machine -- the code is not too readable.
Mathematica
In Mathematica, undefined symbols and other expressions evaluate to themselves, without error. So, that language might be a viable choice if you can arrange for the random code mutator to always generate well-formed expressions. This would be readily achievable since the basic Mathematica syntax is trivial, making it easy to operate on syntactic units rather than at the character level. It would be even easier if the mutator were written in Mathematica itself since expression-munging is Mathematica's forte. You could define a mini-language of valid operations within a Mathematica package that does not import the system-defined symbols. This would allow you to generate well-formed expressions to your heart's content without fear of generating a dangerous expression, like DeleteFile[FileNames["*.*", "/", Infinity]].
I believe Common Lisp should suit your needs. I always have some code in my SLIME/Emacs session that wouldn't compile. You can always tweak things, redefine functions in run-time. It is actually very good for prototyping.
A few years ago it took me quite a while to learn. But nowadays we have quicklisp and everything is so much easier.
Here I describe my development environment:
Install lisp on my linux machine
PS: I want to give an example, where Common Lisp was useful for me:
Up to maybe 2004 I used to write small programs in C (the keep it simple Unix way).
The last 3 years I had to get lots of different hardware running. Motorized stages, scientific cameras, IO cards.
The cameras turned out to be quite annoying. Usually you have to cool them down to -50 degrees Celsius or so, and (in some SDKs) they don't like it when you close them. But this is exactly how my C development cycle worked: write (30s), compile (1s), run (0.1s), repeat.
Eventually I decided to just use Common Lisp. Often it is straightforward to define the foreign function interfaces to talk to the SDKs, and I can do this without ever leaving the running Lisp image. I start the editor in the morning, define the open-device function to talk to the device, and after 3 hours I have enough of the functions implemented to set gain, temperature and region of interest, and to obtain the video.
Then I can often put the SDK manual away and just use the camera.
I used the same interactive programming approach when I had to parse some webpage or some weird XML.
Can a language have Lisp's powerful macros without the parentheses?
Sure; the question is whether the macros are convenient to use and how powerful they are.
Let's first look at how Lisp is slightly different.
Lisp syntax is based on data, not text
Lisp has a two-stage syntax.
A) first there is the data syntax for s-expressions
examples:
(mary called tim to tell him the price of the book)
(sin ( x ) + cos ( x ))
s-expressions are atoms, or lists whose elements are atoms or lists.
B) second there is the Lisp language syntax on top of s-expressions.
Not every s-expression is a valid Lisp program.
(3 + 4)
is not a valid Lisp program, because Lisp uses prefix notation.
(+ 3 4)
is a valid Lisp program. The first element is a function - here the function +.
S-expressions are data
The interesting part is now that s-expressions can be read and then Lisp uses the normal data structures (numbers, symbols, lists, strings) to represent them.
Most other programming languages don't have a primitive representation for internalized source - other than strings.
Note that s-expressions here are not representing an AST (Abstract Syntax Tree). It's more like a hierarchical token tree coming out of a lexer phase. A lexer identifies the lexical elements.
The internalized source code now makes it easy to calculate with code, because the usual functions to manipulate lists can be applied.
Simple code manipulation with list functions
Let's look at the invalid Lisp code:
(3 + 4)
The program
(defun convert (code)
  (list (second code) (first code) (third code)))
(convert '(3 + 4)) -> (+ 3 4)
has converted an infix expression into the valid Lisp prefix expression. We can evaluate it then.
(eval (convert '(3 + 4))) -> 7
EVAL evaluates the converted source code. eval takes as input an s-expression, here a list (+ 3 4).
How to calculate with code?
Programming languages now have at least three choices to make source calculations possible:
base the source code transformations on string transformations
use a similar primitive data structure like Lisp. A more complex variant of this is a syntax based on XML. One could then transform XML expressions. There are other possible external formats combined with internalized data.
use a real syntax description format and represent the source code internalized as a syntax tree using data structures that represent syntactic categories. -> use an AST.
For all these approaches you will find programming languages. Lisp is more or less in camp 2. The consequence: it is theoretically not really satisfying, and it makes it impossible to statically parse source code (if the code transformations are based on arbitrary Lisp functions). The Lisp community has struggled with this for decades (see for example the myriad of approaches that the Scheme community has tried). Fortunately it is relatively easy to use compared to some of the alternatives, and quite powerful. Variant 1 is less elegant. Variant 3 leads to a lot of complexity in simple AND complex transformations. It usually also means that the expression was already parsed with respect to a specific language grammar.
Another problem is HOW to transform the code. One approach would be based on transformation rules (like in some Scheme macro variants). Another approach would be a special transformation language (like a template language which can do arbitrary computations). The Lisp approach is to use Lisp itself. That makes it possible to write arbitrary transformations using the full Lisp language. In Lisp there is not a separate parsing stage, but at any time expressions can be read, transformed and evaluated - because these functions are available to the user.
Lisp is kind of a local maximum of simplicity for code transformations.
Other frontend syntax
Also note that the function read reads s-expressions to internal data. In Lisp one could either use a different reader for a different external syntax or reuse the Lisp built-in reader and reprogram it using the read macro mechanism - this mechanism makes it possible to extend or change the s-expression syntax. There are examples for both approaches to provide a different external syntax in Lisp.
For example there are Lisp variants which have a more conventional syntax, where code gets parsed into s-expressions.
Why is the s-expression-based syntax popular among Lisp programmers?
The current Lisp syntax is popular among Lisp programmers for two reasons:
1) the data is code is data idea makes it easy to write all kinds of code transformations based on the internalized data. There is also a relatively direct way from reading code, over manipulating code to printing code. The usual development tools can be used.
2) the text editor can be programmed in a straightforward way to manipulate s-expressions. That makes basic code and data transformations in the editor relatively easy.
Originally Lisp was intended to have a different, more conventional syntax. There were several later attempts to switch to other syntax variants, but for various reasons they either failed or spawned different languages.
Absolutely. It's just a couple orders of magnitude more complex, if you have to deal with a complex grammar. As Peter Norvig noted:
Python does have access to the abstract syntax tree of programs, but this is not for the faint of heart. On the plus side, the modules are easy to understand, and with five minutes and five lines of code I was able to get this:
>>> parse("2 + 2")
['eval_input', ['testlist', ['test', ['and_test', ['not_test', ['comparison',
['expr', ['xor_expr', ['and_expr', ['shift_expr', ['arith_expr', ['term',
['factor', ['power', ['atom', [2, '2']]]]], [14, '+'], ['term', ['factor',
['power', ['atom', [2, '2']]]]]]]]]]]]]]], [4, ''], [0, '']]
This was rather a disappointment to me. The Lisp parse of the equivalent expression is (+ 2 2). It seems that only a real expert would want to manipulate Python parse trees, whereas Lisp parse trees are simple for anyone to use. It is still possible to create something similar to macros in Python by concatenating strings, but it is not integrated with the rest of the language, and so in practice is not done.
Since I'm not a super-genius (or even a Peter Norvig), I'll stick with (+ 2 2).
Here's a shorter version of Rainer's answer:
In order to have Lisp-style macros, you need a way of representing source code in data structures. In most languages, the only "source code data structure" is a string, which doesn't have nearly enough structure to let you do real macros on it. Some languages offer a real data structure, but it's too complex, like Python's, so writing real macros is stupidly complicated and not really worth it.
Lisp's lists and parentheses hit the sweet spot in the middle. Just enough structure to make it easy to handle, but not too much so you drown in complexity. As a bonus, when you nest lists you get a tree, which happens to be precisely the structure that programming languages naturally adopt (nearly all programming languages are first parsed into an "abstract syntax tree", or AST, before being actually interpreted/compiled).
Basically, programming Lisp is writing an AST directly, rather than writing some other language that then gets turned into an AST by the computer. You could possibly forgo the parens, but you'd just need some other way to group things into a list/tree. You probably wouldn't gain much from doing so.
Parentheses are irrelevant to macros. It's just Lisp's way of doing things.
For example, Prolog has a very powerful macro mechanism called "term expansion". Basically, whenever Prolog reads a term T, it tries a special rule term_expansion(T, R). If it is successful, the content of R is interpreted instead of T.
Not to mention the Dylan language, which has a pretty powerful syntactic macro system, which features (among other things) referential transparency, while being an infix (Algol-style) language.
Yes. Parentheses in Lisp are used in the classic way, as a grouping mechanism. Indentation is an alternative way to express groups. E.g. the following structures are equivalent:
A ((B C) D)
and
A
  B
    C
  D
Have a look at Sweet-expressions. Wheeler makes a very good case that the reason things like infix notation have not worked before is that typical notation also tries to add precedence, which then adds complexity, which causes difficulties in writing macros.
For this reason, he proposes infix syntax like {1 + 2 + 3} and {1 + {2 * 3}} (note the spaces between symbols), which are translated to (+ 1 2 3) and (+ 1 (* 2 3)) respectively. He adds that if someone writes {1 + 2 * 3}, it should become (nfx 1 + 2 * 3), which could be captured if you really want to provide precedence, but would, as a default, be an error.
He also suggests that indentation should be significant, proposes that functions could be called as fn(A B C) as well as (fn A B C), would like data[A] to translate to (bracketaccess data A), and that the entire system should be compatible with s-expressions.
Overall, it's an interesting set of proposals I'd like to experiment with extensively. (But don't tell anyone at comp.lang.lisp: they'll burn you at the stake for your curiosity :-).
Code rewriting in Tcl in a manner recognizably similar to Lisp macros is a common technique. For example, this is (trivial) code that makes it easier to write procedures that always import a certain set of global variables:
proc gproc {name arguments body} {
    set realbody "global foo bar boo;$body"
    uplevel 1 [list proc $name $arguments $realbody]
}
With that, all procedures declared with gproc xyz rather than proc xyz will have access to the foo, bar and boo globals. The whole key is that uplevel takes a command and evaluates it in the caller's context, and list is (among other things) an ideal constructor for substitution-safe code fragments.
Erlang's parse transforms are similar in power to Lisp macros, though they are much trickier to write and use (they are applied to the entire source file, rather than being invoked on demand).
Lisp itself had a brief dalliance with non-parenthesised syntax in the form of M-expressions. It didn't take with the community, though variants of the idea found their way into modern Lisps, so you get Lisp's powerful macros without the parentheses ... in Lisp!
Yes, you can definitely have Lisp macros without all the parentheses.
Take a look at "sweet-expressions", which provides a set of additional abbreviations for traditional s-expressions. They add indentation, a way to do infix, and traditional function calls like f(x), but in a way that is backwards-compatible (you can freely mix well-formatted s-expressions and sweet-expressions), generic, and homoiconic.
Sweet-expressions were developed on http://readable.sourceforge.net and there is a sample implementation.
For Scheme there is a SRFI for sweet-expressions, SRFI-110: http://srfi.schemers.org/srfi-110/
No, it's not necessary. Anything that gives you some sort of access to a parse tree would be enough to allow you to manipulate the macro body in the same way as is done in Common Lisp. However, as the manipulation of the AST in Lisp is identical to the manipulation of lists (something that is bordering on easy in the Lisp family), it's possibly not nearly as natural without having the "parse tree" and "written form" be essentially the same.
I think this was not mentioned. C++ templates are Turing-complete and perform processing at compile time. There is the well-known expression templates mechanism, which allows transformations, not of arbitrary code, but at least of the subset of C++ operators.
So imagine you have 3 vectors of 1000 elements and you must perform:
(A + B + C)[0]
You can capture this tree in an expression template and arbitrarily manipulate it at compile time.
With this tree, at compile time, you can transform the expression.
For example, if that expression means A[0] + B[0] + C[0] for your domain, you could avoid the normal C++ processing, which would be:
Add A and B, adding 1000 elements.
Create a temporary for the result, and add with the 1000 elements of C.
Index the result to get the first element.
And replace with another transformed expression template tree that does:
Capture A[0]
Capture B[0]
Capture C[0]
Add all 3 results together into the result to be returned, using +=, avoiding temporaries.
It is not better than Lisp, I think, but it is still very powerful.
Yes, it is certainly possible. Especially if it is still a Lisp under the bonnet:
http://www.meta-alternative.net/pfront.pdf
http://www.meta-alternative.net/pfdoc.pdf
Boo has a nice "quoted" macro syntax that uses [| |] as delimiters, and has certain substitutions which are actually verified syntactically by the compiler pipeline using $variables. While simple and relatively painless to use, it's much more complicated to implement on the compiler side than s-expressions. Boo's solution may have a few limitations that haven't affected my own code. There's also an alternate syntax that reads more like ordinary OO code, but that falls into the "not for the faint of heart" category like dealing with Ruby or Python parse trees.
Javascript's template strings offer yet another approach to this sort of thing. For instance, Mark S. Miller's quasiParserGenerator implements a grammar syntax for parsers.
Go ahead and enter the Elixir programming language.
Elixir is a functional programming language that feels like Lisp with respect to macros, but it comes in Ruby's clothes and runs on top of the Erlang VM.
For those who do not like the parentheses but wish their language had powerful macros, Elixir is a great choice.
You can write macros in R (which has a more Algol-like syntax) that have a notion of delayed expressions, as in Lisp macros. You can call substitute() or quote() to keep the delayed expression unevaluated, get at the actual expression, and traverse its source code like in Lisp. Even the structure of the expression's source code is like in Lisp: operators are the first item in the list. E.g. input$foo, which gets property foo from the list input, is written as an expression as ['$', 'input', 'foo'], just like in Lisp.
You can check the ebook Metaprogramming in R, which also shows how to create macros in R (not something you would normally do, but it's possible). It's based on the 2001 article Programmer's Niche: Macros in R, which explains how to write Lisp-style macros in R.
I always thought that parentheses improved readability, but in my textbook there is a statement that the use of parentheses dramatically reduces the readability of a program. Does anyone have any examples?
I can find plenty of counterexamples where the lack of parentheses lowered the readability, but the only example I can think of for what the author may have meant is something like this:
if(((a == null) || (!(a.isSomething()))) && ((b == null) || (!(b.isSomething()))))
{
    // do some stuff
}
In the above case, the ( ) around the method calls is unnecessary, and this kind of code may benefit from factoring out of terms into variables. With all of those close parens in the middle of the condition, it's hard to see exactly what is grouped with what.
boolean aIsNotSomething = (a == null) || !a.isSomething(); // parens for readability
boolean bIsNotSomething = (b == null) || !b.isSomething(); // ditto
if(aIsNotSomething && bIsNotSomething)
{
    // do some stuff
}
I think the above is more readable, but that's a personal opinion. That may be what the author was talking about.
Some good uses of parens:
to distinguish between order of operation when behavior changes without the parens
to distinguish between order of operation when behavior is unaffected, but someone who doesn't know the binding rules well enough is going to read your code. The good citizen rule.
to indicate that an expression within the parens should be evaluated before used in a larger expression: System.out.println("The answer is " + (a + b));
Possibly confusing use of parens:
in places where it can't possibly have another meaning, like in front of a.isSomething() above. In Java, if a is an Object, !a by itself is an error, so clearly !a.isSomething() must negate the return value of the method call.
to link together a large number of conditions or expressions that would be clearer if broken up. As in the code example up above, breaking up the large parenthetical statement into smaller chunks can allow the code to be stepped through in a debugger more straightforwardly, and if the conditions/values are needed later in the code, you don't end up repeating expressions and doing the work twice. This is subjective, though, and obviously meaningless if you only use the expressions in 1 place and your debugger shows you intermediate evaluated expressions anyway.
Apparently, your textbook was written by someone who hates Lisp.
Anyway, it's a matter of taste; there is no single truth for everyone.
I think that parentheses are not the best way to improve the readability of your code. You can use new lines to highlight, for example, the conditions in an if statement. I don't use parentheses where they are not required.
Well, consider something like this:
Result = (x * y + p * q - 1) % t
and
Result = (((x * y) + (p * q)) - 1) % t
Personally I prefer the former (but that's just me), because the latter makes me think the parentheses are there to change the actual order of operations, when in fact they aren't doing that. Your textbook might also be referring to cases where you can split your calculation across multiple variables. For example, you'll probably have something like this when solving a quadratic ax^2+bx+c=0:
x1 = (-b + sqrt(b*b - 4*a*c)) / (2*a)
Which does look kind of ugly. This looks better in my opinion:
SqrtDelta = sqrt(b*b - 4*a*c);
x1 = (-b + SqrtDelta) / (2*a);
And this is just one simple example; when you work with algorithms that involve a lot of computations, things can get really ugly, so splitting the computations up into multiple parts will help readability more than parentheses will.
Parentheses reduce readability when they are obviously redundant. The reader expects them to be there for a reason, but there is no reason. Hence, a cognitive hiccough.
What do I mean by "obviously" redundant?
Parentheses are redundant when they can be removed without changing the meaning of the program.
Parentheses that are used to disambiguate infix operators are not "obviously redundant", even when they are redundant, except perhaps in the very special case of multiplication and addition operators. Reason: many languages have between 10 and 15 levels of precedence, many people work in multiple languages, and nobody can be expected to remember all the rules. It is often better to disambiguate, even if the parentheses are redundant.
All other redundant parentheses are obviously redundant.
Redundant parentheses are often found in code written by someone who is learning a new language; perhaps uncertainty about the new syntax leads to defensive parenthesizing.
Expunge them!
You asked for examples. Here are three examples I see repeatedly in ML code and Haskell code written by beginners:
Parentheses around the condition in if (...) then are always redundant and distracting. They make the author look like a C programmer. Just write if ... then.
Parentheses around a variable are silly, as in print(x). Parentheses are never necessary around a variable; the function application should be written print x.
Parentheses around a function application are redundant if that application is an operand in an infix expression. For example,
(length xs) + 1
should always be written
length xs + 1
Anything taken to an extreme and/or overused can make code unreadable. It wouldn't be too hard to make the same claim about comments. Anyone who has looked at code with a comment on virtually every line will tell you how difficult it is to read. Or you could put whitespace around every line of code, which would make each line easy to read, but normally people want similar, related lines (that don't warrant a breakout method) to be grouped together.
You have to go way over the top with them to really damage readability, but as a matter of personal taste, I have always found;
return (x + 1);
and similar in C and C++ code to be very irritating.
If a method doesn't take parameters, why require an empty () to call method()? I believe in Groovy you don't need to do this.