How is precedence of binary messages implemented in Pharo Smalltalk?

I was looking at the syntax of Pharo Smalltalk and was wondering how precedence of binary messages is implemented.
How does one go about declaring such a binary message?
How does the system figure out its precedence over unary messages?

For implementing binary messages:
Yes, you can look at the Number class, for example; it has a lot of binary messages. Or consider this method of Object:
-> anObject
    ^ Association basicNew key: self value: anObject
This allows you to evaluate 'five' -> 5 and get an association.
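For example (the parentheses are needed here because unary messages like key bind tighter than binary ones, as explained below):
('five' -> 5) key.      "returns 'five'"
('five' -> 5) value.    "returns 5"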
For precedence:
This is done by the parser. So first of all it looks for keyword messages, then binary messages, then unary messages, then parenthesized expressions.
So if you have
collection add: 'five' -> 5
The parser will first of all parse add: with receiver collection, then it parses 'five' -> 5 and puts it as the argument of add:. This way the AST is composed so that keyword messages are the most general and are executed after their arguments are computed.
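In other words, the expression is understood as if it had been written with explicit parentheses:
collection add: ('five' -> 5)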
Where to find
In the latest versions of Pharo the parsing is done with RBParser (the same parser previously used for refactoring). Keyword messages are parsed with parseKeywordMessage and binary messages with parseBinaryMessage. The easiest way to explore is to put a one-time breakpoint in parseBinaryMessage and execute something like
RBParser parseExpression: 'coll add: #five -> 5'
The debugger will stop in parseBinaryMessage so you can look at the stack and see how it works. Pay attention: breaking only once is important, otherwise you will get a debugger every time a method is compiled, and so on.

In Smalltalk, all binary messages have the same precedence and are strictly evaluated from left to right. That is why
1 + 2 * 3
evaluates to 9 and not 7, being evaluated as
(1 + 2) * 3
All unary messages have the same precedence and are evaluated from left to right. Their precedence is higher than the binary messages:
1 sin sqrt + 1 cos sqrt
is equivalent to
((1 sin) sqrt) + ((1 cos) sqrt)
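Putting the three precedence levels together (a small sketch; expected results are shown in the comments):
2 + 3 factorial.                           "8: unary first, so 2 + (3 factorial)"
1 + 2 * 3.                                 "9: binary messages, strictly left to right"
Dictionary new at: 1 + 2 put: 3 squared.   "keyword parts last: (Dictionary new) at: (1 + 2) put: (3 squared)"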

Related

Does the functionality of Grouping operator `()` in JavaScript differ from Haskell or other programming languages?

Grouping operator ( ) in JavaScript
The grouping operator ( ) controls the precedence of evaluation in expressions.
Does the functionality of ( ) in JavaScript itself differ from Haskell or any other programming language?
In other words,
Is the functionality of ( ) in programming languages itself affected by evaluation strategies?
Perhaps we can share the code below:
a() * (b() + c())
to discuss the topic here, but not limited to the example.
Please feel free to use your own examples to illustrate. Thanks.
Grouping parentheses mean the same thing in Haskell as they do in high school mathematics. They group a sub-expression into a single term. This is also what they mean in Javascript and most other programming languages1, so you don't have to relearn this for Haskell coming from other common languages, if you have learnt it the right way.
Unfortunately, this grouping is often explained as meaning "the expression inside the parentheses must be evaluated before the outside". This comes from the order of steps you would follow to evaluate the expression in a strict language (like high school mathematics). However, the grouping really isn't about the order in which you evaluate things, even in that setting. Instead it is used to determine what the expression actually is at all, which you need to know before you can do anything at all with the expression, let alone evaluate it. Grouping is generally resolved as part of parsing the language, totally separate from the order in which any runtime evaluation takes place.
Let's consider the OP's example, but I'm going to declare that function call syntax is f{} rather than f() just to avoid using the same symbol for two purposes. So in my newly-made-up syntax, the OP's example is:
a{} * (b{} + c{})
This means:
a is called on zero arguments
b is called on zero arguments
c is called on zero arguments
+ is called on two arguments; the left argument is the result of b{}, and the right argument is the result of c{}
* is called on two arguments: the left argument is the result of a{}, and the right argument is the result of b{} + c{}
Note I have not numbered these. This is just an unordered list of sub-expressions that are present, not an order in which we must evaluate them.
If our example had not used grouping parentheses, it would be a{} * b{} + c{}, and our list of sub-expressions would instead be:
a is called on zero arguments
b is called on zero arguments
c is called on zero arguments
+ is called on two arguments; the left argument is the result of a{} * b{}, and the right argument is the result of c{}
* is called on two arguments: the left argument is the result of a{}, and the right argument is the result of b{}
This is simply a different set of sub-expressions from the first (because the overall expression doesn't mean the same thing). That is all that grouping parentheses do; they allow you to specify which sub-expressions are being passed as arguments to other sub-expressions2.
Now, in a strict language "what is being passed to what" does matter quite a bit to evaluation order. It is impossible in a strict language to call anything on "the result of a{} + b{}" without first having evaluated a{} + b{} (and we can't call + without evaluating a{} and b{}). But even though the grouping determines what is being passed to what, and that partially determines evaluation order3, grouping isn't really "about" evaluation order. Evaluation order can change as a result of changing the grouping in our expression, but changing the grouping makes it a different expression, so almost anything can change as a result of changing grouping!
Non-strict languages like Haskell make it especially clear that grouping is not about order of evaluation, because in non-strict languages you can pass something like "the result of a{} + b{}" as an argument before you actually evaluate that result. So in my lists of subexpressions above, any order at all could potentially be possible4. The grouping doesn't determine it at all.
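A minimal Haskell sketch of this point (the names a, b, and c here are made up for illustration):

import Debug.Trace (trace)

a, b, c :: Int
a = trace "evaluating a" 2
b = trace "evaluating b" 3
c = trace "evaluating c" 4

-- const ignores its second argument, so (b + c) is passed as a single
-- grouped term but never evaluated: only "evaluating a" is printed.
main :: IO ()
main = print (const a (b + c))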
A language needs other rules beyond just the grouping of sub-expressions to pin down evaluation order (if it wants to specify the order), whether it's strict or lazy. So since you need other rules to determine it anyway, it is best (in my opinion) to think of evaluation order as a totally separate concept from grouping. Mixing them up seems like a shortcut when you're learning high school mathematics, but it's just a handicap in more general settings.
1 In languages with roughly C-like syntax, parentheses are also used for calling functions, as in func(arg1, arg2, arg3). The OP themselves has assumed this syntax in their a() * (b() + c()) example, where this is presumably calling a, b, and c as functions (passing each of them zero arguments).
This usage is totally unrelated to grouping parentheses, and Haskell does not use parentheses for calling functions. But there can be some confusion because the necessity of using parentheses to call functions in C-like syntax sometimes avoids the need for grouping parentheses e.g. in func(2 + 3) * 6 it is unambiguous that 2 + 3 is being passed to func and the result is being multiplied by 6; in Haskell syntax you would need some grouping parentheses because func 2 + 3 * 6 without parentheses is interpreted as the same thing as (func 2) + (3 * 6), which is not func (2 + 3) * 6.
C-like syntax is not alone in using parentheses for two totally unrelated purposes; Haskell overloads parentheses too, just for different things in addition to grouping. Haskell also uses them as part of the syntax for writing tuples (e.g. (1, True, 'c')), and the unit type/value () which you may or may not want to regard as just an "empty tuple".
2 Which is also what associativity and precedence rules for operators do. Without knowing that * is higher precedence than +, a * b + c is ambiguous; there would be no way to know what it means. With the precedence rules, we know that a * b + c means "add c to the result of multiplying a and b", but we now have no way to write down what we mean when we want "multiply a by the result of adding b and c" unless we also allow grouping parentheses.
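For instance, with the standard Haskell Prelude fixities ((*) is infixl 7, (+) is infixl 6):

-- a * b + c parses as (a * b) + c because (*) binds tighter:
withPrecedence, withGrouping :: Int
withPrecedence = 2 * 3 + 4    -- 10
withGrouping   = 2 * (3 + 4)  -- 14: grouping parentheses are the only way to get this meaning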
3 Even in a strict language the grouping only partially determines evaluation order. If you look at my "lists of sub-expressions" above it's clear that in a strict language we need to have evaluated a{}, b{}, and c{} early on, but nothing determines whether we evaluate a{} first and then b{} and then c{}, or c{} first, and then a{} and then b{}, or any other permutation. We could even evaluate only the two of them in the innermost +/* application (in either order), and then the operator application before evaluating the third named function call, etc etc.
Even in a strict language, the need to evaluate arguments before the call they are passed to does not fully determine evaluation order from the grouping. Grouping just provides some constraints.
4 In general in a lazy language evaluation of a given call happens a bit at a time, as it is needed, so in fact in general all of the sub-evaluations in a given expression could be interleaved in a complicated fashion (not happening precisely one after the other) anyway.
To clarify the dependency graph:
Answer by myself (the questioner); however, I am willing to be examined, and am still waiting for your answer (not opinion based):
The grouping operator () in every language shares the common functionality of composing a dependency graph.
In mathematics, computer science and digital electronics, a dependency graph is a directed graph representing dependencies of several objects towards each other. It is possible to derive an evaluation order or the absence of an evaluation order that respects the given dependencies from the dependency graph.
(figures: dependency graph 1 and dependency graph 2)
The functionality of the grouping operator () itself is not affected by the evaluation strategy of any language.

How to get the grammar production when there is an error with Ply(Yacc)?

In the yacc.py file I defined the grammar productions and also an error handler, like this:
def p_error(p):
    if p:
        print("Error when trying to read the symbol '%s' (Token type: %s)" % (p.value, p.type))
    else:
        print("Syntax error at EOF")
    exit()
In addition to this error message, I also want to print what was the production read at the time of the error, something like:
print("Error in production: ifstat -> IF LPAREN expression RPAREN statement elsestat")
How can I do this?
Really, you can't. You particularly can't with a bottom-up parser like the one generated by Ply, but even with top-down parsers "the production read at the time" is not a very well-defined concept.
For example, consider the erroneous code:
if (x < y return 42;
in which the error is a missing parenthesis. At least, that's how I'd describe the error. But a closing parenthesis is not the only thing which could follow the y. For example, a correct program might include any of the following:
if (x < y) return 42;
if (x < y + 10) return 42;
if (x < y && give_up_early) return 42;
and many more.
So which production is the parser trying to complete when it sees the token return? Evidently, it's still trying to complete expression (which might actually have a hierarchy of different expression types, or which might be relying on precedence declarations to be a single non-terminal, or some combination of the two.) But that doesn't really help identify the error as a missing close parenthesis.
In a top-down parser, it would be possible to walk up the parser stack to get a list of partially-completed productions in inclusion order. (At least, that would be possible if the parser maintained its own stack. If it were a recursive-descent parser, examining the stack would be more complicated.)
But in a bottom-up parser, the parser state is more complicated. Bottom-up parsers are more flexible than top-down parsers precisely because they can, in effect, consider multiple productions at the same time. So there often really isn't one single partial production; the parser will decide which production it is looking at by gradually eliminating all the possibilities which don't work.
That description makes it sound like the bottom-up parser is doing a lot of work, which is misleading. The work was already done by the parser generator, which compiles a simple state transition table to guide the parse. What that means in practice is that the parser knows how to handle every possibly-correct token at each moment in the parse. So, for example, when it sees a ) following if (x < y, it immediately knows that it must finish up the expression production and proceed with the rest of the if statement.
Bison -- a C implementation of yacc -- has an optional feature which allows it to list the possible correct tokens when an error is encountered. That's not as simple as it sounds, and implementing it correctly creates a noticeable overhead in parsing time, but it is sometimes useful. (It's often not useful, though, because the list of possible tokens can be very long. In the case of the error I'm using as an example, the list would include every single arithmetic operator, as well as those tokens which could start a postfix operator. The bison extended error handler stops trying when it reaches the sixth possible token, which means that it will rarely generate an extended error message if the parse is in the middle of an expression.) In any case, Ply does not have such a feature.
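(For reference, the Bison feature mentioned above is opted into with a declaration in the grammar file:
%define parse.error verbose
)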
Ply, like bison, does implement error recovery through the error pseudo-token. The error-recovery algorithm works best with languages which have an unambiguous resynchronisation point, as in languages with a definite statement terminator (unlike C-like languages, in which many statements do not end with ;). But you can use error productions to force the parser to pop its stack back to some containing production in order to produce a better error message. In my experience, a lot of experimentation is needed to get this strategy right.
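A minimal sketch of such an error production in Ply (the statement rule and the SEMI token here are assumptions about your grammar):

# Hypothetical: when a statement is malformed, discard tokens until the
# ';' resynchronisation point, so parsing can continue past the error.
def p_statement_error(p):
    "statement : error SEMI"
    print("Discarded a malformed statement ending at ';'")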
In short, producing meaningful error messages is hard. My recommendation is to first focus on getting your parser working on correct inputs.

Guarantee of sameness of output after switching order in functional programming

I started reading some of Haskell's documentation, and there's a fundamental concept I just don't understand. I read about it in other places as well, but I want to understand it once and for all.
In many places discussing functional programming, I keep reading that if the functions you're using are pure (they have no side effects and return the same result for the same input on every call), then you can switch the order in which they are called when composing them, and the output of the composed call is guaranteed to remain the same regardless of the order.
For example, here is an entry from the Haskell Wiki:
Haskell is a pure language, which means that the result of any
function call is fully determined by its arguments. Pseudo-functions
like rand() or getchar() in C, which return different results on each
call, are simply impossible to write in Haskell. Moreover, Haskell
functions can't have side effects, which means that they can't effect
any changes to the "real world", like changing files, writing to the
screen, printing, sending data over the network, and so on. These two
restrictions together mean that any function call can be replaced by
the result of a previous call with the same parameters, and the
language guarantees that all these rearrangements will not change the
program result!
But when I fiddle with this idea I can quickly think of examples that contradict the statement above. For instance, let's say I have two functions (I will use pseudo code rather than Haskell):
x(a)->a+3
y(a)->a*3
z(a)->x(y(a))
w(a)->y(x(a))
Now, if we execute z and w, we get:
z(5) //gives 3*5+3=18
w(5) //gives (5+3)*3=24
That being so, I think I misunderstood the promised guarantee they speak about. Can anybody explain it to me?
When you compare x(y(a)) to y(x(a)), those two expressions are not equivalent because x and y aren't called with the same arguments in each. In the first expression x is called with the argument y(a) and y is called with the argument a. Whereas in the second y is called with x(a), not a, as its argument and x is called with a, not y(a). So: different arguments, (possibly) different results.
When people say that the order does not matter, they mean that in the following code:
a = f(x)
b = g(y)
you can switch the definitions of a and b without affecting their values. That is, it makes no difference whether f is called before g or vice versa. This is clearly not true for the following code:
a = getchar()
b = getchar()
If you switch a and b here, their values are switched as well, because getchar returns a (possibly) different character each time that it's called. So a purely functional language can't have a function exactly like getchar.
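A small Haskell sketch of the guarantee (the function names are my own):

f, g :: Int -> Int
f x = x + 3
g x = x * 3

main :: IO ()
main = do
  let a = f 5
      b = g 7
  -- Because f and g are pure, swapping the two bindings above
  -- (defining b before a) cannot change the printed result.
  print (a, b)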

Is there a fast way of going from a symbol to a function call in Julia? [duplicate]

This question already has an answer here:
Julia: invoke a function by a given string
(1 answer)
Closed 6 years ago.
I know that you can call functions using their name as follows
f = x -> println(x)
y = :f
eval(:($y("hi")))
but this is slow since it uses eval. Is it possible to do this in a different way? I know it's easy to go the other direction by just doing symbol(f).
What are you trying to accomplish? Needing to eval a symbol sounds like a solution in search of a problem. In particular, you can just pass around the original function, thereby avoiding issues with needing to track the scope of f (or, since f is just an ordinary variable in your example, the possibility that it would get reassigned), and with fewer characters to type:
f = x -> println(x)
g = f
g("hi")
I know it's easy to go the other direction by just doing symbol(f).
This is misleading, since it's not actually going to give you back f (that transform would be non-unique). Instead it gives you a Symbol made from the string representation of the function (which might happen to be f, sometimes). It is simply equivalent to calling Symbol(string(f)); the combination is common enough to be useful for other purposes.
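For completeness, the usual eval-free way to look up a function by symbol (assuming f is a global binding in Main) is getfield:

f = x -> println(x)
y = :f
getfield(Main, y)("hi")  # looks up the binding named :f in Main; no eval involved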
Actually, I have found a use for the above scenario. I am working on a simple form compiler allowing for the convenient definition of variational problems as encountered in e.g. finite element analysis.
I am relying on the Julia parser to do an initial analysis of the syntax. The equations entered are valid Julia syntax, but will trigger errors on execution because some of the symbols or methods are not available at the point of the problem definition.
So what I do is roughly this:
I have a type that can hold my problem description:
type Cmd f; a; b; end
I have defined a macro so that I have access to the problem description's AST. I traverse this expression and create a Cmd object from its elements (this is not completely unlike the strategy behind the @mat macro in MATLAB.jl):
macro m(xp)
    c = Cmd(xp.args[1], xp.args[3], xp.args[2])
    :($c)
end
At a later step, I run the Cmd. Evaluation of the symbols happens only at this stage (yes, I need to be careful of the evaluation context):
function run(c::Cmd)
    xp = Expr(:call, c.f, c.a, c.b)
    eval(xp)
end
Usage example:
c = @m a^b
...
a, b = 2, 3
run(c)
which returns 9. So in short, the question is relevant in at least some meta-programming scenarios. In my case I have to admit I couldn't care less about performance as all of this is mere preprocessing and syntactic sugar.

Parse arithmetic/boolean expression but skip capture

Given the following expression
x = a + 3 + b * 5
I would like to turn that into the following data structure, where I'm only interested in capturing the variables used on the RHS while keeping the string intact. I'm not interested in parsing a more specific structure, since I'm doing a transformation from language to language and not handling the evaluation:
Variable "x" (Expr ["a","b"] "a + 3 + b * 5")
I've been using this tutorial as my starting point, but I'm not sure how to write an expression parser without buildExpressionParser. That doesn't seem to be the way I should approach this.
I am not sure why you want to avoid buildExpressionParser, as it hides a lot of the complexity in parsing expressions with infix operators. It is the right way to do things....
Sorry about that, but now that I got that nag out of the way, I can answer your question.
First, here is some background-
The main reason writing a parser for expressions with infix operators is hard is because of operator precedence. You want to make sure that this
x+y*z
parses as this
  +
 / \
x   *
   / \
  y   z
and not this
    *
   / \
  +   z
 / \
x   y
Choosing the correct parsetree isn't a very hard problem to solve.... But if you aren't paying attention, you can write some really bad code. Why? Performance....
The number of possible parsetrees, ignoring precedence, grows exponentially with the size of the input. For instance, if you write code to try all possibilities then throw away all but the ones with the proper precedence, you will have a nasty surprise when your parser tackles anything in the real world (remember, exponential complexity often ain't just slow, it is basically not a solution at all.... You may find that you are waiting half an hour for a simple parse, no one will use that parser).
I won't repeat the details of the "proper" solution here (a google search will give the details), except to note that the proper solution runs at O(n) with the size of the input, and that buildExpressionParser hides all the complexity of writing such a parser for you.
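For reference, here is roughly what a buildExpressionParser-based parser looks like (a minimal sketch without parentheses support; the names are my own):

import Data.Functor.Identity (Identity)
import Text.Parsec
import Text.Parsec.Expr
import Text.Parsec.String (Parser)

data Expr = Var String | Num Integer | BinOp String Expr Expr
  deriving Show

term :: Parser Expr
term = (Var <$> many1 letter <|> Num . read <$> many1 digit) <* spaces

table :: [[Operator String () Identity Expr]]
table = [ [binary "*", binary "/"]   -- higher-precedence row
        , [binary "+", binary "-"] ] -- lower-precedence row
  where binary name = Infix (BinOp name <$ string name <* spaces) AssocLeft

-- parse expr "" "a + 3 + b * 5" builds the tree with the usual precedence.
expr :: Parser Expr
expr = spaces *> buildExpressionParser table term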
So, back to your original question....
Do you need to use buildExpressionParser to get the variables out of the RHS, or is there a better way?
You don't need it....
Since all you care about is getting the variables used in the right side, you don't care about operator precedence. You can just make everything left associative and write a simple O(n) parser. The parsetrees will be wrong, but who cares? You will still get the same variables out. You don't even need a context-free grammar for this; this regular expression basically does it:
<variable>(<operator><variable>)*
(where <variable> and <operator> are defined in the obvious way).
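A minimal Parsec sketch along these lines (the token definitions are my own assumptions):

import Data.Either (rights)
import Text.Parsec
import Text.Parsec.String (Parser)

-- A term is either a numeric literal (discarded) or a variable name (kept).
term :: Parser (Either String String)
term = (Left <$> many1 digit <|> Right <$> many1 letter) <* spaces

operator :: Parser ()
operator = () <$ oneOf "+-*/" <* spaces

-- <variable>(<operator><variable>)*, keeping only the variables:
-- parse rhsVars "" "a + 3 + b * 5"  ==>  Right ["a","b"]
rhsVars :: Parser [String]
rhsVars = rights <$> term `sepBy1` operator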
However....
I wouldn't recommend this, because, as simple as it is, it still will be more work than using buildExpressionParser. And it will be trickier to extend (like adding parentheses). But most importantly, later on, you may accidentally use it somewhere where you do need a full parsetree, and be confused for a while about why the operator precedence is so completely messed up.
Another solution is, you could rewrite your grammar to remove the ambiguity (again, google will tell you how).... This would be good as a learning exercise, but you basically would be repeating what buildExpressionParser is doing internally.
