The low-level primitives in Rebol for functions and closures are FUNC and CLOS. Unless you explicitly tell FUNC or CLOS to make a word local, assignments inside the body will not be local:
x: 10
y: 20
foo: func [/local x] [
    x: 304
    y: 304
]
foo
print [{x is} x {and} {y is} y]
This will output:
x is 10 and y is 304
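For comparison, adding y to the /local list keeps both assignments from leaking into the global context. A small sketch of the same example (expected output shown in the comment):
x: 10
y: 20
foo: func [/local x y] [
    x: 304
    y: 304
]
foo
print [{x is} x {and} {y is} y]   ; x is 10 and y is 20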
The higher-level routines FUNCTION and CLOSURE are written as Rebol code in the default library. They scan the body for symbols of category SET-WORD (such as x: and y:). Then they automatically generate an augmented function specification which adds them as /LOCAL:
x: 10
y: 20
foo: function [] [
    x: 304
    y: 304
]
foo
print [{x is} x {and} {y is} y]
This will output:
x is 10 and y is 20
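If you want to see the augmentation that FUNCTION performs, you can inspect the generated spec. A rough sketch of what I'd expect (the exact console output may differ between builds):
>> spec-of :foo
== [/local x y]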
That's better almost all of the time, so it's good that these get the prettier names. Yet how can you use FUNCTION as an object member?
bar: object [
    x: 10
    y: 20
    foo: function [] [
        x: 304
        y: 304
        c: 12-Dec-2012
        d: $0.50
    ]
]
That won't behave like other languages, where it's assumed by default that object members are not shadowed by local variables inside methods. What is someone to do if they want foo to act like a FUNC for any words set in the object, but like a FUNCTION for words that are not?
The only thing I thought of was to pass self into a variant of the code for FUNCTION, something like:
method: func [
    me [object!] {always the parameter "self"?}
    spec [block!]
    body [block!]
] [
    unless find spec: copy/deep spec /local [append spec [/local]]
    body: copy/deep body
    append spec exclude collect-words/deep/set/ignore body words-of me spec
    foreach l next find spec /local [
        if refinement? l [break]
        insert body to-lit-word l
        insert body 'unset
    ]
    make function! reduce [spec body]
]
But then you would have to write foo: method self [] [...] which is wordy (assuming this approach is even legitimate).
Is there any trick to get past passing in self, or some other idiom for supporting this desire? Or does everyone just use FUNC as object members?
The behavior described results from the dynamic scoping used in Rebol. The proposed definition for METHOD infers the locals from the body of the function, while allowing access to the instance variables of the object without any declaration effort from the programmer. This kind of leaky abstraction is dangerous in the presence of dynamic scoping. For example:
The programmer writes this initial version:
o: make object! [
    x: 1
    y: 1
    m: method [][
        x: 2
        y: 2
        z: x * y
    ]
]
Many revisions later, another programmer decides to revise the code to this:
o: make object! [
    x: 1
    y: 1
    z: method [][
        z: x + y
    ]
    m: method [][
        x: 2
        y: 2
        z: x * y
    ]
]
Depending on the execution path, the revised code can give different results: an invocation of o/m overwrites o/z, replacing the function with a plain integer. Thus the proposed implementation introduces an element of surprise.
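To make the surprise concrete, assuming METHOD behaves as proposed (words that already exist in the object are never made local), a rough console sketch might look like this:
>> o/m
== 4
>> o/z      ; z was a method, but m's z: x * y overwrote it with the integer 4
== 4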
By saving the programmer the effort of expressing their intent clearly, the code has become brittle. You can be explicit that you want a member of the object simply by using self when that is what you mean:
o: make object! [
    x: 1
    y: 1
    z: function [][
        z: x + y
    ]
    m: function [][
        self/x: 2
        self/y: 2
        z: x * y
    ]
]
You can then use FUNCTION and CLOSURE and it is readable and explicit.
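With the explicit SELF version above, the results stay predictable; a rough sketch of what I'd expect at the console:
>> o/z
== 2
>> o/m      ; updates o/x and o/y through SELF; the z: assignment stays local to the function
== 4
>> o/z      ; z is still a function, now computing with the updated x and y
== 4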
Disclaimer: I wrote function.
The rest of the answers have done a good job of covering the standard behavior, so let's just skip that.
Let's take a look at your suggestion for a method function to make it easier to write object-bound functions that refer to the object's words. With some tweaking it could be a useful addition to Rebol.
The Problem
Doing things the normal, right way doesn't always work. Take some code from jvargas:
o: make object! [
    x: 1
    y: 1
    z: function [][
        z: x + y
    ]
    m: function [][
        self/x: 2
        self/y: 2
        z: x * y
    ]
]
Say you need to protect the words x and y from being accessed from outside the object - one of the main reasons people write methods in the first place. Rebol provides a protect/hide function that does just that. So let's use that:
protect/hide/words in o [x y]
Once you hide the words, they can't be accessed except through words that were bound to them before they were protected. We still want existing bindings to work so we can write privileged code that is allowed to access the words, while blocking outside code. So any new or "outside" code that tries to access the words will fail; outside code, in this case, means referring to the words through a path expression like o/x, or through binding expressions like in o 'x.
Unfortunately for the code above, that means the self/x and self/y expressions won't work either. If they did, it would be too easy to get around the restrictions, like this:
do in o [self/x: "something horrible"]
That's one of the reasons we made function/with, so it could be used as Ladislav suggested. But function/with won't necessarily work here either, because it is also meant to work from the outside: it explicitly binds the function body to the object provided, which increases its power in advanced code, but doesn't help us if the words were hidden before the method is constructed:
bar: object [
    x: 10
    y: 20
    protect/hide/words [x y]
    foo: function/with [] [
        x: 304
        y: 304
        c: 12-Dec-2012
        d: $0.50
    ] self
]
The call to function/with would not reserve x and y for the object here because it wouldn't see them (they're already hidden), so those words would become local to the resulting function. To do what you want you have to use another option:
bar: object [
    x: 10
    y: 20
    protect/hide/words [x y]
    foo: function/extern [] [
        x: 304
        y: 304
        c: 12-Dec-2012
        d: $0.50
    ] [x y]
]
This just makes function skip adding x and y to the locals, and since they were bound to the object earlier with the rest of the object's code block, before the words were hidden, they will still be bound and still work.
This is all too tricky. We may benefit from a simple alternative.
A Solution
Here's a cleaned up version of your method function that solves a few issues:
method: func [
    "Defines an object function, all set-words local except object words."
    :name [set-word!] "The name of the function (modified)"
    spec [block!] "Help string (opt) followed by arg words (and opt type and string)"
    body [block!] "The body block of the method"
] [
    unless find spec: copy/deep spec /local [append spec [/local]]
    body: copy/deep body
    append spec collect-words/deep/set/ignore body
        append append copy spec 'self words-of bind? name
    set name make function! reduce [spec body]
]
You use it like this:
bar: object [
    x: 10
    y: 20
    method foo: [] [
        x: 304
        y: 304
        c: 12-Dec-2012
        d: $0.50
    ]
]
You might note that this looks a lot less awkward than function/with or your method, almost like it's syntax. You might also notice that you don't have to pass self or a word list - so how does it work at all, given Rebol's lack of scopes?
The trick is that this function makes you put the set-word for the method name after the word method, not before it. That is what makes it look like method syntax in other programming languages. However, for us, it turns the method name into a parameter for the method function, and with that parameter we can get to the original object through the binding of the word, then get a list of words from that.
There are a few more factors that make this binding trick work. By naming this function method and mentioning objects in the doc string, we pretty much ensure that this function will normally be used only in objects, or maybe modules. Objects and modules gather their words from the set-words in their code block. By making sure the name must be a set-word, this makes it likely to be bound to the object or module. We declare it with :name to block the set-word being treated as an assignment expression, then set it explicitly within the function the way someone might naively expect it to be.
As an advantage over function/with, method doesn't rebind the function body to the object, it just skips the object's words like function/extern and leaves the existing bindings. Specializing lets it be simpler, and have a little less overhead as a bonus.
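To make the effect concrete, here is roughly what I'd expect from the usage example above (a sketch, not captured output):
>> bar/foo
== $0.50
>> bar/x            ; x and y stayed bound to the object, so the method updated them
== 304
>> words-of bar
== [x y foo]        ; c and d were collected as locals, so they never leak into the object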
The new method also has a couple of advantages over your original method:
- The extra code in that foreach loop had the side effect of reserving the word unset in functions, and the effect of unsetting the local words for a single refinement group, when normally those words would be none by default. Function local words are none by default on purpose, and this would make their behavior inconsistent. It's unnecessary and ill-advised.
- The self word is special: it's not returned by words-of, and it explicitly triggers an error if you try to assign it, to help you avoid bugs. Your code loses that error, where the code in function was designed to preserve it. Given how bad an idea it is to accidentally override self, it is better to require people to override it explicitly by declaring it in the function locals.
- exclude doesn't work with blocks that have nested blocks, so your code wouldn't have worked with function specs with types declared. That's why function used those append calls in the first place.
Does this serve your purposes?
This works at present, but it probably is not exactly what you wished:
bar: object [
    x: 10
    y: 20
    foo: function/with [] [
        x: 304
        y: 304
        c: 12-Dec-2012
        d: $0.50
    ] self
]
Related
I am trying to overload the __rrshift__ method on one class. The basic implementation follows:
class A:
    def __rrshift__(self, arg0):
        print(f'called A.__rrshift__ with arg0={arg0}')
a = A()
42 >> a
Here I got the expected result:
called A.__rrshift__ with arg0=42
But if I call this method with more than one argument, only the last one is used:
'hello', 'world' >> a
Which returns:
called A.__rrshift__ with arg0=world
If I try and add arguments to the __rrshift__ method, the behavior is not modified as I expected:
class B:
    def __rrshift__(self, arg0, arg1=None):
        print(f'called B.__rrshift__ with arg0={arg0}, arg1={arg1}')
b = B()
42 >> b
'hello', 'world' >> b
# called B.__rrshift__ with arg0=42, arg1=None
# called B.__rrshift__ with arg0=world, arg1=None
Is it possible to consider more than one argument for the __rrshift__ method?
I'm afraid that's not possible.
__rrshift__, like __add__, __sub__, et al., is a binary operator hook: it accepts exactly two arguments, self and the other operand (whatever_other_argument).
Of course, you can cheat the system by calling these methods explicitly, and then they'll be able to accept as many arguments as you want, but if you use the operators like >>, +, - et al., then the syntax of the language will force them to accept two arguments exactly.
You can probably modify that by hacking the heck out of Python's grammar with the ast module, but that won't be Python anymore.
Here's how a, b >> c is seen by the Python parser, according to the grammar:
>>> ast.dump(ast.parse('a, b >> c'))
# I prettified this myself. The actual output of `dump` is horrendous looking.
Module(
    body=[
        Expr(
            # Build a tuple...
            value=Tuple(
                elts=[
                    Name(id='a', ctx=Load()),  # ...containing `a`...
                    # ...and the result of a BINARY OPERATOR (RShift)...
                    BinOp(
                        left=Name(id='b', ctx=Load()),  # ...which is applied to `b`...
                        op=RShift(),
                        right=Name(id='c', ctx=Load())  # ...and `c`
                    )
                ],
                ctx=Load()
            )
        )
    ]
)
The production in the grammar that produces the tuple seems to be the following:
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
As you can see, it then goes on to parse the test production, which is unpacked all the way down to the expr production, eventually arriving at the following production:
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
So the first test in testlist_star_expr resolves to atom: NAME, and the second one to shift_expr; the tuple is then built from those two elements.
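If what you actually want is to pass both values through >>, one workaround (a sketch along the lines above) is to parenthesize the tuple so that it, rather than just its last element, becomes the left operand:
class A:
    def __rrshift__(self, arg0):
        print(f'called A.__rrshift__ with arg0={arg0}')

a = A()
# Parenthesize so the whole tuple is the left operand of >>
('hello', 'world') >> a
# called A.__rrshift__ with arg0=('hello', 'world')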
I am not sure what you are after but if you just need to supply some args (kwargs eventually), this might show you how to achieve that:
class A:
    def __rrshift__(self, *args):
        if len(args) > 1:
            print(f'called A.__rrshift__ with arg={", ".join(args)}')
        else:
            print(f'called A.__rrshift__ with arg={args[0]}')
a = A()
a.__rrshift__('hello', 'world')
a.__rrshift__(42)
#called A.__rrshift__ with arg=hello, world
#called A.__rrshift__ with arg=42
What is a closure in Groovy?
Why do we use closures?
Are you asking about Closure annotation parameters?
[...
An interesting feature of annotations in Groovy is that you can use a closure as an annotation value. Therefore annotations may be used with a wide variety of expressions and still have IDE support. For example, imagine a framework where you want to execute some methods based on environmental constraints like the JDK version or the OS. One could write the following code:
class Tasks {
    Set result = []
    void alwaysExecuted() {
        result << 1
    }
    @OnlyIf({ jdk>=6 })
    void supportedOnlyInJDK6() {
        result << 'JDK 6'
    }
    @OnlyIf({ jdk>=7 && windows })
    void requiresJDK7AndWindows() {
        result << 'JDK 7 Windows'
    }
}
...]
Source: http://docs.groovy-lang.org/
Closures are a powerful concept with which you can implement a variety of things, and which enable specifying DSLs. They are sort of like Java 8 lambdas, but more powerful and versatile. You don't need to use closures, but they can make many things easier.
Since you didn't really specify a concrete question, I'll just point you to the strategy pattern example in the Groovy docs:
http://docs.groovy-lang.org/latest/html/documentation/#_strategy_pattern
Think of a closure as an executable unit of its own, like a method or function, except that you can pass it around like a variable, yet it can do a lot of the things you would normally use a class for.
An example: you have a list of numbers and you either want to add 1 to each number or double each number, so you say
def nums = [1,2,3,4,5]
def plusone = { item ->
    item + 1
}
def doubler = { item ->
    item * 2
}
println nums.collect(plusone)
println nums.collect(doubler)
This will print out
[2, 3, 4, 5, 6]
[2, 4, 6, 8, 10]
So what you have achieved is separating the function, the 'what to do', from the object you did it to. Your closures capture an action that can be passed around and used by any method compatible with the closure's input and output.
What we did in the example is take a list of numbers and pass each of them to a closure that did something with it, either adding 1 or doubling the value, and collect the results into another list.
This opens up a whole lot of possibilities to solve problems in a smarter, cleaner way and to write code that represents the problem better.
map-each can be used to evaluate some code for every member in a collection, and aggregate the results of the evaluation in a block:
>> values: map-each x [1 2] [
print ["Doing mapping for" x]
x * 10
]
Doing mapping for 1
Doing mapping for 2
== [10 20]
I was building a block of blocks in this way. But I forgot that since blocks aren't evaluated by default, the x would be left as-is and not get the value I wanted:
>> blocks: map-each x [1 2] [
print ["Doing mapping for" x]
[x * 10]
]
Doing mapping for 1
Doing mapping for 2
== [[x * 10] [x * 10]]
No surprise there. After the evaluation x has no value--much less the ability to take on many values:
>> probe x
** Script error: x has no value
So it's too late, the evaluation must be done with a REDUCE or COMPOSE inside the body of the map-each. But...
>> reduce first blocks
== [20]
>> reduce second blocks
== [20]
The evaluations of items in the result block don't throw an error, but behave as if x had the value of the last iteration.
How is it doing this? Should it be doing this?
Just like FOREACH, MAP-EACH binds the block you give it into a context it creates, and executes it there.
The X is never created globally. The fact that you gave map-each a word (and not a lit-word) as an argument is handled by the function's interface, which uses a lit-word parameter so that it receives the word itself, as-is and unevaluated, instead of whatever value it may contain.
For this reason, the X used in your call to map-each doesn't trigger a
** Script error: x has no value
since map-each grabs it before it gets evaluated and uses the word directly, as a token.
To show more vividly how binding works, and how an X can survive past its original context, here is an example that illustrates the foundation of how words are bound in Rebol (and the fact that this binding persists):
a: context [x: "this"]
b: context [x: "is"]
c: context [x: "sensational!"]
>> blk: reduce [in a 'x in b 'x in c 'x]
== [x x x]
x: "doh!"
== "doh!"
>> probe reduce blk
["this" "is" "sensational!"]
We created a single block containing three X words, yet each of them is bound to a different context.
Because binding in Rebol is static and scope doesn't exist, the same word can have different values, even when the words are being manipulated in the same context (in this case the console, i.e. the global/user context).
This is the quintessential example of why a word is really NOT a variable in Rebol.
blocks: map-each x [1 2] [
print ["Doing mapping for" x]
[x * 10]
]
probe bound? first blocks/1
gives this
Doing mapping for 1
Doing mapping for 2
make object! [
x: 2
]
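As a side note, if the goal was actually a block of evaluated blocks, the fix mentioned in the question works: do the REDUCE inside the body, while x still has its per-iteration value. A rough sketch (expected result shown, not captured output):
>> blocks: map-each x [1 2] [
    reduce [x * 10]    ; evaluate here, while x is still set for this iteration
]
== [[10] [20]]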
In Rebol 2:
>> foo: make object! [a: 10 b: 20]
>> foo/a
== 10
>> foo/b
== 20
>> first foo
== [self a b]
>> second foo
== [make object! [
a: 10
b: 20
] 10 20]
>> third foo
== [a: 10 b: 20]
>> fourth foo
** Script Error: fourth expected series argument of type:
series date port tuple event
** Near: fourth foo
So you can pick out of it as if it were a block for values 1, 2, 3. But doing positional selection is right out in Rebol 3:
>> first foo
** Script error: cannot use pick on object! value
** Where: first
** Near: first foo
I gather that this is deprecated now (like picking out of a function to get its parameter list). However, I'm trying to translate some code that says something like:
bar: construct/with (third foo) mumble
(a) What is the point of that code?
(b) How would I translate it to Rebol 3?
This usage of first, second, third, etc for reflection is indeed deprecated (and it's probably quite obvious why).
The general replacement is REFLECT, which takes a FIELD parameter to specify what information is to be extracted.
REFLECT is in turn wrapped by a group of functions (referred to by some as "reflectors") for convenience: SPEC-OF, BODY-OF, WORDS-OF, VALUES-OF, etc. Those are the preferred replacement for reflection using FIRST et al. Luckily, those "reflectors" have been backported to R2 (2.7.7+) as well.
How to translate third foo to Rebol 3?
The counterpart to reflective THIRD on an object is BODY-OF.
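Using the foo object from the question, the replacements line up roughly like this (a sketch, not captured output):
>> body-of foo      ; was: third foo
== [a: 10 b: 20]
>> words-of foo     ; was: first foo, minus self
== [a b]
>> values-of foo    ; was: second foo, minus the object itself
== [10 20]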
What is the point of the construct/with (third a) b idiom?
It allows you to construct a new object by merging A and B (with values from A taking precedence over B).
So you could, for example, use this idiom to create a full "options" object by merging actual user-provided options with an object of defaults.
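For example, a rough sketch of that defaults idiom (the option names here are made up purely for illustration):
defaults: object [host: "localhost" port: 8080]
user-opts: object [port: 3000]
opts: construct/with body-of user-opts defaults
; opts/host is "localhost" (from the defaults), opts/port is 3000 (the user option wins)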
a) Construct builds the object without evaluating the spec block. This implies that the spec is of some [set-word! any-type!] form (which it would always be if you are using the body of another object). Construct/with uses a second object (mumble) as the prototype.
b) Object operations appear to have changed as follows:
i) first object is replaced by words-of object
ii) second object is replaced by values-of object
iii) third object is replaced by body-of object or to block! object
Therefore your code can be replaced with:
bar: construct/with body-of foo mumble
There is a new implementation of FUNCTION in Rebol 3, which automatically makes the words targeted by set-words in the body local to the function by default.
FUNCTION seems to have a problem with the VALUE? test, as it returns TRUE even if a variable has not been set at runtime yet:
foo: function [] [
    if value? 'bar [
        print [{Before assignment, bar has a value, and it is} bar]
    ]
    bar: 10
    if value? 'bar [
        print [{After assignment, bar has a value, and it is} bar]
    ]
]
If you call FOO you will get:
Before assignment, bar has a value, and it is none
After assignment, bar has a value, and it is 10
That is not the way FUNC works (it only says BAR has a value after the assignment). But then FUNC does not make variables automatically local.
I found the FUNCS primitive here, in a library created by Ladislav Mecir. How is it different, and does it have the same drawbacks?
http://www.fm.vslib.cz/~ladislav/rebol/funcs.r
The main difference is that FUNCTION deep-searches for set-words in the body, while FUNCS just shallow-searches for them. FUNCS also uses a slightly different specification.
FUNCS has been around for quite some time (a name change occurred not long ago, though).
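To make the deep-versus-shallow distinction concrete, here is a small sketch using FUNCTION only (FUNCS' exact spec differs, so I won't guess at it); the point is that the deep scan finds set-words even inside nested blocks:
f: function [] [
    if true [nested: 10]    ; set-word inside a nested block: the deep scan still collects it
    nested
]
>> f
== 10
>> value? 'nested           ; assuming NESTED was not already set globally
== false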
That VALUE? function "problem" is related to the fact that the local variables of functions (even if you use FUNC with /LOCAL to explicitly declare them) are initialized to NONE. That causes the VALUE? function to yield TRUE even when the variables are "not initialized yet".
Generally, I do not see this "initialized with NONE" behavior as a big deal, although it is not the same as the behavior of either global or object variables.