Racket difference between new and instantiate - object

On the table-panel page I stumbled upon a call to instantiate. Before that, when reading the GUI documentation for Racket, I had only seen new being used to create objects of GUI classes.
Usage of instantiate from that page:
(instantiate button%
  ((format "~a" j) child)
  (stretchable-width #t)
  (stretchable-height #t)
  (callback
   (lambda (button event)
     (printf "~a~n" (send button get-label)))))
Usage of new in the rest of the documentation:
; Make a frame by instantiating the frame% class
(define frame (new frame% [label "Example"]))
What is the difference between the two?
Edit
I found a documentation page telling me about the difference, but I don't really understand what "by-name initialization arguments" are. Is this the same as keyword arguments?

In (define frame (new frame% [label "Example"])), [label "Example"] is a by-name initialization argument, with the argument named label being given the value "Example". They're conceptually similar to keyword arguments, but the mechanism is different and, unlike keyword arguments, they can be provided by position if you really want to. Here are the relevant docs, from https://docs.racket-lang.org/reference/createclass.html:
Initialization arguments can be provided by name or by position. The external name of an initialization variable can be used with instantiate or with the superclass initialization form. Those forms also accept by-position arguments. The make-object procedure and the superclass initialization procedure accept only by-position arguments. Arguments provided by position are converted into by-name arguments using the order of init and init-field clauses and the order of variables within each clause. When an instantiate form provides both by-position and by-name arguments, the converted arguments are placed before by-name arguments. (The order can be significant; see also Creating Objects.)
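To make the distinction concrete, here is a small sketch (untested, and assuming frame% from racket/gui/base, whose first initialization argument is label) showing the same object being created with by-name arguments, mixed arguments, and by-position arguments only:

(require racket/gui/base)

; frame% takes label as its first initialization argument, so these three
; forms construct equivalent objects.
(define f1 (new frame% (label "Example")))                ; by-name only
(define f2 (instantiate frame% ("Example") (width 300)))  ; by-position, then by-name
(define f3 (make-object frame% "Example"))                ; by-position only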

Related

Why are these two classes executed in Python 3.x, when they're just defined?

Why are these two classes executed? I just defined two base classes. I didn't use f = F() and g = G() to instantiate objects. Why are the two prints executed?
class F():
    print("F.__new__called")

class G:
    print("G.__new__called")

print("test")
output:
F.__new__called
G.__new__called
test
As in Python 3 docs 9.3.1 "Class Definition Syntax" (emphasis mine):
The simplest form of class definition looks like this:
class ClassName:
    <statement-1>
    .
    .
    .
    <statement-N>
Class definitions, like function definitions (def statements) must be executed before they have any effect. (You could conceivably place a class definition in a branch of an if statement, or inside a function.)
In practice, the statements inside a class definition will usually be function definitions, but other statements are allowed, and sometimes useful — we’ll come back to this later. The function definitions inside a class normally have a peculiar form of argument list, dictated by the calling conventions for methods — again, this is explained later.
As far as Python is concerned, any statement is allowed within a class definition, even beyond typical attribute assignment and function definitions. These statements are run immediately, in the same sense that the def statement must be run to define a function.
(The docs do not obviously come back to the usefulness of other statements in the class block, though the example code in section 9.6 does interleave an assignment used for a private variable reference.)
However, despite what the strings in your print calls say, this does not mean that __new__ was called; it only means that the class body was executed when the class was defined. When you call f = F(), Python then calls the methods __new__ and __init__, which are responsible for constructing the object.
Think about how the interpreter runs through the code. When it sees a class definition (a class statement), it just creates a new namespace and executes all the statements inside it in that new namespace. That is why your print statements get executed.
When the interpreter sees a function definition (a def statement), it binds the function name in the current namespace to a function object (which represents the code of the function).
class ClassName:
    print("hey")
    a = 2
    def f():
        print("test")
In the above class, all the statements are executed within the namespace ClassName. So print("hey") gets executed and the assignment a = 2 also gets executed. Since a is within the namespace ClassName, you can refer to it as ClassName.a. The statement def f(): also gets executed, but since it is a function definition, the name ClassName.f is bound to a function object which, when called (ClassName.f()), executes the function code.
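Here is a minimal sketch (the class name Demo is made up for illustration) separating the two moments: the class body runs once, when the class statement is executed, while __new__ and __init__ run only when an instance is created.

class Demo:
    print("class body executed")      # runs as soon as this class statement runs

    def __init__(self):
        print("__init__ executed")    # runs only when Demo() is called

print("before instantiation")
d = Demo()                            # only now is "__init__ executed" printed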

Is the `def` keyword optional? If so, why use it?

I am aware that a variable can be dynamically typed with the def keyword in Groovy. But I have also noticed that in some circumstances it can be left out, such as when defining method parameters, e.g. func(p1, p2) instead of func(def p1, def p2). The latter form is discouraged.
I have noticed that this extends to all code: anytime you want to define a variable and set its value, e.g. var = 2, the def keyword can be safely left out. It only appears to be required if you are not initializing the variable on creation, i.e. def var1, so that it can be instantiated as a NullObject.
Is this the only time def is useful? Can it be safely left out in all other declarations, for example, of classes and methods?
Short answer: you can't. There are some use cases where skipping the type declaration (or def keyword) works, but it is not a general rule. For instance, Groovy scripts allow you to use variables without specific type declaration, e.g.
x = 10
However, it works because the groovy.lang.Script class implements the getProperty and setProperty methods that get triggered when you access a missing property. In this case, such a variable is promoted to a global binding, not a local variable. If you try to do the same in any other class that does not implement those methods, you will end up with a groovy.lang.MissingPropertyException.
Skipping types in a method declaration is supported, both in dynamically compiled and statically compiled Groovy. But is it useful? It depends. In most cases, it's much better to declare the type, for readability and documentation purposes. I would not recommend doing it in a public API: the user of your API will see an Object type, while you may expect some specific type. It may make sense if your intention really is to receive any object, no matter what its specific type is. (E.g. a method like dump(obj) could work like that.)
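As a quick sketch of that last point (dump is just an illustrative name), an untyped parameter is compiled as Object, which only reads well when "any object" is genuinely the intent:

// dump is a hypothetical method used only for illustration
def dump(obj) {               // obj is effectively of type Object
    println obj.inspect()     // works for an argument of any type
}

dump([1, 2, 3])
dump("hello")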
And last but not least, there is a way to skip the type declaration in any context: you can use the final keyword for that.
class Foo {
    final id = 1

    void bar(final name) {
        final greet = "Hello, "
        println greet + name + "!"
    }
}
This way you get code that compiles with dynamic compilation as well as with static compilation enabled. Of course, using the final keyword prevents you from re-assigning the variable, but for the compiler this is enough information to infer the proper type.
For more information, you can check a similar question that was asked on SO some time ago: Groovy: "def" keyword vs concrete type
In Groovy the def keyword also plays an important role in distinguishing global from local variables in scripts. If the same variable name is used both with and without def, the one declared with def is local, while the one assigned without def is global. I have explained this in detail at https://stackoverflow.com/a/45994227/2986279.
So using the keyword, or leaving it out, does make a difference and can change behaviour.
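For completeness, a small script-level sketch of that local/global distinction (it assumes the code is run as a plain Groovy script, not inside a class):

def local = 1    // declared with def: a local variable of the script body
global = 2       // no def: promoted to the script's binding (a "global")

def show() {
    // println local   // would fail: locals are not visible through the binding
    println global     // works: resolved via the binding at runtime
}

show()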

what does getType do in antlr4?

This question is with reference to the Cymbol code from the book (~ page 143):
int t = ctx.type().start.getType(); // in DefPhase.enterFunctionDecl()
Symbol.Type type = CheckSymbols.getType(t);
What does each component return: "ctx.type()", "start", "getType()"? The book does not contain any explanation of these names.
I can "kind of" understand that "ctx.type()" refers to the "type" rule, and that "getType()" returns the number associated with it. But what exactly does "start" do?
Also, to generalize this question: what is the mechanism to get the value/structure returned by a rule - especially in the context of usage in a listener?
I can see that for an ID, it is:
String name = ctx.ID().getText();
And, as above, for an enumeration of keywords it is via "start.getType()". Are there any other special kinds of access that I should be aware of?
Let's take the problem apart step by step. Obviously, ctx is an instance of CymbolParser.FunctionDeclContext. On pages 98-99 you can see how the grammar and the ParseTree are implemented (at least to get a feeling for it; for the real implementation, please see the .g4 file).
Take a look at the figure of the AST on page 99: you can see that the FunctionDeclContext node has several children, one of them labeled type. Intuitively, you can see that it somehow corresponds to the function return type. This is the node you retrieve when calling CymbolParser.FunctionDeclContext::type. The return type is probably something like TypeContext.
Note that methods without 'get' at the beginning are usually children-getters - e.g. you can access the block by calling CymbolParser.FunctionDeclContext::block.
So you have the type context of the function declaration you were passed. You can read either start or stop on any context to get the first or last Token covered by that context. Simply put, start gets you "the first word". In this case, the first Token is of course the function return type itself, e.g. int.
And the last call, Token::getType, returns the integer representation of the Token, i.e. the token-type constant generated from the grammar.
You can find more information at API reference webpages - Context, Token. But the best way of understanding the behavior is reading through the generated ANTLR classes such as <GrammarName>Parser etc. And to be complete, I attach a link to the book.
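Putting it together, here is a hedged Java sketch of such a listener (the class and rule names follow the book's Cymbol grammar; the exact generated names may differ in your build):

public class DefPhase extends CymbolBaseListener {
    @Override
    public void enterFunctionDecl(CymbolParser.FunctionDeclContext ctx) {
        // ctx.type()  -> the TypeContext child covering the return type
        // .start      -> the first Token under that subtree, e.g. the 'int' keyword
        // .getType()  -> that token's integer type constant from the generated lexer
        int tokenType = ctx.type().start.getType();
        String text = ctx.type().getText();   // the matched source text, e.g. "int"
        System.out.println(text + " -> token type " + tokenType);
    }
}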

Should it be allowed to change the method signature in a non-statically typed language?

Hypothetical and academic question.
pseudo-code:
class Book {
    read(theReader)
}

class BookWithMemory extends Book {
    read(theReader, aTimestamp = null)
}
Assuming:
an interface (if supported) would prohibit it
default values for parameters are supported
Notes:
PHP triggers a strict standards error for this.
I'm not surprised that PHP strict mode complains about such an override. It's very easy for a similar situation to arise unintentionally, in which part of a class hierarchy was edited to use a new signature and one or a few classes have fallen out of sync.
To avoid the ambiguity, name the new method something different (for this example, maybe readAt?), and override read to call readAt in the new class. This makes the intent plain to the interpreter as well as anyone reading the code.
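A hedged Ruby-flavoured sketch of that renaming approach (read_at is just the readAt name suggested above, spelled Ruby-style; none of it comes from the original question):

class Book
  def read(reader)
    puts reader
  end
end

class BookWithMemory < Book
  # the extended behaviour gets its own, unambiguous name
  def read_at(reader, timestamp = nil)
    puts reader
    puts timestamp if timestamp
  end

  # read keeps the inherited one-argument signature and forwards to read_at
  def read(reader)
    read_at(reader)
  end
end

BookWithMemory.new.read("reader")              # goes through read_at
BookWithMemory.new.read_at("reader", Time.now)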
The actual behavior in such a case is language-dependent -- more specifically, it depends on how much of the signature makes up the method selector, and how parameters are passed.
If the name alone is the selector (as in PHP or Perl), then it comes down to how the language handles mismatched method parameter lists. If default arguments are processed at the call site, based on the static type of the receiver, rather than at the callee's entry point, then a call through a base-class reference would leave you with an undefined argument value instead of your specified default, much as if no default had been specified at all.
If the number of parameters (with or without their types) is part of the method selector (as in Erlang or E), as is common in dynamic languages that run on the JVM or CLR, you simply have two different methods. Create a new overload taking additional arguments, and override the base method with one that calls the new overload with default argument values.
If I am reading the question correctly, this question seems very language-specific (as in it is not applicable to all dynamic languages), since I know you can do this in Ruby.
class Book
  def read(book)
    puts book
  end
end

class BookWithMemory < Book
  def read(book, aTimeStamp = nil)
    super book
    puts aTimeStamp
  end
end
I am not sure about dynamic languages besides Ruby. This seems like a pretty subjective question as well, as at least two languages were designed on either side of the issue (method overloading vs. not: Ruby vs. PHP).

Why does ATL COM map scanning code expect the first entry to be of _ATL_SIMPLEMAPENTRY type?

ATL provides a bunch of macros for creating so-called COM maps: chains of rules for how the QueryInterface() call behaves on a given object. The map begins with BEGIN_COM_MAP and ends with END_COM_MAP. In between, the following can be used (among others):
COM_INTERFACE_ENTRY, COM_INTERFACE_ENTRY2 - to ask C++ to simply cast this class to the corresponding COM interface
COM_INTERFACE_ENTRY_FUNC - to ask C++ to call a function that will retrieve the interface
Now the problem is that I want to use COM_INTERFACE_ENTRY_FUNC for every interface I expose so that I can log all the calls - I believe it will help me debug my component when it is deployed in the field. The implementation of CComObjectRootBase::InternalQueryInterface contains an ATLASSERT:
ATLASSERT(pEntries->pFunc == _ATL_SIMPLEMAPENTRY);
which implies that the following is all right:
BEGIN_COM_MAP
    COM_INTERFACE_ENTRY( IMyInterface1 )
    COM_INTERFACE_ENTRY_FUNC( __uuidof(IMyInterface2), 0, OnQueryMyInterface2 )
END_COM_MAP
since here the first entry results in _ATL_SIMPLEMAPENTRY type entry but the following is not:
BEGIN_COM_MAP
    COM_INTERFACE_ENTRY_FUNC( __uuidof(IMyInterface1), 0, OnQueryMyInterface1 )
    COM_INTERFACE_ENTRY_FUNC( __uuidof(IMyInterface2), 0, OnQueryMyInterface2 )
END_COM_MAP
since here the entry type will not be _ATL_SIMPLEMAPENTRY.
This makes no sense at all. Why am I enforced into having a "please, C++, do the static_cast" entry as the first entry of the COM map?
Upd: Resolved after many more hours of debugging; answer added.
Inside ATL there's AtlInternalQueryInterface(), which actually scans the COM map. Inside it there's this code:
if (InlineIsEqualUnknown(iid)) // use first interface
{
    IUnknown* pUnk = (IUnknown*)((INT_PTR)pThis+pEntries->dw);
    // call AddRef on pUnk, copy it to ppvObject, return S_OK
}
This code relies on the first entry of the table being of _ATL_SIMPLEMAPENTRY type, since it expects _ATL_INTMAP_ENTRY::dw to store an offset from the object's this pointer to the necessary interface. In the usage cited in the question, if the first entry is this one:
COM_INTERFACE_ENTRY_FUNC( __uuidof(IMyInterface1), 0, OnQueryMyInterface1 )
the entry will be of the wrong type, but _ATL_INTMAP_ENTRY::dw will be zero (the second parameter to the macro) and the code will happily work, each time returning the this pointer as IUnknown*. But if that second macro parameter (the value that is normally passed into the function named as the third parameter) is not zero, the program will use it as the offset and could run into undefined behaviour.
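For reference, here is a hedged sketch of the arrangement the assert allows: keep a plain (simple) first entry so the IUnknown fast path above stays valid, and use a _FUNC entry with a logging hook for the other interfaces. The class, interfaces, and OnQueryMyInterface2 are illustrative only; the callback follows ATL's _ATL_CREATORARGFUNC signature and must AddRef what it returns.

class ATL_NO_VTABLE CMyObject :
    public CComObjectRootEx<CComSingleThreadModel>,
    public IMyInterface1,
    public IMyInterface2
{
public:
    // Matches _ATL_CREATORARGFUNC; called only when iid equals __uuidof(IMyInterface2).
    static HRESULT WINAPI OnQueryMyInterface2(void* pv, REFIID /*riid*/, LPVOID* ppv, DWORD_PTR /*dw*/)
    {
        ATLTRACE("QueryInterface(IMyInterface2) called\n");   // the logging hook
        IMyInterface2* pItf = static_cast<IMyInterface2*>(static_cast<CMyObject*>(pv));
        pItf->AddRef();
        *ppv = pItf;
        return S_OK;
    }

    BEGIN_COM_MAP(CMyObject)
        COM_INTERFACE_ENTRY(IMyInterface1)   // simple entry: keeps the ATLASSERT happy
        COM_INTERFACE_ENTRY_FUNC(__uuidof(IMyInterface2), 0, OnQueryMyInterface2)
    END_COM_MAP()

    // ... actual interface method implementations omitted in this sketch
};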
