When using
SomeClass compile: someSourceString
the symbol containing the method's name (its selector) is returned. Is there any reason why an instance of CompiledMethod is not returned instead? There are a couple of tests like:
tutu compile: 'foo'.
self deny: (tutu >> #foo) allLiterals last key isNil.
Is there any method that returns a compiled method like:
compileMethod: aString
^ self >> (self compile: aString)
How about this?
(SomeClass compiledMethodAt: (SomeClass compile: 'foo'))
This will return the compiled method you need.
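If you want this as a single message send, the question's proposed helper can be written as a class-side extension along these lines (a sketch, not an existing library method):
compileMethod: aString
	"Compile aString and answer the resulting CompiledMethod rather than its selector."
	^ self compiledMethodAt: (self compile: aString)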
I would like to disable the default (zero argument) constructor for a C++ class exposed to R using RCPP_MODULE so that calls to new(class) without any further arguments in R give an error. Here are the options I've tried:
1) Specifying a default constructor and throwing an error in the function body: this does what I need it to, but means specifying a default constructor that sets dummy values for all const member variables (which is tedious for my real use case)
2) Specifying that the default constructor is private (without a definition): this means the code won't compile if .constructor() is used in the Rcpp module, but has no effect if .constructor() is not used in the Rcpp module
3) Explicitly using delete on the default constructor: requires C++11 and seems to have the same (lack of) effect as (2)
I'm sure that I am missing something obvious, but can't for the life of me work out what it is. Does anyone have any ideas?
Thanks in advance,
Matt
Minimal example code (run in R):
inc <- '
using namespace Rcpp;
class Foo
{
public:
Foo()
{
stop("Disallowed default constructor");
}
Foo(int arg)
{
Rprintf("Foo OK\\n");
}
};
class Bar
{
private:
Bar();
// Also has no effect:
// Bar() = delete;
public:
Bar(int arg)
{
Rprintf("Bar OK\\n");
}
};
RCPP_MODULE(mod) {
class_<Foo>("Foo")
.constructor("Disallowed default constructor")
.constructor<int>("Intended 1-argument constructor")
;
class_<Bar>("Bar")
// Won't compile unless this line is commented out:
// .constructor("Private default constructor")
.constructor<int>("Intended 1-argument constructor")
;
}
'
library('Rcpp')
library('inline')
fx <- cxxfunction(signature(), plugin="Rcpp", include=inc)
mod <- Module("mod", getDynLib(fx))
# OK as expected:
new(mod$Foo, 1)
# Fails as expected:
new(mod$Foo)
# OK as expected:
new(mod$Bar, 1)
# Unexpectedly succeeds:
new(mod$Bar)
How can I get new(mod$Bar) to fail without resorting to the solution used for Foo?
EDIT
I have discovered that my question is actually a symptom of something else:
#include <Rcpp.h>
class Foo {
public:
int m_Var;
Foo() {Rcpp::stop("Disallowed default constructor"); m_Var=0;}
Foo(int arg) {Rprintf("1-argument constructor\n"); m_Var=1;}
int GetVar() {return m_Var;}
};
RCPP_MODULE(mod) {
Rcpp::class_<Foo>("Foo")
.constructor<int>("Intended 1-argument constructor")
.property("m_Var", &Foo::GetVar, "Get value assigned to m_Var")
;
}
/*** R
# OK as expected:
f1 <- new(Foo, 1)
# Value set in the 1-parameter constructor as expected:
f1$m_Var
# Unexpectedly succeeds without the error message:
tryCatch(f0 <- new(Foo), error = print)
# This is the type of error I was expecting to see:
tryCatch(f2 <- new(Foo, 1, 2), error = print)
# Note that f0 is not viable (and sometimes brings down my R session):
tryCatch(f0$m_Var, error = print)
*/
[With acknowledgements to @RalfStubner for the improved code]
So in fact it seems that new(Foo) is not actually calling any C++ constructor for Foo at all, which means my question was somewhat off-base ... sorry.
I guess there is no way to prevent this happening at the C++ level, so maybe it makes most sense to use a wrapper function around the call to new(Foo) at the R level (see the sketch below), or to continue specifying a default constructor that throws an error. Both of these solutions will work fine - I was just curious as to exactly why my expectation regarding the absent default constructor was wrong :)
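A minimal sketch of that R-level wrapper idea (the function name and error message here are illustrative, not part of the question's code):
# Refuse to construct a Foo unless the required argument is supplied
makeFoo <- function(arg) {
  if (missing(arg)) stop("Foo requires an integer argument")
  new(Foo, arg)
}
f1 <- makeFoo(1)    # OK
# makeFoo()         # would stop() with the error above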
As a follow-up question: does anybody know exactly what is happening in f0 <- new(Foo) above? My limited understanding suggests that although f0 is created in R, the associated pointer leads to something that has not been (correctly/fully) allocated in C++?
After a bit of experimentation I have found a simple solution to my problem that is obvious in retrospect...! All I needed to do was use .factory for the default constructor, along with a function that takes no arguments and just throws an error. The default constructor for the class is never actually referenced, so it doesn't need to be defined, but this achieves the desired behaviour in R (i.e. an error if the user mistakenly calls new with no additional arguments).
Here is an example showing the solution (Foo_A) and a clearer illustration of the problem (Foo_B):
#include <Rcpp.h>
class Foo {
private:
Foo(); // Or for C++11: Foo() = delete;
const int m_Var;
public:
Foo(int arg) : m_Var(arg) {Rcpp::Rcout << "Constructor with value " << m_Var << "\n";}
void ptrAdd() const {Rcpp::Rcout << "Pointer: " << (void*) this << "\n";}
};
Foo* dummy_factory() {Rcpp::stop("Default constructor is disabled for this class"); return 0;}
RCPP_MODULE(mod) {
Rcpp::class_<Foo>("Foo_A")
.factory(dummy_factory) // Disable the default constructor
.constructor<int>("Intended 1-argument constructor")
.method("ptrAdd", &Foo::ptrAdd, "Show the pointer address")
;
Rcpp::class_<Foo>("Foo_B")
.constructor<int>("Intended 1-argument constructor")
.method("ptrAdd", &Foo::ptrAdd, "Show the pointer address")
;
}
/*** R
# OK as expected:
fa1 <- new(Foo_A, 1)
# Error as expected:
tryCatch(fa0 <- new(Foo_A), error = print)
# OK as expected:
fb1 <- new(Foo_B, 1)
# No error:
tryCatch(fb0 <- new(Foo_B), error = print)
# But this terminates R with the following (quite helpful!) message:
# terminating with uncaught exception of type Rcpp::not_initialized: C++ object not initialized. (Missing default constructor?)
tryCatch(fb0$ptrAdd(), error = print)
*/
As was suggested to me in a comment I have started a discussion at https://github.com/RcppCore/Rcpp/issues/970 relating to this.
I was browsing the docs, and I found StaticString. It states:
A simple string designed to represent text that is "knowable at compile-time".
I originally thought that String has the same behaviour as NSString, which is known at compile time, but it looks like I was wrong. So my question is: when should we use StaticString instead of String, and is the only difference that StaticString is known at compile time?
One thing I found is
var a: String = "asdf" //"asdf"
var b: StaticString = "adsf" //{(Opaque Value), (Opaque Value), (Opaque Value)}
sizeofValue(a) //24
sizeofValue(b) //17
So it looks like StaticString has a slightly smaller memory footprint.
It appears that StaticString can hold string literals. You can't assign a variable of type String to it, and it can't be mutated (with +=, for example).
"Knowable at compile time" doesn't mean that the value held by the variable will be determined at compile time, just that any value assigned to it is known at compile time.
Consider this example which does work:
import Foundation

var str: StaticString
for _ in 1...10 {
switch arc4random_uniform(3) {
case 0: str = "zero"
case 1: str = "one"
case 2: str = "two"
default: str = "default"
}
print(str)
}
Any time you can give Swift more information about how a variable is to be used, it can optimize the code using it. By restricting a variable to StaticString, Swift knows the variable won't be mutated so it might be able to store it more efficiently, or access the individual characters more efficiently.
In fact, StaticString could be implemented with just an address pointer and a length. The address it points to is just the place in the static code where the string is defined. A StaticString doesn't need to be reference counted since it doesn't (need to) exist in the heap. It is neither allocated nor deallocated, so no reference count is needed.
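As a rough illustration of that pointer-plus-length layout (a sketch; the property names come from the standard library, but exact sizes and behaviour can differ between Swift versions):
let s: StaticString = "hello"
if s.hasPointerRepresentation {
    print(s.utf8CodeUnitCount)  // 5 - the length of the literal's UTF-8 bytes
    print(s.utf8Start)          // the address of those bytes in the binary's static data
}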
"Knowable at compile time" is pretty strict. Even this doesn't work:
let str: StaticString = "hello " + "world"
which fails with error:
error: 'String' is not convertible to 'StaticString'
StaticString is knowable at compile time. This can lead to optimizations. Example:
EDIT: This part doesn't work, see edit below
Suppose you have a function that calculates an Int for some String values for some constants that you define at compile time.
let someString = "Test"
let otherString = "Hello there"
func numberForString(string: String) -> Int {
return string.unicodeScalars.reduce(0) { $0 * 1 << 8 + Int($1.value) }
}
let some = numberForString(someString)
let other = numberForString(otherString)
Written like this, the function would be executed with "Test" and "Hello there" when it actually gets called in the program, for example when the app starts - definitely at runtime. However, if you change your function to take a StaticString
func numberForString(string: StaticString) -> Int {
return string.stringValue.unicodeScalars.reduce(0) { $0 * 1 << 8 + Int($1.value) }
}
the compiler knows that the passed-in StaticString is knowable at compile time, so guess what it does? It runs the function right at compile time (how awesome is that!). I once read an article about this; the author inspected the generated assembly and actually found the already-computed numbers.
As you can see this can be useful in some cases like the one mentioned, to not decrease runtime performance for stuff that can be done at compile time.
EDIT: Dániel Nagy and I had a conversation. My example above doesn't work because the stringValue function of StaticString can't be evaluated at compile time (because it returns a String). Here is a better example:
func countStatic(string: StaticString) -> Int {
return string.byteSize // Breakpoint here
}
func count(string: String) -> Int {
return string.characters.count // Breakpoint here
}
let staticString : StaticString = "static string"
let string : String = "string"
print(countStatic(staticString))
print(count(string))
In a release build, only the second breakpoint gets triggered, whereas if you change the first function to
func countStatic(string: StaticString) -> Int {
return string.stringValue.characters.count // Breakpoint here
}
both breakpoints get triggered.
Apparently some methods can be evaluated at compile time while others can't. I wonder how the compiler actually figures this out.
I have a function that modifies a PictureBox, so I need to use a delegate. The function needs an int in order to do its job, and I've created an enum to define the values it can have.
However, when I invoke it, there's a problem: the compiler cannot convert my enum value to Object^ in order to pass it to the function.
How can I fix this?
My function:
System::Void modifyButtonPicture(int estado)
The enum:
enum BUTTON_STATE : int { PB_STOP = 0, PB_PLAY = 1 };
Delegate:
delegate void SetTextDelegatePlayButton(int estado);
Invoke:
Invoke(gcnew SetTextDelegatePlayButton(this, &Form1:: modifyButtonPicture), PB_PLAY);
The error message (translated):
error C2664: 'System::Object ^System::Windows::Forms::Control::Invoke(System::Delegate ^,...cli::array<Type> ^)' : cannot convert 2nd parameter from 'BUTTON_STATE' to 'System::Object ^'
As documented by MSDN for Control::Invoke Method (Delegate, array), the Invoke method accepts these parameters:
method
Type: System::Delegate
A delegate to a method that takes parameters of the same number and type that are contained in the args parameter.
args
Type: array<System::Object^>^
An array of objects to pass as arguments to the specified method. This parameter can be nullptr if the method takes no arguments.
And in your call, you're passing the enum value PB_PLAY directly as the second parameter.
So you need to convert your enum value to an int and wrap it in a System::Object array:
int play = (int)PB_PLAY;
array<Object^>^ myEnumArray = { play };
Invoke(gcnew SetTextDelegatePlayButton(this, &Form1::modifyButtonPicture), myEnumArray);
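Putting it together, a minimal sketch of the call site (assuming the same Form1, delegate, and enum names as in the question); the key point is converting the native enum value to int so that it can be boxed into the argument array:
// Inside a Form1 member function:
array<Object^>^ args = { static_cast<int>(PB_PLAY) };  // box the enum value as an int
this->Invoke(gcnew SetTextDelegatePlayButton(this, &Form1::modifyButtonPicture), args);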
I'm using @CompileStatic for the first time, and I'm confused as to how Groovy's map constructors work in this situation.
@CompileStatic
class SomeClass {
Long id
String name
public static void main(String[] args) {
Map map = new HashMap()
map.put("id", 123L)
map.put("name", "test file")
SomeClass someClass1 = new SomeClass(map) // Does not work
SomeClass someClass2 = map as SomeClass // Works
}
}
Given the code above I see the following error when trying to compile
Groovyc: Target constructor for constructor call expression hasn't been set
If @CompileStatic is removed, both constructors work properly.
Can anyone explain why new SomeClass(map) does not compile with @CompileStatic? And as a possible addition: why does map as SomeClass still work?
Groovy actually does not give you a "map constructor". The constructors in your class are the ones you write down; if there are none (as in your case), there is only the default c'tor.
But what happens if you use the so-called map c'tor (or rather, "object construction by map")? The general approach of Groovy is this:
create a new object using the default c'tor (this is the reason why construction-by-map no longer works if there is only, say, SomeClass(Long id, String name))
then use the passed-in map and apply all of its values to the corresponding properties (see the sketch below).
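Only as an illustrative sketch of that dynamic behaviour (not the actual runtime code path), new SomeClass(map) behaves roughly like:
def o = new SomeClass()              // 1. default constructor
map.each { k, v -> o."$k" = v }      // 2. apply each entry to the matching property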
If you disassemble your code (with @CompileDynamic, the default) you see that the construction is handled by CallSite.callConstructor(Object,Object), which boils down to this code area.
Now bring in the version of this construction-by-map that is more familiar to the regular Groovyist:
SomeClass someClass3 = new SomeClass(id: 42L, name: "Douglas")
With the dynamic version of the code, the disassembly of this actually looks a lot like your code with the map. Groovy creates a map from the param(s) and sends it off to callConstructor - so this is actually the same code path taken (minus the implicit map creation).
For now ignore the "cast case", as it is actually the same for both static and dynamic: it will be sent to ScriptBytecodeAdapter.asType, which basically gives you the dynamic behaviour in any case.
Now the @CompileStatic case: as you have witnessed, your call with an explicit map for the c'tor no longer works. This is due to the fact that there never was an explicit "map c'tor" in the first place. The class still only has its default c'tor, and with static compilation groovyc can only work with the things that are actually there (or fail if they aren't, as in this case).
What about new SomeClass(id: 42L, name: "Douglas") then? This still works with static compilation! The reason is that groovyc unrolls it for you. As you can see in the bytecode below, it simply boils down to def o = new SomeClass(); o.setId(42); o.setName('Douglas'):
new #2 // class SomeClass
dup
invokespecial #53 // Method "<init>":()V
astore_2
ldc2_w #54 // long 42l
dup2
lstore_3
aload_2
lload_3
invokestatic #45 // Method java/lang/Long.valueOf:(J)Ljava/lang/Long;
invokevirtual #59 // Method setId:(Ljava/lang/Long;)V
aconst_null
pop
pop2
ldc #61 // String Douglas
dup
astore 5
aload_2
aload 5
invokevirtual #65 // Method setName:(Ljava/lang/String;)V
As the CompileStatic documentation says:
will actually make sure that the methods which are inferred as being
called will effectively be called at runtime. This annotation turns
the Groovy compiler into a static compiler, where all method calls are
resolved at compile time and the generated bytecode makes sure that
this happens
As a result, under static compilation a constructor with a Map argument is searched for in order to "resolve it at compile time", but it is not found, and therefore there is a compilation error:
Target constructor for constructor call expression hasn't been set
Adding such a constructor solves the issue with the #CompileStatic annotation, since it is resolved at compile time:
import groovy.transform.CompileStatic
@CompileStatic
class SomeClass {
Long id
String name
SomeClass(Map m) {
id = m.id as Long
name = m.name as String
}
public static void main(String[] args) {
Map map = new HashMap()
map.put("id", 123L)
map.put("name", "test file")
SomeClass someClass1 = new SomeClass(map) // Now it works also
SomeClass someClass2 = map as SomeClass // Works
}
}
You can check StaticCompilationVisitor if you want to dig deeper.
Regarding the line
SomeClass someClass2 = map as SomeClass
You are using the asType() method that Groovy's GDK adds to java.util.Map, so it is resolved at runtime even under static compilation:
Coerces this map to the given type, using the map's keys as the public
method names, and values as the implementation. Typically the value
would be a closure which behaves like the method implementation.
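That GDK coercion is easiest to see with an interface target (a small illustrative sketch, separate from the question's code):
interface Greeter {
    String greet(String name)
}
// The map's keys become method names and its values (closures) the implementations
def greeter = [greet: { String name -> 'Hello, ' + name }] as Greeter
assert greeter.greet('World') == 'Hello, World'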
I'm new to D, and I was wondering whether it's possible to conveniently do compile-time-checked duck typing.
For instance, I'd like to define a set of methods, and require that those methods be defined for the type that's being passed into a function. It's slightly different from interface in D because I wouldn't have to declare that "type X implements interface Y" anywhere - the methods would just be found, or compilation would fail. Also, it would be good to allow this to happen on any type, not just structs and classes. The only resource I could find was this email thread, which suggests that the following approach would be a decent way to do this:
void process(T)(T s)
if( __traits(hasMember, T, "shittyNameThatProbablyGetsRefactored"))
// and presumably something to check the signature of that method
{
writeln("normal processing");
}
... and suggests that you could make it into a library template called Implements so that the following would be possible:
struct Interface {
bool foo(int, float);
static void boo(float);
...
}
static assert (Implements!(S, Interface));
struct S {
bool foo(int i, float f) { ... }
static void boo(float f) { ... }
...
}
void process(T)(T s) if (Implements!(T, Interface)) { ... }
Is it possible to do this for functions which are not defined in a class or struct? Are there other/new ways to do it? Has anything similar been done?
Obviously, this set of constraints is similar to Go's type system. I'm not trying to start any flame wars - I'm just using D in a way that Go would also work well for.
This is actually a very common thing to do in D. It's how ranges work. For instance, the most basic type of range - the input range - must have 3 functions:
bool empty(); //Whether the range is empty
T front(); // Get the first element in the range
void popFront(); //pop the first element off of the range
Templated functions then use std.range.isInputRange to check whether a type is a valid range. For instance, the most basic overload of std.algorithm.find looks like
R find(alias pred = "a == b", R, E)(R haystack, E needle)
if (isInputRange!R &&
is(typeof(binaryFun!pred(haystack.front, needle)) : bool))
{ ... }
isInputRange!R is true if R is a valid input range, and is(typeof(binaryFun!pred(haystack.front, needle)) : bool) is true if pred accepts haystack.front and needle and returns a type which is implicitly convertible to bool. So, this overload is based entirely on static duck typing.
As for isInputRange itself, it looks something like
template isInputRange(R)
{
enum bool isInputRange = is(typeof(
{
R r = void; // can define a range object
if (r.empty) {} // can test for empty
r.popFront(); // can invoke popFront()
auto h = r.front; // can get the front of the range
}));
}
It's an eponymous template, so when it's used, it gets replaced with the symbol with its name, which in this case is an enum of type bool. And that bool is true if the type of the expression is non-void. typeof(x) results in void if the expression is invalid; otherwise, it's the type of the expression x. And is(y) results in true if y is non-void. So, isInputRange will end up being true if the code in the typeof expression compiles, and false otherwise.
The expression in isInputRange verifies that you can declare a variable of type R, that R has a member (be it a function, variable, or whatever) named empty which can be used in a condition, that R has a function named popFront which takes no arguments, and that R has a member front which returns a value. This is the API expected of an input range, and the expression inside of typeof will compile if R follows that API, and therefore, isInputRange will be true for that type. Otherwise, it will be false.
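As a small, self-contained illustration (a sketch, not code from the standard library), a type that simply follows that API passes the trait without declaring anything:
import std.range : isInputRange;

// A minimal infinite range of integers starting at n.
struct Naturals
{
    int n;
    @property bool empty() const { return false; } // never runs out
    @property int front() const { return n; }      // current element
    void popFront() { ++n; }                       // advance to the next element
}

static assert(isInputRange!Naturals); // satisfied purely by structure - no interface declaration needed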
D's standard library has quite a few such eponymous templates (typically called traits) and makes heavy use of them in its template constraints. std.traits in particular has quite a few of them. So, if you want more examples of how such traits are written, you can look in there (though some of them are fairly complicated). The internals of such traits are not always particularly pretty, but they do encapsulate the duck typing tests nicely so that template constraints are much cleaner and more understandable (they'd be much, much uglier if such tests were inserted in them directly).
So, that's the normal approach for static duck typing in D. It does take a bit of practice to figure out how to write them well, but that's the standard way to do it, and it works. There have been people who have suggested trying to come up with something similar to your Implements!(S, Interface) suggestion, but nothing has really come of that of yet, and such an approach would actually be less flexible, making it ill-suited for a lot of traits (though it could certainly be made to work with basic ones). Regardless, the approach that I've described here is currently the standard way to do it.
Also, if you don't know much about ranges, I'd suggest reading this.
Implements!(S, Interface) is possible but has not received enough attention to make it into the standard library or to get better language support. Perhaps if I am not the only one saying this is the way to go for duck typing, we will have a chance of getting it :)
Proof-of-concept implementation to tinker around with:
http://dpaste.1azy.net/6d8f2dc4
import std.traits;
bool Implements(T, Interface)()
if (is(Interface == interface))
{
foreach (method; __traits(allMembers, Interface))
{
foreach (compareTo; MemberFunctionsTuple!(Interface, method))
{
bool found = false;
static if ( !hasMember!(T, method) )
{
pragma(msg, T, " has no member ", method);
return false;
}
else
{
foreach (compareWhat; __traits(getOverloads, T, method))
{
if (is(typeof(compareTo) == typeof(compareWhat)))
{
found = true;
break;
}
}
if (!found)
{
return false;
}
}
}
}
return true;
}
interface Test
{
bool foo(int, double);
void boo();
}
struct Tested
{
bool foo(int, double);
// void boo();
}
pragma(msg, Implements!(Tested, Test)());
void main()
{
}