UDA opCall __traits - attributes

This code fails at the second unittest, at the getA!B() call. The error is: "need 'this' for 'value' of type 'string'"
The question is: how do I get getA to always return an A, whether the UDA is applied as a bare type or constructed via opCall?
struct A {
    static A opCall(T...)(T args) {
        A ret;
        ret.value = args[0];
        return ret;
    }

    string value;
}
@A struct B {
}

@A("hello") struct C {
}
A getA(T)() {
    foreach(it; __traits(getAttributes, T)) {
        if(is(typeof(it) == A)) {
            A ret;
            ret.value = it.value;
            return ret;
        }
    }
    assert(false);
}
unittest {
    A a = getA!C();
    assert(a.value == "hello");
}

unittest {
    A a = getA!B();
    assert(a.value == "");
}

As you know, traits are evaluated at compile time, so any introspection on values obtained via __traits must be done statically. Luckily, D has the static if condition for this.
If you change
if(is(typeof(it) == A)) {
to
static if (is(typeof(it) == A)) {
you should not have problems compiling the code, as is(typeof(it) == A) can be evaluated at compile time.
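For reference, here is a minimal, untested sketch of getA with that change applied. The first branch is exactly the fix described above; the extra else static if branch goes beyond the answer and is merely one possible way to also cover the bare @A case from the second unittest, by returning a default-initialised A when the attribute is the type itself:

A getA(T)() {
    foreach (it; __traits(getAttributes, T)) {
        // Compile-time branch: only instantiated when the attribute
        // is a value of type A, as produced by @A("hello").
        static if (is(typeof(it) == A)) {
            A ret;
            ret.value = it.value;
            return ret;
        }
        // Not part of the answer above: the attribute is the bare type A
        // (as in @A struct B), so hand back a default-initialised A.
        else static if (is(it == A)) {
            return A.init;
        }
    }
    assert(false);
}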

Related

Vala, string to enum

Is there a way to convert a string to an enum in Vala:
string foo = "Enum1";
MY_ENUM theEnum = MY_ENUM.get_value_by_name(foo);
enum MY_ENUM {
    Enum1,
    Enum2,
    Enum3
}
So in this example "theEnum" would have the value: MY_ENUM.Enum1
It is possible using the runtime type system provided by GLib's GObject library. There are EnumClass and EnumValue. These provide introspection at runtime and allow an enum to be initialised from a string.
The syntax is a bit complex at present; it may be possible for someone to modify the Vala compiler to make it easier, but that would be a significant piece of work.
An example:
void main () {
    try {
        MyEnum? the_enum_value;
        the_enum_value = MyEnum.parse ("FIRST");
        print (@"$(the_enum_value)\n");
    } catch (EnumError error) {
        print (@"$(error.message)\n");
    }
}

errordomain EnumError {
    UNKNOWN_VALUE
}

enum MyEnum {
    FIRST,
    SECOND,
    THIRD;

    public static MyEnum parse (string value) throws EnumError {
        EnumValue? a;
        a = ((EnumClass)typeof (MyEnum).class_ref ()).get_value_by_name ("MY_ENUM_" + value);
        if (a == null) {
            throw new EnumError.UNKNOWN_VALUE (@"String $(value) is not a valid value for $(typeof(MyEnum).name())");
        }
        return (MyEnum)a.value;
    }
}

Why can't you modify closure parameters of inline methods?

I've got this section of code:
class Main {
    static inline function difference(a:Int, b:Int, ?f:(Int, Int) -> Int):Int {
        if (f == null) {
            f = (a, b) -> a - b;
        }
        return f(a, b);
    }

    static function main() {
        trace(difference(42, 37));
        trace(difference(42, 37, (a, b) -> a - b));
    }
}
Which, when I compile using haxe --main Main, fails with this error:
Main.hx:11: characters 15-50 : Cannot modify a closure parameter inside inline method
Main.hx:11: characters 15-50 : For function argument 'v'
If I change Main.difference to not be inline, this error doesn't come up and everything compiles fine.
Why does this error occur?
Edit: I've found out I can also assign the argument to a variable first, and then pass the variable to Main.difference, like this:
static function main() {
    var f = (a, b) -> a - b;
    trace(difference(42, 37, f));
}
Which works fine with Main.difference being inlined. How does assigning the function to a variable first change things though?
This is related to how inline functions are unwrapped by the compiler. Let us take a simpler variant of your code:
class HelloWorld {
    static inline function difference(a:Int, b:Int, ?f:(Int, Int) -> Int):Int {
        return f(a, b);
    }

    static function main() {
        trace(difference(42, 37, (a, b) -> a - b));
    }
}
When disabling optimizations, this will yield the following JavaScript:
HelloWorld.main = function() {
    console.log("HelloWorld.hx:14:",(function(a,b) {
        return a - b;
    })(42,37));
};
So the body of difference has been incorporated into main using a JavaScript closure. My best guess for what is happening in your exact case is something like this:
HelloWorld.main = function() {
    var v = function(a,b) {
        return a - b;
    }
    console.log("HelloWorld.hx:14:", (function(a,b) {
        if (v == null) {
            v = function(a, b) {
                return a - b;
            }
        }
        return v(a, b);
    })(42, 37));
};
This alters the value of v, a variable that exists outside of difference and was automatically introduced as a binding for the anonymous lambda. That is what the compiler is trying to avoid: it would not be the end of the world in your case, but in general it is unsafe and would lead to issues in many programs.
There is a way to inline this code perfectly by hand without that problem, but I think there is some weirdness surrounding how anonymous lambdas are currently handled. The situation may improve in the future.
When you explicitly define f in main, the compiler is intelligent enough to rename the nested f as f1, which is why the issue does not occur:
HelloWorld.main = function() {
    var f = function(a,b) {
        return a - b;
    };
    var f1 = f;
    if(f1 == null) {
        f1 = function(a,b) {
            return a - b;
        };
    }
    console.log("HelloWorld.hx:14:",f1(42,37));
};
But this would also work if the inline part of this function is important to you:
class HelloWorld {
    static inline function difference(a:Int, b:Int, ?f:(Int, Int) -> Int):Int {
        var h = f;
        if (h == null) {
            h = (a, b) -> a - b;
        }
        return h(a, b);
    }

    static function main() {
        trace(difference(42, 37, (a, b) -> a - b));
    }
}

What's the difference between StrCmpW and wcscmp?

Actually, I changed my code to the following.
struct myclass {
    bool operator() (std::wstring p1, std::wstring p2) {
        int result = 0;
        // If the character is an alphabetic letter, the sort order needs to be reversed.
        wint_t a1 = p1.at(0);
        wint_t b2 = p2.at(0);
        int r1 = iswalpha(a1);
        int r2 = iswalpha(b2);
        // Return code of iswalpha:
        // 257 is an upper-case letter,
        // 258 is a lower-case letter.
        if ((r1 == 257 && r1 == 258) ||
            (r2 == 258 && r2 == 257)) {
            result = p2.compare(p1);
        }
        else {
            result = p1.compare(p2);
        }
        if (result != 0) {
            if (result == -1) {
                return true;
            }
            else {
                return false;
            }
        }
        return false;
    }
} wStrCompare;
void main() {
    std::vector<std::wstring> wlist;
    wlist.emplace_back(L"가나");
    wlist.emplace_back(L"123");
    wlist.emplace_back(L"abc");
    wlist.emplace_back(L"타파");
    wlist.emplace_back(L"하하");
    wlist.emplace_back(L"!##$");
    wlist.emplace_back(L"一二三");
    wlist.emplace_back(L"好好");
    wlist.emplace_back(L"QWERID");
    wlist.emplace_back(L"ⓐⓑ");
    wlist.emplace_back(L"☆★");
    wlist.emplace_back(L"とばす");
    std::sort(wlist.begin(), wlist.end(), wStrCompare);
}
Test Result
L"!##$"
L"123"
L"abc"
L"QWERID"
L"ⓐⓑ"
L"☆★"
L"とばす"
L"一二三"
L"好好"
L"가나"
L"타파"
L"하하"
Is this good? Please give me your opinion. Thanks!!
I changed my code, but I still want to know: is there a difference between StrCmpW and wcscmp? Please let me know. Thanks!
Old question
I use qsort with std::wstring (for Unicode strings) and StrCmpW.
Previously, I used StrCmpLogicalW() with CString and CStringArray.
(These depend on Windows.)
But my code has to run on Linux too, not only on Windows.
(CString is ATL (afx); StrCmpLogicalW() is in Shlwapi.h.)
So I use std::wstring and wcscmp instead, but the result is different.
Is there a difference between StrCmpW() and wcscmp()?
The following is my code (not exactly mine, lol):
int wCmpName(const void* p1, const void *p2)
{
    std::wstring* wszName1 = ((std::wstring *)(p1));
    std::wstring* wszName2 = ((std::wstring *)(p2));
    int wret = StrCmpW(wszName1->c_str(), wszName2->c_str());
    // int wret = wcscmp(wszName1->c_str(), wszName2->c_str());
    // When I use wcscmp, a different result comes out.
    return wret;
}

void wSort(std::vector<std::wstring> &arr)
{
    qsort(arr.data(), arr.size(), sizeof(std::wstring), wCmpName);
}
Thanks!
Test Code
void main() {
    std::vector<std::wstring> wlist;
    wlist.emplace_back(L"가나");
    wlist.emplace_back(L"123");
    wlist.emplace_back(L"abc");
    wlist.emplace_back(L"타파");
    wlist.emplace_back(L"하하");
    wlist.emplace_back(L"!##$");
    wlist.emplace_back(L"一二三");
    wlist.emplace_back(L"好好");
    wlist.emplace_back(L"QWERID");
    wlist.emplace_back(L"ⓐⓑ");
    wlist.emplace_back(L"☆★");
    wlist.emplace_back(L"とばす");
    wSort(wlist);
}
Test Result
wcscmp
L"!##$"
L"123"
L"QWERID"
L"abc"
L"ⓐⓑ"
L"☆★"
L"とばす"
L"一二三"
L"好好"
L"가나"
L"타파"
L"하하"
StrCmpW
L"!##$"
L"☆★"
L"123"
L"ⓐⓑ"
L"abc"
L"QWERID"
L"とばす"
L"가나"
L"一二三"
L"타파"
L"하하"
L"好好"
P.S.: Why the reputation limit?! Limited images, limited URLs.
Writing this as text only takes such a long time.

Unexpected behavior with overloaded methods

I'm a bit confused about Groovy's method overloading behavior: given the class
and tests below, I am pretty okay with testAStringNull and testBStringNull
throwing ambiguous method call exceptions, but why is that not the case for
testANull and testBNull?
And, much more importantly: why does testBNull(null)
call String foo(A arg)? I guess the object doesn't know about the type of the variable it's bound to, but why is that call not ambiguous to Groovy while the others are?
(I hope I explained well enough, my head hurts from generating this minimal
example.)
class Foo {
    static class A {}
    static class B {}

    String foo(A arg) { return 'a' }
    String foo(String s, A a) { return 'a' }
    String foo(B arg) { return 'b' }
    String foo(String s, B b) { return 'b' }
}
Tests:
import org.junit.Test
import Foo.A
import Foo.B

class FooTest {
    Foo foo = new Foo()

    @Test
    void testA() {
        A a = new A()
        assert foo.foo(a) == 'a'
    }

    @Test
    void testAString() {
        A a = new A()
        assert foo.foo('foo', a) == 'a'
    }

    @Test()
    void testANull() {
        A a = null
        assert foo.foo(a) == 'a'
    }

    @Test
    void testAStringNull() {
        A a = null
        assert foo.foo('foo', a) == 'a'
    }

    @Test
    void testB() {
        B b = new B()
        assert foo.foo(b) == 'b'
    }

    @Test
    void testBString() {
        B b = new B()
        assert foo.foo('foo', b) == 'b'
    }

    @Test
    void testBNull() {
        B b = null
        assert foo.foo(b) == 'b'
    }

    @Test
    void testBStringNull() {
        B b = null
        assert foo.foo('foo', b) == 'b'
    }
}
It's a (somewhat little-known) oddity of Groovy's multi-dispatch mechanism, which attempts to invoke the "most appropriate" method, combined with the fact that the declared static type (in your case A or B) is not used as part of the dispatch. When you declare A a = null, what you get is not a null reference of type A, but a reference to NullObject.
Ultimately, to safely handle possibly null parameters to overloaded methods, the caller must cast the argument, as in
A a = null
assert foo.foo('foo', a as A) == 'a'
This discussion on "Groovy Isn't A Superset of Java" may shed some light on the issue.

How to implement Haskell's Maybe construct in D?

I want to implement Maybe from Haskell in D, just for the hell of it.
This is what I've got so far, but it's not that great. Any ideas how to improve it?
class Maybe(a = int){ } // problem 1: works only with ints

class Just(alias a) : Maybe!(typeof(a)){ }

class Nothing : Maybe!(){ }

Maybe!int doSomething(in int k){
    if(k < 10)
        return new Just!3; // problem 2: can't say 'Just!k'
    else
        return new Nothing;
}
Haskell Maybe definition:
data Maybe a = Nothing | Just a
What if you use this:
class Maybe(T){ }

class Just(T) : Maybe!(T){
    T t;
    this(T t){
        this.t = t;
    }
}

class Nothing : Maybe!(){ }

Maybe!int doSomething(in int k){
    if(k < 10)
        return new Just!int(3);
    else
        return new Nothing;
}
Personally, I'd use a tagged union and structs, though (and enforce that it's a Just when getting the value); see the sketch below.
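A rough, untested sketch of that suggestion might look like the following. The names just, nothing, and get are mine, and for simplicity the tag and the payload are kept as separate fields rather than an actual union:

import std.exception : enforce;

struct Maybe(T) {
    private bool isJust;   // the tag: true = Just, false = Nothing
    private T payload;

    static Maybe!T just(T value) {
        Maybe!T m;
        m.isJust = true;
        m.payload = value;
        return m;
    }

    static Maybe!T nothing() {
        return Maybe!T.init;   // tag defaults to false, i.e. Nothing
    }

    // Enforce that this really is a Just before handing out the value.
    T get() {
        enforce(isJust, "tried to get the value of a Nothing");
        return payload;
    }
}

Maybe!int doSomething(in int k) {
    return k < 10 ? Maybe!int.just(3) : Maybe!int.nothing();
}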
Look at std.typecons.Nullable. It's not exactly the same as Maybe in Haskell, but it's a type which optionally holds a value of whatever type it's instantiated with. So, effectively, it's like Haskell's Maybe, though syntactically it's a bit different. The Phobos source is available if you want to look at it.
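For comparison, here is a small sketch (mine, not from the answer above) of how the doSomething function from the question might look when written with std.typecons.Nullable:

import std.typecons : Nullable;

Nullable!int doSomething(in int k)
{
    Nullable!int ret;      // starts out null, i.e. holding no value
    if (k < 10)
        ret = 3;           // assigning a value makes it non-null
    return ret;
}

unittest
{
    auto r = doSomething(5);
    assert(!r.isNull && r.get == 3);
    assert(doSomething(15).isNull);
}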
I haven't used the Maybe library, but something like this seems to fit the bill:
import std.stdio;

struct Maybe(T)
{
    private {
        bool isNothing = true;
        T value;
    }

    void opAssign(T val)
    {
        isNothing = false;
        value = val;
    }

    void opAssign(Maybe!T val)
    {
        isNothing = val.isNothing;
        value = val.value;
    }

    T get() @property
    {
        if (!isNothing)
            return value;
        else
            throw new Exception("This is nothing!");
    }

    bool hasValue() @property
    {
        return !isNothing;
    }
}

Maybe!int doSomething(in int k)
{
    Maybe!int ret;
    if (k < 10)
        ret = 3;
    return ret;
}

void main()
{
    auto retVal = doSomething(5);
    assert(retVal.hasValue);
    writeln(retVal.get);

    retVal = doSomething(15);
    assert(!retVal.hasValue);
    writeln(retVal.hasValue);
}
With some creative operator overloading, the Maybe struct could behave quite naturally. Additionally, I've templated the Maybe struct, so it can be used with any type.
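For example (my own illustration, not part of the answer), two overloads along these lines could be added inside the Maybe struct above so that a Maybe converts to bool in conditions and compares directly against a plain value:

// To be added inside struct Maybe(T) from the answer above.

// Allows "if (maybe) ..." by converting the Maybe to bool.
bool opCast(T2 : bool)()
{
    return hasValue;
}

// Allows "maybe == someValue"; a Nothing never compares equal.
bool opEquals(T rhs)
{
    return hasValue && value == rhs;
}

// Intended usage:
//     auto r = doSomething(5);
//     if (r)              // via opCast!bool
//         assert(r == 3); // via opEquals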
