How do you create public constants in Haxe? I just need the analog of good old const in AS3:
public class Hello
{
public static const HEY:String = "hey";
}
The usual way to declare a constant in Haxe is using the static and inline modifiers.
class Main {
public static inline var Constant = 1;
static function main() {
trace(Constant);
trace(Main.Constant);
}
}
If you have a group of related constants, it can often make sense to use an enum abstract. Values of enum abstracts are static and inline implicitly.
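For example, a minimal sketch (assuming Haxe 4 syntax; the Greeting type and its values are made up for illustration):
enum abstract Greeting(String) to String {
    var Hey = "hey";
    var Hello = "hello";
}

class Main {
    static function main() {
        trace(Greeting.Hey); // hey
    }
}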
Note that only the basic types (Int, Float, Bool) as well as String are allowed to be inline; for other types it will fail with this error:
Inline variable initialization must be a constant value
Luckily, Haxe 4 has introduced a final keyword which can be useful for such cases:
public static final Regex = ~/regex/;
However, final only prevents reassignment, it doesn't make the type immutable. So it would still be possible to add or remove values from something like static final Values = [1, 2, 3];.
For the specific case of arrays, Haxe 4 introduces haxe.ds.ReadOnlyArray which allows for "constant" lists (assuming you don't work around it using casts or reflection):
public static final Values:haxe.ds.ReadOnlyArray<Int> = [1, 2, 3];
Values = []; // Cannot access field or identifier Values for writing
Values.push(0); // haxe.ds.ReadOnlyArray<Int> has no field push
Even though this is an array-specific solution, the same approach can be applied to other types as well. ReadOnlyArray<T> is simply an abstract type that creates a read-only "view" by doing the following:
it wraps Array<T>
it uses @:forward to only expose fields that don't mutate the array, such as length and map()
it allows implicit casts from Array<T>
You can see how it's implemented here.
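For instance, the same pattern could be used to build a hypothetical read-only view over haxe.ds.StringMap (a sketch, not something in the standard library):
import haxe.ds.StringMap;

// Hypothetical ReadOnlyStringMap: wrap the mutable type, use @:forward to
// expose only non-mutating fields, and allow an implicit cast from StringMap.
@:forward(get, exists, keys, iterator)
abstract ReadOnlyStringMap<T>(StringMap<T>) from StringMap<T> {}

class Main {
    static function main() {
        var m = new StringMap<Int>();
        m.set("answer", 42);

        var view:ReadOnlyStringMap<Int> = m; // implicit cast from StringMap
        trace(view.get("answer"));           // 42
        // view.set("answer", 0);            // error: ReadOnlyStringMap<Int> has no field set
    }
}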
For non-static variables and objects, you can give them shallow constness as shown below:
public var MAX_COUNT(default, never):Int = 100;
This means you can read the value in the 'default' way but can 'never' write to it.
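A minimal sketch of how that looks in context (the Config class here is just an illustration):
class Config {
    // Readable from outside, but there is no way to write to it.
    public var MAX_COUNT(default, never):Int = 100;

    public function new() {}
}

class Main {
    static function main() {
        trace(new Config().MAX_COUNT);  // 100
        // new Config().MAX_COUNT = 5;  // compile-time error: no write access
    }
}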
More info can be found at http://adireddy.github.io/haxe/keywords/never-inline-keywords.
Related
I'm using @CompileStatic for the first time, and I'm confused as to how Groovy's map constructors work in this situation.
@CompileStatic
class SomeClass {
Long id
String name
public static void main(String[] args) {
Map map = new HashMap()
map.put("id", 123L)
map.put("name", "test file")
SomeClass someClass1 = new SomeClass(map) // Does not work
SomeClass someClass2 = map as SomeClass // Works
}
}
Given the code above I see the following error when trying to compile
Groovyc: Target constructor for constructor call expression hasn't been set
If @CompileStatic is removed, both constructors work properly.
Can anyone explain why new SomeClass(map) does not compile with @CompileStatic? And, as a possible addition, why does map as SomeClass still work?
Groovy actually does not give you a "map constructor". The constructors in your class are the ones you write down; if there are none (as in your case), there is just the default c'tor.
But what happens if you use the so-called map c'tor (or rather "object construction by map")? Groovy's general approach is this:
create a new object using the default c'tor (this is the reason why construction-by-map no longer works if there is only, e.g., SomeClass(Long id, String name))
then take the passed-in map and apply all of its values to the properties.
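Roughly speaking, those two steps correspond to this sketch under dynamic compilation (an illustration only, not the actual generated code):
class SomeClass {
    Long id
    String name
}

def map = [id: 123L, name: "test file"]

SomeClass someClass = new SomeClass()   // step 1: default c'tor
map.each { key, value ->                // step 2: apply every entry to the matching property
    someClass."$key" = value
}

assert someClass.id == 123L
assert someClass.name == "test file"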
If you disassemble your code (with @CompileDynamic, the default), you see that the construction is handled by CallSite.callConstructor(Object,Object), which boils down to this code area.
Now bring in the version of construction by map that is more familiar to the regular Groovyist:
SomeClass someClass3 = new SomeClass(id: 42L, name: "Douglas")
With the dynamic version of the code, the disassembly of this actually looks a lot like your code with the map: Groovy creates a map from the named param(s) and sends it off to callConstructor - so the same code path is taken (minus the implicit map creation).
For now ignore the "cast-case", as it is actually the same for both static and
dynamic: it will be sent to ScriptBytecodeAdapter.asType which basically
gives you the dynamic behaviour in any case.
Now the @CompileStatic case: as you have witnessed, your call with an explicit map for the c'tor no longer works. This is due to the fact that there never was an explicit "map c'tor" in the first place. The class still only has its default c'tor, and with static compilation groovyc can only work with the things that are actually there (or fail, as in this case, when they aren't).
What about new SomeClass(id: 42L, name: "Douglas") then? This still works with static compilation! The reason is that groovyc unrolls it for you. As you can see, it simply boils down to def o = new SomeClass(); o.setId(42); o.setName('Douglas'):
new #2 // class SomeClass
dup
invokespecial #53 // Method "<init>":()V
astore_2
ldc2_w #54 // long 42l
dup2
lstore_3
aload_2
lload_3
invokestatic #45 // Method java/lang/Long.valueOf:(J)Ljava/lang/Long;
invokevirtual #59 // Method setId:(Ljava/lang/Long;)V
aconst_null
pop
pop2
ldc #61 // String Douglas
dup
astore 5
aload_2
aload 5
invokevirtual #65 // Method setName:(Ljava/lang/String;)V
As the CompileStatic documentation says:
will actually make sure that the methods which are inferred as being
called will effectively be called at runtime. This annotation turns
the Groovy compiler into a static compiler, where all method calls are
resolved at compile time and the generated bytecode makes sure that
this happens
As a result, under static compilation a constructor with a Map argument is searched for in order to "resolve it at compile time", but it is not found, and so there is a compilation error:
Target constructor for constructor call expression hasn't been set
Adding such a constructor solves the issue with the @CompileStatic annotation, since it is resolved at compile time:
import groovy.transform.CompileStatic
@CompileStatic
class SomeClass {
Long id
String name
SomeClass(Map m) {
id = m.id as Long
name = m.name as String
}
public static void main(String[] args) {
Map map = new HashMap()
map.put("id", 123L)
map.put("name", "test file")
SomeClass someClass1 = new SomeClass(map) // Now it works also
SomeClass someClass2 = map as SomeClass // Works
}
}
You can check StaticCompilationVisitor if you want to dig deeper.
Regarding the line
SomeClass someClass2 = map as SomeClass
There you are using the asType() method that Groovy's GDK adds to java.util.Map, so it is resolved at runtime even under static compilation:
Coerces this map to the given type, using the map's keys as the public
method names, and values as the implementation. Typically the value
would be a closure which behaves like the method implementation.
I have a list of KeyValuePairs. I would normally use ToDictionary.
However, I just noticed that the error message (shown below) says something about an explicit cast, which implies I can actually cast the list to a Dictionary<...>. How can I do this?
Cannot implicitly convert type 'System.Linq.IOrderedEnumerable<System.Collections.Generic.KeyValuePair<int,string>>' to 'System.Collections.Generic.Dictionary<int, string>'. An explicit conversion exists (are you missing a cast?)
Sample code:
Dictionary<int, string> d = new Dictionary<int, string>() {
{3, "C"},
{2, "B"},
{1, "A"},
};
var s = d.OrderBy(i => i.Value);
d = s;
Implies I can actually cast list to dictionary
Well, it implies that the cast would be valid at compile-time. It doesn't mean it will work at execution time.
It's possible that this code could work:
IOrderedEnumerable<KeyValuePair<string, string>> pairs = GetPairs();
Dictionary<string, string> dictionary = (Dictionary<string, string>) pairs;
... but only if the value returned by GetPairs() were a class derived from Dictionary<,> which also implemented IOrderedEnumerable<KeyValuePair<string, string>>. It's very unlikely that that's actually the case in normal code. The compiler can't stop you from trying, but it won't end well. (In particular, if you do it with the code in your question and with standard LINQ to Objects, it will definitely fail at execution time.)
You should stick with ToDictionary... although you should also be aware that you'll lose the ordering, so there's no point in ordering it to start with.
To show this with the code in your question:
Dictionary<int, string> d = new Dictionary<int, string>() {
{3, "C"},
{2, "B"},
{1, "A"},
};
var s = d.OrderBy(i => i.Value);
d = (Dictionary<int, string>) s;
That compiles, but fails at execution time as predicted:
Unhandled Exception: System.InvalidCastException: Unable to cast object of type 'System.Linq.OrderedEnumerable`2[System.Collections.Generic.KeyValuePair`2[System.Int32,System.String],System.String]' to type 'System.Collections.Generic.Dictionary`2[System.Int32,System.String]'.
at Test.Main()
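For completeness, the ToDictionary route recommended above would look something like this (a sketch; as noted, the ordering is effectively lost because Dictionary<,> makes no ordering guarantees):
using System.Collections.Generic;
using System.Linq;

class Demo
{
    static void Main()
    {
        var d = new Dictionary<int, string> { { 3, "C" }, { 2, "B" }, { 1, "A" } };

        // Materialize the ordered sequence into a new dictionary.
        // The OrderBy is effectively lost here, since Dictionary<,>
        // makes no guarantees about enumeration order.
        Dictionary<int, string> d2 = d
            .OrderBy(i => i.Value)
            .ToDictionary(kv => kv.Key, kv => kv.Value);
    }
}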
As a bit of background, you can always cast from any interface type to a non-sealed class ("target"), even if that type doesn't implement the interface, because it's possible for another class derived from "target" to implement the interface.
From section 6.2.4 of the C# 5 specification:
The explicit reference conversions are:
...
From any class-type S to any interface-type T, provided S is not sealed and provided S does not implement T.
...
(The case where S does implement T is covered by implicit reference conversions.)
If you try to implicitly convert a value and there's no implicit conversion available, but there is an explicit conversion available, the compiler will give you the error in your question. That means you can fix the compile-time error with a cast, but you need to be aware of the possibility of it failing at execution time.
Here's an example:
using System;
class Test
{
static void Main()
{
IFormattable x = GetObject();
}
static object GetObject()
{
return DateTime.Now.Second >= 30 ? new object() : 100;
}
}
Error message:
Test.cs(7,26): error CS0266: Cannot implicitly convert type 'object' to
'System.IFormattable'. An explicit conversion exists (are you missing a cast?)
So we can add a cast:
IFormattable x = (IFormattable) GetObject();
At this point, the code will work about half the time - the other half, it'll throw an exception.
I don't understand why this code is wrong; I just want to encapsulate void methods in the dictionary.
private delegate void LotIs(string path);
private Dictionary<int, LotIs> lots = new Dictionary<int, LotIs>
{
{0, this.LotIsBanHummer},
{1, this.LotIsDuck},
{2, this.LotIsToy},
{3, this.LotIsDragon},
{4, this.LotIsMoney}
};
private void LotIsBanHummer(string path)
{
lotImage.Image = LB10_VAR7.Properties.Resources.banhammer2;
StreamReader str = new StreamReader(path + "BunHummer.txt");
textBox1.Text = str.ReadToEnd();
textBox3.AppendText(textBox1.Lines[1].Split(' ')[1]);
}
The compiler does not allow you to use this in such an initializer expression because this is assumed to be uninitialized when the expression is evaluated. Remember that such expressions are evaluated before any constructor has been executed.
Within a constructor, the use of this is permitted at any point, even though some fields may not have been initialized yet; there, however, it is your responsibility not to access any uninitialized members.
In your case, therefore, the solution is to initialize your dictionary/add the initial contents in your constructor (or, in the case of several constructors, in a method that you call from each constructor).
From the C# specification:
17.4.5.2 Instance field initialization
A variable initializer for an instance field cannot reference the instance being created. Thus, it is a compile-time error to reference this in a variable initializer, as it is a compile-time error for a variable initializer to reference any instance member through a simple-name.
You can, however, move your initialiser to the constructor.
You can use this in a constructor to populate the Dictionary instead, like this:
private Dictionary<int, LotIs> lots = new Dictionary<int, LotIs>();
public YourClass() {
lots[0] = this.LotIsBanHummer;
lots[1] = this.LotIsDuck;
lots[2] = this.LotIsToy;
lots[3] = this.LotIsDragon;
lots[4] = this.LotIsMoney;
}
If LotIsBanHummer, LotIsDuck etc. are defined as static, then you can keep the field initializer and simply drop the this, as sketched below.
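A minimal sketch of that variant (the Lots class name and the placeholder bodies are made up here; note that the original methods touch instance state such as lotImage and textBox1, so they can only be made static if that state is removed or passed in):
using System.Collections.Generic;

class Lots
{
    private delegate void LotIs(string path);

    // A field initializer may reference static members, so this compiles.
    private Dictionary<int, LotIs> lots = new Dictionary<int, LotIs>
    {
        { 0, LotIsBanHummer },
        { 1, LotIsDuck }
    };

    private static void LotIsBanHummer(string path) { /* placeholder */ }
    private static void LotIsDuck(string path) { /* placeholder */ }
}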
I'm new to D, and I was wondering whether it's possible to conveniently do compile-time-checked duck typing.
For instance, I'd like to define a set of methods, and require that those methods be defined for the type that's being passed into a function. It's slightly different from interface in D because I wouldn't have to declare that "type X implements interface Y" anywhere - the methods would just be found, or compilation would fail. Also, it would be good to allow this to happen on any type, not just structs and classes. The only resource I could find was this email thread, which suggests that the following approach would be a decent way to do this:
void process(T)(T s)
if( __traits(hasMember, T, "shittyNameThatProbablyGetsRefactored"))
// and presumably something to check the signature of that method
{
writeln("normal processing");
}
... and suggests that you could make it into a library template called Implements so that the following would be possible:
struct Interface {
bool foo(int, float);
static void boo(float);
...
}
static assert (Implements!(S, Interface));
struct S {
bool foo(int i, float f) { ... }
static void boo(float f) { ... }
...
}
void process(T)(T s) if (Implements!(T, Interface)) { ... }
Is it possible to do this for functions which are not defined in a class or struct? Are there other/new ways to do it? Has anything similar been done?
Obviously, this set of constraints is similar to Go's type system. I'm not trying to start any flame wars - I'm just using D in a way that Go would also work well for.
This is actually a very common thing to do in D. It's how ranges work. For instance, the most basic type of range - the input range - must have 3 functions:
bool empty(); //Whether the range is empty
T front(); // Get the first element in the range
void popFront(); //pop the first element off of the range
Templated functions then use std.range.isInputRange to check whether a type is a valid range. For instance, the most basic overload of std.algorithm.find looks like
R find(alias pred = "a == b", R, E)(R haystack, E needle)
if (isInputRange!R &&
is(typeof(binaryFun!pred(haystack.front, needle)) : bool))
{ ... }
isInputRange!R is true if R is a valid input range, and is(typeof(binaryFun!pred(haystack.front, needle)) : bool) is true if pred accepts haystack.front and needle and returns a type which is implicitly convertible to bool. So, this overload is based entirely on static duck typing.
As for isInputRange itself, it looks something like
template isInputRange(R)
{
enum bool isInputRange = is(typeof(
{
R r = void; // can define a range object
if (r.empty) {} // can test for empty
r.popFront(); // can invoke popFront()
auto h = r.front; // can get the front of the range
}));
}
It's an eponymous template, so when it's used, it gets replaced with the symbol with its name, which in this case is an enum of type bool. And that bool is true if the type of the expression is non-void. typeof(x) results in void if the expression is invalid; otherwise, it's the type of the expression x. And is(y) results in true if y is non-void. So, isInputRange will end up being true if the code in the typeof expression compiles, and false otherwise.
The expression in isInputRange verifies that you can declare a variable of type R, that R has a member (be it a function, variable, or whatever) named empty which can be used in a condition, that R has a function named popFront which takes no arguments, and that R has a member front which returns a value. This is the API expected of an input range, and the expression inside of typeof will compile if R follows that API, and therefore, isInputRange will be true for that type. Otherwise, it will be false.
D's standard library has quite a few such eponymous templates (typically called traits) and makes heavy use of them in its template constraints. std.traits in particular has quite a few of them. So, if you want more examples of how such traits are written, you can look in there (though some of them are fairly complicated). The internals of such traits are not always particularly pretty, but they do encapsulate the duck typing tests nicely so that template constraints are much cleaner and more understandable (they'd be much, much uglier if such tests were inserted in them directly).
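As a small, concrete illustration of the same pattern applied to a single method like the process example from the question, a custom trait might look like this (a sketch; the names hasProcess, Widget and run are invented here):
import std.stdio;

// A custom trait in the style of isInputRange: true if T has a method
// process that can be called with a string argument.
// (The name and signature are made up for this example.)
template hasProcess(T)
{
    enum bool hasProcess = is(typeof(
    {
        T t = T.init;     // can create a value of type T
        t.process("");    // can call t.process with a string argument
    }));
}

struct Widget
{
    void process(string path) { writeln("processing ", path); }
}

void run(T)(T t)
    if (hasProcess!T)   // static duck typing via the template constraint
{
    t.process("some/path");
}

void main()
{
    run(Widget());                    // Widget satisfies the trait
    static assert(!hasProcess!int);   // int does not
}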
So, that's the normal approach for static duck typing in D. It does take a bit of practice to figure out how to write them well, but that's the standard way to do it, and it works. There have been people who have suggested trying to come up with something similar to your Implements!(S, Interface) suggestion, but nothing has really come of that of yet, and such an approach would actually be less flexible, making it ill-suited for a lot of traits (though it could certainly be made to work with basic ones). Regardless, the approach that I've described here is currently the standard way to do it.
Also, if you don't know much about ranges, I'd suggest reading this.
Implements!(S, Interface) is possible, but it did not get enough attention to make it into the standard library or to get better language support. If I'm not the only one saying it is the way to go for duck typing, we may have a chance of getting it :)
Proof of concept implementation to tinker around:
http://dpaste.1azy.net/6d8f2dc4
import std.traits;
bool Implements(T, Interface)()
if (is(Interface == interface))
{
foreach (method; __traits(allMembers, Interface))
{
foreach (compareTo; MemberFunctionsTuple!(Interface, method))
{
bool found = false;
static if ( !hasMember!(T, method) )
{
pragma(msg, T, " has no member ", method);
return false;
}
else
{
foreach (compareWhat; __traits(getOverloads, T, method))
{
if (is(typeof(compareTo) == typeof(compareWhat)))
{
found = true;
break;
}
}
if (!found)
{
return false;
}
}
}
}
return true;
}
interface Test
{
bool foo(int, double);
void boo();
}
struct Tested
{
bool foo(int, double);
// void boo();
}
pragma(msg, Implements!(Tested, Test)());
void main()
{
}
I was wondering if there is an established convention for specifying fixed-point binary numbers in decimal format (with the use of a macro). I am not sure if this is possible in C/C++, but perhaps it is implemented in some language(s) and there is a notational standard like 0x000000, 1.2f, 1.2d, 1l, etc.
Take this example: I am using Q15.16, but would like to have the convenience of specifying numbers in decimal format, perhaps something like this:
var num:Int32=1.2fp;
Presumably, the easiest way with regard to Haxe macros is that numbers can be initialized with a function:
@:macro
fp_from_float(1.2);
But it would be nice to have a shorthand notation.
Have you seen Luca's Fixed Point example with Haxe 3 and Abstracts?
It's here:
https://groups.google.com/forum/?fromgroups=#!topic/haxelang/JsiWvl-c0v4
Summing it up, with the new Haxe 3 abstract types, you can define a type that will be compiled as an Int:
abstract Fixed16(Int)
{
inline function new(x:Int) this = x;
}
You can also define "conversion functions", which will allow you to automatically convert a float into Fixed16:
@:from public static inline function fromf(x:Float) {
#if debug
if (x >= 32768.0 || x < -32768.0) throw "Conversion to Fixed16 will overflow";
#end
return new Fixed16(Std.int(x*65536.0));
}
The secret here is the @:from metadata. With this code, you will already be able to declare fixed types like this:
var x:Fixed16 = 1.2;
Luca has already defined some operators to make working with them easier, like:
@:op(A+B) public inline static function add(f:Fixed16, g:Fixed16) {
#if debug
var fr:Float = f.raw();
var gr:Float = g.raw();
if (fr+gr >= 2147483648.0 || fr+gr < -2147483648.0) throw "Addition of Fixed16 values will overflow";
#end
return new Fixed16(f.raw()+g.raw());
}
Again, the secret here is the @:op(A+B) metadata, which indicates that this function may be called when handling addition. The complete gist is available at https://gist.github.com/deltaluca/5413225, and you can learn more about abstracts at http://haxe.org/manual/abstracts.
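Putting the pieces together, a stripped-down, self-contained sketch of the idea (without the overflow checks, and with a toFloat helper added here just for output; this is not Luca's exact code) might look like this:
abstract Fixed16(Int) {
    inline function new(x:Int) this = x;

    public inline function raw():Int return this;
    public inline function toFloat():Float return this / 65536.0;

    // Implicit conversion from Float, as above.
    @:from public static inline function fromf(x:Float):Fixed16
        return new Fixed16(Std.int(x * 65536.0));

    // Addition operator, as above.
    @:op(A + B) public static inline function add(f:Fixed16, g:Fixed16):Fixed16
        return new Fixed16(f.raw() + g.raw());
}

class Main {
    static function main() {
        var x:Fixed16 = 1.2;      // implicit @:from conversion from Float
        var y:Fixed16 = 2.3;
        trace((x + y).toFloat()); // roughly 3.5
    }
}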