Creating libraries that can be imported and used in Groovy - groovy

Currently, I am working on a project to transpile my company's in-house scripting language, which is object-oriented and takes quite a few features from other languages, into Groovy, which has many similar features.
To keep the code as close to the original as possible, I am trying to leave certain function names and parameters the same. To cater for this, I would like to write a set of libraries that can be imported.
For example, say I have a built-in method in the original scripting language. I would like to be able to write the definition for this method in a Groovy file that can then be imported when needed, so the method can be called.
Tools.groovy
// filename: Tools.groovy
public String foo(String bar) {
    return bar;
}
and in another file
Main.groovy
// filename: Main.groovy
import Tools;
String bat = foo("bar")
I know you can compile class files into jars and put them on the classpath, but a lot of the methods I will need to implement will either require metaprogramming or won't be associated with an object.
Sorry if it's either a bad question or not clear enough. I'm not sure whether it's even possible.
Cheers

I believe you should be able to create libraries and reuse them when needed.
All you need to do is create a class and add static methods if you do not need to create instances, or non-static methods otherwise. It looks like you are already aware of how to proceed from there.
For instance, you can create utilities classes for String, List, etc based on your description.
By the way, even if you do not create libraries, it is often possible to write one-liners in Groovy to achieve what you need in most of the cases.
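For instance, a minimal sketch of that approach (the mytools package and the names below are just placeholders echoing the question):
// filename: mytools/Tools.groovy
package mytools

class Tools {
    // static, so no Tools instance is needed at the call site
    static String foo(String bar) {
        return bar
    }
}
// filename: Main.groovy
// a static import keeps the call site identical to the original language
import static mytools.Tools.foo

String bat = foo("bar")
println bat
Running groovy -cp . Main.groovy should then work as long as mytools/Tools.groovy sits under the classpath root, since Groovy can compile sources it finds on the classpath on demand.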

Related

Why should I import "java.util.*" in top of my code?

My question is that I want to use a Scanner object or the Arrays class, for example for Arrays.copyOf, but before importing java.util.* or java.util.Scanner and java.util.Arrays there is no such object available to use.
Why does this happen to me?
It is a little unclear what you want to achieve. Are you asking why you should do imports in Java?
import statements allow you to refer to classes which are declared in other packages without referring to their full package name. This is standard Java practice. Inside your main(String[] args) you can use java.util.Scanner myobject = new .. if you want, but that is too cumbersome, and often you have multiple classes in a package that you want to use. So adding a simple import java.util.* is considered better.
Another tip is to use one of the large number of available IDEs (Eclipse, Sublime, IntelliJ), which will add the imports automatically for you.

How do I get haxe to generate externs?

I am writing haxe code which I want to compile to an arbitrary target as a module and then use the results from another module compiled for this same target. I don’t want to handle this the “Haxe way” (which is to fully inline all libraries at compiletime). Instead I want to be able to write distinct Haxe modules and reference them with full type safety without inlining between the modules. The natural way to do this would be to have both source Haxe files and a separate directory of “headers” filled with extern describing the public API of my module, with these externs somehow automatically generated so that they don’t need to be manually maintained.
I cannot figure out how to get Haxe to emit externs. It would make sense to me if haxe-externs were an actual “target platform” so that I could do something like:
$ haxe ClassName -hxe externsoutdir
It would make less sense but still be acceptable if one of the -D flags like -D dump (which seems to sort of get one part of the way there) or some imaginary, nonexistent -D dump-externs existed. Then you could generate externs while compiling to your favorite target:
$ haxe ClassName -js outfile.js -D shallow-expose -D dump-externs=externsoutdir
The idea is to take a class definition like this:
@:expose
class ClassName {
    function quack() {
        trace('quack');
    }
}
and emit something like this in a separate directory:
extern class ClassName {
    function quack():Void;
}
so that I can consume it from another module like this:
@:expose
class MyClassName extends ClassName {
    override function quack() {
        super.quack();
        trace('…and again I say “quack”');
    }
}
$ haxe -cp path\to\externsoutdir MyClassName -js outfile.js -D shallow-expose
It would only make sense to generate externs for things decorated with @:expose or some other decorator.
I will figure out how to wrap the emitted modules to load each other correctly. That’s easy. The hard part is generating the extern definitions—shouldn’t Haxe already have a way to do this?
Is there already some tool or built-in way I’m missing to do this? When Googling, all I see are projects that supposedly help with generating externs for existing JavaScript libraries. But that’s not my use case…
Update: --gen-hx-classes was removed sometime around Haxe 4.0.0-rc3. Apparently the functionality still exists secretly as -D gen-hx-classes, but beware: if you rely on this, it seems like it's going away.
I believe the --gen-hx-classes option might be what you're looking for. Oddly, I don't see it in the compiler flags list.
I use it in a modular JavaScript build system that is similar to what you're talking about.
I believe it creates a directory of .hx files that are externs for every class generated by the build (including those from the Haxe standard library.) Actually, getting duplicates of the classes in the standard library may be a problem you will face.
You may also need to use @:keep (or the related macro) to ensure dead code elimination doesn't remove things the other build will need.
You might also need to exclude a class from one or the other builds, e.g. --macro 'exclude("haxe.io.Input")' (or, excludeFile is actually more performant for a whole list of exclusions.)

Get a list of all the arguments a constructor takes

Is it possible to get a list of all the arguments a constructor takes?
With the names and types of the parameters?
I want to automatically check the values of a JSON are good to use for building their equivalent as a class instance.
Preferably without macros... I have built a few, but I still find them quite confusing.
Must work with Neko and JS, if that matters.
Thanks.
I think you want to look at Runtime Type Information (rtti)
From the Haxe Manual: the Haxe compiler generates runtime type information (RTTI) for classes that are annotated, or extend classes that are annotated, with the @:rtti metadata. This information is stored as an XML string in a static field __rtti and can be processed through haxe.rtti.XmlParser. The resulting structure is described in RTTI structure.
Alternatively, if you want to go with macros, this might be a good start:
http://code.haxe.org/category/macros/add-parameters-as-fields.html

How to prevent dead-code removal of utility libraries in Haxe?

I've been tasked with creating conformance tests of user input; the task is fairly tricky and we need very high levels of reliability. The server runs on PHP, the client runs on JS, and I thought Haxe might reduce duplicated work.
However, I'm having trouble with dead-code removal. Since I am just creating helper functions (utilObject.isMeaningOfLife(42)), I don't have a main program that calls each one. I tried adding @:keep to a utility class, but it was cut out anyway.
I tried to specify that utility class through the -main switch, but I had to add a dummy main() method and this doesn't scale beyond that single class.
You can force all the files defined in a given package and its sub-packages to be included in the build using a compiler argument.
haxe --macro include('my.package') ..etc
This is a shortcut to the macro.Compiler.include function.
As you can see, the signature of this function allows you to include recursively and also to exclude packages.
static include (pack:String, rec:Bool = true, ?ignore:Array<String>, ?classPaths:Array<String>):Void
I think you don't have to use @:keep in that case for each library class.
I'm not sure if this is what you are looking for, I hope it helps.
Otherwise, these could be helpful checks:
Is it bad that the code is cut away if you don't use it?
It could also be the case that some code is inlined in the final output.
Compile your code using the compiler flag -dce std as mentioned in comments.
If you are using the static analyzer, try disabling it.
Add @:keep and reference the class+function somewhere.
Otherwise, provide a minimal setup if you can reproduce the issue.

Is there any global flag for Groovy static compilation?

I know that since Groovy 2.0 there are annotations for static compilation.
However, it's easy to omit such an annotation by accident and still run into trouble.
Is there any way to achieve the opposite compiler behaviour, i.e. compile all project files statically by default and compile dynamically only the files chosen on purpose, with some kind of @CompileDynamic annotation for example?
I have found a (I believe recently introduced) feature which allows doing so with Gradle.
In the build.gradle file for the project containing Groovy sources we need to add the following lines:
compileGroovy {
    configure(groovyOptions) {
        configurationScript = file("$rootDir/config/groovy/compiler-config.groovy")
    }
}
or compileTestGroovy { ... } to apply the same to test sources. Keep in mind that neither static compilation nor type checking works well with the Spock Framework though. Spock by its nature utilizes dynamic 'groovyness' a lot.
Then at the root of the project create the folder config/groovy/ and a file named compiler-config.groovy within it. The content of the file is as follows:
import groovy.transform.CompileStatic
withConfig(configuration) {
    ast(CompileStatic)
}
Obviously, the path and name of the configurationScript may vary; that's up to you. It rather shouldn't go into the same src/main/groovy though, as that would mix totally separate concerns.
The same may be done with groovy.transform.TypeChecked or any other annotation, of course.
To reverse the applied behaviour on certain classes or methods, the @CompileDynamic annotation or @TypeChecked(TypeCheckingMode.SKIP), respectively, may be used, as in the sketch below.
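For example, a small sketch of opting a single method back out of the globally applied static compilation (the class and method names are made up for illustration):
import groovy.transform.CompileDynamic

class Loader {
    // the configuration script above applies CompileStatic to everything by default;
    // @CompileDynamic switches this one method back to dynamic compilation
    @CompileDynamic
    def readName(config) {
        // dynamic property access that static type checking could not verify
        config.name
    }
}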
I'm not sure how to achieve the same when Gradle is not used as the build tool. I may update this answer with such info in the future though.
Not at this time, but there is an open Jira issue here that you can follow to watch progress on this feature.
There was also a discussion about methods for doing this on the Groovy developers list
