Groovy - extensions structure

I'd like to extend String's asType method to handle LocalDateTime. I know how to override this method, but I have no idea where to put it in the project structure so that it works globally - for all strings in my project. Is it enough to put such an extension anywhere on the classpath? I know there's a special convention for extensions (META-INF/services); how does it work for method overriding?

All documentation regarding this topic can be found in the official Groovy documentation; the relevant part, on extension modules, is quoted below.
Module extension and module descriptor
For Groovy to be able to load your extension methods, you must declare
your extension helper classes. You must create a file named org.codehaus.groovy.runtime.ExtensionModule in the META-INF/services directory:
moduleName=Test module for specifications
moduleVersion=1.0-test
extensionClasses=support.MaxRetriesExtension
staticExtensionClasses=support.StaticStringExtension

The module descriptor requires 4 keys:
moduleName : the name of your module
moduleVersion: the version of your module. Note that version number is
only used to check that you don’t load the same module in two
different versions.
extensionClasses: the list of extension helper classes for instance
methods. You can provide several classes, given that they are comma
separated.
staticExtensionClasses: the list of extension helper classes for
static methods. You can provide several classes, given that they are
comma separated.
Note that it is not required for a module to define both static
helpers and instance helpers, and that you may add several classes to
a single module. You can also extend different classes in a single
module without problem. It is even possible to use different classes
in a single extension class, but it is recommended to group extension
methods into classes by feature set.
Module extension and classpath
It’s worth noting that you can’t use an extension which is compiled at
the same time as code using it. That means that to use an extension,
it has to be available on classpath, as compiled classes, before the
code using it gets compiled. Usually, this means that you can’t have
the test classes in the same source unit as the extension class
itself. Since in general, test sources are separated from normal
sources and executed in another step of the build, this is not an
issue.
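Applied to the original question about String's asType and LocalDateTime, a minimal sketch could look like the following (the class, package, module and file names are made up for illustration, and whether an extension method actually takes precedence over the built-in String asType coercion can depend on the Groovy version, so treat it as a starting point to verify rather than a guaranteed override):

src/main/groovy/support/StringLocalDateTimeExtension.groovy:

package support

import java.time.LocalDateTime
import org.codehaus.groovy.runtime.StringGroovyMethods

class StringLocalDateTimeExtension {
    // Instance extension method: the first parameter is the receiver (the String being coerced).
    static Object asType(String self, Class clazz) {
        if (clazz == LocalDateTime) {
            return LocalDateTime.parse(self)           // expects ISO-8601 input, e.g. '2015-06-01T10:15:30'
        }
        return StringGroovyMethods.asType(self, clazz) // fall back to the standard String coercions
    }
}

src/main/resources/META-INF/services/org.codehaus.groovy.runtime.ExtensionModule:

moduleName=string-localdatetime-module
moduleVersion=1.0
extensionClasses=support.StringLocalDateTimeExtension

As the classpath note above explains, this pair has to be compiled and available on the classpath (for example from a separate subproject or jar) before the code that relies on the coercion gets compiled.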

Related

Disable react-jsx per file in a ReasonReact project

Is there a way to disable react-jsx transformation in some files of a ReasonReact project?
I think the other way around is possible by not adding "reason": { "react-jsx": 3 } to bsconfig.json and by adding ##bs.config({jsx: 3}) to the top of the files where you want react-jsx transformation, but that would force me to add this annotation in too many files.
I'd like to build a small DSL based on JSX in a few files while benefiting from React in the rest of my project.
Note: the solution suggested is not very straightforward, and I think it's much simpler to add ##bs.config annotations explicitly in all required files, but if you really don't want to do that, the following might work.
If I'm reading the compiler code correctly, user-defined ppxs are applied before the ReasonReact ppx. In the linked compiler module, Cmd_ppx_apply.apply_rewriters applies all the rewriters passed with the -ppx flag, and Ppx_entry.rewrite_implementation is the ReasonReact ppx.
Assuming that's true, one could have a ppx that checks for a top-level statement like ##custom.jsx at the top of the file. The ReasonReact ppx used to have a similar check, in case it serves as a reference.
Then if this statement is found, the custom ppx would process the nodes that have the #JSX attributes and make sure it removes the attributes from them, so when the compiler passes the AST to ReasonReact ppx, it won't see them.
Note this would break if the ReScript ppx pipeline is one day updated to a driver-based one (unlikely, I'd say, because that would mean ReScript would have to support native libraries as first-class citizens somehow), or if the ordering mentioned above changes (the ReasonReact ppx applying before user-defined ones).

Get a list of all the arguments a constructor takes

Is it possible to get a list of all the arguments a constructor takes?
With the names and types of the parameters?
I want to automatically check whether the values of a JSON object are good to use for building the equivalent class instance.
Preferably without macros... I have built a few, but I still find them quite confusing.
It must work with Neko and JS, if that matters.
Thanks.
I think you want to look at Runtime Type Information (RTTI).
From the Haxe Manual: The Haxe compiler generates runtime type information (RTTI) for classes that are annotated or extend classes that are annotated with the @:rtti metadata. This information is stored as an XML string in a static field __rtti and can be processed through haxe.rtti.XmlParser. The resulting structure is described in RTTI structure.
Alternatively, if you want to go with macros, this might be a good start:
http://code.haxe.org/category/macros/add-parameters-as-fields.html

RequireJS - When specify module id in define()

In the RequireJS documentation (http://requirejs.org/docs/api.html#modulename), I couldn't understand this sentence:
You can explicitly name modules yourself, but it makes the modules less portable
My questions are:
Why does explicitly naming a module make it less portable?
When is explicitly naming a module needed?
Why does explicitly naming a module make it less portable?
If you do not give the module a name explicitly, RequireJS is free to name it whichever way it wants, which gives you more freedom regarding the name you can use to refer to the module. Let's say you have a module in the file bar.js. You could give RequireJS this path:
paths: {
    "foo": "bar"
}
And you could load the module under the name "foo". If you had given a name to the module in the define call, then you'd be forced to use that name. An excellent example of this problem is jQuery. It so happens that the jQuery developers have decided (for no good reason I can discern) to hardcode the module name "jquery" in the code of jQuery. Once in a while someone comes on SO complaining that their code won't work and their paths configuration has this:
paths: {
    jQuery: "path/to/jquery"
}
This does not work because of the hardcoded name. The paths configuration has to use the name "jquery", all lower case. (A map configuration can be used to map "jquery" to "jQuery".)
When is explicitly naming a module needed?
It is needed when there is no other way to name the module. A good example is r.js when it concatenates multiple modules together into one file. If the modules were not named during concatenation, there would be no way to refer to them. So r.js adds explicit names to all the modules it concatenates (unless you tell it not to do it or unless the module is already named).
Sometimes I use explicit naming for what I call "glue" or "utility" modules. For instance, suppose that jQuery is already loaded through a script element before RequireJS but I also want my RequireJS modules to be able to require the module jquery to access jQuery rather than rely on the global $. If I ever want to run my code in a context where there is no global jQuery to get, then I don't have to modify it for this situation. I might have a main file like this:
define('jquery', function () {
    return $;
});

require.config({ ... });
The jquery module is there only to satisfy modules that need jQuery. There's nothing gained by putting it into a separate file, and to be referred to properly, it has to be named explicitly.
Here's why named modules are less portable, from SitePen's "AMD: The Definitive Source":
AMD is also “anonymous”, meaning that the module does not have to hard-code any references to its own path, the module name relies solely on its file name and directory path, greatly easing any refactoring efforts.
http://www.sitepen.com/blog/2012/06/25/amd-the-definitive-source/
And from Addy Osmani's "Writing modular javascript":
When working with anonymous modules, the idea of a module's identity is DRY, making it trivial to avoid duplication of filenames and code. Because the code is more portable, it can be easily moved to other locations (or around the file-system) without needing to alter the code itself or change its ID. The module_id is equivalent to folder paths in simple packages and when not used in packages. Developers can also run the same code on multiple environments just by using an AMD optimizer that works with a CommonJS environment such as r.js.
http://addyosmani.com/writing-modular-js/
Why one would need an explicitly named module, again from Addy Osmani's "Writing modular javascript":
The module_id is an optional argument which is typically only required when non-AMD concatenation tools are being used (there may be some other edge cases where it's useful too).

How to create a TypeScript Nodejs module spanning multiple files

I'm trying to create a Node.js module with a single namespace using TypeScript. I need the classes of the module to be in separate files, and have references to each other. I also need this module to utilize other node modules. But if I understand right, TypeScript only supports namespaces if a namespace is contained within a single file or the namespace does not use external modules.
Some people present the use of post-build tools to make the final module work, which is nice but doesn't address all the errors TypeScript throws when combining cross-file namespaces and imports during development.
Is it true that the closest solution is to create a module per file, and create a web of inter-imports?
I find that what works best for me for module dependencies is to always use source references, e.g.:
/// <reference path="other.ts" />
Since you can't generate single-file output from tsc if you use language-level imports/modules, I have resorted to using require as a function and so on.
This seems to be the same solution chosen by the TypeScript developers themselves (search for "require(" there). As suggested by silentorb's comment, you can declare the require function as some variation of the following (or use DefinitelyTyped):
declare function require(name: string): any;
Note that there's often a lot of discussion on the whole modularity topic in TS over on the TS forums (one recent example).

Is there any global flag for Groovy static compilation?

I know that since Groovy 2.0 there are annotations for static compilation.
However, it's easy to omit such an annotation by accident and still run into trouble.
Is there any way to achieve the opposite compiler behaviour: compile all project files statically by default and compile dynamically only files chosen on purpose, with some kind of @CompileDynamic annotation, for example?
I have found a feature (I believe recently introduced) which allows doing so with Gradle.
In the build.gradle file for the project containing the Groovy sources we need to add the following lines:
compileGroovy {
    configure(groovyOptions) {
        configurationScript = file("$rootDir/config/groovy/compiler-config.groovy")
    }
}
or compileTestGroovy { ... } for applying the same to test sources. Keep in mind that neither static compilation nor type checking works well with the Spock Framework, though; Spock by its nature utilizes dynamic 'groovyness' a lot.
Then, in the root of the project, create the folder config/groovy/ and a file named compiler-config.groovy within. The content of the file is as follows:
import groovy.transform.CompileStatic

withConfig(configuration) {
    ast(CompileStatic)
}
Obviously the path and name of the configurationScript may vary; that's up to you. It shouldn't go into src/main/groovy, though, as that would mix totally separate concerns.
The same may be done with groovy.transform.TypeChecked or any other annotation, of course.
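For instance, a type-checking-only variant of the same configuration script (a sketch derived from the snippet above) would be:

import groovy.transform.TypeChecked

withConfig(configuration) {
    ast(TypeChecked)
}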
To reverse the applied behaviour on certain classes or methods, the @CompileDynamic annotation or @TypeChecked(TypeCheckingMode.SKIP), respectively, may be used.
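As a minimal sketch of that opt-out (the class and method names are made up for illustration), with the global CompileStatic configuration above in place:

import groovy.transform.CompileDynamic

class ReportService {               // compiled statically thanks to the global configuration script
    String staticGreeting(String name) {
        "Hello, $name"              // type-checked at compile time
    }

    @CompileDynamic                 // this single method opts back into dynamic Groovy
    def buildQueryString(Map args) {
        args.collect { k, v -> "$k=$v" }.join('&')
    }
}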
I'm not sure how to achieve the same when Gradle is not used as the build tool. I may update this answer in the future with such info, though.
Not at this time, but there is an open Jira issue you can follow to watch progress on this feature.
There was also a discussion about methods for doing this on the Groovy developers list.
