Groovy added the --indy option in version 2.0, back in 2012. It wasn't the default at the time because invokedynamic requires Java 7, and many people were still on Java 6.
Even the forthcoming Groovy 3.0 still requires the --indy option to force the use of invokedynamic, despite the fact that Groovy 3.0 requires Java 8 or later.
Is there any technical advantage to keeping non-indy compilation, and non-indy runtime libraries, as the default? I would have thought there's no need for a non-indy option at all nowadays.
Having --indy as the default is on the roadmap for Groovy 3.0 (currently in alpha). The team wanted feedback on the new parser, so they didn't wait for all features to be available before releasing alpha versions.
The Groovy 3.0 compiler will likely keep a non-indy option of some kind available for a version or two to assist people wanting to recompile old libraries and produce like-for-like bytecode.
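For reference, switching between the two modes is just a compiler flag; in Groovy 2.x you also need the matching "indy" variants of the Groovy jars on the runtime classpath:

# default: non-indy bytecode (call-site caching)
groovyc MyScript.groovy

# invokedynamic bytecode; needs Java 7+ and the indy jars at run time
groovyc --indy MyScript.groovy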
Currently, there are some primitive-handling optimisations in play when producing non-indy bytecode. Very early benchmarks (on quite old JVMs by now) showed some performance regressions, since the indy bytecode didn't have those same optimisations. Also on the roadmap for 3.0 is to revisit performance in those specific cases and consider possible optimisations if they are still needed.
Whether some non-indy jars will still be required for a version or two depends on other parallel changes to remove some legacy classes that aren't really needed in the indy case, but which all existing libraries written in Groovy currently require in order to run. That will be detailed in the documentation and release notes once finalised.
There are some more details in [1].
[1] http://groovy.markmail.org/thread/yxeflplf5sr2wfqp
Let's say I have been working on a Haskell library and am now ready to release a beta version of the software: upload it to Hackage, make the repo public on GitHub, and so on.
Possible Solutions and why they do not work for me
Use packagename-0.0.0.1-alpha or similar.
The problem here is quite simple: the Haskell PVP specification does not allow it (emphasis mine):
The components of the version number MUST be numbers! Historically Cabal supported version numbers with string tags at the end, e.g. 1.0-beta. This proved not to work well, because the ordering for tags was not well defined. Version tags are no longer supported and are mostly ignored; however, some tools will fail in some circumstances if they encounter them.
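Concretely, a .cabal file today only accepts dotted numeric components; the package name below is a placeholder:

name:    packagename
version: 0.0.0.1
-- version: 1.0-beta  -- no longer valid: string tags are not supported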
Just use packagename-0.* until it is out of alpha/beta (and then use packagename-1.*).
The problem here is twofold:
This method would not work for describing release candidates that come after version 1.
Programmers from other ecosystems, such as Rust's, where it is quite common for a stable library to sit at 0.*, might wrongly assume that this library is stable. (Of course, this could be mitigated somewhat with a warning in the README, but I would still prefer a better solution.)
So, what is the best (and most conventional in Haskell) way to indicate that a library is in the alpha/beta stage of development, or is a release candidate?
As far as I know, there is no package-wide way to say this. However, you can add a module description that states the stability of the module to each of your modules' documentation.
{-|
Stability: experimental
-}
module PackageName.ModuleName where
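If you follow Haddock's full module header convention, the stability note sits alongside the other metadata fields (field names as recognised by Haddock; the values below are placeholders):

{-|
Module      : PackageName.ModuleName
Description : One-line summary of what the module does
Stability   : experimental
Portability : portable
-}
module PackageName.ModuleName where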
Using Maven, there are a couple of plugins that support generating JAXB classes from XSD, e.g. org.codehaus.mojo:jaxb2-maven-plugin and org.jvnet.jaxb2.maven2:maven-jaxb2-plugin.
The newest versions of those have dependencies on e.g. org.glassfish.jaxb:jaxb-xjc and org.glassfish.jaxb:jaxb-runtime (in version 2.2.11).
But I wonder what would happen if I used those to generate my classes from XSD but used only JDK 8 (which bundles version 2.2.8) at runtime: wouldn't there be a risk of runtime errors? So is it necessary, or recommended, to always use the jaxb-runtime version corresponding to the jaxb-xjc version I used to generate my classes from XSD?
Of course, I could simply override the dependencies on jaxb-xjc etc. and explicitly use version 2.2.8. But even then, I wonder whether I would get the same result as if I had used the JDK 8 xjc tool directly.
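(For what it's worth, such an override would be an ordinary plugin-level dependency in the pom. The sketch below is illustrative only; the exact coordinates to pin depend on the plugin, and 2.2.8-era artifacts were published under the older com.sun.xml.bind group id rather than org.glassfish.jaxb.)

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>jaxb2-maven-plugin</artifactId>
  <dependencies>
    <!-- illustrative: pin XJC to the JAXB level bundled with JDK 8 -->
    <dependency>
      <groupId>com.sun.xml.bind</groupId>
      <artifactId>jaxb-xjc</artifactId>
      <version>2.2.8</version>
    </dependency>
  </dependencies>
</plugin>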
You have three phases:
(1) generation of the schema-derived code
(2) compilation of the schema-derived code
(3) runtime
The most important thing is that the JAXB API you use to compile (2) is compatible with the JAXB API you use at runtime (3). If it is not, you might compile code that uses an annotation which is later not available at runtime, and you will only see the error at runtime.
As for (1) vs. (2), compatibility is also necessary. If you generate with JAXB 2.2.x and compile against JAXB 2.1.x, it will not necessarily work. But this is less critical, as it will show up as a compilation error which you will be forced to correct.
So if your problem is just the JAXB version used by the maven-jaxb2-plugin vs. the JAXB version embedded in the JDK, I wouldn't worry about it. As long as it compiles, you're as safe as you can ever be.
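If you are ever unsure which JAXB runtime actually answers at run time, a quick diagnostic is to ask the context itself (SomeGeneratedClass is a placeholder for one of your schema-derived classes):

import javax.xml.bind.JAXBContext;

public class JaxbRuntimeCheck {
    public static void main(String[] args) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(SomeGeneratedClass.class);
        // the implementation class name reveals which runtime is in use
        System.out.println(ctx.getClass().getName());
        // a null code source typically means the class came from the JDK itself
        System.out.println(ctx.getClass().getProtectionDomain().getCodeSource());
    }
}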
On the ANTLR download page it states that the latest version of ANTLR is 4.4. In the C# Target section of the same page, clicking "ANTLR 4 C# Target (Latest Release)" brings me to the 4.3 Target Release GitHub page, which has a link to Readme.md that, when clicked, results in a 404.
Question 1: Although the download page states that the latest version for C# is 4.4, the version I get via NuGet is 4.3. Does this mean 4.4 isn't available for C#?
Question 2: Where do I find the tools for code generation that correspond to the version I got from NuGet (that is, Antlr 4.3)?
We attempted to use antlr-4.4-complete.jar for code generation: we substituted that jar for the previous one (antlr4-csharp-4.0.1-SNAPSHOT-complete.jar) in our build script, and now we get "error(31): ANTLR cannot generate CSharp_v4_5 code as of version 4.4" (which we didn't get previously). We also tried antlr-4.3-complete.jar, with similar results.
What do we need to take advantage of the latest release?
First of all, I corrected the link to the Readme.md in the release notes. Thanks for pointing it out, although a more reliable way to notify the maintainer is to file an issue directly for the project.
Second, the C# target is not based on the version of ANTLR posted on antlr.org, but instead on a fork of the project I created to optimize performance and (especially) memory overhead associated with parsing highly complex grammars. The tools use different serialization formats and are not interchangeable.
The C# code generator is distributed via NuGet, as described in the readme file.
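Assuming the package id used by the C# target at the time of writing, pulling in both the MSBuild code-generation integration and the runtime is a single package-manager command (the Antlr4 package depends on Antlr4.Runtime):

PM> Install-Package Antlr4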
The primary differences between ANTLR 4.4 and ANTLR 4.3 are the following:
Inclusion of additional targets (irrelevant for the C# target, since those runtime libraries are not C# and use the other serialization format anyway)
A bug fix in the tool with minimal effect on users (a specific type of grammar error used to throw an exception instead of being reported at code-generation time)
A fix for a bug that occurs when an unknown target is specified (also not applicable to the C# target, since the MSBuild integration automatically selects the correct target language)
Based on this, the 4.3 release of the C# target is functionally equivalent to 4.4. I'm waiting to release a "4.4" version until I can address other performance concerns and functionality which doesn't apply to the reference version. In particular, I'm working on the following:
Improving concurrency by reducing contention (sharwell/antlr4#13)
Supporting indirect left recursion (currently a work-in-progress in the indirect-lr and java8-grammar branches)
Supporting a new baseContext option, shown here for a Java 8 grammar
Is there any way to check your Facelets files for errors during the build or deployment process?
I am not looking for a solution that simply validates the Facelets files against their schema, but one that also verifies that EL expressions are valid. For example, if the name of a property or method is misspelled in an EL expression (e.g. value="#{controller.nme}" instead of value="#{controller.name}"), this is currently discovered only during testing at run time.
I am using JBoss 7.1.
Theoretically, Eclipse plugins like WTP and JBoss Tools can do this, but as of today those only work inside the full IDE, not as a separate command-line tool that can be invoked by Ant or Maven.
Worse, those tools have never been perfect. They always report tons of false positives, and as a rule of thumb their validation algorithms are usually years behind. The current version of WTP probably just barely validates everything from Java EE 5 (and maybe it still misses some obscure features).
As a result, if you fail your build based on this validation, you'll probably never be able to deploy anything. Even in the most carefully coded and fully correct web apps, WTP and JBoss Tools find it necessary to report hundreds (in large projects, thousands) of warnings and errors. IMHO it's completely useless to depend on.
This is a sort of a chicken/egg problem. As you said yourself, many EL expressions can only be evaluated at run time.
Keep in mind that EL includes a whole lot more than simple property and method names: it has various implicit objects (params, facesContext, session, etc.) that are available in different contexts, and you can also add your own objects in many different ways (other Facelets templates, beans that may or may not be registered in faces-config, and even plain Java code inserting objects into the view).
All these things make it very hard to build tooling that does this kind of checking for you. I think the closest thing to what you want would be to create your own JSF unit tests for each page using JSFUnit and Arquillian and integrate them into your build. As you are targeting JBoss 7, that should be feasible.
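A rough sketch of what such a per-page test could look like with JSFUnit's session API (class names per JSFUnit 2.x; the page path is a placeholder, and with Arquillian you would add the usual @Deployment method and run it in-container):

import static org.junit.Assert.assertEquals;

import org.jboss.jsfunit.jsfsession.JSFServerSession;
import org.jboss.jsfunit.jsfsession.JSFSession;
import org.junit.Test;

public class PageSmokeTest {

    @Test
    public void indexPageRenders() throws Exception {
        // runs the page through the full JSF lifecycle, so a typo like
        // #{controller.nme} fails here rather than in front of a user
        JSFSession jsfSession = new JSFSession("/index.xhtml");
        JSFServerSession server = jsfSession.getJSFServerSession();
        assertEquals("/index.xhtml", server.getCurrentViewID());
    }
}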
I'm currently writing an embedded application for J2ME environment (CLDC 1.1 configuration and IMP-NG profile). Being spoiled by all those new features in JVM-based languages (Groovy, Scala, Clojure, you name it), I was considering using one of them for my code.
However, most of the languages mentioned require a pretty decent JVM environment. Most so-called "dynamic" languages require the VM to support reflection; many ask for annotation support. None of these features is available under J2ME.
From what I've found, Xtend looks like a viable option, as its compiler emits plain Java rather than bytecode and doesn't require any runtime library. Of course, the generated Java code must also meet some requirements, but the Xtend webpage looks promising in this regard:
Xtend just does classes and nothing else
Interface definitions in Java are already nice and concise. They have a decent default visibility and also in other areas there is very little to improve. Given all the knowledge and the great tools being able to handle these files there is no reason to define them in a different way. The same applies for enums and annotation types.
That's why Xtend can do classes only and relies on interfaces, annotations and enums being defined in Java. Xtend is really not meant to replace Java but to modernize it.
Am I right that it is possible to compile Xtend-generated code for the J2ME platform, or are there constructs that will not work there?
Alternatively, can you recommend any other "rich" Java-derived language that can run on J2ME?
Update: Knowing that a "compiler" that produces another source language as its output is called a transcompiler, one can also find Mirah, a tool which requires no runtime library and no special Java features.
Xtend's generated code uses Google Guava heavily. If that is compatible with J2ME, Xtend could be the language of your choice. I'm not aware of anything else that prevents using it on other platforms that provide a dedicated development kit (e.g. Android).
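To make that concrete: a trivial Xtend class compiles to plain Java with no library references, but features like template expressions or the collection extension methods make the output depend on the Xtend runtime library (and, through it, on Guava). A minimal sketch, with the generated Java approximated:

// Xtend source
class Greeter {
    def String greet(String name) {
        'Hello, ' + name
    }
}

// roughly the Java the Xtend compiler emits
public class Greeter {
    public String greet(final String name) {
        return ("Hello, " + name);
    }
}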
In addition to being able to generate Java source, Mirah recently added support for javac's -bootclasspath option, which allows you to compile your bytecode against a non-standard set of the Java core classes, e.g. LeJOS.
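So a LeJOS-style build might look roughly like this (the mirahc front end and the exact invocation are illustrative; only the flag itself is taken from the description above):

# compile against an alternate set of core classes instead of the JDK's
mirahc --bootclasspath lejos-classes.jar HelloWorld.mirah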
It's still a little fresh, but it would be nice to have more people using it on different Javas.