With Maven, there are a couple of plugins that support generating JAXB classes from an XSD, e.g. org.codehaus.mojo:jaxb2-maven-plugin and org.jvnet.jaxb2.maven2:maven-jaxb2-plugin.
The newest versions of those depend on e.g. org.glassfish.jaxb:jaxb-xjc and org.glassfish.jaxb:jaxb-runtime (in version 2.2.11).
But I wonder what would happen if I used those to generate my classes from the XSD but used only JDK 8 (which contains version 2.2.8) at runtime: wouldn't there be a risk of runtime errors? So is it necessary or recommended to always use the jaxb-runtime corresponding to the jaxb-xjc version I used to generate my classes from the XSD?
Of course I could simply override the dependencies on jaxb-xjc etc. and explicitly use version 2.2.8. But even then I wonder: would I get the same result as if I had used the JDK 8 xjc tool directly?
You have three phases:
(1) generation of the schema-derived code
(2) compilation of the schema-derived code
(3) runtime
The most important thing is that the JAXB API you use for compilation (2) is compatible with the JAXB API you use at runtime (3). If this is not the case, you might compile code which uses some annotation that is later not available at runtime, and you'll see the error for the first time at runtime.
As for (1) vs. (2), compatibility is also necessary. If you generate with JAXB 2.2.x and compile with JAXB 2.1.x, this will not necessarily work. But this is less critical, as it shows up as a compilation error which you will be forced to correct.
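To make that concrete with a sketch (Order is a hypothetical class, modelled on what a 2.2-level XJC emits): the required element of @XmlElementRef only exists since the JAXB 2.2 API, so a fragment like this illustrates both failure modes.

```java
import javax.xml.bind.JAXBElement;
import javax.xml.bind.annotation.XmlElementRef;

public class Order {

    // @XmlElementRef gained its 'required' element in the JAXB 2.2 API.
    // Compiling this against a 2.1 javax.xml.bind API jar fails outright
    // (case (1) vs. (2)); an annotation type missing from the runtime
    // would only blow up once JAXB reflects over the class (case (2) vs. (3)).
    @XmlElementRef(name = "item", type = JAXBElement.class, required = false)
    protected JAXBElement<String> item;
}
```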
So if your problem is just the JAXB version used by the maven-jaxb2-plugin vs. the JAXB version embedded in the JDK, I wouldn't worry about it. As long as it compiles, you're as safe as you can ever be.
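If you want to double-check which runtime is actually in effect, a small diagnostic works; MyRoot below is a placeholder for one of your schema-derived classes:

```java
import javax.xml.bind.JAXBContext;

public class JaxbRuntimeCheck {
    public static void main(String[] args) throws Exception {
        // MyRoot stands in for any of your schema-derived classes.
        JAXBContext context = JAXBContext.newInstance(MyRoot.class);
        // The JDK 8 embedded runtime reports a com.sun.xml.internal.bind...
        // context class; a standalone org.glassfish.jaxb:jaxb-runtime on the
        // classpath reports com.sun.xml.bind... instead.
        System.out.println(context.getClass().getName());
    }
}
```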
I noticed that log4j-core has a different dependency scope in the two slf4j-impl libraries.
Is this by design?
Artifact            log4j-core scope
log4j-slf4j-impl    runtime
log4j-slf4j2-impl   test
Looks like it was changed with this commit, which references LOG4J2-2975. I don't see anything that mentions why the scope was changed from runtime to test.
Yes, removing the runtime dependency on log4j-core was intentional, since the log4j-slf4j2-impl module works with any implementation of the Log4j2 API.
Since version 2.17.2 at least two implementations can be used with the SLF4J-to-LOG4J2 bridge: log4j-core and log4j-to-jul (the third implementation maintained by the Log4j2 project, log4j-to-slf4j, cannot be used, since routing events from SLF4J to the Log4j2 API and back to SLF4J would create a cycle).
This choice was not undisputed: cf. LOG4J2-3601 for a discussion.
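The practical consequence for users: your code only ever touches the SLF4J API, and since log4j-slf4j2-impl no longer drags in log4j-core, you declare the implementation yourself as a runtime dependency. A minimal sketch (Greeter is just an illustrative name):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Greeter {

    // Routed through log4j-slf4j2-impl to the Log4j2 API; whether the events
    // end up in log4j-core or log4j-to-jul depends solely on which
    // implementation you put on the runtime classpath yourself.
    private static final Logger log = LoggerFactory.getLogger(Greeter.class);

    public static void main(String[] args) {
        log.info("Hello from SLF4J");
    }
}
```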
I use boost in my project and I need to build it from sources.
I use gcc 11 as the compiler and I want to build the project using the -std=c++20 option.
Do I need to build boost using -std=c++20 too?
Knowing that the Boost known issues include the following:
Boost.Operators is currently incompatible with C++20 compilers, which in some cases may manifest as an infinite recursion or infinite loop in runtime when a comparison operator is called. The problem is caused by the new operator rewriting behavior introduced in C++20. As a workaround, users are advised to target C++17 or older C++ standard.
What can I do?
Groovy added the --indy option in version 2.0, back in 2012. It wasn't the default at the time because invokedynamic required Java 7, and many people were still using Java 6.
Now even the forthcoming Groovy 3.0 still requires the --indy option in order to force it to use invokedynamic, in spite of the fact that Groovy 3.0 requires Java 8 or later.
Is there any technical advantage in still having non-indy compilation as the default, with the default runtime libraries also being non-indy? I would have thought there's no need to even have a non-indy option nowadays.
Having --indy by default is on the roadmap for Groovy 3.0 (currently in alpha). The team wanted feedback on the new parser so didn't wait for all features to be available before releasing alpha versions.
The Groovy 3.0 compiler will likely keep a non-indy option of some kind available for a version or two to assist people wanting to recompile old libraries and produce like-for-like bytecode.
Currently, there are some primitive-handling optimisations in play when producing non-indy bytecode. Very early benchmarks (on quite old JVMs by now) showed some performance regressions, since the indy bytecode didn't have those same optimisations. Also on the roadmap for 3.0 is to revisit performance in those specific cases, with a view to adding such optimisations if still needed.
Exact specifics of whether some non-indy jars will be required for a version or two depend on some other parallel changes to remove some legacy classes that aren't really needed for the indy case but which would be required for all existing libraries written in Groovy to run. That will be detailed in the documentation and release notes once finalised.
There are some more details in [1].
[1] http://groovy.markmail.org/thread/yxeflplf5sr2wfqp
On the ANTLR download page it states that the latest version of ANTLR is 4.4. From the C# Target section on the same page, clicking "ANTLR 4 C# Target (Latest Release)" brings me to the 4.3 Target Release GitHub page that has a link for Readme.md, which when clicked, results in a 404.
Question 1: Although the download page states that the latest version for C# is 4.4, the version I get via NuGet is 4.3. Does this mean 4.4 isn't available for C#?
Question 2: Where do I find the tools for code generation that correspond to the version I got from NuGet (that is, Antlr 4.3)?
We attempted using antlr-4.4-complete.jar for code generation - we substituted that jar for the previous (antlr4-csharp-4.0.1-SNAPSHOT-complete.jar) in our build script and now we get: "error(31): ANTLR cannot generate CSharp_v4_5 code as of version 4.4" (which we didn't get previously). We also tried antlr-4.3-complete.jar and got similar results.
What do we need to take advantage of the latest release?
First of all, I corrected the link to the Readme.md in the release notes. Thanks for pointing it out, although a more reliable way to notify the maintainer is to file an issue directly for the project.
Second, the C# target is not based on the version of ANTLR posted on antlr.org, but instead on a fork of the project I created to optimize performance and (especially) memory overhead associated with parsing highly complex grammars. The tools use different serialization formats and are not interchangeable.
The C# code generator is distributed via NuGet, as described in the readme file.
ANTLR 4.4's primary differences over ANTLR 4.3 are the following:
Inclusion of additional targets (irrelevant for the C# target, since those runtime libraries are not C# and also use the other serialization format)
A bug-fix in the tool that has minimal effect on users (previously the tool threw an exception instead of reporting an error at code generation time for a specific type of grammar error)
A fix for a bug that occurs when an unknown target is specified (also not applicable to the C# target, since the MSBuild integration automatically selects the correct target language)
Based on this, the 4.3 release of the C# target is functionally equivalent to 4.4. I'm waiting to release a "4.4" version until I can address other performance concerns and functionality which doesn't apply to the reference version. In particular, I'm working on the following:
Improving concurrency by reducing contention (sharwell/antlr4#13)
Supporting indirect left recursion (currently a work-in-progress in the indirect-lr and java8-grammar branches)
Supporting a new baseContext option, shown here for a Java 8 grammar
I've been using the default Sun JAXB implementation that ships with the Oracle JDK 1.7.
Unfortunately I have certain quite complex XSD schemas to work with and I've hit what appears to be a bug in the XSD to Java engine (described in this SO post).
It appears that only a workaround is possible, and what's worse, I haven't yet been able to apply that workaround in my particular case. What's more unsettling, however, is that a workaround should be required at all for what is in my view a very elementary case (one XSD schema referencing an element defined in another).
I know of at least two other JAXB implementations:
Apache Camel
EclipseLink MOXy
Would anyone have any insights into how these compare against each other and against Sun's JAXB?
Note: I'm the EclipseLink JAXB (MOXy) lead and a member of the JAXB (JSR-222) expert group.
Apache Camel - I believe Apache Camel just leverages JAXB and is not a JAXB (JSR-222) implementation itself.
EclipseLink MOXy - There are many great reasons to switch to MOXy (XPath based mapping, external mapping metadata, JSON-binding, etc). But MOXy uses the XML Schema to Java Compiler (XJC) tool from the JAXB reference implementation so it won't fix this use case.
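For what it's worth, trying MOXy is mostly a packaging exercise, since you keep coding against the standard javax.xml.bind API. A sketch, with Customer standing in for one of your schema-derived classes and the context factory selected via a jaxb.properties file:

```java
// jaxb.properties, placed in the same package as your domain classes:
//   javax.xml.bind.context.factory=org.eclipse.persistence.jaxb.JAXBContextFactory

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;

public class MoxyDemo {
    public static void main(String[] args) throws Exception {
        // Thanks to jaxb.properties this returns a MOXy context; the calling
        // code stays on the standard JAXB (JSR-222) API throughout.
        JAXBContext context = JAXBContext.newInstance(Customer.class);
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(new Customer(), System.out);
    }
}
```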