XJC: reusing classes across Ant tasks (JAXB)

I am exploring options to use the same generated classes across two different Ant tasks. In the first task, I already build the jar and delete the generated classes. XJC does not allow passing a jar as a parameter for reference. The one option I have found is to regenerate just the episode file from the XSD and construct another jar. Is there a better approach?

Please see this post:
http://blog.bdoughan.com/2011/12/reusing-generated-jaxb-classes.html
The right way to do this:
1. Use -episode to generate the episode file when compiling your first schema.
2. Use the generated episode JAR during the second compilation.
3. (In some cases) remove the leftovers.
So episodes are definitely the way to go. I don't quite understand what you mean by "XJC does not allow passing a jar as a parameter for reference".
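For illustration, here is a minimal sketch of the two-step compilation driven from Groovy's AntBuilder (you could equally use the xjc Ant task directly); all schema, directory and episode file names are hypothetical:

def ant = new AntBuilder()

// Step 1: compile the shared schema and emit an episode file that
// records the classes generated from it.
ant.exec(executable: 'xjc') {
    arg(line: '-d gen-common -episode common.episode common.xsd')
}
// ...jar up gen-common here, keeping common.episode available...

// Step 2: compile the dependent schema, binding against the episode so
// the shared classes are referenced instead of regenerated.
ant.exec(executable: 'xjc') {
    arg(line: '-d gen-other -b common.episode other.xsd')
}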

Related

Can SCons be used to construct targets from indeterminately named source files?

I have a directory with multiple source files of indeterminate name. The only thing I know is the file extension. I want to take each source file and build a single target from it. The method I'm currently using is to determine the name of each source using a for loop:
from os import listdir

targets = []
for file in listdir('.'):
    if file.endswith('.xdm'):
        targets += env.m4(source=file)
The advantage of doing it programmatically like this is that the SConscript doesn't have to be maintained by the developers as they add new sources. The problem is that the targets are no longer cleaned, for some dependency-related reason that I don't entirely understand.
So my question is: is there a more appropriate way to do this, using built-in SCons functionality, without relying on more traditional flow control, or should I just determine each of my sources and list them individually in the SConscript?
Instead of fiddling with listdir I would simply use the Glob() method, as provided by SCons itself:
for file in Glob("*.xdm"):
    env.m4(source=file)
This (like the example from your question) is a perfectly fine approach, since it exploits the fact that SConscripts are actually Python scripts. The Glob() approach has the added advantage of also finding *.xdm files that don't exist on the hard drive yet but may get created as part of the build process later.
I wonder about the cleaning problems you mentioned; the Q&A linked in your question seems unrelated to me. If you experience actual "cleaning" problems with one of the approaches above, please post a separate question together with the full verbatim input and output. If it turns out that this doesn't work out of the box, I'd consider it a bug.

How to generate a dependency diagram from a set of XSD files?

See the title: I have around 50 XSD files importing each other (with <import> tags) and I need to analyze their dependencies.
Do you know any software (preferably free) to generate a dependency diagram automatically from these files?
I did not find any existing program to do that, so... I developed my own! It is called GraphVisu.
There is one program to generate the graph structure from the seed XSD files, and another one to visualise the graph. I also included detection of clusters of interrelated nodes (called "strongly connected components" in graph theory).
Feel free to use it!
I am not aware of any free solution tailored specifically to XSD. If I had to build one from freely available components, I would probably consider GraphViz. You would need to write a module that parses the XSD files and generates the input data GraphViz needs. The parsing part is fairly straightforward, provided you take into account how schema location works and is resolved, and handle circular dependencies correctly. The good thing is that GraphViz is supported on a wide set of platforms, and as long as you can parse XML, you should be set.
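As a rough illustration, a Groovy sketch of that module could look like the following; it assumes all schemas sit in one local directory with relative schemaLocation values, and it skips schema location resolution and cycle handling entirely:

import groovy.xml.XmlSlurper   // groovy.util.XmlSlurper on Groovy 2.x

def dir = new File('schemas')  // hypothetical directory holding the XSDs
def edges = [] as Set

dir.eachFileMatch(~/.*\.xsd/) { file ->
    def schema = new XmlSlurper().parse(file)
    // xs:import and xs:include both carry a schemaLocation attribute
    (schema.'import'.list() + schema.'include'.list()).each { dep ->
        def loc = dep.@schemaLocation.text()
        if (loc) edges << "\"${file.name}\" -> \"${loc}\""
    }
}

// Emit a DOT digraph that GraphViz's dot tool can render
println 'digraph xsd {'
edges.each { println "  $it" }
println '}'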
I've also developed my own, in the form of an XML Schema Refactoring (XSR) add-on for QTAssistant. This particular feature set has been around since 2004, so it works really well, for both WSDL and XSD files.
Your question can be interpreted in different ways, so I'll describe what you can do with XSR:
XSD file dependencies: a simple example shows a hierarchical layout, while a more complex one shows an organic layout.
Intra-XSD-file schema component dependencies: these can be filtered on arbitrary criteria (I'm not sure what you meant by "with tags").
XSD file set schema component dependencies: the same as the above, but you can navigate across different files.
The tool comes with an automation library, where you can write a few lines of C# or JavaScript code which you can then invoke through the QTAssistant shell or a command-line shell, to integrate it with an automated build process.
Other features include the ability to export the underlying data as GraphML, in case you wish to analyse or process the graph further (e.g. topological sorting, finding cycles, etc.).

Groovy DSL scripts

I wrote a global AST transformation that should be applied to DSL scripts, and I am now in the process of selecting the best way to identify specific Groovy scripts as these DSL scripts.
I considered the following options:
A custom file extension. The biggest disadvantage here is IDE support: many IDEs barely support compilation/editing of files that have non-Groovy extensions (you can configure an editor, but it requires some tweaking).
A special file name suffix (or prefix); in this case the suffix should be really unique (and thus relatively long) to avoid accidentally transforming regular Groovy files (my current choice).
A local AST transformation applied to a script class; the disadvantage is that one would need to write some boilerplate code for each script.
Having some unique first statement in the scripts that identifies them as the DSL.
What would in your opinion be the best option to choose and why? Are there any other options at my disposal that I haven't thought about?
If you compile your DSL scripts using GroovyShell, you can use CompilerConfiguration.addCompilationCustomizer(new ASTTransformationCustomizer(new YourGlobalASTTransformation())) to apply the transformation to them.
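Put together, a minimal sketch (YourGlobalASTTransformation stands in for your existing transformation class):

import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.ASTTransformationCustomizer

def config = new CompilerConfiguration()
// Only scripts compiled through this configuration get the transformation,
// so no file extension or naming convention is needed to mark them as DSL.
config.addCompilationCustomizer(
        new ASTTransformationCustomizer(new YourGlobalASTTransformation()))

def shell = new GroovyShell(config)
shell.evaluate(new File('some-dsl-script.groovy'))   // hypothetical script

This sidesteps the identification problem entirely: whatever you feed to that shell is treated as a DSL script.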

@Grape in scripts with multiple files

I'd like to use @Grape in my Groovy program, but my program consists of several files. The examples on the Groovy Grape page all seem to assume that your script consists of one file. How can I do this? Should I just add it to one of the files and expect that the imports will work from the others? If so, is it common to place all the @Grape calls in one file with no other code? Do I need to add the @Grab call to every file that imports the package? Do I need to download the JAR and create a Gradle file, which I have been getting away without up to this point?
The Grape engine and the @Grab annotation were created as part of core Groovy with single-file scripts in mind, to allow a chunk of text to easily become a fully functional program.
For larger applications, Gradle is an awesome build tool with lots of useful features.
But yes, you can manage all the application dependencies just with Grape.
Whether you annotate every file or a single one does not matter; just make sure the @Grab-annotated file is read before you try to use the external class.
Annotating the main class is probably better, as you will easily lose track of library versions if the annotations are scattered.
And yes, you should consider Gradle for any application with more than a dozen files, or for anything you might want to reuse elsewhere as a library.
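For example, a sketch of an entry-point script carrying the grabs for the whole program; the Maven coordinates are real, but Helper is a hypothetical class defined in Helper.groovy next to this file:

// main.groovy -- the entry point; all dependency grabs live here.
@Grab('org.apache.commons:commons-lang3:3.12.0')
import org.apache.commons.lang3.StringUtils

// Helper may import org.apache.commons.lang3 classes too: the grab is
// resolved when this entry-point script is read, before Helper is used.
println StringUtils.capitalize(new Helper().greeting())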
In my opinion, it depends on how your program is to be run...
If your program is to be run as a collection of standalone scripts, then I'd probably stick the @Grab annotations required by each script at the top of each of them.
If your program is more of a standard-style program with a single point of entry, then I'd go for a build tool like Gradle (as you say), as you get a lot of easy wins by using it.
Firstly, it makes it easy to define your dependencies (and to build a single large jar containing all of them), as sketched below.
Secondly, Gradle makes it really easy to start writing tests, to include code coverage plugins, or to add useful tools like CodeNarc that suggest possible fixes and improvements for your code. These all become invaluable not only for improving your code (or knowing that your code works), but also when refactoring it: you know you haven't broken anything that used to work.
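For instance, a minimal build.gradle sketch; the plugins and configurations are standard, but the dependency shown is illustrative:

// build.gradle -- declares dependencies in one place for the whole program
apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    implementation localGroovy()   // or pin a specific Groovy version
    implementation 'org.apache.commons:commons-lang3:3.12.0'
}

A single fat jar containing all dependencies can then be produced with a plugin such as Shadow.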

Groovy script runner architecture

Initial info: I have a Groovy app (let's call it Runner) which is capable of running anything that implements a certain interface (let's call it Runnable). And I have a pool of Runnables (Groovy scripts) which should be visible to the app at the init stage, and which the app will call (through the interface, passing an object as a parameter).
Task: What I need is a way to load and call all the Runnables from the Runner.
Requirements: It's tricky, as the scripts may not follow a certain package structure and can be placed virtually anywhere on the same machine as the Runner. They can also be named arbitrarily (whether Java-like naming, class name == file name, should be mandatory is open for discussion), and that issue can be skipped for now (though advice on it is welcome!).
NOTES: I imagine this is possible by having a config file in which the scripts are configured (absolute paths provided), loading them from there, and then either casting the resulting Object to the Runnable interface and triggering what I need, or calling invokeMethod(...). But I have no idea whether it can be done more simply (there should be a way, because this all looks too clumsy). I also can't think of a way to handle the file naming issue and the multiple-classes-in-one-file issue.
P.S.: Such a long description might cause misunderstanding, so please comment on any vague parts.
I think you need to know all classes implementing an interface. The question "Find Java classes implementing an interface" may be of interest to you.
The option of having a config file in which the scripts' absolute paths are written is good and has proved to be a working solution. You'll have to deal with class loading for whatever is not visible to the app's class loader; in particular, you'll have to deal with annotation-based POJO serialization problems. A singleton Runnable loader is good practice.
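A minimal sketch of that loader, assuming a plain-text config file with one absolute script path per line; the config file name is hypothetical, and Runnable stands in for your own interface:

// The app's class loader is the parent, so the scripts can see the
// Runner's classes, including the interface they must implement.
def loader = new GroovyClassLoader(this.class.classLoader)

new File('/etc/runner/runnables.conf').eachLine { path ->
    // parseClass() ignores the file name, which sidesteps the
    // "class name == file name" concern.
    def clazz = loader.parseClass(new File(path.trim()))
    def candidate = clazz.getDeclaredConstructor().newInstance()
    if (candidate instanceof Runnable) {
        candidate.run()   // or pass the parameter object your interface takes
    }
}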
