As the title says: I have around 50 XSD files importing each other (with tags), and I need to analyze their dependencies.
Do you know any software (preferably free) to generate a dependency diagram automatically from these files?
I did not find any existing program to do that, so... I developed my own! It is called GraphVisu.
One program generates the graph structure from seed XSD files, and another one visualises graphs. I also included detection of clusters of interrelated nodes (called "strongly connected components" in graph theory).
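As an aside, here is a minimal illustration of what "strongly connected components" means for a set of schemas, using the networkx library (this shows the graph-theory idea only, not GraphVisu's actual code; the file names are made up):

```python
# Two schemas that import each other form one strongly connected component;
# a schema nobody imports back forms its own.
import networkx as nx

g = nx.DiGraph([("a.xsd", "b.xsd"), ("b.xsd", "a.xsd"), ("b.xsd", "c.xsd")])
for component in nx.strongly_connected_components(g):
    print(component)  # prints {'a.xsd', 'b.xsd'} and {'c.xsd'} (order may vary)
```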
Feel free to use it!
I am not aware of any free solution tailored specifically for XSD. If I had to build one from freely available components, I would probably consider GraphViz. You would need to write a module that parses the XSD files and generates the input data GraphViz needs. The latter is fairly straightforward, provided you take into account how schema location is resolved and handle circular dependencies correctly. The good thing is that GraphViz is supported on a wide range of platforms, and as long as you can parse XML, you should be set.
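To make that concrete, here is a minimal sketch of such a module: it follows xs:import/xs:include schemaLocation references from a seed file (with a visited set to survive circular imports) and emits GraphViz dot. The file names are placeholders, and real-world schemaLocation resolution (catalogs, remote URLs) would need more care:

```python
import os
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def collect_edges(xsd_path, seen=None, edges=None):
    """Recursively gather (importer, imported) pairs starting from one XSD."""
    seen = seen if seen is not None else set()
    edges = edges if edges is not None else []
    if xsd_path in seen:  # guards against circular imports
        return edges
    seen.add(xsd_path)
    root = ET.parse(xsd_path).getroot()
    for ref in root.findall(XS + "import") + root.findall(XS + "include"):
        loc = ref.get("schemaLocation")
        if loc is None:  # xs:import without schemaLocation is legal; skip it
            continue
        # Resolve the location relative to the importing file.
        target = os.path.normpath(os.path.join(os.path.dirname(xsd_path), loc))
        edges.append((xsd_path, target))
        if os.path.exists(target):
            collect_edges(target, seen, edges)
    return edges

def to_dot(edges):
    """Render the edge list as a GraphViz digraph."""
    lines = ["digraph xsd {"]
    for src, dst in edges:
        lines.append('  "%s" -> "%s";' % (os.path.basename(src), os.path.basename(dst)))
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(to_dot(collect_edges("seed.xsd")))  # pipe the output into: dot -Tpng
```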
I've also developed my own, in the form of an XML Schema Refactoring (XSR) add-on for QTAssistant. This particular feature set has been around since 2004, so it works really well with both WSDL and XSD files.
I can interpret what you asked in a few different ways, so I'll describe what you can do with XSR:
XSD file dependencies
This is a simple one, showing a hierarchical layout.
This is a more complex one, showing an organic layout.
Intra-XSD-file schema component dependencies: these can be filtered on arbitrary criteria (not sure what you meant by "with tags").
XSD file set schema component dependencies (same as the above, but one can navigate across different files)
The tool comes with an automation library: you can write a few lines of C# or Java script code and invoke them using the QTAssistant shell or a command-line shell to integrate with an automated build process.
Other features include the ability to export the underlying data as GraphML, in case you wish to analyse or process the graph further (e.g. topological sorting, cycle detection, etc.).
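If you do export GraphML, the follow-up analysis is easy in any graph library. For instance, a small Python sketch with networkx (the file name is a placeholder, and I'm assuming the export is a directed graph):

```python
import networkx as nx

g = nx.read_graphml("deps.graphml")   # hypothetical export from the tool
cycles = list(nx.simple_cycles(g))    # circular schema dependencies, if any
if cycles:
    print("cycles:", cycles)
else:
    # A topological order only exists for acyclic graphs.
    print("dependency order:", list(nx.topological_sort(g)))
```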
Related
I've just started reading about DSL modeling a couple of hours ago.
But right now I'm tied to using the JetBrains MPS IDE or its plugin for JetBrains IntelliJ IDEA, and I'd like to know how I can export those DSL models to something usable by, e.g., console applications or whatever (in case that's possible or makes sense).
You can do several things already in MPS without exporting the models:
Analyze the models to check for errors, business rule violations or inconsistencies.
Interpret the models, then display the result of the interpretation directly in MPS. This is useful if you implement a specification plus an example/test of that specification: you can then run the tests in MPS and show the results as a green/red highlight, for example.
Define a generator to translate the model into text (executable code or input for a tool such as Liquibase to create database schemas for example).
If you're looking to export your data from MPS for use in a different application, there are two approaches I would recommend:
The simplest way: NodeSerializer from MPS-extensions. I have more details on how to use it in a blog post. This lets you quickly export your data in a rather nice XML structure.
The most flexible approach: writing a custom exporter that uses the MPS Open API to recursively traverse a node tree (see the sketch right after this list). You can output any format you want (XML, JSON, YAML, etc.) and customize the output as you like.
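The traversal itself is the easy part. Here is a language-agnostic sketch of its shape (the real exporter would be written in Java against the MPS Open API and its node type; the Node class below is purely hypothetical):

```python
import json

class Node:
    """Stand-in for an AST node; MPS nodes expose a concept, properties, children."""
    def __init__(self, concept, properties=None, children=None):
        self.concept = concept
        self.properties = properties or {}
        self.children = children or []

def export(node):
    # Recursively convert the node tree into plain dicts, ready for json.dumps.
    return {
        "concept": node.concept,
        "properties": node.properties,
        "children": [export(child) for child in node.children],
    }

tree = Node("Entity", {"name": "Customer"}, [Node("Field", {"name": "id"})])
print(json.dumps(export(tree), indent=2))
```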
Here are two more approaches that you might be considering but that I would NOT recommend:
Accessing the model (*.mps) files directly. While they are already in XML format, their structure is adapted to MPS' needs. It is normalized, meaning that a given piece of information is generally stored only once, and it also encodes node IDs in a particular way to save space. The format is also undocumented and could change in the future (although it hasn't changed for the past several years).
Using the MPS generator to convert your DSL to MPS' built-in XML language, jetbrains.mps.core.xml. I don’t recommend using the MPS generator because the generator’s sweet spot is translating between two different MPS languages, e.g. from your custom DSL to Java. If you try writing generator rules to convert anything to XML you would hit a few problems that are possible to overcome but totally unnecessary.
You can define a generator which transforms a sentence (file, AST) of your language into another MPS language. The target language must exist in MPS first.
Alternatively, you could generate text with the TextGen aspect, but that is more suitable for simply printing the textual representation of your language. If you would like something more sophisticated (like generating text code of another language), you can use the plaintextgen language from MPS-extensions or the mbeddr.platform.
If you want to input (import) a textual program into MPS, you can code a paste handler where you could put your parser, or you can change the format in which the AST is stored (from XML to, perhaps, your language directly, though this would again require a parser to read it) with custom persistence.
I am currently working on a solution which enables importing an MPS language from a YAJCo model (a model-based parser generator where the input is not a grammar but Java classes representing the semantic model). Then you can import a sentence (file), which creates and populates a model (AST). From the program in MPS you can generate Java source code which fills the original Java classes. So if you want a textual MPS language, want to use the IDE, but then need to export the AST into Java objects you can use, maybe YtM is for you.
It is 2017, and as far as I know, the way programmers organize their code has not changed. We distribute our code across files and organize them in a tree structure (nested directories and files). When a codebase is huge and the relations between classes/components are complex, this organization approach strikes me as inefficient. With more files, either a directory holds more files or the directory tree gets deeper. And since we handle the directories directly, navigation costs time and effort without tools like search.
Figure: A complex UML from https://github.com/CMPUT301W15T09/Team9Project/wiki/UML
We can use CAD to design/draw complex things; mind maps can be created in a similar manner. For these, we do not need to deal with file systems. Can't we have something similar for code and hide the file system in a black box? Why have the fundamental organization methods not evolved for such a long time?
So I wonder: what are the advantages that keep us from adopting a new way? What are the inherent advantages of using the file system to organize our code?
Different on-disk representations of source code have been tried (e.g. how Flash stores ActionScript inside binary .fla files), and they're generally unpopular. No one likes proprietary file formats. They also mean you can't use text-based source control systems like Git, which means you can't do a text merge to resolve change conflicts.
We store source code in files in a tree structure (e.g. one OOP class or procedural module per file), with nested namespaces represented by nested directories, because it's intuitive (and, again, for better cohesion with source-control systems).
Some languages enforce this. Java, for example, requires the source file to be named after the class it contains and to sit in a directory path matching its containing package. For other languages like C# and C++ it just makes sense, because otherwise it's confusing when someone new to your codebase sees class TurboEncabulator inside a file named PrefabulatedAmulite.cs.
I have a .NET 4.0 C# Solution with a single .csproj (Library) having several thousand files.
I want to extract out a small subset of the functionality from the thousands of files.
e.g. I want to extract the functionality of the MyLibrary.RelevantMethod() method into another library.
The aim is to create a new .csproj with the bare minimum class files needed to achieve this functionality.
I have a Program.cs which invokes the functionality, and I can navigate through the flow to find all the classes involved; it's just that there are too many (though still a small subset of all classes).
Solutions tried:
The usual brute force of going through the flow from the method (F12) and copying over every class file and associated files needed for it to compile. This is taking a lot of time, but I know that if I keep at it, it'll be done, so that is what I am doing right now.
The other option was to copy over the whole project and eliminate folders of classes based on instinct/namespace references, building to verify and repeating. This got nasty because only a subset of the classes in a folder was needed.
The VS 2013 code-map graphs became unmanageable within 3 drill-downs. Sequence diagrams became too complex as well.
Call hierarchy seemed the most promising, showing all the classes involved visually, but there is still the manual task of drilling through and copying the classes.
While I manually continue extracting the classes one by one using the call hierarchy, is there a faster or more automated way (semi-automated works too) to determine all the classes involved in a method call in C#?
If I can get the list, I can do a search on the physical folders containing the .cs files (every class has an equivalent .cs file) and just copy them over.
You can find all classes involved in a method call with the Runtime Flow tool (developed by me). From the Runtime Summary window you can also copy these classes to the clipboard for the selected module or namespace.
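Once you have that list, the remaining copy step is easy to script. For example, a minimal Python sketch that mirrors the matching .cs files into a new folder (all names here are placeholders):

```python
import os
import shutil

def copy_classes(class_names, src_root, dest_root):
    """Copy ClassName.cs files found under src_root into dest_root,
    preserving the relative folder structure."""
    wanted = {name + ".cs" for name in class_names}
    for dirpath, _dirs, files in os.walk(src_root):
        for f in files:
            if f in wanted:
                rel = os.path.relpath(dirpath, src_root)
                target = os.path.join(dest_root, rel)
                os.makedirs(target, exist_ok=True)
                shutil.copy2(os.path.join(dirpath, f), target)

copy_classes(["RelevantMethodHost", "HelperA"], "OldLibrary", "NewLibrary")
```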
I maintain an academic website for myself that duplicates a lot of the material that I also put in my CV. To avoid having to maintain multiple files of the same information, and to keep things in sync, I use tex and bib files mostly, and I generate my CV in LaTeX and use htlatex for the website.
As a project to improve my Haskell knowledge, I have been thinking of generating my website with one of the Haskell-based static site generators. I have easily found several Hakyll sites, but only a few yst ones, and it isn't clear to me what problem Hakyll was designed to solve that wasn't already dealt with by yst. I am interested in what people see as the comparative advantages and disadvantages of each, and whether there is any particular reason why I might want to start with one or the other given my current base of .tex and .bib files.
Disclaimer: I am the author of Hakyll.
What Hakyll gives you is an EDSL on top of Pandoc which allows you to specify more easily how different files should be processed. It is much like a specialized make on top of Pandoc. It also offers some other features useful for building static websites, e.g., manipulating URLs and HTML.
I think the main difference between yst and Hakyll is that Hakyll is on the one hand more customizable (since the configuration is just Haskell), but probably also harder to get up and running.
I don't know about Hakyll, but yst uses pandoc (http://johnmacfarlane.net/pandoc/) and shines at combining a static site with a bit of dynamic data in YAML (for example, events): it supports an SQL-like mini language to insert these dynamic data fields into a template.
Yst also helps to build a multi-page website, which is a bit more difficult when using pandoc alone.
However, I found it a bit hard to insert elements into the template that are not supported by yst by default (for example, a table of contents for the page itself).
Additionally, pandoc (used in the background) has become much more powerful with the advent of the YAML metadata block (http://johnmacfarlane.net/pandoc/README.html#yaml-metadata-block), which lets you insert virtually anything into the underlying template (for me, pandoc has replaced LaTeX completely as an input format, since pandoc can convert files to both HTML and LaTeX, among others).
I would suggest you consider using pandoc instead of yst, unless you need that SQL-like mini language feature.
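If you go the pandoc-only route, even the multi-page part is scriptable. A minimal sketch that writes a page with a YAML metadata block and then runs pandoc over every Markdown file (file names and metadata fields are illustrative):

```python
import glob
import subprocess

# A page with a YAML metadata block; pandoc feeds these fields to its template.
page = """---
title: Publications
author: Jane Doe
---

My papers, kept in sync with the same .bib file as my CV.
"""

with open("publications.md", "w") as f:
    f.write(page)

for md in glob.glob("*.md"):
    # -s (--standalone) makes pandoc emit a complete HTML document.
    subprocess.run(["pandoc", "-s", md, "-o", md[:-3] + ".html"], check=True)
```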
I'm using an external java library for which I only have the javadocs and do not have the source code. I'd like to generate a UML diagram from the existing javadocs so that I can visualize the class hierarchy using something like Graphviz. Is that possible? Note that what I'm looking for is a graphical version of overview-tree.html.
Please let me know if you have any ideas and/or suggestions.
Thanks,
Shirley
I don't believe that there is such a tool. Most reverse-engineering tools depend on the actual code. The javadoc information isn't guaranteed to match the code 1:1 in structure, which makes it unreliable.
I'm not familiar with any off-the-shelf solution for this purpose. Most commonly folks have the source code that generated the JavaDoc.
That being said, the overview-tree.html traditionally has a fairly straightforward HTML format.
It should not be difficult to write a script that reads the file as text or as a DOM, reconstructs the hierarchy of UL and LI tags, and uses that to build an input file for graphviz. I've done similar things in the past with other forms of data.
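To sketch what that could look like in Python (the HTML details vary across javadoc versions, so treat this as a starting point rather than a finished tool):

```python
from html.parser import HTMLParser

class TreeParser(HTMLParser):
    """Track UL/LI nesting and record (superclass, subclass) edges."""
    def __init__(self):
        super().__init__()
        self.stack = []      # enclosing class name at each UL depth
        self.current = None  # class named by the LI currently being read
        self.edges = []

    def handle_starttag(self, tag, attrs):
        if tag == "ul":
            self.stack.append(self.current)
        elif tag == "li":
            self.current = None

    def handle_endtag(self, tag):
        if tag == "ul" and self.stack:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text and self.current is None:
            self.current = text
            if self.stack and self.stack[-1]:
                self.edges.append((self.stack[-1], text))

parser = TreeParser()
with open("overview-tree.html") as f:   # placeholder path
    parser.feed(f.read())

print("digraph hierarchy {")
for parent, child in parser.edges:
    print('  "%s" -> "%s";' % (parent, child))
print("}")
```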
It's just a matter of time and proficiency with the scripting language or appropriate tools.
The one problem with this approach is that you would only get the hierarchy of classes. You would have to make the script somewhat smarter if you wanted to pick up the "implements XYZ" parts and create multiple hierarchies. Even if you could get that data, you would have to manipulate GraphViz's levels (ranks) to get an appropriate layout once you have this multiple-inheritance structure.
Of course, adding the details of the members would turn this into a whole new problem, since you would have to access other HTML files.