In my current work, I have written a code generator using StringTemplate without thinking about the parser (I instantiate the template files directly from Java objects), and the code generator produces nice Java code.
Now I have started to write the parser. Because of Xtext's nice editor features, I am thinking of writing the parser in Xtext.
My question is: is it possible to use the code generator (written using StringTemplate) and the parser (written in Xtext) in the same project?
Yes, that's possible. Xtext offers a typed AST for the parsed files, and you can easily pass it to your code generator (directly, if it fulfils the contract/interfaces your generator expects, or indirectly by transforming it into the expected structure). Xtext does not impose any constraints on how you use the parsed information.
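For illustration, a minimal sketch of that hand-off (MyDslStandaloneSetup, templates/java.stg and the entity template are hypothetical names; the standalone setup class is generated by Xtext for your own grammar):

import com.google.inject.Injector;
import org.eclipse.emf.common.util.URI;
import org.eclipse.xtext.resource.XtextResource;
import org.eclipse.xtext.resource.XtextResourceSet;
import org.stringtemplate.v4.ST;
import org.stringtemplate.v4.STGroupFile;

public class GeneratorMain {
    public static void main(String[] args) {
        // Standalone Xtext setup: gives us a resource set wired for the DSL.
        Injector injector = new MyDslStandaloneSetup().createInjectorAndDoEMFRegistration();
        XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
        XtextResource resource = (XtextResource) resourceSet
                .getResource(URI.createFileURI(args[0]), true);

        // Root of the typed AST: an instance of your generated EMF model class.
        Object model = resource.getContents().get(0);

        // Hand the AST node to StringTemplate like any other Java object;
        // templates read its getters through ST's property syntax.
        STGroupFile templates = new STGroupFile("templates/java.stg");
        ST st = templates.getInstanceOf("entity");
        st.add("model", model);
        System.out.println(st.render());
    }
}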
All I want is to generate API docs from the function docstrings in my source code, presumably through Sphinx's autodoc extension, to make up my lean API documentation. My code follows the functional programming paradigm, not OOP, as demonstrated below.
I'd probably, as a second step, add one or more documentation pages for the project, hosting things like introductory comments, code examples (leveraging doctest I guess) and of course linking to the API documentation itself.
What might be a straightforward flow for generating documentation from docstrings here? Sphinx is a great, popular tool, yet I find its getting-started pages a bit dense.
What I've tried, from within my source directory:
$ mkdir documentation
$ sphinx-apidoc -f --ext-autodoc -o documentation .
No error messages, yet this doesn't find (or handle) the docstrings in my source files; it just creates an .rst file per source file, with contents like the following:
tokenizer module
================

.. automodule:: tokenizer
   :members:
   :undoc-members:
   :show-inheritance:
Basically, my source files look like the following, without much module ceremony or object-oriented contents in them (I like functional programming, even though it's Python this time around). Of course I've truncated the sample source file below; it contains more functions than shown.
tokenizer.py
from hltk.util import clean, safe_get, safe_same_char
"""
Basic tokenization for text
not supported:
+ forms of pseudo-ellipsis (...)
support for the above should be added only as part of an automata rewrite
"""
always_swallow_separators = u" \t\n\v\f\r\u200e"
always_separators = ",!?()[]{}:;"
def is_one_of(char, chars):
    '''
    Returns whether the input `char` is any of the characters of the string `chars`
    '''
    return char in chars
Or would you recommend a different tool and flow for this use case?
Many thanks!
If you find Sphinx too cumbersome and particular to use for simple projects, try pdoc:
$ pdoc --html tokenizer.py
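pdoc imports the module and renders its docstrings straight to HTML, with no configuration files involved. If you do want to stay with Sphinx, note that sphinx-apidoc only writes the stub .rst files you saw above; the docstrings are actually pulled from your code later, when you build the docs. A sketch of the full flow, assuming a conf.py generated by sphinx-quickstart with the autodoc extension enabled (you may also need to add your source directory to sys.path in conf.py so autodoc can import the modules):

$ sphinx-quickstart documentation
$ sphinx-apidoc -o documentation .
$ sphinx-build -b html documentation documentation/_build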
I am trying to write source code in one language and have it converted to both native C++ and JS source. Ideally the converted source should be human-readable and resemble the original source as closely as it can. I was hoping Haxe could solve this problem for me: I code in Haxe and have it converted to the corresponding C++ and JS source.
However, the examples of Haxe I'm finding seem to create the final application for you. So with C++ it will use msbuild (or whatever compiler it finds) and create the final exe for you from the generated C++ code. Does Haxe also leave the C++ and JS source code for you to view, or is it all done internally to Haxe and not accessible? If it is accessible, is it possible to skip the build step so Haxe simply creates the source code and stops?
Thanks
When you generate CPP, all the intermediate files are generated and kept wherever you decide to put your output (the path given using -cpp pathToOutput). The fact that you get an executable is probably because you are using the -main switch. That implies an entry point to your application, but it is not really required; you can just pass the command line a bunch of types that you want to have built in your output.
For JS it is very similar: a single JS file is generated, and it only has an entry point if you used -main.
Regarding the other topic, whether your Haxe code resembles the generated code: the answer is yes, but... some types (like Enum and Abstract) only exist in Haxe, so they generate code that works functionally but can look quite different. Also, Haxe has an always-on optimizer/analyzer that might mangle your code in unexpected ways (the analyzer can be disabled). I still find that it is not that difficult to recognize the Haxe source in the generated code. The JS target supports source maps, which are really useful for debugging. So in the end, Haxe doesn't do anything to obfuscate your generated code, but it also doesn't do much to preserve it too strictly.
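A sketch of both invocations, assuming your sources live under src and contain a class pack.Main (hypothetical names); for the C++ target, the no-compilation define tells hxcpp to emit the sources and skip the native build step:

$ haxe -cp src -cpp out/cpp -D no-compilation pack.Main
$ haxe -cp src -js out/app.js pack.Main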
Is it possible to get a list of all the arguments a constructor takes?
With the names and types of the parameters?
I want to automatically check the values of a JSON are good to use for building their equivalent as a class instance.
Preferably without macros... I have built a few, but I still find them quite confusing.
It must work with Neko and JS, if that matters.
Thanks.
I think you want to look at Runtime Type Information (rtti)
From the Haxe Manual: the Haxe compiler generates runtime type information (RTTI) for classes that are annotated, or that extend classes that are annotated, with the @:rtti metadata. This information is stored as an XML string in a static field __rtti and can be processed through haxe.rtti.XmlParser. The resulting structure is described in RTTI structure.
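A minimal sketch of reading constructor parameters that way (Point is a made-up class; the field layout follows haxe.rtti.CType, but treat the exact pattern match as approximate):

import haxe.rtti.Rtti;
import haxe.rtti.CType;

@:rtti
class Point {
    public var x:Float;
    public var y:Float;
    public function new(x:Float, y:Float) {
        this.x = x;
        this.y = y;
    }
}

class Main {
    static function main() {
        // getRtti parses the __rtti XML field into a Classdef for us.
        var def = Rtti.getRtti(Point);
        for (field in def.fields) {
            if (field.name == "new") {
                switch (field.type) {
                    case CFunction(args, _):
                        // args carry the constructor parameter names and types
                        for (a in args)
                            trace(a.name + " : " + a.t);
                    default:
                }
            }
        }
    }
}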
Alternatively, if you want to go with macros, this might be a good start:
http://code.haxe.org/category/macros/add-parameters-as-fields.html
I'm using ANTLR4 and I have an "import" statement inside my grammar.
Does ANTLR4 have an option to automatically open and parse the imported input file, instead of my doing it inside my visitor (creating another parser/lexer and visitor for each "import" declaration)?
I'm pretty sure I've already seen such an option, but I can't find it anymore.
Inside my grammar :
importStatement : 'import' ID ';' ; // Here? An action (Java code)
                                    // to prepend an AST to my current AST?
Inside an input files :
import test;
There is no built-in functionality for this, primarily because every language requiring it has its own set of rules for how it needs to be done. In addition, this can quickly take the parse operation for your whole project from O(n) to O(n²) (i.e. from parsing each file once, to parsing up to the whole project for each file).
If your language allows you to build a correct parse tree prior to resolving the imports (i.e. it doesn't have arbitrary #define statements that can appear in imports), then you should be glad you aren't dealing with C/C++, and you can parse each file independently before resolving the import statements.
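If you do resolve the imports yourself, the usual shape is a cache so each file is parsed at most once. A hedged sketch (MyLangLexer, MyLangParser and the compilationUnit rule stand in for your generated classes and start rule):

import java.io.IOException;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

public class ImportResolver {
    // The cache is what keeps the whole-project parse near O(n) instead of O(n²).
    private final Map<String, ParseTree> cache = new HashMap<>();

    public ParseTree parseWithImports(Path file) throws IOException {
        String key = file.toAbsolutePath().toString();
        ParseTree cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        MyLangLexer lexer = new MyLangLexer(CharStreams.fromPath(file));
        MyLangParser parser = new MyLangParser(new CommonTokenStream(lexer));
        ParseTree tree = parser.compilationUnit();
        cache.put(key, tree);
        // Now walk `tree`; for every importStatement, map its ID to a file
        // path and recurse via parseWithImports, then link or splice the
        // trees however your language's rules require.
        return tree;
    }
}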
I'm using the Yesod scaffolded site (yesod 1.1.9.2) and spent a few hours yesterday wrapping my head around basic usage of Fay with Yesod. I think I now understand the intended workflow for using Fay to add a chunk of AJAX functionality to a page (I'm going to be a little pedantic here just because someone else might find the step-by-step helpful):
1. Add a data constructor Example a to SharedTypes.Command.
2. In the expression case readFromFay Command of ... in Handler.Fay.onCommand, add a case that matches on my new data constructor.
3. Create a Fay file 'Example.hs' in /fay, patterned after fay/Home.hs. Somewhere in here, use the expression call (Example "foo") $ myFayCallback.
4. Define a route and handler for the page that will use the Javascript I'm generating. In the handler, use $(fayFile' (ConE 'ScriptR) "Example.hs").
My question: In the current Yesod/Fay architecture, how should I go about sharing my Persistent model types with my Fay code?
Using import Model in a Fay file doesn't work -- when I try to load the page that's using this Fay file, I get an error in the browser (Fay's standard way of alerting me to errors, I guess) indicating that it couldn't find module 'Model' but that it only searched the following directories:
projectroot/cabal-dev//share/fay-0.14.2.0/src
projectroot/cabal-dev/share/fay-base-0.14.2.0/src
projectroot/cabal-dev/share/fay-base-0.14.2.0
projectroot/fay
projectroot/fay-shared
I also tried importing and re-exporting Model in SharedTypes.hs, but that produced the same error.
Is there a way to do this? If not, why not? (I'm a relative noob in both Haskell and Yesod, so the answer to the "why not?" question would be really helpful.)
EDIT:
I just realized that mentioning Persistent in this question's title might be misleading. To be clearer about what I'm trying to do: I just want to be able to represent data in my Fay code using the same datatypes Yesod defines for my models. E.g., if I define a model like this in config/models...
Foo
    bar BarId
    textThatCanBeNull Text Maybe
    deriving Show
... I want to be able to define an AJAX 'command' that receives and/or returns a value of type Foo and have my Fay code deal in Foos without me having to write any de/serialization code. I understand that I won't be able to use any of Persistent's query functionality directly from my Fay code; I only mentioned Persistent in the title because I mentally associate everything in Model.hs and config/models with Persistent.
This currently is not supported; there are many features leveraged by Persistent which Fay does not support (e.g., Template Haskell). For now, it probably makes sense to have an intermediate data type which is shared by both Fay and Yesod and convert your Persistent data to/from that type.
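For the Foo model above, a hedged sketch of such an intermediate type (FooShared and its field names are made up; shared types in this setup derive Read/Typeable/Data and avoid Persistent's key and Text types):

{-# LANGUAGE DeriveDataTypeable #-}
-- fay-shared/SharedTypes.hs
module SharedTypes where

import Data.Data (Data, Typeable)
import Prelude

-- Mirror of the Foo model, using only types Fay understands.
data FooShared = FooShared
    { fooSharedBar               :: Int          -- the BarId, flattened to a plain Int
    , fooSharedTextThatCanBeNull :: Maybe String -- Text becomes String on the Fay side
    } deriving (Read, Typeable, Data)

On the Yesod side you would then write a small conversion (something like toShared :: Entity Foo -> FooShared) next to your handlers, so only server code ever touches the Persistent types.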