I am trying to follow the ANTLR 4 reference book with the Python3 target, but I got stuck on the calculator example. The ANTLR 4 docs say
The Python implementation of ANTLR is as close as possible to the Java one, so you shouldn't find it difficult to adapt the examples for Python
but I don't get it yet.
The Java visitor has a generic visit method, but in Python I don't seem to have this method. I think that's because in Java the visit method is overloaded for the different context types. In Python we have visitProg(), visitAssign(), visitId(), etc., but then I can't write value = self.visit(ctx.expr()), because how would it know which visit method to call?
Or am I missing an instruction somewhere?
Looks like sometime in the last 3+ years this was fixed. I generated a parser from a grammar and targeted Python 3, using:
antlr4 -Dlanguage=Python3 -no-listener -visitor mygrammar.g4
It generates a visitor class that subclasses ParseTreeVisitor, which is a class in the antlr4-python3-runtime. Looking at the ParseTreeVisitor class, there is a visit method.
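So the book's EvalVisitor translates almost line for line. Here is a sketch, assuming a grammar named Expr with an assign rule (the names follow the book's calculator example) generated by the command above:

from ExprVisitor import ExprVisitor  # generated by the -visitor flag

class EvalVisitor(ExprVisitor):
    def __init__(self):
        self.memory = {}

    # ID '=' expr NEWLINE
    def visitAssign(self, ctx):
        name = ctx.ID().getText()
        value = self.visit(ctx.expr())  # generic visit inherited from ParseTreeVisitor
        self.memory[name] = value
        return value

No overloads are needed: the generic visit simply calls tree.accept(self), and each generated context class routes accept back to the matching visitXxx method.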
For those interested in working through The Definitive ANTLR 4 Reference using Python, the ANTLR4 documentation points you towards this github repo:
https://github.com/jszheng/py3antlr4book
The Python2/3 targets do not yet have a visitor implemented. I tried to implement it myself, and a pull request has been sent to the ANTLR maintainer to see if I did it correctly. You can follow the pull request here: https://github.com/antlr/antlr4-python3/pull/6
Here is some Python code that illustrates the problem:
from pyscipopt import Model
master = Model("master LP")
relax = master.relax()
This generates the error:
builtins.AttributeError: 'pyscipopt.scip.Model' object has no attribute 'relax'
These Python statements are taken from the SCIP documentation, in the section "Column generation method for the cutting stock problem".
Note: I am using Python 3.6.5 and PySCIPOpt 3.0.2.
The relax method does not exist in PySCIPOpt.
What you link to is a book that, I think, was originally written for Gurobi's Python interface. The authors started translating it to SCIP, but that work is not finished yet; you can even see this in the Todo at the beginning of the page you link to.
In any case, this is not SCIP's documentation.
Check the PySCIPOpt GitHub page, and if you have further problems or questions, please open an issue there.
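That said, if what you are after is the LP relaxation, one possible workaround (a sketch of my own, not something from the SCIP docs) is to change every integer or binary variable to continuous with chgVarType before solving:

from pyscipopt import Model

master = Model("master LP")
x = master.addVar(vtype="I", name="x")  # an integer variable for illustration
master.setObjective(x, "maximize")
master.addCons(2 * x <= 7)

# Manual relaxation: make every integer/binary variable continuous.
for var in master.getVars():
    master.chgVarType(var, "C")

master.optimize()
print(master.getObjVal())  # 3.5 for the relaxed LP, rather than 3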
I was wondering if anyone could point me to a document that covers the full syntax of the CPLEX library for Python 2.7, preferably with examples showing the arguments.
What I am looking for is a way to display just one constraint, or a range of constraints, whereas the following command prints the whole model:
print(mdl.export_to_string())
I need to see only the constraints in a given range, e.g. 10 to 20.
Thanks,
The documentation for docplex.mp.model.Model is at http://ibmdecisionoptimization.github.io/docplex-doc/mp/docplex.mp.model.html. And the rest of the documentation is available from http://ibmdecisionoptimization.github.io/docplex-doc/.
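For looking at just a slice of the constraints rather than exporting the whole model, Model exposes iter_constraints(), so you can filter by index yourself. A small sketch (the model here is made up purely for illustration):

from docplex.mp.model import Model

mdl = Model("demo")
xs = mdl.continuous_var_list(30, name="x")
for i in range(30):
    mdl.add_constraint(xs[i] >= i, ctname="c%d" % i)

# Print only the constraints with index 10 through 20.
for i, ct in enumerate(mdl.iter_constraints()):
    if 10 <= i <= 20:
        print(ct)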
I would like to use the helm-semantic-or-imenu command to navigate the components of type-annotated Python code, but whatever code analyzer is used to identify the components doesn't seem to recognize type-annotated Python. Functions with a return type annotation don't get recognized at all, and functions with annotated arguments show the types instead of the argument names in their signatures.
The main problem I have is that I do not properly understand which components are involved in making this work (when it does work). Obviously it might help to somehow update the code analyzer, but in which project do I find it? helm? semantic? imenu? Or, as someone mentioned elsewhere with regard to code analysis, python.el? I could really use some help getting started on solving this. If the code analyzer is found in python.el, can I modify it and make Emacs prefer a local version over the installed one?
EDIT:
After making the initial post I finally made a breakthrough in figuring out where the components come from. I searched for python*.el across the whole file system and discovered these:
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python.elc
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python-wy.elc
I found the source for Emacs 26.2 and discovered that this python.el is indeed responsible for parsing Python files for semantic. It internally uses python-wy for recognizing a large portion of the language components. Unfortunately, that is where I hit a brick wall. I was hoping to monkey-patch the function that recognizes a function definition via a regexp or something, but semantic actually solves the problem the right way: python-wy is auto-generated from a formal grammar definition file (admin/grammars/python.wy in the Emacs git repository), and figuring out how to modify that is unfortunately well beyond my abilities.
The semantic Python backend doesn't appear to parse type annotations correctly (and there hasn't been much recent development on those libraries as far as I can tell). Since helm-semantic-or-imenu favors semantic when it is active, you can disable semantic altogether for Python buffers unless you use its other features (personally, I only use it for C/C++).
When the semantic mode-specific libraries are loaded they set imenu-create-index-function and imenu-default-goto-function, causing imenu to use semantic instead of python.el's imenu function.
To disable semantic support for your Python files you can customize semantic-new-buffer-setup-functions, adding entries only for the modes you want semantic support for, e.g. in your semantic hook (or alternatively with the customize UI):
(setq semantic-new-buffer-setup-functions
      '((c-mode . semantic-default-c-setup)
        (c++-mode . semantic-default-c-setup)
        (srecode-template-mode . srecode-template-setup-parser)
        (texinfo-mode . semantic-default-texi-setup)
        ;; etc.
        ;; (makefile-automake-mode . semantic-default-make-setup)
        ;; (makefile-mode . semantic-default-make-setup)
        ;; (makefile-gmake-mode . semantic-default-make-setup)
        ))
All I want is to generate API docs from the function docstrings in my source code, presumably through Sphinx's autodoc extension, to form my lean API documentation. My code follows the functional programming paradigm, not OOP, as demonstrated below.
I'd probably, as a second step, add one or more documentation pages for the project, hosting things like introductory comments, code examples (leveraging doctest I guess) and of course linking to the API documentation itself.
What might be a straightforward flow for generating documentation from docstrings here? Sphinx is a great, popular tool, yet I find its getting-started pages a bit dense.
What I've tried, from within my source directory:
$ mkdir documentation
$ sphinx-apidoc -f --ext-autodoc -o documentation .
There were no error messages, yet this doesn't find (or handle) the docstrings in my source files; it just creates one rst file per source file, with contents like the following:
tokenizer module
================
.. automodule:: tokenizer
:members:
:undoc-members:
:show-inheritance:
Basically, my source files look like the following, without much module ceremony or object-oriented content in them (I like functional programming, even though it's Python this time around). The sample source file below is truncated, of course; it contains more functions than shown.
tokenizer.py
from hltk.util import clean, safe_get, safe_same_char

"""
Basic tokenization for text

not supported:
+ forms of pseudo-ellipsis (...)
support for the above should be added only as part of an automata rewrite
"""

always_swallow_separators = u" \t\n\v\f\r\u200e"
always_separators = ",!?()[]{}:;"

def is_one_of(char, chars):
    '''
    Returns whether the input `char` is any of the characters of the string `chars`
    '''
    return chars.count(char)
Or would you recommend a different tool and flow for this use case?
Many thanks!
If you find Sphinx too cumbersome and fussy for simple projects, try pdoc:
$ pdoc --html tokenizer.py
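If you would rather stay with Sphinx, note that sphinx-apidoc only writes the stub rst files shown above; the docstrings are pulled in later, when sphinx-build runs and autodoc imports your modules. That requires a conf.py (sphinx-apidoc --full can generate one) that puts the source directory on sys.path. A minimal sketch, with the path and project name as assumptions:

# conf.py -- lives in the documentation directory;
# the sources are assumed to be one level up
import os
import sys
sys.path.insert(0, os.path.abspath(".."))

project = "hltk"                     # assumed project name
extensions = ["sphinx.ext.autodoc"]  # pulls in docstrings at build time

After that, something like sphinx-build -b html documentation documentation/_build should render pages with the docstrings included.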
I am using ANTLR 4 with Java ... I implemented my own listener and save the errors produced by ANTLR 4 in a list ... and I build my AST ... My question is:
Is there any method to know whether my listener recorded an error, so that I can stop building the AST?
You can try implementing your own error strategy and simply return whatever error is caught, breaking out of the current action back to the method where you output your AST.
See these links for help:
Default error strategy
The antlr guy's guide
(my recommended reading for this issue)
Once you implement your own error strategy you could simply have it keep a list of all the errors and return those, or use a null check to decide whether to continue on to the next step of your parsing.
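The question is about Java, but the same idea is easy to show with ANTLR's Python runtime, whose API mirrors the Java one. A sketch using a collecting error listener (MyGrammarLexer, MyGrammarParser and the prog start rule are placeholders for your generated classes):

from antlr4 import CommonTokenStream, InputStream
from antlr4.error.ErrorListener import ErrorListener

# Hypothetical generated classes; substitute your own grammar's output.
from MyGrammarLexer import MyGrammarLexer
from MyGrammarParser import MyGrammarParser

class CollectingErrorListener(ErrorListener):
    # Collects syntax errors instead of printing them to stderr.
    def __init__(self):
        super().__init__()
        self.errors = []

    def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
        self.errors.append("line %d:%d %s" % (line, column, msg))

def parse(text):
    lexer = MyGrammarLexer(InputStream(text))
    parser = MyGrammarParser(CommonTokenStream(lexer))
    listener = CollectingErrorListener()
    parser.removeErrorListeners()
    parser.addErrorListener(listener)
    tree = parser.prog()  # assumed start rule
    if listener.errors:   # stop here instead of building the AST
        raise SyntaxError("\n".join(listener.errors))
    return tree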
Hope this helps in some way, and good luck with your project.