How to generate API documentation from docstrings, for functional code - python-3.x

All I want is to generate API docs from the function docstrings in my source code, presumably through Sphinx's autodoc extension, to form my lean API documentation. My code follows the functional programming paradigm, not OOP, as demonstrated below.
I'd probably, as a second step, add one or more documentation pages for the project, hosting things like introductory comments, code examples (leveraging doctest I guess) and of course linking to the API documentation itself.
What might be a straightforward flow for generating documentation from the docstrings here? Sphinx is a great and popular tool, yet I find its getting-started pages a bit dense.
What I've tried, from within my source directory:
$ mkdir documentation
$ sphinx-apidoc -f --ext-autodoc -o documentation .
No error messages, yet this doesn't find (or handle) the docstrings in my source files; it just creates an rst file per source, with contents like the following:
tokenizer module
================

.. automodule:: tokenizer
   :members:
   :undoc-members:
   :show-inheritance:
Basically, my source files look like the following, without much module ceremony or object-oriented content in them (I like functional programming, even though it's Python this time around). I've truncated the sample source file below, of course; it contains more functions than shown.
tokenizer.py
from hltk.util import clean, safe_get, safe_same_char

"""
Basic tokenization for text

not supported:
+ forms of pseudo ellipsis (...)

support for the above should be added only as part of an automata rewrite
"""

always_swallow_separators = u" \t\n\v\f\r\u200e"
always_separators = ",!?()[]{}:;"

def is_one_of(char, chars):
    '''
    Returns whether the input `char` is any of the characters of the string `chars`
    '''
    return chars.count(char)
Or would you recommend a different tool and flow for this use case?
Many thanks!

If you find Sphinx too cumbersome and particular to use for simple projects, try pdoc:
$ pdoc --html tokenizer.py
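(The --html flag comes from the older pdoc / pdoc3 interface; recent pdoc releases instead write static pages with something like pdoc -o docs tokenizer.py, or serve them live when run without -o.)

If you'd rather stay with Sphinx: sphinx-apidoc only writes those .rst stubs. The docstrings are pulled in later, when you run sphinx-build, and only if Sphinx can import your modules, so conf.py needs the source directory on sys.path and the autodoc extension enabled. A minimal conf.py sketch, assuming conf.py lives in documentation/ with the sources one directory up (project name and paths are placeholders, adjust to taste):

# documentation/conf.py -- minimal sketch for an autodoc-driven build
import os
import sys

# Let automodule import the sources that sit one directory above conf.py.
sys.path.insert(0, os.path.abspath('..'))

project = 'hltk'
extensions = [
    'sphinx.ext.autodoc',   # pull API docs from docstrings
    'sphinx.ext.doctest',   # optionally run doctest examples embedded in docstrings
]

Then build with something like sphinx-build -b html documentation documentation/_build, or let sphinx-quickstart (or sphinx-apidoc --full) generate conf.py and a Makefile for you and just add the sys.path line. The generated index.rst and its toctree are the natural place for the introductory pages, examples and links to the API pages you mention.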

Related

How to use helm-semantic-or-imenu for code navigation with type annotated python code

I would like to use the helm-semantic-or-imenu command to navigate the components of type-annotated Python code, but whatever code analyzer is used to identify the components doesn't seem to recognize type-annotated Python code. Functions with a return type annotation don't get recognized at all, and functions with annotated arguments show the types instead of the argument names in their signatures.
The main problem I have is that I do not properly understand the components that are involved in making this work (when it does work). Obviously it might help to somehow update the code analyzer, but in which project do I find that? helm? semantic? imenu? Or, as someone mentioned elsewhere with regard to code analysis, python.el? I could really use some help getting started on solving this. If the code analyzer is found in python.el, can I then modify it and make Emacs use a local version in preference to the installed one?
EDIT:
After making the initial post I finally made a breakthrough in figuring out where the components come from. I searched for python*.el across the whole file system and discovered these:
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python.elc
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python-wy.elc
I found the source for Emacs 26.2 and discovered that this wisent python.el is indeed responsible for parsing Python files for semantic. Internally it also uses python-wy for recognizing a large portion of the language components. But unfortunately that is where I hit a brick wall. I was hoping to be able to monkey-patch the function that recognizes a function definition via a regexp or something, but semantic actually solves the problem the right way: python-wy is auto-generated from a formal grammar definition file (in the Emacs git repository, admin/grammars/python.wy), and figuring out how to modify that is unfortunately well beyond my abilities.
The semantic python backend doesn't appear to parse type annotations correctly (and there hasn't been much recent development on those libraries as far as I can tell). Since helm-semantic-or-imenu favors semantic when it is active, you can disable semantic altogether for python buffers unless you use its other features (personally I only use it for C/C++).
When the mode-specific semantic libraries are loaded they set imenu-create-index-function and imenu-default-goto-function, causing imenu to use semantic instead of python.el's imenu function.
To disable semantic support for your Python files, you can customize semantic-new-buffer-setup-functions, adding entries only for the modes you want semantic support for, e.g. in your semantic hook (or alternatively via the customize UI):
(setq semantic-new-buffer-setup-functions
      '((c-mode . semantic-default-c-setup)
        (c++-mode . semantic-default-c-setup)
        (srecode-template-mode . srecode-template-setup-parser)
        (texinfo-mode . semantic-default-texi-setup)
        ;; etc.
        ;; (makefile-automake-mode . semantic-default-make-setup)
        ;; (makefile-mode . semantic-default-make-setup)
        ;; (makefile-gmake-mode . semantic-default-make-setup)
        ))

How to assemble a tool that parses reStructuredText with additional directives

Let's say I want to add a single new directive to the reStructuredText standard directives.
There is a how-to for creating reStructuredText directives which outlines how each directive is essentially derived from the Directive class or its subclasses. It also suggests that the interested reader should check how the standard directives are implemented and use that as a starting point.
So far, so good, but the tutorial actually stops at that point. How can that new directive be incorporated into a new tool; how am I supposed to actually use it?
The "Hacker's Guide", which outlines how the front-end tools are assembled out of a Reader, a Parser, a Transformer and a Writer, seems incomplete and contains absolutely no code.
So, how am I supposed to assemble a new tool out of the standard reader + my extended Parser which recognizes the new directive + possibly a new Writer which appropriately handles the new directive in the output it produces?
I would very much appreciate a nudge in the right direction.
The docutils.core.Publisher class is "A facade encapsulating the high-level logic of a Docutils system." According to its documentation:
Calling the publish_* convenience functions (or instantiating a Publisher object) with component names will result in default behavior. For custom behavior (setting component options), create custom component objects first, and pass them to publish_*/Publisher.
So, in order to build a tool which, for example, parses an .rst file and outputs html5 to the standard output, using standard components, one can write:
import sys
import docutils.core
import docutils.parsers.rst
import docutils.writers.html5_polyglot

public = docutils.core.publish_file(
    source=open("filename.rst", 'r'),
    parser=docutils.parsers.rst.Parser(),
    writer=docutils.writers.html5_polyglot.Writer())
Replace the standard components for the parser or the writer with your own extended classes and you're done.
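For the "new directive" part specifically, you usually don't even need your own Parser subclass: docutils lets you register a directive globally, and the standard rst parser will then recognize it. A minimal sketch along those lines (the directive name mybox and its behaviour are invented here purely for illustration):

import docutils.core
import docutils.parsers.rst
import docutils.writers.html5_polyglot
from docutils import nodes
from docutils.parsers.rst import Directive, directives

class MyBox(Directive):
    """Render the directive body as a note-style admonition."""
    has_content = True

    def run(self):
        text = '\n'.join(self.content)
        return [nodes.note('', nodes.paragraph(text=text))]

# The standard rst parser will now recognize ".. mybox::" blocks.
directives.register_directive('mybox', MyBox)

output = docutils.core.publish_file(
    source=open("filename.rst", 'r'),
    parser=docutils.parsers.rst.Parser(),
    writer=docutils.writers.html5_polyglot.Writer())

A custom Writer is only needed if run() returns node types the stock writers don't know about; as long as the directive produces standard docutils nodes, the existing writers handle the output unchanged.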

How to find a source file used in process.binding('..') in node source code?

I want to see the source code of the files mentioned in process.binding() statements in the NodeJS source code. I have seen similar questions on Stack Overflow, but most of the answers cover specific cases like fs, etc.
I want an explanation that would let me find any file mentioned in process.binding().
Since this question has been sitting here for a very long time, I decided to give finding the answer another try, and found a very nice presentation online.
Process.bindings presentation
To summarize: process.binding() binds a JavaScript object to a C++ object that exposes C++ functions to be executed from JavaScript; essentially it creates a JavaScript wrapper around the C++ functions to make them available from NodeJS.
To find which C++ function is bound to which JavaScript function, you can dig into the NodeJS source tree here: NodeJs source tree. The names map straightforwardly onto the core module names; look for the signature env->SetMethod(target, "<modulename>", <C++ function name>);
Example: env->SetMethod(target, "stat", Stat);

What is imported using the Gjs imports statement?

If I'm looking at Gjs code and see this line near the beginning:
const Gio = imports.gi.Gio;
How can I know what methods, constants, events, etc. are on 'Gio' (without doing a Google search)? Is there a file somewhere on my installation that contains that information?
Obviously I'm asking for any 'imports' statement, not Gio specifically.
Some imports statements import other JavaScript files:
imports.ui.* -> /usr/share/cinnamon/js/ui/*
imports.misc.* -> /usr/share/cinnamon/js/misc/*
imports.[cairo, dbus, format, gettext, jsUnit, lang, promise, signals] -> /usr/share/gjs-1.0/
For the imports.gi imports, GObject Introspection is used to allow gjs to use C libraries.
So to get information about those libraries I suggest looking at the GNOME reference manuals:
Gio
Gtk
St
But to conclude, there is a huge lack of documentation and examples. That makes it difficult to develop with gjs.
UPDATE
Here are some other useful links:
Seed documentation (Seed is another JavaScript implementation for GNOME)
Gjs examples
Since I got no answers I kept searching online and found this excellent blog post on how to generate HTML-formatted documentation from typelib files (such as Gio-2.0.typelib):
http://mathematicalcoffee.blogspot.com/2012/09/developing-gnome-shell-extensions_6.html

Haskell libmagic: how to use it?

I'm trying to write a program that will check the file type of a certain file, and I found a Haskell library that should do the trick.
The problem arises when I try to use it. I have no idea what I have to do or which functions to call. The library is full of cryptic functions with no examples, no tutorial, and no homepage.
Please help.
The package's documentation contains short descriptions of the important functions (of which there are not that many). For additional information about what the underlying C library does (and therefore also what the Haskell library does), have a look at libmagic's man page.
The basic usage should look similar to this (untested):
import Magic.Init
import Magic.Operations

main =
  do magic <- magicOpen []
     loadDefaultMagic magic
     magicFile magic "/my/file" >>= print
