undefined symbol: __atomic_exchange_8 - python-3.x

I'm trying to run the Google Assistant on my Raspberry Pi, following the steps at https://developers.google.com/assistant/sdk/guides/service/python/embed/run-sample.
Everything works fine until I activate the Google Assistant with the command:
googlesamples-assistant-pushtotalk --project-id my-dev-project --device-model-id my-model
I'm getting the following ImportError:
Traceback (most recent call last):
  File "/home/pi/env/bin/googlesamples-assistant-pushtotalk", line 5, in <module>
    from googlesamples.assistant.grpc.pushtotalk import main
  File "/home/pi/env/lib/python3.9/site-packages/googlesamples/assistant/grpc/pushtotalk.py", line 28, in <module>
    import grpc
  File "/home/pi/env/lib/python3.9/site-packages/grpc/__init__.py", line 22, in <module>
    from grpc import _compression
  File "/home/pi/env/lib/python3.9/site-packages/grpc/_compression.py", line 15, in <module>
    from grpc._cython import cygrpc
ImportError: /home/pi/env/lib/python3.9/site-packages/grpc/_cython/cygrpc.cpython-39-arm-linux-gnueabihf.so: undefined symbol: __atomic_exchange_8
Any ideas on how to fix this?

I just ended up here since I ran into the same problem (on a different project), also involving Python 3.9 and cygrpc on an RPi 4 with a recent Raspbian Lite (32-bit).
While I don't have a solution, here are my guesses:
Formerly __atomic_exchange_8 was defined in /lib/arm-linux-gnueabihf/libgcc_s.so.1, but now it seems to be defined in libatomic:
$ grep __atomic_exchange_8 /lib/arm-linux-gnueabihf/libatomic.so.1
grep: /lib/arm-linux-gnueabihf/libatomic.so.1: binary file matches
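If you just want to confirm that the missing symbol really is provided by libatomic, or need a stop-gap until grpcio is rebuilt, you can force-load libatomic from Python before grpc is imported. This is a rough sketch, not a proper fix, and it assumes libatomic.so.1 is on the default loader search path:
# preload_atomic.py - workaround sketch: make libatomic's symbols globally
# visible before the cygrpc extension module gets loaded
import ctypes
ctypes.CDLL("libatomic.so.1", mode=ctypes.RTLD_GLOBAL)
import grpc  # should no longer fail with undefined symbol __atomic_exchange_8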
EDIT:
Solved it. I was looking at the patch which tried to solve the problem two years ago:
https://github.com/grpc/grpc/pull/20514/commits/b912fc7d8d401bb65b3147ee77d03beaa3d46038
I figured their test check_linker_need_libatomic() might be broken, patched it to always return True, and the problem was fixed.
I had tried earlier to fix it by adding CFLAGS='-latomic' CPPFLAGS='-latomic', but that didn't help.
Here's my tiny workaround (not a fix!) for today's grpc git HEAD:
root@mypi:/home/pi/CODE/grpc# git diff
diff --git a/setup.py b/setup.py
index 1a72c5c668..60b7705cd2 100644
--- a/setup.py
+++ b/setup.py
@@ -197,6 +197,7 @@ ENABLE_DOCUMENTATION_BUILD = _env_bool_value(
 def check_linker_need_libatomic():
     """Test if linker on system needs libatomic."""
+    return True
     code_test = (b'#include <atomic>\n' +
                  b'int main() { return std::atomic<int64_t>{}; }')
     cxx = os.environ.get('CXX', 'c++')
diff --git a/tools/distrib/python/grpcio_tools/setup.py b/tools/distrib/python/grpcio_tools/setup.py
index 6b842f56b9..8d5f581ac7 100644
--- a/tools/distrib/python/grpcio_tools/setup.py
+++ b/tools/distrib/python/grpcio_tools/setup.py
@@ -85,6 +85,7 @@ BUILD_WITH_STATIC_LIBSTDCXX = _env_bool_value(
 def check_linker_need_libatomic():
     """Test if linker on system needs libatomic."""
+    return True
     code_test = (b'#include <atomic>\n' +
                  b'int main() { return std::atomic<int64_t>{}; }')
     cxx = os.environ.get('CXX', 'c++')
root@mypi:/home/pi/CODE/grpc#
EDIT:
As a quick test, cygrpc.cpython-39-arm-linux-gnueabihf.so should now depend on libatomic:
pi@mypi:~/CODE/grpc $ ldd /usr/local/lib/python3.9/dist-packages/grpc/_cython/cygrpc.cpython-39-arm-linux-gnueabihf.so
linux-vdso.so.1 (0xbeef7000)
/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so => /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so (0xb698b000)
libpthread.so.0 => /lib/arm-linux-gnueabihf/libpthread.so.0 (0xb695f000)
libatomic.so.1 => /lib/arm-linux-gnueabihf/libatomic.so.1 (0xb6946000)
libstdc++.so.6 => /lib/arm-linux-gnueabihf/libstdc++.so.6 (0xb67be000)
libm.so.6 => /lib/arm-linux-gnueabihf/libm.so.6 (0xb674f000)
libc.so.6 => /lib/arm-linux-gnueabihf/libc.so.6 (0xb65fb000)
/lib/ld-linux-armhf.so.3 (0xb6fcc000)
libgcc_s.so.1 => /lib/arm-linux-gnueabihf/libgcc_s.so.1 (0xb65ce000)
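For reference, here is a rough standalone re-creation of the probe that check_linker_need_libatomic() performs, so you can see on your own machine whether the test program links only when -latomic is given. The test program is the one from grpc's setup.py; the temporary-file handling and compiler invocation are my own sketch and assume a working c++ compiler:
# probe_libatomic.py - standalone sketch of the libatomic linker probe
import os
import subprocess
import tempfile

CODE_TEST = (b'#include <atomic>\n'
             b'int main() { return std::atomic<int64_t>{}; }')
CXX = os.environ.get('CXX', 'c++')

def links_ok(extra_link_args):
    """Return True if the test program compiles and links with the given extra args."""
    with tempfile.NamedTemporaryFile(suffix='.cc', delete=False) as src:
        src.write(CODE_TEST)
        src_path = src.name
    try:
        # The library must come *after* the source file on the command line,
        # which is what the upstream fix corrected.
        cmd = [CXX, src_path, '-o', os.devnull] + extra_link_args
        return subprocess.call(cmd, stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL) == 0
    finally:
        os.remove(src_path)

print('links without -latomic:', links_ok([]))
print('links with -latomic:   ', links_ok(['-latomic']))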

This works for me on an RPi Zero + Bullseye + Python 3.9:
pip3 uninstall -y grpcio grpcio-tools
sudo apt install -y python3-grpcio python3-grpc-tools
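If you go this route, note that the apt packages land in the system site-packages, so a virtualenv created without --system-site-packages will not see them. A quick check that the import now works, assuming you run it with an interpreter that can see the distribution packages:
import grpc
print(grpc.__version__)  # should print the Debian-packaged grpcio version with no ImportError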

EDIT: Update to gRPC v1.44.0. The issue has been fixed there; see the explanation below in the old answer.
There was a problem with the order of the parameters passed to the compiler when compiling some test code whose result is used to determine whether libatomic needs to be linked.
The issue will be fixed in the next release of grpc. If they maintain the same schedule as previous releases, it should be v1.44.0, which should come out some time next month.
In the meantime you can git cherry-pick the proper fix and build grpc yourself.

Related

Failed build Yocto Gatesgarth "extensible SDK" (eSDK) - populate_sdk_ext fail

I'm working with Yocto "Gatesgarth" on a custom board based on i.MX6ULL.
I'm facing some problems in generating the extensible SDK (eSDK).
The generation of the normal SDK completes correctly.
Some details below.
Details of system:
Board based on NXP i.MX6ULL
Yocto version "Gatesgarth 3.2.4 (May 2021)"
BB_VERSION = "1.48.0",
NATIVELSBSTRING = "ubuntu-18.04"
DISTRO_VERSION = "5.10-gatesgarth"
meta-qt5 is present
Build environment based on Docker Container
Environment Variable:
File: conf/local.conf
SDKMACHINE ?= 'x86_64'
File: test-image-mx6ull.bb
inherit core-image
inherit populate_sdk_qt5
inherit populate_sdk_ext
SDK_EXT_TYPE = "minimal"
SDK_INCLUDE_TOOLCHAIN = "1"
SDK_INCLUDE_PKGDATA = "0"
SDK_INCLUDE_NATIVESDK = "1"
The command executed is :
bitbake test-image-mx6ull -c populate_sdk_ext
Output:
ERROR: test-image-mx6ull-1.0-r0 do_populate_sdk_ext: Error executing a python function in exec_python_func() autogenerated:
The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
0001:
*** 0002:do_populate_sdk_ext(d)
0003:
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 720, function: do_populate_sdk_ext
0716: bb.fatal('The extensible SDK can currently only be built for the same architecture as the machine being built on - SDK_ARCH is set to %s (likely via setting
SDKMACHINE) which is different from the architecture of the build machine (%s). Unable to continue.' % (d.getVar('SDK_ARCH'), d.getVar('BUILD_ARCH')))
0717:
0718: d.setVar('SDK_INSTALL_TARGETS', get_sdk_install_targets(d))
0719: if d.getVar('SDK_INCLUDE_BUILDTOOLS') == '1':
*** 0720: buildtools_fn = get_current_buildtools(d)
0721: else:
0722: buildtools_fn = None
0723: d.setVar('SDK_REQUIRED_UTILITIES', get_sdk_required_utilities(buildtools_fn, d))
0724: d.setVar('SDK_BUILDTOOLS_INSTALLER', buildtools_fn)
File: '/yocto/sources/poky/meta/classes/populate_sdk_ext.bbclass', lineno: 556, function: get_current_buildtools
0552: import glob
0553: btfiles = glob.glob(os.path.join(d.getVar('SDK_DEPLOY'), '*-buildtools-nativesdk-standalone-*.sh'))
0554: btfiles.sort(key=os.path.getctime)
0555: print("MY-DEBUG - btfiles = {} - SDK_DEPLOY = {}".format(btfiles, d.getVar('SDK_DEPLOY')))
*** 0556: return os.path.basename(btfiles[-1])
0557:
0558:def get_sdk_required_utilities(buildtools_fn, d):
0559: """Find required utilities that aren't provided by the buildtools"""
0560: sanity_required_utilities = (d.getVar('SANITY_REQUIRED_UTILITIES') or '').split()
Exception: IndexError: list index out of range
DEBUG: Python function do_populate_sdk_ext finished
MY-DEBUG - btfiles = [] - SDK_DEPLOY = /yocto/build-mX6ull/tmp/deploy/sdk
Question:
At line 553 the list btfiles should be filled,
but it is empty, so line 556 raises the exception.
I have no idea what is wrong, what I have forgotten, or which Yocto environment variables need to be set up to make this work correctly.
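For what it's worth, the failing lookup can be reproduced outside bitbake to see whether any buildtools installer exists in the deploy directory at all. This is only a diagnostic sketch; the path below is the SDK_DEPLOY value printed by the debug line above and may differ on your build:
# check_buildtools.py - diagnostic sketch mirroring get_current_buildtools()
import glob
import os

sdk_deploy = "/yocto/build-mX6ull/tmp/deploy/sdk"  # SDK_DEPLOY from the debug output
pattern = os.path.join(sdk_deploy, "*-buildtools-nativesdk-standalone-*.sh")
btfiles = sorted(glob.glob(pattern), key=os.path.getctime)
print(btfiles if btfiles else "no buildtools installer found in " + sdk_deploy)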
Hope you are doing well.
I had a similar issue where I couldn't populate the eSDK;
it's all in the GLIBC version.
Kindly update your GLIBC version.
In my case I had to update the GLIBC version to 2.33 in the "yocto-uninative.inc" file. It worked for me!

UnicodeDecodeError: invalid start byte in METADATA file at path:

I see that several Python package-related files have gibberish at their end.
Due to this, I am unable to do several pip operations (even basic ones like "pip list").
(Usually I use conda, by the way.)
For example, when I run pip list, I get the following error:
ERROR: Exception:
Traceback (most recent call last):
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\cli\base_command.py", line 173, in _main
    status = self.run(options, args)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\commands\list.py", line 179, in run
    self.output_package_listing(packages, options)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\commands\list.py", line 255, in output_package_listing
    data, header = format_for_columns(packages, options)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\commands\list.py", line 307, in format_for_columns
    row = [proj.raw_name, str(proj.version)]
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\metadata\base.py", line 163, in raw_name
    return self.metadata.get("Name", self.canonical_name)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\metadata\pkg_resources.py", line 96, in metadata
    return get_metadata(self._dist)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\utils\packaging.py", line 48, in get_metadata
    metadata = dist.get_metadata(metadata_name)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 1424, in get_metadata
    return value.decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfd in position 14097: invalid start byte in METADATA file at path: c:\users\shan_jaffry\miniconda3\envs\sql_version\lib\site-packages\hupper-1.10.2.dist-info\METADATA
I went into the METADATA file and found the following gibberish at the end. This (I found) has happened to several other files too, i.e. the ends of the files have been overwritten with gibberish and the actual content is gone. Any help?
> 0.1 (2016-10-21)
> ================
> -
> - Initial rele9ýl·øA
I found that by manually going to the site-packages folder, removing the two folders hupper and hupper-1.10.2.dist-info, and then installing hupper again using "pip install hupper", the problem was solved.
The issue was that the hupper package (and hupper-1.10.2.dist-info) was corrupted, hence uninstalling and re-installing helped.
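If you are not sure which packages are affected, a small scan of the environment's dist-info metadata can list every METADATA file that is not valid UTF-8. This is a hedged sketch; it only checks the standard purelib site-packages location of the current interpreter:
# find_corrupted_metadata.py - list dist-info METADATA files that fail UTF-8 decoding
import pathlib
import sysconfig

site_packages = pathlib.Path(sysconfig.get_paths()["purelib"])
for metadata in sorted(site_packages.glob("*.dist-info/METADATA")):
    try:
        metadata.read_bytes().decode("utf-8")
    except UnicodeDecodeError as exc:
        print(f"corrupted: {metadata} ({exc})")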

Bazel Error After Upgrading Nodejs Rules - ERROR: defs.bzl has been removed from build_bazel_rules_nodejs

After upgrading build_bazel_rules_nodejs from 0.42.2 to 1.0.1 I get this error:
ERROR: /home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/build_bazel_rules_nodejs/defs.bzl:19:5: Traceback (most recent call last):
  File "/home/flolu/Desktop/minimal-bazel-monorepo/services/server/src/BUILD", line 76
    nodejs_image(name = "server", <2 more arguments>)
  File "/home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/io_bazel_rules_docker/nodejs/image.bzl", line 112, in nodejs_image
    nodejs_binary(name = binary, <2 more arguments>)
  File "/home/flolu/.cache/bazel/_bazel_flolu/698f7adad10ea020bcdb85216703ce08/external/build_bazel_rules_nodejs/defs.bzl", line 19, in nodejs_binary
    fail(<1 more arguments>)
ERROR: defs.bzl has been removed from build_bazel_rules_nodejs
Please update your load statements to use index.bzl instead.
See https://github.com/bazelbuild/rules_nodejs/wiki#migrating-off-build_bazel_rules_nodejsdefsbzl for help.
ERROR: error loading package 'services/server/src': Package 'services/server/src' contains errors
INFO: Elapsed time: 0.119s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (1 packages loaded)
FAILED: Build did NOT complete successfully (1 packages loaded)
Line 76 in the error refers to this part of the BUILD file:
load("#io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "server",
data = [":lib"],
entry_point = ":index.ts",
)
But there is no defs.bzl. So I am confused by the error.
So in detail I have upgraded from
http_archive(
    name = "build_bazel_rules_nodejs",
    sha256 = "16fc00ab0d1e538e88f084272316c0693a2e9007d64f45529b82f6230aedb073",
    urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/0.42.2/rules_nodejs-0.42.2.tar.gz"],
)
to
http_archive(
    name = "build_bazel_rules_nodejs",
    sha256 = "e1a0d6eb40ec89f61a13a028e7113aa3630247253bcb1406281b627e44395145",
    urls = ["https://github.com/bazelbuild/rules_nodejs/releases/download/1.0.1/rules_nodejs-1.0.1.tar.gz"],
)
You can recreate the error by cloning this repo: https://github.com/flolude/minimal-bazel-monorepo/tree/48add7ddcad4d25e361e1c7f7f257cf916a797b2 and running
bazel test //services/server/src:test
There are some breaking changes between those versions of build_bazel_rules_nodejs. Namely, this load statement:
load("@build_bazel_rules_nodejs//:defs.bzl", <whatever>)
needs to become this:
load("@build_bazel_rules_nodejs//:index.bzl", <whatever>)
You also need to update io_bazel_rules_docker to at least v0.13.0; from the release notes, that is the version compatible with rules_nodejs 1.0.1: https://github.com/bazelbuild/rules_docker/releases/

os.read on inotify file descriptor: reading 32 bytes works but 31 raises an exception

I'm writing a program that should respond to file changes using inotify. The below skeleton program works as I expect...
# test.py
import asyncio
import ctypes
import os

IN_CLOSE_WRITE = 0x00000008

async def main(loop):
    libc = ctypes.cdll.LoadLibrary('libc.so.6')
    fd = libc.inotify_init()
    os.mkdir('directory-to-watch')
    wd = libc.inotify_add_watch(fd, 'directory-to-watch'.encode('utf-8'), IN_CLOSE_WRITE)
    loop.add_reader(fd, handle, fd)
    with open(f'directory-to-watch/file', 'wb') as file:
        pass

def handle(fd):
    event_bytes = os.read(fd, 32)
    print(event_bytes)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
... in that it outputs...
b'\x01\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00file\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
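For reference, those 32 bytes are exactly one event, assuming the standard struct inotify_event layout: a 16-byte header (wd, mask, cookie, len) followed by len bytes of NUL-padded name. A quick illustrative decode of the output above:
import struct

data = b'\x01\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00file\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
wd, mask, cookie, name_len = struct.unpack_from('iIII', data)  # 16-byte header
name = data[16:16 + name_len].rstrip(b'\x00').decode()
print(wd, hex(mask), cookie, name_len, name)  # 1 0x8 0 16 file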
However, if I change it to attempt to read 31 bytes...
event_bytes = os.read(fd, 31)
... then it raises an exception...
Traceback (most recent call last):
  File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
    self._context.run(self._callback, *self._args)
  File "/t.py", line 19, in handle
    event_bytes = os.read(fd, 31)
OSError: [Errno 22] Invalid argument
... and similarly for all numbers smaller than 31 that I have tried, including 1 byte.
Why is this? I would have thought it should be able to attempt to read any number of bytes, and just return whatever is in the buffer, up to the length given by the second argument of os.read.
I'm running this on Alpine Linux 3.10 in a Docker container on macOS, with a very basic Dockerfile:
FROM alpine:3.10
RUN apk add --no-cache python3
COPY test.py /
and running it by
docker build . -t test && docker run -it --rm test python3 /test.py
It's because inotify is written to only allow reads that can return information about the next event. From http://man7.org/linux/man-pages/man7/inotify.7.html:
The behavior when the buffer given to read(2) is too small to return
information about the next event depends on the kernel version: in
kernels before 2.6.21, read(2) returns 0; since kernel 2.6.21,
read(2) fails with the error EINVAL.
and from https://github.com/torvalds/linux/blob/f1a3b43cc1f50c6ee5ba582f2025db3dea891208/include/uapi/asm-generic/errno-base.h#L26
#define EINVAL 22 /* Invalid argument */
which presumably maps to the Python OSError with Errno 22.
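In practice that means the buffer passed to os.read should be at least big enough for one whole event (the fixed 16-byte header plus the name). A sketch of reading with a generous buffer and unpacking any events it contains, again assuming the standard struct inotify_event layout (int wd, uint32 mask, uint32 cookie, uint32 len, then len bytes of NUL-padded name):
import os
import struct

EVENT_HEADER = struct.Struct('iIII')  # wd, mask, cookie, len

def read_events(fd):
    buf = os.read(fd, 4096)  # large enough for several queued events
    offset = 0
    while offset < len(buf):
        wd, mask, cookie, name_len = EVENT_HEADER.unpack_from(buf, offset)
        offset += EVENT_HEADER.size
        name = buf[offset:offset + name_len].rstrip(b'\x00').decode()
        offset += name_len
        yield wd, mask, cookie, name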

Sphinx documentation and links to Markdown

I'm trying to use Sphinx to build some documentation from Markdown source. My conf.py is as follows...
conf.py
from recommonmark.parser import CommonMarkParser
project = 'DS'
copyright = '2018, DS'
author = 'DS, Work'
version = ''
release = ''
extensions = []
templates_path = ['_templates']
source_suffix = ['.rst', '.md']
master_doc = 'index'
language = None
exclude_patterns = []
pygments_style = 'sphinx'
html_theme = 'classic'
html_static_path = ['_static']
source_parsers = {
    '.md': CommonMarkParser,
}
htmlhelp_basename = 'DSDocumentationdoc'
latex_elements = {
}
latex_documents = [
    (master_doc, 'DSDocumentation.tex', 'DS Documentation',
     'DS, Work', 'manual'),
]
man_pages = [
    (master_doc, 'dsdocumentation', 'DS Documentation',
     [author], 1)
]
texinfo_documents = [
    (master_doc, 'DSDocumentation', 'DS Documentation',
     author, 'DSDocumentation', 'One line description ofproject.',
     'Miscellaneous'),
]
index.rst
Welcome to DS Documentation!
======================================
The following documentation is produced and maintained by the Data Science team.
Contents:
.. toctree::
   :maxdepth: 2
   :glob:

   README.md
   documentation.md
   getting_started/*
   how-tos/*
   statistics_data_visualisation.md
The documents build and HTML output is generated; however, README.md has links to other Markdown documents in the two sub-directories, such as the following...
... [this document](./getting_started/setting_your_machine_up.md)...
...but in the translated README.html the target has not been converted to the corresponding HTML target, as it has been recognised as reference external...
...<a class="reference external" href="./getting_started/setting_your_machine_up.md">this document</a>...
...I was half expecting/hoping it would be output as reference internal with the file extension converted appropriately...
...<a class="reference internal" href="./getting_started/setting_your_machine_up.html">this document</a>...
...so that links worked in the same vein as the Table of Contents does in the sidebar.
Any suggestions as to whether this can be achieved would be appreciated.
Cheers.
EDIT
Trying out the solution suggested by @waylan, I have added the following to my conf.py to enable enable_auto_doc_ref...
def setup(app):
    app.add_config_value('recommonmark_config', {
        'enable_auto_doc_ref': True,
    }, True)
    app.add_transform(AutoStructify)
...and on running make html I get the following error.....
❱ cat /tmp/sphinx-err-57rejer3.log
# Sphinx version: 1.8.0
# Python version: 3.6.6 (CPython)
# Docutils version: 0.14
# Jinja2 version: 2.10
# Last messages:
# building [mo]: targets for 0 po files that are out of date
#
# building [html]: targets for 16 source files that are out of date
#
# updating environment:
#
# 16 added, 0 changed, 0 removed
#
# reading sources... [ 6%] README
#
# Loaded extensions:
# sphinx.ext.mathjax (1.8.0) from /home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/ext/mathjax.py
# alabaster (0.7.11) from /home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/alabaster/__init__.py
Traceback (most recent call last):
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/cmd/build.py", line 304, in build_main
    app.build(args.force_all, filenames)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/application.py", line 341, in build
    self.builder.build_update()
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 347, in build_update
    len(to_build))
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 360, in build
    updated_docnames = set(self.read())
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 468, in read
    self._read_serial(docnames)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 490, in _read_serial
    self.read_doc(docname)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 534, in read_doc
    doctree = read_doc(self.app, self.env, self.env.doc2path(docname))
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/io.py", line 318, in read_doc
    pub.publish()
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/docutils/core.py", line 218, in publish
    self.apply_transforms()
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/docutils/core.py", line 199, in apply_transforms
    self.document.transformer.apply_transforms()
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/transforms/__init__.py", line 90, in apply_transforms
    Transformer.apply_transforms(self)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/docutils/transforms/__init__.py", line 171, in apply_transforms
    transform.apply(**kwargs)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 325, in apply
    self.traverse(self.document)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 297, in traverse
    self.traverse(child)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 297, in traverse
    self.traverse(child)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 297, in traverse
    self.traverse(child)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 287, in traverse
    newnode = self.find_replace(c)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 267, in find_replace
    newnode = self.auto_doc_ref(node)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 175, in auto_doc_ref
    return self.state_machine.run_role('doc', content=content)
  File "/home/neil.shephard@samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/states.py", line 134, in run_role
    content=content)
TypeError: 'NoneType' object is not callable
I've looked through the last two calls and I think this might be down to content not being set, which may be something to do with my index.rst but I'm really out of my depth here.
The recommonmark documentation suggests enabling AutoStructify by adding the following to your conf.py file:
from recommonmark.transform import AutoStructify

github_doc_root = 'https://github.com/rtfd/recommonmark/tree/master/doc/'

def setup(app):
    app.add_config_value('recommonmark_config', {
        'url_resolver': lambda url: github_doc_root + url,
        'auto_toc_tree_section': 'Contents',
    }, True)
    app.add_transform(AutoStructify)
This will give you the following features:
enable_auto_toc_tree: whether to enable the Auto Toc Tree feature.
auto_toc_tree_section: when enabled, Auto Toc Tree will only be enabled on the section that matches the title.
enable_auto_doc_ref: whether to enable the Auto Doc Ref feature.
enable_math: whether to enable the Math Formula feature.
enable_inline_math: whether to enable Inline Math.
enable_eval_rst: whether embedded reStructuredText is enabled.
url_resolver: a function that maps an existing relative position in the document to an HTTP link.
Of note is the Auto Doc Ref feature:
It is common to refer to another document page in one document. We
usually use reference to do that. AutoStructify will translate these
reference block into a structured document reference. For example
[API Reference](api_ref.md)
will be translated to the AST of following reStructuredText code
:doc:`API Reference </api_ref>`
And it will be rendered as API Reference
Why is this necessary? Because, unlike Rst, Markdown does not have any knowledge of anything outside of the given document and has no support for Rst style directives. Therefore, there is no mechanism to transform a URL.
Instead, AutoStructify waits until after the recommonmark bridge converts the Markdown to Sphinx's underlying document structure (docutils document object), then it runs a series of transformers on it to provide limited Rst like functionality. Even with AutoStructify, you will never get full feature support when using Markdown. That would require Markdown to have native support for directives, which is not likely to ever happen.
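Putting the pieces together, a conf.py along these lines wires CommonMarkParser and AutoStructify together so that relative .md links are turned into :doc: references. This is only a sketch based on the snippets quoted above; option behaviour varies between recommonmark and Sphinx versions, so adjust for the versions you have installed:
# conf.py - minimal sketch combining the parser and AutoStructify setup shown above
from recommonmark.parser import CommonMarkParser
from recommonmark.transform import AutoStructify

source_suffix = ['.rst', '.md']
source_parsers = {
    '.md': CommonMarkParser,
}

def setup(app):
    app.add_config_value('recommonmark_config', {
        'enable_auto_doc_ref': True,        # rewrite [text](other.md) links as :doc: references
        'auto_toc_tree_section': 'Contents',
    }, True)
    app.add_transform(AutoStructify)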
