Sphinx documentation and links to Markdown - python-3.x

I'm trying to use Sphinx to build some documentation from Markdown source. My conf.py is as follows...
conf.py
from recommonmark.parser import CommonMarkParser

project = 'DS'
copyright = '2018, DS'
author = 'DS, Work'
version = ''
release = ''
extensions = []
templates_path = ['_templates']
source_suffix = ['.rst', '.md']
master_doc = 'index'
language = None
exclude_patterns = []
pygments_style = 'sphinx'
html_theme = 'classic'
html_static_path = ['_static']
source_parsers = {
    '.md': CommonMarkParser,
}
htmlhelp_basename = 'DSDocumentationdoc'
latex_elements = {
}
latex_documents = [
    (master_doc, 'DSDocumentation.tex', 'DS Documentation',
     'DS, Work', 'manual'),
]
man_pages = [
    (master_doc, 'dsdocumentation', 'DS Documentation',
     [author], 1)
]
texinfo_documents = [
    (master_doc, 'DSDocumentation', 'DS Documentation',
     author, 'DSDocumentation', 'One line description of project.',
     'Miscellaneous'),
]
index.rst
Welcome to DS Documentation!
============================

The following documentation is produced and maintained by the Data Science team.

Contents:

.. toctree::
   :maxdepth: 2
   :glob:

   README.md
   documentation.md
   getting_started/*
   how-tos/*
   statistics_data_visualisation.md
The documents build and HTML output is generated; however, README.md has links to other Markdown documents in the two sub-directories, such as the following...
... [this document](./getting_started/setting_your_machine_up.md)...
...which in the translated README.html document has not been converted to the corresponding HTML target, as it has been recognised as reference external...
...<a class="reference external" href="./getting_started/setting_your_machine_up.md">this document</a>...
...I was half-expecting/hoping it would be output as reference internal, with the file extension converted appropriately...
...<a class="reference internal" href="./getting_started/setting_your_machine_up.html">this document</a>...
...so that links work in the same vein as the Table of Contents does in the sidebar.
Any suggestions as to whether this can be achieved would be appreciated.
Cheers.
EDIT
Trying out the solution suggested by @waylan, I have added the following to my conf.py to enable enable_auto_doc_ref...
from recommonmark.transform import AutoStructify

def setup(app):
    app.add_config_value('recommonmark_config', {
        'enable_auto_doc_ref': True,
    }, True)
    app.add_transform(AutoStructify)
...and on running make html I get the following error...
❱ cat /tmp/sphinx-err-57rejer3.log
# Sphinx version: 1.8.0
# Python version: 3.6.6 (CPython)
# Docutils version: 0.14
# Jinja2 version: 2.10
# Last messages:
# building [mo]: targets for 0 po files that are out of date
#
# building [html]: targets for 16 source files that are out of date
#
# updating environment:
#
# 16 added, 0 changed, 0 removed
#
# reading sources... [ 6%] README
#
# Loaded extensions:
# sphinx.ext.mathjax (1.8.0) from /home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/ext/mathjax.py
# alabaster (0.7.11) from /home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/alabaster/__init__.py
Traceback (most recent call last):
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/cmd/build.py", line 304, in build_main
    app.build(args.force_all, filenames)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/application.py", line 341, in build
    self.builder.build_update()
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 347, in build_update
    len(to_build))
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 360, in build
    updated_docnames = set(self.read())
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 468, in read
    self._read_serial(docnames)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 490, in _read_serial
    self.read_doc(docname)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/builders/__init__.py", line 534, in read_doc
    doctree = read_doc(self.app, self.env, self.env.doc2path(docname))
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/io.py", line 318, in read_doc
    pub.publish()
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/docutils/core.py", line 218, in publish
    self.apply_transforms()
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/docutils/core.py", line 199, in apply_transforms
    self.document.transformer.apply_transforms()
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/sphinx/transforms/__init__.py", line 90, in apply_transforms
    Transformer.apply_transforms(self)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/docutils/transforms/__init__.py", line 171, in apply_transforms
    transform.apply(**kwargs)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 325, in apply
    self.traverse(self.document)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 297, in traverse
    self.traverse(child)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 297, in traverse
    self.traverse(child)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 297, in traverse
    self.traverse(child)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 287, in traverse
    newnode = self.find_replace(c)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 267, in find_replace
    newnode = self.auto_doc_ref(node)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/transform.py", line 175, in auto_doc_ref
    return self.state_machine.run_role('doc', content=content)
  File "/home/neil.shephard#samba.sheffield.thefloow.com/.local/lib/python3.6/site-packages/recommonmark/states.py", line 134, in run_role
    content=content)
TypeError: 'NoneType' object is not callable
I've looked through the last two calls and I think this might be down to content not being set, which may be something to do with my index.rst, but I'm really out of my depth here.

The recommonmark documentation suggests enabling AutoStructify by adding the following to your conf.py file:
from recommonmark.transform import AutoStructify

github_doc_root = 'https://github.com/rtfd/recommonmark/tree/master/doc/'

def setup(app):
    app.add_config_value('recommonmark_config', {
        'url_resolver': lambda url: github_doc_root + url,
        'auto_toc_tree_section': 'Contents',
    }, True)
    app.add_transform(AutoStructify)
This will give you the following features:

enable_auto_toc_tree: whether to enable the Auto Toc Tree feature.
auto_toc_tree_section: when enabled, Auto Toc Tree will only be enabled on the section whose title matches this value.
enable_auto_doc_ref: whether to enable the Auto Doc Ref feature.
enable_math: whether to enable the Math Formula feature.
enable_inline_math: whether to enable the Inline Math feature.
enable_eval_rst: whether embedded reStructuredText is enabled.
url_resolver: a function that maps an existing relative position in the document to an HTTP link.
Of note is the Auto Doc Ref feature:

It is common to refer to another document page in one document. We usually use a reference to do that. AutoStructify will translate these reference blocks into structured document references. For example

[API Reference](api_ref.md)

will be translated to the AST of the following reStructuredText code

:doc:`API Reference </api_ref>`

and it will be rendered as API Reference.
Why is this necessary? Because, unlike Rst, Markdown does not have any knowledge of anything outside of the given document and has no support for Rst style directives. Therefore, there is no mechanism to transform a URL.
Instead, AutoStructify waits until after the recommonmark bridge converts the Markdown to Sphinx's underlying document structure (docutils document object), then it runs a series of transformers on it to provide limited Rst like functionality. Even with AutoStructify, you will never get full feature support when using Markdown. That would require Markdown to have native support for directives, which is not likely to ever happen.
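To make the mechanism concrete, here is a minimal sketch, in the same spirit, of a docutils transform that rewrites relative .md link targets into pending :doc: cross-references. This is my own simplified illustration rather than recommonmark's actual implementation, and the class name and priority are assumptions:
from docutils import nodes
from sphinx import addnodes
from sphinx.transforms import SphinxTransform

class MdLinksToDocRefs(SphinxTransform):
    # Illustrative transform (not recommonmark's real code): rewrite
    # relative links to Markdown sources into :doc: cross-references.
    default_priority = 800  # assumed: run after parsing, before resolution

    def apply(self):
        for ref in self.document.traverse(nodes.reference):
            uri = ref.get('refuri', '')
            # Only rewrite relative links that point at Markdown sources.
            if uri.endswith('.md') and '://' not in uri:
                docname = uri[:-len('.md')].lstrip('./')
                xref = addnodes.pending_xref(
                    '', refdomain='std', reftype='doc',
                    reftarget='/' + docname, refexplicit=True,
                    refdoc=self.env.docname)
                xref += nodes.inline('', ref.astext())
                ref.replace_self(xref)
Sphinx's normal reference resolution then turns each pending_xref into a reference internal link pointing at the generated .html file, which is exactly the behaviour the question is hoping for.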

Related

Stable diffusion with openVino: Failed to set input blob with precision: I64, if CNNNetwork input blob precision is: FP64

I'm trying to make this version work on my CPU (Linux):
https://github.com/bes-dev/stable_diffusion.openvino
And it works fine without any initial image. But when I try to pass an initial image, I get this error:
Traceback (most recent call last):
  File "/home/ideruga/workspace/stable_diffusion.openvino/demo.py", line 79, in <module>
    main(args)
  File "/home/ideruga/workspace/stable_diffusion.openvino/demo.py", line 39, in main
    image = engine(
  File "/home/ideruga/workspace/stable_diffusion.openvino/stable_diffusion_engine.py", line 188, in __call__
    noise_pred = result(self.unet.infer_new_request({
  File "/home/ideruga/anaconda3/lib/python3.9/site-packages/openvino/runtime/ie_api.py", line 266, in infer_new_request
    return self.create_infer_request().infer(inputs)
  ......
  File "/home/ideruga/anaconda3/lib/python3.9/site-packages/openvino/runtime/ie_api.py", line 31, in set_scalar_tensor
    request.set_tensor(key, tensor)
RuntimeError: [ PARAMETER_MISMATCH ] Failed to set input blob with precision: I64, if CNNNetwork input blob precision is: FP64
It's bizarre, because I am not messing with any parameters. It's as if the model that it downloads is not compatible with the parsed input image.
I've actually found a bug in the linked repository, and I'll submit a fix later today. The model in use expects an f64 but is fed an i64 value. I'll post a comment with the PR when it's submitted.
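For anyone hitting the same mismatch before the fix lands, here is a minimal, self-contained sketch of the cast involved; the variable name is illustrative, not the repository's actual one:
import numpy as np

timestep = np.array(981)                 # numpy infers int64 -> precision I64
assert timestep.dtype == np.int64
timestep = timestep.astype(np.float64)   # cast to match the network's FP64 input
assert timestep.dtype == np.float64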

Encountered an internal AutoML error- ClientException: Message: No objects to concatenate

I am trying to implement hierarchical time series forecasting on Azure AutoML pipelines.
I followed this notebook for the implementation:
https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
When I ran the training pipeline on a compute instance it worked, but when I run the same on a compute cluster it breaks at the hts-proportion-calculation part.
This is the error I am getting,
system error:
Encountered an internal AutoML error. Error Message/Code: ClientException. Additional Info: ClientException:
      Message: No objects to concatenate
      InnerException: None
      ErrorResponse
{
  "error": {
    "message": "No objects to concatenate"
  }
}
logs:
Loading arguments for scenario proportions-calculation
adding argument --input-medatadata
adding argument --hts-graph
adding argument --enable-event-logger
Input arguments dict is {'--input-medatadata': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_automl_training_workspaceblobstore/azureml/17ca5ae7-7269-4246-888f-e781071e3f5c/automl_training', '--hts-graph': '/mnt/azureml/cr/j/85509be625484b6caa3c1d97b7ab2e33/cap/data-capability/wd/INPUT_hts_graph_workspaceblobstore/azureml/a2c1b15a-c895-41e8-b6a6-1ca37ebe9e77/hts_graph', '--enable-event-logger': None}
Unknown file to proceed outputs.txt
processing: outputs.txt with type None.
Cleaning up all outstanding Run operations, waiting 300.0 seconds
3 items cleaning up...
Cleanup took 0.001676321029663086 seconds
Traceback (most recent call last):
  File "proportions_calculation_wrapper.py", line 47, in <module>
    runtime_wrapper.run()
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_many_models/automl_pipeline_step_wrapper.py", line 63, in run
    self._run()
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 44, in _run
    proportions_calculation(self.arguments_dict, self.event_logger, script_run=self.step_run)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 173, in proportions_calculation
    proportion_files_list, forecasting_parameters.time_column_name, graph.label_column_name
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/azureml/train/automl/runtime/_hts/proportions_calculation.py", line 92, in calculate_time_agg_sum_for_all_files
    df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 304, in concat
    sort=sort,
  File "/azureml-envs/azureml_e34d0633ffc4cb2fa25d91e3da5f59be/lib/python3.7/site-packages/pandas/core/reshape/concat.py", line 351, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate
Please let me know how I can resolve this issue.
This error occurred because the iteration timeout was not less than the experiment timeout, but the system error and logs are somewhat misleading: they point to pandas' "No objects to concatenate", raised from
df = pd.concat(pool.map(concat_func, files_batches), ignore_index=True)
The error can be overcome by setting the iteration timeout to a value less than the experiment timeout. I had set iteration_timeout_minutes=60, which caused the error, since that equals experiment_timeout_hours=1 (60 minutes) rather than being less than it.
automl_settings = AutoMLConfig(
    task="forecasting",
    primary_metric="normalized_root_mean_squared_error",
    experiment_timeout_hours=1,
    label_column_name=label_column_name,
    track_child_runs=False,
    forecasting_parameters=forecasting_parameters,
    pipeline_fetch_max_batch_size=15,
    model_explainability=model_explainability,
    n_cross_validations="auto",  # Feel free to set to a small integer (>=2) if runtime is an issue.
    cv_step_size="auto",
    # The following settings are specific to this sample and should be adjusted according to your own needs.
    iteration_timeout_minutes=10,
    iterations=15,
)
We are able to run the sample successfully using the compute cluster as given below.
from azureml.core.compute import ComputeTarget, AmlCompute

# Name your cluster
compute_name = "hts-compute"

if compute_name in ws.compute_targets:
    compute_target = ws.compute_targets[compute_name]
    if compute_target and type(compute_target) is AmlCompute:
        print("Found compute target: " + compute_name)
else:
    print("Creating a new compute target...")
    provisioning_config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_D16S_V3", max_nodes=20
    )
    # Create the compute target
    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)

    # Can poll for a minimum number of nodes and for a specific timeout.
    # If no min node count is provided it will use the scale settings for the cluster
    compute_target.wait_for_completion(
        show_output=True, min_node_count=None, timeout_in_minutes=20
    )

    # For a more detailed view of current cluster status, use the 'status' property
    print(compute_target.status.serialize())

I am getting this error: MermaidExtension.extendMarkdown() missing 1 required positional argument: 'md_globals'

This was working not too long ago (I probably inadvertently upgraded a library somewhere). All of my libraries are up to date.
Here is the stack trace:
File "C:\Users\jorda\Documents\projects\python\poolBoy\flaskApp.py", line 422, in about
html += markdown.markdown(text, extensions=['md_mermaid', 'fenced_code', 'tables'])
File "C:\Users\jorda\Documents\projects\python\poolBoy\venv\lib\site-packages\markdown\core.py", line 386, in markdown
md = Markdown(**kwargs)
File "C:\Users\jorda\Documents\projects\python\poolBoy\venv\lib\site-packages\markdown\core.py", line 96, in __init__
self.registerExtensions(extensions=kwargs.get('extensions', []),
File "C:\Users\jorda\Documents\projects\python\poolBoy\venv\lib\site-packages\markdown\core.py", line 125, in registerExtensions
ext.extendMarkdown(self)
TypeError: MermaidExtension.extendMarkdown() missing 1 required positional argument: 'md_globals'
Any suggestions would be most appreciated!
Must have upgraded Markdown inadvertently. I added the following to my requirements and it is working fine now:
Markdown<3.2
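For context: Python-Markdown deprecated the md_globals argument of extendMarkdown() in 3.0 and removed it in 3.2, so an extension written against the old two-argument signature fails on newer Markdown with exactly this TypeError. If pinning is undesirable, updating the extension's signature along these lines should also work; this is a sketch, not md_mermaid's actual code:
from markdown.extensions import Extension

class MermaidExtension(Extension):
    # Markdown >= 3.2 calls extendMarkdown(md) with a single argument;
    # an extension still declaring (md, md_globals) raises the TypeError above.
    def extendMarkdown(self, md):
        # Register the extension's processors here, e.g.:
        # md.preprocessors.register(MermaidPreprocessor(md), 'mermaid', 35)
        pass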

UnicodeDecodeError: invalid start byte in METADATA file at path:

I see that several Python-package-related files have gibberish at their end.
Because of this, I am unable to perform several pip operations (even basic ones like pip list).
(Usually I use conda, by the way.)
For example, when I run pip list, I get the following error.
ERROR: Exception:
Traceback (most recent call last):
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\cli\base_command.py", line 173, in _main
    status = self.run(options, args)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\commands\list.py", line 179, in run
    self.output_package_listing(packages, options)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\commands\list.py", line 255, in output_package_listing
    data, header = format_for_columns(packages, options)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\commands\list.py", line 307, in format_for_columns
    row = [proj.raw_name, str(proj.version)]
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\metadata\base.py", line 163, in raw_name
    return self.metadata.get("Name", self.canonical_name)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\metadata\pkg_resources.py", line 96, in metadata
    return get_metadata(self._dist)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_internal\utils\packaging.py", line 48, in get_metadata
    metadata = dist.get_metadata(metadata_name)
  File "C:\Users\shan_jaffry\Miniconda3\envs\SQL_version\lib\site-packages\pip\_vendor\pkg_resources\__init__.py", line 1424, in get_metadata
    return value.decode('utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfd in position 14097: invalid start byte in METADATA file at path: c:\users\shan_jaffry\miniconda3\envs\sql_version\lib\site-packages\hupper-1.10.2.dist-info\METADATA
I went into the METADATA file and found the following gibberish at the end. I found the same in several other files, i.e. the ends of the files have been overwritten with gibberish and the actual content removed. Any help?
> 0.1 (2016-10-21)
> ================
> -
> - Initial rele9ýl·øA
I found that by manually going into the site-packages folder, removing the two folders hupper and hupper-1.10.2.dist-info, and then installing hupper again with pip install hupper, the problem was solved.
The issue was that the hupper package (and hupper-1.10.2.dist-info) were corrupted; hence the uninstall and re-install helped.
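If you suspect more than one package is affected, a small diagnostic like the sketch below (my own, assuming a standard site-packages layout) lists every dist-info METADATA file that is not valid UTF-8, so each corrupted package can be removed and re-installed in the same way:
import site
from pathlib import Path

# Scan every dist-info METADATA file for bytes that are not valid UTF-8.
for sp in site.getsitepackages():
    for meta in Path(sp).glob("*.dist-info/METADATA"):
        try:
            meta.read_text(encoding="utf-8")
        except UnicodeDecodeError as exc:
            print(f"corrupted: {meta}\n  {exc}")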

Pinterest API search not working anymore

I was looking for Pinterest API endpoints and found this URL:
https://api.pinterest.com/v3/domains/<domains>/search/pins/?query=<query>&access_token=<access_token>
I was able to generate the access_token, but every time I tried a POST on that URL it gave me:
{
  "status": "failure",
  "code": 12,
  "host": "ngapi2-b2fc674c",
  "generated_at": "Mon, 09 Feb 2015 17:45:29 +0000",
  "message": "Something went wrong on our end. Sorry about that.",
  "data": "path: /v3/domains/www.vtracker.com.br/search/pins/\nparams: [('access_token', [u'blablahblahblah']), ('query', [u'como'])]\nTraceback (most recent call last): ..."
}
The data field's embedded traceback, unescaped for readability:
Traceback (most recent call last):
  File "/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py", line 1504, in wsgi_app
    response = self.full_dispatch_request()
  File "/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py", line 1264, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py", line 1262, in full_dispatch_request
    rv = self.dispatch_request()
  File "/mnt/virtualenv/local/lib/python2.7/site-packages/flask/app.py", line 1248, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "../api/pin_api.py", line 715, in __call__
    self._perform_auth()
  File "../api/pin_api.py", line 848, in _perform_auth
    authorization.perform(dictified_values, request.cookies, request.headers)
  File "../api/pin_api.py", line 271, in perform
    params, cookies, headers)
  File "../api/pin_api.py", line 121, in perform
    headers=headers)
  File "../api/decorators.py", line 212, in verify_user_authorization
    core.Consumer.manager.get_scope_as_int(required_scope)):
  File "../core/managers/consumer_manager.py", line 479, in check_scope
    scope = migrate_legacy_scope(scope)
  File "../core/managers/consumer_manager.py", line 475, in migrate_legacy_scope
    if ~scope & old == 0:
TypeError: bad operand type for unary ~: 'NoneType'
Is the Pinterest API v3 closed, or is some other problem going on?
Thanks
You must first ensure that your app has been approved by Pinterest. You may need to reapply for approval (I had to reapply for my app). Once you are approved, you will see a link on your app page called "Visit API docs". This will link to the V3 documentation (https://developers.pinterest.com/docs/redoc/pinner_app). At least, this is the documentation I have been given access to. If you have a different type of app, maybe you will have access to other documentation.
After your app has been approved, the section of the documentation you will be interested in is "Search user pins" (https://developers.pinterest.com/docs/redoc/pinner_app/#tag/search).
The endpoint is: https://api.pinterest.com/v3/search/user_pins/{user}/
The documentation provides details about the query parameters that are allowed and the response data.
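As an illustration, a call against that endpoint might look like the sketch below; the username is a placeholder, and the query/access_token parameter names are carried over from the original question rather than confirmed against the docs:
import requests

USER = "some_user"       # hypothetical username
ACCESS_TOKEN = "..."     # token for your approved app

resp = requests.get(
    f"https://api.pinterest.com/v3/search/user_pins/{USER}/",
    params={"query": "como", "access_token": ACCESS_TOKEN},
)
resp.raise_for_status()
print(resp.json())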
