Asciidoctor - offline user manual is nowhere to be found

Does anyone know where to get an offline version of the Asciidoctor user manual: https://asciidoctor.org/docs/user-manual/
It is weird how the developers brag about Asciidoctor being able to export to PDF, HTML, and more, but at the same time fail to provide a nice PDF document of the manual for offline use.

You can get the raw adoc source from: https://github.com/asciidoctor/asciidoctor.org/blob/master/docs/user-manual.adoc and convert it using asciidoctor.
Feel free to grab the result directly from:
https://sqli.dev/asciidoctor/user-manual.pdf (asciidoctor-pdf threw a few errors with this document, which I haven’t investigated, so some things may not show up as intended)
https://sqli.dev/asciidoctor/user-manual.html (this HTML will still fetch online resources for fonts, MathJax, etc.)
You can use the following to limit the amount of online resources needed:
git clone https://github.com/asciidoctor/asciidoctor.org.git
cd asciidoctor.org/docs
curl -O https://fontawesome.com/v4.7.0/assets/font-awesome-4.7.0.zip
7z x font-awesome-4.7.0.zip
asciidoctor -a !iconfont-remote=# -a icons=font -a stylesdir=font-awesome-4.7.0/css -a !webfonts=# user-manual.adoc
The resulting user-manual.html will only try to fetch the MathJax.js from a remote site.
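If you also want a local PDF, you can run asciidoctor-pdf on the same source. A minimal sketch, assuming the asciidoctor-pdf gem is available (and keeping in mind the conversion errors mentioned above):
gem install asciidoctor-pdf
asciidoctor-pdf user-manual.adoc
This produces user-manual.pdf next to the source file.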
I would recommend opening an issue at https://github.com/asciidoctor/asciidoctor.org proposing that they offer offline download options.

Related

How can I make usbmon log file (*.mon)?

I'm trying to use vusb-analyzer.
It requires a *.mon log file.
How can I make a usbmon log file (*.mon)?
https://www.kernel.org/doc/Documentation/usb/usbmon.txt
The document you linked in your question is actually the answer; please see sections 1-3.
In section 3, it says:
# cat /sys/kernel/debug/usb/usbmon/0u > /tmp/1.mon.out
This will create a text file 1.mon.out. Its structure is also described in the same document.
Now, how do I know that this is the file to be opened by vusb-analyzer? From what I see, the website of this project doesn't make it clear what the *.mon file is.
However, you can see it in the source code:
https://github.com/scanlime/vusb-analyzer/blob/master/VUsbTools/Log.py#L498
It clearly states that the program uses the syntax described in the document that you already know:
https://www.kernel.org/doc/Documentation/usb/usbmon.txt
The name of your file doesn't really matter, but if you want it to end with ".mon", you could simply use:
# cat /sys/kernel/debug/usb/usbmon/0u > ~/somefile.mon
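For completeness, the preparation described in sections 1 and 2 of that document boils down to something like the following (a sketch, run as root; the mount command is only needed if debugfs is not already mounted):
# mount -t debugfs none_debugs /sys/kernel/debug
# modprobe usbmon
# cat /sys/kernel/debug/usb/usbmon/0u > ~/somefile.mon
Here 0u captures traffic from all buses, while 1u, 2u, and so on capture a single bus. Once the capture is done, you would presumably point vusb-analyzer at the resulting .mon file.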
Two warnings:
The line with cat I posted here is just an example; in order to use it, you will need to follow the steps in the document (it won't work without enabling usbmon first).
vusb-analyzer hasn't been updated for years and I wasn't able to run it on my machine. Its website mentions Ubuntu 8.10, so I wouldn't be surprised if others had problems running it too (for example, when trying to reproduce your problem in order to provide more help).

wget - how to download embedded PDFs that have a download button, from a text file URL list? Is it possible?

Happy New Year!
I wanted to see if anybody has ever successfully downloaded embedded PDF files from multiple URLs contained in a .txt file for a website.
For instance:
I tried several combinations of wget -i urlist.txt (which downloads all the HTML files perfectly); however, it doesn't also grab each HTML file's embedded .pdf?xxxxx (note the slug on the end of the .pdf?*).
The exact example of this obstacle is the following:
I have placed all 2 pages of links from this dataset into a url.txt:
https://law.justia.com/cases/washington/court-of-appeals-division-i/2014/
One example URL within this dataset:
https://law.justia.com/cases/washington/court-of-appeals-division-i/2014/70147-9.html
The embedded pdf link is the following:
https://cases.justia.com/washington/court-of-appeals-division-i/2014-70147-9.pdf?ts=1419887549
The .pdf files are actually named like "2014-70147-9.pdf?ts=1419887549", i.e. .pdf?ts=xxxxxxxxxx, and the slug is different for each one.
The URL list contains 795 links. Does anyone have a successful method to download every .html file in my urls.txt while also downloading each accompanying .pdf?ts=xxxxxxxxxx file?
Thank you!
~ Brandon
Try using the following:
wget --level 1 --recursive --span-hosts --accept-regex 'https://law.justia.com/cases/washington/court-of-appeals-division-i/2014/.*html|https://cases.justia.com/washington/court-of-appeals-division-i/.*.pdf.*' --input-file=urllist.txt
Details about the options --level, --recursive, --span-hosts, --accept-regex, and --input-file can be found in wget documentation at https://www.gnu.org/software/wget/manual/html_node/index.html.
You will also need to know how regular expressions work. You can start at https://www.grymoire.com/Unix/Regular.html
You are looking for a web-scraper. Be careful to not break any rules if you ever use one.
You could also process the content you have received through wget using some string manipulation in a bash script.
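A minimal sketch of that second approach, assuming the PDF links appear verbatim in the downloaded pages in the cases.justia.com form shown above (pdflist.txt is just a throwaway name for the intermediate list):
wget --input-file=urllist.txt
grep -hoE 'https://cases\.justia\.com/[^"]+\.pdf\?ts=[0-9]+' *.html | sort -u > pdflist.txt
wget --input-file=pdflist.txt
The first wget fetches the HTML pages, grep pulls every cases.justia.com PDF URL out of them, and the second wget downloads those PDFs.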

Downloading pre-trained models from OpenVINO™ Toolkit Pre-Trained Models by Ubuntu Terminal

I am trying to use some pre-trained models from the Intel pre-trained model zoo. Here is the address of that site: https://docs.openvinotoolkit.org/latest/_models_intel_index.html. Is there any specific command for downloading these models on a Linux system?
downloader.py (the model downloader) downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer.
USAGE
The basic usage is to run the script like this:
./downloader.py --all
This will download all models into a directory tree rooted in the current directory. To download into a different directory, use the -o/--output_dir option:
./downloader.py --all --output_dir my/download/directory
The --all option can be replaced with other filter options to download only a subset of models. See the "Shared options" section.
You may use the --precisions flag to specify comma-separated precisions of the weights to be downloaded:
./downloader.py --name face-detection-retail-0004 --precisions FP16,INT8
By default, the script will attempt to download each file only once. You can use the --num_attempts option to change that and increase the robustness of the download process:
./downloader.py --all --num_attempts 5 # attempt each download five times
You can use the --cache_dir option to make the script use the specified directory as a cache. The script will place a copy of each downloaded file in the cache, or, if it is already there, retrieve it from the cache instead of downloading it again.
./downloader.py --all --cache_dir my/cache/directory
The cache format is intended to remain compatible in future Open Model Zoo versions, so you can use a cache to avoid redownloading most files when updating Open Model Zoo.
By default, the script outputs progress information as unstructured, human-readable text. If you want to consume progress information programmatically, use the --progress_format option:
./downloader.py --all --progress_format=json
When this option is set to json, the script's standard output is replaced by a machine-readable progress report, whose format is documented in the "JSON progress report format" section. This option does not affect errors and warnings, which will still be printed to the standard error stream in a human-readable format.
You can also set this option to text to explicitly request the default text format.
See the "Shared options" section for information on other options accepted by the script.
More details about the model downloader can be found at the following URL: https://docs.openvinotoolkit.org/latest/_tools_downloader_README.html
As mentioned at the following URL: https://docs.openvinotoolkit.org/latest/_models_intel_index.html, you can download the pretrained models using the Model Downloader (/deployment_tools/open_model_zoo/tools/downloader).
More details about the Model Downloader can be found at the following URL:
https://docs.openvinotoolkit.org/latest/_tools_downloader_README.html
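Putting the options together, a typical invocation might look like this (a sketch: the path assumes a default OpenVINO installation under /opt/intel/openvino, and the model name is the one from the example above):
cd /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader
./downloader.py --name face-detection-retail-0004 --precisions FP16 --output_dir ~/models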

Verifying and installing Hyperledger Fabric chaincode

We wish to exchange signed CDS packages with a partner organisation on our shared Hyperledger Fabric network. We are working according to the Operator Guide at https://hyperledger-fabric.readthedocs.io/en/latest/chaincode4noah.html#packaging.
We are able to receive and install signed packages with no problem, but how do we know what we are installing? Our installation procedures call for an inspection of what we receive, and potentially also creating tests against the object we intend to install.
My question is: How are we able to inspect the source code of what we are asked by our partner organisation to install? If we are not able to inspect it, we have no real transparency on the consensus that we are expected to give.
We have tried extracting the gzipped object from the .pak file and unzipping it, but the .gz data does not seem to be in a standard format. I suspect we are missing something fundamental here, either in procedure or tooling.
For reference, we are extracting the code segment like this:
protoc --decode_raw < test_cc_signed_package.pak > test_cc_signed_package.decoded
Then we extract the gzipped "code" portion like this (in our example signed package it is at "1.2.1.3" of the file, but it might be different for you):
cat test_cc_signed_package.decoded | grep "^ 3:" | sed -r 's/^ 3:\ \"(.*)\"$/\1/'
The output is in a format that we can perform a diff on, and which we were hoping to save to a binary file and simply gunzip. Gzip, however, refuses to decode the file, and inspecting it with xxd, we can see that the format is not correct for gzip.
Perhaps you can ask your partner organization to send you the files that were packaged, so you can package them yourself and then compare the result to the package you are supposed to install?
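A sketch of that comparison, assuming the partner shares the chaincode source together with the exact name, version, and path they used when packaging (mycc, 1.0, and the chaincode path below are placeholders; -s -S creates and signs a CDS package as described in the Operator Guide linked above):
peer chaincode package -n mycc -v 1.0 -p github.com/partner/chaincode/go -s -S repackaged.pak
You can then run the same protoc/grep/sed extraction on both repackaged.pak and the package you received and diff the two code payloads; the owner signatures will of course differ, but the embedded code should be comparable.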

Can one specify a file content-type to download using Wget?

I want to use wget to download files linked from the main page of a website, but I only want to download text/html files. Is it possible to limit wget to text/html files based on the mime content type?
I don't think they have implemented this yet, as it is still on their bug list:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=21148
You might have to do everything by file extension.
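For example, a rough extension-based substitute using wget's --accept option (this matches file name suffixes rather than MIME types, so it is only an approximation; <site> is a placeholder):
wget -r -l 1 --accept html,htm https://<site>/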
Wget2 has this feature.
--filter-mime-type=list
Specify a comma-separated list of MIME types that will be downloaded. Elements of the list may contain wildcards.
If a MIME type starts with the character '!' it won't be downloaded; this is useful when trying to download
something with exceptions. For example, to download everything except images:
wget2 -r https://<site>/<document> --filter-mime-type=*,\!image/*
It is also useful to download files that are compatible with an application of your system. For instance,
download every file that is compatible with LibreOffice Writer from a website using the recursive mode:
wget2 -r https://<site>/<document> --filter-mime-type=$(sed -r '/^MimeType=/!d;s/^MimeType=//;s/;/,/g' /usr/share/applications/libreoffice-writer.desktop)
Wget2 has not been released as of today, but will be soon. Debian unstable already has an alpha version shipped.
Look at https://gitlab.com/gnuwget/wget2 for more info. You can post questions/comments directly to bug-wget@gnu.org.
Add the header to the options
wget --header 'Content-type: text/html'
