I wanted to generate PDF documentation of PyTorch for myself, and after some reading I executed the following commands:
pip3 install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html
git clone https://github.com/pytorch/pytorch
cd pytorch/docs/
make latexpdf
Along the way I installed some Sphinx-related dependencies whenever they were needed.
After running these three commands, several files including pytorch.tex are created in pytorch/docs/build/latex, but I cannot find a PDF file, nor can I obtain a PDF from that .tex file.
The last few lines of the output from the final command, make latexpdf, are as follows:
copying TeX support files... copying TeX support files...
done
build finished with problems, 7 warnings.
make: *** [Makefile:38: latexpdf] Error 1
How can I overcome this error and obtain the final PDF of the PyTorch documentation?
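One thing worth trying: Sphinx writes a Makefile next to pytorch.tex, and compiling inside that directory often surfaces the actual LaTeX error that the top-level make latexpdf target hides. A sketch, assuming a TeX Live installation with latexmk available:

```shell
# Run from pytorch/docs/ after "make latexpdf" has generated the sources.
cd build/latex
make            # or: latexmk -pdf pytorch.tex
# If compilation succeeds, the result is build/latex/pytorch.pdf
```

The exact tool invoked (latexmk vs. pdflatex) depends on your Sphinx version and LaTeX setup.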
I have followed every guide I can find online, from Stack Overflow to random websites, and I can't get it to work, so I would love some help.
FYI: I am using Ubuntu with Qt Creator 6.4.2, which I installed by downloading the installer from the website and following the setup process.
What I have done
I ran the following command to find the library directory and checked whether there is a file called Qt5mqtt.dll; there wasn't:
qmake -query QT_INSTALL_LIBS
So I cloned the repository using the command
git clone git://code.qt.io/qt/qtmqtt.git
Now I cd into the directory containing a file called mqtt.pro (most guides say this file is called qtmqtt.pro, but mine isn't); this file is found in qtmqtt/examples/mqtt.
now I run the commands
qmake
make
make install
(I tried all of these with sudo as well.)
When I run make and make install, I get errors:
cd consolepubsub/ && ( test -e Makefile || /usr/lib/qt5/bin/qmake -o Makefile /home/dave/Qt/qtmqtt/examples/mqtt/consolepubsub/consolepubsub.pro ) && make -f Makefile
make[1]: Entering directory '/home/dave/Qt/qtmqtt/examples/mqtt/consolepubsub'
( test -e Makefile.qtmqtt_pub || /usr/lib/qt5/bin/qmake -o Makefile.qtmqtt_pub /home/dave/qtmqtt/examples/mqtt/consolepubsub/qtmqtt_pub.pro ) && make -f Makefile.qtmqtt_pub
Cannot find file: /home/dave/qtmqtt/examples/mqtt/consolepubsub/qtmqtt_pub.pro.
make[1]: *** [Makefile:45: sub-qtmqtt_pub-pro-make_first-ordered] Error 2
make[1]: Leaving directory '/home/dave/Qt/qtmqtt/examples/mqtt/consolepubsub'
make: *** [Makefile:51: sub-consolepubsub-make_first] Error 2
I have tried moving the cloned qtmqtt folder into the directory where I installed Qt.
I also tried the steps from Install MQTT Module in Open Source QT
I have also tried the mqtt library from https://github.com/emqx/qmqtt with similar results
Also, if I add QT += mqtt to a project's .pro file, I still get the error:
Project ERROR: Unknown module(s) in QT: mqtt
Any help would be great, as I am very stuck and confused as to what I am doing wrong.
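For reference, the qtmqtt.pro file the guides mention sits at the root of the cloned repository, not under examples/mqtt, so the module build is normally run from the repository root. A sketch of such a build; the checkout target is an assumption and should match the version of the Qt installation whose qmake you run:

```shell
git clone git://code.qt.io/qt/qtmqtt.git
cd qtmqtt
git checkout 5.15.2   # assumption: pick the branch/tag matching your Qt version
qmake                 # use the qmake from the Qt install you want the module in
make
sudo make install
```

Building from examples/mqtt only builds the example programs, which themselves require the module to already be installed, which would explain the "Cannot find file" and "Unknown module(s) in QT: mqtt" errors.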
I have successfully built PyTorch from source (cloned with git clone --recursive https://github.com/pytorch/pytorch.git) on Windows 11, CPU only. But I cannot run a pretrained DL model: it gives an error on the line from caffe2.python import workspace, even though workspace exists at pytorch/caffe2/python/workspace. Is there anything else I need to do?
Please enable BUILD_CAFFE2 while building PyTorch from source if not already done.
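A sketch of what enabling it might look like (the flag is read from the environment by PyTorch's setup.py; the exact build command depends on your setup):

```shell
# POSIX-style shell; in Windows cmd.exe use "set BUILD_CAFFE2=1" instead.
export BUILD_CAFFE2=1
python setup.py develop   # or: python setup.py install
```

A full rebuild is needed for the flag to take effect, since the Caffe2 Python bindings are compiled as part of the build.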
I wanted to build TensorFlow Serving from source, optimized for my CPU, and I followed the instructions given on the TensorFlow Serving page.
The instructions felt incomplete; these three lines were all I could find, and I have run them:
git clone -b r2.3 https://github.com/tensorflow/serving.git
cd serving
tools/run_in_docker.sh -d tensorflow/serving:2.3.0-devel \
bazel build --config=nativeopt tensorflow_serving/...
So what do I do after the last step? How can I install it on my Ubuntu system so that I can access it from the terminal with a command like tensorflow_model_server --port=8500...?
After building TensorFlow Serving you can start testing it; a good starting point is the Serving Basic tutorial on the TensorFlow website.
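To make the tensorflow_model_server command available system-wide, one option (assuming the standard bazel-bin output layout inside the serving checkout) is to copy the built binary onto your PATH:

```shell
# Run from the serving/ checkout after the bazel build completes.
sudo cp bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
    /usr/local/bin/

# The model name and path below are placeholders for illustration.
tensorflow_model_server --port=8500 --model_name=my_model \
    --model_base_path=/models/my_model
```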
I am currently trying to build TensorFlow 1.5 from source because I am implementing a model on mobile and had to include some extra kernels. I have run through ./configure in TF, pointing it at the Python 3.5 location.
I am now trying to build the wheel file, and for the life of me I cannot get around the invalid command 'bdist_wheel' error. I am currently at the step of building the wheel file using:
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
And I receive the following error:
Thu Feb 22 11:17:38 PST 2018 : === Using tmpdir: /tmp/tmp.XPe7Djtgg2
~/Documents/Git/tensorflow/bazel-bin/tensorflow/tools/pip_package/build_pip_package.runfiles ~/Documents/Git/tensorflow
~/Documents/Git/tensorflow
/tmp/tmp.XPe7Djtgg2 ~/Documents/Git/tensorflow
Thu Feb 22 11:17:39 PST 2018 : === Building wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help
error: invalid command 'bdist_wheel'
I have also tried making sure wheel is installed but when I type:
sudo pip3 install wheel
I receive the following message.
Requirement already satisfied: wheel in /usr/local/lib/python3.5/dist-packages
I went into my .bashrc file, which has the entry
export PYTHON_BIN_PATH=/usr/bin/
Now looking in build_pip_package I saw that it is using the line below to trigger the wheel build:
"${PYTHON_BIN_PATH:-python}" setup.py bdist_wheel ${PKG_NAME_FLAG} >/dev/null
At /usr/bin/, the python symlink points to 2.7, which I think is the issue: when I installed wheel using pip2, the wheel file built, but it was for 2.7 rather than 3.5, so pip stated that it couldn't install the wheel file for the current environment.
I thought that modifying the line above to the entry below might work, but I still get the same bdist_wheel error. I cannot figure out how to get the wheel to build under 3.5.
"${PYTHON_BIN_PATH:-python3.5}" setup.py bdist_wheel ${PKG_NAME_FLAG} >/dev/null
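That edit changing the fallback cannot take effect here: the ${VAR:-default} form substitutes the default only when the variable is unset or empty, and PYTHON_BIN_PATH was exported in .bashrc, so whatever it contains is used verbatim as the command. A quick illustrative shell check:

```shell
# ${VAR:-default} substitutes "default" only when VAR is unset or empty.
unset PYTHON_BIN_PATH
echo "${PYTHON_BIN_PATH:-python}"    # prints: python

# Once the variable is set -- even to a directory, not an interpreter --
# the fallback is NOT used:
PYTHON_BIN_PATH=/usr/bin/
echo "${PYTHON_BIN_PATH:-python}"    # prints: /usr/bin/
```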
I also tried setting an alias from python to python3.5, which works fine when I call python at the command prompt, but it does not take effect when python is invoked from within the build_pip_package script (aliases are not expanded in non-interactive shells).
Anyone know how I might be able to resolve this? I assume if I were to install conda I can probably get around this, but I would prefer not to have to deal with that if at all possible.
Thanks!
So I found what my issue was in this specific case. I initially ran the TensorFlow configure step and wasn't aware of PYTHON_BIN_PATH at the time. From what I have read since, running configure should have created this entry (at least as I understand it). That did not happen, and it was only while dealing with the error above that I dug deeper and found that build_pip_package reads that variable, which was not set in my .bashrc. At that point I added the value and tried to run just the wheel build.
After posting here, I figured I would rerun the full TF build process and see whether I could spot any other errors I had missed along the way. This time I got an error stating that /usr/bin/ was a path, which happened to be exactly the value assigned to PYTHON_BIN_PATH. I changed it to export PYTHON_BIN_PATH=/usr/bin/python3, rebuilt TensorFlow, then continued with building and installing the wheel, and everything went through correctly.
Hopefully this helps someone at one point.
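Summarizing the fix above as commands (a sketch; paths assume the layout described in this answer):

```shell
# Point the TF build at the intended interpreter, not its parent directory:
export PYTHON_BIN_PATH=/usr/bin/python3    # was: /usr/bin/

# Rerun the configure step so the setting is picked up, then rebuild:
./configure
bazel build //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip3 install /tmp/tensorflow_pkg/tensorflow-*.whl
```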
I have run the tutorials and successfully created my own neural network implementation in TensorFlow. I then decided to go one step further and add my own op, because I needed to do some of my own preprocessing on the data. I followed the tutorial on the TensorFlow site for adding an op. After writing my own C++ file, I successfully built TensorFlow. Then, when I try to use the op from my code, I get
'module' object has no attribute 'sec_since_midnight'
My op does get reflected in bazel-genfiles/tensorflow/python/ops/gen_user_ops.py, so the wrapper is generated correctly. It just looks like my code can't see tensorflow/python/user_ops/user_ops.py, which is what imports that file.
Now, when I run the tests for this module, I get the following odd behavior. The test should not pass, because the expected vector I give it does not match what the result should be. But maybe the test never actually gets executed, despite reporting PASSED?
INFO: Found 1 test target...
Target //tensorflow/python:sec_since_midnight_op_test up-to-date:
bazel-bin/tensorflow/python/sec_since_midnight_op_test
INFO: Elapsed time: 6.131s, Critical Path: 5.36s
//tensorflow/python:sec_since_midnight_op_test (1/0 cached) PASSED
Executed 0 out of 1 tests: 1 test passes.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
Hmmm. Well, I uninstalled TensorFlow and then reinstalled from what I had just built, and what I wrote was suddenly recognized. I have now seen this behavior twice in a row, where an uninstall is necessary. So to sum up, the steps after adding my own op are:
$ pip uninstall tensorflow
$ bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
# To build with GPU support:
$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
# The name of the .whl file will depend on your platform.
$ pip install /tmp/tensorflow_pkg/tensorflow-0.5.0-cp27-none-linux_x86_64.whl