Installing torchvision from source: libavcodec/avcodec.h not found - pytorch

I am trying to install torchvision from source. I was able to get PyTorch installed (I needed it from source to use the GPU), but now I can't get torchvision to work.
I get the following error when I run setup.py:
C:\Users\hoski\vision\torchvision\csrc\cpu\decoder\defs.h(11): fatal error C1083: Cannot open include file: 'libavcodec/avcodec.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\bin\HostX86\x64\cl.exe' failed with exit status 2
I downloaded the FFmpeg source and added it to my PATH, and I checked the libavcodec folder and the file is there; the build just isn't seeing it, I guess?
Any advice would be greatly appreciated!

Try changing has_ffmpeg = ffmpeg_exe is not None in setup.py to has_ffmpeg = False
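For reference, the edit looks roughly like the sketch below; the surrounding code in setup.py varies between torchvision versions, so treat it as an approximation. Note that this does not fix the include path, it just skips building the FFmpeg-based video decoder, so torchvision's video I/O will be unavailable.
# in torchvision's setup.py (approximate; exact surrounding code differs by version)
# original: enable the video decoder whenever an ffmpeg executable is found
# has_ffmpeg = ffmpeg_exe is not None
# workaround: skip the FFmpeg/libavcodec decoder so the build no longer needs libavcodec/avcodec.h
has_ffmpeg = False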

Related

ImportError: libhdf5.so.200: cannot open shared object file: No such file or directory (while importing pycbc.waveform)

I have installed LALSuite and pycbc in my Ubuntu 20.04 LTS Python 3.8 virtual environment. But when I try to import a module such as pycbc.waveform, I get the following error:
ImportError: libhdf5.so.200: cannot open shared object file: No such file or directory
When I checked the h5py.libs directory in /home/abhijeet/lal/lib/python3.8/site-packages,
I found four files in it:
libaec-9c9e97eb.so.0.0.10
libhdf5-346dbfc8.so.200.1.0
libhdf5_hl-84bfe2a0.so.200.0.1
libsz-090daab4.so.2.0.1
I have tried uninstalling and reinstalling h5py, but it doesn't work. Please help me rectify this error.
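Not from the original question, but one way to narrow this down: the traceback shows something linked against a plain libhdf5.so.200, while the h5py wheel only ships privately renamed copies (libhdf5-346dbfc8.so.200.1.0), so a quick sketch like the following checks whether a system-wide HDF5 is visible to the loader at all:
import ctypes.util
# Prints a library name such as 'libhdf5.so.200' if a system HDF5 is on the
# loader path, or None if only the renamed copies inside h5py.libs exist.
print(ctypes.util.find_library("hdf5"))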

Is it possible to choose which folder TensorFlow is installed in?

I've been getting an error when trying to install TensorFlow on my Windows PC through the command prompt:
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\\Users\\RCG\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\tensorflow\\include\\external\\com_github_grpc_grpc\\src\\core\\ext\\filters\\client_channel\\lb_policy\\grpclb\\client_load_reporting_filter.h'
When looking this up, I found out that Windows can't accept file paths longer than about 260 characters.
I've tried installing through Anaconda, but then I get a different error in PyCharm: module not found (I used an Anaconda environment and tried installing through PyCharm's package manager).
So I'm wondering if I can manually choose the install path for TensorFlow so Windows will accept it.
Python was installed through the Microsoft Store (otherwise pip wouldn't work).
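Not part of the original question, but a quick sketch (with the path copied from the pip error above) confirms the path really does exceed the default Windows MAX_PATH limit of 260 characters:
# Sketch: measure the failing path from the pip error against Windows' MAX_PATH (260).
failing_path = (
    r"C:\Users\RCG\AppData\Local\Packages"
    r"\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache"
    r"\local-packages\Python38\site-packages\tensorflow\include\external"
    r"\com_github_grpc_grpc\src\core\ext\filters\client_channel\lb_policy"
    r"\grpclb\client_load_reporting_filter.h"
)
print(len(failing_path))  # 267 here, just over the 260-character default limit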

Dlib ImportError in Windows 10 on line "from _dlib_pybind11 import *", DLL Load Failed

I am able to successfully install dlib with CUDA support on Windows 10, but I get an error on "import dlib" in the Python code of my computer vision project.
Environment: Windows 10, Python 3.7.6 (Anaconda), CUDA 11, CuDNN 10.2
Error Message:
>>> import dlib
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\ProgramData\Anaconda3\lib\site-packages\dlib-19.20.99-py3.7-win-amd64.egg\dlib\__init__.py", line 12, in <module>
from _dlib_pybind11 import *
ImportError: DLL load failed: The specified module could not be found.
This can be solved by copying the cudnn64_7.dll (available here: https://developer.nvidia.com/cudnn)
into the %CUDA_PATH%/bin directory (probably something like this: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin)
I was able to find and fix this issue. The CUDA 11 installation wasn't able to add a few of its directories to the PATH environment variable (Windows 10); the value was truncated due to the maximum length of 2048 characters. I removed a few unused software paths from the PATH value, and after reinstalling, dlib 19.20 is now working with CUDA 11.
I created an issue on the dlib GitHub at the following link, which has more information, error logs, and snapshots for this issue.
https://github.com/davisking/dlib/issues/2097
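Not from the original answer, but a small sketch of how to check for the same truncation problem on your own machine:
import os
path_value = os.environ.get("PATH", "")
# The answer above describes PATH being truncated at a 2048-character limit,
# which is how the CUDA directories went missing; check the length and whether
# the CUDA entries actually survived.
print("PATH length:", len(path_value))
print("CUDA entries:", [p for p in path_value.split(os.pathsep) if "CUDA" in p])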
In my environment, the problem was due to an error somewhere in the build process that resulted in the code for loading the cuDNN dynamic libraries not being included in the generated file dlib/__init__.py, despite there being no build error. In my case the file always included this strange block of code:
if 'OFF' == 'ON':
    add_lib_to_dll_path('cudnn-NOTFOUND')
    add_lib_to_dll_path('C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.5/lib/x64/cudart.lib')
The cudnn-NOTFOUND on the second line gave a clue about what had happened with my build.
Since I had followed the instructions on this page, copying all the binaries and include files to the right places within the CUDA directory, I only needed to change the code to the following (similar to what Epic Chen's answer suggests, but I got rid of the if clause and the bad line):
add_lib_to_dll_path('C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.5/lib/x64/cudart.lib')
That workaround fixed the problem for me.
My configuration is CUDA 11.5, cuDNN 8.3.1.22, and dlib compiled with Visual Studio 2019. I set the environment variable CMAKE_PREFIX_PATH to the cuDNN directory so the compiler could find the include files and libraries.
If you are using Anaconda, uninstall and reinstall dlib.
In the Anaconda command prompt, type
pip uninstall dlib
After successfully uninstalling, type
pip install dlib
It helped me fix the problem.
Try checking the __init__.py file that the error message points to, as below.
Your path will not be the same as mine.
In the __init__.py file, the if statement should be 'ON' == 'ON'.
Besides that, the library paths that follow must be correct. Your versions may not be the same as mine.
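For comparison, a correctly generated block in dlib/__init__.py looks roughly like the sketch below; the exact paths and versions are hypothetical examples and will differ on your machine:
# sketch of a healthy block in dlib/__init__.py (paths/versions are examples only);
# add_lib_to_dll_path is a helper defined earlier in that same generated file and
# adds the directory containing the given library to the DLL search path
if 'ON' == 'ON':
    add_lib_to_dll_path('C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.5/lib/x64/cudnn.lib')
    add_lib_to_dll_path('C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.5/lib/x64/cudart.lib')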

PyQt4 installation for Windows

I have downloaded PyQt4 for Windows 7 from here: https://riverbankcomputing.com/software/pyqt/download
After unzipping the file, I ran the command
python configure-ng.py
Unfortunately I get the following error:
Error: Make sure you have a working Qt qmake on your PATH
How can I fix it?
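Not from the original question, but since the error simply means configure-ng.py cannot find qmake on your PATH, a quick sketch (Python 3) to check what the script sees:
import shutil
# None means qmake is not on PATH, which is exactly what configure-ng.py is
# complaining about; qmake ships with a Qt installation, whose bin directory
# needs to be added to PATH.
print(shutil.which("qmake"))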

Bazel extension file not found error when installing TensorFlow with Python 3?

I currently have an error while installing TensorFlow with Python 3. Previously I had successfully installed TensorFlow with Python 2, and that was after I commented out several sections for iOS/Android. I decided to copy over the exact TensorFlow files that worked for Python 2 and reconfigure the settings for Python 3. Is this correct? I figured the configuration would be overwritten when I reconfigure the same files, while the edited files I need for TensorFlow to work would still be present.
Here is the error:
error loading package 'bazel-tensorflow/external/bazel_tools/tools/build_defs/docker/testdata': Extension file not found. Unable to load package for '//tools/build_defs/docker:docker.bzl': BUILD file not found on package path.
