CuPy DLL load failed - PyTorch

I am trying to use CuPy to speed up some background NumPy operations in my Python code, but when I attempt to import cupy I am told that the DLL load failed: ImportError: DLL load failed.
I am fairly inexperienced at handling things like this, so any help would be greatly appreciated. The full error message and traceback are copied below.
When I run pip install cupy-cuda112, the cupy import works but the pytorch import does not:
Error loading "C:\Users\CT1\miniconda3\envs\DentiumRIP\lib\site-packages\torch\lib\cublas64_11.dll" or one of its dependencies.
And when I run conda install pytorch torchvision torchaudio cudatoolkit=11.2 -c pytorch -c conda-forge, the cupy import doesn't work.
Please tell me how to fix this.
Failed to import CuPy.
If you installed CuPy via wheels (cupy-cudaXXX or cupy-rocm-X-X), make sure that the package matches with the version of CUDA or ROCm installed.
On Linux, you may need to set LD_LIBRARY_PATH environment variable depending on how you installed CUDA/ROCm.
On Windows, try setting CUDA_PATH environment variable.
Check the Installation Guide for details:
https://docs.cupy.dev/en/latest/install.html
Original error:
ImportError: DLL load failed: The specified procedure could not be found.
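A quick way to narrow this down is to compare the CUDA version each library was built against; the cupy-cuda112 wheel expects CUDA 11.2, and the PyTorch build has its own expectation. A minimal diagnostic sketch, assuming each import gets far enough to query version metadata:

# Hedged diagnostic: compare the CUDA versions PyTorch and CuPy were built for.
import torch
print("torch built with CUDA:", torch.version.cuda)  # e.g. "11.3"

import cupy
print("cupy runtime CUDA:", cupy.cuda.runtime.runtimeGetVersion())  # e.g. 11020 for 11.2

If the two disagree, installing the cupy-cudaXXX wheel that matches torch.version.cuda (or a PyTorch build that matches the installed toolkit) is usually the way to reconcile them.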

Related

Can't get the Datasets library to install in Python 3

I'm trying to install the datasets package (from Hugging Face). Everything seems to be working, but when I try to actually import it
from datasets import load_dataset
I get an error in
import pyarrow
import pyarrow.lib as _lib
Specifically:
ImportError: DLL load failed: The specified procedure could not be found.
I've tried reinstalling it multiple times, through pip, pip3, and conda. I tried reinstalling pyarrow. Nothing seems to make it work. That said, the error message was different before.
Interestingly, when I try upgrading Pip, I keep getting
WARNING: Ignoring invalid distribution -yarrow
Any idea what I can do to make it work?
Many thanks
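One hedged suggestion: the "Ignoring invalid distribution -yarrow" warning usually points at a stale, half-uninstalled folder (pip renames packages to a ~-prefixed name during uninstall, so -yarrow likely corresponds to a leftover ~yarrow directory from pyarrow). A small sketch to locate such leftovers, assuming a standard site-packages layout:

# Hypothetical cleanup check: list ~-prefixed leftovers in site-packages.
import pathlib
import site

for sp in site.getsitepackages():
    for leftover in pathlib.Path(sp).glob("~*"):
        print("stale install dir:", leftover)  # delete by hand once confirmed

After removing the stale directory and reinstalling pyarrow in a single environment (pip or conda, not both), the import often starts working again.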

pyinstaller not working with several warnings and errors

I am trying to turn my script into a .exe file. When I use PyInstaller I get several warnings and errors, and it does not work:
1) 193691 WARNING: Unable to find package for requirement greenlet from package gevent.
193691 WARNING: Unable to find package for requirement zope.event from package gevent.
193691 WARNING: Unable to find package for requirement zope.interface from package gevent.
I installed the gevent package but these warnings still appear.
2) \anaconda3\lib\site-packages\numpy\__init__.py:138: UserWarning: mkl-service package failed to import, therefore Intel(R) MKL initialization ensuring its correct out-of-the box operation under condition when Gnu OpenMP had already been loaded by Python process is not assured. Please install mkl-service package, see http://github.com/IntelPython/mkl-service
I installed the mkl-service package but this message still appears.
3) ImportError: DLL load failed while importing _multiarray_umath: The module is not found.
I am using Anaconda Navigator and I don't have any additional virtual environment. I have numpy 1.19.2 and Python 3.8.
Finally, I get:
PyInstaller.exceptions.ImportErrorWhenRunningHook: Failed to import module __PyInstaller_hooks_0_numpy_core required by hook for module anaconda3\lib\site-packages\PyInstaller\hooks\hook-numpy.core.py. Please check whether module __PyInstaller_hooks_0_numpy_core actually exists and whether the hook is compatible with your version of \anaconda3\lib\site-packages\PyInstaller\hooks\hook-numpy.core.py
I tried uninstalling Anaconda and numpy and reinstalling them. I tried checking the Path environment variable. All with no luck.
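Before rebuilding, it is worth confirming that numpy's C extension loads at all in the environment PyInstaller runs from; if the import below fails outside PyInstaller too, the problem is the numpy install rather than the hook. A minimal check (a suggestion, not a confirmed fix):

# Hedged diagnostic: does numpy's C extension import in this environment?
import importlib

try:
    importlib.import_module("numpy.core._multiarray_umath")
    print("numpy C extension loads fine; the issue is likely on the PyInstaller side")
except ImportError as exc:
    print("numpy C extension is broken in this environment:", exc)

If the import succeeds, passing --hidden-import numpy.core._multiarray_umath to pyinstaller is a common workaround for the hook error.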

ImportError: Unable to import required dependencies: numpy: error when trying to migrate

I am using Django and Python with Anaconda. At the beginning, I was able to migrate after adding a sqlite3.dll file. However, when I try to migrate again, it gives me the error below. I have tried uninstalling and reinstalling both numpy and pandas in the project interpreter. I'm using PyCharm with the conda package manager and a conda environment. I also tried migrate --run-syncdb, but it still gives me the error.
How would I fix this so I can use .objects.bulk_create()? One recommendation for fixing .objects.bulk_create() was to run migrate --run-syncdb, but I currently cannot run it because of the error below.
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "C:\Users\PuTung\anaconda3\envs\swe_project\python.exe"
* The NumPy version is: "1.19.2"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
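When PyCharm and conda are both involved, the interpreter running manage.py is often not the environment that has a working numpy. A minimal sanity check, assuming the swe_project env from the error message is the intended one:

# Hedged diagnostic: confirm which interpreter and numpy Django actually sees.
import sys
print(sys.executable)  # should end in \envs\swe_project\python.exe

import numpy  # raises the same DLL error if this env's numpy is broken
print(numpy.__version__, numpy.__file__)

If sys.executable points elsewhere, fix the PyCharm run configuration first; if it is the right interpreter, conda install numpy --force-reinstall from that env is a reasonable next step.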

ClobberError when trying to install the nltk_data package using conda?

I am trying to install nltk_data package to my environment natlang using conda by giving the following command:
(natlang) C:\Users\asus>conda install -c conda-forge nltk_data
I receive the following errors:
Verifying transaction: failed
CondaVerificationError: The package for nltk_data located at
C:\Users\asus\Anaconda3\pkgs\nltk_data-2017.10.22-py_0
appears to be corrupted. The path
'lib/nltk_data/corpora/propbank/frames/con.xml'
specified in the package manifest cannot be found.
ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme'
ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme-dictionaries'
ClobberError: This transaction has incompatible packages due to a shared path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme-nombank-proposition-structure'
I am working on Anaconda 3, Python 3.6.5, Windows 10 Enterprise.
Can someone please tell me why this error is occurring and how I can fix it?
Background: I originally wanted to use punkt in one of my programs using the code lines:
import nltk_data
nltk.download()
This would open the NLTK downloader, but even after installing all the packages, including punkt, running the program again would still produce the following error:
LookupError:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
I tried rerunning nltk.download() and nltk.download('punkt') a couple of times with no change. So I decided to simply install the nltk_data package to my environment, on the assumption that if I install the package to the env itself, I won't have to use the nltk.download function to use punkt.
Summarizing, I have the following two questions:
If I install the nltk_data package to my env, do I still need to use the nltk.download function in my code? If yes, how do I resolve the LookupError?
If installing to the env is enough, then how do I resolve the ClobberError?
(PS: I apologize if this sounds stupid; I am very new to machine learning and to working with Python in general.)
The nltk_data repository is a collection of zip files and XML metadata. Usually it is not installed through packaging tools such as conda or pip.
But there is a conda-forge feedstock that tries to package nltk_data: https://github.com/conda-forge/nltk_data-feedstock
To use it, on the terminal/command prompt/console, first add the conda-forge channel:
conda config --add channels conda-forge
Then you shouldn't need the -c option, and just use:
conda install nltk_data
Please try the above and see whether that gets rid of the ClobberError.
This error is asking you to download a specific NLTK dataset called punkt:
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
Running nltk.download() without specifying which dataset you want will open a tkinter GUI, which normally isn't possible if you are accessing your machine remotely without a display.
If you're unsure which resource you need, I would suggest using the popular collection.
import nltk
nltk.download('popular')
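If the goal is to keep the data inside the environment rather than the default per-user directory, nltk.download also accepts a download_dir, and nltk.data.path controls where lookups happen. A small sketch (the target path below is only an example):

import nltk

# Example location inside the env; adjust to your own layout.
target = r"C:\Users\asus\Anaconda3\envs\natlang\nltk_data"
nltk.download('punkt', download_dir=target)
nltk.data.path.append(target)  # make future lookups see this directory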
Answering question 2 first: there have been similar issues across Windows machines. It's better to use the nltk.download() function if you want to use punkt or a similar module.
1) The lookup error can easily be resolved. It was because of a typo. Instead of
import nltk_data
it should be
import nltk.data
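With the corrected import, loading punkt directly is a quick way to confirm the download worked; this sketch assumes punkt has already been fetched:

import nltk.data

# Raises LookupError if punkt is still missing from nltk.data.path.
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
print(tokenizer.tokenize("Hello world. This is a test."))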

ImportError: No module named theano.sandbox using Jupyter notebook

I am trying to work with Keras on some small datasets locally, using my CPU.
I started a Jupyter notebook and selected what I think might be the right kernel, which would be the conda env:tensorflow.
So I try to import from the sandbox:
from theano.sandbox import cuda
and I get
ImportError: No module named theano.sandbox
I don't get it. Theano is installed. TensorFlow is installed. Anaconda is installed. How can I figure out what is going on?
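The usual culprit is that the notebook kernel is not the environment where Theano was installed. A minimal check to run in a notebook cell (a hedged suggestion):

import sys
print(sys.executable)  # the Python this kernel actually runs

If that path is not the env with Theano, either install Theano into this env, or register the right env as a kernel with python -m ipykernel install --user --name tensorflow run from that env.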
