ImportError: Unable to import required dependencies: numpy: error when trying to migrate - python-3.x

I am using Django and Python with Anaconda. At first I was able to migrate after adding a sqlite3.dll file. However, when I try to migrate again, it gives me this error. I have tried uninstalling and reinstalling both numpy and pandas in the Project Interpreter. I'm using PyCharm with the Conda package manager and a Conda environment. I also tried migrate --run-syncdb, but it still gives me the error.
How would I fix this to be able to use .objects.bulk_create()?
To fix .objects.bulk_create(), one recommendation was to run migrate --run-syncdb, but I currently cannot run it because of the error below.
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "C:\Users\PuTung\anaconda3\envs\swe_project\python.exe"
* The NumPy version is: "1.19.2"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
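A quick way to narrow this down is to check, from the same interpreter PyCharm uses, which Python and which NumPy copy are actually being loaded. A minimal diagnostic sketch (the env path in the comment comes from the error message above):

```python
# Diagnostic sketch: print which interpreter is running and where NumPy
# loads from. Run it with the same python.exe that Django/PyCharm uses.
import sys

print(sys.executable)  # expect ...\envs\swe_project\python.exe
try:
    import numpy
    print(numpy.__version__, numpy.__file__)
except ImportError as exc:
    # the C-extension failure from the question lands here
    print("numpy failed to import:", exc)
```

If numpy imports cleanly here but not under Django, the two are running different interpreters; if it fails here too, a forced reinstall inside the env (conda install --force-reinstall numpy) is a common next step.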

Related

numba not working because of an inconsistency with numpy

I have been stuck with this problem for a couple of days. I wanted to use numba for a project, so I created a conda environment with python=3.10.4
conda create -n scycom
Then I install the packages I need from the main repository
conda install numpy scipy numba matplotlib jupyter
which results in the installation of numpy=1.21.5 and numba=0.55.1
When I am done installing, I run jupyter lab --port=8888 and create a notebook. When I type from numba import jit, I get the following error message
ImportError: Numba needs NumPy 1.21 or less
with a problem apparently arising with _ensure_critical_deps(). This is puzzling because numpy is version 1.21.5.
My attempts to fix this problem
Given that it seemed to be a compatibility issue, I tried downgrading to numpy=1.20.1, but the problem persists and, sometimes, other compatibility issues with glib show up. In fact, the exact same error message arises
I created an environment with python=3.8.13, with numpy=1.20.1 and numba=0.55.1, facing the exact same error messages
I tried different combinations of libraries but could not resolve this incompatibility
I tried a few solutions from the internet, but none really fit my problem
I hope somebody willing to help has faced and solved a similar issue
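One pattern worth ruling out here: when numpy has been installed both by pip and by conda in the same environment, numba can end up importing a different copy than the one conda reports, which would explain the error despite numpy=1.21.5 being "installed". A standard-library-only sketch that compares the version recorded in package metadata with the version `import` actually finds:

```python
# Sketch: detect a stale or shadowing install by comparing the version
# recorded in package metadata with the version the interpreter imports.
# A mismatch points at a leftover copy (e.g. pip vs conda) on sys.path.
import importlib.metadata as md

def check_versions(dist_name, module_name):
    meta_version = md.version(dist_name)   # what the package manager recorded
    module = __import__(module_name)       # what `import` actually resolves
    return meta_version, getattr(module, "__version__", None)

if __name__ == "__main__":
    try:
        meta, imported = check_versions("numpy", "numpy")
        print("metadata reports:", meta)
        print("import finds   :", imported)
    except (md.PackageNotFoundError, ImportError):
        print("numpy is not installed in this environment")
```

If the two versions differ, removing the stale copy (pip uninstall numpy, then reinstalling with conda only) is a common fix for this class of numba/numpy mismatch.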

Can't get the Datasets library to install in Python 3

I'm trying to install the datasets package (from Huggingface). Everything seems to be working, but when I try to actually import it
from datasets import load_dataset
I get an error in
import pyarrow
import pyarrow.lib as _lib
Specifically:
ImportError: DLL load failed: The specified procedure could not be found.
I've tried reinstalling it multiple times, through Pip, Pip3, Conda. I tried reinstalling pyarrow. Nothing seems to make it work. That said, the error message was different before.
Interestingly, when I try upgrading Pip, I keep getting
WARNING: Ignoring invalid distribution -yarrow
Any idea what I can do to make it work?
Many thanks
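For what it's worth, pip's "Ignoring invalid distribution -yarrow" warning usually points at a leftover ~yarrow folder in site-packages from an interrupted install, and that debris can keep shadowing the real pyarrow. A small sketch to locate such leftovers (deleting them is then a manual step; nothing here is specific to pyarrow):

```python
# Sketch: list leftover "~package" folders that pip warns about as
# "Ignoring invalid distribution -package". They are debris from
# interrupted installs and can be deleted by hand.
import pathlib
import site

def find_install_debris():
    hits = []
    for sp in site.getsitepackages():
        hits.extend(pathlib.Path(sp).glob("~*"))
    return hits

if __name__ == "__main__":
    for leftover in find_install_debris():
        print("stale install debris:", leftover)
```

After removing the debris, a clean reinstall of pyarrow in a single package manager (pip or conda, not both) tends to resolve the DLL error.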

Why are these import errors occurring when running python scripts from cmd or windows task scheduler, but not anaconda?

I am encountering import errors, but only when running my Python scripts from cmd or Windows Task Scheduler (effectively the same issue, I assume). I have researched answers already and attempted various solutions (detailed below), but nothing has worked yet. I need to understand the problem in any case so that I can manage anything like it in the future.
Here is the issue:
Windows 10. Anaconda Python 3.9.7. Virtual environment.
I have a script that works fine if I open an anaconda prompt, activate the virtual environment and run it.
However, this is where the fun starts. If I try to run the script from the non-anaconda cmd prompt with the command: "C:\Users\user\anaconda3\envs\venv\python.exe" "C:\Users\user\scripts\script.py" I get the following error:
ImportError: DLL load failed while importing etree: The specified module could not be found.
Traceback includes:
"C:\Users\user\anaconda3\envs\venv\lib\site-packages\lxml\html\__init__.py", line 53, in <module>
from .. import etree
This is not as simple as one specific module not being installed, because of course running the script from within the anaconda prompt and the virtual environment works. Similar also happens when I run other scripts. Other errors I have seen include, for example:
ImportError: DLL load failed while importing _imaging: The specified module could not be found.
Traceback includes:
"C:\Users\user\anaconda3\envs\venv\lib\site-packages\PIL\Image.py", line 114, in <module>
from . import _imaging as core
Also, I think this may be somehow related. Importing numpy (1.22.3) from within the python interpreter in the virtual environment works fine, but when I try to run a test script that imports numpy it fails both from anaconda and the cmd with the following error:
ImportError: cannot import name SystemRandom
The overall issue was noted originally when trying to run various scripts from Windows Task Scheduler with the path to python "C:\Users\user\anaconda3\envs\venv\python.exe" entered as the Program/script and the script "script.py" entered as an argument. The above errors were produced, then reproduced by running the scripts from a non-anaconda cmd.
I am looking to understand what is happening here and for a solution that can get the scripts running from the virtual environment from Windows Task Scheduler effectively.
Update:
I have uninstalled and reinstalled numpy (and pandas) using conda. This has left the venv with numpy==1.20.3 (and pandas==1.4.2). On attempting to re-run one of the scripts, it runs fine from within the venv in anaconda, but produces the following error when attempting to run from cmd or from within Windows Task Scheduler as above:
ImportError: Unable to import required dependencies:
numpy:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.9 from "C:\Users\user\anaconda3\envs\venv\python.exe"
* The NumPy version is "1.20.3"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
I have looked into the solutions suggested, but am still completely at a loss, especially as to why the script runs from the venv in one place, but NOT the other.
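A pattern that often explains exactly this "works in the Anaconda prompt, fails everywhere else" behaviour: conda puts its DLL directories on PATH during environment activation, and invoking python.exe directly skips that step, so compiled extensions such as _multiarray_umath cannot find their dependent DLLs. A common workaround is a small wrapper batch file that activates the env first and is what Task Scheduler runs (the paths below are the ones from the question, not verified):

```shell
:: run_script.bat -- wrapper for Windows Task Scheduler.
:: Activation puts conda's DLL directories on PATH, which a bare
:: python.exe invocation misses. Adjust both paths to your machine.
@echo off
call C:\Users\user\anaconda3\Scripts\activate.bat venv
python C:\Users\user\scripts\script.py
```

In Task Scheduler, point Program/script at the .bat itself with no arguments. Alternatively, conda run -n venv python script.py achieves the same without a wrapper file.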

Unable to use gitpython ImportError

Tried to run a script I wrote yesterday again today and ran into this:
ImportError: cannot import name 'Repo' from 'git' (/usr/local/lib/python3.7/site-packages/git/__init__.py)
I am at a complete loss. I'm on a new computer, so the only thing different install-wise would be that I installed PyCharm. I am currently trying to run through a bash shell on a Mac. The exact code was running as expected earlier, with no code changes.
Things I have tried:
uninstalling/reinstalling python
uninstalling/reinstalling pip
uninstalling/reinstalling gitpython
Running on:
mac catalina
python version 3.7.6
pip version 20.0.1
As a side note, the script works as intended until the automated git push. I'm wondering if I should just make the OS calls myself and not worry about this?
I really don't understand what I did/am doing wrong here.
EDIT:
Again, sorry, as this is my first Mac computer. I did a brew uninstall of python3 and reinstalled through the app store to 3.8.
ImportError: cannot import name 'is_cygwin_git' from partially initialized module 'git.util' (most likely due to a circular import) (/Library/Frameworks/Python.framework/
I know cygwin is for Windows, but I figured I'd play along and tried a pip install pycygwin.
The install threw an error asking for Cython, so I did another pip install and tried again. pycygwin then complained gcc was missing, so I did a brew install of gcc. With gcc installed and correctly on the path, it still says it can't find it and exits with
build/cygwin/_cygwin.c:611:10: fatal error: 'sys/cygwin.h' file not found
#include <sys/cygwin.h>
^~~~~~~~~~~~~~
1 error generated.
error: command 'gcc' failed with exit status 1
Thinking I might just try a different package manager? Currently, an attempt to rerun the script yields
ImportError: cannot import name 'Repo' from partially initialized module 'git' (most likely due to a circular import)
I investigated, but I shouldn't have any overlapping dependencies.
On the script throwing the error I'm using:
import csv
import yaml
import os
from git import Repo
and on the wrapper I made and imported, I'm using:
import subprocess
import re
Will update if I get any further on this, would love some suggestions.
EDIT:
Importing with just import git throws a different error, as if Python is trying to import the script itself?
ImportError: cannot import name '<file name>' from '<file name>'
If I change the file name and try to run it, it comes backs with:
ImportError: cannot import name '<old file name>' from '<old file name>'
***FIXED****
Uninstall of python through homebrew
Reinstall of python through mac app store
Uninstall/Reinstall of modules through pip
Saving the file under a new name and deleting the old one
Still have absolutely no idea why/how this happened but the above worked for me. If anyone knows why something like this can happen, I would love to know. Cheers.
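For anyone hitting this later: "partially initialized module (most likely due to a circular import)" combined with cannot import name '<file name>' from '<file name>' is the classic signature of a local file shadowing an installed package, which is consistent with the rename fixing it. A quick check of where Python actually resolves a name from (module names here are just examples):

```python
# Sketch: show which file a module name actually resolves to. If the
# path points at your own script or its directory rather than
# site-packages, that file is shadowing the installed package.
import importlib.util

def resolve(module_name):
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

if __name__ == "__main__":
    # for GitPython this should print .../site-packages/git/__init__.py;
    # prints None if the package is not installed
    print(resolve("git"))
```

Renaming the shadowing file (and deleting any stale .pyc next to it) is usually the whole fix, with no reinstalling needed.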

ClobberError when trying to install the nltk_data package using conda?

I am trying to install nltk_data package to my environment natlang using conda by giving the following command:
(natlang) C:\Users\asus>conda install -c conda-forge nltk_data
I receive the following errors:
Verifying transaction: failed
CondaVerificationError: The package for nltk_data located at
C:\Users\asus\Anaconda3\pkgs\nltk_data-2017.10.22-py_0
appears to be corrupted. The path
'lib/nltk_data/corpora/propbank/frames/con.xml'
specified in the package manifest cannot be found.
ClobberError: This transaction has incompatible packages due to a shared
path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-
2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme'
ClobberError: This transaction has incompatible packages due to a shared
path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-
2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme-dictionaries'
ClobberError: This transaction has incompatible packages due to a shared
path.
packages: conda-forge::nltk_data-2017.10.22-py_0, conda-forge::nltk_data-
2017.10.22-py_0
path: 'lib/nltk_data/corpora/nombank.1.0/readme-nombank-proposition-
structure'
I am working on Anaconda 3, python version 3.6.5, windows 10 enterprise.
Can someone please tell me why this error is occurring and how I can fix it?
Background: I originally wanted to use punkt in one of my programs using the code lines:
import nltk_data
nltk.download()
This would open the nltk downloader and after installing all the packages including punkt, on further running the program I would still encounter the following error:
LookupError:
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
I tried rerunning nltk.download() and nltk.download('punkt') a couple of times with no change. So I decided to simply install the nltk_data package to my environment, on the assumption that if I install the package to the env itself, I won't have to use the nltk.download function to use punkt.
Summarizing, I have the following two questions:
If I install the nltk_data package to my env, do I still need to use the nltk.download function in my code? If yes, how do I resolve the LookupError?
If installing to the env is enough, then how do I resolve the ClobberError?
(ps: I apologize if this sounds stupid, I am very new to machine learning and working with python in general.)
The nltk_data repository is a collection of zip files and XML metadata. Usually, it is not installed through packaging tools such as conda or pip.
But there is a utility from conda-forge that tries to install nltk_data: https://github.com/conda-forge/nltk_data-feedstock
To use it, on the terminal/command prompt/console, first add the conda-forge channel:
conda config --add channels conda-forge
Then you shouldn't need the -c option, and just use:
conda install nltk_data
Please try the above and see whether that gets rid of the ClobberError.
This error is requesting you to download a specific nltk dataset called punkt:
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
Running nltk.download() without specifying which specific dataset you want to download will call up a tkinter GUI which normally wouldn't be possible if you are accessing your machine remotely without a GUI.
If you're unsure of which resource you need, I would suggest using the popular collection.
import nltk
nltk.download('popular')
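To avoid the GUI entirely, the check-then-download step can also be guarded so it only fetches punkt when it is actually missing. A sketch assuming nltk is installed (the resource path string follows nltk's tokenizers/punkt layout):

```python
# Sketch: fetch the 'punkt' tokenizer data only if it is not already
# present, without opening the tkinter downloader GUI.
import nltk

def ensure_punkt():
    try:
        nltk.data.find("tokenizers/punkt")   # raises LookupError if absent
    except LookupError:
        nltk.download("punkt")               # headless download
```

Calling ensure_punkt() once at program start makes the script safe to run on machines where the data has never been downloaded.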
Answering question 2 first: there have been similar issues across Windows machines. It's better to use the nltk.download() function if you want to use punkt or a similar module.
1) The lookup error can easily be resolved. It was because of a typo. Instead of
import nltk_data
it should be
import nltk.data
