PyMC3 Error: AttributeError: 'TheanoConfigParser' object has no attribute 'gcc__cxxflags' - theano

I'm trying to run PyMC3 on a Slurm cluster. In my code, the following import
import pymc3 as pm
raises the following error:
File "/om2/user/rylansch/anaconda3/lib/python3.7/site-packages/pymc3/__init__.py", line 39, in <module>
__set_compiler_flags()
File "/om2/user/rylansch/anaconda3/lib/python3.7/site-packages/pymc3/__init__.py", line 35, in __set_compiler_flags
current = theano.config.gcc__cxxflags
AttributeError: 'TheanoConfigParser' object has no attribute 'gcc__cxxflags'
What is this error and how do I fix it?

You probably still have the old Theano installed. Try uninstalling theano and the currently installed pymc3, then installing the replacement package, theano-pymc.
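For example, with pip (a sketch; if you manage packages with conda on the cluster, use the equivalent conda commands instead):
pip uninstall theano pymc3
pip install theano-pymc pymc3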
Here's a detailed guide: https://github.com/pymc-devs/pymc3#installation

Related

pudb3 and cftime incompatibility?

I am trying to debug some code with pudb (Python 3.10). The code imports xarray, and while importing it there is an error in the cftime module: pudb says there is no module cftime._cftime. The exact error output is
File "src/cftime/_cftime.pyx", line 374, in init cftime._cftime
File "/home/lmod/software/SciPy-bundle/2021.10-intel-2022.00/lib/python3.10/site-packages/numpy/core/getlimits.py", line 651, in __init__
self.dtype = numeric.dtype(type(int_type))
TypeError: 'NoneType' object is not callable
Is there a way to fix this? It has happened before with other packages, but in those cases I was able to find a way not to use them; I can't do that in this case.
The code imports every package perfectly fine without pudb. I've tried upgrading both xarray and numpy without success. I have numpy 1.22.0 and xarray 2022.9.0.

Which Theano for PyMC3?

Disclaimer: SO forced me to format the stdout output copied below as code; simple quoting wouldn't let me post.
I have a pyproject.toml file from a Proof of Concept that worked beautifully a few months ago, which includes:
[tool.poetry.dependencies]
theano = "^1.0.5"
pymc3 = "^3.9.3"
I copied this over to start extending from the Proof of Concept, and when trying to run it I get
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
The installed Theano(-PyMC) version (1.0.5) does not match the PyMC3 requirements.
For PyMC3 to work, Theano must be uninstalled and replaced with Theano-PyMC.
See https://github.com/pymc-devs/pymc3/wiki for installation instructions.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
So I followed the instructions: removed theano and added Theano-PyMC. This gave me no big alert at the beginning of the output, but it still failed:
File "mycode.py", line 4, in <module>
import pymc3
File "/home/user/.cache/pypoetry/virtualenvs/project-xNaK0WN7-py3.8/lib/python3.8/site-packages/pymc3/__init__.py", line 34, in <module>
if not semver.match(theano.__version__, ">=1.1.2"):
AttributeError: module 'theano' has no attribute '__version__'
I then tried adding aesara with a just-try-it scattergun mentality, and got the same error. I then tried adding stock theano again on top of all of the above, and got the first error again, even with Theano-PyMC still installed; the traceback ends with:
File "/home/user/.../code.py", line 4, in <module>
import pymc3
File "/home/user/.cache/pypoetry/virtualenvs/project-xNaK0WN7-py3.8/lib/python3.8/site-packages/pymc3/__init__.py", line 50, in <module>
__set_compiler_flags()
File "/home/user/.cache/pypoetry/virtualenvs/project-xNaK0WN7-py3.8/lib/python3.8/site-packages/pymc3/__init__.py", line 46, in __set_compiler_flags
current = theano.config.gcc__cxxflags
AttributeError: 'TheanoConfigParser' object has no attribute 'gcc__cxxflags'
Why would the same pymc3 version that worked just fine with theano last time now have a problem with it? The link provided in the !!!! alert (https://github.com/pymc-devs/pymc3/wiki) is virtually blank, so not helpful.
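One way to read the tracebacks (a sketch, not a verified diagnosis): the caret constraint pymc3 = "^3.9.3" lets Poetry resolve a newer 3.11.x release, whose semver.match(theano.__version__, ">=1.1.2") check expects Theano-PyMC rather than stock Theano 1.0.5, and Theano-PyMC installs under the same theano import name, so mixing both packages in one virtualenv can produce exactly these attribute errors. Under that assumption, the dependency block would name only one backend, for example:
[tool.poetry.dependencies]
# hypothetical: let pymc3 3.11.x pull in its matching theano-pymc; do not also list stock theano
pymc3 = "^3.11.0"
followed by recreating the virtualenv so that no stock Theano files are left behind.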

How to fix 'module 'keras.backend.tensorflow_backend' has no attribute '_is_tf_1''

While training with the yolov3 framework, I always get this module error.
I have tried reinstalling keras and tensorflow; the version of keras is 2.3.0 and the version of tensorflow is 1.14.0.
Traceback (most recent call last):
File "train.py", line 6, in <module>
import keras.backend as K
File "F:\Anacoda\lib\site-packages\keras\__init__.py", line 3, in <module>
from . import utils
File "F:\Anacoda\lib\site-packages\keras\utils\__init__.py", line 27, in <module>
from .multi_gpu_utils import multi_gpu_model
File "F:\Anacoda\lib\site-packages\keras\utils\multi_gpu_utils.py", line 7, in <module>
from ..layers.merge import concatenate
File "F:\Anacoda\lib\site-packages\keras\layers\__init__.py", line 4, in <module>
from ..engine.base_layer import Layer
File "F:\Anacoda\lib\site-packages\keras\engine\__init__.py", line 8, in <module>
from .training import Model
File "F:\Anacoda\lib\site-packages\keras\engine\training.py", line 21, in <module>
from . import training_arrays
File "F:\Anacoda\lib\site-packages\keras\engine\training_arrays.py", line 14, in <module>
from .. import callbacks as cbks
File "F:\Anacoda\lib\site-packages\keras\callbacks\__init__.py", line 19, in <module>
if K.backend() == 'tensorflow' and not K.tensorflow_backend._is_tf_1():
AttributeError: module 'keras.backend.tensorflow_backend' has no attribute '_is_tf_1'
I fixed this problem by replacing keras.XXX with tensorflow.keras.XXX.
Try replacing
import keras.backend as K
with
import tensorflow.keras.backend as K
Alternatively, import this:
import tensorflow as tf
then use
tf.compat.v1.keras.backend.
as the prefix of your desired attribute.
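For example (a sketch; get_session, which another answer below also mentions, is one such attribute):
import tensorflow as tf

# use the compat path as the prefix for backend attributes
K = tf.compat.v1.keras.backend
sess = K.get_session()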
I had the same error and tried installing and uninstalling. In the end, I found that the library was not actually installed correctly.
I went through each library in my:
C:\Users\MyName\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\
I tracked down the file within site-packages in Keras that was calling into the TensorFlow library, which in turn was calling into another folder. The final folder had get_session(), but it was not being pulled in: when I checked the directory, I found that get_session wasn't being loaded. Within the file /tensorflow/keras/backend.py everything else was imported, but get_session was missed out.
To fix this I added this line:
from tensorflow.python.keras.backend import get_session
Then I saved it. The next time I ran my code it was fine. I would suggest doing something similar with your files: locate where the attribute is called and make sure it is being imported.
pip3 uninstall keras
pip3 install keras --upgrade
https://github.com/keras-team/keras/issues/13352
Installing TensorFlow 1.14.0 + Keras 2.2.5 on Python 3.6 fixes this.
Make sure your Keras version is right. If your backend is TensorFlow, you can run
import tensorflow as tf
print(tf.VERSION)
print(tf.keras.__version__)
to get the right version of Keras, then install that version. I fixed this problem that way; I hope my answer can help you.
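For instance, if the printout corresponds to TensorFlow 1.14, the pairing suggested in an answer above would be installed with (an illustration; substitute whatever version your own printout indicates):
pip install keras==2.2.5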

I am getting an error when I load Pandas and Numpy in

I am getting the error below when I execute:
import numpy as np
The full stack trace:
File "C:\Anaconda3\lib\site-packages\numpy\__init__.py", line 142, in <module>
from . import add_newdocs
File "C:\Anaconda3\lib\site-packages\numpy\add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "C:\Anaconda3\lib\site-packages\numpy\lib\__init__.py", line 8, in <module>
from .type_check import *
File "C:\Anaconda3\lib\site-packages\numpy\lib\type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "C:\Anaconda3\lib\site-packages\numpy\core\__init__.py", line 14, in <module>
from . import multiarray
ImportError: DLL load failed: The specified module could not be found.
Process finished with exit code 1
This looks like some of the dependencies for the numpy package are missing. The DLL load failed error points towards incorrectly installed and/or missing files for numpy. You should try installing numpy again through the Anaconda CLI using the following command:
conda install numpy
This works most of the time, if your Python environments are set up correctly. Do make sure that the Python installation you're using for Anaconda is the same one as your coding environment; conflicting versions can sometimes lead to corrupted packages.
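One quick way to check which interpreter you are actually running (a minimal sketch; run it from the environment your IDE or terminal uses):
import sys
print(sys.executable)  # should point into your Anaconda installation, e.g. C:\Anaconda3\python.exe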
You could also run the following command to update the Anaconda environment, just to make sure that the installations are up to date and not conflicting with older versions:
conda update -y --all

Error importing sklearn in python [duplicate]

I'm trying to call a function from the cluster module, like so:
import sklearn
db = sklearn.cluster.DBSCAN()
and I get the following error:
AttributeError: 'module' object has no attribute 'cluster'
Tab-completing in IPython, I seem to have access to the base, clone, externals, re, setup_module, sys, and warning modules. Nothing else, though others (including cluster) are in the sklearn directory.
Following pbu's advice below and using
from sklearn import cluster
I get:
Traceback (most recent call last):
File "test.py", line 2, in <module>
from sklearn import cluster
File "C:\Python34\lib\site-packages\sklearn\cluster\__init__.py", line 6, in <module>
from .spectral import spectral_clustering, SpectralClustering
File "C:\Python34\lib\site-packages\sklearn\cluster\spectral.py", line 13, in <module>
from ..utils import check_random_state, as_float_array
File "C:\Python34\lib\site-packages\sklearn\utils\__init__.py", line 16, in <module>
from .class_weight import compute_class_weight, compute_sample_weight
File "C:\Python34\lib\site-packages\sklearn\utils\class_weight.py", line 7, in <module>
from ..utils.fixes import in1d
File "C:\Python34\lib\site-packages\sklearn\utils\fixes.py", line 318, in <module>
from scipy.sparse.linalg import lsqr as sparse_lsqr
File "C:\Python34\lib\site-packages\scipy\sparse\linalg\__init__.py", line 109, in <module>
from .isolve import *
File "C:\Python34\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py", line 6, in <module>
from .iterative import *
File "C:\Python34\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py", line 7, in <module>
from . import _iterative
ImportError: DLL load failed: The specified module could not be found.
I'm using Python 3.4 on Windows, scikit-learn 0.16.1.
You probably don't use Numpy+MKL, but only Numpy.
I had the same problem and reinstalling Numpy with MKL
pip install --upgrade --force-reinstall "numpy-1.16.3+mkl-cp37-cp37m-win32.whl"
fixed it.
Note: update the file name to the latest version, possibly 64-bit; see the list of available Windows binaries.
The problem was with the scipy/numpy install. I'd been using the (normally excellent!) unofficial installers from http://www.lfd.uci.edu/~gohlke/pythonlibs/. Uninstalling/reinstalling from there made no difference, but installing with the official installers (linked from http://www.scipy.org/install.html) did the trick.
I am using Anaconda and got the same error as the OP when loading Orange or PlotNine. I can't recall when this started to happen.
Tracing the dependencies of Anaconda3\Lib\site-packages\scipy\special\_ufuncs.cp36-win32.pyd in Dependency Walker shows that libifcoremd.dll and libmmd.dll are missing. Searching for them in the Anaconda root directory, they are located in both ICC_RT and one version of the MKL package.
Adding Anaconda3\pkgs\mkl-2017.0.3-0\Library\bin to PATH seems to fix the SciPy and NumPy related DLL load failures, and the packages above start to work again.
I still don't know how to fix this properly. The downside is that the MKL package could be updated and its version, and therefore the path, may change; in that respect it is just as inconvenient as adding a non-managed package.
Reinstalling ICC_RT fixed the issue for me: libmmd.dll and the related DLLs are automatically copied into anaconda3/Library/bin afterwards, which is automatically added to PATH by the activate command. All the previous numpy/scipy related "can't load DLL" errors are gone now.
The error log shows that scipy is the most recent module that fails to import:
File "C:\Python34\lib\site-packages\sklearn\utils\fixes.py", line 318, in <module>
from scipy.sparse.linalg import lsqr as sparse_lsqr
File "C:\Python34\lib\site-packages\scipy\sparse\linalg\__init__.py", line 109, in <module>
from .isolve import *
File "C:\Python34\lib\site-packages\scipy\sparse\linalg\isolve\__init__.py", line 6, in <module>
from .iterative import *
File "C:\Python34\lib\site-packages\scipy\sparse\linalg\isolve\iterative.py", line 7, in <module>
from . import _iterative
ImportError: DLL load failed: The specified module could not be found.
I had the same error with the same log; the problem was gone when I uninstalled and reinstalled scipy:
pip uninstall scipy
pip install scipy
Place this line at the top of the Python file:
from sklearn import cluster
That should do it :))
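Applied to the snippet in the question, that looks like this (a sketch; DBSCAN is left with its default parameters):
from sklearn import cluster

db = cluster.DBSCAN()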
For me what fixed it were these commands:
pip uninstall sklearn
pip uninstall scikit-learn
pip uninstall scipy
pip install scipy
pip install scikit-learn
I had the same issue and solved it by installing/updating the mkl package:
conda install mkl
or
pip install mkl
Just for full information, this also updated and downgraded the following packages:
The following packages will be UPDATED:
mkl: 2017.0.4-h6d528fc_0 defaults --> 2018.0.3-1 defaults
The following packages will be DOWNGRADED:
numpy: 1.11.3-py34_0 defaults --> 1.10.1-py34_0 defaults
pandas: 0.19.2-np111py34_1 defaults --> 0.18.1-np110py34_0 defaults
scikit-learn: 0.18.1-np111py34_1 defaults --> 0.17-np110py34_1 defaults
scipy: 0.19.1-np111py34_0 defaults --> 0.16.0-np110py34_0 defaults
I struggled trying to figure this one out; I tried to download and install the (unofficial) Numpy+MKL library from the website (risky/tedious?).
Ultimately I found success by:
1. Logging in to the command prompt with admin rights; how to here: https://superuser.com/questions/968214/open-cmd-as-admin-with-windowsr-shortcut
2. Uninstalling the existing/tangled versions of SciPy and NumPy:
pip uninstall scipy
pip uninstall numpy
3. Fresh-installing SciPy and NumPy:
pip install scipy
pip install numpy
4. Running the Jupyter notebook; it worked for me.
The message ImportError: DLL load failed: The specified module could not be found indicates a failure to identify and load the DLL(s) required by the scikit-learn library; a fresh install of scipy/numpy probably restores the DLL links called from the Jupyter notebook code.
Download the Microsoft Visual C++ redistributable.
Link: https://www.microsoft.com/en-in/download/details.aspx?id=53840
vc_redist.x64.exe
Install and run this .exe file on your computer; the DLL import module error will not appear after this.
Now it will work fine. Enjoy :)
