Which Theano for PyMC3?

I have a pyproject.toml file from a Proof of Concept that worked beautifully a few months ago, which includes:
[tool.poetry.dependencies]
theano = "^1.0.5"
pymc3 = "^3.9.3"
I copied this over to start extending from the Proof of Concept, and when trying to run it I get
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
The installed Theano(-PyMC) version (1.0.5) does not match the PyMC3 requirements.
For PyMC3 to work, Theano must be uninstalled and replaced with Theano-PyMC.
See https://github.com/pymc-devs/pymc3/wiki for installation instructions.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
So I followed the instructions: removed theano and added Theano-PyMC. This gave me no big alert at the beginning of the output, but it still failed:
File "mycode.py", line 4, in <module>
import pymc3
File "/home/user/.cache/pypoetry/virtualenvs/project-xNaK0WN7-py3.8/lib/python3.8/site-packages/pymc3/__init__.py", line 34, in <module>
if not semver.match(theano.__version__, ">=1.1.2"):
AttributeError: module 'theano' has no attribute '__version__'
I then tried adding aesara on a just-try-it hunch, and got the same error. I then added stock theano back on top of all the above and got the first error again, even with Theano-PyMC still installed; the traceback ends with:
File "/home/user/.../code.py", line 4, in <module>
import pymc3
File "/home/user/.cache/pypoetry/virtualenvs/project-xNaK0WN7-py3.8/lib/python3.8/site-packages/pymc3/__init__.py", line 50, in <module>
__set_compiler_flags()
File "/home/user/.cache/pypoetry/virtualenvs/project-xNaK0WN7-py3.8/lib/python3.8/site-packages/pymc3/__init__.py", line 46, in __set_compiler_flags
current = theano.config.gcc__cxxflags
AttributeError: 'TheanoConfigParser' object has no attribute 'gcc__cxxflags'
Why would the same version of pymc3 that worked just fine with theano last time now have a problem with it? The link provided in the alert (https://github.com/pymc-devs/pymc3/wiki) is virtually blank, so it's not helpful.
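As a diagnostic aside: the failing check at the top of pymc3/__init__.py just reads theano.__version__, so you can see what PyMC3 sees without importing PyMC3 itself. A minimal sketch with a stand-in module (hypothetical; a real diagnosis would import your actual theano):

```python
import types

# Hypothetical stand-in for a broken/mismatched theano install;
# a real diagnosis would do `import theano` instead.
fake_theano = types.ModuleType("theano")

# PyMC3 effectively reads theano.__version__ at import time; a defensive
# getattr shows what is (or isn't) there instead of raising AttributeError.
version = getattr(fake_theano, "__version__", None)
print(version)  # None -> the theano on this path isn't the fork PyMC3 expects
```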

Related

metpy import issue in Debian 10

I am running python3 on a Debian 10 (buster) system.
Up until yesterday, I was able to perform this import:
from metpy.plots import (StationPlot, StationPlotLayout, wx_code_map, current_weather)
After a general package update, I can no longer perform the import and instead get this string of errors:
Traceback (most recent call last):
  File "/home/disk/bob/impacts/bin/ASOS_plot_data_hourly_ISU.py", line 37, in <module>
    from metpy.plots import (StationPlot, StationPlotLayout, wx_code_map, current_weather)
  File "/usr/lib/python3/dist-packages/metpy/__init__.py", line 35, in <module>
    from .xarray import *  # noqa: F401, F403, E402
  File "/usr/lib/python3/dist-packages/metpy/xarray.py", line 27, in <module>
    from .units import DimensionalityError, UndefinedUnitError, units
  File "/usr/lib/python3/dist-packages/metpy/units.py", line 40, in <module>
    lambda string: string.replace('%', 'percent')
  File "/usr/lib/python3/dist-packages/pint/registry.py", line 74, in __call__
    obj = super(_Meta, self).__call__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'preprocessors'
In fact, I can't even do a simple
import metpy
without getting the same error chain.
Obviously, there must be some sort of version discrepancy with xarray or some other package.
I currently have these versions installed: 1.0.0rc1.po of metpy and 0.12.1-1 of xarray.
Any thoughts about what the required combination of packages should be or who I might ask about this?
It's unclear from your post what versions of Pint and Python you have installed. From the error, it seems you have too old a version of Pint installed, though MetPy 1.0.0rc1 should have been able to deal with that. Really, the whole 1.0.0rc1.po version string makes me wonder whether MetPy was installed from git at some point after rc1.
Regardless, MetPy 1.0.0rc1 was the first Release Candidate for the 1.0 release of MetPy and is not a version I would rely upon. I would suggest updating to MetPy 1.0.1 (if you are using Python 3.6) or MetPy 1.2 (for Python >= 3.7).
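As pip commands, the suggested upgrade might look like this (a sketch; it assumes pip3 targets the same interpreter you run the script with, and that you are willing to shadow the Debian-packaged copy):

```shell
# For Python >= 3.7:
pip3 install --upgrade "metpy>=1.2"

# Or, for Python 3.6:
pip3 install --upgrade "metpy==1.0.1"
```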

PyMC3 Error: AttributeError: 'TheanoConfigParser' object has no attribute 'gcc__cxxflags'

I'm trying to run PyMC3 on a Slurm cluster. In my code, the following import
import pymc3 as pm
raises the following error:
File "/om2/user/rylansch/anaconda3/lib/python3.7/site-packages/pymc3/__init__.py", line 39, in <module>
__set_compiler_flags()
File "/om2/user/rylansch/anaconda3/lib/python3.7/site-packages/pymc3/__init__.py", line 35, in __set_compiler_flags
current = theano.config.gcc__cxxflags
AttributeError: 'TheanoConfigParser' object has no attribute 'gcc__cxxflags'
What is this error and how do I fix it?
You possibly still have the old theano installed. Try uninstalling theano and the currently installed pymc3, then install the new version, theano-pymc.
Here's a detailed guide: https://github.com/pymc-devs/pymc3#installation
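In pip terms, the reinstall described above would be something like the following (a sketch; with conda you would use the equivalent conda commands instead):

```shell
pip uninstall -y theano pymc3
pip install "pymc3>=3.11"   # recent pymc3 depends on (and pulls in) theano-pymc
```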

How to fix: module 'keras.backend.tensorflow_backend' has no attribute '_is_tf_1'

While training with the yolov3 framework, I always get this module error.
I have tried reinstalling keras and tensorflow; the version of keras is 2.3.0 and the version of tensorflow is 1.14.0.
Traceback (most recent call last):
File "train.py", line 6, in <module>
import keras.backend as K
File "F:\Anacoda\lib\site-packages\keras\__init__.py", line 3, in <module>
from . import utils
File "F:\Anacoda\lib\site-packages\keras\utils\__init__.py", line 27, in <module>
from .multi_gpu_utils import multi_gpu_model
File "F:\Anacoda\lib\site-packages\keras\utils\multi_gpu_utils.py", line 7, in <module>
from ..layers.merge import concatenate
File "F:\Anacoda\lib\site-packages\keras\layers\__init__.py", line 4, in <module>
from ..engine.base_layer import Layer
File "F:\Anacoda\lib\site-packages\keras\engine\__init__.py", line 8, in <module>
from .training import Model
File "F:\Anacoda\lib\site-packages\keras\engine\training.py", line 21, in <module>
from . import training_arrays
File "F:\Anacoda\lib\site-packages\keras\engine\training_arrays.py", line 14, in <module>
from .. import callbacks as cbks
File "F:\Anacoda\lib\site-packages\keras\callbacks\__init__.py", line 19, in <module>
if K.backend() == 'tensorflow' and not K.tensorflow_backend._is_tf_1():
AttributeError: module 'keras.backend.tensorflow_backend' has no attribute '_is_tf_1'
I fixed this problem by replacing keras.XXX with tensorflow.keras.XXX. Try replacing
import keras.backend as K
with
import tensorflow.keras.backend as K
Import this:
import tensorflow as tf
then use
tf.compat.v1.keras.backend.
as the prefix of your desired attribute.
I had the same error and tried installing and uninstalling. In the end, I found that the library was not actually installed correctly.
I went through each library in my:
C:\Users\MyName\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\
I tracked down the file within Keras's site-packages; it was calling into the TensorFlow library, which in turn was calling another folder. The final module there did define get_session(), but it was not being imported. Within /tensorflow/keras/backend.py everything was imported, but get_session was missed out.
To fix this I added this line:
from tensorflow.python.keras.backend import get_session
Then I saved it. The next time I ran my code it was fine. I would suggest doing the same with your files: locate where the attribute is called and make sure it is being imported.
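The fix described above (a missing name restored by adding an import) can be illustrated with a stand-in module; all names here are hypothetical, and the real fix edits tensorflow/keras/backend.py as described:

```python
import types

# Stand-in for a backend module that forgot to re-export get_session.
backend = types.ModuleType("backend")
assert not hasattr(backend, "get_session")  # the AttributeError situation

def get_session():
    """Stand-in for tensorflow.python.keras.backend.get_session."""
    return "session"

# The equivalent of adding
#   from tensorflow.python.keras.backend import get_session
# to the module: the name becomes reachable where callers expect it.
backend.get_session = get_session
print(backend.get_session())  # session
```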
pip3 uninstall keras
pip3 install keras --upgrade
https://github.com/keras-team/keras/issues/13352
Installing TensorFlow 1.14.0 + Keras 2.2.5 on Python 3.6 fixes this.
Make sure your Keras version is right. If your backend is TensorFlow, you can run
import tensorflow as tf
print(tf.__version__)
print(tf.keras.__version__)
to get the right version of Keras, then install that version. I fixed the problem this way; I hope my answer can help you.

Tensorflow r1.12: TypeError: Type already registered for SparseTensorValue when running a 2nd script

I have just built Tensorflow r1.12 from source in Ubuntu 16.04. The installation is successful.
When I run a certain script in Spyder at the 1st time, everything flows smoothly.
However, when I continue to run another script, following errors occur (which didn't happen previously):
File "/home/haohua/tf_env/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/home/haohua/tf_env/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 70, in <module>
    from tensorflow.python.framework.framework_lib import *  # pylint: disable=redefined-builtin
  File "/home/haohua/tf_env/lib/python3.6/site-packages/tensorflow/python/framework/framework_lib.py", line 30, in <module>
    from tensorflow.python.framework.sparse_tensor import SparseTensor
  File "/home/haohua/tf_env/lib/python3.6/site-packages/tensorflow/python/framework/sparse_tensor.py", line 248, in <module>
    pywrap_tensorflow.RegisterType("SparseTensorValue", SparseTensorValue)
TypeError: Type already registered for SparseTensorValue
The temporary workaround is to restart the kernel, but I don't want to restart the kernel every time I run a script. So I would like to ask for a proper solution to this kind of issue. Thank you in advance.
(Spyder maintainer here) This error was fixed in Spyder 3.3.3, released in February 2019.
Get rid of the import statement:
import tensorflow as tf
Seems to work after that... it's kinda janky
import tensorflow as tf
node1 = tf.constant(3.0,tf.float32)
node2 = tf.constant(4.0)
sess = tf.Session()
print(sess.run([node1,node2]))
sess.close()
When you run this code the first time it shows the output, but when you run it a second time it shows the error. To avoid that, select the whole program except import tensorflow as tf and run it in "run in current cell" mode; it will work and show the output.
Otherwise, restart your kernel and it will work.
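The behaviour being worked around can be sketched without TensorFlow: RegisterType keeps a process-wide registry and raises on duplicates, so running the same script twice in one kernel re-registers the same name. A toy registry (hypothetical names, illustrative only):

```python
# Toy stand-in for pywrap_tensorflow's process-wide type registry.
_registry = {}

def register_type(name, cls):
    # Like RegisterType, refuse duplicate registrations.
    if name in _registry:
        raise TypeError("Type already registered for %s" % name)
    _registry[name] = cls

class SparseTensorValue:  # stand-in for the real class
    pass

register_type("SparseTensorValue", SparseTensorValue)      # first run: fine

error_message = ""
try:
    register_type("SparseTensorValue", SparseTensorValue)  # second run
except TypeError as err:
    error_message = str(err)

print(error_message)  # Type already registered for SparseTensorValue
```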

Problems using ambhas package: statistics module has no cpdf attribute

I am trying to use the ambhas package for copulas and the method
foo.generate_xy()
is not working.
This is the error I get:
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
generate()
File "C:/Users/Mypc/Desktop/sfsdf.py", line 19, in genera
x1,y1 = foo.generate_xy()
File "C:\Python33\lib\site-packages\ambhas\copula.py", line 179, in generate_xy
self._inverse_cdf()
File "C:\Python33\lib\site-packages\ambhas\copula.py", line 256, in _inverse_cdf
x2, x1 = st.cpdf(self.X, kernel = 'Epanechnikov', n = 100)
AttributeError: 'module' object has no attribute 'cpdf'
Basically, I checked the code in the ambhas module, and it turns out it is trying to use a method from the statistics module (imported as st), st.cpdf(), which my statistics module does not have.
How can I fix this problem?
Here is an example of the code working. This is the very same code I am trying to run through the generate() function:
https://code.google.com/p/ambhas/wiki/Cookbook
The AMBHAS code depends on this statistics module, not the "official" one that is now included in Python 3.4. Unfortunately, the module doesn't appear to have been updated in a while, and the latest Windows installer is for Python 3.2, so you'll need to build it from source. You can do this with Cygwin, or by installing Visual C++ 2010 Express (here is a blog post describing the process, but I haven't tried it myself so I don't know if it's completely accurate - YMMV).
Once you've compiled the extension, make sure you remove the statistics module you installed previously before running python setup.py install.
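As shell steps, the tail of that process might look like the following (a sketch; the source directory name is an assumption, and building requires a C compiler as described above):

```shell
pip uninstall statistics      # remove the previously installed copy first
cd statistics-source/         # hypothetical path to the module's source tree
python setup.py build         # needs Cygwin or Visual C++ 2010 Express
python setup.py install
```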
