I have installed tensorflow-gpu and keras on my GPU machine for deep learning training. The tensorflow version is 1.12. However, nyoka (a PMML converter package for Python) conflicts with it because of nyoka's tensorflow dependency; I think it requires tensorflow 1.2. Is there any workaround for this?
Which version of Nyoka are you using? The issue was there prior to version 3.0.1. The latest version of nyoka (3.0.6) does not have tensorflow as a dependency. Could you try with the latest version?
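For example, assuming pip manages your environment, upgrading and then verifying the installed version would look like this:
pip install --upgrade nyoka
pip show nyoka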
Related
I’ve been trying to install keras 2.4.0 for a while but I can’t. I tried running “pip install keras 2.4.0”, but it installs keras 2.5.0 instead on Colab, as if it can’t find the keras 2.4.0 version anymore.
Do you have a solution for installing the keras 2.4.0 version?
Thanks
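For reference, pinning an exact version with pip uses the == syntax (whether 2.4.0 is still downloadable from PyPI on Colab is a separate question):
pip install keras==2.4.0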
I switched from Windows development to Ubuntu 19.04 and, after installing my libraries using pip, I got the runtime warning about numpy.ufunc size changed.
I am using the following versions, and my project is in PyCharm Professional (latest version).
Spacy 2.1.8
numpy 1.17.2
blis 0.4.0
thinc 7.0.8
gensim 2.8
I have uninstalled everything and installed spaCy last, I have uninstalled numpy and pinned version 1.15, and tried various other tricks, but I can't seem to get it to install the correct version.
Does anyone know how to resolve this issue, or which versions of numpy/thinc/blis should actually be used in this case?
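Not a fix for the underlying version mismatch, but a commonly used workaround (assuming the message is just the harmless binary-size notice and nothing actually crashes) is to silence it before importing the affected packages:
import warnings
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")  # matches the start of the RuntimeWarning text

import spacy  # import after installing the filter so the notice is suppressed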
I am looking to use the best_score_ attribute from GridSearchCV, but it looks like it is not present in the latest version of the spark-sklearn library (version 0.2.3). When I try to uninstall the latest version and reinstall an older version (0.2.0) with the command
pip install spark-sklearn-0.2.0
it does not work. How can I install older versions of the spark-sklearn library in my cluster environments? The best_score_ attribute seems to work fine in version 0.2.0.
Thanks
There is a known issue with spark-sklearn version 0.2.3 where GridSearchCV does not expose the best_score_ attribute. The issue can be found here:
https://github.com/databricks/spark-sklearn/issues/73
To install an older version of the library, use the following command:
pip install spark-sklearn==0.2.0
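Once 0.2.0 is installed, a minimal sketch of reading best_score_ (assuming a local Spark installation, with the iris dataset as a stand-in for your data):
from pyspark import SparkContext
from sklearn import datasets, svm
from spark_sklearn import GridSearchCV

sc = SparkContext.getOrCreate()
X, y = datasets.load_iris(return_X_y=True)

param_grid = {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}
gs = GridSearchCV(sc, svm.SVC(), param_grid)  # spark-sklearn's GridSearchCV takes the SparkContext as its first argument
gs.fit(X, y)
print(gs.best_score_)  # available in 0.2.0, missing in 0.2.3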
I am trying to use the CuDNNLSTM Keras cell to improve training speed for a recurrent neural network (doc here).
When I run:
from keras.layers import Bidirectional, CuDNNLSTM
I get this error:
ImportError: cannot import name 'CuDNNLSTM'
My configuration is Keras 2.0.8, Python 3.5, tensorflow-gpu 1.4.0 (all managed by Anaconda), and I have both CUDA 8.0 and cuDNN 6.0 installed, which should be OK with the NVIDIA dependencies of tensorflow (here). My setup makes Keras effectively use the tensorflow backend, and every layer except the ones starting with CuDNN* works fine.
Does anyone have an idea about the source of this import error?
And for TensorFlow 2: you can just use the standard LSTM layer with its default arguments (no custom activation) and it will automatically use the cuDNN implementation when running on a GPU.
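A minimal TF2 sketch of that (layer sizes and shapes are illustrative; on a GPU the default-argument LSTM dispatches to the cuDNN kernel):
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64),  # default activation/recurrent_activation, so the cuDNN kernel is used on GPU
        input_shape=(100, 32),     # (timesteps, features), illustrative
    ),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()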
It turns out keras 2.0.8 doesn't have the code for these kinds of layers; they came in more recent versions.
I used pip to upgrade to the latest version:
pip install --upgrade keras
and it all works now.
These layers have been deprecated in the latest versions.
For a detailed tutorial you can see this Keras guide
In conda it will be (as of Nov 2019):
conda config --add channels conda-forge
conda install keras==2.3.0
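A quick sanity check after upgrading (assumes standalone Keras 2.x with the TensorFlow 1.x backend and a CUDA-capable GPU, since the CuDNN layers only run on GPU):
from keras.models import Sequential
from keras.layers import Bidirectional, CuDNNLSTM

model = Sequential()
model.add(Bidirectional(CuDNNLSTM(64), input_shape=(100, 32)))  # (timesteps, features), illustrative
model.summary()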
I ran a Python program with Theano, but it fails with:
ImportError: cuDNN not available: Version is too old. Update to v5, was 3007.
So, is it possible to use Theano with CUDA 6.5 and cuDNN 3.0? Currently, I don't have root privileges to install a newer version of CUDA (because a newer CUDA needs a newer driver).
Clone Theano from the git repo, use git checkout to grab an older version, then install it locally. For example:
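Something along these lines (the tag name is illustrative; pick a release that predates the cuDNN v5 requirement):
git clone https://github.com/Theano/Theano.git
cd Theano
git checkout rel-0.8.2
pip install --user -e .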