How to use Keras as an interface to Theano? - theano

For TensorFlow, this post explains very well how to use Keras with TensorFlow:
https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
But I can't find how to use Keras with Theano directly.
Is it possible to use it in the same way as with TensorFlow?

The official documentation is here: https://keras.io/backend/
Basically, edit your $HOME/.keras/keras.json (Linux) or %USERPROFILE%\.keras\keras.json (Windows) configuration file.
Use:
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
(Note the "backend" is set to "theano".)
This will of course change all your Keras projects to use Theano.
If you only want to change a single project, you can set the KERAS_BACKEND environment variable, either from the command line or in code before you import Keras:
import os
os.environ["KERAS_BACKEND"] = "theano"
import keras
(I tested this on Windows 10 with Python 3.5, with both Theano and TensorFlow installed: remove the os.environ line and Keras uses TensorFlow; include it and Keras uses Theano.)
It's nice to include this in your Python source because the dependency is then recorded explicitly in source control. Since the underlying ML library Keras uses is not 100% abstracted away (lots of little differences seep through), having the code state which backend it needs is probably a good idea.
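As a quick sanity check (a minimal sketch; keras.backend.backend() is part of the public backend API and returns the name of the active backend):
import os
os.environ["KERAS_BACKEND"] = "theano"

import keras
from keras import backend as K

# Should print "theano" if the environment variable was picked up
print(K.backend())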
I hope that helps,
Robert

Related

Run into the following issue: build_tensor_info is not supported in Eager Mode

I am playing around with TensorFlow, trying to export a Keras model as a TensorFlow model, and I ran into the above-mentioned error. I am following the "Building Deep Learning Applications with Keras 2.0" course from Lynda (https://www.linkedin.com/learning/building-deep-learning-applications-with-keras-2-0/exporting-google-cloud-compatible-models?u=42751868).
While trying to build the TensorFlow model, I came across this error, thrown from line 66 in build_tensor_info during the add_meta_graph_and_variables step.
line 66, in build_tensor_info
raise RuntimeError("build_tensor_info is not supported in Eager mode.")
RuntimeError: build_tensor_info is not supported in Eager mode.
...
model_builder.add_meta_graph_and_variables(
    K.get_session(),
    tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.compat.v1.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
    }
)
...
Any thoughts folks?
This is because you are using TensorFlow v2. You have to use the TensorFlow v1 compatibility API (tf.compat.v1) and disable eager mode.
Be careful with the TensorFlow imports that you use; for example, if you use tensorflow_core, make sure that all the dependencies come from "tensorflow".
You have to add this before your code:
import tensorflow as tf
if tf.executing_eagerly():
    tf.compat.v1.disable_eager_execution()
I ran into that exact same problem when running that code from LinkedIn Learning for exporting a Keras model written with the TensorFlow 1.x API. After replacing everything with the equivalent tf.compat.v1 functions, such as changing
model_builder = tf.saved_model.builder.SavedModelBuilder("exported_model")
to
model_builder = tf.compat.v1.saved_model.Builder("exported_model")
and disabling eager execution as suggested above by Cristian Zumelzu, the code ran fine, with the expected warnings about the deprecated functions.
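For completeness, here is a minimal, self-contained sketch of the export under TensorFlow 2.x using only tf.compat.v1 calls (the tiny Sequential model, the "exported_model" directory name, and the signature names are illustrative stand-ins for the course's code):
import tensorflow as tf

# Disable eager execution so the TF1-style SavedModelBuilder API can be used
if tf.executing_eagerly():
    tf.compat.v1.disable_eager_execution()

from tensorflow import keras

# Tiny stand-in model; replace with the model from the course
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])

# Signature describing the model's inputs and outputs
signature_def = tf.compat.v1.saved_model.predict_signature_def(
    inputs={"input": model.input},
    outputs={"output": model.output},
)

model_builder = tf.compat.v1.saved_model.Builder("exported_model")
model_builder.add_meta_graph_and_variables(
    tf.compat.v1.keras.backend.get_session(),
    tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.compat.v1.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
    },
)
model_builder.save()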

SKLearn 0.20.2 - Import error with RandomizedPCA?

I'm trying to do the Udacity mini project and I've got the latest version of the SKLearn library installed (0.20.2).
When I run:
from sklearn.decomposition import RandomizedPCA
I get the error:
ImportError: cannot import name 'RandomizedPCA' from 'sklearn.decomposition' (/Users/kintesh/Documents/udacity_ml/python3/venv/lib/python3.7/site-packages/sklearn/decomposition/__init__.py)
I actually even upgraded the version using:
pip3 install -U scikit-learn
which upgraded from 0.20.0 to 0.20.2 (uninstalling and reinstalling in the process)... so I'm not sure why it can't find RandomizedPCA in sklearn.decomposition.
Are there any solutions here that might not result in completely uninstalling python3 from my machine?! Would ideally like to avoid that.
Any help would be thoroughly appreciated!
Edit:
I'm doing some digging and trying to fix this, and it appears as though the __init__.py file in the decomposition library on the SKLearn GitHub doesn't reference RandomizedPCA... has it been removed or something?
Link to the GitHub page
As it turns out, RandomizedPCA() was deprecated in an older version of SKLearn and its functionality is now available via a parameter of PCA().
You can fix this by changing the import statement to:
from sklearn.decomposition import PCA as RandomizedPCA
... and then your classifier looks like this:
pca = RandomizedPCA(n_components=n_components, svd_solver='randomized', whiten=True).fit(X_train)
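For context, a minimal self-contained version of that fix (the random X_train and n_components here are illustrative stand-ins for the eigenfaces data):
import numpy as np
from sklearn.decomposition import PCA as RandomizedPCA

# Dummy data standing in for the faces training set
X_train = np.random.rand(100, 50)
n_components = 10

pca = RandomizedPCA(n_components=n_components, svd_solver='randomized', whiten=True).fit(X_train)
X_train_pca = pca.transform(X_train)
print(X_train_pca.shape)  # (100, 10)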
However, if you're here because you're doing the Udacity Machine Learning course on Eigenfaces.py, you'll notice that the PIL library is also a problem, since it is no longer maintained.
Unfortunately I don't have a solution for that one, but here's the GitHub issue page, and here's a kind-hearted soul who used a Jupyter Notebook to solve their mini-project back when these repositories worked.
I hope this helps, and gives enough information for the next person to get into Machine Learning. If I get some time I might take a crack at recoding eigenfaces.py for SKLearn 0.20.2, but for now I'm just going to crack on with the rest of this course.
In addition to what @Aaraeus said, the PIL library has been forked to Pillow.
You can fix the PIL import error using
pip3 install pillow
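Pillow is a drop-in fork that installs under the same PIL namespace, so (a minimal check, assuming Pillow is installed) existing imports keep working:
from PIL import Image  # provided by Pillow, the maintained fork of PIL

img = Image.new("RGB", (64, 64))
print(img.size)  # (64, 64)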

module 'statsmodels.tsa.api' has no attribute 'arima_model'

I'm trying to use statsmodels.api to work with time series data and to fit a simple ARIMA model using
sm.tsa.arima_model.ARIMA(dta,(4,1,1)).fit()
but I got the following error
module 'statsmodels.tsa.api' has no attribute 'arima_model'
I'm using statsmodels version 0.9.0 with Spyder version 3.2.8. I'd be pleased to get your help. Thanks!
The correct path is:
import statsmodels.api as sm
sm.tsa.ARIMA()
You can discover it using a shell with autocompletion, such as IPython.
It is also visible in the examples provided by statsmodels, such as this one.
More information about the package structure may be found here.
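As a minimal sketch under statsmodels 0.9, where sm.tsa.ARIMA is the public alias for statsmodels.tsa.arima_model.ARIMA (the random dta series below is a made-up stand-in for your data):
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative monthly series; replace with your own data
dta = pd.Series(
    np.random.randn(120).cumsum(),
    index=pd.date_range("2000-01-01", periods=120, freq="M"),
)

# ARIMA(p=4, d=1, q=1), reached through the sm.tsa namespace
results = sm.tsa.ARIMA(dta, order=(4, 1, 1)).fit(disp=0)
print(results.summary())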

Why the import "from tensorflow.train import Feature" doesn't work

That's probably a totally noob question that has something to do with Python module importing, but I can't understand why the following is valid:
> import tensorflow as tf
> f = tf.train.Feature()
> from tensorflow import train
> f = train.Feature()
But the following statement causes an error:
> from tensorflow.train import Feature
ModuleNotFoundError: No module named 'tensorflow.train'
Can somebody please explain to me why it doesn't work this way? My goal is to use shorter notation in the code, like this:
> example = Example(
      features=Features(feature={
          'x1': Feature(float_list=FloatList(value=feature_x1.ravel())),
          'x2': Feature(float_list=FloatList(value=feature_x2.ravel())),
          'y': Feature(int64_list=Int64List(value=label))
      })
  )
tensorflow version is 1.7.0
Solution
Replace
from tensorflow.train import Feature
with
from tensorflow.core.example.feature_pb2 import Feature
Explanation
Remarks about TensorFlow's Aliases
In general, you have to remember that, for example:
from tensorflow import train
is actually an alias for
from tensorflow.python.training import training
You can easily check the real module name by printing the module. For the current example you will get:
from tensorflow import train
print (train)
<module 'tensorflow.python.training.training' from ....
Your Problem
In TensorFlow 1.7, you can't use from tensorflow.train import Feature, because the from clause needs an actual module (not an alias). Since train is just an alias, you get an ImportError (the ModuleNotFoundError you saw is a subclass of it).
By doing
from tensorflow import train
print (train.Feature)
<class 'tensorflow.core.example.feature_pb2.Feature'>
you'll get the complete path of train. Now you can use the import path shown in the solution above.
Note
In TensorFlow 1.9.0, from tensorflow.train import Feature will work, because tensorflow.train is an actual package, which you can therefore import. (This is what I see in my installed TensorFlow 1.9.0, as well as in the documentation, but not in the GitHub repository. It must be generated somewhere.)
Info about the path of the modules
You can find the complete module path in the docs. Every module page has a "Defined in" section (see, for example, the Module: tf.train page).
I would advise against importing Feature (or any other object) from the non-public API, which is inconvenient (you have to figure out where Feature is actually defined), verbose, and subject to change in future versions.
As an alternative, I would suggest simply defining
import tensorflow as tf
Feature = tf.train.Feature
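With aliases like that in place, the short notation from the question works without touching any non-public modules (a minimal sketch; feature_x1, feature_x2, and label are placeholder arrays):
import numpy as np
import tensorflow as tf

# Public aliases for the protobuf classes used below
Example = tf.train.Example
Features = tf.train.Features
Feature = tf.train.Feature
FloatList = tf.train.FloatList
Int64List = tf.train.Int64List

# Placeholder data standing in for real features and labels
feature_x1 = np.random.rand(3).astype(np.float32)
feature_x2 = np.random.rand(3).astype(np.float32)
label = [1]

example = Example(
    features=Features(feature={
        'x1': Feature(float_list=FloatList(value=feature_x1.ravel())),
        'x2': Feature(float_list=FloatList(value=feature_x2.ravel())),
        'y': Feature(int64_list=Int64List(value=label))
    })
)
print(example)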

Keras with Theano on GPU

While trying to run my Keras code on the GPU (CUDA is installed), I am not able to execute the following statement, as suggested in many online references.
set THEANO_FLAGS="mode=FAST_RUN,device=gpu,floatX=float32" & python theanogpu_example.py
I am getting the following error.
ValueError: Invalid value ("FAST_RUN,device=gpu,floatX=float32") for configuration
variable "mode". Valid options are ('Mode', 'DebugMode', 'FAST_RUN', 'NanGuardMode',
'FAST_COMPILE', 'DEBUG_MODE')
I have also tried the other approach that is suggested, setting the flags from inside the code.
import theano
theano.config.device = 'gpu'
theano.config.floatX = 'float32'
I get the following error.
Exception: Can't change the value of this config parameter after initialization!
Apart from knowing how to make it run, I would also take this opportunity to ask a simpler question: how do I find out on Windows what my device is, i.e. whether it is 'gpu', 'gpu0', or 'gpu1'? I have tried all three in my case, but none of them worked.
Any suggestions will be appreciated.
The best way is to set THEANO_FLAGS before running the code, because the config variables cannot be changed after Theano has been imported. Try this:
import os
os.environ['THEANO_FLAGS'] = "device=cuda,force_device=True,floatX=float32"
import theano
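To confirm that Theano actually ended up on the GPU, here is a minimal sketch adapted from the standard "testing the GPU" recipe in the Theano docs (run it with the flags set as above):
import os
os.environ['THEANO_FLAGS'] = "device=cuda,force_device=True,floatX=float32"

import numpy as np
import theano
import theano.tensor as T

x = T.vector('x')
f = theano.function([x], T.exp(x))

# If the compiled graph contains Gpu* ops, the GPU backend is active
op_names = [type(node.op).__name__ for node in f.maker.fgraph.toposort()]
print("GPU in use" if any(name.startswith("Gpu") for name in op_names) else "Running on CPU")

print(f(np.random.rand(1000).astype('float32'))[:5])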
