I'm implementing linear regression in TensorFlow for the first time. Initially I tried a linear model, but after a few iterations of training my parameters shot up to infinity. So I changed my model to a quadratic one and tried training again, but after a few epochs the same thing happened.
Hence, tf.summary.histogram('Weights', W0) is receiving inf as a parameter, and the same is true for W1 and b.
I wanted to inspect my parameters in TensorBoard (I've never worked with it before), but I'm getting this error.
I asked this question previously with one slight difference: I was using a linear model, which gave the same problem. (I didn't know then that the cause was the parameters going to infinity, because I was running the code in my IPython notebook; when I ran the program in the terminal, the error below was generated, which helped me figure it out.) In the comments, I learned that the code ran on someone else's PC, and his TensorBoard showed the parameters actually reaching infinity.
Here is the link to the question asked earlier.
I hope I've declared Y_ correctly in my program; if not, please correct me!
Here is the code in TensorFlow:
import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
import matplotlib.pyplot as plt
boston=load_boston()
type(boston)
boston.feature_names
bd=pd.DataFrame(data=boston.data,columns=boston.feature_names)
bd['Price']=pd.DataFrame(data=boston.target)
np.random.shuffle(bd.values)
W0=tf.Variable(0.3)
W1=tf.Variable(0.2)
b=tf.Variable(0.1)
#print(bd.shape[1])
tf.summary.histogram('Weights', W0)
tf.summary.histogram('Weights', W1)
tf.summary.histogram('Biases', b)
dataset_input=bd.iloc[:, 0 : bd.shape[1]-1]
#dataset_input.head(2)
dataset_output=bd.iloc[:, bd.shape[1]-1]
dataset_output=dataset_output.values
dataset_output=dataset_output.reshape((bd.shape[0],1))
#converted (506,) to (506,1) because in pandas
#the shape was not changing and it was needed later in feed_dict
dataset_input=dataset_input.values #only dataset_input is in DataFrame form and converting it into np.ndarray
dataset_input = np.array(dataset_input, dtype=np.float32)
#making the datatype into float32 for making it compatible with placeholders
dataset_output = np.array(dataset_output, dtype=np.float32)
X=tf.placeholder(tf.float32, shape=(None,bd.shape[1]-1))
Y=tf.placeholder(tf.float32, shape=(None,1))
Y_=W0*X*X + W1*X + b #Hope this equation is rightly written
#Y_pred = tf.add(tf.multiply(tf.pow(X, pow_i), W), Y_pred)
print(X.shape)
print(Y.shape)
loss=tf.reduce_mean(tf.square(Y_-Y))
tf.summary.scalar('loss',loss)
optimizer=tf.train.GradientDescentOptimizer(0.001)
train=optimizer.minimize(loss)
init=tf.global_variables_initializer()
sess=tf.Session()
sess.run(init)
wb_=[]
with tf.Session() as sess:
    summary_merge = tf.summary.merge_all()
    writer = tf.summary.FileWriter("Users/ajay/Documents", sess.graph)
    epochs = 10
    sess.run(init)
    for i in range(epochs):
        s_mer = sess.run(summary_merge, feed_dict={X: dataset_input, Y: dataset_output}) #ERROR________ERROR
        sess.run(train, feed_dict={X: dataset_input, Y: dataset_output})
        #CHANGED
        sess.run(loss, feed_dict={X: dataset_input, Y: dataset_output})
        writer.add_summary(s_mer, i)
        #tf.summary.histogram(name="loss", values=loss)
        if i % 5 == 0:
            print(i, sess.run([W0, W1, b]))
        wb_.append(sess.run([W0, W1, b]))
    print(writer.get_logdir())
    print(writer.close())
I'm getting this error:
/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
(?, 13)
(?, 1)
2018-07-22 02:04:24.826027: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
0 [-3833776.2, -7325.9595, -15.471448]
5 [inf, inf, inf]
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Infinity in summary histogram for: Biases
[[Node: Biases = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Biases/tag, Variable_2/read)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "LR.py", line 75, in <module>
s_mer=sess.run(summary_merge,feed_dict={X: dataset_input, Y: dataset_output}) #ERROR________ERROR
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Infinity in summary histogram for: Biases
[[Node: Biases = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Biases/tag, Variable_2/read)]]
Caused by op 'Biases', defined at:
File "LR.py", line 24, in <module>
tf.summary.histogram('Biases', b)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/summary/summary.py", line 187, in histogram
tag=tag, values=values, name=scope)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_logging_ops.py", line 283, in histogram_summary
"HistogramSummary", tag=tag, values=values, name=name)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
op_def=op_def)
File "/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): Infinity in summary histogram for: Biases
[[Node: Biases = HistogramSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](Biases/tag, Variable_2/read)]]
I believe this is caused by too high a learning rate for gradient descent.
Please refer to Gradient descent explodes if learning rate is too large.
Here the loss actually gets bigger after each epoch.
I changed
optimizer=tf.train.GradientDescentOptimizer(0.001)
to
optimizer=tf.train.GradientDescentOptimizer(0.0000000001)
Then I printed the loss after each epoch, by changing
sess.run(loss, feed_dict={X:dataset_input,Y:dataset_output})
to
print("loss",sess.run(loss, feed_dict={X:dataset_input,Y:dataset_output}))
in your code. The error was gone. The output was:
(?, 13)
(?, 1)
loss = 44061484.0
0 [-0.08337769, 0.19926739, 0.099998444]
loss = 3373030.2
loss = 258605.05
loss = 20211.799
loss = 1964.4918
loss = 567.7717
5 [-0.0001616638, 0.19942635, 0.099998794]
loss = 460.862
loss = 452.67877
loss = 452.05255
loss = 452.00452
Users/ajay/Documents
None
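Such a tiny learning rate converges very slowly, though. An alternative sketch (my own suggestion, not part of the original fix) is to standardize the 13 input features first, so the original rate of 0.001 stays stable; sklearn's StandardScaler is assumed here:

from sklearn.preprocessing import StandardScaler
import numpy as np

# Scale each feature to zero mean and unit variance so no single
# column (e.g. TAX, which is in the hundreds) dominates the gradient.
scaler = StandardScaler()
dataset_input = scaler.fit_transform(dataset_input).astype(np.float32)

# With scaled inputs the original learning rate no longer diverges.
optimizer = tf.train.GradientDescentOptimizer(0.001)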
Related
I'm trying to train a deep dense neural network using core TensorFlow. Basically, I'm adapting the code used in this post https://www.kaggle.com/mohitguptaomg/4-layer-dense-neural-net-using-tensorflow to my dataset and my own coding style.
Here is the dataset that I'm using:
https://drive.google.com/open?id=1bDZVuiKyEDxUaY_mZgAKMIionicLs0yK
The main difference from that code is that I start from a DataFrame instead of a NumPy array; even so, I believe I have adapted it properly. The error I get is:
Cannot feed value of shape (242,) for Tensor 'Placeholder_1:0', which has shape '(242, 1)'
Here is my entire code. Data load and library imports:
import pandas as pd
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
import numpy as np
import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
df= pd.read_csv('/home/nacho/Descargas/datasets/heart-disease-uci/heart.csv')
Variable assignments:
X = df.drop('target', axis = 1)
Y = df["target"]
X,Y = shuffle (X, Y, random_state = 0)
train_x, test_x, train_y, test_y = train_test_split(X, Y, test_size = 0.20, random_state = 0)
Theoretical architecture of the network:
# The learning rate we want for our gradient descent, and the number of epochs
# We want to use to train our data
learning_rate = 0.2
training_epochs = 500
# The number of layers we want, with the number of neurons we want them
n_hidden_1 = 60
n_hidden_2 = 60
n_hidden_3 = 60
n_hidden_4 = 60
# Define cost function and training algorithm
costf = 'cross entropy'
traininga = "gradient descent optimizer"
Creating the objects that will be placed in the neural network
# We define the inputs as placeholder, we shall fill them when we execute our code
n_dim = X.shape[1] # This will help define the vectors and matrices for calculation correctly
n_class = 1 # The number of possible category values for Y
# We need to solve the nlen thing
x = tf.placeholder( tf.float32, [None, n_dim]) # Specifying where we are going to put the vectors
y_ = tf.placeholder(tf.float32, [None, n_class])
# We define out weights and bias as variables also
W = tf.Variable(tf.zeros([n_dim, n_class]))
b = tf.Variable(tf.zeros([n_class]))
weights = {
'h1': tf.Variable(tf.truncated_normal([n_dim, n_hidden_1])),
'h2': tf.Variable(tf.truncated_normal([n_hidden_1, n_hidden_2])),
'h3': tf.Variable(tf.truncated_normal([n_hidden_2, n_hidden_3])),
'h4': tf.Variable(tf.truncated_normal([n_hidden_3, n_hidden_4])),
'out': tf.Variable(tf.truncated_normal([n_hidden_4, n_class]))
}
biases = {
'b1': tf.Variable(tf.truncated_normal([n_hidden_1])),
'b2': tf.Variable(tf.truncated_normal([n_hidden_2])),
'b3': tf.Variable(tf.truncated_normal([n_hidden_3])),
'b4': tf.Variable(tf.truncated_normal([n_hidden_4])),
'out': tf.Variable(tf.truncated_normal([n_class]))
}
Coding the model:
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with ReLU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Hidden layer with ReLU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Hidden layer with ReLU activation
    layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
    layer_3 = tf.nn.relu(layer_3)
    # Hidden layer with sigmoid activation
    layer_4 = tf.add(tf.matmul(layer_3, weights['h4']), biases['b4'])
    layer_4 = tf.nn.sigmoid(layer_4)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_4, weights['out']) + biases['out']
    return out_layer

# Calling the model; this will run all our layer computations,
# resulting in a tensor y with our predicted results.
y = multilayer_perceptron(x, weights, biases)
cost_function = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_)) # Calculates the cross_entropy
training_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)
Coding extra objects that will allow us to get extra data afterwards:
# We are going to create lists that will let us plot the evolution of accuracy and error across epochs after training
mse_history = []
accuracy_history = []
Coding the execution (note: this is where the error happens):
init = tf.global_variables_initializer()
# The session object we are going to need for execution
sess = tf.Session()
sess.run(init) # We initialize the global variables
for epoch in range(training_epochs):
    sess.run(training_step, feed_dict={x: train_x, y_: train_y}) # We start with the training
    cost = sess.run(cost_function, feed_dict={x: train_x, y_: train_y}) # We calculate the loss for that epoch
    cost_history = np.append(cost_history, cost) # With that loss calculated we append it to a list
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) # We calculate what would be the correct prediction
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # We define a function to calculate accuracy
    pred_y = sess.run(y, feed_dict={x: test_x}) # Predict after training in the epoch
    mse = tf.reduce_mean(tf.square(pred_y - test_y)) # Define a function to calculate the error of that epoch
    mse_ = sess.run(mse) # We run said function
    mse_history.append(mse_) # We append the result to a list
    accuracy = sess.run(accuracy, feed_dict={x: train_x, y_: train_y}) # Execute the accuracy function
    accuracy_history.append(accuracy) # Append the result of the accuracy to a list
The error we get:
ValueError Traceback (most recent call last)
<ipython-input-33-91216a39c8b4> in <module>
1 for epoch in range(training_epochs):
----> 2 sess.run(training_step, feed_dict = {x: train_x, y_: train_y}) # We start with the training
3 cost = sess.run(cost_function, feed_dict={x: train_x, y_: train_y}) #We calculate the loss for that epoch
4 cost_history = np.append(cost_history, cost) # With that loss calculted we append it to a list
5 correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) # We calculate what would be the correct prediction
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
928 try:
929 result = self._run(None, fetches, feed_dict, options_ptr,
--> 930 run_metadata_ptr)
931 if run_metadata:
932 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1127 'which has shape %r' %
1128 (np_val.shape, subfeed_t.name,
-> 1129 str(subfeed_t.get_shape())))
1130 if not self.graph.is_feedable(subfeed_t):
1131 raise ValueError('Tensor %s may not be fed.' % subfeed_t)
ValueError: Cannot feed value of shape (242,) for Tensor 'Placeholder_1:0', which has shape '(242, 1)'
I have tried replacing, in the execution section (the last one before the error), all references to the datasets with .values, so they would be treated as NumPy arrays. For example, instead of calling X, I call X.values; the error persisted.
I'm new to TensorFlow. I know I could probably code this more easily with the Estimator API, but I really want to be able to code low-level networks to make sure I understand them properly. I also tried specifying the exact number of rows of data in the x and y_ placeholders; the error persisted too, with a slight variation in the exact message.
Versions used :
jupyterlab 0.35.4 py36hf63ae98_0
jupyterlab_server 0.2.0 py36_0
keras-applications 1.0.7 pypi_0 pypi
keras-preprocessing 1.0.9
scikit-image 0.14.2 py36he6710b0_0
scikit-learn 0.20.3 py36hd81dba3_0
scipy 1.2.1 py36h7c811a0_0
tensorflow 2.0.0a0
anaconda 2019.03 py36_0
anaconda-client 1.7.2 py36_0
anaconda-project 0.8.2
I'll keep working on it from my end; I just want to understand TensorFlow.
Edit (2/05/2019):
I changed the following lines, and I'm making interesting progress:
x = tf.placeholder(tf.float32) # Specifying where we are going to put the vectors
y_ = tf.placeholder(tf.float32)
Just changing these two lines at the beginning changed the error to the following:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
<ipython-input-142-91216a39c8b4> in <module>
6 accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # We define a function to calculate accuracy
7 pred_y = sess.run(y, feed_dict = {x: test_x}) # Predict after training in the epoch
----> 8 mse = tf.reduce_mean(tf.square(pred_y - test_y)) # define a function to Calculate the error of that epoch
9 mse_ = sess.run(mse) # we run said function
10 mse_history.append(mse_) # We append the result to a list
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/pandas/core/ops.py in wrapper(left, right)
1583 result = safe_na_op(lvalues, rvalues)
1584 return construct_result(left, result,
-> 1585 index=left.index, name=res_name, dtype=None)
1586
1587 wrapper.__name__ = op_name
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/pandas/core/ops.py in _construct_result(left, result, index, name, dtype)
1472 not be enough; we still need to override the name attribute.
1473 """
-> 1474 out = left._constructor(result, index=index, dtype=dtype)
1475
1476 out.name = name
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/pandas/core/series.py in __init__(self, data, index, dtype, name, copy, fastpath)
260 else:
261 data = sanitize_array(data, index, dtype, copy,
--> 262 raise_cast_failure=True)
263
264 data = SingleBlockManager(data, index, fastpath=True)
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/pandas/core/internals/construction.py in sanitize_array(data, index, dtype, copy, raise_cast_failure)
656 elif subarr.ndim > 1:
657 if isinstance(data, np.ndarray):
--> 658 raise Exception('Data must be 1-dimensional')
659 else:
660 subarr = com.asarray_tuplesafe(data, dtype=dtype)
Exception: Data must be 1-dimensional
After that, I changed the execution code by adding the .values calls (the dimensions seemed to be fine; it had to be the fact that I was passing a DataFrame as the argument), so the code looks like this:
for epoch in range(training_epochs):
    sess.run(training_step, feed_dict={x: train_x.values, y_: train_y.values}) # We start with the training
    cost = sess.run(cost_function, feed_dict={x: train_x.values, y_: train_y.values}) # We calculate the loss for that epoch
    cost_history = np.append(cost_history, cost) # With that loss calculated we append it to a list
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) # We calculate what would be the correct prediction
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # We define a function to calculate accuracy
    pred_y = sess.run(y, feed_dict={x: test_x.values}) # Predict after training in the epoch
    mse = tf.reduce_mean(tf.square(pred_y - test_y.values)) # Define a function to calculate the error of that epoch
    mse_ = sess.run(mse) # We run said function
    mse_history.append(mse_) # We append the result to a list
    accuracy = sess.run(accuracy, feed_dict={x: train_x.values, y_: train_y.values}) # Execute the accuracy function
    accuracy_history.append(accuracy)
This changed the error again, to the following:
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1334 try:
-> 1335 return fn(*args)
1336 except errors.OpError as e:
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
1319 return self._call_tf_sessionrun(
-> 1320 options, feed_dict, fetch_list, target_list, run_metadata)
1321
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
1407 self._session, options, feed_dict, fetch_list, target_list,
-> 1408 run_metadata)
1409
InvalidArgumentError: Expected dimension in the range [-1, 1), but got 1
[[{{node ArgMax_1561}}]]
During handling of the above exception, another exception occurred:
InvalidArgumentError Traceback (most recent call last)
<ipython-input-176-fc9234678b87> in <module>
9 mse_ = sess.run(mse) # we run said function
10 mse_history.append(mse_) # We append the result to a list
---> 11 accuracy = (sess.run(accuracy, feed_dict={x: train_x.values, y_: train_y.values})) # Execute the accuracy function
12 accuracy_history.append(accuracy)
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
928 try:
929 result = self._run(None, fetches, feed_dict, options_ptr,
--> 930 run_metadata_ptr)
931 if run_metadata:
932 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1151 if final_fetches or final_targets or (handle and feed_dict_tensor):
1152 results = self._do_run(handle, final_targets, final_fetches,
-> 1153 feed_dict_tensor, options, run_metadata)
1154 else:
1155 results = []
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1327 if handle is None:
1328 return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1329 run_metadata)
1330 else:
1331 return self._do_call(_prun_fn, handle, feeds, fetches)
~/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1347 pass
1348 message = error_interpolation.interpolate(message, self._graph)
-> 1349 raise type(e)(node_def, op, message)
1350
1351 def _extend_graph(self):
InvalidArgumentError: Expected dimension in the range [-1, 1), but got 1
[[node ArgMax_1561 (defined at <ipython-input-176-fc9234678b87>:5) ]]
Errors may have originated from an input operation.
Input Source operations connected to node ArgMax_1561:
Placeholder_19 (defined at <ipython-input-166-844432d3b8cf>:11)
Original stack trace for 'ArgMax_1561':
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 505, in start
self.io_loop.start()
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tornado/platform/asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/asyncio/base_events.py", line 438, in run_forever
self._run_once()
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/asyncio/base_events.py", line 1451, in _run_once
handle._run()
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/asyncio/events.py", line 145, in _run
self._callback(*self._args)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tornado/ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tornado/ioloop.py", line 743, in _run_callback
ret = callback()
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tornado/gen.py", line 781, in inner
self.run()
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tornado/gen.py", line 742, in run
yielded = self.gen.send(value)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 357, in process_one
yield gen.maybe_future(dispatch(*args))
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 267, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 534, in execute_request
user_expressions, allow_stdin,
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tornado/gen.py", line 209, in wrapper
yielded = next(result)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2848, in run_cell
raw_cell, store_history, silent, shell_futures)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2874, in _run_cell
return runner(coro)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/IPython/core/async_helpers.py", line 67, in _pseudo_sync_runner
coro.send(None)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3049, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3214, in run_ast_nodes
if (yield from self.run_code(code, result)):
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3296, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-176-fc9234678b87>", line 5, in <module>
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) # We calculate what would be the correct prediction
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 137, in argmax
return argmax_v2(input, axis, output_type, name)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 166, in argmax_v2
return gen_math_ops.arg_max(input, axis, name=name, output_type=output_type)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 938, in arg_max
name=name)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 800, in _apply_op_helper
op_def=op_def)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3479, in create_op
op_def=op_def)
File "/home/nacho/anaconda3/envs/deepl1/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1961, in __init__
self._traceback = tf_stack.extract_stack()
Afterwards, I tried removing these last two lines, as I read somewhere that they were part of a similar problem for someone else:
accuracy = (sess.run(accuracy, feed_dict={x: train_x.values, y_: train_y.values})) # Execute the accuracy function
accuracy_history.append(accuracy)
And the code seems to work. So the problem must be somewhere around there.
One of the following approaches could be tried:
For a placeholder, the shape argument is optional. From the documentation: "The shape of the tensor to be fed (optional). If the shape is not specified, you can feed a tensor of any shape."
x_ = tf.placeholder(tf.float32)
Or, expand the dimension of train_x or train_y using np.expand_dims, as sketched below:
train_x = np.expand_dims(train_x, -1) # add a new axis
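Applied to this question, a minimal sketch (assuming the train_y/test_y Series produced by train_test_split above, whose shape is (242,)) adds a trailing axis before feeding:

import numpy as np

# train_y is a 1-D pandas Series of shape (242,); the placeholder y_
# expects shape (242, 1), so add a trailing axis to the underlying array.
train_y_col = np.expand_dims(train_y.values, -1) # shape (242, 1)
test_y_col = np.expand_dims(test_y.values, -1)

sess.run(training_step, feed_dict={x: train_x.values, y_: train_y_col})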
I am writing code for classification between two types of images based on a CNN.
I want to measure the accuracy, sensitivity, and specificity of my model, but unfortunately I get the following error.
Could you please let me know what my problem is?
m = tf.keras.metrics.SensitivityAtSpecificity(0.5)
model.compile(optimizer='adam', loss=keras.losses.binary_crossentropy, metrics=['accuracy',m])
error:
Traceback (most recent call last):
File "C:/Users/Hamed/PycharmProjects/Deep Learning/CNN.py", line 77, in <module>
validation_steps = 1600//batch_size)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\engine\training_generator.py", line 217, in fit_generator
class_weight=class_weight)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
fetched = self._callable_fn(*array_vals)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
run_metadata_ptr)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Resource localhost/false_negatives/class tensorflow::Var does not exist.
[[{{node metrics/sensitivity_at_specificity/AssignAddVariableOp_1}}]]
[[{{node metrics/sensitivity_at_specificity/Mean}}]]
The metric tf.keras.metrics.SensitivityAtSpecificity calculates sensitivity at a given specificity (click here).
Unfortunately, sensitivity and specificity metrics are not yet included in Keras, so you have to write your own custom metric, as specified here.
The following is one simple way to calculate specificity, found in this answer.
from keras import backend as K

def specificity(y_true, y_pred):
    """
    param:
        y_pred - Predicted labels
        y_true - True labels
    Returns:
        Specificity score
    """
    neg_y_true = 1 - y_true
    neg_y_pred = 1 - y_pred
    fp = K.sum(neg_y_true * y_pred)
    tn = K.sum(neg_y_true * neg_y_pred)
    specificity = tn / (tn + fp + K.epsilon())
    return specificity
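Once defined, the function can be passed like any built-in metric when compiling; a minimal usage sketch for the binary model from the question:

model.compile(optimizer='adam',
              loss=keras.losses.binary_crossentropy,
              metrics=['accuracy', specificity])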
You can get Keras implementations for specificity and sensitivity on this link.
You can try this, if it helps:
import keras

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=[keras.metrics.Precision(),
                       keras.metrics.Recall(),
                       keras.metrics.SpecificityAtSensitivity(0.5),
                       keras.metrics.SensitivityAtSpecificity(0.5),
                       'accuracy'])
I am extremely new to TensorFlow and am trying to learn how to save and load a previously trained model. I created a simple model using Estimator and trained it.
classifier = tf.estimator.Estimator(model_fn=bag_of_words_model)
# Train
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"words": x_train}, # x_train is a 2D numpy array of shape (26, 5)
    y=y_train,            # y_train is a 1D pandas Series of length 26
    batch_size=1000,
    num_epochs=None,
    shuffle=True)
classifier.train(input_fn=train_input_fn, steps=300)
I then try to save the model:
def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": tf.placeholder(tf.int64, shape=(None, 5))}
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
full_model_dir = classifier.export_savedmodel(export_dir_base="E:/models/", serving_input_receiver_fn=serving_input_receiver_fn)
I have actually copied the serving_input_receiver_fn from this similar question. I don't understand exactly what is going on in that function, but it stores my model in E:/models/<some time stamp>.
I now try to load this saved model:
from tensorflow.contrib import predictor
classifier = predictor.from_saved_model("E:\\models\\<some time stamp>")
The model loaded perfectly. After this, I am stuck on how to use this classifier object to get predictions on new data. I followed a guide here to achieve it but couldn't manage it :(. Here is what I did:
predictions = classifier({'predictor_inputs': x_test})["output"] # x_test is 2D numpy array same like x_train in the training part
I get the following error:
2019-01-10 12:43:38.603506: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:tensorflow:Restoring parameters from E:\models\1547101005\variables\variables
Traceback (most recent call last):
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
[[{{node Placeholder}} = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:/ml_classif/tensorflow_bow_with_prob/load_model.py", line 85, in <module>
predictions = classifier({'predictor_inputs': x_test})["output"]
File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\predictor.py", line 77, in __call__
return self._session.run(fetches=self.fetch_tensors, feed_dict=feed_dict)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
[[node Placeholder (defined at E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py:153) = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op 'Placeholder', defined at:
File "E:/ml_classif/tensorflow_bow_with_prob/load_model.py", line 82, in <module>
classifier = predictor.from_saved_model("E:\\models\\1547101005")
File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\predictor_factories.py", line 153, in from_saved_model
config=config)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py", line 153, in __init__
loader.load(self._session, tags.split(','), export_dir)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 197, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 350, in load
**saver_kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 278, in load_graph
meta_graph_def, import_scope=import_scope, **saver_kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\training\saver.py", line 1696, in _import_meta_graph_with_return_elements
**kwargs))
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def
_ProcessNewOps(graph)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\importer.py", line 234, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3299, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
[[node Placeholder (defined at E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py:153) = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
It says that I have to feed a value to the placeholder (I think the one defined in serving_input_receiver_fn). I have no idea how to do that without using a TensorFlow Session object.
Please feel free to ask for more information if required.
After gaining a somewhat vague understanding of serving_input_receiver_fn, I figured out that features must not be a placeholder, since that creates 2 placeholders (one for serialized_tf_example and the other for features). I modified the function as follows (the change is just to the features variable):
def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": tf.tile(serialized_tf_example, multiples=[1, 1])} # Changed this
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
When I try to predict the output from the loaded model, I get no error now. It works! The only thing is that the output is incorrect (for which I am posting a new question :) ).
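As an aside, tf.tile with multiples=[1, 1] is effectively an identity operation here; an arguably clearer equivalent sketch (my assumption, not verified against the original export) forwards the receiver tensor with tf.identity:

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    # tf.identity reuses the fed tensor instead of creating a second,
    # never-fed placeholder for the "words" feature.
    features = {"words": tf.identity(serialized_tf_example)}
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)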
Here is my attempt to obtain the batch size within a custom loss function using K.int_shape(), demonstrated by the code below.
from keras import layers, Input, Model
import keras.backend as K
import numpy as np
train_X=np.random.random([100, 5])
train_Y=train_X.sum(axis=1)
inputs=Input(shape=(5,), dtype='float32', name='posts')
outputs=layers.Dense(1, activation='relu')(inputs)
model = Model(inputs, outputs)#, net_qc])
model.summary()
def myloss(y_true, y_pred):
    n = K.int_shape(y_pred)[0]
    return K.sum(y_pred) / n
model.compile(optimizer='adam', loss=myloss)
model.fit(train_X, train_Y, epochs=10, batch_size=10)
The error message below suggests that K.int_shape returns None. I have tried several things without success and would really appreciate some help.
Traceback (most recent call last):
File "./test_intshape.py", line 21, in <module>
model.compile(optimizer='adam', loss=myloss)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/engine/training.py", line 830, in compile
sample_weight, mask)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/engine/training.py", line 429, in weighted
score_array = fn(y_true, y_pred)
File "./test_intshape.py", line 19, in myloss
return K.sum(y_pred)/n
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 820, in binary_op_wrapper
y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 639, in convert_to_tensor
as_ref=False)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 704, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 113, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 102, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 360, in make_tensor_proto
raise ValueError("None values not supported.")
ValueError: None values not supported.
That is the expected behaviour, because K.int_shape() doesn't return a symbolic tensor but the currently known static shape, and the batch size is only known at runtime: while the graph is being constructed it is None. What you are looking for is K.shape() instead, which returns a symbolic tensor whose batch dimension is filled in at runtime, i.e.:
n = K.shape(y_pred)[0]
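Applied to the loss above, a minimal sketch might be:

def myloss(y_true, y_pred):
    # K.shape yields a symbolic tensor; cast the batch size to float32
    # so it can be used in the division.
    n = K.cast(K.shape(y_pred)[0], 'float32')
    return K.sum(y_pred) / n

(Dividing the sum by the batch size is equivalent to K.mean(y_pred), which sidesteps the shape lookup entirely.)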
I am trying to write a Lambda layer which converts an input tensor into a NumPy array and performs a set of affine transforms on slices of said array. To get the underlying NumPy array of the tensor, I am calling K.eval(). Once I have done all of the processing on the NumPy array, I need to convert it back into a Keras tensor so it can be returned. Is there an operation in the Keras backend which I can use to do this? Or should I be updating the original input tensor using a different backend function?
def apply_affine(x, y):
    # Get dimensions of main tensor
    dimens = K.int_shape(x)
    # Get numpy array behind main tensor
    filter_arr = K.eval(x)
    if dimens[0] is not None:
        # Go through batch...
        for i in range(0, dimens[0]):
            # Get the corresponding affine transformation in the form of a numpy array
            affine = K.eval(y)[i, :, :]
            # Create an skimage affine transform from the numpy array
            transform = AffineTransform(matrix=affine)
            # Loop through each filter output from the previous layer of the CNN
            for j in range(0, dimens[1]):
                # Warp each filter output according to the corresponding affine transform
                warp(filter_arr[i, j, :, :], transform)
    # Need to convert filter array back to a keras tensor HERE before return
    return None

transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
EDIT: Added some context...
AffineTransform: https://github.com/scikit-image/scikit-image/blob/master/skimage/transform/_geometric.py#L715
warp: https://github.com/scikit-image/scikit-image/blob/master/skimage/transform/_warps.py#L601
I am trying to re-implement the CNN in "Unsupervised learning of object landmarks by factorized spatial embeddings". filter_arr is the output from a convolutional layer containing 10 filters. I want to apply the same affine transform to all of the filter outputs. There is an affine transform associated with each data input; the affine transforms for each input are passed to the neural net as a tensor and reach the Lambda layer as the second input, transformInput. I have left the structure of my current network below.
twin = Sequential()
twin.add(Conv2D(20, (3, 3), activation=None, input_shape=(28, 28, 1)))
# print(twin.output_shape)
# twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
twin.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
# print(twin.output_shape)
twin.add(Conv2D(48, (3, 3), activation=None))
# print(twin.output_shape)
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
twin.add(Conv2D(64, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)
twin.add(Conv2D(80, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)
twin.add(Conv2D(256, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)
twin.add(Conv2D(no_filters, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)
# Reshape the image outputs to a 1D list so softmax can be used on them
finalDims = twin.layers[-1].output_shape
twin.add(Reshape((finalDims[1], finalDims[2]*finalDims[3])))
twin.add(Activation('softmax'))
twin.add(Reshape(finalDims[1:]))
originalInput = Input(shape=(28, 28, 1))
warpedInput = Input(shape=(28, 28, 1))
transformInput = Input(shape=(3, 3))
twin1 = twin(originalInput)
def apply_affine(x, y):
    # Get dimensions of main tensor
    dimens = K.int_shape(x)
    # Get numpy array behind main tensor
    filter_arr = K.eval(x)
    if dimens[0] is not None:
        # Go through batch...
        for i in range(0, dimens[0]):
            # Get the corresponding affine transformation in the form of a numpy array
            affine = K.eval(y)[i, :, :]
            # Create an skimage affine transform from the numpy array
            transform = AffineTransform(matrix=affine)
            # Loop through each filter output from the previous layer of the CNN
            for j in range(0, dimens[1]):
                # Warp each filter output according to the corresponding affine transform
                warp(filter_arr[i, j, :, :], transform)
    # Need to convert filter array back to a keras tensor
    return None

transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
twin2 = twin(warpedInput)
siamese = Model([originalInput, warpedInput, transformInput], [transformed_twin, twin2])
EDIT: Traceback when using K.variable()
Traceback (most recent call last):
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1039, in _do_call
return fn(*args)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _run_fn
status, run_metadata)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
Traceback (most recent call last):
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1039, in _do_call
return fn(*args)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _run_fn
status, run_metadata)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 96, in <module>
transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\engine\topology.py", line 585, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\layers\core.py", line 659, in call
return self.function(inputs, **arguments)
File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 96, in <lambda>
transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 81, in apply_affine
filter_arr = K.eval(x)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\backend\tensorflow_backend.py", line 533, in eval
return to_dense(x).eval(session=get_session())
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 569, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 3741, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 778, in run
run_metadata_ptr)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op 'batch_normalization_1/keras_learning_phase', defined at:
File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 36, in <module>
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\models.py", line 466, in add
output_tensor = layer(self.outputs[0])
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\engine\topology.py", line 585, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\layers\normalization.py", line 190, in call
training=training)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\backend\tensorflow_backend.py", line 2559, in in_train_phase
training = learning_phase()
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\backend\tensorflow_backend.py", line 112, in learning_phase
name='keras_learning_phase')
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1507, in placeholder
name=name)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1997, in _placeholder
name=name)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
op_def=op_def)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Exception ignored in: <bound method BaseSession.__del__ of <tensorflow.python.client.session.Session object at 0x0000023AB66D9C88>>
Traceback (most recent call last):
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 587, in __del__
AttributeError: 'NoneType' object has no attribute 'TF_NewStatus'
Process finished with exit code 1
As stated in the comments above, it is best to implement Lambda layer functions using the Keras backend. Since there are currently no functions in the Keras backend that perform affine transformations, I decided to use a TensorFlow function in my Lambda layer instead of implementing an affine transform function from scratch using existing Keras backend functions:
def apply_affine(x):
    import tensorflow as tf
    return tf.contrib.image.transform(x[0], x[1])

def apply_affine_output_shape(input_shapes):
    return input_shapes[0]
The downside of this approach is that my Lambda layer will only work when using TensorFlow as the backend (as opposed to Theano or CNTK). If you wanted an implementation compatible with any backend, you could check which backend Keras is currently using and then call the corresponding transformation function from that backend.
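A usage sketch wiring this into the network from the question (two assumptions on my part: transformInput carries full 3x3 matrices, as in the question, and the image tensor is in the (batch, height, width, channels) order that tf.contrib.image.transform expects). Note that the op takes each transform as the flattened 8-element projective vector [a0, a1, a2, b0, b1, b2, c0, c1], i.e. the first eight entries of the 3x3 matrix in row-major order:

def apply_affine(x):
    import tensorflow as tf
    images, matrices = x
    # Flatten each 3x3 matrix and drop the last entry (assumed to be 1)
    # to obtain the 8-element per-image transform the op expects.
    transforms = tf.reshape(matrices, [-1, 9])[:, :8]
    return tf.contrib.image.transform(images, transforms)

transformed_twin = Lambda(apply_affine,
                          output_shape=apply_affine_output_shape)([twin1, transformInput])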