Keras - inverse of K.eval()

I am trying to write a lambda layer which converts an input tensor into a numpy array and performs a set of affine transforms on slices of said array. To get the underlying numpy array of the tensor I am calling K.eval(). Once I have done all of the processing on the numpy array, I need to convert it back into a keras tensor so it can be returned. Is there an operation in the keras backend which I can use to do this? Or should I be updating the original input tensor using a different backend function?
def apply_affine(x, y):
    # Get dimensions of main tensor
    dimens = K.int_shape(x)
    # Get numpy array behind main tensor
    filter_arr = K.eval(x)
    if dimens[0] is not None:
        # Go through batch...
        for i in range(0, dimens[0]):
            # Get the corresponding affine transformation in the form of a numpy array
            affine = K.eval(y)[i, :, :]
            # Create a skimage affine transform from the numpy array
            transform = AffineTransform(matrix=affine)
            # Loop through each filter output from the previous layer of the CNN
            for j in range(0, dimens[1]):
                # Warp each filter output according to the corresponding affine transform
                filter_arr[i, j, :, :] = warp(filter_arr[i, j, :, :], transform)
    # Need to convert filter array back to a keras tensor HERE before return
    return None

transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
EDIT: Added some context...
AffineTransform: https://github.com/scikit-image/scikit-image/blob/master/skimage/transform/_geometric.py#L715
warp: https://github.com/scikit-image/scikit-image/blob/master/skimage/transform/_warps.py#L601
I am trying to re-implement the CNN in "Unsupervised learning of object landmarks by factorized spatial embeddings". filter_arr is the output from a convolutional layer containing 10 filters. I want to apply the same affine transform to all of the filter outputs. There is an affine transform associated with each data input. The affine transforms for each data input are passed to the neural net as a tensor and are passed to the lambda layer as the second input transformInput. I have left the structure of my current network below.
twin = Sequential()
twin.add(Conv2D(20, (3, 3), activation=None, input_shape=(28, 28, 1)))
# print(twin.output_shape)
# twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
twin.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='same'))
# print(twin.output_shape)
twin.add(Conv2D(48, (3, 3), activation=None))
# print(twin.output_shape)
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
twin.add(Conv2D(64, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)
twin.add(Conv2D(80, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)
twin.add(Conv2D(256, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)
twin.add(Conv2D(no_filters, (3, 3), activation=None))
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
twin.add(Activation('relu'))
# print(twin.output_shape)
# Reshape the image outputs to a 1D list so softmax can be used on them
finalDims = twin.layers[-1].output_shape
twin.add(Reshape((finalDims[1], finalDims[2]*finalDims[3])))
twin.add(Activation('softmax'))
twin.add(Reshape(finalDims[1:]))
originalInput = Input(shape=(28, 28, 1))
warpedInput = Input(shape=(28, 28, 1))
transformInput = Input(shape=(3, 3))
twin1 = twin(originalInput)
def apply_affine(x, y):
    # Get dimensions of main tensor
    dimens = K.int_shape(x)
    # Get numpy array behind main tensor
    filter_arr = K.eval(x)
    if dimens[0] is not None:
        # Go through batch...
        for i in range(0, dimens[0]):
            # Get the corresponding affine transformation in the form of a numpy array
            affine = K.eval(y)[i, :, :]
            # Create a skimage affine transform from the numpy array
            transform = AffineTransform(matrix=affine)
            # Loop through each filter output from the previous layer of the CNN
            for j in range(0, dimens[1]):
                # Warp each filter output according to the corresponding affine transform
                filter_arr[i, j, :, :] = warp(filter_arr[i, j, :, :], transform)
    # Need to convert filter array back to a keras tensor
    return None

transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
twin2 = twin(warpedInput)
siamese = Model([originalInput, warpedInput, transformInput], [transformed_twin, twin2])
EDIT: Traceback when using K.variable()
Traceback (most recent call last):
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1039, in _do_call
return fn(*args)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _run_fn
status, run_metadata)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 96, in <module>
transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\engine\topology.py", line 585, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\layers\core.py", line 659, in call
return self.function(inputs, **arguments)
File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 96, in <lambda>
transformed_twin = Lambda(function=lambda x: apply_affine(x[0], x[1]))([twin1, transformInput])
File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 81, in apply_affine
filter_arr = K.eval(x)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\backend\tensorflow_backend.py", line 533, in eval
return to_dense(x).eval(session=get_session())
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 569, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 3741, in _eval_using_default_session
return session.run(tensors, feed_dict)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 778, in run
run_metadata_ptr)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 982, in _run
feed_dict_string, options, run_metadata)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1032, in _do_run
target_list, options, run_metadata)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 1052, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op 'batch_normalization_1/keras_learning_phase', defined at:
File "C:/Users/nickb/PycharmProjects/testing/MNIST_implementation.py", line 36, in <module>
twin.add(BatchNormalization(axis=1, momentum=0.99, epsilon=0.001, center=True))
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\models.py", line 466, in add
output_tensor = layer(self.outputs[0])
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\engine\topology.py", line 585, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\layers\normalization.py", line 190, in call
training=training)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\backend\tensorflow_backend.py", line 2559, in in_train_phase
training = learning_phase()
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\keras\backend\tensorflow_backend.py", line 112, in learning_phase
name='keras_learning_phase')
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1507, in placeholder
name=name)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1997, in _placeholder
name=name)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
op_def=op_def)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'batch_normalization_1/keras_learning_phase' with dtype bool
[[Node: batch_normalization_1/keras_learning_phase = Placeholder[dtype=DT_BOOL, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Exception ignored in: <bound method BaseSession.__del__ of <tensorflow.python.client.session.Session object at 0x0000023AB66D9C88>>
Traceback (most recent call last):
File "C:\Users\nickb\Anaconda3\envs\py35\lib\site-packages\tensorflow\python\client\session.py", line 587, in __del__
AttributeError: 'NoneType' object has no attribute 'TF_NewStatus'
Process finished with exit code 1

As stated in the comments above, it is best to implement Lambda layer functions using the Keras backend. Since there are currently no functions in the Keras backend that perform affine transformations, I decided to use a TensorFlow function in my Lambda layer instead of implementing an affine transform function from scratch using existing Keras backend functions:
def apply_affine(x):
    import tensorflow as tf
    return tf.contrib.image.transform(x[0], x[1])

def apply_affine_output_shape(input_shapes):
    return input_shapes[0]
The downside to this approach is that my Lambda layer will only work with TensorFlow as the backend (as opposed to Theano or CNTK). If you wanted an implementation compatible with any backend, you could check which backend Keras is currently using and then call the corresponding transformation function for that backend.
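For reference, a minimal sketch of how the Lambda layer could then be wired up. This assumes the (3, 3) transform matrices are first flattened into the 8-parameter vectors that tf.contrib.image.transform expects (the first eight entries of the row-major 3x3 matrix, with the ninth assumed to be 1); flatten_affine is a hypothetical helper, not part of the original code:

def flatten_affine(t):
    import tensorflow as tf
    # (batch, 3, 3) -> (batch, 8): drop the last entry,
    # which tf.contrib.image.transform fixes at 1
    return tf.reshape(t, (-1, 9))[:, :8]

transformed_twin = Lambda(function=lambda x: apply_affine([x[0], flatten_affine(x[1])]),
                          output_shape=apply_affine_output_shape)([twin1, transformInput])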

Related

Using albumentation with Tensorflow Sequence API

I am trying to use a tf.keras.utils.Sequence object as input to my keras model so that I can apply augmentations from the albumentations library that are not available in tensorflow. But I am getting an error while doing so. (The image pre-processing operations mentioned here are just for clarity.)
import albumentations as A
from tensorflow.keras.utils import Sequence
import os
import glob
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense, Conv2D, Flatten, MaxPool2D, Dropout
from tensorflow.keras.models import Sequential

TRAIN_DIR = os.path.join('..', 'Data', 'PetImages')

def load_data():
    list_of_fpaths = glob.glob('../Data/PetImages/Cat/*')
    labels = [1] * len(list_of_fpaths)
    temp = glob.glob('../Data/PetImages/Dog/*')
    list_of_fpaths.extend(temp)
    labels.extend([0] * len(temp))
    return list_of_fpaths, labels

# Now list_of_fpaths contains the list of file paths and labels contains
# the corresponding labels

class DataSequence(Sequence):
    def __init__(self, x_set, y_set, batch_size, augmentations):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.augment = augmentations

    def __len__(self):
        return int(np.ceil(len(self.x) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        a = np.array([
            self.augment(image=plt.imread(file_name))["image"] for file_name in
            batch_x
        ])
        b = np.array(batch_y)
        return a, b

def get_model(input_shape):
    model = Sequential([
        Conv2D(8, 3, activation='relu', input_shape=input_shape),
        MaxPool2D(2),
        Conv2D(16, 3, activation='relu'),
        MaxPool2D(2),
        Conv2D(32, 3, activation='relu'),
        MaxPool2D(2),
        Conv2D(32, 3, activation='relu'),
        MaxPool2D(2),
        Conv2D(32, 3, activation='relu'),
        MaxPool2D(2),
        Flatten(),
        Dense(1024, activation='relu'),
        Dropout(0.3),
        Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy']
                  )
    return model

ALBUMENTATIONS_TRAIN = A.Compose([
    A.Resize(256, 256),
    # A.Resize(512, 512),
    A.ToFloat(),
    # A.RandomCrop(384, 384, p=0.5),
])

ALBUMENTATIONS_TEST = A.Compose([
    A.ToFloat(),
    A.Resize(256, 256)
])

X, Y = load_data()
train_gen = DataSequence(X, Y, 16, ALBUMENTATIONS_TRAIN)
model = get_model(input_shape=(256, 256, 3))
model.fit(train_gen, epochs=100)
The error that I am getting is
17/748 [..............................] - ETA: 1:06 - loss: 0.4304 - accuracy: 0.92282020-07-08 13:25:47.751964: W tensorflow/core/framework/op_kernel.cc:1741] Invalid argument: ValueError: could not broadcast input array from shape (256,256,3) into shape (256,256)
Traceback (most recent call last):
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\ops\script_ops.py", line 243, in __call__
ret = func(*args)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 309, in wrapper
return func(*args, **kwargs)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 785, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 801, in wrapped_generator
for data in generator_fn():
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 932, in generator_fn
yield x[i]
File "D:/ACAD/TENSORFLOW/Rough/data_aug_pipeline.py", line 40, in __getitem__
a = np.array([
ValueError: could not broadcast input array from shape (256,256,3) into shape (256,256)
Traceback (most recent call last):
File "D:/ACAD/TENSORFLOW/Rough/data_aug_pipeline.py", line 89, in <module>
model.fit(train_gen,epochs=100)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\keras\engine\training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\keras\engine\training.py", line 848, in fit
tmp_logs = train_function(iterator)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\eager\def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\eager\def_function.py", line 611, in _call
return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\eager\function.py", line 2420, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\eager\function.py", line 1661, in _filtered_call
return self._call_flat(
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\eager\function.py", line 1745, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\eager\function.py", line 593, in call
outputs = execute.execute(
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\eager\execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: ValueError: could not broadcast input array from shape (256,256,3) into shape (256,256)
Traceback (most recent call last):
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\ops\script_ops.py", line 243, in __call__
ret = func(*args)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 309, in wrapper
return func(*args, **kwargs)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 785, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 801, in wrapped_generator
for data in generator_fn():
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 932, in generator_fn
yield x[i]
File "D:/ACAD/TENSORFLOW/Rough/data_aug_pipeline.py", line 40, in __getitem__
a = np.array([
ValueError: could not broadcast input array from shape (256,256,3) into shape (256,256)
[[{{node PyFunc}}]]
[[IteratorGetNext]]
[[IteratorGetNext/_4]]
(1) Invalid argument: ValueError: could not broadcast input array from shape (256,256,3) into shape (256,256)
Traceback (most recent call last):
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\ops\script_ops.py", line 243, in __call__
ret = func(*args)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 309, in wrapper
return func(*args, **kwargs)
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 785, in generator_py_func
values = next(generator_state.get_iterator(iterator_id))
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 801, in wrapped_generator
for data in generator_fn():
File "C:\Users\aksha\Anaconda3\envs\tf\lib\site-packages\tensorflow\python\keras\engine\data_adapter.py", line 932, in generator_fn
yield x[i]
File "D:/ACAD/TENSORFLOW/Rough/data_aug_pipeline.py", line 40, in __getitem__
a = np.array([
ValueError: could not broadcast input array from shape (256,256,3) into shape (256,256)
[[{{node PyFunc}}]]
[[IteratorGetNext]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_1195]
Function call stack:
train_function -> train_function
Process finished with exit code 1
Please help me to understand what mistake I am making.
Based on the error messages, there is at least one grayscale image in your dataset that was resized to 256x256 and thus cannot fit into your network.
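A minimal sketch of one possible fix, forcing every loaded image to three channels inside __getitem__ (plt.imread returns a 2D array for grayscale files; read_rgb is a hypothetical helper, and this is one option among several):

def read_rgb(file_name):
    # Load an image and force it to 3 channels
    img = plt.imread(file_name)
    if img.ndim == 2:            # grayscale -> replicate the channel
        img = np.stack([img] * 3, axis=-1)
    elif img.shape[-1] == 4:     # RGBA -> drop the alpha channel
        img = img[..., :3]
    return img

# inside DataSequence.__getitem__:
a = np.array([
    self.augment(image=read_rgb(file_name))["image"] for file_name in batch_x
])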

Keras 2.2.4 ERROR:AttributeError: 'NoneType' object has no attribute 'inbound_nodes'

I'm building a new channel-wise operation for my network.
A global average pooling result will multiply (element-wise) the first x (input) value.
But when I run the train.py file, it throws an error which I can't understand. Pls HELP!!!
The error message:
Traceback (most recent call last):
File "E:/githubRemote/train.py", line 49, in <module>
model = init_model()
File "E:/githubRemote/train.py", line 37, in init_model
model = Model(inputs=im_n, outputs=resd)
File "C:\Users\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\Anaconda3\lib\site-packages\keras\engine\network.py", line 93, in __init__
self._init_graph_network(*args, **kwargs)
File "C:\Users\Anaconda3\lib\site-packages\keras\engine\network.py", line 231, in _init_graph_network
self.inputs, self.outputs)
File "C:\Users\Anaconda3\lib\site-packages\keras\engine\network.py", line 1366, in _map_graph_network
tensor_index=tensor_index)
File "C:\Users\Anaconda3\lib\site-packages\keras\engine\network.py", line 1353, in build_map
node_index, tensor_index)
File "C:\Users\Anaconda3\lib\site-packages\keras\engine\network.py", line 1353, in build_map
node_index, tensor_index)
File "C:\Users\Anaconda3\lib\site-packages\keras\engine\network.py", line 1353, in build_map
node_index, tensor_index)
[Previous line repeated 3 more times]
File "C:\Users\Anaconda3\lib\site-packages\keras\engine\network.py", line 1325, in build_map
node = layer._inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
The error comes from the Multiply layer operation.
When I comment out net = Multiply()([x, excitation]), it works!
I think the Keras model may consider that this line doesn't make a Keras layer, so it's a NoneType -.-
My code:
def CAlayer(x, channel, reduction=16):
    # tensorflow implement
    # avg_pool = tflearn.global_avg_pool(inputx)
    # conv_1 = slim.conv2d(avg_pool, channel // reduction, 1)
    # conv_2 = slim.conv2d(conv_1, channel, 1, activation_fn=None)
    # excitation = tf.nn.sigmoid(conv_2)
    # keras implementation
    avg_pool = GlobalAveragePooling2D()(x)
    avg_pool = expand_dims(avg_pool, axis=1)
    avg_pool = expand_dims(avg_pool, axis=1)
    conv_1 = Conv2D(channel//reduction, 1, activation=None, padding='same')(avg_pool)
    conv_1_ac = Activation('relu')(conv_1)
    conv_2 = Conv2D(channel, 1, activation=None, padding='same')(conv_1_ac)
    excitation = Activation('sigmoid')(conv_2)
    net = Multiply()([excitation, x])   # --> the line that triggers the error
    # print (net.shape)
    return net
In your code, where you have used:
avg_pool = expand_dims(avg_pool, axis=1)
this is causing the problem, as expand_dims is a function defined under keras.backend which gives a TensorFlow tensor as output, but all operations should be encapsulated in Keras layers.
You must use its equivalent Keras layer function.
A rule of thumb: all Keras layer functions start with a capital letter.
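For instance, a minimal sketch of the fix, assuming the plain keras imports used elsewhere in your code (either option keeps the whole graph made of Keras layers):

from keras.layers import Lambda, Reshape
import keras.backend as K

# Option 1: wrap the backend ops in a Lambda layer
avg_pool = Lambda(lambda t: K.expand_dims(K.expand_dims(t, axis=1), axis=1))(avg_pool)

# Option 2: equivalently, reshape (batch, channel) -> (batch, 1, 1, channel)
# avg_pool = Reshape((1, 1, channel))(avg_pool)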

Prediction from tensorflow model fails

I am extremely new to tensorflow and trying to learn how to save and load a previously trained model. I created a simple model using Estimator and trained it.
classifier = tf.estimator.Estimator(model_fn=bag_of_words_model)

# Train
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"words": x_train},  # x_train is a 2D numpy array of shape (26, 5)
    y=y_train,             # y_train is a 1D pandas Series of length 26
    batch_size=1000,
    num_epochs=None,
    shuffle=True)
classifier.train(input_fn=train_input_fn, steps=300)
I then try to save the model:
def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": tf.placeholder(tf.int64, shape=(None, 5))}
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

full_model_dir = classifier.export_savedmodel(export_dir_base="E:/models/", serving_input_receiver_fn=serving_input_receiver_fn)
I have actually copied the serving_input_receiver_fn from this similar question. I don't understand exactly what is going on in that function. But this stores my model in E:/models/<some time stamp>.
I now try to load this saved model:
from tensorflow.contrib import predictor
classifier = predictor.from_saved_model("E:\\models\\<some time stamp>")
The model loaded perfectly. However, I am stuck on how to use this classifier object to get predictions on new data. I have followed a guide here to achieve it but couldn't get it to work :(. Here is what I did:
predictions = classifier({'predictor_inputs': x_test})["output"] # x_test is 2D numpy array same like x_train in the training part
I get the following error:
2019-01-10 12:43:38.603506: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:tensorflow:Restoring parameters from E:\models\1547101005\variables\variables
Traceback (most recent call last):
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
[[{{node Placeholder}} = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:/ml_classif/tensorflow_bow_with_prob/load_model.py", line 85, in <module>
predictions = classifier({'predictor_inputs': x_test})["output"]
File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\predictor.py", line 77, in __call__
return self._session.run(fetches=self.fetch_tensors, feed_dict=feed_dict)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
[[node Placeholder (defined at E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py:153) = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op 'Placeholder', defined at:
File "E:/ml_classif/tensorflow_bow_with_prob/load_model.py", line 82, in <module>
classifier = predictor.from_saved_model("E:\\models\\1547101005")
File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\predictor_factories.py", line 153, in from_saved_model
config=config)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py", line 153, in __init__
loader.load(self._session, tags.split(','), export_dir)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 197, in load
return loader.load(sess, tags, import_scope, **saver_kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 350, in load
**saver_kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 278, in load_graph
meta_graph_def, import_scope=import_scope, **saver_kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\training\saver.py", line 1696, in _import_meta_graph_with_return_elements
**kwargs))
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def
_ProcessNewOps(graph)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\importer.py", line 234, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3440, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3299, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "E:\ml_classif\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype int64 and shape [?,5]
[[node Placeholder (defined at E:\ml_classif\venv\lib\site-packages\tensorflow\contrib\predictor\saved_model_predictor.py:153) = Placeholder[dtype=DT_INT64, shape=[?,5], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
It says that I have to feed a value to the placeholder (I think the one defined in serving_input_receiver_fn). I have no idea how to do that without using a tensorflow Session object.
Please feel free to ask for more information if required.
After gaining a somewhat vague understanding of serving_input_receiver_fn, I figured out that features must not be a placeholder, since the original function creates 2 placeholders (1 for serialized_tf_example and the other for features). I modified the function as follows (the change is just for the features variable):
def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": tf.tile(serialized_tf_example, multiples=[1, 1])}  # Changed this
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
When I try to predict the output from the loaded model, I get no error now. It works! The only thing is that the output is incorrect (for which I am posting a new question :) ).
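As an aside, a sketch of an equivalent way to express the same fix: tf.identity, like the no-op tf.tile above, simply routes the placeholder into features as a new tensor instead of creating a second placeholder:

def serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.int64, shape=(None, 5), name='words')
    receiver_tensors = {"predictor_inputs": serialized_tf_example}
    features = {"words": tf.identity(serialized_tf_example)}  # no second placeholder
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)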

Keras int_shape returns None in custom loss function

I am trying to obtain the batch size within a custom loss function using K.int_shape(), as demonstrated by the code below.
from keras import layers, Input, Model
import keras.backend as K
import numpy as np

train_X = np.random.random([100, 5])
train_Y = train_X.sum(axis=1)

inputs = Input(shape=(5,), dtype='float32', name='posts')
outputs = layers.Dense(1, activation='relu')(inputs)
model = Model(inputs, outputs)  # , net_qc])
model.summary()

def myloss(y_true, y_pred):
    n = K.int_shape(y_pred)[0]
    return K.sum(y_pred) / n

model.compile(optimizer='adam', loss=myloss)
model.fit(train_X, train_Y, epochs=10, batch_size=10)
The error message below suggests that K.int_shape returns None. I have tried several things without success and would really appreciate some help.
Traceback (most recent call last):
File "./test_intshape.py", line 21, in <module>
model.compile(optimizer='adam', loss=myloss)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/engine/training.py", line 830, in compile
sample_weight, mask)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/keras/engine/training.py", line 429, in weighted
score_array = fn(y_true, y_pred)
File "./test_intshape.py", line 19, in myloss
return K.sum(y_pred)/n
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 820, in binary_op_wrapper
y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 639, in convert_to_tensor
as_ref=False)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 704, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 113, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/constant_op.py", line 102, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/framework/tensor_util.py", line 360, in make_tensor_proto
raise ValueError("None values not supported.")
ValueError: None values not supported.
That is the expected behaviour, because K.int_shape() doesn't return a symbolic tensor but the currently known static shape, and the batch size is only known at runtime; when constructing the graph it is None. What you are looking for is K.shape() instead, which returns a symbolic tensor whose batch size will be set at runtime, i.e.:
n = K.shape(y_pred)[0]
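Putting it together, a sketch of the corrected loss (the cast is an added assumption: K.shape() yields an integer tensor, which must be cast before float division):

def myloss(y_true, y_pred):
    # symbolic batch size, resolved at runtime
    n = K.cast(K.shape(y_pred)[0], K.floatx())
    return K.sum(y_pred) / n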

Tensorflow 1.4 Bidirectional RNN not working as expected

I am trying to use a bidirectional RNN and pass the output through a CNN for text classification. However, I am getting all sorts of shape errors with the bidirectional RNN, although if I use two dynamic RNNs with a reverse op in the second layer, it appears to work fine.
Here is bidirectional RNN code that DOES NOT work for me:
# Bidirectional LSTM layer
with tf.name_scope("bidirectional-lstm"):
    lstm_fw_cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size, forget_bias=1.0)
    lstm_bw_cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size, forget_bias=1.0)
    self.lstm_outputs, _ = tf.nn.bidirectional_dynamic_rnn(
        lstm_fw_cell,
        lstm_bw_cell,
        self.embedded_chars,
        sequence_length=self.seqlen,
        dtype=tf.float32)
    self.lstm_outputs = tf.concat(self.lstm_outputs, axis=2)
Here is the two layer dynamic rnn that DOES work for me:
# Bidirectional LSTM layer
with tf.name_scope("bidirectional-lstm"):
    lstm_fw_cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size, forget_bias=1.0)
    lstm_bw_cell = tf.nn.rnn_cell.BasicLSTMCell(hidden_size, forget_bias=1.0)
    with tf.variable_scope("lstm-output-fw"):
        self.lstm_outputs_fw, _ = tf.nn.dynamic_rnn(
            lstm_fw_cell,
            self.embedded_chars,
            sequence_length=self.seqlen,
            dtype=tf.float32)
    with tf.variable_scope("lstm-output-bw"):
        self.embedded_chars_rev = array_ops.reverse_sequence(self.embedded_chars, seq_lengths=self.seqlen, seq_dim=1)
        tmp, _ = tf.nn.dynamic_rnn(
            lstm_bw_cell,
            self.embedded_chars_rev,
            sequence_length=self.seqlen,
            dtype=tf.float32)
        self.lstm_outputs_bw = array_ops.reverse_sequence(tmp, seq_lengths=self.seqlen, seq_dim=1)
    # Concatenate outputs
    self.lstm_outputs = tf.add(self.lstm_outputs_fw, self.lstm_outputs_bw, name="lstm_outputs")
What am I doing wrong with the bidirectional RNN?
I am passing its output to a CNN, and the error occurs when computing the loss.
Here is the rest of the code:
# Convolution + maxpool layer for each filter size
pooled_outputs = []
for i, filter_size in enumerate(filter_sizes):
    with tf.name_scope("conv-maxpool-%s" % filter_size):
        # Convolution Layer
        filter_shape = [filter_size, hidden_size, 1, num_filters]
        W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
        b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
        conv = tf.nn.conv2d(
            self.lstm_outputs_expanded,
            W,
            strides=[1, 1, 1, 1],
            padding="VALID",
            name="conv")
        # Apply nonlinearity
        h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
        # Maxpooling over the outputs
        pooled = tf.nn.max_pool(
            h,
            ksize=[1, sequence_length - filter_size + 1, 1, 1],
            strides=[1, 1, 1, 1],
            padding='VALID',
            name="pool")
        pooled_outputs.append(pooled)

# Combine all the pooled features
num_filters_total = num_filters * len(filter_sizes)
self.h_pool = tf.concat(axis=3, values=pooled_outputs)
self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])

# Dropout layer
with tf.name_scope("dropout"):
    self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)

# Final (unnormalized) scores and predictions
with tf.name_scope("output"):
    # Standard output weights initialization
    W = tf.get_variable(
        "W",
        shape=[num_filters_total, num_classes],
        initializer=tf.contrib.layers.xavier_initializer())
    b = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b")
    # # Initialized output weights to 0.0, might improve accuracy
    # W = tf.Variable(tf.constant(0.0, shape=[num_filters_total, num_classes]), name="W")
    # b = tf.Variable(tf.constant(0.0, shape=[num_classes]), name="b")
    l2_loss += tf.nn.l2_loss(W)
    l2_loss += tf.nn.l2_loss(b)
    self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")
    self.predictions = tf.argmax(self.scores, 1, name="predictions")

# Calculate mean cross-entropy loss
with tf.name_scope("loss"):
    losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores, labels=self.input_y)
    self.loss = tf.reduce_mean(losses) + l2_reg_lambda * l2_loss

# Accuracy
with tf.name_scope("accuracy"):
    correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
    self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")
And here is the error message:
Traceback (most recent call last):
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1323, in _do_call
return fn(*args)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
status, run_metadata)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be same size: logits_size=[7550,2] labels_size=[50,2]
[[Node: loss/SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss/Reshape, loss/Reshape_1)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "train_upgraded.py", line 209, in <module>
train_step(x_batch, seqlen_batch, y_batch)
File "train_upgraded.py", line 177, in train_step
feed_dict)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
options, run_metadata)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be same size: logits_size=[7550,2] labels_size=[50,2]
[[Node: loss/SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss/Reshape, loss/Reshape_1)]]
Caused by op 'loss/SoftmaxCrossEntropyWithLogits', defined at:
File "train_upgraded.py", line 87, in <module>
l2_reg_lambda=FLAGS.l2_reg_lambda)
File "/media/hemant/MVV/MyValueVest-local/learning/Initial Embeddings/STEP 2 lstm-context-embeddings-master/model_upgraded.py", line 138, in __init__
losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores, labels=self.input_y)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 1783, in softmax_cross_entropy_with_logits
precise_logits, labels, name=name)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 4364, in _softmax_cross_entropy_with_logits
name=name)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/home/hemant/anaconda3/envs/tf14/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[7550,2] labels_size=[50,2]
[[Node: loss/SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](loss/Reshape, loss/Reshape_1)]]
All I had to do was multiply the hidden size by 2, since the output size of a bidirectional RNN is twice that of a regular RNN:
filter_shape = [filter_size, hidden_size*2, 1, num_filters]
Problem solved.
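To make the shapes concrete, a quick walk-through (illustrative, assuming inputs of [batch, seq_len, embed_size]):

# forward output:                [batch, seq_len, hidden_size]
# backward output:               [batch, seq_len, hidden_size]
# tf.concat(outputs, axis=2) ->  [batch, seq_len, 2*hidden_size]
# after expanding a channel dim: [batch, seq_len, 2*hidden_size, 1]
# A VALID conv2d only collapses the feature axis to 1 if the filter
# spans it fully, hence [filter_size, hidden_size*2, 1, num_filters].
# With a filter of height hidden_size, the leftover width inflates the
# batch dimension after the flatten, producing the logits_size=[7550,2]
# vs labels_size=[50,2] mismatch in the traceback.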
