Reshape layer after a dense layer in Keras

I am trying to understand why there is a dimensionality mismatch between a Dense layer and a Reshape layer. Shouldn't this code snippet be correct? The dimensionality of the Dense layer output will be image_resize^2 * 128, so why is there a conflict in the reshape?
from keras.layers import Input, Dense, Reshape

input_shape = (28, 28, 1)
inputs = Input(shape=input_shape)
image_size = 28
image_resize = image_size // 4
x = Dense(image_resize * image_resize * 128)(inputs)
x = Reshape((image_resize, image_resize, 128))(x)
This is the error that shows up:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/venv/lib/python3.7/site-packages/keras/engine/base_layer.py", line 474, in __call__
output_shape = self.compute_output_shape(input_shape)
File "/Users/venv/lib/python3.7/site-packages/keras/layers/core.py", line 398, in compute_output_shape
input_shape[1:], self.target_shape)
File "/Users/venv/lib/python3.7/site-packages/keras/layers/core.py", line 386, in _fix_unknown_dimension
raise ValueError(msg)
ValueError: total size of new array must be unchanged

Dense layers act on the last dimension of the input data. If you want to give image input to a Dense layer, you should first flatten it:
x = Flatten()(x)
x = Dense(image_resize * image_resize * 128)(x)
x = Reshape((image_resize, image_resize, 128))(x)
Then the Reshape will work.
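As a sanity check, here is a minimal sketch tracing the shapes (assuming the 28x28x1 input from the question):
from keras.layers import Input, Dense, Flatten, Reshape
from keras.models import Model

inputs = Input(shape=(28, 28, 1))
# Without Flatten, Dense maps only the last axis 1 -> 6272, giving (None, 28, 28, 6272),
# whose per-sample size 28*28*6272 cannot be reshaped to (7, 7, 128).
x = Flatten()(inputs)          # (None, 784)
x = Dense(7 * 7 * 128)(x)      # (None, 6272)
x = Reshape((7, 7, 128))(x)    # (None, 7, 7, 128)
Model(inputs, x).summary()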

Your input has 28*28 = 784 positions, and the Dense layer produces 7*7*128 outputs at each of them.
So the tensor entering Reshape has 784*7*7*128 elements per sample, not 7*7*128, hence the size mismatch.

Related

Tensorflow HammingLoss gives ValueError with keras.utils.Sequence

I am working on a multi-label image classification problem with 13 labels. I want to use Hamming loss to evaluate the performance of the model, so I specified tfa.metrics.HammingLoss(mode='multilabel') in the metrics parameter during model compilation. This worked when I provided both X_train and y_train to model.fit(), but it threw a ValueError when I used a Sequence object (described below) for training.
Data Generator description
I used a keras.utils.Sequence input object similar to what is presented here. The generator returns two NumPy arrays for each batch: the first consists of the input images, each of shape (128, 128, 3), and the second consists of the labels, each of shape (13,).
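For reference, a minimal Sequence with that contract might look like the following (an illustrative sketch with assumed names, not the actual generator):
import numpy as np
from tensorflow import keras

class MultiLabelSequence(keras.utils.Sequence):
    def __init__(self, images, labels, batch_size):
        self.images, self.labels, self.batch_size = images, labels, batch_size

    def __len__(self):
        return int(np.ceil(len(self.images) / self.batch_size))

    def __getitem__(self, idx):
        lo, hi = idx * self.batch_size, (idx + 1) * self.batch_size
        # a batch of images (batch, 128, 128, 3) and multi-hot labels (batch, 13)
        return self.images[lo:hi], self.labels[lo:hi]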
This is what my code looks like:
model.compile(
    loss='binary_crossentropy',
    optimizer='rmsprop',
    metrics=[tfa.metrics.HammingLoss(mode='multilabel')]
)
model.fit(
    train_datagen,
    epochs=5,
    batch_size=BATCH_SIZE,
    steps_per_epoch=TOTAL // BATCH_SIZE
)
And this is the error that I obtained:
Epoch 1/5
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-140-978987a2bbaa> in <module>
3 epochs=5,
4 batch_size=BATCH_SIZE,
----> 5 steps_per_epoch = 2000 // BATCH_SIZE
6 # validation_data=validation_generator,
7 )
4 frames
/usr/local/lib/python3.7/dist-packages/tensorflow_addons/metrics/hamming.py in else_body_2()
64 try:
65 do_return = True
---> 66 retval_ = (ag__.ld(nonzero) / ag__.converted_call(ag__.ld(y_true).get_shape, (), None, fscope)[(- 1)])
67 except:
68 do_return = False
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1051, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_addons/metrics/utils.py", line 66, in update_state *
matches = self._fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_addons/metrics/hamming.py", line 133, in hamming_loss_fn *
return nonzero / y_true.get_shape()[-1]
ValueError: None values not supported.
How do I correct this? Is there any issue with the format of the labels?
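One possible workaround (a hedged sketch, assuming train_datagen is the Sequence above): the traceback shows y_true.get_shape()[-1] evaluating to None, i.e. the labels' last dimension is not statically known to the metric. Wrapping the generator in a tf.data.Dataset with an explicit output_signature pins that dimension to 13:
import tensorflow as tf

dataset = tf.data.Dataset.from_generator(
    lambda: iter(train_datagen),  # train_datagen: the keras.utils.Sequence above
    output_signature=(
        tf.TensorSpec(shape=(None, 128, 128, 3), dtype=tf.float32),  # image batches
        tf.TensorSpec(shape=(None, 13), dtype=tf.float32),           # multi-hot label batches
    ),
)
model.fit(dataset, epochs=5, steps_per_epoch=TOTAL // BATCH_SIZE)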

Convert tflearn to keras

Since tflearn is outdated and the chatbot tutorial I am following uses it, I want to rewrite the neural network model in Keras. However, I get this error:
WARNING:tensorflow:Model was constructed with shape (None, 58) for input KerasTensor(type_spec=TensorSpec(shape=(None, 58), dtype=tf.float32, name='input_22'), name='input_22', description="created by layer 'input_22'"), but it was called on an input with incompatible shape (None,).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-164-1a494613d0d2> in <module>()
1 convert_input("Hello")
----> 2 chat()
2 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1801, in predict_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1790, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1783, in run_step **
outputs = model.predict_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1751, in predict_step
return self(x, training=False)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 228, in assert_input_compatibility
raise ValueError(f'Input {input_index} of layer "{layer_name}" '
ValueError: Exception encountered when calling layer "sequential_24" (type Sequential).
Input 0 of layer "dense_59" is incompatible with the layer: expected min_ndim=2, found ndim=1. Full shape received: (None,)
Call arguments received:
• inputs=('tf.Tensor(shape=(None,), dtype=int64)',)
• training=False
• mask=None
I have tried to set up the Keras model myself:
model = keras.Sequential()
# model.add(keras.layers.InputLayer(input_shape=len(words)))
model.add(keras.Input(shape=len(training[0])))
model.add(keras.layers.Dense(8, activation="relu"))
model.add(keras.layers.Dense(8, activation="relu"))
model.add(keras.layers.Dense(len(output[0]), activation="softmax"))
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=['accuracy'])
# convert_input(inp) -> returns a 1D numpy array filled with 1s and 0s
prediction = model.predict([convert_input(inp)])
versus the tflearn model
network = tflearn.input_data(shape=[None, len(training[0])])
network = tflearn.fully_connected(network, 8)
network = tflearn.fully_connected(network, 8)
network = tflearn.fully_connected(network, len(output[0]), activation = "softmax")
network = tflearn.regression(network)
model = tflearn.DNN(network)
model.fit(training, output, n_epoch = 1000, batch_size=8, show_metric=True)
# convert_input(inp) -> return a 1D numpy array filled with 1 and 0
prediction = model.predict([convert_input(inp)])
However, when I call model.predict, only the tflearn model works, not the Keras one. Please help!
I was in a similar predicament. However, I was finally able to resolve my problem with the following code:
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(8, input_shape=(len(train_x[0]),), kernel_initializer='normal'))
model.add(Dense(8, kernel_initializer='normal'))
model.add(Dense(len(train_y[0]), activation='softmax', kernel_initializer='normal'))
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(np.array(train_x), np.array(train_y), epochs=1000, batch_size=8, verbose=1)
During prediction,
model.predict(np.array([p]))
Basically, converting the list into a NumPy array using np.array() solved the problem: predict then receives a proper 2-D batch instead of the 1-D input that produced the ndim=1 error above.
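To illustrate (a sketch with an assumed feature length of 4): Keras expects a leading batch dimension, so a single 1-D vector of 1s and 0s must become a (1, n) array before prediction:
import numpy as np

p = np.array([1, 0, 0, 1])   # one input vector, shape (4,)
batch = np.array([p])        # shape (1, 4): a batch containing one sample
print(batch.shape)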

Tensorflow map_fn Error PartialTensorShape: Incompatible ranks during merge

The following code gives me an error whose cause I cannot find. I am trying to apply a Python function to each element of a tensor; the function transforms each element into a vector of shape 3, so that I can calculate a custom evaluation metric. It needs to be a Python function because it is used in other places too.
The error (log below) is Invalid argument: PartialTensorShape: Incompatible ranks during merge: 1 vs. 0, and I assume it has to do with the result of map_fn and its shape. However, it only happens at runtime; if I use any other shape, I instead get an incompatible-shapes error at model.compile(). Have I misunderstood how to use map_fn? Any suggestions?
Thanks in advance!
2021-04-09 12:19:31.357542: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at list_kernels.h:101 : Invalid argument: PartialTensorShape: Incompatible ranks during merge: 1 vs. 0
Traceback (most recent call last):
File "test.py", line 93, in <module>
validation_data=(val_input, val_output))
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
tmp_logs = train_function(iterator)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 840, in _call
return self._stateless_fn(*args, **kwds)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2829, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1848, in _filtered_call
cancellation_manager=cancellation_manager)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 1924, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 550, in call
ctx=ctx)
File "/home/user/anaconda3/envs/tf_models/lib/python3.6/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: PartialTensorShape: Incompatible ranks during merge: 1 vs. 0
[[node map/TensorArrayV2Stack/TensorListStack (defined at test.py:27) ]]
[[map_1/while/LoopCond/_50/_64]]
(1) Invalid argument: PartialTensorShape: Incompatible ranks during merge: 1 vs. 0
[[node map/TensorArrayV2Stack/TensorListStack (defined at test.py:27) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_823]
Function call stack:
train_function -> train_function
This is the code to reproduce the issue, using TensorFlow 2.3.1 and Python 3.6.
from typing import List

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input, Flatten

INPUT_SHAPE = (2, 10, 10)

class CustomMetric(tf.keras.metrics.Metric):
    def __init__(self, name='custom_metric', **kwargs):
        super().__init__(name=name, **kwargs)
        self.mean_custom_metric = self.add_weight(
            name='mean_custom_metric', initializer='zeros', dtype=float)

    def update_state(self, y_true, y_pred, sample_weight=None):
        # y_true is a probability distribution (batch, 2*10*10), so find index of most likely position
        y_pred = tf.argmax(y_pred, axis=1)
        # y_pred and y_true are both tensors with shape (batch, 1)
        print(f"y_pred: {y_pred}")
        # apply python func to convert each value to a 3D value (single scalar to vector with 3 scalars)
        # according to docs: map_fn(fn, elems).shape = [elems.shape[0]] + fn(elems[0]).shape.
        # So: elems.shape[0] == batch | fn(elems[0]).shape == 3,
        # error happens when trying to do anything with the result of map_fn below
        y_true_positions = tf.map_fn(self.wrapper, y_true, fn_output_signature=tf.float32)
        y_pred_positions = tf.map_fn(self.wrapper, y_pred, fn_output_signature=tf.float32)
        # y_true_positions, y_pred_positions: tensors with shape (batch, 3)
        print(f"y_true_positions: {y_true_positions}")
        # do something with y_true_positions and y_pred_positions
        y_final = y_true_positions
        mean = tf.reduce_sum(y_final)
        print('---')
        self.mean_custom_metric.assign(mean)

    def result(self):
        return self.mean_custom_metric

    def reset_states(self):
        self.mean_custom_metric.assign(0.0)

    def wrapper(self, x):
        # x: tensor with shape (1,)
        print(f"x: {x}")
        result = tf.py_function(python_function, [int(x)], tf.float32)
        # result is a tensor of unknown shape
        print(f"result: {result}")
        result.set_shape(tf.TensorShape(3))
        # result: tensor with shape (3,)
        print(f"result: {result}")
        return result

def python_function(index: int) -> List[float]:
    # dummy function
    return [0, 0, 0]

# dummy model
block_positions = Input(shape=(*INPUT_SHAPE, 1), dtype=tf.float32)
block_positions_layer = Flatten()(block_positions)
target_output_layer = Dense(128, activation='relu')(block_positions_layer)
target_output = Dense(np.prod(INPUT_SHAPE), activation='softmax', name='regions')(target_output_layer)
model = tf.keras.models.Model(
    inputs=[block_positions],
    outputs=target_output)

custom_metric = CustomMetric()
model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    optimizer=tf.optimizers.Adam(learning_rate=0.001),
    metrics=['accuracy', custom_metric])
print(model.summary())

# placeholder data
train_input = np.zeros(shape=(100, *INPUT_SHAPE), dtype=np.float32)
train_output = np.zeros(shape=(100, 1), dtype=np.int32)
val_input = np.zeros(shape=(100, *INPUT_SHAPE), dtype=np.float32)
val_output = np.zeros(shape=(100, 1), dtype=np.int32)

history = model.fit(
    train_input, train_output, epochs=10, verbose=1,
    validation_data=(val_input, val_output))
I found the solution after a while. The wrapper function was returning a tensor of shape (3,), whereas map_fn was applied over a tensor of shape (batch, 1). I don't fully understand why, but it seems that map_fn here requires the return value to keep the element's leading dimension of 1, and not fn(elems[0]).shape as the documentation suggests.
Changing the line
result.set_shape(tf.TensorShape(3))
in wrapper to
result = tf.reshape(tf.concat(result, 1), (1, 3))
so that the return value has shape (1, 3) instead of (3,) fixed the issue. After map_fn you end up with a tensor of shape (batch, 1, 3), which I then reshaped to (batch, 3).
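To make the shape behavior concrete, here is a self-contained sketch (a standalone illustration with dummy stand-ins; a plain tf.reshape produces the same (1, 3) result as the tf.concat expression above):
import tensorflow as tf

def python_function(index):
    # dummy stand-in: scalar index -> vector of three floats
    return [0.0, 0.0, 0.0]

def wrapper(x):
    # x: one (1,)-shaped element of the mapped tensor
    result = tf.py_function(python_function, [x[0]], tf.float32)
    # rank-2 return value, so stacking over the batch yields (batch, 1, 3)
    return tf.reshape(result, (1, 3))

elems = tf.zeros((4, 1))  # stand-in for y_true with batch size 4
mapped = tf.map_fn(wrapper, elems, fn_output_signature=tf.float32)
print(mapped.shape)                       # (4, 1, 3)
print(tf.reshape(mapped, (-1, 3)).shape)  # (4, 3)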

Conv2D + LSTM network giving errors

I have a list of matrices called charMatrixList, of length 40744. I convert this list to a NumPy array, and the shape becomes (40744, 32, 30). This array is passed as input to the neural network.
The errors I'm getting are related to the shape of the Conv2D layer's output when it is passed as input to an LSTM layer.
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Flatten, Conv2D, Reshape
import numpy as np

def phase22(charMatrixList):
    model = Sequential()
    model.add(Conv2D(32, (3, 3), strides=(1, 1), padding="same", activation="relu",
                     input_shape=(40744, 32, 30)))
    model.add(LSTM(16, return_sequences=True))
    model.add(LSTM(16, return_sequences=True))
    model.add(Flatten())
    model.compile('rmsprop', 'mse')
    input_array = charMatrixList
    output_array = model.predict(input_array)
    return output_array

p2out = phase22(charMatrixList)
I'm getting the error below:
Traceback (most recent call last):
File "<ipython-input-56-f615f91b6704>", line 1, in <module>
p2out = phase22(np.array(charMatrixList) )
File "<ipython-input-55-9a4fd292a04f>", line 4, in phase22
model.add(LSTM(16, return_sequences=True))
File "C:\Users\Kishore\Anaconda3\lib\site-packages\keras\engine\sequential.py", line 185, in add
output_tensor = layer(self.outputs[0])
File "C:\Users\Kishore\Anaconda3\lib\site-packages\keras\layers\recurrent.py", line 500, in __call__
return super(RNN, self).__call__(inputs, **kwargs)
File "C:\Users\Kishore\Anaconda3\lib\site-packages\keras\engine\base_layer.py", line 414, in __call__
self.assert_input_compatibility(inputs)
File "C:\Users\Kishore\Anaconda3\lib\site-packages\keras\engine\base_layer.py", line 311, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer lstm_11: expected ndim=3, found ndim=4
When defining input_shape, Keras ignores the first dimension of the data because that is just the number of training examples, m. Keras can work with any m, so it only cares about the shape of a single example. That is why Keras sees input_shape=(40744, 32, 30) as 4 dimensions (the batch axis plus the three you specified); the Conv2D output is then also 4-D, while the LSTM expects ndim=3.
I'm confused by the dimensions of your input: is 40744 the number of training examples? If it is, set input_shape = (32, 30).
If each example really does have 3 dimensions, include the number of training examples in your input data, i.e. charMatrixList should have shape (m, 40744, 32, 30).
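If 40744 is indeed the sample count, a minimal sketch could look like the following (an illustration, not from the original answer; the Reshape step is one of several ways to bridge Conv2D's 4-D output to the 3-D input an LSTM expects):
from keras.models import Sequential
from keras.layers import Conv2D, Reshape, LSTM, Flatten
import numpy as np

model = Sequential()
# each example: a 32x30 matrix with a single channel
model.add(Conv2D(32, (3, 3), padding="same", activation="relu", input_shape=(32, 30, 1)))
# fold width and channels into the feature axis: (32, 30, 32) -> (32, 960)
model.add(Reshape((32, 30 * 32)))
model.add(LSTM(16, return_sequences=True))
model.add(LSTM(16, return_sequences=True))
model.add(Flatten())
model.compile('rmsprop', 'mse')

dummy = np.zeros((4, 32, 30, 1), dtype=np.float32)  # (m, 32, 30, 1)
print(model.predict(dummy).shape)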

Sequential Model: TypeError: call() takes at least 3 arguments (2 given)

I am trying to create a multilayer bidirectional LSTM in Keras. When I run the code below, a TypeError is raised while instantiating the forward model at forward_model = keras.Sequential([fw_cell]).
It says that call() takes at least 3 arguments and that only two are given. I am not sure what this refers to, and therefore how to debug it. Is it saying I am not passing enough arguments to keras.Sequential()? Or is it in reference to my StackedRNNCells/LSTMCell(s)? Cheers!
class Model_Keras():
    def __init__(self, word_dim, sentence_length, class_size):
        rnn_size = 256
        num_layers = 2
        self.input_data = keras.Input(
            shape=[sentence_length, word_dim], dtype='float32')
        # self.output_data = tf.placeholder(tf.float32, [None, sentence_length, class_size])
        # ----------------------------------------------------------------------
        # Build independent forward stack of LSTM cells
        fw_cell = keras.layers.LSTMCell(rnn_size, dropout=0.5)
        # Stack the LSTM cells
        fw_cell = keras.layers.StackedRNNCells(
            [fw_cell] * num_layers, input_shape=(sentence_length, word_dim), name="fwd")
        # Instantiate model
        forward_model = keras.Sequential([fw_cell])
        # Build independent backward stack of cells
        # the same way as forward stack cells.
        bw_cell = keras.layers.LSTMCell(rnn_size, dropout=0.5)
        bw_cell = keras.layers.StackedRNNCells(
            [bw_cell] * num_layers, input_shape=(sentence_length, word_dim))
        backward_model = keras.Sequential([bw_cell])
        # Outputs of forward and backward stacks are depth-concatenated
        merged_model = keras.layers.Concatenate(
            [forward_model, backward_model])
        # You can use TF functions to process the input
        # if using a TF backend for Keras
        processed_input = tf.unstack(tf.transpose(
            self.input_data, perm=[1, 0, 2]))
        # I don't think this is needed.
        # used = tf.sign(tf.reduce_max(tf.abs(self.input_data), reduction_indices=2))
        # self.length = tf.cast(tf.reduce_sum(used, reduction_indices=1), tf.int32)
        # Put together inputs and merged stacked LSTM models,
        # apply bidirectional wrapper layer
        bidirectional = keras.Sequential(
            [processed_input,
             merged_model,
             keras.layers.Bidirectional()])
        # ----------------------------------------------------------------------
        # Process bidirectional output
        model = keras.Sequential(
            [tf.transpose(tf.unstack(bidirectional), perm=[1, 0, 2])])
        model.add(keras.Reshape([-1, 2 * rnn_size]))
        # This step replaces weight_and_bias
        model.add(keras.Dense(class_size, input_dim=2 * rnn_size))
        model.add(keras.Activation('softmax'))
        model.add(keras.Reshape([-1, sentence_length, class_size]))
        model.compile(optimizer='adam', loss='categorical_crossentropy')
        return model
Full error message:
Traceback (most recent call last):
File "model.py", line 97, in <module>
epoch=10, lr=0.002, model_path="./trained_wordvec_model.pkl")
File "model.py", line 87, in train
model = Model_Keras(word_dim, sentence_length, class_size)
File "model.py", line 32, in __init__
forward_model = keras.Sequential([fw_cell])
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 411, in __init__
self.add(layer)
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 467, in add
layer(x)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 619, in __call__
output = self.call(inputs, **kwargs)
TypeError: call() takes at least 3 arguments (2 given)
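For what it's worth, an RNN cell's call signature is call(inputs, states): counting self, that is the "at least 3 arguments" in the traceback, while Sequential invokes layers as layer(x) and so supplies only two. Cells are not layers and cannot sit directly in a Sequential model; the usual pattern is to wrap the stacked cells in keras.layers.RNN and wrap that in Bidirectional. A minimal sketch with assumed example dimensions:
from tensorflow import keras

rnn_size, num_layers = 256, 2
sentence_length, word_dim, class_size = 30, 100, 5  # assumed example values

# one independent LSTMCell per layer (avoid [cell] * n, which reuses a single cell object)
cells = [keras.layers.LSTMCell(rnn_size, dropout=0.5) for _ in range(num_layers)]
model = keras.Sequential([
    keras.Input(shape=(sentence_length, word_dim)),
    keras.layers.Bidirectional(keras.layers.RNN(cells, return_sequences=True)),
    keras.layers.Dense(class_size, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()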
