Error in Bidirectional wrapper in Keras

I am experimenting with the Bidirectional wrapper in Keras; a sample of my code follows.
import numpy as np
from keras.layers import Input, LSTM, Bidirectional
from keras.models import Model

T = 8  # timesteps
D = 2  # input feature dimension
M = 3  # LSTM units
input_ = Input(shape=(T, D))
rnn = Bidirectional(LSTM(M, return_state=True, return_sequences=True))
#rnn = LSTM(M, return_state=True, return_sequences=True)
x = rnn(input_)
model = Model(inputs=input_, outputs=x)
X = np.random.randn(1, T, D)
o, h1, c1, h2, c2 = model.predict(X)
However, it gives an error:
ValueError Traceback (most recent call last)
<ipython-input-82-53f1c7a28b54> in <module>()
19 print("c:", c1)
20
---> 21 lstm1()
<ipython-input-82-53f1c7a28b54> in lstm1()
8 rnn = Bidirectional(LSTM(M, return_state=True, return_sequences=True))
9 #rnn = LSTM(M, return_state=True, return_sequences=True)
---> 10 x = rnn(input_)
...
ValueError: Tried to convert 'tensor' to a tensor and failed. Error:
Shapes must be equal rank, but are 3 and 2
From merging shape 0 with other shapes. for 'bidirectional_26/ReverseV2_1
/packed' (op: 'Pack') with input shapes: [?,?,3], [?,3], [?,3].
If I remove the Bidirectional wrapper, i.e.
rnn = LSTM(M, return_state=True, return_sequences=True)
then there is no problem. Any advice would be greatly appreciated!
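No answer is recorded for this question. The traceback (a ReverseV2/Pack op mixing rank-3 and rank-2 tensors) matches a known issue in older Keras releases, where the Bidirectional wrapper did not handle return_state=True correctly; recent versions of tf.keras accept it and return five tensors: the sequence output plus the forward and backward h/c states. A minimal sketch, assuming TensorFlow 2.x:

import numpy as np
from tensorflow.keras.layers import Input, LSTM, Bidirectional
from tensorflow.keras.models import Model

T, D, M = 8, 2, 3
input_ = Input(shape=(T, D))
# On recent tf.keras this yields [output, forward_h, forward_c, backward_h, backward_c].
x = Bidirectional(LSTM(M, return_state=True, return_sequences=True))(input_)
model = Model(inputs=input_, outputs=x)

X = np.random.randn(1, T, D)
o, h1, c1, h2, c2 = model.predict(X)
print(o.shape)   # (1, 8, 6): forward and backward outputs concatenated (2 * M)
print(h1.shape)  # (1, 3): forward hidden state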

Related

Multi input Functional API CNN model

I am making a multi-input CNN model using the Keras functional API, but it is giving an error.

Data:
trainset1 = trainset.flow_from_directory(
    '/content/',
    target_size=(404, 410),
    batch_size=32,
    #seed=50,
    class_mode='categorical')
print('In Training Set..Entropy....')
trainset12 = trainset.flow_from_directory(
    '/content/',
    target_size=(404, 410),
    batch_size=32,
    #seed=50,
    class_mode='categorical')

Model:
input1 = Input(shape=(404, 410, 3))
input2 = Input(shape=(404, 410, 3))
# x = layers.Dense(128, activation='relu')
x = layers.Conv2D(25, (5, 5), activation='relu', padding='same')(input1)
x = layers.MaxPool2D(pool_size=(2, 2), padding='same')(x)
x1 = layers.Conv2D(25, (5, 5), activation='relu', padding='same')(input2)
x1 = layers.MaxPool2D(pool_size=(2, 2), padding='same')(x1)
flat_layer1 = Flatten()(x)
flat_layer2 = Flatten()(x1)
print(flat_layer1.shape)
print(flat_layer2.shape)
concat_layer = Concatenate()([flat_layer1, flat_layer2])
concat_layer = concatenate([flat_layer1, flat_layer2])
x = layers.Dense(16, activation='relu')(flat_layer1)  #(concat_layer)
outputs = layers.Dense(2, activation='softmax')(concat_layer)
model = keras.Model(inputs=[input1, input2], outputs=outputs)
model.compile(
    loss=keras.losses.BinaryCrossentropy(),
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    metrics=["accuracy"])
model.fit([trainset1, trainset12], batch_size=32, epochs=5, verbose=2)
This gives the following error:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in ()
----> 1 model.fit([trainset1,trainset12] ,batch_size=32,epochs=5, verbose=2)
1 frames
/usr/local/lib/python3.7/dist-packages/keras/engine/data_adapter.py in select_data_adapter(x, y)
    989           "Failed to find data adapter that can handle "
    990           "input: {}, {}".format(
--> 991               _type_name(x), _type_name(y)))
    992   elif len(adapter_cls) > 1:
    993     raise RuntimeError(
ValueError: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'keras.preprocessing.image.DirectoryIterator'>"}), <class 'NoneType'>
What should I do now?
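No answer is recorded for this question, but the error itself points at the cause: model.fit received a plain Python list of DirectoryIterator objects, and no Keras data adapter knows how to consume that. A multi-input model instead needs a single stream that yields ([x1, x2], y) tuples. A minimal sketch of one way to wrap the two iterators, assuming both traverse the same samples in the same order (shuffling off or seeded identically); the name combined_gen is illustrative, not from the original code:

def combined_gen(gen1, gen2):
    # Wrap two DirectoryIterators into one multi-input stream.
    while True:
        x1, y1 = next(gen1)
        x2, _ = next(gen2)  # assumes gen2 is aligned with gen1
        yield [x1, x2], y1

model.fit(combined_gen(trainset1, trainset12),
          steps_per_epoch=len(trainset1), epochs=5, verbose=2)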

Exception encountered when calling layer "attention_weight" (type Attention)

I am new to using attention. My input shape per sample is (6, 128). I can't get my head around what the solution might be.
def MLSTM_FCN(shape, num_classes):
    x = Input(shape=(6, 128))
    ip = x
    x = Masking()(ip)
    x = LSTM(units=8)(x)
    x = Dropout(0.8)(x)
    y = Permute((2, 1))(ip)
    y = Conv1D(32, 3, padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = squeeze_excite_block(y)
    y = Conv1D(512, 3, padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = squeeze_excite_block(y)
    y = Conv1D(512, 9, padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = GlobalAveragePooling1D()(y)
    x = concatenate([x, y])
    x = keras.layers.Attention(name='attention_weight')(x)
    out = Dense(num_classes, activation='softmax')(x)
    model = Model(ip, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=['accuracy', 'AUC', 'Recall'])
    model.summary()
    return model
The error is given below. Please help me solve the problem. A bit of additional context: I am trying to apply the attention layer to a feature map that concatenates the features of a CNN model and an LSTM model.
ValueError Traceback (most recent call last)
<ipython-input-20-ddc4e6d2fec2> in <module>()
----> 1 model = MLSTM_FCN((X_train.shape[1], X_train.shape[2]), train_label.shape[1])
2 frames
<ipython-input-19-ac6ce541a216> in MLSTM_FCN(shape, num_classes)
19 y = GlobalAveragePooling1D()(y)
20 x = concatenate([x,y])
---> 21 x = keras.layers.Attention(name='attention_weight')(x)
22 out = Dense(num_classes, activation='softmax')(x)
23 model = Model(ip, out)
/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
/usr/local/lib/python3.7/dist-packages/keras/layers/dense_attention.py in _validate_call_args(self, inputs, mask)
186 if not isinstance(inputs, list):
187 raise ValueError(
--> 188 f'{class_name} layer must be called on a list of inputs, '
189 'namely [query, value] or [query, value, key]. '
190 f'Received: {inputs}.')
ValueError: Exception encountered when calling layer "attention_weight" (type Attention).
Attention layer must be called on a list of inputs, namely [query, value] or [query, value, key]. Received: Tensor("Placeholder:0", shape=(None, 520), dtype=float32).
Call arguments received:
• inputs=tf.Tensor(shape=(None, 520), dtype=float32)
• mask=None
• training=None
• return_attention_scores=False
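No answer is recorded here, but the message is explicit: keras.layers.Attention must be called on a list of inputs, [query, value] or [query, value, key], and it attends over 3D (batch, steps, features) tensors, whereas the concatenated x is a 2D (None, 520) tensor. A minimal sketch of one possible rearrangement: apply self-attention to the LSTM's sequence output while it still has a time axis, then pool before concatenating. This only illustrates the call convention and is not necessarily the author's intended architecture:

    x = Masking()(ip)
    x = LSTM(units=8, return_sequences=True)(x)  # keep the time axis: (None, 6, 8)
    x = keras.layers.Attention(name='attention_weight')([x, x])  # self-attention: query = value
    x = GlobalAveragePooling1D()(x)  # collapse back to (None, 8)
    x = Dropout(0.8)(x)
    # ... CNN branch y as before ...
    x = concatenate([x, y])
    out = Dense(num_classes, activation='softmax')(x)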

Why does Keras tell me "ValueError: setting an array element with a sequence." despite having all arrays as numpy arrays?

I am trying to train a 2D neural network using Keras. I get a weird error message, "ValueError: setting an array element with a sequence.", when I call the model.fit function. Specifically, the error says that my "tensor_train_labels" is a sequence instead of an array, but my labels are indeed numpy arrays (not a sequence). I am not sure why Keras complains about it.
I am following this tutorial for building my network
tensor_train_data.shape
#TensorShape([Dimension(209), Dimension(64), Dimension(64), Dimension(3)])
tensor_test_data.shape
#TensorShape([Dimension(50), Dimension(64), Dimension(64), Dimension(3)])
tensor_train_labels = tf.reshape(tensor_train_labels, [209,1])
tensor_test_labels = tf.reshape(tensor_test_labels, [50,1])
batch_size = 10
epochs = 8
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3,3), activation='relu',
input_shape=(64, 64, 3)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2,2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation = 'relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(2, activation = 'softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(lr=0.0001, decay=1e-6),
              metrics=['accuracy'])
model.fit(tensor_train_data/255.0,
tf.keras.utils.to_categorical(tensor_train_labels),
batch_size = batch_size,
shuffle = True,
epochs = epochs,
validation_data = (tensor_test_data/ 255.0,
tf.keras.utils.to_categorical(tensor_test_labels)))
scores = model.evaluate(tensor_test_labels/ 255.0,
tf.keras.utils.to_categorical(tensor_test_labels))
print('Loss: %.3f' % scores[0])
print('Accuracy: %.3f' % scores[1])
The error:
ValueError Traceback (most recent call last)
<ipython-input-224-80431a1b3e79> in <module>
1 model.compile(loss='categorical_crossentropy', optimizer = tf.keras.optimizers.Adam(lr=0.0001, decay=1e-6), metrics=['accuracy'])
----> 2 model.fit(tensor_train_data/255.0, tf.keras.utils.to_categorical(tensor_train_labels),
3 batch_size = batch_size,
4 shuffle = True,
5 epochs = epochs,
~\AppData\Local\conda\conda\envs\deeplearning\lib\site-packages\tensorflow\python\keras\utils\np_utils.py in to_categorical(y,
num_classes)
37 last.
38 """
---> 39 y = np.array(y, dtype='int')
40 input_shape = y.shape
41 if input_shape and input_shape[-1] == 1 and len(input_shape) > 1:
ValueError: setting an array element with a sequence.
A possible cause is that you have arrays of different sizes when you try to convert them into a numpy array. Possible solution: https://stackoverflow.com/a/49617425/8185479
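Another detail worth noting in the snippet above: tensor_train_labels is a TensorFlow tensor after tf.reshape, and the traceback shows the failure at np.array(y, dtype='int') inside to_categorical, which cannot convert a symbolic tensor. A minimal sketch of keeping the labels as plain numpy arrays instead (variable names mirror the question's, with the tensor_ prefix dropped to emphasize they are numpy):

import numpy as np
import tensorflow as tf

# Reshape with numpy so the labels stay numpy arrays, not tf tensors.
train_labels = np.reshape(train_labels, (209, 1))
test_labels = np.reshape(test_labels, (50, 1))

model.fit(train_data / 255.0,
          tf.keras.utils.to_categorical(train_labels),
          batch_size=10, shuffle=True, epochs=8,
          validation_data=(test_data / 255.0,
                           tf.keras.utils.to_categorical(test_labels)))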

Keras multi-dimensional input to SimpleRNN: dimension mismatch

Each input element has 3 rows of 199 columns, and the output has 46 rows and 1 column:
Input.shape, output.shape
((204563, 3, 199), (204563, 46, 1))
When the input is given to the following model, an error is thrown:
from keras.layers import Dense
from keras.models import Sequential
from keras.layers.recurrent import SimpleRNN
model = Sequential()
model.add(SimpleRNN(100, input_shape = (Input.shape[1], Input.shape[2])))
model.add(Dense(output.shape[1], activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.fit(Input, output, epochs = 20, batch_size = 200)
The error thrown:
Epoch 1/20
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-134-378dd431cf45> in <module>()
3 model.add(Dense(y_target.shape[1], activation = 'softmax'))
4 model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
----> 5 model.fit(X_input, y_target, epochs = 20, batch_size = 200)
.
.
.
ValueError: Error when checking model target: expected dense_6 to have 2 dimensions, but got array with shape (204563, 46, 1)
Please explain the reason for the problem and a possible solution.
The problem is that SimpleRNN(100) returns a tensor of shape (204563, 100); hence the Dense(46) (since output.shape[1] = 46) will return a tensor of shape (204563, 46), but your y_target has shape (204563, 46, 1). You need to remove the last dimension with, for example, y_target = np.squeeze(y_target), so that the dimensions are consistent.
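A minimal sketch of that fix in place (names follow the traceback; np.squeeze drops the trailing singleton axis):

import numpy as np

y_target = np.squeeze(y_target, axis=-1)  # (204563, 46, 1) -> (204563, 46)
model.fit(X_input, y_target, epochs=20, batch_size=200)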

keras LSTM model input and output dimensions mismatch

model = Sequential()
model.add(Embedding(630, 210))
model.add(LSTM(1024, dropout = 0.2, return_sequences = True))
model.add(LSTM(1024, dropout = 0.2, return_sequences = True))
model.add(Dense(210, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
filepath = 'ner_2-{epoch:02d}-{loss:.5f}.hdf5'
checkpoint = ModelCheckpoint(filepath, monitor = 'loss', verbose = 1, save_best_only = True, mode = 'min')
callback_list = [checkpoint]
model.fit(X, y , epochs = 20, batch_size = 1024, callbacks = callback_list)
X, the input vector, has shape (204564, 630, 1).
y, the target vector, has shape (204564, 210, 1).
i.e. for every 630 inputs, 210 outputs have to be predicted, but the code throws the following error when fitting:
ValueError Traceback (most recent call last)
<ipython-input-57-05a6affb6217> in <module>()
50 callback_list = [checkpoint]
51
---> 52 model.fit(X, y , epochs = 20, batch_size = 1024, callbacks = callback_list)
53 print('successful')
ValueError: Error when checking model input: expected embedding_8_input to have 2 dimensions, but got array with shape (204564, 630, 1)
Could someone please explain why this error is occurring and how to solve it?
The message says:
Your first layer expects an input with 2 dimensions: (BatchSize, SomeOtherDimension). But your input has 3 dimensions (BatchSize=204564, SomeOtherDimension=630, 1).
Well... remove the 1 from your input, or reshape it inside the model:
Solution 1 - Removing it from the input:
X = X.reshape((204564,630))
Solution 2 - Adding a reshape layer:
model = Sequential()
model.add(Reshape((630,),input_shape=(630,1)))
model.add(Embedding.....)
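One caveat beyond the recorded answer: the target is likely to hit the same check next. With return_sequences=True on the final LSTM, Dense(210, activation='softmax') is applied per timestep, so the model's output has shape (batch, 630, 210), while y has shape (204564, 210, 1). A minimal sketch of the input fix, with the target question flagged rather than solved:

# Solution 1 applied: drop the trailing singleton axis so Embedding sees (batch, 630).
X = X.reshape((204564, 630))

# Note: y still has shape (204564, 210, 1). It must be made to line up with the
# model's per-timestep output (batch, 630, 210), or the architecture changed so
# that it emits 210 steps; which is correct depends on the task.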
