keras model.fit() with data generator error - python-3.x

I want to use a DataGenerator, but I get this error:
Found 32 validated image filenames belonging to 2 classes.
Epoch 1/10
Traceback (most recent call last):
File "C:\Users\nickm\anaconda3\lib\site-packages\spyder_kernels\py3compat.py", line 356, in compat_exec
exec(code, globals, locals)
File "f:\scamscan\domain-blacklist\ai\learning_big.py", line 114, in <module>
history = model.fit(train_gen, steps_per_epoch=len(train_gen) // train_gen.batch_size, validation_data=test_gen, validation_steps=len(test_gen) // test_gen.batch_size, epochs=10, callbacks=callbacksList, verbose=True)
File "C:\Users\nickm\anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\nickm\AppData\Local\Temp\__autograph_generated_file77ecj_93.py", line 15, in tf__train_function
retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
ValueError: in user code:
File "C:\Users\nickm\anaconda3\lib\site-packages\keras\engine\training.py", line 1160, in train_function *
return step_function(self, iterator)
File "C:\Users\nickm\anaconda3\lib\site-packages\keras\engine\training.py", line 1146, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "C:\Users\nickm\anaconda3\lib\site-packages\keras\engine\training.py", line 1135, in run_step **
outputs = model.train_step(data)
File "C:\Users\nickm\anaconda3\lib\site-packages\keras\engine\training.py", line 993, in train_step
y_pred = self(x, training=True)
File "C:\Users\nickm\anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\nickm\anaconda3\lib\site-packages\keras\engine\input_spec.py", line 216, in assert_input_compatibility
raise ValueError(
ValueError: Layer "model" expects 4 input(s), but it received 67 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, None, None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:2' shape=(None, None) dtype=float32>, <tf.Tensor 'IteratorGetNext:3' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:4' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:5' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:6' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:7' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:8' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:9' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:10' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:11' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:12' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:13' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:14' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:15' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:16' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:17' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:18' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:19' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:20' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:21' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:22' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:23' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:24' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:25' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:26' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:27' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:28' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:29' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:30' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:31' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:32' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:33' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:34' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:35' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:36' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:37' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:38' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:39' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:40' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:41' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:42' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:43' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:44' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:45' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:46' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:47' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:48' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:49' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:50' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:51' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:52' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:53' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:54' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:55' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:56' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:57' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:58' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:59' shape=() dtype=float32>, 
<tf.Tensor 'IteratorGetNext:60' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:61' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:62' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:63' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:64' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:65' shape=() dtype=float32>, <tf.Tensor 'IteratorGetNext:66' shape=() dtype=float32>]
My DataGenerator uses images, two vectorized texts, and some numbers:
class DataGenerator(Sequence):
    def __init__(self, image_paths, text1, text2, numbers, labels, shuffle, batch_size=32):
        self.image_paths = [path for path in image_paths if '\x00' not in path]
        self.text1 = text1
        self.text2 = text2
        self.numbers = numbers
        self.labels = labels
        self.batch_size = batch_size
        self.image_gen = ImageDataGenerator()
        self.shuffle = shuffle

    def __len__(self):
        return math.ceil(len(self.labels) / self.batch_size)

    def __getitem__(self, idx):
        start = idx * self.batch_size
        end = (idx + 1) * self.batch_size
        data = {'filename': self.image_paths[start:end], 'class': self.labels[start:end]}
        df = pd.DataFrame(data)
        image_batch = self.image_gen.flow_from_dataframe(
            dataframe=df,
            x_col='filename',
            y_col='class',
            target_size=(256, 256),
            color_mode='rgb',
            batch_size=self.batch_size,
            class_mode='binary',
        )
        x, y = image_batch.next()
        x = [x, self.text1[start:end], self.text2[start:end], self.numbers[start:end]]
        return x, self.labels[start:end]

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.labels))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)
# Create the data generators
image_paths_train, image_paths_test, text1_train, text1_test, text2_train, text2_test, \
    numbers_train, numbers_test, labels_train, labels_test = train_test_split(
        images, text_array, title_array, links, YTrain, test_size=0.2, random_state=42)
train_gen = DataGenerator(image_paths_train, text1_train, text2_train, numbers_train, labels_train, True, batch_size=32)
test_gen = DataGenerator(image_paths_test, text1_test, text2_test, numbers_test, labels_test, True, batch_size=32)
history = model.fit(train_gen,
                    steps_per_epoch=len(train_gen) // train_gen.batch_size,
                    validation_data=test_gen,
                    validation_steps=len(test_gen) // test_gen.batch_size,
                    epochs=10, verbose=True)
How can I solve this?
I can print the first element from my train_gen: it's a tuple containing a list of length 4 and an array with my labels.
The data generator should not return <tf.Tensor 'IteratorGetNext:2' shape=(None, None) dtype=float32>, or the fit() function should be able to work with it.
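A sketch of a possible fix (assuming text1/text2 are 2-D arrays with one row per sample and numbers holds one scalar per sample; those shapes are an assumption): Keras flattens nested Python structures, so a plain Python list of per-sample values can be split into one tensor per element. Converting every slice to a NumPy array before returning keeps the batch at exactly four input tensors.

def __getitem__(self, idx):
    start = idx * self.batch_size
    end = (idx + 1) * self.batch_size
    df = pd.DataFrame({'filename': self.image_paths[start:end],
                       'class': self.labels[start:end]})
    image_batch = self.image_gen.flow_from_dataframe(
        dataframe=df, x_col='filename', y_col='class',
        target_size=(256, 256), color_mode='rgb',
        batch_size=self.batch_size, class_mode='binary')
    x_img, _ = image_batch.next()
    # Cast each non-image input to an array so it stays a single tensor
    x = (x_img,
         np.asarray(self.text1[start:end], dtype='float32'),
         np.asarray(self.text2[start:end], dtype='float32'),
         np.asarray(self.numbers[start:end], dtype='float32').reshape(-1, 1))
    y = np.asarray(self.labels[start:end], dtype='float32')
    return x, y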

Related

ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape= (None, 864, 864, 2), found shape=(None, 864, 2)

Can anybody explain why I still receive expected shape=(None, 864, 864, 2), found shape=(None, 864, 2), even though I defined the input shape as the 3-dimensional [864, 864, 2]?
I am trying to implement a complex-valued convolutional neural network, using the cvnn library installed via pip (link to the library).
Here is my code:
import cvnn.layers as complex_layers
import tensorflow as tf

def get_model():
    model = tf.keras.models.Sequential()
    model.add(complex_layers.ComplexInput(input_shape=[864, 864, 2]))  # Always use ComplexInput at the start
    model.add(complex_layers.ComplexConv2D(32, (3, 3), activation='cart_relu'))
    model.add(complex_layers.ComplexAvgPooling2D((2, 2)))
    model.add(complex_layers.ComplexConv2D(64, (3, 3), activation='cart_relu'))
    model.add(complex_layers.ComplexMaxPooling2D((2, 2)))
    model.add(complex_layers.ComplexConv2D(64, (3, 3), activation='cart_relu'))
    model.add(complex_layers.ComplexFlatten())
    model.add(complex_layers.ComplexDense(64, activation='cart_relu'))
    model.add(complex_layers.ComplexDense(10, activation='convert_to_real_with_abs'))
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    model.summary()
    return model
Input Shape for the dataset:
data shape: (8010, 864, 2)
Model Summary:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
complex_conv2d (ComplexConv2D)               (None, 862, 862, 32)    1216
complex_avg_pooling2d (ComplexAvgPooling2D)  (None, 431, 431, 32)    0
complex_conv2d_1 (ComplexConv2D)             (None, 429, 429, 64)    36992
complex_max_pooling2d (ComplexMaxPooling2D)  (None, 214, 214, 64)    0
complex_conv2d_2 (ComplexConv2D)             (None, 212, 212, 64)    73856
complex_flatten (ComplexFlatten)             (None, 2876416)         0
complex_dense (ComplexDense)                 (None, 64)              368181376
complex_dense_1 (ComplexDense)               (None, 10)              1300
=================================================================
Total params: 368,294,740
Trainable params: 368,294,740
Non-trainable params: 0
_________________________________________________________________
The error I am getting:
ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 864, 864, 2), found shape=(None, 864, 2)
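For what it's worth, the numbers are consistent with the dataset itself rather than the model: input_shape excludes the batch axis, so the model expects each sample to be (864, 864, 2), while data of shape (8010, 864, 2) contains 8010 samples of shape (864, 2). A quick sanity check (a sketch, not from the original post):

import numpy as np

data = np.zeros((8010, 864, 2))  # stand-in for the real dataset
print(data.shape[1:])            # (864, 2) -- the per-sample shape the model sees
# The model instead expects batches of shape (None, 864, 864, 2).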

Keras - How to remove useless dimension without hurting the computation graph?

While building a deep learning model, I used the K.squeeze function to remove a useless dimension from a tensor whose first two dimensions have None shape.
import keras.backend as K
>>> K.int_shape(user_input_for_TD)
(None, None, 1, 32)
>>> K.int_shape(K.squeeze(user_input_for_TD, axis=-2))
(None, None, 32)
However, this gives the error below. It seems the K.squeeze function hurts the computation graph; is there any way around this issue? Maybe that function does not support calculating gradients, since it isn't differentiable.
File "/home/sundong/anaconda3/envs/py36/lib/python3.6/site-packages/keras/engine/network.py", line 1325, in build_map
node = layer._inbound_nodes[node_index]
AttributeError: 'NoneType' object has no attribute '_inbound_nodes'
The code block below is the one that causes the error.
user_embedding_layer = Embedding(
    input_dim=len(self.data.visit_embedding),
    output_dim=32,
    weights=[np.array(list(self.data.visit_embedding.values()))],
    input_length=1,
    trainable=False)
...
all_areas_lstm = LSTM(1024, return_sequences=True)(all_areas_rslt) # (None, None, 1024)
user_input_for_TD = Lambda(lambda x: x[:, :, 0:1])(multiple_inputs) # (None, None, 1)
user_input_for_TD = TimeDistributed(user_embedding_layer)(user_input_for_TD) # (None, None, 1, 32)
user_input_for_TD = K.squeeze(user_input_for_TD, axis=-2) # (None, None, 32)
aggre_threeway_inputs = Concatenate()([user_input_for_TD, all_areas_lstm]) # should be (None, None, 1056)
threeway_encoder = TimeDistributed(ThreeWay(output_dim=512))
three_way_rslt = threeway_encoder(aggre_threeway_inputs) # should be (None, None, 512)
logits = Dense(365, activation='softmax')(three_way_rslt) # should be (None, None, 365)
self.model = keras.Model(inputs=multiple_inputs, outputs=logits)
After removing the two lines below (i.e., not passing the input through the embedding layer), the code works without any issues. In that case, the dimension of aggre_threeway_inputs = Concatenate()([user_input_for_TD, all_areas_lstm]) is (None, None, 1025).
user_input_for_TD = TimeDistributed(user_embedding_layer)(user_input_for_TD)
user_input_for_TD = K.squeeze(user_input_for_TD, axis=-2)
I solved it by using a Lambda layer with indexing instead of the K.squeeze function.
from keras.layers import Lambda
>>> K.int_shape(user_input_for_TD)
(None, None, 1, 32)
>>> K.int_shape(Lambda(lambda x: x[:, :, 0, :])(user_input_for_TD))
(None, None, 32)
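An equivalent fix is to wrap the backend call itself in a Lambda layer: the original error occurs because the raw K.squeeze output is a plain tensor with no layer metadata, which breaks Keras's graph traversal. A minimal sketch:

from keras.layers import Lambda
import keras.backend as K

# Wrapping the backend op in a Lambda keeps it tracked as a layer.
user_input_for_TD = Lambda(lambda t: K.squeeze(t, axis=-2))(user_input_for_TD)  # (None, None, 32)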

Python: memmap list of objects become 'None' type inside joblib parallel

I am doing the following:
I have a list of TensorFlow DNN layers: nn.append(tf.layers.dense(...)).
Each such list is appended to a list of np.memmap objects: nnList[i] = nn.
I can access the memmap list and retrieve the tensors. But when I try to access the tensors inside joblib.Parallel, it returns 'None' type objects. However, the length of the memmap list is correct inside joblib.Parallel.
I have attached sample code below.
import os
import tempfile
import numpy as np
import tensorflow as tf
from joblib import Parallel, delayed, load, dump

tmpFolder = tempfile.mkdtemp()
__nnFile = os.path.join(tmpFolder, 'nn.mmap')
nnList = np.memmap(__nnFile, dtype=object, mode='w+', shape=(5))

def main():
    for i in range(5):
        nn = []
        input = tf.placeholder(dtype=tf.float32, shape=(1, 8))
        nn.append(tf.layers.dense(inputs=input, units=8, activation=tf.sigmoid,
                                  trainable=False))
        nn.append(tf.layers.dense(inputs=nn[0], units=2, activation=tf.sigmoid,
                                  trainable=False))
        nnList[i] = nn

    print('nnList: ' + str(len(nnList)))
    for i in range(5):
        nn = nnList[i]
        print(nn)
        print(nn[-1])
        print('--------------------------- ' + str(i))

    with Parallel(n_jobs=-1) as parallel:
        parallel(delayed(func1)(i) for i in range(5))

def func1(i):
    print('nnList: ' + str(len(nnList)))
    for x in range(5):
        nn = nnList[x]
        print(nn)
        print('--------------------------- ' + str(x))

if __name__ == '__main__':
    main()
The above code gives this output. Note the length of the arrays and how the tensors become None.
nnList: 5
[<tf.Tensor 'dense/Sigmoid:0' shape=(1, 8) dtype=float32>, <tf.Tensor 'dense_1/Sigmoid:0' shape=(1, 2) dtype=float32>]
Tensor("dense_1/Sigmoid:0", shape=(1, 2), dtype=float32)
--------------------------- 0
[<tf.Tensor 'dense_2/Sigmoid:0' shape=(1, 8) dtype=float32>, <tf.Tensor 'dense_3/Sigmoid:0' shape=(1, 2) dtype=float32>]
Tensor("dense_3/Sigmoid:0", shape=(1, 2), dtype=float32)
--------------------------- 1
[<tf.Tensor 'dense_4/Sigmoid:0' shape=(1, 8) dtype=float32>, <tf.Tensor 'dense_5/Sigmoid:0' shape=(1, 2) dtype=float32>]
Tensor("dense_5/Sigmoid:0", shape=(1, 2), dtype=float32)
--------------------------- 2
[<tf.Tensor 'dense_6/Sigmoid:0' shape=(1, 8) dtype=float32>, <tf.Tensor 'dense_7/Sigmoid:0' shape=(1, 2) dtype=float32>]
Tensor("dense_7/Sigmoid:0", shape=(1, 2), dtype=float32)
--------------------------- 3
[<tf.Tensor 'dense_8/Sigmoid:0' shape=(1, 8) dtype=float32>, <tf.Tensor 'dense_9/Sigmoid:0' shape=(1, 2) dtype=float32>]
Tensor("dense_9/Sigmoid:0", shape=(1, 2), dtype=float32)
--------------------------- 4
nnList: 5
None
--------------------------- 0
None
--------------------------- 1
None
--------------------------- 2
None
--------------------------- 3
None
--------------------------- 4
How can I access the tensors inside joblib.parallel? Please help.
Found the issue back then. Hope it helps someone in the future.
The None problem had nothing to do with the tensors. I was using the joblib.Parallel function the wrong way.
One should pass the variable to delayed so that it is accessible to the forked processes (how did I overlook that in the documentation!). The correct way:
with Parallel(n_jobs=-1) as parallel:
    parallel(delayed(func1)(i, WHATEVER_VARIABLE_I_WANT) for i in range(5))
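Put together with the sample code above, the corrected pattern looks like this (a sketch; the parameter name nn_list is mine):

def func1(i, nn_list):
    print('nnList: ' + str(len(nn_list)))
    for x in range(5):
        print(nn_list[x])  # the tensors are now visible in the worker
        print('--------------------------- ' + str(x))

with Parallel(n_jobs=-1) as parallel:
    parallel(delayed(func1)(i, nnList) for i in range(5))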

Error when checking target: expected softmax to have shape (1100,)

I'm trying to create a model on some data with 2 classes, but I keep getting an error saying:
ValueError: Error when checking target: expected softmax to have shape (1100,) but got array with shape (2,)
I know it's a fairly common error, but I can't seem to fix mine. I believe it means that the model's output has shape (1100,) while the targets have shape (2,). Does anyone know how it can be fixed?
Here's my model:
def TestModel(nb_classes=2, inputs=(3, 224, 224)):
    input_img = Input(shape=inputs)
    conv1 = Convolution2D(
        96, 7, 7, activation='relu', init='glorot_uniform',
        subsample=(2, 2), border_mode='same', name='conv1')(input_img)
    maxpool1 = MaxPooling2D(
        pool_size=(3, 3), strides=(2, 2), name='maxpool1', dim_ordering="th")(conv1)
    fire2_squeeze = Convolution2D(
        16, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire2_squeeze')(maxpool1)
    fire2_expand1 = Convolution2D(
        64, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire2_expand1')(fire2_squeeze)
    fire2_expand2 = Convolution2D(
        64, 3, 3, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire2_expand2')(fire2_squeeze)
    merge2 = merge(
        [fire2_expand1, fire2_expand2], mode='concat', concat_axis=1)
    fire3_squeeze = Convolution2D(
        16, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire3_squeeze')(merge2)
    fire3_expand1 = Convolution2D(
        64, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire3_expand1')(fire3_squeeze)
    fire3_expand2 = Convolution2D(
        64, 3, 3, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire3_expand2')(fire3_squeeze)
    merge3 = merge(
        [fire3_expand1, fire3_expand2], mode='concat', concat_axis=1)
    fire4_squeeze = Convolution2D(
        32, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire4_squeeze')(merge3)
    fire4_expand1 = Convolution2D(
        128, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire4_expand1')(fire4_squeeze)
    fire4_expand2 = Convolution2D(
        128, 3, 3, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire4_expand2')(fire4_squeeze)
    merge4 = merge(
        [fire4_expand1, fire4_expand2], mode='concat', concat_axis=1)
    maxpool4 = MaxPooling2D(
        pool_size=(3, 3), strides=(2, 2), name='maxpool4')(merge4)
    fire5_squeeze = Convolution2D(
        32, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire5_squeeze')(maxpool4)
    fire5_expand1 = Convolution2D(
        128, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire5_expand1')(fire5_squeeze)
    fire5_expand2 = Convolution2D(
        128, 3, 3, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire5_expand2')(fire5_squeeze)
    merge5 = merge(
        [fire5_expand1, fire5_expand2], mode='concat', concat_axis=1)
    fire6_squeeze = Convolution2D(
        48, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire6_squeeze')(merge5)
    fire6_expand1 = Convolution2D(
        192, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire6_expand1')(fire6_squeeze)
    fire6_expand2 = Convolution2D(
        192, 3, 3, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire6_expand2')(fire6_squeeze)
    merge6 = merge(
        [fire6_expand1, fire6_expand2], mode='concat', concat_axis=1)
    fire7_squeeze = Convolution2D(
        48, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire7_squeeze')(merge6)
    fire7_expand1 = Convolution2D(
        192, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire7_expand1')(fire7_squeeze)
    fire7_expand2 = Convolution2D(
        192, 3, 3, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire7_expand2')(fire7_squeeze)
    merge7 = merge(
        [fire7_expand1, fire7_expand2], mode='concat', concat_axis=1)
    fire8_squeeze = Convolution2D(
        64, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire8_squeeze')(merge7)
    fire8_expand1 = Convolution2D(
        256, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire8_expand1')(fire8_squeeze)
    fire8_expand2 = Convolution2D(
        256, 3, 3, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire8_expand2')(fire8_squeeze)
    merge8 = merge(
        [fire8_expand1, fire8_expand2], mode='concat', concat_axis=1)
    maxpool8 = MaxPooling2D(
        pool_size=(3, 3), strides=(2, 2), name='maxpool8')(merge8)
    fire9_squeeze = Convolution2D(
        64, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire9_squeeze')(maxpool8)
    fire9_expand1 = Convolution2D(
        256, 1, 1, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire9_expand1')(fire9_squeeze)
    fire9_expand2 = Convolution2D(
        256, 3, 3, activation='relu', init='glorot_uniform',
        border_mode='same', name='fire9_expand2')(fire9_squeeze)
    merge9 = merge(
        [fire9_expand1, fire9_expand2], mode='concat', concat_axis=1)
    fire9_dropout = Dropout(0.5, name='fire9_dropout')(merge9)
    conv10 = Convolution2D(
        nb_classes, 1, 1, init='glorot_uniform',
        border_mode='valid', name='conv10')(fire9_dropout)
    # The size should match the output of conv10
    avgpool10 = AveragePooling2D((13, 13), name='avgpool10')(conv10)
    flatten = Flatten(name='flatten')(avgpool10)
    softmax = Activation("softmax", name='softmax')(flatten)
    return Model(input=input_img, output=softmax)
Here's the code creating the model:
def main():
    np.random.seed(45)
    nb_class = 2
    width, height = 224, 224

    sn = model.TestModel(nb_classes=nb_class, inputs=(height, width, 3))
    print('Build model')
    sgd = SGD(lr=0.001, decay=0.0002, momentum=0.9, nesterov=True)
    sn.compile(
        optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
    print(sn.summary())

    # Training
    train_data_dir = 'data/train'
    validation_data_dir = 'data/validation'
    nb_train_samples = 2000
    nb_validation_samples = 800
    nb_epoch = 500

    # Generator
    train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)
    #train_datagen = ImageDataGenerator(rescale=1./255)
    test_datagen = ImageDataGenerator(rescale=1./255)
    train_generator = train_datagen.flow_from_directory(
        train_data_dir,
        target_size=(width, height),
        batch_size=32,
        class_mode='categorical')
    validation_generator = test_datagen.flow_from_directory(
        validation_data_dir,
        target_size=(width, height),
        batch_size=32,
        class_mode='categorical')

    # Instantiate AccLossPlotter to visualise training
    plotter = AccLossPlotter(graphs=['acc', 'loss'], save_graph=True)
    early_stopping = EarlyStopping(monitor='val_loss', patience=3, verbose=0)
    checkpoint = ModelCheckpoint(
        'weights.{epoch:02d}-{val_loss:.2f}.h5',
        monitor='val_loss',
        verbose=0,
        save_best_only=True,
        save_weights_only=True,
        mode='min',
        period=1)

    sn.fit_generator(
        train_generator,
        samples_per_epoch=nb_train_samples,
        nb_epoch=nb_epoch,
        validation_data=validation_generator,
        nb_val_samples=nb_validation_samples,
        callbacks=[plotter, checkpoint])
    sn.save_weights('weights.h5')
Here's the output of summary():
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
____________________________________________________________________________________________________
conv1 (Convolution2D) (None, 112, 112, 96) 14208 input_1[0][0]
____________________________________________________________________________________________________
maxpool1 (MaxPooling2D) (None, 112, 55, 47) 0 conv1[0][0]
____________________________________________________________________________________________________
fire2_squeeze (Convolution2D) (None, 112, 55, 16) 768 maxpool1[0][0]
____________________________________________________________________________________________________
fire2_expand1 (Convolution2D) (None, 112, 55, 64) 1088 fire2_squeeze[0][0]
____________________________________________________________________________________________________
fire2_expand2 (Convolution2D) (None, 112, 55, 64) 9280 fire2_squeeze[0][0]
____________________________________________________________________________________________________
merge_1 (Merge) (None, 224, 55, 64) 0 fire2_expand1[0][0]
fire2_expand2[0][0]
____________________________________________________________________________________________________
fire3_squeeze (Convolution2D) (None, 224, 55, 16) 1040 merge_1[0][0]
____________________________________________________________________________________________________
fire3_expand1 (Convolution2D) (None, 224, 55, 64) 1088 fire3_squeeze[0][0]
____________________________________________________________________________________________________
fire3_expand2 (Convolution2D) (None, 224, 55, 64) 9280 fire3_squeeze[0][0]
____________________________________________________________________________________________________
merge_2 (Merge) (None, 448, 55, 64) 0 fire3_expand1[0][0]
fire3_expand2[0][0]
____________________________________________________________________________________________________
fire4_squeeze (Convolution2D) (None, 448, 55, 32) 2080 merge_2[0][0]
____________________________________________________________________________________________________
fire4_expand1 (Convolution2D) (None, 448, 55, 128) 4224 fire4_squeeze[0][0]
____________________________________________________________________________________________________
fire4_expand2 (Convolution2D) (None, 448, 55, 128) 36992 fire4_squeeze[0][0]
____________________________________________________________________________________________________
merge_3 (Merge) (None, 896, 55, 128) 0 fire4_expand1[0][0]
fire4_expand2[0][0]
____________________________________________________________________________________________________
maxpool4 (MaxPooling2D) (None, 447, 27, 128) 0 merge_3[0][0]
____________________________________________________________________________________________________
fire5_squeeze (Convolution2D) (None, 447, 27, 32) 4128 maxpool4[0][0]
____________________________________________________________________________________________________
fire5_expand1 (Convolution2D) (None, 447, 27, 128) 4224 fire5_squeeze[0][0]
____________________________________________________________________________________________________
fire5_expand2 (Convolution2D) (None, 447, 27, 128) 36992 fire5_squeeze[0][0]
____________________________________________________________________________________________________
merge_4 (Merge) (None, 894, 27, 128) 0 fire5_expand1[0][0]
fire5_expand2[0][0]
____________________________________________________________________________________________________
fire6_squeeze (Convolution2D) (None, 894, 27, 48) 6192 merge_4[0][0]
____________________________________________________________________________________________________
fire6_expand1 (Convolution2D) (None, 894, 27, 192) 9408 fire6_squeeze[0][0]
____________________________________________________________________________________________________
fire6_expand2 (Convolution2D) (None, 894, 27, 192) 83136 fire6_squeeze[0][0]
____________________________________________________________________________________________________
merge_5 (Merge) (None, 1788, 27, 192) 0 fire6_expand1[0][0]
fire6_expand2[0][0]
____________________________________________________________________________________________________
fire7_squeeze (Convolution2D) (None, 1788, 27, 48) 9264 merge_5[0][0]
____________________________________________________________________________________________________
fire7_expand1 (Convolution2D) (None, 1788, 27, 192) 9408 fire7_squeeze[0][0]
____________________________________________________________________________________________________
fire7_expand2 (Convolution2D) (None, 1788, 27, 192) 83136 fire7_squeeze[0][0]
____________________________________________________________________________________________________
merge_6 (Merge) (None, 3576, 27, 192) 0 fire7_expand1[0][0]
fire7_expand2[0][0]
____________________________________________________________________________________________________
fire8_squeeze (Convolution2D) (None, 3576, 27, 64) 12352 merge_6[0][0]
____________________________________________________________________________________________________
fire8_expand1 (Convolution2D) (None, 3576, 27, 256) 16640 fire8_squeeze[0][0]
____________________________________________________________________________________________________
fire8_expand2 (Convolution2D) (None, 3576, 27, 256) 147712 fire8_squeeze[0][0]
____________________________________________________________________________________________________
merge_7 (Merge) (None, 7152, 27, 256) 0 fire8_expand1[0][0]
fire8_expand2[0][0]
____________________________________________________________________________________________________
maxpool8 (MaxPooling2D) (None, 3575, 13, 256) 0 merge_7[0][0]
____________________________________________________________________________________________________
fire9_squeeze (Convolution2D) (None, 3575, 13, 64) 16448 maxpool8[0][0]
____________________________________________________________________________________________________
fire9_expand1 (Convolution2D) (None, 3575, 13, 256) 16640 fire9_squeeze[0][0]
____________________________________________________________________________________________________
fire9_expand2 (Convolution2D) (None, 3575, 13, 256) 147712 fire9_squeeze[0][0]
____________________________________________________________________________________________________
merge_8 (Merge) (None, 7150, 13, 256) 0 fire9_expand1[0][0]
fire9_expand2[0][0]
____________________________________________________________________________________________________
fire9_dropout (Dropout) (None, 7150, 13, 256) 0 merge_8[0][0]
____________________________________________________________________________________________________
conv10 (Convolution2D) (None, 7150, 13, 2) 514 fire9_dropout[0][0]
____________________________________________________________________________________________________
avgpool10 (AveragePooling2D) (None, 550, 1, 2) 0 conv10[0][0]
____________________________________________________________________________________________________
flatten (Flatten) (None, 1100) 0 avgpool10[0][0]
____________________________________________________________________________________________________
softmax (Activation) (None, 1100) 0 flatten[0][0]
====================================================================================================
Total params: 683,954
Trainable params: 683,954
Non-trainable params: 0
____________________________________________________________________________________________________
None
Found 22778 images belonging to 2 classes.
Found 2222 images belonging to 2 classes.
Epoch 1/500
Any thoughts appreciated.
You shouldn't be using AveragePooling2D but GlobalAveragePooling2D; it reduces the spatial dimensions to 1, so the network produces an output of (None, 2).
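In code, the change would look something like this (a sketch against the model above; with global pooling the Flatten becomes a no-op and can be dropped):

from keras.layers import GlobalAveragePooling2D

# Averages each of conv10's nb_classes feature maps down to one value,
# yielding shape (None, 2) regardless of conv10's spatial size.
avgpool10 = GlobalAveragePooling2D(name='avgpool10')(conv10)
softmax = Activation('softmax', name='softmax')(avgpool10)
return Model(input=input_img, output=softmax)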

Getting error while running convolutional autoencoder in keras

I am getting an error while running the following code in Keras:
Traceback (most recent call last):
File "my_conv_ae.py", line 74, in <module>
validation_steps = nb_validation_samples // batch_size)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training.py", line 1890, in fit_generator
class_weight=class_weight)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training.py", line 1627, in train_on_batch
check_batch_axis=True)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training.py", line 1309, in _standardize_user_data
exception_prefix='target')
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training.py", line 127, in _standardize_input_data
str(array.shape))
ValueError: Error when checking target: expected conv2d_transpose_8 to have 4 dimensions, but got array with shape (20, 1)
The code is:
import keras
from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D, Conv2DTranspose
from keras.models import Model
from keras import backend as K
from keras.preprocessing.image import ImageDataGenerator
import numpy as np
input_img = Input(shape=(512, 512, 1))
nb_train_samples = 1700
nb_validation_samples = 420
epochs = 10
batch_size = 20
x = Conv2D(64, (11, 11), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(input_img)
x = Conv2D(64, (11, 11), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(128, (7, 7), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = Conv2D(128, (5, 5), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(256, (5, 5), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = Conv2D(256, (3, 3), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(512, (3, 3), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = Conv2D(512, (3, 3), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
encoded = MaxPooling2D((2, 2))(x)
print(K.int_shape(encoded))  # at this point the representation is (26, 26, 512)

x = UpSampling2D((2, 2))(encoded)
x = Conv2DTranspose(512, (3, 3), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = Conv2DTranspose(512, (3, 3), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2DTranspose(256, (3, 3), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = Conv2DTranspose(256, (5, 5), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2DTranspose(128, (5, 5), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = Conv2DTranspose(128, (7, 7), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
x = UpSampling2D((2, 2))(x)
x = Conv2DTranspose(64, (11, 11), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
decoded = Conv2DTranspose(1, (11, 11), activation='relu', strides=1, padding='valid', kernel_initializer='glorot_uniform')(x)
print(K.int_shape(decoded))
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
x_train = train_datagen.flow_from_directory(
    'data/train',
    target_size=(512, 512), color_mode='grayscale',
    batch_size=batch_size,
    class_mode='binary')
x_test = test_datagen.flow_from_directory(
    'data/validation',
    target_size=(512, 512), color_mode='grayscale',
    batch_size=batch_size,
    class_mode='binary')

autoencoder.fit_generator(
    x_train,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=x_test,
    validation_steps=nb_validation_samples // batch_size)
decoded_imgs = autoencoder.predict(x_test)
The model summary is as follows:
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 512, 512, 1) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 502, 502, 64) 7808
_________________________________________________________________
conv2d_2 (Conv2D) (None, 492, 492, 64) 495680
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 246, 246, 64) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 240, 240, 128) 401536
_________________________________________________________________
conv2d_4 (Conv2D) (None, 236, 236, 128) 409728
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 118, 118, 128) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 114, 114, 256) 819456
_________________________________________________________________
conv2d_6 (Conv2D) (None, 112, 112, 256) 590080
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 56, 56, 256) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 54, 54, 512) 1180160
_________________________________________________________________
conv2d_8 (Conv2D) (None, 52, 52, 512) 2359808
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 26, 26, 512) 0
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 52, 52, 512) 0
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 54, 54, 512) 2359808
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 56, 56, 512) 2359808
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 112, 112, 512) 0
_________________________________________________________________
conv2d_transpose_3 (Conv2DTr (None, 114, 114, 256) 1179904
_________________________________________________________________
conv2d_transpose_4 (Conv2DTr (None, 118, 118, 256) 1638656
_________________________________________________________________
up_sampling2d_3 (UpSampling2 (None, 236, 236, 256) 0
_________________________________________________________________
conv2d_transpose_5 (Conv2DTr (None, 240, 240, 128) 819328
_________________________________________________________________
conv2d_transpose_6 (Conv2DTr (None, 246, 246, 128) 802944
_________________________________________________________________
up_sampling2d_4 (UpSampling2 (None, 492, 492, 128) 0
_________________________________________________________________
conv2d_transpose_7 (Conv2DTr (None, 502, 502, 64) 991296
_________________________________________________________________
conv2d_transpose_8 (Conv2DTr (None, 512, 512, 1) 7745
=================================================================
Total params: 16,423,745
Trainable params: 16,423,745
Non-trainable params: 0
_________________________________________________________________
Please help me. Is this because of the Conv2DTranspose() layers I used for decoding?
It's definitely not a problem with the model architecture itself (it works on my side). It seems to be a problem with your ground-truth data: it must have the same dimensions as your input image, but flow_from_directory doesn't provide such ground-truth data. I guess you need to use your own custom data generator.
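A minimal sketch of such a generator (my own suggestion, untested against this exact setup): reuse flow_from_directory with class_mode=None so it yields only images, then return each batch as both input and target.

def autoencoder_generator(datagen, directory, batch_size):
    gen = datagen.flow_from_directory(
        directory,
        target_size=(512, 512), color_mode='grayscale',
        batch_size=batch_size,
        class_mode=None)  # yields image batches only, no labels
    for batch in gen:
        yield batch, batch  # target equals input for reconstruction

autoencoder.fit_generator(
    autoencoder_generator(train_datagen, 'data/train', batch_size),
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=autoencoder_generator(test_datagen, 'data/validation', batch_size),
    validation_steps=nb_validation_samples // batch_size)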
