I trained my model using transfer learning. Now, when I predict on an image in Colab, I get this error:
WARNING:tensorflow:Model was constructed with shape (None, 128, 128, 3) for input Tensor("xception_input:0", shape=(None, 128, 128, 3), dtype=float32), but it was called on an input with incompatible shape (None, 275, 3).
WARNING:tensorflow:Model was constructed with shape (None, 128, 128, 3) for input Tensor("input_1:0", shape=(None, 128, 128, 3), dtype=float32), but it was called on an input with incompatible shape (None, 275, 3).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-7-142a5ca8cbef> in <module>()
1 import numpy as np
----> 2 classes = np.argmax(model.predict(img), axis=-1)
3 print(classes)
.
.
.
ValueError: Input 0 of layer block1_conv1 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, 275, 3]
Basically, during training you were feeding a batch of images to the network, and the same is required at test/evaluation time. So the easy solution is to expand the dimensions of the img tensor so that it gets a leading batch dimension, i.e. from img.shape to (1, *img.shape).
img_test = tf.expand_dims(img, axis=0)
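Then predict on the expanded tensor. A minimal sketch, assuming img already has shape (128, 128, 3) (i.e. it was resized to the training resolution) and that model and img come from your notebook:
import numpy as np
import tensorflow as tf

img_test = tf.expand_dims(img, axis=0)                 # (128, 128, 3) -> (1, 128, 128, 3)
classes = np.argmax(model.predict(img_test), axis=-1)
print(classes)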
The message is saying that you trained your model using a shape of
shape=(None, 128, 128, 3)
but when you try to predict from the model you provided an input of
[None, 275, 3]
Obviously, this cannot be used by your model. First of all, you provided a 3-dimensional input, but your model expects a 4-dimensional one. Typically images are (height, width, 3); if you provide them in batches this becomes (batch_size, height, width, 3), and if you have just one image it becomes:
(1, height, width, 3)
So, you should check the input you provide your model with. With NumPy you typically use something like
np.expand_dims(original_image, axis=0)
to go from a 3-dimensional to a 4-dimensional input.
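For example, with the Keras image utilities a full prediction path could look like the sketch below. The file path is a placeholder, and any rescaling/preprocessing should match whatever was used during training:
import numpy as np
from tensorflow.keras.preprocessing import image

# load the image at the resolution the model was trained on
img = image.load_img('test.jpg', target_size=(128, 128))   # placeholder path
img = image.img_to_array(img)                               # (128, 128, 3)
img = np.expand_dims(img, axis=0)                           # (1, 128, 128, 3)
# apply the same preprocessing as during training (e.g. rescaling) before predicting
predictions = model.predict(img)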
Related
I have extracted features using AlexNet and want to use them as input to VGG19.
The shape of the features is (2144, 3), and the input shape of VGG19 is (224, 224, 3).
How can I reshape the features?
ValueError: Input 0 of layer "model_1" is incompatible with the layer: expected shape=(None, 224, 224, 3), found shape=(None, 3)
I am trying to use a pretrained ResNet50 network for a segmentation problem.
I remove the last layer and add my desired layer. But when I try to fit, I get the following error:
ValueError: Error when checking target: expected conv2d_1 to have shape (16, 16, 1) but got array with shape (512, 512, 1)
I have two folders: images and masks. images are RGB and masks are in grayscale.
The shape is 512x512 for all images.
I cannot figure out which part I am doing wrong.
Any help will be appreciated.
from keras.applications.resnet50 import ResNet50
from keras.layers import Input, Conv2D
from keras.models import Model

image_input = Input(shape=(512, 512, 3))
model = ResNet50(input_tensor=image_input, weights='imagenet', include_top=False)
x = model.output
x = Conv2D(1, (1,1), padding="same", activation="sigmoid")(x)
model = Model(inputs=model.input, outputs=x)
model.summary()
The relevant line of model.summary() is:
conv2d_1 (Conv2D)        (None, 16, 16, 1)     2049        activation_49[0][0]
for layer in model.layers[:-1]:
    layer.trainable = False
for layer in model.layers[-1:]:
    layer.trainable = True

from keras.optimizers import Adam
model.compile(optimizer=Adam(), loss='binary_crossentropy', metrics=['accuracy'])
Your network gives an output of shape (16, 16, 1), but your y (target) has shape (512, 512, 1).
Run the following to see this.
from keras.applications.resnet50 import ResNet50
from keras.layers import Input
image_input=Input(shape=(512, 512, 3))
model = ResNet50(input_tensor=image_input,weights='imagenet',include_top=False)
model.summary()
# Output shows that the ResNet50 network has output of shape (16,16,2048)
from keras.layers import Conv2D
conv2d = Conv2D(1, (1,1), padding="same", activation="sigmoid")
conv2d.compute_output_shape((None, 16, 16, 2048))
# Output shows the shape your network's output will have.
Either your y or the way you use ResNet50 has to change. Read about ResNet50 to see what you are missing.
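For example, you could either downsample the masks to (16, 16, 1), or add some form of upsampling on top of the backbone so the output comes back to (512, 512, 1). A minimal sketch of the latter is shown below; this is naive nearest-neighbour upsampling, not a proper decoder such as a U-Net:
from keras.applications.resnet50 import ResNet50
from keras.layers import Input, Conv2D, UpSampling2D
from keras.models import Model

image_input = Input(shape=(512, 512, 3))
backbone = ResNet50(input_tensor=image_input, weights='imagenet', include_top=False)

x = backbone.output                                              # (None, 16, 16, 2048)
x = Conv2D(1, (1, 1), padding="same", activation="sigmoid")(x)   # (None, 16, 16, 1)
x = UpSampling2D(size=(32, 32))(x)                               # (None, 512, 512, 1)

model = Model(inputs=backbone.input, outputs=x)
model.summary()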
I was trying to follow this tutorial
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
In the baseline model it has
model.add(Conv2D(32, (3, 3), input_shape=(3, 150, 150)))
I don't quite follow the output shape here. If the input shape is 3x150x150 with a kernel size of 3x3, isn't the output shape 3x148x148 (assuming no padding)? However, according to the Keras docs:
Output shape: 4D tensor with shape: (batch, filters, new_rows, new_cols)
That suggests the output shape will be 32x148x148. My question is whether this understanding is correct. If so, where do the additional filters come from?
If the input shape is (3, 150, 150), then after applying the Conv2D layer the output is (?, 32, 148, 148). Check it with the following example:
inps = Input(shape=(3, 150, 150))
conv = Conv2D(32, (3, 3), data_format='channels_first')(inps)
print(conv)
>> Tensor("conv2d/BiasAdd:0", shape=(?, 32, 148, 148), dtype=float32)
The first dimension, indicated by the ? symbol, is the batch size.
The second dimension is the number of filters (32).
The last two are the image height and width (148).
How do the channels change from 3 to 32? Let's assume we have an RGB image (3 channels) and first consider a single output channel. The following happens:
When you use filters=32 and kernel_size=(3,3), you are creating 32 different filters, each of them with shape (3, 3, 3). Each filter is convolved with the input and produces one output channel, so the result is 32 different convolutions, i.e. 32 output channels. Note that, by default, Keras initializes the kernels with glorot_uniform.
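You can confirm both the output shape and the resulting parameter count with a quick sketch (layer names may differ on your machine):
from keras.layers import Input, Conv2D
from keras.models import Model

inps = Input(shape=(3, 150, 150))
conv = Conv2D(32, (3, 3), data_format='channels_first')(inps)
model = Model(inps, conv)
model.summary()
# Conv2D output shape: (None, 32, 148, 148)
# Params: 32 filters * (3 * 3 * 3 weights) + 32 biases = 896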
I've built a model using the Keras Functional API, and it was working correctly when calling fit on the train set. Now I have decided to change the model to use my generator:
def data_generator():
    while 1:
        for i in range(len(sequences1)):
            yield ([sequences1[i], sequences2[i]], trainLabels[i])
and here is a data sample from my dataset:
sample = next(data_generator())
print(sample)
print(sample[0][0].shape)
# output:
# ([array([ 0, 0, 0, ..., 10, 14, 16], dtype=int32), array([ 0, 0, 0, ..., 19, 1, 4], dtype=int32)], 1)
# (34350,)
and here is my model summary (just the first two layers):
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 34350) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 34350) 0
but when I try to fit my model using this code:
model.fit_generator(data_generator(), epochs=15, steps_per_epoch=64)
I'm getting this error
ValueError: Error when checking input: expected input_1 to have shape (34350,) but got array with shape (1,)
How can I fix it?
The problem is that the generator must generate the data batch-by-batch. In other words, sample[0][0].shape should be (BATCH_SIZE, 34350), and the same applies to the second sequence and the labels.
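A minimal sketch of a batched generator, assuming sequences1, sequences2 and trainLabels are array-likes of the same length (the batch size of 32 is arbitrary):
import numpy as np

def data_generator(batch_size=32):
    n = len(sequences1)
    while True:
        for start in range(0, n, batch_size):
            end = start + batch_size
            # every yielded array now has a leading batch dimension
            yield ([np.asarray(sequences1[start:end]),
                    np.asarray(sequences2[start:end])],
                   np.asarray(trainLabels[start:end]))
With a batched generator, steps_per_epoch should be roughly len(sequences1) / batch_size rather than a fixed 64.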
I'm a bit new to Keras and deep learning. I'm currently trying to replicate this paper but when I'm compiling the first model (without the LSTMs) I get the following error:
"ValueError: Error when checking target: expected dense_3 to have shape (None, 120, 40) but got array with shape (8, 40, 1)"
The description of the model is this:
Input (length T is the appliance-specific window size)
Parallel 1D convolutions with filter sizes 3, 5, and 7 respectively, stride=1, number of filters=32, activation type=linear, border mode=same
Merge layer which concatenates the output of the parallel 1D convolutions
Dense layer, output_dim=128, activation type=ReLU
Dense layer, output_dim=128, activation type=ReLU
Dense layer, output_dim=T, activation type=linear
My code is this:
from keras import layers, Input
from keras.models import Model
# the window sizes (seq_length?) are 40, 1075, 465, 72 and 1246 for the kettle, dish washer,
# fridge, microwave, oven and washing machine, respectively.
def ae_net(T):
    input_layer = Input(shape=(T,))
    branch_a = layers.Conv1D(32, 3, activation='linear', padding='same', strides=1)(input_layer)
    branch_b = layers.Conv1D(32, 5, activation='linear', padding='same', strides=1)(input_layer)
    branch_c = layers.Conv1D(32, 7, activation='linear', padding='same', strides=1)(input_layer)
    merge_layer = layers.concatenate([branch_a, branch_b, branch_c], axis=1)
    dense_1 = layers.Dense(128, activation='relu')(merge_layer)
    dense_2 = layers.Dense(128, activation='relu')(dense_1)
    output_dense = layers.Dense(T, activation='linear')(dense_2)
    model = Model(input_layer, output_dense)
    return model

model = ae_net(40)
model.compile(loss='mean_absolute_error', optimizer='rmsprop')
model.fit(X, y, batch_size=8)
where X and y are NumPy arrays of 8 sequences of 40 values each, so X.shape and y.shape are (8, 40, 1); it's actually one batch of data. The thing is, I cannot understand how the output would be of shape (None, 120, 40) and what these sizes mean.
As you noted, your shapes contain batch_size, length and channels: (8,40,1)
Your three convolutions are, each one, creating a tensor like (8,40,32).
Your concatenation in the axis=1 creates a tensor like (8,120,32), where 120 = 3*40.
Now, the dense layers only work on the last dimension (the channels in this case), leaving the length (now 120) untouched.
Solution
Now, it seems you do want to keep the length dimension at the end, so you won't need any flatten or reshape layers; but you will need to keep the length at 40.
You're probably doing the concatenation in the wrong axis. Instead of the length axis (1), you should concatenate in the channels axis (2 or -1).
So, this should be your concatenate layer:
merge_layer = layers.Concatenate()([branch_a, branch_b, branch_c])
#or layers.Concatenate(axis=-1)([branch_a, branch_b, branch_c])
This will output (8, 40, 96), and the dense layers will transform the 96 into something else.
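For illustration, a quick shape check after the fix. This is only a sketch; Input(shape=(T, 1)) is assumed here so that the input matches X of shape (8, 40, 1):
from keras import layers, Input
from keras import backend as K

T = 40
input_layer = Input(shape=(T, 1))   # Conv1D expects (batch, steps, channels)
branch_a = layers.Conv1D(32, 3, activation='linear', padding='same', strides=1)(input_layer)
branch_b = layers.Conv1D(32, 5, activation='linear', padding='same', strides=1)(input_layer)
branch_c = layers.Conv1D(32, 7, activation='linear', padding='same', strides=1)(input_layer)
merge_layer = layers.Concatenate(axis=-1)([branch_a, branch_b, branch_c])
print(K.int_shape(merge_layer))     # (None, 40, 96): the length 40 is preserved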