import keras.layers as KL
input_image = KL.Input([None, None, 3], name='input_image')
x = KL.Conv2D(64, (3,3), padding='same')(input_image)
After the Conv layer, I want to add a Dense layer, as below:
KL.Dense(2)(KL.Flatten()(x))
but this raises an error:
ValueError: The shape of the input to "Flatten" is not fully defined
(got (None, None, 64). Make sure to pass a complete "input_shape" or
"batch_input_shape" argument to the first layer in your model.
So if I want a model containing Conv layers followed by Dense layers that can accept input of any size, how should I do it?
Neural networks generally don't work with variable-sized inputs, unless you are dealing with recurrent neural networks.
With a network with variable sized input, what would the weights of the network look like?
Typically, you will pick a size for your input layer and resize or pad your input to match that size.
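For example, a minimal sketch of the fixed-size approach (the 224x224 size here is an arbitrary assumption):

import keras.layers as KL
from keras.models import Model

# With fixed spatial dimensions, the flattened size is known at build time
input_image = KL.Input([224, 224, 3], name='input_image')
x = KL.Conv2D(64, (3, 3), padding='same')(input_image)
x = KL.Flatten()(x)       # shape is now fully defined: 224 * 224 * 64
output = KL.Dense(2)(x)
model = Model(input_image, output)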
Although it's not the same as flattening your input, you could use Global Max Pooling:
x = KL.GlobalMaxPooling2D()(x)
This will change your dimension from (None, None, None, 64) to (None, 64) (including the batch dimension). Global Max Pooling is a common way to close out the convolutional part of a network and feed the output into a dense network.
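Putting it together, a sketch of the full variable-size model based on the code above:

import keras.layers as KL
from keras.models import Model

input_image = KL.Input([None, None, 3], name='input_image')  # any height/width
x = KL.Conv2D(64, (3, 3), padding='same')(input_image)
x = KL.GlobalMaxPooling2D()(x)   # (None, None, None, 64) -> (None, 64)
output = KL.Dense(2)(x)          # Dense now sees a fixed-size vector
model = Model(input_image, output)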
To build a CNN model, you should use a pooling layer followed by a Flatten layer, as you can see in the example below.
The pooling layer reduces the amount of data to be processed by the rest of the network, and Flatten then turns the data into a "normal" input for a Dense layer. Moreover, a convolutional layer is conventionally followed by a pooling one.
The example below is for a 1D CNN, but it has the same structure as the 2D case. Again, Flatten() reshapes the output so it can be used properly by the final Dense layer.
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dropout, Dense

model = Sequential()
model.add(Conv1D(num_filters_to_use, filters_size_tuple, input_shape=features_array_shape, activation='relu'))
model.add(MaxPooling1D(pool_size=2))  # halves the temporal dimension
model.add(Flatten())                  # (steps, filters) -> (steps * filters,)
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
I want to implement MNIST with an MLP using Keras. To begin with, I am using just 2 layers, but I get the error: "expected activation_9 to have 3 dimensions, but got array with shape (60000, 10)". How can I fix it?
from keras.models import Sequential
from keras.layers import Dense, Activation

input_shape = x_train[0].shape
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=input_shape))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
mdl = model.fit(x_train, y_train, epochs=5, batch_size=128)
As your first layer, try using:
tf.keras.layers.Flatten()
The Dense layer needs a 1-dimensional array per sample, but the images are 2-D. This layer flattens them to 1-D.
Dense usually expects 2-D data, (batch, features). So you need to use Flatten(), or better, use Conv2D layers followed by Flatten(), which are better suited for image classification tasks.
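Concretely, the model could be fixed like this (a sketch, assuming x_train has shape (60000, 28, 28) and y_train is one-hot encoded):

from keras.models import Sequential
from keras.layers import Flatten, Dense, Activation

model = Sequential()
model.add(Flatten(input_shape=x_train[0].shape))  # (28, 28) -> (784,)
model.add(Dense(64, activation='relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=128)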
I have 7 categorical features, and I am trying to add a CNN layer after an Embedding layer.
My first layer is the Input layer, the second is the Embedding layer, and as the third I want to add a Conv2D layer.
I've tried input_shape=(7,36,1) in Conv2D, but that didn't work.
from keras.layers import Input, Embedding, Conv2D, Flatten

input2 = Input(shape=(7,))
embedding2 = Embedding(76474, 36)(input2)
# 76474 is the number of datapoints (rows)
# 36 is the output dim of the embedding layer
cnn1 = Conv2D(64, (3, 3), activation='relu')(embedding2)
flat2 = Flatten()(cnn1)
But I'm getting this error:
Input 0 of layer conv2d is incompatible with the layer: expected
ndim=4, found ndim=3. Full shape received: [None, 7, 36]
The output of an embedding layer is 3D, namely (samples, seq_length, features), where features = 36 is the dimensionality of the embedding space, and seq_length = 7 is the sequence length. A Conv2D layer requires an image, which is usually represented as a 4D tensor (samples, width, height, channels).
Only a Conv1D layer would make sense, as it also takes 3D-shaped data, typically (samples, width, channels). You then need to decide whether you want to convolve across the sequence length or across the features dimension. That's something you need to experiment with; in the end, it amounts to deciding which is the "spatial dimension" in the output of the embedding.
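A sketch of the Conv1D variant, convolving across the sequence length (the kernel size of 3 is an arbitrary choice; as an aside, the first argument to Embedding should be the vocabulary size, not the number of rows):

from keras.layers import Input, Embedding, Conv1D, Flatten

input2 = Input(shape=(7,))
embedding2 = Embedding(76474, 36)(input2)            # output: (None, 7, 36)
cnn1 = Conv1D(64, 3, activation='relu')(embedding2)  # slides over the 7 positions
flat2 = Flatten()(cnn1)                              # (None, 5, 64) -> (None, 320)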
I'm making an autoencoder for depth estimation from monocular images. The first layer is a convolutional layer and the second is a convolutional LSTM layer. How do I add the ConvLSTM2D layer after the Conv2D layer?
This is the code I've tried, but it gives an error.
from keras.models import Sequential
from keras.layers import Conv2D, LeakyReLU, ConvLSTM2D

autoencoder = Sequential()
autoencoder.add(Conv2D(64, (3, 3), strides=2, input_shape=(640, 480, 3), activation='linear'))
autoencoder.add(LeakyReLU(alpha=0.1))
autoencoder.add(ConvLSTM2D(256, (3, 3), strides=2, input_shape=(None, 32), return_sequences=True))
I get the following error:
ValueError: Input 0 is incompatible with layer conv_gr_u2d_1: expected
ndim=5, found ndim=4
You may have misunderstood what ConvLSTM2D is good for. It is designed for the scenario where you have a series of data points, each of which is a picture. So a movie would be a typical use case.
So, whatever you feed into it must have the shape (batch_size, timesteps, rows, cols, channels). On the other hand, Conv2D has an output shape of (batch_size, rows, cols, features). This is what the error is telling you.
Technically, you could just add a Reshape layer between those and generate whatever shape you want, but I don't see how this would make any sense in your scenario.
Having it vice versa (ConvLSTM2D first, then Conv2D) would make much more sense. But then you need "movie-like" input data. If I understand you correctly, you don't have that.
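For reference, a sketch of what that reversed layout could look like if you did have movie-like data (the 10 timesteps here are an assumption):

from keras.models import Sequential
from keras.layers import ConvLSTM2D, Conv2D

autoencoder = Sequential()
# ConvLSTM2D consumes (batch, timesteps, rows, cols, channels)
autoencoder.add(ConvLSTM2D(64, (3, 3), strides=2, input_shape=(10, 640, 480, 3), return_sequences=False))
# with return_sequences=False, the output is (batch, rows, cols, filters),
# which is exactly the 4D input that Conv2D expects
autoencoder.add(Conv2D(256, (3, 3), strides=2, activation='relu'))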
The full input to Conv2D should be 4-D:
input_shape = (batch_size, img_wd, img_hg, channels)
e.g.:
input_shape = (None, 640, 480, 3)
and you don't have to add an input_shape argument in ConvGRU2D.
I have a simple network defined:
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Dense

model = Sequential()
model.add(Conv1D(5, 3, activation='relu', input_shape=(10, 1), name="conv1", padding="same"))
model.add(MaxPooling1D())
model.add(Conv1D(5, 3, activation='relu', name="conv2", padding="same"))
model.add(MaxPooling1D())
model.add(Dense(1, activation='relu', name="dense1"))
model.compile(loss='mse', optimizer='rmsprop')
The output shapes of the layers are as follows:
conv1 - (None, 10, 5)
max1 - (None, 5, 5)
conv2 - (None, 5, 5)
max2 - (None, 2, 5)
dense1 - (None, 2, 1)
The model has a total of 106 parameters. However, if I remove the max-pooling layers, the model summary looks as follows:
conv1 - (None, 10, 5)
conv2 - (None, 10, 5)
dense1 - (None, 10, 1)
In both cases the total number of parameters remains 106, so why is it commonly written that max-pooling reduces the number of parameters?
Which kind of network? It's all up to you. Whether pooling reduces the parameter count depends on what follows it:
Conv layers: no
Dense layers:
    Directly after Conv or Pooling layers:
        With "channels_last": no
        With "channels_first": yes
    After Flatten layers: yes
    After GlobalPooling layers: no
Your network: no.
Explanations
Poolings and GlobalPoolings change the image sizes, but don't change the number of channels.
Conv layers are fixed-size filters that stride along the images. The filter size is independent of the image size, so their parameter count does not change; it depends only on the kernel size and the number of channels.
Dense layers work on the last dimension only:
    If the last dimension is the channels, pooling layers don't affect it.
    If the last dimension is an image side, it is affected.
Flatten layers collapse the image sizes and channels into a single dimension.
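A quick way to see the "after Flatten: yes" case is to compare parameter counts directly (a sketch; the layer sizes mirror the question's network):

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

# With pooling, Flatten sees (5, 5), so Dense(1) has 5*5 + 1 = 26 parameters
with_pool = Sequential([
    Conv1D(5, 3, padding='same', input_shape=(10, 1)),   # 20 parameters
    MaxPooling1D(),
    Flatten(),
    Dense(1),
])
with_pool.summary()

# Without pooling, Flatten sees (10, 5), so Dense(1) has 10*5 + 1 = 51 parameters
without_pool = Sequential([
    Conv1D(5, 3, padding='same', input_shape=(10, 1)),   # 20 parameters
    Flatten(),
    Dense(1),
])
without_pool.summary()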
I want to use an LSTM neural network with Keras to forecast groups of time series, and I am having trouble making the model match what I want. The dimensions of my data are:
input tensor: (data length, number of series to train, time steps to look back)
output tensor: (data length, number of series to forecast, time steps to look ahead)
Note: I want to keep the dimensions exactly like that, no transposition.
A dummy-data example that reproduces the problem is:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, TimeDistributed, LSTM
epoch_number = 100
batch_size = 20
input_dim = 4
output_dim = 3
look_back = 24
look_ahead = 24
n = 100
trainX = np.random.rand(n, input_dim, look_back)
trainY = np.random.rand(n, output_dim, look_ahead)
print('train X:', trainX.shape)
print('train Y:', trainY.shape)
model = Sequential()
# add the first LSTM layer (the intermediate layers need to pass the sequences to the next layer)
model.add(LSTM(10, batch_input_shape=(None, input_dim, look_back), return_sequences=True))
# add the second LSTM layer (the dimensions are only needed in the first layer)
model.add(LSTM(10, return_sequences=True))
# the TimeDistributed wrapper allows a 3D output
model.add(TimeDistributed(Dense(look_ahead)))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.fit(trainX, trainY, epochs=epoch_number, batch_size=batch_size, verbose=1)
This throws:
Exception: Error when checking model target: expected
timedistributed_1 to have shape (None, 4, 24) but got array with shape
(100, 3, 24)
The problem seems to be when defining the TimeDistributed layer.
How do I define the TimeDistributed layer so that it compiles and trains?
The error message is a bit misleading in your case. The output node of the network is called timedistributed_1 because that's the last node in your sequential model. What the error message is trying to tell you is that the output of this node does not match the target your model is fitting to, i.e. your labels trainY.
Your trainY has a shape of (n, output_dim, look_ahead), i.e. (100, 3, 24), but the network produces an output of shape (batch_size, input_dim, look_ahead). The problem in this case is that output_dim != input_dim. If your time dimension changes, you may need padding or a network node that removes that timestep.
I think the problem is that you expect output_dim (!= input_dim) at the output of TimeDistributed, while that's not possible. This dimension is what it considers the time dimension, and it is preserved.
The input should be at least 3D, and the dimension of index one will
be considered to be the temporal dimension.
The purpose of TimeDistributed is to apply the same layer to each time step. You can only end up with the same number of time steps as you started with.
If you really need to bring down this dimension from 4 to 3, I think you will need to either add another layer at the end, or use something different from TimeDistributed.
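One possible workaround (a sketch, not the only option) is to move output_dim to the last axis with Permute, map it down with a Dense layer, and permute back:

from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense, Permute

model = Sequential()
model.add(LSTM(10, batch_input_shape=(None, input_dim, look_back), return_sequences=True))
model.add(LSTM(10, return_sequences=True))
model.add(TimeDistributed(Dense(look_ahead)))  # (None, input_dim, look_ahead)
model.add(Permute((2, 1)))                     # (None, look_ahead, input_dim)
model.add(Dense(output_dim))                   # Dense acts on the last axis: (None, look_ahead, output_dim)
model.add(Permute((2, 1)))                     # (None, output_dim, look_ahead)
model.compile(loss='mean_squared_error', optimizer='adam')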
PS: one hint towards finding this issue was that output_dim is never used when creating the model; it only appears in the target data. While it's only a code smell (there might not be anything wrong with this observation), it's something worth checking.