While trying to train an LSTM model, I reshaped my input data to (1000, 96, 1) and my output data to (1000, 24, 1), meaning I want to predict the next 24 values from the previous 96.
When I add a TimeDistributed Dense layer as the last layer, I get an error:
ValueError: Error when checking target: expected time_distributed_1 to have shape (96, 1) but got array with shape (24, 1)
So what's wrong?
Here is my code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, TimeDistributed

modelA = Sequential()
modelA.add(LSTM(units=64, return_sequences=True,
                input_shape=[xProTrain_3D.shape[1], xProTrain_3D.shape[2]]))
modelA.add(LSTM(units=128, return_sequences=True))
modelA.add(Dropout(0.25))
modelA.add(LSTM(units=256, return_sequences=True))
modelA.add(Dropout(0.25))
modelA.add(LSTM(units=128, return_sequences=True))
modelA.add(Dropout(0.25))
modelA.add(LSTM(units=64, return_sequences=True))
modelA.add(Dropout(0.25))
modelA.add(TimeDistributed(Dense(units=1, activation='relu', input_shape=(24, 1))))
modelA.compile(optimizer='Adam', loss='mse', metrics=['mse'])
modelA.summary()
modelA.fit(x=xProTrain_3D, y=yProTrain_3D, epochs=epoch, batch_size=batch_size)
By the way, the input shape is (1000, 96, 1) and the output shape is (1000, 24, 1).
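In case it helps, a hedged sketch of one common fix: every LSTM in the stack keeps return_sequences=True, so the model's output has 96 timesteps and can never match a (24, 1) target, and the input_shape passed to the final Dense is simply ignored. One way to emit exactly 24 timesteps is an encoder-decoder layout that summarizes the 96 input steps and repeats the summary 24 times (a sketch under those assumptions, not the only possible fix):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

modelA = Sequential()
# Encoder: collapse the 96 input timesteps into one summary vector
modelA.add(LSTM(units=64, input_shape=(96, 1)))
# Repeat that summary 24 times so the decoder produces 24 timesteps
modelA.add(RepeatVector(24))
# Decoder: one value per output timestep -> output shape (batch, 24, 1)
modelA.add(LSTM(units=64, return_sequences=True))
modelA.add(TimeDistributed(Dense(units=1)))
modelA.compile(optimizer='Adam', loss='mse', metrics=['mse'])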
I am trying to build a CNN-LSTM model for a project, and the first LSTM layer is throwing an error:

model.add(LSTM(100, input_shape=(156, 156, 3), return_sequences=True))  # error
model.add(LSTM(Embedding(8192, 256)))
model.add(LSTM(SpatialDropout1D(0.3)))
model.add(LSTM(256, dropout=0.3, recurrent_dropout=0.3))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(5, activation='softmax'))

ValueError: Input 0 of layer "lstm_10" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 1)

The shapes of my data:
x_train shape: (1532, 156, 156, 3), y_train shape: (1532,)
x_test shape: (384, 156, 156, 3), y_test shape: (384,)
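A hedged sketch of one way around this: an LSTM expects 3-D input of shape (batch, timesteps, features), so the (156, 156, 3) images could be flattened row-wise into 156 timesteps of 156 * 3 = 468 features. Note also that Embedding and SpatialDropout1D are being passed as arguments to LSTM above, which is not valid Keras; they only make sense for integer token input, so this sketch drops them. The loss below assumes y_train holds integer class labels, as its (1532,) shape suggests:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Reshape, LSTM, Dense, Dropout

model = Sequential()
# Collapse width and channels so the LSTM sees (batch, 156, 468)
model.add(Reshape((156, 156 * 3), input_shape=(156, 156, 3)))
model.add(LSTM(256, dropout=0.3, recurrent_dropout=0.3))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(5, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])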
I am developing an LSTM program for an NLP problem.
The shape of my data and labels is (10, 20, 1).
My model code looks like this:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, RepeatVector, TimeDistributed, Dense

model = Sequential()
model.add(Embedding(18, 17, input_length=20, weights=[embedding_weights]))  # embedding_weights has shape (18, 17)
# encoder layer
model.add(LSTM(100, activation='relu', input_shape=(20, 1)))
# repeat vector
model.add(RepeatVector(20))
# decoder layer
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mse')
I am getting the following error:
"input_length" is 20, but received input has shape (None, 20, 1)
Here's my code:
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

(x_train, y_train), (x_test, y_test) = mnist.load_data()

def create_model():
    model = tf.keras.models.Sequential()
    model.add(Conv2D(64, (3, 3), input_shape=x_train.shape[1:], activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(1024, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

model = create_model()
The input data shape is (60000, 28, 28); it's the Keras MNIST dataset.
And here's the error:
ValueError: Input 0 of layer conv2d_1 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 28, 28]
And I have no idea what's wrong with it.
Input shape (from the Conv2D documentation):
4D tensor with shape (batch, channels, rows, cols) if data_format is "channels_first", or 4D tensor with shape (batch, rows, cols, channels) if data_format is "channels_last".
Conv2D expects each sample to carry a channel dimension, but you have given only (rows, cols). MNIST images are grayscale, so with the default "channels_last" format the per-sample shape should be (28, 28, 1). Creating a variable like image_size = (28, 28, 1) and passing input_shape = image_size might work for you, provided the data itself is expanded to match (see the fix below).
I realized my mistake: MNIST data has shape (samples, width, height), while Conv2D layers require a shape (samples, width, height, channels), so the solution is to add an extra dimension:

x_train = x_train[..., np.newaxis]  # (60000, 28, 28) -> (60000, 28, 28, 1)
x_test = x_test[..., np.newaxis]
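For what it's worth, np.expand_dims(x_train, -1) or x_train.reshape(-1, 28, 28, 1) produce the same (60000, 28, 28, 1) result.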
I am trying to fit my data into Conv2D + LSTM layers, but I get an error at the last Dense layer. I already tried to reshape, but it gives me the same error, and because I am new to Python I could not work out how to fix it. My model combines a CNN with an LSTM layer, and I have 2892 training images and 1896 test images (4788 in total), each of size 128x128.

Here is some of my code; it ends by printing the final model summary:
cnn_model = Sequential()
cnn_model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(32, (3, 3), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(64, (3, 3), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(128, (3, 3), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Flatten())

model = Sequential()
model.add(cnn_model)
model.add(Reshape((4608, 1)))
model.add(LSTM(16, return_sequences=True, dropout=0.5))
model.add(Dense(3, activation='softmax'))
model.compile(optimizer='adadelta', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

X_data = np.array(X_data)
X_datatest = np.array(X_datatest)
X_data = X_data.astype('float32') / 255.
X_datatest = X_datatest.astype('float32') / 255.

hist = model.fit(X_data, X_data, epochs=15, batch_size=128, verbose=2,
                 validation_data=(X_datatest, X_datatest))
I expect the problem to be in the Dense layer, as it produces the following error:
Traceback (most recent call last):
  File "C:\Users\bdyssm\Desktop\Master\LSTMCNN2.py", line 212, in <module>
    hist = model.fit(X_data, X_data, epochs=15, batch_size=128, verbose=2, validation_data=(X_datatest, X_datatest))
  File "C:\Users\bdyssm\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training.py", line 952, in fit
    batch_size=batch_size)
  File "C:\Users\bdyssm\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training.py", line 789, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\bdyssm\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training_utils.py", line 128, in standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_1 to have 3 dimensions, but got array with shape (2892, 128, 128, 3)
And this is the cnn_model summary.
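A hedged observation: model.fit(X_data, X_data, ...) passes the images themselves as the targets, so Keras compares the (2892, 128, 128, 3) array against the output of dense_1, which is exactly the "checking target" mismatch reported. A minimal sketch of the usual classification setup, assuming Y_data and Y_datatest are hypothetical one-hot label arrays of shape (N, 3):

# Assumed names: Y_data / Y_datatest hold one-hot labels of shape (N, 3)
model = Sequential()
model.add(cnn_model)
model.add(Reshape((4608, 1)))
model.add(LSTM(16, dropout=0.5))           # no return_sequences -> output (None, 16)
model.add(Dense(3, activation='softmax'))  # (None, 3) matches the (N, 3) labels
model.compile(optimizer='adadelta', loss='categorical_crossentropy', metrics=['accuracy'])

hist = model.fit(X_data, Y_data, epochs=15, batch_size=128, verbose=2,
                 validation_data=(X_datatest, Y_datatest))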
My problem here is that I want to make the number of input channels match the dimension of the filters. I already tried to reshape, but it gives me the same error, and because I am new to Python I could not work out how to fix it. The model again combines a CNN with an LSTM layer, with 2892 training images and 1896 test images (4788 in total), each of size 128x128.

Here is some of what I have tried:
cnn_model = Sequential()
cnn_model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(32, (3, 3), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(64, (3, 3), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Conv2D(128, (3, 3), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2)))
cnn_model.add(Flatten())

model = Sequential()
model.add(TimeDistributed(cnn_model, input_shape=(1, 128, 128, 3)))
model.add(LSTM(16, return_sequences=True, dropout=0.5))
model.add(Dense(1, activation='softmax'))
model.compile(optimizer='adadelta', loss='categorical_crossentropy', metrics=['accuracy'])

X_data = np.array(X_data)
X_datatest = np.array(X_datatest)
X_data = X_data.astype('float32') / 255.
X_datatest = X_datatest.astype('float32') / 255.

hist = model.fit(X_data, X_data, epochs=15, batch_size=128, verbose=2,
                 validation_data=(X_datatest, X_datatest))
When trying the previous code, the following error showed up:
Traceback (most recent call last):
  File "C:\Users\bdyssm\Desktop\Master\LSTMCNN2.py", line 219, in <module>
    hist = model.fit(X_data, X_data, epochs=15, batch_size=128, verbose=2, validation_data=(X_datatest, X_datatest))
  File "C:\Users\bdyssm\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training.py", line 952, in fit
    batch_size=batch_size)
  File "C:\Users\bdyssm\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training.py", line 751, in _standardize_user_data
    exception_prefix='input')
  File "C:\Users\bdyssm\AppData\Local\Programs\Python\Python35\lib\site-packages\keras\engine\training_utils.py", line 128, in standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking input: expected time_distributed_1_input to have 5 dimensions, but got array with shape (2892, 28, 28, 3)
This is the model summary
This is the cnn_model summary
The problem is that your cnn_model has changed the shape of your signal to have 128 channels instead of 3 color channels, but you are not taking this into account when declaring the input shape of model.
Examine the output shape of cnn_model with cnn_model.summary() and make sure the input shape of model matches the output shape of cnn_model.
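Beyond that, the traceback itself complains about dimensionality: time_distributed_1 expects 5-D input (batch, time, rows, cols, channels), but the arrays are 4-D. A hedged sketch of one way to reconcile the two, assuming X_data holds (N, 128, 128, 3) images and one timestep per sample, as declared in input_shape=(1, 128, 128, 3):

import numpy as np

# Add a length-1 time axis so each sample matches (1, 128, 128, 3)
X_data = X_data[:, np.newaxis, ...]          # (N, 128, 128, 3) -> (N, 1, 128, 128, 3)
X_datatest = X_datatest[:, np.newaxis, ...]

Note also that Dense(1, activation='softmax') will always output 1.0; with categorical_crossentropy, the unit count should match the number of classes.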