simpleRNN input/output shape - keras

I have defined a SimpleRNN in Keras with the following code:
# define RNN architecture
from keras.models import Sequential
from keras.layers import SimpleRNN

model = Sequential()
model.add(SimpleRNN(units=10,
                    return_sequences=False,
                    unroll=True,
                    input_shape=(6, 2)))
model.compile(loss='mse',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.summary()
Then I feed it input data of shape (batch_size, 6, 2), i.e. 6 timesteps, each with two features. I therefore expect 6 SimpleRNN cells.
When launching training, I get the following error message:
Error when checking target: expected simple_rnn_2 to have shape (10,) but got array with shape (1,)
and I don't understand why.
My understanding of an RNN is that each cell is fed both the output of the previous cell (when it is not the first cell) and the input for the new timestep.
So in this case, I expect the second RNN cell to receive a vector of shape (10,) from the first cell, since units = 10. How come it gets a (1,)-sized vector?
What is strange is that adding a Dense layer to the model solves the issue. So the following architecture:
# define RNN architecture
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense

model = Sequential()
model.add(SimpleRNN(units=10,
                    return_sequences=False,
                    unroll=False,
                    input_shape=(6, 2)))
model.add(Dense(1, activation='relu'))
model.compile(loss='mse',
              optimizer='rmsprop',
              metrics=['accuracy'])
model.summary()
does not throw an error. Any idea why?

Assuming you are actually training the model (you did not include that code), the problem is that you are feeding it target outputs of shape (1,) while the SimpleRNN produces outputs of shape (10,). You can look up the docs here: https://keras.io/layers/recurrent/
The docs state that the output dimension of a recurrent layer equals units, which is 10 here; each unit produces one output.
The second sample works because the added Dense layer reduces the output size to (1,). The model can now accept your training targets, and the error is backpropagated through the network.
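To make the shape bookkeeping concrete, here is a minimal sketch with hypothetical random data (only the shapes matter here): either the targets must be 10-dimensional to match the raw SimpleRNN output, or a Dense layer must map the output down to the 1-dimensional targets.
import numpy as np
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense

X = np.random.random((32, 6, 2))  # 32 samples, 6 timesteps, 2 features
y = np.random.random((32, 1))     # one target value per sample

model = Sequential()
model.add(SimpleRNN(units=10, input_shape=(6, 2)))  # output shape: (batch, 10)
model.add(Dense(1))                                 # output shape: (batch, 1)
model.compile(loss='mse', optimizer='rmsprop')
model.fit(X, y, epochs=1)  # works: target shape (1,) matches the model output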

Related

First CNN and shapes error

I just started building my first CNN. I'm practicing with the MNIST dataset; this is the code I just wrote:
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dropout, Flatten, Dense
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.optimizers import Adam
from sklearn.preprocessing import RobustScaler
import os
import numpy as np
import matplotlib.pyplot as plt
# CONSTANTS
EPOCHS = 300
TIME_STEPS = 30000
NUM_CLASSES = 10
# Loading data
print('Loading data:')
(train_X, train_y), (test_X, test_y) = mnist.load_data()
print('X_train: ' + str(train_X.shape))
print('Y_train: ' + str(train_y.shape))
print('X_test: ' + str(test_X.shape))
print('Y_test: ' + str(test_y.shape))
print('------------------------------')
# Splitting train/val
print('Splitting training/validation set:')
X_train = train_X[0:TIME_STEPS, :]
X_val = train_X[TIME_STEPS:TIME_STEPS*2, :]
print('X_train: ' + str(X_train.shape))
print('X_val: ' + str(X_val.shape))
# Normalizing data
print('------------------------------')
print('Normalizing data:')
X_train = X_train/255
X_val = X_val/255
print('X_train: ' + str(X_train.shape))
print('X_val: ' + str(X_val.shape))
# Building model
model = Sequential()
model.add(Conv1D(filters=32, kernel_size=5, input_shape=(28, 28)))
model.add(Conv1D(filters=16, kernel_size=4, activation="relu"))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(NUM_CLASSES, activation='softmax'))
model.compile(optimizer=Adam(), loss=categorical_crossentropy, metrics=['accuracy'])
model.summary()
model.fit(x=X_train, y=X_train, batch_size=10, epochs=EPOCHS, shuffle=False)
I'm going to explain what I did; any correction would be helpful so I can learn more:
The first thing I did was split the training set in two parts: a training part and a validation part, so I can train on one before testing on the test set.
Then I normalized the data (is this standard when working with images?).
I then built my CNN with a simple structure: the first layer receives the inputs (with dimension 28x28), and I chose 32 filters, which should be enough to perform well on this dataset. The kernel size is the part I did not understand, since I thought the kernel was the equivalent of the filter; I selected a low number to avoid problems. The second layer is similar to the previous one, but now it has an activation function (relu, though I'm not convinced; I was thinking of using a softmax to pass a set of probabilities to the fully connected layers).
The last 3 layers are the fully connected layers that produce the output.
In the fit function I used a batch size of 10, and I think this could be one of the reasons I get the error:
ValueError: Shapes (10, 28, 28) and (10, 10) are incompatible
Even after removing it, I still get the following error:
ValueError: Shapes (None, 28, 28) and (None, 10) are incompatible
Am I missing something important?
You are passing the X_train variable in twice, once as the x argument and once as the y argument. Instead of passing X_train as the y argument of .fit(), you should pass an array of the values you are trying to predict. Given that you are using MNIST, I assume you are trying to predict the written digit, so your y array should have shape (n_samples, 10), with the digit one-hot encoded.
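A minimal sketch of the fix, reusing the variable names from the question and assuming the usual MNIST digit-classification setup: one-hot encode the labels so their shape matches the 10-way softmax output.
from tensorflow.keras.utils import to_categorical

# labels for the same 30000 samples used in X_train, one-hot encoded
y_train = to_categorical(train_y[0:TIME_STEPS], num_classes=10)  # shape (30000, 10)
model.fit(x=X_train, y=y_train, batch_size=10, epochs=EPOCHS, shuffle=False)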

Keras Conv1D: error of dimensions

I am trying to perform a classification using a CNN model.
I have 150 classes. My training set has 19470 rows and 1945 columns; it is a matrix containing 0s and 1s.
import keras
from keras.models import Sequential
from keras.layers import Conv1D
from keras.layers.advanced_activations import LeakyReLU

model = Sequential()
model.add(Conv1D(150, kernel_size=3, input_shape=(19470, 1945),
                 activation='linear', padding='same'))
model.add(LeakyReLU(alpha=0.1))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])
model.fit(x_train, y_train)
This raises:
ValueError: Error when checking input: expected conv1d_39_input to have 3 dimensions, but got array with shape (19470, 1945)
Did you check your x_train shape?
According to the error Keras is raising, you should reshape it: x_train = x_train.reshape(19470, 1945, 1). The input_shape argument should then be (1945, 1), since it excludes the batch dimension.
Also, I don't understand why you are using as many Conv1D filters as you have classes.
I can't give advice on the overall architecture of your network, but your last layer should be a Dense layer with 150 units and softmax activation. Don't you have 150 classes?
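A rough sketch of a corrected setup, under the assumption that each of the 19470 rows is one sample with 1945 binary features (the 32 filters are an arbitrary choice, not a recommendation):
import keras
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

# give each sample a channel dimension: (samples, steps, channels)
x_train = x_train.reshape(19470, 1945, 1)

model = Sequential()
model.add(Conv1D(32, kernel_size=3, activation='relu',
                 padding='same', input_shape=(1945, 1)))
model.add(Flatten())
model.add(Dense(150, activation='softmax'))  # one unit per class
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])
# y_train must be one-hot encoded to shape (19470, 150) for this loss
model.fit(x_train, y_train)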

how to build LSTM RNN network for binary classification?

I am trying to build a deep learning network for binary classification using an LSTM-based RNN.
Here is what I have tried in Python:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import LSTM
import numpy as np
train = np.loadtxt("TrainDatasetFinal.txt", delimiter=",")
test = np.loadtxt("testDatasetFinal.txt", delimiter=",")
y_train = train[:,7]
y_test = test[:,7]
train_spec = train[:,6]
test_spec = test[:,6]
model = Sequential()
model.add(Embedding(8, 256, input_length=1))
model.add(LSTM(output_dim=128, activation='sigmoid',
inner_activation='hard_sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop')
model.fit(train_spec, y_train, batch_size=2000, nb_epoch=11)
score = model.evaluate(test_spec, y_test, batch_size=2000)
Here is a sample from the dataset
(Patient Number, time in millisecond, accelerometer x-axis,y-axis,
z-axis,magnitude, spectrogram,label (0 or 1))
1,15,70,39,-970,947321,596768455815000,0
1,31,70,39,-970,947321,612882670787000,0
1,46,60,49,-960,927601,602179976392000,0
1,62,60,49,-960,927601,808020878060000,0
1,78,50,39,-960,925621,726154800929000,0
I believe that my problem is in these lines, but I cannot spot the error:
model.add(Embedding(8, 256, input_length=1))
model.add(LSTM(output_dim=128, activation='sigmoid',
inner_activation='hard_sigmoid'))
and this is the error I got:
InvalidArgumentError (see above for traceback): indices[0,0] = -2147483648 is not in [0, 8)
Is the sample from your dataset provided above the data you are trying to feed into the model? If so, there is a problem: your data is 2-dimensional, but an RNN needs a 3-dimensional input tensor, with a batch dimension, a time dimension and a feature dimension. It looks like you are missing a proper time dimension. You should not have a column with 15, 31, 46, ... (time in milliseconds); that should be shaped into its own dimension, so your input data looks like a "cube". Otherwise, you don't need a temporal model at all.
Furthermore, you should standardize your input, since your features have vastly different orders of magnitude. The immediate error comes from the Embedding(8, ...) layer: it expects integer indices in the range [0, 8), but your spectrogram values are huge floats that overflow when cast to integers, hence indices[0,0] = -2147483648.
Moreover, a batch size of 2000 is almost certainly too large. Or are you trying to say that your whole training set has 2000 samples? In that case you may not have enough training data for the model you are building.
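As a rough sketch of what "shaping time into its own dimension" could look like here, assuming the accelerometer columns (x, y, z, magnitude) are the features and picking a hypothetical window length of 50 readings:
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

T = 50  # hypothetical window length

features = train[:, 2:6]  # x, y, z, magnitude columns
labels = train[:, 7]

# standardize: the columns differ by many orders of magnitude
features = (features - features.mean(axis=0)) / features.std(axis=0)

# slide a window of T consecutive readings over the recording so the
# input becomes a (samples, timesteps, features) "cube"
X = np.stack([features[i:i + T] for i in range(len(features) - T)])
y = labels[T:]  # label of the step that follows each window

model = Sequential()
model.add(LSTM(64, input_shape=(T, 4)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])
model.fit(X, y, batch_size=32, epochs=10)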

Incompatible input in Keras Layer LSTM

I'm trying to replicate the example on Keras's website:
# as the first layer in a Sequential model
model = Sequential()
model.add(LSTM(32, input_shape=(10, 64)))
# now model.output_shape == (None, 32)
# note: `None` is the batch dimension.
# for subsequent layers, no need to specify the input size:
model.add(LSTM(16))
But when I run the following:
# only lines I've added:
from keras.models import Sequential
from keras.layers import Dense, LSTM
# all else is the same:
model = Sequential()
model.add(LSTM(32, input_shape=(10, 64)))
model.add(LSTM(16))
I get the following error:
ValueError: Input 0 is incompatible with layer lstm_4: expected ndim=3, found ndim=2
Versions:
Keras: '2.0.5'
Python: '3.4.3'
Tensorflow: '1.2.1'
By default, an LSTM layer returns only the last output of the sequence, so your data loses its sequential nature after the first layer. To change that, try:
model.add(LSTM(32, input_shape=(10, 64), return_sequences=True))
which makes the LSTM return the whole sequence of outputs, one per timestep.
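Putting it together, a sketch of the stacked model that runs cleanly:
from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(32, input_shape=(10, 64), return_sequences=True))
# the first layer now outputs (None, 10, 32), a 3-D tensor the next LSTM accepts
model.add(LSTM(16))  # returns only the last output: (None, 16)
model.summary()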

Python Keras LSTM input output shape issue

I am running Keras over TensorFlow, trying to implement a multi-dimensional LSTM network to predict a continuous target variable, a single value for each example (return_sequences=False).
My sequence length is 10 and the number of features (dim) is 11.
This is what I run:
import pprint, pickle
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
# Input sequence
wholeSequence = [[0,0,0,0,0,0,0,0,0,2,1],
[0,0,0,0,0,0,0,0,2,1,0],
[0,0,0,0,0,0,0,2,1,0,0],
[0,0,0,0,0,0,2,1,0,0,0],
[0,0,0,0,0,2,1,0,0,0,0],
[0,0,0,0,2,1,0,0,0,0,0],
[0,0,0,2,1,0,0,0,0,0,0],
[0,0,2,1,0,0,0,0,0,0,0],
[0,2,1,0,0,0,0,0,0,0,0],
[2,1,0,0,0,0,0,0,0,0,0]]
# Preprocess Data:
wholeSequence = np.array(wholeSequence, dtype=float) # Convert to NP array.
data = wholeSequence
target = np.array([20])
# Reshape training data for Keras LSTM model
data = data.reshape(1, 10, 11)
target = target.reshape(1, 1, 1)
# Build Model
model = Sequential()
model.add(LSTM(11, input_shape=(10, 11), unroll=True, return_sequences=False))
model.add(Dense(11))
model.add(Activation('linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(data, target, nb_epoch=1, batch_size=1, verbose=2)
and get the error:
ValueError: Error when checking target: expected activation_1 to have 2 dimensions, but got array with shape (1, 1, 1)
I'm not sure what shape the activation layer should expect.
Any help appreciated, thanks.
If you just want a single linear output neuron, you can simply use a Dense layer with one unit and supply the activation there. Your target can then stay a single vector, without the reshape. I adjusted your example code to make it work:
wholeSequence = np.array(wholeSequence, dtype=float) # Convert to NP array.
data = wholeSequence
target = np.array([20])
# Reshape training data for Keras LSTM model
data = data.reshape(1, 10, 11)
# Build Model
model = Sequential()
model.add(LSTM(11, input_shape=(10, 11), unroll=True, return_sequences=False))
model.add(Dense(1, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(data, target, nb_epoch=1, batch_size=1, verbose=2)
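As a hypothetical variant, if you later want one prediction per timestep instead of one per sequence, you can return the full sequence and apply the Dense layer at every step; the target then needs a matching 3-D shape:
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed

model = Sequential()
model.add(LSTM(11, input_shape=(10, 11), return_sequences=True))
model.add(TimeDistributed(Dense(1, activation='linear')))
# targets would then need shape (batch, 10, 1)
model.compile(loss='mean_squared_error', optimizer='adam')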
