Input for LSTM in case of time series data

I have a (non-text) dataset of size (1152, 151). I wished to divide it into 8 batches of 144 samples each for training an LSTM network in Keras, so I reshaped the data to (8, 144, 151). Is this the right input shape? When I fed this in with return_sequences=False on both this LSTM layer and the next one, I got an error:
expected ndim=3, got ndim=2.
X_train = X_train.reshape((8, 144, 151))

def deepmodel():
    model = Sequential()
    model.add(LSTM(8, input_shape=(144, 151), return_sequences=False))
    model.add(LSTM(8, return_sequences=False))
    model.add(Dense(8))
    model.add(Activation('softmax'))
    adam = Adam()
    model.compile(loss='categorical_crossentropy', optimizer=adam)
    return model

You set your batch size in model.fit(..., batch_size=8), not in the input shape. Also, when stacking LSTM layers, the first layer needs return_sequences=True so that the second LSTM receives 3-D input; that is exactly what the ndim error is complaining about. Look at the example below; it should clear up your error message. If you are looking for multiple time lags, be sure to check out this wonderful blog post.
X_train = X_train.reshape((X_train.shape[0], 1, X_train.shape[1]))

def deepmodel():
    model = Sequential()
    # return_sequences=True so the second LSTM receives 3-D input
    model.add(LSTM(8, input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True))
    model.add(LSTM(8, return_sequences=False))
    model.add(Dense(8))
    model.add(Activation('softmax'))
    adam = Adam()
    model.compile(loss='categorical_crossentropy', optimizer=adam)
    return model
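For completeness, a minimal sketch of the corresponding fit call (y_train and the epoch count are placeholders, not from the question):

model = deepmodel()
# the batch size goes here, not into the reshape
model.fit(X_train, y_train, epochs=10, batch_size=8)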

Related

GradCam applied to video sequence classification with TimeDistributed CNN and LSTM

After days of working on it, I haven't found any reasonable way of doing this, so here I am.
I have a network that aims to predict the class of the next video, given the features of the current one. Each video is composed of 30 frames. The idea is to apply a feature extraction method to each input, then feed the result into an LSTM + Dense layer to make the prediction.
Here is the code:
from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Input, Dense, LSTM, TimeDistributed, GlobalAveragePooling2D
from keras.initializers import he_uniform, glorot_normal
from keras.optimizers import Adam

video = Input(shape=(30, 299, 299, 3))
inc = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
cnn_out = GlobalAveragePooling2D()(inc.output)
cnn = Model(inputs=inc.input, outputs=cnn_out)
encoded_frames = TimeDistributed(cnn)(video)
encoded_sequence = LSTM(128, activation='relu', return_sequences=False, kernel_initializer=he_uniform(), bias_initializer='zeros', dropout=0.5)(encoded_frames)
hidden_layer = Dense(1024, activation='relu', kernel_initializer=he_uniform(), bias_initializer='zeros')(encoded_sequence)
outputs = Dense(4, activation="softmax", kernel_initializer=glorot_normal(), bias_initializer='zeros')(hidden_layer)
model = Model(inputs=[video], outputs=outputs)
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])
I would like to visualize the feature activations at the CNN stage for each image. If I can look at the saliency map for each input frame, I can understand which features are more important than others for this kind of prediction.
All the examples on the internet deal with just one CNN and one input image; is there any way of doing this for a sequence?
Any help is really appreciated, thanks!
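One way to approach this is a plain gradient saliency map per frame: take the gradient of the target class score with respect to the video input, which works even though the CNN is wrapped in TimeDistributed. A minimal sketch, assuming the graph-mode Keras backend used above; class_idx and the sample video are placeholders:

from keras import backend as K
import numpy as np

class_idx = 0  # hypothetical target class
class_score = model.output[:, class_idx]

# gradient of the class score w.r.t. the raw video input: (1, 30, 299, 299, 3)
grads = K.gradients(class_score, video)[0]
saliency_fn = K.function([video], [grads])

sample = np.random.rand(1, 30, 299, 299, 3).astype('float32')  # placeholder video
saliency = saliency_fn([sample])[0]
# collapse the channel axis to get one heatmap per frame: (1, 30, 299, 299)
frame_heatmaps = np.max(np.abs(saliency), axis=-1)

For proper Grad-CAM you would instead take gradients with respect to the last convolutional feature maps inside the inc model, but the per-frame input saliency above already shows which regions drive the prediction.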

How to merge new features at later stage of model?

I have training data in the form of numpy arrays that I will use in a ConvLSTM.
The dimensions of the array are as follows:
trainX = (7000, 200, 8), where 7000 is the number of samples, 200 is the number of time steps per sample, and 8 is the number of features per time step (samples, timesteps, features).
Out of these 8 features, 3 remain the same throughout all time steps in a sample (in other words, these features are directly related to the samples), for example day of the week, month number, weekday (these change from sample to sample). To reduce the complexity, I want to keep these three features separate from the initial training set and merge them with the output of the ConvLSTM layer before applying the dense layer for classification (softmax activation). E.g. the initial training set dimension would be (7000, 200, 5) and the auxiliary input dimensions to be merged would be (7000, 3), because these 3 features are directly related to the samples. How can I implement this using Keras?
Following is my code, written using the Functional API, but I don't know how to merge these two inputs.
#trainX.shape=(7000,200,5)
#trainy.shape=(7000,4)
#testX.shape=(3000,200,5)
#testy.shape=(3000,4)
#trainMetadata.shape=(7000,3)
#testMetadata.shape=(3000,3)
verbose, epochs, batch_size = 1, 50, 256
samples, n_features, n_outputs = trainX.shape[0], trainX.shape[2], trainy.shape[1]
n_steps, n_length = 4, 50
input_shape = (n_steps, 1, n_length, n_features)
model_input = Input(shape=input_shape)
clstm1 = ConvLSTM2D(filters=64, kernel_size=(1,3), activation='relu',return_sequences = True)(model_input)
clstm1 = BatchNormalization()(clstm1)
clstm2 = ConvLSTM2D(filters=128, kernel_size=(1,3), activation='relu',return_sequences = False)(clstm1)
conv_output = BatchNormalization()(clstm2)
metadata_input = Input(shape=trainMetadata.shape)
merge_layer = np.concatenate([metadata_input, conv_output])  # this line raises the ValueError below
dense = Dense(100, activation='relu', kernel_regularizer=regularizers.l2(l=0.01))(merge_layer)
dense = Dropout(0.5)(dense)
output = Dense(n_outputs, activation='softmax')(dense)
model = Model(inputs=merge_layer, outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit([trainX, trainMetadata], trainy, validation_data=([testX, testMetadata], testy), epochs=epochs, batch_size=batch_size, verbose=verbose)
_, accuracy = model.evaluate(testX, testy, batch_size=batch_size, verbose=0)
y = model.predict(testX)
but I am getting a ValueError at the merge_layer statement:
ValueError: zero-dimensional arrays cannot be concatenated
What you describe cannot be done using the Sequential mode of Keras.
You need to use the Model class API: Guide to the Keras Model.
With this API you can build the complex model you are looking for.
Note that you are already using the Functional API; the immediate problem is that Keras tensors must be merged with a Keras layer such as keras.layers.concatenate, not with np.concatenate, and the Model must be given both Input tensors.
Here you have an example of how to use it: How to Use the Keras Functional API for Deep Learning
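A minimal sketch of how the merge could be wired up, reusing the shapes from the question; the Flatten before the merge is an assumption needed because ConvLSTM2D with return_sequences=False emits a 4-D tensor:

from keras.layers import Input, Dense, Dropout, Flatten, ConvLSTM2D, BatchNormalization, concatenate
from keras.models import Model
from keras import regularizers

n_steps, n_length, n_features, n_outputs = 4, 50, 5, 4

seq_input = Input(shape=(n_steps, 1, n_length, n_features))
x = ConvLSTM2D(filters=64, kernel_size=(1, 3), activation='relu', return_sequences=True)(seq_input)
x = BatchNormalization()(x)
x = ConvLSTM2D(filters=128, kernel_size=(1, 3), activation='relu', return_sequences=False)(x)
x = BatchNormalization()(x)
x = Flatten()(x)                       # 4-D conv output -> 2-D before merging

meta_input = Input(shape=(3,))         # per-sample metadata: shape (3,), not trainMetadata.shape
merged = concatenate([x, meta_input])  # a Keras layer, not np.concatenate

dense = Dense(100, activation='relu', kernel_regularizer=regularizers.l2(0.01))(merged)
dense = Dropout(0.5)(dense)
output = Dense(n_outputs, activation='softmax')(dense)

model = Model(inputs=[seq_input, meta_input], outputs=output)  # both Input tensors
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Note that trainX would also need reshaping to (7000, 4, 1, 50, 5) before fitting, and that evaluate and predict must be given both inputs, just like fit.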

ValueError when Fine-tuning Inception_v3 in Keras

I am trying to fine-tune a pre-trained InceptionV3 in Keras for a multi-label (17 classes) prediction problem.
Here's the code:
# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)
# add a new top layer
x = base_model.output
predictions = Dense(17, activation='sigmoid')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# we need to recompile the model for these modifications to take effect
# we use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(loss='binary_crossentropy',  # we NEED binary here, since categorical_crossentropy l1-normalizes the output before calculating loss
              optimizer=SGD(lr=0.0001, momentum=0.9))
# Fit the model (assign to history so that the history may be saved)
history = model.fit(x_train, y_train,
                    batch_size=128,
                    epochs=1,
                    verbose=1,
                    callbacks=callbacks_list,
                    validation_data=(x_valid, y_valid))
But I got the following error message and had trouble deciphering what it is saying:
ValueError: Error when checking target: expected dense_1 to have 4 dimensions, but got array with shape (1024, 17)
It seems Keras doesn't like my one-hot encoding of the labels as the target. But why would it expect a 4-dimensional target?
It turns out that the code copied from https://keras.io/applications/ would not run out-of-the-box.
The following post has helped me:
Keras VGG16 fine tuning
The changes I needed to make are the following:
Add the input shape to the model definition, base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3)), and
add a Flatten() layer to flatten the tensor output:
x = base_model.output
x = Flatten()(x)
predictions = Dense(17, activation='sigmoid')(x)
Then the model works for me!
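For reference, a minimal sketch combining both changes (Keras 2 import paths assumed):

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, Flatten
from keras.models import Model

# fixed input shape so the conv output dimensions are known
base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
x = Flatten()(base_model.output)  # collapse the 4-D conv output to 2-D
predictions = Dense(17, activation='sigmoid')(x)
model = Model(inputs=base_model.input, outputs=predictions)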

Accuracy goes to 0.0000 when training RNN with Keras?

I'm trying to use custom word embeddings from spaCy to train a sequence -> label RNN query classifier. Here's my code:
word_vector_length = 300
dictionary_size = v.num_tokens + 1
word_vectors = v.get_word_vector_dictionary()
embedding_weights = np.zeros((dictionary_size, word_vector_length))
max_length = 186

for word, index in dictionary._get_raw_id_to_token().items():
    if word in word_vectors:
        embedding_weights[index, :] = word_vectors[word]

model = Sequential()
model.add(Embedding(input_dim=dictionary_size, output_dim=word_vector_length,
                    input_length=max_length, mask_zero=True, weights=[embedding_weights]))
model.add(Bidirectional(LSTM(128, activation='relu', return_sequences=False)))
model.add(Dense(v.num_labels, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=200, nb_epoch=20)
Here the word vectors are taken from spacy.vectors and have length 300; the input is an np array that looks like [0,0,12,15,0...] of length 186, where the integers are the token ids of the input, and I've constructed the embedding weight matrix accordingly. The output layer is [0,0,1,0,...,0] of length 26 for each training sample, indicating the label that should go with this piece of vectorized text.
This looks like it should work, but during the first epoch the training accuracy continually decreases... and by the end of the first epoch, and for the rest of training, it's exactly 0, and I'm not sure why. I've trained plenty of models with Keras/TF before and never encountered this issue.
Any idea what might be happening here?
Are the labels always one-hot? Meaning only one of the elements of the label vector is one and the rest are zero.
If so, then try using a softmax activation with a categorical crossentropy loss, as in the following official example:
https://github.com/fchollet/keras/blob/master/examples/babi_memnn.py#L202
This will help constrain the network to output probability distributions on the last layer (i.e. the softmax outputs sum to 1).
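Concretely, the suggested change to the snippet above would look like this; only the last layer and the loss change:

# one-hot labels pair with softmax + categorical crossentropy
model.add(Dense(v.num_labels, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])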

Training only one output of a network in Keras

I have a network in Keras with many outputs; however, my training data only provides information for a single output at a time.
At the moment my method for training has been to run a prediction on the input in question, change the value of the particular output that I am training, and then do a single batch update. If I'm right, this is the same as setting the loss for all outputs to zero except the one that I'm trying to train.
Is there a better way? I've tried class weights, where I set a zero weight for all but the output I'm training, but it doesn't give me the results I expect.
I'm using the Theano backend.
Outputting multiple results and optimizing only one of them
Let's say you want to return output from multiple layers, maybe from some intermediate layers, but you need to optimize only one target output. Here's how you can do it:
Let's start with this model:
inputs = Input(shape=(784,))
x = Dense(64, activation='relu')(inputs)
# you want to extract these values
useful_info = Dense(32, activation='relu', name='useful_info')(x)
# final output. used for loss calculation and optimization
result = Dense(10, activation='softmax', name='result')(useful_info)  # 10 classes assumed; a 1-unit softmax would always output 1.0
Compile with multiple outputs, passing None as the loss for every output you don't want to use for loss calculation and optimization:
model = Model(inputs=inputs, outputs=[result, useful_info])
model.compile(optimizer='rmsprop',
              loss=['categorical_crossentropy', None],
              metrics=['accuracy'])
Provide only the target outputs when training, skipping the extra ones:
model.fit(my_inputs, {'result': train_labels}, epochs=.., batch_size=...)
# this also works:
#model.fit(my_inputs, [train_labels], epochs=.., batch_size=...)
One predict to get them all
With one model you can run predict only once to get all the outputs you need:
predicted_labels, useful_info = model.predict(new_x)
In order to achieve this I ended up using the Functional API. You basically create multiple models, using the same input and hidden layers but different output layers.
For example:
https://keras.io/getting-started/functional-api-guide/
from keras.layers import Input, Dense
from keras.models import Model
# This returns a tensor
inputs = Input(shape=(784,))
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(64, activation='relu')(inputs)
x = Dense(64, activation='relu')(x)
predictions_A = Dense(1, activation='sigmoid')(x)  # sigmoid here: a 1-unit softmax would always output 1.0
predictions_B = Dense(1, activation='sigmoid')(x)
# This creates two models that share
# the Input layer and the two hidden Dense layers
modelA = Model(inputs=inputs, outputs=predictions_A)
modelA.compile(optimizer='rmsprop',
               loss='binary_crossentropy',
               metrics=['accuracy'])
modelB = Model(inputs=inputs, outputs=predictions_B)
modelB.compile(optimizer='rmsprop',
               loss='binary_crossentropy',
               metrics=['accuracy'])
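Since the training data only labels one output at a time, a hypothetical training loop could dispatch each batch to the matching model; the batch iterator and the which flag below are assumptions, not part of the original code:

# each batch carries labels for exactly one head
for x_batch, y_batch, which in labeled_batches:  # hypothetical iterator
    if which == 'A':
        modelA.train_on_batch(x_batch, y_batch)
    else:
        modelB.train_on_batch(x_batch, y_batch)

Because both models share the input and hidden layers, every update trains the shared weights regardless of which head the labels belong to.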
