Can a neural network predict a timeseries based on other timeseries? - keras

I want to predict a time series X1 of n points from two other time series, X2 and X3, also of n points each. These series interact, so I was hoping to use methods similar to those for combining images to produce another image.
So far I have successfully implemented an autoencoder that learns and reconstructs all three time series (X1, X2, X3). But when I set up a network that uses only X2 and X3 to predict X1 (3000 units long), training fails with this error:
Error when checking target: expected sequential_9 to have 2 dimensions, but got array with shape (61, 3000, 1, 1)
With different layer combinations it breaks at flatten_x or dense_x instead.
It works if my output has only one unit rather than 3000.
The network I tried would have the following layers:
Layer (type) Output Shape Param #
=================================================================
input_8 (InputLayer) (None, 3000, 2, 1) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 3000, 2, 32) 96
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 1500, 2, 32) 0
_________________________________________________________________
flatten_7 (Flatten) (None, 96000) 0
_________________________________________________________________
dense_6 (Dense) (None, 3000, 1, 32)
Here is the code that I'm using to create the net:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.optimizers import RMSprop

x, y, inChannel = 3000, 2, 1  # matches the (61, 3000, 2, 1) input

network = Sequential([
    Conv2D(filters=32, kernel_size=(1, 2), activation='relu',
           input_shape=(x, y, inChannel)),
    MaxPooling2D(pool_size=(2, 1)),
    Flatten(),
    Dense(3000, activation='relu'),
])
network.compile(loss='mean_squared_error', optimizer=RMSprop())
The input has shape (61, 3000, 2, 1).
Should I be specifying the expected inputs/outputs somewhere and am not doing it? Do I need some data transformation along the way? Maybe a different architecture?
Thanks for all suggestions!

The network you've coded does not have the architecture you intended.
If you print out
network.summary()
you get:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_6 (Conv2D) (None, 3000, 1, 32) 96
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 1500, 1, 32) 0
_________________________________________________________________
flatten_6 (Flatten) (None, 48000) 0
_________________________________________________________________
dense_6 (Dense) (None, 3000) 144003000
=================================================================
Total params: 144,003,096
Trainable params: 144,003,096
Non-trainable params: 0
So, you have to change your architecture to get the desired output shape.
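The summary shows that the model already emits one value per time step, (None, 3000), while the target array is 4-D. A minimal sketch of one way to reconcile the two without restructuring the network: flatten the target so it matches the model's output. X1_train and x_train, as well as the fit arguments, are assumptions for illustration, not the poster's code.
# a sketch: squeeze the 4-D target down to (samples, 3000) so it
# matches the model's (None, 3000) output before fitting
X1_train = X1_train.reshape(X1_train.shape[0], 3000)  # (61, 3000, 1, 1) -> (61, 3000)
network.fit(x_train, X1_train, epochs=50, batch_size=8)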

Related

Keras-rl ValueError: "Model has more than one output. DQN expects a model that has a single output"

Is there any way to get around this error? I have a model with a 15x15 input grid, which leads to two outputs. Each output has 15 possible values, corresponding to the x and y coordinates. I did this because it is significantly simpler than having 225 separate outputs, one for every location on the grid.
The problem is that when I try to train the model using this code:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

def build_agent(model, actions):
    policy = BoltzmannQPolicy()
    memory = SequentialMemory(limit=100000, window_length=1)
    dqn = DQNAgent(model=model, memory=memory, policy=policy, nb_actions=actions, nb_steps_warmup=100, target_model_update=1e-2)
    return dqn

dqn = build_agent(model, np.array([15, 15]))
dqn.compile(Adam(learning_rate=0.01), metrics=['mae'])
dqn.fit(env, nb_steps=10000, action_repetition=1, visualize=False, verbose=1, nb_max_episode_steps=10000)
plt.show()
I get the error: "Model has more than one output. DQN expects a model that has a single output".
The model summary is below, so you can see the two output layers.
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) [(None, 1, 15, 15)] 0 []
conv2d_2 (Conv2D) (None, 12, 13, 13) 120 ['input_2[0][0]']
conv2d_3 (Conv2D) (None, 10, 11, 3) 354 ['conv2d_2[0][0]']
flatten_1 (Flatten) (None, 330) 0 ['conv2d_3[0][0]']
dropout_1 (Dropout) (None, 330) 0 ['flatten_1[0][0]']
dense_2 (Dense) (None, 15) 4965 ['dropout_1[0][0]']
dense_3 (Dense) (None, 15) 4965 ['dropout_1[0][0]']
==================================================================================================
Total params: 10,404
Trainable params: 10,404
Non-trainable params: 0
__________________________________________________________________________________________________
Standard Keras allows a model with multiple outputs via the functional API, but from the error message I assume that feature is simply not supported by Keras-rl? If that's true, is there any way to work around this issue?
The solution was to use a single output of 225 units. This didn't work great, but it was the best I could find: two separate outputs will not work with keras-rl, so this was all I could think of. Another possibility would be a different library such as stable baselines2, but that would mean departing from the already-built code.
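For reference, a minimal sketch of that workaround, with one 225-unit head replacing the two 15-unit heads. The kernel sizes and dropout rate are assumptions loosely mirroring the summary above, not the original code; the flat action index is decoded back into grid coordinates at act time.
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dropout, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(1, 15, 15))
x = Conv2D(12, (3, 3), activation='relu', data_format='channels_first')(inp)
x = Conv2D(10, (3, 3), activation='relu', data_format='channels_first')(x)
x = Flatten()(x)
x = Dropout(0.2)(x)
out = Dense(225, activation='linear')(x)  # one Q-value per grid cell
model = Model(inputs=inp, outputs=out)

# keras-rl now sees a single output; nb_actions becomes a plain int
dqn = build_agent(model, 225)
# decode the chosen flat action back into coordinates:
# x_coord, y_coord = divmod(action, 15)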

KERAS: Pretrained a CNN+Dense model. How to freeze CNN weights and substitute Dense with LSTM?

I trained and loaded a CNN+Dense model:
# load the trained model
from keras.models import load_model

cnn_model = load_model('my_cnn_model.h5')
cnn_model.summary()
The output is this (my images have dimensions 2 x 3600):
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 2, 3600, 32) 128
_________________________________________________________________
conv2d_2 (Conv2D) (None, 2, 1800, 32) 3104
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 2, 600, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 2, 600, 64) 6208
_________________________________________________________________
conv2d_4 (Conv2D) (None, 2, 300, 64) 12352
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 2, 100, 64) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 2, 100, 128) 24704
_________________________________________________________________
conv2d_6 (Conv2D) (None, 2, 50, 128) 49280
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 2, 16, 128) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4096) 0
_________________________________________________________________
dense_1 (Dense) (None, 1024) 4195328
_________________________________________________________________
dense_2 (Dense) (None, 1024) 1049600
_________________________________________________________________
dense_3 (Dense) (None, 3) 3075
=================================================================
Total params: 5,343,779
Trainable params: 5,343,779
Non-trainable params: 0
Now, what I want is to keep the weights up to the Flatten layer and replace the dense layers with LSTM layers, training only the added LSTM part.
I just wrote:
# freeze model
base_model = cnn_model(input_shape=(2, 3600, 1))
#base_model = cnn_model
base_model.trainable = False
# first LSTM layer
x = LSTM(1024, activation='relu', return_sequences=True)(base_model.output)
# second LSTM layer
x = LSTM(1024, activation='relu', return_sequences=False)(x)
# output layer
output = Dense(3, activation='linear')(x)
# final model creation
model = Model(inputs=[base_model.input], outputs=[output])
But I obtained:
base_model = cnn_model(input_shape=(2, 3600, 1))
TypeError: __call__() missing 1 required positional argument: 'inputs'
I know I should ideally add TimeDistributed around the Flatten layer, but I do not know how to do it.
Moreover, I'm not sure that base_model.trainable = False does exactly what I want.
Can you please help me do the job?
Thank you very much!
You can't directly take the output from Flatten(): an LSTM needs 2-D features (timesteps, features), so you have to reshape your tensors.
You can take the output from the layer before the flatten (the last max-pooling). Say this layer has index i in the model; we take its output, reshape it as needed, and pass it to the LSTM:
from keras.layers import Reshape, LSTM, Dense
from keras.models import Model

# i is the index of the layer whose output you want (the last max-pooling)
before_flatten = base_model.layers[i].output
# choose the split between the temporal dim and the features yourself
conv2lstm_reshape = Reshape((-1, 2))(before_flatten)
# first LSTM layer
x = LSTM(1024, activation='relu', return_sequences=True)(conv2lstm_reshape)
# second LSTM layer
x = LSTM(1024, activation='relu', return_sequences=False)(x)
# output layer (fed from x, the LSTM branch, not from before_flatten)
output = Dense(3, activation='linear')(x)
# final model creation
model = Model(inputs=[base_model.input], outputs=[output])
model.summary()
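The freezing part of the question is not addressed above. A minimal sketch of one way to do it, again assuming i is the index of the last max-pooling layer: mark the convolutional layers as non-trainable before compiling, since trainable flags only take effect at compile time.
# sketch: freeze everything up to and including layer i, then compile
for layer in base_model.layers[:i + 1]:
    layer.trainable = False
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()  # check the non-trainable parameter count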

Keras loaded model output is different from the training model output

When I train my model it has a two-dimensional output, (None, 1), corresponding to the time series I'm trying to predict. But whenever I load the saved model to make predictions, it has a three-dimensional output, (None, 40, 1), where 40 corresponds to the n_steps required to fit the Conv1D network. What is wrong?
Here is the code:
import numpy as np
import matplotlib.pyplot as plt

# load dataset
df = np.load('Principal.npy')

# Conv1D
#model = load_model('ModeloConv1D.h5')
model = autoencoder_conv1D((2, 20, 17), n_steps=40)
model.load_weights('weights_35067.hdf5')
# summarize model
model.summary()
# split into input (X) and output (Y) variables
X = f.separar_interface(df, n_steps=40)
# the X input shape is (59891, 17): length and attributes, respectively
# reshape to the Conv1D input format
X = X.reshape(X.shape[0], 2, 20, X.shape[2])
# make predictions
test_predictions = model.predict(X)
# test_predictions.shape = (59891, 40, 1)
test_predictions = model.predict(X).flatten()
# test_predictions.shape = (2395640,)
plt.figure(3)
plt.plot(test_predictions)
plt.legend(['Prediction'])
plt.show()
In the resulting plot you can see that it follows the input format.
Here is the network architecture:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
time_distributed_70 (TimeDis (None, 1, 31, 24) 4104
_________________________________________________________________
time_distributed_71 (TimeDis (None, 1, 4, 24) 0
_________________________________________________________________
time_distributed_72 (TimeDis (None, 1, 4, 48) 9264
_________________________________________________________________
time_distributed_73 (TimeDis (None, 1, 1, 48) 0
_________________________________________________________________
time_distributed_74 (TimeDis (None, 1, 1, 64) 12352
_________________________________________________________________
time_distributed_75 (TimeDis (None, 1, 1, 64) 0
_________________________________________________________________
time_distributed_76 (TimeDis (None, 1, 64) 0
_________________________________________________________________
lstm_17 (LSTM) (None, 100) 66000
_________________________________________________________________
repeat_vector_9 (RepeatVecto (None, 40, 100) 0
_________________________________________________________________
lstm_18 (LSTM) (None, 40, 100) 80400
_________________________________________________________________
time_distributed_77 (TimeDis (None, 40, 1024) 103424
_________________________________________________________________
dropout_9 (Dropout) (None, 40, 1024) 0
_________________________________________________________________
dense_18 (Dense) (None, 40, 1) 1025
=================================================================
As I've found my mistake, and as it may be useful for someone else, I'll answer my own question:
The network output simply has the same shape as the training labels: the saved model produces an output of shape (None, 40, 1) because that is exactly the shape I gave the training labels.
The difference between the output seen at training time and at prediction time appears because I used a method such as train_test_split during training, which shuffles the data, so what I saw at the end of training was the output for a shuffled batch.
To correct the problem, the dataset labels should be reshaped from (None, 40, 1) to (None, 1), since this is a regression over a time series. To make the network above fit that, add a Flatten layer before the final Dense layer. That produces the result I was looking for.
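A minimal sketch of that change, assuming the decoder's (None, 40, 1024) tensor from TimeDistributed(Dense(1024)) in the summary above is held in x; everything else in the stack stays as it is:
from keras.layers import Dropout, Flatten, Dense

# ... encoder/decoder stack as in the summary, ending in x of shape (None, 40, 1024) ...
x = Dropout(0.2)(x)
x = Flatten()(x)                            # (None, 40, 1024) -> (None, 40960)
output = Dense(1, activation='linear')(x)   # (None, 1), matching labels reshaped to (None, 1)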

Keras Flatten not creating 1D output

I am trying to build a 1D CNN, but I can't get the right dimensions passed to my last dense layer.
The architecture of my model is:
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

model_CNN = Sequential()
model_CNN.add(Conv1D(14, 29, activation='relu', input_shape=(X_train.shape[1], 1)))
model_CNN.add(Conv1D(30, 22, activation='relu'))
model_CNN.add(Flatten())
model_CNN.add(Dense(176, activation='relu'))
model_CNN.add(Dense(Y_train.shape[1], activation='linear'))
With a summary that looks like:
Layer (type) Output Shape Param #
=================================================================
conv1d_71 (Conv1D) (None, 3304, 14) 420
_________________________________________________________________
conv1d_72 (Conv1D) (None, 3283, 30) 9270
_________________________________________________________________
flatten_18 (Flatten) (None, 98490) 0
_________________________________________________________________
dense_102 (Dense) (None, 176) 17334416
_________________________________________________________________
dense_103 (Dense) (None, 5) 885
=================================================================
Total params: 17,344,991
Trainable params: 17,344,991
Non-trainable params: 0
When I try to fit my model, I confirm that my input shape is correct, (240, 3332, 1), but then I get the following error:
ValueError: Error when checking target: expected dense_103
to have 2 dimensions, but got array with shape (240, 5, 1)
So my Flatten layer seems not to be creating a 1D array, and yet somehow the fit only fails on the second dense layer, not the first. What's going on?
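Worth noting when reading the traceback: "Error when checking target" refers to the label array, not an activation inside the network, so it is Y_train of shape (240, 5, 1) that has one axis too many for the model's (None, 5) output. A minimal sketch of that fix, assuming Y_train is a NumPy array; the compile and fit arguments are placeholders:
import numpy as np

# drop the trailing singleton axis so the labels match the (None, 5) output
Y_train = np.squeeze(Y_train, axis=-1)   # (240, 5, 1) -> (240, 5)
model_CNN.compile(loss='mse', optimizer='adam')
model_CNN.fit(X_train, Y_train, epochs=10, batch_size=32)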

How to train a siamese neural network for image matching?

I need to identify whether two fingerprints (one from an ID card and one from a sensor) match or not. Below are some examples from my database (3000 pairs of images):
Example of matching images
Example of non-matching images
I am trying to train a siamese network which receives a pair of images and outputs [1, 0] if they don't match and [0, 1] if they match. I created my model with Keras:
from keras.layers import Input, Dense, Dropout, concatenate
from keras.models import Model

image_left = Input(shape=(200, 200, 1))
image_right = Input(shape=(200, 200, 1))
vector_left = conv_base(image_left)
vector_right = conv_base(image_right)
merged_features = concatenate([vector_left, vector_right], axis=-1)
fc1 = Dense(64, activation='relu')(merged_features)
fc1 = Dropout(0.2)(fc1)
# fc2 = Dense(128, activation='relu')(fc1)
pred = Dense(2, activation='softmax')(fc1)
model = Model(inputs=[image_left, image_right], outputs=pred)
where conv_base is a convolutional architecture. I have tried ResNet, LeNet, MobileNetV2 and NASNet from keras.applications, but none of them work. For example:
from keras.applications import NASNetMobile

conv_base = NASNetMobile(weights=None,
                         include_top=True,
                         classes=256)
My model summary is similar to the one shown below (depending on the network used):
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) (None, 200, 200, 1) 0
__________________________________________________________________________________________________
input_3 (InputLayer) (None, 200, 200, 1) 0
__________________________________________________________________________________________________
NASNet (Model) (None, 256) 4539732 input_2[0][0]
input_3[0][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate) (None, 512) 0 NASNet[1][0]
NASNet[2][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 64) 32832 concatenate_5[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 64) 0 dense_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 2) 130 dropout_1[0][0]
==================================================================================================
Total params: 4,572,694
Trainable params: 4,535,956
Non-trainable params: 36,738
In addition to changing the convolutional architecture, I have tried using pre-trained weights, setting all layers as trainable, setting only the last convolutional layers as trainable, data augmentation, the categorical_crossentropy and contrastive_loss loss functions, and changing the learning rate, but the behavior is always the same: training and validation accuracy stay at 0.5.
Does anybody have an idea about what I am missing/doing wrong?
Thank you.
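Since the post mentions trying contrastive_loss, here is a minimal sketch of the distance-based siamese variant that loss is usually paired with (Hadsell et al., 2006): the two embeddings are compared with a Euclidean distance rather than concatenated, and labels become scalar 1/0 for match/non-match instead of one-hot pairs. conv_base is assumed to map an image to an embedding vector as in the post; the margin value is an assumption.
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model

def euclidean_distance(vects):
    x, y = vects
    return K.sqrt(K.maximum(K.sum(K.square(x - y), axis=1, keepdims=True), K.epsilon()))

def contrastive_loss(y_true, y_pred, margin=1.0):
    # y_true is 1 for matching pairs, 0 for non-matching pairs
    return K.mean(y_true * K.square(y_pred) +
                  (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))

image_left = Input(shape=(200, 200, 1))
image_right = Input(shape=(200, 200, 1))
# calling the same conv_base twice shares its weights between branches
distance = Lambda(euclidean_distance)([conv_base(image_left), conv_base(image_right)])
model = Model(inputs=[image_left, image_right], outputs=distance)
model.compile(loss=contrastive_loss, optimizer='adam')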
