Two dimensional (position) time series prediction in Keras using LSTM - keras

I am trying to use the Keras LSTM implementation to predict a series of x, y pairs into the future. The x, y pairs specify location in a 2D plane. I would like to predict them 60 steps into the future.
I have 36k data pairs, which I have split into 30k for training and 5,880 for testing. I have prepared the training data by creating a 3D array of shape (30000, 60, 2), where each element is a rolling 60-length snippet of the training data, e.g. [[[x0, y0], [x1, y1], ..., [x59, y59]], [[x1, y1], [x2, y2], ..., [x60, y60]], ..., [[x30000, y30000], [x30001, y30001], ..., [x30059, y30059]]]. The target data is exactly the same, only offset by 60 elements. The idea is basically to use 60 pairs to predict the next 60 pairs.
I'm getting the following error, indicating that the model is expecting the target data to have only two dimensions.
ValueError: Error when checking model target: expected lstm_1 to have 2 dimensions, but got array with shape (30000, 60, 2)
It looks like the model is dropping the fact that my data is 2D. Clearly I am missing something conceptually here, but I'm not sure what it is. I'd be grateful if someone could put me on the right track.
Here is my code:
import numpy as np
from numpy import genfromtxt
from keras.models import Sequential
from keras.layers import Dense, LSTM, Activation, GRU, Dropout
TRAINING_SET_SIZE = 30000
epochs = 1
original_data = genfromtxt('training_data.txt', delimiter=',', dtype='int')
training_set = []
for i in range(len(original_data) - 120):
    training_set.append(original_data[i:i+120])
training_set = np.array(training_set)
train_input = []
train_output = []
for i in range(TRAINING_SET_SIZE):
    train_input.append(training_set[i][0:60])
    train_output.append(training_set[i][60:120])
train_input = np.array(train_input)
train_output = np.array(train_output)
test_input = []
test_output = []
for i in range(TRAINING_SET_SIZE, len(original_data) - 120):
    test_input.append(training_set[i][0:60])
    test_output.append(training_set[i][60:120])
test_input = np.array(test_input)
test_output = np.array(test_output)
s = (train_input.shape[1], train_input.shape[2])
model = Sequential()
model.add(LSTM(60, input_shape=s, unroll=True))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
print("Inputs: {}".format(model.input_shape))
print("Outputs: {}".format(model.output_shape))
print("Actual input: {}".format(train_input.shape))
print("Actual output: {}".format(train_output.shape))
print('Training')
model.fit(train_input, train_output, validation_split=0.2, batch_size=1, epochs=epochs, verbose=1, shuffle=False)
model.save('my_model.h5')
score = model.evaluate(test_input, test_output, batch_size=1)
print(score)
print('Predicting')
predicted_output = model.predict(test_input, batch_size=1)
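The error arises because LSTM(60) without return_sequences=True collapses the time dimension and returns a 2-D tensor of shape (batch, 60), while the targets are 3-D with shape (30000, 60, 2). Below is a minimal encoder-decoder sketch that produces a sequence-shaped (None, 60, 2) output; the layer sizes are illustrative assumptions, not tuned recommendations.

# Sketch only: an encoder-decoder that emits a (60, 2) sequence per sample.
from keras.models import Sequential
from keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

model = Sequential()
# Encode the 60 input (x, y) pairs into a single state vector.
model.add(LSTM(64, input_shape=(60, 2)))
# Repeat that state 60 times so the decoder can emit 60 output steps.
model.add(RepeatVector(60))
# Decode back into a sequence; return_sequences=True keeps the time dimension.
model.add(LSTM(64, return_sequences=True))
# Map every time step to an (x, y) pair, giving an output shape of (None, 60, 2).
model.add(TimeDistributed(Dense(2)))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()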

Related

Does the sequence length of an RNN/LSTM have to be the same for the input and output?

I have a question about the input and output data in an RNN or LSTM. An RNN expects a 3-dimensional input tensor of the form (Batch_size, sequence_length_input, features_input) and produces a 3-dimensional output tensor of the form (Batch_size, sequence_length_output, features_output).
I know that features_input and features_output don't have to be the same, while Batch_size has to be equal for input and output. But what about the middle part, sequence_length_input and sequence_length_output: do they have to be the same? At least in my example (with Keras and TensorFlow) I always get an error if they are not the same, so I am wondering whether I have a bug in the code or if this is generally not possible.
So, for example, can I use as training input the data X_train = (1000, 100, 10) and as output Y_train = (1000, 20, 3), so that I have a mapping for each of the 1000 items (Batch_size) from a 10-dimensional (features_input) time series with 100 time steps (sequence_length_input) to a 3-dimensional (features_output) time series with 20 time steps (sequence_length_output)?
Update: Here is my code with an RNN for time series forecasting that only works if the sequence_length of the input (steps_backwards) is equal to the sequence_length of the output (steps_forward); otherwise it throws a ValueError:
ValueError: Dimensions must be equal, but are 192 and 96 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](sequential_5/time_distributed_5/Reshape_1, IteratorGetNext:1)' with input shapes: [?,192,1], [?,96,1].
In the code I use the past 96 timesteps (or 2*96 = 192 timesteps) to predict the future 96 timesteps. When the numbers of past and future timesteps are equal (equal sequence_length), everything works fine; otherwise (unequal sequence_length) I get the ValueError.
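For context, the error arises because every recurrent layer with return_sequences=True emits one output per input step, so the model's output sequence length always equals the input sequence length. A quick sketch of this (the unit counts are arbitrary):

import tensorflow as tf
from tensorflow import keras

m = keras.models.Sequential([
    keras.layers.SimpleRNN(10, return_sequences=True, input_shape=[None, 3]),
    keras.layers.TimeDistributed(keras.layers.Dense(1))
])
# An input with 192 time steps yields an output with 192 time steps,
# which cannot be compared against targets with only 96 steps.
print(m(tf.zeros([1, 192, 3])).shape)  # (1, 192, 1)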
Code:
#Import modules
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from tensorflow import keras
# Define the parameters of the RNN and the training
epochs = 1
batch_size = 50
steps_backwards = 2 * 96
steps_forward = 96
split_fraction_trainingData = 0.70
split_fraction_validatinData = 0.90
randomSeedNumber = 50
#Read dataset
df = pd.read_csv('C:/Users/User1/Desktop/TestData.csv', sep=';', header=0, low_memory=False, infer_datetime_format=True, parse_dates={'datetime':[0]}, index_col=['datetime'])
# standardize data
data = df.values
indexWithYLabelsInData = 0
data_X = data[:, 0:3]
data_Y = data[:, indexWithYLabelsInData].reshape(-1, 1)
scaler_standardized_X = StandardScaler()
data_X = scaler_standardized_X.fit_transform(data_X)
data_X = pd.DataFrame(data_X)
scaler_standardized_Y = StandardScaler()
data_Y = scaler_standardized_Y.fit_transform(data_Y)
data_Y = pd.DataFrame(data_Y)
# Prepare the input data for the RNN
series_reshaped_X = np.array([data_X[i:i + (steps_backwards+steps_forward)].copy() for i in range(len(data) - (steps_backwards+steps_forward))])
series_reshaped_Y = np.array([data_Y[i:i + (steps_backwards+steps_forward)].copy() for i in range(len(data) - (steps_backwards+steps_forward))])
timeslot_x_train_end = int(len(series_reshaped_X)* split_fraction_trainingData)
timeslot_x_valid_end = int(len(series_reshaped_X)* split_fraction_validatinData)
X_train = series_reshaped_X[:timeslot_x_train_end, :steps_backwards]
X_valid = series_reshaped_X[timeslot_x_train_end:timeslot_x_valid_end, :steps_backwards]
X_test = series_reshaped_X[timeslot_x_valid_end:, :steps_backwards]
Y_train = series_reshaped_Y[:timeslot_x_train_end, steps_backwards:]
Y_valid = series_reshaped_Y[timeslot_x_train_end:timeslot_x_valid_end, steps_backwards:]
Y_test = series_reshaped_Y[timeslot_x_valid_end:, steps_backwards:]
# Build the model and train it
np.random.seed(randomSeedNumber)
tf.random.set_seed(randomSeedNumber)
model = keras.models.Sequential([
    keras.layers.SimpleRNN(10, return_sequences=True, input_shape=[None, 3]),
    keras.layers.SimpleRNN(10, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(1))
])
model.compile(loss="mean_squared_error", optimizer="adam")
history = model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size, validation_data=(X_valid, Y_valid))
#Predict the test data
Y_pred = model.predict(X_test)
and here is some test data https://filetransfer.io/data-package/ufbzh09o#link
Reminder: The code and the data provide a minimal reproducible example. Maybe you can have a look at it, as in this code the sequence_length has to be equal for the input and output data, otherwise I get an error. Unfortunately I still have not figured out why this problem occurs.
I have encountered the same problem. My input data shape is [512, 10, 3] and the output data shape is [512, 20, 1], meaning the last ten time steps are used to predict the future twenty time steps. When I tried to implement it in PyTorch, the same problem appeared. In the end, I just repeated the last state of the LSTM 20 times and fed it into the following fully connected layers. However, I could not do this in a classic network made up only of fully connected layers.
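In Keras, the repeat-the-last-state idea described in the comment above can be sketched roughly as follows for the X_train = (1000, 100, 10) to Y_train = (1000, 20, 3) case from the question; the unit counts and dummy data are placeholders, not tuned values.

import numpy as np
import tensorflow as tf
from tensorflow import keras

# Dummy data matching the shapes asked about in the question.
X_train = np.random.rand(1000, 100, 10).astype("float32")
Y_train = np.random.rand(1000, 20, 3).astype("float32")

model = keras.models.Sequential([
    # Encoder: read 100 input steps and keep only the final state (no return_sequences).
    keras.layers.SimpleRNN(32, input_shape=[100, 10]),
    # Repeat that state 20 times to create the desired output sequence length.
    keras.layers.RepeatVector(20),
    # Decoder: produce 20 output steps.
    keras.layers.SimpleRNN(32, return_sequences=True),
    # Map each step to the 3 output features, giving an output shape of (None, 20, 3).
    keras.layers.TimeDistributed(keras.layers.Dense(3))
])
model.compile(loss="mean_squared_error", optimizer="adam")
model.fit(X_train, Y_train, epochs=1, batch_size=50)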

Calculate gradient of validation error w.r.t inputs using Keras/Tensorflow or autograd

I need to calculate the gradient of the validation error w.r.t inputs x. I'm trying to see how much the validation error changes when I perturb one of the training samples.
The validation error (E) explicitly depends on the model weights (W).
The model weights explicitly depend on the inputs (x and y).
Therefore, the validation error implicitly depends on the inputs.
I'm trying to calculate the gradient of E w.r.t x directly.
An alternative approach would be to calculate the gradient of E w.r.t W (can easily be calculated) and the gradient of W w.r.t x (cannot do at the moment), which would allow the gradient of E w.r.t x to be calculated.
I have attached a toy example. Thanks in advance!
import numpy as np
import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical
import tensorflow as tf
from autograd import grad
train_images = mnist.train_images()
train_labels = mnist.train_labels()
test_images = mnist.test_images()
test_labels = mnist.test_labels()
# Normalize the images.
train_images = (train_images / 255) - 0.5
test_images = (test_images / 255) - 0.5
# Flatten the images.
train_images = train_images.reshape((-1, 784))
test_images = test_images.reshape((-1, 784))
# Build the model.
model = Sequential([
    Dense(64, activation='relu', input_shape=(784,)),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),
])
# Compile the model.
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)
# Train the model.
model.fit(
    train_images,
    to_categorical(train_labels),
    epochs=5,
    batch_size=32,
)
model.save_weights('model.h5')
# Load the model's saved weights.
# model.load_weights('model.h5')
calculate_mse = tf.keras.losses.MeanSquaredError()
test_x = test_images[:5]
test_y = to_categorical(test_labels)[:5]
train_x = train_images[:1]
train_y = to_categorical(train_labels)[:1]
train_y = tf.convert_to_tensor(train_y, np.float32)
train_x = tf.convert_to_tensor(train_x, np.float64)
with tf.GradientTape() as tape:
    tape.watch(train_x)
    model.fit(train_x, train_y, epochs=1, verbose=0)
    valid_y_hat = model(test_x, training=False)
    mse = calculate_mse(test_y, valid_y_hat)
de_dx = tape.gradient(mse, train_x)
print(de_dx)
# approach 2 - does not run
def calculate_validation_mse(x):
    model.fit(x, train_y, epochs=1, verbose=0)
    valid_y_hat = model(test_x, training=False)
    mse = calculate_mse(test_y, valid_y_hat)
    return mse
train_x = train_images[:1]
train_y = to_categorical(train_labels)[:1]
validation_gradient = grad(calculate_validation_mse)
de_dx = validation_gradient(train_x)
print(de_dx)
Here's how you can do this. Derivation is as below.
A few things to note:
I have reduced the feature size from 784 to 256 because I was running out of memory in Colab (the line is marked in the code). You might have to do some memory profiling to find out why.
I only computed gradients for the first layer; this is easily extendable to other layers.
Disclaimer: this derivation is correct to the best of my knowledge. Please do some research and verify that this is the case. You will run into memory issues for larger inputs and layer sizes.
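A sketch of the derivation the code below mirrors, under the assumption that the weights receive a single gradient-descent update with learning rate lr: after one update on the training input $x$ the weights are $\theta' = \theta - lr \, \frac{\partial E_{tr}(x)}{\partial \theta}$, so $\frac{\partial \theta'}{\partial x} = -lr \, \frac{\partial^2 E_{tr}(x)}{\partial \theta \, \partial x}$ (this is grads_1, computed with tape1.jacobian). By the chain rule, $\frac{\partial E_{val}}{\partial x} = \frac{\partial E_{val}}{\partial \theta'} \cdot \frac{\partial \theta'}{\partial x}$, where the first factor is grads_2 and the product is taken in the final matmul.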
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical
import tensorflow as tf
f = 256
model = Sequential([
    Dense(64, activation='relu', input_shape=(f,)),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax'),
])
# Compile the model.
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)
w = model.weights[0]
# Inputs and labels
x_tr = tf.Variable(np.random.normal(size=(1,f)), shape=(1, f), dtype='float32')
y_tr = np.random.choice([0,1,2,3,4,5,6,7,8,9], size=(1,1))
y_tr_onehot = tf.keras.utils.to_categorical(y_tr, num_classes=10).astype('float32')
x_v = tf.Variable(np.random.normal(size=(1,f)), shape=(1, f), dtype='float32')
y_v = np.random.choice([0,1,2,3,4,5,6,7,8,9], size=(1,1))
y_v_onehot = tf.keras.utils.to_categorical(y_v, num_classes=10).astype('float32')
# In the context of GradientTape
with tf.GradientTape() as tape1:
    with tf.GradientTape() as tape2:
        y_tr_pred = model(x_tr)
        tr_loss = tf.keras.losses.MeanSquaredError()(y_tr_onehot, y_tr_pred)
    tmp_g = tape2.gradient(tr_loss, w)
print(tmp_g.shape)
# d(dE_tr/d(theta))/dx
# Warning this step consumes lot of memory for large layers
lr = 0.001
grads_1 = -lr * tape1.jacobian(tmp_g, x_tr)
with tf.GradientTape() as tape3:
    y_v_pred = model(x_v)
    v_loss = tf.keras.losses.MeanSquaredError()(y_v_onehot, y_v_pred)
# dE_val/d(theta)
grads_2 = tape3.gradient(v_loss, w)[tf.newaxis, :]
# Just crunching the dimension to get the final desired shape of (1,256)
grad = tf.matmul(tf.reshape(grads_2,[1, -1]), tf.reshape(tf.transpose(grads_1,[2,1,0,3]),[1, -1, 256]))

With TensorFlow and Keras, how to find the "category" of a given string

Hello, ML/AI newbie here.
I'm asking this question because I have no idea about machine learning, AI, etc., and I have no idea how to continue or what questions to ask. Even if I accidentally found the solution, I wouldn't know it.
Ok, I followed this tutorial about "Text Classification" and it went pretty well, no problems up to here.
https://www.youtube.com/watch?v=6g4O5UOH304&list=WL&index=8&t=0s
It classifies IMDB comments and checks if a review is "Positive" or "Negative", "0" or "1"
My question is
Let's say I have my own dataset, similar to IMDB, but instead of "0" and "1" I have several categories as numbers like "1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, ..." for each string. So I need it to return one of these numbers (or, since it's learning, maybe two of them if it can't decide).
What should I do?
A link to a tutorial related to what I need would be great too.
import tensorflow as tf
from tensorflow import keras
import numpy as np
data = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = data.load_data(num_words=3000)
word_index = data.get_word_index()
word_index = {k:(v+3) for k, v in word_index.items()}
word_index["<PAD>"] = 0;
word_index["<START>"] = 1;
word_index["<UNK>"] = 2;
word_index["<UNUSED>"] = 3;
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
train_data = keras.preprocessing.sequence.pad_sequences(train_data, value=word_index["<PAD>"], padding="post", maxlen=250)
test_data = keras.preprocessing.sequence.pad_sequences(test_data, value=word_index["<PAD>"], padding="post", maxlen=250)
def decode_review(text):
    return " ".join([reverse_word_index.get(i, "?") for i in text])
model = keras.Sequential()
model.add(keras.layers.Embedding(10000, 6))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation="relu"))
model.add(keras.layers.Dense(1, activation="sigmoid"))
#model.summary()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics="accuracy")
x_val = train_data[:10000]
x_train = train_data[10000:]
y_val = train_labels[:10000]
y_train = train_labels[10000:]
fitModel = model.fit(x_train, y_train, epochs=40, batch_size=512, validation_data=(x_val, y_val), verbose=1)
results = model.evaluate(test_data, test_labels)
print(results)
for index in range(20):
    test_review = test_data[index]
    predict = model.predict([test_review])
    if predict[0] > 0.8:
        print(decode_review(test_data[index]))
        print(str(predict[0]))
        print(str(test_labels[index]))
Your task is a multiclass classification problem, and for this reason you have to modify your output layer. You have two possibilities.
If you have a 1D integer-encoded target, you can use sparse_categorical_crossentropy as the loss function, softmax as the last activation, and set the dimension of the last Dense layer equal to the number of classes to predict:
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense

X = np.random.randint(0, 10, (1000, 100))
y = np.random.randint(0, 3, 1000)
model = Sequential([
    Dense(128, input_dim=100),
    Dense(3, activation='softmax'),
])
model.summary()
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, y, epochs=3)
Otherwise, if you have one-hot encoded your target, you can use categorical_crossentropy, softmax as the last activation, and set the dimension of the last Dense layer equal to the number of classes to predict:
X = np.random.randint(0, 10, (1000, 100))
y = pd.get_dummies(np.random.randint(0, 3, 1000)).values
model = Sequential([
    Dense(128, input_dim=100),
    Dense(3, activation='softmax'),
])
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, y, epochs=3)
Using softmax lets you interpret the output as probability scores that sum to 1.
When you compute the final prediction, you can obtain the predicted class with np.argmax(model.predict(X), axis=1).
These are some basic tutorials for multiclass text classification:
https://towardsdatascience.com/multi-class-text-classification-with-lstm-using-tensorflow-2-0-d88627c10a35
https://towardsdatascience.com/multi-class-text-classification-with-lstm-1590bee1bd17
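For the asker's case, where the categories are numbered 1, 2, 3, ... rather than 0 to N-1, one possible sketch (assuming scikit-learn is available; the data and layer sizes are placeholders) is to encode the labels first and map predictions back afterwards:

import numpy as np
from sklearn.preprocessing import LabelEncoder
from keras.models import Sequential
from keras.layers import Dense

# Placeholder data: 1000 samples with 100 features and labels in 1..12.
X = np.random.randint(0, 10, (1000, 100))
raw_labels = np.random.randint(1, 13, 1000)

# Map the original category numbers (1..12) to 0..11 for sparse_categorical_crossentropy.
encoder = LabelEncoder()
y = encoder.fit_transform(raw_labels)
n_classes = len(encoder.classes_)

model = Sequential([
    Dense(128, input_dim=100),
    Dense(n_classes, activation='softmax'),
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=3)

# Convert predicted class indices back to the original category numbers.
pred_indices = np.argmax(model.predict(X), axis=1)
pred_categories = encoder.inverse_transform(pred_indices)
print(pred_categories[:10])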

How to create an autoencoder where each layer of encoder should represent the same as a layer of the decoder

I want to build an autoencoder where each layer in the encoder has the same meaning as a corresponding layer in the decoder. So if the autoencoder is perfectly trained, the values of those layers should be roughly the same.
So let's say the autoencoder consists of e1 -> e2 -> e3 -> d2 -> d1, where e1 is the input and d1 the output. A normal autoencoder trains to produce the same result in d1 as in e1, but I want the additional constraint that e2 and d2 are the same. Therefore I want an additional backpropagation path which leads from d2 to e2 and trains at the same time as the normal path from d1 to e1. (d stands for decoder, e for encoder.)
I tried to use the error between e2 and d2 as a regularization term with the CustomRegularization layer from the first answer of this link https://github.com/keras-team/keras/issues/5563. I also use this for the error between e1 and d1 instead of the normal path.
The following code is written such that more than 1 intermediate layer can be handled and also uses 4 layers.
The commented-out code is a normal autoencoder which only propagates from start to end.
from keras.layers import Dense
import numpy as np
from keras.datasets import mnist
from keras.models import Model
from keras.engine.topology import Layer
from keras import objectives
from keras.layers import Input
import keras
import matplotlib.pyplot as plt
#A layer which can be given as an output to force a regularization term between two layers
class CustomRegularization(Layer):
    def __init__(self, **kwargs):
        super(CustomRegularization, self).__init__(**kwargs)

    def call(self, x, mask=None):
        ld = x[0]
        rd = x[1]
        bce = objectives.binary_crossentropy(ld, rd)
        loss2 = keras.backend.sum(bce)
        self.add_loss(loss2, x)
        return bce

    def get_output_shape_for(self, input_shape):
        return (input_shape[0][0], 1)
def zero_loss(y_true, y_pred):
    return keras.backend.zeros_like(y_pred)
#Create regularization layer between two corresponding layers of encoder and decoder
def buildUpDownRegularization(layerNo, input, up_layers, down_layers):
    for i in range(0, layerNo):
        input = up_layers[i](input)
    start = input
    for i in range(layerNo, len(up_layers)):
        input = up_layers[i](input)
    for j in range(0, len(down_layers) - layerNo):
        input = down_layers[j](input)
    end = input
    cr = CustomRegularization()([start, end])
    return cr
# Define shape of the network, layers, some hyperparameters and training data
sizes = [784, 400, 200, 100, 50]
up_layers = []
down_layers = []
for i in range(1, len(sizes)):
    layer = Dense(units=sizes[i], activation='sigmoid', input_dim=sizes[i-1])
    up_layers.append(layer)
for i in range(len(sizes)-2, -1, -1):
    layer = Dense(units=sizes[i], activation='sigmoid', input_dim=sizes[i+1])
    down_layers.append(layer)
batch_size = 128
num_classes = 10
epochs = 100
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
x_train = x_train.reshape([x_train.shape[0], 28*28])
x_test = x_test.reshape([x_test.shape[0], 28*28])
y_train = x_train
y_test = x_test
optimizer = keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
"""
### Normal autoencoder like in base mnist example
model = keras.models.Sequential()
for layer in up_layers:
    model.add(layer)
for layer in down_layers:
    model.add(layer)
model.compile(optimizer=optimizer, loss=keras.backend.binary_crossentropy)
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs)
score = model.evaluate(x_test, y_test, verbose=0)
#print('Test loss:', score[0])
#print('Test accuracy:', score[1])
decoded_imgs = model.predict(x_test)
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
"""
### My autoencoder where each subpart is also an autoencoder
#This part is only because the model needs a path from start to end, contentwise this should do nothing
output = input = Input(shape=(sizes[0],))
for i in range(0, len(up_layers)):
    output = up_layers[i](output)
for i in range(0, len(down_layers)):
    output = down_layers[i](output)
crs = [output]
losses = [zero_loss]
#Build the regularization layer
for i in range(len(up_layers)):
    crs.append(buildUpDownRegularization(i, input, up_layers, down_layers))
    losses.append(zero_loss)
#Create and train model with adapted training data
network = Model([input], crs)
optimizer = keras.optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
network.compile(loss=losses, optimizer=optimizer)
dummy_train = np.zeros([y_train.shape[0], 1])
dummy_test = np.zeros([y_test.shape[0], 1])
training_data = [y_train]
test_data = [y_test]
for i in range(len(network.outputs)-1):
    training_data.append(dummy_train)
    test_data.append(dummy_test)
network.fit(x_train, training_data, batch_size=batch_size, epochs=epochs,verbose=1, validation_data=(x_test, test_data))
score = network.evaluate(x_test, test_data, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
decoded_imgs = network.predict(x_test)
n = 10 # how many digits we will display
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[0][i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()
If you run the code as is, it shows that the reconstruction ability is lost in my version.
I expect behavior similar to the commented-out code, which shows a normal autoencoder.
Edit: As mentioned in the answers, this works well with MSE instead of crossentropy and a learning rate of 0.01. 100 epochs with that setting produce really good results.
Edit 2: I would like the backpropagation to work as in this image (https://imgur.com/OOo757x), i.e. the backpropagation of the loss of a certain layer stops at the corresponding layer. I think I didn't make this clear before, and I don't know if the code currently does that.
Edit 3: Although this code runs and returns a good-looking solution, the CustomRegularization layer is not doing what I thought it would do, so it does not behave as described above.
It seems like the main issue is the use of binary cross-entropy to minimize the difference between encoder and decoder. The internal representation in the network is not going to be a single class probability like the output might be if you were classifying MNIST digits. I was able to get your network to output some reasonable-looking reconstructions with these simple changes:
Using objectives.mean_squared_error instead of objectives.binary_crossentropy in the CustomRegularization class
Changing number of epochs to 5
Changing learning rate to .01
Changes 2 and 3 were simply made to speed up the testing. Change 1 is the key here. Cross entropy is designed for problems where there is a binary "ground truth" variable and an estimate of that variable. However, you do not have a binary truth value in the middle of your network, only at the output layer. Thus a cross entropy loss function in the middle of the network doesn't make much sense (at least to me) -- it will be trying to measure entropy for a variable that isn't binary. Mean squared error, on the other hand, is a bit more generic and should work for this case since you are simply minimizing the difference between two real values. In essence, the middle of the network is performing regression (difference between activations in two continuous values, i.e. layers), not classification, so it needs a loss function that is appropriate for regression.
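Concretely, the first change would look roughly like this inside the CustomRegularization layer from the question (a sketch; everything else in the class stays the same):

def call(self, x, mask=None):
    ld = x[0]
    rd = x[1]
    # Penalize the squared difference between the two activations instead of
    # treating them as binary probabilities.
    diff = objectives.mean_squared_error(ld, rd)
    loss2 = keras.backend.sum(diff)
    self.add_loss(loss2, x)
    return diff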
I also want to suggest that there may be a better approach to accomplish what you want. If you really want the encoder and decoder to be exactly the same, you can share weights between them. Then they will be identical, not just highly similar, and your model will have fewer parameters to train. There is a decent explanation of shared (tied) weights autoencoders with Keras here if you're curious.
Reading your code it does seem like it is doing what you want in your illustration, but I am not really sure how to verify that.
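For the shared (tied) weights suggestion, here is a minimal sketch assuming tf.keras; DenseTied is a hypothetical helper written for illustration, not a built-in Keras layer:

import tensorflow as tf
from tensorflow import keras

class DenseTied(keras.layers.Layer):
    """Decoder layer that reuses the transposed kernel of a given encoder Dense layer."""
    def __init__(self, tied_to, activation=None, **kwargs):
        super(DenseTied, self).__init__(**kwargs)
        self.tied_to = tied_to
        self.activation = keras.activations.get(activation)

    def build(self, input_shape):
        # The output dimension equals the input dimension of the tied encoder layer.
        out_dim = int(self.tied_to.kernel.shape[0])
        self.bias = self.add_weight(name='bias', shape=(out_dim,), initializer='zeros')

    def call(self, inputs):
        # y = activation(x W^T + b), where W is the encoder's kernel.
        return self.activation(tf.matmul(inputs, self.tied_to.kernel, transpose_b=True) + self.bias)

# A 784 -> 64 -> 784 autoencoder whose decoder reuses the encoder weights.
encoder_layer = keras.layers.Dense(64, activation='sigmoid')
inputs = keras.Input(shape=(784,))
code = encoder_layer(inputs)
outputs = DenseTied(encoder_layer, activation='sigmoid')(code)
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='mse')
autoencoder.summary()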

Changing batch_size parameter in keras leads to broadcast error

I am running a simple encoder-decoder setup to train a representation for a one dimensional image. In this sample the input are lines with varying slopes and in the encoded layer we would expect something that resembles the slope. My setup is keras with a tensorflow backend. I am very new to this as well.
It all works fine, at least until I move away from steps_per_epoch to batch_size in the model.fit() method. Certain values of batch_size, such as 1, 2, 3, 8 and 16, do work; for others I get a ValueError. My initial guess was that it had to be 2^n, but that did not work.
The error I get for batch_size = 5
ValueError: operands could not be broadcast together with shapes (5,50) (3,50) (5,50)
I am trying to understand what relation between batch_size and the size of the training data is valid so that training always passes. I assumed that the training set would simply be divided into floor(N/batch_size) batches and the remainder would be processed as one smaller batch.
My questions are:
What is the relation between the size of the data set and the allowed values of batch_size?
What exactly is Keras/TensorFlow trying to do such that batch_size matters here?
Thank you very much for the help.
The code to reproduce this is
import numpy as np
from keras.models import Model
from keras.layers import Input, Dense, Conv1D, Concatenate
from keras.losses import mse
from keras.optimizers import Adam
INPUT_DIM = 50
INTER_DIM = 15
LATENT_DIM = 1
# Prepare Sample Data
one_line = np.linspace(1, 30, INPUT_DIM).reshape(1, INPUT_DIM)
test_array = np.repeat(one_line, 1000, axis=0)
slopes = np.linspace(0, 1, 1000).reshape(1000, 1)
data = test_array * slopes
# Train test split
train_mask = np.where(np.random.sample(1000) < 0.8, 1, 0).astype('bool')
x_train = data[train_mask].reshape(-1, INPUT_DIM, 1)
x_test = data[~train_mask].reshape(-1, INPUT_DIM, 1)
# Define Model
input = Input(shape=(INPUT_DIM, 1), name='input')
conv_layer_small = Conv1D(filters=1, kernel_size=[3], padding='same')(input)
conv_layer_medium = Conv1D(filters=1, kernel_size=[5], padding='same')(input)
merged_convs = Concatenate()(
    [conv_layer_small, conv_layer_medium])
latent = Dense(LATENT_DIM, name='latent_layer',
               activation='relu')(merged_convs)
encoder = Model(input, latent)
decoder_int = Dense(INTER_DIM, name='dec_int_layer', activation='relu')(latent)
output = Dense(INPUT_DIM, name='output', activation='linear')(decoder_int)
encoder_decoder = Model(input, output, name='encoder_decoder')
# Add Loss
reconstruction_loss = mse(input, output)
encoder_decoder.add_loss(reconstruction_loss)
encoder_decoder.compile(optimizer='adam')
if __name__ == '__main__':
    epochs = 100
    encoder_decoder.fit(
        x_train,
        epochs=epochs,
        batch_size=4,
        verbose=2
    )
