Keras LSTM - predict on inputs of different length?

I have fitted an LSTM that deals with inputs of different length:
model = Sequential()
model.add(LSTM(units=10, return_sequences=False, input_shape=(None, 5)))
model.add(Dense(units=1, activation='sigmoid'))
Having fitted the model, I want to test it on inputs of different size.
x_test.shape # = 100000
x_test[0].shape # = (1, 5)
x_test[1].shape # = (3, 5)
x_test[2].shape # = (8, 5)
Testing on single instances j is not a problem (model.predict(x_test[j])), but looping over all of them is really slow.
Is there a way of speeding up the computation? model.predict(x_test) does not work.
Thank you!

The most common way to speed up model inference is to run it on the GPU instead of the CPU (I'm assuming you are not already doing that). You can set up GPU support by following the official guide here. Unless you are explicitly asking Keras to run inference on the CPU, your code should work as is, without any changes. To confirm that you are actually using the GPU, you can follow this article.
Hope the answer was helpful!
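If you want a quick way to verify that, here is a minimal sketch (assuming a TensorFlow 2.x backend; not part of the original answer):
import tensorflow as tf
# An empty list means Keras is running on the CPU only.
print(tf.config.list_physical_devices('GPU'))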

The best solution that I have found so far is grouping together data windows with the same length. For my problem, it's enough to significantly speed up the computation.
Hope this trick helps other people.
import numpy as np

def predict_custom(model, x):
    """x should be a list of np.arrays with a different number of rows, but the same number of columns."""
    # dictionary with key = length of the window, value = indices of samples with that length
    dic = {}
    for i, window in enumerate(x):
        if dic.get(window.shape[0]):
            dic[window.shape[0]].append(i)
        else:
            dic[window.shape[0]] = [i]
    y_pred = np.full((len(x), 1), np.nan)
    # loop over the dictionary and predict together samples of the same length
    for length, indexes in dic.items():
        # select samples of the same length (conversion to np.array is used for subsetting "x" using "indexes")
        same_length = np.asarray(x, dtype=object)[indexes].tolist()
        # gather these samples in a 3D np.array
        x_3d = np.stack(same_length, axis=0)
        # use the dictionary values to insert results in the corresponding rows of y_pred
        y_pred[indexes] = model.predict(x_3d)
    return y_pred
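A quick usage sketch (hypothetical data, matching the shapes from the question):
x_test = [np.random.rand(1, 5), np.random.rand(3, 5), np.random.rand(8, 5)]
y_pred = predict_custom(model, x_test)  # shape (3, 1): one prediction per window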

Related

Keras LSTM network predictions align with input

The above may sound ideal, but I'm trying to predict one step ahead - i.e. with a look_back of 1. My code is as follows:
def create_scaled_datasets(data, scaler_transform, train_perc=0.9):
    # Set training size
    train_size = int(len(data) * train_perc)
    # Reshape for scaler transform
    data = data.reshape((-1, 1))
    # Scale data to the scaler's feature range
    data_scaled = scaler_transform.fit_transform(data)
    # Reshape again
    data_scaled = data_scaled.reshape((-1, 1))
    # Split into train and test data, keeping time order
    train, test = data_scaled[0:train_size + 1, :], data_scaled[train_size:len(data), :]
    return train, test

# Instantiate scaler transform
scaler = MinMaxScaler(feature_range=(0, 1))

model = Sequential()
model.add(LSTM(5, input_shape=(1, 1), activation='tanh', return_sequences=True))
model.add(Dropout(0.1))
model.add(LSTM(12, input_shape=(1, 1), activation='tanh', return_sequences=True))
model.add(Dropout(0.1))
model.add(LSTM(2, input_shape=(1, 1), activation='tanh', return_sequences=False))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

# Create train/test data sets
train, test = create_scaled_datasets(data, scaler)
trainY = []
for i in range(len(train) - 1):
    trainY = np.append(trainY, train[i + 1])
train = np.reshape(train, (train.shape[0], 1, train.shape[1]))
plotting_test = test
test = np.reshape(test, (test.shape[0], 1, test.shape[1]))
model.fit(train[:-1], trainY, epochs=150, verbose=0)
testPredict = model.predict(test)
plt.plot(testPredict, 'g')
plt.plot(plotting_test, 'r')
plt.show()
with an output plot (image not reproduced here) of the predictions in green against the actual test values in red.
In essence, what I want to achieve is for the model to predict the next value, and I attempt to do this by training on the actual values as the features, with the labels being the actual values shifted along by one (look_back of 1). Then I predict on the test data. As you can see from the plot, the model does a pretty good job, except it doesn't seem to be predicting the future, but instead seems to be predicting the present... I would expect the plot to look similar, except with the green line (the predictions) shifted one point to the left. I have tried increasing the look_back value, but it always seems to do the same thing, which makes me think I'm training the model wrong, or attempting to predict incorrectly. If I'm reading this wrong and the model is indeed doing what I want but I'm interpreting it wrong (also very possible), how do I then predict further into the future?
To add on to @MSalters' comment, and somewhat building on this, it is possible, although not guaranteed, that you could "help" your model learn something better than the identity if you force it to learn not the actual value of the next step, but instead the difference from the current step to the next.
To take this one step further, you could also keep an exponential moving average and learn the difference from that, somewhat like was done here.
In short, it makes statistical sense for the model to predict the same value, as it is a low-risk guess; hopefully, learning a difference instead will not collapse to zero in the same way.
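A minimal sketch of the differencing idea (hypothetical variable names; data_scaled is the scaled 1-D series from the question):
# target[t] = series[t + 1] - series[t]: the network learns the change, not the value
diff_targets = data_scaled[1:] - data_scaled[:-1]
diff_inputs = data_scaled[:-1].reshape((-1, 1, 1))  # (samples, timesteps=1, features=1)
# at inference time, add the predicted difference back onto the last observed value:
# next_value = series[-1] + model.predict(last_window)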
Other things I noticed:
Dropout - there is no need for any regularization before you have managed to over-fit; it just complicates debugging.
Just one step into the past - it is likely you are losing a lot of required information, in effect forcing your net to have no idea what to do and thus to guess the same value. If you gave it even a single additional value from the past, it could form a decent approximation of the derivative, which sounds important (only you know your data).

How to build an RNN using numpy

I'm trying to implement a Recurrent Neural Network using NumPy in Python. I'm trying to implement a Many-to-One RNN for a classification problem. I'm a little fuzzy on the pseudocode, especially on the BPTT concept. I'm comfortable with the forward pass (not entirely sure if my implementation is correct), but really confused with the backward pass, and I need some advice from experts in this field.
I did check out related posts :
1) Implementing RNN in numpy
2) Output for RNN
3) How can I build RNN
But I feel my issue is with understanding the pseudocode / concept first; the code in those posts is complete and has reached a further stage than mine.
My Implementation is inspired from the tutorial:
WildML RNN from scratch
I did implement a Feed-Forward Neural Network following part of a tutorial from the same author, but I'm really confused with this implementation of his. Andrew Ng's RNN video suggests 3 different weight matrices (weights for the activation, input and output layers), but the above tutorial only has two sets of weights (correct me if I'm wrong).
The nomenclature in my code follows that of Andrew Ng's RNN pseudo code ...
I'm reshaping my input samples into 3D (batch_size, n_timesteps, n_dimensions)... Once I reshape my samples, I'm doing the forward pass on each sample separately...
Here's my code:
def RNNCell(X, lr, y=None, n_timesteps=None, n_dimensions=None, return_sequence=None, bias=None):
    '''Simple function to compute forward and backward passes for a Many-to-One Recurrent Neural Network Model.
    This function reshapes X, Y into a 3D array of shape (batch_size, n_timesteps, n_dimensions) and then performs
    recurrent operations on each sample of the data for n_timesteps.'''
    # If the user has specified some target variable
    if len(y) != 0:
        # No. of unique values in the target variable will be the dimension of the output layer
        _, n_unique = np.unique(y, return_counts=True)
    else:
        # If there's no target variable given, the dimension of the target variable defaults to 2
        n_unique = 2
    # Weights to multiply with the input samples
    Wx = np.random.uniform(low=0.0,
                           high=0.3,
                           size=(n_dimensions, n_dimensions))
    # Weights to multiply with the resulting activations
    Wy = np.random.uniform(low=0.0,
                           high=0.3,
                           size=(n_dimensions, n_timesteps))
    # Weights to multiply with the activations of previous time steps
    Wa = np.random.randn(n_dimensions, n_dimensions)
    # Dictionary to hold the activations of each time step
    activations = {'a-0': np.zeros(shape=(n_timesteps - 1, n_dimensions),
                                   dtype=float)}
    # List to hold Yhat for each sample
    Yhat = []
    try:
        # Reshape X to align with the shape of the RNN architecture
        X = np.reshape(X, newshape=(len(X), n_timesteps, n_dimensions))
    except ValueError:
        return "Sorry, can't reshape the array into your shape"

    def Forward_Prop(sample):
        # Output at the last time step
        Ot = 0
        # In each time step
        for time_step in range(n_timesteps + 1):
            if time_step < n_timesteps:
                # activation G( a<t>.Wa + X<t>.Wx )
                activations['a-' + str(time_step + 1)] = ReLu(np.dot(activations['a-' + str(time_step)], Wa)
                                                              + np.dot(sample[time_step, :].reshape(1, n_dimensions), Wx))
            # If it's the last time step, use the softmax activation function
            elif time_step == n_timesteps:
                # a<t>.Wy, which the caller appends to the Yhat list
                Ot = softmax(np.dot(activations['a-' + str(time_step)], Wy))
        # Return the output probabilities
        return Ot

    def Backward_Prop(Yhat):
        # List to hold the errors for the last layer
        error = []
        for ind in range(len(Yhat)):
            error.append(y[ind] - Yhat[ind])
        error = np.array(error)
        # Calculating the delta for the output layer
        delta_out = error * lr
        # * relu_derivative(activations['a-' + str(n_timesteps)])
        # Calculating the gradient for the output layer
        grad_out = np.dot(delta_out.reshape(len(X), n_timesteps),
                          activations['a-' + str(n_timesteps)])
        # I'm basically stuck at this point
        # Adjusting the weights for the output layer
        Wy = Wy - (lr * grad_out.reshape((n_dimensions, n_timesteps)))

    for sample in X:
        Yhat.append(Forward_Prop(sample))
    Backward_Prop(Yhat)
    return Yhat

# DUMMY INPUT DATA
X = np.random.random_integers(low=0, high=5, size=(10, 10))
# DUMMY LABELS
y = np.array([[0],
              [1],
              [1],
              [1],
              [0],
              [0],
              [1],
              [1],
              [0],
              [1]])
I understand that my BPTT implementation is wrong, but I'm not thinking clearly and I need some experts' perspective on where exactly I'm missing the trick. I don't expect detailed debugging of my code; I only require a high-level overview of the pseudocode for back propagation (assuming my forward prop is correct). I think my fundamental problem could also be the way I'm doing my forward pass on each sample individually.
I've been stuck on this problem for 3 days now, and it's really frustrating not being able to think clearly. I'd be really grateful if someone could point me in the right direction and clear up my confusion. Thank you for your time in advance!! I really appreciate it once again!
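For orientation only, here is a minimal sketch of the output-layer step of BPTT for a many-to-one classifier with a softmax output and cross-entropy loss (hypothetical names; this is not the asker's code and it omits the recurrent part):
# a_T: last hidden state, shape (1, n_dimensions); y_onehot: one-hot target, shape (1, n_classes)
Ot = softmax(np.dot(a_T, Wy))                       # forward, last time step
dOt = Ot - y_onehot                                 # gradient of cross-entropy w.r.t. the pre-softmax output
grad_Wy = np.dot(a_T.T, dOt)                        # output-layer weight gradient
delta = np.dot(dOt, Wy.T) * relu_derivative(a_T)    # error flowing back into the last hidden state
# BPTT then walks backwards over the time steps, accumulating grad_Wa += a_{t-1}.T . delta_t and
# grad_Wx += x_t.T . delta_t, and propagating delta_{t-1} = (delta_t . Wa.T) * relu'(a_{t-1})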

keras: unsupervised learning with external constraint

I have to train a network on unlabelled data of binary type (True/False), which sounds like unsupervised learning. This is what the normalised data look like:
array([[-0.05744527, -1.03575495, -0.1940105 , -1.15348956, -0.62664491,
        -0.98484037],
       [-0.05497629, -0.50935675, -0.19396862, -0.68990988, -0.10551919,
        -0.72375012],
       [-0.03275552,  0.31480204, -0.1834951 ,  0.23724946,  0.15504367,
         0.29810553],
       ...,
       [-0.05744527, -0.68482282, -0.1940105 , -0.87534175, -0.23580062,
        -0.98484037],
       [-0.05744527, -1.50366446, -0.1940105 , -1.52435329, -1.14777063,
        -0.98484037],
       [-0.05744527, -1.26970971, -0.1940105 , -1.33892142, -0.88720777,
        -0.98484037]])
However, I do have a constraint on the total number of True labels in my data. This means I can't build a classical custom loss function in Keras taking (y_true, y_pred) arguments as usual: my external constraint is only on the predicted totals of True and False, not on the individual labels.
My question is whether there is a somewhat "standard" approach to this kind of problems, and how that is implementable in Keras.
POSSIBLE SOLUTION
Should I assign y_true randomly as 0/1, have a network return y_pred as 1/0 with a sigmoid activation function, and then define my loss function as
sum_y_true = 500  # arbitrary constant known a priori

def loss_function(y_true, y_pred):
    loss = np.abs(y_pred.sum() - sum_y_true)
    return loss
In the end, I went with the following solution, which worked.
1) Define batches in your dataframe df with a batch_id column, so that in each batch Y_train is your identical "batch ground truth" (in my case, the total number of True labels in the batch). You can then pass these instances together to the network. This can be done with a generator:
def grouper(g, x, y):
    while True:
        for gr in g.unique():
            # this assigns indices to the entire set of values in g,
            # then subsets to all the rows in which g == gr
            indices = g == gr
            yield (x[indices], y[indices])

# train set
train_generator = grouper(df.loc[df['set'] == 'train', 'batch_id'], X_train, Y_train)
# validation set
val_generator = grouper(df.loc[df['set'] == 'val', 'batch_id'], X_val, Y_val)
2) define a custom loss function to track how closely the total number of instances predicted as True matches the ground truth:
def custom_delta(y_true, y_pred):
    loss = K.abs(K.mean(y_true) - K.sum(y_pred))
    return loss

def custom_wrapper():
    def custom_loss_function(y_true, y_pred):
        return custom_delta(y_true, y_pred)
    return custom_loss_function
Note that here
a) Each y_true label is already the sum of the ground truth in our batch (because we don't have individual values). That's why y_true is not summed over;
b) K.mean is actually a bit of overkill to extract a single scalar from this uniform tensor, in which all y_true values in each batch are identical - K.min or K.max would also work, but I haven't tested whether their performance is faster.
3) Use fit_generator instead of fit:
fmodel = Sequential()
# ...your layers...

# Create the loss function object using the wrapper function above
loss_ = custom_wrapper()
fmodel.compile(loss=loss_, optimizer='adam')

history1 = fmodel.fit_generator(train_generator, steps_per_epoch=total_batches,
                                validation_data=val_generator,
                                validation_steps=df.loc[df['set'] == 'val', 'batch_id'].nunique(),
                                epochs=20, verbose=2)
This way the problem is basically addressed as one of supervised learning, although without individual labels, which means that notions like true/false positive are meaningless here.
This approach not only gave me a y_pred that closely matches the totals I know per batch; it actually finds two groups (True/False) that occupy the expected different portions of parameter space.

Multi-label classification with class weights in Keras

I have 1000 classes in the network and they have multi-label outputs. For each training example, the number of positive outputs is the same (i.e. 10), but they can be assigned to any of the 1000 classes. So 10 classes have output 1 and the remaining 990 have output 0.
For the multi-label classification, I am using 'binary cross-entropy' as the cost function and 'sigmoid' as the activation function. When I tried the rule of 0.5 as the cut-off for 1 or 0, all of them were 0. I understand this is a class imbalance problem. From this link, I understand that I might have to create extra output labels. Unfortunately, I haven't been able to figure out how to incorporate that into a simple neural network in Keras.
nclasses = 1000

# if we wanted to maximize an imbalance problem!
#class_weight = {k: len(Y_train)/(nclasses*(Y_train==k).sum()) for k in range(nclasses)}

inp = Input(shape=[X_train.shape[1]])
x = Dense(5000, activation='relu')(inp)
x = Dense(4000, activation='relu')(x)
x = Dense(3000, activation='relu')(x)
x = Dense(2000, activation='relu')(x)
x = Dense(nclasses, activation='sigmoid')(x)
model = Model(inputs=[inp], outputs=[x])
adam = keras.optimizers.adam(lr=0.00001)
model.compile('adam', 'binary_crossentropy')
history = model.fit(
    X_train, Y_train, batch_size=32, epochs=50, verbose=0, shuffle=False)
Could anyone help me with the code here and I would also highly appreciate if you could suggest a good 'accuracy' metric for this problem?
Thanks a lot :) :)
I have a similar problem and unfortunately have no answer for most of the questions. Especially the class imbalance problem.
In terms of metrics there are several possibilities: in my case, I use the top 1/2/3/4/5 results and check if one of them is right. Because in your case you always have the same number of labels = 1, you could take your top 10 results and see what percentage of them is right, averaging this result over your batch size. I didn't find a way to include this algorithm as a Keras metric; instead, I wrote a callback which calculates the metric at epoch end on my validation data set.
Also, if you predict the top n results on a test dataset, count how many times each class is predicted. The Counter class is really convenient for this purpose.
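A minimal sketch of that kind of metric (hypothetical names; assumes y_pred holds the sigmoid outputs and y_true the 0/1 label matrix):
import numpy as np
from collections import Counter

def top10_hit_rate(y_true, y_pred):
    # indices of the 10 highest-scoring classes per example
    top10 = np.argsort(y_pred, axis=1)[:, -10:]
    hits = [y_true[i, top10[i]].sum() / 10.0 for i in range(len(y_true))]
    return np.mean(hits)

# how often each class appears among the top-10 predictions
predicted_counts = Counter(np.argsort(y_pred, axis=1)[:, -10:].ravel())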
Edit: I found a method to include class weights without splitting the output.
You need a NumPy 2D array containing weights with shape [number of classes to predict, 2] (background and signal).
Such an array could be calculated with this function:
def calculating_class_weights(y_true):
    from sklearn.utils.class_weight import compute_class_weight
    number_dim = np.shape(y_true)[1]
    weights = np.empty([number_dim, 2])
    for i in range(number_dim):
        weights[i] = compute_class_weight('balanced', [0., 1.], y_true[:, i])
    return weights
The solution is now to build your own binary crossentropy loss function in which you multiply your weights yourself:
def get_weighted_loss(weights):
    def weighted_loss(y_true, y_pred):
        return K.mean((weights[:, 0] ** (1 - y_true)) * (weights[:, 1] ** y_true) * K.binary_crossentropy(y_true, y_pred), axis=-1)
    return weighted_loss
weights[:,0] is an array with all the background weights and weights[:,1] contains all the signal weights.
All that is left is to include this loss into the compile function:
model.compile(optimizer=Adam(), loss=get_weighted_loss(class_weights))
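Putting the two pieces together, a short usage sketch (assumes Y_train is the 0/1 label matrix from the question):
class_weights = calculating_class_weights(Y_train)
model.compile(optimizer=Adam(), loss=get_weighted_loss(class_weights))
model.fit(X_train, Y_train, batch_size=32, epochs=50)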

Tensorflow- How to display accuracy rate for a linear regression model

I have a linear regression model that seems to work. I first load the data into X and the target column into Y, after that I implement the following...
X_train, X_test, Y_train, Y_test = train_test_split(
    X_data,
    Y_data,
    test_size=0.2
)

rng = np.random
n_rows = X_train.shape[0]

X = tf.placeholder("float")
Y = tf.placeholder("float")
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

pred = tf.add(tf.multiply(X, W), b)
cost = tf.reduce_sum(tf.pow(pred - Y, 2) / (2 * n_rows))
optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(cost)

init = tf.global_variables_initializer()
init_local = tf.local_variables_initializer()

with tf.Session() as sess:
    sess.run([init, init_local])
    for epoch in range(FLAGS.training_epochs):
        avg_cost = 0
        for (x, y) in zip(X_train, Y_train):
            sess.run(optimizer, feed_dict={X: x, Y: y})
        # display logs per epoch step
        if (epoch + 1) % FLAGS.display_step == 0:
            c = sess.run(
                cost,
                feed_dict={X: X_train, Y: Y_train}
            )
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c))
    print("Optimization Finished!")
    accuracy, accuracy_op = tf.metrics.accuracy(labels=tf.argmax(Y_test, 0), predictions=tf.argmax(pred, 0))
    print(sess.run(accuracy))
I cannot figure out how to print out the model's accuracy. For example, in sklearn, it is simple, if you have a model you just print model.score(X_test, Y_test). But I do not know how to do this in tensorflow or if it is even possible.
I think I'd be able to calculate the Mean Squared Error. Does this help in any way?
EDIT
I tried implementing tf.metrics.accuracy as suggested in the comments but I'm having an issue implementing it. The documentation says it takes 2 arguments, labels and predictions, so I tried the following...
accuracy, accuracy_op = tf.metrics.accuracy(labels=tf.argmax(Y_test, 0), predictions=tf.argmax(pred, 0))
print(sess.run(accuracy))
But this gives me an error...
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value accuracy/count
     [[Node: accuracy/count/read = Identity[T=DT_FLOAT, _class=["loc:@accuracy/count"], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
How exactly does one implement this?
It turns out that, since this is a multi-class Linear Regression problem and not a classification problem, tf.metrics.accuracy is not the right approach.
Instead of displaying the accuracy of my model as a percentage, I focused on reducing the Mean Squared Error (MSE) instead.
From looking at other examples, tf.metrics.accuracy is never used for Linear Regression, only classification. Normally tf.metrics.mean_squared_error is the right approach.
I implemented two ways of calculating the total MSE of my predictions to my testing data...
pred = tf.add(tf.matmul(X, W), b)
...
...
Y_pred = sess.run(pred, feed_dict={X:X_test})
mse = tf.reduce_mean(tf.square(Y_pred - Y_test))
OR
mse = tf.metrics.mean_squared_error(labels=Y_test, predictions=Y_pred)
They both do the same but obviously the second approach is more concise.
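One caveat worth noting (based on the TF 1.x metrics API, not stated in the original answer): tf.metrics.mean_squared_error returns a (value, update_op) pair backed by local variables, so using it looks roughly like this:
mse, mse_update = tf.metrics.mean_squared_error(labels=Y_test, predictions=Y_pred)
sess.run(tf.local_variables_initializer())  # the metric's running totals are local variables
sess.run(mse_update)                        # accumulate the squared errors
print(sess.run(mse))                        # read out the final value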
There's a good explanation of how to measure the accuracy of a Linear Regression model here.
I didn't think this was clear at all from the Tensorflow documentation, but you have to declare the accuracy operation, and then initialize all global and local variables, before you run the accuracy calculation:
accuracy, accuracy_op = tf.metrics.accuracy(labels=tf.argmax(Y_test, 0), predictions=tf.argmax(pred, 0))
# ...
init_global = tf.global_variables_initializer()
init_local = tf.local_variables_initializer()
sess.run([init_global, init_local])
# ...
# run accuracy calculation
I read something on Stack Overflow about the accuracy calculation using local variables, which is why the local variable initializer is necessary.
After reading the complete code you posted, I noticed a couple other things:
In your calculation of pred, you use
pred = tf.add(tf.multiply(X, W), b). tf.multiply performs element-wise multiplication, and will not give you the fully connected layers you need for a neural network (which I am assuming is what you are ultimately working toward, since you're using TensorFlow). To implement fully connected layers, where each layer i (including the input and output layers) has n_i nodes, you need separate weight and bias matrices for each pair of successive layers. The dimensions of the i-th weight matrix (the weights between the i-th layer and the (i+1)-th layer) should be (n_i, n_{i+1}), and the i-th bias matrix should have dimensions (n_{i+1}, 1). Then, going back to the multiplication operation - replace tf.multiply with tf.matmul, and you're good to go. I assume that what you have is probably fine for a single-class linear regression problem, but this is definitely the way you want to go if you plan to solve a multiclass regression problem or implement a deeper network.
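As a small sketch of what that looks like (hypothetical layer sizes, not from the question's code):
n_in, n_hidden, n_out = 6, 16, 1   # hypothetical layer sizes
W1 = tf.Variable(tf.random_normal([n_in, n_hidden], seed=42))
b1 = tf.Variable(tf.zeros([n_hidden]))
W2 = tf.Variable(tf.random_normal([n_hidden, n_out], seed=42))
b2 = tf.Variable(tf.zeros([n_out]))

hidden = tf.nn.relu(tf.add(tf.matmul(X, W1), b1))   # (batch, n_in) x (n_in, n_hidden)
pred = tf.add(tf.matmul(hidden, W2), b2)            # fully connected output layer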
Your weight and bias tensors have a shape of (1, 1). You give the variables the initial value of np.random.randn(), which, according to the documentation, generates a single floating-point number when no arguments are given. The dimensions of your weight and bias tensors need to be supplied as arguments to np.random.randn(). Better yet, you can actually initialize these to random values in Tensorflow: W = tf.Variable(tf.random_normal([dim0, dim1], seed=seed)) (I always initialize random variables with a seed value for reproducibility).
Just a note in case you don't know this already, but non-linear activation functions are required for neural networks to be effective. If all your activations are linear, then no matter how many layers you have, it will reduce to simple linear regression in the end. Many people use relu activation for hidden layers. For the output layer, use softmax activation for multiclass classification problems where the output classes are exclusive (i.e., where only one class can be correct for any given input), and sigmoid activation for multiclass classification problems where the output classes are not exclusive.
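A tiny sketch of that last distinction (hypothetical tensors hidden, W_out and b_out):
logits = tf.add(tf.matmul(hidden, W_out), b_out)
probs_exclusive = tf.nn.softmax(logits)    # exclusive classes: probabilities sum to 1
probs_multilabel = tf.nn.sigmoid(logits)   # non-exclusive classes: independent per-class probabilities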
