Keras LSTM with varying timesteps [duplicate]

This question already has an answer here:
Classifying sequences of different lengths [duplicate]
(1 answer)
Closed 5 years ago.
Suppose the input to the model is a series of vectors, each with equal length. However, the number of vectors in each input can change. I want to make an LSTM model using Keras, but if I were to write
inputs = keras.layers.Input(shape=dims)
lstm_out = keras.layers.LSTM(16)(inputs)
Then what would I put for "dims"? Thanks so much.

You can fix an upper bound for dims. When an input has fewer vectors than that upper bound, you can pad the rest with zero vectors.
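One way to implement that padding idea (a minimal sketch; MAX_TIMESTEPS and FEATURE_DIM are placeholder names, and the Masking layer is an optional addition so the LSTM skips the zero padding):

import keras

MAX_TIMESTEPS = 50  # assumed upper bound on the number of vectors per input
FEATURE_DIM = 32    # assumed length of each vector

inputs = keras.layers.Input(shape=(MAX_TIMESTEPS, FEATURE_DIM))
masked = keras.layers.Masking(mask_value=0.0)(inputs)  # skip all-zero timesteps
lstm_out = keras.layers.LSTM(16)(masked)
model = keras.models.Model(inputs, lstm_out)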

Related

LSTM with different input sizes for each batch (PyTorch)

I am switching from TensorFlow to PyTorch and I am having some trouble with my net.
I have made a Collator (for the DataLoader) that pads each tensor (originally a sentence) in each batch to the max length of that batch,
so I have a different input size for each batch.
My network consists of LSTM -> LSTM -> DENSE.
My question is: how can I specify this variable input size to the LSTM?
I assume that in TensorFlow I would do Input((None, x)) before the LSTM.
Thank you in advance
The input size of the LSTM is not how long a sample is. Let's say you have a batch with three samples: the first has length 10, the second 12 and the third 15. What you already did is pad them all with zeros so that all three have length 15. The next batch may be padded to 16 instead, and that is fine.
But this 15 is not the input size of the LSTM. The input size is the size of one element of a sample in the batch, and that should always be the same.
For example, when you want to classify names:
the inputs are names, for example "joe", "mark", "lucas".
But what the LSTM takes as input are the characters. So "j", then "o", and so on. So as input size you have to put how many dimensions one character has.
If you use embeddings, that is the embedding size. If you use one-hot encoding, it is the vector size (probably 26). An LSTM takes the characters of a word iteratively, not the entire word at once.
self.lstm = nn.LSTM(input_size=embedding_size, ...)  # input_size = features per timestep, not sequence length
I hope this answered your question; if not, please clarify it! Good luck!
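For the variable-length batches themselves, a minimal sketch (all sizes here are made up, following the 10/12/15 example above) is to keep the zero padding and hand the LSTM a packed sequence so it ignores the padded steps:

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

embedding_size = 8  # assumed per-character feature size
lstm = nn.LSTM(input_size=embedding_size, hidden_size=16, batch_first=True)

batch = torch.zeros(3, 15, embedding_size)  # batch padded to its max length, 15
lengths = torch.tensor([10, 12, 15])        # true lengths before padding

packed = pack_padded_sequence(batch, lengths, batch_first=True, enforce_sorted=False)
packed_out, (h_n, c_n) = lstm(packed)
out, _ = pad_packed_sequence(packed_out, batch_first=True)  # back to a padded tensor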

Confusion matrix - variables with inconsistent number of samples [duplicate]

This question already has answers here:
sklearn: Found arrays with inconsistent numbers of samples when calling LinearRegression.fit()
(10 answers)
Closed 2 years ago.
Supervised learning with numeric data, applying a KNN classifier, gives an error when I call metrics.confusion_matrix(y_test, y_pred). The resulting error message is: Found input variables with inconsistent numbers of samples: [40000, 2000]. Thanks in advance for shedding some light on this.
The confusion matrix compares the values in both arrays and basically tells you how many samples were labelled the same and how many differed between the two. For this, the number of elements in both arrays must be the same, which is exactly what the error says.
So make sure that both of them have the same number of elements.
Perhaps if you include the previous code that produces y_test and y_pred, it would be easier to see why the sizes differ (with 40000 vs. 2000, it looks like the predictions may have been made on a different split than y_test).
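For reference, a minimal sketch of how the two arrays must line up (the labels here are made up for illustration):

from sklearn.metrics import confusion_matrix

y_test = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]  # must have the same number of elements as y_test
print(confusion_matrix(y_test, y_pred))
# [[2 0]
#  [1 2]]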

How to apply map function to a tensor in tensorflow [duplicate]

This question already has answers here:
Tensorflow "map operation" for tensor?
(3 answers)
Closed 4 years ago.
I have a 4D tensor, and when I run it, all values except for about 1/100 of them are rather close to what I need, so I want to apply a map function that sets the bad numbers to some fixed value. The question is: how can I apply a map function in terms of tensors, i.e. BEFORE the Session's run function? (I need to calculate a loss function and backpropagate the result, and these operations happen before sess.run.)
You can't use Python's map function, because TensorFlow works with symbolic tensors, which are only filled with values the moment you run session.run(). But TensorFlow provides a map function for tensors, called tf.map_fn; see https://www.tensorflow.org/api_docs/python/tf/map_fn
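A minimal sketch in the question's TF 1.x, Session-based style (the tensor, threshold, and replacement value are made up for illustration):

import tensorflow as tf

x = tf.constant([[0.1, 0.2], [9.5, 0.3]])

def fix_bad(row):
    # replace values above the (assumed) threshold 1.0 with the fixed value 0.0
    return tf.where(row > 1.0, tf.zeros_like(row), row)

y = tf.map_fn(fix_bad, x)  # builds graph ops, so it works before session.run

with tf.Session() as sess:
    print(sess.run(y))  # [[0.1 0.2], [0.  0.3]]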

Input shape for Keras LSTM/GRU for floats

I'm sorry for asking such a basic thing; I can't apply the answers from other questions to my task.
Currently I get the well-known error:
expected lstm_input_1 to have 3 dimensions, but got array with shape (7491, 1025)
My data:
a matrix: 7491 rows, with 1025 float numbers per row
So how to make it 3d? Or am I trying to use wrong layer model?
You need an explicit time dimension and a batch dimension. You always have a batch dimension (1 if you are using only one batch), and recurrent models also need a time dimension, as they are sequential models that operate over time.
Reshape your data to (1, 7491, 1025) for 1 batch and a sequence of length 7491 with 1025 features per time-step.
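As a minimal sketch (using random data as a stand-in for the real matrix):

import numpy as np

data = np.random.rand(7491, 1025).astype("float32")  # stand-in for the real matrix
data_3d = data.reshape(1, 7491, 1025)  # 1 batch, 7491 timesteps, 1025 features
print(data_3d.shape)  # (1, 7491, 1025)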

Difference between keras.backend.max and keras.backend.argmax [duplicate]

This question already has answers here:
numpy: what is the logic of the argmin() and argmax() functions?
(5 answers)
Closed 2 years ago.
I am a beginner in Deep Learning and while performing a practical assignment, came across the Keras documentation on keras.backend.
I went through the explanation a number of times; however, I cannot exactly understand the difference between the max and argmax functions.
argmax gives the index of the maximum value in an array, and max gives the maximum value itself. Please check the example given below:
import tensorflow as tf
x = tf.constant([1,10,2,4,15])
print(tf.keras.backend.argmax(x, axis=-1).numpy())  # output: 4 (the index of the max value, 15)
print(tf.keras.backend.max(x, axis=-1).numpy())     # output: 15
