Keras SimpleRNN - Shape MFCC vectors - python-3.x

I'm currently trying to implement a Recurrent Neural Network in Keras. The data consists of a collection of 45,000 entries, where each entry is a variable-length sequence of MFCC vectors, each with 13 coefficients:
spoken = numpy.load('spoken.npy')
print(spoken[0]) # Gives:
example_row = [
[
5.67170000e-01 -1.79430000e-01 -7.27360000e+00 -9.59300000e-02
-9.30140000e-02 -1.62960000e-01 4.11620000e-01 3.00590000e-01
6.86360000e-02 1.07130000e+00 1.07090000e-01 5.00890000e-01
7.51750000e-01],
[.....]
]
print(spoken.shape)    # Gives: (45000,)
print(spoken[0].shape) # Gives: (N, 13) --> N is the number of MFCC vectors
I'm struggling to understand how I need to reshape this Numpy array in order to feed it to the SimpleRNN of Keras:
model = Sequential()
model.add(SimpleRNN(units=10, activation='relu', input_shape=?))
.....
Therefore, my question is how do I need to reshape a collection of variable length MFCC vectors so that I can feed it to the SimpleRNN object of Keras?

It was actually quite simple, since Keras has a built-in function for reformatting the array and padding with zeros to get a fixed length:
spoken_train = pad_sequences(spoken_train, maxlen=100)
See github issue
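For reference, a minimal end-to-end sketch of this approach (the maxlen of 100, the explicit dtype, and the placeholder output layer are my assumptions, not taken from the original code):

import numpy
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense
from keras.preprocessing.sequence import pad_sequences

spoken = numpy.load('spoken.npy', allow_pickle=True)  # object array of (N, 13) sequences

# Pad (or truncate) every sequence to 100 timesteps; keep dtype float so the
# MFCC coefficients are not cast to integers
spoken_train = pad_sequences(spoken, maxlen=100, dtype='float32')
print(spoken_train.shape)  # (45000, 100, 13)

model = Sequential()
model.add(SimpleRNN(units=10, activation='relu', input_shape=(100, 13)))
model.add(Dense(1, activation='sigmoid'))  # placeholder output layer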

Related

numpy array for sequential network: varying sequence length

I have a recurrent network (RNN) whose task is to learn to classify vectors (float32) into two classes. My model is really simple so far:
model = Sequential([
    SimpleRNN(units=10, input_shape=(None, len_vector)),
    Dense(1, activation="relu")
])
model.compile(loss='mse', optimizer='Adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=30)
To train this network, I create a dataset with 1000 instances of sequences of vectors. When I create sequences with the same length each, the training works perfectly and the dataset has shape:
[<number of sequences>, <number of vectors in each sequence>, <number of floats in each vector>]
The problem is that my model must be able to work on sequences of varying length. I don't know how (or even whether it is possible) to create a numpy array where one dimension is not constant.
While searching for a solution, I saw that setting the array's dtype=object makes it possible to assign lists of different shapes to elements of a numpy array, but the Keras model will only accept dtype="float32".
Is there a way I can build such a numpy array dataset? Or should I change how I train the model? Or is the only solution to pad the sequences with null vectors to unify their length?
(Thanks for the help. I'm fairly new to deep learning so I apologize if I'm asking for something obvious.)
Use ragged tensors; they let you feed variable-length inputs:
import numpy as np
import tensorflow as tf

_input = tf.keras.layers.Input(shape=(None, 100))
lstm = tf.keras.layers.LSTM(20)(_input)
func = tf.keras.backend.function(inputs=_input, outputs=lstm)

rt = tf.ragged.constant([np.random.randn(1, 34, 100),
                         np.random.randn(1, 55, 100),
                         np.random.randn(1, 60, 100),
                         np.random.randn(1, 70, 100)])
func(rt[1])
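If ragged tensors are not available in your TensorFlow version, the padding mentioned in the question is the other common option; a minimal sketch (the toy data, maxlen, and mask value are my assumptions, not part of the answer above):

import numpy as np
import tensorflow as tf

len_vector = 100
max_len = 70  # length of the longest toy sequence below

# Variable-length sequences, padded with zero vectors to a common length
sequences = [np.random.randn(34, len_vector),
             np.random.randn(55, len_vector),
             np.random.randn(70, len_vector)]
X = tf.keras.preprocessing.sequence.pad_sequences(
    sequences, maxlen=max_len, dtype='float32', padding='post')
y = np.array([0.0, 1.0, 0.0])

model = tf.keras.Sequential([
    # Masking tells the RNN to skip the all-zero padded timesteps
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(max_len, len_vector)),
    tf.keras.layers.SimpleRNN(units=10),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy', optimizer='adam')
model.fit(X, y, epochs=2)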

How do I convert BERT embeddings into a tensor for feeding into an LSTM?

I am trying to replace Word2Vec word embeddings with BERT sentence embeddings in a siamese LSTM network (https://github.com/eliorc/Medium/blob/master/MaLSTM.ipynb). However, my BERT embeddings are (1, 768)-shaped matrices, not tensors that can be fed to a Keras layer. I wanted to know whether it is possible to convert them.
I have found a way to replace the word embeddings with Universal Sentence Encoder embeddings (http://hunterheidenreich.com/blog/google-universal-sentence-encoder-in-keras/). I tried to modify the LSTM code to use BERT sentence embeddings from the following service (https://github.com/hanxiao/bert-as-service#what-is-it).
# Model variables for LSTM
n_hidden = 50
gradient_clipping_norm = 1.25
batch_size = 64
n_epoch = 25
def BERTEmbedding(x):
    # x is an input tensor
    encoded = bc.encode(tf.squeeze(tf.cast(x, tf.string)))
    return encoded

def exponent_neg_manhattan_distance(left, right):
    ''' Helper function for the similarity estimate of the LSTMs outputs '''
    return K.exp(-K.sum(K.abs(left - right), axis=1, keepdims=True))
left_input_text = Input(shape=(1,), dtype=tf.string)
right_input_text = Input(shape=(1,), dtype=tf.string)
encoded_left = Lambda(BERTEmbedding, output_shape=(768, ))(left_input_text)
encoded_right = Lambda(BERTEmbedding, output_shape=(768, ))(right_input_text)
# Since this is a siamese network, both sides share the same LSTM
shared_lstm = LSTM(n_hidden)
left_output = shared_lstm(encoded_left)
right_output = shared_lstm(encoded_right)
I am getting the following error message:
TypeError: "Tensor("lambda_3/Squeeze:0", dtype=string)" must be , but received class 'tensorflow.python.framework.ops.Tensor'
An LSTM takes three-dimensional input: [batch_size, sequence_length, feature_dim].
From BERT you can get two types of embeddings:
a token representation for each element of the sequence, or
the 'CLS' token representation (where 'CLS' stands for 'classification').
If you take the 'CLS' token representation it will be [1, 768], but if you take the whole sequence output it will be [len_of_sequence, 768].
If you then train the model in batches, this becomes [batch_size, len_of_sequence, 768], which is exactly what the LSTM encoder takes.
Alternatively, you can add one extra dimension, [batch_size, 768, 1], and feed that to the LSTM.
Adding the extra dimension in the sequence-length position doesn't make much sense, because the LSTM unrolls according to the length of the sequence.
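A minimal sketch of that second option, reshaping a (batch, 768) sentence embedding to (batch, 768, 1) before the LSTM; the layer arrangement here is my own illustration, not code from the question:

import tensorflow as tf

n_hidden = 50

# A (batch, 768) sentence embedding, e.g. one BERT 'CLS' vector per sentence
sentence_embedding = tf.keras.layers.Input(shape=(768,))

# Add a trailing feature dimension: the LSTM now sees 768 timesteps of 1 feature
reshaped = tf.keras.layers.Reshape((768, 1))(sentence_embedding)
lstm_out = tf.keras.layers.LSTM(n_hidden)(reshaped)

model = tf.keras.Model(inputs=sentence_embedding, outputs=lstm_out)
model.summary()  # final output shape: (None, 50)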

Embedding vs inserting word vectors directly to input layer

I used gensim to build a word2vec embedding of my corpus.
Currently I'm converting my (padded) input sentences to the word vectors using the gensim model.
These vectors are used as input for the model.
model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(MAX_SEQUENCE_LENGTH, dim)))
model.add(Bidirectional(
    LSTM(num_lstm, dropout=0.5, recurrent_dropout=0.4, return_sequences=True)
))
...
model.fit(training_sentences_vectors, training_labels, validation_data=validation_data)
Are there any drawbacks to using the word vectors directly, without a Keras embedding layer?
I'm also currently adding additional (one-hot encoded) tags to the input tokens by concatenating them to each word vector; does this approach make sense?
In your current setup, the drawback is that you will not be able to set your word vectors to be trainable, so you cannot fine-tune them for your task.
What I mean by this is that gensim has only learned the "language model": it understands your corpus and its contents, but it does not know how to optimize for whatever downstream task you are using Keras for. The rest of your model's weights will adapt, but you will likely see an increase in performance if you extract the embeddings from gensim, use them to initialize a Keras embedding layer, and then pass in indexes instead of word vectors at the input layer.
There's an elegant way to do what you need.
The problems with your current solution are:
the input is large, (batch_size, MAX_SEQUENCE_LENGTH, dim), and may not fit in memory;
you won't be able to train and update the word vectors for your task.
You can instead get away with just (batch_size, MAX_SEQUENCE_LENGTH). The Keras embedding layer lets you pass in a word index and get back a vector: 42 -> Embedding Layer -> [3, 5.2, ..., 33].
Conveniently, gensim's w2v model has a function get_keras_embedding which creates the needed embedding layer for you with the trained weights.
gensim_model = # train it or load it
embedding_layer = gensim_model.wv.get_keras_embedding(train_embeddings=True)
embedding_layer.mask_zero = True # No need for a masking layer
model = Sequential()
model.add(embedding_layer) # your embedding layer
model.add(Bidirectional(
    LSTM(num_lstm, dropout=0.5, recurrent_dropout=0.4, return_sequences=True)
))
But, you have to make sure the index for a word in the data is the same as the index for the word2vec model.
word2index = {}
for index, word in enumerate(gensim_model.wv.index2word):
    word2index[word] = index
Use the above word2index dictionary to convert your input data to have the same index as the gensim model.
For example, your data might be:
X_train = [["hello", "there"], ["General", "Kenobi"]]
new_X_train = []
for sent in X_train:
    temp_sent = []
    for word in sent:
        temp_sent.append(word2index[word])
    # Add the padding for each sentence. Here I am padding with 0
    temp_sent += [0] * (MAX_SEQUENCE_LENGTH - len(temp_sent))
    new_X_train.append(temp_sent)
X_train = numpy.asarray(new_X_train)
Now you can use X_train and it will look like: [[23, 34, 0, 0], [21, 63, 0, 0]]
The Embedding Layer will map the index to that vector automatically and train it if needed.
I think this is the best way of doing it but I'll dig into how gensim wants it to be done and update this post if needed.

2-dimensional LSTM in Keras

I am new to Keras and LSTMs -- I want to train a model on 2-dimensional sequences (i.e., movement in a grid space), as opposed to 1-dimensional sequences (like characters of text).
As a test, I first tried just one dimension, and I am doing it successfully with the following setup:
model = Sequential()
model.add(LSTM(512, return_sequences=True, input_shape=X[0].shape, dropout=0.2, recurrent_dropout=0.2))
model.add(LSTM(512, return_sequences=False, dropout=0.2))
model.add(Dense(len(y[0]), activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="rmsprop", metrics=['accuracy'])
model.fit(X, y, epochs=50)
I'm formatting the data like this:
data = ## list of integers (1D)
inputs = []
outputs = []
for i in range(len(data) - SEQUENCE_LENGTH):
    inputs.append(data[i:i + SEQUENCE_LENGTH])
    outputs.append(data[i + SEQUENCE_LENGTH])
X = np.array([to_categorical(np.array(input), CATEGORY_LENGTH) for input in inputs])
y = to_categorical(np.array(outputs), CATEGORY_LENGTH)
This is straightforward and converges quickly.
But if instead of a list of integers, my data consists of 2D tuples, I can no longer create categorical (one-hot) arrays to pass to the LSTM layers.
I've tried not using categorical arrays and simply passing the tuples to the model. In this case, I've changed my output layer to:
model.add(Dense(1, activation="linear"))
But that does not converge, or at least moves incredibly slowly.
How can I adapt this code to handle input with additional dimensions?
This previous answer should apply to your question as well. The only difference is that you will have to convert your tuple to a data frame beforehand.
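As a concrete illustration (my own suggestion, not part of the answer above): one way to keep the categorical setup is to flatten each (row, col) grid position into a single integer before one-hot encoding, so the rest of the original pipeline stays unchanged:

import numpy as np
from keras.utils import to_categorical

GRID_WIDTH = 10          # assumed grid size
SEQUENCE_LENGTH = 5
data = [(3, 4), (3, 5), (4, 5), (4, 6), (5, 6), (5, 7), (6, 7)]  # toy 2D movements

# Map each (row, col) tuple to a single category index
flat = [r * GRID_WIDTH + c for r, c in data]
CATEGORY_LENGTH = GRID_WIDTH * GRID_WIDTH

inputs, outputs = [], []
for i in range(len(flat) - SEQUENCE_LENGTH):
    inputs.append(flat[i:i + SEQUENCE_LENGTH])
    outputs.append(flat[i + SEQUENCE_LENGTH])

X = np.array([to_categorical(np.array(seq), CATEGORY_LENGTH) for seq in inputs])
y = to_categorical(np.array(outputs), CATEGORY_LENGTH)
print(X.shape, y.shape)  # (2, 5, 100) (2, 100)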

Keras: LSTM with class weights

My question is quite closely related to this question, but it also goes beyond it.
I am trying to implement the following LSTM in Keras where
the number of timesteps is nb_tsteps=10
the number of input features is nb_feat=40
the number of LSTM cells at each time step is 120
the LSTM layer is followed by TimeDistributedDense layers
From the question referenced above I understand that I have to present the input data as
(nb_samples, 10, 40)
where I get nb_samples by rolling a window of length nb_tsteps=10 across the original time series of shape (5932720, 40). The code is therefore:
model = Sequential()
model.add(LSTM(120, input_shape=(X_train.shape[1], X_train.shape[2]),
               return_sequences=True, consume_less='gpu'))
model.add(TimeDistributed(Dense(50, activation='relu')))
model.add(Dropout(0.2))
model.add(TimeDistributed(Dense(20, activation='relu')))
model.add(Dropout(0.2))
model.add(TimeDistributed(Dense(10, activation='relu')))
model.add(Dropout(0.2))
model.add(TimeDistributed(Dense(3, activation='relu')))
model.add(TimeDistributed(Dense(1, activation='sigmoid')))
Now to my question (assuming the above is correct so far):
The binary responses (0/1) are heavily imbalanced and I need to pass a class_weight dictionary like cw = {0: 1, 1: 25} to model.fit(). However I get an exception class_weight not supported for 3+ dimensional targets. This is because I present the response data as (nb_samples, 1, 1). If I reshape it into a 2D array (nb_samples, 1) I get the exception Error when checking model target: expected timedistributed_5 to have 3 dimensions, but got array with shape (5932720, 1).
Thanks a lot for any help!
I think you should use sample_weight with sample_weight_mode='temporal'.
From the Keras docs:
sample_weight: Numpy array of weights for the training samples, used for scaling the loss function (during training only). You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. In this case you should make sure to specify sample_weight_mode="temporal" in compile().
In your case you would need to supply a 2D array with the same shape as your labels.
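As an illustration of that, a minimal sketch (my own; the target shape, loss, and optimizer are assumptions, and model refers to the Sequential model defined in the question):

import numpy as np

nb_samples, nb_tsteps, nb_feat = 1000, 10, 40
X_train = np.random.randn(nb_samples, nb_tsteps, nb_feat)
y_train = np.random.randint(0, 2, size=(nb_samples, nb_tsteps, 1))

cw = {0: 1, 1: 25}
# One weight per timestep of every sample: shape (nb_samples, nb_tsteps)
sample_weight = np.where(y_train[:, :, 0] == 1, cw[1], cw[0]).astype('float32')

model.compile(loss='binary_crossentropy', optimizer='rmsprop',
              sample_weight_mode='temporal')
model.fit(X_train, y_train, sample_weight=sample_weight, epochs=10)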
If this is still an issue: I think the TimeDistributed layer expects and returns a 3D array (similar to having return_sequences=True in a regular LSTM layer). Try adding a Flatten() layer or another LSTM layer at the end, before the prediction layer.
d = TimeDistributed(Dense(10))(input_from_previous_layer)
lstm_out = Bidirectional(LSTM(10))(d)
output = Dense(1, activation='sigmoid')(lstm_out)
Using sample_weight_mode='temporal' is a workaround. Check out this stack. The issue is also documented on github.
