How to use masking with Convolution1D layer in Keras?

I am trying to perform a sentiment classification task, for which I am using an attention-based architecture that has both convolution layers and BiLSTM layers. The first layer of my model is an Embedding layer, followed by a Convolution1D layer. I have used mask_zero=True for the Embedding layer since I have padded the sequences with zeros. This, however, raises an error at the Convolution1D layer, since that layer does not support masking. I do need to mask the zero inputs, though, because I have LSTM layers after the convolutional layers. Does anyone have a solution for this? I have attached a sample of my model's code up to the Convolution1D layer for reference.
from keras.layers import Input, Embedding, Convolution1D
from keras.regularizers import l2

wordsInputs = Input(shape=(maxSeq,), name='words_input')
embed_reg = l2(l=0.001)
emb = Embedding(vocabSize, 300, mask_zero=True, init='glorot_uniform', W_regularizer=embed_reg)(wordsInputs)
convOutput = Convolution1D(nb_filter=100, filter_length=3, activation='relu', border_mode='same')(emb)

It looks like you have defined a maxSeq length, and you say you are padding the sequences with zeros. mask_zero means something else: it marks zero as a reserved input word index that you are not supposed to use for real words, reserved for the internals of the program to mark the end of a variable-length sequence.
I think the solution is simply to remove the parameter mask_zero=True, as it is unneeded here (it is meant for variable-length sequences), and to use zero as your padding word index, as in the sketch below.
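A minimal sketch of that change, keeping the Keras 1 style API from the question (the maxSeq and vocabSize values below are assumed placeholders):

from keras.layers import Input, Embedding, Convolution1D
from keras.regularizers import l2

maxSeq, vocabSize = 100, 20000  # assumed placeholder values

wordsInputs = Input(shape=(maxSeq,), name='words_input')
embed_reg = l2(l=0.001)
# No mask_zero=True: index 0 is simply used as the padding word and is embedded
# like any other index, so Convolution1D accepts the output without a masking error.
emb = Embedding(vocabSize, 300, init='glorot_uniform', W_regularizer=embed_reg)(wordsInputs)
convOutput = Convolution1D(nb_filter=100, filter_length=3,
                           activation='relu', border_mode='same')(emb)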

Related

Doesn't keras.layers.Flatten lose information?

Brand new to keras and ML in general. I'm looking at https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/, and it uses Flatten between Embedding and Dense because Embedding produces a 2D vector but Dense requires a single dimension.
I'm sure I'm missing something obvious here, but why doesn't this lose which words are in which input vectors? How are we able to still know that input #3 was "nice work" and is associated with label #3, 1, for "positive"?
I guess the original dimensions are retained from the original input and then somehow restored for Dense's output? Or am I just totally missing a major conceptual aspect?
model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
Thanks for any guidance!
The Embedding layer gives you one vector per word token, so its output is 2-D for each sample. We need to flatten it before any classifier block.
There is some information lost; for example, when we use convolutional layers and then flatten the feature maps, the spatial information is lost. But the Conv layers have already extracted the most important features, and we feed those features to the fully connected layers.
In your example, the temporal dimension is no longer maintained; usually it is desirable to pass the output of the embedding layer to an RNN/Conv layer for further feature extraction.
Flatten is applied only to the non-batch dimensions, meaning the examples are still kept separate (if that is what you mean).
For each sample, say "nice work", we get two vectors (one for "nice", one for "work"). Since we only want the overall sentiment of the sentence, once the features are extracted we can apply Flatten, as in the sketch below.
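To make the shapes concrete, here is a small sketch with assumed toy values for vocab_size and max_length; the comments show that Flatten only collapses the per-sample dimensions and keeps each sample's word vectors concatenated in order:

from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

vocab_size, max_length = 50, 4   # assumed toy values

model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))  # per sample: (4, 8), one 8-dim vector per word
model.add(Flatten())       # per sample: (32,), word 1's vector, then word 2's, ..., order preserved
model.add(Dense(1, activation='sigmoid'))
model.summary()            # prints the (None, 4, 8) -> (None, 32) -> (None, 1) output shapes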

I need to understand this LSTM and Masking layers result

I'm new to Keras LSTMs. Could you please explain this model.summary() from Rasa Core training?
[screenshot of the model.summary() output after training]
Also, what is the Masking layer doing and what does the value -1 in it mean?
A Masking layer is meant to "ignore steps" in sequences.
Your LSTM is working with sequences of 5 steps and 42 features per step.
If all features in a step have the same value defined in Masking (-1 in the example), that step will be ignored during training.
The idea is to simulate variable length sequences.
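As a minimal sketch of that behaviour (reusing the 5 steps and 42 features from the summary; the 32 LSTM units and the final Dense layer are arbitrary additions just to complete the model), any timestep whose 42 values all equal -1 is ignored:

from keras.models import Sequential
from keras.layers import Masking, LSTM, Dense

model = Sequential()
model.add(Masking(mask_value=-1.0, input_shape=(5, 42)))  # steps that are all -1 get masked out
model.add(LSTM(32))                                       # masked steps do not update the LSTM state
model.add(Dense(1, activation='sigmoid'))
model.summary()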
Not sure exactly what you don't understand, but model.summary()
prints a summary representation of your model (keras.io).
It lists all the layers used in the given model with their respective output sizes.
This particular model starts with a Masking layer for the input sequences (I guess because of padding) and is followed by the simplest LSTM model possible.

Using Dropout with Keras and LSTM/GRU cell

In Keras you can specify a dropout layer like this:
model.add(Dropout(0.5))
But with a GRU cell you can specify the dropout as a parameter in the constructor:
model.add(GRU(units=512,
return_sequences=True,
dropout=0.5,
input_shape=(None, features_size,)))
What's the difference? Is one preferable to the other?
In Keras's documentation, dropout is added as a separate layer (see "Sequence classification with LSTM").
The recurrent layers perform the same repeated operation over and over.
In each timestep, it takes two inputs:
Your inputs (a step of your sequence)
Internal inputs (can be states and the output of the previous step, for instance)
Note that the dimensions of the input and output may not match, which means that "your input" dimensions will not match "the recurrent input (previous step/states)" dimensions.
Then in every recurrent timestep there are two operations with two different kernels:
One kernel is applied to "your inputs" to process and transform them into a compatible dimension
Another (called recurrent kernel by keras) is applied to the inputs of the previous step.
Because of this, keras also uses two dropout operations in the recurrent layers. (Dropouts that will be applied to every step)
A dropout for the first conversion of your inputs
A dropout for the application of the recurrent kernel
So, in fact there are two dropout parameters in RNN layers:
dropout, applied to the first operation on the inputs
recurrent_dropout, applied to the other operation on the recurrent inputs (previous output and/or states)
You can see this implemented in GRUCell and LSTMCell, for instance, in the source code.
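For example, a sketch with assumed sizes showing where the two parameters go on the recurrent layer itself:

from keras.models import Sequential
from keras.layers import GRU

features_size = 64   # assumed feature size

model = Sequential()
model.add(GRU(units=512,
              return_sequences=True,
              dropout=0.5,            # dropout on the kernel applied to your inputs, at every timestep
              recurrent_dropout=0.5,  # dropout on the recurrent kernel (previous step/states), at every timestep
              input_shape=(None, features_size)))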
What is correct?
This is open to creativity.
You can use a Dropout(...) layer; it's not "wrong", but it will possibly drop "timesteps" too! (Unless you set noise_shape properly or use SpatialDropout1D, which is not yet documented; see the sketch below.)
Maybe you want that, maybe you don't. If you use the parameters in the recurrent layer, you will be applying dropout only to the other dimensions, without dropping a single step. This seems healthy for recurrent layers, unless you want your network to learn how to deal with sequences containing gaps (this last sentence is a supposition).
Also, with the dropout parameters you will really be dropping parts of the kernel's operations, since they are dropped at every step, while a separate layer lets your RNN perform its internal operations without dropout, since the dropout will affect only the final output.
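If you want the separate-layer approach but without touching individual timesteps, one option (a sketch with assumed sizes) is SpatialDropout1D, which drops the same feature channels at every timestep of a sample:

from keras.models import Sequential
from keras.layers import GRU, SpatialDropout1D

model = Sequential()
model.add(GRU(units=512, return_sequences=True, input_shape=(None, 64)))  # 64 = assumed feature size
model.add(SpatialDropout1D(0.5))  # zeroes whole feature channels, with the same mask shared across all timesteps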

LSTM with variable sequences & return full sequences

How can I set up a keras model such that the final LSTM layer outputs a prediction for each time step while having variable sequence lengths as input?
I'd then like to provide labels for each of the timesteps after a dense layer with linear activation.
When I try to add a reshape or a dense layer to the LSTM model that is returning the full sequence and has a masking layer to take care of variable sequence lengths, it says:
The reshape and the dense layers do not support masking.
Would this be possible to do?
You can use the TimeDistributed layer wrapper for this. This applies the layer you want to each timestep. In your case, you could also just use TimeDistributedDense.
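A sketch of that setup, assuming zero-padded inputs of max_len timesteps with n_features each (the sizes, unit count, and loss below are placeholder choices):

from keras.models import Sequential
from keras.layers import Masking, LSTM, TimeDistributed, Dense

max_len, n_features = 50, 10   # assumed values

model = Sequential()
model.add(Masking(mask_value=0.0, input_shape=(max_len, n_features)))  # handles the zero-padded (variable-length) part
model.add(LSTM(64, return_sequences=True))                 # one output vector per timestep
model.add(TimeDistributed(Dense(1, activation='linear')))  # one linear prediction per timestep; passes the mask through
model.compile(loss='mse', optimizer='adam')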

How to write an LSTM in Keras without an Embedding layer?

How do you write a simple sequence copy task in keras using the LSTM architecture without an Embedding layer? I already have the word vectors.
If you say that you have word vectors, I guess you have a dictionary to map a word to its vector representation (calculated from word2vec, GloVe...).
Using this dictionary, you replace all words in your sequences by their corresponding vectors. You will also need to make all your sequences the same length, since Keras expects all input sequences in a batch to have the same length. So you will need to determine a max_length value, trim all sequences that are longer, and pad all sequences that are shorter with zeros (see the Keras pad_sequences function).
Then you can pass the vectorized sequences directly to the LSTM layer of your neural network. Since the LSTM layer is the first layer of the network, you will need to define the input shape, which in your case is (max_length, embedding_dim).
This way you skip the Embedding layer and use your own precomputed word vectors instead, as in the sketch below.
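A sketch of what that looks like (the sizes and the per-step output head are assumptions, chosen to suit a copy-style task):

from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense

max_length, embedding_dim = 30, 300   # assumed values

model = Sequential()
# First layer of the network, so the input shape is required: (timesteps, features)
model.add(LSTM(128, return_sequences=True,
               input_shape=(max_length, embedding_dim)))
# For a copy-style task you could emit one vector per timestep, e.g.:
model.add(TimeDistributed(Dense(embedding_dim)))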
I had the same problem. After searching the Keras docs, in the "Stacked LSTM for sequence classification" part, I found the following code might be useful:
from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(NumberOfLSTMUnits, return_sequences=True,
               input_shape=(YourSequenceLength, YourWord2VecLength)))
