Many tutorials for the LSTM-based seq2seq encoder-decoder architecture (for example, English-French translation) define the model as follows:
encoder_inputs = Input(shape=(None,))
en_x= Embedding(num_encoder_tokens, embedding_size)(encoder_inputs)
# Encoder lstm
encoder = LSTM(50, return_state=True)
encoder_outputs, state_h, state_c = encoder(en_x)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
# french word embeddings
dex= Embedding(num_decoder_tokens, embedding_size)
final_dex= dex(decoder_inputs)
# decoder lstm
decoder_lstm = LSTM(50, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(final_dex,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# During training, the model takes English and French words and outputs the translated French words
fullmodel = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# rmsprop is preferred for nlp tasks
fullmodel.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])
fullmodel.fit([encoder_input_data, decoder_input_data], decoder_target_data,
              batch_size=128,
              epochs=100,
              validation_split=0.20)
Then, for prediction, they define inference models as follows:
# define the encoder model
encoder_model = Model(encoder_inputs, encoder_states)
encoder_model.summary()
# Redefine the decoder model; at prediction time the decoder receives the inputs below from the encoder
decoder_state_input_h = Input(shape=(50,))
decoder_state_input_c = Input(shape=(50,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
final_dex2= dex(decoder_inputs)
decoder_outputs2, state_h2, state_c2 = decoder_lstm(final_dex2, initial_state=decoder_states_inputs)
decoder_states2 = [state_h2, state_c2]
decoder_outputs2 = decoder_dense(decoder_outputs2)
# The sampling model takes the encoder states and the decoder input (initially the seed token) and outputs the predictions (French word indices) together with decoder_states2, which are fed back on the next step
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs2] + decoder_states2)
Then predict using:
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())
def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)

    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0] = target_token_index['START_']

    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)

        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += ' ' + sampled_char

        # Exit condition: either hit max length
        # or find stop character.
        if (sampled_char == '_END' or
                len(decoded_sentence) > 52):
            stop_condition = True

        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1))
        target_seq[0, 0] = sampled_token_index

        # Update states
        states_value = [h, c]

    return decoded_sentence
My question is: they trained the model named 'fullmodel' to get the best weights, but in the prediction part they use the inference models named encoder_model and decoder_model. So did they not use any weights from 'fullmodel'?!
I don't understand how they benefit from the trained model!
The trick is that the inference models are built from the very same layer objects, so the trained variables get reused.
If you look carefully, you can see the trained layer weights being reused.
For example, while creating decoder_model we use the decoder_lstm layer that was defined as part of the full model:
decoder_outputs2, state_h2, state_c2 = decoder_lstm(final_dex2, initial_state=decoder_states_inputs),
and the encoder model likewise uses encoder_inputs and encoder_states, which were defined previously:
encoder_model = Model(encoder_inputs, encoder_states)
Due to the architecture of the encoder-decoder model, we need these implementation tricks.
Also, as the Keras documentation mentions: "With the functional API, it is easy to reuse trained models: you can treat any model as if it were a layer, by calling it on a tensor. Note that by calling a model you aren't just reusing the architecture of the model, you are also reusing its weights." For more details, see https://keras.io/getting-started/functional-api-guide/#all-models-are-callable-just-like-layers
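To make the weight sharing concrete, here is a tiny self-contained sketch (with made-up layer sizes, not taken from the tutorial above) showing that two Models built from the same layer object literally share one set of weights:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# One Dense layer object...
shared_dense = Dense(4)

# ...used by two different models.
inp = Input(shape=(3,))
model_a = Model(inp, shared_dense(inp))
model_b = Model(inp, shared_dense(inp))

# Both models hold the exact same layer and therefore the same weight tensors,
# so training model_a (or fullmodel) also updates what model_b sees.
print(model_a.layers[-1] is model_b.layers[-1])           # True
print(np.allclose(model_a.get_weights()[0],
                  model_b.get_weights()[0]))               # True

This is exactly what happens with fullmodel, encoder_model, and decoder_model: they are three different Model wrappers around one shared set of trained layers.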
Related
I have a seq2seq model trained on some clever bot data:
just_phrases_X is a list of sentences and just_phrases_y is a list of responses to those sentences.
maxlen = 62
#low is a list of all the unique words.
def Convert_To_Encoding(just_phrases):
    encodings = []
    for sentence in just_phrases:
        onehotencoded = one_hot(sentence, len(low))
        encodings.append(np.array(onehotencoded))
    encodings_padded = pad_sequences(encodings, maxlen=maxlen, padding='post', value=0.0)
    return encodings_padded
encodings_X_padded = Convert_To_Encoding(just_phrases_X)
encodings_y_padded = Convert_To_Encoding(just_phrases_y)
model = Sequential()
embedding_layer = Embedding(len(low), output_dim=8, input_length=maxlen)
model.add(embedding_layer)
model.add(GRU(128)) # input_shape=(None, 496)
model.add(RepeatVector(numberofwordsoutput)) #number of characters?
model.add(GRU(128, return_sequences = True))
model.add(Flatten())
model.add(Dense(62, activation = 'softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer= 'adam', metrics=['accuracy'])
model.summary()
model.fit(encodings_X_padded, encodings_y_padded, batch_size = 1, epochs=1) #, validation_data = (testX, testy)
model.save("cleverbottheseq-uel.h5")
When I use this model for prediction, the outputs are between 0 and 1 because of the softmax. However, since I have around 3000 unique words, each with a separate integer assigned to it, how do I essentially reverse what the model did during training and convert the output back to an integer that has a word assigned to it?
I don't think it is possible to create a seq2seq model with the Sequential API. Try to create the encoder and decoder separately with the functional API. You need two inputs: the first for the encoder, the second for the decoder.
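As a rough illustration of that two-input functional-API layout (the vocab_size, maxlen, and latent_dim values below are placeholders; it mirrors the English-French model at the top of this page):

from keras.models import Model
from keras.layers import Input, LSTM, Dense, Embedding

vocab_size, maxlen, latent_dim = 3000, 62, 128   # assumed values

# Encoder: reads the input phrase and returns its final states.
enc_in = Input(shape=(maxlen,))
enc_emb = Embedding(vocab_size, 64)(enc_in)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: reads the (shifted) response and is primed with the encoder states.
dec_in = Input(shape=(maxlen,))
dec_emb = Embedding(vocab_size, 64)(dec_in)
dec_out, _, _ = LSTM(latent_dim, return_sequences=True,
                     return_state=True)(dec_emb, initial_state=[state_h, state_c])
outputs = Dense(vocab_size, activation='softmax')(dec_out)

model = Model([enc_in, dec_in], outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

For the word lookup itself, you do the same thing the decode_sequence example above does: np.argmax over the softmax output, then a reverse dictionary from integer index back to word.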
I used Keras to train a seq2seq model (keras.models.Model). The X and y to the model are [X_encoder, X_decoder] and y, i.e. a list of encoder and decoder inputs plus the labels. (Note that the decoder input, X_decoder, is 'y' shifted by one position relative to the actual y. Basically, teacher forcing.)
So my question now is: after training, when it comes to actual prediction where I do not have any labels, how do I provide 'X_decoder' as input? Or do I train on something else?
Here is a snippet of the model definition, in case it helps:
# Encoder
encoder_inputs = Input(batch_shape=(batch_size, max_len,), dtype='int32')
encoder_embedding = embedding_layer(encoder_inputs)
encoder_LSTM = CuDNNLSTM(hidden_dim, return_state=True, stateful=True)
encoder_outputs, state_h, state_c = encoder_LSTM(encoder_embedding)
# Decoder
decoder_inputs = Input(shape=(max_len,), dtype='int32')
decoder_embedding = embedding_layer(decoder_inputs)
decoder_LSTM = CuDNNLSTM(hidden_dim, return_state=True, return_sequences=True)
decoder_outputs, _, _ = decoder_LSTM(decoder_embedding, initial_state=[state_h, state_c])
# Output
outputs = TimeDistributed(Dense(vocab_size, activation='softmax'))(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], outputs)
# model fitting:
model.fit([X_encoder, X_decoder], y,
          steps_per_epoch=int(number_of_train_samples / batch_size),
          epochs=epochs)
Usually, when you train a seq2seq model, the first token of decoder_inputs is a special <start> token. So when you try to generate a sentence, you do it like this:
first_token = decoder(encoder_state, [<start>])
second_token = decoder(encoder_state, [<start>, first_token])
third_token = decoder(encoder_state, [<start>, first_token, second_token])
...
You execute this recursion until your decoder generates another special token, <end>; then you stop.
Here is a very crude Pythonic decoder for your model. It is inefficient, because it reads the input over and over again instead of memorizing the RNN state, but it works.
input_seq = ...  # some array of token indices
result = np.array([[START_TOKEN]])
next_token = -1
for i in range(100500):
    next_token = model.predict([input_seq, result])[0][-1].argmax()
    if next_token == END_TOKEN:
        break
    result = np.concatenate([result, [[next_token]]], axis=1)
output_seq = result[0][1:]  # omit the first START_TOKEN
A more efficient solution would output the RNN state along with each token and use it to produce the next token.
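Sketched out, that more efficient variant looks a lot like the encoder_model/decoder_model pair from the first answer on this page. The snippet below is only an outline: it assumes you kept references to the trained embedding_layer and decoder_LSTM objects and to the final TimeDistributed Dense layer (called output_dense here, a hypothetical name), uses START_TOKEN/END_TOKEN placeholders, and glosses over the stateful/CuDNN details of your exact setup (you may need to rebuild the encoder without stateful=True for single-sample prediction):

import numpy as np
from keras.models import Model
from keras.layers import Input

# Encoder inference model: input sequence -> final LSTM states.
encoder_model = Model(encoder_inputs, [state_h, state_c])

# Decoder inference model: one token + previous states -> probabilities + new states.
prev_h = Input(shape=(hidden_dim,))
prev_c = Input(shape=(hidden_dim,))
token_input = Input(shape=(1,), dtype='int32')
token_emb = embedding_layer(token_input)
dec_out, h, c = decoder_LSTM(token_emb, initial_state=[prev_h, prev_c])
probs = output_dense(dec_out)   # output_dense: the trained TimeDistributed Dense layer
decoder_model = Model([token_input, prev_h, prev_c], [probs, h, c])

# Generation loop: carry the states forward instead of re-reading the prefix.
h, c = encoder_model.predict(input_seq)
token = np.array([[START_TOKEN]])   # placeholder start-of-sequence id
decoded = []
for _ in range(max_len):
    probs, h, c = decoder_model.predict([token, h, c])
    next_token = int(probs[0, -1].argmax())
    if next_token == END_TOKEN:     # placeholder end-of-sequence id
        break
    decoded.append(next_token)
    token = np.array([[next_token]])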
I've successfully trained a model in Keras using an encoder/decoder structure + attention + GloVe, following several examples, most notably this one and this one. It's a modification of machine translation: this is a chatbot, so both the input and the output are words. However, I've struggled to set up inference (prediction) properly and can't figure out how to get past a graph disconnect. My bidirectional RNN encoder/decoder with embedding and attention trains fine. I've tried modifying the decoder, but I feel there is something obvious that I'm not seeing.
Here is the basic model:
from keras.models import Model
from keras.layers.recurrent import LSTM
from keras.layers import Dense, Input, Embedding, Bidirectional, RepeatVector, concatenate, Concatenate
## PARAMETERS
HIDDEN_UNITS = 100
encoder_max_seq_length = 1037 # maximum size of input sequence
decoder_max_seq_length = 187 # maximum size of output sequence
num_encoder_tokens = 6502 # a.k.a the size of the input vocabulary
num_decoder_tokens = 4802 # a.k.a the size of the output vocabulary
## ENCODER
encoder_inputs = Input(shape=(encoder_max_seq_length, ), name='encoder_inputs')
encoder_embedding = Embedding(input_dim = num_encoder_tokens,
                              output_dim = HIDDEN_UNITS,
                              input_length = encoder_max_seq_length,
                              weights = [embedding_matrix],
                              name='encoder_embedding')(encoder_inputs)
encoder_lstm = Bidirectional(LSTM(units = HIDDEN_UNITS,
                                  return_sequences=True,
                                  name='encoder_lstm'))(encoder_embedding)
## ATTENTION
attention = AttentionL(encoder_max_seq_length)(encoder_lstm)
attention = RepeatVector(decoder_max_seq_length)(attention)
## DECODER
decoder_inputs = Input(shape = (decoder_max_seq_length, num_decoder_tokens),
                       name='decoder_inputs')
merge = concatenate([attention, decoder_inputs])
decoder_lstm = Bidirectional(LSTM(units = HIDDEN_UNITS*2,
                                  return_sequences = True,
                                  name='decoder_lstm'))(merge)
decoder_dense = Dense(units=num_decoder_tokens,
                      activation='softmax',
                      name='decoder_dense')(decoder_lstm)
decoder_outputs = decoder_dense
## Configure the model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.load_weights('trained_models/viable-2/word-weights.h5')
encoder_model = Model(encoder_inputs, attention)
model.compile(loss='categorical_crossentropy', optimizer='adam')
It looks like this: [model plot omitted]
Here is where I run into trouble:
## INFERENCE decoder setup
decoder_inputs_2 = Concatenate()([decoder_inputs, attention])
decoder_lstm = Bidirectional(LSTM(units=HIDDEN_UNITS*2, return_state = True, name='decoder_lstm'))
decoder_outputs, forward_h, forward_c, backward_h, backward_c= decoder_lstm(decoder_inputs_2)
state_h = Concatenate()([forward_h, backward_h])
state_c = Concatenate()([forward_c, backward_c])
decoder_states = [state_h, state_c]
decoder_dense = Dense(units=num_decoder_tokens, activation='softmax', name='decoder_dense')
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs, attention], [decoder_outputs] + decoder_states)
This generates a graph disconnection error: Graph disconnected: cannot obtain value for tensor Tensor("encoder_inputs_61:0", shape=(?, 1037), dtype=float32) at layer "encoder_inputs". The following previous layers were accessed without issue: []
It should be possible to do inference like this, but I can't get past this error. It isn't possible for me to simply add decoder_output and attention together because they're of different shapes.
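For what it's worth, the usual way past this kind of disconnect mirrors what the first answer on this page does with decoder_state_input_h/c: the inference decoder must not take the attention tensor itself as a Model input (that tensor is wired all the way back to encoder_inputs); it needs a fresh standalone Input of the same shape, whose value you feed from encoder_model.predict at prediction time. A hedged sketch, where attention_dim and the trained_decoder_* layer references are assumptions, not names from the code above:

from keras.models import Model
from keras.layers import Input, Concatenate

# Assumptions: attention_dim is whatever RepeatVector(...)(AttentionL(...)) actually
# outputs per timestep, and trained_decoder_lstm / trained_decoder_dense are references
# to the layer objects used in the training model (not the output tensors).
attention_dim = 2 * HIDDEN_UNITS

# Standalone placeholder instead of the `attention` tensor from the training graph.
attention_input = Input(shape=(decoder_max_seq_length, attention_dim),
                        name='attention_input')
merged = Concatenate()([attention_input, decoder_inputs])
dec_seq = trained_decoder_lstm(merged)       # reuse the trained Bidirectional LSTM
dec_probs = trained_decoder_dense(dec_seq)   # reuse the trained softmax Dense
decoder_model = Model([decoder_inputs, attention_input], dec_probs)

# At prediction time:
# context = encoder_model.predict(input_seq)               # from Model(encoder_inputs, attention)
# probs = decoder_model.predict([target_so_far, context])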
I’m trying to implement a Visual Storytelling model using Keras with a hierarchical RNN model, basically Neural Image Captioner style but over a sequence of photos with a bidirectional RNN on top of the decoder RNNs.
I implemented and tested the three parts of this model (CNN, BRNN, and decoder RNN) separately, but got this error when trying to connect them:
ValueError: An operation has None for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
My code is as follows:
#vgg16 model with the fc2 layer as output
cnn_base_model = self.cnn_model.base_model
brnn_model = self.brnn_model.model
rnn_model = self.rnn_model.model
cnn_part = TimeDistributed(cnn_base_model)
img_input = Input((self.story_length,) + self.cnn_model.input_shape, name='brnn_img_input')
extracted_feature = cnn_part(img_input)
#[None, 5, 512], a 512 length vector for each picture in the story
brnn_feature = brnn_model(extracted_feature)
#[None, 5, 25], input groundtruth word indices fed as input when training
decoder_input = Input((self.story_length, self.max_length), name='brnn_decoder_input')
decoder_outputs = []
for i in range(self.story_length):
    # separate timesteps for decoding
    decoder_input_i = Lambda(lambda x: x[:, i, :])(decoder_input)
    brnn_feature_i = Lambda(lambda x: x[:, i, :])(brnn_feature)
    # the problem persists when using Dense instead of the Lambda layers above
    # decoder_input_i = Dense(25)(Reshape((125,))(decoder_input))
    # brnn_feature_i = Dense(512)(Reshape((5 * 512,))(brnn_feature))
    decoder_output_i = rnn_model([decoder_input_i, brnn_feature_i])
    decoder_outputs.append(decoder_output_i)
decoder_output = Concatenate(axis=-2, name='brnn_decoder_output')(decoder_outputs)
self.model = Model([img_input, decoder_input], decoder_output)
And here is the code for the BRNN:
image_feature = Input(shape=(self.story_length, self.img_feature_dim,))
image_emb = TimeDistributed(Dense(self.lstm_size))(image_feature)
brnn = Bidirectional(LSTM(self.lstm_size, return_sequences=True), merge_mode='concat')(image_emb)
brnn_emb = TimeDistributed(Dense(self.lstm_size))(brnn)
self.model = Model(inputs=image_feature, outputs=brnn_emb)
And for the RNN:
#[None, 512], the vector to be decoded
initial_input = Input(shape=(self.input_dim,), name='rnn_initial_input')
#[None, 25], the groundtruth word indices fed as input when training
decoder_inputs = Input(shape=(None,), name='rnn_decoder_inputs')
decoder_input_masking = Masking(mask_value=0.0)(decoder_inputs)
decoder_input_embeddings = Embedding(self.vocabulary_size, self.emb_size,
                                     embeddings_regularizer=l2(regularizer))(decoder_input_masking)
decoder_input_dropout = Dropout(.5)(decoder_input_embeddings)
initial_emb = Dense(self.emb_size,
                    kernel_regularizer=l2(regularizer))(initial_input)
initial_reshape = Reshape((1, self.emb_size))(initial_emb)
initial_masking = Masking(mask_value=0.0)(initial_reshape)
initial_dropout = Dropout(.5)(initial_masking)
decoder_lstm = LSTM(self.hidden_dim, return_sequences=True, return_state=True,
                    recurrent_regularizer=l2(regularizer),
                    kernel_regularizer=l2(regularizer),
                    bias_regularizer=l2(regularizer))
_, initial_hidden_h, initial_hidden_c = decoder_lstm(initial_dropout)
decoder_outputs, decoder_state_h, decoder_state_c = decoder_lstm(decoder_input_dropout,
                                                                 initial_state=[initial_hidden_h, initial_hidden_c])
decoder_output_dense_layer = TimeDistributed(Dense(self.vocabulary_size, activation='softmax',
                                                   kernel_regularizer=l2(regularizer)))
decoder_output_dense = decoder_output_dense_layer(decoder_outputs)
self.model = Model([decoder_inputs, initial_input], decoder_output_dense)
I’m using adam as optimizer and sparse_categorical_crossentropy as loss.
At first I thought the problem was with the Lambda layers used for splitting the timesteps, but the problem persisted when I replaced them with Dense layers (which are guaranteed to have gradients).
I had a similar error, and it turned out I was supposed to build the layers (in my custom layer or model) in __init__() like so:
self.lstm_custom_1 = keras.layers.LSTM(128, batch_input_shape=batch_input_shape,
                                       return_sequences=False, stateful=True)
self.lstm_custom_1.build(batch_input_shape)
self.dense_custom_1 = keras.layers.Dense(32, activation='relu')
self.dense_custom_1.build(input_shape=(batch_size, 128))
The issue is actually with the Embedding layer, I think. Gradients can't pass through an Embedding layer (it takes integer indices as input, so there is no gradient with respect to its inputs), so unless it's the first layer in the model it won't work.
I built a seq2seq model using Keras like this:
def build_bidi_model():
    rnn = layers.LSTM

    # Encoder
    # encoder inputs
    encoder_inputs = Input(shape=(None,), name='encoder_inputs')
    # encoder embedding
    encoder_embedding = Embedding(num_encoder_tokens, encoder_embedding_dim,
                                  name='encoder_embedding')(encoder_inputs)
    # encoder lstm
    encoder_lstm = Bidirectional(rnn(latent_dim, return_state=True, dropout=0.2,
                                     recurrent_dropout=0.5), name='encoder_lstm')
    _, *encoder_states = encoder_lstm(encoder_embedding)

    # Decoder
    # decoder inputs
    decoder_inputs = Input(shape=(None,), name='decoder_inputs')
    # decoder embedding
    decoder_embedding = Embedding(num_decoder_tokens, decoder_embedding_dim,
                                  name='decoder_embedding')(decoder_inputs)
    # decoder lstm
    decoder_lstm = Bidirectional(rnn(latent_dim, return_state=True,
                                     return_sequences=True, dropout=0.2,
                                     recurrent_dropout=0.5), name='decoder_lstm')
    # get outputs and decoder states
    rnn_outputs, *decoder_states = decoder_lstm(decoder_embedding, initial_state=encoder_states)
    # decoder dense
    decoder_dense = Dense(num_decoder_tokens, activation='softmax', name='decoder_dense')
    decoder_outputs = decoder_dense(rnn_outputs)

    bidi_model = Model([encoder_inputs, decoder_inputs], [decoder_outputs])
    bidi_model.compile(optimizer='adam', loss='categorical_crossentropy')
    return bidi_model
The training loss and validation loss are really low, but when I try to do inference with the trained model, the results are bad.
Here is the inference code:
reverse_input_char_index = dict([(i, char) for char, i in input_token_index.items()])
reverse_target_word_index = dict([(i, char) for char, i in target_token_index.items()])
def decode_sequence(input_seq, encoder_model, decoder_model):
    # get encoder states
    states_value = encoder_model.predict(input_seq)

    # create an empty target sequence
    target_seq = np.zeros((1, 1))
    target_seq[0, 0] = target_token_index['start']

    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output, *decoder_states = decoder_model.predict([target_seq] + states_value)
        sampled_token_index = np.argmax(output[0, -1, :])
        sampled_word = reverse_target_word_index[sampled_token_index]
        decoded_sentence += sampled_word

        if sampled_word == 'end' or len(decoded_sentence) > max_decoder_seq_length:
            stop_condition = True

        # update target_seq
        target_seq = np.zeros((1, 1))
        target_seq[0, 0] = sampled_token_index

        # update states
        states_value = decoder_states

    return decoded_sentence
It is weird that the inference results are pretty good when I build another, similar model that removes the Bidirectional layer from the decoder. So I am wondering whether my code is wrong. Please help.
I think the issue is that, for the decoder, it is not possible to do inference with a Bidirectional layer.
A bidirectional layer processes the original sequence and also its reverse. In inference mode your model has only generated the starting characters of the result; it does not know the ending characters yet, so the backward direction has nothing meaningful to read. Feeding only the starting character is therefore a bad input for the decoder: garbage in, garbage out, so your model's performance is way off.
Hope this makes sense.
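Concretely, one way to apply that advice (a sketch reusing the variable names from build_bidi_model, not a drop-in replacement): keep the Bidirectional wrapper on the encoder, merge its forward and backward states, and use a plain forward-only LSTM for the decoder, with its width doubled so the merged states fit:

from keras.models import Model
from keras.layers import Input, LSTM, Bidirectional, Embedding, Dense, Concatenate

# Encoder: still bidirectional.
encoder_inputs = Input(shape=(None,), name='encoder_inputs')
enc_emb = Embedding(num_encoder_tokens, encoder_embedding_dim)(encoder_inputs)
_, fh, fc, bh, bc = Bidirectional(LSTM(latent_dim, return_state=True))(enc_emb)
state_h = Concatenate()([fh, bh])
state_c = Concatenate()([fc, bc])

# Decoder: forward-only LSTM, doubled width so the concatenated states match.
decoder_inputs = Input(shape=(None,), name='decoder_inputs')
dec_emb = Embedding(num_decoder_tokens, decoder_embedding_dim)(decoder_inputs)
decoder_lstm = LSTM(latent_dim * 2, return_sequences=True, return_state=True)
dec_out, _, _ = decoder_lstm(dec_emb, initial_state=[state_h, state_c])
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(dec_out)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')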