I'm trying to build a chatbot using the seq2seq model in Keras. I have used the standard seq2seq model described in the Keras blog, and Word2vec for the word embeddings. My problem is that I am getting negative values for the loss while training.
Why is this happening, and how can I fix it? Thanks.
from keras.models import Model
from keras.layers import Input, LSTM, Dense
# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)
I found the reason for the negative values.
It happens mainly because decoder_target_data holds the word2vec vector representations, which contain negative values, so categorical_crossentropy (which expects probability distributions as targets) does not compute the loss correctly.
Instead of using the vector representations in decoder_target_data, use a one-hot encoding over the vocabulary of all the words in the word2vec model.
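For illustration, here is a minimal sketch of building such one-hot targets. The names target_texts (tokenized target sentences), target_token_index (word-to-id mapping), max_decoder_seq_length and num_decoder_tokens are assumptions standing in for your own preprocessing; only the targets change, and the word2vec vectors can still feed the model inputs.
import numpy as np

# Assumed preprocessing artifacts (hypothetical names, not defined above):
#   target_texts: list of tokenized target sentences
#   target_token_index: dict mapping each vocabulary word to an integer id
decoder_target_data = np.zeros(
    (len(target_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')
for i, target_text in enumerate(target_texts):
    for t, word in enumerate(target_text):
        if t > 0:
            # Targets are one timestep ahead of the decoder inputs (teacher forcing).
            decoder_target_data[i, t - 1, target_token_index[word]] = 1.0
With one-hot targets, categorical_crossentropy compares the softmax output against a valid probability distribution at every timestep, so the loss stays non-negative.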
Decoder model inference is giving a graph disconnected error during inference.
# TRAINING WITH TEACHER FORCING
import tensorflow as tf
from keras.models import Model
from keras.layers import Input, LSTM, Dense

# Define an input sequence and process it.
encoder_inputs = Input(shape=(n_timesteps_in, n_features))
encoder_lstm = LSTM(LSTMoutputDimension, return_sequences=True, return_state=True, name='encoder_lstm')
LSTM_outputs, state_h, state_c = encoder_lstm(encoder_inputs)
# We keep `LSTM_outputs` for attention and the states to initialize the decoder.
encoder_states = [state_h, state_c]
decoder_inputs = Input(shape=(None, n_features), name='decoder_inputs')
# BahdanauAttention is a custom attention layer defined elsewhere.
attention = BahdanauAttention(LSTMoutputDimension, verbose=1)
decoder_lstm = LSTM(LSTMoutputDimension, return_sequences=True, name='decoder_lstm')
decoder_lstm1 = LSTM(LSTMoutputDimension, return_sequences=True, return_state=True, name='decoder_lstm1')
# Set up the first decoder LSTM, using `encoder_states` as initial state.
decoder_outputs = decoder_lstm(decoder_inputs, initial_state=encoder_states)
# Attend over the encoder output sequence and concatenate the context
# vector with the decoder outputs.
context_vector, weights = attention(decoder_outputs, LSTM_outputs)
#context_vector = tf.expand_dims(context_vector, 1)
decoder_outputs2 = tf.concat([context_vector, decoder_outputs], axis=-1)
decoder_outputs, _, _ = decoder_lstm1(decoder_outputs2)
# Complete the decoder model by adding a Dense layer with softmax activation
# to predict the next output. The Dense layer outputs a one-hot encoded
# representation, like the input, so it has n_features neurons.
decoder_dense = Dense(n_features, activation='softmax', name='decoder_dense')
decoder_outputs = decoder_dense(decoder_outputs)
The inference setup below is not working:
decoder_state_input_h = Input(shape=(LSTMoutputDimension,))
decoder_state_input_c = Input(shape=(LSTMoutputDimension,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
# NOTE: LSTM_outputs comes from the training graph (it depends on
# encoder_inputs), which is why building this model from decoder-side
# inputs alone fails with a graph disconnected error.
context_vector, weights = attention(decoder_outputs, LSTM_outputs)
decoder_outputs2 = tf.concat([context_vector, decoder_outputs], axis=-1)
decoder_outputs3, dh, dc = decoder_lstm1(decoder_outputs2)
deStates = [dh, dc]
decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs3] + deStates)
I wanted to see the dimensions while the model is training, because the model trains as a black box.
I wanted to see the dimension of the decoder input at each step as the decoder moves through the time series (it is a teacher-forcing decoder). But when I fit the model, it only shows me accuracy, loss, epochs and iterations, even if I apply verbose=1 or 2. The Keras code is below.
# TRAINING WITH TEACHER FORCING
# Define an input sequence and process it.
encoder_inputs = Input(shape=(n_timesteps_in, n_features))
encoder_lstm = LSTM(LSTMoutputDimension, return_state=True)
LSTM_outputs, state_h, state_c = encoder_lstm(encoder_inputs)
# We discard `LSTM_outputs` and only keep the states.
encoder_states = [state_h, state_c]
decoder_inputs = Input(shape=(None, n_features), name='decoder_inputs')
decoder_lstm = LSTM(LSTMoutputDimension, return_sequences=True, return_state=True, name='decoder_lstm')
# Set up the decoder, using `encoder_states` as initial state.
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
# Complete the decoder model by adding a Dense layer with softmax activation
# to predict the next output. The Dense layer outputs a one-hot encoded
# representation, like the input, so it has n_features neurons.
decoder_dense = Dense(n_features, activation='softmax', name='decoder_dense')
decoder_outputs = decoder_dense(decoder_outputs)
Many tutorials for the seq2seq encoder-decoder architecture based on LSTMs (for example, English-French translation) define the model as follows:
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense

encoder_inputs = Input(shape=(None,))
en_x = Embedding(num_encoder_tokens, embedding_size)(encoder_inputs)
# Encoder LSTM
encoder = LSTM(50, return_state=True)
encoder_outputs, state_h, state_c = encoder(en_x)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
# French word embeddings
dex = Embedding(num_decoder_tokens, embedding_size)
final_dex = dex(decoder_inputs)
# Decoder LSTM
decoder_lstm = LSTM(50, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(final_dex,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# While training, the model takes English and French words and outputs
# the translated French word
fullmodel = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# rmsprop is often preferred for NLP tasks
fullmodel.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['acc'])
fullmodel.fit([encoder_input_data, decoder_input_data], decoder_target_data,
              batch_size=128,
              epochs=100,
              validation_split=0.20)
Then, for prediction, they define inference models as follows:
# Define the encoder model
encoder_model = Model(encoder_inputs, encoder_states)
encoder_model.summary()
# Redefine the decoder model; during prediction the decoder receives
# the states below from the encoder
decoder_state_input_h = Input(shape=(50,))
decoder_state_input_c = Input(shape=(50,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
final_dex2 = dex(decoder_inputs)
decoder_outputs2, state_h2, state_c2 = decoder_lstm(final_dex2, initial_state=decoder_states_inputs)
decoder_states2 = [state_h2, state_c2]
decoder_outputs2 = decoder_dense(decoder_outputs2)
# The sampling model takes the encoder states and decoder_input (the seed
# initially) and outputs the predictions (French word indices); the
# returned decoder_states2 are fed back in during decoding.
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs2] + decoder_states2)
Then predict using:
import numpy as np

# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())

def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)
    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0] = target_token_index['START_']
    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)
        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += ' ' + sampled_char
        # Exit condition: either hit max length
        # or find stop character.
        if (sampled_char == '_END' or
                len(decoded_sentence) > 52):
            stop_condition = True
        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1))
        target_seq[0, 0] = sampled_token_index
        # Update states
        states_value = [h, c]
    return decoded_sentence
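For completeness, the tutorials typically drive this function one sequence at a time, along these lines (assuming encoder_input_data is the same array used for training):
for seq_index in range(10):
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    decoded_sentence = decode_sequence(input_seq)
    print('Decoded sentence:', decoded_sentence)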
My question is: they trained the model named 'fullmodel' to get the best weights, but in the prediction part they use the inference models named 'encoder_model' and 'decoder_model' ... so didn't they use any weights from 'fullmodel'?!
I don't understand how they benefit from the trained model!
The trick is that everything lives in the same graph, so the layer objects, and therefore their weights, get reused.
If you look carefully, you will notice that the trained layer weights are being reused.
For example, while creating decoder_model we use the decoder_lstm layer that was defined as part of the full model:
decoder_outputs2, state_h2, state_c2 = decoder_lstm(final_dex2, initial_state=decoder_states_inputs)
and the encoder model likewise uses the encoder_inputs and encoder_states tensors defined previously:
encoder_model = Model(encoder_inputs, encoder_states)
Due to the architecture of the encoder-decoder model, we need these implementation hacks.
Also, as the Keras documentation mentions: "With the functional API, it is easy to reuse trained models: you can treat any model as if it were a layer, by calling it on a tensor. Note that by calling a model you aren't just reusing the architecture of the model, you are also reusing its weights." For more details, see https://keras.io/getting-started/functional-api-guide/#all-models-are-callable-just-like-layers
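To see the reuse in isolation, here is a minimal, self-contained sketch (the names are made up for illustration): one Dense layer object is shared by two models, so both models point at the same weights.
from keras.models import Model
from keras.layers import Input, Dense

shared_dense = Dense(4, name='shared_dense')  # one layer object, one set of weights

inputs_a = Input(shape=(8,))
model_a = Model(inputs_a, shared_dense(inputs_a))  # "training" model

inputs_b = Input(shape=(8,))
model_b = Model(inputs_b, shared_dense(inputs_b))  # "inference" model

# Both models reference the exact same layer object, so weights trained
# through model_a are immediately visible to model_b.
assert model_a.get_layer('shared_dense') is model_b.get_layer('shared_dense')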
I've successfully trained a model in Keras using an encoder/decoder structure + attention + GloVe, following several examples, most notably this one and this one. It's based on a modification of machine translation. This is a chatbot, so the input is words and so is the output. However, I've struggled to set up inference (prediction) properly and can't figure out how to get past a graph disconnect. My bidirectional RNN encoder/decoder with embedding and attention trains fine. I've tried modifying the decoder, but I feel there is something obvious that I'm not seeing.
Here is the basic model:
from keras.models import Model
from keras.layers import Dense, Input, Embedding, Bidirectional, RepeatVector, concatenate, Concatenate
from keras.layers.recurrent import LSTM

## PARAMETERS
HIDDEN_UNITS = 100
encoder_max_seq_length = 1037  # maximum size of input sequence
decoder_max_seq_length = 187   # maximum size of output sequence
num_encoder_tokens = 6502      # a.k.a. the size of the input vocabulary
num_decoder_tokens = 4802      # a.k.a. the size of the output vocabulary

## ENCODER
encoder_inputs = Input(shape=(encoder_max_seq_length,), name='encoder_inputs')
# embedding_matrix holds the pre-trained GloVe vectors
encoder_embedding = Embedding(input_dim=num_encoder_tokens,
                              output_dim=HIDDEN_UNITS,
                              input_length=encoder_max_seq_length,
                              weights=[embedding_matrix],
                              name='encoder_embedding')(encoder_inputs)
encoder_lstm = Bidirectional(LSTM(units=HIDDEN_UNITS,
                                  return_sequences=True,
                                  name='encoder_lstm'))(encoder_embedding)

## ATTENTION (AttentionL is a custom attention layer)
attention = AttentionL(encoder_max_seq_length)(encoder_lstm)
attention = RepeatVector(decoder_max_seq_length)(attention)

## DECODER
decoder_inputs = Input(shape=(decoder_max_seq_length, num_decoder_tokens),
                       name='decoder_inputs')
merge = concatenate([attention, decoder_inputs])
decoder_lstm = Bidirectional(LSTM(units=HIDDEN_UNITS * 2,
                                  return_sequences=True,
                                  name='decoder_lstm'))(merge)
decoder_dense = Dense(units=num_decoder_tokens,
                      activation='softmax',
                      name='decoder_dense')(decoder_lstm)
decoder_outputs = decoder_dense

## Configure the model
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.load_weights('trained_models/viable-2/word-weights.h5')
encoder_model = Model(encoder_inputs, attention)
model.compile(loss='categorical_crossentropy', optimizer='adam')
It looks like this: [model plot omitted]
Here is where I run into trouble:
## INFERENCE decoder setup
decoder_inputs_2 = Concatenate()([decoder_inputs, attention])
decoder_lstm = Bidirectional(LSTM(units=HIDDEN_UNITS * 2, return_state=True, name='decoder_lstm'))
decoder_outputs, forward_h, forward_c, backward_h, backward_c = decoder_lstm(decoder_inputs_2)
state_h = Concatenate()([forward_h, backward_h])
state_c = Concatenate()([forward_c, backward_c])
decoder_states = [state_h, state_c]
decoder_dense = Dense(units=num_decoder_tokens, activation='softmax', name='decoder_dense')
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs, attention], [decoder_outputs] + decoder_states)
This generates a graph disconnection error: Graph disconnected: cannot obtain value for tensor Tensor("encoder_inputs_61:0", shape=(?, 1037), dtype=float32) at layer "encoder_inputs". The following previous layers were accessed without issue: []
It should be possible to do inference like this, but I can't get past this error. It isn't possible for me to simply add decoder_output and attention together because they're of different shapes.
With reference to this blog in the return state section:
https://machinelearningmastery.com/return-sequences-and-return-states-for-lstms-in-keras/
I am trying to implement a multivariate stateful LSTM model that predicts two outputs, y1 and y2. Here is the snippet:
## Defining the model
from keras.models import Model
from keras.layers import Input, Dense, Dropout, LSTM
from keras.optimizers import Adam

def my_model():
    input_x = Input(batch_shape=(batch_size, look_back, x_train.shape[2]), name='input')
    drop = Dropout(0.5)
    lstm_1, state_h, state_c = LSTM(50, return_sequences=False,
                                    batch_input_shape=(batch_size, look_back, x_train.shape[2]),
                                    return_state=True)(input_x)
    lstm_1_drop = drop(lstm_1)
    y1 = Dense(1, activation='linear', name='op1')(lstm_1_drop)
    y2 = Dense(1, activation='linear', name='op2')(lstm_1_drop)
    model = Model(inputs=input_x, outputs=[y1, y2])
    optimizer = Adam(lr=0.0005, decay=0.00001)
    model.compile(loss='mse', optimizer=optimizer, metrics=['mse'])
    model.summary()
    return model

model = my_model()
history = model.fit(x_train, [y_11_train, y_22_train], epochs=1,
                    batch_size=batch_size, verbose=0, shuffle=False)
Question
I have some issues here: I am not sure whether it has been implemented properly, since I didn't feed lstm_1, state_h and state_c into the outputs of Model() as explained in the blog, because here I have two different predictions instead of the single one in the blog.
If I have to feed lstm_1, state_h and state_c into the outputs of Model(), how can that be implemented, and how will it affect the model.fit section?
Any help will be highly appreciated.
Thanks
Reading the blog post, it seems the author put state_h in the output in order to investigate the insides of the LSTM layers. Such an output can be useful for an already trained network, but not for training.
For training you can safely leave this information out of your output.
If you want to have the information during prediction, simply define a second model:
model = Model(inputs=input_x, outputs=[y1, y2, state_h, state_c])
Keras will then reuse your already trained layers, and you get the information in your output without worrying about your training.
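A quick sketch of how that second model is used at prediction time, assuming input_x, y1, y2, state_h and state_c are still in scope (for example, returned from my_model) and x_test is a hypothetical test array:
# Built from the same layers/tensors as the trained model, so no extra
# training is needed.
state_model = Model(inputs=input_x, outputs=[y1, y2, state_h, state_c])
y1_pred, y2_pred, h, c = state_model.predict(x_test, batch_size=batch_size)
print(h.shape)  # (num_samples, 50): the final hidden state for each sample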